Structural basis for COMPASS recognition of an H2B-ubiquitinated nucleosome
Methylation of histone H3K4 is a hallmark of actively transcribed genes that depends on mono-ubiquitination of histone H2B (H2B-Ub). H3K4 methylation in yeast is catalyzed by Set1, the methyltransferase subunit of COMPASS. We report here the cryo-EM structure of a six-protein core COMPASS subcomplex, which can methylate H3K4 and be stimulated by H2B-Ub, bound to a ubiquitinated nucleosome. Our structure shows that COMPASS spans the face of the nucleosome, recognizing ubiquitin on one face of the nucleosome and methylating H3 on the opposing face. As compared to the structure of the isolated core complex, Set1 undergoes multiple structural rearrangements to cement interactions with the nucleosome and with ubiquitin. The critical Set1 RxxxRR motif adopts a helix that mediates bridging contacts between the nucleosome, ubiquitin and COMPASS. The structure provides a framework for understanding mechanisms of trans-histone cross-talk and the dynamic role of H2B ubiquitination in stimulating histone methylation.
Introduction
The histone proteins that package eukaryotic DNA into chromatin (Andrews and Luger, 2011) are subject to a huge variety of post-translational modifications that regulate chromatin structure, nucleosome positioning and protein recruitment, thereby playing a central role in regulating transcription (Kouzarides, 2007). Methylation of histone H3 at lysine 4 (H3K4) is a mark of actively transcribed genes and is enriched in promoter regions (Barski et al., 2007). H3K4 is methylated in yeast by the Set1 methyltransferase (Roguev et al., 2001), which can attach up to three methyl groups to the lysine ε-amino group (Santos-Rosa et al., 2002), and in humans by the six related SET1/MLL family of methyltransferases (Meeks and Shilatifard, 2017). Methylation of nucleosomal H3K4 depends on the prior ubiquitination of histone H2B (H2B-Ub) at lysine 120 (Lys 123 in yeast) (Dover et al., 2002;Shahbazian et al., 2005;Sun and Allis, 2002), an example of histone modification 'cross-talk' in which attachment of one histone mark templates the deposition of another. H2B-Ub and H3K4 di- and tri-methylation are strongly associated with active transcription in both yeast and humans (Barski et al., 2007;Jung et al., 2012;Liu et al., 2005;Minsky et al., 2008;Pokholok et al., 2005;Santos-Rosa et al., 2002;Shieh et al., 2011;Steger et al., 2008). H3K4 methylation can serve as a recruitment signal for various transcription activators (Sims et al., 2007;Vermeulen et al., 2010;Vermeulen et al., 2007;Wysocka et al., 2006) including SAGA, whose acetyltransferase activity is stimulated by H3K4 methylation (Bian et al., 2011;Ringel et al., 2015).
Structural studies of the yeast core subcomplex (Hsu et al., 2018) and the H2B-ubiquitin-sensing subcomplex (Qu et al., 2018;Takahashi et al., 2011) have shown that COMPASS adopts a Y-shaped, highly intertwined structure with the Set1 catalytic domain at its core. Furthermore, a recent structure of the related human MLL1 core complex bound to a ubiquitinated nucleosome revealed the underlying mechanisms of nucleosome recognition by human COMPASS-like complexes (Xue et al., 2019). In addition, a recent structure of the related COMPASS complex from K. lactis has shown how COMPASS binds to a ubiquitinated nucleosome. However, there is currently no structural information on how the full H2B-ubiquitin-sensing COMPASS subcomplex from Saccharomyces cerevisiae binds and recognizes the H2B-Ub containing nucleosome. We report here the 3.37 Å resolution cryo-EM structure of the H2B-Ub sensing COMPASS subcomplex from the yeast, Saccharomyces cerevisiae, bound to an H2B-Ub nucleosome. The structure shows that COMPASS contains multiple structural elements that position the complex on the nucleosome disk through interactions with nucleosomal DNA and three of the core histones. The position of the Set1 catalytic domain suggests that COMPASS methylates H3K4 in an asymmetric manner by targeting H2B-Ub and H3K4 on opposite sides of the nucleosome. Structuring of a critical RxxxRR motif to form a helix enables the complex to associate with the nucleosome acidic patch, with the RxxxRR helix forming the bottom edge of an extended ubiquitin interaction crevice that underlies the structural basis of H2B-ubiquitin recognition by COMPASS. Comparison with other ubiquitin-activated methyltransferases shows that interactions with the H2B-linked ubiquitin are highly plastic and suggests how a single ubiquitin mark can be utilized by several different enzymes. Our findings shed light on the long-standing mystery of how H2B-Ub is recognized by COMPASS and provide the first example of trans-nucleosome histone crosstalk.
Architecture of the COMPASS H2B-Ub nucleosome complex
We determined the cryo-EM structure of the minimal, H2B-ubiquitin-sensing subcomplex of Saccharomyces cerevisiae COMPASS bound to a Xenopus laevis nucleosome core particle ubiquitinated at histone H2B K120 via a non-hydrolyzable dichloroacetone (DCA) linkage (Morgan et al., 2016). The ubiquitinated residue corresponds to K123 of yeast H2B. To drive tight association between COMPASS and the nucleosome, we utilized a variant of histone H3 in which K4 was substituted with the non-native amino acid, norleucine (Nle) (Worden et al., 2019). Lysine-to-norleucine mutations have been shown to greatly increase the affinity of SET-domain methyltransferases for their substrates in a S-adenosylmethionine (SAM)-dependent manner (Jayaram et al., 2016;Lewis et al., 2013;Worden et al., 2019). To assess the gain in affinity imparted by the H3K4Nle substitution, we used gel mobility shift assays to measure binding of COMPASS to different nucleosome variants in the presence of SAM (Figure 1-figure supplement 1). Surprisingly, COMPASS binds to unmodified and H2B-Ub nucleosomes with the same apparent affinity, indicating that H2B-Ub does not contribute significantly to the energy of COMPASS binding to the nucleosome (Figure 1-figure supplement 1). However, H2B-Ub nucleosomes that also contain the H3K4Nle mutant bind COMPASS with 2-5 fold higher affinity than unmodified nucleosomes (Figure 1-figure supplement 1, compare the 0.125 µM lane for all samples). Methyltransferase activity assays on an H3 peptide fragment (residues 1-21) confirmed that COMPASS was active (Figure 1-figure supplement 1). We therefore prepared complexes between COMPASS and H2B-Ub nucleosomes containing the H3K4Nle mutation in the presence of saturating SAM and determined the structure of the complex to 3.37 Å by single particle cryo-EM (Figure 1a, Figure 1-figure supplements 2-3 and Table 1).
In the reconstruction, two COMPASS complexes are bound to opposite faces of the nucleosome in a pseudo-symmetric 2:1 arrangement (Figure 1a, Figure 1-figure supplement 2), with only one of the two bound COMPASS assemblies resolved to high resolution. The final model, therefore, includes one COMPASS complex and the nucleosome core particle (Figure 1a-b). To build the yeast COMPASS complex, models of Spp1 and the N-set region of Set1 were taken from the cryo-EM structure of S. cerevisiae COMPASS (Qu et al., 2018) and docked into the EM density. For the rest of the COMPASS model, crystal structures of K. lactis Bre2, Swd1, Swd3, Sdc1 and Set1 subunits (Hsu et al., 2018) were utilized to create homology models with the S. cerevisiae sequence (25%-50% sequence identity) using Swiss-model (Waterhouse et al., 2018). The homology models were docked into the EM density, manually re-built in COOT (Emsley et al., 2010) and refined using Phenix (Adams et al., 2010) (see Materials and methods). Spp1, Bre2 and the Sdc1 dimer are less well resolved than the other COMPASS subunits due to their location on the periphery of the complex and consequent higher mobility (Figure 1, Figure 1-figure supplement 3). In particular, the EM density corresponding to the N-terminal portion of Spp1 was very weak and precluded accurate model fitting. Therefore, the N-terminal portion of Spp1 was excluded from our final model (Figure 1-figure supplement 4).
The COMPASS complex spans the entire diameter of the nucleosome and is anchored by contacts between DNA and Bre2/Set1 at one end of the complex and Swd1 and Spp1 at the other (Figure 1b, Figure 2). These DNA contacts position COMPASS such that Swd1 and the Set1 catalytic domain can contact the central histone core. Compared to isolated S. cerevisiae COMPASS (Qu et al., 2018), the complex flexes upon nucleosome binding, which moves Swd3, Spp1 and Swd1 toward the nucleosome by ~36 Å (Figure 1c). This movement allows Spp1 to bind the nucleosomal DNA, and allows Swd1 to interact with the histone core. Notably, a subset of particles in the cryo-EM structure of COMPASS in the absence of nucleosome (Qu et al., 2018) exhibited flexing about the same axis, although not to the extent observed here when COMPASS is bound to a nucleosome (Figure 1c). Our structure suggests that the previously observed conformational flexibility of COMPASS is important for nucleosome recognition. The Set1 active site is oriented away from the nucleosome and contains density for the SAM cofactor and the bound H3 tail (Figure 1d, Figure 1-figure supplement 3). The intervening sequence of the H3 tail, from its exit point in the nucleosome (P38) to the first resolved residue in the Set1 active site (R8), is not visible in the maps, suggesting that these residues are highly mobile (Figure 1b,d). To determine which copy of histone H3 was connected to the portion of the H3 tail bound to Set1, we compared the distance between the last H3 residue in the Set1 active site (R8) and the first H3 residue (P38) on each face of the nucleosome. The Set1 active site is positioned much closer to the exit point of the H3 subunit on the opposite face of the nucleosome (trans-H3, ~35 Å) than to the exit point of the H3 subunit on the same side of the nucleosome (cis-H3, ~88 Å, Figure 1d). The 88 Å end-to-end distance between the exit point of cis-H3 and H3 in the Set1 active site is too long to be spanned by the intervening unstructured H3 tail residues given that an even greater distance (~100 Å) would be needed for the H3 residues to wrap around the nucleosomal DNA near the dyad axis. This arrangement therefore indicates that COMPASS methylates the nucleosome in an asymmetric manner by recognizing ubiquitin on one face of the nucleosome and targeting H3K4 on the opposite, trans-H3, face of the nucleosome.
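Where model coordinates are available, this cis/trans assignment reduces to a distance measurement between modeled residues. Below is a minimal sketch in Python with Biopython; the file name and chain identifiers are illustrative assumptions, not identifiers from the deposited model.

from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
structure = parser.get_structure("complex", "compass_nucleosome.pdb")
model = structure[0]

def ca_distance(chain_a, res_a, chain_b, res_b):
    # Calpha-Calpha distance in Å; Bio.PDB overloads '-' on atoms as distance
    return model[chain_a][res_a]["CA"] - model[chain_b][res_b]["CA"]

# R8 of the Set1-bound H3 tail (chain X assumed) versus P38 of each H3 copy
# (chains A and E assumed): the shorter distance identifies the connected copy.
for h3_chain in ("A", "E"):
    d = ca_distance("X", 8, h3_chain, 38)
    print(f"H3 chain {h3_chain}: R8-P38 distance = {d:.1f} Å")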
COMPASS also interacts directly with the core histone proteins. A pair of loops in Swd1 anchor COMPASS near the C-terminal H2B helix (Figure 3). In addition, a long helix in Set1 containing the RxxxRR motif (Kim et al., 2013) extends along the surface of the histone core and makes multiple interactions with the nucleosome acidic patch formed by histones H2A and H2B (Figure 4). Finally, Set1, Bre2 and Swd1 bind to the H2B-linked ubiquitin (Figures 5 and 6), providing a structural basis for the crosstalk between H2B ubiquitination and H3K4 methylation.
COMPASS interacts with DNA using three distinct interfaces
COMPASS interacts with DNA in three distinct locations, thereby orienting the complex on the face of the nucleosome (Figure 2). At one end of COMPASS, the nucleosomal DNA docks into a concave surface at the interface of Bre2 and Set1 (Figure 2a,b). This concave surface is lined with several basic residues that can potentially contact the DNA. In particular, Bre2 K318 is in a position to interact directly with the sugar-phosphate backbone (Figure 2b). We note that we did not observe clear sidechain density for Bre2 K318, so the position of this sidechain is inferred from the conformation of the protein backbone. Set1 residues R1034, K1029 and K1026 are also located at this interface (Figure 2b) but are too far away to directly contact the DNA. Instead, these basic residues likely serve to increase the local positive charge of the concave DNA binding surface. On the opposing edge of the nucleosome, COMPASS contacts the DNA at two distinct interfaces mediated by Spp1 and Swd1 (Figure 2c,d). Fragmented density connecting Spp1 and the nucleosomal DNA corresponds to a loop in Spp1 that is disordered in our structure (residues 241-261). This Spp1 loop (Figure 2c) contains a patch of positively-charged amino acids (KRKKKK) that likely interact with the negatively charged DNA backbone and major groove. Swd1 interacts with the nucleosomal DNA using two highly conserved basic residues, R236 and K266, which emanate from loops in blade 5 of the Swd1 WD40 domain (Figure 2d). Substitution of Swd1 K266 with an alanine decreased H3K4 di- and tri-methylation in yeast (Figure 3c-d), indicating that the loss of even a single DNA contact can impair COMPASS function. As discussed further below, this region of Swd1 is highly conserved and also mediates contacts between Swd1 and the core histone octamer, making this part of Swd1 a highly utilized surface for COMPASS interaction with the nucleosome.
Importantly, similar interactions between these COMPASS subunits (Bre2, Spp1 and Swd1) and nucleosomal DNA have recently been observed in the human MLL1 complex (Xue et al., 2019) and the K. lactis COMPASS complex, indicating that these DNA interactions are critical for COMPASS function and are highly conserved.
A conserved loop in Swd1 contacts the core histone octamer
The structure reveals that Swd1 contains two loops that interact with three different histones in the nucleosome core (Figure 3a). Swd1 Loop 1 connects β21 and β22, and, along with the edge of β-strand 25, embraces the H2B C-terminal helix, α4 (Figure 3a). At the tip of Loop 1, V263 and I264 insert into a small hydrophobic crevice at the three-helix interface consisting of H2B helices α3 and α4, and H2A α3 (Figure 3a,b). This hydrophobic crevice includes H2A Y50, and H2B V118, which are in van der Waals contact with Swd1 V263 and I264. In addition to the hydrophobic interactions in Loop 1, Swd1 N265 is positioned close to H2B Q95, potentially forming a hydrogen bond and further stabilizing the Loop 1 interaction with the histone core (Figure 3b). To assess the importance of the interaction between Swd1 Loop 1 and the H2A/H2B hydrophobic crevice, we examined the effects of alanine substitutions on histone H3K4 methylation in S. cerevisiae. As shown in Figure 3c, the substitution with the greatest effect was Swd1 I264A, which completely abolished H3K4 di- and tri-methylation and greatly reduced H3K4 mono-methylation (Figure 3c,d). I264 is positioned in the center of the H2A/H2B hydrophobic crevice and the strong defect in H3K4 methylation seen for the I264A mutation indicates that this interaction is critical for COMPASS activity in vivo. The V263A mutant slightly decreased H3K4 di- and tri-methylation, but did not change H3K4 mono-methylation, whereas an N265A mutation had no effect (Figure 3c,d). We note that it is not possible to determine from these data if the changes in the relative H3K4 mono-, di- and tri-methylation levels in the COMPASS mutants are caused by altered product specificity, or slower enzyme turnover. Loop 1 is highly conserved (Figure 3-figure supplement 1) and the residues that correspond to V263 and I264 are always hydrophobic in character, indicating that the interaction we observe between Loop 1 and the histone octamer is structurally conserved among Swd1 homologs. Indeed, the human MLL1 core complex subunit, Rbbp5, binds to the histone octamer using a similar interface (Xue et al., 2019) and another recent study reports that the K. lactis COMPASS complex Swd1 subunit also utilizes the hydrophobic cleft interface in a similar manner.
In addition to Loop 1, Swd1 Loop 2 connects β-strands β23 and β24 and is oriented toward the histone H3 α2-α3 loop and histone H4 α3 (Figure 3b). At the tip of Loop 2, S289 forms a hydrogen bond with H3 K79. As compared to Loop 1, Loop 2 is not well conserved (Figure 3-figure supplement 1) and the contact it makes with H3K79 is not recapitulated in other COMPASS-like complexes (Xue et al., 2019). Taken together, these results show that the interaction between Swd1 and the nucleosome is conserved from yeast to humans and is critical for COMPASS function in vivo.
The Set1 RxxxRR motif forms an adaptor helix that bridges the nucleosome acidic patch and ubiquitin
The arginine-rich RxxxRR motif in the N-set region of Set1 is critical for stimulation of methyltransferase activity by H2B ubiquitination, and mutations in this motif impair COMPASS activity in yeast (Kim et al., 2013). Our structure reveals the critical role that the RxxxRR motif plays in COMPASS interactions with both the nucleosome and ubiquitin. In isolated yeast COMPASS (Qu et al., 2018), the RxxxRR motif is disordered. When bound to the H2B-Ub nucleosome, Set1 residues 897-921 become ordered, forming a long helix that passes underneath COMPASS and makes extensive contacts with the nucleosome acidic patch (Figure 4a-e), as well as with the H2B-linked ubiquitin (see below). The RxxxRR helix docks on the nucleosome surface parallel to the C-terminal helix α4 of histone H2B, positioning multiple arginine residues opposite the negatively charged residues of H2A that make up the acidic patch (Figure 4d,e and Figure 1-figure supplement 4). Residues in the Set1 RxxxRR helix form several specific electrostatic interactions with acidic patch residues: R908 is in a position to contact H2A E56 and H2B E113, R904 contacts H2A residues E61 and E64, and R901 contacts H2A residues D90 and E92 (Figure 4d,e). The extensive electrostatic network mediated by the RxxxRR helix is flanked by interactions between Set1 R936 and H2A D72 on one side and between R909 and H2B S112 on the other. We note, however, that there is no clear sidechain density for R909 in our structure, so the contact between R909 and S112 is inferred from the conformation of the backbone. Interestingly, the RxxxRR motif is conserved in human Set1 orthologs, but not in the human paralogs MLL1-4 (Figure 3-figure supplement 1), which have recently been shown not to bind the nucleosome acidic patch (Xue et al., 2019).
In addition to formation of the RxxxRR helix, nucleosome binding is accompanied by a profound restructuring of Set1 α-helix 926-933, which unravels and forms an extended strand that lies parallel to the RxxxRR helix (Figure 4f and Figure 1-figure supplement 4). This extended strand lies along the histone core, orienting R936 toward the nucleosome surface (Figure 4d,f). Previous mutagenesis studies of the N-set region of yeast Set1 (Kim et al., 2013) showed that a combined R909, R908 and R904 triple mutant abolished COMPASS activity in vitro and in vivo. Furthermore, a large-scale alanine screen of histone residues in yeast previously determined that residues in the H2A/H2B acidic patch are required for H3K4 methylation by COMPASS (Nakanishi et al., 2008). To assess the contribution of individual Set1 amino acids to H3K4 methylation by COMPASS in vivo, we generated yeast strains in which a set1 deletion was complemented with mutant set1 containing point mutations designed to disrupt interactions with the nucleosome acidic patch. As compared to wild-type, R936A and R936E Set1 mutations both reduced H3K4 methylation (Figure 4g,h). The charge reversal mutation, R936E, was more severe and almost entirely abolished di- and tri-methylation by COMPASS (Figure 4g,h). Individual alanine substitutions R901A, R904A and R908A reduced mono-methylation by COMPASS and abolished di- and tri-methylation. The R909A mutation, which does not form electrostatic contacts with the acidic patch, greatly reduced di- and tri-methylation. As expected, the quadruple mutant R901A, R904A, R908A and R909A (4R->4A) completely abolished all methylation of H3K4. Charge reversal mutants R901E, R904E, R908E, R909E and the quadruple 4R->4E mutants all abolished H3K4 methylation, except for R909E, which retained some residual H3K4 mono-methylation (Figure 4g,h). Together, these structural and mutational data indicate that interactions between the RxxxRR helix and the acidic patch are critical for COMPASS activity.
Structural basis of ubiquitin recognition
The COMPASS-ubiquitin interaction is different from, and much more extensive than, the interaction between ubiquitin and the MLL1 core complex subunit RbBP5 (Xue et al., 2019) or Dot1L (Anderson et al., 2019;Valencia-Sánchez et al., 2019;Worden et al., 2019), the histone H3K79 methyltransferase which is also stimulated by H2B-Ub (Briggs et al., 2002;McGinty et al., 2008;Ng et al., 2002) (Figure 5a-c). Moreover, the H2B-linked ubiquitin is positioned in different orientations relative to the face of the nucleosome in each of these complexes (Figure 5d), indicating that H2B-Ub is a conformationally plastic epitope that can be recognized in structurally distinct ways. Ubiquitin binds to COMPASS in a large cleft located between Swd1, Set1 and Bre2 (Figure 6a), burying 930 Å² of total surface area. The H2B-linked ubiquitin sits on top of the Set1 RxxxRR helix and makes multiple contacts with the N-terminal and C-terminal extensions of Swd1 (Figure 6a,b). In addition to the Set1 and Swd1 contacts, there is substantial connecting density between Bre2 and the N-terminus of ubiquitin that likely corresponds to a 38 amino acid loop in Bre2 (residues 140-178) that is unmodeled in our structure (Figure 6a). This Bre2 loop is not found in human or other yeast COMPASS complexes and appears to be specific to the S. cerevisiae COMPASS complex (Figure 3-figure supplement 1).
The primary contact between ubiquitin and COMPASS is mediated by the N- and C-terminal extensions of Swd1 (Figure 6b). These Swd1 extensions are critical for COMPASS assembly and mediate multiple contacts between COMPASS subunits (Hsu et al., 2018;Qu et al., 2018). Compared to the nucleosome-free (apo) structure of COMPASS (Qu et al., 2018), the N- and C-terminal extensions of Swd1 change conformation when bound to ubiquitin (Figure 6c,d). These Swd1 extensions present an extended surface of hydrophobic residues which interact with the ubiquitin C-terminal tail and the hydrophobic 'I44 patch' on ubiquitin comprising I44, V70, L8 and H68 (Figures 5a and 6b). The I44 patch is a canonical interaction surface that is contacted by many different ubiquitin binding proteins (Komander and Rape, 2012), and by the related MLL1 complex subunit, Rbbp5 (Xue et al., 2019) (Figure 5b). At the center of the I44 patch interaction, Swd1 L12, V401 and P8 contact Ub I44, and Swd1 F9 interacts with Ub L8, V70, H68 and the aliphatic portion of K6 (Figure 6b,e and Figure 1-figure supplement 4). The I44 patch contact is flanked by electrostatic interactions between Swd1 E14 and Ub R42 on one side, and between Swd1 E397 and Ub K48 on the other (Figure 6b). In addition to the I44 patch contacts, Set1 contacts the hydrophobic 'I36 patch' of ubiquitin consisting of I36, L71 and L73 (Figure 6b,f) (Komander and Rape, 2012). While the I36 patch of ubiquitin is not as widely utilized by ubiquitin-binding proteins, it is also used by the Dot1L methyltransferase (Figure 5c). At the end of the RxxxRR helix, Ub L73 contacts Set1 I914 and the aliphatic portion of N917 through van der Waals interactions (Figure 6b). The importance of the L73 contact is supported by previous in vitro studies showing that ubiquitin residues L71 and L73 are critical for ubiquitin-dependent COMPASS activity (Holt et al., 2015). Finally, Set1 L928 inserts into a small hydrophobic pocket within ubiquitin and likely interacts with ubiquitin I13, I36, T7 and the aliphatic portion of K11 (Figure 6f and Figure 1-figure supplement 4). However, this region of the map does not show clear sidechain density for Set1 L928, so the position of the L928 sidechain is inferred from the conformation of the protein backbone.
To assess the contribution of interfacial Swd1 residues in ubiquitin-dependent H3K4 methylation in vivo, we generated swd1 deletion yeast strains expressing mutant Swd1 and measured the effects on H3K4 methylation. As shown in Figure 6g-h, the Swd1 L12A mutation completely abolished H3K4 methylation by COMPASS in vivo, indicating that the interaction between this Swd1 residue and Ub I44 is critical to COMPASS function. Swd1 E14A and E14R mutations both greatly decreased H3K4 di-and tri-methylation and also reduced H3K4 mono-methylation to an intermediate level (Figure 6g,h). Unsurprisingly, a Swd1 F9A, L12A and E14A triple mutant (FLE) completely abolished H3K4 methylation. Finally, the E397R mutation, which lies on the periphery of the ubiquitin interaction, modestly reduced all methylation states of H3K4 (Figure 6g,h). Together, these data show that the contacts observed between COMPASS and the H2B-linked ubiquitin are critical to activate COMPASS for H3K4 methylation.
Discussion
Our structure reveals the basis of crosstalk between H2B ubiquitination and H3K4 methylation by Saccharomyces cerevisiae COMPASS and shows that it methylates its target lysine in an asymmetric manner by recognizing the H2B-Ub and H3K4 on opposite sides of the nucleosome (Figure 1d). This asymmetric recognition is distinct from the H3K79 methyltransferase, Dot1L, which is also stimulated by H2B-Ub but which methylates H3 on the same, cis-H3, side of the nucleosome (Figure 5c) (Anderson et al., 2019;Valencia-Sánchez et al., 2019;Worden et al., 2019). To our knowledge, the asymmetric recognition of H2B-Ub and H3K4 by COMPASS and COMPASS-related complexes (Xue et al., 2019) is the first example of trans-nucleosome histone crosstalk. This observation highlights the importance of asymmetry in histone modifications. Previous studies have shown that 'repressive' H3K27 tri-methyl and 'active' H3K4 tri-methyl marks can be deposited asymmetrically in the same nucleosome, but on opposite H3 tails (Voigt et al., 2012). This asymmetric, bivalent modification of H3K27 and H3K4 is believed to be associated with maintaining promoters in a poised state during differentiation. Our structure provides a mechanistic framework to understand how asymmetric nucleosome modifications can be read out and deposited during histone crosstalk.
Compared with other histone modifications such as methylation and acetylation, ubiquitination is a large, highly complex mark that presents a chemically rich interaction surface to potential binding partners and also decompacts chromatin (Fierz et al., 2011). In addition, because it is conjugated to the nucleosome through its flexible C-terminus, ubiquitin can adopt a range of orientations on the nucleosome, enabling even greater complexity in its recognition. Several structures now exist of H2B-Ub-activated methyltransferases bound to ubiquitinated nucleosomes, which reveal highly divergent strategies for ubiquitin recognition (Figure 5). The distinct ubiquitin binding modes employed by COMPASS (this study), Dot1L (Anderson et al., 2019;Valencia-Sánchez et al., 2019;Worden et al., 2019) and the MLL1 core complex (Xue et al., 2019) reveal a striking plasticity in how ubiquitin is able to interact with and activate these different histone methyltransferases. The high complexity of the ubiquitin mark likely enables H2B-Ub to communicate with these different enzymatic complexes and template the deposition of 'activating' marks during transcription. The size of ubiquitin might also impose a significant steric obstacle that could inhibit the activity of enzymes which do not directly recognize the ubiquitin. Indeed, H2B-Ub may even impose a steric barrier to the transcriptional machinery as evidenced by the observation that efficient transcription elongation requires removal of H2B-Ub (Wyce et al., 2007).
Our structure suggests a mechanism by which H2B-Ub stimulates COMPASS to methylate H3K4. Compared to the nucleosome-free Saccharomyces cerevisiae COMPASS structure (Qu et al., 2018) and the structure of Kluyveromyces lactis COMPASS bound to an H3 peptide and SAM (Hsu et al., 2018), there are no discernible conformational changes in the Set1 catalytic domain that could explain how H2B-Ub stimulates methylation (Figure 6-figure supplement 1). It is notable that, for both COMPASS and Dot1L, the presence of ubiquitin conjugated to H2B does not appear to increase the affinity of the enzyme for the nucleosome (Figure 1-figure supplement 1) (Worden et al., 2019). The lack of a discernible effect on binding energy suggests that ubiquitin binding primarily affects catalytic activation and, in the case of Dot1L, has been shown to increase kcat but not KM (McGinty et al., 2009;Worden et al., 2019). Alternatively, it has been suggested that ubiquitin activates Dot1L by using its binding energy to pay the energetic cost of inducing a conformational change in the globular core of histone H3, thereby inserting K79 into the Dot1L active site (Worden et al., 2019). We speculate that H2B-Ub may similarly activate COMPASS by providing the binding energy needed to induce the disordered Set1 RxxxRR motif to form a helix that mediates contacts with the nucleosome. Ubiquitin binding may also compensate for the energetic cost of inducing Set1 helix 926-933 to unravel and form an extended β-strand that buttresses the RxxxRR helix and the catalytic domain of Set1 (Figure 4). Our structure provides a basis for further elucidating the role that H2B ubiquitination plays in stimulating histone methyltransferase activity.
Purification of 601 DNA
The pST55-16x601 plasmid was a generous gift from Dr. Song Tan (Makde et al., 2010). The pST55-16x601 plasmid containing 16 repeats of the 147 base pair Widom 601 positioning sequence (Lowary and Widom, 1998) was grown in the E. coli strain XL-1 Blue. The plasmid was purified and the 601 DNA was excised with EcoRV and recovered essentially as described previously (Dyer et al., 2004).
Electrophoretic Mobility Shift assays
For the electrophoretic mobility shift assays (EMSA), 50 nM of each nucleosome variant was mixed with COMPASS at the indicated concentrations and incubated at room temperature for 30 min in EMSA buffer (20 mM HEPES pH 7.5, 100 mM NaCl, 1 mM DTT, 0.2 mg/ml bovine serum albumin (BSA), 100 µM S-adenosyl methionine (SAM)). Samples were then diluted with an equal volume of 2x EMSA sample buffer (40 mM HEPES pH 7.6, 100 mM NaCl, 10% sucrose, 2 mM DTT, 0.2 mg/ml BSA) and 10 µl of sample was loaded onto a 6% native Tris-borate EDTA (TBE) gel at 4°C. The gel was stained with SybrGold DNA stain (Thermo-Fisher) to visualize bands.
Yeast strains
All yeast strains were prepared from the BY4743 background and cultured using standard methods. The swd1Δ strain was obtained from the Yeast Knock-Out collection (Dharmacon) and was a generous gift from Dr. Carol Greider. The set1Δ strain was prepared using PCR-mediated gene disruption with the KanMX gene as a selectable marker. Wild type (WT) SWD1 and SET1 were isolated by PCR from Saccharomyces cerevisiae genomic DNA with a native 600 base pair (bp) upstream promoter and a 200 bp terminator. The isolated SWD1 and SET1 genes were cloned into pRS415 and mutants were generated using inverse PCR. Empty pRS415 vector or plasmids containing WT or mutant variants of SWD1 or SET1 were introduced into the swd1Δ or set1Δ strains using standard yeast transformation techniques and leucine selection.
Protein extraction and western blot analysis
Yeast deletion strains containing the pRS415 vector, WT or mutant variants of Swd1 or Set1 were grown in 50 ml of SD-Leu media at 30°C to an OD600 of 0.7-1.0. A volume of 40 ml of the yeast culture was pelleted and resuspended in 10% tri-chloroacetic acid (TCA) to a final volume with an OD600 of 6 and incubated at room temperature for 30 min. 3 ODs of cells (500 µl) were aliquoted into 1.7 ml Eppendorf tubes, pelleted and frozen. For total protein extraction, the cell pellet was thawed on ice and resuspended in 250 µl of 20% TCA. A volume of 250 µl of 0.25-0.5 mm glass beads was added to the resuspended cell pellet and the cells were lysed by vortexing for 6 min. The bottom of the Eppendorf tube was punctured, placed into a fresh tube, and the lysed cells were collected by centrifugation. The glass beads were washed with an additional 300 µl of 5% TCA and discarded. Total protein was pelleted by centrifugation at 20,000 x g for 10 min at 4°C. The resulting pellet was washed with 100% ethanol at -20°C and resuspended in 2x SDS sample buffer. The efficiency of the total protein extraction was evaluated by SDS-PAGE followed by stain-free protein imaging (BioRad). For Western blotting, equal amounts of protein extract were separated by SDS-PAGE, transferred to PVDF membranes, blocked with 5% milk in TBST buffer and probed with anti-H3K4me1 (Abcam Cat# ab8895, RRID:AB_306847), anti-H3K4me2 (Abcam Cat# ab7766, RRID:AB_2560996), anti-H3K4me3 (Abcam Cat# ab8580, RRID:AB_306649) and anti-GAPDH (Abcam Cat# ab125247, RRID:AB_11129118) antibodies. A total of three technical replicates were analyzed from the same yeast growth.
COMPASS activity assay
A 2x concentrated 2-fold dilution series of COMPASS (2560 nM - 40 nM) was prepared in reaction buffer (20 mM HEPES pH 7.5, 100 mM NaCl, 1 mM DTT, 0.2 mg/ml BSA). To initiate the reaction, 6 µl of each 2x stock of COMPASS was added to 6 µl of 500 µM H3 peptide (residues 1-21) dissolved in reaction buffer that either contained 500 µM SAM or contained no SAM. The reaction was allowed to progress at 25°C for 35 min and then the reaction was quenched by the addition of 3 µl of 0.5% trifluoroacetic acid (TFA). 10 µl of the quenched reaction was transferred to a microplate and the amount of S-adenosyl homocysteine (SAH) was quantified using the MTase-Glo assay (Promega) according to the manufacturer's instructions. Raw luminescence values were measured on a POLARstar Omega fluorescence plate reader (BMG Labtech).
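For clarity, the 1:1 mixing step halves every 2x stock concentration. A short Python check of the final assay concentrations (the concentrations are those given above; treating the series endpoints as nM is our reading of the garbled units):

# 2x COMPASS stocks: 2-fold series from 2560 nM down to 40 nM (7 points)
stocks_nM = [2560 / 2**i for i in range(7)]
final_compass_nM = [c / 2 for c in stocks_nM]   # 1:1 mix with peptide/SAM
final_peptide_uM = 500 / 2                      # 500 uM H3 peptide stock
final_sam_uM = 500 / 2                          # 500 uM SAM stock
print(final_compass_nM, final_peptide_uM, final_sam_uM)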
Cryo EM sample preparation
A 2.44 ml volume of 300 nM COMPASS, 100 nM of nucleosome containing H2B-Ub and H3K4Nle, and 200 µM SAM was prepared in crosslinking buffer (25 mM HEPES pH 7.5, 100 mM NaCl, 1 mM DTT). The sample was incubated on ice for 30 min and then mixed with 2.44 ml of 0.14% glutaraldehyde to initiate crosslinking. The crosslinking reaction progressed on ice and was quenched after one hour by the addition of 1 M Tris pH 7.5 to a final concentration of 100 mM. The reaction was incubated for 1 hr on ice and then concentrated to ~50 µl using an Amicon Ultra 30K MWCO spin concentrator. The concentration of the complex was determined using the absorption at 260 nm of the 601 nucleosome DNA. Quantifoil R2/2 grids were glow discharged for 45 s at 15 mA using a Pelco Easyglow glow discharger. A volume of 3 µl of 0.5 mg/ml crosslinked sample was added to the glow-discharged grids and flash frozen in liquid ethane using a Vitrobot (Thermo Fisher) at 4°C and 100% humidity with a 3.5 s blot time.
EM data collection and refinement
All data were collected at the National Cryo-Electron Microscopy Facility (NCEF) at the National Cancer Institute on a Titan Krios (Thermo-Fisher) at 300 kV utilizing a K3 (Gatan) direct electron detector in counting mode at a nominal magnification of 81,000x and a pixel size of 1.08 Å. Data were collected at a nominal dose of 50 e-/Å² with 40 frames per movie and 1.25 e-/Å² per frame. A total of 5,784 movies were collected. The dataset was processed in Relion 3.0 (Zivanov et al., 2018). All movies were motion-corrected and dose-weighted using the Relion 3.0 implementation of MotionCorr2 (Zivanov et al., 2018). An initial batch of 500 micrographs was randomly selected and used to pick 315,372 particles using the Laplacian-of-Gaussian auto-picking feature. After 3 rounds of 2D classification to remove junk particles, 216,107 particles were used for 3D classification. A single good class of 79,832 particles was refined and used as a model for template-based picking on the entire dataset, resulting in 2,036,654 particles. The particles were extracted and binned by a factor of 4. 1,357,004 particles were retained after 2D classification and used for 3D classification with six classes. Three classes emerged from the 3D classification which appeared to have a well resolved COMPASS on at least one side of the nucleosome. These three classes (1,103,264 particles) were merged and subjected to another round of 3D classification with four classes using a mask that encompassed the nucleosome and one COMPASS molecule. Two of the resulting classes appeared to have high-resolution features; these were merged (650,847 particles) and subjected to 3D refinement using the same mask that encompassed the nucleosome and COMPASS. After refinement the particles were re-extracted at full resolution and refined again using the same mask. The unbinned particle stack was subjected to beam tilt correction and per-particle contrast transfer function (CTF) estimation in Relion 3.0. The final particle stack was then subjected to masked refinement using the same mask that was used in the previous masked refinement and classification steps. The final structure was sharpened using the Relion postprocessing tool with a soft mask that encompassed the more well resolved COMPASS molecule and the nucleosome. A sharpening B-factor of -122.6 Å² was applied. The final resolution of the COMPASS-nucleosome structure is 3.37 Å according to the Fourier shell correlation (FSC) 0.143 criterion.
Model building and refinement
Coordinates for the Xenopus laevis nucleosome core particle (PDB: 6NJ9), Saccharomyces cerevisiae Spp1, and the N-set domain of Set1 (PDB: 6BX3) were docked into the EM density using Chimera. Crystal structures of Kluyveromyces lactis Sdc1, Swd3, Swd1, Bre2 and the Set1 catalytic domain (PDB: 6CHG) fit the EM density better than the existing cryo-electron microscopy structures (PDB: 6BX3) of S. cerevisiae Sdc1 (25% identity, 67% in modeled area), Swd3 (50% identity), Swd1 (50% identity), Bre2 (42% identity) and Set1 (41% identity, 86% in modeled area). Therefore, homology models of S. cerevisiae Sdc1, Swd3, Swd1, Bre2 and the Set1 catalytic domain were prepared from the K. lactis structures using the Swiss-model software and docked into the EM density using Chimera. The resulting model was iteratively refined in Phenix (Afonine et al., 2018) using reference restraints for ubiquitin, the Set1 N-set domain, Spp1, Bre2 and the Sdc1 dimer (from PDBs 1UBQ, 6BX3 and 6CHG) and edited in COOT (Emsley et al., 2010). The model was refined against the full map and any overfitting of the model was assessed by calculating the model-map FSC between the refined model and the sharpened, masked half-maps (half map 1 and half map 2) that were filtered to the FSC = 0.143 resolution cutoff for each half map (3.96 Å). The model-map FSCs of the two half maps agree well, indicating that there is little overfitting of the model. Furthermore, the FSC = 0.5 resolution estimate of the model/map (full) does not exceed the calculated map resolution, indicating that the model is not overfit.
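The half-map comparison described above can be reproduced with a generic Fourier shell correlation. The sketch below (NumPy; it assumes two cubic maps on identical grids and applies no masking, so it is a simplified stand-in for the Relion/Phenix tools actually used) computes an FSC curve from which the 0.143 and 0.5 crossings can be read off.

import numpy as np

def fsc(map1, map2, n_shells=50):
    # Correlate the two maps shell by shell in Fourier space.
    f1, f2 = np.fft.fftn(map1), np.fft.fftn(map2)
    freq = np.fft.fftfreq(map1.shape[0])
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    radius = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.linspace(0.0, 0.5, n_shells + 1)  # up to Nyquist
    curve = np.zeros(n_shells)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        shell = (radius >= lo) & (radius < hi)
        num = np.real(np.sum(f1[shell] * np.conj(f2[shell])))
        den = np.sqrt(np.sum(np.abs(f1[shell])**2) *
                      np.sum(np.abs(f2[shell])**2))
        curve[i] = num / den if den > 0 else 0.0
    return curve  # one correlation value per resolution shell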
"Biology",
"Chemistry"
] |
GoSam 2.0: Automated one loop calculations within and beyond the Standard Model
We present GoSam 2.0, a fully automated framework for the generation and evaluation of one loop amplitudes in multi-leg processes. The new version offers numerous improvements both in the generation and in the reduction of amplitudes. This leads to a faster and more stable code for calculations within and beyond the Standard Model. Furthermore, it contains the extended version of the standardized interface to Monte Carlo programs, which allows for an easy combination with other existing tools. We briefly describe the conceptual innovations and present some phenomenological results.
Introduction
Two of the main challenges for the upcoming Run 2 of the LHC will be a more precise determination of the properties of the Higgs boson [1,2] and its couplings to bosons and fermions, as well as the continued searches for new physics. Both cases require precise predictions for both signal and background processes. This particularly includes the calculation of next-to-leading order corrections in QCD. One of the main bottlenecks of such a computation is the calculation of the virtual one loop amplitude. The complexity and the need for reliable tools for a large variety of different processes have led to the development of multi-purpose automated tools. An example of such a tool is the GoSam package [3], which focuses on the efficient generation and numerical evaluation of one loop amplitudes. The continuous refinement and extension of the existing package has led to the publication of version 2.0 [4]. In this talk we describe the improvements and new features contained in the new version and present selected results that have been obtained with GoSam 2.0.
2.1.1. Code optimisation with FORM
GoSam generates an algebraic expression for each amplitude, which is written to a Fortran90 file. The time needed to evaluate a single phase space point depends strongly on how well optimised the written expression is. In the first version, the generation of an optimised expression was done with the help of haggies [5]. In the new version we make use of new features provided by FORM version 4.x [6], which result in a more compact code and a gain in speed of up to an order of magnitude.
2.1.2. Summing of diagrams with common subdiagrams
In order to improve the efficiency and the evaluation time, GoSam 2.0 is able to automatically sum diagrams that exhibit a similar structure into a 'meta-diagram', which is then treated as a single diagram. In particular, diagrams that differ only by a propagator which is not in the loop (e.g. Z vs. γ) are summed. Also diagrams with the same loop, but with a different external tree part, are summed up. In the same way, diagrams that share the same set of loop propagators but differ in the particle content of the loop are combined into a single diagram. This summing is controlled by the option diagsum, which is set to True by default.
Numerical polarisation vectors
To reduce the size of the code, numerical polarisation vectors are used for massless gauge bosons. This means that an algebraic expression is only written for a minimal set of helicity combinations and not for each combination separately. This option is used by default; it can be switched off by setting polvec=explicit.
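For orientation, both generation options just described appear as simple key=value entries in a GoSam process card. In the sketch below only the diagsum and polvec lines are taken from the text; the remaining entries form a hypothetical minimal card and may not match the exact syntax of a given GoSam version.

process_name=eeqq
in=e+, e-
out=u, u~
order=gs, 0, 2
diagsum=True
polvec=explicit

Here the first four lines would define an illustrative example process and its coupling order, while the last two switch on diagram summing (the default) and disable numerical polarisation vectors.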
New reduction method
The default reduction method in GoSam 2.0 is NINJA [7,8,9]. It is a further improvement of the integrand reduction method [10,11,12], based on the idea that the coefficients of the residues of a loop integral can be extracted in a more efficient way by performing a Laurent expansion of the integrand. This method requires less numerical sampling and therefore leads to a faster and more stable reduction.
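Schematically — our paraphrase of the integrand-reduction literature rather than a formula from this talk — the integrand is decomposed into residues Δ attached to subsets of the loop propagators D_i,

\frac{\mathcal{N}(q)}{D_1 D_2 \cdots D_N} = \sum_{k=1}^{\min(N,5)} \sum_{i_1 < \cdots < i_k} \frac{\Delta_{i_1 \cdots i_k}(q)}{D_{i_1} \cdots D_{i_k}},

and NINJA determines the coefficients inside each residue Δ from a Laurent expansion of the left-hand side in the loop momentum, rather than from repeated numerical sampling of the integrand.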
Higher rank integrals
The tensor integral of a one loop calculation can be written in a very general form as

I_N^{\mu_1 \cdots \mu_r} = \int \frac{d^n q}{i\pi^{n/2}} \, \frac{q^{\mu_1} \cdots q^{\mu_r}}{\prod_{i=1}^{N} \left[ (q+p_i)^2 - m_i^2 \right]},

where N is the number of loop propagators and r is the tensor rank. In the Standard Model the maximal value for r is r = N. However, in BSM theories and effective theories, larger values for r can occur. Therefore, the libraries NINJA [7,8,9], GOLEM95 [13,14,15,16] and SAMURAI [17] have been extended to deal with the case of r = N + 1. This is an important ingredient for Higgs production in gluon fusion, which we discuss later.
The derive extension
The new version contains an improved tensorial reconstruction, based on the idea that the numerator can be Taylor expanded around q = 0,

\mathcal{N}(q) = \mathcal{N}(0) + q^{\mu} \, \partial_{\mu}\mathcal{N}(q)\big|_{q=0} + \frac{1}{2} q^{\mu} q^{\nu} \, \partial_{\mu}\partial_{\nu}\mathcal{N}(q)\big|_{q=0} + \ldots

This allows one to read off the coefficients of the tensor integrals. It leads to a further improvement of the speed and the precision of the tensorial reconstruction.
Electroweak scheme choices
There are various electroweak schemes, depending on which parameters are used as input parameters and which parameters are derived from them. A consistent treatment requires that a minimal set of input parameters is given; all other parameters are then derived. GoSam 2.0 allows the user to choose among all possible consistent schemes, and the remaining parameters are then automatically derived.
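As a concrete example of such a derivation (these are the standard tree-level relations, not a formula specific to GoSam): choosing the input set {G_F, m_W, m_Z}, the weak mixing angle and the electromagnetic coupling follow as

\sin^2\theta_w = 1 - \frac{m_W^2}{m_Z^2}, \qquad \alpha = \frac{\sqrt{2}\, G_F\, m_W^2 \sin^2\theta_w}{\pi}.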
Rescue system
The new release contains a rescue system to automatically detect and rescue numerically unstable points. Unstable points are detected via an insufficient cancellation of the infrared poles. Several checks and re-evaluations with different reduction methods can then be performed. For further details, see Refs. [4,8].
New ranges of applicability
2.3.1. Color- and spin-correlated matrix elements
The use of subtraction methods for NLO calculations requires the calculation of color- and spin-correlated matrix elements, i.e. Born-like matrix elements with either a modified color structure or with a combination of amplitudes in which the helicity of one external leg is flipped. Both color- and spin-correlated matrix elements implicitly contain the sum over all helicities; only the helicities of the legs with indices i and j are fixed.
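In the standard Catani-Seymour notation — given here as a schematic paraphrase, since the exact equations of the original write-up are not reproduced in this transcript — the color-correlated Born matrix element for legs i and j reads

B_{ij} = \langle \mathcal{M} | \mathbf{T}_i \cdot \mathbf{T}_j | \mathcal{M} \rangle ,

while the spin-correlated matrix element is built from amplitudes in which the helicity of one leg is flipped, schematically

\tilde{B}_{ij} = \langle \mathcal{M}_{\lambda_i = -} | \mathbf{T}_i \cdot \mathbf{T}_j | \mathcal{M}_{\lambda_i = +} \rangle ,

where the T_i are the usual color charge operators.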
In the new release, GoSam is able to generate these matrix elements and the information can be passed via the BLHA2 interface [18].
Complex mass scheme
A gauge invariant treatment of massive gauge bosons requires the use of the complex mass scheme, where the widths also enter the definition of the weak mixing angle. The masses of the bosons are given by

\mu_V^2 = m_V^2 - i\, m_V \Gamma_V , \qquad V = W, Z .

In order to maintain gauge invariance this affects the definition of the Weinberg angle:

\cos^2\theta_w = \frac{\mu_W^2}{\mu_Z^2} = \frac{m_W^2 - i\, m_W \Gamma_W}{m_Z^2 - i\, m_Z \Gamma_Z} .

The complex mass scheme is implemented in GoSam via new models called sm complex and smdiag complex, depending on whether one wants to use the full CKM matrix or a unit matrix.
Phenomenological applications
The new version GoSam 2.0 has recently been used in a sizeable number of challenging calculations both within and beyond the Standard Model [19,20,21,22,23,24,25,26,27,28,29,30]. In this talk we discuss the calculation of Higgs plus jets in gluon fusion and the calculation of a neutralino pair in association with a jet in the MSSM.
Higgs plus jets in gluon fusion
The gluon fusion channel is the dominant production mechanism of a Standard Model Higgs at the LHC. Even if one is interested in the vector boson fusion channel, the gluon fusion mechanism is an irreducible background and therefore its precise determination is mandatory. In particular we have calculated the NLO QCD corrections to H + 2 jets [20] and H + 3 jets [22], and a comparison between the two processes has been studied in Ref. [30]. For the H + 2 jets process the results have been obtained by interfacing GoSam with Sherpa [31] via the BLHA interface [32]. In the case of H + 3 jets we have used MadGraph [33,34] for the real emission matrix element, and MadDipole [35,36] for the generation of the dipoles and the integrated subtraction terms. The phase space integration for these pieces has been performed using MadEvent [37]; for the tree-level contribution and the integration of the virtual amplitude we have again used Sherpa. We have obtained the numerical results with a basic setup of 8 TeV center of mass energy, basic cuts on the jets with pT > 30 GeV and |η| < 4.4, and an anti-kt jet algorithm [38,39] with R = 0.4. Renormalization and factorization scales have been chosen to be equal. For the LO PDFs we have used the cteq6l1 PDF set, for the NLO PDFs we have used the ct10nlo PDF set. The main results at the level of total cross sections are summarized in Table 1. Both processes show a sizeable global K-factor of roughly 1.3, which stresses the importance of including the NLO QCD corrections. The K-factor increases if NLO PDFs are used for both the LO and the NLO calculation. One interesting aspect is that the ratios of the total cross sections of H + 3 jets over H + 2 jets are, to a very good approximation, constant, independent of whether one considers the ratio of LO cross sections with LO or NLO PDFs, or of NLO cross sections.
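For reference, the global K-factor quoted here is the standard ratio of the NLO to the LO total cross section,

K = \frac{\sigma_{\text{NLO}}}{\sigma_{\text{LO}}} .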
The comparison between the two processes allows one to assess the effects of the additional jet on Higgs observables. Two examples are shown in Fig. 1, namely the rapidity and the pT distribution of the Higgs. Looking at the ratios allows us to assess the effect of the third jet on the observables. The ratio plots are normalized to the H + 2 jets result. One can see that the rapidity distribution is rather insensitive to the radiation of an additional jet, whereas the pT distribution shows a clear increase of the importance of the third jet in the high-pT region. The reason for this increase is a mere phase space argument: a high pT for the Higgs can be more easily obtained by distributing the necessary recoil pT over three jets rather than over two. The high-pT phase space points are almost evenly distributed in rapidity, which is why hardly any effect is visible in the rapidity distribution.
Neutralino pair production in association with a jet
An example of a highly non-trivial BSM process is the production of a pair of the lightest neutralinos in association with a jet [19]. We have calculated the Susy-QCD corrections to this process in the MSSM [19]. The neutralino is the LSP, so this process leads to the simple experimental signature of missing energy and a mono-jet. From a calculational point of view it is a very challenging process, as it contains several mass scales, and as full off-shell effects are taken into account one has to deal with a non-trivial resonance structure. The most complicated loop diagrams involve rank-3 pentagons with 4 internal masses. Concerning the computational setup, GoSam has been used for the generation of the virtual one loop amplitude. New models can easily be imported with the help of Feynrules [40], which can be used to write a model file in the UFO format [41]; a model file in this format is automatically understood by GoSam. For our studies we chose a pragmatic and experimentally motivated parameterisation of Susy, known as the phenomenological MSSM (pMSSM) [42,43,44], in a variant involving 19 free parameters (p19MSSM). The relevant Susy parameters are given in Table 2.
The calculation of the UV counter terms has been done separately. Tree-level and real emission matrix elements have been calculated using MadGraph; for the subtraction terms we have used MadDipole. For processes involving unstable particles, the proper definition of the set of diagrams contributing to the next-to-leading order corrections is not obvious. There are problems of double counting, as diagrams with additional real radiation from the unstable particle in the final state can, if it becomes resonant, also be regarded as part of a leading order process with the decay already included in the narrow width approximation. More specifically, in the real emission contribution there is the possibility of producing a squark pair, where the squarks decay into a quark and a neutralino. Close to the resonance, this contribution gets quite large, and in fact should rather be counted as a leading order contribution to squark pair production with subsequent squark decay, because here we are interested in the radiative corrections to the final state of a monojet in association with a neutralino pair. Therefore the calculation was carried out in two different ways. In the first approach we take into account all possible diagrams leading to the required final state consisting of two neutralinos and two QCD partons. In particular this includes the possibility of having two on-shell squarks. In the second approach we remove the diagrams with two squarks in the s-channel from the amplitude. In general, the removal of diagrams leads to a violation of gauge invariance; however, one can show that gauge invariance is still preserved for a large class of gauges [19], or the violation has been found to lead to only a small effect [45].
Table 2: Masses and widths of the supersymmetric particles for the benchmark point used. The second generation of squarks is degenerate with the first generation of squarks. All parameters are given in GeV.
The difference between the two approaches for the pT of the jet is shown in Fig. 2. The distribution is normalized to the total cross section. The blue curve shows the distribution at leading order; the red curve shows the NLO distribution where the doubly resonant squark pair diagrams have been removed. The green curve shows the full result, also taking these resonant diagrams into account. As can be clearly seen, the resonant diagrams lead to a huge enhancement spoiling the perturbative convergence. Removing these diagrams leads to a well-behaved perturbative expansion. For a more detailed phenomenology of this process we refer to Ref. [19].
Conclusions
In this talk we have presented the new release of GoSam, which contains a multitude of improvements compared to the previous version. Refinements have been made in the context of diagram generation as well as on the reduction side, leading to substantial gains in generation time, code size and the time needed to evaluate a phase space point. New reduction mechanisms and a rescue system have led to a more stable and reliable performance. We have discussed the new features and, as selected examples of recent phenomenological applications, we presented the calculations of Higgs plus jets in gluon fusion and the production of a neutralino pair plus one jet in the context of the MSSM.
Acknowledgments
We would like to thank the present and former members of the GoSam collaboration for their effort in the development of GoSam. Furthermore we would like to thank Joey Huston, Jan Winter and Valery Yundin for their collaboration and their work in the Higgs plus jets project.
"Physics"
] |
Cost Control of Treatment for Cerebrovascular Patients Using a Machine Learning Model in Western China
Background Cerebrovascular disease has been the leading cause of death in China since 2017, and the control of medical expenses for these diseases is an urgent issue. Diagnosis-related groups (DRG) are increasingly being used to decrease the costs of healthcare worldwide. However, the classification variables and rules used vary from region to region. Of these variables, the question of whether the length of stay (LOS) should be used as a grouping variable is controversial. Aim To identify the factors influencing inpatient medical expenditure in cerebrovascular disease patients. The performance of two sets of classification rules, and the effects of the extent of control of unreasonable medical treatment, were compared, to investigate whether the classification variables should include LOS. Methods Data from 45,575 inpatients from a Healthcare Security Administration of a city in western China were used. Kruskal–Wallis H tests were used for single-factor analysis, and multiple linear stepwise regression was used to determine the main factors. A chi-squared automatic interaction detector (CHAID) algorithm was built as a decision tree model for grouping related data. The intensity of oversupply of service was controlled step by step from 10% to 100%, and the performance was calculated for each group. Results The average hospitalization cost was 1,284 US dollars, and the total was 51.17 million US dollars. Of this, 43.42 million were paid by the government, and 7.75 million were paid by individuals. Factors including gender, age, type of insurance, level of hospital, LOS, surgery, therapeutic outcomes, main concomitant disease, and hypertension significantly influenced inpatient expenditure (P < 0.05). Incorporating LOS, the patients were divided into seven DRG groups, while without LOS, the patients were divided into eight DRG groups. More clinical variables were needed to achieve good results without LOS. Of the two rule sets, smaller coefficient of variation (CV) and a lower upper limit for patient costs were found in the group including LOS. Using this type of economic control, 3.35 million US dollars could be saved in one year.
Introduction
Cerebrovascular disease and its complications are the leading cause of disability and death worldwide. Of all the diseases of the nervous system, cerebrovascular diseases have the greatest impact on disability and produce the highest economic burden [1][2][3]. Since 2017, this disease has become the leading cause of death in China [4]. The number of people suffering from cardiovascular and cerebrovascular diseases in China was 330 million in 2019, and these diseases are the leading cause of death among urban and rural residents [5]. In 2017, the total cost of treating cerebrovascular diseases in China reached 83.83 billion US dollars, ranking first among all diseases and accounting for 17% of the total medical cost of treating diseases, equivalent to 0.66% of GDP [6]. One city alone spent 51.17 million US dollars a year on these diseases in this study. In the face of so much economic pressure, the government must take effective action to reduce the economic burden of cerebrovascular diseases.
Diagnosis-related groups (DRG) are one of the most advanced medical payment management methods, aiming to reduce inefficiency and contain costs [7]. Based on factors such as a patient's demographic information, diagnosis, and disease severity, DRG-based payment systems group patients with similar clinical attributes requiring similar care, providing the necessary framework to aggregate patients into case types or products, which entail the use of similar resources [8]. DRG adopt a standard pricing framework for a single disease group [9] and provide equity in payments across healthcare providers for services of the same kind. Most studies have found DRG to have positive effects on controlling medical expenses and reducing the economic burden among patients [10]. Studies into cerebrovascular diseases have found that DRG can effectively reduce unreasonable costs incurred in the treatment of cerebrovascular diseases [11,12]. However, the rules of the grouping vary between countries and regions; for example, length of stay (LOS) is widely used as a statistical classification index in research into DRG management in Poland, Britain, and other developed countries [10]. Japan uses LOS as a secondary parameter [9]. However, Finland and Sweden do not consider LOS [13].
China Healthcare Security Diagnosis-Related Groups (CHS-DRG) are the unified grouping standard used by the national pilot cities [14]. Due to the unbalanced development of China's economy, the Chinese government requires cities to develop localized grouping rules based on their actual conditions, so DRG payment policy designs and grouping rules vary across China [15]. Beijing Diagnosis-Related Groups (BJ-DRG) are the earliest localized grouping in China; Beijing built the Chinese Diagnosis-Related Groups (CN-DRG) following the model of the All-Patient Diagnosis-Related Groups (AP-DRG) in the USA, while Shanghai built the Shanghai-DRG and the national standards for paying fees according to DRG (C-DRG) based on the Australian Refined DRG (AR-DRG). However, these grouping methods are all based on data collected from first-tier developed cities in China, and there has been no research into the underdeveloped cities in the west of the country. It is inappropriate for cities in the west to use the same rules, due to the unbalanced economic and technological development in China [16]. None of these grouping rules takes LOS into account, unlike most countries in Asia, which incorporate LOS [17].
In this study, we collected data from an underdeveloped city in western China. Machine learning was used to group patients with similar costs, and two sets of rules were built, one incorporating LOS and the other without it. We compared the performance of the grouping rules based on the coefficient of variation (CV) to assess the heterogeneity within a group, as has been done in previous studies [8]. We identified the outliers in each group and considered them to represent unreasonable costs. Finally, we tried to control these costs to different extents. This study fills the gap left by previous studies, which have only focused on developed cities and which use CV as the standard measure of the results of grouping; in our study, underdeveloped cities and control performance were considered. The rest of this paper is organized as follows. In Section 2, we introduce our materials and methods. In Section 3, we present our results, including general information and inpatient medical expenditure, single- and multiple-factor analyses of the factors influencing inpatient medical expenditure, the results of the two sets of rules for DRG grouping, medical expenses in the different DRG, and the payment method adjustment results. In Section 4, we discuss the results. Section 5 concludes this study and describes directions for future research.
Patient Data.
The data used in this research were collected from the Healthcare Security Administration of a city in western China during 2018. The data included medical records and cost information for 93,185 inpatients with cerebrovascular diseases (ICD-10: I60-I69) as the principal diagnosis, all of which fell under the major diagnostic category (MDC) of diseases and dysfunction of the nervous system (MDCB). The original information on these patients included 58 variables, such as gender, age, LOS, cost of hospitalization, payment of medical insurance, and type of insurance.
Data Cleaning.
In the first step of data cleaning, we selected data only from comprehensive tertiary and secondary grade hospitals; patients from township hospitals, community hospitals, and school hospitals were removed. As a second step, we eliminated outliers in costs [8] and patients younger than 18 years of age. Finally, patients who were not hospitalized in our study city but were reimbursed by the city's Medical Insurance Bureau were excluded. Valid data from a total of 45,575 patients were obtained after screening.
Statistical Analysis and Data Grouping.
The proportions of the training set and the test set were 80% and 20%, respectively. First, the training set was grouped, and the grouping performance was evaluated on the test set data. Finally, all the data were assigned to the grouping rules and analyzed.
Kruskal-Wallis tests were used for single-factor analysis to determine the factors influencing hospitalization expenses. Values of P < 0.05 were considered statistically significant [18]. Stepwise multiple generalized linear regression was used for variance analysis [19]. The medical costs for different subgroups were calculated, and the statistically significant variables with the greatest impacts on medical costs were selected for the grouping analysis.
The Chi-Squared Automatic Interaction Detection (CHAID) algorithm was used to establish the DRG combinations [10,20]. In selecting the grouping variables, we considered both the inclusion and the exclusion of LOS, the CV, and the percentage of outliers. We considered a CV value of less than 1 to indicate no heterogeneity within a group, as has been done in previous studies [8]. We regarded outliers as representing unreasonable medical treatment and calculated the variation in unreasonable medical costs among different participants under different degrees of control. We used inpatient hospitalization expenditure as the dependent variable, and the variables selected by the generalized linear stepwise model were set as the independent variables. LOS was shown to have a significant positive influence on medical expenditure. To further investigate the grouping performance of LOS, we built two decision tree models: the first used LOS as a classification variable, and the second omitted it. We conducted more than ten random trials with resampled data, and the results of each trial were consistent, indicating that the performance of the algorithm is stable. All analyses were carried out using RStudio (R 4.0.2) [21] with the CHAID package [22].
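For readers who want a concrete picture of this pipeline, the sketch below reproduces its shape in Python, with scikit-learn's CART-based regression tree standing in for the CHAID implementation that the study ran in R; all file and column names are hypothetical.

```python
# Minimal sketch of cost-based patient grouping with a decision tree.
# CART (sklearn) is used here as an illustrative stand-in for CHAID;
# every file and column name below is a hypothetical placeholder.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

df = pd.read_csv("inpatient_records.csv")          # hypothetical file
features = ["gender", "age", "insurance_type", "hospital_level",
            "surgery", "discharge_status", "cc", "los"]
X = pd.get_dummies(df[features])                   # one-hot encode categoricals
y = df["cost"]

# 80/20 split, mirroring the training/test proportions in the study
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Shallow tree: each leaf acts as one DRG-like cost group
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=500)
tree.fit(X_train, y_train)

# Assign every patient to a group (leaf id) and inspect group sizes
df["group"] = tree.apply(X)
print(df.groupby("group")["cost"].agg(["count", "mean", "std"]))
```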
Results
In the following section, we summarize general information about the patients' medical costs in Section 3.1; single-factor and multiple-factor analyses are presented in Sections 3.2 and 3.3, respectively. The results of grouping using the two sets of rules based on machine learning are shown in Section 3.4. Finally, the performance of the algorithm under different levels of implementation control is presented in Section 3.5.
General Information and Inpatient Medical Expenditure.
As shown in Table 1, women, individuals over 60 years old, and urban residents accounted for the majority of patients, while men, the elderly, and rural residents had relatively high expenses. Of the patients, 50.18% spent less than nine days in hospital, and 82.26% recovered after hospitalization. Of the patients with complete data, 19,488 (42.76%) were male and 26,087 (57.24%) were female; 1,995 (4.37%) were under the age of 45, 9,117 (20%) were aged between 45 and 60, and 34,463 (75.64%) were older than 65. With respect to residence, 30,243 (66.36%) patients were urban workers, and 15,332 (33.64%) were rural residents. Among them, 24,482 (53.74%) were from secondary grade hospitals, and 21,087 (46.26%) were from tertiary grade hospitals. We also carried out statistical analysis on the effects of LOS, surgery (with or without), discharge status, comorbidities and complications (CCs), and grade III hypertension on the distribution of patients' medical expenditure in the different subgroups. The average expenditure of these patients was 1,284 US dollars. Among the subgroups, males, individuals aged over 65, rural residents, patients from tertiary grade hospitals, patients with an LOS of more than 13 days, patients undergoing surgery, patients who died, and patients with CCs involving insufficiency of blood supply to the cerebral arteries incurred higher expenses.
Single Factor Analysis of the Factors Influencing Inpatient Medical Expenditure.
In this study, 58 variables were examined using single-factor analysis (Table 1). Gender, age, type of insurance, surgery, LOS, status on discharge, CCs, and grade III hypertension were shown to be associated with statistically significant differences in hospital expenditure, using Kruskal-Wallis tests (P < 0.01). Expenditure was highest for men, individuals older than 60, rural residents, patients with longer LOS, patients undergoing surgery, patients who died, and patients with CCs.
Multiple Factor Analysis of the Factors Influencing Inpatient Medical Expenditure.
Generalized linear stepwise models were used for the multiple regression analysis. Gender, LOS, level of hospital, surgery, status on discharge, type of insurance, comorbidities and complications, and age had significant impacts on medical expenditure (Table 2). The R-squared value of the model was 0.521, and the kappa value was 12.08, indicating that the model performed well and that there was no multicollinearity between the variables. All of these variables could be regarded as reasonable data for DRG grouping.
Two Rules for DRG Grouping and Medical Expenses in Different DRG.
There were seven subgroups in model one and eight groups in model two. The hospital level was the main factor, and the second rule, without LOS, required more disease-related information, such as details of CCs. The grouping without LOS was more finely divided; for example, grade A tertiary and grade B tertiary hospitals were in the same group under the rule incorporating LOS, while they were in different groups without LOS. The number of individuals in each group and details of expenses are shown in Tables 3 and 4. Most of the CVs of the first grouping method were less than 0.5, indicating that the homogeneity within the groups was good and that the grouping effect was better under the grouping rules incorporating LOS. The weight of a group was calculated as (the average cost of the group)/(the overall average cost); the higher the weight, the more resources consumed by the patients in the group. We set P75 + 1.5 IQR as the cost limit of each group, and the excess amount indicates the amount of each group's medical expenses that fell outside the cost limit.
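A minimal sketch of these per-group statistics (CV, weight, and the P75 + 1.5 IQR cost limit) is given below, assuming a hypothetical table with one row per patient and columns for group assignment and cost.

```python
# Sketch of the per-group statistics described above: coefficient of
# variation (CV), group weight, and the P75 + 1.5*IQR cost limit.
# Assumes a DataFrame with hypothetical columns "group" and "cost".
import pandas as pd

def group_stats(df: pd.DataFrame) -> pd.DataFrame:
    overall_mean = df["cost"].mean()
    rows = []
    for g, sub in df.groupby("group"):
        q1, q3 = sub["cost"].quantile([0.25, 0.75])
        iqr = q3 - q1
        limit = q3 + 1.5 * iqr                    # upper cost limit
        rows.append({
            "group": g,
            "n": len(sub),
            "mean": sub["cost"].mean(),
            "cv": sub["cost"].std() / sub["cost"].mean(),
            "weight": sub["cost"].mean() / overall_mean,
            "cost_limit": limit,
            # spending above the limit, i.e. candidate "unreasonable" cost
            "excess": sub.loc[sub["cost"] > limit, "cost"].sub(limit).sum(),
        })
    return pd.DataFrame(rows)
```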
We also analyzed the outliers of each group. Using the first grouping rules, the outliers were older than the normal patients, while using the second grouping rules, the outliers had a significantly longer LOS than the average.
Prediction of Medical Expenses Based on an Increasing Control Ratio of Unreasonable Treatment.
In 2018, a total of 51.17 million US dollars in medical expenses was related to the 45,575 inpatients with cerebrovascular diseases as the principal diagnosis. The average cost was 1,284 US dollars. Among them, 43.42 million were paid by the Healthcare Security Administration, and 7.75 million were paid by patients themselves. All of this expenditure was based on the Fee for Service (FFS) payment system. We took the mean cost of each group as the payment standard for the DRG group and calculated the average cost to the Healthcare Security Administration, the hospital, and the patient. The current FFS method encourages an oversupply of services in order to increase revenue [9]. We consider expenditure below the cost limit in each group to be a normal supply, and the instances in which outliers exceed the upper limit to be an oversupply of services. We increased the control intensity step by step from 10% to 100% for this oversupply, to simulate performance under the DRG payment system. The control effects of the two grouping rules are shown in Table 5. Under full control, the rules with LOS could save 598,570 US dollars, while 3.35 million US dollars could be saved based on the grouping rules without LOS. The government therefore paid an average of 1,087 US dollars for each patient, and each patient paid 196 US dollars out of pocket.
Discussion
The expenditure in developed cities is even higher. Control of the medical expenses caused by cerebrovascular disease is an urgent problem for the Chinese government. The city we chose uses a Fee for Service system, which may provide an incentive to oversupply services. We used local data to classify the patients into different groups with similar medical costs. Two models with different rules were built, based on whether the LOS was included as a classification variable. We used the CV to measure the quality of the grouping and analyzed the characteristics of the outliers in each group. We then increased the intensity of control of the oversupply of services step by step, from 10% to 100%, to simulate the performance of the two grouping rules. The model incorporating LOS had a smaller CV than the model without LOS. If our standard model were built without LOS, it could reduce the occurrence of medical oversupply, saving 3.35 million US dollars in one year. These figures apply to only one city; if the whole country controlled costs in this way, the economic pressures on healthcare could quickly be alleviated.
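The stepwise control procedure described above can be sketched as follows, reusing the hypothetical per-group table from the previous sketch; the only inputs are each group's excess spending and the control ratio.

```python
# Sketch of the stepwise control simulation: spending above each
# group's cost limit is treated as oversupply, and a growing fraction
# of it (10% .. 100%) is removed. "group_stats" and the column names
# are the hypothetical ones from the previous sketch.
stats = group_stats(df)
for ratio in [i / 10 for i in range(1, 11)]:
    saved = (stats["excess"] * ratio).sum()
    print(f"control {ratio:4.0%}: saved {saved:,.0f} USD")
```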
Although it is generally recognized that LOS is the main factor influencing medical expenses [23], practice is inconsistent on whether LOS should be included as a DRG classification variable. It is generally believed that considering LOS as a classification variable may lead to upcoding [11]. Most European countries, including England, Estonia, and Finland, do not consider LOS as a classification variable. The official Chinese CHS-DRG, modelled on the American MS-DRG, does not include LOS [14], and the Shanghai-DRG, based on the Australian AR-DRG, also does not consider LOS. However, some studies indicate that omitting LOS may increase the frequency of readmission and moves between hospitals, with services provided in alternative ways [17]. Omitting LOS also leads to poorer care for patients who should have a longer stay. The grouping rules of some countries, such as France, Ireland, and Poland, consider LOS to be an important factor [13]. Tables 3 and 4 show the results of grouping. The grouping rule with LOS has a smaller CV, indicating that the cost differences within grouping rule one were smaller and the grouping was more reasonable. We used P75 + 1.5 IQR as the upper limit to test for outliers in each group. The proportion of outliers was higher in the group without LOS.
This observation implies that the use of LOS can lead to more accurate grouping. Both grouping rules demonstrate that the hospital level is very important. In the grouping rules without LOS, hospital levels and comorbidity are more finely divided. It is therefore inadvisable to consider only one hospital level.
We analyzed the outliers (Tables 3 and 4) and found that in the LOS group, the age of the outliers was significantly higher than the average value of the group, while in the group without LOS, the LOS was significantly higher than the average. A study using MS-DRG hospital data from Malta also found that most of the outliers were older and higher costs were associated with higher LOS [8]. Further analysis of these results could help identify the reasons for the high costs.
In Asia, only the Republic of Korea considers the type of hospital as a factor for DRG-based payment [9]. In this study, we found that the level of the hospital crucially influenced inpatient medical expenditure. Although there have been studies looking at the impact of hospital levels on costs [19], research into DRG has tended to focus only on tertiary hospitals. Our research therefore complements previous studies that only grouped hospitals at one level [13]. The major diagnosis was directly related to the differences in the cost of hospitalization. Comorbid patients often require special treatment and care, and different comorbidities may affect the cost of additional care, making comorbid disease an important grouping variable. Medical costs are higher for the elderly, who require special treatments [13], but age did not appear among our grouping variables. In China, many DRG subgroups, such as the pneumonia subgroup, have age as the primary factor [19], possibly because the high cost of these groups is mainly concentrated in the elderly and children. However, the age distribution of cerebrovascular disease is itself concentrated in the elderly. In most European countries, like England and Estonia, age is not a factor used in grouping [13]. This observation is consistent with our findings. Most grouping rules have found surgery to be an important variable, and our single-factor analysis also showed that surgery has a significant impact on costs. However, surgery was not a variable identified in our grouping results. This situation may have something to do with the choice of disease: a cluster study in Beijing, China, also confirmed that in stroke, one of the cerebrovascular diseases, surgery is rare [24]. Table 5 shows the performance when the oversupply of services is controlled under the DRG payment system. The intensity of control was increased step by step from 10% to 100%, and the results of applying the two rule sets were compared. More money could be saved without LOS. Experience in Europe indicates that the use of LOS leads to upcoding, and medical costs were high when LOS was considered. These results imply that without LOS the costs could be controlled better, while with LOS the patients could be classified better. More incentives and oversight are needed if DRG is to be introduced. For one city, 21 million RMB could be saved by applying the results of our research, an outcome that is highly desirable for the government. There were some limitations in this study. Due to the lack of standards for the data reported by the hospitals, 5,768 cases lacked information on whether surgery was performed, so these data were excluded from the grouping. Since there is no uniform surgical code across hospitals, we could not use the surgical code as a research variable. Due to the large amount of data, we only considered data from one year. In the future, data from more years could be included, or the data from another year could be used as a test set for the CV.
Conclusions
We used real data from less developed regions for DRG grouping, filling the gap left by previous studies, which took developed regions as their research objects. To the best of our knowledge, this is the first time that secondary grade hospitals have been considered in a Chinese DRG study. We compared two grouping methods and discussed the results of the grouping. DRG payments were fixed, and this study adjusted the payment ratios of medical insurance, patients, and hospitals to achieve a satisfactory result for all three parties. To speed up the development of DRG and rationalize the costs of cerebrovascular disease, the structuring of hospital information and the standardization of data entry are essential. More research in this area is urgently needed.
Data Availability
All the data were taken from the Medical Insurance Laboratory of the Chengdu Healthcare Security Administration.
Ethical Approval
The study does not involve human subjects and adheres to all current laws of China.
Conflicts of Interest
The authors report no conflicts of interest concerning the materials or methods used in this study or the findings presented in this paper.
Using intelligence techniques to automate Oracle testing
INTRODUCTION
One of the main objectives of testing is to hasten the release of a program while ensuring that it contains no bugs that could later be discovered and undermine the programmer's or developer's confidence in the program [15]. A software system's capabilities can be assessed to see whether it can produce the required results. Software testing is a crucial step in ensuring a certain level of software system performance and quality. More than half of the development time is spent on testing, a crucial component of software development [16]. Automated Software Testing (AST) is a type of testing in which the test cases are carried out automatically using automation tools and test scripts. Its advantage is that it expedites test execution once the automated scripts have been generated [17]. In this paper, a system is proposed to automate oracle testing using intelligent techniques. This test aims to predict the output of the system being tested and compare it with the results of the software under test. The random forest algorithm and a convolutional neural network were used to build this system.
A. Test Software
Testing is the process of evaluating a system or its component(s) to determine whether or not it meets the specified requirements. This activity yields the actual outcome, the expected outcome, and the difference between them. Simply put, testing is the process of running a system to identify any gaps, errors, or missing requirements that are contrary to the actual desires or requirements [7].
When to Automate: Test automation should be used for the following software projects:
1-Large and critical projects.
2-Projects that require testing the same areas on a regular basis.
3-Requirements that do not change frequently.
4-Applications accessed by many virtual users, for load and performance testing.
5-Software that is stable in terms of manual testing.
Black box testing and white box testing are the two primary methods for testing a program.
In black box testing, the project source code is not used to create the tests; only the software specifications are used. White box testing is a technique in which the tests are derived from the source code, and evaluation of the source code reveals whether its hidden logic behaves correctly. Black box testing is more efficient for testing large code blocks, since only the specification must be evaluated, whereas white box testing focuses on the inner workings of the program; black box testing focuses on the specification [8].
B. Test Level
Software testing consists of several levels, starting with the acceptance test, which evaluates the system against the requirements. Then comes the system test, which evaluates the program against the architectural design. The integration test evaluates the program against the subsystem design, followed by the module test, which evaluates each module against the detailed design. Finally, unit testing evaluates the program units against the implementation [9].
C. Test Oracle
A test oracle is a mechanism that can be used to determine whether a method output is correct or incorrect. Testing is performed by executing the method under test with random data and evaluating the output with the test oracle [10].
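A minimal sketch of this loop, with a trivial hypothetical method under test and a reference-implementation oracle, looks like this:

```python
# Sketch of oracle-based testing as described above: run the method
# under test on random inputs and let an oracle judge each output.
# Both functions here are hypothetical placeholders.
import random

def oracle(inputs, output) -> bool:
    """Return True if `output` is correct for `inputs`."""
    return output == sum(inputs)            # reference implementation

def method_under_test(inputs):
    return sum(inputs)                      # the code being tested

failures = []
for _ in range(1000):
    inputs = [random.randint(-100, 100) for _ in range(5)]
    out = method_under_test(inputs)
    if not oracle(inputs, out):
        failures.append((inputs, out))
print(f"{len(failures)} failing cases out of 1000")
```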
An oracle ought to treat these two conditions separately:
1-If the condition on the initial state does not hold, then the program is off the hook: since its assumption does not hold, whatever it does must be considered correct.
2-The output condition of the specification is checked only if the input condition holds [11].
D. Regression Test
Regression testing is described as "the process of retesting the modified parts of the software and ascertaining that no new errors have been introduced into previously tested code" [12]. Regression testing is used to revalidate software modifications. It is an expensive process that involves running test suites to ensure that no new errors are introduced into previously tested code [13]. There are numerous methods for regression testing:
1-Retest all: a traditional method of regression testing in which all tests in the current test suite are redone. This is very expensive compared to other types of regression testing.
2-Selection of regression tests: used instead of the "retest all" technique because it is less expensive.
3-Prioritization of test cases: this approach assigns higher priority to certain test cases in order to improve the rate of fault detection, that is, how quickly a test suite can identify mistakes in the altered program, and thereby increase reliability.
4-Hybrid approach: the fourth regression technique combines test case prioritization and selection. Numerous researchers are working on this strategy and have proposed a wide range of algorithms [14].
One study presented two methods for developing test oracles: one uses software redundancy and the other relies on plain-language comments that describe the source code of software systems. It introduces a method known as cross-validation oracles (CCOracles), which employs redundant sequences of method calls and produces test oracles automatically [3]. In 2020, K. Kamaraj, C. Arvind, and K. Srihari proposed a weight-optimized ANN that employs stochastic diffusion search to pinpoint the ideal weights with a particular fitness function, lowering computational time and misclassification rate. Automation of the development of test cases and test oracles has been the subject of extensive research, and among automated test oracles the artificial neural network (ANN) has been heavily utilized [4]. In 2021, Ke Chen, Yufei, and other researchers presented an approach to automating the test oracle to find non-crashing bugs in complex graphics-enhanced applications. They suggested GLIB, a code-based data augmentation method for spotting GUI glitches in video games; tested on 20 real-world game applications, GLIB detected non-crashing bugs such as GUI bugs with high precision and recall [5]. In 2020, M. Valueian, N. Attar, H. Haghighi, and M. Vahidi-Asl proposed an innovative black-box method for developing automated oracles that can be used with low-observability software systems. The proposed method uses a "Multi ANN Network" artificial neural network, which trains on the input values and associated pass/fail results of the program being tested, and achieves higher accuracy on low-observability software systems than current machine learning approaches. After running an SUT with each input vector, a value was assigned to each one, indicating whether the program was successful or unsuccessful [6].
III. Proposed System
Intelligent techniques can be used to automate oracle testing for software testing and to speed up regression testing. In this research, a system design is proposed to implement oracle testing, which tests software by predicting the output of the program under test and comparing it with the actual results of the application by calculating the distance between the two results. This system consists of two stages. The first is the training stage: the inputs are entered into the application, and the same data are fed to a model that is trained using one of the intelligent techniques; at the same time, the software outputs are supplied to the model as the training targets. The result of the training stage is an oracle model that predicts the software's output.
A. Test Cases
The data used in this proposed system are randomly generated data matching the specifications of the credit card approval application to be tested. A total of 10,000 samples were generated, representing credit card users; the application requirements and attribute descriptions were used to generate the training data used in this study. The data consist of nine attributes, and the columns are named according to the attributes of the application: region, age, nationality, state, marital status, number of dependents, gender, income class, and approved credit. These data are entered into the credit card approval algorithm to obtain the approved/not approved results. After obtaining these results, the data are preprocessed by deleting redundant records, after which the training data set is ready to train the oracle model. A table of the types of data generated and used in the proposed system is given below.
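A sketch of this generation step is shown below; the nine attribute names follow the text, while the value ranges and category sets are illustrative assumptions.

```python
# Sketch of the test-data generation step. The nine attribute names
# come from the text; the value ranges and category sets are
# illustrative assumptions, plus a sequential id column.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000
df = pd.DataFrame({
    "id": range(500_001, 500_001 + n),        # sequential ids, as in the text
    "region": rng.integers(1, 6, n),
    "age": rng.integers(18, 80, n),
    "nationality": rng.integers(1, 4, n),
    "state": rng.integers(1, 30, n),
    "marital_status": rng.choice(["single", "married"], n),
    "dependents": rng.integers(0, 6, n),
    "gender": rng.choice(["M", "F"], n),
    "income_class": rng.integers(1, 5, n),
    "approved_credit": rng.integers(0, 50_000, n),
})
df.to_csv("credit_test_data.csv", index=False)   # one merged table, saved as CSV
```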
B. Implementation of the proposed system
In this proposed system, the credit card approval system is tested. The random forest algorithm and a convolutional neural network are used to train the model, and the model with the highest accuracy is adopted as the oracle for testing and predicting software correctness. The execution of the test consists of three main steps: generating the test data, applying the data to the system, and finally reporting the errors. The test data are generated according to the features and requirements of the credit card approval system, on which the conditions of the credit card algorithm are based. Nine columns are generated, named according to the application attributes: region, age, nationality, state, marital status, number of dependents, gender, income category, and approved credit. A total of 10,000 cases are generated, with sequential identifiers starting from 500,001. All data columns are merged into one table, and the data frame is saved to a CSV file. These data are entered into the credit card approval system and used to train the test model.
C. Build the model
After generating the data, the model is built and trained using the test data together with the output values of the credit card application, represented as 0 or 1 (1: approved, 0: not approved). First, the model is trained with the random forest algorithm: the data set is preprocessed and divided into training data (80% of the total) and test data (20% of the total). After the random forest model is trained, a second test model is built and trained using a convolutional neural network with the same preprocessed data and the same output values of the credit card application.
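A minimal sketch of the random-forest half of this step is given below; `credit_card_algorithm` is a hypothetical stand-in for the application under test, and the column names follow the earlier data-generation sketch.

```python
# Sketch of the model-training step: 80/20 split and a random forest
# trained to predict the application's approved (1) / not-approved (0)
# output. All names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def credit_card_algorithm(df: pd.DataFrame) -> pd.Series:
    # Hypothetical stand-in for the application under test:
    # approve (1) when income class and age clear simple thresholds.
    return ((df["income_class"] >= 3) & (df["age"] >= 21)).astype(int)

df = pd.read_csv("credit_test_data.csv")
df["label"] = credit_card_algorithm(df)          # output of the system under test
X = pd.get_dummies(df.drop(columns=["id", "label"]))

# 80/20 split, as described in the text
X_train, X_test, y_train, y_test = train_test_split(
    X, df["label"], test_size=0.2, random_state=0)

oracle_model = RandomForestClassifier(n_estimators=100, random_state=0)
oracle_model.fit(X_train, y_train)
print("oracle accuracy:", accuracy_score(y_test, oracle_model.predict(X_test)))
```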
D. Comparison tool
After the model is trained, the credit card application will be tested using a comparison tool, which will compare the results of the application with the results of the model's prediction by calculating the absolute distance. The root mean square error (RMSE) is used to determine the distance between the two values and is represented by the following equation:
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\big(y(i) - \hat{y}(i)\big)^{2}}

where N is the number of samples, y(i) is the i-th measurement, and \hat{y}(i) is its corresponding prediction.
The comparison tool performs the following actions. First, a threshold value is defined. If the network prediction and the application output match and the absolute distance is 0.0, the network prediction matches the program output, meaning both outputs are correct. If the distance between the network prediction and the application output falls in the interval below the threshold value, both outputs are likely to be correct. Finally, if the distance lies in the interval above the threshold value, there is an error in the result.
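The three-way decision can be sketched as follows; the threshold value is an illustrative assumption.

```python
# Sketch of the comparison tool's three-way decision: a distance of 0
# means agreement, distances below a threshold are treated as likely
# correct, and larger distances are flagged as errors.
THRESHOLD = 0.5   # illustrative assumption

def compare(prediction: float, app_output: float) -> str:
    distance = abs(prediction - app_output)
    if distance == 0.0:
        return "match"            # both outputs agree
    if distance < THRESHOLD:
        return "likely correct"
    return "error"                # discrepancy worth reporting

for pred, out in [(1.0, 1.0), (0.8, 1.0), (0.0, 1.0)]:
    print(pred, out, "->", compare(pred, out))
```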
E. Injection of errors
One of the important tests is the mutation test, in which errors are injected and the system is tested to see whether the program output changes or remains the same. In this proposed system, logical errors are injected by making a slight change to the algorithm of the application under test.
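As a minimal illustration, a logical mutation might flip a comparison operator in a hypothetical approval rule:

```python
# Sketch of the error-injection (mutation) step: a small logical
# change to the application under test, here flipping a comparison
# operator in a hypothetical approval rule.
def approve(income_class: int) -> int:
    # original rule: approve when the income class is high enough
    return 1 if income_class >= 3 else 0

def approve_mutated(income_class: int) -> int:
    # injected logical error: operator flipped from >= to <
    return 1 if income_class < 3 else 0
```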
V. Result
After training the model on the random forest algorithm, the results were calculated as follows:
VI. Conclusion
A system is proposed to test the validity of program results through its ability to predict program output, comparing its predictions with the results of the program under test and discovering divergent values. The proposed system was able to convert black-box testing into an automatic test and implement the oracle test automatically; it facilitated regression testing and was able to apply mutation testing to the credit card approval code. This system can be applied to business applications, or even to applications that rely on multiple inputs and one output. The proposed system was implemented using a convolutional neural network and the random forest algorithm; the random forest algorithm showed an accuracy of 100%, slightly higher than the CNN model at 99%.
"Computer Science"
] |
Water‐Based Conductive Ink Formulations for Enzyme‐Based Wearable Biosensors
Herein, this work reports the first example of second-generation wearable biosensor arrays based on a printed electrode technology involving a water-based graphite ink, for the simultaneous detection of l-lactate and d-glucose. The water-based graphite ink is deposited onto a flexible polyethylene terephthalate sheet, yielding stencil-printed graphite (SPG) electrodes, further modified with [Os(bpy)₂(Cl)(PVI)₁₀] as an osmium redox polymer to shuttle the electrons from the redox centers of lactate oxidase from Aerococcus viridans (LOx) and glucose oxidase from Aspergillus niger (GOx). The proposed biosensor array exhibits a limit of detection as low as (9.0 ± 1.0) × 10⁻⁶ M for LOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀] and (3.0 ± 0.5) × 10⁻⁶ M for GOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀], and a sensitivity as high as 1.32 µA mM⁻¹ for LOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀] and 28.4 µA mM⁻¹ for GOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀]. The technology is also selective when tested in buffer and artificial sweat and is endowed with an operational/storage stability of ≈80% of the initial signal retained after 20 days. Finally, the proposed array is integrated in a wristband and successfully tested for the continuous monitoring of l-lactate and d-glucose in a healthy volunteer during daily activity. This is foreseen as a real-time wearable device for sport-medicine and healthcare applications.
Introduction
[7,8] Although conductive inks were reported a few decades ago, mainly to repair electrical circuits, they are still expensive and require specific curing procedures with long preparation times and high temperatures. [9,10,16,17] To develop reliable sensor devices, the ink mixtures should exhibit a homogeneous composition, conductive characteristics, and a moderate drying time. [18] Indeed, fast drying can promote cracks on the surface, a problem for electrode manufacturing, while slow drying hinders the scalability of the process as well as specific electrode shaping/sizing. [12,21] To ensure a high level of reproducibility and robustness, wearable enzyme-based biosensors need to be tested in real operating conditions. [22,23] This requires all analyses to be performed considering the blood/tissue or peripheral bodily fluid ratio, which can be affected by several factors (e.g., hormonal dysfunctions, sweating rate, age, etc.). Besides reproducibility, robustness is affected by the immobilization of the bioreceptors. [1] In this regard, a big step forward is the possibility of printing enzymes directly onto a conductive support or embedding them within a conductive ink. [24,25] The latter can easily be achieved with the newly developed water-based conductive inks. In addition, the roughness/porosity of such electrode surfaces can prevent enzyme denaturation, creating a diffusion barrier that reduces signal variation and minimizes the loss of enzymatic activity. [26,29-32] Most lactate and glucose biosensors are developed with LOx and GOx as bioreceptors, respectively. LOx contains flavin mononucleotide (FMN), catalyzing the oxidation of lactate to pyruvate with the simultaneous reduction of O₂ to H₂O₂. Since O₂ naturally works as an electron acceptor, both O₂ and the related product H₂O₂ can be electrochemically monitored to obtain an amperometric output proportional to the lactate concentration. [33,34] However, there are several concerns about the selectivity and reproducibility of these first-generation lactate biosensors, mainly due to the high overpotential needed to oxidize/reduce H₂O₂ and the fluctuation of O₂ in solution, not to mention its limited availability when working in bodily fluids (0.22 × 10⁻³ M). Similarly, GOx contains flavin adenine dinucleotide (FAD), catalyzing glucose oxidation accompanied by O₂ reduction to H₂O₂. [28,35] Despite its initial "fame" among bioelectrochemists, who described it as an "ideal enzyme," GOx is nowadays considered a reliable biocatalyst only for first- and second-generation biosensors. Indeed, GOx does not undergo direct electron transfer (DET) with electrodes. [36,37] Besides enzymatic detection, many analytical methods have been proposed for the detection of lactate and glucose, such as chemiluminescence, [38] high-performance liquid chromatography, [39] and magnetic resonance spectroscopy. [40] However, these methods have known drawbacks, as they are often time-consuming, expensive, and require laboratory equipment and trained personnel. Amperometric enzyme-based biosensors represent a valid approach, particularly for the development of wearable biosensors for continuous metabolite monitoring and remote medicine.
[20,23] This work reports on the formulation of water-based graphite inks to fabricate stencil-printed electrodes (Figure 1). The water-based inks are formulated using graphite, chitosan of medium molecular weight, and glycerol as the conductive material, binder, and stabilizer, respectively. The proposed ink was further reformulated to include Prussian Blue (PB), yielding an ink active toward H₂O₂ reduction for implementing first-generation enzyme-based biosensors. Furthermore, the stencil-printed graphite electrode was modified with osmium redox polymers (ORPs) and LOx/GOx to develop, for the first time, an array of second-generation biosensors based on a water-based conductive ink. After preliminary characterization performed in buffer and artificial sweat, the proposed array was integrated into a wristband to continuously monitor lactate and glucose during daily activities, with the results showing promise for future applications in remote personalized medicine.
Electrochemical, Spectroscopic, and Morphological Characterization of Stencil-Printed Graphite Electrodes
Graphite electrodes were obtained by stencil-printing on a flexible support, namely polyethylene terephthalate (PET), as specified in the experimental section. For a preliminary characterization, both a stencil-printed graphite (SPG) electrode and a commercial screen-printed graphite electrode were analyzed by cyclic voltammetry (CV) in 5 × 10⁻³ M Fe(CN)₆³⁻/⁴⁻, as reported in Figure 2A. The SPG electrode shows a peak-to-peak separation of 0.237 V at 50 mV s⁻¹ (Figure 2A, red curve), which is smaller than that of the screen-printed graphite electrode (Figure 2A, black curve, 0.614 V). In addition, both electrodes were scanned at different scan rates (data not shown) to determine the electroactive area (A_EA), roughness factor, and electron transfer rate constant (k⁰, cm s⁻¹). The SPG electrode has an A_EA of 4.11 ± 0.13 cm², a roughness factor of 45.7 ± 1.4 (calculated by dividing the electroactive area by the geometric area), and an electron transfer rate constant of (9.7 ± 0.7) × 10⁻³ cm s⁻¹, whereas the screen-printed graphite electrode exhibited an A_EA of 0.89 ± 0.02 cm², a roughness factor of 7.1 ± 0.2, and an electron transfer rate constant of (2.6 ± 0.2) × 10⁻⁴ cm s⁻¹. The electroactive area was calculated using the Randles-Ševčík equation. [41] The electron transfer rate constants (k⁰, cm s⁻¹) were calculated using an extended method merging the Klingler-Kochi and Nicholson-Shain methods for totally irreversible and reversible systems. [42] In particular, the stencil-printed electrodes exhibited better electrochemical performance, both in terms of electroactive area and electron transfer rate, probably because of the absence of organic solvents (commonly present in commercially available inks, where they decrease the electrical conductivity), the enhanced graphite content, and the reduced curing time and lower curing temperature with respect to commercial screen-printed electrodes. [12] The SPG electrodes characterized by scanning electron microscopy (SEM) exhibit a rough surface (Figure 2B), confirming the high roughness factor of these electrodes. SPG electrodes were also analyzed by Raman spectroscopy, as shown in Figure 2C. The Raman spectrum shows a G band at 1572 cm⁻¹, attributed to the sp²-type bonding of carbon atoms, typical of carbon-based species. The D band is observed at 1348 cm⁻¹, associated with the breathing mode of sp²-carbon rings, and its second-order process, the 2D band, appears at 2695 cm⁻¹. A deeper analysis of the 2D band, shown in the inset, reveals that it consists of two sub-bands, a more intense one around 2708 cm⁻¹ and another at 2667 cm⁻¹, as expected for graphite. [43] The D′ band also appears, around 1618 cm⁻¹, together with its first overtone at 2336 cm⁻¹. The D and D′ bands are related to structural disorder, being originated by scattering from defects in intervalley and intravalley processes, respectively. [44] The integrated intensity ratio I_D/I_G of the D and G bands is widely used for characterizing the defect quantity in graphitic materials. In particular, the SPG electrodes showed a very low I_D/I_G ratio of 0.25; hence, the electronic properties of graphite are not affected by the ink formulation.
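For reference, the relations invoked above can be written out explicitly. This is the standard textbook Randles-Ševčík expression for a reversible couple at 25 °C, with the roughness factor defined as the ratio of electroactive to geometric area; it is a pointer for the reader, not the exact computation performed in the study.

```latex
% Randles-Sevcik equation (reversible system, 25 degrees C):
%   i_p : peak current (A), n : electrons transferred,
%   A_EA : electroactive area (cm^2), D : diffusion coefficient (cm^2 s^-1),
%   C : bulk concentration (mol cm^-3), v : scan rate (V s^-1)
i_p = \left(2.69 \times 10^{5}\right) n^{3/2} A_{\mathrm{EA}}\, D^{1/2} C\, v^{1/2},
\qquad
\text{roughness factor} = \frac{A_{\mathrm{EA}}}{A_{\mathrm{geom}}}
```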
[45] An active water-based ink was formulated enclosing chemically synthesized Prussian Blue (PB) nanoparticles to fabricate stencil-printed electrodes, hereafter named SPG-PB electrodes. PB, or ferric ferrocyanide, is a well-known coordination compound with ferric ions coordinated to nitrogen and ferrous ions coordinated to carbon in a face-centered cubic lattice. [46] Usually, PB is electrochemically deposited using a mixture of Fe³⁺ and [Fe(CN)₆]³⁻. [47] PB nanoparticles are reported to behave as nanozymes with peroxidase-like activity. [48,49] Figure 2D displays CVs for SPG-PB electrodes at pH 3 (black curve) and pH 7.2 (red curve). The Prussian Blue/Prussian White redox activity, with potassium as the counter cation, is observed in both CVs as a set of sharp peaks with separations of 80 mV at pH 3 and 168 mV at pH 7.2 at a 50 mV s⁻¹ scan rate. These peaks, in particular the cathodic one, resemble the peaks from anodic demetallization. [50] Such a set of sharp peaks in the CVs corresponds to the regular structure of PB, with a homogeneous distribution of charge and ion transfer rates throughout the film. To confirm the presence of PB nanoparticles, SPG-PB electrodes were analyzed by SEM, as shown in Figure 2E, displaying the rough graphite surface decorated with PB nanoparticles with diameters ranging from 20 to 100 nm (Figure 2E, inset). The PB nanoparticles are well dispersed in the ink and not agglomerated, as normally occurs with drop-casting deposition, which limits their availability for catalytic reactions in the superficial layer of the electrode.
The surface chemical composition of the electrodes was analyzed by XPS to investigate the presence of PB nanoparticles on the surface. From the wide-scan XP spectrum it was possible to observe the presence of Fe and N, whose high-resolution spectral regions are shown in Figure 2F,G. Moreover, to discriminate nitrogen coming from the PB nanoparticles from the nitrogen of chitosan, a curve-fitting procedure was applied to the N 1s XP spectra, and the -CN peak component at BE = 398.5 ± 0.1 eV was identified in the SPG-PB spectrum (Figure 2G).
First-Generation Lactate and Glucose Biosensors with Stencil-Printed Graphite Electrodes
The so-prepared SPG-PB electrodes were further modified with lactate oxidase (LOx) and glucose oxidase (GOx) to develop first-generation lactate and glucose biosensors, respectively. Figure 3A shows the CVs under non-turnover (black curve) and turnover conditions (addition of 10 × 10⁻³ M l-lactate, red curve). Under non-turnover conditions, the SPG-PB electrodes showed reversible peaks at E⁰′ = 0.067 V related to the PB nanoparticles enclosed in the water-based ink (Figure 3A, black curve). After substrate addition, a significant, mass-transfer-limited electrocatalytic wave starting at E_onset = 0.145 V, with a maximum current of −7.6 µA at E = −0.040 V (Figure 3A, red curve), was observed. Similarly, SPG-PB electrodes were modified with GOx, showing a CV with reversible peaks at E⁰′ = 0.067 V related to the PB active ink (Figure 3B, black curve). After substrate addition, an electrocatalytic wave starting at E_onset = 0.194 V, with a maximum current of −11.5 µA at E = −0.030 V (Figure 3B, red curve), was observed. In both cases the mass-transfer limitation is related to the nanostructured PB film, which enhances the catalytic efficiency toward H₂O₂ reduction, and to the high roughness/porosity of the SPG electrodes, which controls diffusion at the electrode surface.
Additionally, the calibration curve was fitted to determine the classical Michaelis-Menten kinetic parameters, which resulted in an I_max of 9.2 ± 0.4 µA and an apparent Michaelis-Menten constant (K_M^app) of (9.7 ± 1.2) × 10⁻³ M (≈20 times higher than the K_M measured in solution). [51] The latter could be related to the controlled diffusion of the enzymatic product (i.e., H₂O₂) through the rough electrode surface. This usually results in an extended linear range, but the dispersion of PB nanoparticles in the ink may have hindered their availability for superficial catalytic reactions, where the enzyme is physisorbed. The calibration curve for GOx/SPG-PB electrodes (spanning overall 1 × 10⁻⁶ to 1 × 10⁻² M), reported in Figure 3D, indicated a linear range from 0.2 × 10⁻³ to 1 × 10⁻³ M (Figure 3D, inset), a detection limit of (63 ± 1) × 10⁻⁶ M, a sensitivity of 3.2 µA mM⁻¹, and a correlation coefficient of 0.99 (RSD 4.8%, n = 10). The kinetic parameters were determined as an I_max of 9.3 ± 0.1 µA and a K_M^app of (2.2 ± 0.3) × 10⁻³ M (similar to the K_M measured in solution). [52,55-64] The analytical figures of merit for both biosensors are summarized in Table S1 (Supporting Information). The analytical performance of the proposed electrode platform could be ascribed to the low amount of enzyme effectively immobilized onto the electrode surface and the superficial unavailability of PB nanoparticles toward catalytic H₂O₂ reduction.
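For reference, the Michaelis-Menten fit used for these calibration curves takes the usual hyperbolic form relating the steady-state catalytic current to the substrate concentration; this is the standard expression, with symbols matching those in the text.

```latex
% Michaelis-Menten form of the amperometric calibration curve:
%   I       : steady-state catalytic current at substrate concentration [S]
%   I_max   : maximum (saturating) current
%   K_M^app : apparent Michaelis-Menten constant
I = \frac{I_{\max}\,[S]}{K_{M}^{\mathrm{app}} + [S]}
```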
Figure 4A shows the CVs in the absence (black curve) and presence of 10 × 10⁻³ M l-lactate (red curve). Under non-turnover conditions, the SPG-[Os(bpy)₂(Cl)(PVI)₁₀] electrodes showed a couple of quasi-reversible peaks at E⁰′ = 0.205 V related to the LOx-modified ORP electrode (Figure 4A, black curve). After substrate addition, a catalytic wave starting at E_onset = 0.050 V, with a maximum current of 23 µA at E = 0.350 V (Figure 4A, red curve), was observed. Similarly, SPG-[Os(bpy)₂(Cl)(PVI)₁₀] electrodes were modified with GOx, and the corresponding CV under non-turnover conditions displayed a couple of peaks at E⁰′ = 0.250 V related to the GOx-modified ORP electrode (Figure 4B, black curve). After substrate addition, the catalysis of d-glucose oxidation started at E_onset = −0.020 V, with a maximum current of 35 µA at E = 0.390 V (Figure 4B, red curve). In both cases, the mass-transfer limitation is related to the high roughness/porosity of the SPG electrodes, which allows diffusion at the electrode surface to be controlled. Afterward, LOx and GOx were also drop-cast onto [Os(dmbpy) The calibration curve for LOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀] electrodes (spanning overall 1 × 10⁻⁶ to 5 × 10⁻² M), reported in Figure 4C, indicated a linear range from 30 × 10⁻⁶ to 5 × 10⁻³ M (Figure 4C, inset), a detection limit of (9 ± 1) × 10⁻⁶ M, a sensitivity of 1.32 µA mM⁻¹, and a correlation coefficient of 0.97 (RSD 6.1%, n = 10). Additionally, the calibration curve was fitted to determine the classical Michaelis-Menten kinetic parameters, which resulted in an I_max of 14.8 ± 0.6 µA and an apparent Michaelis-Menten constant (K_M^app) of (7.8 ± 1.1) × 10⁻³ M (≈14 times higher than the K_M measured in solution). With both the ORP and LOx immobilized onto the surface of a rough graphite electrode, the K_M^app values could be affected by the controlled diffusion of the enzymatic substrate. The calibration curve for GOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀] electrodes, reported in Figure 4D (spanning overall 1 × 10⁻⁶ to 1 × 10⁻² M), indicated a linear range from 10 × 10⁻⁶ to 250 × 10⁻⁶ M (Figure 4D, inset), a detection limit of (3.0 ± 0.5) × 10⁻⁶ M, a sensitivity of 28.4 µA mM⁻¹, and a correlation coefficient of 0.98 (RSD 5.7%, n = 10). The kinetic parameters were determined as an I_max of 19.4 ± 0.8 µA and a K_M^app of (0.38 ± 0.06) × 10⁻³ M (similar to the K_M measured in solution), [52] probably related to the amount of ORP and GOx immobilized onto the electrode.
Besides the preliminary analytical characterization, the storage stability of the proposed platforms was tested by recording the amperometric response for 20 consecutive measurements every day over a period of 20 days. The stability measurements were performed by continuously supplying 0.2 × 10⁻³ M l-lactate (Figure 4E, red curve) and 0.2 × 10⁻³ M d-glucose (Figure 4E, black curve) through a FIA system. In particular, LOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀] showed a decrease of 21% of its initial signal after 20 days, probably because of the intrinsic stability of the enzyme and the porosity/roughness of the SPG electrodes, which stabilizes the enzymatic layer. GOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀] showed a similar storage-stability trend. Furthermore, the stability of both biosensors, namely LOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀] and GOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀], was tested by spiking artificial sweat with 10 × 10⁻³ M l-lactate (Figure 4F, red curve) and 10 × 10⁻³ M d-glucose (Figure 4F, black curve). Both electrode platforms showed a stable amperometric response after substrate addition over 15 h (measurements performed with a flow-injection system to continuously provide a flow of both substrates to the electrochemical cell). The selectivity of the proposed biosensing platforms was evaluated toward potential interferents. The signals recorded at fixed concentrations of l-lactate and d-glucose were compared with those obtained at the same concentrations of different interferents (considering their presence in human plasma), [65] both in buffer and in artificial sweat. The interferents tested were d-glucose, d-fructose, pyruvate, dopamine, ascorbic acid, and uric acid for l-lactate detection, and l-lactate, d-fructose, d-galactose, dopamine, ascorbic acid, and uric acid for d-glucose detection (Figure 4H). No significant current responses were recorded except for ascorbic acid, notably 12% of the l-lactate signal and 19% of the d-glucose signal. Ascorbic acid, unlike the other electrochemical interferents tested, exhibits a higher diffusion coefficient (D = 5.9 × 10⁻⁶ cm² s⁻¹; cf. dopamine, D = 8.2 × 10⁻⁷ cm² s⁻¹) and is easily oxidized at the electrode surface at the potential where the analytical measurements are performed. The analytical figures of merit for both biosensor platforms are summarized in Table S2 (Supporting Information). [55-64]
l-Lactate and d-Glucose Integrated in a Wrist Band
After the preliminary analytical characterization of both first- and second-generation electrodes, they were integrated within a wristband to perform continuous l-lactate and d-glucose monitoring in sweat. As shown in Figure 5A, both working electrodes, namely the LOx- and GOx-modified ones, are placed within the rubber wristband together with a printed silver pseudo-reference and a carbon-based counter electrode. The recess within the rubber wristband creates an electrochemical cell with a thickness of 2 mm, which enables sweat accumulation. The wristband was worn by a healthy male volunteer.
Figure 5B shows the amperometric recording for l-lactate detection (red curve) during the resting state, which corresponds to 0.2 × 10⁻³ M l-lactate according to the previously measured calibration curve. The amperometric signal increased to 12.9 µA after 30 min of fast-walking activity, in good agreement with other online continuous l-lactate measurements performed with previously reported wearable biosensors. Additionally, the amperometric recording for d-glucose (black curve) during the fasting state (0.06 × 10⁻³ M d-glucose in sweat) accurately reflected the normally reported blood physiological levels. During a glucose tolerance test (drinking 200 mL of a d-glucose solution containing 75 g), there was a sharp increase in the output response up to 8.2 µA, corresponding to 0.23 × 10⁻³ M according to the previously measured calibration curve. The latter is in good agreement with values reported for other wearable biosensors. Thereafter, the amperometric recording for d-glucose detection (black curve) decreased to the physiological level within 80 min, confirming the absence of type-2 diabetes in this individual.
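As a minimal illustration of how such amperometric readings map back to concentrations, the sketch below inverts a Michaelis-Menten-type calibration; the parameter values are placeholders standing in for the fitted I_max and K_M^app of a given electrode, not the exact calibration used in the study.

```python
# Sketch: convert a measured steady-state current (uA) into a substrate
# concentration (mM) by inverting I = I_max * c / (K_M_app + c).
# The parameter values below are illustrative placeholders.
def current_to_concentration(i_ua: float, i_max: float, k_m_app: float) -> float:
    if not 0.0 < i_ua < i_max:
        raise ValueError("current must lie strictly between 0 and I_max")
    return k_m_app * i_ua / (i_max - i_ua)

# Example with assumed fit parameters for a lactate electrode
print(current_to_concentration(i_ua=5.0, i_max=14.8, k_m_app=7.8))  # mM
```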
[68,69] The reported results account for different physiological and altered levels in a range of bodily fluids (e.g., interstitial fluid, blood, saliva, sweat, etc.). In this regard, the variability in the composition of peripheral bodily fluids, particularly of electrolytes, should also be considered, as it may affect the reproducibility of amperometric detection in sweat.
Conclusions
In this study, the first example of a second-generation wearable biosensor developed on water-based graphite ink electrodes for the simultaneous detection of l-lactate and d-glucose was demonstrated. The water-based graphite ink electrodes were characterized by cyclic voltammetry, Raman spectroscopy, XPS, and SEM. SPG-PB electrodes, containing Prussian Blue nanoparticles, were produced as first-generation biosensors, which exhibited poor analytical figures of merit with respect to developing wearable biosensors (considering the limited range for the detection of both analytes). Therefore, the water-based graphite ink, deposited onto a flexible PET sheet, was further modified with [Os(bpy)₂(Cl)(PVI)₁₀] as an osmium redox polymer (ORP) to shuttle the electrons from the redox centers of LOx and GOx to the SPG. This biosensor array exhibited high sensitivity (notably 1.32 µA mM⁻¹ for LOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀] and 28.4 µA mM⁻¹ for GOx/SPG-[Os(bpy)₂(Cl)(PVI)₁₀]), selectivity (tested in buffer and artificial sweat), and operational/storage stability (≈80% of the initial signal retained after 20 days). Finally, the proposed array was integrated in a wristband and successfully tested for the continuous monitoring of l-lactate and d-glucose.
The proposed system shows promising features for deployment as a flexible and wearable biosensor based on biocompatible water-based inks, which could be implemented in sports medicine and remote clinical care, possibly evolving toward edible biosensors for continuous metabolite monitoring.
Lactate oxidase (LOx) from Aerococcus viridans was obtained from Toyobo Enzymes. LOx (activity 300 U mL⁻¹) was dissolved in phosphate buffer at pH 7.4. GOx (activity 300 U mL⁻¹) was likewise dissolved in phosphate buffer at pH 7.4.
Artificial human sweat (perspiration solution) was provided by LCTech (Obertaufkirchen, Germany) and used without further pretreatment.
Water-Based Conductive Ink Formulation, Electrode Preparation, and Modification: The water-based ink was formulated from graphite, chitosan and glycerol as conductive material, binder, and stabilizer, respectively. A 2.5% w/v chitosan solution was prepared by dissolving chitosan in 1 M acetic acid and leaving it under stirring at room temperature overnight. Afterward, the chitosan solution was diluted to 1% w/v with distilled water (final acetic acid concentration 0.4 M). The conductive ink was formulated by mixing 5 g of graphite powder with 10 mL of the previously prepared chitosan solution and 500 µL of glycerol.
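A quick sanity check of the formulation arithmetic described above; the graphite loading per millilitre of liquid phase is a derived convenience figure, not a number quoted by the authors:

```python
# Dilution check: chitosan 2.5% w/v in 1 M acetic acid, diluted to 1% w/v.
chitosan_stock_pct = 2.5
chitosan_final_pct = 1.0
acetic_stock_M = 1.0

dilution = chitosan_stock_pct / chitosan_final_pct   # 2.5-fold
acetic_final_M = acetic_stock_M / dilution           # 0.4 M, as stated

# Ink composition: 5 g graphite + 10 mL chitosan solution + 500 uL glycerol.
graphite_g, solution_mL, glycerol_mL = 5.0, 10.0, 0.5
loading_g_per_mL = graphite_g / (solution_mL + glycerol_mL)

print(f"final acetic acid: {acetic_final_M:.1f} M")
print(f"graphite loading:  {loading_g_per_mL:.2f} g per mL of liquid phase")
```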
The SPG electrodes were then prepared using PET sheets cleaned three times with IPA and distilled water and sanded with fine emery paper (1500 grit) to increase ink adhesion. A stencil was prepared on a Smart Vinyl adhesive sheet, precisely carved using a Cricut Explore 3 equipped with Design Space software v.7.3.95. After applying the stencil to the PET sheet, 500 µL of ink were placed onto the PET sheet and spread with a scraper. The prepared electrode was left to dry at ambient conditions for 10 min and cured in an oven for 1 min at 100 °C. The stencil was then peeled off and the connecting track between the working electrode and the contact pad was insulated with nail polish. The printed silver pseudo-reference electrode used for the wristband test was realized with LOCTITE ECI 1010 E&C silver ink and cured according to the manufacturer's instructions.
First-Generation LOx and GOx Electrode Preparation: PB nanoparticles were synthesized according to previously reported methods. [71] In particular, 2 × 10⁻³ M K₄Fe(CN)₆ was dissolved in a 10 × 10⁻³ M HCl + 0.1 M KCl solution. Then, 2 × 10⁻³ M FeCl₃ was added to the solution under vigorous stirring. A blue solution gradually formed, and the mixture was left to react overnight. To prepare the SPG-PB active ink, the nanoparticle solution was used to dilute the chitosan to 1% w/v and mixed with graphite powder and glycerol as previously described. SPG-PB electrodes were prepared using the aforementioned stencil-printing method. Finally, 5 µL of LOx and GOx were drop-cast onto the electrode surfaces and left to dry in ambient conditions to obtain LOx/SPG-PB and GOx/SPG-PB, respectively. The electrodes were further conditioned overnight in 10 × 10⁻³ M HEPES buffer, pH 7.

Equipment and Measurements: Cyclic voltammetry and amperometry experiments were performed using a PalmSens4 electrochemical workstation equipped with PSTrace 5.6v software. All potentials were measured against a BASi Ag|AgCl|KCl 3 M reference electrode (all potential values reported in the paper refer to this reference), with a platinum wire as counter electrode. Stencil-printed graphite (SPG) electrodes (geometric area = 9 mm², 3 mm × 3 mm square) were used as working electrodes. DRP-C110 screen-printed electrodes were used as working electrodes only for benchmarking purposes.
The morphological characterization was performed with a field-emission scanning electron microscope (FE-SEM), model Sigma (Zeiss, Jena, Germany). The images were acquired with the in-lens detector, 5 kV acceleration voltage, 4 mm working distance and 30 µm aperture, in top view, without any further sample treatment.
Micro-probe Raman back-scattering spectra were measured at 532 nm laser excitation wavelength using an NT-MDT NTEGRA system. A 50× microscope objective was used to focus the incident laser beam to a spot with a diameter of ≈1 µm.
X-ray photoelectron spectroscopy analyses were carried out with a Versa Probe II Scanning XPS spectrometer (Physical Electronics GmbH) equipped with an Al Kα source, spot size 200 µm. Wide-scan and high-resolution spectra were obtained in CAE mode with pass energies of 117.40 and 29.35 eV, respectively, and with a source power of 49.2 W. Charge compensation was performed with an electron gun operating at 1.0 V and 20.0 µA. The data were analyzed with MultiPak v. 9.9.0.8 software.
Volunteer for Experiments in Sweat: A 30-year-old, apparently healthy, male volunteer participated in all measurements and procedures described herein, which involved only minimal health risks, and written informed consent was obtained. Personal data were treated in accordance with the GDPR and with law 675/1996, based on Directive 95/46/EC, which aims to prevent the violation of personal integrity in the processing of personal data. No harm could arise from the conduct of the research project, and the information collected is of a kind available in the public domain.

PRIN project prot. 2017RHX2E4 "At the forefront of Analytical ChemisTry: disrUptive detection technoLogies to improve food safety-ACTUaL"; IDF SHARID (ARS01_01270); Åbo Akademi University CoE "Bioelectronic activation of cell functions"; the University of Galway College of Science and Engineering Scholarship; and CSGI are acknowledged for partial financial support.
Figure 1. Schematic representation of the water-based conductive ink formulation and the integration of stencil-printed electrodes within a wristband with second-generation lactate and glucose biosensors.
…[Os(bpy)₂(Cl)(PVI)₁₀]-modified SPG electrodes. CVs for SPG-[Os(dmbpy)₂(Cl)(PVI)₁₀] in the absence and presence of substrates are reported in Figure S1A,B (Supporting Information), displaying lower catalytic waves than the SPG-[Os(bpy)₂(Cl)(PVI)₁₀]-modified electrodes, with maximum currents of 17.5 ± 0.3 µA at E = 0.26 V and 28 ± 1.2 µA at E = 0.25 V. This smaller catalytic current may be due to a smaller difference between the formal potential of the [Os(dmbpy)₂(Cl)(PVI)₁₀] ORP and the redox potential of the prosthetic group within the enzymes, compared to the [Os(bpy)₂(Cl)(PVI)₁₀] ORP. The influence of O₂ on the electrocatalytic behavior of the GOx-modified SPG-[Os(bpy)₂(Cl)(PVI)₁₀] electrode was tested after purging the solution with N₂. CVs in the absence (Figure S2, Supporting Information, black curve) and presence of 10 × 10⁻³ M d-glucose (Figure S2, Supporting Information, red curve) showed a catalytic wave similar to those recorded in the presence of O₂, with a maximum current of 38 ± 2.1 µA at E = 0.390 V. Both modified electrodes were tested by amperometry with increasing substrate concentration in the ranges (0-50) × 10⁻³ M for l-lactate (Figure 4C, inset) and (0-10) × 10⁻³ M for d-glucose (Figure 4D, inset), respectively.
Figure 5. A) LOx/GOx biosensing array integrated in the wristband with two working electrodes, a pseudo-reference and a graphite counter electrode; B) amperometric measurements as continuous monitoring of l-lactate and d-glucose after simulating daily activities such as glucose intake and lactate production during sport activity.
Table 1. Comparison with other biosensor arrays for lactate and glucose detection reported in the literature.
"Engineering"
] |
Antiviral Activity of the Human Cathelicidin, LL-37, and Derived Peptides on Seasonal and Pandemic Influenza A Viruses
Human LL-37, a cationic antimicrobial peptide, was recently shown to have antiviral activity against influenza A virus (IAV) strains in vitro and in vivo. In this study we compared the anti-influenza activity of LL-37 with that of several fragments derived from LL-37. We first tested the peptides against a seasonal H3N2 strain and the mouse-adapted H1N1 strain, PR-8. The N-terminal fragment, LL-23, had slight neutralizing activity against these strains. In LL-23V9, serine 9 is substituted by valine, creating a continuous hydrophobic surface. LL-23V9 has been shown to have increased anti-bacterial activity compared to LL-23, and we now show slightly increased antiviral activity as well. The short central fragments FK-13 and KR-12, which have anti-bacterial activity, did not inhibit IAV. In contrast, a longer 20-amino-acid central fragment of LL-37 (GI-20) had neutralizing activity similar to LL-37. None of the peptides inhibited viral hemagglutination or neuraminidase activity. We next tested the activity of the peptides against a strain of pandemic H1N1 of 2009 (A/California/04/09/H1N1 or "Cal09"). Unexpectedly, LL-37 had markedly reduced activity against Cal09 in several cell types and assays of antiviral activity. A mutant viral strain containing just the hemagglutinin (HA) of 2009 pandemic H1N1 was inhibited by LL-37, suggesting that genes other than the HA are involved in the resistance of pH1N1. In contrast, GI-20 did inhibit Cal09. In conclusion, the central helix of LL-37 incorporated in GI-20 appears to be required for optimal antiviral activity. The finding that GI-20 inhibits Cal09 suggests that it may be possible to engineer derivatives of LL-37 with improved antiviral properties.
Introduction
Like the defensins, the cathelicidins are a large family of cationic antimicrobial peptides expressed in many species with broad-spectrum antimicrobial activity. However, hCAP18/LL-37 is the only known human cathelicidin [1]. hCAP18 is an 18 kDa precursor protein comprising a signal peptide, a cathelin-like domain and an antimicrobial domain. LL-37 is a 37-amino-acid cationic peptide produced by cleavage of the antimicrobial domain from the hCAP18 protein. LL-37 is implicated in host defense against a variety of infections [1-4]. It is produced by neutrophils, macrophages and various epithelial cells. LL-37 concentrations range from 2-5 μg/ml (0.4-1 μM) in bronchoalveolar lavage fluid from healthy individuals and can increase up to 20 μg/ml (2.2 μM) during infections. In nasal secretions its concentration can vary from 1.2-80 μg/ml [5,6]. There is mounting evidence that LL-37 plays a role in host defense against influenza A virus (IAV) through antiviral and immune-modulatory activities. LL-37 improves the outcome of IAV infection in mice through inhibition of viral replication and reduction of virus-induced pro-inflammatory cytokine generation [4]. Upregulation of LL-37 expression by stimulation with leukotriene B4 correlated with improved outcome of IAV infection in mice [7]. We have partially characterized the mechanism of the anti-IAV activity of LL-37 [8]. LL-37 does not block hemagglutination activity, cause viral aggregation, or reduce viral uptake by epithelial cells; rather, it inhibits viral replication at a post-entry step prior to viral RNA or protein synthesis in the cell [8]. Likely sources of LL-37 in the IAV-infected respiratory tract include infiltrating neutrophils [9], macrophages [10] and respiratory epithelial cells [11].
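As a quick check of the unit conversions quoted above, µg/ml and µM are related through the peptide's molecular weight; a minimal sketch assuming ≈4493 Da for LL-37 (a literature value, not stated in this text), which approximately reproduces the 0.4-1 µM quoted for 2-5 µg/ml:

```python
# ug/mL equals mg/L; dividing by MW in g/mol (= mg/mmol) gives mmol/L,
# i.e. mM, so multiplying by 1000 yields uM.
MW_LL37 = 4493.0  # g/mol, assumed literature value for LL-37

def ug_per_mL_to_uM(c_ug_mL, mw=MW_LL37):
    return c_ug_mL * 1000.0 / mw

for c in (2, 5):
    print(f"{c} ug/mL -> {ug_per_mL_to_uM(c):.2f} uM")
# 2 ug/mL -> 0.45 uM, 5 ug/mL -> 1.11 uM
```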
LL-37 is an amphipathic peptide with a predominantly hydrophobic surface and a cationic surface. In addition to LL-37, several active fragments of smaller size are produced in vivo, including LL-23, which contains the 23 N-terminal amino acids of LL-37 [12]. Intensive studies have been undertaken to determine the functional roles of the different domains of LL-37, with the goal of developing peptides with increased anti-microbial or immune-modulatory activity. Wang et al. recently showed that LL-23 has limited antibacterial activity and noted that it has a single hydrophilic (serine) interruption in its hydrophobic surface (Fig 1). Replacement of this serine with valine (LL-23V9) significantly improved anti-bacterial activity [13]. The smallest fragment of LL-37 that retains antibacterial activity is KR-12 [14]. This peptide retains the core amphipathic helix structure of LL-37 and carries 5 cationic residues. The slightly larger peptide, FK-13, is the smallest peptide having HIV-neutralizing activity [15]. A larger peptide, GI-20, has strong anti-HIV activity comparable to full-length LL-37 [15].
Our first goal in this paper was to compare the antiviral activity of LL-37 and of natural or modified fragments derived from LL-37 against seasonal or mouse-adapted IAV strains. Recent studies have shown that some innate inhibitors of seasonal IAV strains fail to inhibit pandemic IAV. These include the collectins, surfactant protein D and mannose-binding lectin, and pentraxin [16,17]. The effects of LL-37 on pandemic IAV have not previously been studied. Hence, our second goal was to determine the activity of LL-37 and derived fragments (Fig 1) against pandemic IAV.
Virus Preparations
The A/Philippines/2/82/H3N2 (Phil82) strain was kindly provided by Dr. E. Margot Anders (Univ. of Melbourne, Melbourne, Australia). The A/PR/8/34/H1N1 (PR-8) strain was graciously provided by Jon Abramson (Wake Forest University, Winston-Salem, North Carolina). These IAV strains were grown in the chorioallantoic fluid of ten-day-old chicken eggs and purified on a discontinuous sucrose gradient as previously described [18]. The virus was dialyzed against phosphate-buffered saline (PBS) to remove sucrose, aliquoted and stored at -80°C until needed. The A/California/04/09/H1N1 pandemic strain (Cal09) and the A/New York/312/01/H1N1 (NY01) seasonal strain were prepared by reverse genetics as described [8,17]. These preparations contain the intact genome of the original strains. Two additional strains developed by reverse genetics include a strain containing only the hemagglutinin (HA) gene of the pandemic A/Mexico/4108/09/H1N1 combined with the other seven genes of NY01 (Mex 1:7) and a strain containing the HA and neuraminidase (NA) of A/Mexico/09/H1N1 (Mex 2:6). All of the reverse-genetics-derived strains were grown in MDCK (Madin-Darby canine kidney) cells, and the culture supernatants were dialyzed to remove any residual trypsin.
LL-37-derived peptides and HNP preparations
LL-37, FK-13 and KR-12 fragments were purchased from Phoenix Pharmaceuticals, Burlingame, CA. The scrambled LL-37 preparation was purchased from Abgent Inc. LL-23 was purchased from Genemed Synthesis (Texas, USA). LL-23V9 and GI-20, the central fragment of LL-37, are described in Fig 1.

Fig 1. LL-37 and derived peptides employed in this study. Panel A shows peptide regions corresponding to the parent LL-37, as indicated with pairs of arrows and residue numbers. Note that GI-20 corresponds to residues 13-32, with the positions of I13 and G14 swapped (9). In addition, the C-terminus of GI-20, as well as of FK-13 and KR-12, is amidated. These LL-37 fragments are named in the same manner as LL-37, by taking the first two amino acids in single-letter code followed by the peptide length. Panel B gives biophysical properties of the peptides obtained from, or calculated using, the Antimicrobial Peptide Database (http://aps.unmc.edu/AP). Panel C shows three-dimensional structures of intact LL-37 and its derived fragments, with hydrophobic surfaces represented as space-filled models in white. It is evident that the hydrophobic surfaces of both LL-37 and LL-23 are discontinuous. A mutation of Ser9 to Val9 made LL-23V9 more active against both bacteria [13] and viruses (this study). However, GI-20, corresponding to the central helix of LL-37, had greater activity against IAV than LL-23V9. These structures were determined by NMR spectroscopy in the presence of membrane-mimetic micelles [28,29]. The structures of LL-37 and KR-12 are reported in ref. [14], LL-23 and LL-23V9 in ref. [13], GI-20 in ref. [29] and FK-13 in ref. [30].
Fluorescent focus assay of IAV infectivity
This assay was carried out as previously described [20,21]. It has been used extensively in the literature to study the antiviral activity of innate inhibitors [16,20,22-24], where it has been shown to predict the activity of inhibitors in vivo and in plaque assays [20,25]. MDCK cell monolayers (American Type Culture Collection, Manassas, VA) were prepared in 96-well plates and grown to confluency. In some assays, normal human bronchial/tracheal epithelial cells (HBTE) or normal human small airway epithelial cells (SAE) were used. These cells were purchased from Lifeline Cell Technology (Frederick, MD). All cell lines were propagated in the undifferentiated state in standard tissue culture flasks according to the manufacturer's instructions. The cell layers were then infected for 45 min at 37°C with IAV preparations diluted in PBS supplemented with Ca²⁺ and Mg²⁺ (Corning Cellgro, Manassas, VA). Before adding to the cell layers, IAV was pre-incubated for 30 min at 37°C with various concentrations of LL-37 peptides, defensins or control buffer. The multiplicity of infection (MOI) for the fluorescent focus assay and real-time PCR (see below) experiments was approximately 0.1; MOI was calculated based on the number of cells at confluence. After 45 min, the plate was washed, followed by 24 hrs incubation at 37°C in tissue culture media (the medium was specific for the cell line used in the experiment, as per the manufacturer's instructions). After 24 hrs, MDCK cells were washed with PBS and fixed with chilled 80% acetone for 10 min. IAV-infected cells were detected using a primary mouse monoclonal antibody (1:100 dilution) directed against the influenza A viral nucleoprotein (EMD Millipore, MA) as previously described [26]. A rhodamine-labeled secondary antibody (1:1000; EMD Millipore, MA) was used to detect the primary antibody. Fluorescent positive cells were counted visually on a fluorescence microscope (Nikon MVI, Avon, MA, US), and the number of fluorescent foci per ml (FFC/ml) of inoculum was calculated from this. The raw numbers of positive cells counted per well are given in S1 Data and were comparable for the different viral strains. However, the primary human respiratory epithelial cells were generally less readily infected than MDCK cells. We expressed the data as mean±SEM % of control to make relative comparisons between the peptides. Where differences were statistically significant, this was also true when tested using raw numbers of positive cells (data not shown).
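A minimal sketch of the readout arithmetic described above (foci counts converted to FFC/ml and to % of control); counts and volumes are illustrative placeholders, not data from the paper:

```python
# Convert foci counted per well to foci per mL of inoculum, and express
# peptide-treated wells as a percentage of the buffer-treated control.
def ffc_per_mL(foci_count, inoculum_volume_mL, dilution_factor=1.0):
    return foci_count * dilution_factor / inoculum_volume_mL

def percent_of_control(sample_counts, control_counts):
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * mean(sample_counts) / mean(control_counts)

control = [210, 195, 220]   # foci per well, virus + buffer (placeholder)
treated = [60, 72, 55]      # foci per well, virus + peptide (placeholder)
print(f"{percent_of_control(treated, control):.0f}% of control")
```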
Lactate dehydrogenase (LDH) assay
The LDH assay was performed using an LDH cytotoxicity detection kit (Clontech, CA, USA) according to the manufacturer's instructions. In brief, MDCK cells were incubated with Phil82 IAV (with or without LL-37 and related peptides). Cells were also incubated with peptides alone. The incubation time (24 hrs) and assay conditions were exactly the same as in the fluorescent focus assays. The LDH activity was measured after the 24 hrs incubation of cells with IAV and/or peptides. Controls included uninfected/untreated cells as negative control (NC) and cells treated with lysis solution as positive control (PC). The percent cytotoxicity was obtained from the OD values as 100 × (OD sample − OD NC)/(OD PC − OD NC).

Hemagglutination (HA) inhibition assay

HA inhibition was measured by serially diluting LL-37 peptides in round-bottom 96-well plates (Serocluster U-Vinyl plates; Costar, Cambridge, MA) using PBS as a diluent and human type O red cells as described [27]. 40 HA units of virus were used in the assay.
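Returning to the LDH assay above: the explicit formula is garbled in the source, so the version used here is reconstructed from the kit's standard NC/PC calculation; OD values in the usage line are illustrative, not from the paper:

```python
# Percent cytotoxicity from LDH assay OD readings:
# NC = untreated cells (background), PC = fully lysed cells (maximum).
def percent_cytotoxicity(od_sample, od_nc, od_pc):
    return 100.0 * (od_sample - od_nc) / (od_pc - od_nc)

print(percent_cytotoxicity(od_sample=0.31, od_nc=0.25, od_pc=1.80))  # ~3.9%
```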
Plaque assay
The plaque assay was performed as previously described. In brief, IAV (3 × 10⁹ pfu/ml) was preincubated for 30 min at 37°C with various concentrations of LL-37 peptides or control buffer, followed by addition of these viral samples to 100% confluent MDCK cells (6-well plates). The IAV samples were prepared in PBS supplemented with Ca²⁺ and Mg²⁺. Cells were infected with the IAV samples for 1 hr at 37°C. After infection, cells were washed, followed by addition of 2 ml of agar overlay per well. The composition of the agar overlay was as follows: 2X EMEM (Lonza Inc, USA) with 1% penicillin-streptomycin, 4 mM L-glutamine (Hyclone), 1% sterile low-melting agarose (Fisher Scientific) and 2 µg/ml TPCK trypsin. For some experiments IAV was not preincubated with peptides before adding to cells; instead, after 1 hr incubation with IAV, cells were washed and then incubated with peptides for another 1 hr at 37°C, followed by washing and addition of the agar overlay. After adding the agar overlay, cells were incubated for 4 days, then fixed with 4% paraformaldehyde (1 hr at room temperature), stained with 0.1% crystal violet, and plaques were counted visually.
Neuraminidase assay
The neuraminidase (NA) assay was performed using a 2'-(4-methylumbelliferyl)-alpha-D-N-acetylneuraminic acid (MUNANA)-based influenza neuraminidase kit (Life Technologies, USA) as per the manufacturer's instructions. The assay is based on quantitation of the fluorogenic end product 4-methylumbelliferone released from non-fluorogenic MUNANA by neuraminidase. Various doses of peptides were incubated with IAV for 30 min at 37°C, followed by addition of the MUNANA substrate (provided with the kit). Samples were incubated for another 1 hr at 37°C. The reaction was then stopped and read using a POLARstar OPTIMA fluorescent plate reader (BMG Labtech, Durham, NC). The NA activity was expressed as % of control. Raw fluorescence data are provided in S1 Data.
Measurement of viral RNA
RNA for the viral M protein was measured using real-time PCR (qPCR) as previously described [8]. MDCK cells were infected with IAV strains that had been incubated for 30 min at 37°C with or without various doses of GI-20 or LL-37. RNA extraction was done at 45 min and 24 hrs post infection using the MagMAX viral RNA isolation kit (Applied Biosystems, Carlsbad, California) as per the manufacturer's instructions. Both lysed cells and cell supernatant were used for extraction. Viral RNA was also extracted from different concentrations of virus with known FFC/ml, which were used as a standard series. RNA was reverse transcribed using TaqMan reverse transcription reagents (Applied Biosystems, Carlsbad, California). The reaction mix and cycle conditions were as per the manufacturer's instructions. For real-time PCR, primers specific for the IAV M protein (forward: AGA CCA ATC CTG TCA CCT CTGA; reverse: CTG CAG TCC TCG CTC ACT) were used. The primers and TaqMan-labelled probes with non-fluorescent minor groove binder (MGB) moieties were designed manually using Primer Express software version 3.0 (Applied Biosystems, Carlsbad, California) and were also synthesized by Applied Biosystems.
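Quantification against the standard series described above amounts to fitting Ct versus log10 of the known titres and inverting the fitted line; a sketch with illustrative Ct values (the paper's actual standards are not given):

```python
# Fit a standard curve Ct = slope*log10(titre) + intercept by least
# squares, then interpolate unknown samples from their Ct values.
import math

standards = [(1e5, 18.1), (1e4, 21.5), (1e3, 24.9), (1e2, 28.2)]  # (FFC/mL, Ct)
xs = [math.log10(c) for c, _ in standards]
ys = [ct for _, ct in standards]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def titre_from_ct(ct):
    return 10 ** ((ct - intercept) / slope)

print(f"slope = {slope:.2f} (ideal ~ -3.32 for 100% PCR efficiency)")
print(f"Ct 23.0 -> {titre_from_ct(23.0):.2e} FFC/mL")
```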
Statistics
Statistical comparisons were made using Student's unpaired, two-tailed t test or ANOVA with a post hoc test (Tukey's). ANOVA was used for multiple comparisons to a single control. P values less than or equal to 0.05 were considered significant.
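A sketch of how these comparisons might be run in Python with SciPy; the group values are placeholders, and scipy.stats.tukey_hsd requires a recent SciPy release:

```python
from scipy import stats

control = [100, 96, 104, 99]   # % of control, placeholder values
treated = [61, 70, 66, 58]
group_b = [80, 85, 78, 82]

# Two-tailed unpaired t test for a single pairwise comparison
t, p = stats.ttest_ind(control, treated)
print(f"t test: p = {p:.4f}")

# One-way ANOVA across several groups, followed by Tukey's HSD post hoc
# test to identify which pairs differ.
f, p_anova = stats.f_oneway(control, treated, group_b)
print(f"ANOVA: p = {p_anova:.4f}")
print(stats.tukey_hsd(control, treated, group_b))
```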
Results
Antiviral activity of LL-37 and derived fragments against seasonal H3N2 IAV and mouse-adapted PR-8 H1N1 IAV

Fig 1 depicts the different LL-37-derived peptides used in this study. Although LL-37 had clear dose-related antiviral activity against the seasonal Phil82 H3N2 strain of IAV, as reported, the FK-13 and KR-12 fragments of LL-37 were without neutralizing activity (Fig 2A). The LL-23 fragment had slight antiviral activity against Phil82. The LL-23V9 peptide had significantly increased activity compared to LL-23; however, GI-20, the central fragment of LL-37, had increased activity against Phil82 as compared to either LL-23 or LL-23V9. The activity of GI-20 approached or equaled that of full-length LL-37 in these assays. We performed LDH assays to determine whether the peptides had any effect on the viability of the MDCK cells under the same conditions as the neutralization assay (Table 1). No significant increase in cytotoxicity was observed.
To determine whether the antiviral activities observed also occur in primary respiratory epithelial cells, we compared LL-37, LL-23, LL-23V9 and GI-20 using HBTE and SAE cells. Of note, the number of infected cells in the HBTE or SAE cultures was consistently lower in these and subsequent experiments, despite use of the same starting virus concentrations. In any case, similar relative antiviral activities for the three peptides were found in these cells (Fig 2, panels B and C).
We also tested LL-37, LL-23, LL-23V9 and GI-20 for the ability to inhibit the hemagglutination activity of Phil82 IAV. No inhibition was observed at concentrations up to 12 μM for any of these peptides in three experiments. This is consistent with our prior findings with other anti-microbial peptides (e.g. human neutrophil defensins) [24,31].
As shown in Fig 2, panels D-F, LL-23, LL-23V9 and GI-20 had similar relative activities against PR-8 as against Phil82. Once again, LL-23 had only slight neutralizing activity in MDCK, HBTE or SAE cells, LL-23V9 had somewhat greater activity, and GI-20 had the greatest activity among the fragments of LL-37. We also performed plaque assays to confirm that GI-20 and LL-37 had comparable antiviral activity. The plaque assay differs from the infectious focus assay mainly in allowing for repeated rounds of viral replication. As shown in Fig 3A, LL-37 and GI-20 had very similar inhibitory activity against the Phil82 IAV strain in this assay. Using this assay we also tested the effect of adding the peptides after initial infection of the cells with virus; the inhibitory activity was markedly reduced under these conditions. In addition, we tested the ability of the peptides to inhibit the neuraminidase (NA) activity of Phil82 using the MUNANA fluorescence assay. As shown in Fig 3B, neither LL-37 nor the related peptides inhibited NA activity.
Effects of LL-37 on replication of seasonal or pandemic H1N1 IAV in epithelial cells
We used the Cal09 H1N1 strain from the 2009 pandemic to test the activity of LL-37. We expected that LL-37 would inhibit this strain, since it had comparable activity against all the viral strains tested thus far. Surprisingly, we observed only slight inhibition at intermediate doses of LL-37 and actual enhancement of the replication of this strain at higher doses in MDCK cells (Fig 4A). A scrambled LL-37 (sLL-37) control had no effect on replication of Cal09. Since this strain was derived by reverse genetics and propagated in MDCK cells rather than eggs, we compared the effects of LL-37 on a seasonal IAV strain (NY01) developed and propagated in the same manner as Cal09. LL-37 caused clear dose-related inhibition of the NY01 strain (Fig 4A). Again, sLL-37 had no activity vs NY01.
To evaluate this effect further, we tested the activity of LL-37 against two recombinant strains, one containing the hemagglutinin (HA) and neuraminidase (NA) of the pandemic Mex09 strain (Mex 2:6) and one containing only the pandemic HA (Mex 1:7). In HBTE cells LL-37 did not inhibit the infectivity of Cal09, while again inhibiting NY01 in parallel. In these cells LL-37 did not paradoxically increase the infectivity of Cal09 (Fig 4C). LL-37 caused slight (but statistically significant, p<0.05) inhibition of Cal09 in SAE cells (Fig 4D). However, NY01 was significantly more inhibited than Cal09 in SAE cells.
Since the experiments with the Mex09-derived strains suggested a role for the pandemic NA in the resistance of pandemic H1N1 to LL-37, we also tested the ability of LL-37, sLL-37 and related peptides to inhibit the NA activity of Cal09, as shown in Fig 5. As with the seasonal Phil82 strain, the peptides did not inhibit the NA activity of the pandemic strains.
Effects of LL-37 on viral uptake by MDCK cells or viral RNA synthesis in these cells as assessed by qPCR
We next used qPCR to confirm the findings of the infectious focus assay through an independent assay. This was done by pre-incubating the virus with different concentrations of LL-37 and then incubating the viral samples with the cells for 45 min at 37°C. We first tested the effects of LL-37 on viral uptake by the MDCK cells. This was done by harvesting cells and supernatants separately after the 45 min incubation and assaying both for quantities of viral M protein RNA. As shown in Fig 6A, there was no significant difference in viral uptake of Cal09 at the different doses of LL-37 (see cell lysate results). There was an apparent increase in Cal09 RNA in the supernatant of the LL-37-treated cells, but this was highly variable and not statistically significant. There was a significant reduction in uptake of NY01 into cells in these experiments. Cells and supernatant were next assayed for the presence of viral RNA at 24 hrs after infection. In this case there was a significant reduction in RNA of NY01 in both cells and supernatant (Fig 6B). There was a trend toward an increase of Cal09 in the cells in the presence of LL-37, but this was again highly variable and not statistically significant. These results confirm the reduced ability of LL-37 to inhibit the pandemic strain in these cells.
Effects of CRAMP and HNP-1 on replication of pandemic H1N1
To determine whether the lack of antiviral activity of LL-37 against pandemic H1N1 applied to other cationic antimicrobial peptides, we tested the murine cathelicidin, CRAMP, and the human defensin HNP-1 against Cal09, Mex 2:6, Mex 1:7 and NY01. As shown in Fig 7A, CRAMP behaved like LL-37 in that it did not significantly inhibit Cal09 or Mex 2:6 and inhibited (albeit modestly) Mex 1:7 and NY01. Note that the inhibitory activity of CRAMP against NY01 was less pronounced than that of LL-37, which is similar to our prior results using other seasonal viruses [8]. HNP-1 did inhibit Cal09 (Fig 7B), although the activity was attenuated compared to its activity against NY01.
Effects of LL-37-related peptides on replication of pandemic H1N1
Using the infectious focus assay again (as in Fig 2), we found very similar results with NY01 as we obtained with Phil82 and PR-8: LL-23 had limited or no (in the case of SAE cells) antiviral activity, the activity of LL-23V9 was somewhat greater, and the GI-20 fragment had the strongest activity (Fig 8A-8C). Results obtained with Cal09 were somewhat cell-type dependent. All three peptides had some activity against Cal09 in MDCK cells, with the central fragment having the most notable activity (Fig 8D). In HBTE and SAE cells GI-20 retained fairly strong activity against Cal09, but LL-23 and LL-23V9 were without activity (Fig 8E and 8F). LL-23 actually increased viral replication of Cal09 in SAE cells to a limited extent.
Discussion
Activity of LL-37 or derived peptides against seasonal strains of IAV and the PR-8 mouse-adapted strain

Table 2 shows the comparative neutralizing activity of various antimicrobial peptides for different IAV strains (incorporating results from this and prior papers). The results are given in both μg/ml and μM amounts for comparison. Several fragments of LL-37 that have been reported to have some activity against bacteria had limited or no activity against IAV, including FK-13, KR-12 and LL-23. The lack of activity of FK-13 is notable, since this peptide has inhibitory activity against HIV-1 [15], and both FK-13 and KR-12 have activity against bacteria [14]. Of interest, LL-23V9 had consistently increased activity compared with LL-23 against several IAV strains and in different cell types. This is consistent with the previous finding that this modified peptide has increased activity against bacteria [13]. It also suggests that having a continuous hydrophobic surface on the peptide is important for antiviral as well as antibacterial activity (Fig 1). Nonetheless, the LL-23V9 peptide did not have as much activity as full-length LL-37. In contrast, GI-20, the fragment corresponding to the central helix of LL-37 (Fig 1), had strong activity approaching that of full-length LL-37 (Table 2). Overall, our results indicate that the central helix of LL-37 is required for optimal anti-IAV activity, since the shortened peptides FK-13 and KR-12 are poorly active (Fig 2).

Failure of LL-37 to inhibit pandemic H1N1

Another important finding of this paper is that LL-37 has minimal or no inhibitory activity against the Cal09 pandemic strain in MDCK, HBTE or SAE cells, while in each case there was clearly greater inhibition of the seasonal NY01 strain prepared in a similar manner. We used qPCR to show that LL-37 did not alter uptake of Cal09 by MDCK cells and to confirm the lack of inhibitory activity of LL-37 for this strain. A viral strain having only the HA and NA of pandemic H1N1 of 2009 (Mex 2:6) in combination with the six other viral gene segments of seasonal H1N1 (NY01) was partially inhibited at a lower concentration of LL-37 in MDCK cells, but this inhibition was again lost at higher concentrations of LL-37. However, a strain containing only the pandemic HA (Mex 1:7) was inhibited by LL-37. These results suggest that the effects of LL-37 are not determined by interaction with the viral HA. This is consistent with the finding that LL-37 does not inhibit viral hemagglutination activity [8]. The results also suggest that the pandemic NA may be important in mediating effects of LL-37. Of note, however, we also show that LL-37 does not inhibit the NA activity of seasonal or pandemic IAV strains. Further research with additional recombinant viral strains and additional assays will be needed to elucidate the mechanism of these findings. It is likely that other genes of the pandemic virus are involved in resistance to LL-37, since the results obtained with Mex 2:6 in MDCK cells differed somewhat from those obtained with Cal09 (e.g., Mex 2:6 was inhibited more at the lower concentrations and its infectivity was not increased as much at higher concentrations). CRAMP also lacked the ability to inhibit Cal09 or the Mex 2:6 strain and inhibited Mex 1:7 and NY01. HNP-1 had reduced inhibitory activity for Cal09 as compared to its activity against NY01. These results suggest that the resistance of Cal09 may apply to a variety of cationic antimicrobial peptides.
Our finding that cationic anti-microbial peptides have reduced activity against pandemic H1N1 fits a larger theme in which pandemic IAV is resistant to other innate inhibitors, including collectins and pentraxins [16,17]. In the case of the collectins surfactant protein D and mannose-binding lectin, the resistance applies to all recent pandemic strains and relates to reduced glycosylation on the HA of these strains. Surfactant protein A (SP-A) also has limited activity against Cal09 as compared to its activity against other viral strains [32]. SP-A has a distinct mechanism of inhibiting IAV compared to the other collectins [33]: it does not use its lectin activity to bind to HA-associated glycans but rather provides a sialylated glycan ligand to which the HA can bind. Pentraxin has a similar mechanism to SP-A [16,34]. This mechanism has been termed γ-inhibition. H-ficolin also functions as a γ-inhibitor vis-à-vis IAV, but it is able to inhibit Cal09 H1N1 [32]. The antiviral mechanism of LL-37 is not fully defined, but it does not involve HA inhibition, viral aggregation or inhibition of IAV uptake by epithelial cells (at least in the case of Phil82) [8], which distinguishes it from these other inhibitors. Hence, Cal09 shows in vitro resistance to a range of innate inhibitors that have distinct mechanisms of action. The 1918 H1N1 pandemic strain and the H2N2 pandemic strain were also not inhibited by SP-D [17]. These findings suggest that one of the reasons for the increased pathogenicity of pandemic H1N1 is its ability to bypass some initial soluble host defense barriers.
The central fragment of LL-37 has inhibitory activity for Cal09
An additional notable finding of this paper is that the central fragment GI-20 had greater activity against Cal09 than LL-37 in all cell types tested (Fig 8). This result provides a basis for further development and in vivo testing of this peptide. In addition, prior studies have shown that GI-20 has strong anti-HIV activity with the best therapeutic index among a library of LL-37-derived peptides, including LL-23, FK-13, and KR-12 [15].
Conclusions
Taken together, our results indicate that the central fragment of human LL-37 is essential for optimal antiviral activity and constitutes a useful template for peptide engineering to boost human host defense. Further engineering work is ongoing in Dr. Wang's laboratory based on this patented template (Wang, G. Anti-HIV Peptides and Methods of Use Thereof, US 20120237501 A1). It is of particular interest that pandemic H1N1 was found to be resistant to the antiviral activities of LL-37, CRAMP and (to an extent) HNP-1, and that this resistance is overcome by GI-20. In future studies we will explore the mechanism of the antiviral activity of GI-20 and the immune-modulatory effects of the LL-37-derived peptides with respect to IAV. This will be important since LL-37 has been found to have important immuno-modulatory effects during IAV infection in vivo [4].
Supporting Information

S1 Data. Supporting information for the article.
"Biology"
] |
Light dark matter candidates in intense laser pulses II: the relevance of the spin degrees of freedom
Optical searches assisted by the field of a laser pulse might allow for exploring a variety of as-yet-undetected dark matter candidates such as hidden photons and scalar minicharged particles. These hypothetical degrees of freedom may be understood as a natural consequence of extensions of the Standard Model incorporating a hidden U(1) gauge sector. In this paper, we study the effects induced by both candidates on the propagation of a probe electromagnetic wave in the vacuum polarized by a long laser pulse of moderate intensity, thereby complementing our previous study [JHEP 06 (2015) 177]. We describe how the absence of a spin in the scalar charge carriers modifies the photon-paraphoton oscillations as compared with a fermionic minicharge model. In particular, we find that the regime close to their lowest threshold mass might provide the most stringent upper limit for minicharged scalars. The pure-laser-based experiment investigated here could allow for excluding a sector in the parameter space of the particles which has not been experimentally ruled out by setups driven by dipole magnets. We explain how the sign of the ellipticity and of the rotation of the polarization plane acquired by a probe photon, in combination with their dependencies on the pulse parameters, can be exploited to elucidate the quantum statistics of the charge carriers.
1 Introduction

Identifying the dark matter in the Universe and consistently incorporating it into the Standard Model (SM) constitute challenging problems in today's particle physics. Cosmological as well as astrophysical observations provide substantial evidence that only a small fraction, 4-5%, of matter is made out of the elementary building blocks of the SM, but there is not yet a clear idea about the origin and nature of the dark matter [1-4]. This fact evidences why the SM is currently accepted as an effective theory which must be embedded into a more general framework at higher energy scales. Such an enlarged theory is expected to offer a comprehensive theoretical understanding of a variety of central problems, including charge quantization, which presently lacks an experimentally verifiable explanation. While some extensions of the SM provide mechanisms enforcing charge quantization, other scenarios including carriers of small unquantized charge are not discarded. Indeed, effective theories containing an extra U(1) gauge field [5-8], kinetically mixed with the electromagnetic sector [9-12], introduce this sort of Mini-Charged Particles (MCPs) [13-15] in a natural way. The fact that at low energies these carriers are not observed might be considered as evidence that the sector to which they belong interacts only very weakly with the well-established SM branch. It is, in addition, reasonable to assume that a hypothetical existence of MCPs induces nonlinear interactions in the electromagnetic field, provided they are very light sub-eV particles minimally coupled to the "visible" U(1) sector [16,17]. Slight discrepancies are then expected as compared with the inherent phenomenology of Quantum Electrodynamics (QED). Indeed, motivated by this possibility, various experimental collaborations have imposed constraints and ruled out sectors in the parameter space of these hypothetical degrees of freedom.
The phenomena of interest that have been exploited in this research area so far are summarized in several reviews [18-21]. These searches fall into two categories depending upon the scenario under consideration. On the one side, there are searches relying on astro-cosmological observations, which provide the most stringent constraints at present. Indeed, arguments related to energy loss that is not observed in Horizontal Branch stars limit the relative charge of MCPs to ε ≲ 10⁻¹⁴ for masses below a few keV [22]. However, further investigations in this direction have shown the extent to which this bound is sensitive to the inclusion of macroscopic and microscopic parameters of the star, as well as to certain processes that might attenuate it significantly and, simultaneously, hide it from our perception [23-26]. The described vulnerability of the astro-cosmological constraints is a strong motivation for considering, on the other side, well-controlled laboratory-based searches as a complementary approach. Generally, these have been conducted through high-precision experiments looking for the birefringence and dichroism of the vacuum [30-34], modifications of Coulomb's law [35,36], or the regeneration of photons from a hidden-photon field in "Light Shining Through a Wall" setups [37-42]. For details, variants and prospects of this kind of experiment we refer also to refs. [43-48]. Most of these experiments require the presence of a static external magnetic field to induce vacuum polarization mediated by virtual pairs of MCPs. As a general rule, the relevant observables depend on the field strength as well as its spatial extent and, usually, such dependencies allow for finding more stringent bounds as both parameters increase. However, our present technical capabilities in laboratories are quite limited, allowing us to achieve constant magnetic fields no higher than ∼10⁵ G along effective distances of the order of ∼1 km.
Focused laser pulses of a few micrometers extension can produce much stronger magnetic fields, but these are inhomogeneously distributed [49]. For instance, the highest peak intensity achieved so far, 2 × 10²² W/cm² [50], corresponds to a magnetic field strength of 9 × 10⁹ G. Besides, peak magnetic fields exceeding ∼10¹¹ G are likely to be reached by the ongoing ELI and XCELS projects [51,52], in which intensities greater than 10²⁵ W/cm² are envisaged. In view of these perspectives, high-intensity laser pulses are potential tools with which nonlinear phenomena in strong-field QED [53-56] could be observed for the first time. Obviously, this would also provide an opportunity for detecting the birefringence of the vacuum [57]. Indeed, motivated by this idea, the HIBEF consortium has proposed a laser-based experiment which combines a Petawatt optical laser with an x-ray free-electron laser [60]. Similarly to setups driven by static magnetic fields, polarimetric experiments assisted by an external laser wave might also constitute a sensitive probe for searching for weakly interacting particles. Although studies of this nature have been put forward for the case of axion-like particles [17,61-65], the estimation of exclusion limits for MCPs and hidden-photon fields from laser-based polarimetric searches is much less developed.
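As a rough cross-check of the field strengths quoted above, the rms magnetic field of a plane wave of intensity I follows from B_rms = (μ₀ I / c)^{1/2}; a minimal sketch (conventions such as peak versus rms fields shift the result by factors of order one):

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T m/A
C = 2.998e8            # speed of light, m/s

def b_rms_gauss(intensity_W_cm2):
    """rms magnetic field (gauss) of a plane wave of given intensity."""
    intensity_W_m2 = intensity_W_cm2 * 1e4
    return math.sqrt(MU0 * intensity_W_m2 / C) * 1e4  # tesla -> gauss

print(f"2e22 W/cm^2 -> {b_rms_gauss(2e22):.1e} G")   # ~9e9 G, as quoted
print(f"1e25 W/cm^2 -> {b_rms_gauss(1e25):.1e} G")   # > 1e11 G (ELI/XCELS)
```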
A first study on MCPs was given by the authors in ref. [66]. Later, in part I of this series [67], a further step was taken by investigating the optical effects resulting from an extended model containing fermionic MCPs and a hidden-photon field. There we revealed that, at moderate intensities ∼10¹⁶ W/cm², as provided by the nanosecond frontends of the PHELIX laser [71] and the LULI system [72], high-precision polarimetric measurements could improve the existing laboratory upper bounds on the coupling constant of MCPs by an order of magnitude for masses of the order of m_ε ∼ eV. However, charge carriers with unquantized electric charges might be realized in nature not only as fermions but also as scalar particles [73]. Hence, a complete study of this subject additionally requires the insights coming from the polarization tensor [68-70] that results from the Green's function of scalar MCPs and in which the field of the wave is incorporated in full. For a monochromatic plane-wave background, corresponding expressions in a pure QED context have been obtained previously [68,74]. In this paper, we study the effects resulting from scalar minicharges and paraphotons in a plausible polarimetric setup assisted by a long laser pulse of moderate intensity. We show how the absence of spin in the scalar charge carriers modifies the photon-paraphoton oscillations as compared with a fermionic minicharge model. In particular, we explain how the sign of the ellipticity and of the rotation of the polarization plane acquired by a probe photon beam, in combination with their dependencies on the pulse parameters, can be exploited to elucidate the quantum statistics of MCPs.
Photon Green's function and vacuum polarization
It is a long-standing prediction of QED that the optical properties of its vacuum are modified in the presence of an external electromagnetic field, owing to the nontrivial interplay between photons and the fluctuations of virtual electron-positron pairs polarized by the external field. Indeed, compelling theoretical studies provide evidence for the self-coupling of photons, rendering QED a nonlinear theory which allows for birefringence and absorption of photons traveling through the polarized region of the vacuum. However, the source of fluctuations inducing nonlinear self-interactions of the electromagnetic field is not restricted to virtual electrons and positrons. Although at the energy scale of QED the structure of the quantum vacuum is mainly determined by these virtual entities, any quantum degree of freedom that couples to photons modifies the structure of the effective vertices resulting from the generating functional of the one-particle-irreducible Feynman graphs. The lowest one, i.e., the one containing two amputated legs [eq. (2.1)], defines the vacuum polarization tensor $\Pi^{\mu\nu}(k, k')$ through the Green's function of MCPs as well as the bare and dressed vertices, as occurs in a pure QED context. Here $g^{\mu\nu} = \mathrm{diag}(+1, -1, -1, -1)$ denotes the flat metric tensor, whereas the shorthand notation $\bar\delta_{k,k'} = (2\pi)^4 \delta^4(k - k')$ has been introduced.
In the one-loop approximation, and in the field of a circularly polarized monochromatic plane wave of the form given in eq. (2.2), the polarization tensor splits into elastic and inelastic terms [eq. (2.3)], out of which the elastic contribution $\Pi_0^{\mu\nu}(k')$ is diagonalizable. Its eigenvalues $\pi_i$, as well as the form factor $\pi_0$, are functions which have been evaluated thoroughly for the cases of spinor and scalar QED in [68]. In contrast to $\Pi_0^{\mu\nu}(k')$, the other two terms in eq. (2.3) describe inelastic processes characterized by the emission or absorption of photons of the high-intensity laser wave. The involved eigenvectors $\Lambda_+(k')$, $\Lambda_-(k')$ and $\Lambda_\parallel(k')$ are transverse [$k' \cdot \Lambda_j(k') = 0$], orthogonal to each other [$\Lambda_i^*(k') \cdot \Lambda_j(k') = -\delta_{ij}$], and fulfill the completeness relation of eq. (2.4). In particular, $\Lambda_\pm$ turn out to be eigenstates of opposite helicities, with $\Lambda_\pm^* = \Lambda_\mp$. In its simple version, a scenario involving MCPs characterized by a mass $m_\epsilon$ and a tiny fraction $\epsilon$ of the electron charge, $q_\epsilon \equiv \epsilon|e|$, is reminiscent of QED; the phenomenological consequences associated with their existence would not differ qualitatively from those emerging in a pure QED context. As such, one can investigate the related processes from already known QED expressions, with the electron parameters $(e, m)$ replaced by the respective quantities associated with an MCP, $(q_\epsilon, m_\epsilon)$. So, in the following, we evaluate the extent to which MCPs might influence the propagation of a probe photon in the field of the strong laser wave [eq. (2.2)] through the dispersion laws that result from the poles of the photon Green's function $D^{\mu\nu}(k, k')$. The latter can be obtained by inversion of the two-point irreducible function [eq. (2.1)]. Indeed, by inserting the decomposition of the polarization tensor, we find that, up to an inessential longitudinal contribution, the photon Green's function in the field of the wave [eq. (2.2)] is given by eq. (2.5),
where $k_\pm \equiv k \pm 2\kappa$ and $k = (w, \mathbf{k})$. We remark that, in deriving the Green's function, the completeness relation [eq. (2.4)] has been taken into account.
Hereafter we consider the limiting case in which the polarization effects due to MCPs are tiny corrections to the free photon dispersion equation [$k^2 \approx 0$]. In this approximation, the pole associated with the $\parallel$-mode does not correspond to photon-type excitations since, independently of the $\pi_\parallel$-structure, the corresponding eigenvector $\Lambda_\parallel$ becomes purely longitudinal at $k^2 = 0$ [more details can be found on page 7 of part I of this series]. Conversely, the dispersion equations resulting from the poles associated with the transverse modes $\Lambda_\pm$ coincide with those found previously in refs. [66-68]. The corresponding vacuum refractive indices, $n_\pm^2(w, \mathbf{k}) = \mathbf{k}^2/w^2 = 1 - k^2/w^2$, are given in eq. (2.7). The last term on the right-hand side of eq. (2.7) is responsible for inelastic transitions between states of different helicities. In the limit of interest [$k^2 \approx 0$] this formula reduces to eq. (2.8), where $\omega_{\mathbf{k}} \equiv |\mathbf{k}|$ denotes the energy of the probe photons. Hereafter, we restrict $n_\pm(\mathbf{k})$ to an accuracy up to terms $\sim \pi_\pm/\omega_{\mathbf{k}}^2$, so that the effects resulting from the last contribution in eq. (2.8) are no longer considered. Note that this approximation is valid as long as the condition $\pi_0(k_\pm) \ll \omega_{\mathbf{k}}\,\kappa_0$ is satisfied; otherwise the use of our perturbative treatment would not be justified. We remark that, in this expression, $\theta$ denotes the collision angle between the probe and the strong laser wave. For the particular situation to be studied later on, i.e., a counterpropagating geometry [$\theta = \pi$] with $\omega_{\mathbf{k}} \sim \kappa_0 \sim 1$ eV, the above condition would imply $\pi_0(k_\pm)\pi_0(k)/\pi_\pm(k) \ll 2\ \mathrm{eV}^2$, which can easily be satisfied since the left-hand side is proportional to the square of the, presumably very tiny, coupling constant $\sim \epsilon^2 e^2$. Besides, we will deal with laser waves whose intensity parameters $\xi^2 = -e^2 a^2/m^2$ [with $m$ and $e$ the electron mass and charge, respectively] are smaller than unity.
Optical observables: including the paraphoton interplay
The $\Pi_0^{\mu\nu}$-eigenvalues contain real and imaginary contributions, $\pi_\pm = \mathrm{Re}\,\pi_\pm + i\,\mathrm{Im}\,\pi_\pm$. The respective refractive indices [eq. (2.8), limited to the first two terms on the right-hand side] must then also be complex quantities, i.e., $n_\pm \to n_\pm + i\varphi_\pm$. While the real part $n_\pm$ describes the pure dispersive phenomenon, the imaginary contribution provides the absorption coefficient $\kappa_\pm = \varphi_\pm \omega_{\mathbf{k}}$ for mode-$\pm$ photons. Accordingly, in the limit under consideration we find the expressions of eq. (2.10). Since the analytic properties of $\mathrm{Re}\,\pi_+$ and $\mathrm{Re}\,\pi_-$ are different, the vacuum behaves like a chiral birefringent medium. It is this property which motivates the search for MCPs through the corresponding experimental measurements. Because of this, we propose a setup in which an intense circularly polarized laser pulse collides head-on with an incoming linearly polarized probe beam. With this geometry it is guaranteed that the probe beam experiences the vacuum polarization effects efficiently during the interaction time. We remark that the vacuum polarized by the strong field of the wave does not share the symmetry properties of the empty vacuum [69,75]. Hence, as soon as the interaction takes place, the probe decomposes into its right- and left-circularly polarized eigenmodes, and each of them propagates with a different dispersion law, as can be deduced from the respective refractive indices [$w_\pm = |\mathbf{k}|/n_\pm$]. As a consequence of this effect, the polarization plane of the outgoing probe is rotated by a tiny angle with respect to the incoming one [eq. (2.11)], where $\tau$ is the temporal pulse length. Besides, in the field of the laser wave the vacuum is predicted to be dichroic. This effect induces a tiny ellipticity $\psi(\epsilon, m_\epsilon)$ in the polarization of the probe beam, which is determined by the nontrivial difference between the absorption coefficients [eq. (2.12)]. The difference between $\kappa_+$ and $\kappa_-$ manifests by itself that the photo-production rate of a pair of MCPs associated with one $\Pi_0^{\mu\nu}$-eigenwave differs from the rate resulting from the remaining mode. This statement is somewhat expected, because the optical theorem dictates that the creation rate of a pair from a probe photon with polarization vector $\Lambda_\pm$ is given by the imaginary part of the corresponding eigenvalue. We recall that the energy-momentum balance of this process, $k + n\kappa \to q_+ + q_-$, allows us to establish the threshold condition $n \geq n_*$, where $n_* = 2m_\epsilon^2(1 + \xi_\epsilon^2)/(k\kappa)$ depends on the parameter $\xi_\epsilon^2 = -\epsilon^2 e^2 a^2/m_\epsilon^2$. In terms of the MCP mass $m_\epsilon$, the previous relation translates into $m_\epsilon \leq m_n$, with $m_n$ referring to the threshold mass of eq. (2.13). It is worth pointing out that, as generating mechanisms of the rotation and ellipticity, the roles of the dispersion and absorption of probe photons in the field of a circularly polarized wave are exchanged with respect to the situation where a static magnetic field drives the vacuum polarization. This fact is also attributable to the different symmetries that remain in each external field configuration. For example, the symmetry group down to which the Poincaré group is broken due to the presence of an external magnetic field $\mathbf{B}$ is the direct product of two groups: one isomorphic to the Euclidean group ISO(2) and one isomorphic to the pseudo-Euclidean group ISO(1,1) [29]. Because of this, in a magnetized vacuum the probe beam turns out to be characterized by two propagating modes with mutually orthogonal linear polarization. Such polarization states lie on the transverse and pseudoparallel planes with respect to $\mathbf{B}$, i.e., where the groups ISO(2) and ISO(1,1) act, respectively.
In such a background, a difference between the respective absorption coefficients leads to a rotated polarization direction. In contrast, the vacuum symmetry group in a circularly polarized monochromatic plane wave [eq. (2.2)] is defined by certain transformations which decompose into rotations in the plane perpendicular to the propagation direction and translations which compensate the rotation of the field [69,75]. Certainly, in this case the invariance properties of the vacuum differ from the previous case and, correspondingly, the physical modes do not have to be linearly polarized. Rather, as we already argued, they turn out to be circularly polarized and, if one of the two circularly polarized probe modes is less heavily damped than the other, the resulting outgoing probe field attains a tiny ellipticity which remains from the stronger mode. These two situations resemble the two known cases of birefringence and dichroism occurring in crystals [for details see [76] and references therein].
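To make the threshold kinematics quoted above explicit, the condition n ≥ n_* can be solved for the MCP mass. A minimal derivation sketch; the rewriting of ξ_ε² through the electron intensity parameter ξ is our own step, using only the definitions quoted in the text, and should be checked against the paper's eq. (2.13):

```latex
% Threshold mass for pair creation with n laser photons, from n >= n_*.
\begin{aligned}
\xi_\epsilon^2 &= -\frac{\epsilon^2 e^2 a^2}{m_\epsilon^2}
                = \epsilon^2 \xi^2\,\frac{m^2}{m_\epsilon^2}
  \qquad \text{(with } \xi^2 = -e^2 a^2/m^2 \text{ as defined above)},\\
n \ge n_* = \frac{2 m_\epsilon^2 (1+\xi_\epsilon^2)}{(k\kappa)}
  &\;\Longleftrightarrow\;
  m_\epsilon^2 + \epsilon^2 m^2 \xi^2 \le \tfrac{n}{2}\,(k\kappa)\\
  &\;\Longrightarrow\;
  m_\epsilon \le m_n
  = \Bigl[\tfrac{n}{2}\,(k\kappa) - \epsilon^2 m^2 \xi^2\Bigr]^{1/2}.
\end{aligned}
```

For n = 2 this reproduces the two-photon threshold mass $m_2 = [k\kappa - \epsilon^2 m^2 \xi^2]^{1/2}$ quoted below; in the head-on geometry $(k\kappa) = 2\,\omega_{\mathbf{k}}\kappa_0$, so eV-scale probe and pump photons give eV-scale threshold masses.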
The model described so far relies on a hypothetical existence of MCPs only. Their occurrence is nevertheless naturally realized in scenarios involving hidden sectors containing an extra U(1) gauge group. The corresponding hidden-photon field $w^\mu(x)$ is massive, with mass $m_{\gamma'}$, and couples to the visible electromagnetic sector via a kinetic mixing characterized by an unknown parameter $\chi$. The diagonalization of this mixing term induces an effective interaction [eq. (2.14)] between the hidden current $j_h^\mu(x)$ and the total electromagnetic field $a^\mu(x) + A^\mu(x)$, where $e_h$ refers to the hidden gauge coupling. In addition, a mass $m_\gamma = \chi m_{\gamma'}$ for the visible electromagnetic field $a^\mu(x)$ results. Furthermore, as a consequence of eq. (2.14), the relation $\epsilon e = -\chi e_h$ is established, and the two-point irreducible function in the one-loop approximation becomes that of eq. (2.15).
Theoretical studies, as well as the experimental evidence, indicate that the mixing parameter is much smaller than unity [$\chi \ll 1$], so that a perturbative treatment in $\chi$ is well suited. Within such an approximation, the mass term of the electromagnetic field can be ignored, leading to a description of the probe photon beam in terms of two transverse polarization states $\Lambda_\pm$, whereas the $\Lambda_\parallel$-mode remains longitudinal and unphysical. Observe that the off-diagonal terms in $\mathbb{D}^{-1}(k, k')$ allow for the photon-paraphoton oscillation, a process driven both by the massive terms $\chi m_{\gamma'}^2 g^{\mu\nu}\bar\delta_{k,k'}$ and by those involving the vacuum polarization tensor $\chi\,\Pi^{\mu\nu}(k, k')$. However, hereafter we will suppose that the energy scale provided by the loop is much greater than the scale associated with the paraphoton mass [$\chi^2 m_{\gamma'}^2 \ll |\pi_\pm|$], so that the oscillations are driven mainly by virtual pairs of MCPs [see figure 1]. As a consequence of this hypothetical phenomenon, the polarization plane of a linearly polarized probe beam should be rotated by the angle given in eq. (2.16). Observe that the first contribution coincides with the outcome resulting from a pure MCP model [eq. (2.11)]. Hence, those terms that depend on the unknown parameter $\chi$ are connected with the photon-paraphoton oscillations.
The scenario including the hidden-photon field manifests vacuum dichroism as well, since the decay rates of the two "visible" Π^{µν}_0-eigenmodes - via the production of a MCP pair and its conversion into a hidden photon - differ from each other. The predicted ellipticity ψ(ε, m_ε, χ) [eq. (2.16)] is determined by the difference between the attenuation coefficients of the propagating modes. Note that in the absence of the kinetic mixing [χ → 0] this expression reduces to eq. (2.12). Throughout our investigation, comparisons between the pure MCPs model and the scenario including paraphotons will be presented.
Absorption coefficients and refractive indices at ξ < 1
In contrast to part I of this series, here we analyse the effects resulting from a model in which the MCPs are scalar bosons. In the first place, the absence of spin in these hypothetical degrees of freedom is manifest in the eigenvalues π_± [in which the minicharge ε is given in units of the value of the electron charge |e|]. The expression depends on the threshold parameter for the photo-production of a pair of MCPs, n* = 2m_ε²(1 + ξ_ε²)/(kκ) [see the discussion above eq. (2.13)], and on the functions Ω_± and A, in whose integral representations a parameter ∆ has been introduced.
As in I, our attention will be focused on the limit ξ < 1, particularly on the simple cases in which one or two photons from the strong wave [n = 1, 2] are absorbed. We consider only these two situations because - for ξ < 1 - the chiral birefringence and dichroism of the vacuum are predicted to be considerably more pronounced near the lowest thresholds than asymptotically far from them [n* → ∞ and n* → 0], where the vacuum behaves like a nonabsorbing isotropic medium [66]. Note that in the region of interest [ξ < 1] the parameter ∆ is much smaller than unity. We may therefore Taylor expand the integrands in eqs. (2.19)-(2.20) up to second order in ∆ and integrate out the ρ-variable. The real parts of the resulting expressions allow us to write the absorption coefficients [eq. (2.10)] in the form κ_± ≈ κ_{±,1} + κ_{±,2} (2.21), where κ_{±,1} and κ_{±,2} turn out to be contributions that are discontinuous at the threshold points n* = 1 and n* = 2, respectively. The former correspond to n* ≤ 1. Conversely, the contributions resulting from the absorption of two photons of the laser wave are valid for masses m_ε < m_2 = [kκ − ε²m²ξ²]^{1/2}. They are given by eqs. (2.24)-(2.27), where v_2 = (1 − n*/2)^{1/2} and the functions F_i(v_2) with i = 1, 2, 3 enter. Some comments are in order. Firstly, eqs. (2.24)-(2.27) were determined by restricting the threshold parameter to 1 < n* ≤ 2, so that the next-to-leading order contribution [∼ ξ⁴] to the two-photon reaction is not considered. We remark that, when the scalar MCPs are created in the center-of-mass frame almost at rest [v_2 ∼ 0, corresponding to n* → 2], the functions F_i(v_2) are dominated by their cubic dependences on v_2, so that the absorption coefficients of the scalar theory vanish cubically at this threshold. Far from it, we find the asymptotes κ_{±,2} ≈ ε²α m_ε² ξ⁴ (0.4 ∓ 0.1)/[4ω_k], provided the condition ξ_ε ≪ 1 holds. The corresponding expression for κ_{±,1} was derived previously in ref. [66].
In contrast to Re π_±, the imaginary parts of π_± are continuous functions. Hence, we only need to consider the refractive indices [eq. (2.8)]. After some manipulations, we end up with an integral representation for n_± − 1 suitable for carrying out the forthcoming numerical analysis. In this expression, ℓ ≡ ℓ(v, n*) = n*(1 − v²)^{−1} is a function of both the integration variable v and the threshold parameter n*.
The absence of signals is understood within certain confidence levels ψ_{CL%}, ϑ_{CL%}, which we take hereafter as ∼ 10^{−10} rad. We emphasize that this choice of sensitivity agrees with the experimental accuracies with which - in the optical regime - both observables can nowadays be measured [77]. Thus, in the following we present the numeric outcomes resulting from the inequalities 10^{−10} rad > |ψ(ε, m_ε, χ)| and 10^{−10} rad > |ϑ(ε, m_ε, χ)|. (3.1)
Estimating the exclusion limits
Some comments are in order. Firstly, the sensitivity limits found from these relations will be close to reality provided the parameters of the external field [eq. (2.2)] are chosen appropriately for the monochromatic plane-wave model. In an actual experimental setup this restriction can be met by using a long pulse of duration τ ≫ κ_0^{−1} whose waist size w_0 is much greater than its wavelength [w_0 ≫ λ_0 with λ_0 = 2πκ_0^{−1}]. In this way, a negligible contribution coming from the finite bandwidth is guaranteed. Based on the previous remarks, we find it suitable to consider the benchmark parameters associated with the nanosecond frontend of the Petawatt High-Energy Laser for heavy Ion eXperiments (PHELIX) [71]: τ ≈ 20 ns, w_0 ≈ 100−150 µm, κ_0 ≈ 1.17 eV, I ≈ 10^16 W/cm², ξ ≈ 6.4 × 10^−2. We also investigate the results coming from the parameters associated with the nanosecond facility of the LULI(2000) system [72]: τ ≈ 1.5 ns, w_0 ∼ 100 µm, κ_0 ≈ 1.17 eV, I ≈ 6 × 10^14 W/cm², ξ ≈ 2 × 10^−2. Clearly, with this second analysis we seek to evaluate the extent to which the projected bounds depend on the parameters of the external field. We note that the squared intensity parameters associated with the described laser systems are much smaller than unity [ξ² ≪ 1]. In such a circumstance, the corrections to the vacuum refractive indices are dominated by a quadratic dependence on ξ, i.e., n_± − 1 ∝ ξ². Likewise, the terms relevant for the absorption coefficients turn out to be κ_{±,1} ∝ ξ² and κ_{±,2} ∝ ξ⁴. Hence, the observables associated with the simplest MCP scenario [eqs. (2.11) and (2.12)] acquire the dependence |ϑ|, |ψ| ∝ ε² m_ε² ξ² τ/ω_k. This fact indicates that - for ω_k ∼ 1 eV - large sensitivities can be achieved provided the pulse lengths τ are long enough to compensate for the relative smallness of ξ. This idea motivates the use of the nanosecond frontends of PHELIX and LULI(2000). Now, a suitable experimental development requires a high level of synchronization between the colliding laser waves. To guarantee this important aspect, it appears convenient to use a probe obtained from the intense wave. We will therefore assume a probe beam with doubled frequency [ω_k = 2κ_0 = 2.34 eV] and an intensity much smaller than that of the strong laser field. Finally, to maximize the polarimetric effects, we will suppose that the collision between the probe and the strong wave is head-on [k · κ = −ω_k κ_0 for the spatial momenta].
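As a cross-check of the quoted numbers, the following minimal Python sketch evaluates the threshold masses m_n for the head-on geometry and the frequency-doubled probe. The neglect of the ε²m²ξ² correction (justified for ε ≪ 1) is our simplifying assumption, and the script is not part of the original analysis.

import math

kappa0 = 1.17                       # strong-wave photon energy [eV]
omega_k = 2 * kappa0                # frequency-doubled probe [eV]
k_dot_kappa = 2 * omega_k * kappa0  # head-on collision: (k.kappa) = 2*omega_k*kappa0 [eV^2]

for n in (1, 2):
    # m_n = [n*(k.kappa)/2 - eps^2 m^2 xi^2]^(1/2), with the eps-dependent term dropped
    m_n = math.sqrt(n * k_dot_kappa / 2)
    print(f"m_{n} ~ {m_n:.2f} eV")
# output: m_1 ~ 1.65 eV (the text quotes ~1.64 eV) and m_2 ~ 2.34 eV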
The projected exclusion regions are summarized in figure 2. They are shaded in purple and red for PHELIX and in blue and green for LULI. These should be trustworthy as long as the limits lie below the white and black dashed lines corresponding to ξ_ε = ε m ξ/m_ε = 1 for LULI and PHELIX, respectively. In this figure, the left panel shows the discovery potential associated with the pure MCP model, whereas the projected bounds including the hidden-photon effects are displayed in the right panel. The results shown in the latter were obtained by setting χ = ε, so that the hidden coupling constant coincides with the natural value e_h = e [see below eq. (2.14)]. This assumption allows us to compare the respective outcomes with the pure MCP model.
Figure 2: Projected exclusion regions. The left panel shows the results of the pure MCPs model, the right one the outcomes of the model including a hidden-photon field (γ'). In both panels, the white (LULI) and black (PHELIX) dashed lines correspond to the expression ξ_ε = 1. The left panel includes, in addition, the exclusion regions stemming from various experimental collaborations searching for rotation and ellipticity in constant magnetic fields, such as BFRT [30], PVLAS [31,32] and Q&A [33]. The shaded areas in the upper left corner of the right panel result from experimental collaborations dealing with the Light Shining Through a Wall mechanism. The respective 95% confidence levels needed to recreate these results are summarized in ref. [12].
Notice that the left panel incorporates some constraints established by other polarimetric searches [30-33]. The upper bounds resulting from these experiments do not represent sensitive probes of the parameter space associated with the model containing the hidden-photon field [11]; because of this, they are not displayed in the right panel. To compensate for this and still put our results into perspective, we include there the limits resulting from various collaborations dealing with Light Shining Through a Wall setups [30,37,38,41]. Similar to the fermion MCPs model, we observe that the most stringent sensitivity limits appear in the vicinity of the first threshold mass m_1 ≈ 1.64 eV. This outcome follows from a search for the rotation angle. In such a situation, the projected bound turns out to be ε < 2.3 × 10^−6 for PHELIX and ε < 7.5 × 10^−6 for LULI. When comparing these results with those previously obtained for the model driven by fermionic MCPs [ε < 1.9 × 10^−6 for PHELIX and ε < 6.5 × 10^−6 for LULI], we note that the absence of spin degrees of freedom slightly relaxes the projected sensitivity. Another interesting aspect to be highlighted in figure 2 is the shape of the upper-limit curves, which deviates from that resulting from the fermion MCPs model [cp. figure 2 in I].
Observe that, independently of whether the model includes paraphotons or not, the absence of signals for PHELIX parameters leads to similar constraints. This fact manifests the dominance of the first contributions to the observables in eqs. (2.15) and (2.16) for the given set of parameters. We infer that, in the region of interest within the (ε, m_ε)-plane, the characteristic times involved in the respective damping factors, χ²κ_{±,1}^{−1}, turn out to be much smaller than the pulse lengths [τ ≫ χ²κ_{±,1}^{−1}]. However, the behavior is different when the LULI parameters are used. For masses in the range m_1 < m_ε < m_2 the respective upper bounds are characterized by an oscillatory pattern whose occurrence is a direct consequence of the photon-paraphoton oscillations. This implies that, in such a regime, the characteristic times χ²κ_±^{−1} for LULI are much larger than the used pulse lengths τ, the former being mainly determined by contributions coming from the second threshold point [κ_± ≈ κ_{±,2}, see eq. (2.24)].
We continue our investigation by studying the dependence of the sensitivity limits on the hidden gauge coupling e_h. Figure 3 displays how the constraints for PHELIX might vary as e_h changes by an order of magnitude around e. Taking the central panel [e_h = e] as a reference, we note that the differences between it and the one evaluated at e_h = 10e [right panel] are almost imperceptible. In contrast, a notable distortion can be observed at e_h = 0.1e [left panel]. Generally speaking, both trends resemble the results found for the spinor MCPs model. However, when directly comparing the present outcomes with those corresponding to the latter model [see figure 3 in part I of this series], we see that, at e_h = 0.1e, the absence of spin degrees of freedom strongly modifies the qualitative behavior of the projected limits. This is not the case at e_h = 10e, where the difference between the scalar and fermion models is mainly quantitative.
Perhaps the most important conclusion that one can draw from our results is that the sensitivity limits expected for experiments driven by long laser pulses of moderate intensities would allow us to discard a region of the parameter space which has not been excluded so far by other laboratory-based collaborations. Astrophysical and cosmological constraints are stronger [18-21], but they must be considered with some care.
As we already mentioned in the introduction, the limits resulting from these scenarios strongly depend on models associated with certain phenomena which are not observed, such as star cooling in the first place. The vulnerability of these models has been addressed in various investigations and justifies the laboratory-based searches for these weakly interacting sub-eV particles [23-26]. Uncertainties introduced by parameters such as temperature, density and microscopic energy-momentum transfer are so notable that a reconciliation between the astro-cosmological constraints and those resulting from laboratory-based experiments is achievable. To put this statement into context, let us recall that for MCPs a study of the helium-burning phase of Horizontal-Branch (HB) stars establishes ε ≤ 2 × 10^−14 for m_ε ≲ keV. However, the lack of control of the physics occurring in such stellar objects might lead to the omission of suppression channels in the production of MCPs and paraphotons whose incorporation would attenuate the previous limitation. This issue has been analyzed carefully within the RM-model [24], a scenario in which two paraphotons - one massless and one massive (mass m_{γ'}) - are minimally coupled to dark fermions with opposite hidden charges. Owing to the incorporation of two types of paraphotons, this model turns out to be more complex than the simplest MCP scenario. We should mention, however, that they can be inserted in such a way that no additional charge labeling the elementary particles is needed, which leads to ε < 4 × 10^−8 ([eV]/m_{γ'})². Accordingly, less severe bounds appear as the paraphoton mass m_{γ'} becomes smaller. This fact fits very well with our approach, since it relies on the fulfillment of the condition m_{γ'} ≪ (π_±/χ)^{1/2} [see discussion above eq. (2.15)]. Note that, at the first threshold m_ε = m_1 resulting from PHELIX parameters, χ < 2.3 × 10^−6. So the loop dominance in the photon-paraphoton oscillations is well justified as long as m_{γ'} ≲ O(0.1−1) µeV, for which the constraints coming from HB stars become much less stringent than the projected sensitivity estimated here. In part I of this series we explained that there are even certain sectors in m_{γ'} in which our projected upper bounds for χ turn out to be currently the best model-independent results. Similar conclusions can be drawn from a study of a hypothetical solar emission of hidden massive photons, for which the constraint χ < 4 × 10^−12 (eV/m_{γ'}) for m_{γ'} ≲ 3 eV has been established [79].
Characteristics of the signals in the scalar MCPs model
Suppose that the outgoing probe beam acquires an ellipticity and rotation which do not coincide with the QED prediction [cp. discussion in section 3.1 of I]. If their origin can be attributed to MCPs, the next questions of interest are: do the signals come from the existence of scalar or spinor MCPs, and do they manifest the effects intrinsically associated with hidden photons? The answers to these questions can be obtained by investigating the dependencies of the observables on the laser parameters. In this subsection, we provide arguments which might help to discern the phenomenological differences that result from the various MCP models of interest. Our discussion will be based on the outcomes derived from the benchmark parameters of the nanosecond frontend of PHELIX; the dependencies displayed in figures 4 and 5 can be exploited to elucidate the nature of the charge carriers. To do this, we note that the oscillatory patterns in the ellipticity spread considerably as compared with those corresponding to the fermion MCPs model [see upper panel in figure 5]. As such, the displayed curves for scalar MCPs do not show oscillations within the investigated intervals of ξ, τ and λ. This fact constitutes a remarkable property because it implies that a slight variation of the intensity cannot change the sign of the signal in the scalar MCPs model, whereas it might change ψ(ε, m_ε, χ) substantially if the signal is induced by the fermion model. Clearly, this analysis is also applicable to the remaining parameters of the external laser wave.
The reason why the ellipticity curves for scalar MCPs do not change sign can be understood as follows: at m_ε = m_1, the charge carriers tend to be produced at rest [v_1 → 0], so that the leading order terms in the absorption coefficients [eq. (2.21)] tend to vanish. As a consequence, the characteristic times ∼ χ²κ_{±,2}^{−1} increase and can reach values much larger than the corresponding pulse length τ. Accordingly, the exponential damping factors in eq. (2.16) can approach unity. Besides, by quoting the refractive indices from ref. [66], (n_i − 1)|_{n*=0} ≈ −α ε² m_1² ξ²/(5πω_k²) δ_{−,i} with i = +, −, we find that the asymptotic expression for the ellipticity [eq. (3.2)] is determined by the oscillation probability between a photon and a paraphoton with negative helicities, P_{γ_−→γ'_−}. A general expression for the oscillation probability between photon and paraphoton has already been derived [see eq. (2.38) in I]. Manifestly, in figure 5, the green curves resemble the sin²-shape obtained above. We remark that, in contrast to the fermion model, the remaining oscillation probability in the scalar scenario tends to vanish identically [P_{γ_+→γ'_+} ≈ 0]. A similar study allows us to find the asymptote for the absolute value of the rotation angle ϑ(ε, m_ε, χ) at the first threshold point [m_ε = m_1], eq. (3.3). Observe that, since the refractive index satisfies n_− − 1 < 0, we have s < 0 and the involved function s + sin(s) ≤ 0. As a consequence, the rotation angle does not change sign either, a fact which is manifest in the lower panel of figure 4. We note that, at the first threshold mass [m_1 ≈ 1.64 eV], no manifestation of oscillations appears within the range of interest of the external field parameters. However, at m_ε = m_1, the patterns found in the fermionic model with a hidden-photon field fluctuate about the curves which result from the pure MCPs scenario. At this point we recall that - in contrast to the ellipticity - such oscillations of ϑ(ε, m_ε, χ) do not change sign [see I for details]. Therefore, if on varying ξ, τ and λ the signal does not oscillate as described previously, one could associate the measurements with the scalar model. Still, this way of elucidating the nature of the involved charge carriers may be considered more difficult than the approach associated with the ellipticity, since no change of sign arises.
Regarding the behavior of the rotation angle at m_ε = 0.1 eV, the occurrence of highly oscillating patterns in the model with paraphotons is notable [black dotted curves in figures 4 and 5, lower panels]. The corresponding trend associated with the fermion model turns out to be much less pronounced. While in this last scenario there is no change of sign, in the scalar case the sign of the signal might change. This is because, for the present benchmark parameters, the characteristic time associated with the negative-helicity mode, ∼ χ²κ_−^{−1}, becomes much smaller than the pulse length [τ = 20 ns], leading to an exponential suppression of the corresponding damping factor within the pulse duration τ. In such a situation, the remaining damping factor in eq. (2.15) can be approximated by unity, and

|ϑ(ε, m_ε, χ)| ≈ (1/2)(n_+ − n_−) ω_k τ + χ² sin[(n_+ − 1) ω_k τ / χ²]. (3.4)

Thus, as in the case of the ellipticity, one might - by changing the external field parameters - use a change of sign to elucidate whether or not scalar MCPs are realized in nature. Although eq. (3.4) looks similar to eq. (3.3), it differs from the latter in the important respect that it involves the refractive index n_+ − 1, which - in the current mass regime - does not vanish identically. Finally, in figure 6, the dependencies of the ellipticity and of the rotation of the polarization plane on some unknown parameters are shown. The central vertical panel of this figure displays how the signals might change with the mass m_ε of these hypothetical charge carriers. As in the fermion model, the ellipticity resulting from the scenario without paraphotons reveals a discontinuity at the first threshold mass [red curve], discussed in section 2.3, which is smoothed as soon as a hidden-photon field is taken into account [dotted black curve]. As a side remark, we point out that at the first threshold the ellipticity is constant in both models. Note that the blue curves - corresponding to the pure MCPs model at m_1 = 1.64 eV - appear neither in the upper panels of figure 4 nor in those of figure 6. This is because, at the first threshold mass, the ellipticity becomes extremely tiny, being determined by the next-to-leading order term in the absorption coefficients [eqs. (2.24)-(2.27)]. We note that, in contrast to the ellipticity, the dependence of |ϑ(ε, m_ε, χ)| on the mass m_ε follows a continuous path in both models. The left and right vertical panels illustrate how both observables depend on the mixing parameter χ and on the relative hidden coupling e_h/e. In both panels the fluctuating patterns of the ellipticity [eq. (3.2)] and of the rotation of the polarization plane [eq. (3.4)], at the respective masses m_1 = 1.64 eV and m_ε = 0.1 eV, can be seen. Particularly, the outcomes associated with the latter observable in the lower left panel manifest that the curve including a hidden-photon field is modulated around the pure MCPs contribution [first term on the right-hand side of eq. (3.4)]. Both panels show a fast decrease of the observables for small values of χ, a trend which is also manifest with respect to e_h/e [black dotted curve]. We remark that, in the right panel, the outcomes resulting from the pure MCP scenario [horizontal red and blue lines] are not sensitive to variations of the relative hidden coupling, because the latter only emerges within the framework of a hidden-photon field.
Conclusions and outlook
Experiments designed to detect QED vacuum birefringence in laser pulses might also provide insights into light dark-matter candidates such as MCPs and paraphotons. Throughout this investigation, we have paid special attention to the capability which long laser pulses [τ ∼ ns] of moderate intensities [ξ < 1] offer for the exploration of new domains of particle physics. In particular, we have pointed out that their long durations compensate for their small intensities, and that the combination of this feature with their well-defined frequency leads to the appearance of thresholds at which the projected sensitivities can be higher than those achieved in experiments driven by dipole magnets. We have noted that - depending on the external parameters - the absence of spin can facilitate or counteract the photon-paraphoton oscillations, as compared with the fermion MCPs model. This intrinsic property might manifest itself in the probe photon beam and can be exploited to discern the quantum statistics of these particle candidates. Special emphasis has been laid on a plausible change of the ellipticity sign that the probe photons can undergo, depending upon the nature of the MCPs.
Finally, we emphasize that the treatment used in this investigation is valid only for ξ < 1. It would be interesting to extend the present research to the case in which ξ > 1. We remark that the estimated upper bounds [ε ∼ 10^−6 − 10^−5 for m_ε ∼ 0.1 − 1 eV] can lead to an intensity parameter greater than unity [ξ_ε = ε (m/m_ε) ξ > 1], provided ξ ≫ 1. Corresponding laser sources exist. Indeed, intensities as large as ∼ 10^22 W/cm² have already been achieved by the HERCULES petawatt system [50], and a substantial intensity upgrade is foreseen at ELI and XCELS [51,52]. In connection with these high-intensity petawatt sources, the HIBEF consortium [60] has proposed an experiment to measure vacuum birefringence for the first time by combining a very intense optical pulse with ξ ≫ 1 and a probe x-ray free-electron laser [57]. Certainly, these measurements will provide a genuine opportunity to search for axion-like particles, MCPs and paraphotons. However, in contrast to our treatment, a theoretical description of a polarimetric experiment assisted by such pulses is complicated by the fact that - as a result of the focusing - their typical spatial extensions d ∼ µm are comparable with their wavelengths. As a consequence, the monochromatic model for the external field [eq. (2.2)] is no longer valid and the pulse profile becomes relevant for the establishment of the exclusion limits. For axion-like particles a study of this nature has already been carried out [64], but it remains intriguing to see how the wave profile can influence the upper bounds associated with MCPs and hidden-photon fields.
"Physics"
] |
Transcribing Vocal Communications of Domestic Shiba Inu Dogs
How animals communicate and whether they have languages is a persistent curiosity of human beings. However, the study of animal communication has been largely restricted to data from field recordings or controlled environments, which is expensive to collect and limited in scale and variety. In this paper, we take domestic Shiba Inu dogs as an example and extract their vocal communications from a large number of YouTube videos of Shiba Inu dogs. We classify these clips into different scenarios and locations, and further transcribe the audio into phonetically symbolic scripts through a systematic process. We discover consistent phonetic symbols among their expressions, which indicates that Shiba Inu dogs can have systematic verbal communication patterns. This reusable framework produces a first-of-its-kind Shiba Inu vocal communication dataset that will be valuable to future research in both zoology and linguistics.
Introduction
It has long been an interesting interdisciplinary scientific challenge to understand the languages of animals (Hockett, 1959; Radick, 2007; Von Glasersfeld, 1974). Dogs, who are arguably the best friends of humans, have drawn particular attention. Learning what dogs want to express has broad and profound significance, such as for understanding biological evolution (Pongrácz, 2017), for applying their languages to information technology, or sometimes just for satisfying our curiosity.
Vocal expressions of dogs, being their chief means of communication, have been studied previously. Here we define vocal expressions as all the sounds that a dog can make vocally, including bark, whine, whimper, howl, huff, growl, yelp, and yip. It has been shown that dogs can recognize scenes and express their understanding of the outer world, as well as their inner states, through their voices (Molnár et al., 2008; Hantke et al., 2018). Previous work has two main limitations. On the one hand, previous research treats this task as a simple classification problem, meaning that an audio segment containing barks is sent straight into one model to obtain a particular label such as an emotion (happy or sad). Although such results have shown that dogs have consistent sound patterns for different purposes, they provide little insight into whether dogs have structural languages; the potential linguistic patterns behind dogs' vocal expressions are largely ignored. On the other hand, previous datasets were collected by recording the voices of dogs in certain controlled environments. Such a methodology is costly in practice, and the data thus produced is limited in size and variety (as we will show later in Table 1). In this way, it is hard to infer latent linguistic patterns, and the patterns and semantic meanings of environments not covered by these datasets cannot be investigated either.
Figure 1: We aim at matching the barks of dogs with their semantic meanings. In our approach, the barks of dogs are transcribed into symbols.
Even though it is still highly debatable whether animals, or dogs in this case, have languages at all, in this paper we present a pipeline that treats dog sounds as a kind of language, similar to human languages. During this process (Figure 1), the specific patterns found in their vocal expressions imply that their barking sounds can carry corresponding semantic meanings, just as humans use fixed sound patterns to signify meaning. In this paper, we present a dataset of phonetically symbolic transcripts of Shiba Inu dog barks called ShibaScript, which ameliorates some of the aforementioned challenges. We pick Shiba Inu as the subject because it is a popular breed around the world and there are a large number of Shiba Inu videos on the web. Meanwhile, we provide a preliminary phonetic analysis of this dataset. We believe that this work is a first step toward investigating whether dogs have a sound-actuated language, just as humans do with speech.
ShibaScript contains barks from 16 different Shiba Inu dogs and the corresponding transcripts with timestamps of their barks, among which consistent sound patterns are found. These 16 dogs come from 16 families who post the dogs' videos on YouTube. The dataset has a total length of over 4 hours of pure dog sound production, 4469 sentences, and 7761 words. There are in total 9 distinct syllables in these transcripts. Note that, due to the ever-evolving nature of social media, the dataset-construction methodology we propose in this paper can be applied to YouTube continuously and yields a dataset that keeps growing in size and variety. We believe this dataset will help with future research on canine communication, as well as any general audience interested in learning what dogs want to express.
Our contributions lie in three aspects: 1. we introduce a reusable framework for transcribing animal voices from social media like YouTube; the framework is the first to assign phonetic symbols to dog barks and to describe dogs' vocal communication in a formal way; 2. we release a novel Shiba Inu voice transcription dataset, which is the first of its kind in the CL community; 3. we present some preliminary statistical findings from this dataset: 9 consistent phonetic symbols are discovered, with phonemes, words, and sentences being identifiable, and the consistent sound patterns found across these dogs reveal that dogs may have structural vocal communication patterns.
Approach
We now describe the method of constructing ShibaScript. To collect clean Shiba Inu barks and endow them with corresponding transcripts, a six-step process is used. These steps, in sequence, are: getting videos related to Shiba Inu dogs, extracting barks as "sentences", removing barks with noise, extracting barks as "words", separating syllables, and clustering to assign appropriate phonemes based on their acoustic features.
Collecting Data
In this work, we aim at investigating the language patterns of Shiba Inu dogs. Previous works (Ide et al., 2021; Ehsani et al., 2018; Molnár et al., 2008; Hantke et al., 2018) that endeavor to understand dog language patterns conduct experiments on datasets (Table 1) with limited sizes and scenes. Their usual approach is to take several dogs and record their barks while the dogs are put into the context of different events and various kinds of places. The disadvantages of this method are three-fold. First, the number of dogs is limited by the budget and practical conditions of these experiments. Second, such an approach can only include several "typical scenarios" and is almost impossible to cover all of the situations that dogs might experience in their daily lives. Third, a field study like this is costly in terms of human labor, equipment, and time, so it is hard to transfer the research to other animal species.
To solve these problems, we make use of the abundant resources from online social media. Each year, millions of videos are uploaded to YouTube, the largest video-sharing site in the world. These include large numbers of Shiba Inu videos of different scenes, uploaded by the people who keep the dogs. There are even people who set up an account specifically for their dogs and upload hundreds or thousands of videos. Collecting data from such Shiba Inu enthusiasts can substantially enlarge the number of dogs, cover more scenes, and reduce the cost. And most importantly, researchers can adapt this methodology to other dog breeds or even other animal species, which means this approach is highly reusable.

Name | Type | # of Dogs | Scenes | Activities | Size
Full Dataset (Ide et al., 2021) | video, audio, sensor | - | simulated disaster sites | - | 2825s
DECADE (Ehsani et al., 2018) | video, audio, sensor | 1 | indoors and outdoors | - | 4864s
Unknown (Molnár et al., 2008) | audio | 14 | mostly indoors, street | - | 6,646 barks
EmoDog (Hantke et al., 2018) | audio | 12 | 7 fixed types | - | 9,447s
ShibaScript | audio, link | 16 | 37* | 44* | 14,702s

*: The number of scenes and activities in ShibaScript is not fixed and can be expanded as the dataset is continuously collected.

Table 1: Dog-voice data sources used previously. Existing datasets were collected by manual recording. The first two contain videos of various lengths, while the latter two contain a certain amount of pure barks with pauses.
We select 16 users who have uploaded plentiful Shiba Inu videos and have relatively good recording conditions. These videos constitute the raw data.
Extracting Sentences
What we care about and label transcripts for are the moments when dogs make any vocal expression. Similar to human speech, it is possible to define a sentence in the sound system of dog expressions. The definition is as follows: in a sentence, dogs bark continuously at the granularity of seconds. Barks here represent the sounds dogs generate through vocal cord vibration.
In the videos we obtain from different YouTube users, there are a lot of irrelevant and silent frames in which the focus of the video is not the dog or the dog in the current frame is not barking.
In order to extract the video clips containing vocal expressions of dogs, we use PANNs (Kong et al., 2020), a large pretrained sound event detection model covering as many as 527 sound classes that can output audio tagging results as well as the on- and off-timestamps of events. Frames that are tagged with "bark" in the top 10 results are considered to contain barks. We manually labeled 300 samples and compared them with the PANNs output; a precision of 0.92 is observed.
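A minimal sketch of this tagging step is given below. It assumes the open-source panns_inference wrapper around PANNs and a 32 kHz mono clip; the file name is hypothetical, and the top-10 rule mirrors the description above.

import numpy as np
import librosa
from panns_inference import AudioTagging, labels  # pretrained PANNs and the 527 AudioSet label names

audio, _ = librosa.load('shiba_clip.wav', sr=32000, mono=True)  # hypothetical input file
tagger = AudioTagging(checkpoint_path=None, device='cpu')       # None downloads the default checkpoint
clipwise_output, _ = tagger.inference(audio[None, :])           # class scores with shape (1, 527)

top10 = np.argsort(clipwise_output[0])[::-1][:10]
contains_bark = any(labels[i] == 'Bark' for i in top10)
print('bark detected:', contains_bark)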
Removing Noise
In constructing a dataset, there is an apparent advantage to recording the audio of dogs in person: the background noise and the conditions of the recording device can be better controlled. In this work, since we pursue better coverage and use resources from public social media, the problem of noise in the audio samples is inevitable.
To generate the scripts and statistical results more accurately, we have tried our best to produce clean dog bark samples in two ways: first, as described in Section 2.1, we selected users who uploaded videos with less noise and better recording conditions; second, we use the following approach to remove much of the noise from our data.
From manual sampling, we find that the majority of the noise comes from either background music that the user edited into the video or humans talking while the dog was barking. In order to remove this kind of noise, we again make use of the PANNs results. Frames that are tagged with "speech" or "music" in the top 10 results are considered noisy frames, and sentences that contain noisy frames are filtered out.
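The filter itself reduces to a small predicate over the same top-10 tags, as the sketch below shows; the sentence container is a hypothetical stand-in, while 'Speech' and 'Music' are the AudioSet label names.

import numpy as np
from panns_inference import labels

def is_noisy(clipwise_scores, banned=('Speech', 'Music'), k=10):
    # True if any banned tag appears among the k highest-scoring classes
    top_k = np.argsort(clipwise_scores)[::-1][:k]
    return any(labels[i] in banned for i in top_k)

# sentences: hypothetical list of (audio_array, clipwise_scores) pairs
clean_sentences = [s for s in sentences if not is_noisy(s[1])]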
Extracting Words
In the vocal expressions of dogs, there are mainly long pauses and short pauses. A long audio sample can be divided into several sentences separated by long pauses; a sentence can be further divided into several words separated by short pauses, as in Figure 2. We can define "words" for dogs statistically: in a word, dogs bark continuously at the granularity of milliseconds. As mentioned in Section 2.2, the pretrained PANNs model (Kong et al., 2020) performs well on the task of sound event detection. Besides the fine-grained pauses, there may also be some noise that failed to be filtered in the previous step. To eliminate such fine-grained pauses and noise, here we directly detect the "barking" event within the sentences and do the word-level splitting based on it. In Hershey et al. (2021), the authors picked out a subset of audio clips from the original AudioSet (Gemmeke et al., 2017) and assigned "strong" labels to them (about 0.1 s resolution). The strongly-labeled subset of AudioSet results in improved model performance.
We first trained a uniform model from PANNs for sound event detection on the strongly-labeled subset of AudioSet. Then, to extract words out of the sentences, we annotated strong labels for the event "barking" on 246 sentences with a total length of 715 seconds using the phonetic analysis tool Praat (Boersma and Van Heuven, 2001) and finetuned the pretrained model. As shown in Figure 3, the finetuned model is used to detect the "barking" event, and based on the onset and offset of the event we can extract words from sentences and eliminate the short pauses.
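The step from framewise "barking" probabilities to word boundaries can be sketched as follows; the frame hop, threshold, and minimum-gap values are illustrative assumptions rather than values reported here.

import numpy as np

def frames_to_words(bark_probs, hop=0.1, thresh=0.5, min_gap=0.12):
    # Turn framewise probabilities into (onset, offset) segments in seconds,
    # merging segments whose gap is shorter than min_gap (a short pause).
    active = bark_probs >= thresh
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i * hop
        elif not a and start is not None:
            segments.append([start, i * hop])
            start = None
    if start is not None:
        segments.append([start, len(active) * hop])
    merged = []
    for seg in segments:
        if merged and seg[0] - merged[-1][1] < min_gap:
            merged[-1][1] = seg[1]  # bridge the short pause
        else:
            merged.append(seg)
    return merged

print(frames_to_words(np.array([0.0, 0.9, 0.8, 0.0, 0.7, 0.9, 0.0, 0.0])))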
Separating Syllables
In human speech, the minimum unit is the phoneme, which constructs syllables and words, based on which we form sentences with grammatical rules. We retain this setting in exploring dog language and define their barking sounds from the minimal unit, the phonetic symbol (Rohrmeier et al., 2015). However, as dogs have different articulatory anatomy from humans, the sounds can be vastly different. We try to label dog sound excerpts with the International Phonetic Alphabet (IPA).
In Räsänen et al. (2018), the authors show that it is possible to do syllabification even when no prior linguistic knowledge exists. Their way of segmenting speech into syllable-like units relies on sonority to locate the edges of syllables (Figure 4). Considering that dog voices currently come with no known language patterns, we can adopt this method to separate the syllables within one word.
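As a simplified stand-in for the sonority-based segmentation, one can approximate the sonority curve by a smoothed RMS envelope and place syllable edges at its local minima; the window size and the use of scipy's peak finder are our assumptions, not the exact method of Räsänen et al. (2018).

import numpy as np
from scipy.signal import find_peaks

def syllable_boundaries(word, sr=32000, win=0.02):
    # Smoothed RMS envelope as a crude sonority proxy; valleys mark syllable edges.
    n = max(1, int(sr * win))
    rms = np.sqrt(np.convolve(word ** 2, np.ones(n) / n, mode='same'))
    valleys, _ = find_peaks(-rms, distance=n)
    return valleys / sr  # boundary times in seconds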
Clustering and Phonemes Assignment
Given all these syllables and the assumption that dogs have a specific system of syllables, we can perform clustering and matching to find a common alphabet for Shiba Inu dogs. As these 16 dogs have different sexes, ages, and physical conditions, we conduct Spectral Clustering (Von Luxburg, 2007) on the syllables of each dog separately. The features we use are filterbank features (Strang and Nguyen, 1996). Generally, we set the number of clusters according to the number of videos of each dog, from 10 to 20 (the more videos, the more clusters). The clustering results after dimensionality reduction can be seen in Figure 5. After clustering, we find that, compared to human languages, dogs have fewer phonetic categories, which is understandable because humans have a more complex vocal system. Aggregating all the clustering results together, we refer to the IPA for illustration and find nine consistent syllables (Table 2). After setting up the syllable dictionary, we can in reverse obtain word transcripts with short pauses, sentence transcripts with long pauses, and audio transcripts with pauses.
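The per-dog clustering can be sketched as follows, assuming log-filterbank features mean-pooled over time; the python_speech_features extractor and the pooling choice are our assumptions, not necessarily the exact configuration used here.

import numpy as np
from sklearn.cluster import SpectralClustering
from python_speech_features import logfbank

def cluster_syllables(syllables, sr=32000, n_clusters=12):
    # syllables: list of 1-D waveforms from one dog; returns one cluster id per syllable
    feats = np.stack([logfbank(s, samplerate=sr).mean(axis=0) for s in syllables])
    model = SpectralClustering(n_clusters=n_clusters,
                               affinity='nearest_neighbors',
                               random_state=0)
    return model.fit_predict(feats)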
Data Scale
With the hierarchical structure of audios, sentences, words, and syllables, we have given each bark of the Shiba Inu dogs a symbolic transcript. The distribution of each tier is shown in Table 3. As the videos come from the open public medium YouTube, they contain a large excess of unlabeled fragments in which the dog does not bark or in which noise such as human speech and background music is present. What concerns us more are the barking fragments, that is, the "sentences"; we can take the total length of the sentences as the length of our dataset. At the same time, because we obtain our data from YouTube, the dataset size can grow over time as more users upload videos.
Data Variety
Shiba Inu is a very common and lively breed of dog; many people like them and keep them as pets. Their owners live with the dogs and record their daily lives through videos. As the dataset ShibaScript is transcribed from audio extracted from everyday recording videos on YouTube, the dogs may appear in a variety of common and even uncommon scenes rather than a limited set of scenes, and they may be doing many activities. Therefore, ShibaScript covers a very diverse set of scenes and activities, including 37 different scenes and 44 different activities. What's more, unlike other datasets, which record audio in fixed scenes or manually, the scenes and activities covered by ShibaScript can be expanded as the dataset is continuously collected.

Figure 6: The script of the sentence in the introduction, containing the ID of this sentence, the source audio ID, the time of this sentence in the audio, and the 5 words and their information in this sentence. Each word in the "transcript" is split by ";".
Figure 7 shows the scenes and activities covered by ShibaScript. We find that there is a subset of activities that appears in the vast majority of users' videos. For pet dogs, daily activities such as walking, running, and sleeping are essential and common, and their owners often record these activities, so they are covered by most of the users. This holds for the statistical results of scenes as well: common scenes in daily life like "quilt", "road", "bedroom", and "dog bowl" appear in the vast majority of users' videos. Benefiting from the large number of videos used to build the dataset, ShibaScript covers the vast majority of everyday scenes and activities.
Besides, some activities and scenes appear rarely in the statistics; they are shown as "others" in Figure 7. There are two possible reasons why an activity or scene appears infrequently. First, it is highly possible that the activity or scene is related to the personal circumstances of the user. For example, a dog has to wear a cone collar to prevent it from licking a wound, so the activity "wear a cone collar" appears only when the dog has had surgery, and this event is not a common one. The second reason is that users have different shooting habits, and a user may only record videos in certain scenes or activities. For example, some users only take indoor videos, so some outdoor activities and scenes like "dig sand" and "beach" cannot be covered in their videos, even if the dog actually participated in them. These activities and scenes with personal characteristics greatly expand the diversity of ShibaScript, so that it can cover some non-daily activities and scenes. Benefiting from the wide range of dogs, we can investigate a universal sound pattern of dogs, as their barks are extracted while they perform various activities in different scenes.
Analysis
We present preliminary statistical findings from ShibaScript, including a lexical analysis and an evaluation of transcription accuracy.
Lexical Analysis
During the transcription, there are in total 11 types of tokens, of which 9 are phonetic symbols (Table 2); the other two are short pauses between words and long pauses between sentences. Similar to human speech, the lengths of these tokens contain ample information, and the exact lengths of the tokens are kept in ShibaScript for concrete analysis. Because long pauses are largely determined by the scene at the time, their numerical analysis is not included here.
The mean and variance of each token length can be seen in Table 4. We find that almost every phonetic symbol has a similar length of about 0.35 s, except for the phonetic symbol [u:], a prolonged sound with an average length of 0.45 s, and the phonetic symbol [k], a relatively short-lived symbol with an average length of only 0.24 s.
Considering the monogram statistics (Figure 8) of ShibaScript, we find that the most frequent symbol is [en], which occurs 3478 times in ShibaScript; the following two are [au] and [a]. After analyzing the monograms, we turn to the relationships between symbols, i.e., the bigrams (Figure 9) of ShibaScript. Among these bigrams, several appear extremely frequently, which suggests the possibility that they are associated with some common semantic meanings; we will dive into that in future work. Due to space constraints, the detailed bigram statistics are shown in Section B.
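Counts of this kind can be reproduced in a few lines; the transcript format below (one list of symbols per sentence) is a hypothetical stand-in for the released file format.

from collections import Counter

# sentences: hypothetical, e.g. [['en', 'au', 'en'], ['a', 'en'], ...]
monograms = Counter(tok for sent in sentences for tok in sent)
bigrams = Counter(pair for sent in sentences for pair in zip(sent, sent[1:]))
print(monograms.most_common(3))
print(bigrams.most_common(3))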
Accuracy of Transcription
In this paper, we discover a consistent phonetic pattern of Shiba Inu dogs and assign a vocal dictionary of 9 symbols, which is a first-step trial in this area. To better evaluate the phonetic symbol set as well as the overall accuracy of our transcription, we conducted an evaluation test on these two aspects. The evaluation metric is a 5-level Mean Opinion Score (Viswanathan and Viswanathan, 2005). Three raters gave scores to either one syllable or one word according to Table 5.
Score | Description
5 | The label exactly matches up.
4 | Some difference exists between the label and the sound. Humans sometimes find it hard to distinguish.
3 | Difference exists between the label and the sound. Humans can tell the difference immediately.
2 | Although the label is obviously wrong, there is similarity between the label and the sound.
1 | The label is totally wrong.
Table 5: The evaluation metric for rating, which is similar to the MOS used in speech synthesis evaluation.
Phonetic Symbol Accuracy Evaluation
For each syllable category, we randomly select 50 syllables. The rating result is shown in Figure 10. The Fleiss Kappa (Kılıç, 2015) between the three annotators is 0.609.
Word Accuracy Evaluation
For the word accuracy evaluation, we randomly select 30 words for each dog and ask the same raters who scored the phonetic symbols to score them. The rating result is shown in Figure 11. The Fleiss Kappa between the three annotators is 0.516.
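Agreement values such as these can be computed with statsmodels; the 3-rater, 5-level rating matrix below is synthetic and only illustrates the call sequence.

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# ratings: one row per rated item, one column per rater, values in {1, ..., 5}
ratings = np.array([[5, 5, 4],
                    [3, 3, 3],
                    [4, 5, 4],
                    [2, 2, 3]])
table, _ = aggregate_raters(ratings)  # item-by-category count table
print(fleiss_kappa(table))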
Related Work
Early works on understanding animal communication never reached a point of maturity at which direct connections between vocal or literal expressions and their meanings could be established. In these works, researchers attempted to interpret animals in a certain aspect through classification. Among animals, dogs are popular research subjects. Considering their vocal expressions, this research can be divided into mainly three kinds: activity understanding (Ide et al., 2021; Ehsani et al., 2018; Molnár et al., 2008), emotion understanding (Hantke et al., 2018; Paladini, 2020), and individual understanding (Larranaga et al., 2015). This situation stems from two causes: first, we lack ample datasets related to the expressions of dogs; second, the language patterns of dogs have never been mastered, or have seldom been investigated.
In some datasets related to visual information about dogs (Parkhi et al., 2012; Iwashita et al., 2014; Abu-El-Haija et al., 2016), abundant data was collected from the Internet, which saved cost and made the data extensible. In comparison, previous vocal datasets depended on manual recordings, which limits the context and is expensive.
Given this, one thought is that we can utilize data on the Internet when collecting vocal data if we design a systematic process to extract the useful fragments.
In the meantime, previous research adopted a straightforward classification method and thus lacked sufficient investigation into the potential sound patterns of dogs. Since lexical analysis (Yule, 2022) is the fundamental step of language processing, another thought is that we can set up our own "alphabet" for dogs and transcribe dog barks into readable tokens for further research.
Conclusion
In this work, we introduce an unprecedented approach for transcribing the vocal communications of Shiba Inu dogs and release a corresponding dataset, ShibaScript. Compared to former approaches, it saves a great deal of cost and makes the dataset extensible, and the approach can easily be transferred to other animals. Most importantly, the method is a first step toward investigating the vocal patterns of dogs, bringing inspiration to the field of animal understanding.
We also present some preliminary statistical evaluation and analysis of ShibaScript. The evaluation shows that our symbol assignments in the transcripts are consistent. In the analysis, we show some interesting findings related to the lexical distribution. For future work, we can further research the semantic meanings of dog vocal expressions, because we have obtained the corresponding videos of the vocal expressions.
Limitations
Dataset Noise. As the audio is obtained from videos on YouTube, the quality of the videos has an impact on the quality of the final transcript. For example, inferior recording equipment may affect the quality of the sound; although we performed noise removal to preserve quality, the presence of background noise causes some losses in the transcription process.
Relationship Between Transcripts and Scenes
In this work we obtain transcripts of Shiba Inu dogs, and we also find that the dataset covers a variety of activities and scenes. There may be an interesting relationship between the dog vocal units and the environment, including the scene and activity. However, we did not quantitatively analyze this relationship; considerably more work will need to be done to discover semantic information in dog barks.
Phoneme Labeling Accuracy. In Section 2.6 we cluster the syllables and assign phonetic symbols to them. Then, in Section 4.2.1, we evaluate the result by MOS. It can be seen in Figure 10 that the accuracy score is not very high, which can be improved in our future work.
Ethics Statement
This paper makes use only of open-source video data from YouTube. During the transcription we focus only on the dog barks and make no use of the personal information of the users, so the released dataset ShibaScript does not contain any personal information and hence does not breach the privacy of any person.
A Clustering Visualization
The full results of the clustering can be seen in Figure 12.
B Bigram Statistical Result
Because of space restrictions, we do not show the detailed results in the main paper. The complete results are in Table 6.
C Activities and Scenes Covered by ShibaScript
44 activities and 37 scenes are covered by ShibaScript. The full statistics are given in Table 7.
Figure 2: The result of sentence-level and word-level split of a complete audio sample.
Figure 3: The SED model predicts whether the event "barking" exists in each frame. Words are extracted from the sentences by the onset and the offset.
Figure 4: Here a word is separated into four syllables based on sonority. The complete transcript of this dog is shown in Figure 6.
Figure 5: 2-D visualization of the spectral clustering of one dog's data using t-SNE. The complete clustering for all dogs can be checked in Section A.
Figure 7: The activities and scenes covered by ShibaScript. The area of each patch represents the number of dogs observed in the corresponding scene or activity.
Figure 8: The occurrences of each monogram. The blue bars show the occurrences of each monogram across the whole of ShibaScript; the green lines show the numbers of dogs producing the symbols, from 1 to 16.
Figure 9: The occurrences of each bigram. The blue bars show the occurrences of each bigram across the whole of ShibaScript; the green lines show the numbers of dogs producing the bigrams, from 1 to 16.
Figure 12: Visualization of the spectral clustering of the 16 dogs after t-SNE. The dogs' IDs increase from left to right and from top to bottom. Phonetic symbols are assigned to the different clusters.
Table 2: The nine types of syllables of Shiba Inu, together with a description of each syllable. Every description is a clickable hyperlink to an actual sound sample.
Table 3: The basic statistical information of ShibaScript.
Table 6: The frequency and coverage numbers of the 16 dogs' bigrams. Here Freq. represents the frequency of a certain bigram, and Co. represents the number of dogs that have produced this bigram.
One user's row - scenes: room, in the arm, quilt, dog bowl, cage, dining room, bathroom, by the window, lawn, snow, sea, beach, field path, road; activities: eat, walk, run, sleep, bark, be petted, be held, play with cats, bath, bask, play with people.
total | 39 | 44
All scenes: bedroom, living room, dog bowl, bed, quilt, cage, by the window, under the bed, dining room, bathroom, other animals, stairs, hospital, in the arm, by the fire, cat tree, heating pad, sofa, carpet, door, lawn, beach, sea, woods, field path, road, hill, shrine, shore, cabin, stream, garden, snow, terrace, sightseeing bus, mirror, on the ice, vacuum, other dogs.
All activities: open boxes, bath, eat, walk, run, bark, sleep, pick sth up, roll, lick, stretch, play with toys, play with dogs, sneeze, sniff, walk with a wheelchair, be held, be petted, listen to music, play with people, die, wears a muzzle, wade in water, be medicated, bow, bask, watch fireworks, play with cats, dig sand, climb the mountain, be vacuumed, sprawl, dig the snow, has its teeth be brushed, hum in the sleep, squat, cut nails, wear a cone collar, surf, wag the tail, blow, pee, be massaged, has its fur be brushed.
Table 7: The full statistics for the scenes and activities appearing for each user. The order of the items in the "Scenes" and "Activities" columns is not statistically significant.
"Linguistics"
] |
How Low-Code Tools Contribute to Diversity, Equity, and Inclusion (DEI) in the Workplace: A Case Study of a Large Japanese Corporation
Learning and using technology in the workplace are essential for a company's commitment to the sustainable development of its resources. Finding competent engineers who can handle information communication technologies (ICTs) is a challenge for companies. Currently, however, the ability to use these technologies is limited to technicians with specialized training, and not everyone can engage in development. Therefore, it is safe to conclude that equity in the use of technology has not yet been realized. This study aims to analyze, based on actual cases, the necessary conditions and mechanisms for people with diverse experiences and circumstances, not limited to engineers, to participate in ICT development in order to address human resource diversity. The use of technologies such as the low-code platforms (LCPs) that have recently emerged on the market has shown that nonprofessional engineers without programming training can participate in development projects. This research will be useful to managers in advancing Diversity, Equity, and Inclusion (DEI) strategies in their workplaces and contributes to organizational research regarding a new trend in technology use by individuals: low codability. The findings of this study are of significant relevance to the Sustainable Development Goals (SDGs) of decent work and economic growth, as well as gender equality.
Introduction
Today, the use of information communication technology (ICT) has become an integral part of the corporate environment [1]. Additionally, there is a concern that, in regions facing problems caused by environmental changes such as an aging population, there will be a shortage of engineers involved in the development and operation of ICT in the future [2]. Companies are thus faced with the challenge of securing resources. From the perspective of sustainable human resource management [3], companies must provide work environments and systems that enable individuals with different abilities, conditions, and circumstances to play active roles. Therefore, Diversity, Equity, and Inclusion (DEI) are important factors in the workplace [4] when considering resource sustainability. To become an engineer engaged in software development, it is necessary to acquire information technology knowledge, such as programming languages, and to have practical experience in development project work; however, it is difficult for all individuals to gain experience and learn equally. It is therefore impractical for individuals in different circumstances to change jobs and become software engineers, and it is imperative for enterprises to consider DEI strategies [5] to ensure the sustainability of their professional ICT resources.
In recent years, however, the development of IT has brought to the market a type of technology that differs from conventional general-purpose ICT in terms of the opportunities it offers for technology use. Low-code platforms (LCPs) [6] and interactive artificial intelligence (AI) such as ChatGPT [7] have characteristics that can eliminate the need for formal education in ICT technology. If we can identify how the use of these technologies influences the inclusion of diversity in the workplace and improves equity in the use of technology [8], we should be able to develop new solutions that promote DEI through the use of technology. Although this study is mainly a case analysis of the use of low-code platforms in the workplace, their defining feature, development without a programming language, extends to interactive Generative AI, which shares the same feature. It is imperative to study the potential of new technology tools with low-code characteristics to advance the objectives of the Sustainable Development Goals (SDGs).
Background and Problem Posed
First, we examined ICT, which has become an integral workplace technology, and its relationship with DEI. Generally, ICT development in the workplace [9] is limited to a group of people involved in planning and development, as it is a well-established requirement that those involved have knowledge of and practical experience with ICT, such as programming. Other employees are primarily involved in using the ICT installed in the workplace as a business system to perform their work. In other words, there is a gap between the planning and development of ICT and its operation, and access to ICT as a technology is not equally distributed. Here we observe a state in which the inequity referred to in DEI manifests as limited access to this technology.
Fundamental systems such as Enterprise Resource Planning (ERP) are designed to standardize and streamline processes [10]. Generally, business-critical systems clearly segregate the roles and authority of the individuals who use them, fixing and restricting job descriptions. Under these conditions, the diversity of individual knowledge and experience is confined to the definitions of roles and positions of authority in the workplace [11]. Consequently, the company's ability to draw on the diverse knowledge of individuals, and the expansion of employees' skill sets and career directions [12], are not considered. Modern business systems aim to streamline operations, set rules, and clearly classify roles, as shown in Figure 1, so that the system is restricted to a fixed framework rather than one that fosters diversity and equity.
In a workplace where rationalization, division of employees, and information technology have become commonplace, the goal of implementing diversity and equity at the level of individual awareness seems counterintuitive. Therefore, to realize DEI in workplaces [4] where ICT has been introduced, it is necessary to propose new and simple ways of using ICT that can be practiced at each workplace level.
Manuscript Structure
This paper is structured as follows: Section 1 provides an introduction and background to this study. Section 2 presents a literature review on Diversity, Equity, and Inclusion (DEI) and low-code tools. Section 3 describes the research methodology. Sections 4 and 5 present the results of the study: Section 4 presents the results of a project that utilized low-code development in a company, together with a subsequent observation of the evolution of the employees' skills in information and communication technology (ICT) over time, and Section 5 draws on public databases for quantitative verification. The results of the qualitative analysis presented in Section 4 are validated through the quantitative analysis demonstrated in Section 5. Section 6 offers a synthesis of the characteristics through which low-coding contributes to DEI. Section 7 then presents the responses to the research questions. Section 8 concludes.
Diversity, Equity, and Inclusion Perspectives on Technology Use
Diversity, Equity, and Inclusion (DEI) are essential elements for organizational transformation and development [13] and are increasingly considered indispensable for organizational success and societal well-being [14]. However, fundamental solutions have yet to be discovered. Extensive research has been conducted to understand the complexity of DEI management and to develop strategies that foster diverse perspectives and environments [15]. Regarding how diversity impacts business activities, it has been suggested that changes in sales, customers, market share, and relative profits are influenced by increases in racial and gender diversity [16]. Diversity [17] is defined as the distribution of personal attributes among the members of an interdependent workplace. Various classification methods to explain the contents of diversity have been proposed [18]. There is also a multilevel perspective on diversity: extensive research has focused on the impact of team-level diversity on team and organizational outcomes, whereas studies of diversity at the larger organizational or societal level are relatively scarce. Regarding the impact of team diversity on group performance, Jehn et al. [19] suggest that interactions among members with different backgrounds and perspectives within workplace teams can enhance a team's problem-solving ability and creativity. Furthermore, Jehn [20] suggests that focusing on different aspects of diversity can have different effects on group performance, implying that diversity can have both positive and negative impacts.
Additionally, regarding cultural diversity, it is suggested that individuals with different cultural perspectives actively discussing within work groups may lead to more creative and effective solutions [21]. Jackson and Ruderman [22] explored diversity within teams in organizations, suggesting that organizational diversity promotes creativity and efficiency, and that teams with diverse backgrounds and perspectives have a higher ability to generate innovative solutions. Accepting diversity within teams enriches the organizational culture and improves the ability to capitalize on new opportunities and address various challenges. Cox and Blake [23] suggest the importance of developing strategies to collect different perspectives from employees with diverse cultural backgrounds and turn them into competitive advantages to maintain organizational competitiveness. Although diversity management is a widely recognized management approach, its definition is ambiguous [24]. Recent research has increasingly explored the impact of the DEI concept on consumer behavior, market trends, and brand management [25]. Furthermore, Ferraro et al. [26] examined the importance of DEI for brand managers, providing suggestions for utilizing DEI to enhance brand value and image, and considering how actively incorporating diversity contributes to business success. These research trends clearly show that leveraging diversity and inclusion is essential for business success.
Although many studies on how diversity affects organizations have been conducted, applied research from the perspective of practice, such as how to implement Equity and Inclusion in the workplace and methods of DEI management [4], is still lacking. Inequality and disparities in the use of computer technology have been discussed as digital divides [27,28].
Low-Code Development and Interactive Generative AI Tools
Recently, organizations have begun using no-code/low-code development platforms [6] to create applications for digital transformation [29]. Organizations drive digital transformation by adopting low-code platform development [30], which can alleviate past software development problems. The main feature of no-code development platforms is that flexible and low-cost applications can be created in a short time by integrating components in a drag-and-drop manner through a visual interface, without the need for in-depth programming knowledge [31]. This allows organizations to utilize their existing human resources for application production, instead of requiring specialized software programmers. This can alleviate difficulties such as the need for software development resources from a sustainable perspective, which require skilled ICT technicians and involve high running costs for program coding and maintenance [32]. However, low-code application development platforms are less flexible and have limited functionality because they have their own specific templates, and developed applications have limited scalability compared with program-coded software [31]. There are different definitions of no-code and low-code platforms [33]; however, this study uses the term "low-code platforms" (LCPs) to encompass both, as they are handled by employees without programming skills and are used without any coding.
Second, if we define low-code features as functions that enable a person to program a computer without a programming language [34], then we can assume that interactive Generative AI tools also have low-code-type capabilities. Since its public release in 2022, the Generative AI tool ChatGPT [35] has captured the world's attention owing to its sophisticated ability to perform extremely complex tasks, and its user base has grown rapidly. ChatGPT has advantages and disadvantages in facilitating teaching and learning, including individualized, optimized, and interactive learning promotion, among a variety of other features. However, ChatGPT has inherent problems such as the invasion of privacy, bias arising from training data, and the generation of incorrect information [36]. New AI tools can change the way workers perform and learn; however, information on their impact on operations is limited. Brynjolfsson and Raymond [37] examined the staggered introduction of Generative AI-based conversational assistants in customer support operations and reported improvements for novice and low-skilled workers but little impact on experienced, high-skilled workers. The results suggest that access to Generative AI improves work productivity and that this effect is more pronounced for unskilled workers [37]. Noy and Zhang [38] examined the impact of generative artificial intelligence (AI) technology, the assistive chatbot ChatGPT, on productivity and found that ChatGPT increased productivity in a mid-level professional task. Lower-skilled participants benefited from ChatGPT the most, suggesting that it reduced inequality among workers. Recent research therefore suggests that Generative AI tools can help improve equity among workers.
Advantages and Uniqueness of This Study
This study explores a new integrated perspective: the relationship between DEI and the conditions of ICT use in workplaces, such as low-code platforms and interactive AI tools. We conducted applied research based on a corporate case study to address the impact of these new tools on actual business operations and their potential contributions to DEI in the workplace, considering areas still lacking despite the results of the aforementioned prior studies. The findings of this study are intended to assist organizational managers in implementing DEI strategies in the workplace and to provide a new perspective on improving equity through the use of technology in organizational research, such as in studies on DEI and sustainable resource management.
Purpose of This Study
This study aims to examine whether low-code tools, which appear to attract a more diverse user base than traditional ICT requiring conventional programming skills, contribute to DEI, and to deepen the understanding of the characteristics that promote DEI in the workplace. By uncovering this insight, it will be possible to understand more accurately how to use tools to advance workplace DEI strategies, benefiting managers and organizational researchers in their practical endeavors. To achieve this objective, the following research questions were posed:
RQ1: How do low-code tools contribute to Diversity, Equity, and Inclusion (DEI) in the workplace?
RQ2: How applicable are the DEI advancements observed in a large Japanese corporation across different geographical and organizational contexts?
Research Methods
Answers to the research questions were proposed through hypotheses inferred from case study analysis and verified through interviews with relevant parties, in addition to the results of follow-up observations. A mixed research method was employed, whereby the results of the case studies and interviews were used for qualitative analysis, and the results of a questionnaire survey of 1000 general residents were used for quantitative analysis.
Field of Study
This study examined the workplaces of large Japanese companies. Many large Japanese companies have already published their DEI policies, targets, and systems but are struggling to achieve their goals compared with companies in other developed countries [39]; they are therefore considered suitable for a deep examination of the underlying issues. The case study draws on Project Palette, which was conducted in the commercial department of a large Japanese company, Company A. Company A is a global company headquartered in Japan and is a conglomerate with approximately 300,000 employees on a consolidated basis as of 2023. Interviews were administered to employees and managers at Company A's plants and headquarters. Project 1 (Project Palette), from which data were obtained, was an internal business reform project conducted by the commercial department of Company A's factory in Japan, with the first author serving as the project leader. We used the results from an internal audit of the situation as of 2021, when Project 1 (Project Palette) was implemented, and of the subsequent natural progression of the situation in 2023. This project focused on the previously unrepresented knowledge of administrative staff and women working as assistants, with attention to representation and the conditions that support equity in their use of technology. We investigated the inhibitions of non-engineers regarding their use of technology, the demonstration of their knowledge, and how technology can provide them with support.
Project 1 (Project Palette) Results (2021 End)
Project 1 (Project Palette) was an ICT construction project conducted by working members under the theme "Sharing Diverse Knowledge", in which applications and portal sites were created using the LCP over a period of approximately six months. The applications were produced by non-engineering clerical members of the team, from specification studies to implementation, without the intervention of ICT specialists or consultants, which differentiates this project from typical ICT development projects. The twelve members who collaborated on Project Palette 1 comprised administrative and clerical groups in the commercial department of Company A's heavy industrial manufacturing plant. Before the project began, members were concerned about their lack of programming experience and ICT knowledge when they heard about ICT development; however, once they understood that working with the LCP did not require complicated procedures or programming, they accepted the task.
Several applications were created in Project Palette; the most notable outcome, however, was the framework for creating knowledge-sharing [40] applications. Eight members contributed to this workshop on the creation of knowledge applications. By repeatedly applying the framework for sharing and refining the business knowledge of general affairs, which had not been documented in manuals, from individuals to group members, the group went from 0 applications at the start to 94 knowledge applications in six months. The framework consists of iterations intended to spiral [41,42] through the following four items.
(1) Prepare a memo explaining the procedure (personal);
(2) Review the content to improve and add information (group);
(3) Create LCP applications (personal);
(4) Review the applications and post them on the portal (group).
Additionally, the requirement to create a knowledge-sharing application was initially met with concern [43], especially by the women, who said, "There is nothing to share in terms of knowledge because it is general affairs work." However, the result was first a representation of 223 knowledge memos, followed by group discussions that improved the content, ultimately resulting in the creation of 94 knowledge-sharing applications recognized by group members as organizational knowledge assets [44]. Table 1 presents the transition in the quantity of knowledge units represented by workshop participants from the beginning to the end of the workshop; the number of knowledge units released as applications was 94. Two notable results were obtained. First, the organization came to recognize 94 pieces of knowledge as shared assets, moving from a state of tacit knowledge that individuals assumed to be zero to a deliverable known as an application. This demonstrates the diversity of knowledge and the practice of inclusion in the organization. Second, administrative staff, who had no formal learning or practical experience in ICT development, implemented ICT production for business use on the LCP without any special introductory training. They created many applications and a portal site on which to post them.
Interviews for Project Palette 1 (2021 End)
In interviews conducted with the eight members at the end of Project Palette 1, we asked about their impressions of the use of the LCP and the asset value of the knowledge-sharing applications they produced. The person in charge, who was mainly active in creating applications, highly rated the fact that no programming knowledge was required, saying, "I used to shy away from ICT programming because I was not good at it, but the LCP is very good because there is no programming required and it can be operated intuitively." Furthermore, regarding the knowledge applications, the members pointed out the value of diverse knowledge being expressed and the importance of being able to refine and recognize such knowledge as an organizational asset, stating, "It was good to be able to understand other people's work" and "I think we created an asset because we all brushed up on the knowledge that was expressed".
Project Palette 1 Follow-Up Results (2023)
To find out what changes Project Palette 1 had brought to the clerical members of the administrative group, interviews were conducted at the end of 2023 with seven group members who had remained in the same department, asking them to report on the current situation. New changes were observed. One member had left the workplace and her results could not be confirmed, which illustrates that women's workplace situations tend to fluctuate owing to life events and the need for support. Table 2 summarizes the changes from 2021 (at the time of the project) to 2023 for each person in charge.
First, we checked the operational status of the knowledge-sharing applications created during Project Palette 1 and found that many had stagnated, with updates delayed; nevertheless, new applications had been added and the mechanism itself continued to operate.
Next, when the status of LCP utilization was checked, it was confirmed that after Project Palette 1, the practical skills of the members had progressed further, with new functions being planned, more applications being created, and other functions being used to experiment with their application to business operations. Assistant B, who had no experience in ICT development and was in charge of unrelated administrative work at the start of the project in 2021, was assigned to create and operate applications in the LCP in 2023. In an interview, Assistant B said, "I never had the opportunity to learn about ICT and programs before, but now that I have the chance to be involved in technology, I want to try it myself". Providing opportunities is the first step in unleashing a diverse range of talent.
Summary of Project Palette 1
Project Palette 1 proved that specifications, talent, and knowledge that did not exist before could be built using low-code tools, involving people who had not previously been directly engaged in ICT development in the production of applications. Additionally, two elements were found to be needed to enable individuals in diverse circumstances to participate in ICT development in the workplace: (1) reducing the burden of learning a programming language and (2) ensuring equal opportunities to experience technology use and receive support. As a technology specification that helps diverse individuals develop their skills, it is necessary to ensure fairness in individuals' use of technology. This can be achieved by minimizing the prerequisite knowledge and skills, such as programming languages, through low-code tools. However, introducing technology into the workplace alone does not eliminate inequality in use; environmental support, such as workshops on the use of technology and advice from those around the users, is essential.
Comparison with Statistics
To compare the general trend with the Project Palette 1 results, a comparison was made with survey data from the IPA (Information-Technology Promotion Agency, Japan). The data are derived from the database of a survey conducted by the IPA in 2021 on Japanese companies. A total of 15,000 companies (5000 of which are classified as IT companies) were invited to participate in the survey, and 1935 companies responded (a response rate of 12.9%) [45].
First, we analyzed the results of the survey on the demand for IT personnel in companies. Figure 2 illustrates the responses to this question. For companies with more than 1001 employees, 49.8% of respondents indicated a "significant shortage"; together with those indicating a "slight shortage", more than 95% reported a shortage. Similar results were observed for companies with 301 to 1000 employees, with more than 90% of respondents indicating a shortage. For smaller companies with fewer than 300 employees, 59.4% of respondents answered that there was a shortage. These findings indicate that companies are inadequately staffed relative to the demand for IT personnel. The next step is to examine the situation of ICT education in the workplace. To this end, we review the results of a survey on the education of IT personnel in companies. Figure 3 shows the responses to the question, "What kind of career support do you provide for IT personnel development?". The survey revealed that 37.7% of large companies (1001 or more employees) and 9.8% of small companies (fewer than 300 employees) reported providing training to improve their employees' IT skills. This indicates that there may be inequity in the allocation of time or budget for training. Moreover, among small companies (fewer than 300 employees) in particular, about 70% indicated that they had not implemented any measures, in contrast to the companies that had taken some. It is challenging for SMEs to invest in IT training [45].
The data indicate two significant barriers to integrating DEI into ICT development in Japanese companies: reluctance towards ICT education and investment, and inequitable opportunities for experiencing and learning technological skills. Consequently, it is evident that acquiring programming skills is generally challenging for novices, given the inequities in workplace education. To achieve DEI in the workplace, it is necessary to provide tools that can address these disparities in access to technology and education.
Features of Low-Coding for DEI
This study examines individuals engaging in programming through low-code platforms and the nature of the tasks they program. Not all developers consider low-code tools the optimal programming solution, as indicated by previous research on low-code development, because LCPs have their own templates that limit their functionality and restrict their scalability and flexibility compared with coded software programs [31]. According to interviews with engineers at Company A, those proficient in programming languages expressed dissatisfaction with low-code platforms, citing limitations in functionality and time-consuming processes for tasks that could not be fully implemented using low-code methods. Conversely, interviews with individuals lacking programming skills, such as administrative staff and assistants, revealed positive feedback, with comments such as "I had reservations about programming, but low-code platforms are intuitive to use" and "I never received formal education in programming, but I could still give it a try". This indicates a lower resistance to technology use among non-technical users. The implementation of low-code platforms has facilitated the programming of meticulous tasks not previously considered for systemization, akin to the tacit knowledge possessed by administrative staff. Therefore, it can be argued that low-code programming fits well with individuals who possess previously unexpressed knowledge, bringing their tasks into the realm of socialization [46] and thus contributing to the improvement of DEI within the organization. This relationship is illustrated in Figure 4. The left circle in Figure 4 represents the technical scope of programming-skilled individuals, whereas the right circle represents the technical scope of non-programmers. The overlapping area signifies a shared knowledge domain (related to IT and business). Low-code application development fits into area (III): tasks for which explicit programming has not been feasible owing to unexpressed knowledge. If we define low-code features as the capability of individuals to program computers without the need for traditional programming languages [34], we can infer that interactive Generative AI tools also exhibit low-code-like functionalities. Therefore, here we treat as a low-code function the ability to command an interactive Generative AI tool in natural language to obtain the output needed by the individual.
Similarly, the support provided by Generative AI conversational chatbots fits into areas where advice is required for tasks that have not yet been learned or mastered (Figure 5). The left circle in Figure 5 represents the knowledge domains of individuals skilled in their respective tasks, whereas the right circle represents the knowledge domains of non-skilled individuals. The overlapping area indicates a shared knowledge domain (common sense and task-related knowledge). Generative AI-based advice fits into area (III): tasks for which knowledge is not yet established and learning has not yet occurred. Turning to the areas newly programmed through the introduction of low-code tools, and those where advice from interactive Generative AI proved effective, they coincide with the regions depicted in Figure 5, and both correspond to the shaded areas indicated. In other words, these areas represent previously unarticulated domains of personalized knowledge and the knowledge gaps that individuals with limited experience or learning must fill to perform tasks. Given the diverse opportunities and experiences of learning among individuals, disparities can arise, leading to unfairness. However, by individually optimizing inputs and outputs through low-code tools, the expression of knowledge can be facilitated, knowledge gaps complemented, and fairness in output promoted when IT is adopted in the workplace.
Therefore, through the features of low-code tools, complex computer tasks such as programming, previously shunned by non-IT engineers in the workplace and by much of the public owing to an aversion to them, can now be performed.
Benefits of Low-Coding from the Perspective of the SDGs
This section examines the characteristics of low-coding-type software production, with particular attention paid to the specifications that are the focus of this study, based on the results found throughout this project. Some of the benefits of low-coding from the SDG perspective include the following. According to the 2023 Global Gender Gap Report, science, technology, engineering, and mathematics (STEM) occupations are important and well paid, and they are expected to grow in importance and scope in the future; however, women remain significantly under-represented in STEM occupations. In the absence of on-the-job training (OJT) for learning a programming language, low-code development can be carried out intuitively without language training, enabling individuals to utilize it without discrimination. Another challenge is that technology manuals are often written in the majority language, primarily English. Consequently, individuals who cannot read English are unable to learn from them. Furthermore, in regions where English is not widely spoken, self-study of technology such as programming may be impeded. In this instance, low-coding tools can be programmed through the sensory manipulation of a graphical user interface (GUI) without the necessity of reading the text.
Typically, individuals seeking to learn programming must first familiarize themselves with technical materials written primarily in English and subsequently receive training in programming through work experience. Nevertheless, it has proven challenging for women who do not meet the eligibility criteria to secure training opportunities through practical work (Ministry of Economy, Trade, and Industry). Conversely, in the case of low-coding, the obstacle of reading English documents and the limited opportunity to be trained through work experience can be circumvented. It can be reasonably assumed that low-coding will facilitate the success of women as IT engineers in countries that exhibit the following characteristics.
(1) The native language is not English, and there are few English-speaking users.
(2) Women's participation in society lags behind that of other countries.
As illustrated in Figure 6, the learning barrier for low-coding, on the right side of the spectrum, is far less pronounced than the barrier facing native speakers of minority languages on the far left, who must overcome the challenge of using a programming language documented in English.
Integration of Legacy IT Systems and Low-Code Development
Legacy systems refer to existing software applications and infrastructure that have been in use for a long time and may rely on outdated technology and architecture. This section discusses the benefits of integrating legacy systems with low-code development. First, existing investments in legacy systems can be leveraged by extending their functionality or integrating them with new low-code applications. This integration helps bridge the gap between old and new systems, allowing for a smooth transition and reducing the need for a complete system refresh. Second, low-code platforms often provide connectors and APIs that facilitate integration with external systems, including legacy systems. These connectors enable data exchange and communication between the low-code application and legacy systems, providing seamless interoperability. Furthermore, by utilizing low-code tools to develop new functionality or user interfaces, organizations can modernize legacy systems without extensive coding or redevelopment [29,30]. Interface design in particular is highly dependent on individual requirement specifications and preferences, and thus exhibits diversity. With low coding, a diverse group of people can be involved in producing the interface, enabling this part of the system to reflect users' preferences.
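As a concrete illustration of the connector pattern described above, the sketch below shows how a low-code application function might read data from a legacy ERP system through a thin REST wrapper. It is a hypothetical example: the endpoint, fields and parameters are invented for illustration and do not refer to any specific product.

```python
# Hedged sketch: a low-code platform "connector" is often just a thin REST
# wrapper around a legacy API. Endpoint and field names below are assumed.
import requests

LEGACY_ERP_API = "https://erp.example.internal/api/v1"  # hypothetical endpoint


def fetch_open_orders(department: str) -> list[dict]:
    """Pull open orders from the legacy system for display in a low-code app."""
    resp = requests.get(
        f"{LEGACY_ERP_API}/orders",
        params={"status": "open", "department": department},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["orders"]  # field name assumed for illustration
```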
It is also important to note that integrating legacy systems with low-code development can present certain challenges. Factors such as compatibility, security, and data integrity must be carefully considered during the integration process. To avoid mistakes due to a lack of understanding of the specifications, it is essential to clearly delineate the roles of the specialized IT team dealing with the legacy systems and the tool-creation team involved solely in low-code development.
In conclusion, the integration of legacy IT systems and low-code development offers an opportunity for organizations to modernize software interfaces, enhance functionality, and improve efficiency. However, this requires careful planning, compatibility considerations, and a thorough understanding of the capabilities of both the legacy system and the selected low-code platform.
Answer to RQ1
In response to the first research question, the value contributed by low-code tools to the workplace is evident in their ability to reveal previously unseen areas of knowledge and talent, in contrast to traditional ICT utilization in the workplace. First, the accessibility of low-code tools that do not require programming languages has expanded the pool of individuals capable of using ICT and the opportunities available to them. Consequently, the utilization of low-code tools has facilitated the manifestation of diverse knowledge and talents that were previously hidden (diversity), expanded opportunities for collaboration (inclusion), and improved the fairness of talent engagement (equity).
Answer to RQ2
The following countries are examples of countries with cultural backgrounds similar to those of Japanese companies: Japan (123rd), Saudi Arabia (130th), Türkiye (133rd), Bangladesh (139th), Egypt (140th), Morocco (141st), Pakistan (143rd), etc. Figures in parentheses indicate ranking positions in the "Economic Participation and Opportunity" section of the Global Gender Gap Report 2023.
These countries are distinguished by the fact that their native language is not English. They tend to lag behind other countries in terms of women's participation in society and in technical professions. Japan (123rd) is a case in point, ranking low in the Global Gender Gap Report. Regardless of ranking, low-code tools are also considered effective in other regions where the native language is not English and the percentage of women engaged in STEM is low. As discussed in Section 6.2, a tool with low-code characteristics can support the social activities of women in regions with the above characteristics.
Conclusions
In conclusion, low codability was found to contribute to DEI in the workplace. Work experience, including projects and training, is a direct cause of career disparity and inequity, as such opportunities are not available to all. However, technologies such as LCPs, which can be used without programming or other education, and interactive Generative AI, which provides information that compensates for a person's lack of experience, can be effective in realizing DEI in the workplace as they work to correct these gaps. However, when using such technology in the workplace, it is not sufficient simply to introduce the tools; management must fully consider the support environment that accompanies the start of use. To externalize the diverse knowledge of individuals, it is necessary to create a cooperative environment, such as a workshop, in which the actions of individuals are supported by those around them [46]. Conducting organizational workshops provides an opportunity for diverse individuals to draw on and express their knowledge and ensures the equity of the approach; the expressed knowledge can then be discussed and refined by the organization so that it is mutually recognized as an organizational asset. The use of new technology tools, such as low-code platforms, has been found to have elements that support the SDG goals of decent work and economic growth, as well as gender equality.
Limitations and Future Development
This study focuses on the results of a large Japanese company. The results are considered versatile enough to be deployed in other organizations; however, the validation of a model that encompasses further diversity should be the subject of future research.
The utilization of low-code tools has demonstrated that less-experienced users can also be engaged in the development of ICTs [29,30]. Conversely, there is a concern that management will need to regulate the involvement of inexperienced users in the creation of functions, including functionality that exceeds expectations (e.g., the automatic generation of advanced algorithms through the use of Generative AI) [37,38]. Consequently, from the perspective of management, it is anticipated that advanced optimization algorithms [47] can be employed not only to assist inexperienced users but also to assess the impact of these users' interventions on large-scale mission-critical systems and to mitigate the risk of disruption. Advanced optimization algorithms have been successfully applied in numerous domains, including online learning, scheduling, multi-objective optimization, transportation, medicine, and data classification. For instance, the self-adaptive fast fireworks algorithm offers a robust solution approach that adapts dynamically to the problem landscape [47]. Similarly, hyper-heuristics have demonstrated their effectiveness in complex optimization tasks [48]. Future research may explore the potential application of these advanced optimization techniques in the workplace, with a view to investigating their impact on the workplace environment, particularly for new users with limited experience.
Consequently, future research should also consider the potential application of the following techniques to create a supportive environment for less experienced users and managers in the workplace.
Adaptive and self-adaptive algorithms: It would be beneficial to explore how these algorithms can be tailored to specific user needs and problem contexts, with a particular focus on ease of use for beginners.
Hyper-heuristics: Develop higher-order heuristics that can autonomously generate and adapt low-level heuristics to enhance problem-solving capabilities without extensive domain knowledge.
Figure 1. Assignments and limitations of roles in the use of IT systems in the workplace.
Figure 2. Responses to the question: "Does your company currently have the requisite number of IT personnel to implement your business strategy?" Created based on [45].
Figure 3. Responses to the question: "What types of career support do you provide for IT personnel development? (Multiple selections allowed)" Created based on [45].
Figure 4. Areas in which the use of low code is beneficial.
Figure 5. Contents programmed by low-code-type tools.
Figure 6. Differences in barriers to language learning between low-code and programming languages.
Table 1. Number of members' knowledge items and applications formalized through workshops.
I: No experience using low-code platforms. II: Using applications as a user. III: Creating and maintaining applications. -: No data. | 8,572.2 | 2024-06-22T00:00:00.000 | [
"Business",
"Computer Science",
"Education",
"Engineering"
] |
On the stochastic approach to marine population dynamics
The purpose of this article is to deepen and structure the statistical basis of marine population dynamics. The starting point is the correspondence between the concepts of mortality, survival and lifetime distribution. This is the kernel of the possibilities that survival analysis techniques offer to marine population dynamics. A rigorous definition of survival and mortality based on their properties and their probabilistic versions is briefly presented. Some well-established models for lifetime distribution, which generalise the usual simple exponential distribution, might be used with their corresponding survivals and mortalities. A critical review of some published models is also made, including original models proposed in the way opened by Caddy (1991) and Sparholt (1990), which allow for a continuously decreasing natural mortality. Considering these elements, the pure death process dealt with in the literature is used as a theoretical basis for the evolution of a marine cohort. The elaboration of this process is based on Chiang's study of the probability distribution of the life table (Chiang, 1960) and provides specific structured models for stock evolution as a Markovian process. These models may introduce new ideas in the line of thinking developed by Gudmundsson (1987) and Sampson (1990) in order to model the evolution of a marine cohort by stochastic processes. The suitable approximation of these processes by means of Gaussian processes may allow theoretical and computational multivariate Gaussian analysis to be applied to the probabilistic treatment of fisheries issues. As a consequence, the necessary catch equation appears as a stochastic integral with respect to the mentioned Markovian process of the stock. The solution of this equation is available when the mortalities are proportional, hence the use of the proportional hazards model (Cox, 1959). The assumption of these proportional mortalities leads naturally to the construction of a survival model based on the Weibull distribution for the population lifetime. Finally, the Weibull survival model is elaborated in order to obtain some reference parameters that are useful for management purposes. This section does not deal exhaustively with the biological and fishery reference parameters covered in the specialised monographs (Caddy and Mahon, 1996; Cadima, 2000). We focused our work in two directions. Firstly, the principal tools generating the usual reference parameters were adapted to the proposed Weibull model. This is the case of biomass per recruit and yield per recruit, which generate some of the important reference points used for management purposes, such as F_MSY, F_0.1 and F_med. They also provide important and useful concepts such as virgin biomass and growth overexploitation. For this adaptation, it was necessary first to adapt the critical age as well as the overall natural, fishing and total mortality rates. Secondly, we analysed some indices broadly used in all population dynamics (including human populations) but only marginally dealt with in fishery science, such as life expectancy, mean residual lifetime and median survival time. These parameters are redundant with mortality rates in the classical exponential model, but are not so trivial in a more general framework.
INTRODUCTION
Fisheries science has been developed through deterministic models with a simple mathematical background that can be understood by fishery biologists. There exist methods that provide appropriate computer tools for diagnosing the state of exploitation of resources, mainly by comparing the current situation with the maximum sustainable yield. The works by Gulland (1974, 1977, 1983), Ricker (1975) and Beverton and Holt (1957) are crucial steps in this initial deterministic approach. In these basic studies the authors propose constant mortality rates that are independent of age, or step functions over unit intervals of age that are coherent with the exponential evolution of the size of a cohort. One of the most popular evaluation procedures, virtual population analysis (VPA), and all its derivations use this axiom in order to establish the assessment through successive retrospective evaluations of the stock. The assessment is a process of inference based on the mortality models and relating two demographic structures: that of the observed catches of the cohort (or pseudo-cohort) and that of the unknown population.
The analysis of the two components of the total mortality rate, natural and fishing mortality, plays an essential role in the elaboration of this idea. The difficulties of exploring the sea in a virgin state imply that, in most cases, natural mortality is not experimentally assessed but indirectly evaluated and assumed to be constant in the classical deterministic approach. Caddy (1991), Sparholt (1990) and Chen and Watanabe (1989) have proposed alternatives to this assumption of constant natural mortality and, in general, as in Abella et al. (1997), these models have been used to build a vector of natural mortalities (or natural mortality as a step function) in order to improve the assessment based on deterministic methods. In consequence, very little work has been done relating the fundamental models of marine population dynamics to the underlying lifetime distribution because, according to the previous description, it has been systematically assumed to be piecewise exponential, corresponding to a mortality rate formalised as a step function. Although the essential random character of the phenomena and of the available data has not been considered in the classical deterministic approach, this scenario has recently been changing, as Buckland et al. (2000) pointed out. Already in the works by Gudmundsson (1987) and Sampson (1990), the evolution of the stock of a cohort is formalised as a stochastic process.
In the present work, the stochastic nature of marine population dynamics is structured on the basis of the possible lifetime distributions and their corresponding mortality and survival models. The survival models form the basis for considering the evolution of the stock of the cohort and its cumulative catches as stochastic Markovian processes in the way established by Chiang (1960) for the general life table of any population. On one hand, this structure provides new stock assessment methods that improve the classical ones. On the other hand, it establishes a promising bridge between marine population dynamics and three important and powerful stochastic fields and associated tools: stochastic processes, survival analysis and multivariate analysis.
THE FUNDAMENTAL TRIANGLE
Let T be the lifetime (duration of life) of the population, considered as a random, real-valued, non-negative and absolutely continuous variable. Let f(t) and F(t) be its density and distribution functions, respectively. Then, the survival function is defined by

$$S(t) = \Pr(T \ge t) \tag{1.1}$$

or

$$S(t) = 1 - F(t) = \int_t^{\infty} f(u)\,du, \tag{1.2}$$

and the total mortality rate or "force of mortality" is expressed as

$$Z(t) = \frac{f(t)}{S(t)} = -\frac{d}{dt}\ln S(t), \tag{1.3}$$

which is the "hazard function" or "hazard rate" in classical survival analysis (Smith, 2002). Therefore, Z(t) is the instantaneous rate of death at age t, given that the individual survives up to age t. Or, in the conceptual framework systematised by Cadima (2000), the total mortality rate is the instantaneous relative rate of the survival (with a positive sign).
The integral of the mortality,

$$H(t) = \int_0^t Z(u)\,du, \tag{1.4}$$

is the "cumulative hazard" (Altman, 1999) or the "integrated hazard" (Dobson, 2002). Its relationship with the survival function, given by the integration of Equation (1.3), becomes

$$S(t) = e^{-H(t)} = \exp\left(-\int_0^t Z(u)\,du\right). \tag{1.5}$$

By the well-known properties of distribution and survival functions, S(0) = 1 and the survival tends to zero as the age increases. Then we have

$$\lim_{t \to \infty} \int_0^t Z(u)\,du = \infty. \tag{1.6}$$

Let t_Max denote the limit of the attainable ages of the population (it can be finite or infinite, known or unknown). Then

$$S(t) > 0 \quad \text{for } 0 \le t < t_{Max}, \tag{1.7}$$

$$S(t) = 0 \quad \text{for } t \ge t_{Max}. \tag{1.8}$$

Therefore, the effective range of the ages will be [0, t_Max). However, this is equivalent to considering the infinite interval [0, ∞) as the range of the ages and a null mortality rate and survival, S(t) = Z(t) = 0, for every age greater than t_Max.
The integrability conditions of the mortality rate imply (Munroe, 1953: 191) that

$$\lim_{h \to 0} \int_t^{t+h} Z(u)\,du = 0, \tag{1.9}$$

which, in turn, implies the continuity of the survival function S(t).
Also, the expression (1.5) of the survival as the exponential of an integral implies (Munroe, 1953: 268; Dieudonné, 1963: 159) that the survival must have a derivative almost everywhere (a.e.). This means that the derivative may not exist on a set of measure zero, such as a set of isolated points. In fact, this is the usual case considered in classical marine population dynamics, when the mortality rate is a step function and the survival is not differentiable at the discontinuities of the mortality rate.
We can now summarise the properties of mortality and survival.
The mortality rate must be non-negative, real-valued, integrable (a member of the space L¹ of Lebesgue-integrable functions) in every finite interval [0,t] bounded by an age t lower than the maximum attainable age t_Max, but non-integrable in any interval containing the [0, t_Max) interval, in particular the interval [0,∞):

$$Z(t) \ge 0, \qquad \int_0^t Z(u)\,du < \infty \ \ (0 \le t < t_{Max}), \qquad \int_0^{t_{Max}} Z(u)\,du = \infty. \tag{1.10}$$

The first condition is related to the decreasing (at least non-increasing) behaviour of the survival. The second condition guarantees the existence of the survival and its initial unit value. The third condition implies that, as the age increases, the limit of the survival is null. Any function verifying properties (1.10) is an admissible mortality rate whose survival function is given by Equation (1.5).
The survival function is real-valued, bounded by the [0,1] interval, non-negative and continuous, defined on all the positive ages, the [0,∞) interval. The space of functions with these properties is usually designated by C_{[0,1]}[0,∞) (Dieudonné, 1963: Chapter VII). In addition, it is non-increasing, with an initial value of 1 and tending to 0 as the age increases. Finally, it is differentiable almost everywhere. That is:

$$S \in C_{[0,1]}[0,\infty), \quad S \ \text{non-increasing}, \quad S(0) = 1, \quad \lim_{t \to \infty} S(t) = 0, \quad S \ \text{differentiable a.e.} \tag{1.11}$$

Any function verifying these properties is an admissible survival function.
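As an illustration of conditions (1.10) and (1.11), the following sketch (not part of the original text; all rates are hypothetical) builds the survival numerically from a step-function mortality rate of the kind used in VPA and checks the stated properties.

```python
# Minimal numerical sketch: given a step-function total mortality rate Z(t),
# recover S(t) = exp(-H(t)) via (1.4)-(1.5) and verify the properties (1.11).
import numpy as np

# Hypothetical yearly rates; the last value acts as the plus-group rate and is
# applied to all later ages, keeping Z non-integrable on [0, inf) as (1.10) asks.
z_steps = np.array([1.2, 0.8, 0.5, 0.4, 0.4])  # per-year rates for ages 0-1, 1-2, ...

def cumulative_hazard(t, z=z_steps):
    """H(t): integral of the step-function mortality from age 0 to age t."""
    full_years = int(np.floor(t))
    h = z[np.minimum(np.arange(full_years), len(z) - 1)].sum()
    h += z[min(full_years, len(z) - 1)] * (t - full_years)
    return h

def survival(t):
    return np.exp(-cumulative_hazard(t))

ages = np.linspace(0.0, 20.0, 201)
s = np.array([survival(t) for t in ages])
assert s[0] == 1.0                   # S(0) = 1
assert np.all(np.diff(s) <= 0.0)     # non-increasing
assert s[-1] < 1e-3                  # tends to 0 as age grows
```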
THE ANALYSIS OF THE MORTALITY RATE: INDEPENDENT AND COMPETITIVE RISKS
The mortality rate is generally expressed as a sum of natural (M) and fishing (F) components:

$$Z(t) = M(t) + F(t). \tag{2.1}$$

To say that the natural mortality rate, M(t), should be the total mortality rate in the absence of exploitation is equivalent to assuming that the two causes of death act independently.
In general, if there are k risks of death R_i (i = 1,2,…,k) which act simultaneously on each individual in a population, Z_i(t), S_i(t) are the corresponding mortality rates (or risk functions) and survivals of the lifetime which would apply if R_i were the only risk present, and Z(t), S(t) are the total mortality rate and survival, then the three following conditions are equivalent (Cox, 1959, and David, 1970): (i) the k risks act independently; (ii) $Z(t) = \sum_{i=1}^{k} Z_i(t)$; (iii) $S(t) = \prod_{i=1}^{k} S_i(t)$. This hypothesis of the independence between natural and fishing mortality and the corresponding analysis of the mortality rate (2.1) is usually assumed in fisheries science, although it may be far from reality, particularly for a multispecies fishery, and even more particularly for demersal resources. The fishery gears could represent a certain "competition" for the natural predators and a deflation factor for their populations, because they will often be captured too. Consequently, the dependence between fishing and natural mortality should be quite realistic. In this case, these two causes of death should be considered as dependent or "competitive" risks (Chiang, 1970, 1991; David, 1970; Gail, 1975).
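The equivalence between additive mortality rates and multiplicative survivals under independence can be checked numerically. The short sketch below is illustrative only, with assumed constant rates for M and F.

```python
# Small check (hypothetical constant rates): for independent risks, adding the
# mortality rates Z_i is equivalent to multiplying the survivals S_i.
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
m, f = 0.2, 0.35                        # assumed natural and fishing rates
s_m = np.exp(-m * t)                    # survival under natural mortality alone
s_f = np.exp(-f * t)                    # survival under fishing alone
s_total = np.exp(-(m + f) * t)          # survival under the summed rate Z = M + F
assert np.allclose(s_total, s_m * s_f)  # multiplicative survivals
```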
EXAMPLE MODELS
In the usual applications of virtual population analysis, the mortality rates are considered as step functions. The age intervals defining them are in most cases of one year, with the exception of the last. In these cases, the non-integrability of the mortality rate in the whole interval [0,∞) implies that the last age interval must be a plus-class and in this last interval the total mortality rate must be strictly positive. Otherwise, the proposed step function would be integrable in [0,∞) and could not be considered as an admissible total mortality rate. As a particular case, a constant positive function on the whole interval [0,∞), as the natural mortality rate is frequently considered, is an admissible mortality model that verifies conditions (1.10).
As an example we shall consider the "reciprocal function" proposed by Caddy (1991),

$$M(t) = a + \frac{b}{t}, \qquad a, b > 0, \quad t \in (0, \infty). \tag{3.1}$$

The integral of this function is

$$\int_{\varepsilon}^{t} M(u)\,du = a\,(t - \varepsilon) + b\,\ln\frac{t}{\varepsilon}.$$

The function M(t) is not integrable in any finite or infinite interval including the initial age of the cohort (the integral has an infinite limit when the age approaches zero). Conditions (1.10) are not satisfied, so it cannot be an admissible model for a natural mortality rate.
An adequate and admissible alternative to Equation (3.1) would be

$$M(t) = a + \frac{b}{t + c}, \qquad a, b, c > 0. \tag{3.2}$$

Now the integral is

$$\int_0^t M(u)\,du = a\,t + b\,\ln\frac{t + c}{c}.$$

Hence, all conditions (1.10) are verified and Equation (3.2) defines an admissible mortality rate.
Another admissible mortality decreasing with age, and therefore a possible model for the natural mortality rate, could be

$$Z(t) = \frac{a}{1 + bt}, \tag{3.3}$$

with a and b > 0, whose integral is

$$\int_0^t Z(u)\,du = \frac{a}{b}\,\ln(1 + bt),$$

which also verifies all conditions (1.10). Therefore, Equation (3.3) defines an admissible mortality rate whose corresponding survival function is

$$S(t) = (1 + bt)^{-a/b}.$$

Caddy (1991) refers to an unpublished suggestion made by R.J.H. Beverton in 1991 for a derivation of the reciprocal model leading to the following expression:

$$M(t) = a + \frac{b}{r + t}, \tag{3.4}$$

with a, b and r > 0, interpreting r as an initial considered age. This suggestion by Beverton is indeed an extension of the reciprocal model and is an admissible mortality model whose integral is

$$\int_0^t M(u)\,du = a\,t + b\,\ln\frac{r + t}{r},$$

and which verifies conditions (1.10).
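A brief numerical illustration (not from the original text; the parameter values a, b, r are arbitrary) contrasts the non-admissible reciprocal model (3.1), whose integral diverges near age zero, with the admissible forms (3.3) and (3.4).

```python
# Illustrative sketch: the integral of the reciprocal model (3.1) blows up as
# the lower limit approaches age 0, while (3.3) and (3.4) stay finite on every
# [0, t], as conditions (1.10) require. Parameter values are hypothetical.
import numpy as np
from scipy.integrate import quad

a, b, r = 0.2, 0.5, 0.25

caddy = lambda t: a + b / t            # (3.1): not integrable at age 0
beverton = lambda t: a + b / (r + t)   # (3.4): admissible, pole shifted to -r
hyperbolic = lambda t: a / (1 + b * t) # (3.3): admissible

for eps in (1e-2, 1e-4, 1e-6):
    val, _ = quad(caddy, eps, 1.0)
    print(f"integral of (3.1) on [{eps}, 1] = {val:.2f}")  # grows without bound

print(quad(beverton, 0.0, 1.0)[0])     # finite
print(quad(hyperbolic, 0.0, 1.0)[0])   # finite
```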
Another natural mortality model cited by Caddy (1991) and due to Sparholt (1990) is

$$M(t) = a\,e^{-bt}, \qquad a, b > 0, \tag{3.5}$$

which is also a decreasing function of age, converging to the asymptotic null value as the age increases.
The integral of this function is

$$\int_0^t M(u)\,du = \frac{a}{b}\left(1 - e^{-bt}\right),$$

which remains bounded on the whole range of ages, i.e. the [0,∞) interval, with the finite limit a/b. Therefore, the function (3.5) cannot be by itself an admissible mortality model but only a component of a more complex model.
THE ELABORATION OF THE BEVERTON AND HOLT AND SPARHOLT MODELS: PROPORTIONAL HAZARDS
Beverton and Holt (1957) proposed a very simple and yet powerful model for the natural mortality rate, as the basic hypothesis for the stock-recruitment relationship. The mortality rate, Z(t), should have two components, one constant and the other depending on the number of surviving individuals. We adapt this idea considering the proportion of survivors, i.e. the survival function S(t). The model has an initial value Z_0 = Z(0) and converges asymptotically to the adult mortality Z_a = lim_{t→∞} Z(t). The acceleration of this convergence is dominated by the shape parameter α. For a positive shape, α > 0,

$$Z(t) = Z_a + (Z_0 - Z_a)\,S(t)^{\alpha} \tag{4.1}$$

defines a mortality decreasing with age, as does the survival S(t).
When the shape parameter is zero, we have the constant mortality rate and the simple exponential distribution for the lifetime of the population.
Combining this expression with Expression (1.3) for the total mortality, Z(t) = −S′(t)/S(t), we obtain the differential equation −S′(t)/S(t) = Z_a + (Z_0 − Z_a) S(t)^α. (4.2)

If the adult mortality, Z_a, is null, the integration of (4.2) gives S(t) = (1 + αZ_0 t)^(−1/α), (4.3) so that Z(t) = Z_0/(1 + αZ_0 t), which is a particular case of the admissible model (3.3).
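A sketch checking the closed form (4.3) against a crude Euler integration of the differential equation (the parameter values are assumed for illustration):

```python
import numpy as np

def survival_bh(t, Z0, alpha):
    """Survival implied by the Beverton-Holt-type rate (4.1) with Z_a = 0:
    integrating -S'/S = Z0 * S**alpha gives S(t) = (1 + alpha*Z0*t)**(-1/alpha)."""
    return (1.0 + alpha * Z0 * t) ** (-1.0 / alpha)

# Sanity check: step the ODE forward and compare with the closed form.
Z0, alpha, dt = 0.8, 0.5, 1e-4
S, t = 1.0, 0.0
while t < 5.0:
    S -= dt * S * (Z0 * S**alpha)   # dS = -S * Z(t) dt with Z = Z0 * S**alpha
    t += dt
print(S, survival_bh(5.0, Z0, alpha))   # both ~ (1 + 0.4*5)**(-2) = 1/9
```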
Otherwise, if Z_a > 0, we can rescale the shape parameter by expressing the power of the survival as α/Z_a. The corresponding relation between mortality and survival then becomes Z(t) = Z_a + (Z_0 − Z_a) S(t)^(α/Z_a). (4.4) Integrating the corresponding differential equation in the same way yields the general mortality model (4.5). In general, the mortality model (4.5) is compatible with proportional mortality rates corresponding to different causes of death acting independently. Indeed, if a partial component of the mortality (e.g. the natural mortality, M(t)) presents a mortality rate of the same type (with a shape parameter β) and proportional to the total mortality, the proportionality forces β to coincide with the shape parameter of the total mortality. Thus, proportional mortalities imply a common shape parameter, and the total mortality may be split as the sum of proportional fishing and natural mortalities with the same shape parameter, acting independently. This is formalised in terms of mortality (4.11) or in terms of survival (4.12).

In the same way, an admissible extension of the Sparholt model for natural mortality is M(t) = M_a + (M_0 − M_a) e^(−αt), (4.13) where, as in the previous model, M_0 = M(0) is the initial mortality, M_a = lim_{t→∞} M(t) is the asymptotic or adult mortality, and α is a shape parameter that controls the more or less accelerated transition from the initial to the adult mortality, i.e. a measure of the premature character of the mortality. This model is also coherent with the possible proportionality of the mortalities due to the different causes of death if we consider the fishing and total mortalities as following the same mortality pattern, with the same shape parameter, and express the proportionality factor in terms of an assumed constant exploitation rate E: F(t) = E·Z(t) and M(t) = (1 − E)·Z(t). (4.16) This proportionality between fishing, natural and total mortalities agrees with the idea of considering the fishery as another element of the ecosystem, a powerful predator whose effects should be coherent with and related (proportional) to the other causes of mortality. This hypothesis, as will be seen, is essential in order to obtain a workable catch equation. It may be assumed in the present generalised models (4.4) and (4.13), as well as in the Weibull model proposed below. In all these models, the natural mortality rate is assumed to be the same in the absence or in the presence of the fishery, and both causes of mortality, natural and fishing, are considered to act independently, an independence hypothesis that is a common assumption in fishery science.
If an age of first capture, t_c, is considered, a simple and canonical model leads to the battery of lifetime distributions (4.17) corresponding to the unexploited phase, subject only to natural mortality, and the exploited phase, subject to total mortality. This decomposition of the range of ages (into two or more subintervals) is particularly useful when the conditional survival for the individuals who reach the age t_1, S(t | T ≥ t_1) = S(t)/S(t_1), (4.18) is available and the mortality rates are proportional in each interval, as is the case with the elaborated Beverton and Holt and Sparholt models.
THE LIFETIME, SURVIVAL AND EVOLUTION OF THE STOCK
We assume a marine cohort with an initial population size N_0 at its initial age t = 0, a mortality rate function Z(t) and a survival function S(t). Let T be the lifetime of the population and T_i, i = 1, 2, …, N_0, the lifetimes of the individual members of the cohort.

The size of the stock at age t, N(t), is the number of survivors at that age and may be expressed as N(t) = Σ_{i=1..N_0} I_[t,∞)(T_i), (5.1) where I_A is the indicator function of the set A, defined by I_A(x) = 1 if x ∈ A and I_A(x) = 0 otherwise. If we assume the independence of the lifetimes of the individuals of the cohort, N(t) has the binomial distribution with parameters N_0 for the sample size and Pr(T ≥ t) = S(t) for the probability of success. Its expectation, m(t), and variance, v(t), are m(t) = N_0 S(t) and v(t) = N_0 S(t)(1 − S(t)). (5.2) As a consequence of (1.3), we have m′(t) = −Z(t) m(t). For any two ages, t_1 ≤ t_2, the covariance between the cohort sizes is cov(N(t_1), N(t_2)) = N_0 S(t_2)(1 − S(t_1)). (5.4) The mean and variance of the number of deaths in any interval of ages [t_1, t_2], N(t_1) − N(t_2), are N_0(S(t_1) − S(t_2)) and N_0(S(t_1) − S(t_2))(1 − S(t_1) + S(t_2)), respectively. (5.5) An alternative definition of the process may be based on the fact that, for a cohort of initial population size N_0, the expected population decrease at age t, i.e. the expected number of deaths, is N_0(1 − S(t)). If we consider the number of deaths, N_0 − N(t), as a random variable with a Poisson distribution, its expectation, variance and parameter, v*(t) = N_0(1 − S(t)), coincide and reproduce the variance of the size of the cohort N(t): var(N(t)) = var(N_0 − N(t)).
If the numbers of deaths in disjoint age intervals are considered independent, the cumulative number of deaths of the cohort is a (non-stationary) Poisson process.

With this definition, the mean, variance and covariance functions of the size of the cohort at age t are m(t) = N_0 S(t), var(N(t)) = N_0(1 − S(t)) and, for t_1 ≤ t_2, cov(N(t_1), N(t_2)) = N_0(1 − S(t_1)). (5.6) In this Poisson process, the mean and the variance of the number of deaths, N(t_1) − N(t_2), in any interval of ages [t_1, t_2] both equal N_0(S(t_1) − S(t_2)). (5.7)
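The two constructions are easy to compare by simulation; the sketch below (constant mortality assumed, values hypothetical) shows the matching means and the larger variance of the Poisson version:

```python
import numpy as np

rng = np.random.default_rng(0)
N0, Z = 10_000, 0.7                 # initial cohort size, constant total mortality
S = lambda t: np.exp(-Z * t)        # survival under the constant rate

t = 2.0
binom = rng.binomial(N0, S(t), size=100_000)          # N(t) ~ Bin(N0, S(t))
pois = N0 - rng.poisson(N0 * (1 - S(t)), 100_000)     # deaths ~ Poisson

# Means agree (N0 * S); the Poisson version has the larger variance N0*(1-S).
print(binom.mean(), pois.mean())
print(binom.var(), N0 * S(t) * (1 - S(t)))
print(pois.var(), N0 * (1 - S(t)))
```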
In the case of the binomial process, by construction the conditional distribution of N(t) given {N(t_1), N(t_2), …, N(t_k)}, for t_1 < t_2 < … < t_k < t, is binomial with parameters N(t_k) and S(t)/S(t_k), which verifies the Markovian property: "the future depends on the past only through the present".

This result is coherent, since using the binomial probability function it is easily verified that the mean and variance of the conditional distributions are E[N(t) | N(t_i)] = N(t_i) S(t)/S(t_i) (6.1) and var[N(t) | N(t_i)] = N(t_i) (S(t)/S(t_i))(1 − S(t)/S(t_i)). (6.2) Notice that Equation (6.1) is the so-called "stock equation" or "fundamental equation of population dynamics". In the case of a constant mortality rate Z in the interval considered, we have E[N(t) | N(t_i)] = N(t_i) e^(−Z(t − t_i)). (6.3) A necessary and sufficient condition for the conditional distributions of the process to reproduce the fundamental equation of population dynamics as a conditional expectation is that, for every t_i < t_j, cov(N(t_i), N(t_j)) = var(N(t_i)) S(t_j)/S(t_i). (6.4) Indeed, with c the regression coefficient, E[N(t_j) | N(t_i)] = m(t_j) + c (N(t_i) − m(t_i)), where c = cov(N(t_i), N(t_j))/var(N(t_i)); taking Equation (5.2) into account, this coincides with (6.1) exactly when condition (6.4) holds, which is therefore necessary and sufficient.

The evolution of the size of the cohort, N(t), with mean, variance and covariance functions (5.2) and (5.4), is thus a binomial, Markovian process whose conditional distributions reproduce the fundamental equation of population dynamics as a conditional expectation.

If the number of deaths is considered as a non-stationary Poisson process, condition (6.4) is not satisfied and the stock equation holds only for the expected stock size of the cohort: E[N(t)] = N_0 S(t). (6.5) In this case, the conditional variance equals the variance of the increments, as a consequence of their independence; the independence of the increments of a Poisson process implies that here again the Markovian property is verified.
THE LOGNORMAL VERSION OF THE STOCK PROCESS
In a deterministic approach, the stock of a cohort of marine fishes under exploitation, N(t), subject to a total mortality rate Z(t) and starting from an initial population size N_0, evolves over time (or age) t according to the differential equation N′(t) = −Z(t)N(t). A way to model this evolution through a stochastic process is presented by Oksendal (1985). The mortality rate is often randomised by introducing a "noise" term, formally proportional to a white noise X(t) (normally and independently distributed). As an example, the simulation of stocks made by the ICES Methods Group (ICES, 1988) takes a constant natural mortality and randomises the fishing mortality rate, F, by a lognormal factor, F* = F e^(λX), where λ is a constant, X is the standard normal distribution, F is the "expected" fishing mortality rate and F* the achieved or "realised" one. The error, F* − F, is called the "process error" and is attributed to climate or other external factors that may alter a previously decided fishing effort. Taking the first-order approximation of the Taylor series development of the exponential function, F* ≈ F(1 + λX), this randomisation is equivalent to adding a noise proportional to a white noise. The randomisation of the mortality rate originates the stochastic differential equation dN(t) = −Z(t)N(t)dt + bN(t)dW(t), W(t) being the standard Brownian motion and b being considered constant in the simplest modelling. The solution of this stochastic equation is (Oksendal, 1985) N(t) = N_0 exp(−∫_0^t Z(s)ds − b²t/2 + bW(t)). (7.3) In this case it is the natural logarithm of the stock that has a normal distribution, expressed as a generalised Brownian motion. The independence of the increments of the Brownian motion guarantees the Markovian character of the lognormal stock process.

This lognormal version is a more complex process than the previous binomial and Poisson ones, as it incorporates the noise parameter b, and it has the expectations and covariances m(t) = N_0 S(t) and, for t_1 ≤ t_2, cov(N(t_1), N(t_2)) = m(t_1) m(t_2)(e^(b²t_1) − 1). (7.4) This covariance function verifies condition (6.4); hence this model also provides the stock equation (6.1) as a conditional expectation.
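A minimal simulation sketch of this lognormal process under constant Z and b (values hypothetical), checking that the mean follows the stock equation:

```python
import numpy as np

rng = np.random.default_rng(1)

def lognormal_stock(N0, Z, b, t, n_paths=50_000):
    """One-step draw from the lognormal stock process (7.3): with constant Z
    and b, ln N(t) is Gaussian with drift ln N0 - (Z + b**2/2) * t."""
    drift = np.log(N0) - Z * t - 0.5 * b**2 * t   # Ito correction included
    return np.exp(drift + b * np.sqrt(t) * rng.standard_normal(n_paths))

paths = lognormal_stock(N0=10_000, Z=0.7, b=0.2, t=2.0)
print(paths.mean(), 10_000 * np.exp(-0.7 * 2.0))   # mean obeys the stock equation
```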
THE CATCH PROCESS
Let us assume the usual decomposition (2.1) of the total mortality as the sum of two independent components: the natural (M) and fishing (F) mortalities.

The simplest expression of the catch process in a cohort, C(t) or C[0, t], as the random number of individuals up to age t caught by the fishery, is through the stochastic integral C[0, t] = −∫_0^t (F(s)/Z(s)) dN(s), (8.1) or, for any interval of ages [t_1, t_2], C[t_1, t_2] = −∫_{t_1}^{t_2} (F(s)/Z(s)) dN(s). (8.2) Considering the binomial process defined before, the conditional expectations of the increments in an age interval [h, t] are bounded in terms of the function (1 − S(t))/S(t), which is continuous and monotone increasing. Hence the binomial process verifies the conditions established by McShane (1969, 1974), and the increments are nigh-martingales in the sense defined by Young (1970, 1974). These are sufficient conditions for the existence of the stochastic integral (8.1) or (8.2). In fact, the stochastic integral exists under much more general conditions than those considered in (8.1) or (8.2), with an integrand that is a real-valued, bounded function, as the exploitation rate is. The existence extends to all stochastic processes whose trajectories are integrable; hence it allows the mortality rates to be randomised, as is shown when the lognormal version of the stock process is dealt with. The stochastic integral (8.1) always exists, and its solution is the catch equation.

Moreover, by the properties of the integral, for any real-valued function f(t) the expectation of the stochastic integral equals the corresponding integral with respect to the expected process. This means that any concept (for instance, the yield per recruit) derived in the deterministic case from the catch in numbers becomes the corresponding expected value (the same concept in mean) when the catch in numbers is considered as a stochastic process.
The catch equation has an immediate solution if the exploitation rate F(t)/Z(t) is constant in the considered interval of ages, which is the same as saying that the mortality rates are proportional within this interval. In this case, the solution of the integral is C[t_1, t_2] = (F/Z)(N(t_1) − N(t_2)). (8.3) Then E[C[t_1, t_2] | N(t_1)] = (F/Z) N(t_1)(1 − S(t_2)/S(t_1)). (8.4) If the stock process verifies the covariance condition (6.4), as the binomial and the lognormal processes do, the catch equation becomes a conditional expectation (8.5). In the simplest case, when the mortality rates are constant in the interval of ages considered, the catch equation leads to the well-known expression E[C[t_1, t_2] | N(t_1)] = (F/Z) N(t_1)(1 − e^(−Z(t_2 − t_1))), (8.6) which is again a conditional expectation. This expression is usually simplified in its deterministic version as C[t_1, t_2] = (F/Z) N(t_1)(1 − e^(−Z(t_2 − t_1))). (8.7)

An analogous development corresponds to the Poisson process considered above. The function 1 − S(t) is continuous and monotone increasing; hence this non-stationary Poisson process verifies the above-mentioned existence conditions for the catch process. In this case, the catch equation appears as an unconditional expectation (8.7).
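For the constant-rates case (8.6)-(8.7), a one-function sketch (the parameter values are illustrative):

```python
import math

def baranov_catch(N1, F, M, dt):
    """Expected catch in numbers over an age interval of length dt with
    constant rates, Equation (8.6): C = (F/Z) * N1 * (1 - exp(-Z*dt))."""
    Z = F + M
    return (F / Z) * N1 * (1.0 - math.exp(-Z * dt))

# E.g. 1000 survivors entering a one-year interval with F = 0.5, M = 0.2.
print(baranov_catch(1000, F=0.5, M=0.2, dt=1.0))   # ~ 359.6 individuals
```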
GAUSSIAN APPROXIMATIONS
A normal approximation to the binomial distribution is justified by the large expected values involved when marine populations are dealt with, leading to a Gaussian process with mean and covariance functions given by (5.2) and (5.4), respectively.

The normal approximation to the lognormal process has the mean and covariance functions given by (7.4). In both cases the mean and covariance structures may be integrated into the common form cov(N(t_i), N(t_j)) = s(t_i) s(t_j) B(min(t_i, t_j)), (9.1) where (9.2) s(t) = N_0 S(t) with B(t) = e^(b²t) − 1 for the lognormal process, and s(t) = S(t) with B(t) = N_0(1 − S(t))/S(t) for the binomial process. The function B(t) is, in both cases, monotone increasing with B(0) = 0.

These processes verify the Kolmogorov consistency conditions for stochastic processes. These conditions refer to permutations and subsets of the finite-dimensional distributions, which, in the multivariate normal case, remain multivariate normal. It is therefore only necessary to show the coherence of the covariance structure (9.1), or, in mathematical terms, that the covariance matrix corresponding to any finite-dimensional distribution is positive semi-definite. For any set of ordered ages t_1 < t_2 < … < t_k, the corresponding multivariate normal distribution has mean vector (m(t_1), …, m(t_k)) and covariance matrix Σ with Σ_ij = s(t_i) s(t_j) B(t_i) for i ≤ j.
The determinant of the covariance matrix is det Σ = Π_{i=1..k} s(t_i)² · B(t_1) · Π_{i=2..k} (B(t_i) − B(t_{i−1})). (9.3) The proof of this statement is deduced by induction on k. It is obvious for k = 1. Now, if we subtract the first row of Σ multiplied by s(t_i)/s(t_1) from each successive i-th row, we reduce the determinant to one of order k − 1,

where the reduced matrix Σ* has the same structure as Σ with B(t) replaced by B(t) − B(t_1), and the statement is proved by applying the proposed expression to the determinant of Σ*, which is analogous to Σ. As 0 = B(0) < B(t_1) < … < B(t_k), the determinant remains positive for any diagonal block of the proposed covariance matrix Σ, which is therefore a positive definite matrix.

In practical applications it is decisively important to manipulate the inverse, Σ^(−1), of the covariance matrix. This inverse may be expressed as a tridiagonal matrix (9.4) whose entries depend only on the s(t_i) and on the consecutive differences B(t_i) − B(t_{i−1}), as is verified by direct multiplication. These expressions for the determinant and the inverse of the covariance matrix enable and simplify applications such as likelihood estimation.
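A numerical check of this covariance structure with hypothetical s(t_i) and B(t_i) values, confirming the determinant formula and the tridiagonal inverse:

```python
import numpy as np

# Markov-type covariance cov(i, j) = s_i * s_j * B(min(t_i, t_j)), as in (9.1),
# here with hypothetical s and monotone-increasing B on five ordered ages.
s = np.array([0.9, 0.7, 0.5, 0.35, 0.2])
B = np.array([0.3, 0.8, 1.4, 2.1, 3.0])
Sigma = np.array([[s[i] * s[j] * B[min(i, j)] for j in range(5)]
                  for i in range(5)])

inv = np.linalg.inv(Sigma)
print(np.round(inv, 6))                  # only the tri-diagonal band survives
print(np.linalg.det(Sigma),              # matches prod(s^2)*B_1*prod(diffs)
      np.prod(s**2) * B[0] * np.prod(np.diff(B)))
```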
Defined by these mean and covariance structures, for any set of ordered ages t_1 < t_2 < … < t_k < t it is straightforward to obtain the expectations and variances of the conditional distributions N(t) | {N(t_1), N(t_2), …, N(t_k)} (9.5, 9.6); these depend on the past only through N(t_k), which implies that the well-defined models are Markovian processes.

The Gaussian approximation to the alternative non-stationary Poisson process leads to a Gaussian process with independent ("orthogonal") increments (an "additive process"), whose distributions are normal.

These processes are known as generalised Brownian motion, that is, Brownian motion with non-linear mean and variance functions, a particular case of Gaussian martingales (Doob, 1953; Yeh, 1973). They are a natural extension of the work of Einstein (1905) and Wiener (1923) and constitute the most general family of continuous additive processes.

The additive character of the process, i.e. the above-mentioned independence of the increments, is the weak point of the generalised Brownian motion, but it is a practical advantage when the parameters of the model are estimated by maximum likelihood.

In the present case, the increments of the process are decrements of the population of the considered cohort, i.e. the numbers of dead individuals.

The process may be given greater versatility, allowing a more flexible relation between means and variances, by introducing the concept of an "aggregation function" in the sense given to this term by Pielou (1969).

This aggregation function, a(t), may be defined as the ratio between the variances and the means of the increments of the process. (9.8) Starting from the "mean generating function", m(t) = N_0 S(t), a model for the aggregation function produces the corresponding "variance generating function" and the corresponding version of the generalised Brownian motion, defined by finite-dimensional normal distributions whose mean vectors and covariance matrices are built from these two generating functions.

GENERAL PROPORTIONAL HAZARD MODELS

David (1970), elaborating sufficient conditions for proportionality between mortality rates, proposes three types of two-parameter lifetime distributions coherent with this property. Among these distributions, only the Weibull model generalises the usual exponential model broadly applied in marine population dynamics. Furthermore, the Weibull model can represent a mortality rate with high values at the initial ages that decrease continuously with increasing age. The Weibull distribution for the lifetime is thus a "natural" theoretical scenario. In fact, Smith (2002) shows that it represents a very general model coherent with the proportional hazards (between total and natural mortality) implied by a constant exploitation rate.
THE WEIBULL DISTRIBUTION AND THE EXTENSION OF CLASSICAL HYPOTHESES
The Weibull survival (with its corresponding lifetime distribution) is given by S(t) = Pr(T ≥ t) = e^(−Z t^α), (11.1) where Z and α are positive real values.
The usual exponential distribution corresponds to the particular case in which the exponent α is one.
The exponent α determines the "shape" of the survival (mainly at the early ages), of the density function (which has a finite maximum for α > 1) and of the mortality rate (decreasing with age if α < 1, constant in the exponential case α = 1, and increasing with age if α > 1). The coefficient Z expresses the relative intensity of the mortality and, consequently, the relative diminution of the survival.

It is appropriate to identify Z as a mortality coefficient, which is not to be confused with a constant mortality rate. The mortality rate is the continuous function Z(t) = αZ t^(α−1), (11.2) and the density, f(t), is f(t) = αZ t^(α−1) e^(−Z t^α). (11.3) If an age of first capture, t_c, is considered, the simple model (4.17) leads to a battery of Weibull distributions corresponding to the unexploited phase, subject only to natural mortality (with a mortality coefficient M), and the exploited phase, subject to total mortality (with a larger mortality coefficient, Z = M + F): S(t) = e^(−M t^α) for t < t_c, and S(t) = e^(−M t_c^α − Z(t^α − t_c^α)) for t ≥ t_c. (11.4) If the stock process verifies condition (6.4), the corresponding stock equation (6.1) is E[N(t) | N(t_c)] = N(t_c) e^(−Z(t^α − t_c^α)) for t ≥ t_c. (11.5) Under this model, when t ≥ t_c, the exploitation rate F(t)/Z(t) = F/Z (11.6) is constant. Therefore, the catch equation (8.2) yields E[C[t_c, t] | N(t_c)] = (F/Z) N(t_c)(1 − e^(−Z(t^α − t_c^α))). (11.7) Thus, Equations (11.5) and (11.7) generalise the classical equations of marine population dynamics, which correspond to the deterministic version of the particular case of exponent α = 1.
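A sketch of the composite survival (11.4) and the stock equation (11.5) under assumed parameter values:

```python
import numpy as np

def survival_weibull_battery(t, M, F, alpha, tc):
    """Composite Weibull survival (11.4): natural mortality only before the
    age of first capture tc, total mortality Z = M + F afterwards."""
    t = np.asarray(t, dtype=float)
    Z = M + F
    before = np.exp(-M * t**alpha)
    after = np.exp(-M * tc**alpha - Z * (t**alpha - tc**alpha))
    return np.where(t < tc, before, after)

# Stock equation (11.5): E[N(t) | N(tc)] = N(tc) * exp(-Z*(t**alpha - tc**alpha)).
M, F, alpha, tc = 0.2, 0.5, 0.8, 1.0
N_tc, t = 1000.0, 3.0
print(N_tc * np.exp(-(M + F) * (t**alpha - tc**alpha)))
print(survival_weibull_battery([0.5, 1.0, 3.0], M, F, alpha, tc))
```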
THE SIMULATION OF SURVIVAL TIMES (DURATION OF LIFE)
Simulation procedures are a useful tool in both theoretical and applied research. They are dealt with here first in the general framework and then in the particular extended model proposed.

Let T be a random survival time (lifetime) with any admissible survival function S(t), distribution function F(t) = 1 − S(t), total mortality rate Z(t) and cumulative hazard H(t).
The classical simulation procedure is to obtain a sample of random numbers from a variable X that follows a uniform distribution on the unit interval [0, 1] and to simulate the required lifetimes by applying the inverse of the distribution function, t_i = F^(−1)(x_i). As an alternative, simple algebraic operations show that the transformed lifetime H(T) follows a simple exponential distribution with a constant mortality rate equal to one.

Indeed, for the transformed lifetime, using Equation (1.5), S(t) = e^(−H(t)), and the decreasing character of the survival, we have Pr(H(T) ≥ t) = Pr(T ≥ H^(−1)(t)) = S(H^(−1)(t)) = e^(−t), where H^(−1) is the inverse of the cumulative hazard considered. And e^(−t) is the survival function of a standard exponential distribution with unit, constant mortality rate.

Let Exp(1) denote this exponential distribution, whose simulation is a standard tool in every statistical package, and let {x_i, i = 1…n} be a simulated sample of size n from this distribution. The lemma implies that the set {H^(−1)(x_i), i = 1…n}, where H^(−1) is the inverse of the given cumulative hazard function, H, is a sequence of simulated lifetimes following the assumed model whose survival function is S(t).
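A sketch of this inverse-cumulative-hazard procedure, applied to the composite Weibull battery (11.4) with assumed illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_weibull_battery(n, M, F, alpha, tc):
    """Draw x ~ Exp(1) and return T = H^{-1}(x), where H is the composite
    cumulative hazard of (11.4), inverted piecewise at the age tc."""
    x = rng.exponential(1.0, size=n)
    H_tc = M * tc**alpha                      # cumulative hazard at first capture
    Z = M + F
    t_unexploited = (x / M) ** (1.0 / alpha)              # valid while x < H_tc
    t_exploited = (tc**alpha + (x - H_tc) / Z) ** (1.0 / alpha)
    return np.where(x < H_tc, t_unexploited, t_exploited)

sample = simulate_weibull_battery(100_000, M=0.2, F=0.5, alpha=0.8, tc=1.0)
print(np.mean(sample < 1.0), 1 - np.exp(-0.2))   # both ~ Pr(T < tc)
```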
As an example, the simulation of the extended proposed model given by the series of Weibull distributions (11.4) follows this same scheme, inverting the composite cumulative hazard piecewise.

REFERENCE PARAMETERS

This section does not deal exhaustively with the biological and fishery reference parameters treated in the specialised monographs (Caddy and Mahon, 1996; Cadima, 2000), but it analyses the most common reference points for management purposes.

Firstly, the principal tools generating the usual reference parameters are adapted to the proposed Weibull model. This is the case for the biomass per recruit and the yield per recruit, which generate some of the important reference points used for management purposes, such as F_MSY, F_0.1 and F_med, and which also provide useful concepts such as virgin biomass and growth overfishing. For this adaptation it was necessary first to adapt the critical age. Secondly, we analyse some indices broadly used in general population dynamics (including that of human populations) but only marginally dealt with in fishery science, such as life expectancy, mean residual lifetime and median survival time; these parameters are redundant with the mortality rates in the classical exponential model, but are not so trivial in a more general framework. The overall natural, fishing and total mortality rates are also generalised. Finally, as a consequence of this, the relationship is established between an assumed level of the natural mortality rate and the corresponding natural mortality coefficient, in order to handle further extensions of the model considered in another paper (Ferrandis and Hernández, 2007).
The critical age
This reference parameter is defined as the age of maximum expected biomass in the evolution of the cohort. This biomass is B(t) = Σ w_i(t) I_[t,∞)(T_i), (13.1.1) where B(t) is the biomass of the cohort at age t and w_i(t) is the weight of each individual at age t. Assuming independence between the weight at any age and the survival, the expected biomass is E[B(t)] = N_0 S(t) w(t), with w(t) = W_∞(1 − e^(−K(t−t_0)))^b, where N_0 is the initial size of the cohort, W_∞ is the asymptotic weight, K and t_0 are the usual von Bertalanffy growth parameters, and b is the exponent of the weight-length relationship. The critical age is then the age that maximises the function S(t)w(t). Considering the definition of the total mortality rate, Z(t) = −S′(t)/S(t), the derivative of the expected biomass may be expressed as B′(t) = N_0 S(t) w(t) [bK e^(−K(t−t_0))/(1 − e^(−K(t−t_0))) − Z(t)], (13.1.2) and equating it to zero yields Z(t) = bK e^(−K(t−t_0))/(1 − e^(−K(t−t_0))). (13.1.3) This is the equation whose solution(s) give(s) the critical age(s) for any model Z(t) of the mortality rate. The practical computations are included in Appendix 1.
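A sketch of solving Equation (13.1.3) numerically for the Weibull rate (11.2); all parameter values are hypothetical, and scipy's brentq stands in for the regula falsi method of Appendix 1:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical von Bertalanffy and Weibull-mortality parameters.
K, t0, b = 0.3, -0.5, 3.0           # growth rate, theoretical age, weight exponent
Z, alpha = 0.6, 0.8                 # Weibull mortality coefficient and exponent

mortality = lambda t: alpha * Z * t ** (alpha - 1.0)   # Z(t), Equation (11.2)
growth = lambda t: b * K * np.exp(-K * (t - t0)) / (1.0 - np.exp(-K * (t - t0)))

# The critical age solves Z(t) = growth(t), Equation (13.1.3).
t_crit = brentq(lambda t: mortality(t) - growth(t), 0.1, 50.0)
print(t_crit)
```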
Life expectancy, mean residual lifetime and median survival time
Life expectancy and median survival time are parameters describing the survival time that can be useful for comparing different cohorts in a study period, or the state of the resources in different areas.

If T denotes the survival time whose survival function is S(t), the life expectancy, or expectation of the survival time (Smith, 2002), is given by E(T) = ∫_0^∞ S(t) dt. For the composite Weibull model, the change of variable x = Z t^α turns this integral into the cumulative distribution function of a standard gamma distribution, leading to the life expectancy (13.2.2); the usual statistical packages (SPSS, Statistica, S-Plus) provide the means to calculate this cumulative distribution.

In the classical approach with unit exponent, α = 1, the gamma factor is one, the gamma distribution becomes the exponential one and the life expectancy (13.2.3) in the simple cases of an unexploited (t_c = ∞) or a fully exploited resource (t_c = 0) becomes, respectively, E(T) = 1/M and E(T) = 1/Z. In the author's opinion, this trivial redundancy of the life expectancy with the mortality rates in the classical approach is the reason why the life expectancy, considered a fundamental reference parameter in general population dynamics (including that of human populations), is systematically neglected in fishery science.

An interesting parameter related to the life expectancy is the mean residual lifetime at age t, r(t), a measure of the average remaining life: r(t) = E(T − t | T ≥ t) = (1/S(t)) ∫_t^∞ S(u) du, (13.2.4) where the integral is calculated in the same way as for the life expectancy (Dobson, 2002). Of particular interest is its value at the age of first capture, t_c.

Notice that in the particular, classical case of unit exponent, α = 1, the mean residual lifetime is r(t_c) = 1/Z.

The median survival time, t_50, is given by the solution of the equation S(t_50) = 0.5 and is often considered (Dobson, 2002) a better description of the "average" survival time than the life expectancy, because of the skewness of the survival distribution.
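For the simple (single-phase) Weibull survival these descriptors have closed forms; a sketch, with the classical α = 1 case as a check:

```python
from math import gamma, log

def weibull_life_expectancy(Z, alpha):
    """E(T) for S(t) = exp(-Z*t**alpha): the substitution x = Z*t**alpha gives
    E(T) = Gamma(1 + 1/alpha) / Z**(1/alpha)."""
    return gamma(1.0 + 1.0 / alpha) / Z ** (1.0 / alpha)

def weibull_percentile(Z, alpha, p):
    """Age t_p with F(t_p) = p; the median is the p = 0.5 case."""
    return (-log(1.0 - p) / Z) ** (1.0 / alpha)

print(weibull_life_expectancy(0.7, 1.0))   # classical case: 1/Z ~ 1.43
print(weibull_percentile(0.7, 1.0, 0.5))   # median ln(2)/Z ~ 0.99
```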
The median is a particular case of a percentile, t_p, of level p of the lifetime distribution, i.e. the case p = 50%; in general, S(t_p) = 1 − p. In the current case of the series of Weibull distributions given by (11.4), the percentiles are obtained by inverting the composite survival piecewise.

The mortality rates
The natural mortality rate
The constant natural mortality rates established in the literature and estimated by different methods can be adapted to any admissible mortality model, and particularly to the proposed Weibull model. A constant mortality rate M_0 corresponds to the assumption of an unexploited population with an exponentially distributed lifetime and a life expectancy E(T) = 1/M_0.

For the Weibull composite model, setting t_c = ∞ in (13.2.2) gives E(T) = Γ(1 + 1/α) M^(−1/α), (13.3.1) and equating this to 1/M_0 yields the coefficient M_1 = (M_0 Γ(1 + 1/α))^α. (13.3.2) Considering, as assumed in the proposed survival model (11.4), the exponent of the Weibull distribution as a population parameter and the life expectancy as an overall parameter of the lifetime distribution, the Weibull distribution with parameters M_1 and α has the same life expectancy as that attributed to the virgin population, affected only by natural mortality.

The overall or expected mortality rate, E(M(T)), corresponding to a given natural mortality coefficient M for an unexploited population is E(M(T)) = ∫_0^∞ M(t) f(t) dt, which, with the change of variable x = M t^α, leads to E(M(T)) = α M^(1/α) Γ(2 − 1/α). (13.3.3) Now, taking the constant assumed natural mortality rate M_0 as an estimate of this expected (or mean) mortality rate, we have an alternative expression for the coefficient of natural mortality: M_2 = (M_0/(α Γ(2 − 1/α)))^α. (13.3.4) The Weibull distribution with parameters M_2 and α has the same overall expected mortality rate as the one attributed to the virgin population, which is affected only by the assumed, constant natural mortality.

Comparing the two estimates of the coefficient of natural mortality, their ratio is M_1/M_2 = (α Γ(1 + 1/α) Γ(2 − 1/α))^α, which for α > 0.5 is ≥ 1 by the properties of the gamma function. Thus, the estimate obtained by standardising the life expectancy (Expression (13.3.2)) will generally be the greater one.

For the particular case of the usual exponential distribution, α = 1, both gamma factors equal one, so the two estimates are the same and equal to the assumed M_0.
The overall fishing mortality rate
In the proposed model, the mortality rates vary with age. The overall (expected or mean) fishing mortality rate on the interval [t_c, ∞), from the effective age of first capture t_c, is the conditional expectation E(F(T) | T ≥ t_c).

For the proposed Weibull model, the same change of variable, x = Z t^α, reduces this expectation to gamma-function terms, which gives (13.3.5).
The overall total mortality rate
The overall total mortality rate is given analogously by E(Z(T) | T ≥ t_c). Again, for the proposed Weibull model, a development similar to the one above gives (13.3.6).
The overall exploitation rate
The overall exploitation rate is given by the quotient of the overall fishing and total mortality rates (13.3.7). In the case of the proposed Weibull model, it is obtained directly from Expressions (13.3.5) and (13.3.6).
The yield per recruit
The yield per recruit, Y, is a reference parameter frequently used for diagnosing the actual effort in relation to the maximum sustainable yield (MSY) and its precautionary limits. It is defined as the relative (per recruit) biomass catch from a given cohort. Its expression is given by the expected catch in biomass over the exploited phase or, in terms of the conditional survival from the age t_c, by (13.4.2). In order to calculate the integral with a given precision, ε, the infinite interval [t_c, ∞) is split into two parts: a finite interval [t_c, n], bounded by an integer n, where the approximation to the integral is obtained, and an infinite tail [n, ∞), where the value of the integral is negligible. The concrete development is presented in Appendix 2.
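A sketch of this computation for the composite Weibull model, with hypothetical growth and mortality parameters and a finite upper bound n standing in for infinity, as in Appendix 2:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical parameters: Weibull mortalities and von Bertalanffy growth in weight.
M, F, alpha, tc = 0.2, 0.5, 0.8, 1.0
Winf, K, t0, b = 1.5, 0.3, -0.5, 3.0
Z = M + F

weight = lambda t: Winf * (1.0 - np.exp(-K * (t - t0))) ** b
S = lambda t: np.exp(-M * tc**alpha) * np.exp(-Z * (t**alpha - tc**alpha))

# Y/R = integral over the exploited phase of F * w(t) * S(t); the bound n = 200
# leaves a tail below the tolerance for these parameter values.
n = 200.0
yield_per_recruit, _ = quad(lambda t: F * weight(t) * S(t), tc, n, limit=200)
print(yield_per_recruit)
```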
The biomass per recruit
As in the previous paragraph, let t_r designate the age of recruitment, and consider the age interval since recruitment, [t_r, ∞), split into the two intervals [t_r, t_c) and [t_c, ∞).

The expected biomass per recruit is then given by (13.5.1), the sum of the corresponding integrals over the two intervals. If the fishing mortality increases, the survival function decreases, so the biomass per recruit is obviously decreasing with respect to the fishing mortality, as in the classical approach of Beverton and Holt. Both integrals are particular cases of the computational tools described in Appendix 2 for the yield per recruit.
CONCLUSIONS
The relation between survival, mortality and lifetime distribution is the basis on which a stochastic approach to marine population dynamics is built. The properties of survival and mortality have been rigorously established, and examples of published and unpublished, adequate (admissible) and inadequate models have been given. Specifically, alternative, generalised models based on the Beverton and Holt and Sparholt hypotheses on early natural mortality have been presented. They are admissible mortality models, continuously decreasing with age, in accordance with the ideas elaborated by Caddy (1991).

The decomposition of the total mortality rate into two independent components, natural and fishing, is essential in the present elaboration, as it is in all marine population dynamics. In the future, however, it may give way to dependent causes of death, or "competitive risks", a concept that may be useful in an ecosystem-approach framework.

The definition of survival provides an initial binomial stochastic process for the evolution of the stock of a cohort of marine resources, and the population growth model leads to a lognormal, more complex version of the process. Both are Markovian processes providing the fundamental equation of population dynamics as a conditional expectation, and both are treated here in a unified approach. These processes are conveniently handled through their Gaussian approximations, which generate a promising bridge between marine population dynamics and multivariate statistical analysis. The identification of the determinant and the inverse of the covariance matrix converts these Gaussian processes into a "ready to use" tool.

The catch process is also properly defined. It provides the catch equation as a conditional expectation that is easily formulated when the natural, fishing and total mortality rates are proportional. This condition is related to the Weibull distribution for the lifetime of the population.

Thus, an extension of classical marine population dynamics is proposed through survival and lifetime models given by a battery of two Weibull distributions, with the age of first capture as the cut-off between the unexploited and exploited phases of the cohort.
Under this extended model, the reference parameters related to the diagnosis of the fishery have been established in a way that generalises the known expressions derived from the classical exponential model.
The critical age, life expectancy, median survival time, mean residual lifetime, natural, fishing and total mortality rates, biomass per recruit and yield per recruit have been generalised and computer solutions have been implemented.
The idea underlying the Weibull survival model presented here is an alternative to classical population dynamics: instead of considering many age intervals with constant mortalities, i.e. the mortalities as step functions, we can establish few intervals but with more flexible mortality models, i.e. the mortalities as continuous functions.

According to the author's experience with Mediterranean demersal resources, this Weibull survival model coheres strongly with the estimated survival functions of some target species, and the hypotheses assumed by the model do not significantly contradict the behaviour of trawl-survey and commercial data. This does not hold for a general application of the proposed Weibull model. In a few words, the Weibull model may be reasonable and adequate for some fishery resources, though neither perfect nor generally applicable. The elaboration of the Beverton and Holt and Sparholt mortality models presented here shows one way (neither the only one, nor the best) to deal with more complex situations.
APPENDIX 1. THE CRITICAL AGE
In the case of a constant mortality rate Z(t) = Z, Equation (13.1.3) gives the unique and well-known solution for the critical age, t_cr = t_0 + (1/K) ln(1 + bK/Z).

In the more flexible case of an exponent α ≠ 1 corresponding to the proposed Weibull model, the properties of the survival function formalised in the first section exclude any exponent α ≤ 0, because the corresponding cumulative hazard would not satisfy conditions (1.10).

First, the alternative α > 1 will be considered. In this case the mortality rate, Z(t), is a monotone increasing function, so the second factor of Expression (13.1.2) is a monotone decreasing function that approaches −∞ and has a unique discontinuity at the age of first capture t_c.

Therefore, the derivative of the expected biomass of the cohort (Expression (13.1.2)) is positive at the origin (because t_0 < 0), has a unique discontinuity at t_c, and reaches and retains negative values from a certain age. Hence, in this case, a unique critical age exists. Let us now consider the case in which the exponent is lower than one, 0 < α < 1, leading to a piecewise continuous, decreasing mortality rate.

We can rewrite Expression (13.1.2) in order to identify its asymptotic sign. Consequently, if B′(t) reaches any positive value, the range of ages [0, ∞) will contain an interval [t_cr, ∞) in which B′(t) takes only negative values, changing to positive below the cut-off age t_cr. This age therefore corresponds to a relative maximum of the biomass of the cohort, and hence to the critical age.

Since the mortality rate diverges as the age approaches zero, an initial interval [0, t_n) will also exist in which B′(t) presents negative values, changing to positive beyond the cut-off age t_n, which corresponds to a relative minimum of the biomass of the cohort.

A sufficient and easily checked condition for the existence of positive values of the derivative B′(t) is that B′(t_c) > 0. This inequality implies condition (A.1.2), and in this case the critical age will be in the exploited phase, t > t_c, and will be the solution of (A.1.3).

Solutions of (A.1.1) or (A.1.3) must be obtained by numerical methods such as the false-position (regula falsi) method (Hildebrand, 1974; Curtis and Patrick, 1994). The author has developed suitable software for solving these equations with a prefixed precision.

APPENDIX 2. THE YIELD PER RECRUIT

The integral defining the yield per recruit is split, as described in the main text, into a bounded term A over [t_c, n] and a tail term B over [n, ∞). The weight curve reaches its inflexion point at the age t_0 + ln(b)/K. Beyond this age the integrand is dominated by the survival and, owing to the integrability of the survival in the proposed composite Weibull model, for any given ε > 0 it is possible to choose an integer n such that B ≤ ε/2. Once n is identified, the approximation of the first term, A, may be obtained by the trapezoidal or Simpson methods.

The author has built specific programs for calculating the yield per recruit with a precision, ε, prefixed by the user.
Summarising, for 0 < α < 1, B′(t) has a unique discontinuity at the age of first capture t_c and reaches and retains negative values from a certain age onwards; hence, in this case too, a unique critical age exists. Otherwise, the critical age will be in the unexploited phase, t < t_c, or else the solution adopted for the critical age will be the age of first capture t_c. | 11,853.8 | 2007-03-30T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Differentiating novel coronavirus pneumonia from general pneumonia based on machine learning
Background Chest CT screening is a crucial supplementary means of diagnosing novel coronavirus pneumonia (COVID-19), with high sensitivity and wide availability. Machine learning is adept at discovering intricate structures in CT images and has achieved expert-level performance in medical image analysis. Methods An integrated machine learning framework on chest CT images for differentiating COVID-19 from general pneumonia (GP) was developed and validated. Seventy-three confirmed COVID-19 cases were consecutively enrolled together with 27 confirmed general pneumonia patients from Ruian People’s Hospital, from January 2020 to March 2020. To accurately classify COVID-19, region of interest (ROI) delineation was implemented based on ground-glass opacities (GGOs) before feature extraction. Then, 34 statistical texture features of COVID-19 and GP ROI images were extracted, including 13 gray-level co-occurrence matrix (GLCM) features, 15 gray-level-gradient co-occurrence matrix (GLGCM) features and 6 histogram features. Because high-dimensional features impair classification performance, the ReliefF algorithm was leveraged to select features. The relevance of each feature was the average of the weights calculated by ReliefF over n runs, and features with relevance larger than an empirically set threshold T were selected. After feature selection, the optimal feature set, along with 4 other selected feature combinations for comparison, was applied to the ensemble of bagged trees (EBT) and to four other machine learning classifiers, namely support vector machine (SVM), logistic regression (LR), decision tree (DT) and K-nearest neighbor with equally weighted Minkowski distance (KNN), using tenfold cross-validation. Results and conclusions The classification accuracy (ACC), sensitivity (SEN) and specificity (SPE) of our proposed method were 94.16%, 88.62% and 100.00%, respectively. The area under the receiver operating characteristic curve (AUC) was 0.99. The experimental results indicate that the EBT algorithm with statistical textural features based on GGOs for differentiating COVID-19 from general pneumonia achieved high transferability, efficiency, specificity, sensitivity and impressive accuracy, which is beneficial for helping inexperienced doctors diagnose COVID-19 more accurately and essential for controlling the spread of the disease.
Background
Since the first COVID-19 case was discovered in 2019, more than 9.47 million cases of novel coronavirus pneumonia have been diagnosed worldwide, with 484,249 deaths, according to the World Health Organization Coronavirus disease (COVID-19) situation report 158. Currently, the detection of COVID-19 relies mainly on nucleic acid testing. However, many infected patients with obvious typical symptoms tested negative on multiple nucleic acid tests before finally being confirmed positive [1]. The high false-negative rate results in delayed treatment and even aggravates the spread of the pandemic. On February 5, the National Health Commission of the People's Republic of China launched the "Novel Coronavirus Pneumonia Diagnosis and Treatment Program (Trial Version 5)", which updated the diagnostic criteria for novel coronavirus pneumonia by adding CT imaging examinations as one of the main bases for the clinical diagnosis of COVID-19. CT screening is widely available, easy to operate and sensitive to COVID-19, which is critical for both early diagnosis and pandemic control.
Nevertheless, influenza virus pneumonia and other types of pneumonia may occur in the same season. In some respects, especially according to clinical features, it is difficult to differentiate COVID-19 from general pneumonia. For instance, the main manifestations of COVID-19 in the early stage are fever, fatigue, dry cough and expiratory dyspnea, while patients with general pneumonia have similar symptoms [2]. COVID-19 pneumonia places a huge burden on the health care system because of its high morbidity and mortality. Therefore, early diagnosis and isolation of GP patients and COVID-19 patients can better prevent the spread of the pandemic and optimize the allocation of medical resources. However, beyond the overlapping symptoms and detection abnormalities, the CT manifestations of GP and COVID-19 are also similar, causing instability and uncertainty in distinguishing them [3,4].
Typical CT manifestations of COVID-19 patients consist of the pleural indentation sign, unilateral or bilateral pulmonary ground-glass opacities, opacities with rounded morphology, and patchy consolidative pulmonary opacities, predominantly in the lower lungs [5-8]. GP infections have similar CT manifestations at presentation. However, COVID-19 presents more bilateral, extensive GGO, while GP shows more unilateral GGO or consolidation [9]. Furthermore, the other CT findings of GP and COVID-19 are difficult to observe, and the lung areas contain large, uninformative extraneous regions. To avoid interference from irrelevant information and to identify COVID-19 from GP more accurately and stably, the GGO was cropped as the ROI and features were extracted from the ROIs. Figure 1 shows samples of COVID-19 and GP CT images from the collected dataset.
Lin et al. proposed a deep learning model, COVNet, based on visual features from volumetric CT images to distinguish COVID-19 from community-acquired pneumonia (CAP) [10]. Their study included 4536 three-dimensional CT images (COVID-19: 30%; community-acquired pneumonia: 40%; non-pneumonia: 30%). U-net was applied to crop the lung region as the ROI, and both 2D and 3D features were extracted by COVNet from the ROIs; the features were then combined and input to the proposed scheme for prediction. The sensitivity and specificity for detecting COVID-19 were 90% and 96%, and for CAP they were 87% and 92%; the AUCs were 0.96 and 0.95. However, the features learned by deep learning models are embedded in a network of millions of weights, so the method lacks interpretability and transparency.
Charmaine et al. evaluated a ResNet model with a location-attention mechanism for screening COVID-19 [11]. Two ResNet models were used in their study: three-dimensional features were extracted by ResNet-18 and fed into a ResNet-23 with a location-attention mechanism in the fully connected layer for classification, while a ResNet without the location-attention mechanism was also applied for comparison with the proposed method. The results show that the proposed method achieved the better performance, with an overall accuracy of 86.7%.
Asif et al. proposed the CoroNet model, based on the Xception architecture, using X-ray images to differentiate COVID-19 from healthy subjects, bacterial pneumonia and viral pneumonia [12]. Notably, Xception is a transfer learning model, pretrained on the ImageNet dataset and then retrained on the collected X-ray dataset. In the proposed architecture, the classical convolution layers were replaced by convolutions with residual connections. The overall accuracy was 89.6%, while the average accuracy for detecting COVID-19 was 96.6%. To test its stability and robustness, CoroNet was also evaluated on the dataset prepared by Ozturk et al. [13], with an accuracy of 90%.
Ozturk et al. developed the DarkNet model, based on the you-only-look-once (YOLO) system, to detect and classify COVID-19 [13]. Their model achieved an accuracy of 98.08% for classifying COVID-19 versus non-infection and 87.02% for distinguishing COVID-19 from no-findings and GP. Nevertheless, the methods proposed by Asif et al. and Ozturk et al. were based on X-ray images. X-ray screening is not sensitive to GGOs, which are among the most significant manifestations in the early stages of COVID-19; this can cause a high error rate and ineffective containment of the pandemic.
Kang et al. developed a machine learning method with structured latent multi-view representation learning to diagnose COVID-19 and community-acquired pneumonia [14]. In their work, V-Net was leveraged to extract lung lesions; then radiomic and handcrafted features, 189 dimensions in total, were extracted from the CT images. The proposed model yielded the best accuracy, 95.50%, with a sensitivity and specificity of 96.6% and 93.2%. Compared with the other methods in the study, the accuracy was improved by 6.1-19.9% and the sensitivity and specificity by 4.61-21.22%. To our knowledge, most recent studies on detecting COVID-19 are based on deep learning. However, deep learning models require large-scale training data, while COVID-19 samples were initially in short supply. Transfer learning might be a promising method for small amounts of data, but negative transfer may occur, because the initial dataset and the target domain may not be related to each other, and the standards for judging when training data are sufficiently related are not clear.
Machine learning plays an irreplaceable role in artificial intelligence, with outstanding results in medical image classification. We developed a machine learning method using an ensemble of bagged trees based on statistical texture features of CT images, focusing particularly on differentiating COVID-19 from GP. The method demonstrates high efficiency in the identification of COVID-19 and GP, helping to reduce misdiagnosis and control transmission of the pandemic.
Material
From January 2020 to March 2020, 73 COVID-19 cases confirmed by positive nucleic acid tests and 27 general pneumonia cases were enrolled in this study (ages ranged from 14 to 72 years). Both the COVID-19 and the GP patients who had undergone chest CT scans were retrospectively reviewed by two senior radiologists. Of the COVID-19 cases, 12 patients without obvious characteristics on CT images were excluded (negative rate 16.4%, 12/73). Finally, 61 confirmed COVID-19 cases and 27 general pneumonia cases were enrolled in this study.

The images were independently assessed by two radiologists; if they disagreed with each other, a senior radiologist was invited to review the pulmonary CT images and make the final determination. All the CT images were generated by a Siemens Sensation 16-slice spiral CT scanner (Siemens, Erlangen, Germany). The image format was Digital Imaging and Communications in Medicine (DICOM). The scan parameters were: tube voltage 120 kV; automatic tube current regulation; 1-2 mm cross-sectional thickness; 1-2 mm cross-sectional distance; scan pitch 1.3; and 16 × 0.625 mm collimation.
Results
The proposed diagnosis method is an ensemble of bagged trees based on feature combination 5 (T = 0.11), comprising ROI delineation, feature extraction, feature selection and classification, which are described in detail in the "Method" section. In this section we describe the results of feature selection, the effectiveness of the optimal feature combination 5 compared with the original features, and the comparison of the EBT algorithm with four other classification methodologies. The experimental results demonstrate that the proposed COVID-19 diagnosis method outperformed the other methods in terms of accuracy, sensitivity, specificity and AUC. Table 1 and Fig. 2 show the relevance of each feature and the weight curves of each feature based on the ReliefF algorithm. In order to select the optimal feature combination, the proposed threshold T was set to 0.11. To justify this choice, combination 1 (T = 0.11*), combination 2 (T = 0.12), combination 3 (T = 0) and combination 4 (T = 0.10) were compared with combination 5 (T = 0.11). The features included for the different values of T are shown in Table 2 (the feature names corresponding to the feature numbers are presented in Table 4 in the "Method" section). Table 3 shows the diagnostic performance of the 5 classifiers based on the 5 feature combinations. To present the differences in accuracy, sensitivity and specificity of the different methods intuitively, they are visualized as line charts in Figs. 3, 4 and 5, respectively. The receiver operating characteristic (ROC) curves of the EBT algorithm and the 4 other classifiers using the optimal feature combination 5 are presented in Fig. 6. Figure 3 shows that all five classifiers achieved higher accuracy with feature combination 5 than with the other feature combinations. The measurements on the X-axis, ranging from 1 to 5, represent the sequence numbers of the feature combinations in Table 2. Figures 4 and 5 substantiate that the sensitivity and specificity of the optimal feature set outperformed those of combinations 1-4. Note that combination 2 contains all 34 features, which indicates that no feature selection was applied; this illustrates that feature selection is essential.
Comparison of EBT and four other classification methodologies
As shown in Table 3, the best result was obtained by the EBT algorithm with feature combination 5, leading to accuracy, sensitivity and specificity of 94.16%, 88.62% and 100.00%, respectively. The three line charts reveal that the EBT algorithm achieved clearly better performance than the other classification methodologies regardless of the feature combination used. Figure 6 shows the ROC curves of the five models based on feature combination 5; the AUCs of DT, LR, SVM, KNN and EBT are 0.91, 0.88, 0.94, 0.88 and 0.99, respectively. The EBT provided the best AUC. Therefore, the promising results validate that the proposed method can accurately and robustly differentiate COVID-19 from GP.
Discussion
The proposed diagnosis method was evaluated in terms of accuracy, sensitivity and specificity. As shown in Eqs. 2-4 in the "Method" section, accuracy measures the ability of the diagnosis system to correctly detect COVID-19 and GP, sensitivity gives the proportion of correctly classified COVID-19 cases, and specificity illustrates how good the method is at identifying GP cases. As shown in Table 3, the highest accuracy, sensitivity and specificity, achieved by the EBT algorithm with feature combination 5, were 94.16%, 88.62% and 100.00%, respectively. This shows that the proposed method performed better at detecting GP than COVID-19. To alleviate class imbalance, we applied data augmentation to the GP images; however, data augmentation techniques cannot increase the diversity of GP features. Although the proposed method achieved a specificity of 100.00%, which suggests that no GP cases were erroneously classified, there is no denying a risk of over-fitting caused by the shortage of GP images. CT of COVID-19 infections presents consolidation, GGO, pulmonary fibrosis, interstitial thickening and pleural effusion in both lungs [15-17], while CT of GP infections presents multifocal nodular opacity with a surrounding halo, diffuse patchy GGO, interlobular septal thickening, multiple ill-defined nodules and consolidation in both lungs [18]. Thus, most recent studies have proposed heterogeneous methods based on the whole lung region. For example, Wang et al. developed COVID-19Net for diagnosing COVID-19 with automatic lung segmentation of CT images using DenseNet121-FPN [19]. Notably, DenseNet121-FPN is also a transfer learning framework, pretrained on the ImageNet dataset. The sensitivity and specificity of the method were 78.9% and 89.93% in the training set; in the two validation sets, the sensitivities were 80.39% and 79.35% and the specificities were 76.61% and 81.16%. As mentioned in the background section, the deep learning method proposed by Lin et al. implemented U-net for lung segmentation [10] and achieved a sensitivity and specificity of 90% and 96%. Zhang et al. used an AI system with a two-stage segmentation framework to segment lung lesions and then diagnose COVID-19 [20]: the first stage was manual annotation, and the second stage was a DeepLabv3-based backbone for lung lesion segmentation, which produced smoother segmentations [21]. The accuracy, sensitivity and specificity of their model in the testing set were 0.760, 0.811 and 0.615, respectively. Compared with these studies, we performed GGO segmentation instead of whole-lung or lesion segmentation. Our proposed machine learning method in combination with GGO segmentation accomplished an accuracy of 94.16% for distinguishing COVID-19 from GP, with a high sensitivity and specificity of 88.62% and 100.00%, respectively. We therefore achieved better performance in diagnosing COVID-19 based on GGOs alone. The results empirically validate that COVID-19 and GP can be robustly classified based on GGOs.
Despite the remarkable performance of the proposed method, limitations still exist in our study. First of all, the ROIs were manually delineated, which is rather time-consuming, especially when doctors are racing against time to save lives. Moreover, GGOs were the only features segmented from the CT images of COVID-19 and GP, and spending more time on ROI segmentation is apparently not worthwhile, while the whole lung region contains irrelevant or even pernicious information for diagnosis. Hence, further study should address automatically and precisely detecting and segmenting ROIs without manual help. Finally, our model did not determine which specific type of general pneumonia was present, such as viral or bacterial, mainly owing to insufficient data. More data will be collected, and the prognosis of GP will be considered in our future study.
Conclusions
This study explored an ensemble of bagged trees algorithm with statistical textural features for differentiating novel coronavirus pneumonia from general pneumonia. The classification accuracy, sensitivity and specificity of our proposed method were 94.16%, 88.62% and 100.00%, respectively. It is noteworthy that, compared with four other machine learning classifiers, EBT achieved consistently better performance. The results show that classifiers with feature selection outperformed classifiers without feature selection by 1-5% in accuracy, 2-10% in sensitivity and 0-4% in specificity. More importantly, classifiers with feature selection require less time. Therefore, feature selection is beneficial for the diagnosis of COVID-19 in terms of all evaluation indexes.

Furthermore, GGOs were shown to play a significant role in distinguishing COVID-19 from GP, which provides reference opinions for radiologists to better diagnose COVID-19, and extensive experiments will be applied to more features of COVID-19, individually and jointly, in our future work. In conclusion, the experimental results show that, compared with other state-of-the-art works, the proposed method achieved pronouncedly superior performance with a small number of CT images.
Overview of the proposed diagnosis framework
Machine learning algorithms integrated with statistical texture features are leveraged to differentiate COVID-19 from GP. Figure 7 illustrates the block diagram of the proposed diagnosis framework. After data collection, and in order to extract the features of COVID-19 and GP more accurately, manual delineation of the ROIs was performed based on GGOs.

The details of ROI delineation are presented in the "Delineation of ROIs" section. In the next step, 34 statistical texture features, comprising 13 GLCM features, 15 GLGCM features and 6 histogram features, were extracted from the ROIs. After that, the ReliefF algorithm was used to select features, saving time and avoiding over-fitting. As a result, five feature combinations were retained, with combination 5 (18 features) adopted as the proposed feature group; details are described in the feature selection and results parts. In the last stage of the diagnosis process, the selected features with their labels were input to five classifiers, with the ensemble of bagged trees as the proposed classification algorithm. The five classifiers, each with the five feature combinations, were evaluated in terms of accuracy, specificity, sensitivity and AUC. The framework consists of 4 major steps: delineation of ROIs, feature extraction, feature selection, and classification. Each step is described in detail in the following parts of this paper.
Delineation of ROIs
To improve the accuracy of the diagnosis method, precise segmentation of the ROIs from irrelevant parts was essential for feature extraction. Thus, the GGO region, the main CT manifestation, was taken as the ROI. The software MRIcro 1.4 was used to extract rectangular ROIs from the COVID-19 and GP images. ROIs were delineated in the CT images based on the aforementioned GGOs. The main steps of ROI delineation are as follows: (1) a rectangular region as large as possible, the ROI, was delineated within the GGOs and the whole image with the delineation was exported as a PNG image; (2) the PNG images were binarized to obtain the ROI boundary, and the rectangular region was filled to obtain the ROI template; (3) the ROI templates were used to extract the ROIs from the original DICOM images; (4) the gray levels of the ROI images were converted to 256 levels and the images were resized to 32 × 32 pixels. Consequently, 615 COVID-19 and 146 GP ROIs were cropped. The number of COVID-19 images was roughly four times that of the GP images, and imbalanced data cannot reflect the true distribution of the two categories, which could affect classification performance. Thus, we rotated the GP images by 90°, 180° and 270°, ultimately augmenting the number of GP images to 584. In total, 1199 ROI images were used for feature extraction.
Feature extraction
In this stage, a total of 34 statistical texture features were extracted from the ROI images of COVID-19 and GP, as shown in Table 4: 13 GLCM [22] features, 15 GLGCM [23] features and 6 histogram [24] features. GLCM and GLGCM are the predominant second-order statistical texture analysis methods for characterizing the features of an image and have been widely applied in medical image processing [25,26]. GLCM captures the statistical and spatial relationships of the pixels in an image: it is created by counting how often pairs of pixels with specific values in a specified spatial relationship occur in the image. Thirteen statistical texture features are then extracted from the gray-level co-occurrence matrix. In contrast with GLCM, GLGCM captures not only gray-scale features but also the second-order statistics of gray-level gradients; gradients carry the information of image edges, which provides significant features of an image. In addition, the histological characteristics of COVID-19 and GP are well reflected in the gray distribution, and the gray histogram is an intuitive statistical method [27]; it is a one-dimensional function of the gray level and belongs to the first-order statistical methods. Because each feature is computed differently, the raw feature values vary over widely different ranges. Therefore, to facilitate computation, all data are normalized to [0, 1] dimension by dimension according to Equation (1):

X* = (X − MIN) / (MAX − MIN), (1)

where X is the original value in the Nth dimension, MIN and MAX are the minimum and maximum values in the Nth dimension, and X* is the normalized feature.
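As an illustration of this stage, the sketch below extracts a few GLCM statistics with scikit-image and applies the min-max normalization of Equation (1). The distance/angle settings and the particular feature subset are assumptions made for demonstration, since the paper does not list its GLCM parameters in this excerpt.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi: np.ndarray) -> np.ndarray:
    """A few GLCM statistics for a 256-level, 32x32 ROI image.

    Distance 1 and four angles are assumed settings, not taken from the paper.
    """
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

def minmax_normalize(features: np.ndarray) -> np.ndarray:
    """Equation (1): scale each feature dimension (column) to [0, 1]."""
    lo, hi = features.min(axis=0), features.max(axis=0)
    return (features - lo) / (hi - lo)

# Illustrative usage on random stand-in ROIs.
rois = [np.random.randint(0, 256, (32, 32), dtype=np.uint8) for _ in range(5)]
feature_matrix = np.vstack([glcm_features(r) for r in rois])
normalized = minmax_normalize(feature_matrix)  # shape (5, 4), values in [0, 1]
```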
Feature selection
Feature selection plays a critical role in enhancing the performance of medical imaging classification. High-dimensional features cause over-fitting, lower accuracy and comprehension difficulty, and are time-consuming to process. Thus, feature selection is used to select, from the original feature set, a subset of features that optimizes the evaluation criteria. The ReliefF algorithm is a typical filter method for feature selection [28]. It computes a weight for each feature based on its ability to discriminate feature-value differences between nearest-neighbor instance pairs. The weight of a given feature decreases if a difference in its value is observed in a nearby instance of the same class (called a nearest hit); conversely, the weight increases if a difference in its value is observed in a nearby instance of a different class (called a nearest miss). ReliefF searches for the k nearest hits and misses and averages their contributions to the weight of each feature [29]. Furthermore, m random instances are selected and the procedure is repeated n times to improve reliability. After n iterations, the accumulated weight of each feature is divided by n; this quantity is called the relevance. Features with relevance greater than a threshold T are selected, so different thresholds yield different feature combinations. Generally, T should be greater than 0, since a negative weight indicates a negative impact on classification.
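A compact NumPy sketch of the weighting scheme just described is given below. It is a simplified single-neighbor Relief variant written for illustration (k = 1, every instance visited once, both classes assumed present), not the exact ReliefF configuration used in the study.

```python
import numpy as np

def relief_weights(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Simplified Relief: one nearest hit and one nearest miss per instance.

    X is assumed min-max normalized so per-feature differences lie in [0, 1],
    and y is assumed to contain at least two classes.
    """
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for i in range(n_samples):
        dists = np.abs(X - X[i]).sum(axis=1)  # L1 distance to every instance
        dists[i] = np.inf                      # exclude the instance itself
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(diff, dists, np.inf))
        # Weight drops for differences on hits, grows for differences on misses.
        w += np.abs(X[miss] - X[i]) - np.abs(X[hit] - X[i])
    return w / n_samples

# Select features whose relevance exceeds a threshold T > 0 (illustrative value).
T = 0.0
# selected = np.where(relief_weights(X, y) > T)[0]
```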
Feature classification
The ensemble of bagged trees, a supervised classification scheme, is adopted as the proposed classification algorithm [30]. It uses bootstrap aggregating to enhance stability and increase accuracy. The training data are partitioned into several subsets by random sampling with replacement, and each subset is used to train an independent base model. The predictions of the different models are combined by majority voting. As a result, the ensemble reduces the influence of noisy data and is less susceptible to over-fitting, which improves robustness. For comparison with the EBT algorithm, SVM, LR, DT and KNN were implemented with the same texture feature extraction and feature selection methods. To better identify differences in the results, a tenfold cross-validation strategy is adopted: the original data set is divided into 10 equal subsamples; 9 subsamples are used as the training set while the remaining one serves as the validation set; the process is repeated 10 times until each subsample has been used as the validation set once; and the average of the 10 results is taken as the final estimate.
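This pipeline maps directly onto scikit-learn; the sketch below bags decision trees and scores them with tenfold cross-validation. The number of trees and the stand-in data are assumptions, as the paper does not state these settings in this excerpt.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Stand-in data: rows are ROIs, columns are the selected texture features.
rng = np.random.default_rng(0)
X = rng.random((1199, 18))       # 18 features of "combination 5"
y = rng.integers(0, 2, 1199)     # 1 = COVID-19, 0 = GP (illustrative labels)

# Ensemble of bagged trees; 100 estimators is an assumed setting.
ebt = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

scores = cross_val_score(ebt, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```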
Statistics
The classification metrics used were AUC, sensitivity, specificity and accuracy. Let TP (true positive) denote the number of positive samples correctly classified; TN (true negative) the number of negative samples correctly classified; FP (false positive) the number of negative samples misclassified as positive; and FN (false negative) the number of positive samples misclassified as negative [31]. Accuracy, sensitivity and specificity are then computed as

Accuracy = (TP + TN) / (TP + TN + FP + FN), Sensitivity = TP / (TP + FN), Specificity = TN / (TN + FP).
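These counts and ratios follow directly from a confusion matrix; a minimal sketch with scikit-learn, assuming label 1 marks the positive (COVID-19) class, is:

```python
from sklearn.metrics import confusion_matrix

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity from binary labels (1 = positive)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# Illustrative check on toy predictions.
print(diagnostic_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))  # (0.8, 0.667, 1.0)
```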
"Computer Science",
"Medicine"
] |
Accidents of Electrical and Mechanical Works for Public Sector Projects in Hong Kong
A study on electrical and mechanical (E&M) works-related accidents for public sector projects provided the opportunity to gain a better understanding of the causes of accidents by analyzing the circumstances of all E&M works accidents. The research aims to examine accidents of E&M works that happened in public sector projects. A total of 421 E&M works-related accidents in the "Public Works Programme Construction Site Safety and Environmental Statistics" (PCSES) system were extracted for analysis. Two-step cluster analysis was conducted to classify the E&M accidents into different groups. The results identified three E&M accident groups: (1) electricians with over 15 years of experience were prone to 'fall of person from height'; (2) electricians with zero to five years of experience were prone to 'slip, trip or fall on same level'; (3) air-conditioning workers with zero to five years of experience were prone to multiple types of accidents. Practical measures were recommended for each specific cluster group to avoid recurrence of similar accidents. The accident analysis would be vital for industry practitioners to enhance the safety performance of public sector projects. This study contributes to filling the knowledge gap of how and why E&M accidents occur and to promulgating preventive measures for E&M accidents, which have been under-researched.
Introduction
Accident analysis plays an important role in safety management. It provides invaluable lessons on the causes of accidents and helps to formulate effective preventive strategies. For instance, Haslam et al. [1] studied the contributing factors in construction accidents. Hon and Chan [2] explored the fall fatalities of repair, maintenance, minor alteration, and addition (RMAA) works. Wu et al. [3] investigated struck-by falling object accident cases and developed an integrated information management model for proactively preventing struck-by falling object accidents on construction sites. Tam and Fung [4] explored the key factors of tower crane safety and the related statutory and non-statutory guidelines for using tower cranes. Lingard et al. [5] undertook a detailed investigation into the causes of fatalities involving plant such as excavators and trucks. However, accidents in electrical and mechanical (E&M) works, as a standalone category of activity, have not been fully investigated.
E&M work is an essential work category in both new construction works and RMAA works. Understanding the underlying relationship between E&M works-related injuries and factors leading to the injuries are important to enhance E&M work safety. E&M works involve a wide range of building services trades such as air-conditioning, fire services, plumbing and drainage, electrical wiring and lift installation and maintenance work. E&M safety is a vital issue in promoting construction safety. and size or type of company [16][17][18][19]. Studies at sub-project level such as excavation, electrical and mechanical, piping and steel work has been scant. As individual tasks are the basic components of any construction project, ensuring the safety of the different individual tasks is an essential prerequisite for safety on site. Electrocution is one of the most common types of fatal accident in most construction industries. Therefore, some studies on electrical injuries and electrical safety for workers have been conducted. Janicak [20] examined the occupational fatalities due to electrocutions in the U.S. construction industry between 2003 and 2006. The study shows that the proportion of fatalities due to contact with electric current is significantly higher for younger workers in 16 to 19 years old age group. Chi et al. [8] conducted an in-depth accident analysis of 255 electrical fatalities in Taiwan and categorized the incidents in terms of the cause, performing task, individual factor, company size and source of injury for developing effective electrical protection strategies. Inexperienced workers and those working for smaller companies were found to be at the greatest risk of electrocutions. Zhao et al. [7] explored the 486 control measures of electrical hazards to construction workers through quantitative and qualitative analysis of 134 electrocution case reports from 1989 to 2012. The research findings revealed that behavioral controls of workers remain prevalent in control of electrical hazards. While there have been individual studies on electrocution fatalities [8,13,20,21], safety research on air-conditioning, lift installation, fire services, and plumbing works has been lacking. Due to the complexities and characteristics of E&M works, safety risks vary a lot accordingly. Because the safety risks of E&M works are often different from and higher than building works, there should be an independent study to explore the "how and why" of E&M works-related accidents.
Characteristics and Risk of Electrical and Mechanical Works
Electrical and mechanical (E&M) installations are key activities in all construction works and indispensable to any building types. E&M works includes installation and maintenance of air-conditioning system, electrical wiring, lift and escalators, fire services system and plumbing and drainage system. E&M works involves lots of high-risk activities, for examples, works often involve electricity, confined working space, lifting, machinery (for lift and escalator), welding, using handheld tools, etc. Some hazards are quite particular to E&M work processes such as lifting of chillers and generators, electrical hazard in switch gear work, confined space hazards at water tank, etc. Electrocution is also a major type of E&M-related accident. A tight construction schedule for E&M works is one of the most significant characteristic in new construction works. As the installation of E&M works needs to follow the general builder's works, the delay of previous construction works will carry on to the E&M works. Even though E&M works are postponed due to delay of completion of the general builder's works, E&M contractors still need to strictly follow the master programme because extra time is not typically allowed. The condition of a new construction site is generally untidy and with unforeseeable working situations. Several building service trades are often concurrently working at the same location [22]. The coordination of multiple trades is important for the complex working environment for E&M works, particularly electrical installations. Electrocution is a major type of E&M-related accident. For electrical wiring works, the E&M workers are vulnerable to electric shock hazard whilst working on the conductive parts of the electrical cable which has not been properly isolated from the power source. The main switch should be properly isolated, locked out, and verified dead by voltage indicator or suitable testing equipment to prevent accidental energization or interference of electrical circuit by other workers. E&M accidents of lift installation and maintenance works are usually caused by a variety of factors such as lack of safety access to lift pit, inappropriate location of cat ladder in lift pit, inadequate lighting and poor ventilation. E&M workers may be injured by moving machinery when there is no separation in the lift well. Moreover, cat ladder is commonly used to access the lift machine room. It would be dangerous for workers to climb up a cat ladder with heavy tools, equipment and materials.
E&M RMAA works generally last for a short construction period with less safety resources, equipment and inadequate safety supervision (i.e., without safety personnel). As repair and maintenance of air conditioning systems, electrical wiring, water pipes and fire services always involve working at height, fall injuries frequently occur when using ladders. Wong et al. [23] pointed out that four factors, namely, inappropriate equipment, lack of design for safety, lack of resources and insufficient housekeeping, are the main factors contributing to fall injuries. For E&M works, the key hazards are identified in activities that involve working at height, with electricity, in confined working spaces, lifting, machinery (for lift and escalator), welding, and using handheld tools, etc. Some hazards in E&M works are quite unique, such as the E&M work processes in lifting of chillers and generators, electrical hazards at switch gear works, and confined space hazards around water tanks, etc.
To formulate effective strategies to prevent E&M-related accidents, a full investigation into the contributing factors of E&M accidents and the corresponding improvement measures is urgently needed. This requirement has not received the level of attention it deserves. This study is a much-needed contribution towards filling this research gap.
Data Source
Accident data should be properly recorded, maintained and analyzed to indicate where, when and how accidents arise. Accident prevention measures can then be established to focus on the problem areas. According to chapter 9 of the Construction Site Safety Manual [12], the safety officer or site agent of the principal contractor should complete the injury report form (version 2001) and submit it to the Departmental Safety and Environmental Advisory Unit within seven days of the occurrence of an accident for entry into the PCSES system. The unsafe action, unsafe condition and personal factor that caused the accident are then assessed by qualified safety personnel.
With the consent of the Development Bureau, a total of 421 sets of accident cases related to E&M works for the period between 2001 and 2015 were provided by the Electrical and Mechanical Services Department (EMSD) and the Architectural Services Department (ArchSD) of the Hong Kong Government. It was decided to exclude the accident data before 2001, since the injury report form was revised significantly in 2001. Based on the injury report form, an EXCEL file with 18 variables describing the characteristics of the accident was developed for data input. The variables used in the analysis are summarized in Table 1.
Cluster Analysis
To examine the typology of E&M works accident cases, cluster analysis was adopted in this study. Cluster analysis is a multivariate statistical technique for grouping cases of data based on their homogeneity [24,25]. It was adopted by [2] to investigate fall fatalities of RMAA works in Hong Kong, where three cluster groups of fall fatalities were identified. Beyond construction-related research, the technique has been used for accident analysis in areas such as transportation safety and pedestrian safety [26][27][28]. In this study, the accident cases can be grouped on the basis of the injured workers' personal experience, working conditions and working behavior patterns. This segmentation helps in developing safety strategies for different segments of E&M workers: identifying distinct groups of E&M workers allows safety preventive measures to be appropriately targeted. A total of 421 accident cases were input into the EXCEL template and analyzed using SPSS 21 (SPSS Inc., Chicago, IL, USA). SPSS offers three methods of cluster analysis: K-means clustering, hierarchical clustering and two-step clustering. Two-step cluster analysis, which uses a log-likelihood distance criterion to identify groupings of cases within a large data set, was adopted in this research because it can handle mixtures of continuous (e.g., age, years of experience) and categorical variables (e.g., sex, trade of E&M works) [29], and because it automatically identifies the optimal number of clusters present in the data [24,29,30]. Two-step cluster analysis comprises pre-cluster formation and hierarchical clustering of the pre-clusters. The aim of pre-clustering is to reduce the size of the matrix that contains distances between all possible pairs of cases: based on the distance between cases, the procedure decides whether the current case should be merged with a previously formed pre-cluster or should start a new pre-cluster. In the second step, a standard hierarchical clustering algorithm is applied to the pre-clusters to obtain a range of solutions with different numbers of clusters [30]. To determine the best number of clusters, Norusis [31] suggested using Schwarz's Bayesian criterion (BIC) as the clustering criterion. In the final stage, Chi-square tests were used to test whether the variables differ significantly across the identified clusters of E&M works-related accidents. All calculated p-values are two-tailed, and the null hypothesis of no significant difference was rejected at the 0.05 significance level.
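SPSS's two-step procedure has no exact open-source equivalent, but its BIC-guided choice of cluster number can be approximated. The sketch below selects the number of clusters by BIC with Gaussian mixtures and then applies a chi-square test across the resulting clusters; it is an illustrative analogue on continuous stand-in data, not a reproduction of the SPSS algorithm or of the study's mixed-type variables.

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.random((421, 7))  # stand-in for the seven encoded accident variables

# Pick the number of clusters by minimizing BIC, analogous to SPSS's criterion.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(2, 7)}
best_k = min(models, key=lambda k: models[k].bic(X))
clusters = models[best_k].predict(X)

# Chi-square test: does a categorical variable differ across the clusters?
accident_type = rng.integers(0, 3, 421)  # illustrative categorical variable
table = np.array([[np.sum((clusters == c) & (accident_type == t))
                   for t in range(3)] for c in range(best_k)])
chi2, p, dof, _ = chi2_contingency(table)
print(f"best k = {best_k}, chi2 = {chi2:.2f}, p = {p:.4f}")
```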
Descriptive Statistics
Background information on the 421 E&M works-related accident cases is shown in Table 2. 170 (40.4%) cases were collected from the Electrical and Mechanical Services Department (EMSD), and 251 (59.6%) from the Architectural Services Department (ArchSD) of the Hong Kong Government (Table 2). As most construction works are physically demanding and performed outdoors, the construction industry is very much male-dominated [32]; it is therefore not surprising that the vast majority of the injured workers were male and only three (0.7%) were female. The vast majority (97%) of injured workers were non-imported labourers, while only 1.4% were imported labourers. Over 73% (n = 308) of the E&M work-related accidents occurred in RMAA works, far outweighing new construction works. This may be explained by the fact that working conditions in new construction works are comparatively better than in RMAA works: the construction processes of new works are relatively well planned, whereas those of RMAA works are rather unforeseeable [33], and RMAA works last for a shorter construction period with fewer safety resources and less safety supervision (e.g., employment of a safety officer or supervisor). The number of E&M accidents that occurred in summer (n = 134) was significantly higher than in other seasons, especially autumn and winter, accounting for about 31% of the 421 accident cases. A high-temperature, high-humidity environment with low wind speed is insufferable and unfavorable to the safety and health of construction workers [34], and prolonged work in a hot environment may result in fatigue, heat-related illness and a higher chance of injury [35,36]. It seems that hot and humid summer weather is a key contributing factor to E&M works-related accidents. Over 36% of E&M accidents occurred at the beginning of the week (i.e., Monday and Tuesday), while relatively few accidents happened at the end of the week and on weekends. This result is consistent with the findings of Camino López et al. [37] that construction accidents were more frequent on Mondays than on other days of the week.
With the "Construction Workers Registration Ordinance" having taken effect from December 2005, all workers carrying out construction works on construction sites must be registered under the Construction Workers Registration Authority [38]. Workers in E&M installations need to be registered as "Registered Skilled Workers" for their own trades. Referring to Figure 1, workers aged 34 or below accounted for 44.5% of the E&M work-related accidents collected in this study, whereas registered E&M workers aged 34 or below accounted for only 15.44% of the registered E&M workers in Hong Kong as of 30 September 2015. It is probable that young E&M workers were more prone to accidents due to inexperience. The findings are in line with Chi et al. [39] and Salminen [40], who found that young workers had a higher non-fatal injury rate than older workers. Choudhry and Fang [18] further pointed out that the most effective safety training for new construction workers is learning by doing or by gaining experience; new workers become more aware of construction safety as they accumulate working experience. The results also indicated that the majority of injured workers had less than five years of working experience in construction (n = 136, 32.3%) and had worked at that construction site for not more than three months (n = 220, 52.3%) (Table 3). Among these 220 accident cases, over half (n = 128) involved workers engaged on that site for not more than one month. As construction is always risky due to its complexity, its continuously changing working environment and the hazardous characteristics of E&M works, new workers who are less familiar with the site and working environment are more prone to accidents.
Cluster Analysis
Cluster analysis was conducted on the variables of E&M works-related accidents to identify groups with different patterns of accidents. Clustering is an effective method for segmenting the dataset into more homogeneous groups and for identifying the variables that may have a higher influence on injury type and accident severity. Clustering of accident cases is important in safety research because it reveals the characteristics of accident situations and injured workers, informing the design of future safety preventive measures. To assess the complex relations between E&M accidents and accident outcomes, seven key variables were selected for analysis (Table 1). Previous literature indicated that type of works, type of accident and the worker's length of experience are crucial features of construction accidents [37,[41][42][43]; thus, these variables were chosen for analysis. Moreover, variables related to the accident outcome, namely body part injured, nature of injury, severity of injury and period of incapacity, were also selected to form the cluster model. The 421 E&M accident cases were formed into three clusters (Figure 2). Schwarz's Bayesian Criterion (BIC) and the Akaike Information Criterion (AIC) are the common clustering criteria for determining the optimum number of clusters. BIC with the silhouette measure was used to determine the final optimum number of clusters, as Vermunt and Magidson [44] pointed out that BIC is a more reliable criterion than AIC, especially for large datasets. The final three-cluster model was described by the proportion of each variable in each cluster, which enables us to identify each cluster as a specific E&M accident situation. The profile of the three clusters is shown in Table 4.
Cluster 1
Cluster 1 included 115 accident cases. This cluster consists of workers with more than 15 years of working experience in construction who were involved in electrical wiring installation. A vast majority of accidents in this cluster were caused by 'fall of person from height' and resulted in upper limb fractures. Due to the higher severity of these accidents, the injured workers needed hospitalization for more than 24 h and suffered over 100 days of incapacity for work.
Cluster 2
Cluster 2 included 156 accident cases, which represent 37% of the reported E&M accidents. Workers with less working experience (i.e., zero to five years) undertaking electrical wiring installation work are classified in cluster 2. 'Slip, trip or fall on same level' was the most common accident type in this cluster. Most of the victims suffered from contusion, sprain or twist of lower limbs. The injured workers suffered less severe injury with no hospitalization or with hospitalization for less than 24 h and not more than 20 days of incapacity for work.
Cluster 3
Cluster 3 included 150 E&M accident cases involving workers in air-conditioning installation works with zero to five years of working experience. The workers' injuries were caused by multiple types of accidents, such as exposure to fire and stepping on objects. Common natures of injury were crushing and laceration or cuts of the upper limbs. The victims in this cluster mostly did not require hospitalization, or required hospitalization for less than 24 h, and suffered incapacity for less than 20 days. Notes: The null hypothesis of no significant difference is rejected if the p-value from the Chi-square test is less than the five percent significance level, indicating statistically significant differences in the variables across the three clusters.
The cluster analysis results of the 421 E&M works-related accidents and their distribution in each cluster are summarized in Table 5. In terms of E&M trade, electricians accounted for the greatest number of accident cases. The current findings indicated that electrical wiring and air-conditioning installation works were the top two hazardous trades of E&M works, accounting for about 41% and 31% of all E&M accident cases respectively. From the analysis results, the variable "type of accident" is the most important predictor of the formation of the clusters. E&M installation works result in various types of accidents. 'Slip, trip or fall on same level' (n = 98) and 'fall of person from height' (n = 87) were the most common types of E&M accident, and both demonstrated a high frequency of upper and lower limb injuries. Other types of accident (n = 99) encompass a range of miscellaneous accident types such as exposure to fire/burning, dust/foreign particles in the eye, stepping on objects, and crushing. The patterns of injury nature and accident types varied substantially. For instance (Table 5), fracture was a major injury nature associated with 'fall of person from height', whereas contusion, sprain or twist were the main injuries due to 'slip, trip or fall on same level'. 'Fall of person from height' resulted in a higher number of fracture injuries, indicating that falls from an elevation generate more severe injuries and longer periods of incapacity. The cluster model revealed the relationship between accident types and a comprehensive set of factors associated with the accidents. The first cluster identified comprised electrical wiring workers with more than 15 years of working experience in construction. It is not surprising that electricians are the most accident-prone E&M workers, as they need to work at height and are exposed to electrical hazards. However, it is unexpected that workers with more experience were more prone to accidents in this cluster; this undermines the notion that safety attitude is normally built up through experience. The E&M accident data were further analyzed by the length of experience of the injured worker, as illustrated in Figure 3. A downward trend in the number of accidents is apparent as the worker's length of experience increases, but it is rather surprising that the number of accidents involving workers with over 19 years of experience was significantly higher than expected. A U-shaped relationship was discovered between years of experience and the number of E&M accidents, indicating that experience increases the safety awareness of workers up to a point but becomes less of an advantage beyond a certain number of years. The safety awareness of experienced E&M workers may decline if they take safety lightly as experience accumulates; the interviewees of Choudhry and Fang [18] revealed that workers with more site experience did not feel comfortable following proper safety procedures and were not afraid of getting hurt. The vast majority of accidents in this cluster were caused by 'fall of person from height' and resulted in upper limb fractures. These accidents mainly involved falls from ladders or working platforms; serious accidents may happen when a worker falls due to the sudden collapse of the ladder or working platform, or loses his balance while conducting installation works.
Numerous studies support that fall of person from height causes a greater number of severe and even fatal accidents in the construction sector [37,45,46]. The injured workers needed hospitalization for more than 24 h and suffered over 100 days of incapacity for work. Hence, the long periods of incapacity incurred extra financial costs, including losses due to the injured worker's absence from work, inefficiency after the injured worker resumes work, medical expenses, fines and legal expenses, and losses due to damaged material or finished work [47,48].
Similar to cluster 1, the second cluster, the largest group of accident cases, consisted of electrical wiring installation workers. However, the injured workers in this cluster had relatively little working experience (i.e., zero to five years). The research of Choudhry and Fang [18] revealed that young workers with less experience are more prone to accidents, and further pointed out that the most effective safety training for new construction workers is learning by doing or by gaining experience. New workers become more aware of construction safety with the accumulation of working experience, and workers with more experience have more job knowledge, skills and patience [41]. Most of the accidents in this cluster were caused by 'slip, trip or fall on same level' and resulted in contusion, sprain or twist of the lower limbs. The injured workers in this cluster suffered less severe injuries, with no hospitalization or hospitalization for less than 24 h and not more than 20 days of incapacity for work. The findings are in line with the research of Lipscomb et al. [49] that individuals performing electrical wiring works suffered slip/trip injuries at a significantly higher rate than in other types of work. Major slip/trip injuries were related to soft tissue injuries such as sprains, strains or contusions and more commonly led to injuries of the lower extremities [49,50]. Slips and trips are regarded as one of the most significant types of construction accident [49,51,52]. Lipscomb et al. [49] found that environmental factors, such as slippery or uneven working surfaces, weather and lighting, were the most frequent contributors to slip/trip injuries. Besides, poor housekeeping and human factors (i.e., lapses of attention, carelessness, or rushing to finish work) also contribute to these accidents [49,53].
The third cluster covered workers involved in air-conditioning installation works with zero to five years of working experience. Camino López et al. [37] indicated that construction workers with a short period of service suffer a higher percentage of accidents. This cluster revealed that workers in air-conditioning works are prone to E&M accidents, and the research findings also indicated that air-conditioning installation and maintenance works were the second most risky form of E&M works. Most of the accidents occurred in Air Handling Unit (AHU) rooms or AC plant rooms during installation or maintenance of the AC system. A combination of project complexity, poor working conditions and the hazardous nature of the work leads to a variety of accident types. This cluster group included multiple types of accidents such as exposure to fire, hand tool accidents, stepping on objects and crushing. These accident types are easily overlooked, but the research findings show that together they contribute a significant number of accident cases. The accidents in this cluster were mainly less severe and caused lacerations and cuts of the upper limbs. The victims in this cluster mostly did not require hospitalization, or required hospitalization for less than 24 h, and suffered incapacity for less than 20 days.
Other factors for E&M accidents were also evaluated ( Table 6). Improper procedure and poor housekeeping were the top two unsafe conditions, whereas lapse of attention was the key unsafe action among the E&M accident cases. Carelessness or not concentrating were the most significant personal factor of accidents. According to Zhou et al. [54], major types of improper construction procedures refer to failure to operate in accordance with safety specifications and construction guidelines. For example, the electrical workers fail to de-energize or lock out electrical circuits for electrical wiring works. Ignorance of safety procedures and proper construction process may substantially increase the probability of E&M accidents. Besides, Bentley [55] and Lipscomb et al. [49] advocated that housekeeping or orderliness highly influence workers' exposure to 'slip, trip or fall' hazards. E&M works undertaken on slippery or uneven floor may cause 'slip, trip or fall' accidents if workers are not fully concentrated on their work.
Recommendations
Three clusters encapsulating a total of 421 E&M accident cases have been analyzed and discussed in this paper. A series of recommendations are suggested for better allocation of safety resources to enhance the safety performance of E&M works regardless of RMAA or new construction works.
Enhance Training and Supervision to High Risk Group
It would be most effective to formulate targeted safety measures for the high-risk groups (i.e., workers in electrical installation works, young E&M workers, and workers with zero to five years of experience or more than 19 years of experience). The findings of the current study indicate that 55% of E&M accidents occurred in these two experience groups. E&M workers with less experience may not be competent to identify the risks involved in E&M works, while the safety awareness of experienced workers may decrease so that they overlook the associated hazards. It is recommended that high-risk workers receive more training. Suitable training should be provided to workers before they start working, before they are assigned to a job that requires new skills, and after any deficiency is detected [56]. Lee [57] promoted the introduction of safety orientation programmes for new workers to strengthen their safety awareness.
With adequate safety training, the competent safety person would be responsible for identifying safety hazards, checking safety equipment, and reminding the corresponding workers constantly. Establishment of appropriate safety program and safety supervision by competent safety personnel is essential in protecting workers from workplace hazards [58][59][60][61]. The safety personnel should closely supervise the workers involved in high-risk activities such as electrical wiring works and work at height, etc., ensuring proper use of personal protective equipment and correcting any unsafe action and condition.
Safety Measures for Working at Height
A vast majority of accidents in cluster one were caused by 'fall of person from height' and resulted in severe fracture injuries. After analyzing the features of cluster one, it is suggested that safety measures be implemented to prevent the recurrence of fall accidents. E&M works often involve the use of ladders, and the ladder was found to be the most common agent involved in fall accidents: in most accident cases, workers were injured while using ladders to perform installation tasks or to gain access to work areas. Ladders are designed only for temporary use or to provide access to different elevations, and workers should be prevented from performing prolonged tasks on ladders [62]. Wong et al. [63] revealed that the poor maintenance condition of ladders and improper use of ladders were the top causes of fall accidents involving ladders, and indicated that better control and enhanced monitoring of ladder use are the key measures for ensuring safe use and thus minimizing fall accidents from ladders. A safety checking system for equipment (i.e., ladders, hand tools, safety harnesses, etc.) should be established to ensure that all equipment is in safe working order. Safe means of support, such as platform ladders and working platforms, should be provided for access or work at height.
Improve Housekeeping
For the second cluster, housekeeping and the conditions of the construction site should be improved to prevent the accidents caused by 'slip, trip or fall on same level'. Numerous studies indicated that tidy and safe working environment is one of the most important factors associated with good performance of construction safety [55,58,64]. In the continuously changing work environment of E&M works, frequent and regular housekeeping and walk-through assessments by safety personnel would be important to identify hazards such as slippery or uneven working surface [51]. These strategies could make substantial contributions in preventing accident of 'slip, trip or fall on same level'.
Proper Working and Safety Procedures
Improper procedure was one of the major unsafe conditions among the accident cases. Apparently, the safety of E&M works can be improved by ensuring proper working and safety procedures. Negligence of safety procedures and proper construction processes may substantially increase the probability of E&M accidents. Improper working procedures, such as failing to de-energize circuits before electrical works or failing to conduct lock-out and tag-out procedures, lead to a risk of electric shock. Safety personnel or site foremen are required to ensure the provision and use of correct equipment and the implementation of appropriate construction procedures and methods [54,65].
Wilson and Koehn [14] pointed out that safety management is one of the major means of controlling safety policies and working procedures on a construction site. Proper working and safety procedures for E&M tasks can be delivered through safety training. Safety education conveys the importance of working safely and of how an unsafe act can seriously affect workers.
Implement Risk Assessment Process
E&M installation works are regarded as high-risk operations in construction process. Due to the tight working schedule of E&M works in new construction works, the process of risk assessment may be neglected. Lack of risk assessment at the workplace may lead to E&M accidents. For new construction works, it is important to conduct risk assessment by safety personnel at an early stage. This helps to identify hazardous works such as confined working spaces, work at height and corresponding safety equipment, and to estimate the cost for safety investment (e.g., provision of working platform and scissor lift). It is also required to make a risk assessment of health and safety to employees and others who are exposed on construction sites, especially for specific hazards (e.g., work at height, hazardous substance, manual handing, and use of plant, etc.). Risk assessment is often ignored in RMAA works due to the short construction period and limited safety resources. An appropriate risk assessment for E&M RMAA may identify people who are being affected by the activity, the requirements of personal protective equipment, suggested additional risk control measures, and any applicable guidance related to the operation. Moreover, it is highly recommended to conduct permit-to-work systems in situations where the nature of work is complicated, the scope of work is wide, or there are energised parts in the switchgear/switchboard when the work is carried out. The permit-to-work system should include the following: (1) risk assessment of the task; (2) identifying the hazards; (3) define safety precautions; (4) strictly implementing the system; and (5) close monitoring of the system.
Conclusions
The significance of this paper lies in investigating the link between E&M works and related accidents in public sector projects based on a rich set of factors such as working environment, demographics, unsafe factors, and personal factors. The clustering of accident cases provides an understanding of the relationships between type of E&M works, type of accident, severity and body part of injury, experience of workers, and other unsafe factors. The accident analysis is vital for industry practitioners and relevant Government departments seeking to enhance the safety performance of public sector projects. This paper analysed 421 accident cases from the Electrical and Mechanical Services Department (EMSD) and the Architectural Services Department (ArchSD) of the Hong Kong Government over a 15-year period. A two-step cluster analysis was applied to identify cluster groups in a heterogeneous E&M accident data set, and three clusters of E&M works-related accidents were identified. Cluster one comprised electricians with over 15 years of experience who were prone to 'fall of person from height'; these accidents caused upper limb fractures with over 100 days of incapacity for work. Cluster two comprised electricians with zero to five years of experience who were prone to 'slip, trip or fall on same level', causing lower limb contusion, sprain or twist with 0-20 days of incapacity. Cluster three comprised air-conditioning workers with zero to five years of experience who were prone to multiple types of accidents, causing lacerations and cuts of the upper limbs and 0-20 days of incapacity. The results of the cluster analysis provide interesting insights into various E&M works-related accidents, and recommendations to avoid the recurrence of similar accidents have been provided. Although the analysis of E&M work-related accidents was based on Hong Kong data, the research findings are believed to be applicable to other countries as well. The research outcomes will be essential for safety managers to estimate the associated risks of accident occurrence and injury characteristics, and to pay more attention or allocate more safety resources to prevent E&M works-related accidents and achieve a safer working environment.
"Engineering",
"Environmental Science"
] |
Large Convex Sets in Difference Sets
We give a construction of a convex set $A \subset \mathbb R$ with cardinality $n$ such that $A-A$ contains a convex subset with cardinality $\Omega (n^2)$. We also consider the following variant of this problem: given a convex set $A$, what is the size of the largest matching $M \subset A \times A$ such that the set \[ \{ a-b : (a,b) \in M \} \] is convex? We prove that there always exists such an $M$ with $|M| \geq \sqrt n$, and that this lower bound is best possible, up to a multiplicative constant.
Introduction
This paper is concerned with the existence or non-existence of convex sets in difference sets. A set $A \subset \mathbb R$ is said to be convex if its consecutive differences are strictly increasing. That is, writing $A = \{a_1 < a_2 < \cdots < a_n\}$, the inequality $a_{i+1} - a_i > a_i - a_{i-1}$ holds for all $i = 2, \ldots, n - 1$. Most research on convex sets comes in the context of sum-product theory, and one may think of the notion of a convex set as a generalisation of a set with multiplicative structure. For instance, it is known that convex sets determine many distinct sums and differences. In particular, it was proven in [8] that the bound $|A - A| \gg |A|^{8/5 - o(1)}$ holds for any convex set $A$. Here, $A - A := \{a - b : a, b \in A\}$ denotes the difference set determined by $A$. This result captures the vague notion that convex sets cannot be additively structured, and there has been considerable effort expended to quantify and apply this idea in various ways; see for instance [1], [3] and [9].
Given a finite set $B \subset \mathbb R$, define
\[ C(B) := \max_{\substack{C \subseteq B \\ C \text{ is convex}}} |C|. \]
That is, $C(B)$ denotes the size of the largest convex subset of $B$. The first question that we consider is the following: given a convex set $A \subset \mathbb R$, what can we say about the possible value of $C(A-A)$? A first observation is that
\[ C(A - A) \geq |A|, \tag{1} \]
as can be seen by considering the convex set $A - a \subset A - A$, where $a$ is an arbitrary element of $A$. There are some simple constructions showing that the lower bound (1) is optimal up to a multiplicative constant; for instance, we can take $A = \{i^2 : 1 \leq i \leq n\}$.
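The definitions above are easy to experiment with numerically. The following small Python sketch (written for this discussion, not part of the paper) checks convexity of a finite set and confirms that a translate $A - a$ of the convex set $A = \{i^2\}$ is a convex subset of $A - A$, giving the bound (1).

```python
def is_convex(s: list[float]) -> bool:
    """A sorted set is convex if its consecutive differences strictly increase."""
    s = sorted(s)
    gaps = [b - a for a, b in zip(s, s[1:])]
    return all(g2 > g1 for g1, g2 in zip(gaps, gaps[1:]))

n = 100
A = [i * i for i in range(1, n + 1)]       # the convex set {i^2 : 1 <= i <= n}
diff_set = {a - b for a in A for b in A}    # A - A

translate = [a - A[0] for a in A]           # A - a with a = min(A)
assert is_convex(A)
assert is_convex(translate)
assert set(translate) <= diff_set           # hence C(A - A) >= |A|
print(f"|A| = {len(A)}, |A - A| = {len(diff_set)}")
```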
In this paper, we give a construction of a convex set A whose difference set contains a very large convex set.
Theorem 1.1. For all $n \in \mathbb N$, there exists a convex set $A \subset \mathbb R$ with $|A| = n$ such that $A - A$ contains a convex subset $S$ with cardinality $|S| \gg n^2$.
Using the notation introduced earlier, Theorem 1.1 states that there exists a convex set $A$ such that $C(A-A) \gg |A|^2$. This result shares some similarities with the main result of [6], where it was established that there exists a set $A \subset \mathbb R$ such that $A + A$ contains a convex subset with cardinality $\Omega(|A|^2)$. The main qualitative difference is that we have the additional restriction that the set $A$ is itself assumed to be convex. Also, Theorem 1.1 provides a convex subset of the difference set, rather than the sum set.
The simple construction giving rise to the lower bound (1) feels like something of a cheat, and so we consider a variant of this problem where we make a further restriction concerning the origin of the convex subset of a difference set. A set $M \subset A \times A$ is a matching if the elements of $M$ are pairwise disjoint. Given a matching $M \subset A \times A$, define the restricted difference set
\[ A -_M A := \{ a - b : (a,b) \in M \}, \]
and define
\[ \mathrm{CM}(A) := \max_{\substack{M \subset A \times A \,:\, M \text{ is a matching} \\ \text{and } A -_M A \text{ is convex}}} |M|. \]
That is, $\mathrm{CM}(A)$ denotes the size of the largest matching on $A$ which gives rise to a convex subset of $A - A$. Now, we ask a similar question for this quantity: given a convex set $A \subset \mathbb R$, what can we say about the size of $\mathrm{CM}(A)$? In particular, how small can this quantity be? Should we expect an analogue of the bound (1) if we rule out this simple construction? In this paper we answer this question by giving the following two complementary results, showing that $\mathrm{CM}(A) \geq \sqrt{|A|}$ and that this bound is optimal up to a multiplicative constant.

Theorem 1.2. Let $n \in \mathbb N$ be sufficiently large and suppose that $A \subset \mathbb R$ is a convex set with cardinality $n$. Then there exists a matching $M \subset A \times A$ such that $A -_M A$ is convex and $|M| \geq \sqrt n$.

Theorem 1.3. For all sufficiently large $n \in \mathbb N$, there exists a convex set $A \subset \mathbb R$ with cardinality $n$ such that $\mathrm{CM}(A) \ll \sqrt n$.
Proof of Theorem 1.1
Assume that $n$ is a sufficiently large multiple of 100. This assumption is made only to simplify the notation slightly, and can easily be removed at the price of introducing some floor and ceiling functions into the calculations. Define $A = \{a_1 < \cdots < a_n\}$. Observe that $A$ is convex: indeed, the sequence $a_{i+1} - a_i$ is increasing, as a direct calculation shows. For each integer $k \in [0.009n, 0.01n]$, define $D_k$ to be the set of $k$th differences $d_{k,i} := a_{i+k} - a_i$, and note that $d_{k,i}$ increases with $i$.
We will find a large convex subset of $A - A$ by efficiently gluing together consecutive convex sets $D_k$. We will make use of the following observation from [6].
Before we get into the details of the proof of Theorem 1.1, which involves some rather tedious calculations, let us take a moment to try and explain the idea behind it, with the help of some pictures.
Firstly, we note that, although the sets $D_k$ are convex, they are only slightly convex, in the sense that, if we zoom out and take a look at $D_k$, it appears to resemble an arithmetic progression with common difference $2c_1 k$. Note also that this common difference increases slightly as $k$ increases.

Figure 1. This picture shows the first three elements of $D_{10}$ after setting $n = 10000$. The three elements form a convex set, but to the naked eye they appear to be arranged in an arithmetic progression.
The other important feature of this construction is that we have chosen the parameters in such a way that the $D_k$ have convenient overlapping properties. In particular, each $D_k$ has diameter approximately $3/2$ and starts at $k$. This means that neighbouring $D_k$ have a significant overlap, but also that each $D_k$ takes sole ownership of a section of the real line. We can use this setup to form a convex set by gluing together consecutive $D_k$. In the region where $D_k$ and $D_{k+1}$ overlap, the elements of $D_k$ are slightly more dense (because the common difference of the approximate arithmetic progression is smaller). This ensures that there exist two consecutive elements of $D_k$ in this region, which allows for an application of Lemma 2.1. Meanwhile, the existence of the non-overlapping region ensures that the glued set contains many elements of $D_k$ for each $k$.
Now we come to the formal details. For each $k$, write $d_k^{\min}$ and $d_k^{\max}$ for the smallest and largest elements of $D_k$ respectively, and let $I_k := [d_k^{\min}, d_k^{\max}]$. We will prove the following two facts about the intersection properties of these intervals.
Claim 2.3. For each $k \in [0.009n, 0.01n]$, there are at least $Cn$ elements of $D_k$ in the interval $(d_{k-1}^{\max}, d_{k+1}^{\min})$, where $C > 0$ is an absolute constant.
Once we have proved these two claims, the proof will be finished. Indeed, we can use Claim 2.2 together with Lemma 2.1 to glue together consecutive convex sets $D_k$ to form a convex set $S$. Claim 2.3 guarantees that, for each $k \in [0.009n, 0.01n]$, there are at least $Cn$ elements in $D_k \cap S$ that do not appear in $D_j \cap S$ for any $j \neq k$. Since there are roughly $0.001n$ admissible values of $k$, this implies that $|S| \gg n^2$. It remains to prove the two claims.
Proof of Claim 2.2. We will show that the interval $I := [d_{k+1}^{\min}, d_k^{\max}]$ contains at least two more elements of $D_k$ than it does of $D_{k+1}$. It then follows that there must exist two consecutive elements of $D_k$ in this interval with no element of $D_{k+1}$ lying between them, which implies the existence of the claimed configuration. We begin by establishing a lower bound (6) for $|D_k \cap I|$, which we then compare with an upper bound (7) for $|D_{k+1} \cap I|$. In deriving (7) we observe that the term involving the multiple of $c_2$ is at most $1/n^2$, and is therefore less than $c_1$, which allows the count to be bounded cleanly. It remains to show that the lower bound in (6) is at least as big as the upper bound in (7), plus two. Since $0.009n \leq k \leq 0.01n$, it suffices to verify the resulting inequality after substituting in the definition of $c_1$. For the remaining estimate we again use (7), together with a lower bound in which the term $c_2(3i^2k + 3ik^2)$ is at most $0.01$, provided that $n$ is sufficiently large. Combining this inequality with (8) completes the proof of the claim.
Matchings
Proof of Theorem 1.2. Again, write … For convenience, we use the shorthand k := ⌈√n⌉. The matching M is given by … The matching M has cardinality k ≥ √n, and the choice of parameters ensures that it is indeed a well-defined subset of A × A, provided that n is sufficiently large. To verify this, we just need to check that k + 1 + k(k+1) …
It remains to check that A −_M A is convex. Let e_i denote the ith element of A −_M A. We need to check that e_{i+1} − e_i > e_i − e_{i−1} holds for all 2 ≤ i ≤ k − 1. A telescoping argument gives … and therefore … It then follows that … There are i + 2 terms with a positive sign and i + 1 with a negative sign. We can pair off the i + 1 largest positive terms with smaller (in absolute value) negative terms to conclude the proof, as follows: …

Proof of Theorem 1.3. For each j = 1, . . ., n, define … The set A = {a_j : 1 ≤ j ≤ n} is a convex set. Indeed, the consecutive differences of A are given by a sequence which is strictly increasing.
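Since this definition (strictly increasing consecutive differences) is used repeatedly, here is a small checker; a minimal sketch, with the squares as an arbitrary example of a convex set and a non-strict variant anticipating the weakly convex index sets used below:

```python
def is_convex(a):
    """Consecutive differences of the sorted sequence a strictly increase."""
    d = [q - p for p, q in zip(a, a[1:])]
    return all(s < t for s, t in zip(d, d[1:]))

def is_weakly_convex(a):
    """Non-strict variant, used later for the index sets K(S)."""
    d = [q - p for p, q in zip(a, a[1:])]
    return all(s <= t for s, t in zip(d, d[1:]))

assert is_convex([j * j for j in range(1, 20)])   # squares: differences 3,5,7,...
assert not is_convex(list(range(20)))             # an AP is not strictly convex
assert is_weakly_convex(list(range(20)))          # ...but it is weakly convex
```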
Let M ⊂ A × A be a matching such that A −_M A is convex. Our goal is to prove that |M| ≪ √n. Let k ≤ n − 1 be an integer. Repeating notation used earlier in the paper, set d^(k)_min and d^(k)_max for the smallest and largest elements of D_k. We calculate that …
An important feature of this construction is that the diameter of the components D_k, which is approximately (2n)^(n−2), is significantly smaller than the gaps between consecutive components, which are approximately (2n)^n. This allows us to conclude that, with at most one exception, a convex set can have at most one representative from each D_k. This is formalised in the following claim.
Proof. The first sentence of the claim follows from the second, and so it is sufficient to prove only the second sentence. Suppose for a contradiction that d^(k_2)_j is not the first element of S; since S also contains a larger element of D_{k_2}, we may also assume that it is not the last element of S. Let x be the element of S preceding d^(k_2)_j and let y be the element of S following d^(k_2)_j. By the convexity of S, … On the other hand, … The second inequality uses the fact that k_2 ≤ n, while the third inequality is an application of inequality (12), which is valid for all j, n ∈ N.
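To make the scale separation concrete, a quick numerical check for the illustrative case n = 5 shown in Figure 3:

\[
(2n)^{n-2} = 10^{3} = 1000, \qquad (2n)^{n} = 10^{5} = 100\,000,
\]

so the gaps between consecutive blocks exceed the block diameters by a factor of (2n)^2 = 100.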
Using the same basic fact about the blocks D_k again, namely that the gap between consecutive blocks is significantly larger than their individual diameters, we now show that the blocks which contain elements of a convex set must occur in a weakly convex form. A set {d_1 < d_2 < · · · < d_n} is weakly convex if d_{i+1} − d_i ≥ d_i − d_{i−1} holds for all i = 2, . . ., n − 1. For a given set S ⊂ A − A, we define K(S) to be the set of indices k such that S contains an element of D_k.

Proof. Suppose for a contradiction that K(S) is not weakly convex. Then there exist three consecutive elements k_1 < k_2 < k_3 of K(S) with … The difference between d^(k_2) and d^(k_1) is … Meanwhile, the next difference can be bounded by … However, by the convexity of S, we also have … Combining this with the previous two inequalities and applying (14) yields … We then once again use inequality (12) to obtain … Finally, note that k_3 + k_1 < 2n. This holds because k_3, k_1 ≤ n − 1, as the sets D_{k_i} are only defined within this range. Plugging this into (15), we obtain the contradiction …

Another useful feature of this construction is that the consecutive differences within the components D_k shrink rapidly, which makes it difficult to find a large convex set in A − A (see Figure 4). We use this fact in the following claim to establish that a convex set cannot contain more than two elements from any D_k.

Proof. Suppose for a contradiction that there exist three consecutive elements of S belonging to the same block D_k. In particular, we have … On the other hand, … Combining the previous two inequalities with (16), we obtain the intended contradiction …

By proving Claims 3.1 and 3.3, we have essentially proved that each block D_k of A − A can contain at most one element of a convex set S ⊂ A − A. We can be a little more precise; taking the potential exceptional block into account, we have the bound …

It remains to upper bound the size of the indexing set K(S). We need one more claim to allow us to achieve this goal. Note that the following claim represents the first time in the proof where we use the fact that the convex set S is derived from a matching.
Claim 3.4. Suppose that M ⊂ A × A is a matching and that S = A −_M A is a convex set. Then the indexing set K(S) does not contain four consecutive elements which form an arithmetic progression.
Proof. Suppose for a contradiction that four consecutive elements of K(S) form an arithmetic progression. It then follows from Claim 3.1 that there exist four consecutive elements of S lying in the blocks D_k, D_{k+t}, D_{k+2t} and D_{k+3t}, for some positive integers k, t such that k + 3t ≤ n − 1. Since S is derived from a matching, it must be the case that the corresponding indices j_i are pairwise distinct. Write e_1, e_2, e_3 for the consecutive differences of these four elements. Since S is convex, we have e_1 < e_2 < e_3.
We will now show that it must be the case that j_2 < j_1. Suppose for a contradiction that this is not true, so that j_2 > j_1. Then … On the other hand, … It follows from the previous two bounds that … This is a contradiction, and we have thus established that j_2 < j_1. The exact same argument implies that j_3 < j_2. Now, since j_2 < j_1, we have … Similarly, since j_3 < j_2, it follows that … Combining the previous two inequalities, and again making use of the fact that j_3 < j_2, we have … This contradicts the fact that e_1 < e_2 and completes the proof of the claim.
When we combine Claim 3.2 and Claim 3.4, we see that the set K(S) is a weakly convex subset of {1, . . ., n} which does not contain four consecutive terms in arithmetic progression. It follows that … Combining this with (17), the proof is complete.
Concluding remarks; sums instead of differences
The problems considered in this paper were partly motivated by a potential application to a problem in discrete geometry concerning the minimum number of angles determined by a set of points in the plane in general position. This problem was considered recently in [2], and similar problems can be traced back to the work of Pach and Sharir [5]. We found that progress on this question could follow from a solution to the following problem: given a convex set A ⊂ R, estimate the size of the largest matching on A which gives rise to a convex set in the image set f(A, A), where f : R × R → R is a specific bivariate function whose rather complicated formula is omitted here. Theorems 1.2 and 1.3 of this paper solve this problem in the simplified case when f(x, y) = x − y.
With the potential application to the distinct angles problem in mind, an interesting future research direction could be to generalise the problems considered in this paper by considering an arbitrary f : R × R → R in place of the function f(x, y) = x − y. We conclude this paper with some remarks about the most natural case, whereby f(x, y) = x + y.
It is interesting to see that we can quite easily obtain an optimal result, giving a significant quantitative improvement to Theorem 1.2, if we consider sums instead of differences, as follows.
Theorem 4.1. Let n ∈ N and suppose that A ⊂ R is a convex set with cardinality n. Then there exists a matching M ⊂ A × A such that |M| ≥ ⌊n/2⌋ and A +_M A is convex.
Proof. Suppose first that n is even, write A = {a_1 < · · · < a_n}, and take the matching M = {(a_{n/2+k}, a_k) : k ∈ {1, . . ., n/2}}. Then the set A +_M A = {a_{n/2+k} + a_k : k ∈ {1, . . ., n/2}} is convex. Indeed, its consecutive differences are (a_{n/2+k+1} − a_{n/2+k}) + (a_{k+1} − a_k), a sum of two strictly increasing sequences. If n is odd then we omit a_n and use the same argument as above.
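A quick numerical sanity check of this construction (the convex set a_j = j^2 is an arbitrary example; the final assertion verifies strict convexity of the set of sums):

```python
n = 100                                   # even, for simplicity
a = [j * j for j in range(1, n + 1)]      # an arbitrary convex set
sums = [a[n // 2 + k] + a[k] for k in range(n // 2)]   # pair a_{n/2+k} with a_k
d = [t - s for s, t in zip(sums, sums[1:])]
assert all(u < v for u, v in zip(d, d[1:]))   # the sums form a convex set
```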
In particular, it follows from Theorem 4.1 that an analogue of the construction in the proof of Theorem 1.3 is not possible if we take sums instead of differences. There are other instances in which problems concerning additive properties of convex sets are sensitive to the distinction between sums and differences. For instance, a construction in [4] (see also [7]) shows that there exists a convex set A ⊂ R with …, whereas … holds for any convex A ⊂ R and x ∈ A + A.
We would also be interested to know whether Theorem 1.1 is still valid when A − A is replaced by A + A. We were unable to prove anything non-trivial for this question.
Figure 2. This diagram illustrates the intersection pattern of the sets D_k.
Figure 3. This diagram illustrates how the gaps between the consecutive D_i are significantly larger than the diameters of the individual D_i. This is the heuristic reason why Claims 3.1 and 3.2 are valid.
Figure 4. In this picture, we zoom in to take a closer look at the way the elements of D_k are distributed (here we consider the set D_1 with n = 5). Crucially, the gaps between consecutive elements of D_k shrink rapidly, with the consecutive differences resembling a geometric progression with a small common ratio. This picture can be used for a sketchy justification of Claims 3.3 and 3.4.
Mechanisms of Transient Signaling via Short and Long Prolactin Receptor Isoforms in Female and Male Sensory Neurons*
Background: Prolactin regulates the activity of nociceptors in pain conditions. Results: Prolactin regulation of sensory neurons is acute and mediated via PI3K and PKCε following activation of the prolactin receptor short isoform. Prolactin receptor short isoform actions are inhibited by the long isoform. Conclusion: The prolactin receptor short isoform mediates transient sensitization of nociceptors. Significance: The proposed mechanism could underlie prolactin involvement in hyperalgesia/pain. Prolactin (PRL) regulates the activity of nociceptors and causes hyperalgesia in pain conditions. PRL enhances nociceptive responses by rapidly modulating channels in nociceptors. The molecular mechanisms underlying PRL-induced transient signaling in neurons are not well understood. Here we use a variety of cell biology and pharmacological approaches to show that PRL-induced transient enhancement of capsaicin-evoked responses involves protein kinase Cε (PKCε) or phosphatidylinositol 3-kinase (PI3K) pathways in female rat trigeminal (TG) neurons. We next reconstituted PRL-induced signaling in a heterologous expression system and in TG neurons from PRL receptor (PRLR)-null mutant mice by expressing the rat PRLR long isoform (PRLR-L), the PRLR short isoform (PRLR-S), or a mix of both. The results show that PRLR-S, but not PRLR-L, is capable of mediating PRL-induced transient enhancement of capsaicin responses in both male and female TG neurons. However, co-expression of PRLR-L with PRLR-S (1:1 ratio) leads to inhibition of the transient PRL actions. Co-expression of PRLR-L deletion mutants with PRLR-S indicated that the cytoplasmic site adjacent to the transmembrane domain of PRLR-L is responsible for the inhibitory effects of PRLR-L. Furthermore, in situ hybridization and immunohistochemistry data indicate that, in normal conditions, PRLR-L is expressed mainly in glia, with little expression in rat sensory neurons (3–5%) and human nerves. The predominant PRLR form in TG neurons/nerves from rats and humans is PRLR-S. Altogether, PRL-induced transient signaling in sensory neurons is governed by PI3K or PKCε, is mediated via the PRLR-S isoform, and the transient effects mediated by PRLR-S are inhibited by the presence of PRLR-L in these cells.
Prolactin (PRL) acts as an endocrine hormone, a growth factor, a neurotransmitter, and an immune modulator (1). PRL is produced by the anterior pituitary gland and in other tissues (extrapituitary PRL) in a variety of pain conditions (2–6). The release of PRL from both the pituitary and extrapituitary tissues is sex-dependent and correlates with estrogen levels in serum (2,4,7). The systemic and local release of PRL can modulate the activity of a wide variety of cell types, including peripheral sensory neurons in pain conditions, where this modulation contributes to hyperalgesic responses (2-4, 8, 9). Even though the molecular mechanisms responsible for PRL-directed modulation of nociception are unknown, one possible mechanism is PRL regulation of sensory neuronal channels, including transient receptor potential (TRP) channels, which play critical roles in hyperalgesia/pain (2,7,10,11). Because PRL can differentially activate nociceptors in a sex-dependent manner (2), this modulation may involve short-lasting (transient) effects that could vary in males compared with females. Accordingly, one of the main goals of this study is to define the mechanisms underlying transient effects of PRL in female and male sensory neurons, because this transient modulation by PRL may represent an important mechanism that contributes to hyperalgesia and pain.
The actions of PRL are mediated by the PRL receptor (PRLR), which belongs to the class 1 cytokine receptor family (12). Although the PRLR is encoded by a single gene, alternative splicing of the PRLR gene generates a variety of isoforms that differ in length and amino acid sequence at their cytoplasmic tails. In contrast, the extracellular PRL-binding domain is identical for all PRLR isoforms (13,14). In rats and humans, there are three PRLR isoforms: short and long, as well as a seldom expressed intermediate variant (15). The neuronal expression of PRLR-L and PRLR-S can vary depending on neuron location, sex of the animal, and other systemic conditions. For example, PRLR-S mRNA was either undetectable or present at low levels in several hypothalamic regions of diestrous rats, but was significantly up-regulated in lactating rats (16). Even though a role for PRLR in peripheral sensory neurons appears likely, because PRL can modulate sensory neuron activities, information on the relative expression and functions of PRLR isoforms in sensory neurons is mostly lacking (7). Hence, another goal of the present study was to define the roles of PRLR isoforms in the rapid sensitizing effect of PRL in female and male sensory neurons.
Variations in the cytoplasmic part of the PRLR isoforms imply that different transducer pathways are engaged by each isoform. All isoforms contain the box-1 domain, which is required for Janus kinase 2 (JAK2) binding, as well as the membrane-proximal region responsible for activation of Fyn and mitogen-activated protein (MAP) kinases (15). Importantly, only PRLR-L triggers activation of the signal transducer and activator of transcription 3 and 5 (STAT3 and STAT5) pathways, because it contains the required box-2 domain (1). PRL is also capable of producing rapid actions in neurons (7,17). However, it is not clear which intracellular pathways govern the rapid actions of PRL in sensory neurons. To address this question, we examined the role of different kinases in mediating the PRL-evoked sensitization of female sensory neurons.
EXPERIMENTAL PROCEDURES
Animals-The use of animals in all experiments was approved under IACUC protocols. Sprague-Dawley male and female rats were 45-60 days old (Charles River). Adult female and male PRLR-null mutant (PRLR KO) and corresponding littermate wild-type (WT) mice were obtained from The Jackson Laboratory. PRLR KO mice are viable and normal in size and do not display any gross physical or behavioral abnormalities. However, male and female homozygous PRLR KO mice are completely sterile. PRLR KO mice were produced by creating an in-frame stop codon in exon 5 (18). The lack of functional PRLR in homozygous mutant animals was confirmed using Northern, Western, and binding assays, all of which demonstrated the lack of a functional receptor (18). PRLR KO mice were produced on the C57BL/6J line. Because PRLR KO mice have irregular estrous cycles, trigeminal ganglia (TG) were removed from WT and PRLR KO females only at the estrous phase. The reproductive stage of cycling females was determined by vaginal lavage as described previously (19).
Human Samples-This study was approved by the Human Subjects Institutional Review Board at the University of Texas Health Science Center at San Antonio. The inclusion criteria consisted of female patients seeking dental therapy for extraction of a normal healthy third molar tooth (wisdom tooth) with fully formed apices and lacking a past history of pain and pathology. The pulpal tissue from 10 teeth (one tooth each from 10 different female patients) was evaluated in the anatomical studies.
Chinese Hamster Ovary (CHO) Cells and Sensory Neuron Culture-We used the following expression constructs: enhanced green fluorescent protein (pEGFP-N1 from Clontech); rat TRPV1 (accession number NM031982); rat STAT5b (kindly provided by Dr. Rotwein, Oregon Health and Science University, Portland, OR); and rat short PRLR (NM001034111.1) and rat long PRLR (NM012630.1) in pcDNA3 (Invitrogen). The expression constructs were delivered into CHO cells using PolyFect (Qiagen) or FuGENE (Promega) according to the manufacturers' protocols. CHO cells were subjected to experimental procedures within 24-48 h after transfection. Expression constructs were delivered into TG sensory neurons using the Amaxa nucleofector according to the manufacturer's protocol. In brief, plasmids were mixed with the provided transfection solution and dispersed sensory neurons and then electroporated at the G013 setting on the nucleofector (20). TG sensory neurons were maintained in DMEM supplemented with 2% FBS at low density on poly-D-lysine/laminin-coated coverslips (Clontech) as described previously (21). Recordings were performed within 16-24 h after plating.
Western Blotting and pSTAT5 ELISA-Western blotting and ELISA were performed as described previously (3,21). Transfected CHO cells were homogenized by 20 strokes in a Potter-Elvehjem homogenizer in the solution provided with the ELISA kit (InstantOne ELISA for pSTAT5; eBioscience, San Diego, CA) supplemented with the protease inhibitors aprotinin (1 μg/ml; Sigma-Aldrich), leupeptin (1 μg/ml; Sigma-Aldrich), pepstatin (1 μg/ml; Sigma-Aldrich), and phenylmethylsulfonyl fluoride (PMSF, 100 nM; Sigma-Aldrich). The cell extract was incubated on ice for 15 min and then centrifuged at 500 × g for 1 min at 4 °C. Supernatants were used for protein quantification, ELISA, and Western blotting. Protein quantification of crude plasma membrane homogenates was completed using the Bradford method (25) as recommended by the manufacturer (Thermo Scientific). ELISA for pSTAT5 was performed according to the manufacturer's protocol. Equal amounts of protein extracts (≈4 μg) were resolved via 10% SDS-polyacrylamide gel electrophoresis and transferred to polyvinylidene difluoride membrane (Millipore). Western blots were blocked in 5% nonfat milk in Tris-buffered saline/Tween 20 (Fisher Scientific), labeled using monoclonal anti-rat PRLR antibodies (1:1000; U5 clone; Affinity BioReagents) followed by the appropriate horseradish peroxidase-conjugated secondary antisera (GE Healthcare) and enhanced chemiluminescence detection following the manufacturer's instructions (GE Healthcare).
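For readers reproducing the quantification step, the Bradford method reduces to reading unknowns off a linear standard curve; a minimal sketch, where the BSA standard absorbances are made-up placeholders and not data from this study:

```python
import numpy as np

# BSA standards (ug/ml) and their A595 readings -- placeholder values only
std_conc = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
std_abs = np.array([0.00, 0.11, 0.22, 0.45, 0.88])

# Fit concentration as a linear function of absorbance (inverted standard curve)
slope, intercept = np.polyfit(std_abs, std_conc, 1)

def protein_conc(a595):
    """Estimate sample protein concentration (ug/ml) from its absorbance."""
    return slope * a595 + intercept

print(round(protein_conc(0.30), 1))  # ~5.4 ug/ml for these placeholder standards
```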
Ca2+ Imaging-The Ca2+ imaging experiments and ratiometric data conversion were performed essentially as described previously (21). Fluorescence was detected by a Nikon TE 2000U microscope fitted with a 20×/0.9 NA Fluor objective. Data were collected and analyzed with MetaFluor software (MetaMorph; Universal Imaging Corporation). The experiments were performed in standard external solution (see under "Electrophysiology"). The calcium-sensitive dye was Fura-2/AM (2 μM; Molecular Probes). The net changes in Ca2+ influx were calculated by subtracting the basal [Ca2+]i (mean value collected for 60 s prior to agonist addition) from the peak [Ca2+]i value achieved after exposure to the agonists. Increases in [Ca2+]i above 50 nM were considered positive. This minimal threshold criterion was established by application of 0.1% dimethyl sulfoxide as a vehicle. Ratiometric data were converted to [Ca2+]i (in nM) as described previously (26).
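The net-response rule just described (peak minus the mean of the 60 s preceding agonist addition, with a 50 nM positivity threshold) can be written compactly; a sketch under the assumption that the trace has already been converted to [Ca2+]i in nM, with a synthetic trace as input:

```python
import numpy as np

def net_ca_response(trace_nM, t_s, agonist_t, baseline_s=60.0, thresh_nM=50.0):
    """trace_nM: [Ca2+]i trace in nM; t_s: time stamps in s;
    agonist_t: time of agonist addition in s. Returns (delta, is_positive)."""
    pre = (t_s >= agonist_t - baseline_s) & (t_s < agonist_t)
    basal = trace_nM[pre].mean()              # mean of the 60 s before agonist
    peak = trace_nM[t_s >= agonist_t].max()   # peak reached after exposure
    delta = peak - basal
    return delta, delta > thresh_nM           # >50 nM counts as a response

# Example with a synthetic trace sampled at 1 Hz
t = np.arange(0.0, 300.0)
trace = np.full_like(t, 80.0)
trace[t >= 120] += 150.0 * np.exp(-(t[t >= 120] - 120) / 40.0)
print(net_ca_response(trace, t, agonist_t=120.0))   # (~150 nM, True)
```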
Electrophysiology-Recordings were made in whole cell voltage clamp (V_h = −60 mV) mode at 22-24 °C from the somata of small-to-medium TG rat and mouse neurons (20-45 picofarads) as described (21). Data were acquired and analyzed using an Axopatch 200B amplifier and pCLAMP 9.0 software (Molecular Devices). Recording data were filtered at 0.5 kHz and sampled at 2 kHz. Borosilicate pipettes (Sutter, Novato, CA) were polished to resistances of 4-7 megohms in whole cell pipette solution. Access resistance (R_s) was compensated (40-80%) when appropriate up to the value of 10-15 megohms.
Currents were considered positive when their amplitudes were 5-fold bigger than the displayed noise (in root mean square). Standard external solution contained 140 mM NaCl, 5 mM KCl, 2 mM CaCl2, 1 mM MgCl2, 10 mM D-glucose, and 10 mM HEPES, pH 7.4. The standard pipette solution for the whole cell configuration contained 140 mM KCl, 1 mM MgCl2, 1 mM CaCl2, 10 mM EGTA, 10 mM D-glucose, 10 mM HEPES, 2 mM GTP, 5 mM ATP, pH 7.3. Ca2+-free conditions were constituted with standard external solution without Ca2+ and standard pipette solution without Ca2+, in which 10 mM EGTA was replaced with 10 mM BAPTA. Drugs were applied using a fast, pressure-driven and computer-controlled 8-channel system (ValveLink8; AutoMate Scientific, San Francisco, CA).
Immunoreactive Calcitonin Gene-related Peptide (iCGRP) Release Assay-All release assays were performed on 5-7-day TG neuronal cultures, at 37 °C, using modified Hanks' (Invitrogen) buffer (10.9 mM HEPES, 4.2 mM sodium bicarbonate, 10 mM dextrose, and 0.1% bovine serum albumin were added to 1× Hanks') as described previously (27). After two initial washes, a 15-min base-line sample was collected. The cells were then pretreated with either vehicle or kinase antagonists and then exposed to vehicle, PRL (1 μg/ml), or mixes of PRL and kinase antagonists for 15 min. Finally, capsaicin (30 nM)-evoked iCGRP release was performed for 10 min, and the supernatants were collected for analysis of iCGRP content by radioimmunoassay. Radioimmunoassay was performed as described in detail previously (28). The basal release was typically 6-8 fmol/well.

Data Analysis-Statistical analysis was performed using GraphPad Prism 5.0 (GraphPad, San Diego, CA) as specified in the legends to figures. The data in figures are given as means ± S.E., with the value of n referring to the number of analyzed cells or trials for each group. Experiments were performed at least in duplicate. Significant differences between groups were assessed by one-way or two-way analysis of variance (ANOVA) with Bonferroni's multiple comparison post hoc test (compares all pairs of columns). Two conditions were compared using an unpaired t test. A difference was accepted as significant when p < 0.05.
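A sketch of the described statistical workflow (scipy is assumed available; the group values are placeholders, and the pairwise Bonferroni step is a common simplification of Prism's post hoc test, not the package's exact implementation):

```python
from itertools import combinations
from scipy import stats

# Placeholder measurements for three treatment groups (not data from the study)
groups = {
    "vehicle": [1.0, 1.2, 0.9, 1.1, 1.0],
    "PRL": [1.8, 2.1, 1.9, 2.3, 2.0],
    "PRL+BIS": [1.1, 1.3, 1.0, 1.2, 1.1],
}

f_stat, p_val = stats.f_oneway(*groups.values())   # one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

pairs = list(combinations(groups, 2))
for g1, g2 in pairs:
    t_stat, p = stats.ttest_ind(groups[g1], groups[g2])
    # Bonferroni: multiply each pairwise p by the number of comparisons
    print(f"{g1} vs {g2}: adjusted p = {min(1.0, p * len(pairs)):.4f}")
```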
Transient Signaling Pathways Involved in PRL-induced Sensitization of the TRPV1 Channel in Female Rat TG Neurons-PRL is able to exert transient effects in a variety of cell types (13), including neurons, where the acute application (3-10 min) of PRL (0.1 μg/ml) regulates TRPV1, TRPA1, and TRPM8 channel activities in female, but not male, rat sensory neurons (2,7). Signaling pathways underlying this transient action of PRL on sensory neurons are unknown. PRL-induced enhancement of TRP responses is most likely mediated by kinases because we have detected direct phosphorylation of the TRPV1 channel by acute (5-min) application of PRL (1 μg/ml) to TG neurons from OVX-E female rats (7). Activation of several kinases, including protein kinase C (PKC (29)), protein kinase A (PKA (30,31)), and phosphatidylinositol 3-kinase (PI3K (32)), can lead to phosphorylation of TRPV1. Accordingly, we employed pharmacological and cell biology approaches in combination with Ca2+ imaging, iCGRP release assays, and whole cell voltage clamp recording to investigate the contributions of several acute signaling pathways in the regulation of capsaicin (specific TRPV1 agonist) responses by PRL in female rat TG sensory neurons.
TG neurons were pretreated for 5 min with vehicle or selective intracellular pathway inhibitors, followed by co-treatment for 5 min with PRL or vehicle as indicated in Fig. 1B. The kinase inhibitors on their own had no effect on either basal Ca2+ levels or the CAP (50 nM)-evoked intracellular Ca2+ ([Ca2+]i) rise in TG neurons (Fig. 1, A, C, and D). PRL (1 μg/ml)-induced sensitization of CAP-evoked [Ca2+]i accumulation was substantially reversed by the pan-PKC inhibitor bisindolylmaleimide I (BIS; 0.5 μM; Fig. 1, A and B). Similarly, PRL sensitization effects were also inhibited by blockade of the PI3K pathway with LY294002 (20 μM; Fig. 1C) (32). However, pretreatment of TG neurons with the PKA inhibitor KT-5720 (0.2 μM) had no effect on PRL augmentation of the CAP-evoked [Ca2+]i rise, even though it did inhibit prostaglandin E2 (1 μM)-induced enhancement of CAP responses (Fig. 1D).
PRL may contribute to the initiation of local inflammation (4). This effect could be either via activation of immune cells or due to PRL-driven neurogenic inflammation (7). Therefore, to test whether PKC and PI3K mediate PRL-induced sensitization of CAP-evoked exocytosis, we evaluated the actions of kinase inhibitors on PRL-sensitized, CAP-evoked iCGRP release as a measure of neurogenic inflammation. Fig. 2, A and B, demonstrates that pretreatment (15 min) and co-treatment (15 min) of cultured TG neurons with the PKC inhibitor BIS (1 μM) and the PI3K inhibitor LY294002 (20 μM) completely abolished the previously observed PRL (1 μg/ml)-induced augmentation of CAP (30 nM)-evoked iCGRP release from cultured TG neurons.
We next examined PRL enhancement of CAP responses with alternative kinase inhibitors using whole cell patch voltage clamp recording. Pretreatment with kinase inhibitors was performed as for Ca2+ imaging. PRL (0.1 μg/ml) sensitization of the CAP (50 nM)-gated current (I_CAP) was significantly reversed by pretreatment with the potent and selective PKC inhibitor GF 109203X (0.1 μM; Fig. 3, A and B). The selective PI3Kγ inhibitor AS 605240 (0.1 μM), which displays 30-fold selectivity over PI3Kδ and PI3Kβ and 7.5-fold selectivity over PI3Kα (33), also blocked sensitization of I_CAP by PRL (Fig. 3C). Unlike the PKC and PI3Kγ blockers, the antibiotic herbimycin A (0.5 μM), which antagonizes the Src family of kinases including BCR-ABL tyrosine kinases (34), did not produce an inhibitory effect on PRL-evoked potentiation of I_CAP (Fig. 3D).
The PKC inhibitors used in this study are known to affect several PKC isoforms with different affinities (35). Because the exact isoform or combination of isoforms that mediates PRL sensitization of TG neurons is largely unknown, we carried out experiments to identify the isoforms involved. There are two classes of PKC isoforms: [Ca2+]i-dependent and -independent (36). We found that PRL (0.1 μg/ml) is able to sensitize CAP responses in TG neurons in the presence and absence of extracellular and intracellular Ca2+ in the recording solutions (Fig. 4A). This result implies that the transient PRL effect in sensory neurons is probably not mediated by the PKCα and PKCβ isoforms (37). In the next experiments, the PKCδ, PKCε, and PKCζ isoforms were blocked with selective peptide-based inhibitors (38-40). PKCδ and PKCε were inhibited with the cell-permeable N-myristoylated peptides δPKC(8-17) and ε-V1-2 PKC (both made by AnaSpec), respectively. The PKCζ inhibitor is a pseudosubstrate peptide attached to the cell-permeabilizing Antennapedia domain vector peptide (Tocris). PRL (1 μg/ml)-evoked sensitization of CAP responses was effectively blocked by the PKCε, but not the PKCζ, inhibitor (Fig. 4B), whereas the PKCδ inhibitor had a partial effect (Fig. 4B). These results are in accordance with a previous publication that demonstrated an important role for PKCε in the regulation of a heat-gated channel (now known as TRPV1) in sensory neurons (24).
PKCε is an important isoform of the PKC family that is expressed in sensory neurons and plays a key role in sensitization of TRPV1 by inflammatory mediators (24,41). Moreover, PKCε is translocated to the plasma membrane of TG neurons upon activation (24,42), providing a convenient measure of kinase activity. Thus, we next evaluated the possible PRL activation of PKCε as detected by the translocation of this kinase to the plasma membrane. We freshly isolated female rat TG neurons and cultured them for up to 6 h. The cultures were supplemented with 100 ng/ml NGF and estradiol (50 nM). TG neurons were treated for 5 min at 37 °C with vehicle, PRL (1 μg/ml), or PMA (0.5 μM). Following treatments, TG neurons were fixed with 4% formalin and processed for IHC with antibodies against PKCε and TRPV1. PKCε translocation was measured only in TRPV1-positive neurons. Fig. 5 illustrates that PRL (1 μg/ml; panel B) pretreatment, unlike vehicle (panel A), triggers translocation of PKCε to the plasma membrane of TRPV1-positive TG neurons (Fig. 5, D and E). Moreover, PRL-induced translocation of PKCε was mimicked by treatment of TG neurons with PMA (0.5 μM), a direct activator of several PKC isoforms (Fig. 5, C-E, and Refs. 24,42). Altogether, the results obtained with Ca2+ imaging, whole cell recording, iCGRP release, and PKC translocation assays, combined with pharmacological tools, demonstrate that PRL is able to induce acute effects in female rat TG sensory neurons by activating PKCε and PI3K.
Roles of Short and Long Forms of Prolactin Receptor in Transient Actions of PRL in TG Neurons-It is well documented that PRL exerts long term effects via the JAK/STAT pathway (1). This pathway is triggered by binding of PRL to the PRLR-L, but not the PRLR-S, isoform (43). We found that the transient action of PRL in TG sensory neurons is mediated by PKCε and PI3K (Figs. 1-5). However, it is unclear which PRLR isoforms mediate this effect. To answer this question, we co-transfected CHO cells with GFP, TRPV1, and rat PRLR-L or PRLR-S. GFP transfection was used for visual identification of CHO cells co-transfected with either TRPV1 and PRLR-L or TRPV1 and PRLR-S. Cells co-transfected with TRPV1 and PRLR-L did not exhibit any sensitization of TRPV1 after pretreatment (5 min) of cells with mouse PRL (0.1 μg/ml; Fig. 6, A, right two columns, and C). In contrast, I_CAP was sensitized by PRL (0.1 μg/ml) pretreatment of CHO cells containing TRPV1 and the PRLR-S isoform (Fig. 6, A, left two columns, and B).
The biochemistry of CHO cells and sensory neurons is principally different (21); hence, we evaluated whether PRLR is involved in acute PRL signaling in TG neurons. PRLR-L and PRLR-S were reconstituted in TG neurons derived from PRLR-null mutant mice (PRLR KO). The reconstitution was performed in both male and female TG sensory neurons because PRL action in sensory neurons is critically sex-dependent (2). Fig. 7A and representative traces (Fig. 7B) illustrate that, as expected, PRL (0.1 μg/ml) does not sensitize I_CAP in WT male mice transfected with GFP. However, reconstitution of PRLR-S, but not PRLR-L, in TG neurons from male PRLR KO mice restores the acute action of PRL on TRPV1 (Fig. 7A). In TG neurons from WT female mice, PRL was capable of sensitizing CAP responses (Fig. 7C and Ref. 2). This effect was ablated in TG neurons from PRLR KO mice, indicating that PRL effects are indeed mediated by PRLR (Fig. 7C). Reconstitution experiments revealed that, as seen in male TG neurons, PRLR-S alone can restore transient PRL actions on I_CAP in female PRLR KO TG neurons (Fig. 7C). Importantly, the PRLR-S isoform can heterodimerize with PRLR-L, and this heteromer inhibits PRLR-L function (44-46). This finding implies that tissue-specific relative expression of the PRLR-L and PRLR-S isoforms may alter the physiological effect mediated by PRL. Here, we investigated the role of PRLR-L and PRLR-S co-expression on PRL-induced transient signaling in sensory neurons. Using the experimental approach described above, PRLR-S and PRLR-L were co-transfected at a molar ratio of 1:1 into TG neurons from male or female PRLR KO mice, and CAP responses were assessed after pretreatment with vehicle or PRL (0.1 μg/ml). Fig. 7D illustrates that the co-presence of PRLR-L with PRLR-S results in suppression of the PRL-triggered transient effects seen in male and female TG neurons expressing PRLR-S alone. Altogether, PRL-induced transient enhancement was observed only in TG neurons from male and female mice expressing PRLR-S alone (Fig. 7, A-C), but not co-expressing PRLR-S and PRLR-L at an approximately equal ratio (Fig. 7D).
Expression of PRLR-L and PRLR-S in TG Sensory Neurons/Nerves from Rats and Humans-Differential expression patterns for PRLR isoforms in certain regions of the brain and in non-neuronal cells are well documented (16,23,47). A possible differential expression in sensory neurons appears important because cell signaling pathways via the PRLR-L and PRLR-S isoforms are different (Figs. 6-8 and Ref. 1). Furthermore, co-expression of PRLR isoforms could affect the function of each (Figs. 7D and 8 and Refs. 44,46). Even so, the precise expression patterns of PRLR isoforms in various types of neurons are still not clear.
Here, we investigated expression patterns for PRLR-L and PRLR-S in TG neurons from female and male rats and in human female nerves. The main technical difficulty for this study is that rat isoform-specific PRLR antibodies are not available, thus mandating the use of in situ hybridization for the rat studies. In addition, specific probes for PRLR-S produced a weak in situ hybridization signal, hampering interpretation of the cell distribution of PRLR-S. Therefore, to evaluate the expression of PRLR isoforms in rat TG neurons, we employed antibodies recognizing all PRLR forms together with in situ hybridization with PRLR-L-specific probes. Subtraction of the cell counts for PRLR-L-positive neurons from overall PRLR expression allows an estimate of the percentage of TG neurons expressing PRLR-S. In situ hybridization with the PRLR-L-specific antisense, but not sense, probes showed that PRLR-L is expressed in only 4.6% (n = 31 of 672) of female and 3.8% (n = 24 of 624) of male rat TG neurons (Fig. 9, A-C). In contrast, IHC with the 1A2B1 antibody, which recognizes both the PRLR-L and PRLR-S isoforms, labeled 59.3% of female rat TG neurons (n = 308 of 522; Fig. 9D). Approximately 50-60% of PRLR-positive neurons also expressed TRPV1 (Fig. 9D). Expression of PRLR is also prominent in satellite glial cells (Fig. 9E).
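The subtraction estimate can be made explicit with the reported counts (ignoring possible co-expression, as the described approach assumes):

```python
total_prlr = 308 / 522   # fraction of pan-PRLR IHC-positive female TG neurons
prlr_l = 31 / 672        # fraction of PRLR-L in situ-positive female TG neurons
print(f"estimated PRLR-S fraction: {total_prlr - prlr_l:.1%}")   # ~54.4%
```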
Recently, isoform-specific antibodies for human PRLR have been developed and characterized (23). These antibodies do not label rat and mouse PRLR isoforms. Therefore, we utilized them to evaluate PRLR isoform expression in female human dental pulps containing sensory nerves originating from the TG. IHC of longitudinally sectioned female dental pulps shows that both PRLR isoforms (the long form identified as PRLR-LF and the short form as PRLR-SF1a) are present in nerve fibers (Fig. 10, A-D). The axoplasm of many dental nerve fibers expresses NFH (seen as green in Fig. 10, A and C), and the myelin of myelinated nerve fibers can be stained with MBP. Therefore, we used NFH and MBP antibodies in combination with the PRLR isoform-specific antibodies to critically evaluate expression in nerve fiber axoplasm with the use of transverse nerve sections from the female human dental pulp. Fig. 10E (left panel) illustrates that PRLR-LF is predominantly co-expressed with MBP (green), but not in NFH-identified nerve fiber axoplasm (blue). Conversely, PRLR-SF1a is co-expressed with both MBP (green) and NFH (blue) in a subset of female human dental pulp nerves (Fig. 10E, right panel). Similar patterns were observed for antibodies against PRLR-SF1b (data not shown). In summary, PRLR-L is expressed mainly in satellite glial cells of the female rat TG, with expression in <4% of neurons and nerves. In human female dental pulp nerves, the PRLR-L isoform is expressed predominantly in the myelin sheath wrapping nerves. The predominant PRLR isoform of TG neurons and nerves in female rats and humans is PRLR-S, which is also present in the myelin sheath.
DISCUSSION
PRL can modulate the function of CNS and PNS neurons, and it plays a critical role in diverse bodily functions such as maternal behavior (48), appetite (49), sexual receptivity (50), and hypersensitivity to painful stimuli (2-4), where the action of PRL on neurons can be either acute/transient or long term. Transient effects of PRL have been characterized in many neuronal types including gonadotropin-releasing hormone neurons, tuberoinfundibular dopamine neurons, the anteroventral periventricular nucleus, and dorsal root ganglion and TG sensory neurons (2,7,17,51,52). Even though it is well known that long term effects of PRL are mainly mediated via the JAK/STAT pathways controlling modulation of gene expression (15,43), the molecular mechanisms underlying the transient effects of PRL in neurons are not known. Thus, a major focus of the present study was to identify the mechanisms responsible for the acute actions of PRL in TG sensory neurons.
In this study, we show that PRL evokes transient effects in sensory neurons via activation of the PKCε or PI3K pathways. PRLR belongs to the class 1 cytokine receptor family, and this finding is consistent with the ability of other members of this family to independently activate both the PKC and PI3K pathways (12). Class 1 cytokine receptors can also activate the PKC and PI3K pathways downstream of each other (53). However, it is not clear whether these kinases are activated independently in TG sensory neurons. Activation of these cellular signaling pathways by PRL in non-neuronal cells has been demonstrated previously and includes PRL activation of PKC in immune cells (54), PKC and PI3K in prostate cells (55), PI3K in decidual cells (56), and PKC and PI3K in breast tumor and lymphoma cell lines (57,58). Involvement of PI3K in the acute action of PRL on CNS neurons has recently been reported (17). Our previous studies have also shown that PRL is able to rapidly (within 15 min) phosphorylate the TRPV1 channel in TG neurons (2,7).
Acute effects of hormones, such as PRL, and of neurotransmitters include regulation of the activity of many ligand-gated and voltage-gated channels, and this regulation has critical implications for nervous system function. For example, PRL can transiently modulate Ca2+-dependent K+ channels (BK channels) and TRP-like channels in tuberoinfundibular dopamine neurons (17), where this modulation could enhance the activity of tuberoinfundibular dopamine neurons, leading to dopamine release and eventually affecting lactation, sexual libido, fertility, and body weight (17). PRL also acutely sensitizes TRP channels in dorsal root ganglion and TG neurons (2,7). This sensitization can result from the elevated local PRL levels seen in tissues after inflammation or surgical procedures, thus contributing to the development of thermal hyperalgesia (2-4). Our findings suggest that the increased PRL levels seen in a variety of painful pathological conditions could rapidly regulate neuronal activities via activation of the PKC and PI3K pathways, and this regulation contributes to nociceptor sensitization.
The PRLR gene is alternatively spliced to generate isoforms that differ in the length and amino acid sequence of their cytoplasmic tails, whereas the extracellular PRL-binding domain is identical for all PRLR isoforms (13,14). The two main isoforms of PRLR in humans and rats are the short and long forms (1,15,59). The PRL-induced JAK/STAT pathway is triggered by activation of the PRLR-L, but not the PRLR-S, isoform, as PRLR-L contains both of the required domains, box-1 and box-2 (Fig. 8, A and C, and Refs. 1,43). The roles of PRLR isoforms in mediating the transient effects of PRL in neurons were not clear, so we addressed this issue, and our results demonstrate that the transient effects of PRL in TG sensory neurons of male and female rats can be exclusively mediated by PRLR-S (Figs. 6 and 7). Interestingly, the action of PRLR-S can be suppressed by co-expression with PRLR-L (Figs. 7D and 8D). PRLR-L and PRLR-S can heterodimerize when heterologously expressed and in breast cancer cells (44,45). Heterodimerization between PRLR-S and PRLR-L can also lead to suppression of PRLR-L-induced gene transcription in non-neuronal cells (44,46). It has previously been shown that two intramolecular disulfide (S-S) bonds within the extracellular subdomain 1 (D1; see Fig. 8A) of PRLR-S contribute to the inhibition of PRLR-L functions (46). Our data imply that intracellular domain(s) of PRLR-L located within the 293-430 amino acid region are involved in inhibition of the PRLR-S transient effects (Fig. 8D). This region also contains box-2 (Fig. 8A), which is critical for tyrosine phosphorylation of STAT5 (Fig. 8, B and C). The functional interaction between PRLR-L and PRLR-S suggests that the relative expression levels of these isoforms in neurons and other cells could have critically important effects (46). Our data indicate that PRLR-L is expressed predominantly in glial cells of the TG in female rats and in female human dental pulp tissues (Figs. 9 and 10). In contrast, PRLR-S is present in TG neurons/nerves as well as in glial cells in the female rat TG and female human dental pulps (Figs. 9 and 10). Such differential expression of the PRLR-L and PRLR-S isoforms has been reported in different regions of the brain (16,47,60). Further, the relative levels of PRLR isoforms could change during pathological conditions, and in this respect, our findings and the findings of others on the functional interaction between PRLR isoforms appear physiologically relevant.
It is well documented that the nature and magnitude of PRL responses are regulated by estrogen (7,61) and during lactation in rodents (16), and to a lesser extent in humans (53). Thus, in the male, compared with the ovariectomized, estradiol-replaced (OVX-E) rat, PRL expression was lower in the hypothalamus and lacking in the corpus striatum (61). This estrogen regulation of PRL responses may be an important contributor to sex-dependent pain. For example, transient effects of PRL on the TRPV1 channel are weak or absent in TG neurons from male and female OVX rats, yet present or restored in neurons from estrus female or OVX-E rats, respectively (Fig. 7 and Refs. 2,7). Nevertheless, overexpression of PRLR-S in male sensory neurons restores transient PRL effects on the TRPV1 channel (Fig. 7, A and B). This observation implies that PRLR-S expression is higher in TG neurons of females than of males, but this point requires further investigation. It is noteworthy that quantitative methods (such as RT-PCR) that could be employed to evaluate the relative expression of PRLR isoforms are not suitable for this study because both PRLR isoforms are expressed in non-neuronal cells in the TG. Thus, characterization of PRLR isoform expression with isoform-specific antibodies is crucial to understand the differences in signaling and effect in neuronal versus non-neuronal cells.
PRL contributes to hyperalgesia/pain in a sex-dependent fashion (2-4). Because local trauma and inflammation lead to an increase in peripheral levels of PRL in female and, to a lesser extent, in male rats, it could be suggested that transient modulation of TRP channels by PRL in the periphery is one of the underlying mechanisms for PRL-induced thermal (i.e. heat and cold) hyperalgesia (2-4). TRP channels are also involved in hypersensitivity to thermal and mechanical stimuli at presynaptic levels in the dorsal horn of the spinal cord and certain brain stem regions (62,63). In this respect, PRLR isoforms may also contribute to the transient presynaptic regulation of TRPs (3). Further, expression patterns of PRLR isoforms could be altered during pathological pain conditions, such as systemic inflammation, autoimmune diseases, stress, and trauma. Taking this into account, PRL could produce hypersensitivity to thermal and mechanical stimuli (possibly in a sex-dependent manner) by more efficiently up-regulating the nociceptive transmission between central terminals of nociceptors and dorsal horn spinal cord neurons via different mechanisms and PRLR isoforms. Collectively, this study reports the molecular mechanism underlying transient effects of PRL in sensory neurons of female and male rats. Our findings also suggest that the possible interaction between PRLR-L and PRLR-S and the resulting cell signaling differ between transient and long term effects of PRL in neuronal and non-neuronal cells.

FIGURE 10. Expression of PRLR isoforms in female human dental pulp. A and B, IHC shows expression of PRLR-L (red) in a human female dental pulp. Nerves are revealed by the marker NFH (green), and nuclei are stained by TO-PRO (blue). C and D, IHC shows expression of PRLR-S (red) in a human female dental pulp. NFH (green) is a nerve fiber marker, and nuclei are stained by TO-PRO (blue). E, IHC shows expression of PRLR-L (red; left panel) and PRLR-S (red; right panels) in cross-sections of human female dental nerve fibers. The nerve fiber marker is NFH (blue), and the myelin sheath is identified with MBP (green).
Influence of thermal growth parameters on the SiO2/4H-SiC interfacial region
In order to elucidate the origin of the electrical degradation of SiC caused by thermal oxidation, 4H-SiC substrates were thermally oxidized under different conditions of time and pressure. Results from nuclear reaction analyses were correlated with those from electrical measurements. Although the increases in the flatband voltage shift and in the film thickness were related to the oxidation parameters, the results exclude the thickness of the SiO2/4H-SiC interfacial region and the amount of residual oxygen compounds present on the SiC surface as the main cause of the electrical degradation from SiC oxidation.
Silicon carbide (SiC) is a promising semiconductor to replace Si in micro- and nanoelectronic device applications that require high power, high frequency, and/or high temperature.1,2 Besides, a SiO2 film can be thermally grown on SiC in a similar way to that on Si, allowing the technology used to produce MOS (metal-oxide-semiconductor) devices to be adapted to the case of SiC.3 Nevertheless, the oxidation of SiC leads to a higher interface state density (Dit) at the SiO2/SiC interface as compared to the Si case.1 Although successful routes to reduce Dit have been achieved, like thermal treatments involving NO, N2O, and H2,4-6 the nature of the defects responsible for the electrical degradation in the SiO2/SiC interfacial region is not yet completely understood.
Concerning SiC oxidation, an excess of carbon near the formed interfacial region was observed by different techniques,7-10 although medium energy ion scattering (MEIS) analysis indicates a stoichiometric SiO2 film formed from the SiC oxidation.11 Besides, a non-abrupt interface between SiO2 films and SiC was revealed by nuclear reaction profiling (NRP),12-14 differently from the case of silicon oxidation.12,15 Moreover, residual oxygen-containing compounds remain on the SiC surface after removal of the oxide film.16-20 Such compounds exhibit different properties when using Si- or C-face terminated substrates.20 These compounds were not observed in the silicon case,19 being probably related to the presence of silicon bonded to oxygen and to carbon in different stoichiometries, named silicon oxycarbides (SiCxOy).21,22 Many trials to remove these compounds in wet environments were unsuccessful, evidencing their high chemical resistance.19 However, the use of a flux of O2 bubbled in hot H2O2 proved to be efficient in partially removing these SiCxOy compounds,23 reducing Dit at the SiO2/4H-SiC interface and decreasing the interfacial thickness after further reoxidation steps.14 In order to elucidate how these residual compounds and how the SiO2/4H-SiC interfacial region thickness determined by NRP influence the electrical properties of SiC MOS structures, more investigations must be performed. In this work, we propose to investigate the relation between these characteristics obtained by nuclear reaction analyses and the modification in the electrical properties induced by the thermal growth parameters oxidation time and oxygen pressure. Thus, we expect to achieve a better understanding of SiC thermal oxidation and of the origin of the electrical defects present in the SiO2/SiC interfacial region.
To achieve these goals, different subatmospheric oxygen pressures (of 18O2) and oxidation times were used to thermally grow thin SiO2 films on 4H-SiC substrates. Samples were probed by nuclear reaction analysis (NRA) to determine the total amount of oxygen incorporated before and after removal of the Si18O2 film, and by NRP to determine its depth distribution. Current-voltage (I-V) and capacitance-voltage (C-V) measurements were performed on Al/SiO2/4H-SiC MOS structures and correlated with the other results.
Commercial SiC wafers of the 4H polytype, polished on both the (0001) and (000-1) faces (terminated in Si and C, respectively), were employed as substrates. Samples characterized by electrical measurements were 4H-SiC (n-type) commercial epitaxial wafers, 8° off-axis on the Si face, doped with nitrogen (1.1 × 10^16 cm^-3), 4.6 μm thick. Wafers were purchased from CREE Inc. Research. All substrates were cleaned in a mixture of H2SO4 and H2O2 followed by the standard RCA (Radio Corporation of America) process.24 Then samples were etched for 60 s in a 1 vol. % aqueous solution of hydrofluoric acid (40 wt. % HF, purchased from Merck) and rinsed in deionized water. Immediately after blow drying with N2, 4H-SiC samples were loaded into a static-pressure, quartz tube, resistively heated furnace that was pumped down to 10^-7 mbar. SiO2 films were thermally grown at 1100 °C under different oxygen pressures (50, 100, and 200 mbar) and oxidation times (0.5, 1, 2, 3, and 4 h) of dry O2 (<1 ppm H2O) enriched to 97% in the 18O isotope, whose natural abundance is 0.2%, named 18O2. Oxygen pressures higher than 200 mbar were not employed in this work due to the use of a N2(l) trap to help reduce the base pressure (mainly by condensation of H2O molecules) while keeping O2 molecules in the gas phase; these would condense at higher pressures. The use of 18O is crucial, as the nuclear reaction analyses employed allow it to be distinguished from oxygen eventually incorporated from other sources (for instance, from exposure to the ambient). The total amount of 18O in the resulting samples was determined by NRA using the 18O(p,α)15N nuclear reaction at 730 keV,25 referenced to a standard Si18O2 film on Si.26 The depth distribution of 18O in the samples was determined by NRP using the narrow resonance at 151 keV in the cross section curve of the 18O(p,α)15N nuclear reaction. 18O concentration profiles were determined from experimental excitation curves (alpha particle yield versus incident proton energy)27 using the FLATUS code. With the experimental conditions used in this work, a sub-nanometric resolution can be obtained near the surface. Al thermal evaporation to obtain MOS structures used a mechanical mask, forming circular capacitors with a diameter of 200 μm. An InGa eutectic was used as back contact. Samples were electrically characterized using a computer-controlled HP4155A Semiconductor Parameter Analyzer for the I-V curves. The C-V curves were taken from inversion to accumulation at 100 kHz with a 0.25 V/s rate using a HP4284A Precision LCR Meter.
The total amount of 18O and the corresponding Si18O2 film thickness of the SiC samples before etching are presented in the top panel of Figure 1. These results are presented as a function of the product of pressure and time (p × t). The motivation for such a plot is that, for both the Si and SiC oxidations, despite being valid for a different thickness range (films thicker than ∼25 nm28,29), a given SiO2 film thickness can be reached by keeping the product of oxygen pressure and time constant. For the SiC case, the p × t dependence was investigated for oxygen pressures higher than those of the present work, presenting deviation from this behavior for pressures higher than 1 atm in the case of Si-face samples. In the present samples, whose thicknesses are in the 3-8 nm interval for the Si face and 7-24 nm for the C face, a linear behavior of the amount of incorporated 18O can be observed for both faces, although a more rapid oxide growth rate was expected up to ∼10 nm for both Si and SiC oxidation.30,31 It is possible to convert the 18O amount into oxide film thickness by assuming a given density for the oxide film. In the present case, 2.21 g/cm3, typical of silicon dioxide films thermally grown on Si, was assumed. However, this conversion might not be accurate due to modifications in the oxide density in the initial oxidation steps.32 Nevertheless, what is being highlighted from these results is that the linear dependence on (p × t) of the amount of 18O incorporated in the silicon oxide films is still valid for the thermal oxidation conditions tested on both the Si and C faces of SiC. It means that, for the initial stages of oxidation, this relation between oxygen pressure and oxidation time can be used to determine the SiO2 film thickness on SiC. The residual amount of 18O after etching in aqueous HF as a function of oxidation time is presented in the bottom panel of Figure 1 for samples synthesized under different oxidation time conditions. As already observed,20 the Si face presents higher residual oxygen amounts than the C face. However, in the present results, no relation with oxidation time was observed, indicating that the amount of residual compounds is not affected by the oxidation parameters tested.
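For reference, the conversion from a measured 18O areal density to an equivalent film thickness under the stated density assumption (2.21 g/cm3) is a one-line calculation; the sketch below assumes Si(18O)2 formula units, and the input areal density is an arbitrary example, not a measured value:

```python
N_A = 6.022e23               # atoms/mol
RHO_SIO2 = 2.21              # g/cm3, assumed film density (thermal SiO2 on Si)
M_SI18O2 = 28.09 + 2 * 18.0  # g/mol, assuming Si(18O)2 units

def thickness_nm(o18_areal_density_cm2):
    """SiO2-equivalent thickness from an 18O areal density (atoms/cm2).
    Each SiO2 formula unit carries two oxygen atoms."""
    units_per_cm2 = o18_areal_density_cm2 / 2.0
    t_cm = units_per_cm2 * M_SI18O2 / (RHO_SIO2 * N_A)
    return t_cm * 1e7  # cm -> nm

print(round(thickness_nm(2.0e16), 1))   # ~4.8 nm for an example areal density
```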
Figures 2(a) and 2(b) show the excitation curves of the 18O(p,α)15N nuclear reaction and the 18O concentration profiles obtained for 4H-SiC samples submitted to different 18O2 pressures and oxidation times at 1100 °C. The horizontal lines at 97% observed in the Figure 2 profiles refer to a stoichiometric Si18O2 film, while the decrease of the 18O concentration towards zero refers to its concentration in the SiO2/4H-SiC interfacial region. Interfacial region thicknesses around 3 nm (considering the thickness interval from when the 18O concentration starts to decrease until this concentration is negligible) can be observed for all C-face oxidized samples, in good agreement with our previous results.13,14 For Si-face oxidized samples, only the sample oxidized under 50 mbar presented a thicker interfacial region (around 3.8 nm). The decrease in the thickness of the SiO2/SiC transition layer as the oxide film thickness is increased during the initial stages of oxidation can be attributed to a smoothing effect of the interface, as suggested by Szilágyi et al.32 No modifications in the SiO2/SiC interfacial region thickness attributable to the oxygen pressure and oxidation time were observed.
To investigate the SiO2/SiC interfacial region thickness under conditions of a longer oxidation time, a sample was oxidized for 10 h in 100 mbar of 18O2 at 1100 °C. To avoid a degradation of the depth resolution around the interfacial region due to the thicker film, the upper part of the film was partially removed with a controlled etching19 for 290 s. In this way, the final film thickness was almost the same as that of the sample oxidized for 1 h, simplifying the comparison. The excitation curve of the 18O(p,α)15N nuclear reaction and the 18O concentration profile on the C face are presented in Figure 2(c) and compared to those of a sample oxidized for 1 h in 100 mbar of 18O2. No major modification was observed in the SiO2/SiC interfacial region thickness, confirming the absence of influence of the oxidation time on this property.
I-V and C-V curves for samples oxidized at 1100 °C for 1 h in 100 mbar, 1 h in 200 mbar, and 4 h in 100 mbar are presented in Figure 3, and their results are summarized in Table I. The I-V curves presented breakdown fields around 8.0 MV/cm, almost independent of the oxidation parameters, indicating minor modifications in the nature of the SiO2 from the oxidation conditions tested. C-V curves indicate that the increase of the flatband voltage (Vfb) follows the p × t behavior, and the increase of both oxidation parameters induced a higher effective charge concentration, although not in the same proportion, with oxygen pressure being more important than oxidation time in inducing the highest effective charge concentration. Nevertheless, this is considered an important finding, since the presence of effective negative fixed charge plays a major role in the effective mobility of SiC-based MOSFETs.33 These results reveal that a higher oxygen pressure also induces a larger electrical degradation, similar to the well-known effect of longer oxidation times.34,35 Thus, the electrical degradation seems to be controlled by the p × t parameter, i.e., an oxidation parameter that accelerates the SiO2 film growth should lead to a larger electrical degradation. Therefore, alternative ways to obtain SiO2 films on SiC, such as thermally growing a very thin and stoichiometric SiO2 film under a minimal oxidation condition followed by SiO2 film deposition,36 oxidation of a Si/SiC heterojunction produced by a layer-transfer process,37 or even direct deposition of the SiO2 film on the SiC substrate, reducing the influence of the substrate on the thermal oxide formation,38 should be investigated in order to minimize the formation of electrically active defects in the SiO2/SiC structure.
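The connection between a flatband-voltage shift and an effective charge density follows the standard MOS relation N_eff = C_ox ΔV_fb / q; a sketch with illustrative numbers (the 20 nm oxide thickness and 1 V shift are assumptions for the example, not values from Table I):

```python
EPS_0 = 8.854e-14   # F/cm, vacuum permittivity
K_SIO2 = 3.9        # relative permittivity of SiO2
Q_E = 1.602e-19     # C, elementary charge

def n_eff_cm2(delta_vfb_V, t_ox_nm):
    """Effective charge density (cm^-2) from a flatband-voltage shift."""
    c_ox = EPS_0 * K_SIO2 / (t_ox_nm * 1e-7)   # oxide capacitance, F/cm2
    return c_ox * delta_vfb_V / Q_E

print(f"{n_eff_cm2(1.0, 20.0):.2e}")   # ~1.1e12 cm^-2 for a 1 V shift, 20 nm oxide
```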
Results presented in this work exclude the thickness of the SiO 2 /4H-SiC interfacial region and the amount of residual compounds present on the SiC surface as the main causes of the electrical degradation originating from SiC thermal oxidation. Concerning the effective negative fixed charge observed in our electrical measurements, Ebihara and co-workers 39 recently attributed such negative fixed charge to a CO 3 -like moiety formed by the interaction of the SiO 2 film with residual carbon atoms. Therefore, a possible explanation for the origin of the electrical degradation caused by SiC thermal oxidation is the interaction of the oxidation by-products with the SiO 2 bulk. 10 By-products formed in this work would contain 18 O and can be incorporated in the solid phase at all depths of the film. When the film is etched in HF, the only compounds that remain on the sample are the insoluble ones located in the former near-interface region. Thus, the fact that no modification in the amount of residual compounds was observed after the removal of the oxide film in HF corroborates this hypothesis.
In summary, this work presented the influence of low oxygen pressures and different oxidation times on the electrical and structural properties of thermally oxidized 4H-SiC, on both the Si and C faces. Although the product of oxygen pressure and oxidation time controlled the total amount of oxygen incorporated on both faces, it affected neither the amount of residual compounds left after etching the sample in HF nor the SiO 2 /4H-SiC interfacial region thickness. On the other hand, V fb was influenced by those parameters, which induced higher negative effective charge concentrations, indicating that increasing the oxygen pressure during thermal oxidation can induce an electrical degradation in a way similar to increasing the oxidation time. The possibility that this electrical degradation originates in the interaction of SiC oxidation by-products with the SiO 2 bulk should not be disregarded.
FIG. 1. (Top) 18 O amounts obtained by NRA from samples submitted to different oxidation times (0.5, 1, 2, 3, and 4 h) and oxygen pressures (50, 100, and 200 mbar) at 1100 • C on 4H-SiC, on both Si and C faces. Film thicknesses were calculated assuming that the SiO 2 density on SiC is 2.21 g cm −3 . (Bottom) Amount of residual 18 O after etching in aqueous HF of samples oxidized at 1100 • C and 100 mbar of 18 O 2 under different oxidation time conditions. Error bars correspond to an experimental accuracy of 5%.
FIG. 2. Experimental (symbols) excitation curves of the 18 O(p,α) 15 N nuclear reaction around the resonance at 151 keV and the corresponding simulations (lines) for 4H-SiC samples oxidized for 1 h in 50 (triangle and dashed line), 100 (circle and solid line), and 200 mbar (square and dotted line) of 18 O 2 , and for 3 h at 100 mbar (inverted triangle, dotted/dashed line), all at 1100 • C. (a) Si-face and (b) C-face samples. (c) Excitation curves and the corresponding simulations for the same nuclear reaction for a 4H-SiC (C-face) sample oxidized for 10 h in 100 mbar of 18 O 2 at 1100 • C followed by aqueous HF etching for 290 s (diamond and short dotted line) and for a 4H-SiC (C-face) sample oxidized for 1 h in 100 mbar of 18 O 2 at 1100 • C (circle and solid line). Insets: 18 O profiles obtained from the simulation of the excitation curves, using the same line types.
FIG. 3. (a) I-V curves and (b) C-V curves of Al/SiO 2 /4H-SiC structures (open symbols are experimental data; the line is the ideal curve). The arrow indicates where V fb was extracted for the sample oxidized for 4 h; it is omitted for the other samples, which behave alike. Oxygen pressure and oxidation time are indicated. The oxidation temperature was 1100 • C for all samples.
TABLE I. Electrical parameters obtained from C-V and I-V measurements for Al/SiO 2 /4H-SiC Si-face, n-type structures.
"Materials Science",
"Engineering",
"Physics"
] |
Controlling the Adsorption of β-Glucosidase onto Wrinkled SiO2 Nanoparticles To Boost the Yield of Immobilization of an Efficient Biocatalyst
β-Glucosidase (BG) catalyzes the hydrolysis of cellobiose to glucose, a substrate for fermentation to produce the carbon-neutral fuel bioethanol. Enzyme thermal stability and reusability can be improved through immobilization onto insoluble supports. Moreover, nanoscaled matrixes allow for preserving high reaction rates. In this work, BG was physically immobilized onto wrinkled SiO2 nanoparticles (WSNs). The adsorption procedure was tuned by varying the BG:WSNs weight ratio to achieve maximum controllability and maximize the yield of immobilization, while different immobilization times were monitored. Results show that a BG:WSNs ratio of 1:6 wt/wt provides the highest colloidal stability, whereas an immobilization time of 24 h results in the highest enzyme loading (135 mg/g of support), corresponding to an 80% yield of immobilization. An enzyme corona forms within 2 h and gradually disappears as the protein diffuses within the pores. Adsorption into the silica structure causes little change in the protein secondary structure. Furthermore, the supported enzyme exhibits a remarkable gain in thermal stability, retaining complete folding up to 90 °C. Catalytic tests showed that immobilized BG achieves 100% cellobiose conversion. The improved adsorption protocol simultaneously provides high glucose production, an enhanced yield of immobilization, and good reusability, resulting in a considerable reduction of enzyme waste in the immobilization stage.
INTRODUCTION
Enzymes are a family of nontoxic, environmentally friendly biomolecules involved in a plethora of biochemical processes. 1 They are widely used as biocatalysts owing to their outstanding properties, such as effectiveness under milder reaction conditions, higher specificity and selectivity, and faster kinetics with respect to traditional catalysts. 1 However, they suffer from intrinsic instability under harsh operative conditions and are expensive. 2 Several technical challenges need to be overcome to make enzymatic processes economically feasible: the high cost of the enzymes, their low thermal and pH stability causing a loss of activity during the process, the inhibition by reactants and products, and difficult recovery. 2 These drawbacks can be overcome by enzyme immobilization. Indeed, immobilization usually results in increased pH, temperature, and organic solvent tolerance as well as resistance to proteolytic digestion and denaturants. 3,4 The key issue for enzyme immobilization is the selection of the immobilization technique and of the appropriate support. Many different immobilization methods have been proposed to improve biocatalyst efficiency. 5 Among them, physical immobilization is the simplest and can be carried out under mild conditions, 6 being based on physical interactions, such as hydrogen bonding, electrostatic forces, and hydrophobic interactions between the enzyme and the matrix. With this method, the enzyme activity is often preserved, but the immobilized enzyme can have poor operational stability and be subject to leaching. 7 For this reason, the choice of a good support is crucial. It should exhibit thermal and mechanical stability, high surface area, adequate pore diameter, biocompatibility, and chemical affinity toward the enzyme, to create the optimal microenvironment to preserve protein conformation and activity and ensure reusability. 6 In this context, mesoporous SiO 2 nanoparticles are very good supports, owing to a high surface area and tunable porosity allowing for the high loading of guest species. 8−10 Moreover, the great availability of surface hydroxyl groups enables easy chemical functionalization. 11−14 In particular, wrinkled silica nanoparticles (WSNs), which are mesoporous nanoparticles with a central-radial pore structure, are gaining great attention as carriers for enzymes because the conical pore shape helps reduce pore blocking. 15 Furthermore, hierarchical trimodal porosity effectively lowers diffusive limitations for both substrates and products. 15 Another important issue is the colloidal stability of the supported systems, which has a significant effect on the catalytic performances of the immobilized enzymes. 16,17 Indeed, fast self-aggregation or precipitation processes in the reaction media can hinder substrate access or induce unfavorable conformational transitions of the enzyme on the support, 16,18 thus drastically decreasing the biocatalytic activity. These dynamics are often triggered by the complex behavior of enzymes in solution, because proteins can unfold and aggregate, depending on ionic strength and pH values, forming clusters of different sizes. 19 Hence, robust immobilization on the nanoparticles as well as great colloidal and structural stability appears mandatory to design high-performance biocatalysts, reduce preparation costs, and promote higher reusability. 20,21 Protein−nanoparticle interactions have been extensively studied. 22−24
Most nanoparticles are readily covered by a dynamic layer of proteins when put in contact with them, generating what is called a protein corona. No single kind of interaction can be attributed to the protein−surface adsorption; rather, it is generated from a complex interplay of polar and nonpolar interaction mechanisms. 22 Both kinds of interaction can be attractive or repulsive, determining the formation of the corona. With porous nanoparticles, the protein corona that forms can later migrate inside the pores. 25 Recently, we have used WSNs as a matrix to immobilize β-glucosidase (BG) and cellulase. 26,27 BG belongs to the glycosyl hydrolase family, which finds applications in many biotechnological fields. 28,29 It plays a key role in the enzymatic degradation of cellulose, hydrolyzing cellobiose to two glucose molecules and allowing the production of sugars that can be fermented to ethanol. The alcohol thus produced can be used as biofuel, with both environmental and geopolitical benefits. 30 Physical immobilization was carried out to attach BG onto WSNs, leading to an efficient and stable biocatalyst for the hydrolysis of cellobiose. 26 Adsorption allowed for preserving the enzyme native conformation and increasing substrate−enzyme affinity, leading to 100% cellobiose conversion in 2 h. 31 The yield of immobilization (YI), defined as the percentage weight ratio between the adsorbed enzyme and the overall enzyme used in the immobilization step, reached 30%. 26 In a subsequent work dealing with the immobilization of cellulase onto the same nanoparticles, Costantini et al. found that the YI varies with the enzyme concentration in the adsorption environment following an exponential decay function. 27 This result confirmed what was previously observed for lysozyme immobilization into mesoporous silica. 32 Therefore, the lower the enzyme concentration, the higher the YI and thus the lower the enzyme waste. In this work, physical immobilization of BG onto WSNs under diluted conditions was performed. Different enzyme concentrations, corresponding to precise BG:WSNs weight ratios, were investigated with the aim of finding the best conditions to limit the self-aggregation process and enhance the control over the protein−support interaction dynamics. At the same time, the search for the optimal system was intended to optimize the yield of immobilization so as to keep a high enzyme density over the entire surface of the nanoparticles. The most stable BG/WSNs systems were tested in the hydrolysis of cellobiose to glucose and compared with the performances of the previously designed reference system.
Synthesis of Wrinkled SiO 2 Nanoparticles (WSNs).
The preparation of wrinkled SiO 2 nanoparticles (WSNs) was inspired by the synthetic route described by Moon and Lee, 15 suitably modified by using cetyltrimethylammonium bromide (CTAB) instead of cetylpyridinium bromide (CPB) as the templating agent for mesopore formation. 33 Also, a more thorough, 24 h long surfactant removal step was introduced into the preparation protocol. More specifically, 123.68 mL of a solution of IPA and cyclohexane (IPA 3 v/v%) was mixed into an aqueous solution of CTAB (0.01 M) and urea (0.33 M). The reaction mixture promptly turned from transparent to white. Afterward, TEOS was added dropwise to the stirred solution to a final concentration of 0.18 M. Finally, the reaction system was stirred for 30 min at room temperature and then heated to 70°C for 16 h. The obtained nanoparticles were centrifuged, washed three times with ethanol, and subjected to acid extraction of the surfactant by dispersion in an HCl−ethanol solution ([HCl] = 1.3 M) for 24 h at 70°C. Finally, the nanoparticles were collected by centrifugation and washed three times with ethanol.
2.3. Physical Immobilization of BG onto WSNs. Physical immobilization of BG onto WSNs was designed following the protocol reported by Califano et al. 26 However, to define the optimal conditions for enzyme adsorption while preventing self-aggregation, the procedure was carried out under diluted conditions and different BG concentrations were investigated. More precisely, 3 mg of WSNs was dispersed in 9.5 mL of citric acid/sodium citrate buffer (21 mM, pH = 5). A 500 μL amount of each BG solution in buffer was then added to the WSN colloidal suspension. Four BG solutions of different concentrations were tested: 0.6, 1, 1.5, and 3 mg/mL, corresponding to BG:WSNs weight ratios of 1:10, 1:6, 1:4, and 1:2, respectively. Each mixture was kept under mild stirring (400 rpm) at 40°C for 24 h. Then 0.6 mL of each prepared BG/WSN mixture was analyzed through dynamic light scattering (DLS) to identify the best immobilization conditions for enhancing the stability of the supported enzyme. Subsequently, to study the time evolution of the most controllable BG/WSNs system (1:6), 0.6 mL of the prepared mixture was withdrawn after 15 min, 2 h, 6 h, and 24 h and analyzed through DLS, circular dichroism (CD), and ζ-potential measurements. The prepared samples were named BG/WSNs_15 min, BG/WSNs_2h, BG/WSNs_6h, and BG/WSNs_24h. The supported BG/WSNs biocatalysts were collected by centrifugation after double-washing with bidistilled water to perform catalytic assays as well as other physicochemical analyses. This optimized BG/WSNs system was studied in terms of catalytic performances. The yield of immobilization (YI) was evaluated through thermogravimetric analysis (TGA).
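As a quick consistency check of the stated ratios, the arithmetic below maps each BG stock concentration (0.5 mL added to 3 mg of WSNs) to its nominal BG:WSNs weight ratio; the snippet simply mirrors the numbers in the protocol, and the helper code is ours.

```python
# 3 mg WSNs in ~10 mL buffer; 0.5 mL of BG stock added per batch.
wsn_mass_mg = 3.0
stock_volume_ml = 0.5

for stock_mg_per_ml in (0.6, 1.0, 1.5, 3.0):
    bg_mass_mg = stock_mg_per_ml * stock_volume_ml
    ratio = wsn_mass_mg / bg_mass_mg            # parts WSNs per part BG
    print(f"{stock_mg_per_ml:.1f} mg/mL -> BG:WSNs = 1:{ratio:g}")
# prints 1:10, 1:6, 1:4, 1:2, matching the text
```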
Physicochemical Analysis of Morphology, Size Distribution and Solution
Behavior of BG/WSNs. Morphological and dimensional analysis of bare WSNs and BG-loaded WSNs was carried out through transmission electron microscopy (TEM), using a FEI Tecnai G12 Spirit Twin (FEI, Hillsboro, OR) with a LaB6 emission source and an acceleration tension of 120 kV. The images were taken with a CCD FEI Eagle 4k camera. The samples to be measured were prepared by soaking the copper grids used for TEM measurements (400 mesh with a thin carbon film) in an aqueous suspension of the nanoparticles at a concentration of 0.5 mg/mL. The time evolution of the colloidal stability and self-aggregation process of the BG/WSNs systems during the immobilization process was monitored by DLS measurements. 34,35 A homemade experimental setup, composed of a Photocor compact goniometer (Moscow, Russia), an SMD 6000 Laser Quantum 50 mW light source (Laser Quantum, Fremont, CA) operating at 532.5 nm, a photomultiplier (PMT-120-OP/B), and a correlator (Flex02−01D) from Correlator.com (Shenzhen, China), was used. The experimental temperature was fixed at the room value (25°C), while the scattering angle θ was set at 130°. A regularization algorithm 36 was used to analyze the correlation function of the scattered intensity I(t), G 2 (τ) = ⟨I(t)I(t + τ)⟩/⟨I(t)⟩ 2 , where the angular brackets denote an average over time t. The autocorrelation function is necessary to extract information about the colloidal stability of the nanostructures from the random fluctuations of the scattered intensity. The hydrodynamic radius (R H ) of the nanostructures was calculated from the Stokes−Einstein relation, R H = k B T/(6πηD), where k B is the Boltzmann constant, T is the absolute temperature, η is the solution viscosity, and D is the average diffusion coefficient measured in the DLS experiments. For each sample, 12 acquisitions of the scattering intensity lasting 120 s each were collected to obtain good and reproducible statistics. ζ-Potential measurements were performed to assess the nature of the enzyme−support interaction and the influence of the surface charge on the colloidal stability of the BG/WSNs nanosystems. About 600 μL of each suspension at the different immobilization times was analyzed by means of electrophoretic light scattering using a Zetasizer Nano ZSP (Malvern Instruments, England). Each measurement was recorded at 25°C after a 30 s equilibration time, and the average of three measurements at a stationary level was taken. The ζ-potential was calculated by the Smoluchowski model.
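The two relations reconstructed above are straightforward to evaluate numerically. The sketch below, a minimal illustration with assumed values for water at 25 °C rather than the regularization analysis actually used, computes a discrete intensity autocorrelation and converts a diffusion coefficient into a hydrodynamic radius via the Stokes−Einstein relation.

```python
import numpy as np

KB = 1.380649e-23      # Boltzmann constant, J/K

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation of a sampled trace I(t)."""
    i = np.asarray(intensity, dtype=float)
    mean_sq = i.mean() ** 2
    return np.array([np.mean(i[:len(i) - k] * i[k:]) / mean_sq
                     for k in range(1, max_lag + 1)])

def hydrodynamic_radius(D, T=298.15, eta=0.89e-3):
    """R_H (m) from diffusion coefficient D (m^2/s); eta ~ water at 25 C."""
    return KB * T / (6.0 * np.pi * eta * D)

# An illustrative D of ~8.8e-13 m^2/s lands near the ~280 nm population:
print(f"R_H = {hydrodynamic_radius(8.77e-13) * 1e9:.0f} nm")
```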
Quantification of the Enzyme Fraction in Commercial
BG. An estimation of the protein content in the commercial BG was realized through UV analysis, following the procedure first reported by Goldfarb in the 1950s. 37 Briefly, a 1 mg/mL BG buffer solution was loaded in a 1 cm path length quartz cuvette and subjected to UV−vis spectroscopy, recording in the 240−320 nm range. The enzyme concentration was then calculated following eq 1, derived from the Lambert−Beer law, M = A/(ε·l), where M (mol·L −1 ) is the protein molar concentration, A is the absorbance at 280 nm, l (cm) is the optical path length, and ε is the BG molar absorptivity (L·mol −1 ·cm −1 ). The presence of other protein fractions within the commercial powder was investigated through sodium dodecyl sulfate−polyacrylamide gel electrophoresis (SDS-PAGE). 2.6. Evaluation of the Yield of Immobilization. The yield of immobilization (YI) was determined through thermogravimetric analysis (TGA). Ten milligrams of each dried sample was ground and loaded into platinum pans to be thermally treated from 30°C to 1000°C under an air atmosphere, with a heating rate of 10°C/min. The decay in the initial weight of each sample was monitored. The enzyme weight fraction contained in the BG/WSNs samples was calculated as the weight loss between 200°C and the final temperature over the initial weight, in percentage, minus the organic weight fraction of the bare support. YI was then evaluated as the percentage ratio between the loaded enzyme and the amount of protein initially dissolved in the adsorption mixture. The activity yield of immobilization YI E was calculated by the formula YI E = (E i /E c ) × 100, where E c represents the contacted enzyme activity and E i the activity expressed by the immobilized enzyme. 38 2.7. Conformational Analysis of Immobilized BG. Circular dichroism (CD) was carried out to analyze the structural stability of the supported BG enzyme as well as the evolution of its conformation. For CD analysis, 300 μL of each BG/WSNs suspension was withdrawn from the reactor, poured into a 0.1 cm path length cuvette, and analyzed using a Jasco J-710 spectropolarimeter equipped with a Peltier thermostatic cell holder (model PTC-348WI). CD spectra were recorded in the 195−250 nm range, with a resolution of 0.5 nm, at both room temperature (25°C) and reaction temperature (50°C). Thermal denaturation curves were obtained by heating the samples from 25°C to 90°C, with a heating rate of 1°C/min, following the CD signal at the fixed wavelength of 222 nm. A Nexus spectrometer equipped with a DTGS (deuterated triglycine sulfate) KBr detector was used to perform FTIR experiments. All the BG/WSNs samples were dried, ground, and pressed into pellets (13 mm in diameter). FTIR spectra were recorded in the 4000−400 cm −1 range, choosing a spectral resolution of 2 cm −1 and 32 scans for each acquisition. The KBr spectrum was chosen as the background. The occurrence of any modifications in the protein secondary structure was assessed by Gaussian deconvolution of the amide I band, performed by means of GRAMS 32 software. The number of Gaussian components and their initial positions were determined by the second-derivative spectrum.
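Eq 1 amounts to a one-line calculation. The sketch below applies it with placeholder values (the molar absorptivity and absorbance are invented, and the ~65 kDa monomer mass is taken from the SDS-PAGE band discussed in the Results); it is not the calibration used in this work.

```python
def protein_molarity(A280, eps_M_cm, path_cm=1.0):
    """Molar protein concentration from eq 1: M = A / (eps * l)."""
    return A280 / (eps_M_cm * path_cm)

eps_BG = 1.0e5                         # L mol^-1 cm^-1, illustrative only
M = protein_molarity(A280=0.35, eps_M_cm=eps_BG)
mg_per_ml = M * 65000                  # ~65 kDa monomer (SDS-PAGE band)
print(f"{M:.2e} mol/L -> {mg_per_ml:.2f} mg/mL protein in a 1 mg/mL powder")
```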
Catalytic Assays.
For the hydrolysis of cellobiose to glucose, a cellobiose solution in citric acid/sodium citrate buffer (pH = 5, 21 mM) was added to an equal volume of a BG/WSNs suspension in the same medium to have final concentrations of cellobiose and BG fixed to 1.5 and 0.15 mg/mL, respectively. The system was kept under mild stirring at 50°C for 24 h. The supernatant with the final obtained product was separated from the supported BG/WSNs biocatalyst by centrifugation (11 500 rpm, 10 min) and then was kept in an oven (100°C, 10 min) to thermally inactivate traces of the free enzyme which might have leaked from the support. Finally, the concentration of produced glucose was assessed through the D-glucose oxidase− peroxidase method. 39 In detail, 300 μL of the collected supernatant was diluted to 1:10 v/v with bidistilled water, mixed into 600 μL of glucose-measuring reagent, and kept in a thermostatically controlled water bath at 37°C. After 30 min, the reaction was stopped by adding 600 μL of sulfuric acid (12 N), and 1.5 mL of the final solution was poured into a 1 cm path length quartz cuvette and subjected to absorbance measurement at 540 nm using a Shimadzu UV-2600i spectrophotometer (Shimadzu, Milan, Italy). The glucose concentration was estimated on the basis of a calibration curve. The results were expressed in terms of yield of cellobiose conversion, defined as the concentration (mg/mL) ratio between obtained glucose and initially loaded cellobiose, in percentage. Similarly, the product obtained after 10 min of reaction was also analyzed to determine the specific activity of the supported biocatalysts, expressed in U/mg of enzyme. Units (U) indicate the micromoles of glucose produced per minute by a certain amount of enzyme. Experiments were repeated in triplicate.
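The two figures of merit defined above, percentage cellobiose conversion and specific activity in U/mg of enzyme, reduce to simple arithmetic. The sketch below reproduces them with illustrative inputs chosen to land near the reported ~35% conversion at 10 min and ~8 U/mg; the helper functions and the 24% protein-content correction are our assumptions about the bookkeeping, not the authors' code.

```python
GLUCOSE_MW = 180.16   # g/mol

def conversion_pct(glucose_mg_ml, cellobiose0_mg_ml):
    return 100.0 * glucose_mg_ml / cellobiose0_mg_ml

def specific_activity(glucose_mg_ml, volume_ml, minutes, enzyme_mg):
    """U/mg: micromoles of glucose produced per minute per mg of enzyme."""
    umol = glucose_mg_ml * volume_ml / GLUCOSE_MW * 1000.0
    return umol / (minutes * enzyme_mg)

print(f"conversion = {conversion_pct(0.525, 1.5):.0f}%")
# per mL of reaction: 0.15 mg/mL commercial BG, ~24% actual protein
print(f"SA = {specific_activity(0.525, 1.0, 10, 0.15 * 0.24):.1f} U/mg")
```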
2.9. Operational and Thermal Stability. Reusability assays were carried out for BG/WSNs_2h and BG/WSNs_24h systems. The biocatalysts were tested in consecutive reaction cycles of 24 h. After each cycle, the produced glucose was evaluated as previously described. The biocatalysts were collected by centrifugation and washed twice with bidistilled water before each reaction cycle. The results were expressed in terms of glucose production over the reuse cycles. The occurrence of leakage phenomena affecting the performance of the supported biocatalysts in the consecutive reuses was assessed by TGA measurements. The experimental conditions were the same as those used to evaluate the yield of immobilization. More specifically, the enzyme weight fraction was estimated before and after the reuse cycle associated with a remarkable loss in terms of glucose production.
Both the supported biocatalysts and free BG underwent thermal stability assessment. Briefly, the samples were dispersed (dissolved, in the case of the free enzyme) in citrate buffer, incubated for 1 h at a set temperature (60°C, 70°C, or 80°C), and then used to perform cellobiose hydrolysis for 24 h at 50°C. The cellobiose conversion obtained without subjecting the samples to thermal stress was chosen as the reference to evaluate the residual cellobiose conversion (%).
Colloidal Behavior and Morphology of Immobilized BG/WSN. DLS analysis was performed on both naked
and BG-loaded WSNs to investigate the colloidal behavior of the systems in an aqueous environment as a function of the enzyme:nanoparticles ratio and of the immobilization time. First, a suspension of bare WSNs was analyzed as the reference sample. As reported in Figure 1A, the hydrodynamic radius distribution shows a polydisperse system with two populations: the first is centered at about 290 nm, while the second is centered at about 2500 nm. This representation emphasizes the presence of large aggregates. Converting the intensity-weighted profile into a number-weighted profile gives an indication of the relative concentration of the different species in the WSNs suspension. This second representation clearly indicates that the most abundant population is centered at a hydrodynamic radius of about 280 nm. Figure 1B reports TEM micrographs of bare WSNs. The nanoparticles exhibit spherical profiles, with silica fibers spreading radially from the center to the outer surface. The mesoporous structure is made of conical pore channels, with the pore size increasing moving outward, as confirmed by the remarkable decrease in contrast with respect to the inner portion of the nanoparticles, where the silica skeleton gets thicker. Moreover, this micrograph confirms the presence of silica nanoparticles with sizes ranging from 450 to 550 nm in diameter, whereas micrometric aggregates are not detected. Therefore, the 2500 nm population detected through DLS analysis can be univocally attributed to clusters of WSNs, confirming that the naked nanostructures tend to aggregate in aqueous solution. As described in the Experimental Section, different BG:WSNs weight ratios, equal to 1:2, 1:4, 1:6, and 1:10, were considered. In all cases, the immobilization time of 24 h was considered first, in line with the previously investigated system. 26 The total protein content of the commercial BG powder had been evaluated before starting the adsorption protocol, and the estimated value was 24 wt % (see Supporting Information, Figure S1). This estimate was supported by SDS-PAGE analysis performed for the 1:6 BG:WSNs ratio (Figure S2). Indeed, the gel images proved the absence of proteins other than BG in the commercial product. As a matter of fact, the profiles of both the offered and the immobilized protein (Figures S2a and S2c) exhibit only one band, centered at a molecular weight of about 65 kDa, corresponding to the monomeric form of BG. In fact, SDS-PAGE, as known, does not allow detecting oligomeric forms of proteins because of the strong denaturing effect of SDS. 40 No band is detected in the profile of the supernatant (Figure S2b), suggesting almost complete immobilization of the protein.
Therefore, only one-quarter of the commercial product is actually made of protein. Figure S3 displays the autocorrelation functions versus time for BG/WSNs_24h at the considered weight ratios. Although self-aggregation and precipitation of larger aggregates occur in all samples, some differences can be observed as a function of the enzyme:nanoparticles weight ratio. Indeed, by comparing the autocorrelation functions shown in Figure S3, a slightly better situation is observed for the 1:4 and 1:6 ratios, for which the curves tend to reach a plateau condition over time, suggesting that they represent the best conditions for promoting greater control of the physical immobilization of the enzyme onto WSNs. On the other hand, the correlation function of the 1:10 w/w sample starts to decay at slightly longer τ than the other two systems and does not reach a plateau at g 2 (τ) = 1, indicating the presence of larger particles, such as big clusters. This could be related to the presence of a very small fraction of WSNs covered with the BG enzyme and, therefore, the prevalence of naked WSNs, which show a greater tendency to self-aggregate and precipitate. Consequently, according to the DLS evidence and considering the opportunity to use as little BG as possible to make the final biocatalyst, only the system designed by fixing BG:WSNs at 1:6 wt/wt was further investigated.
Four immobilization times (15 min, 2 h, 6 h, and 24 h) were monitored by DLS to study the time evolution of the system during the adsorption process. Considering only the intensity-weighted profiles for both WSNs and BG/WSNs samples after 15 min of immobilization (Figure 2), the curves exhibit a population bigger than 2000 nm, but the most significant result is the presence of another population, centered below 500 nm, which is bigger than the corresponding one for bare WSNs. This would suggest that BG is already adsorbed onto WSNs after the first 15 min without gaining colloidal stability. Unfortunately, owing to the rapid evolution of the system, also related to the self-aggregation process occurring over time, it is not possible to make a precise estimation of the size of BG/WSNs at the different immobilization times. However, a comparison of the correlation functions can be made. As shown in Figure 3, no significant differences are observed between the different systems: a slightly better condition can be associated with the BG/WSNs_2h sample, which appears more similar to the BG/WSNs_15 min one, while those prepared at longer immobilization times look almost equivalent. Finally, the colloidal stability of the system could become increasingly worse with time due to aggregation phenomena triggered by the adsorbed enzyme.
The changes in the morphology of the supported biocatalysts occurring during adsorption were investigated through TEM analysis (Figure 4). Figures 4A and 4B show lower and higher magnifications of bare WSNs, respectively. As previously noted, the pronounced contrast difference between the core and the border of the nanostructure is due to the extended presence of radial pore channels. Micrographs of BG/WSN_15 min (Figures 4C and 4D) exhibit a decrease in this contrast difference. In particular, a thin enzyme layer seems to be adsorbed onto the outer surface of the nanoparticles, while the pores are expected to be only partially filled (Figure 4D). Moving onward to 2 h of immobilization, a wide enzyme corona surrounding clusters of nanoparticles becomes visible (Figures 4E and 4F). Indeed, 2 h is enough for a consistent amount of protein to be adsorbed externally and start diffusing inward. Protein adsorption could trigger aggregation phenomena, because the enzyme appears organized in extended aggregates enveloping clusters of a few nanoparticles (Figure 4E). Furthermore, the surfaces of neighboring nanoparticles are bound to each other by enzyme bridges (Figure 4F). Complete pore filling seems to be accomplished after 24 h. Indeed, the whole profile of the nanoparticles exhibits a homogeneously dark contrast, suggesting that the protein is completely hosted by the mesopore channels (Figure 4G). Moreover, the wide enzyme aggregates visible in the BG/WSNs_2h samples (Figures 4E and 4F) disappear, resulting in the absence of a proper protein corona layer of noticeable thickness (Figure 4H).
A quantitative analysis of TEM images was performed by the Histogram function of the software National Instrument Vision assistant. The Histogram function counts the total number of pixels in each of the 256 grayscale levels (zero is black). These intensity profiles were taken along a horizontal line passing through the center of the particle. The results are shown in Figure 5. As can be seen, the first maximum, which represents the darkest region of the particle, moves toward smaller pixel values and increases in intensity as the contact time between the enzyme and the support increases. The second maximum, which represents the clearest part, moves significantly toward smaller pixel values (maximum at 120 for WSNs, at 70 for BG/ WSNs_15 min and BG/WSNs_2h) and almost disappears for BG/WSNs_24h, meaning the entire porous structure of the silica skeleton is gradually filled by the protein during the immobilization process.
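For readers wishing to reproduce this kind of line-profile analysis, the sketch below extracts the grayscale values along a horizontal line through the particle center of a 2-D image array and histograms them over the 256 levels. The synthetic "micrograph" (dark core, brighter rim) is a stand-in for a real TEM image loaded, e.g., with imageio; the helper function is ours, not the Vision Assistant workflow.

```python
import numpy as np

def center_line_histogram(img):
    """Grayscale values along the horizontal center line, plus their
    256-level histogram (0 = black)."""
    row = img[img.shape[0] // 2, :]
    hist = np.bincount(row.astype(np.uint8), minlength=256)
    return row, hist

# Synthetic stand-in: dark core (~level 40) inside a brighter rim (~120)
img = np.full((256, 256), 120, dtype=np.uint8)
yy, xx = np.ogrid[:256, :256]
img[(yy - 128) ** 2 + (xx - 128) ** 2 < 80 ** 2] = 40

row, hist = center_line_histogram(img)
print("dominant gray levels:", np.argsort(hist)[-2:])   # expect ~40 and ~120
```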
ζ-Potential measurements assessed that the increasing colloidal instability of the BG/WSNs nanosystems over time was due to consistent changes in the surface charge of the WSNs, and allowed the mechanism of interaction between enzyme and support to be unveiled. Figure 6 shows the evolution of the ζ-potential during the immobilization stage. Bare WSNs exhibit a ζ-potential of −7.31 mV. This is an expected result, because the isoelectric point (pI) of sol−gel silica lies within a 2−3 pH interval, 41−44 below the pH = 5 of the citrate buffer used for the immobilization. As the adsorption process goes on, the ζ-potential rises with time from −5.35 mV, recorded at 15 min, up to −1.57 mV, registered after 24 h. This visible trend might be evidence of protein binding onto the silica surface, because commercial BG is positively charged at pH = 5 (pI = 7.3 45 ). Previous works relied on changes in ζ-potential values to monitor protein adsorption kinetics at the interface. 46−48 Therefore, the YI for BG is expected to follow the same trend as the ζ-potential, that is, the higher the amount of adsorbed protein, the higher the increase in surface potential. In our first work dealing with the physical immobilization of BG onto WSNs, we detected the presence of hydrogen bonding between the enzyme and the silica surface. 26 The results described herein underline that electrostatic forces also give a strong contribution to the protein−silica interaction, because the surface charge appears intimately correlated with the enzyme loading.
Moreover, the time-dependent aggregation and precipitation phenomena detected through DLS analysis can be rationalized on the same grounds: as the adsorbed protein progressively neutralizes the WSN surface charge, the electrostatic repulsion between particles weakens, favoring cluster formation and precipitation.
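The conversion from electrophoretic mobility to ζ-potential via the Smoluchowski model mentioned in the methods is a single formula, ζ = ημ/(ε r ε 0 ). The sketch below evaluates it for water at 25 °C with an illustrative mobility chosen to land near the −7.3 mV measured for bare WSNs; the mobility value itself is invented.

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_R_WATER = 78.5        # relative permittivity of water at 25 C
ETA_WATER = 0.89e-3       # viscosity of water at 25 C, Pa s

def zeta_smoluchowski(mobility, eps_r=EPS_R_WATER, eta=ETA_WATER):
    """Zeta potential (V) from electrophoretic mobility (m^2 V^-1 s^-1)."""
    return eta * mobility / (eps_r * EPS0)

mu = -5.7e-9              # m^2/(V s), illustrative
print(f"zeta = {zeta_smoluchowski(mu) * 1e3:.2f} mV")   # ~ -7.3 mV
```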
TGA Analysis for the Estimation of the Yield of Immobilization.
YI of BG/WSNs was estimated through TGA measurements carried out after 2 and 24 h of immobilization. These systems were chosen because they were the only ones to load consistent amounts of enzyme. Figure 7 reports thermograms for bare WSNs as well as for BG immobilized in 2 and 24 h. WSNs experience a weight loss of 6.8% in the 200−800°C temperature range, while the values recorded for BG/WSN_2h and BG/WSN_24h are 10.5% and 18.5%, respectively. Thus, YI for the supported biocatalysts reaches 23% in 2 h and 80% in 24 h, corresponding to 38 and 133 mg/g of support, respectively. These results confirm that the dilution of both enzyme and support, as well as the choice of a lower BG:WSN w/w, resulted in the optimization of the immobilization route. In fact, the enzyme loading achieved in 24 h was comparable to that of the reference system, namely the biocatalyst similarly produced by Califano et al. 26 using a BG:WSNs w/w ratio of 1:2 (133 mg/g vs 150 mg/g), whereas YI was more than doubled, rising from 30% to 80%. The feasibility of using TGA analysis for protein content determination was previously assessed: the BG:WSN system was tested for protein content with both TGA 26 and the Bradford method, 31 giving exactly the same result of 150 mg/g. Such an enhancement in YI was not unexpected. Indeed, it was observed that adsorption of cellulolytic enzymes into WSNs follows a Langmuir mechanism, 27 which prescribes enzyme monolayer adsorption. According to such a mechanism, the amount of immobilized protein rises with the concentration of enzyme in solution until a plateau is reached, when all the binding sites of the support are saturated. Therefore, low enzyme concentrations lead to high YI values, because YI follows an exponential decay function.
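The TGA bookkeeping described above can be summarized in a few lines. In the sketch below, the weight-loss percentages mirror the reported values, but the normalization convention (loading per gram of residual support) is our assumption, which is why the printed numbers come out close to, but not exactly at, the 133 mg/g and 80% quoted in the text.

```python
def enzyme_loading_mg_per_g(loss_sample_pct, loss_support_pct):
    """mg of enzyme per g of support from the high-temperature weight losses,
    treating the TGA residue as the silica support."""
    enzyme_frac = (loss_sample_pct - loss_support_pct) / 100.0
    support_frac = 1.0 - loss_sample_pct / 100.0
    return 1000.0 * enzyme_frac / support_frac

def yield_of_immobilization(loaded_mg_per_g, support_mg, offered_enzyme_mg):
    return 100.0 * loaded_mg_per_g * support_mg / 1000.0 / offered_enzyme_mg

load_24h = enzyme_loading_mg_per_g(18.5, 6.8)          # ~140 mg/g
yi_24h = yield_of_immobilization(load_24h, 3.0, 0.5)   # 0.5 mg BG offered
print(f"loading ~ {load_24h:.0f} mg/g, YI ~ {yi_24h:.0f}%")
```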
Conformational Analysis of Immobilized BG.
To analyze the effect on the enzyme conformation of BG immobilization on WSNs at different times, CD spectra of free BG and of the BG/WSNs systems after adsorption for 2 h (BG/WSN_2h) and 24 h (BG/WSN_24h) were recorded, as shown in Figure 8.
The spectrum of the free enzyme showed two minima centered at 215 and 222 nm, suggesting the presence of comparable amounts of β-sheet and α-helix components. 49−52 The spectra of the BG/WSNs systems are similar but slightly different from that of the free protein. Indeed, the two minima are better resolved and fall at about 210 and 220 nm. These spectral features may suggest a slightly higher presence of α-helices with respect to β-sheets. However, the comparison between the spectra highlights that the enzyme does not unfold and retains its secondary structure when adsorbed on the nanosilica skeleton, after both 2 and 24 h. The enzyme in its free form experienced a two-step denaturation phenomenon. In detail, the first step is likely due to rearrangements of the quaternary structure, whereas the second is due to the loss of secondary structure, with a melting temperature of 74°C. In fact, it was found that β-glucosidase from almonds exists in two isoforms, monomeric and dimeric, with the dimeric form performing much better than the monomeric one. 53 Thermal denaturation curves of the immobilized samples (Figure 9A) did not exhibit remarkable signs of denaturation up to 90°C. More specifically, the thermal curve of BG/WSN_24h remains flat, indicating that no structural change occurs. Differently, the slight slope exhibited by the BG/WSN_2h thermal profile reveals a partial structural modification. Such distinct thermal behaviors could be attributed to the different protein organizations and densities on the silica skeleton. Indeed, the protein is mostly externally adsorbed over the surface of the nanostructure after 2 h of immobilization and thus free to undergo modifications of quaternary and tertiary structure. On the contrary, the enzyme is better shielded when hosted inside the pores, as in BG/WSNs_24h, because the pore wall−protein physical interaction ensures conformational rigidity, resulting in a greater improvement of thermal stability than in BG/WSNs_2h. 31 Thermal stabilization is particularly important for multimeric enzymes (dimeric in our case), where dissociation of the subunits can produce inactivation. 54 It was argued that, for β-glucosidase, inactivation may start by subunit dissociation. 55 In our case, immobilization seems to stabilize the quaternary structure of the enzyme. Stabilization of multimeric enzymes by physical adsorption was observed where multipoint enzyme−support interactions exist, 54 due to the presence of several interacting groups on the support surface (i.e., OH for hydrogen bonding and O − for electrostatic interactions).
The anchoring into the pores of WSNs dramatically improved the thermal stability of the enzyme. The benefits brought by the physical immobilization to the thermal resistance of the enzyme clearly emerge from the comparison between CD spectra of free BG and the most stable supported biocatalyst namely BG/WSNs_24h acquired before and after subjecting the sample to a denaturation test ( Figure 9B, 9C). The free protein experienced a remarkable change in the 200− 225 nm range and a very strong decrease in CD intensity, thus confirming that it is mostly unfolded. 56 Differently, immobilized BG exhibited only slight variations in the spectrum profile, confirming the enhanced rigidity of the protein chains provided by the physical immobilization.
The deconvolution of the amide I band carried out by FTIR spectroscopy of BG/WSNs_24h (Figure S4) confirmed that the enzyme underwent only a little structural modification upon adsorption onto WSNs. As reported in Table 1, the obtained structural pattern underlines that the optimized system shows more similarities with the original structure of BG than that observed for the enzyme immobilized at the highest WSNs/BG weight ratio, prepared previously and acting as the reference system. 26 Indeed, the percentage of α-helices (30.8%) was higher and closer to the one exhibited by BG in its free form (34%), just like the difference between the percentage amounts of α-helices and β-sheets, 57 confirming what was observed through CD measurements. Moreover, the non-negligible value for aggregate portions could be a consequence of protein rearrangement upon adsorption onto the nanostructure, or might occur during the drying process necessary to analyze the samples by FTIR.

Figure 9. Thermal denaturation curves for free BG (black line), BG/WSNs_2h (red line), and BG/WSNs_24h (blue line) (A). Comparison between CD spectra of free BG (B) and BG/WSNs_24h (C) acquired before (dashed curve) and after (solid curve) a thermal denaturation ramp.
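As a concrete illustration of the amide I deconvolution step (section 2.7), the sketch below fits a sum of Gaussians to a synthetic 1600−1700 cm −1 band with scipy, seeding peak positions as the second-derivative analysis would. The band positions used (~1635 cm −1 for β-sheet, ~1655 cm −1 for α-helix) are conventional assignments, and the data are synthetic, so this is a sketch of the idea rather than the GRAMS 32 workflow itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):                     # p = [A1, c1, w1, A2, c2, w2, ...]
    y = np.zeros_like(x)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-((x - c) / w) ** 2)
    return y

x = np.linspace(1600, 1700, 400)
# Synthetic amide I band: beta-sheet-like (~1635) + alpha-helix-like (~1655)
y = gaussians(x, 1.0, 1635, 10, 0.8, 1655, 9) + 0.01 * np.random.randn(x.size)

p0 = [1, 1635, 8, 1, 1655, 8]             # seeds, e.g. from 2nd derivative
popt, _ = curve_fit(gaussians, x, y, p0=p0)
areas = [a * w * np.sqrt(np.pi) for a, w in zip(popt[0::3], popt[2::3])]
print([f"{100 * a / sum(areas):.0f}%" for a in areas])  # relative band areas
```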
3.4. Catalytic Assays. BG/WSNs_2h and BG/WSNs_24h were both assayed in the hydrolysis of cellobiose to glucose using the same amount of immobilized enzyme. Table 2 shows the immobilization parameters and activity for the supported biocatalysts, compared to the reference system and to soluble BG.
As can be seen, all the immobilized biocatalysts show hyperactivation, possibly due to an increased concentration of the substrate near the active site. 26 However, for the reference biocatalyst, it has been shown that the situation levels off over a longer period: there is a decrease in the reaction rate after 60 min for BG_WSN with respect to free BG, probably due to the accumulation of glucose inside the matrix. 26 Figure 10 shows the histogram reporting the cellobiose conversion achieved by the two biocatalysts in 10 min and 24 h of reaction. Both biocatalysts allowed about 35% cellobiose conversion after 10 min (Figure 10A). The specific activities were 7.77 and 8.22 U/mg BG for BG/WSNs_2h and BG/WSNs_24h, respectively (calculated by dividing the activity values by the weight of the actual BG contained in the commercial product). Moreover, both systems pushed cellobiose conversion up to 100% in 24 h (Figure 10B). Catalytic assays thus highlight that these biocatalysts perform similarly to the biocatalyst chosen as the reference (activity ∼8.44 U/mg BG , 100% cellobiose conversion in 24 h), produced by adsorption of BG into WSNs for 24 h, fixing enzyme and support concentrations to 1 and 2 mg/mL, respectively. 26 The obtained results confirm what was assessed by CD analysis, namely that the enzyme conformation is unaffected or even improved by physical immobilization, leading to performing biocatalysts after both 2 and 24 h of adsorption. These achievements mean that this modified adsorption route leads to biocatalysts that retain conformation and improved activity, while using only a third of the enzyme previously needed in the immobilization step with respect to the reference system designed by Califano et al. 26 In the end, 24 h is confirmed as the optimal immobilization time. Indeed, it allows the highest YI (80%), resulting in a consistent enzyme saving. Moreover, enzyme location within the pores is responsible for the largest improvement in protein thermal stability, as assessed by CD analysis (Figure 9A). The 2 h adsorption leads to a transient state that is not in equilibrium: it was shown that after 24 h the protein corona disappears and the enzyme is mainly located inside the pores. Furthermore, it was found that in porous nanoparticles the proteins of the corona can undergo intraparticle migration inside the pores during storage. 25 Therefore, such a catalyst is likely to change over time in an uncontrollable way.
3.5. Operational and Thermal Stability. The arrangement and organization of the protein over the porous architecture of the silica nanoparticles in the different biocatalysts affect the operational stability, owing to conformational variations or leakage phenomena. As a matter of fact, the BG/WSNs_24h and BG/WSNs_2h systems exert different performances in terms of reusability, as shown in Figure 11. The BG/WSNs_24h biocatalyst exhibits total reusability up to the third cycle, losing only 20% of conversion at the fourth. Afterward, the performance of the biocatalyst drops to about 20% and 15% conversion at the fifth and sixth cycles, respectively. The operational stability had already been tested for the reference system: there was no loss of activity after three repeated uses; at the fourth, the yield reduced to 80%, and to 40% with the fifth reuse. 26 A comparable trend is reported for BG/WSNs_2h. However, it keeps complete conversion only for two cycles, losing 40% conversion at the third one. In a way similar to BG/WSNs_24h, after the third cycle it experiences a fall in conversion until losing it all at the sixth reuse cycle. The higher operational stability of BG/WSNs_24h can be attributed to the penetration of BG into the pores of the nanostructure. This maximizes the protein−matrix interaction, reducing the risk of both conformational modifications and leakage phenomena. On the contrary, BG sits mostly on the outer surface of BG/WSNs_2h, being exposed to the release of the external protein layers as the reusability tests go on. 58 Indeed, TGA measurements proved that the BG/WSNs_2h sample loses almost 88% of the original enzyme load after four reaction cycles, whereas only 20% of the protein is released from the pore structure of BG/WSNs_24h. Figure 12 shows the results of the thermal stability experiments. The bar plot highlights that the immobilized enzyme recovers higher cellobiose hydrolytic activity than its free counterpart upon incubation at temperatures >60°C, regardless of the immobilization time. More specifically, free BG is completely inactivated after incubation at 70°C. On the other hand, the enzyme immobilized for longer times (24 h) is slightly more stable than the enzyme immobilized for shorter times (2 h) when both are incubated at 70°C. This result confirms the lower stability of the soluble enzyme compared to the immobilized one. Both supported biocatalysts experienced complete inactivation after incubation at 80°C.

(Notes to Table 2: a The yield of immobilization expressed in terms of activities (YI E ) was measured as the percentage ratio between the activity of the immobilized protein and the activity of the offered protein in the immobilization step. b Specific activity (SA) is defined as the recorded activity per mass of BG. c Recovered activity (RA) is defined as the recorded activity per mass of support.)
CONCLUSIONS
This work is focused on the study of the self-aggregation processes associated with the physical immobilization of BG into WSN with the aim to better control the protein−support interactions and their evolution as a function of time and enzyme concentration. Indeed, this behavior has been poorly studied, and many aspects related to the enzyme immobilization appear unclear.
In this work, the optimal adsorption conditions in terms of colloidal stability and yield of immobilization (YI) were found. Specifically, a BG:WSNs ratio equal to 1:6 wt/wt leads to the highest controllability of the system, as indicated by DLS analysis. Under these conditions, the formation of a protein corona is observed at 2 h, with a 23% YI, as demonstrated by TEM and TGA analyses, respectively. However, the enzyme corona disappears after 24 h as the protein diffuses inward to reach the inner edge of the pores, achieving 80% YI. At the same time, the enzyme conformation was only slightly affected by physical immobilization, as confirmed by FTIR and CD measurements. Indeed, a large gain in thermal stability of the supported enzyme was observed after both 2 and 24 h of immobilization. More specifically, BG/WSNs_24h preserves almost complete folding even at 90°C, owing to the interactions established between the pore walls and the protein once the enzyme is located inside the pores. Both BG/WSNs_2h and BG/WSNs_24h show complete conversion of cellobiose to glucose after 24 h of reaction at the same enzyme concentration, proving the success of the adsorption protocol in preserving the native enzyme secondary structure. The markedly high YI reached after 24 h points to BG/WSNs_24h as the best biocatalyst obtained, exerting performances comparable to those of the previously prepared BG:WSNs system with a 1:2 wt/wt ratio and about 10-fold the support concentration. 26 However, this favorable biocatalytic activity is strongly associated with the enhanced controllability achieved when the BG:WSNs wt/wt ratio is set to 1:6 and the protein amount is lowered by one order of magnitude, also guaranteeing a noticeable enzyme saving. Moreover, the new adsorption protocol results in a biocatalyst that is fully reusable up to the fourth reaction cycle.
In summary, the proposed study underlines the key role of a fine-tuning of immobilization processes, in terms of both time and enzyme content onto inorganic supports, to improve colloidal stability and to prevent fast self-aggregation processes as decisive strategies to enhance the enzyme loading and reduce protein waste without undermining the biocatalytic performances.
■ ASSOCIATED CONTENT
"Materials Science",
"Engineering",
"Chemistry",
"Environmental Science"
] |
Design and Optimization of Molecularly Imprinted Polymer Targeting Epinephrine Molecule: A Theoretical Approach
Molecularly imprinted polymers (MIPs) are a growing highlight in polymer chemistry. They are chemically and thermally stable, may be used in a variety of environments, and fulfill a wide range of applications. Computer-aided studies of MIPs often involve the use of computational techniques to design, analyze, and optimize the production of MIPs. Limited information is available on the computational study of interactions between the epinephrine (EPI) MIP and its target molecule. A rational design for EPI-MIP preparation was performed in this study. First, density functional theory (DFT) and molecular dynamic (MD) simulation were used for the screening of functional monomers suitable for the design of MIPs of EPI in the presence of a crosslinker and a solvent environment. Among the tested functional monomers, acrylic acid (AA) was the most appropriate monomer for EPI-MIP formulation. The trends observed for five out of six DFT functionals assessed confirmed AA as the suitable monomer. The theoretical optimal molar ratio was 1:4 EPI:AA in the presence of ethylene glycol dimethacrylate (EGDMA) and acetonitrile. The effect of temperature was analyzed at this ratio of EPI:AA on mean square displacement, X-ray diffraction, density distribution, specific volume, radius of gyration, and equilibrium energies. The stability observed for all these parameters is much better, ranging from 338 to 353 K. This temperature may determine the processing and operating temperature range of EPI-MIP development using AA as a functional monomer. For cost-effectiveness and to reduce time used to prepare MIPs in the laboratory, these results could serve as a useful template for designing and developing EPI-MIPs.
Introduction
Analytes are frequently analyzed in various fields and settings, such as health clinics, environmental monitoring, warfighter protection, and industrial factories [1].Sensing platforms consist of two components: first, the recognition element that binds and responds to the presence of analytes, and second, the transducer that converts the interactions resulting from the binding of the recognition element with the target into analytical signals [2][3][4].The development of biomimetic or synthetic receptors with selectivity and specificity resembling the biological receptor has become an alternative and an area of intensive contemporary interest because of several disadvantages in biological receptors, including Polymers 2024, 16, 2341 2 of 21 their fragile nature, the need for specific operating conditions, such as ionic strength, pH values, and temperature, and the limited life span of these receptors [5].
MIPs are synthetic materials aimed at selectively identifying and binding particular molecules.They are produced via polymerizing functional monomers and cross-linkers in the presence of a template molecule, generating specific binding sites of the chosen template within the polymer matrix.This process typically involves the following steps: (i) Selection of a suitable template/target molecule for analysis (i.e., a drug or biomarker).(ii) Selection of a functional monomer and cross-linker containing functional groups that can participate in intermolecular interactions with the selected template molecule.(iii) Polymerization of the monomer and cross-linker in the presence of the template molecule.This can be done through methods such as bulk polymerization [6], precipitation polymerization [7], or emulsion polymerization [8].(iv) Removal of the template from the polymer matrix to yield an MIP with binding sites complementary in shape, size, and chemical functionality to the template molecule.The properties of the MIP cavities allow selective binding of the template molecule, even in complex, multi-analyte samples [9,10].MIPs have found applications in diverse fields, including drug delivery [11,12], molecular sensing [13], chromatography [14,15], and biomimetic catalysts [16][17][18].They offer a versatile and costeffective alternative to natural recognition elements like antibodies and enzymes, making them valuable tools in molecular recognition and separation processes [19].
EPI, also known as adrenaline, is a hormone and neurotransmitter that plays a crucial role in the body's "fight or flight" response to stress or danger [20].It is produced by the adrenal glands and released into the bloodstream in response to various stimuli.EPI can act on different receptors throughout the body to produce a variety of physiological responses, such as increasing heart rate, blood pressure, and blood flow to the muscles, lungs, and brain, expanding airways, and releasing accumulated energy in the form of glucose [21].EPI is widely used in medicine because it can counteract severe allergic reactions (anaphylaxis) and treat life-threatening conditions such as cardiac arrest and severe asthma attacks [20].EPI-MIPs have potential applications in drug delivery, chemical sensing, and separation sciences [22].They also have potential to be used to develop selective sensors for detecting and quantifying EPI in samples [23], as well as for controlled drug release systems where the MIP is loaded with EPI and released in a controlled manner based on binding interactions.Overall, EPI-MIPs may offer a promising approach for targeted recognition and delivery of EPI molecules.The present study aims to design an MIP that possesses high affinity and binding capacity for targeting EPI molecules.Based on this, a functional monomer able to give a very strong complex with a target molecule needs to be chosen.There are frequently used monomers that are either neutral or charged but able to form non-covalent interactions with EPI as the template [24].Aniline (ANI) is a good monomer for EPI-MIP despite its polymerization condition in an acidic medium to form polyaniline [22].First, EPI is relatively stable in acidic conditions [25], which favors its structure and functional groups during the polymerization process.Furthermore, an acidic medium enhances a strong hydrogen bond and electrostatic interactions between the functional groups on EPI and the functional groups in the polymer matrix, creating highaffinity binding sites in the MIP [26].In addition, the aromatic structure present in ANI can interact through π-π interactions with the aromatic rings present in EPI, leading to a more specific binding site [27].Acrylic acid (AA) is another monomer compatible with EPI-MIPs due to its free-radical polymerization adaptable to different conditions [28].AA can dissolve in a variety of solvents due to its polar nature and ability to form hydrogen bonds, creating an optimal environment for polymerization.Such solvents include acetonitrile, methanol, chloroform, and water, whereas toluene is partially compatible [29,30].Other functional monomers, such as 4-vinyl pyridine (4VP), glycidyl methacrylate (GMA), methylacrylic acid (MAA), and 2-hydroxyethyl methacrylate (HEMA), are also compatible with EPI due to the specific functional groups present in them, which can interact with EPI [26].Examples include the pyridine ring, epoxy group, methacrylate group, carboxy group, and Polymers 2024, 16, 2341 3 of 21 hydroxy group.The presence of these functional groups enhances the binding affinity for EPI, making the MIP more effective at recognizing and binding the target molecule [26].
The design and optimization of MIPs may be assisted through the use of computational techniques.Molecular modeling, such as molecular docking, molecular dynamics simulations, and quantum mechanics calculations, can be used to study the interactions between MIPs and their target molecules.These methods can help elucidate the binding mechanisms, affinity, and selectivity of MIPs towards a specific target molecule.Further, in silico virtual screening can be employed to identify and prioritize potential monomers and cross-linkers used for MIP synthesis [31].These methods can help select monomers with favorable intermolecular interactions towards the desired target molecule.Polymer optimization is another computer-aided design tool that can assist in optimizing the polymerization process and predicting the properties of MIPs.This includes assessing the influence of monomer:template ratios, cross-linker concentration, and reaction conditions on MIP performance [32].Computational tools also provide insight into the morphology and structural properties of MIPs.The above-mentioned computer-aided techniques can provide valuable insight into the design of MIPs and help researchers with the optimization process.
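One common virtual-screening criterion of the kind alluded to above is the complexation energy ΔE = E(template−monomer) − E(template) − E(monomer), with more negative values indicating stronger binding. The sketch below ranks candidate monomers on that criterion; all energies are invented placeholders in arbitrary units, so the printed ordering is illustrative only and not a result of this study.

```python
def complexation_energy(e_complex, e_template, e_monomer):
    """dE = E(complex) - E(template) - E(monomer); more negative = stronger."""
    return e_complex - e_template - e_monomer

E_EPI = -100.0                # placeholder template energy (arbitrary units)
candidates = {                # monomer: (E_monomer, E_complex), all invented
    "AA":  (-50.0, -151.2),
    "MAA": (-55.0, -156.0),
    "4VP": (-60.0, -160.7),
}

ranked = sorted(candidates.items(),
                key=lambda kv: complexation_energy(kv[1][1], E_EPI, kv[1][0]))
for name, (e_m, e_c) in ranked:
    print(f"{name}: dE = {complexation_energy(e_c, E_EPI, e_m):+.1f}")
```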
Experimental investigations of EPI-MIPs are available in the literature [22,33,34], but detailed computational studies remain limited [24]. In the present investigation, a DFT method was used to establish key structural parameters of EPI and the functional monomers before the MD simulations. Six DFT functionals, namely, Becke 3-parameter Lee-Yang-Parr (B3LYP) [35], Becke 3-parameter Perdew-Wang 91 (B3PW91) [36], Coulomb-attenuating method B3LYP (CAM-B3LYP) [37], Local Spin Density Approximation (LSDA) [38], Modified Perdew-Wang 1-parameter with Perdew-Wang 91 (MPW1PW91) [39], and ωB97X-D [40], and three basis sets, namely, 6-31g [41], 6-311g(d,p) [42], and DGTZVP [43], were assessed to validate the DFT method used. Each functional and basis set has specific features that make it suitable for different types of calculations. B3LYP is a hybrid functional that combines Hartree-Fock exchange with density functional approximations [35,44-46]. B3PW91 uses the Perdew-Wang 91 correlation functional instead of Lee-Yang-Parr [36,47]. CAM-B3LYP is a long-range corrected version of B3LYP that adjusts the exchange-correlation function to better handle charge-transfer excitations [37]. LSDA approximates the exchange-correlation energy from the local electron density [38]. MPW1PW91 is a hybrid functional combining Perdew-Wang 91 correlation with a modified exchange functional [39,47]. ωB97X-D is a range-separated hybrid functional with dispersion corrections [40]. The 6-31g set is a split-valence basis set that uses a minimal description for core electrons and a split description for valence electrons [41]. The 6-311g(d,p) set is a triple-split valence basis set with polarization functions [42]. DGTZVP (DGauss triple-zeta valence plus polarization) is a high-quality basis set that provides triple-zeta valence coverage with polarization functions [43,48]. B3LYP and MPW1PW91 are widely used for a variety of systems; CAM-B3LYP and ωB97X-D are tailored for specific interactions such as long-range charge transfer and dispersion; and LSDA is useful for solid-state systems and bulk materials where the electron density is relatively uniform. Basis sets like 6-31g offer a good balance for preliminary studies, while more comprehensive sets like 6-311g(d,p) and DGTZVP are used for detailed and accurate calculations. Based on the DFT results, the best functional monomer and an appropriate solvent were predicted for designing MIPs with EPI as the template molecule. To identify the most likely interaction sites of the template-monomer complexes, the frontier molecular orbitals (FMOs) and molecular electrostatic potential (MEP) of the molecules were examined. MD was then employed to further investigate the compatibility of template-monomer-cross-linker-solvent combinations in EPI-MIPs. First, the Blends module was used to analyze parameters such as the Flory-Huggins interaction parameter. Next, amorphous cells were constructed containing the template (EPI), the monomers (six functional monomers), a cross-linker (ethylene glycol dimethacrylate, EGDMA), and a porogenic solvent. Solubility parameters and thermodynamic equilibrium energies were also analyzed. Understanding and controlling temperature conditions is important for optimizing the performance and effectiveness of MIPs; the effect of temperature at the chosen EPI/AA ratio was therefore investigated through the mean square displacement, simulated X-ray diffraction, density distribution, specific volume, radius of gyration, and equilibrium energies. These analyses were carried out to establish the suitability of the functional monomer for the template in EPI-MIP receptor development. From the computational results, a conclusion was drawn on the most suitable monomer and the appropriate ratio.
Geometry Optimization
Information about EPI, ANI, AA, 4VP, GMA, MAA, HEMA, and EGDMA was retrieved from the PubChem database. The chemical structures of the reacting species were geometry-optimized with the commonly used DFT functionals B3LYP, B3PW91, CAM-B3LYP, LSDA, MPW1PW91, and ωB97X-D and the 6-31g, 6-311g(d,p), and DGTZVP basis sets using Gaussian 16 [49]. After geometry optimization, the best-performing DFT method was used to obtain the optimal configurations of the functional monomers and the template and to determine the binding sites of potential complexes from the MEP. Zero imaginary frequencies were obtained for each optimized system, confirming true minima. The best-performing basis set, combined with each of the selected DFT functionals, was then used to optimize the complexes formed between the functional monomers and the template; comparing the different functionals against this basis set served to gauge the performance of the DFT approach employed in this study. DFT enables the prediction of binding sites and of the strongest interactions of functional monomers with templates for MIP development [50], and it facilitates the choice of functional monomers and suitable solvents in designing MIPs [51].
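Jobs of this kind are driven by plain-text Gaussian input files. The sketch below writes such an input for an opt+freq job at the level of theory used here; the function, file names, and the placeholder water geometry are illustrative assumptions (the actual EPI coordinates, taken from PubChem, are not reproduced in this text).

```python
def write_g16_input(name, atoms, charge=0, multiplicity=1,
                    method="B3LYP", basis="6-31g"):
    """Write a Gaussian 16 input (.gjf) for an opt+freq job.

    `atoms` is a list of (symbol, x, y, z) tuples, e.g. converted from
    a PubChem record. Checking that the `freq` step reports zero
    imaginary frequencies confirms the optimized structure is a minimum.
    """
    lines = [f"%chk={name}.chk",
             f"# opt freq {method}/{basis}",
             "",
             f"{name} geometry optimization",
             "",
             f"{charge} {multiplicity}"]
    lines += [f" {s:<2s} {x:12.6f} {y:12.6f} {z:12.6f}" for s, x, y, z in atoms]
    lines.append("")  # Gaussian inputs must end with a blank line
    with open(f"{name}.gjf", "w") as fh:
        fh.write("\n".join(lines))

# Placeholder geometry (water), purely to show the format:
write_g16_input("example", [("O", 0.000000, 0.000000, 0.117300),
                            ("H", 0.000000, 0.757200, -0.469200),
                            ("H", 0.000000, -0.757200, -0.469200)])
```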
Energy Calculations
The binding energies of the conformationally optimized template-functional monomer complexes were estimated according to Equation (9).
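Equation (9) itself is not reproduced in this excerpt; for complexes of this kind, the binding (interaction) energy is conventionally defined as the energy of the complex minus the energies of its isolated parts, so Equation (9) most likely takes the form

$$\Delta E_{\text{binding}} = E_{\text{complex}} - \left(E_{\text{template}} + E_{\text{monomer}}\right)$$

with all three energies obtained at the same level of theory; the more negative the result, the stronger the template-monomer interaction.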
Solvent Selection
Different solvation models are used in computational chemistry to simulate the effect of a solvent on solute molecules; the choice of model depends on the required balance between accuracy and computational cost and on the nature of the solvent and solute involved. The Conductor-like Polarizable Continuum Model (CPCM) approximates the solvent as a conductor and modifies the electrostatic potential calculation accordingly [54]. It provides a good balance between computational efficiency and an accurate representation of the solvent's polarizability. Coupled with its reasonable computational cost, CPCM handles a wide variety of solvents, from non-polar to highly polar, making it suitable for studies that compare different solvent environments [55,56]. The Integral Equation Formalism Polarizable Continuum Model (IEFPCM) solves the Poisson-Boltzmann equation to account for the solvent's dielectric response [57]. Although IEFPCM provides accurate solvation energies, its implementation complexity can lead to longer computational times, particularly for large systems, making it more demanding than CPCM [55,56]. The Conductor-like Screening Model (COSMO) treats the solvent as a dielectric medium that interacts with the solute's charge distribution [58]; it is efficient, straightforward, and generally flexible in handling solvent environments with varying dielectric properties [55,56]. The Solvation Model based on Density (SMD) combines continuum solvation with explicit consideration of specific solute-solvent interactions using DFT [59]. It accounts for both electrostatic and non-electrostatic interactions, which makes it more computationally intensive, and it requires parameterization for each specific solvent, which may not always be available [55,56]. Finally, the Surface and Charge Interacting Polarizable Continuum Model (SCI-PCM) improves on the standard PCM by explicitly incorporating both surface and charge-distribution interactions [60]; compared with CPCM, it has higher computational complexity and cost and requires extensive parameterization and setup [55]. To select the most suitable solvent, the complexes with the lowest binding energies were further examined in five different solvents, namely, acetonitrile, methanol, toluene, chloroform, and water. These analyses used the CPCM model because of its ease of implementation, making it an ideal choice for comparative solvent studies and ensuring reliable and efficient analysis. The binding energies including the solvent effect were calculated using Equation (10).
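Equation (10) is likewise not shown here; on the standard definition, the solvent-phase analogue simply replaces each gas-phase energy with its CPCM single-point counterpart evaluated in the solvent of interest:

$$\Delta E_{\text{binding}}^{\text{solv}} = E_{\text{complex}}^{\text{solv}} - \left(E_{\text{template}}^{\text{solv}} + E_{\text{monomer}}^{\text{solv}}\right)$$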
Molecular Modeling and Simulation
All-atom MD simulations can unveil the structural characteristics exhibited by a molecule. To model the various EPI-MIP systems and analyze their specific intermolecular interactions, molecular simulations were conducted following previously described methods [32,61,62] using Materials Studio 2020. Geometry optimization used the Smart algorithm with 5 × 10² steps, an energy convergence of 8.37 × 10⁻⁵ kJ/mol, a force convergence of 4.18 kJ/(mol·Å), and a displacement convergence of 1.0 × 10⁻⁵ Å, with an initial density of 1400 kg/m³. The COMPASS III force field was used throughout the simulations, with a van der Waals cut-off radius of 12 Å. After the initial geometry optimization of the reacting species, the Flory-Huggins approach implemented in the Blends module was employed to obtain the mixing properties. Simulation cells were then built with the Amorphous Cell module, varying the template:functional monomer composition (EPI/monomer = 1:1 to 1:9) with EGDMA as the cross-linker and acetonitrile as the solvent. EGDMA was chosen because it is the most widely used cross-linker for MIPs [63]. The cells subsequently underwent MD simulation in the Forcite module with a Berendsen thermostat, first under the NVT canonical ensemble across a temperature span of 298-500 K with 5 ramps per cycle, and then under the NPT isobaric-isothermal ensemble for 5.0 ns at 298 K and 100 kPa. After the MD simulations, the solubility parameters and material properties such as the thermodynamic equilibrium energies, mean square displacement, simulated X-ray diffraction, density distribution, and specific volume of the systems were examined. The methods are summarized in Scheme 1.
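The solubility parameter analysis reported later rests on a simple relation, δ = √CED. A minimal sketch follows (not the Materials Studio implementation, whose internals are not shown in the text); the numbers in the usage line are purely hypothetical.

```python
import math

def solubility_parameter(e_coh_kj_mol, molar_volume_cm3):
    """Hildebrand solubility parameter, delta = sqrt(CED).

    CED is the cohesive energy density, i.e. cohesive energy per unit
    volume. With E_coh in kJ/mol and V in cm^3/mol, CED comes out in
    J/cm^3, so delta is returned in the (J cm^-3)^(1/2) units used in
    the text.
    """
    ced = (e_coh_kj_mol * 1000.0) / molar_volume_cm3  # J/cm^3
    return math.sqrt(ced)

# Illustrative (hypothetical) inputs only:
print(solubility_parameter(35.0, 100.0))  # ~18.7 (J cm^-3)^(1/2)
```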
Scheme 1. A general workflow of the simulation methods adopted.
Assessment of Different Functionals and Basis Sets in DFT Calculations
Table 1 presents the single-point energies together with the computational time taken for the optimization of the functional monomers and the template. Six functionals and three basis sets were explored. The same trends were observed for the single-point energy and the computation time across the six functionals and the basis sets for all reacting species. The lowest single-point energies (the more negative, the better the result) were obtained with the LSDA functional for all three basis sets and all reacting species. There is no appreciable difference among the B3LYP, B3PW91, CAM-B3LYP, MPW1PW91, and ωB97X-D functionals across the basis sets, but B3LYP still performed best. The general performance of the functionals can be ordered as B3LYP > MPW1PW91 > ωB97X-D > B3PW91 > CAM-B3LYP > LSDA, and that of the basis sets as 6-311g(d,p) > DGTZVP > 6-31g. In terms of computational time, 6-31g is the most cost-effective, and since the single-point energies show no appreciable difference among the basis sets, 6-31g may be preferred. Based on these observations, B3LYP/6-31g was selected for the further analyses in this study.
HOMO and LUMO Analysis and Dipole Moment
Quantum molecular descriptors are useful tools for relating the properties of molecules to their structures. The FMOs, namely the HOMO and LUMO, can be employed to predict the reactivity of a compound. Figure 1 depicts the HOMO and LUMO densities, with the associated energy gap (HOMO-LUMO), for the optimized geometries of EPI and the functional monomers extracted from the DFT calculations at the B3LYP/6-31g level. The FMOs were used to predict the stability of the molecules through the energy gap (E), hardness (η), softness (σ), chemical potential (µ), stabilization energy (∆E), and electrophilicity index (ω). The HOMO energy (EHOMO) of EPI is −0.298 eV, higher than that of all the monomers examined (4VP (−0.351 eV), AA (−0.362 eV), ANI (−0.308 eV), GMA (−0.339 eV), HEMA (−0.356 eV), and MAA (−0.361 eV)). Typically, the higher the EHOMO, the more readily a molecule donates electrons. Moreover, the LUMO energy (ELUMO) of EPI is −0.149 eV, also higher than that of all the other monomers under study (4VP (−0.203 eV), AA (−0.191 eV), ANI (−0.166 eV), GMA (−0.182 eV), HEMA (−0.182 eV), and MAA (−0.184 eV)). These values indicate that EPI possesses higher reactivity and may be classified as the main electron donor, while the monomers act mainly as electron acceptors. The gap energies follow the order MAA > HEMA > AA > GMA > EPI > 4VP > ANI. The ∆E value reflects the reactivity associated with charge transfer from the HOMO to the LUMO: the lower the ∆E, the more reactive and less stable the molecule.
From Table 2, EPI and ANI are more reactive and less stable than the other monomers. The µ values of all compounds were negative (Table 2), indicating stable systems that cannot spontaneously decompose into their constituent parts. The η value can be regarded as a measure of resistance to deviations in the distribution of electrons in a system; the observed η order, MAA > HEMA > AA > GMA > EPI > 4VP > ANI, mirrors the trend in ∆E (Table 2). Because σ is inversely proportional to η, the monomers with smaller energy gaps are not only softer but also more reactive (Table 2). The electrophilicity index ω describes the tendency of a molecule to acquire additional electronic charge from its environment, and thus provides information about stability and electron transfer in a system. The smaller ω values observed for the 4VP, GMA, HEMA, MAA, and AA monomers indicate that these molecules are electrophilic (Table 2). The capacity of the functional monomers for electron uptake is expressed in terms of ∆Nmax, which ranged between 3.080 (MAA) and 3.754 (4VP); this implies that, of all the monomers, 4VP would form the best-quality polymer matrix with the template [64]. A molecule's dipole moment may also influence the selection of suitable monomers for a given template. The charge distribution within a molecule distorts its electron cloud, and the ease of this distortion is referred to as the polarizability of the molecule or atom; such distortion can confer a dipole moment even on otherwise non-polar molecules or atoms.
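The descriptors in Table 2 follow from the frontier orbital energies through standard conceptual-DFT relations. A small sketch, assuming the usual Koopmans-type definitions, which are consistent with the trends quoted here (e.g., the EPI values below give ∆Nmax ≈ 3.0, in line with the 3.080-3.754 range reported for the monomers):

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Global reactivity descriptors from frontier orbital energies.

    Standard conceptual-DFT definitions (Koopmans-type approximation):
      gap              E     = E_LUMO - E_HOMO
      chem. potential  mu    = (E_HOMO + E_LUMO) / 2
      hardness         eta   = (E_LUMO - E_HOMO) / 2
      softness         sigma = 1 / eta
      electrophilicity omega = mu**2 / (2 * eta)
      dN_max = -mu / eta  (maximum electronic charge uptake)
    """
    gap = e_lumo - e_homo
    mu = 0.5 * (e_homo + e_lumo)
    eta = 0.5 * gap
    return {"gap": gap, "mu": mu, "eta": eta, "sigma": 1.0 / eta,
            "omega": mu ** 2 / (2.0 * eta), "dN_max": -mu / eta}

# EPI frontier energies as quoted in the text (units as given there):
print(reactivity_descriptors(-0.298, -0.149))
```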
Charge Distributions
The reactivity behavior of the molecules involved can be determined by calculating the atomic charges of a system. Mulliken population analysis was used to compute the atomic charges on each structure after the DFT calculation at the B3LYP/6-31g level. Positive charges indicate increased electron donation to the external surface, while negative charges indicate increased electron donation to the internal surface [64]. Figure 2a shows that the most negatively charged atoms are N1, C3, C4, C7, and C8 for 4VP; O1, O2, C3, and C5 for AA; N1 for ANI; O1-O3 and C10 for GMA; O1-O3 and C9 for HEMA; and O1, O2, and C6 for MAA. The most positively charged atoms are C2, C5, C6, H9, H10, and H13-H15 for 4VP; C4 and H9 for AA; C2, H13, and H14 for ANI; C6 and C7 for GMA; C5, C7, and H19 for HEMA; and C5 and H12 for MAA. The most charged atoms in EPI are O1-O3, N4, C8, C9, and C11 (negatively charged) and H19, H21, H25, H26, C5, C7, C10, and C12 (positively charged). Negatively charged atoms favor electrophilic attack, while positively charged atoms favor nucleophilic attack. The MEP was used to identify regions of electron density: in Figure 2b, red regions indicate high electron density and blue regions low electron density. The active sites were analyzed, and the template-monomer complexes were constructed based on spatial conformation, the charge distribution of the atoms, and the monomer composition; the strongly red or blue regions were considered for all participating atoms (Figure 2b). 4VP has the proton acceptor N1, and AA has the proton donor H9 and the proton acceptor O2 (Figure 2b). ANI has the proton donors H13 and H14 and proton acceptors at the benzene ring (Figure 2b). GMA (O1 and O3), MAA (O2), and HEMA (O2 and O3) also display proton acceptors (Figure 2b). Comparing the charge distributions of EPI with those of the investigated monomers, EPI contains more active sites; its proton donors are H26 and H21, and its proton acceptors are O1 and O3 (Figure 2b).
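Mulliken charges of the kind plotted in Figure 2a can be pulled from Gaussian output with a parser such as cclib. This is an assumed post-processing route (the text does not say which tool the authors used), and the log-file name is hypothetical:

```python
from cclib.io import ccread  # cclib parses Gaussian log files, among others

# Hypothetical file name; any Gaussian 16 log from the optimizations
# described above, with population analysis printed, would work.
data = ccread("EPI_b3lyp_631g.log")

mulliken = data.atomcharges["mulliken"]  # one charge per atom
for i, (z, q) in enumerate(zip(data.atomnos, mulliken), start=1):
    site = "electrophilic" if q < 0 else "nucleophilic"
    print(f"atom {i:2d} (Z={z:2d}): {q:+.3f} -> favors {site} attack")
```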
Interaction between EPI and the Functional Monomers
EPI contains multiple interaction sites that may form hydrogen bonds, so the interaction energies and hydrogen bonds at these sites must be determined. To gain insight into the regions with the most effective interactions, the optimized geometries, interaction energies, bond types, and bond distances for all EPI-monomer complexes formed in the gas phase at the B3LYP/6-31g level are presented in Figure 3 and Table 3. Notably, the binding energies differ among the sites. The most favorable position is that with the lowest formation energy, where the reaction occurs most easily; accordingly, for functional monomers with more than one binding site, the site with the lowest energy (highest binding strength) was selected for further study. The most stable complexes, with the highest binding energies, are presented in Figure 3 together with the charge distribution of their atoms. Charge distribution is an essential factor in imprinted-polymer selectivity, and Figure 3 shows that the charge distribution of EPI is altered in the presence of the functional monomers, indicating interactions with the monomer. Because EPI, 4VP, and ANI all contain aromatic rings, one might expect a better interaction between EPI as the template and these two monomers than with the rest of the monomers considered, but this is not the case. AA has the highest binding affinity for the template (Table 3) because it acts both as an acceptor and as a donor of electrons, as indicated by the MEP surfaces (Figure 2). A high predicted binding energy between a template and a particular monomer denotes their suitability for preparing an MIP. The EPI-monomer complexes showed binding energies in the order (EPI-AA-v1) > (EPI-MAA-v1) > (EPI-4VP) > (EPI-GMA-v2) > (EPI-HEMA-v1) > (EPI-ANI) (Table 3). It can therefore be concluded that EPI interacts most strongly with AA, while the interaction with ANI is least favored. Additionally, each complex has a greater dipole moment than its isolated constituents (Tables 2 and 3) owing to the formation of a more polarized structure [65]. The increased dipole moment of a complex implies increased solubility in polar solvents, which is advantageous for MIP production, and a higher dipole moment corresponds to a dominant electrostatic interaction between the template and the respective monomer.
Table 3 additionally provides a comprehensive analysis of the variables involved in the formation of the hydrogen-bond networks. All the hydrogen-bond lengths fall within the range of 1.67916-1.97055 Å (Table 3), between the covalent O-H bond length and the sum of the van der Waals radii, consistent with genuine hydrogen bonds. All the hydrogen-bonding energy values for all complexes were negative, implying that hydrogen bonding between EPI and all the monomers is thermodynamically favorable [66]. The shortest hydrogen-bond lengths of the imprinted-molecule complexes were 1.69419, 1.77215, 1.78758, 1.67638, 1.75361, 1.75195, 1.67916, and 1.75606 Å for the complex versions EPI-4VP, EPI-AA-v1, EPI-AA-v2, EPI-GMA-v1, EPI-GMA-v2, EPI-HEMA-v1, EPI-MAA-v1, and EPI-MAA-v2, respectively (Table 3). Based on the strength of the hydrogen bonds formed (Table 3), the most significant hydrogen bonds occur in the EPI-AA complexes, and the lowest binding energy (best interactions) is observed for the complex constructed from EPI-AA-v1. To corroborate this observation for EPI-AA, the 6-31g basis set together with the six functionals listed in Table 1 was used to optimize the complexes formed between the functional monomers and the template; the results are displayed in Figure 4a. The same trends are observed for all the complexes across all six functionals except ωB97X-D, for which a disparity is seen in the energies calculated for EPI-GMA and EPI-ANI, which do not follow the trends of the other five functionals. A similar observation was reported in the estimation of electrophilicity and nucleophilicity scales of some organic compounds, where errors were found for three of them using the ωB97X-D functional [67]. Overall, the order of binding energy for the complexes follows EPI-AA > EPI-MAA > EPI-4VP > EPI-GMA > EPI-HEMA > EPI-ANI, and the performance of the functionals follows LSDA > ωB97X-D > CAM-B3LYP > MPW1PW91 > B3LYP > B3PW91. Notably, the LSDA functional, which performed poorly in the optimization of the isolated reacting species (Table 1), performs better in the energy calculations for the optimized complexes (Figure 4a).
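As a quick consistency check, the shortest hydrogen-bond distances quoted above can be collected and ranked programmatically; the values below are the ones reported in Table 3, and the dictionary is just an illustrative container for them.

```python
# Shortest hydrogen-bond lengths (angstroms) quoted above from Table 3.
shortest_hbond = {
    "EPI-4VP": 1.69419, "EPI-AA-v1": 1.77215, "EPI-AA-v2": 1.78758,
    "EPI-GMA-v1": 1.67638, "EPI-GMA-v2": 1.75361, "EPI-HEMA-v1": 1.75195,
    "EPI-MAA-v1": 1.67916, "EPI-MAA-v2": 1.75606,
}

# All distances sit between a covalent O-H bond (~0.96 A) and the O...H
# van der Waals contact, i.e. squarely in the hydrogen-bonding regime.
for name, d in sorted(shortest_hbond.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} {d:.5f} A")
```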
Solvent Selection
The interaction energy between the template and a monomer also varies with the porogenic solvent used, so the solvent effect was considered when predicting the interactions between the EPI template and the functional monomers. The EPI-monomer complexes were evaluated in different solvents, namely acetonitrile, chloroform, methanol, toluene, and water. Figure 4b shows the binding energies in the gas phase and in the investigated solvents at the B3LYP/6-31g level. Interestingly, introducing solvents into the computations caused large variations in the binding energies (Figure 4b). While all the examined porogenic solvents are in principle appropriate for the preparation of EPI-MIPs, the stability with respect to the solvation energy followed the order acetonitrile ≈ methanol > chloroform > water > toluene (Figure 4b), indicating that the EPI complexes are least favored in water and toluene. When protic solvents (e.g., methanol) are used, hydrogen bonding with the solvent occurs and influences the interaction energy between the template and the functional monomers. Acetonitrile was therefore chosen as the porogenic solvent for the EPI-MIP; the reasons for this selection were detailed in a previous study [32]. More negative interaction energies favor higher concentrations of template-monomer complexes and strong molecular recognition, resulting in an MIP with high selectivity. Notably, the binding energy predicted for EPI-AA was superior in all the solvents, so AA was selected as the appropriate monomer for EPI-MIPs. The decreasing stability order of the complexes in acetonitrile is EPI-AA > EPI-ANI > EPI-4VP > EPI-MAA > EPI-HEMA > EPI-GMA.
Compatibility of Epinephrine with the Functional Monomers
The miscibility behavior of epinephrine with the functional monomers: the interaction energies, miscibility, and Flory-Huggins chi (χ) parameters were investigated via mixing tasks in the Blends module after equilibration of the initial geometries. The Blends module was used to examine the miscibility behavior of the binary mixtures of the functional monomers with EPI. Substituting the temperature-dependent interaction parameter χ into the Flory-Huggins expression yields the free energy of mixing. Along with χ, the mixing energy (Emix) and the free energy of the system must be analyzed; values close to zero or below indicate miscibility. Whenever Emix between a monomer and EPI is negative, the two are miscible. Evaluating the miscibility between the functional monomers and EPI to determine Emix involves assessing their interactions over a temperature range relevant to practical applications while ensuring the stability of both compounds [68]. The temperature range should reflect the conditions under which the MIP will be synthesized or used; generally, room temperature to slightly elevated temperatures is used for polymer blends [69,70]. On this basis, a temperature range between 293.15 K and 313.15 K (20 and 40 °C) was used for evaluating χ and Emix. Figure 5a,b show the miscibility behavior of the complexes. Negative χ and Emix values were observed for all the complexes except ANI (Figure 5a,b), implying that those functional monomers have good miscibility with EPI. The ANI binary mixture has positive values, indicating immiscible behavior (Figure 5b). For 4VP, GMA, HEMA, MAA, and AA, the χ and Emix values are negative, so these functional monomers' mixtures with EPI are miscible; the AA mixture showed superior miscibility in comparison to the other monomers (Figure 5). By both χ and Emix, the order of miscibility of the monomers with EPI follows AA > 4VP > GMA > MAA > HEMA > ANI (Figure 5).
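The Blends analysis is built on the Flory-Huggins expression. A minimal sketch of the free energy of mixing it evaluates (per lattice site, in units of RT), assuming the standard two-component form:

```python
import math

def fh_free_energy(phi1, chi, n1=1.0, n2=1.0):
    """Flory-Huggins free energy of mixing per lattice site, in units of RT.

    dG_mix/RT = (phi1/n1)*ln(phi1) + (phi2/n2)*ln(phi2) + chi*phi1*phi2

    phi1 is the volume fraction of component 1 (0 < phi1 < 1) and
    phi2 = 1 - phi1; n1 and n2 are degrees of polymerization (1 for
    small molecules such as EPI and the monomers). Negative values
    indicate spontaneous, miscible mixing, as discussed in the text.
    """
    phi2 = 1.0 - phi1
    return (phi1 / n1) * math.log(phi1) + (phi2 / n2) * math.log(phi2) \
        + chi * phi1 * phi2
```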
Once the Flory-Huggins interaction parameter has been determined, the free-energy change of mixing as a function of composition and temperature can be obtained. Free-energy plots at various temperatures (293, 303, and 313 K) as a function of the mole fraction of functional monomer were generated from the analysis of the system. The free energy of mixing of EPI with the monomers as a function of temperature is shown in Figure 6. The free energy of EPI with ANI is positive (endothermic), whereas the free energies of EPI with the other monomers (4VP, AA, GMA, HEMA, and MAA) are negative (exothermic). Spontaneous mixing requires a negative free-energy change: the entropy of mixing increases, and the enthalpy of mixing must be negative for mixing to occur. On this basis, the EPI-ANI mixture is immiscible over the whole temperature range, while EPI is miscible with the remaining monomers. For each individual monomer, the free energy increases as the temperature increases (Figure 6a-c), meaning that lower temperatures favor miscibility; this reflects the temperature-dependent miscibility behavior. Figure 6 also compares the free-energy trends of the different monomer complexes at the different temperatures: across the whole temperature range considered, the order of miscibility of the monomers with EPI follows AA > 4VP > GMA > MAA > HEMA > ANI, the same order observed for χ and Emix discussed previously.
Solubility parameters and membrane cell equilibrium: the purpose of studying the solubility of the different complexes formed between EPI and the monomers is to determine the optimal ratio for systems with different monomer compositions. In the development of the EPI-MIP receptors, the Flory-Huggins approach was used to approximate the solubility of EPI-monomer-EGDMA in acetonitrile. The solubility parameter (δ), defined as the square root of the cohesive energy density (CED), was studied for the various complexes, focusing on the proportion of EPI to functional monomer in the mixture. Generally, the smaller the difference in δ between two components, the greater their miscibility due to stronger interaction. The δ value was investigated at the different EPI:monomer ratios with EGDMA and acetonitrile present. The amorphous cells constructed for the EPI-monomer systems are presented in Figure 7. As shown in Figure 8a, δ ranges between 10.53 and 41.43 (J cm⁻³)¹/². The lowest δ for EPI:ANI occurs at 1:6 (39.65 (J cm⁻³)¹/²), and for EPI:4VP, EPI:AA, EPI:GMA, EPI:HEMA, and EPI:MAA at 1:7 (22.35, 10.53, 25.53, 29.78, and 19.80 (J cm⁻³)¹/², respectively), indicating an optimized template:monomer ratio for the creation of an EPI-MIP with each functional monomer examined. EPI-AA displayed a superior δ value compared with the remaining complexes, indicating that the superior binding energy of the EPI-AA complex derives from its higher molar ratio and hydrogen-bond interactions. The stability of the systems at 298.15 K after 5 ns was first established through the thermodynamic equilibrium energies, including the potential, kinetic, non-bond, and total energies, as shown in Figure S1A for the EPI-AA complex; the stable values in the plot of free-energy density (Figure S1B) also indicated that the system reached equilibrium. The quantum studies showed that AA as the functional monomer and acetonitrile as the porogenic solvent are suitable for designing an MIP for EPI, and the χ, Emix, free-energy, and δ results obtained through the Flory-Huggins approach confirmed the superiority of AA over the other functional monomers investigated. Although the δ results indicated 1:7 as the best ratio for AA, further investigation was conducted using the thermodynamic equilibrium energies at different EPI/AA ratios. Molecular dynamics trajectory files were created for the EPI-AA complexes at molar ratios from 1:1 to 1:9. As presented in Figure 8b, the thermodynamic energies, including the potential, kinetic, non-bond, and total energies, showed that EPI-AA is most stable and favorable at 1:4; this is also supported by the potential-energy components (total valence, van der Waals, total potential, and electrostatic energies) displayed in Figure 8c. Turning to the effect of temperature (the rationale and setup are described in Section 3.7), the specific volume of the EPI-AA system is relatively constant within the studied temperature range (Figure 9a). Density is inversely related to specific volume; Figure 9b shows the density of the EPI-AA system as a function of temperature, with the density at each temperature
extracted from the average density of the system; here, too, a relatively constant value is observed. Constant specific volume and density suggest that the MIP maintains its structural integrity and does not undergo significant thermal expansion or contraction within the studied temperature range. This stability is crucial for preserving the precise cavities and binding sites created during the imprinting process. The mobility of the systems was analyzed through the mean square displacement (MSD), a measure of the average displacement of molecules over time; the larger the slope of the MSD curve, the higher the mobility of the system. Figure 9c presents the MSD at various temperatures; the result is comparable to the specific-volume plot. A constant MSD indicates that the mobility of the polymer chains and the diffusion of the template molecules within the polymer matrix remain stable, suggesting that the MIP retains its dynamic properties and ensures consistent interactions between the template and the polymer matrix. The simulated X-ray diffraction intensity (I) at the different temperatures is shown in Figure 9d. Temperature changes can affect the intensity by altering molecular vibrations and interactions; the essentially constant intensity observed indicates that the MIP's microstructure is stable and homogeneous, with an excellent distribution of the particles within the system. The equilibrium energies, calculated as the final energies after the NPT dynamic simulation, are presented in Figure 9e. All the energy values are relatively constant within the temperature range, indicating that the thermodynamic state of the MIP is stable and that the interactions within the polymer matrix, and between the polymer and the template, are not significantly affected by temperature fluctuations within the studied range. The radius of gyration (Rg), the root-mean-square distance of a molecule's constituent atoms from its center of mass, was used to evaluate the compactness of the systems, as shown in Figure 9f. The Rg distribution for the backbone of the EPI-AA complex is stable across the different temperatures, with a slight decrease as the temperature changes. A constant Rg implies that the overall size and shape of the polymer coils remain unchanged; the smaller the Rg, the greater the flexibility of the polymeric material, and the small Rg observed here further confirms the stability of the complexes at the 1:4 template:monomer ratio [75]. The optimal binding conditions of the template with the monomer, and the overall stability, are often temperature-dependent and are critical for the effective design and performance of the MIP. Stability is observed for all the investigated parameters between 293 and 353 K, and is even better from 338 to 353 K. This shows that the MIP should perform reliably across this temperature range, making it suitable for applications such as sensing, separation, and catalysis, where the temperature may vary but performance is critical. The constant values observed for all the investigated parameters collectively provide insight into the structural stability, dynamic behavior, and overall performance of the MIP, supporting efficient polymerization, stability of the monomer and template, and suitable physical properties of the solvent. Within this temperature range, the imprinted cavities and their ability to rebind the template remain stable and effective.
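The MSD and Rg analyses above can be reproduced from any trajectory with a few lines of NumPy. A sketch under the assumption of unwrapped coordinates (the Materials Studio implementation itself is not shown in the text):

```python
import numpy as np

def mean_square_displacement(traj):
    """MSD(tau) averaged over atoms and time origins.

    traj: array of shape (n_frames, n_atoms, 3), unwrapped coordinates.
    A flat MSD curve, as in Figure 9c, indicates limited mobility of
    the matrix over the sampled temperatures.
    """
    n = traj.shape[0]
    msd = np.empty(n - 1)
    for tau in range(1, n):
        disp = traj[tau:] - traj[:-tau]      # displacements over lag tau
        msd[tau - 1] = np.mean(np.sum(disp ** 2, axis=-1))
    return msd

def radius_of_gyration(coords, masses):
    """Mass-weighted radius of gyration of one configuration."""
    com = np.average(coords, axis=0, weights=masses)
    r2 = np.sum((coords - com) ** 2, axis=1)
    return np.sqrt(np.average(r2, weights=masses))
```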
Considering the template-functional monomer interactions during MIP formation, the template molecule (analyte) interacts with the monomer units as they form dimers, trimers, or larger oligomers. These reactions take place in the simulation box, which contains the chosen ratio of template to functional monomer. With AA as the functional monomer and EPI as the analyte, the carboxyl group of AA can form hydrogen bonds with the hydroxyl, amine, and phenolic groups of EPI during the polymerization process. Electrostatic interactions, van der Waals forces, and hydrophobic interactions can also occur between the functional groups of EPI and the AA oligomers. Additionally, when a functional monomer such as AA is used for EPI-MIP development in the presence of a cross-linker with acetonitrile as the solvent, precipitation polymerization is expected, in which the polymer precipitates out of solution as it forms [24,76]. This polymerization process allows the formation of uniform, spherical particles with high surface area and ease of handling in applications such as chromatography and sensor technology [76].
Conclusions
In this work, we presented a computational approach for predicting suitable functional monomers for EPI-MIP development. We examined the most appropriate functional monomer, porogenic solvent, and template:monomer ratios yielding the most stable interactions between EPI (the template) and 4VP, AA, ANI, GMA, HEMA, and MAA (the monomers).
• This study provides an in-depth understanding of the intermolecular interactions between EPI and its functional counterparts, as well as theoretical guidance for designing more precise imprinting and recognition sites to increase the specificity of MIP binding. Six DFT functionals (B3LYP, B3PW91, CAM-B3LYP, LSDA, MPW1PW91, and ωB97X-D) and three basis sets (6-31g, 6-311g(d,p), and DGTZVP) were used to establish the predictions made for the functional monomers.
• The stability of the interactions between EPI and the functional monomers was determined from properties such as hydrogen bonding, interaction energy, and solvation energy. The most suitable monomer according to the DFT methods in the gas phase is AA, and this was confirmed by the same trends being observed across five of the six DFT functionals investigated. Any of the functionals B3LYP, B3PW91, CAM-B3LYP, LSDA, or MPW1PW91 can therefore be used to observe the trends in the interactions of the functional monomers with the template; for the strongest interactions, however, LSDA performed best. To find the porogenic solvent in which the most stable EPI-monomer complex forms, solvation energies were calculated at a 1:1 mole ratio in acetonitrile, chloroform, methanol, water, and toluene. In acetonitrile and methanol, the EPI-AA complexes have the lowest energy values, indicating the strongest intermolecular interactions.
• AA was confirmed as the most appropriate of the investigated functional monomers for the pre-polymerization complex and the synthesis of EPI-MIPs, based on the mixing energy, binding energy, and solubility parameters.
Figure 1. Plots displaying frontier molecular orbitals from the ground-state density surface.
Figure 2. The optimized geometry structures of the reacting species: (a) Mulliken charges and (b) molecular electrostatic potential surfaces (the proton-accepting and proton-donating sites of the molecules are marked electrostatically in red and blue, respectively).
Figure 3. The optimized structures of the complexes generated between EPI and the monomers, with Mulliken charges.
Figure 4. (a) Binding energies calculated for the complexes between the template and the monomers in the gas phase by DFT using various functionals and basis sets; (b) binding energies of the resulting complexes formed between the template and the monomers in different porogenic solvents using the B3LYP/6-31g method.
3.7. Effects of Temperature on the Material Properties and the Dynamics of the System
Designing an MIP involves considering several factors, including temperature effects during polymerization. Studying a range of temperatures helps to optimize conditions for efficient and complete polymerization. In addition, MIPs are often used in environments where the temperature can vary, so investigating temperature effects ensures that the MIP performs reliably under different conditions. Controlled temperature can also maintain the amorphous state of the MIP, preventing unwanted crystallization and ensuring uniform polymerization. As mentioned earlier, the miscibility temperature range for the monomers and the template is 20-40 °C, the common polymerization temperature range for AA is 60-200 °C [71,72], and a typical temperature range for developing MIPs involving AA or EPI is 65-80 °C [73,74]. EPI is stable at moderate temperatures, and acetonitrile boils at 82 °C. On this basis, the temperature range of 293-353 K (20-80 °C) was selected to study the effect of temperature at the 1:4 EPI:AA molar ratio. The specific volume affects the extent to which the template molecule interacts with the functional monomers during the imprinting process. The specific volume, extracted from the average volume obtained after the NPT dynamic simulation, was plotted against temperature, as presented in Figure 9a. The specific volume typically increases with temperature owing to thermal expansion, which can alter the polymer matrix and affect the imprinting sites; in this case, however, it remained essentially constant, as discussed above.
Table 1. Single-point energy (a.u.) and the time taken (s) calculated for the reacting species in the design of EPI-MIPs by DFT using different functionals and basis sets.
Table 2. Computed quantum chemical properties and the dipole moments of the EPI and the monomers under study.
Table 3. Bond distances, bond types, changes in binding energies (∆E), and dipole moments of the template-monomer complexes computed in the gas phase.
"Chemistry",
"Materials Science"
] |
Value-Added Bio-Chemicals Commodities from Catalytic Conversion of Biomass Derived Furan-Compounds
The anticipated depletion of fossil resources and the need to decrease greenhouse gas emissions have led to the investigation of alternative renewable resources as raw materials. One of the most promising options is the conversion of lignocellulosic biomass (such as forestry residues) into bioenergy, biofuels and biochemicals. Among these products, the production of intermediate biochemicals has become an important goal, since the petrochemical industry needs to find sustainable alternatives; in this way, the competitiveness of the chemical industry could be improved, as bioproducts have a large potential market. The main objective of this review is thus to describe the production processes under study (reaction conditions, types of catalysts, solvents, etc.) for some promising intermediate biochemicals, such as alcohols (1,2,6-hexanetriol, 1,6-hexanetriol and the pentanediols (1,2- and 1,5-pentanediol)), maleic anhydride and 5-alkoxymethylfurfural. These compounds can be produced from 5-hydroxymethylfurfural and/or furfural, both of which are considered among the main biomass-derived building blocks.
Introduction
World energy consumption has increased by 16% over the last 10 years (2007-2017) [1] (see Figure 1). This rise can be attributed to economic trends (the development of the global economy) and demographic changes (the growth of the population) [1-4].
Oil, coal and natural gas are currently the world's main non-renewable resources for producing energy. In 2017, the most consumed resource was oil (34%), followed by coal (28%) and natural gas (23%). Together, these non-renewable supplies account for 85% of total world energy consumption [1]. These data are summarized in Figure 2.
Concern about climate change and the depletion of fossil fuels has led to the investigation of new renewable and green energy-production systems. Lignocellulosic biomass is one of the most promising renewable sources for producing value-added chemicals because of its renewability, abundance and wide distribution in nature [2,5]. It is mainly composed of three polymers: cellulose, hemicellulose and lignin [5-7]. These components can be separated and treated independently [8]. Different pretreatments are used for this purpose: physical (reducing the particle size by mechanical force or heat), chemical (using ionic liquids or other reagents to dissociate the biomass) and biological (favoring biomass digestion by microbes or fungal enzymes) [5]. Cellulose is the largest fraction of lignocellulosic biomass, constituting 30-50% by weight [7]. It is a linear polymer of hexoses [6,9]. Glucose (a C6 sugar) can be dehydrated into 5-hydroxymethylfurfural (HMF; see Figure 3) [10,11].
Hemicellulose, a branched polymer rich in pentoses (C5 sugars), can be dehydrated into furfural (FF), although several barriers remain to be overcome by the development of inexpensive and environmentally friendly technologies to produce FF from renewable feedstock [2,6,15].
Figure 3. Reaction pathway to produce FF and HMF from lignocellulosic biomass (adapted from [16]).
Current research is focused on new catalytic systems to produce FF from hemicellulose. Both homogeneous and heterogeneous catalysts are being studied. Organic acids such as oxalic acid, levulinic acid or maleic acid, and mineral acids such as H2SO4, HCl or H3PO4, can be used as homogeneous catalysts. Heterogeneous systems incorporate solid catalysts (TiO2, ZrO2 and zeolites) and Lewis acids (CrCl3, ZnCl2 and AlCl3) [16,17]. Many investigations also focus on the use of a proper solvent. Organic solvents, such as dimethyl sulfoxide (DMSO), γ-valerolactone (GVL) and ionic liquids (i.e., liquid salts), and biphasic aqueous/organic systems, such as water/toluene or water/methyl isobutyl ketone (MIBK), have been proposed as the most efficient for achieving high FF yields [5,12]. FF has many applications; it can be used as an extracting agent in the refining of oil and diesel and as an additive in many products such as flavoring agents and insecticides. Moreover, it can be further converted into a wide range of value-added chemicals [5,7,8,12,14].
The production of HMF from hexose dehydration has been investigated for over 100 years [18]. Different catalytic systems obtaining high HMF yields have been reported. These systems use both heterogeneous and homogeneous catalysts, such as mineral and organic acids, salts and zeolites (similar to FF production) [16,19,20]. Diverse solvents have been studied for the production of HMF, including ionic liquids, organic solvents (DMSO, acetone and tetrahydrofuran (THF)) and biphasic aqueous/organic systems (e.g., water/toluene) [16,19,20].
At present, the cost of HMF (around 500-1500 USD/kg) is up to three orders of magnitude higher than that of fossil-based chemicals [21]. Therefore, the market for HMF is limited compared to the market of the final products that can be obtained from this platform molecule [11]. The chemicals derived from the hydrogenation or oxidation of HMF can be used in pharmaceuticals, polymers, resins, solvents, fungicides and biofuels [11,20].
HMF and FF have been included among the top 10 value-added chemicals derived from biomass for the production of fuels and chemicals [2,12,13]. These chemicals have been widely studied for biofuel production. Bozell et al. [13] showed that the return on investment in biofuel-only operations does not meet economic targets due to the low value of fuels. Moreover, these processes involve high hydrogen consumption [22]. Biorefineries need to integrate biofuels with high value-added biobased chemicals to simultaneously achieve their economic and energetic goals (replacement of petroleum with green and renewable raw materials) [13]. In this sense, value-added chemicals such as sugar alcohols, maleic anhydride (MAN) or alkoxymethylfurfural (AMF) compounds are interesting products that can be obtained from HMF or FF.
Recent research reported in the literature addresses the ring-opening of HMF and FF by means of hydrogenation processes to produce aliphatic (linear) polyols (sugar alcohols) [23-25] for the chemical industry [26]. Among these products, whose general formula is CnH2n+2On, the most investigated alcohols are diols and triols. These alcohols, especially diols, are useful as polymeric monomers to produce polyesters and polyurethanes, with a variety of applications such as resins, coatings, plasticizers and adhesives [23-25]. Moreover, these chemical commodities can also be used: (i) as ingredients in cosmetics and medicaments, acting as humectants, solvents and viscosity-controlling agents, (ii) as components in printing inks, disinfectants and surfactants and (iii) in the case of triols, as precursors of their diol counterparts [23,27].
Another high-value chemical is MAN, which is currently used in numerous applications in the chemical industry (resins, agrochemicals, pharmaceuticals, etc.) and has an annual consumption of more than 1600 kt [28-30]. Two principal feedstocks, benzene and n-butane, are currently used for its industrial production; these can be substituted by the two aforementioned platform molecules [31-33].
Another important reaction is the etherification of HMF to produce AMF compounds, which are mainly used as biodiesel additives. Among these biofuels, 5-ethoxymethylfurfural (EMF) is considered a promising liquid biofuel due to its high energy density [34-37]. 5,5′-Oxy-bis-methylene-2-furaldehyde (OBMF) can be used for the preparation of imine-based polymers [38] and for the production of heterocyclic ligands and hepatitis antivirals [39]. Furthermore, some AMFs can be used as precursors of other important compounds, such as 2,5-furandicarboxylic acid (FDCA) [40]. Many chemical/biochemical companies, including DuPont, Corbion and Synvina (a BASF-Avantium joint venture), have developed efficient syntheses of FDCA.
Currently, these compounds are produced from petroleum-derived chemicals through different and complex reactions [23,27,37,40]. However, as discussed in the following sections, they can be obtained from biomass-derived furan compounds, simplifying the production process and making the applications of these chemical commodities more sustainable [27,37,41].
Therefore, the main objective of this review is to identify the main catalytic processes for the production of sugar alcohols, maleic anhydride (MAN) and alkoxymethylfurfurals (AMF), analyzing and discussing the best results obtained in terms of operating conditions (temperature, pressure, solvents, reaction equipment) and catalysts, using biomass-derived compounds such as furfural and 5-hydroxymethylfurfural as raw materials.
HMF Transformation into 1,2,6-Hexanetriol (HT)
Generally, cracking and hydrogenation reactions take place on the acid and metal sites of the employed heterogeneous catalytic systems, respectively [25,43,46,47]. Therefore, the transformation of HMF into 1,2,6-hexanetriol (HT) requires bifunctional catalysts [25,47]. Yao et al. [27] studied the direct production of HT from HMF using coprecipitated Ni-CoAl mixed oxides with different metal contents, as well as Raney Ni and Raney Co type alloy catalysts. It is well known that Ni is active in hydrogenation reactions [25], and the addition of a second metal can result in the formation of an alloy with a synergetic effect, improving the catalytic activity and the selectivities to the desired products [27]. Moreover, the presence of metal oxides with acid-base properties plays a significant role in the reaction mechanism. In this sense, acid metal oxides seem to be selective toward furan ring-opening, increasing polyol formation [48]. Although all of the tested catalysts presented almost complete HMF conversion, not all were active in the furan ring-opening.
As can be observed in Table 1, only the catalysts designated 0.3Ni2.7CoAl, 0.5Ni2.5CoAl and 0.9Ni2.1CoAl were able to produce a significant amount of HT, mainly via formation of the 2,5-dihydroxymethylfuran (DHMF) intermediate, reaching the highest yield of 37.4% for the 0.5Ni2.5CoAl catalyst at 120 °C, 4 MPa of H2 and 4 h. The good performance of this catalyst was due to the appropriate Ni/Co ratio, which provided enough Ni-CoO active sites and Ni0 species to promote ring-opening and hydrogenation. Increasing the temperature and pressure had a negative influence on HT formation, resulting in degradation and polymerization reactions. However, increasing the reaction time up to 12 h improved the HT yield, reaching a value of 64.5%.
Buntara et al. [26] also worked on HT formation, using 2,5-dihydroxymethyltetrahydrofuran (DHMTHF) as the raw material. The tested catalysts were mainly supported bimetallic catalysts containing hydrogenating metals (Rh, Pd and Pt) and oxidic promoters (Re7+, Mo6+, W6+, Cr3+, Mn2+ and Sn2+). Apart from γ-Al2O3, other types of supports, such as SiO2, SiO2-Al2O3, CeO2, TiO2, carbon, Nb2O5 and sulphated ZrO2, were tested. According to the results, after the reduction treatment, the amorphous SiO2 (Fuji G6-3)-supported Rh-ReOx catalyst with a Rh/Re molar ratio of 0.5, followed by Rh-ReOx/SiO2 (HDK-T40) and Rh-ReOx/Nb2O5, showed the best activity and selectivity toward HT formation, resulting in 31% conversion and 84% selectivity at 120 °C, 8 MPa of H2 and 4 h. The same catalyst synthesized via a Rh nitrate precursor presented a lower conversion, 24%, and a similar selectivity. In line with Yao et al. [27], Buntara et al. [26] associated the good performance of the Rh-ReOx/SiO2 catalyst prepared using Rh chloride with the presence of Rh-Re alloys with large particle sizes, generated under mild reduction pretreatment conditions (120 °C, 1 MPa of H2 and 1 h). Catalysts containing Re combined with noble Pt and Pd showed negligible conversions and selectivities, probably due to the use of inadequate metal precursors or low amounts of bimetallic alloys. Chen et al. [49], like Buntara et al. [26], concluded that Ru catalysts were more selective toward HT than Pt and Pd catalysts, due to their capacity for DHMF hydrogenation and DHMTHF hydrogenolysis at total HMF conversion. On the other hand, Buntara et al. [26] established that temperature and time have a negative influence on catalyst activity and selectivity, as can be observed in Table 1. In this sense, some authors [44] also reported that increasing temperature and time improves DHMTHF conversion but reduces HT selectivity due to the formation of HT degradation products, such as diols and mono-alcohols, which are discussed in Section 2.2. When WOx is used instead of ReOx in the catalyst, its combination with Pt provided an HT selectivity above 95% at a DHMTHF conversion of around 23%, the reaction conditions being 160 °C and 5.5 MPa of H2 [41,50].
Table 1 footnotes: T, temperature; P, pressure; MeOH, methanol; BuOH, butanol; χ, conversion; S_TP, selectivity to the target product; y, yield; a, Fuji G6-3 type SiO2; b, prepared via Rh chloride precursor; c, HDK-T40 type SiO2; d, feed treated for elimination of acidic compounds; e, sum of HT, 1,2,5-HT and 1,2,5,6-hexanetetraol selectivities; n.a., not available.
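Throughout Tables 1-3, the reported quantities are linked by the standard relation y = χ·S_TP, i.e., yield equals conversion times selectivity. The short Python sketch below illustrates this bookkeeping; the near-complete conversion assumed for the 0.5Ni2.5CoAl example is an approximation of the "almost complete HMF conversion" described above, not an exact figure from Table 1.

```python
def product_yield(conversion: float, selectivity: float) -> float:
    """Yield y = chi * S_TP, with conversion and selectivity as fractions (0-1)."""
    return conversion * selectivity

# Yao et al. report ~complete HMF conversion and a 37.4% HT yield for the
# 0.5Ni2.5CoAl catalyst, implying an HT selectivity of roughly 37%:
implied_selectivity = 0.374 / 1.0               # y / chi (assumes chi ~ 1.0)
print(product_yield(1.0, implied_selectivity))  # -> 0.374
```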
Similar catalytic systems were used by Alamillo et al. [42] to produce DHMTHF from HMF, detecting the presence of HT and 1,2,5-hexanetriol (1,2,5-HT), among others. The 1,2,5-HT product seems to be mainly produced via hydrogenation of 1-hydroxyhexane-2,5-dione, which is generated by acid-catalyzed ring-opening of DHMF [48,51]. Concretely, they [42] prepared Ru catalysts supported on oxides with high (CeO2, MgO-ZrO2 and γ-Al2O3) and low (SiO2) isoelectric points, on non-oxides (Vulcan carbon), as well as unsupported Ru. These catalysts presented good activity, achieving almost total HMF conversion. However, among the mentioned catalysts, the unsupported one showed better selectivity toward HT (13%) and 1,2,5-HT (13%) at 130 °C and 3 MPa using a biphasic water/1-butanol system. The catalysts on high-isoelectric-point supports, which have mainly basic properties, provided higher DHMTHF selectivity (91% for Ru/CeO2, 88% for Ru/MgO-ZrO2 and 89% for Ru/γ-Al2O3) at the expense of hexanetriol selectivity (around 1%) for the longest reaction time (see Table 1). Hexanetriol selectivity is also influenced by acid impurities in the feed, the solvents and the active metal. The presence of levulinic acid and formic acid, derived from HMF production via monosaccharides [16], favors the degradation reactions, resulting in higher selectivity toward hexanetriols. The same phenomenon was observed when a single-phase aqueous system was used; it seems that water is responsible for the additional degradation process. These researchers [42] concluded that Pd and Pt monometallic catalysts were less selective toward hexanetriols [26], and especially toward DHMTHF. The data reported by Kataoka et al. [52] also suggest the low capacity of supported Pt catalysts to transform HMF into HT at 135 °C, 3 MPa of H2 and 24 h. However, among the tested catalysts, those on supports of a basic nature (hydrotalcite, CeO2 and MgO) were more selective toward HT production than those on acidic or inert supports. Moreover, they reported that promoting the Pt/CeO2 catalyst with CoOx improved the HT yield from 27% to 42%, the latter value being the same as that obtained with the Pt/hydrotalcite catalyst. The good performance of the supported Pt catalysts could be due to the presence of the metal-support interface. Concretely, in the case of Pt catalysts on basic supports, their better behavior is associated with monodentate alkoxide adsorption and the nature of the supports.
Diols from HMF and FF Using Metal Catalysts
Hydrogenation/hydrogenolysis reactions can also allow the direct or sequential production of hexanediols and pentanediols from HMF or FF, respectively, for which bifunctional solid catalysts are again the most investigated materials.
HMF Transformation into 1,6-Hexanediol (HD)
The production of HD can be carried out in fixed-bed [45] or batch reactors [55] using supported metallic catalysts. Xiao et al. [45] prepared supported monometallic Pd catalysts for the HMF-to-DHMTHF reaction, and bimetallic M-ReOx catalysts (M = Ir, Pd, Pt, Rh, Pd-Ir or Pd-Rh) for the HMF-to-HD reaction. These catalysts were similar to those employed in HT synthesis. The Pd-Ir-ReOx/SiO2 catalyst showed a good HD yield of 19.1% at complete HMF conversion (see Table 2). However, the formation of 1,5-HD and hexane was higher than that of HD, reaching yields of 22% and 24.6%, respectively. The Pd-Ir-ReOx/SiO2 and Ir-ReOx/SiO2 catalysts provided similar HD yields (see Table 2), but the formation of 1,5-HD and hexane decreased because these catalysts were less active in the hydrogenolysis of the HT intermediate.
On the other hand, DHMTHF was the main product for the Pd-ReOx/SiO2 catalyst (72.9%), followed by the Ir-ReOx/SiO2, Pt-ReOx/SiO2 and Ir-ReOx/Al2O3 catalysts. The HD yield was improved when the Pd/SiO2 catalyst, which shows the best performance in the HMF-to-DHMTHF reaction, and the Ir-ReOx/SiO2 catalyst were used in a double-layered fixed bed (Pd/SiO2 in the upper layer and Ir-ReOx/SiO2 in the bottom layer). In this way, the HD yield increased from 19.1% to 46.2% when Pd/SiO2 + Ir-ReOx/SiO2 was used at 100 °C and 3 MPa of H2, employing an LHSV of 6 h−1 and mixed water/THF solvents. The presence of water enhanced the HD yield due to the formation of Re-OH groups, which are precursors of Brönsted acid sites, allowing the hydrogenolysis of the C-O bond of the DHMTHF intermediate, while THF favored a stronger adsorption of the reactants on the active sites [45,56]. Increasing the H2 pressure allows the adsorption of more H2, which favors HD desorption, avoiding its degradation via hydrogenolysis. For this reason, the HD yield obtained under 7 MPa was 57.8% (see Table 2) for the Pd/SiO2 + Ir-ReOx/SiO2 catalyst. Tuteja et al. [55] used monometallic Pd catalysts and formic acid as the H2 source. The highest HD yield obtained was 42%, slightly below those obtained by Xiao et al. [45], for the 7 wt % Pd/ZrP catalyst at 140 °C and 21 h. The obtained HD yield is attributed to the low metal dispersion and the high Brönsted/Lewis acid ratio, which are necessary for hydrogenation and furan ring-opening, respectively, corroborating the conclusions established by Buntara et al. [26] and Ohyama et al. [48]. Other Pd catalysts were tested under other conditions without improvement of the HD yield (see Table 2). Pt-WOx/TiO2 and its monometallic counterparts were used to produce HD via HMF and DHMTHF [41]. The Pt-WOx/TiO2 catalyst, which contains 10 wt % of each metal, was able to reach an HD selectivity of 90% in a sequential reaction (DHMTHF conversion into HT, followed by its hydrogenolysis to produce HD). In this case, the conversions did not exceed 25%. When the process was carried out as a one-pot reaction, the obtained selectivity was around 70% (batch reactor: 160 °C and 5.5 MPa) and 60% (fixed-bed reactor: 160 °C, 3.5 MPa and WHSV = 1.3 h−1) at conversions of 100% and 23%, respectively (see Table 2). These authors [41] attributed this good behavior to hydrogen spillover, which allows the separation of Pt/TiO2 and WOx/TiO2 species in the catalyst, and to the synergistic effect between Pt and WOx, favored by the reducibility of TiO2. SiO2- and γ-Al2O3-supported Rh-ReOx catalysts [57], which were used for HT production from DHMTHF [26], were also tested in the conversion of HT into HD. For that purpose, the catalysts were previously reduced and then tested in a batch reactor at 180 °C and 8 MPa of H2, using water as the solvent. The Rh-ReOx/SiO2 catalyst alone and combined with γ-Al2O3 provided the highest HD selectivities, 73% and 76%, respectively, at conversions of around 20% in both cases. The carbon balance was mainly closed by the formation of 1,5-HD. The difference between these tests was the reaction time: 3 h for the SiO2-supported catalyst and 20 h for the catalyst combined with γ-Al2O3. When the Rh-ReOx/SiO2 catalyst was tested for 20 h, the HT conversion increased to 100%, while the HD and 1,5-HD selectivities remained constant.
The suitable surface and acidic characteristics of this catalyst [26] could be responsible for its good behavior in the one-step conversion of HT into HD.
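The fixed-bed results above are quoted in terms of space velocities (an LHSV of 6 h−1 for the double-layered bed and a WHSV of 1.3 h−1 for the Pt-WOx/TiO2 system). These are standard definitions rather than anything specific to the cited studies; a minimal sketch, with the feed and bed values chosen purely as hypothetical illustrations:

```python
def whsv(feed_mass_rate_g_per_h: float, catalyst_mass_g: float) -> float:
    """Weight hourly space velocity (h^-1): feed mass flow per unit catalyst mass."""
    return feed_mass_rate_g_per_h / catalyst_mass_g

def lhsv(feed_vol_rate_ml_per_h: float, bed_volume_ml: float) -> float:
    """Liquid hourly space velocity (h^-1): liquid feed flow per unit bed volume."""
    return feed_vol_rate_ml_per_h / bed_volume_ml

# Hypothetical values: a 6 mL/h liquid feed over a 1 mL catalyst bed gives
# LHSV = 6 h^-1; a 1.3 g/h feed over 1 g of catalyst gives WHSV = 1.3 h^-1.
print(lhsv(6.0, 1.0), whsv(1.3, 1.0))
```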
1,5-and 1,2-Pentanediol Synthesis from FF
The most common reaction pathway for the production of C5 diols from FF is shown in Figure 5. According to the literature [64], reduced Pd-Ir-ReOx/SiO2 type solid materials are good candidates for 1,5-pentanediol (1,5-PD) production via hydrogenation/hydrogenolysis of FF, following the same criteria as for the production of polyols from HMF [26,45]. Liu et al. [64] were able to obtain a 1,5-PD yield of 71.4% at total FF conversion in a one-pot, two-step temperature-controlled reaction. As reflected in Table 3, the optimal conditions were 6 MPa of H2, with 40 °C for 8 h and 100 °C for 72 h in the two heating steps, using the Pd-Ir-ReOx/SiO2 catalyst. The presence of Pd and Ir metal species interacting with ReOx catalyzed the FF hydrogenation and the tetrahydrofurfuryl alcohol (THFA) hydrogenolysis, respectively, in line with the conclusions of other authors [26,42,45]. They observed that increasing the first-step temperature from 30 to 40 °C favored the rearrangement of 1,5-PD into 1,4-PD, reducing the 1,5-PD yield [64]. On the contrary, raising the second-step temperature, in the range of 100−120 °C, increased the 1,5-PD yield, as long as the highest temperature was not exceeded, in order to avoid the over-hydrogenolysis reaction that leads to mono-alcohol formation [44] (see Table 3). Moreover, low H2 pressures and high FF concentrations provoke a low hydrogenation rate and FF polymerization, respectively. Similar results were obtained using the Rh-Ir-ReOx/SiO2 catalyst [58] under a higher pressure, 8 MPa of H2, and a higher FF:H2O ratio (3:3 by weight). The good performance of this catalyst was ascribed to the presence of the Rh-Ir alloy [25,27]. Supported Ni and Cu non-noble metal catalysts were also employed in the hydrogenation/hydrogenolysis of THFA [63] and furfuryl alcohol (FFA) [59], respectively. However, Ni catalysts provided 1,5-PD selectivities below 50% under the established conditions [59,63]. When Ni/HZSM-5 catalysts were used [63], the highest 1,5-PD selectivity achieved was 36% at 17% THFA conversion under the conditions reflected in Table 3. It seems that THFA is transformed into 1,5-PD, which then suffers hydrogenolysis and cyclization to produce tetrahydropyran. Ni/SiO2 and Ni/Al2O3 catalysts presented 1,5-PD selectivities below those obtained with Ni/HZSM-5 at different reaction conditions [59,63].
Table 3. Different catalysts used for PD production with a summary of reaction conditions and catalytic activity, using an autoclave reactor.
Table 3 indicates that Cu/Al2O3 catalysts are able to provide 1,5-PD selectivities close to those obtained with Ni catalysts, while offering higher FFA conversions and 1,2-PD selectivities at 140 °C, 6 MPa of H2 and 8 h. Concretely, the 10Cu/Al2O3 catalyst, with a high Cu dispersion and an acidic support, shows the best performance in terms of conversion (85.5%) and pentanediol selectivity (70%) [59].
When Pt supported on hydrotalcite was used, the 1,2-PD yield increased up to 73%, at the expense of 1,5-PD, for total FF conversion at 150 °C and 3 MPa of H2, using 2-propanol as the solvent [65] (see Table 3). This yield trend was reversed when lower reaction temperatures were used. The combination of hydrotalcite basic sites, which promote the formation of polar hydrogen species that selectively attack the C-O bond, and Pt, which provides hydrogenation capacity, yielded a catalyst with good activity and yield to the target products. However, when Pt/Al2O3 embedded in a metal organic framework, MIL-53(Al)-NH2, was used with NaBH4 as a hydrogen donor, a 1,5-PD yield of 75.2% was obtained at 45 °C, 0.45 MPa of H2 and 8 h using water as the solvent, without the formation of 1,2-PD [66].
The Pd/MMT catalyst, featuring Pd nanoparticles and the Brönsted-Lewis acidity necessary for hydrogenolysis [48,68], also performed well in 1,2-PD production when high temperatures (220 °C) and pressures (3.5 MPa) were used [62]. In this case, significant quantities of 1,4-PD were also formed, without the presence of 1,5-PD. Even the use of noble metal catalysts supported on OMS-2 (octahedral molecular sieve), which has basic sites, seems to be a good alternative for producing 1,2-PD from FF [67]. Concretely, the Ru/OMS-2 catalyst provided a 1,2-PD selectivity of 87% at complete FF conversion at 160 °C, 3 MPa of H2 and 8 h. The Pd/OMS-2 catalyst also performed well in terms of selectivity toward 1,2-PD (76%), although below the Ru catalyst. Catalysts without a metallic phase, such as NbPO, were active in one-pot xylose conversion and more selective toward 1,2-PD (maximum value 19.1%) than Nb2O5 catalysts in an aqueous biphasic system, because of their high Lewis acidity [68].
Acid Anhydrides: Maleic Anhydride (MAN)
In recent years, different conversion routes have been technically demonstrated for turning this petrochemical into a renewable chemical by the oxidation of different renewable platform molecules, such as 1-butanol [69], levulinic acid [70], HMF [33,71-73] and FF [13,31,74-81], using O2 (see Table 4). Moreover, contrary to the current conventional process carried out in the gas phase, MAN production can be performed in the aqueous or gas phase, depending on the reaction conditions and the reactant used. If the reaction is carried out in the aqueous phase, maleic acid is produced instead of MAN. However, both products can be efficiently converted into each other via reversible dehydration/hydration steps.
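The hydration/dehydration interconversion mentioned above corresponds to the standard stoichiometry (a textbook equilibrium, not a scheme taken from the cited works):

```latex
\mathrm{C_4H_2O_3}\ (\text{maleic anhydride}) + \mathrm{H_2O} \;\rightleftharpoons\; \mathrm{C_4H_4O_4}\ (\text{maleic acid})
```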
Pavarelli et al. [69] performed the oxidehydration of 1-butanol. Two sequential steps are involved in this reaction: (i) 1-butanol dehydration to 1-butene (catalyzed by acid sites) and (ii) the oxidation of 1-butene into MAN (catalyzed by redox sites). Thus, a bifunctional catalyst able to dehydrate 1-butanol and oxidize 1-butene is required; in this case, a vanadyl pyrophosphate catalyst was employed in a continuous-flow reactor, achieving maximum MAN yields of 39% at total conversion at 340-360 °C. Levulinic acid is another biomass-derived platform chemical that could play a central role in emerging industries as an intermediate facilitating the production of different biochemicals. Chatzidimitriou et al. [70] used levulinic acid as the raw material to produce MAN using VOx supported on SiO2 as the catalyst. They tested different reaction conditions, obtaining the best results (100% levulinic acid conversion and 71% MAN yield) at 325 °C and 3.9 min of contact time. Moreover, they describe a possible reaction pathway in which levulinic acid is oxidized into formaldehyde and succinic acid, the latter reversibly forming its anhydride and then MAN via oxidative dehydrogenation steps.
The use of HMF and FF as raw materials is the most investigated route for MAN production, since these two reactants are among the most promising biobased platform molecules so far [2,13,82]. With regard to HMF, Du et al. [33] developed several catalytic systems in the liquid phase using acetonitrile as the solvent. Using bis(acetylacetonato)oxovanadium (VO(acac)2) as the catalyst, a 52% MAN yield was achieved at 90 °C, 1 MPa and 4 h of reaction time. They also tried some other transition metal catalysts, such as copper sulphate, cobalt acetate and ferrous sulphate, but did not succeed (<5% MAN yield), concluding that vanadium species are crucial for MAN production from HMF. Lan et al. [71] tested a vanadium-substituted heteropolyacid catalyst in acetonitrile and achieved a 41.8% MAN yield after 8 h at 90 °C and 1 MPa. They also tried to elucidate the reaction mechanism by testing the oxidation of different published HMF oxidation intermediates (FDCA, 2,5-diformylfuran (DFF), 5-formyl-2-furancarboxylic acid (FFCA) and 5-hydroxymethyl-2-furancarboxylic acid (HMFCA)) under the same conditions as HMF, concluding that none of them can serve as intermediates in MAN formation. Li et al. [83] also used vanadium-based catalysts (V2O5, VOHPO4, (VO)2P2O7 and Mo9V3O8) and reported total HMF conversion with MAN + maleic acid yields of around 75−79% at 100 °C, 4 h and 1 MPa. Although they tested different solvents, acetic acid offered the best results. Moreover, these authors tested the direct conversion of fructose to MAN, achieving a 50% MAN yield under the above conditions (the fructose dehydration was catalyzed by HCl in 2-propanol). Lv et al. [72,84] used vanadium-based catalysts, achieving the best results with those immobilized onto Schiff-base-modified graphene oxide (GO), using HMF diluted in acetic acid. The achieved yield was 95.3% (sum of MAN and hydrolyzed maleic anhydride in a 1:2.5 proportion, respectively) with total HMF conversion under the conditions reflected in Table 4. Chai et al. [85] also carried out the HMF oxidation to maleic acid + MAN using a graphene oxide-supported vanadium catalyst (V-GO). They tested the reaction with different solvents (GVL, HAc and H2O), with GVL showing the best results (53.7% maleic acid + MAN yield and 97.8% HMF conversion at 90 °C, 2 MPa of O2 and 4 h of reaction time). More recently, Jia et al. [86] developed a more novel approach to produce MAN using 5-(formyloxymethyl)furfural (FMF), the formate ester of HMF, as the raw material and pure oxygen as the oxidant over α-MnO2/Cu(NO3)2 with the assistance of K2S2O8 (KPS). FMF is presented as a more stable and more hydrophobic compound than HMF, facilitating its separation from the reaction mixture. The tests were performed using a mixture of water and MeCN as the solvent at 90 °C, 5 h of reaction time and atmospheric pressure. The results showed 100% FMF conversion and a maleic acid yield of 89%. Moreover, the same catalyst was reused in three different cycles, showing almost the same results in all of them.
As in the HMF case, liquid-phase FF oxidation using O2 at high pressures (above 1 MPa [2]) has also been studied. In general, more studies are reported using this raw material than HMF, but 60% is the highest MAN yield reported in all cases. Moreover, the catalyst promoters and solvents employed make most of these processes far from technoeconomically viable, with some deactivation problems reported due to leaching phenomena when acetic acid is used as the solvent [72]. Huang et al. [80] reported MAN formation using metalloporphyrin catalysts (FeT(p-Cl)PPCl) in an aqueous/organic biphasic system, achieving a 44% MAN yield and 95% FF conversion at 90 °C, 1.2 MPa and 4 h. Guo et al. [87] achieved a 34.5% MAN yield and 50.4% FF conversion using a phosphomolybdic acid catalyst in an aqueous/organic (tetrachloroethane) biphasic system (110 °C, 2 MPa and 14 h). Lan et al. [76] reported a 54% MAN yield and 98.5% FF conversion using H5PV2Mo10O40 and Cu(CF3SO3)2 catalysts with acetic acid as the solvent at 110 °C, 14 h and 2 MPa. More recently, Soták et al. [88] reported a study in which a CaCu-phosphate catalyst showed the best catalytic performance (37.3% MAN yield) at 115 °C and 0.8 MPa using water as the solvent.
Another aqueous-phase oxidation option reported in the open literature uses H2O2 as the oxidizing agent instead of O2. López-Granados et al. have published several papers in which, using a commercial titanium silicalite catalyst (TS-1), a 70% MAN yield was reached at a 5 wt % FF concentration in water, with 5 wt % of catalyst, an H2O2/FF molar ratio of 7.5, 50 °C and 24 h of residence time [74]. Recently, some other authors have also employed H2O2 in the production of MAN [77-79,91,92]. However, in spite of the promising catalytic activity offered by H2O2, its use could be restricted for economic reasons [93].
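For orientation, the H2O2/FF molar ratio of 7.5 quoted for the TS-1 system translates into oxidant masses as sketched below; the 5 g furfural charge is a hypothetical example, not a value taken from [74].

```python
M_FF = 96.08    # molar mass of furfural, C5H4O2 (g/mol)
M_H2O2 = 34.01  # molar mass of hydrogen peroxide (g/mol)

def h2o2_mass(ff_mass_g: float, molar_ratio: float = 7.5) -> float:
    """Mass of H2O2 required for a furfural charge at a fixed H2O2/FF molar ratio."""
    return ff_mass_g / M_FF * molar_ratio * M_H2O2

print(f"{h2o2_mass(5.0):.1f} g of H2O2 per 5 g of FF")  # ~13.3 g
```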
Finally, the oxidation of FF in the gas phase has also been reported using different vanadium oxide-based catalysts (V oxide, V-Mo and V-Bi mixed oxides); however, most of the references date from the first half of the 20th century [31,89,94-96]. More recently, López-Granados et al. reported promising results [31,89] using vanadium oxide supported on alumina. Thus, at 300 °C, using 1 vol % FF in air and an O2/FF molar ratio of 20, an initial yield well above 70% was obtained. However, deactivation of the catalyst is unavoidable due to the deposition of resins on the catalyst surface. More recently, Santander et al. [90] studied the effect of different supports (SiO2, γ-Al2O3, ZrO2 and TiO2) on the oxidation of FF to MAN using vanadia catalysts. They concluded that SiO2 and γ-Al2O3 offer the highest maleic acid yields (50%), and that when the oxidation potential of the reaction feed was decreased (lower O2/FF ratios), comparable maleic acid yields were obtained with the V2O5/SiO2 and V2O5/γ-Al2O3 catalysts. Despite achieving lower yields than the group of López-Granados, Santander et al. did not observe any deactivation after 20 h on stream.
Alkoxymethylfurfurals (AMF) from HMF Etherification with Different Alcohols
All AMF compounds are obtained by the etherification of HMF with different alcohols [97,98]. Nevertheless, fructose or glucose are also used as feedstocks, since some authors claim that the direct use of HMF as a raw material is economically unfeasible [99]. As can be observed in Figure 6, which reflects the most usual reactions, AMFs are formed by substituting the OH group of HMF with an alkoxy group [23]. These compounds include 5-isopropoxymethylfurfural (IPMF), 5-tert-butoxymethylfurfural (TBMF), 5-octyloxymethylfurfural (OMF), 5-dodecyloxymethylfurfural (DDMF) and 5-hexadecyloxymethylfurfural (HDMF).
The use of higher molecular weight alcohols leads to the formation of higher molecular weight AMF compounds, which present better low-temperature flow properties than low molecular weight AMFs, such as methoxymethylfurfural (MMF) or EMF [97,98].
Another symmetrical ether, OBMF, can be produced by HMF self-etherification or by the Williamson reaction using HMF and 5-chloromethylfurfural [23] (see Figure 7). Depending on the operating conditions, different byproducts, such as alkyl levulinates and dialkylacetals [100], can be obtained.
Figure 7. OBMF production by HMF self-etherification [23].
The most used catalysts in etherification reactions are acid catalysts, which are identified as the key point of this process [99]. Although some homogeneous acid catalysts have been proposed [101-106], the use of heterogeneous catalysts (heteropolyacids [107,108], their supported nanoparticles [109,110], acid-modified mesoporous silica materials [3,111], GO [112], zeolites and resins [113]) seems more promising and more adequate in order to avoid the well-known disadvantages of homogeneous acid catalysts [114]. Recent studies have evidenced that the type and strength of the acid sites, as well as the operating conditions and the cosolvent used, influence the reaction path and the selectivity [111].
Among the possible AMF compounds, many studies focus on EMF production because of its potential use as a biofuel. Most of these studies are carried out under batch conditions. Some authors [115] suggested that the EMF yield depends on the relative Lewis:Brönsted acid site ratio of the catalyst, with strong Brönsted and Lewis acids favoring the formation of levulinic acid esters as byproducts. These authors [115] used SBA-15 zeolites in protonated form and MCM-41 type zeolites; in both cases, the acid sites were tuned by introducing Zr and S and by adjusting the Si/Al ratio, respectively. The catalytic tests were performed at 140 °C for 5 h under autogenous pressure, achieving a 76% EMF yield and total HMF conversion with Zr-doped SBA-15, as can be observed in Table 5. Yang et al. [107] used heteropolyacids, such as H3PW12O40 (HPW), to transform fructose into EMF. This type of catalyst is common in carbohydrate conversion [116-118] and etherification processes [119] due to their well-defined structure, their Brönsted acidity, the possibility of modifying their acid-base properties, their ability to accept and release electrons and their high proton mobility [120]. However, the authors suggest that EMF selectivity depends not only on the acid character of the catalyst, but also on the reaction time, reaction temperature and catalyst amount [107]; therefore, it is necessary to establish appropriate reaction conditions. Hence, they [107] were able to achieve an EMF yield of 65% at 130 °C and 0.5 h. The presence of THF as a cosolvent improved this yield up to 76% in a batch reactor [107] by limiting the undesirable formation of humins. H. Wang et al. [108] also used heteropolyacids in the production of EMF from fructose and other polysaccharides, achieving an EMF yield of 64% using DMSO as a cosolvent and phosphotungstic acid (HPW), corroborating the importance of suitable cosolvents to avoid undesired byproducts. Moreover, they suggest that water in the reaction medium could significantly reduce the EMF yield.
Other authors, such as Liu et al. [3], have developed new catalysts by immobilizing propyl sulfonic groups on mesoporous silica to build silica-supported sulfonic acids. Using these catalysts and HMF as the raw material, different reaction parameters were optimized, concluding that the optimum temperature was 100 °C, giving an HMF conversion of 96.5% and an EMF yield of 83.8%. Moreover, they suggested that: (i) temperatures higher than 100 °C favor the formation of undesired byproducts derived from HMF polymerization and cross-polymerization [121-123], (ii) high catalyst loadings lead to the formation of byproducts such as ethyl levulinate, (iii) feeding C6 sugars reduces the EMF yield to below 65% and (iv) the catalysts can be reused with no loss of activity.
Morales et al. [111] also used silica sulfate catalysts to obtain EMF from fructose. The EMF yield was 63.4% at 116 °C using the Ar-SO3H-SBA-15 catalyst and DMSO as the cosolvent. In this case, the main byproduct was ethyl levulinate. The catalyst was reused up to four consecutive times without any regeneration treatment, showing only slight deactivation due to organic deposits on the catalyst surface [111].
Finally, another incipient heterogeneous catalyst reported is GO, formed by the Hummers method, which is based on the exhaustive oxidation of graphite under strongly acidic conditions (in concentrated H2SO4) using permanganate and H2O2 [112]. After this treatment, the graphene contains numerous oxygen functional groups (alcohols, epoxides and carboxylates) as well as a small quantity of sulfate groups [112]. These functional groups made it possible to achieve a high EMF yield (92%) from HMF. However, a loss of activity was observed due to the leaching of some active sites, limiting the catalyst's reusability. Different carbohydrate feeds were also tested, obtaining EMF yields below 71% [112].
Conclusions
This review has described the renewable and sustainable catalytic production of value-added biochemical commodities, such as polyols, MAN and AMF, from furanic platform molecules, namely HMF and FF. These biochemicals have many applications in the pharmaceutical and cosmetics industries, in the production of polymers and as biofuels.
With regard to polyols, their formation requires hydrogenation/hydrogenolysis reactions, including furan ring-opening. The review of the available literature on this topic indicates that research is primarily focused on the synthesis of bifunctional catalysts, composed of noble metal active sites for hydrogenation and acid-base supports for hydrogenolysis, which are tested mainly in batch-type reactors. The good performance of these catalysts seems to be related to metallic alloy formation, an adequate Lewis/Brönsted ratio and the dispersion of the metal phases. The polyols can be produced in sequential steps and/or via reaction intermediates, in both monophasic and biphasic reaction media.
Regarding MAN, FF oxidation in the gas phase seems to be the most promising alternative in the short term, since FF is one of the few biomass-derived building blocks currently available commercially. However, the oxidation of HMF will become a very promising alternative as soon as its commercial production increases. For this purpose, bifunctional catalysts, as in the case of polyols, are the most promising for the production of this chemical commodity. Concretely, vanadium-containing catalysts, which are currently used in the conventional MAN production process, seem to be good candidates for the heterogeneous catalytic conversion of HMF into MAN.
As in the previous processes, the formulation of a suitable heterogeneous catalyst is essential in the HMF etherification process, the acid sites being the key parameter. Strong Lewis acid sites seem to favor the EMF yield compared with strong Brönsted acid sites. The EMF yield also depends on the operating conditions: temperature, amount of catalyst and cosolvent. The use of a cosolvent can limit or avoid undesirable byproducts, such as humins.
Overall, it can be concluded that it is possible to produce different biochemical commodities from FF and HMF. A suitable heterogeneous catalyst formulation is essential in order to enhance the selectivity toward the target products and obtain stable catalytic systems. For this purpose, deeper characterization of the catalysts and the use of continuous reaction systems, or catalyst reusability tests in batch systems, are necessary. A deeper correlation among activity, selectivity and characterization would help elucidate the reaction mechanisms, especially in the case of MAN and AMF.
Author Contributions: A.I. was in charge of Section 2 (Polyols); I.A. was in charge of Section 3 (MAN) while N.V. was in charge of the Introduction section. Finally, J.R. was in charge of Section 4 (AMF). All authors discussed the results/information and contributed to the final manuscript. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
"Chemistry",
"Environmental Science"
] |
Coupling of light from microdisk lasers into plasmonic nano-antennas
An optical dipole nano-antenna can be constructed by placing a sub-wavelength dielectric (e.g., air) gap between two metallic regions. For typical applications using light in the infrared region, the gap width is generally in the range between 50 and 100 nm. Owing to the close proximity of the electrodes, these antennas can generate very intense electric fields that can be used to excite nonlinear effects. For example, it is possible to trigger surface Raman scattering on molecules placed in the vicinity of the nano-antenna, allowing the fabrication of biological sensors and imaging systems at the nanometric scale. However, since nano-antennas are passive devices, they need to receive light from external sources that are generally much larger than the antennas. In this article, we numerically study the coupling of light from microdisk lasers into plasmonic nano-antennas. We show that, by using micro-cavities, we can further enhance the electric fields inside the nano-antennas.
©2009 Optical Society of America
OCIS codes: (130.0130) Integrated optics; (140.5960) Semiconductor lasers; (240.6680) Surface plasmons.
References and links
1. S. A. Maier, Plasmonics: Fundamentals and Applications (Springer, New York, 2007).
2. C. Genet and T. W. Ebbesen, "Light in tiny holes," Nature 445(7123), 39-46 (2007).
3. A. Boltasseva, S. I. Bozhevolnyi, T. Søndergaard, T. Nikolajsen, and K. Leosson, "Compact Z-add-drop wavelength filters for long-range surface plasmon polaritons," Opt. Express 13(11), 4237-4243 (2005), http://www.opticsexpress.org/abstract.cfm?URI=oe-13-11-4237.
4. S. A. Maier, P. G. Kik, H. A. Atwater, S. Meltzer, E. Harel, B. E. Koel, and A. A. G. Requicha, "Local detection of electromagnetic energy transport below the diffraction limit in metal nanoparticle plasmon waveguides," Nat. Mater. 2(4), 229-232 (2003).
5. J. C. Weeber, M. U. Gonzalez, A. L. Baudrion, and A. Dereux, "Surface plasmon routing along right angle bent metal stripes," Appl. Phys. Lett. 87(22), 221101 (2005).
6. A. Minovich, H. T. Hattori, I. McKerracher, H. H. Tan, D. N. Neshev, C. Jagadish, and Y. S. Kivshar, "Enhanced transmission of light through periodic and chirped lattices of nanoholes," Opt. Commun. 282(10), 2023-2027 (2009).
7. V. A. Poldoskiy, A. K. Sarychev, and V. M. Shalaev, "Plasmon modes in metal nanowires and left-handed materials," J. Nonlinear Opt. Phys. Mater. 11(1), 65-74 (2002).
8. H. Fischer and O. J. F. Martin, "Engineering the optical response of plasmonic nanoantennas," Opt. Express 16(12), 9144-9154 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-12-9144.
9. N. Yu, E. Cubukcu, L. Diehl, D. Bour, S. Corzine, J. Zhu, G. Höfler, K. B. Crozier, and F. Capasso, "Bowtie plasmonic quantum cascade laser antenna," Opt. Express 15(20), 13272-13281 (2007), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-20-13272.
10. J. Li, A. Salandrino, and N. Engheta, "Shaping light beams in the nanometer scale: A Yagi-Uda nanoantenna in the optical domain," Phys. Rev. B 76(24), 245403-245407 (2007).
11. M. L. Brongersma, "Engineering optical nanoantennas," Nat. Photonics 2(5), 270-272 (2008).
12. J. Merlein, M. Kahl, A. Zuschlag, A. Sell, A. Halm, J. Boneberg, P. Leiderer, A. Leitenstorfer, and R. Bratschitsch, "Nanomechanical control of an optical nano-antenna," Nat. Photonics 2(4), 230-233 (2008).
13. A. Alù and N. Engheta, "Tuning the scattering response of optical nanoantennas with nanocircuit loads," Nat. Photonics 2(5), 307-310 (2008).
14. N. Yu, R. Blanchard, J. Fan, Q. J. Wang, C. Pflügl, L. Diehl, T. Edamura, M. Yamanishi, H. Kan, and F. Capasso, "Quantum cascade lasers with integrated plasmonic antenna-array collimators," Opt. Express 16(24), 19447-19461 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-24-19447.
15. H. G. Park, J. K. Hwang, J. Huh, H. Y. Ryu, S. H. Kim, J. S. Kim, and Y. H. Lee, "Characteristics of modified single-defect two-dimensional photonic crystal lasers," IEEE J. Quantum Electron. 38(10), 1353-1365 (2002).
16. N. Yokouchi, A. J. Danner, and K. D. Choquette, "Vertical-cavity surface-emitting laser operating with photonic crystal seven-point defect structure," Appl. Phys. Lett. 82(21), 3608-3610 (2003).
17. H. T. Hattori, C. Seassal, X. Letartre, P. Rojo-Romeo, J. L. Leclercq, P. Viktorovitch, M. Zussy, L. di Cioccio, L. El Melhaoui, and J. M. Fedeli, "Coupling analysis of heterogeneous integrated InP based photonic crystal triangular lattice band-edge lasers and silicon waveguides," Opt. Express 13(9), 3310-3322 (2005), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-13-9-3310.
18. H. T. Hattori, V. M. Schneider, R. M. Cazo, and C. L. Barbosa, "Analysis of strategies to improve the directionality of square lattice band-edge photonic crystal structures," Appl. Opt. 44(15), 3069-3076 (2005).
19. R. M. Cazo, C. L. Barbosa, H. T. Hattori, and V. M. Schneider, "Steady-state analysis of a directional square lattice band-edge photonic crystal laser," Microw. Opt. Technol. Lett. 46(3), 210-214 (2005).
20. D. Ohnishi, T. Okano, M. Imada, and S. Noda, "Room temperature continuous wave operation of a surface-emitting two-dimensional photonic crystal diode laser," Opt. Express 12(8), 1562-1568 (2004).
21. S. J. Choi, K. Djordjev, and P. D. Dapkus, "Microdisk lasers vertically coupled to output waveguides," IEEE Photon. Technol. Lett. 15(10), 1330-1332 (2003).
22. S. V. Boriskina, T. M. Benson, P. D. Sewell, and A. I. Nosich, "Directional emission, increased free spectral range, and mode Q-factors in 2-D wavelength scale optical microcavity structures," IEEE J. Sel. Top. Quantum Electron. 12, 1175-1182 (2006).
23. M. Fujita, R. Ushigone, and T. Baba, "Continuous wave lasing in GaInAsP injection laser with threshold current of 40 μA," Electron. Lett. 36, 790-791 (2000).
24. H. T. Hattori, D. Liu, H. H. Tan, and C. Jagadish, "Large square resonator laser with quasi-single-mode operation," IEEE Photon. Technol. Lett. 21(6), 359-361 (2009).
25. FullWAVE 6.1, RSoft Design Group, 2008, http://www.rsoftdesign.com.
26. Y. Z. Huang and Y. D. Yang, "Mode coupling and vertical radiation loss for whispering gallery modes in 3-D microcavities," J. Lightwave Technol. 26(11), 1411-1416 (2008).
27. H. T. Hattori, "Modal analysis of one-dimensional nonuniform arrays of square resonators," J. Opt. Soc. Am. B 25(11), 1873-1881 (2008).
Introduction
Surface plasmon polaritons (plasmonic waves) are electromagnetic excitations propagating at the interface between a dielectric and a conductor, evanescently confined in the normal direction [1]. A few years ago, it was shown that the excitation of plasmonic waves could lead to the transmission of light at the sub-wavelength scale [2], creating the possibility of developing tiny optical components with dimensions smaller than the wavelength of light [3-7]. Another important aspect of the excitation of plasmonic waves is the strong enhancement of the incident electric fields near the surface of the metallic regions, by several orders of magnitude [8,9]. These regions of intense electric fields ("hot" regions) can excite localized nonlinear effects such as surface-enhanced Raman scattering. The excitation of highly intense electric fields in sub-wavelength regions can be achieved by using plasmonic devices called nano-antennas. One simple example of a nano-antenna is a dipole antenna, in which a sub-wavelength air gap between two metallic regions can enhance the electric field more than 100 times [9].
The properties of nano-antennas have been extensively discussed (see, for example, [9-13]). Besides applications in biological sensing and imaging, these nano-antennas can be used to manipulate nano-particles, which are attracted by the intense fields generated in the gap between the metallic regions. Recently, nano-antennas have also been used to collimate the far-field emission of semiconductor lasers [14].
These exciting applications have fostered research on nano-antennas by several groups worldwide. However, light has so far been coupled into nano-antennas with nanometric gaps from large-area semiconductor lasers (e.g. Fabry-Perot lasers with transverse areas of several micrometers). This way of coupling light into the nano-antenna is not efficient, since most of the emitted light is not coupled into the nano-antenna but lost elsewhere. More compact laser sources, such as photonic crystal [15-20] and polygonal lasers [21-24], could instead be used to excite these tiny antennas. In this article, we examine the coupling of light from a microdisk laser into nano-antennas with dimensions between 50 and 100 nm: different coupling schemes are analyzed and their performances are assessed. We show that coupling light into dipole nano-antennas can be tricky, since the metallic surfaces act as reflectors of the incident wave, leading to additional resonant peaks in the microdisk structure. Although the introduction of micro-cavities close to the nano-antennas creates additional resonant peaks in the microdisk resonator, the addition of these cavities can increase the electric field inside the nano-antenna and, at the same time, improve the coupling efficiency into the device.
Stand-alone microdisk structure and direct coupling into the nano-antenna
A schematic diagram of an epitaxially layered structure that could be used to fabricate these devices is shown in Fig. 1(a). The core layer consists of GaAs with three In0.5Ga0.5As quantum dot layers, whose vertical confinement is provided by a low refractive index air top layer (total internal reflection) and a bottom Bragg stack (operating in its bandgap region). The Bragg stack consists of 25 pairs of alternating quarter-wavelength AlAs and GaAs layers, providing a reflectivity above 99%. The quantum dots have diameters ranging from 20 to 30 nm and heights between 3 and 5 nm, with an average quantum dot concentration of 4×10^10 cm−2. The separation between different quantum dot layers is about 30 nm. These quantum dots can be grown in a Metal Organic Chemical Vapor Deposition (MOCVD) system by using the Stranski-Krastanov method. They have a gain peak at 1160 nm with a gain bandwidth of about 100 nm. In order to analyze the different optical devices, commercial three-dimensional finite difference time domain (FDTD) software [25] is employed. The mode is assumed to be TE, with the main component of the magnetic field in the y-direction (Hy), perpendicular to the plane of the device [see Fig. 1(a)]. A light source is placed at the edge of the microdisk laser and is assumed to be Gaussian, with a spot-size diameter of 300 nm. The computation region is terminated by perfectly matched absorbing layers. The grid is uniform along the x and z directions (with grid sizes of 30 nm) and the time step is Δt = 6.7×10−18 s. No material gain is added to these simulations because we are assessing coupling efficiencies.
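As a quick sanity check on these settings, the Courant (CFL) stability condition for a uniform three-dimensional FDTD grid bounds the usable time step by Δt ≤ Δx/(c√3). A minimal sketch (the 30 nm grid size and the stated Δt are taken from the text; everything else is standard):

```python
import math

c = 2.99792458e8      # vacuum speed of light (m/s)
dx = 30e-9            # uniform grid size stated in the text (m)
dt = 6.7e-18          # time step stated in the text (s)

# 3D Courant (CFL) stability limit for a uniform cubic FDTD grid
dt_max = dx / (c * math.sqrt(3))
print(f"CFL limit: {dt_max:.2e} s")              # ~5.8e-17 s
print(f"stated dt / limit = {dt / dt_max:.3f}")  # well below 1, hence stable
```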
We first consider a scenario in which a stand-alone microdisk couples light into a single-mode waveguide, as shown in Fig. 1(b). The single-mode waveguide has a width of 300 nm and supports a single mode at the wavelength of 1160 nm. The radius of the microdisk is 1.5 µm and the gap between the microdisk laser and the waveguide is 100 nm. The field spectrum (Hy) for this microdisk laser is shown in Fig. 2(a) in the range between 1100 and 1200 nm. The main resonant peak appears at the free-space wavelength λ = 1166 nm with a quality factor (Q) of 16000. This mode corresponds to TE17,1, using the same convention as in [26] (the first index is the azimuthal mode number and the second index is the radial mode number). Another resonant peak appears close to the edge of the gain bandwidth (λ = 1116 nm), with Q = 8000. A power budget analysis indicates that about 38% of the input power is coupled into the waveguide. The magnetic field distribution (Hy) at this wavelength is shown in Fig. 2(b). The resonant mode frequencies of the microdisk are determined by solving the following eigenmode equation [26]:

η J'_m(n_eff kR) / J_m(n_eff kR) = H^(2)'_m(kR) / H^(2)_m(kR)

where J_m and H^(2)_m are the Bessel and second-kind Hankel functions of order m, k is the free-space wave number, R is the radius of the microdisk, n_eff is the effective refractive index of the TE mode, and η is 1/n_eff for TE modes. The vertical distribution of the main microdisk mode (Hy magnetic field) is shown in Fig. 2(c): the mode is well confined in the vertical direction but has a very asymmetric field profile. It rapidly decays to zero in the top air layer, but slowly decays to zero in the bottom Bragg stack region [this can be noted by simultaneously observing the vertical field profile in Fig. 2(c) and the refractive index profile in Fig. 2(d)]. In fact, the electromagnetic fields extend by more than 700 nm into the Bragg layers. One way to reduce the field penetration into the Bragg stack is to use quarter-wavelength layers with higher index contrast (e.g. air and GaAs).
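The resonance wavelengths and radiation-limited quality factors follow from the complex roots k of this characteristic equation. A minimal root-search sketch is shown below; the effective index n_eff is an assumed input (the paper does not quote it), so the converged root only approximately reproduces the reported TE17,1 peak, and the analytic Q is the radiation-limited value rather than the loaded Q of the simulation:

```python
import numpy as np
from scipy.special import jv, jvp, hankel2, h2vp

R, m = 1.5e-6, 17        # disk radius and azimuthal order from the text
n_eff = 2.7              # assumed effective index (not quoted in the paper)

def chi(k):
    """eta*Jm'(n k R)*Hm2(kR) - Jm(n k R)*Hm2'(kR), denominators cleared."""
    x, y = n_eff * k * R, k * R
    return (1.0 / n_eff) * jvp(m, x) * hankel2(m, y) - jv(m, x) * h2vp(m, y)

def secant(f, z0, z1, tol=1e-10, itmax=200):
    """Complex secant iteration (the resonances are complex roots of chi)."""
    f0, f1 = f(z0), f(z1)
    for _ in range(itmax):
        z2 = z1 - f1 * (z1 - z0) / (f1 - f0)
        if abs(z2 - z1) < tol * abs(z2):
            return z2
        z0, f0, z1, f1 = z1, f1, z2, f(z2)
    return z1

k0 = 2 * np.pi / 1.166e-6                      # seed near the reported peak
k = secant(chi, complex(k0, 0.0), complex(k0 * 1.001, -1e3))
print(f"lambda = {2*np.pi/k.real*1e9:.1f} nm, Q = {-k.real/(2*k.imag):.3g}")
```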
A dipole nano-antenna can be placed directly in the single-mode waveguide. This dipole nano-antenna consists of two gold regions with a solid region between them; the metallic regions are placed at the edges of the waveguide, as shown in Fig. 3(a). We assume that the width of the gap between the metallic regions is 60 nm. There are additional lateral resonant peaks due to the large reflectivity of the metallic regions of the nano-antenna. The main peak still appears at λ = 1166 nm, with a reduced Q of 7000, as shown in Fig. 3(b). Another lateral resonant peak appears at λ = 1169.4 nm with a Q of 1600. The mode with the largest Q will be the fundamental mode, i.e., the first mode to reach lasing [27]. A power budget analysis indicates that, at λ = 1166 nm, only 12% of the power is transmitted through the aperture and the electric field (Ex) amplitude is about 1.41 MV/m (note that the electric field amplitude is defined by the software and is not directly related to the laser output power). As expected, direct coupling into the nano-antenna is not efficient: most of the power is lost to the surrounding medium rather than coupled into the dipole antenna. More efficient methods to couple light into the nano-antenna will be shown later.
Using tapers to couple light into the nano-antennas
Instead of coupling light directly into the nano-antenna, we can couple light by using a taper, as shown in Fig. 4(a). The taper has a total length of 6 µm. We tried to avoid a very long taper but, at the same time, to use a taper of sufficient length to produce a good transition between the single-mode waveguide and the aperture of the nano-antenna. The magnetic field spectrum (Hy) is shown in Fig. 4(b). The main resonant peak appears at λ = 1164.8 nm with a quality factor of 12000. We can indeed couple more power into the nano-antenna with the nano-taper: the coupling efficiency increases to 30%. However, the electric field strength does not increase dramatically with the introduction of the taper: the amplitude of the electric field increases by only 30%.
A reduction in the taper length leads to a reduction in the coupling efficiency, as expected. A longer taper can couple more power into the nano-antenna, but at the expense of a larger structure. In any case, only 38% of the generated light could be coupled into the single-mode waveguide even without the nano-antenna, so we are close to the limit of the amount of light that can be coupled into the waveguide. The quality factor of the sidelobe at λ = 1169.4 nm is reduced to 600. The introduction of the taper creates additional lateral modes in the wavelength region around the shortest-wavelength edge of the material gain, indicating that the taper is reflective for these particular modes.
Using a photonic crystal micro-cavity to couple light into nano-antennas
One way to increase the electric field inside the nano-antenna is to introduce a micro-cavity close to the antenna, as shown in Fig. 5(a). In typical laser oscillators, the circulating intensity inside the micro-cavity is considerably larger than outside it. This is because photons bounce back and forth inside the micro-cavity, and the net effect is an accumulation of photons there. This generally means that the introduction of micro-cavities into an optical system can produce regions of intense electromagnetic fields. This effect can lead to an enhancement of the electric fields close to the nano-antenna, as will be discussed later.
However, this micro-cavity has "reflecting" components, such as the air holes, which can reflect light back into the microdisk resonator. The end effect is similar to what happens in optical fiber systems with very reflective fiber ends: new resonant modes appear in the laser resonator. However, if these additional modes have considerably lower quality factors than the main resonant mode, there is still a range of electrical/optical pumping power in which only one mode lases before the other modes reach their threshold levels. In our case, we create a micro-cavity by adding air holes with equal diameters of 120 nm, with centers positioned at distances of 200 and 350 nm "below" the "lower" edge of the nano-antenna. Hence, the cavity is bounded by the nano-antenna on one side and by the air holes on the other. The magnetic field spectrum (Hy) is shown in Fig. 5(b). The micro-cavity formed by the nano-antenna and the air holes has a large transmission bandwidth between 1100 nm and 1400 nm. Adding more holes can increase the quality factor of the micro-cavity structure, but not by much, since the main escape route of photons in this micro-cavity is the nano-antenna and not the air holes. When this micro-cavity is added to the microdisk resonator, several resonant peaks appear in the gain region of the quantum dots. The main peak appears at λ = 1164.5 nm with a Q of 13000. Lateral peaks appear at λ = 1165 nm with a Q of 4000 and at λ = 1169 nm with a Q of 1800, together with several peaks at the shortest-wavelength edge of the gain region of the quantum dots, around 1120 nm (the main peak in this region appears at 1118.6 nm with a Q of 2000).
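The quality factors quoted above can be read off an FDTD spectrum by fitting a Lorentzian to each resonance and using Q = λ0/FWHM. A minimal sketch on synthetic data (the peak position and Q mimic the reported 1164.5 nm mode; the noise level is an illustrative assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(lam, a, lam0, fwhm, c):
    return a / (1.0 + ((lam - lam0) / (fwhm / 2.0)) ** 2) + c

# synthetic stand-in for the |Hy|^2 spectrum around the main peak
lam = np.linspace(1163.5, 1165.5, 400)                        # nm
clean = lorentzian(lam, 1.0, 1164.5, 1164.5 / 13000, 0.01)
noisy = clean + np.random.default_rng(0).normal(0.0, 0.005, lam.size)

popt, _ = curve_fit(lorentzian, lam, noisy, p0=[1.0, 1164.5, 0.1, 0.0])
a, lam0, fwhm, c = popt
print(f"lambda0 = {lam0:.2f} nm, Q = {lam0 / abs(fwhm):.0f}")  # ~13000
```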
The magnetic field distribution at the main peak (λ = 1164.5 nm) is shown in Fig. 6(a). A power budget analysis indicates that about 21% of the generated power is transmitted through the nano-antenna. This amount is higher than in the case of direct coupling but definitely lower than in the case where a nano-taper couples light into the nano-antenna. On the other hand, the electric field has doubled with the introduction of the air holes and the creation of the micro-cavity. Electric field enhancement also occurs in other devices, such as vertical-cavity surface-emitting lasers (VCSELs). Since one of the main ideas of nano-antennas is to generate intense electric fields in small regions that could trigger surface-enhanced Raman scattering (SERS) locally, the micro-cavity has further boosted the electric field in the nano-antenna and, at the same time, improved the coupling efficiency into it. We can clearly observe, in Figs. 6(b) and 6(c), that the electric field is very intense in the gap between the two metallic regions of the nano-antenna. A general guideline for the optimization of the photonic crystal cavity and the nano-antenna is provided below:
1. We need to minimize the reflection of the micro-cavity structure composed of the nano-antenna and air holes at the transmission peaks of the combined structure. This means that we need to optimize the positions and diameters of the air holes to maximize the transmission at the resonant peaks of the structure. As mentioned previously, the escape rate of light through the nano-antenna is generally much higher than the escape rate through the holes, so two or three holes should be enough to boost the electric field inside the nano-antenna (more holes will not increase the Q of the micro-cavity).
2. We should try to design the micro-cavity structure to support a single longitudinal mode in the cavity.This may reduce the number of resonant peaks when we merge the microdisk resonator with this structure.
3. We need the transmission peak of the micro-cavity structure composed of the nano-antenna and air holes to match the main resonant peak (the peak with the highest Q in the gain bandwidth of the quantum dots). At the same time, when we merge the microdisk and the micro-cavity structure, we need to slightly change the dimensions and positions of the air holes to minimize the number of peaks in the gain region of the quantum dots. If additional lateral modes are created by merging the microdisk and the micro-cavity, we should make sure that they either have a much lower Q than the main mode or appear outside the gain region of the quantum dots.
Conclusions
In this article, we analyzed different schemes to couple light from a microdisk laser into a plasmonic nano-antenna. We showed that direct coupling into the nano-antenna is not efficient, but that using either a nano-taper or a low quality-factor micro-cavity improves the coupling efficiency into the nano-antenna. Moreover, the addition of a micro-cavity can further enhance the amplitude of the electric field inside the nano-antenna.
Fig. 1. (a) Schematic of the epitaxially layered structure and (b) microdisk laser coupled to a single-mode waveguide.
Fig. 2. (a) Magnetic field (Hy) spectrum at the centre of the waveguide; (b) magnetic field distribution at the main resonant peak at λ = 1166 nm; (c) vertical distribution of the mode at λ = 1166 nm; (d) refractive index profile of the epitaxially layered structure.
Fig. 3. (a) Direct coupling scheme from the microdisk into a nano-antenna and (b) Hy spectrum at the centre of the waveguide.
Fig. 4. (a) Microdisk coupling light to the nano-antenna via a nano-taper and (b) Hy spectrum at the centre of the waveguide.
Fig. 5. (a) Microdisk coupling light to a nano-antenna via a photonic crystal cavity and (b) Hy spectrum at the centre of the waveguide.
Fig. 6. Field distributions at the main peak at λ = 1164.5 nm: (a) magnetic field (Hy) distribution, (b) electric field (Ex) distribution and (c) close-up of the electric field in the nano-antenna.
"Engineering",
"Physics"
] |
The link between urbanization, energy consumption, foreign direct investments and CO2 emanations: An empirical evidence from the emerging seven (E7) countries
This study investigated the link between energy consumption (EC), foreign direct investments (FDI), urbanization (URB) and CO2 emissions in the emerging seven (E7) countries for the period 1991 to 2014. The exploration made a methodological contribution by employing modern econometric methods that are robust to the issues of cross-sectional dependence and slope heterogeneity, so as to obtain valid and reliable outcomes. From the results, the panel under consideration was heterogeneous and cross-sectionally correlated. Also, the series were first-difference stationary and cointegrated in the long run. The DCCEMG and DCCEPMG estimators were engaged to explore the long-run elastic effects of the covariates on the response variable, and from the results, EC and URB were key promoters of CO2 emissions in the countries. However, FDI mitigated CO2 emissions in the nations. Additionally, economic growth (GDP) and population growth (POP) escalated CO2 emissions in the E7. On the D-H causality test outcomes, feedback causalities between POP and CO2 emissions, GDP and CO2 emissions, FDI and CO2 emissions, and URB and CO2 emissions were discovered. Finally, a one-way causation from EC to CO2 emissions was unfolded. Based on the verdicts, policy suggestions were proposed to help abate the rate of CO2 emissions in the countries.
Introduction
Urbanization (URB) has been widely accepted as one of the preconditions for development in the world. However, this notion has proven problematic due to the numerous consequences associated with URB, of which CO2 emissions form a key part (Behera and Dash, 2017). Despite its countless demerits, URB witnessed a noticeable surge of about 50% in the initial stage of the twenty-first century globally (Behera and Dash, 2017). With this massive rise in URB, factors like improper expansion of industries, spiraling demand for automobiles, rising income of the middle class and urban clusters have escalated world CO2 emissions through energy consumption (EC) (Behera and Dash, 2017; Dogan and Turkekul, 2016). According to Poumanyvong and Kaneko (2010), URB will raise the globe's urban population to 4.6 billion by 2030. This half of the population, living in urban areas, is expected to consume more than 50% of the world's energy and to escalate CO2 emissions by over 60% (Shahbaz et al., 2015). As indicated by Afridi et al. (2019), EC and URB are among the major causes of high CO2 emissions in both advanced and developing economies, of which the Emerging Seven (E7) countries are no exception. Also, foreign direct investment (FDI) has been established as a catalyst for economic development in the E7. However, its major contribution to environmental pollution in these nations cannot be overlooked.
Based on 2018 statistics, E7 nations accounted for 40% of the world's EC and also witnessed a massive increase in URB and FDI inflows. These three variables contributed to the nations' ranking among the top 20 emitters of global CO2 in that year (Tong et al., 2020). The above statistics suggest that member countries of the E7 are prone to threats emanating from climate change, due to the rapidly increasing levels of URB, EC and FDI influxes with their associated high CO2 emissions. Therefore, investigating the nexus between the variables in the context of the E7, so as to come out with recommendations to help mitigate CO2 emissions in the countries and the rest of the world, was worthwhile. Also, URB, EC, FDI and CO2 emissions have been found to be inextricably related. However, consensus on the nature of the association between the variables has not been reached, since URB, EC and FDI could promote, abate or exert no effect on CO2 emissions. These contradictory affiliations imply that the debate on the nexus between the variables is unceasing and warrants further explorations like ours. The study makes several contributions to extant literature as follows: to the best of our knowledge, limited explorations have considered the combined effects of URB, EC and FDI on CO2 emissions in the E7 countries. However, E7 countries have all the characteristics required to witness a relationship of such nature. This study was therefore conducted to help fill that gap.
The study also made a methodological contribution by employing advanced econometric techniques like the DCCEMG and DCCEPMG estimators. These techniques were adopted because they are robust to correlations in residual terms and heterogeneity in slope coefficients. Most explorations conducted on the member countries of the E7 did not consider these rigorous econometric methods. Further, most investigations conducted on the E7 failed to consider the issue of omitted variable bias in their analysis. However, according to Sun et al. (2021), Musah et al. (2021a), Phale et al. (2021) and Musah et al. (2021b), omitted variable bias is detrimental because it leads to biased coefficient estimates that could result in erroneous tests of hypotheses. The study therefore catered for the above issue by controlling for economic growth (GDP) and population growth (POP). Finally, based on data properties, our model specification included the lagged response variable to account for dynamics, persistence and, perhaps, the slow-moving nature of some of the indicators. Surprisingly, many explorations on the E7 countries failed to consider these essential attributes of the data. This implies that the models in those studies were likely to be misspecified, resulting in biased and erroneous inferences.
The study is important because it provides grounds for a better comprehension of how URB, EC and FDI influence CO2 emissions in E7 countries. The study is also relevant because it comes out with concrete policies that could be used to minimize CO2 emissions in the E7. The exploration is finally essential since it serves as a reference material for further studies on this current topic. The next section of this report presents the methodology adopted to meet the focus of the research, while the results of the investigation are outlined in the third part. Detailed discussions on the outcomes of the research are presented in the fourth part, while the final portion displays the conclusions of the report.
Literature review
This aspect of the exploration reviews literature that supports the topic under study. The reviews are grouped into three parts, comprising the urbanisation-CO2 emission nexus, the energy consumption-CO2 emission nexus, and the foreign direct investments-CO2 emission nexus, as follows.
Urbanisation-CO2 emission nexus
A lot of studies on the nexus between URB and CO2 emissions have been conducted in different geographical environments. For instance, Ozatac et al. (2017) researched Turkey for the period 1960 to 2013, and discovered URB as a key promoter of CO2 emissions. Though this study is insightful, generalizing its findings to all countries is inappropriate, because it was confined to only Turkey. Also, the study was time series in nature; its revelations could vary if the panel data approach were used. Our exploration, which is panel data in nature, is therefore essential, since it could offer outcomes that support or contrast the above finding. An investigation on India was undertaken by Franco et al. (2017). From the disclosures, URB promoted CO2 emissions in the country. This result corroborates those of Musah et al. (2020a), Wang et al. (2019) and Saidi and Mbarek (2016), but conflicts with those of Lin et al. (2018) and Sadorsky (2014), who established an immaterial association between URB and CO2 emissions in 16 emerging economies. The contradictory findings imply that the debate on the URB-CO2 emission nexus is not over yet and demanded further studies like ours. Ali et al. (2017) undertook an investigative study on Singapore and discovered that URB improved the country's environmental quality. This revelation is in line with Sharma (2011), but contrasts with those of Sun et al. (2018) and Sehrawat et al. (2015), who affirmed URB as a key promoter of CO2 emissions. The divergent discoveries imply that an in-depth analysis of the link between URB and CO2 emissions like ours was paramount, since it could come out with results that improve upon the debate on URB and CO2 emissions.
A study to examine the non-linear effects of URB on CO2 emissions in Chinese provinces was undertaken by Xie and Liu (2019). From the discoveries, a "roller coaster" pattern between URB and CO2 emissions was unfolded. Though the outcome of the study is vital, its revelation is not fit for generalization for two reasons. Firstly, the study was limited to only Chinese provinces; the results might not be the same if provinces, regions or counties in different jurisdictions, which are highly heterogeneous, were included in the analysis. Secondly, if the exploration were undertaken in a linear framework, the outcome might also differ from the above. McGee and York (2018) researched the asymmetric affiliation between URB and CO2 emissions in less developed countries. It was uncovered that the connexion between URB and CO2 emissions was asymmetrical, with a fall in URB reducing CO2 emissions by a different magnitude than a rise in URB increased them. The study is very insightful; however, its disclosure cannot be generalized because it was asymmetrical in nature. If the exploration were conducted in a symmetrical manner, the outcome could be different. The outcome of the study can also not be generalized for all countries, because it was confined to nations that were not developed. If advanced nations were incorporated into the analysis, the result might have been different.
Energy consumption-CO2 emission nexus
Countless explorations of the link between EC and CO2 emissions have been performed, with varied outcomes. For instance, Jian et al. (2019) conducted a study on China for the period 1982 to 2017 and discovered EC as a key promoter of CO2 emissions. Though this finding is very essential, the fact that the study was time series in nature implies that its result should be interpreted with caution; if the exploration were panel in nature, its outcome might have been different. A research study on 10 SSA countries was undertaken by Inglesi-Lotz and Dogan (2018). From the disclosure, non-renewable energy (NRE) promoted CO2 emissions; however, renewable energy (RE) improved environmental quality. This disclosure is in line with Chen and Lei (2018) and Souza et al. (2018), but contradicted that of Farhani et al. (2014). These conflicting findings imply that the debate on the connection between EC and CO2 emissions is far from over and demanded an exploration like ours. In Turkey, Karasoy and Akcay (2019) conducted a study and confirmed renewable energy consumption (REC) as a negative determinant of CO2 emissions. This exploration is very material; however, its discovery cannot be generalized for all nations in the globe because it was solely conducted on Turkey.
If other nations had been included in the analysis, the finding could have been different. Our research is therefore relevant, since it was conducted on more than one country. Rahman et al. (2019) undertook an investigative study on NAFTA and BRIC countries, and confirmed coal and oil as key promoters of CO2 emissions. Though this finding is very essential, the study used only coal and oil as the proxies of energy. If other surrogates of energy were incorporated into the analysis, the finding could have been different. This therefore implies that the interpretation of the study's results demands some caution.
On G7 countries, Bildirici and Gökmenoğlu (2017) undertook a study and confirmed energy from hydro as a negative predictor of CO2 emissions. This finding is very insightful; however, the fact that the study was limited to only hydro power implies that care should be taken when interpreting the results. Also, the fact that the exploration was skewed towards only G7 countries implies that the generalization of its outcome for all nations is inappropriate. Udemba and Agha (2020) studied the nexus between EC and CO2 emissions and validated EC as a major contributor to CO2 emissions. This disclosure corroborates those of Saud et al. (2019) and Ali et al. (2018), but conflicts with that of Pata (2018). These contradictory revelations imply that the EC-CO2 emission argument is not over yet and demanded more investigations like ours. Chen et al. (2019) conducted a study on China and found that RE abated CO2 emissions in the country. Though this study is vital, its discovery cannot be generalized for all countries in the world, because the study was confined to only China. If other nations were included in the study, the outcome might be diverse. Our study, which considered more than one country, is therefore significant because it adds to the unceasing debate on the connection between EC and CO2 emissions. On 74 countries, Sharif et al. (2019) studied the connection between EC and CO2 emissions and affirmed non-renewable energy consumption (NREC) as a key contributor to CO2 emissions; however, REC abated CO2 emissions in the nations. The conflicting results between the two proxies of EC imply that much research is needed on the connexion between EC and CO2 emissions. Zoundi (2017) researched some selected African countries and confirmed that REC enhanced the countries' environmental quality, supporting Liu et al. (2017) and Yazdi and Beygi (2017). This research is very relevant; however, one should be cautious when interpreting its outcome because it was limited to REC. The discovery could have been different if energy from non-renewable sources were included in the analysis.
FDI-CO2 emission nexus
A myriad of studies have been conducted to explore the affiliation between FDI and CO2 emissions. The findings have, however, been contrasting. For instance, Huang et al. (2019) investigated the connection between FDI and CO2 emissions in Chinese provinces. Discoveries of the study affirmed FDI as a negative determinant of CO2 emissions. This revelation is very relevant; however, the fact that the investigation was limited to only Chinese provinces implies that the generalization of its findings for all countries is inappropriate, because if other provinces, regions and counties in other jurisdictions were included in the study, the outcome might show a different picture. Minh (2020) undertook a study on Vietnam, and established FDI as a key contributor to CO2 emissions. Though this exploration is insightful, the interpretation of its finding warrants some caution, because it adopted the time series approach. If the study had adopted the panel data approach by including more countries in its sample, the outcome might differ. Our research, which was panel data in nature, is therefore vital, since it offers outcomes that add to the FDI-CO2 emissions debate.
On the OECD region, Ahmad et al. (2020) carried out an investigative study and confirmed FDI as a vital promoter of CO2 emissions. This finding supports those of Li et al. (2020a) and Minh (2020) for Vietnam, but contradicts that of Huang et al. (2019) for Chinese provinces. These conflicting revelations suggest that the debate on the connection between FDI and CO2 emissions is far from over and demanded further explorations like ours. Sarkodie and Strezov (2019) researched five developing economies. From the disclosures, FDI raised the rate of CO2 emissions in the countries. This study is very essential; however, its revelation cannot be generalized for all nations, because it was confined to only developing countries. If the exploration were conducted on developed nations, the outcome might differ. Dhrifi et al. (2020) also conducted a study on developing economies and confirmed FDI as a mitigator of CO2 emissions. This discovery supports that of Zhang and Zhou (2016), but conflicts with that of Dou and Han (2019). These conflicting findings imply that more investigations on the FDI-CO2 emissions nexus are needed; therefore, undertaking a study like ours was of much relevance. Guzel and Okumus (2020) researched ASEAN-5 countries and found FDI as a key promoter of CO2 emissions. This disclosure is very insightful; however, the study was limited to ASEAN-5 countries only. Interpretation of the results therefore warrants some caution, because if other nations were incorporated into the analysis, the finding could differ.
Model specification
In line with Abbasi et al. (2020), Ahmad et al. (2019) and Nathaniel (2019), the Stochastic Impacts by Regression on Population, Affluence, and Technology (STIRPAT) model proposed by Dietz and Rosa (1997) was adopted for this study. The STIRPAT model is based on the environmental impacts, population, affluence and technology (IPAT) framework discovered by Ehrlich and Holdren (1971). The IPAT model is stated as:

I = P × A × T (1)

Even though the IPAT model is very useful, it has its inherent limitations. According to Dietz and Rosa (1994), the model assumes strict proportionality among variables, and is also a basic mathematical equation that is not ideal for testing hypotheses. To help remedy the above limitations, Dietz and Rosa (1997) put forward a stochastic version of the IPAT model, stated as:

I_it = a P_it^b1 A_it^b2 T_it^b3 u_it (2)

where I is the environmental impact proxied by CO2 emissions; P represents population; A denotes affluence, surrogated by GDP; T represents technology; a is a constant term; u is the error term; i denotes the studied countries; t represents the study period; and b1, b2 and b3 are the slope coefficients of P, A and T respectively. CO2 emissions were employed as a proxy for environmental impacts because they efficaciously assess the performance of the environment (Rahman et al., 2019). Some FDI inflows are linked to high-polluting items that surge CO2 emissions in host nations, which supports the pollution haven hypothesis (Minh, 2020). CO2 emissions are also being abated by some FDI technologies, concurring with the pollution halo hypothesis (Sarkodie and Strezov, 2019). Based on these assertions, FDI was introduced into the model as a proxy for technology. The new STIRPAT model after the inclusion of FDI became:

I_it = a P_it^b1 A_it^b2 FDI_it^b3 u_it (3)

where FDI denotes foreign direct investments. Also, without energy, economic activities like business expansion and industrialization cannot be accomplished. The energy used to undertake the above activities is, however, not environmentally friendly, leading to more CO2 emissions (Martinez-Zarzoso et al., 2007). Therefore, in line with Behera and Dash (2017) and Abdallh and Abugamos (2017), energy consumption (EC) was considered as a predictor of CO2 emissions. Finally, many people move to urban areas to search for better jobs and good living standards. This scenario increases the demand for EC, leading to high CO2 emissions (Cole and Neumayer, 2004). Urbanization (URB) was therefore also considered as a predictor of CO2 emissions, supporting the investigations of Nathaniel (2019) and Ahmad et al. (2019). The final extended STIRPAT model therefore became:

I_it = a P_it^b1 A_it^b2 FDI_it^b3 EC_it^b4 URB_it^b5 u_it (4)

where EC denotes energy consumption and URB represents urbanization. To help remedy data fluctuation and heteroscedasticity issues, all the variables were transformed into natural logarithms. The resulting log-linear model therefore became:

lnCO2_it = a + b1 lnPOP_it + b2 lnGDP_it + b3 lnFDI_it + b4 lnEC_it + b5 lnURB_it + u_it (5)

where lnCO2, lnPOP, lnGDP, lnFDI, lnEC and lnURB are the log transformations of CO2, POP, GDP, FDI, EC and URB respectively; b1, b2, b3, b4 and b5 are the elasticities to be estimated; and i, t, a and u are as defined in equation (2). We expected positive signs for b1, b2, b4 and b5, whilst the sign for b3 was projected to be either negative or positive. The dynamic common correlated effects (DCCE) estimator was adopted to estimate the established model. This estimator was employed because it is robust to endogeneity, slope heterogeneity and correlations in residual terms (Chudik and Pesaran, 2015; Ditzen, 2016).
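As a point of reference for equation (5), a naive pooled OLS of the log-linear STIRPAT model can be sketched as below. The file name and column layout are hypothetical, and pooled OLS ignores the slope heterogeneity and cross-sectional dependence that the paper's DCCE estimators are designed to handle:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical panel: columns country, year, CO2, POP, GDP, FDI, EC, URB
df = pd.read_csv("e7_panel.csv")

for v in ["CO2", "POP", "GDP", "FDI", "EC", "URB"]:
    df[f"ln{v}"] = np.log(df[v])

X = sm.add_constant(df[["lnPOP", "lnGDP", "lnFDI", "lnEC", "lnURB"]])
fit = sm.OLS(df["lnCO2"], X).fit(cov_type="cluster",
                                 cov_kwds={"groups": df["country"]})
print(fit.params)   # rough pooled counterparts of b1..b5 in equation (5)
```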
In line with Chudik and Pesaran (2015), the study's dynamic panel regression model that controls for heterogeneity was stated as:

lnCO2_it = a_i + k_i lnCO2_i,t-1 + b_1i lnPOP_it + b_2i lnGDP_it + b_3i lnFDI_it + b_4i lnEC_it + b_5i lnURB_it + v_it (6)

where v_it = c_i f_t + e_it; f_t and c_i are unobserved common factors and heterogeneous factor loadings correspondingly; a_i represents unobserved country-specific effects; e_it symbolizes residual terms that are not correlated with the regressors; and k_i denotes the convergence of CO2 emissions across countries. According to Chudik and Pesaran (2015), equation (6) is inconsistent unless sufficient lags of the cross-sectional averages are added to the model. After incorporating sufficient lags of the cross-sectional averages, the dynamic heterogeneous model from equation (6) with respect to the DCCE framework therefore became:

lnCO2_it = a_i + k_i lnCO2_i,t-1 + b_1i lnPOP_it + ... + b_5i lnURB_it + Σ_{r=0}^{K} (a_1ir lnCO2bar_t-r-1 + a_2ir lnPOPbar_t-r + a_3ir lnGDPbar_t-r + a_4ir lnFDIbar_t-r + a_5ir lnECbar_t-r + a_6ir lnURBbar_t-r) + e_it (7)

where lnCO2bar_t-1, lnPOPbar_t, lnGDPbar_t, lnFDIbar_t, lnECbar_t and lnURBbar_t denote the cross-sectional means of the lagged response variable and the explanatory variables; a_1ir, a_2ir, a_3ir, a_4ir, a_5ir and a_6ir are the cross-sectional mean effects of the lagged response variable and the explanatory variables on CO2 emissions correspondingly; and K represents the average number of lags of the various cross-sections, assumed to be equal.
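For intuition, the core mechanics of the dynamic CCE mean-group idea in equation (7) can be sketched as follows: each country's regression is augmented with (lags of) the cross-sectional averages of all variables, unit-by-unit OLS is run, and the slopes are averaged. This is a bare-bones illustration, not the full estimator (no small-sample bias correction, no standard errors), and with only 24 years per country it is close to saturation; column names are hypothetical:

```python
import numpy as np
import pandas as pd

def dcce_mg(df, y="lnCO2", xs=("lnPOP", "lnGDP", "lnFDI", "lnEC", "lnURB"), p=1):
    """Mean-group DCCE sketch: per-country OLS augmented with cross-sectional
    averages (and p lags of them), then averaging the slope coefficients."""
    d = df.sort_values(["country", "year"]).copy()
    d["Ly"] = d.groupby("country")[y].shift(1)          # lagged response
    for v in [y, "Ly", *xs]:
        d[f"cs_{v}"] = d.groupby("year")[v].transform("mean")
        for l in range(1, p + 1):
            d[f"cs_{v}_l{l}"] = d.groupby("country")[f"cs_{v}"].shift(l)
    d = d.dropna()
    cs_cols = [c for c in d.columns if c.startswith("cs_")]
    betas = []
    for _, g in d.groupby("country"):
        X = np.column_stack([np.ones(len(g)), g["Ly"].to_numpy(),
                             g[list(xs)].to_numpy(), g[cs_cols].to_numpy()])
        b = np.linalg.lstsq(X, g[y].to_numpy(), rcond=None)[0]
        betas.append(b[1:2 + len(xs)])   # keep lag-y and slope coefficients
    return np.mean(betas, axis=0)        # mean-group estimates
```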
Data source and descriptive statistics
A panel dataset covering the period 1991 to 2014 was employed for the analysis. The main reason for choosing this period is that data were not fully available for some of the countries at certain periods. For instance, data on GDP for most periods below 1989 were missing for Russia, whilst the country's data on EC for most periods below 1990 and above 2014 could not be found. Additionally, data on FDI for most periods below 1992 were also missing for the same country. Further, data on EC for Brazil, China, India and Indonesia for periods above 2014 were missing, whilst data on the same variable for periods above 2015 were missing for Mexico and Turkey. Finally, all the countries had missing data on CO2 emissions from 2017 to 2019. In order to work with a fully balanced dataset and to have all the countries on board, the researchers viewed the period 1991 to 2014 as the most appropriate. Thus, the period was chosen based on the availability of data for the studied series. Further details on the series are shown in Table 1. From the descriptive statistics, POP, GDP, FDI and URB had kurtosis values higher than the standard 3, meaning their distributions were leptokurtic in shape, whilst CO2 and EC had kurtosis values lower than the standard 3, symbolizing that the shapes of their distributions were platykurtic. The kurtosis statistics vindicate the Jarque-Bera test outcomes that also confirmed the non-normality of the variables' distributions. Additionally, there was no multi-collinearity amid the covariates as per the tolerance and VIF tests. The correlations between the variables were also investigated. From the results depicted in Table 2, a surge in POP, GDP, EC and URB resulted in a surge in CO2 emissions and vice versa. However, an upsurge in FDI mitigated the countries' CO2 emissions and vice versa. Finally, on the Principal Components Analysis (PCA) shown in Table 3, POP, EC and
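The kurtosis, Jarque-Bera and VIF diagnostics described above can be reproduced along the following lines (the file name and column layout are hypothetical; VIFs are computed on the covariates only):

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("e7_panel.csv")                  # hypothetical panel file
logs = np.log(df[["CO2", "POP", "GDP", "FDI", "EC", "URB"]]).add_prefix("ln")

for c in logs:
    k = stats.kurtosis(logs[c], fisher=False)     # benchmark value is 3
    jb, p = stats.jarque_bera(logs[c])            # H0: normal distribution
    print(f"{c}: kurtosis={k:.2f}, JB={jb:.2f}, p={p:.3f}")

X = logs.drop(columns="lnCO2").to_numpy()
for i, c in enumerate(logs.columns.drop("lnCO2")):
    print(c, "VIF =", round(variance_inflation_factor(X, i), 2))
```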
Econometric techniques
Heterogeneity and cross-sectional correlations are very vital for the choice of econometric methods to be used for further analysis. Therefore, the study first tested for correlations in the residual terms through the CD test of Pesaran (2015). Secondly, the heterogeneity assumption was tested through the Pesaran and Yamagata (2008) test. This stage was followed by the analysis of the variables' integration attributes through the CADF and CIPS stationarity tests, which are resilient to cross-sectionally correlated residuals. The Westerlund and Edgerton (2007) and Durbin-Hausman tests were then performed to affirm the cointegration characteristics of the variables. At the fifth step, the DCCEMG estimator, with the support of the DCCEPMG estimator, was adopted to explore the long-run elasticities of the covariates. Finally, the Dumitrescu and Hurlin (2012) test, which is robust to heterogeneous slopes and cross-sectionally correlated residuals, was engaged to study the causal liaisons between the variables. If X and Y are the input and criterion variables correspondingly, then the D-H causality test can be expressed formally as:

Y_it = c_i + Σ_{m=1}^{M} a_i(m) Y_i,t-m + Σ_{m=1}^{M} d_i(m) X_i,t-m + ε_it (8)

where M signifies the lag order, c_i implies distinct fixed effects, and a_i(m) and d_i(m) indicate lag and slope parameters that differ across groups. Based on equation (8), analogous models were specified for each variable pairing (with lnCO2, lnPOP, lnGDP, lnFDI, lnEC and lnURB each serving in turn as the dependent variable), where c_1, ..., c_6 are constant coefficients to be explored, a_1, ..., a_6 symbolize autoregressive coefficients, and d_1, ..., d_30 connote coefficients of the covariates. The D-H causality test is made up of the W-statistic and the Z-statistic, expressed as:

Wbar = (1/N) Σ_{i=1}^{N} W_i,t and Z = sqrt(N) × (Wbar − E(W_i,t)) / sqrt(Var(W_i,t))

where W_i,t signifies the cross-sectional Wald statistic, and E(W_i,t) and Var(W_i,t) are the expectation and variance of the Wald test statistic correspondingly (asymptotically, E(W_i,t) = M and Var(W_i,t) = 2M).
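A minimal sketch of the two statistics, assuming the per-country Wald statistics W_i have already been computed from equation (8) (the example values are made up):

```python
import numpy as np

def dh_statistics(wald_stats, M):
    """Dumitrescu-Hurlin panel statistics from individual Wald statistics W_i.
    Under H0 (no causality) and large T, E[W] = M and Var[W] = 2M, so the
    standardized Z-bar statistic is asymptotically N(0, 1)."""
    W = np.asarray(wald_stats, dtype=float)
    N = W.size
    W_bar = W.mean()
    Z_bar = np.sqrt(N) * (W_bar - M) / np.sqrt(2.0 * M)
    return W_bar, Z_bar

# hypothetical Wald statistics for the seven E7 countries, lag order M = 1
print(dh_statistics([2.1, 3.4, 1.2, 4.0, 2.8, 0.9, 3.1], M=1))
```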
Heterogeneity and cross-sectional dependence tests results
As indicated by Li et al. (2020b) and Mensah et al. (2020), dealing with models with cross-sectionally correlated regressors could yield biased and inaccurate results. Therefore, as a first step, the Pesaran (2015) cross-sectional dependence test was undertaken to examine correlations in the residual terms. With reference to the test's revelations displayed in Table 4, the residuals of the model were embedded with dependencies. This indicates that there were strong economic bonds among the countries of concern. The outcome also signposts that any shock to a particular variable (say urbanization) in one country was likely to affect the other countries due to their economic ties. Empirical investigations by Musah et al. (2020b) and Mensah et al. (2019) support the above finding. Since the negligence of heterogeneity could lead to erroneous outcomes and inferences, the study tested for this assumption by employing the test displayed in Table 5. Disclosures from the test confirmed heterogeneity in the parameters. This implies that the acceptance of homogeneity could lead to the application of wrong econometric techniques. Studies by Erdogan et al. (2020) and Dogan and Aslan (2017) are in agreement with this discovery.
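The CD statistic itself is simple to compute from the residual matrix; a minimal sketch (the residuals below are randomly generated placeholders):

```python
import numpy as np
import pandas as pd

def pesaran_cd(resid):
    """Pesaran CD statistic from a (T x N) residual DataFrame:
    CD = sqrt(2T / (N(N-1))) * sum over i<j of pairwise correlations;
    asymptotically N(0, 1) under the null of cross-sectional independence."""
    T, N = resid.shape
    rho = resid.corr().to_numpy()
    s = rho[np.triu_indices(N, k=1)].sum()
    return np.sqrt(2.0 * T / (N * (N - 1))) * s

# usage with a placeholder residual matrix (24 years x 7 countries)
rng = np.random.default_rng(1)
print(pesaran_cd(pd.DataFrame(rng.normal(size=(24, 7)))))
```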
Unit root and cointegration tests results
Econometrically, establishing the order of integration of the series leads to accurate and reliable outcomes. Therefore, as a third step, the tests exhibited in Table 6 were performed to affirm the variables' order of integration. From the discoveries, all the series became stationary after first differencing, corroborating Musah et al. (2020c). In the context of estimating the elasticities of the covariates, it was imperative to confirm the cointegration features of the variables. Therefore, as a fourth step, the tests exhibited in Tables 7 and 8 were conducted. From the results, the series were noticeably cointegrated in the long term. This annuls the possibilities of biased and inaccurate estimates, thereby yielding correct conclusions. A study by Faisal et al. (2017) is in support of this discovery, but that of Ozturk and Acaravci (2010) contradicts the findings of the study.
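For intuition, the CIPS statistic used in Table 6 averages country-level CADF t-statistics, where each CADF regression is an ADF regression augmented with the lagged cross-sectional average and its first difference. A rough sketch without lag augmentation (critical values must come from Pesaran's simulated tables, not the standard normal):

```python
import numpy as np
import pandas as pd

def cips(df, var="lnCO2"):
    """Average of country-level CADF t-statistics.
    Each CADF regression: dy_t on [1, y_{t-1}, ybar_{t-1}, dybar_t],
    where ybar is the cross-sectional average of the series."""
    wide = df.pivot(index="year", columns="country", values=var).sort_index()
    ybar = wide.mean(axis=1)
    tstats = []
    for c in wide.columns:
        y = wide[c]
        Y = y.diff().iloc[1:].to_numpy()
        X = np.column_stack([np.ones(Y.size),
                             y.shift(1).iloc[1:],
                             ybar.shift(1).iloc[1:],
                             ybar.diff().iloc[1:]])
        b = np.linalg.lstsq(X, Y, rcond=None)[0]
        e = Y - X @ b
        s2 = e @ e / (Y.size - X.shape[1])
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        tstats.append(b[1] / se)       # t-stat on the lagged level
    return float(np.mean(tstats))
```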
Panel model estimation results
At the fifth phase, the elasticities of the covariates were explored, first through the DCCEMG estimator. From the estimates depicted in Table 9 (note: lnCO2 emissions is the response variable; a, b and c imply significance at the 1%, 5% and 10% levels respectively), POP raised carbon emissions by 1.475% at the 5% significance level. Also, 3.096% of CO2 emissions was linked to the countries' GDP. Additionally, EC surged CO2 emissions by 2.538% at the 5% significance level. Similarly, URB escalated the rate of carbon emissions by 5.395% at the 10% significance level. Further, FDI abated the rate of CO2 emissions by 1.2% at the 5% significance level. Also, at the 5% level, the lagged response variable (CO2t-1) was negatively significant. This means CO2 emissions in the countries were corrected annually by 78.891% in absolute terms. Additionally, the R-squared value of 0.95 signifies that the explanatory variables accounted for 95% of the variations in CO2 emissions. The F-value was also material at the 5% level. This means the distribution of the variables fitted the model very well. Finally, the RMSE value of the estimated model was less than 0.08 (Hair et al., 2017). This indicates that the fitted model had a very high explanatory power. For the purpose of robustness, estimates of the DCCEPMG estimator were also computed, and the results were similar to those of the DCCEMG in terms of sign. Specifically, a percentage surge in POP, GDP, EC and URB promoted CO2 emissions by 4.335%, 1.571%, 1.078% and 2.74% correspondingly. However, FDI improved the environment by 3.119% at the 5% significance level. The lagged dependent variable was also adverse and statistically material at the 5% level, suggesting that annual CO2 emissions were corrected by 24.027% in absolute terms. The similarity of the DCCEMG and DCCEPMG estimates underscores the robustness of the results. The elastic effects of POP, GDP, FDI, EC and URB on CO2 emissions are displayed in Figure 1.
Causality tests results
Finally, the directions of causality between the variables were examined through the D-H panel causality test. From the discoveries portrayed in Table 10 and Figure 2, a double-headed causal connection was found between POP and CO2 emissions. Also, a feedback affiliation between GDP and CO2 emissions was unfolded. Similarly, a mutual liaison was established between FDI and CO2 emissions. Additionally, a two-sided causal connexion between URB and CO2 emissions was uncovered. Finally, the analysis established a unilateral causal relation moving from EC to CO2 emissions.
Figure 1. The elasticities of CO2 with respect to POP, GDP, FDI, EC and URB. Note: CO2 is the dependent variable; (+) denotes a positive influence on CO2, whilst (−) signifies a negative influence on CO2.
Discussion of the results
After establishing that the variables were materially related in the long run, the elasticities of the covariates were computed through the DCCEMG estimator with the support of the DCCEPMG estimator. From the discoveries, POP raised CO2 emissions in E7 countries. This finding indicates that POP growth did not generate energy efficiency incentives that could mitigate the nations' CO2 emissions. Another plausible explanation for this finding is that individuals did not take issues of environmental sustainability seriously, as they spent more on environmentally unfriendly products and services. Additionally, the countries' increase in production due to a surge in population also led to a momentous rise in the usage of energy, and subsequently high CO2 emissions. This outcome is in line with Mahmood and Chaudhary (2012), Acharyya (2009) and Wang et al. (2013), but conflicts with that of Talukdar and Meisner (2001). Also, GDP promoted CO2 emissions in the nations. This revelation signposts that the countries' economic activities are connected to the use of dirty energies that contribute to a rise in CO2 emissions in the countries. Another reason for this discovery is that an upsurge in economic development might have influenced people to use appliances and automobiles that promote CO2 emissions in the nations. This outcome agrees with Ito (2017), Mahmood et al. (2020) and Antonakakis et al. (2017), but conflicts with that of Bekhet et al. (2017). Additionally, FDI mitigated CO2 emissions in the E7. This discovery indicates that countries in the E7 committed adequate resources to protect their environment, and also supported organisations in the field of green technology. FDI further helped to raise awareness of the environment, thereby minimizing corporates' and individuals' engagement in high-polluting activities. The finding further suggests that there were strict environmental controls that made it difficult for high-polluting entities to operate in the E7 countries. A study by Rafindadi et al. (2018) supports this revelation, but that of Seker et al. (2015) contrasts with the finding of this investigation. Further, EC escalated CO2 emissions in the E7. This is not surprising, because most emerging economies have a lot of industries that are heavily reliant on high-polluting energy sources to drive their operations. Another potential reason for this finding is that emerging economies all over the world undertake massive infrastructural activities for their citizenry. The execution of these activities is, however, dependent on the use of dirty energy that deteriorates the environment. The outcome agrees with Alemzero et al. (2020) and Udemba and Agha (2020), but contrasts with that of Zafar et al. (2019). Lastly, URB promoted CO2 emissions in the E7. This discovery is not shocking, because a surge in URB demands the usage of high amounts of energy to make major improvements in public infrastructural networks, leading to more CO2 emissions. An empirical investigation by Franco et al. (2017) is in support of this finding; however, that of Sadorsky (2014) contradicts the above discovery.
Figure 2. Direction of causalities between the explained and the explanatory variables. Note: CO2 is the dependent variable; ↔ signifies a two-way causality between variables and → denotes a one-way causality from one variable to the other.
The D-H causality test was employed to examine the causations between the variables. From the discoveries regarding the explained and the explanatory series, there was a two-sided cause-and-effect affiliation between POP and CO2 emissions. This means that the variables were mutually reinforcing: POP was reliant on CO2 emissions, and CO2 emissions were also reliant on POP. Any attempt to reduce the level of POP will also lead to a reduction in the rate of CO2 emissions. This finding agrees with Chung-Sheng et al. (2012), but conflicts with those of Sulaiman and Abdul-Rahim (2018) and Begum et al. (2015). Also, GDP and CO2 emissions were bilaterally related in the countries. This discovery indicates that expanding economic activities will raise the countries' rate of CO2 emissions; likewise, any effort to create a low-carbon economy will mitigate development activities in the countries. This finding agrees with Saud et al. (2019), but differs from that of Ssali et al. (2019). Additionally, a feedback liaison between FDI and CO2 emissions was affirmed. This indicates that the two series were predictive powers of each other: a surge in FDI influxes raised the rate of CO2 emissions in the countries, and any effort to abate the level of FDI influxes will diminish the countries' level of CO2 emissions. An investigation by Omri et al. (2014) confirms this outcome, but those of Lee (2013) and Zhang (2011) differ from the study's finding. Further, a single-headed causal movement from EC to CO2 emissions was demonstrated. This means that EC unilaterally reinforced CO2 emissions in the countries. Explorations by Cetin et al. (2018) and Shahzad et al. (2017) affirm this revelation, but those of Sun et al. (2018) and Afridi et al. (2019) contradict this discovery. Finally, a feedback connexion between URB and CO2 emissions was uncovered. This means that the series were interdependent. The discovery also insinuates that permanent or temporary reverberations from urbanization trigger CO2 emissions in the countries; likewise, lessening the rate of urbanization will also mitigate CO2 emissions in the countries. The above revelation backs the verdicts of Khoshnevis and Dariani (2019) and Afridi et al. (2019), but contrasts with those of Liu and Bae (2018) and Sehrawat et al. (2015).
Conclusions and policy recommendations
This study investigated the link between energy consumption (EC), foreign direct investments (FDI), urbanization (URB) and CO2 emissions in the E7 from 1991 to 2014. Taking into account the consequences of heterogeneity and dependencies in cross-sections, the study, with the ambition of yielding valid outcomes, employed modern econometric methods that are resilient to the above issues. From the discoveries, the panel under consideration was heterogeneous and cross-sectionally correlated. Also, a cointegration association existed among the series after they had been affirmed as first-difference stationary. Further, the DCCEMG and DCCEPMG long-run estimates affirmed EC and URB as key promoters of CO2 emissions in the countries, whilst FDI mitigated the countries' level of emissions. In addition, economic growth (GDP) and population growth (POP) also escalated CO2 emissions in the countries. On the D-H causality test outcomes, feedback causalities between POP and CO2 emissions, GDP and CO2 emissions, FDI and CO2 emissions, and URB and CO2 emissions were discovered. Finally, a one-way causality from EC to CO2 emissions was discovered. Based on the revelations, the following implications and policy suggestions are proposed. Firstly, the analysis affirmed EC as a major driver of CO2 emissions. This implies energy usage harmed the countries' environmental quality. Therefore, energy usage strategies that do not aggravate CO2 emissions should be adopted by the countries. Also, governments in the various countries should set up projects that will provide sufficient energy supplies by constantly raising the proportion of renewable energy resources across the entire energy supply. This is because, to abate CO2 emissions, an upsurge in energy generation from renewable sources is required. Secondly, URB raised the level of CO2 emissions in the countries. This means that urbanization is detrimental to environmental quality, as urban populations carry with them increased domestic energy demand for goods and services, thereby escalating the rate of CO2 emissions. As a recommendation, measures to help reduce the pace of urbanization in the countries should be instituted. This could be attained if authorities concentrate on enhancing rural-income policies. Also, given that urban stretch within E7 countries is often linked to higher demand for energy and greater environmental degradation, the strategic planning involved in design, advancement and management is of prime importance in combating urban expansion while increasing urban density. The advantage of urban density is lower environmental damage, supported by an effective transport network and infrastructure, particularly public transport, that facilitates greater accessibility as well as energy supply and water management systems. Thirdly, FDI abated CO2 emissions in the countries. This signifies that FDI is part of the solution to the rising environmental problems associated with CO2 emissions in the E7. The result also suggests that FDI has helped to raise awareness of the countries' environment. As a result, individuals and businesses have decreased their engagement in activities that are detrimental to the environment. As a recommendation, authorities should tighten the countries' FDI inflow regulations. This would help to reduce emission-related goods and services that could be moved into the nations.
Also, companies in the countries should embrace new technologies in their undertakings, rather than archaic technologies that could worsen the quality of the environment.
Fourthly, GDP was a substantially positive determinant of CO2 emissions in the countries. This indicates that the countries should strike a strong balance between economic growth and the quality of their environment, because they are on the road to improving their economies. It would therefore be counterproductive for them to compromise economic development for the quality of their environment. This means that, unless clean energy technologies are adopted, the drive to improve the countries' economic growth will not move at a faster pace. Also, manufacturing and other entities that are characterized by high EC and CO2 emissions should be well monitored to help improve the environment. Lastly, POP was a major promoter of CO2 emissions in the countries. This suggests that a surge in population growth inhibits the countries' environmental quality. As POP rates increase, households and companies consume more energy, leading to more CO2 emissions. Authorities should therefore monitor the rate of POP in the countries. The increasing rate of POP also warrants improvements in the countries' research and development (R&D) on low-carbon technologies.
"Economics"
] |
Estimation of the lifespan distribution of gold nanoparticles stabilized with lipoic acid by accelerated degradation tests and the Wiener process
Accelerated degradation tests (ADT) are widely used in the manufacturing industry to obtain information on the reliability of components and materials, by degrading the lifespan of the product through an acceleration factor that damages the material. The main objective is to obtain information quickly, which is then modeled to estimate the characteristics of the material's life under normal conditions of use, saving time and expense. The purpose of this work is to estimate the lifespan distribution of gold nanoparticles stabilized with lipoic acid (GNPs@LA) through accelerated degradation tests applying sodium chloride (NaCl) as an acceleration factor. For this, the synthesis of GNPs@LA was carried out, a constant-stress ADT (CSADT) was applied, and a non-linear Wiener process with random effects, measurement errors, and different covariance structures was proposed to fit the degradation signals. The information obtained from the test and analysis allows us to obtain the life distribution of GNPs@LA; the results make it possible to determine the guaranteed time for possible commercialization and successful application based on the stability of the material. In addition, the Akaike and bootstrapping criteria were used for the evaluation and selection of the model.
Introduction
Accelerated degradation tests (ADT) are an effective tool for evaluating the reliability of materials through the analysis of degradation data. These tests degrade the life of the product by applying a factor that accelerates degradation; the resulting degradation data are used to estimate the life distribution of the material under normal conditions of use while minimizing the costs and time involved in testing, yielding good material life data.
For recently created materials, current studies have adopted an ADT in the evaluation of reliability based on the Wiener process. The Wiener process is frequently found in practice as it provides a satisfactory and flexible description of degradation data obtained after having performed an ADT [1,2].
Nowadays, nanoscience and nanotechnology develop highly innovative materials and products with the ability to revolutionize life as we know it. These nanomaterials, like any other material, show deterioration that involves a very complex interaction between stress, time, and the environment, eventually causing the failure of the product [3]. In this way, in any technological field, knowing the useful life of a product is required for its successful application.
For nanostructured materials, to our knowledge, the lifespan has not been determined within an appropriate test time, and the studies found in the literature do not use test methods and degradation analysis to obtain reliable information on nanomaterial lifetimes over time; moreover, some studies report the need for regulatory reforms to improve the supervision of nanomaterials throughout their life cycle [4,5]. Given this situation, there is a broad opportunity to use ADTs for nanostructured materials to estimate useful life, as well as to contribute to the regulation of these materials.
A material of great interest at the nanoscale is gold, probably one of the most fascinating materials due to its physical and chemical properties at the nanoscale [6,7]; gold nanostructures (GNPs) have shown potential applications in many research areas. In medicine, ultra-small nanoparticles below 5 nm have unique advantages in the human body due to their relatively rapid clearance, good absorption, and favorable interaction with radiation [6,8]. For example, gold nanostructures have been tested as sensors [9] capable of detecting certain diseases such as cancer [10], SARS-CoV-2 [11], Alzheimer's, and Salmonella [9,10]; they have also been utilized as chemical carriers [12,13] and as theragnostic agents [10,14].
For successful application, there is evidence that the stability of gold nanoparticles must be well understood for them to reach the desired tissues or cells [15], overcoming the limitations of biological barriers to diagnose and treat deep targets [8,16]. Some variables that influence stability are the pH of the medium [17] as well as the presence of NaCl [18].
The GNPs@LA analyzed in this study are spherical, 2.5 nm in diameter, and highly stable both in colloid and as a powder; the lipoic acid stabilizer prevents agglomeration, creates functional groups for bio-conjugation, and shows no toxic effects at the cellular level according to ISO 10993-5 [19]. However, no studies exist that determine the lifetime of this nanomaterial. The purpose of this study is to estimate the failure rate and useful life of GNPs@LA through an ADT based on the Wiener process, applying NaCl as the acceleration factor. The proposed methodology is an important contribution to providing guarantees for nanomaterials and opens the door to further research.
In summary, the main contributions are:
• A methodology based on accelerated degradation testing and the Wiener process to estimate the useful life of GNPs@LA.
• Estimation of the failure rate of GNPs@LA using NaCl as a degradation factor.
• A methodology that relies on a non-linear Wiener process with random effects, measurement errors, and different covariate structures to determine the useful life of the GNPs@LA.
• A rigorous statistical analysis to determine the most appropriate Wiener degradation model.
The rest of this paper is structured as follows: section 2 provides a general description of the synthesis and stability of GNPs@LA and an explanation of the accelerated degradation model with the Wiener process. A statistical inference framework based on the maximum likelihood estimation (MLE) method is also presented to estimate the parameters of the life distribution, and at the end of the section this framework is applied to our specific degradation problem. Section 3 fits and compares the specified degradation models to our degradation data, showing which model is the most appropriate for estimating the life distribution. Finally, section 4 gives the conclusions of our work.
Synthesis of GNPs@LA
Based on the bottom-up approach, particularly the colloidal method, gold nanoparticles with an average size of 2.5 nm were synthesized and stabilized using gold chloride as a precursor, sodium borohydride as a reducer, and lipoic acid as a stabilizer, following the methodology reported by Cornejo-Monroy et al [19].
Stability of GNPs@LA
Gold colloid stability implies that the solid nanoparticles do not settle or aggregate at a significant speed [20]. When nanoparticles lose their stability by aggregation, the particle size increases and agglomerates form, and the particles lose their interesting properties.
One way to measure how these properties are affected by size is through characterization by UV-vis spectroscopy [21], which is a simple and reliable method to monitor the stability of gold colloids. As the nanoparticles become destabilized, the original characteristic peak decreases in intensity due to the depletion of stable nanoparticles, and often the peak broadens toward longer wavelengths due to the formation of aggregates or agglomerates [22]. The shape and peak position of the UV-vis spectra are related to the morphology and size of the nanoparticles, as well as the dispersion/aggregation of the gold colloids [23]. Gold colloids present electronic band transitions in the visible range between 450 nm and 550 nm.
Therefore, some visible wavelengths are absorbed, emitting a characteristic color that can be characterized and related to morphological changes in the nanomaterial [23]. In figure 1, J Martínez et al [24] show different sizes of gold nanoparticles as a function of the characteristic peak of the plasmon band; it can be observed that the characteristic peak is red-shifted as the GNP diameter increases. This is because the optical properties of gold nanoparticles change when the particles aggregate: the conduction electrons near the surface of each particle are delocalized and shared among neighboring nanoparticles [25]. When this occurs, the surface plasmon resonance shifts to lower energies, causing the characteristic absorption and scattering peak to shift to longer wavelengths [24].
In addition, it is known that charge repulsion effects between particles can be affected by the NaCl concentration of the solution [18]. This occurs because charges can be removed or neutralized by protonated or unprotonated ionizable groups or by the concentration of ions in solution. There is evidence that a high NaCl concentration can effectively mask the charge character of a carboxylate particle by associating too many positively charged ions with the surface charges, thus causing aggregation of the material [25].
For this study, and as an example of applying an ADT analysis to determine the failure rate and useful life of GNPs@LA, gold nanoparticles with a UV-vis spectral peak greater than 525 nm were considered to have failed, and NaCl was used as the factor accelerating degradation.
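To make this failure criterion concrete, the following R sketch flags a spectrum as failed when its absorbance peak lies above 525 nm. The data layout (paired wavelength/absorbance vectors) and the synthetic spectrum are assumptions for illustration; this is not the authors' analysis code.

```r
# Flag colloid failure from a UV-vis spectrum (hypothetical data layout).
peak_wavelength <- function(wavelength, absorbance) {
  wavelength[which.max(absorbance)]          # wavelength of the maximum peak
}

has_failed <- function(wavelength, absorbance, threshold_nm = 525) {
  peak_wavelength(wavelength, absorbance) > threshold_nm
}

# Example with synthetic data: a plasmon band centered at 530 nm -> failure.
wl <- seq(400, 800, by = 1)
ab <- exp(-((wl - 530)^2) / (2 * 20^2))
has_failed(wl, ab)  # TRUE
```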
Accelerated degradation model
One objective of the reliability analysis is to estimate the useful life of the product through the life distribution. To obtain the life distribution of a product using degradation data, the central step is to set up a model that describes the degradation process, called an accelerated degradation model. An accelerated degradation model is the combination of an accelerated model and a degradation model based on physics and statistical models.
An acceleration model shows the relationship between life and stress, establishing the connection between degradation data and product life. It is essential to set up a suitable probability model to describe the behavior of the collected degradation data, also known as degradation trajectories. Two types of degradation models are commonly used: general trajectory models and stochastic models.
General trajectory models are simple and easy to use, but they lack the ability to capture system dynamics. In contrast, stochastic models have great potential to capture random dynamics within degradation processes. The Wiener process, the Gamma process, and the Inverse Gaussian process are three common stochastic processes that have received many applications in degradation modeling [9,26]. However, it should be noted that both the Gamma process and the Inverse Gaussian process are only suitable for modeling monotonic degradation trajectories. In comparison, the Wiener process applies to the non-monotonic degradation processes frequently encountered in practice, as it provides a satisfactory and flexible description of the degradation data [2]. The Wiener process has been widely applied to degradation data analysis, for example to light-emitting diodes [27], fatigue of metals [28], aluminum reduction cells [29], and microelectromechanical systems [30], among others.
Wiener degradation process
Components of systems deteriorate over time and fail when the degradation level reaches a certain threshold. Degradation information can be measured in a non-destructive way, after which an appropriate degradation model is chosen to describe the process through analysis of the data. Among degradation models, the Wiener process with positive drift is a well-established method due to its mathematical properties. It is expressed as

$$X(t) = \lambda \Lambda(t) + \sigma B(\Lambda(t)), \qquad (1)$$

where $\Lambda(t)$ represents the transformed time scale, a monotonic continuous function that accounts for the non-linearity of the data; a typical example is $\Lambda(t) = t^{b}$, where $b$ is a parameter to be estimated [31]. The parameters $\lambda$ and $\sigma$ stand for the drift and diffusion parameters, respectively. $B(t)$ corresponds to standard Brownian motion, which satisfies the following properties:

i. $B(0) = 0$;
ii. $B(t)$ has a normal distribution with mean 0 and variance $t$;
iii. $B(t)$ has stationary, independent increments; that is, the distribution of $B(t+s) - B(t)$ does not depend on $t$.

From the above properties it can be deduced that any random vector $(B(t_1), \ldots, B(t_m))$ with $0 \le t_1 < \cdots < t_m$ is multivariate normal. These properties of $B(t)$ entail that the Wiener degradation process $X(t)$ shares the properties of Brownian motion except the first; in particular, it is straightforward to see that $X(t) \sim N(\lambda \Lambda(t), \sigma^{2} \Lambda(t))$.

Due to imperfect instruments, random environments, and other factors, measurement errors are inescapably introduced. Thus, a measurement error $\varepsilon \sim N(0, \sigma_{\varepsilon}^{2})$ is introduced, leading to the observed degradation process

$$Y(t) = X(t) + \varepsilon. \qquad (2)$$

Since stress factors such as voltage, humidity, temperature, and vibration affect the performance of the degradation process, an acceleration model can be used to integrate the covariate into the Wiener process. The most common way to do so is to express some model parameters as a function of the covariate, typically called a link function $h(\cdot)$; the choice of the form of this function depends on the way the acceleration factor influences the model parameters. Common acceleration models are the Arrhenius model, the inverse power model, the Eyring model, and the linear and quadratic models, summarized in table 1.

The drift and diffusion parameters therefore depend on the stress level through the link function,

$$\lambda_k = \eta\, h(S_k), \qquad \sigma_k^{2} = \kappa\, h(S_k), \qquad (3)$$

where $S_k$ denotes the stress level, $\eta$ represents a variability parameter, and $\kappa$ is a constant factor associated with the diffusion. From now on, $h(S_k)$ will be written $h_k$. It is common to find differences between the degradation trajectories from unit to unit of the population; this type of difference is the result of non-observable random effects. To express this, some model parameters are made specific to each unit, yielding a process with a certain parametric distribution [28]. Here the drift parameter $\lambda_k$ is specific to each unit and follows a normal distribution, while $\sigma_k$ is taken as constant (Peng and Tseng [32], Si et al [33,34], and Tsai et al [35]). Thus, it is assumed that the variability parameter $\eta$ is a random variable with normal distribution $\eta \sim N(\mu_{\eta}, \sigma_{\eta}^{2})$.

Putting all this together into (2) gives

$$Y(t) = \eta\, h_k\, \Lambda(t) + \sigma_k B(\Lambda(t)) + \varepsilon, \qquad (4)$$

which models the ADT with random effects, measurement errors, and covariates. Some studies configure more than one variant in the Wiener process; for instance, Li Sun et al [36] describe a methodology for modeling and parameter estimation in a constant stress ADT (CSADT) applying the non-linear Wiener process with covariates, random effects, and measurement errors. A CSADT is a test plan consisting of three to four stress levels with different proportions of units at each one, where more samples run at the low stress levels than at the high ones; this type of plan can provide accurate estimates for an ADT. Following the notation in [36], the increasing applied stress levels are $S_1 \le \cdots \le S_K$, where $K$ denotes the maximum stress level. There are $N_k$ units tested under each constant stress $S_k$, and unit $i$ is measured $M_{ki}$ times at stress level $k$. With the transformed time written as $\Lambda(t_{kij}) = t_{kij}^{b}$, the degradation observed under the Wiener process with its four variants is

$$Y_{kij} = \eta_k\, h_k\, \Lambda(t_{kij}) + \sigma_k B(\Lambda(t_{kij})) + \varepsilon_{kij}, \qquad (5)$$

whose parameters will be estimated in the next section.
Statistical inference of the Wiener degradation process, parameter estimation, and life distribution from ADT data
In the previous section a model for the CSADT was formulated in (5). To estimate the unknown parameter set $\Theta$, consider the vector of observations $\tilde{y}_{ki}$ for unit $i$ at stress level $k$. Properties ii and iii of the Wiener process imply that $\tilde{y}_{ki}$ follows a multivariate normal distribution whose mean and covariance are given by expressions (10) and (11). Substituting (10) and (11) into the likelihood (9) and simplifying yields the log-likelihood function (12). Note that the covariance matrix $\tilde{H}_{ki}$ depends on the parameters $b$ and $\sigma_{\varepsilon}^{2}$, and $h_k$ depends on $b$. Therefore, the maximum likelihood estimates of $\kappa$, $b$, and $\sigma_{\varepsilon}^{2}$ can be obtained by maximizing the log-likelihood function (12) using the L-BFGS-B quasi-Newton optimization method available in the R-project packages [37]; the remaining quantities then follow from closed-form equations. One objective of the reliability analysis is to estimate the useful life of the product through the life distribution. To obtain the life distribution of a product using degradation data, Li et al [31] incorporate the measurement errors into the derivation of the expressions for the CDF and PDF of the failure time $T$, where $T$ corresponds to the first time that the degradation process $Y$ hits a failure threshold $w$. They deduced the life PDF expression for each stress level $S_k$, which can be found in equation (12) of [31]. Note that this formulation uses two different time scales, $t$ and $\Lambda(t)$.
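The optimization step can be sketched in R with optim(), which provides the L-BFGS-B method mentioned above. The negloglik below is a simplified Gaussian-increment stand-in for the full multivariate-normal log-likelihood (12); only the optimizer usage is the point, and all values are illustrative.

```r
# Maximize a (simplified) log-likelihood with L-BFGS-B via base-R optim().
negloglik <- function(par, data) {
  # par = c(b, log_sigma_err); toy likelihood on degradation increments
  b <- par[1]; s <- exp(par[2])
  inc <- diff(data$y)
  -sum(dnorm(inc, mean = diff(data$t^b), sd = s, log = TRUE))
}

data <- list(t = seq(3, 69, by = 3),
             y = cumsum(rnorm(23, 0.05, 0.02)))      # synthetic path
fit <- optim(par = c(1, log(0.05)), fn = negloglik, data = data,
             method = "L-BFGS-B", lower = c(0.1, -10), upper = c(3, 2))
fit$par  # estimated b and log sigma_err
```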
Methodology and analysis of degradation data under the Wiener process/ADT
To obtain degradation data for GNPs@LA under a CSADT, several samples were synthesized following the methodology reported by Cornejo et al [19], using NaCl as the acceleration factor. The stress test levels were based on an exploratory study, leaving three different stress levels for three different populations. UV-vis absorption spectra of the colloids were recorded every third day, generating degradation signals for the low, medium, and high levels. In this study, material degradation was quantified from the absorbance between 450 and 550 nm and the maximum characteristic peak of gold; gold colloids with a characteristic peak greater than 525 nm were considered failed. Since this work proposes to estimate the life distribution of GNPs@LA applying a CSADT, the following indices are defined:
• The NaCl percentage levels are indexed as $k = 1, \ldots, K$.
• The sample populations are indexed as $i = 1, \ldots, N_k$.
Once the degradation signals were defined as degradation data, we obtained the configuration of the CSADT that describes the degradation trajectories. Thus, this work proposes the non-linear Wiener process with drift, random effects, measurement errors, and different link functions for the covariate. With the CSADT and the Wiener process specified in this way, the degradation process can be established as in formula (5), and the parameter set $\Theta = \{\mu_{\eta}, \sigma_{\eta}^{2}, \kappa, b, \sigma_{\varepsilon}^{2}\}$ will be estimated to obtain the life distribution. We remark that the link function carries two coefficients in the quadratic model case and a single coefficient in the remaining cases.
All of the above was programmed in the statistical software R, where the MLE was applied to obtain the parameter set $\hat{\Theta}$. Figure 2 shows the proposed methodology.
As can be seen in figure 2, different degradation models have been proposed. The AIC criterion [38] will be used to select the best model; for evaluation and validation, the estimated model parameters will be employed and the bootstrap distribution [39] will be calculated with the construction of confidence intervals.
Results
The constant stress ADT on GNPs@LA had three levels: 19 samples at the low level with 8% NaCl, 13 samples at the medium level with 12% NaCl, and 12 samples at the high level with 16% NaCl m/v. The censoring time was 69 days, except for the low level, which was 51 days due to remaining-time constraints and modifications of the plan. Degradation measurements were performed every third day, generating a total of 18 measurements for the low level and 23 measurements each for the medium and high levels, providing degradation signals over time.
Degradation signals were obtained from the UV-vis absorption spectra. Figure 3 graphically presents the changes in the spectra, comparing the first and last measurements over the 400 to 800 nm wavelength range.
It can be observed in figure 3 that, for all three degradation levels, the absorbance amplitude decreases and the band broadens, causing a red shift of the characteristic peak due to the increase in size and aggregation of the gold nanoparticles. It can also be noticed that at higher NaCl percentages the degradation is more appreciable than at lower percentages. To better relate the UV-vis spectra to material degradation, we plotted the initial average and final characterization for each level, which can be seen in figure 4. The area between 450 and 550 nm of the UV-vis spectra was used to quantify material degradation; additionally, the characteristic peak moving above 525 nm is considered the failure threshold. From figure 4, a change in the area between the initial and final characterization is clearly visible, and it was used to obtain the degradation data.
To maintain a notation consistent with the independent-increments and normality properties of the Wiener process, the area was calculated for each measurement, and each degradation increment was obtained as the difference between the first-day area and the area on each subsequent day; these were the degradation increments to be modeled. Under this consideration, degradation trajectories were obtained for each sample at the different NaCl levels.
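A minimal R sketch of this increment construction, assuming a hypothetical matrix of spectra (rows = measurement days, columns = wavelengths); the trapezoidal band area and the day-1 differencing follow the description above.

```r
# Trapezoidal integral of y over x.
trapz <- function(x, y) sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)

# Area of the 450-550 nm band of one spectrum.
band_area <- function(wl, absorbance, lo = 450, hi = 550) {
  sel <- wl >= lo & wl <= hi
  trapz(wl[sel], absorbance[sel])
}

# Degradation increment of day j relative to day 1: D_j = area_1 - area_j.
degradation_increments <- function(wl, spectra) {
  areas <- apply(spectra, 1, function(a) band_area(wl, a))
  areas[1] - areas
}
```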
Figure 5 shows the different trajectories at each level; a non-monotonic behavior with increasing and decreasing trends can be observed. It is also observed that the degradation of gold colloids at the same NaCl level differs between samples, which can be attributed to unobservable factors such as concentration, unit-to-unit variability, and inherent randomness, as well as the measurement variability of each sample. Thus, the Wiener stochastic model was chosen with its four variants, since it has great potential to capture stochastic dynamics and is also applicable to non-monotonic degradation, providing a satisfactory and flexible description of the degradation data.
For this study, we propose to model the degradation trajectories under the non-linear Wiener process with random effects, measurement errors, and covariate effects, using three different link functions that generate three different models.
To obtain the optimal parameters for each model, the initial parameter values should be close to the true values, as should the value of $b$ in the time transformation. These were obtained with a preliminary package of our own written in RStudio, which performs an individual least-squares regression for each degradation path to give the initial parameters. Once estimated, these initial values were fixed in the likelihood function to estimate the optimal values of $\mu_{\eta}$ and $\sigma_{\eta}^{2}$ via the MLE approach (in closed form, equations (10) and (11)). With the estimated model parameters in table 2 and $w$ as the failure threshold, corresponding to a degradation increment of 1.51, equivalent to an area of 2.1 units and implying a 525 nm shift in wavelength, the life distribution is given by (16). The density and cumulative distribution functions for each degradation level are therefore shown in figure 6. It is observed that at higher percentages the degradation is more noticeable, and the probability mass concentrates closer to zero as the NaCl percentage increases, in accordance with the accelerated degradation test.
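The per-path initialization described above might look roughly as follows in R: an individual least-squares fit of each degradation path against the transformed time $t^{b}$. The data and the fixed $b$ are placeholders; the authors' preliminary package is not public, so this is only a plausible sketch.

```r
# Initial drift for one path: slope of y against t^b, through the origin.
init_drift <- function(t, y, b) {
  coef(lm(y ~ 0 + I(t^b)))[[1]]
}

# Hypothetical example: initial drift for each path in a list of paths.
paths <- list(list(t = seq(3, 51, 3), y = cumsum(runif(17, 0, 0.1))))
sapply(paths, function(p) init_drift(p$t, p$y, b = 1))
```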
Given the cumulative distribution of the model, it can be evaluated to make the desired inferences and consequently estimate the useful life under different conditions. As an example, table 3 presents some failure rates based on formula (16).
As can be seen in table 3, the results of the different models differ, with the quadratic model providing the lowest failure rate.
To select the best model, the Akaike information criterion (AIC), introduced by Hirotugu Akaike in 1973, has been one of the most widely known and used model-selection tools for degradation forecasting [38]. This criterion has been used by several authors, e.g. [40-42], to select the most appropriate degradation model given a set of degradation measurements. The AIC is used as a selection criterion when the model parameters have been estimated by maximum likelihood; its formula is

$$\mathrm{AIC} = -2 \log L + 2p,$$

where $\log L$ is the log-likelihood and $p$ is the number of parameters in the model. The likelihood reflects the conformity of the model with the observed data: the higher the conformity, the higher the likelihood. However, the likelihood usually increases with the number of model parameters, so the AIC penalizes the number of parameters; the selected model is therefore the one with the minimum AIC. According to the above, we use the AIC to establish the influence of the covariate. Modeling without the covariate yields an AIC value of −931,703, which is higher than the AIC values of the proposed models, as can be seen in table 4.
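In R, the selection rule reduces to a one-line computation of AIC = -2 log L + 2p and a minimum search; the log-likelihood values below are illustrative, not the paper's.

```r
# AIC for a fitted model and selection of the minimum-AIC model.
aic <- function(loglik, p) -2 * loglik + 2 * p

models <- data.frame(name   = c("inverse power", "linear", "quadratic"),
                     loglik = c(470.2, 471.0, 474.8),   # illustrative values
                     p      = c(5, 5, 6))
models$AIC <- aic(models$loglik, models$p)
models[which.min(models$AIC), ]   # the selected model
```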
It can easily be seen that the covariate influences the models and must be embedded in the process. Application of the AIC criterion suggests that the quadratic model is the best option for the degradation data obtained.
Continuing with the assessment, the bootstrap method [43] is used to determine confidence intervals (CI) for the failure distribution. The CI is found by treating the model parameters estimated from the sample data as if they were the true parameters, since this is the information available from the degradation process. New degradation samples are generated from the estimated parameters; with these data, new model parameters are estimated and used to obtain a new cumulative failure distribution. Repeating this procedure many times yields an approximation to the sampling distribution. It is common to plot the empirical distribution of the failure pseudo-times with the confidence intervals to check the adequacy of the model: the more pseudo-times fall within the confidence intervals, the higher the adequacy. The bootstrap was applied using 4000 datasets, and the results are shown in figure 7 for each model. According to the CIs obtained at a 95% confidence level (figure 7), at the 8% and 16% levels the empirical values lie close to the theoretical cumulative distribution. On the other hand, at the 12% level all three models present several points lying outside the CI; however, the quadratic model appears to have more values within the confidence interval, as well as closer to the theoretical distribution.
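A self-contained R sketch of this parametric bootstrap, with a toy exponential failure model standing in for the Wiener-based life distribution (an assumption made purely so the example runs); only the resample-refit-recompute loop mirrors the procedure above.

```r
# Parametric bootstrap CI for a failure CDF, using a toy exponential model.
set.seed(1)
theta_hat <- 0.02                      # fitted failure rate (illustrative)
n_units   <- 44                        # total test units (illustrative)
grid      <- seq(0, 200, by = 5)       # time grid for the failure CDF

boot_cdfs <- replicate(4000, {
  t_star     <- rexp(n_units, rate = theta_hat)  # regenerate pseudo failure times
  theta_star <- 1 / mean(t_star)                 # refit the model (MLE of rate)
  pexp(grid, rate = theta_star)                  # CDF implied by refitted model
})

# Pointwise 95% confidence band over the time grid.
ci <- apply(boot_cdfs, 1, quantile, probs = c(0.025, 0.975))
```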
Discussion
The results of the research show that, according to the AIC criterion and the bootstrap confidence intervals, the quadratic acceleration model gives a better fit to the degradation of the GNPs@LA. In ADTs and accelerated life tests (ALTs), however, samples usually show a degree of curvature that is not sufficient for the AIC criterion to favor a quadratic acceleration model over a linear one; in our case the curvature was large enough to improve the AIC. When a curvature parameter is added in an ADT, special attention should be paid, since the degradation of the product under normal conditions of use could be overestimated. To avoid this overestimation, we added the restriction $\mu_{\eta} h_{k+1} > \mu_{\eta} h_{k}$ for $k \ge 1$ in the estimation of the model parameters, thus obtaining an estimate of the failure fraction of the GNPs@LA under normal conditions of use consistent with the limited prior knowledge we had of them. We also recommend further investigation of the quadratic acceleration model in ADT tests.
Conclusions
This research proposed a methodology and an analysis model to estimate the failure rate and useful life of GNPs@LA based on accelerated degradation tests and a non-linear Wiener process incorporating random effects, measurement errors, and a covariate. The proposed scheme employs three different link functions for the covariate, using the inverse power, linear, and quadratic models.
The modeling was tested using NaCl as the acceleration factor in a three-level constant stress ADT with 8%, 12%, and 16% NaCl, whose degradation signals served as the degradation data in the Wiener process. The data presented non-monotonic behavior with oscillatory tendencies, and the GNPs@LA degradation observed within the same population differed between samples; thus the Wiener stochastic process was applied with its four variants.
It is demonstrated that the non-linear Wiener process with random effects, measurement errors, and a covariate using the quadratic model as the link function was the most effective, giving the best estimate of the shelf-life distribution of GNPs@LA as a function of NaCl. These results can be used to provide guarantees for commercially available nanomaterials.
"Materials Science"
] |
Interfacial intermetallic compound modification to extend the electromigration lifetime of copper pillar joints
Electromigration is the massive transport of metal atoms driven by electron flow, which can cause disconnection in electronics. As the size of copper pillar bumps shrinks, the share of interfacial intermetallic compound in solder joints increases markedly. However, there is a lack of systematic research on the effects of the intermetallic compound on the EM lifetime of solder joints. In this paper, the interfacial intermetallic compound of copper pillar joints is modified to extend the electromigration lifetime. The growth rate of the intermetallic compound in the solder joint samples is calculated first: from 230°C to 250°C, the growth rate increases from 0.09 μm/min to 0.19 μm/min, and with longer reaction times the intermetallic compound layers grow continuously. Electromigration tests were then conducted under thermo-electric coupling loading of 100°C and 1.0 × 10⁴ A/cm². Compared with the thin and thick intermetallic compound samples, the lifetime of the all-intermetallic-compound sample improved significantly: the lifetimes of the thin, thick, and all-intermetallic-compound samples are 400 min, 300 min, and 1,200 min, respectively. The failure mechanism of the thin intermetallic compound sample is massive void generation and aggregation at the interface between the solder joint and the pad. For the thick intermetallic compound sample, the intermetallic compound distance between cathode and anode in the solder joint is short, leading to numerous cracks forming in the middle of the joint. Because the all-intermetallic-compound sample greatly reduces the number of voids generated by crystal structure transformation, its lifetime is extended markedly.
Introduction
To meet the requirements of decreasing bump size and increasing circuit density, the copper pillar bump has long been recognized as one of the most promising solutions for high-performance, high-data-rate electrical signal transmission in integrated circuit (IC) products (Hu et al., 2013). The first copper pillar bump was developed by IBM in 1970, under the name controlled collapse chip connection (C4) bumps (Naha et al., 2006). Usually, copper pillar bumps comprise a Sn96.5Ag3.5 lead-free solder cap used to bond metallization pads. In electronics, solder joints in copper pillar bump samples experience severe electromigration (EM) (Ma et al., 2016a; Ma et al., 2016b; Ding et al., 2022). Under EM stress, cracks can be observed in the copper pillar joints, and these cracks accumulate, eventually leading to the failure of the entire electronic device. During EM, atoms are transported massively along the direction of electron flow (Gu and Chan, 2008). In addition, Joule heating and current crowding are both associated with EM-induced failure of solder joints, which refers to the disconnection caused by the formation and growth of Kirkendall voids (Gu and Chan, 2008). Recently, many studies have reported on EM in solder joints of copper pillar samples, most of them focusing on the mechanism of EM with various under-bump metallurgies (UBMs) (Liang et al., 2006; Wang et al., 2012; Chen et al., 2015).
With the growing demand for smaller size and higher integration of microelectronics in electronic packaging, the size of solder joints in copper pillar bumps has gradually decreased (Ko et al., 2019). This means that higher current densities and temperatures may occur in the solder joints, giving rise to EM. Thus, EM has become an increasingly important indicator of joint reliability, which determines the service lifetime of electronic products (Song et al., 2020; Wang et al., 2020; Yue et al., 2021). In addition, the copper pillar bump needs to be joined to the pad by Sn96.5Ag3.5 solder in industry, and an intermetallic compound (IMC) is formed through the interfacial reaction between the solder and the surface-finishing pad material during the joining process (Mokhtar et al., 2021; Xu et al., 2022). This IMC layer can affect the mechanical and electrical reliability of joints under the influence of EM. Due to the size reduction, the proportion of IMC in the solder joints of copper pillars is remarkably increased; in particular cases, the solder joint may even consist entirely of IMC. Thus, it is essential to study the effects of IMC on the EM lifetime of solder joints.
The objective of this paper is to investigate the effects of IMC on the lifetime and the cracking failure of solder joints of copper pillar samples caused by EM under a 100°C and 1.0 × 10⁴ A/cm² thermo-electric coupling load. In addition, the growth rate of IMC in the solder joints was evaluated.
Materials and methods
The copper pillar samples were provided by Advanced Semiconductor Limited and were prepared by electroplating and lithography processes. As shown in Figure 1, a Sn96.5Ag3.5 lead-free solder cap is electroplated on the copper pillar, and a (Cu,Ni)₆Sn₅ IMC layer forms between the solder cap and the copper pillar. The copper pillar is 75 μm in diameter and 30 μm in height. To realize the metallurgical connection, a pad is prepared, and the metallization layers on the pad are Cu/Ni/Au. The connection of the copper pillar to the pad was produced by a hot pressing connection (HPC) process, which applies pressure and heat simultaneously. To observe the IMC growth process in the solder joints, cross-sectional samples were prepared by mechanical polishing with SiC abrasive sandpaper (240#, 800#, 2000#, and 4000#), then polished with a 0.25 μm diamond suspension to eliminate surface scratches. The microstructure of the solder joints was observed with a TESCAN Vega 3 scanning electron microscope (SEM) at 10 kV.
The growth rate of IMC layer
Usually, an IMC thickness of 1-3 μm in the solder joint is considered to indicate sufficient reaction between the copper pillar bump and the metallization pad. When a thick IMC forms, voids and cracks may occur in the copper pillar joint, leading to a shorter service time. Thus, it is necessary to study the growth law of the IMC in solder joints. Temperature and time are the two key parameters controlling the IMC growth rate, under a constant load of 1 MPa. The reaction temperature was set at 230°C, 240°C, and 250°C, respectively, and the reaction time ranged from 1 min to 10 min. The IMC thickness was calculated as the IMC area divided by the IMC length, where the area and length were measured from SEM images. Figure 2 indicates the relationship between the heating parameters and the IMC thickness of the solder joints. Mathematical equations were fitted to the experimental data: at each heating temperature, the IMC grows gradually as time increases, basically following the equation d = Dt, where d is the thickness of the IMC, t is the time, and D is the growth rate. By calculation, the growth rates of the IMC are 0.09 μm/min at 230°C, 0.14 μm/min at 240°C, and 0.19 μm/min at 250°C. An obvious rule can be drawn: higher temperature accelerates the growth of the IMC. The reason is that higher temperature increases the dissolution rate of Ni and Cu atoms into the liquid solder and also accelerates the atomic reaction rate that forms the IMC, while longer time extends the reaction and forms more IMC at the interface. Figure 3 illustrates the microstructure of the IMC at different temperatures and times. When the sample is sintered at 230°C for 1 min, a very thin IMC forms whose elementary compositions are (Ni,Cu)₃Sn₄ and (Ni,Cu)₃Sn, similar to previous studies (Dai et al., 2022). When the reaction time is extended to 10 min, besides an obvious increase in thickness, the composition transforms into (Cu,Ni)₆Sn₅; the absence of further phase transformation indicates that (Cu,Ni)₆Sn₅ is the stable crystal structure. When the temperature is increased to 240°C, the initial crystal structure of the IMC is (Ni,Cu)Sn₃, and it also gradually transforms into (Ni,Cu)₆Sn₅ by 10 min. On further increasing the temperature to 250°C, the IMC microstructure transforms quickly, and the initial (Ni,Cu)Sn₃ becomes (Ni,Cu)₆Sn₅ at 5 min. From these results it can be seen that (Ni,Cu)₃Sn₄, (Ni,Cu)₃Sn, and (Ni,Cu)Sn₃ are transition crystal structures that eventually transform into the stable (Ni,Cu)₆Sn₅ structure. The principle behind this crystal structure transformation is that, at the beginning of the soldering reaction, the liquid solder provides abundant Sn atoms, so the Sn-rich crystal structures grow first. However, the formation energy of the (Ni,Cu)₆Sn₅ alloy is negative, so it is more thermodynamically stable and its alloying capacity is higher, resulting in the automatic transformation from the transition crystal structures to the (Ni,Cu)₆Sn₅ structure (Leineweber et al., 2021). In summary, when there is sufficient atomic diffusion time, the final crystal structure is (Ni,Cu)₆Sn₅. Meanwhile, high temperature accelerates the atomic diffusion rate, leading to a faster transformation from the transition crystal structures to the most stable (Ni,Cu)₆Sn₅ structure.
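A small R sketch of this growth-rate calculation under the linear law d = Dt assumed above; the thickness values are invented to reproduce the reported rates and are not the measured data.

```r
# Fit the growth rate D at each temperature by least squares through the origin.
imc <- data.frame(t_min = rep(c(1, 3, 5, 10), 3),
                  temp  = rep(c(230, 240, 250), each = 4),
                  d_um  = c(0.10, 0.28, 0.45, 0.92,    # ~0.09 um/min
                            0.15, 0.43, 0.70, 1.41,    # ~0.14 um/min
                            0.20, 0.58, 0.96, 1.88))   # ~0.19 um/min

rates <- sapply(split(imc, imc$temp),
                function(g) coef(lm(d_um ~ 0 + t_min, data = g))[[1]])
round(rates, 2)  # growth rate D (um/min) at each temperature
```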
The daisy chain of copper pillar bumps
To study the effects of IMC on the electromigration lifetime of solder joints, a daisy chain of copper pillar bumps was designed. The daisy chain is composed of two substrates, as shown in Figure 4. The copper pillar substrate contains two lead-bonding pads, which are used to connect the DC power supply. A 6 × 6 copper pillar array is laid out on the substrate, and metal wires prepared by electroplating connect each pair of copper pillars, with all copper pillars in non-conducting states. The other substrate has a 6 × 6 metallization pad array, in which each pair of pads is connected by a metal wire, with all pads in non-conducting states. The two paired substrates were bonded by the hot pressing method.
Lifetime of solder joints
Based on the Arrhenius equation, raising the temperature significantly accelerates the atomic diffusion rate. Thus, to accelerate atomic diffusion and shorten the research period, all experiments were conducted in a 100°C environment, with the current density through the solder joints set at 1.0 × 10⁴ A/cm². The experimental samples were placed on a hot plate with a heating rate of 5°C/min; when the temperature reached the set value, the current stress was applied immediately.
FIGURE 4. The schematic illustration of the daisy chain.
Figure 5 shows the initial microstructure of the solder joints after reflow, and Figure 6 shows the resistance change over time together with a cross-sectional image of the failed solder joint. Under simultaneous heat and electron-flow stresses, the resistance evolution of the thin IMC sample can be divided into three steps. In the first step, the resistance increases sharply from 2.3 Ω to 4.9 Ω. This obvious resistance change is due to rapid IMC growth: the conductivity of the IMC crystal structure is markedly lower than that of the Sn96.5Ag3.5 solder (the electrical resistivities of Cu₆Sn₅ and Sn are 17.5 μΩ·cm and 10.9 μΩ·cm, respectively) (Dai et al., 2022; Zhu et al., 2020), so a thicker IMC continuously reduces the electrical conduction of the joint. In the second step, the joint resistance increases to 5.1 Ω and remains roughly stable until 250 min. After that, the resistance increases significantly until 400 min, when the daisy chain fails electrically. Observation of the failed sample (Figure 6B) shows a crack close to the cathode side crossing the solder joint, with more serious damage at the right corner of the joint. When the electron flow travels from cathode to anode, rapid crystal structure transformation generates many voids at the copper pillar/solder interface, which finally aggregate into a crack. Additionally, because the current tends to follow the path of least resistance, current crowding occurs at the right corner; therefore, the right corner near the cathode of the solder joint has the highest failure risk.
The same method was applied to monitor the resistance evolution of the thick IMC sample. As shown in Figure 7A, there is a sharp increase in the initial few minutes, in which the resistance rises from 2.8 Ω to 5.1 Ω.
In the second step, after 10 min, the resistance is first stable and shows a slight increase after 150 min; then the growth rate of the resistance gradually increases until failure. Figure 7B shows the microstructure of the failed sample, with many cracks formed in the middle area of the joint. In this sample, the initial distance between the IMC layers on the copper pillar side and the metallization pad side is less than 5 μm, so voids generated at both anode and cathode aggregate together. Thus, the time for a crack to cross the solder joint is unexpectedly shortened, resulting in a short lifetime for the thick IMC sample. Unlike in the thin and thick IMC samples, no step of rapid resistance increase is observed in the all-IMC joint, as shown in Figure 8A, because the IMC crystal structure has already formed completely. The lifetime of the all-IMC sample reaches about 1,200 min. Because there is no rapid phase transformation during the aging process, the failure of the full-IMC joint is due to the high Joule heat caused by the higher resistivity of the IMC structure. In addition, the coefficient of thermal expansion (CTE) mismatch between the pad materials and the IMC causes mechanical stress in the joint, which leads to crack initiation near the cathode, as shown in Figure 8B. In the middle of the joint, EM leads to several voids, but far fewer voids are generated compared with the thin and thick IMC joints. Under electrical stress, the atoms of the joint are moved by the electron wind force, leaving many voids in the joint, which is the main reason for EM failure. Because the interatomic forces in the IMC are much stronger than those in the Sn96.5Ag3.5 solder, the atomic movement rate in the all-IMC joint is much slower than that in the solder. In summary, EM is not the major factor in the failure of this joint, owing to the excellent EM resistance of the IMC.
By comparison, the solder joints consisting of nearly 100% IMC crystal structure have the longest lifetime, three times that of the thin IMC sample and four times that of the thick IMC sample. The main reason is that the all-IMC sample greatly reduces the number of voids generated by crystal structure transformation. The results of this study provide a novel route to designing long-lifetime solder joints.
Conclusion
In this paper, the growth rate of the IMC layer in the solder joints of copper pillar samples was systematically investigated; the key parameters controlling the IMC growth rate are reaction temperature and time. Three IMC thicknesses, 1.3 μm, 2.7 μm, and 10.1 μm, were then selected for electromigration tests on the copper pillar samples at an ambient temperature of 100°C and a current density of 1.0 × 10⁴ A/cm². The effects of the IMC on the lifetime of the solder joints were tested, and the microstructure evolution and failure modes of the solder joints during electromigration were investigated. The results of this study can be summarized as follows:
1) Higher temperature accelerates the atomic reaction speed and the dissolution of Ni and Cu atoms into the liquid Sn96.5Ag3.5 solder, resulting in a fast IMC growth rate. Longer time extends the atomic reaction, which is beneficial to IMC growth. Meanwhile, it was found that the transition crystal structures eventually transform into the most stable (Ni,Cu)₆Sn₅ structure.
2) For the thin IMC sample under thermo-electric coupling loading, the IMC grows quickly in the initial few minutes, then remains stable for a long time, and finally fails quickly; its lifetime is about 400 min. The failure mechanism is massive EM-induced voids that form and gather at the interface of the solder joint.
3) The thick IMC sample has the shortest lifetime, about 300 min, because the IMC distance between cathode and anode is short, leading to many cracks in the middle of the solder joint.
4) Owing to minimal crystal structure transformation and excellent EM resistance, the all-IMC sample has the longest lifetime, about 1,200 min. The reason for the excellent EM resistance of the IMC is its very strong interatomic forces, so atomic movement hardly occurs in the IMC under current flow.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Author contributions
Conceptualization, DY; methodology, DY; software, YH; validation, YH; investigation, DY; data curation, YH; writing-original draft preparation, DY; writing-review and editing, YH; supervision, DY; project administration, DY. All authors have read and agreed to the published version of the manuscript.
"Materials Science"
] |
The tropical rain belts with an annual cycle and a continent model intercomparison project: TRACMIP
This paper introduces the Tropical Rain belts with an Annual cycle and a Continent Model Inter-comparison Project (TRACMIP). TRACMIP studies the dynamics of tropical rain belts and their response to past and future radiative forcings through simulations with 13 comprehensive and one simplified atmosphere models coupled to a slab ocean and driven by seasonally varying insolation. Five idealized experiments, two with an aquaplanet setup and three with a setup with an idealized tropical continent, fill the space between prescribed-SST aquaplanet simulations and realistic simulations provided by CMIP5/6. The simulations reproduce key features of present-day climate and expected future climate change, including an annual-mean intertropical convergence zone (ITCZ) that is located north of the equator and Hadley cells and eddy-driven jets that are similar to present-day climate. Quadrupling CO2 leads to a northward ITCZ shift and preferential warming in Northern high latitudes. The simulations show interesting CO2-induced changes in the seasonal excursion of the ITCZ and indicate a possible state dependence of climate sensitivity. The inclusion of an idealized continent modulates both the control climate and the response to increased CO2; for example, it reduces the northward ITCZ shift associated with warming and, in some models, climate sensitivity. In response to eccentricity-driven seasonal insolation changes, seasonal changes in oceanic rainfall are best characterized as a meridional dipole, while seasonal continental rainfall changes tend to be symmetric about the equator. This survey illustrates TRACMIP’s potential to engender a deeper understanding of global and regional climate and to address questions on past and future climate change.
Introduction
The simulation of tropical rainfall is one of the most stubborn challenges in climate science. Despite general improvements, climate models still exhibit large-scale tropical rainfall biases, such as a double intertropical convergence zone (ITCZ) in the Central-East Pacific, that have now persisted for more than two decades [Mechoso et al., 1995; Hwang and Frierson, 2013], and projections for both the ITCZ position [Frierson and Hwang, 2012; Donohoe and Voigt, 2016] and continental rainfall [Knutti and Sedlacek, 2013] remain uncertain in magnitude and sign. In many regions the models disagree (e.g., in the Sahel [Biasutti et al., 2008; Park et al., 2015]), and even in regions where they agree, models are often at odds with recent trends (e.g., in East Africa [Williams and Funk, 2011; Lyon and DeWitt, 2012]). This paper introduces a hierarchical modeling approach that we have developed over the past two years to help answer the Grand Challenge question of "What controls the tropical rain belts?" The "Tropical Rain belts with an Annual cycle and a Continent-Model Intercomparison Project" (TRACMIP) is a simulation suite that complements other modeling efforts [Eyring et al., 2015; Webb et al., 2016; Zhou et al., 2016] and has been performed by 13 comprehensive global climate models and one simplified gray-atmosphere model. The suite includes five simulations (see section 2 for details), two in aquaplanet configuration (prefixed Aqua) and three with an idealized tropical continent (prefixed Land). The Aqua- and LandControl simulations have a circular orbit and preindustrial greenhouse gas concentrations, while the other experiments simulate the response to enhanced atmospheric carbon dioxide and changes in seasonal insolation. In all simulations, the atmosphere is thermodynamically coupled to a motionless slab ocean of uniform depth.
We envision that the design of TRACMIP and different pairings of the five simulations will shed light on different aspects of the dynamics of tropical rainfall and, more generally, the global climate. TRACMIP represents land as a rectangular patch of a very thin slab ocean with reduced evaporation and increased albedo. Soil moisture dynamics are thus disallowed, as are complications arising from continental geometry and the presence of topography and vegetation. This will allow us to identify which aspects of monsoonal circulations can be captured and understood with such a maximally idealized land, and which require more realistic land features. Comparing Aqua and Land simulations can illuminate whether zonal asymmetries created by continental landmasses fundamentally change the behavior of the zonal-mean ITCZ in the control climates or in its response to greenhouse gas forcing. The juxtaposition of aquaplanet and idealized land simulations can further help us assess the extent to which established zonal-mean ITCZ frameworks provide useful information about regional rainfall characteristics [Adam et al., 2016a, 2016b], to what extent zonal asymmetries are required for monsoons to exist [Bordoni and Schneider, 2008], and how the presence of land modulates rainfall both locally and in the zonal mean [Maroon et al., 2016]. The land simulations provide an update to the seminal work of Chou and Neelin [2001, 2003] to investigate the importance of fully resolving the vertical structure of tropical circulations, interactions between tropical and extratropical circulations, and the representation of convection. Moreover, simulating the response to increased greenhouse gases and the response to seasonal insolation within the same model setups provides the foundation to build theories that encompass the key forcings of both future and past changes. This approach is similar to what informed the design of the paleoclimate contribution to CMIP5 [Schmidt et al., 2014a] and will allow us to study to what extent and under which circumstances a theory built from past changes, e.g., the greening of the Sahara during the mid-Holocene, can inform and possibly constrain future changes [Harrison et al., 2015]. TRACMIP can thus fill the gap between work on past climate that used idealized models [Merlis et al., 2013a] and the comprehensive-model work done within the Paleo Model Intercomparison Project.
Model setups with idealized boundary conditions have become an important tool in the development of climate models and the investigation of climate dynamics [e.g., Kang et al., 2008;Williamson et al., 2012;Stevens and Bony, 2013;Leung et al., 2013;Medeiros et al., 2015;Voigt et al., 2014a;Shaw et al., 2015], and are now included in CMIP activities.
Notably, CMIP5 included aquaplanet simulations with prescribed time-constant SSTs (forced with an approximation to current annual-mean SST and with a 4 K uniform warming) that build upon the AquaPlanet Experiment [Williamson et al., 2012] and were partly motivated by the Cloud Feedback Model Intercomparison Project. The CMIP5 aquaplanet simulations illustrate the large impact of moist processes on the atmospheric circulation and our gaps in understanding this impact [Stevens and Bony, 2013;Voigt and Shaw, 2015]. However, the CMIP5 use of fixed SSTs and the lack of seasonality might overemphasize model uncertainties that are less relevant for the dynamics of tropical rainfall in coupled realistic setups. When cloud and convective processes have full reign over the tropical rain belts, seemingly small changes in the convection scheme can lead to large changes in tropical rainfall [e.g., Hess et al., 1993;Williamson et al., 2012;Moebis and Stevens, 2012].
Yet when SST interactions and seasonality are present, convection is more constrained.
An example is given in Figure 1, which shows tropical precipitation simulated by two versions of the ECHAM6.1 model (the atmospheric component of the CMIP5 MPI-ESM Earth system model). The two versions differ only in the entrainment/detrainment rate of moist convection, but this small change is sufficient to create a stark difference in tropical precipitation between the two versions used in the CMIP5 aquaplanet setup, with one version simulating a single and the other version simulating a double ITCZ. Yet when used in an aquaplanet setup with interactive SSTs and a seasonal cycle, the precipitation differences largely vanish and both versions simulate similar ITCZs. This suggests that the ability of SSTs to respond to air-sea fluxes, including cloud-radiative effects, as well as the external timescale and the interhemispheric asymmetries set by the seasonal cycle, provide an anchor to the tropical climate. This hypothesis is supported by the interactive-SST aquaplanet work of Lee et al. [2008], who found that all of the studied models simulate a single ITCZ when run with a slab ocean (no seasonal cycle was used in that work). Model differences in cloud and convective processes, which have a strong impact in uncoupled CMIP5 aquaplanet simulations, might thus have much less of an impact in realistic coupled CMIP5 simulations. This raises the question to what extent the CMIP5 aquaplanet simulations are helpful for understanding model behavior in more realistic setups, and points to a gap in the model hierarchy provided by CMIP5. TRACMIP's AquaControl simulation strives to fill this gap by using an interactive slab ocean and seasonally varying insolation. The aquaplanet simulations with quadrupled CO2 extend the bridge provided by TRACMIP between CMIP5 aquaplanets and realistic simulations to the case of future scenarios. Importantly, TRACMIP fills this gap in the CMIP5 hierarchy not with a single model, but with an ensemble of models. MIPs, or the intercomparison of simulations performed by different climate models under identical boundary conditions, have revolutionized climate science. In particular, they have given researchers another method (complementary to single-model sensitivity experiments) to identify the processes that determine the response of a climate variable to external forcing. A model response that is consistent across an ensemble of GCMs carries more weight than results from a single model and is welcomed for that reason. But scatter across models can be just as informative. For example, correlations of anomalies across an ensemble can highlight how changes in two variables are connected to each other in a robust way across all models, even though the magnitude or even the sign of the changes is uncertain; these robust correlations point to robust mechanisms [e.g., Biasutti et al., 2009]. In particular, model intercomparisons have identified "emergent constraints": relationships that hold for both natural variability and anthropogenic changes and that, therefore, can be evaluated in observations of the former and used to constrain the latter [e.g., Hall and Qu, 2006; Sherwood et al., 2014]. Such constraints can be specific to a world region if the simulations are fully realistic, but they can also be specific to dynamical regimes and thus can also be identified in idealized model setups that add the advantage of a clean experiment [e.g., Voigt et al., 2014a; Medeiros et al., 2015].
Most models that contribute to TRACMIP are comprehensive global climate models. TRACMIP also includes an idealized model that represents convection in a simplified manner and that does not take into account radiative interactions of clouds and water vapor. The idealized model provides a link to past theoretical studies of tropical rain belt dynamics [Chou and Neelin, 2004; Bordoni and Schneider, 2008; Kang et al., 2009; Merlis et al., 2013b; Bischoff and Schneider, 2014]. We hope that this will foster TRACMIP's aim to understand tropical rainfall dynamics across a hierarchy of models and boundary conditions and to better connect theories, state-of-the-art models, and ultimately observations [Held, 2005, 2014].
In this paper we introduce TRACMIP to the scientific community. First, we describe the experimental protocol, available diagnostics, and participating models (section 2). The main part of the paper presents an overview of the mean climate simulated in the five configurations and highlights interesting aspects of the mean climate and its response in rainfall and temperature to external forcings: the control simulations without and with land are characterized in section 3, and the response to radiative forcing from CO2 and insolation changes is discussed in section 4. The full breadth and depth of the scientific inquiries that can be based on this data set is beyond the scope of any one paper, and we will not attempt in our discussion (section 5) to fully answer any of the big-picture questions that TRACMIP was designed to address. Instead, we will discuss how TRACMIP can be used not just to investigate how tropical rain belts respond to climate change, but for a broad range of other purposes, from high-frequency tropical variability to tropical-extratropical interactions and extratropical storm tracks. TRACMIP has been a community effort, and we are proud to share it as a community tool.
Experimental Protocol
TRACMIP consists of five experiments that are listed in Table 1. The control experiment, called AquaControl, is an aquaplanet climate with zonally uniform boundary conditions. Aquaplanets have been employed previously, including in CMIP5, but in contrast to CMIP5 we couple the models to a thermodynamic slab ocean to close the surface energy balance and to allow for interactive sea-surface temperatures. A similar setup was proposed by Lee et al. [2008] and used in a small intercomparison by Rose et al. [2014], but here we also include a fixed northward meridional ocean heat transport and a seasonal cycle. Following the CMIP5 aquaplanet setup, greenhouse gases (with the exception of CFCs) and total solar irradiance are adapted from the AquaPlanet Experiment (APE) [Williamson et al., 2012].
AquaControl is forced by present-day CO2 = 348 ppmv, CH4 = 1650 ppbv, N2O = 306 ppbv, and a total solar irradiance of 1365 W m⁻². Direct radiative effects of aerosols are set to zero, as are CFCs. Ozone is taken from APE (http://www.met.reading.ac.uk/~mike/APE/ape_ozone.html). Unlike in APE and CMIP5, physical constants such as gravitational acceleration and global-mean surface pressure are not specified, but the effect of model differences in these quantities is deemed negligible. TRACMIP includes the seasonal and diurnal cycles in insolation. With the exception of the LandOrbit experiment described below, the seasonal cycle is an idealized version of today's insolation with an obliquity of 23.5° and zero eccentricity. The latter implies an annual-mean insolation that is symmetric with respect to the equator. Northern Hemisphere spring equinox is set to 21 March. The seasonal cycle enables seasonal north-south migrations of the ITCZ. To simulate seasonal ITCZ migrations comparable to today's climate, the slab ocean depth is set to 30 m [Donohoe et al., 2014]. Modeling groups were asked to use a 360 day calendar, but since this was not available in all models, some models use a 365 day calendar without (second option) or with leap years (third option). As in previous slab-ocean aquaplanet studies [e.g., Kang et al., 2008; Voigt et al., 2014a; Rose et al., 2014], sea-ice formation is turned off and the ocean is allowed to cool below the freezing temperature. Models use their own surface roughness length and ocean albedo. Model differences in ocean albedo do not appear to be the cause of model differences in global surface temperature (Figure 2), as Earth's energy balance is more strongly controlled by atmospheric processes, in particular clouds [Donohoe and Battisti, 2011].
Four more experiments study the impact of CO2, land, and insolation. The first is an aquaplanet experiment initiated from AquaControl with CO2 instantaneously quadrupled and is called Aqua4xCO2. This experiment mimics the CMIP5 coupled Abrupt4xCO2 experiments and is designed to provide insights into the equilibrium response to the greenhouse gas forcing as well as its transient evolution. The lack of a dynamic ocean means, however, that the transient response in TRACMIP focuses on the mixed-layer response of the ocean on decadal timescales and does not account for the impacts of spatially and temporally varying ocean heat uptake (see Rose and Rayborn [2016] for a recent review).
The other three experiments are performed with a modified lower boundary designed to capture the essential characteristics of a continent. The continent is a flat rectangular region that straddles the equator in a fashion analogous to the African continent, reaches into the subtropics (30°S-30°N), and is limited in longitude to a width of 0°E-45°E. Because the primary focus of TRACMIP is on atmospheric processes, we choose to avoid the complication of land surface schemes and soil moisture feedbacks, and implement a continent made neither of land nor of water: a "jello" continent. Land is modeled as a thin (0.1 m) slab of ocean with albedo increased by 0.07 compared to the models' own ocean albedo, suppressed ocean heat transport (i.e., zero q-flux), and reduced evaporation. The reduction in evaporation is achieved by halving the surface exchange coefficient for moisture, C_q, used in the calculation of the surface evaporative flux E,

E = C_q v (q_s - q),   (1)

where v is a measure of near-surface wind speed, q is near-surface specific humidity, and q_s is the saturation-specific humidity for a given surface temperature. Over land, equation (1) is changed to

E = (C_q/2) v (q_s - q),   (2)

which, assuming changes in surface wind speed and boundary-layer humidity are small, will reduce evaporation by a factor of 2. While evaporation is always suppressed, land can never dry out in TRACMIP, in contrast to what would happen with a bucket model formulation [e.g., Manabe, 1969]. Over ocean, equation (1) is applied in all experiments. The surface roughness is the same over land and ocean.
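As a concrete illustration of equation (1) and its land modification, the following minimal Python sketch evaluates the bulk evaporative flux over ocean and over the "jello" continent; the numerical values of C_q, wind speed, and humidities are illustrative placeholders, not TRACMIP-prescribed settings.

```python
def evaporative_flux(c_q, v, q_s, q, is_land=False):
    """Bulk evaporative flux E = C_q * v * (q_s - q), as in equation (1).

    Over the 'jello' continent the exchange coefficient is halved, which
    (for unchanged wind speed and humidity) halves the evaporation.
    """
    if is_land:
        c_q = 0.5 * c_q
    return c_q * v * (q_s - q)

# Illustrative values (assumptions, not TRACMIP settings):
c_q = 1.2e-3           # surface exchange coefficient for moisture
v = 6.0                # near-surface wind speed (m/s)
q_s, q = 0.020, 0.014  # saturation and near-surface specific humidity (kg/kg)

print(evaporative_flux(c_q, v, q_s, q))                # ocean
print(evaporative_flux(c_q, v, q_s, q, is_land=True))  # land: half the ocean value
```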
The three land experiments are LandControl, Land4xCO2, and LandOrbit. LandControl differs from AquaControl only by the introduction of the continent. Land4xCO2 and LandOrbit are initiated from LandControl and study the response to radiative forcing. Land4xCO2 has instantaneously quadrupled CO2. In LandOrbit a nonzero eccentricity of ϵ = 0.02 is introduced to create a hemispheric difference in seasonal insolation such that, compared to LandControl, the Northern and Southern Hemispheres receive less and more insolation, respectively, in their summers. Annual-mean insolation in LandOrbit is the same as in all other experiments; the seasonal insolation changes are shown in Figure 3. The eccentricity change addresses the seasonal insolation change due to precessional forcing that is responsible for the dominant signal in Holocene tropical hydroclimate [e.g., Prell and Kutzbach, 1987; Clemens et al., 2010]. The choice of comparing simulations with and without eccentricity, instead of simulations with the same eccentricity but different times of perihelion, was made in order to have the simplest possible control simulation (ϵ = 0), in which the only source of hemispheric asymmetry is the ocean heat flux (see below). The insolation in LandOrbit roughly corresponds to today's orbit [Joussaume and Braconnot, 1997], so that the insolation difference LandControl-LandOrbit is about half as strong as the insolation change between the mid-Holocene and today.
The slab ocean includes a prescribed ocean heat transport that is imposed as a so-called "q-flux" in units of W m⁻². The q-flux is added to the surface energy balance and cools low latitudes and warms mid and high latitudes, mimicking the effect of meridional energy transport by a dynamic ocean. The TRACMIP q-flux is zonally symmetric and constant in time. It is an approximation to the zonal-mean, time-mean q-flux of the present-day climate that is shown in Figure 4a and that we calculated from observations of top-of-atmosphere radiative fluxes from CERES and moist static energy divergence from the ERA-Interim reanalysis, both averaged over the years 2001-2010 (see Frierson et al. [2013] for details). The zonal average includes land points, for which the q-flux is set to zero. Small-scale meridional variability in the observed q-flux in mid and high latitudes arguably is impacted by the specific land-ocean geometry of the present-day Earth, and so for TRACMIP we meridionally smooth the q-flux by fitting a fourth-order polynomial to the observed q-flux, q(φ) = p0 + p1φ + p2φ² + p3φ³ + p4φ⁴, where φ is latitude in degrees. The fit is done separately for the Northern and Southern Hemispheres, leading to the hemisphere-dependent coefficients listed in Table 2. In the simulations with land, the q-flux is set to zero over land. This requires a small q-flux correction of −0.59 W m⁻² over ocean points in the land simulations to ensure that the global-mean q-flux is still zero. The correction is applied to all ocean points as a small decrease in p0, which implies a small cooling over ocean in the land simulations compared to the aquaplanet simulations. The meridional energy transport in PW associated with the q-flux is shown in Figure 4b. At the equator the ocean transports 0.5 PW into the Northern Hemisphere. This is consistent with the present-day climate [Ganachaud and Wunsch, 2000; Frierson et al., 2013; Marshall et al., 2013] and puts the annual-mean ITCZ into the Northern Hemisphere in TRACMIP, as described in more detail in section 3.
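The following sketch illustrates how such a hemisphere-wise fourth-order polynomial fit and the land-related p0 correction could be implemented; the function names and inputs are ours for illustration and are not part of the TRACMIP protocol code. The −0.59 W m⁻² value is the correction quoted above.

```python
import numpy as np

def fit_qflux(lat_obs, q_obs):
    """Fit q(phi) = p0 + p1*phi + ... + p4*phi^4 separately in each
    hemisphere, as done for the TRACMIP q-flux."""
    coeffs = {}
    for hemi, mask in [("NH", lat_obs >= 0), ("SH", lat_obs <= 0)]:
        # np.polyfit returns the highest-order coefficient first
        coeffs[hemi] = np.polyfit(lat_obs[mask], q_obs[mask], deg=4)
    return coeffs

def qflux_with_land(lat, lon, coeffs, land_mask, p0_correction=-0.59):
    """Evaluate the fitted q-flux on a lat-lon grid, zero it over land,
    and apply a uniform decrease of p0 over ocean so that the
    global-mean q-flux stays zero."""
    q_lat = np.where(lat >= 0,
                     np.polyval(coeffs["NH"], lat),
                     np.polyval(coeffs["SH"], lat))
    q = np.repeat(q_lat[:, None], lon.size, axis=1)
    q[land_mask] = 0.0
    q[~land_mask] += p0_correction
    return q
```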
Requested Diagnostics and Participating Models
While TRACMIP is not organized within CMIP6, partly because the idea for the project emerged only in the fall of 2014, TRACMIP attempts to leverage past and future CMIP activities as much as possible. Many of the contributing models are either CMIP5 models or recent developments that reflect preparations for CMIP6. Modeling groups were asked to prepare their data according to the CMIP5 conventions for variable names, units, and signs (i.e., to "cmorize" the data). The requested fields are those specified in the CMIP5 atmospheric Amon table, which is available at http://cmip-pcmdi.llnl.gov/cmip5/docs/standard_output.pdf (excluding those related to the chemical composition of the atmosphere). Three-dimensional atmospheric data are interpolated onto the 17 CMIP5 pressure levels (1000, 925, 850, 700, 600, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30, 20, and 10 hPa). The fields are requested as monthly, daily, and 3 h data to enable studies that connect the models' climatologies to fast processes on daily and subdaily timescales. TRACMIP also follows CMIP5 regarding whether fields should be saved as averages or snapshots. For the monthly and daily output streams, all fields are requested as averages over the daily or monthly output period. For the 3 h output stream, surface and atmospheric temperature, horizontal wind, vertical wind, specific humidity, and geopotential height are requested as snapshots, and all other fields as averages. For each experiment, monthly output is requested for all years (except the 15 years of spin-up in AquaControl; in all models, global-mean surface temperature has equilibrated by year 15 of AquaControl), daily output for the last 10 years, and 3 h output for the last 3 years. To enable studies of the transient response, all experiments except AquaControl are restarted from another experiment as described in Table 1.
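A minimal sketch of the vertical interpolation step, assuming linear interpolation in log-pressure for a single column; the cmorization tools actually used by the modeling groups may differ in detail.

```python
import numpy as np

# The 17 CMIP5 pressure levels listed above, converted to Pa
CMIP5_PLEV = np.array([1000, 925, 850, 700, 600, 500, 400, 300,
                       250, 200, 150, 100, 70, 50, 30, 20, 10]) * 100.0

def to_pressure_levels(field, p_model, p_out=CMIP5_PLEV):
    """Interpolate a single model-level column onto the CMIP5 pressure
    levels, linear in log-pressure; levels outside the model's pressure
    range (e.g., below ground) are set to NaN."""
    order = np.argsort(p_model)  # np.interp needs increasing abscissae
    return np.interp(np.log(p_out), np.log(p_model[order]), field[order],
                     left=np.nan, right=np.nan)
```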
TRACMIP was very well received by the scientific community. So far, 13 comprehensive climate models and one simplified climate model have contributed simulations (see Table 3), and with few exceptions the models have performed all experiments. This can be read off from the summary of global-mean, time-mean surface temperature and time-mean ITCZ position given in Table 4 for each model and experiment. Some of the 13 comprehensive models differ only in specifics of the physical parameterizations, allowing for a judgment of how changes in the treatment of clouds and convection impact tropical rain belts. For example, the MetUM model is run in two configurations, CTL and ENT, that differ in the parameter settings for convection [see also Klingaman and Woolnough, 2014; Bush et al., 2015]. Similarly, ECHAM6 is run in two versions, 6.1 and 6.3, and three different versions of the Community Atmosphere Model (CAM) are used. This judgment is further facilitated by the inclusion of the idealized CALTECH model, which does not take into account radiative feedbacks from clouds and water vapor and represents moist convection in a simplified manner. The CALTECH model uses a gray radiation scheme, in which absorption and emission of solar and thermal radiation do not depend on wavelength. Hence, in this model an equivalent 4xCO2 experiment is run by increasing the prescribed longwave optical thickness in the gray scheme.
For reference, Figures 5-7 show the model median of annual-mean surface temperature, precipitation, zonal-mean zonal wind, and meridional mass stream function in all five TRACMIP experiments. Throughout this paper, the last 20 years of each simulation are analyzed, and models are interpolated onto a common 1° × 2° latitude-longitude grid for the calculation of the model-median values.
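The regridding and median step can be sketched as follows with xarray; bilinear interpolation is assumed here as one reasonable choice, since the text does not specify the interpolation method.

```python
import numpy as np
import xarray as xr

def ensemble_median(datasets, var="pr"):
    """Interpolate each model onto a common 1 x 2 degree latitude-longitude
    grid and take the pointwise model median, as done for Figures 5-7."""
    lat = np.arange(-89.5, 90, 1.0)   # 1 degree latitude spacing
    lon = np.arange(0, 360, 2.0)      # 2 degree longitude spacing
    regridded = [ds[var].interp(lat=lat, lon=lon) for ds in datasets]
    return xr.concat(regridded, dim="model").median(dim="model")
```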
Control Climates Without and With a Continent
In this section we describe the control climates of the aquaplanet setup and the setup with land. We begin with AquaControl and then compare it to LandControl to characterize the global and local impact of land. Unless otherwise stated we discuss the annual-mean climate.
AquaControl
In the aquaplanet setup the models simulate an annual global-mean surface temperature of 290.4-300.7 K (model median of 295.1 K; Figure 2 and Table 4). The models are about 3-12 K warmer than the present-day climate, and warmer than realistic coupled CMIP5 simulations of the twentieth century. This is expected from the lack of sea ice and continental areas, which both have a higher albedo than ocean, as well as the lack of aerosol-radiative interactions.
Eight of the 14 models are within ±2 K of the model median. The model spread in TRACMIP global surface temperatures is higher than in historical CMIP5 simulations, which show a model spread of around 3 K [Mauritsen et al., 2012]. Yet the model spread in global surface temperature is still small enough to justify a meaningful comparison between the models, and is smaller than what one might have expected given that, in contrast to CMIP5, models were not tuned to a specific target temperature for TRACMIP. Precipitation is about 50% higher than in the present day (2.7 mm/d, GPCPv2.2, 1979-2010). This is only partly explained by the warmer climates in TRACMIP. When the present-day precipitation is extrapolated to the TRACMIP surface temperatures assuming a 2-3%/K precipitation scaling following Held and Soden [2006] (gray shading in Figure 8), TRACMIP precipitation is still larger. TRACMIP precipitation is higher not only because of a warmer climate but also because of the lack of continental areas, over which evaporation can be moisture limited and sensible heat fluxes play a larger role than over ocean.
The ensemble-median annual-mean patterns of surface temperature and precipitation in AquaControl are shown in Figures 5a and 6a; as expected, the climate is zonally symmetric aside from very small residual noise. The zonal-mean median and all individual models are shown in more compact form in Figure 9, which shows annual-mean temperature, precipitation, and lower-tropospheric zonal wind (see also Figure 7) as well as the seasonal progression of the ITCZ. Following Frierson and Hwang [2012], the ITCZ is defined as the latitude of the precipitation centroid between 30°N and 30°S (the same area-integrated annual-mean precipitation north and south of the ITCZ).
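This centroid definition translates directly into code. The sketch below computes the ITCZ latitude from a zonal-mean precipitation profile; the cosine-latitude weighting implements the area integration, and the example profile is synthetic.

```python
import numpy as np

def itcz_centroid(lat, precip):
    """ITCZ position following Frierson and Hwang [2012]: the latitude
    between 30S and 30N that splits the area-integrated (cos-weighted)
    precipitation into equal northern and southern halves.
    lat must be ascending; precip is zonal-mean precipitation."""
    mask = (lat >= -30.0) & (lat <= 30.0)
    lat_t = lat[mask]
    p_w = precip[mask] * np.cos(np.deg2rad(lat_t))  # area weighting
    cum = np.cumsum(p_w)
    half = 0.5 * cum[-1]
    # interpolate to the latitude where the cumulative sum crosses half
    return np.interp(half, cum, lat_t)

# Example: a synthetic profile peaking at 5N gives a centroid near 5N
lat = np.linspace(-90, 90, 181)
pr = np.exp(-((lat - 5.0) / 10.0) ** 2)
print(itcz_centroid(lat, pr))
```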
The Northern Hemisphere is 0.9-5.3 K (model median 2.2 K) warmer than the Southern Hemisphere in the hemispheric mean, and is also warmer at all corresponding latitudes (Figure 9a). The warmer Northern Hemisphere is consistent with northward cross-equatorial ocean heat transport, which has been invoked to explain the 1-2 K warmer Northern Hemisphere of the present-day climate [Feulner et al., 2013; Kang et al., 2015]. However, since the hemispheric difference in TRACMIP is much larger than in the present-day climate even though the ocean transports the same amount of energy across the equator, other processes, likely those involving radiative interactions of clouds and water vapor, must also play a role. The meridional profile of surface temperature is rather different from that of the present-day climate, as the lack of land, ice, and mountains strongly reduces the equator-to-pole contrasts. For example, the surface temperature contrast across the Southern Hemisphere is about 30 K, roughly half of that in the real world.
The annual-mean tropical precipitation has a double-peak structure, but the Northern Hemisphere peak is dominant, so that the annual-mean ITCZ position, defined as the precipitation centroid, is at 0.9°N-10.8°N (model median 3.3°N) (Figure 9b and Table 4). This is consistent with the northward cross-equatorial ocean energy transport [Frierson et al., 2013; Marshall et al., 2013]. Over the course of the seasonal cycle, the ITCZ migrates back and forth across the equator (except for AM2.1), reaching its most northern excursion in October and its most southern excursion around April-May (Figure 9c); this indicates a seasonal cycle that lags the real world, where the presence of land mitigates the longer response time of the mixed-layer ocean [Biasutti et al., 2004].
The specific choice of the mixed-layer depth has, of course, a large impact on the exact timing of the seasonal peak. During most months tropical precipitation has only one peak (not shown), implying that the double peak in tropical annual-mean precipitation is the result of the seasonal migrations of a single ITCZ. The annual-mean circulation shows a Northern Hadley cell of 42-99 × 10⁹ kg s⁻¹ (model median 72 × 10⁹ kg s⁻¹) and a Southern Hadley cell of −53 to −265 × 10⁹ kg s⁻¹ (model median −133 × 10⁹ kg s⁻¹). The eddy-driven jet, defined as the 850 hPa zonal wind maximum [Barnes and Polvani, 2013], is at 42°N-51°N (model median 46°N) in the Northern Hemisphere and at 38°S-49°S (model median 42°S) in the Southern Hemisphere (Figure 9d). Overall, TRACMIP's AquaControl reproduces the main features of the present-day climate: the Northern Hemisphere is warmer than the Southern Hemisphere, the ITCZ is located in the Northern Hemisphere in the annual mean and migrates back and forth across the equator, the annual-mean Hadley circulation is stronger in the Southern Hemisphere than in the Northern Hemisphere, and the eddy-driven jets are located at around 45°N/S. The TRACMIP AquaControl simulations are thereby closer to the present-day climate than the CMIP5 prescribed-SST aquaplanet simulations, which show an excessively strong Hadley circulation, too-equatorward eddy-driven jets, and no hemispheric asymmetry [Medeiros et al., 2015].
LandControl
We now describe the climate impact of the tropical "jello" continent by comparing LandControl and AquaControl. Introducing land leads to a global-mean cooling of −0.1 to −1.8 K (model median −0.7 K) and a precipitation decrease of −0.05 to −0.25 mm/d (model median −0.11 mm/d) (Figure 8 and Table 4). This global cooling might be expected if one assumed that the land-induced increase in surface albedo translated into a similar change in planetary albedo. However, a closer look at the pattern of surface temperature change indicates that matters are more complicated. Figure 10 shows the annual-mean change in surface temperature that results from introducing the tropical continent (LandControl-AquaControl) in the model median and in each model. In the model median, the cooling does not predominantly arise from the change in surface temperatures over land but rather from the general cooling of the global ocean and the even stronger cooling in the region just west of the continent. The temperature change differs substantially in sign and magnitude between models, however. In some models the land warms with respect to the aquaplanet setup, while it cools in others. Many models show a wedge of ocean cooling west (i.e., downstream) of the continent that extends along the equator from the coast to between 60°W and 120°W, but the details of this feature are not robust across models. The introduction of land thus has a clear impact on regional temperatures, but this impact differs markedly between models. Much of the temperature pattern response to the introduction of land, and the model differences therein, appears to be mediated by clouds. This can be seen from the CALTECH model, in which clouds are missing: there, the cooling maximizes over land, consistent with the local surface albedo increase, and the ocean cooling west of the continent that is seen in the comprehensive models is nearly absent.

Figure 11 shows the annual-mean change in low-latitude precipitation due to the tropical continent. The shading indicates the LandControl-AquaControl anomalies, while the colored lines indicate the precipitation centroid at each longitude in each experiment (blue for AquaControl and red for LandControl; see also Figure 6). In the model median, precipitation is increased near the equator over land but reduced over the northern subtropical part of the continent and over the near-equatorial ocean west of the continent (creating an isolated global maximum of precipitation over land at about 5°N and away from the coast; see Figure 6). There is also a hint of increased precipitation over the subtropical ocean in the western hemisphere. As was the case for surface temperature, however, models differ markedly in the regional precipitation change, and the regional changes in the model median are not completely robust across models. For example, ECHAM6.3 dries the equatorial continent, while AM2.1 moistens all equatorial longitudes. The precipitation changes are associated with substantial changes in the ITCZ position; these are largest at the location of the continent, but are not limited to it and do not sum to zero in the zonal mean. The zonal-mean, time-mean ITCZ shifts southward in LandControl compared to AquaControl in all models but one (0.0 to −4.2° lat; model median −0.6° lat; Table 4). This happens even though the continent is symmetric with respect to the equator, implying that the southward ITCZ shift must result from a rectification of hemispheric asymmetries in the atmospheric energy budget.
The ITCZ shift varies zonally. All models simulate a southward ITCZ shift over land; this extends downstream over the ocean in some models, but in others the precipitation is shifted north for a span of 30-60° longitude. Overall, this shows that even a small continent that only covers 1/16th of Earth's surface has a clear impact on tropical rain belts and causes important regional variations from the zonal mean. Future studies are needed to elucidate by which mechanism the median change is achieved and why some models behave as outliers in their temperature or rainfall responses.
Response to CO2 and Insolation Changes
In this section we characterize the response of the AquaControl and LandControl climates to increased CO2 and to seasonal insolation changes. As in section 3 we focus on the annual-mean climate.

Response to Quadrupled CO2

Figure 12 shows the climate and hydrological sensitivities in the aquaplanet and land simulations. Climate sensitivity is calculated as half of the global-mean surface temperature change between the Control and 4xCO2 simulations. In response to increased CO2 the models warm with climate sensitivities of 1.5-4.8 K (model median 3.3 K) in the aquaplanet setup. The tropical continent impacts climate sensitivity in a way that is not robust across models. In most models (AM2.1, CAM3, CAM4, CAM5Nor, MetUM-CTL, MetUM-ENT, MIROC5, and CALTECH) land does not strongly impact climate sensitivity, but land increases climate sensitivity by 0.8 K in MPAS and decreases it by 0.5-0.7 K in CNRM-AM5, ECHAM6.1, ECHAM6.3, and LMDZ5A. TRACMIP climate sensitivities are about as large as the climate sensitivities reported for CMIP5 Earth system models in realistic setups, with a similar model spread [Flato et al., 2013]. This is despite the lack of positive radiative feedbacks from snow and sea ice and the lack of large continental landmasses, which tend to warm more than the ocean under global warming in realistic CMIP5 models [Sutton et al., 2007; Byrne and O'Gorman, 2013], and it suggests that the TRACMIP setup includes a strong positive feedback that is missing from realistic CMIP5 simulations, a hypothesis that deserves more attention in future studies. There is a weak indication that some of the model spread in climate sensitivity results from differences in the control temperatures, with climate sensitivity increasing for warmer reference climates (Figure 12a), consistent with previous work [e.g., Jonko et al., 2013; Caballero and Huber, 2013; Meraner et al., 2013].
However, a statistically significant correlation between the control temperature and climate sensitivity is only found for the land simulations, and for the aquaplanet simulations only if the MPAS model is excluded. As the models warm, global precipitation increases by 2.2% per kelvin of surface warming (Figure 12b), independent of the presence of land and in close agreement with realistic CMIP5 model simulations [Held and Soden, 2006; Fläschner et al., 2016].
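These two sensitivity metrics can be written down compactly. A minimal sketch, assuming a regular latitude-longitude grid and using an illustrative helper global_mean that is not from the paper:

```python
import numpy as np

def global_mean(field_latlon, lat=None):
    """Area-weighted (cos latitude) global mean of a lat-lon field."""
    if lat is None:
        lat = np.linspace(-89.5, 89.5, field_latlon.shape[0])
    w = np.cos(np.deg2rad(lat))[:, None]
    return float((field_latlon * w).sum() / (w.sum() * field_latlon.shape[1]))

def climate_sensitivity(ts_ctl, ts_4x):
    """Climate sensitivity estimated as half the global-mean surface
    temperature change between Control and 4xCO2 (the 4xCO2 forcing is
    roughly twice the 2xCO2 forcing)."""
    return 0.5 * (global_mean(ts_4x) - global_mean(ts_ctl))

def hydrological_sensitivity(pr_ctl, pr_4x, ts_ctl, ts_4x):
    """Percent precipitation change per kelvin of global warming
    (~2.2 %/K in the TRACMIP model median)."""
    d_pr = 100.0 * (global_mean(pr_4x) - global_mean(pr_ctl)) / global_mean(pr_ctl)
    d_ts = global_mean(ts_4x) - global_mean(ts_ctl)
    return d_pr / d_ts
```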
More interesting is the spatial pattern of change, as revealed for the model median in Figures 5-7 and for individual models in Figure 13, which surveys the zonal-mean climate change due to increasing CO2 in the aquaplanet simulations (Aqua4xCO2-AquaControl). In the model median, surface temperatures increase most strongly in high latitudes (Figure 13a), consistent with previous work [Pithan and Mauritsen, 2014] that has shown how high-latitude climate change is amplified by local temperature feedbacks, even in the absence of albedo feedbacks (recall that there is no ice in any of the TRACMIP simulations). Yet polar amplification is much stronger in the Northern Hemisphere than in the Southern Hemisphere, and a handful of models do not produce any polar amplification in the Southern Hemisphere (e.g., the LMDZ5A model). The lack of strong warming around Antarctica in recent observed climate change and in model projections for this century has been explained as primarily a transient response due to the slow heat uptake of a deep Southern Ocean [Marshall et al., 2014; Armour et al., 2016]. This is not what is happening in the TRACMIP simulations, as the anomalies are calculated at equilibrium and the prescribed slab ocean is a shallow 30 m globally. Instead, the explanation must lie with the balance of the atmospheric feedbacks that affect polar amplification: the water vapor, cloud, lapse-rate, and Planck feedbacks, as well as possible changes in atmospheric heat transport. This hypothesis is supported by the fact that polar amplification is equally strong in the Northern and Southern Hemispheres in the idealized CALTECH model, which does not take into account radiative feedbacks from clouds and water vapor.
Under CO2 quadrupling, the median annual-mean precipitation in the deep tropics intensifies and becomes more sharply peaked around the Northern Hemisphere maximum (Figure 6). This pattern does not simply follow the wet-get-wetter paradigm [Held and Soden, 2006] but involves changes in the tropical circulation (Figure 13b). In particular, the ITCZ shifts northward in essentially all models and all seasons (Figure 13c). The attendant mean meridional circulation is weakened outside the deep tropics (as expected from warming [Held and Soden, 2006]), but is strengthened in the northern deep tropics (Figure 7c), indicating a local strengthening of the ascending motion, consistent with the northward shift of the rain belt. The vertical reach of the Hadley cell is also increased, as expected, and the core of the subtropical jets is lifted accordingly. Changes in the tropospheric time-mean zonal wind are relatively small in the model median, but the extratropical eddy-driven jet experiences substantial shifts in some models, leading to a large model spread (Figure 13d). The Northern Hemisphere jet shifts poleward by −0.9 to +10.8° (model median +2.1°) and the Southern Hemisphere jet shifts poleward by +0.7 to +8.3° (model median +1.7°) in the comprehensive models. Interestingly, the idealized CALTECH model deviates from this picture, as it simulates a contraction instead of a widening of the circulation (the Northern and Southern Hemisphere jets shift equatorward by 5.4 and 5.0° lat, respectively). The contraction of the circulation in CALTECH is likely related to a much stronger warming of the poles in that model [Butler et al., 2010, Figure 13], and highlights the role that radiative interactions of clouds and water vapor play in the response of regional temperatures and the extratropical circulation to global climate change [Voigt and Shaw, 2015, 2016; Ceppi and Hartmann, 2016].
Although the model spread in the ITCZ shift is considerable (Figure 13c and Table 4), it is clear that the annual-mean northward shift of the ITCZ is mostly accomplished by anomalies during the first half of the year, when the ITCZ is in the Southern Hemisphere in the control climate. That is, excursions of the ITCZ into the Southern Hemisphere are weakened in a warmer world. To verify whether this connection between warmer temperatures and a more northerly confinement of the ITCZ holds across models and basic states, we have plotted the annual-mean and seasonal excursions of the ITCZ as a function of global-mean temperature in the AquaControl and Aqua4xCO2 simulations (Figure 14a) and in the LandControl and Land4xCO2 simulations (Figure 14b) (we obtain the same qualitative result when we plot the ITCZ versus annual-mean, tropical-mean surface temperature). Indeed, in both setups, the annual-mean position of the ITCZ (colored numbers indicating single experiments) is further north the warmer the global-mean temperature, and this shift is mostly accomplished through a northward shift of the southernmost seasonal reach of the ITCZ (open dots). The northernmost reach of the ITCZ also shifts north with a warmer climate, although the response is more modest. The same qualitative behavior is seen with or without the presence of the tropical continent, but the migration of the southern edge is less steep with land (0.54°/K, as opposed to 0.81°/K for the Aqua simulations). Contractions in the width of the ITCZ have also been observed in other circumstances, models, and setups. These studies have suggested several mechanisms by which convective zones might shrink with a warming climate, including increased upper-tropospheric static stability [Bony et al., 2016], an upped ante for convection because of low-level inflow of relatively dry air, cloud-radiative changes, and changes in energy transport by the Hadley circulation and transient eddies [Byrne and Schneider, 2016]. It remains to be investigated whether one or several of these mechanisms explain the contraction seen in TRACMIP. The reason why the southern edge migrates more than the northern edge might be a different balance of these mechanisms, it might reflect a dynamic limit to how far poleward the northern edge of the ITCZ can migrate in TRACMIP given Earth's rotation rate (S. Faulk et al., Dynamical constraints on the ITCZ extent in planetary atmospheres, submitted to Journal of the Atmospheric Sciences, 2016), or it might be better understood in terms of warmer-get-wetter and seasonal amplification mechanisms [Huang et al., 2013].
One motivation for TRACMIP is to investigate whether the breaking of the zonal symmetry by land changes not only the basic state of tropical precipitation, but also its sensitivity to external forcings. We have mentioned above how the ITCZ in the Land simulations displays the same qualitative behavior as in the Aqua simulations, but that the displacement of the southern edge is weaker. In Figure 15 we show in a more quantitative way how the annual-mean, zonal-mean precipitation response to CO2 quadrupling is affected by the presence of land. Figure 15b shows how CO2 forces a northward shift of the model-median rainband at all longitudes, including over the "jello" continent. The model-median anomalies tend to be slightly stronger to the east of the continent, but there is no model-robust signal in the zonal distribution of the anomalies. For example, the precipitation increases most strongly over the Northern Hemisphere continent in CAM4 and CAM5Nor, but over the Northern Hemisphere ocean in ECHAM6.1 and ECHAM6.3 (not shown). These differences likely arise from differences in the LandControl climate as well as from a different response of clouds and convection. Irrespective of the lack of a robust signal in the zonal pattern, models agree qualitatively on how land impacts the zonal-mean precipitation response. In the zonal mean, the anomalous rainfall dipole under increased CO2 is similar in the Aqua and Land simulations, both in terms of median location and model spread (compare Figures 15a and 15c). However, the magnitude of the response in the Land simulations is substantially muted and the ITCZ shift is smaller in many of the models (Table 4), even though land occupies only 6% of Earth's surface in TRACMIP (12% of the tropical longitudes). This reduction is a consequence of both the weak anomalies over the continent itself and weaker anomalies over the ocean, especially downstream of the continent (see also Figures 6c and 6d). The presence of land thus strongly modulates the sensitivity of the ITCZ to changes in CO2 even when only a small percentage of Earth is covered by land (5 times less than in the present-day climate). While more work is needed to understand this behavior, it suggests that caution should be applied when using idealized aquaplanet simulations to understand the sensitivity of the present-day climate to greenhouse gas forcing.
Response to Changes in Insolation
We now briefly describe the response to a change in seasonal insolation by comparing the LandControl and LandOrbit simulations. The seasonal insolation change was shown in Figure 3. Insolation is reduced during boreal summer (JJA) and increased during austral summer (DJF), leading to less seasonal insolation contrast in the Northern Hemisphere in LandOrbit compared to LandControl and more seasonal insolation contrast in the Southern Hemisphere. The insolation change of LandOrbit-LandControl is qualitatively similar to the insolation change between the present day and the mid-Holocene. Figure 16 shows the change of seasonal precipitation and ITCZ location calculated separately over ocean and land. In the model median, the orbital forcing causes ocean precipitation to increase south of the control ITCZ (black line in Figure 16a) and to decrease to the north of it from March through September (marginally in October). As a result, the ocean ITCZ shifts southward during these months in almost all models (Figure 16c). During the rest of the year, the oceanic precipitation and the ITCZ position show little change in the model median, and the ITCZ shift is not robust across models. The response of land precipitation and of the land ITCZ is qualitatively different from the response over ocean (Figures 16b and 16d). Model-median precipitation changes are concentrated to the north of the climatological rainband and tend to be symmetric with respect to the equator, with reduced precipitation during March-July (roughly when insolation is reduced) and increased precipitation during October-February (roughly when insolation is increased). This is consistent with insolation changes driving changes in monsoonal circulations. In contrast to ocean regions, and consistent with the model differences in the Land simulations, the land ITCZ does not shift robustly across models in any of the seasons, and models differ in the direction of the land ITCZ shift.
In the annual-mean zonal mean (average over all longitudes), all models shift their ITCZ southward (−0.2 to −1.0° lat; model median −0.6° lat; Table 4). Importantly, the zonal-mean shift is dominated by a robust southward shift over the ocean (−0.4 to −0.7° lat; model median −0.6° lat), while the ITCZ response over land is not robust across models (0.2 to −0.7° lat; model median −0.2° lat). This suggests that zonal-mean frameworks of atmospheric energetics and ITCZ shifts are not sufficient to understand past regional precipitation changes, such as the greening of the Sahara during the early and mid-Holocene (11,000-5000 BP) [Hoelzmann et al., 1998; Kuper and Kröpelin, 2006]. We hope that the TRACMIP orbital simulations, in combination with the quadrupled CO2 simulations, will prove helpful in understanding how zonal-mean frameworks can be extended to explain past and future regional changes.
Conclusions
This paper has presented the new Tropical Rain belts with an Annual cycle and a Continent Model Intercomparison Project, TRACMIP. TRACMIP is a community effort that is motivated by the desire to better understand the dynamics of tropical rain belts, how they respond to internal (seasonal and diurnal insolation cycles) and external (CO2 and orbital changes) forcings, and how zonal-mean frameworks can be extended to understand regional rain belt changes. The suite of TRACMIP experiments includes an aquaplanet configuration and one with an idealized tropical continent. In all cases the lower boundary allows for thermodynamic coupling with the atmosphere and a closed surface energy balance, an important factor for modeling tropical rainfall [Kang and Held, 2012]. TRACMIP thus fills the gap in the CMIP5 model hierarchy between the fixed-SST aquaplanet simulations and coupled simulations in realistic setups. Accordingly, the TRACMIP simulations are much closer than the CMIP5 fixed-SST aquaplanets to the observed tropical rainfall and global circulation patterns, suggesting that they can be used as a simple analog of realistic model configurations.
In this survey of the main aspects of the TRACMIP simulations we have focused mostly on the ensemble mean response of the monthly climatology or of the annual mean to changes in configuration and external forcings. We have highlighted how the presence of a "jello" continent changes both the basic state and the sensitivity to greenhouse forcing of oceanic areas away from the continent, how quadrupling CO 2 leads to an amplification of the asymmetry between the Northern and the Southern Hemispheres both in temperature and precipitation, and how differently land and ocean respond to changes in the seasonality of insolation. The spread across models is, nonetheless, just as interesting. In particular, we have noted how the range of climate sensitivity is as large for the aquaplanets as it is for standard CMIP5 simulations and how the scatter would seem to suggest that a warmer climate is also a more sensitive climate. The way in which the simplified climate model differs from the comprehensive models is a testament to the importance of clouds and convective processes. These and many other aspects of the ensemble scatter invite deeper investigations. For example, why is the land response to orbital changes much less robust than the oceanic changes? What makes one climate model an outlier by one measure, but not by any other?
In this paper, we have only presented results that relate to seasonal or longer timescales, but TRACMIP data can be used to investigate aspects of the climate that range from the diurnal cycle to the synoptic scale and the scale of intraseasonal variability. And although the original motivation for TRACMIP was the investigation of the tropical rain belts, its use extends beyond this topic. Indeed, we hope that TRACMIP will also resonate with the extratropical community and will provide a helpful perspective on extratropical jet streams and storm tracks. TRACMIP is an ongoing effort that combines a hierarchy of boundary conditions with a hierarchy of climate models. As such we hope that the community that has coalesced around this project will grow in both contributors and users.

Figure 1. Tropical precipitation in two versions of ECHAM6.1 that differ only in the entrainment/detrainment rate of moist convection (see Moebis and Stevens [2012] for details). While this difference has a large impact for fixed SSTs, its impact is strongly muted for interactive SSTs and a seasonal cycle. The simulations in Figure 1a use the CMIP5 aquaplanet setup [Medeiros et al., 2015]; those in Figure 1b use the setup of Voigt et al. [2014a, 2014b] with a slab ocean depth of 30 m and a peak ocean heat transport of about 2 PW in the subtropics.

Figure 2. Global surface albedo and surface temperature in the AquaControl experiment. The surface albedo is calculated as the ratio of the global-mean, time-mean upward and downward shortwave radiative fluxes at the surface.

Figure 4. (a) q-flux over ocean grid boxes and (b) associated total meridional ocean heat transport. The TRACMIP q-flux is a fourth-order polynomial fit to the observed q-flux shown in gray. The q-flux for simulations with land is set to zero over land, which requires a small decrease of the q-flux compared to the aquaplanet simulations to ensure that the global-mean q-flux is zero. As a result of replacing some of the tropical ocean grid boxes with land, the total ocean energy transport is slightly reduced in simulations with land compared to aquaplanet simulations.

Figure 10. Impact of the tropical continent on surface temperature: annual-mean surface temperature difference between LandControl and AquaControl. The continent is indicated by the gray box.

Figure 11. Impact of the tropical continent on precipitation: annual-mean precipitation difference between LandControl and AquaControl. The continent is indicated by the gray box. To highlight the impact on tropical precipitation, the plot is restricted to latitudes between 40°N and 40°S. The blue and red lines show the location of the precipitation centroid (defined between 30°N/30°S) at every longitude in AquaControl and LandControl, respectively.

Figure 12. Climate sensitivity and hydrological sensitivity in the TRACMIP ensemble. (a) Climate sensitivity as estimated by halving the global surface temperature change between the Control and 4xCO2 experiments for aquaplanet simulations (no underscore) and land simulations (with underscore). The numbers give the correlation coefficient and P value. For the aquaplanet simulations, excluding the MPAS model (model 13) increases the correlation coefficient to 0.54, which is statistically significant (P = 0.06). (b) Precipitation change in response to quadrupling CO2 relative to the control precipitation. The line corresponds to a 2.2%/K precipitation increase, obtained from a linear regression of the precipitation change on the temperature change.

Figure 13. Response of the zonal-mean climate to a quadrupling of CO2 in the aquaplanet simulations (difference between Aqua4xCO2 and AquaControl): (a) annual-mean surface temperature, (b) annual-mean precipitation, (c) seasonal evolution of the ITCZ position, and (d) annual-mean zonal wind at 850 hPa. Individual models are shown by the colored lines; the model median is shown by the thick black line.

Figure 15. Annual-mean precipitation response between 40°N and 40°S to increased CO2 in aquaplanet and land simulations. (a) Zonal-mean response in the aquaplanet setup, (b) longitude-latitude response of the model-median precipitation in the land setup, (c) zonal-mean response in the land setup, and (d) difference between the zonal-mean responses in the land and aquaplanet setups. In Figures 15a, 15c, and 15d, models are colored according to the color coding introduced in Figure 2; the model median is shown by the thick black line. In Figure 15b, the black line is the model-median ITCZ in LandControl.

Table 4 notes. (a) Simulations with land include a small correction of the q-flux over ocean, which is implemented as a spatially uniform decrease of p0 compared to the aquaplanet simulation. The ITCZ position is calculated from the zonal-mean, time-mean precipitation as the latitude of the precipitation centroid between 30°N and 30°S. The Aqua4xCO2 values are given as the change with respect to AquaControl, and the Land4xCO2 and LandOrbit values as the change with respect to LandControl.
"Environmental Science",
"Physics"
] |
Simultaneous loss of interlayer coherence and long-range magnetism in quasi-two-dimensional PdCrO2
In many layered metals, coherent propagation of electronic excitations is often confined to the highly conducting planes. While strong electron correlations and/or proximity to an ordered phase are believed to be the drivers of this electron confinement, it is still not known what triggers the loss of interlayer coherence in a number of layered systems with strong magnetic fluctuations, such as cuprates. Here, we show that a definitive signature of interlayer coherence in the metallic-layered triangular antiferromagnet PdCrO2 vanishes at the Néel transition temperature. Comparison with the relevant energy scales and with the isostructural non-magnetic PdCoO2 reveals that the interlayer incoherence is driven by the growth of short-range magnetic fluctuations. This establishes a connection between long-range order and interlayer coherence in PdCrO2 and suggests that in many other low-dimensional conductors, incoherent interlayer transport also arises from the strong interaction between the (tunnelling) electrons and fluctuations of some underlying order.
A previous interlayer magnetoresistance study of PdCoO2 by Takatsu et al. [1] showed a striking angle dependence upon azimuthal rotation of the magnetic field within the conducting plane, which was attributed to the high mobility of the conduction electrons and to fine details of the hexagonal Fermi surface of PdCoO2. We note, however, that the data reported in the Takatsu paper were taken with the magnetic field aligned approximately 3° away from the conducting plane. As shown in Figure 2d of the main manuscript, the Hanasaki peak is suppressed completely once the field is rotated 2° away from the conducting plane. Thus, the sharp peaks in the interlayer magnetoresistance reported in Ref. [1] are not associated with the Hanasaki peaks discussed in the main paper.
Additionally, it should be pointed out that while the polar ADMR is extremely sensitive to details of the Fermi surface topology (as can be seen by inspection of the polar ADMR curves for PdCrO2, Figure 2b of the main manuscript, and PdCoO2, Figure 5a of [2]), the Hanasaki peak itself is similar in form in both systems. Indeed, to be visible, the Hanasaki peak requires only the existence of a three-dimensional Fermi surface (provided that ωcτ is large enough), and its width is determined uniquely by the ratio between kF and t⊥. It is therefore only weakly dependent on other details of the band structure.

Supplementary Figure 1. The c-axis resistance of different samples as a function of temperature (K). The highest-quality sample, with a residual resistivity ratio of 108, was chosen for our angular-dependence measurements.
SUPPLEMENTARY NOTE 2. BACKGROUND SUBTRACTION
For analysis of the coherence peak sharpness, the resistivity values were normalized to a common scale as given by

ρN(T, θ, H) = ρ(T, θ, H) / ρ(T, θ = 0°, H).

The broad quasi-sinusoidal background was then removed by subtracting the ρN(T, θ, H) curve at 44 K, which has no AMRO nor coherence peak, via

ρN.B.(T, θ, H) = ρN(T, θ, H) − α ρN(44 K, θ, H),

where the multiplicative factor α broadly corrects for the change in ωcτ with temperature and 'N.B.' stands for 'no background'. The factor α was found by empirically scaling the 44 K data until α ρN(44 K, θ, H) matched ρN(T, θ, H) in the featureless low-angle region (0-30°). This results in a ρN.B.(T, θ, H) that is approximately zero at low angles, with a series of peaks at higher angles. Thus one has a complete removal of the quasi-sinusoidal background, leaving only the AMRO features and the Hanasaki coherence peak that are observed at higher angles. As an example, see Supplementary Figure 3.
Using the 44 K curve to subtract the background in this way does not affect the analysis of the sharpness of the coherence peak. This is shown in the plot of d²ρ/dθ² in Figure 3b of the main manuscript, where d²ρ/dθ² at θ = 90° is essentially zero for all four data points above 37.5 K. This confirms that the 44 K curve does not have a coherence peak, and implies a successful subtraction of the quasi-sinusoidal background in a way that does not influence our analysis of the temperature dependence of the Hanasaki peak.
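A minimal sketch of this procedure, assuming already-normalized curves and using a least-squares fit over the low-angle window as one concrete realization of the "empirical" scaling described above:

```python
import numpy as np

def remove_background(theta, rho_T, rho_44K, low_angle=(0.0, 30.0)):
    """Subtract the scaled, featureless 44 K curve from a normalized
    ADMR curve so that the result is ~0 at low angles, leaving only the
    AMRO features and the Hanasaki coherence peak.

    The least-squares choice of alpha over the low-angle window is an
    assumption; the paper states only that alpha was found empirically.
    """
    mask = (theta >= low_angle[0]) & (theta <= low_angle[1])
    alpha = (np.dot(rho_T[mask], rho_44K[mask])
             / np.dot(rho_44K[mask], rho_44K[mask]))
    return rho_T - alpha * rho_44K
```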
SUPPLEMENTARY NOTE 3. ESTIMATE OF ħ/τ AND ωcτ
The mean free path ℓ can be estimated from the Drude formulae for a simple cylindrical Fermi surface,

1/ρab = n e² τ / m*,   n = kF² / (2π d),   ℓ = ħ kF τ / m*,

which combine to the effective-mass-independent expression

ℓ = 2π ħ d / (e² kF ρab),

where n is the electronic density, m* is the effective mass, d = 6.03 Å is the interplanar distance, e is the electron charge and kF is the Fermi wave-vector. Thus, given that ρab(37.5 K) = 0.7(1) µΩ cm [3, 4], and assuming that the electronic transport is dominated by the biggest non-breakdown orbit (γ), for which kF = 0.57(3) Å⁻¹, both ħ/τ and ωcτ follow directly. Finally, it should be noted that in contrast to quantum oscillations measured by the de Haas-van Alphen effect, whose amplitude (and hence the extracted ωcτ) can be suppressed by both small- and large-angle scattering events, angle-dependent magnetoresistance (ADMR), being a transport property, is not affected or degraded by small-angle scattering. This is best illustrated in Ref. [5], where the polar ADMR in the interplane magnetoresistance of an overdoped cuprate could be fitted by precisely the same ωcτ value that is obtained from in-plane Hall effect measurements. Thus, the ωcτ product obtained from the interlayer magnetoresistance is, in principle, identical to that estimated from zero-field resistivity measurements.
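As a numerical illustration of these formulae, the sketch below evaluates ℓ and ωcτ for the values quoted above; the magnetic field of 10 T is an illustrative choice, not a value taken from the experiment.

```python
import numpy as np

hbar = 1.0546e-34   # J s
e = 1.6022e-19      # C

def mean_free_path(rho_ab, k_f, d):
    """l = 2*pi*hbar*d / (e^2 * k_f * rho_ab): the m*-independent
    combination of the Drude formulae for a cylindrical Fermi surface."""
    return 2 * np.pi * hbar * d / (e**2 * k_f * rho_ab)

def omega_c_tau(B, l, k_f):
    """omega_c * tau = e * B * l / (hbar * k_f) for the same geometry."""
    return e * B * l / (hbar * k_f)

# Values quoted in the text:
rho_ab = 0.7e-8   # Ohm m  (0.7 micro-Ohm cm at 37.5 K)
k_f = 0.57e10     # 1/m    (0.57 inverse angstroms, gamma orbit)
d = 6.03e-10      # m      (interplanar distance)

l = mean_free_path(rho_ab, k_f, d)     # ~4e-7 m
print(l, omega_c_tau(10.0, l, k_f))    # omega_c*tau of order 1 at 10 T
```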
SUPPLEMENTARY NOTE 4. FIELD-DEPENDENCE OF THE HANASAKI PEAK
It is interesting to investigate the field dependence of the amplitude of the Hanasaki peak, since any strong variation in the scattering rate with field, e.g. due to a change in the spin fluctuation spectrum, could be reflected in a departure from the expected quadratic dependence of the peak amplitude on field. In this section, we thus compare the field dependence of the Hanasaki peak in PdCrO2 with that measured in non-magnetic PdCoO2. The corresponding amplitudes are plotted as a function of field in Supplementary Figure 4. We find that the field dependence in both systems is quadratic in field to within our experimental uncertainty. This finding suggests that the antiferromagnetic fluctuation rate does not vary appreciably, at least within the field range of our experiments. With regard to the cuprates, where antiferromagnetic fluctuations are clearly present, their influence on the magnetotransport, vis-à-vis the field dependence of the magnetoresistance, for example, is also found to be negligible (see, e.g., Ref. [6]), despite the fact that it may have a significant impact on its temperature dependence [6].
"Physics"
] |
Quantitative investigation of thermal evolution and graphitisation of diamond abrasives in powder bed fusion-laser beam of metal-matrix diamond composites
ABSTRACT Preventing thermal damage to diamond abrasives is the major challenge for diamond composites in the field of super-hard tools made by laser additive manufacturing. In the present work, we established a quantitative framework to accurately evaluate the thermal damage behaviour and the relevant microstructure-performance characteristics, using a CuSn10-diamond composite fabricated by powder bed fusion-laser beam (PBF-LB). By simulating the thermal history of diamond in the molten pool and characterising the microstructure, a critical diamond-graphitisation temperature of 1491.6°C was obtained. Below the critical temperature, the composite showed no diamond graphitisation and exhibited abrasive wear with a wear loss rate below 0.01%. Increasing temperature led to aggravated graphitisation, with the ID:IG value changing from 2.00 to 0.57 as the temperature increased from 1491.6°C to 1896.1°C, resulting in the wear mechanism changing from adhesive wear to three-body abrasion and the wear loss rate rising from 0.01% to 0.73%. Integrating the results of simulation, microstructures, and wear properties, the graphitisation threshold of diamond in PBF-LB was revealed and the quantitative relationship of 'PBF-LB parameters - Temperature - Graphitisation degree - Wear resistance' of the metal-matrix diamond composites was established.
Introduction
Super-hard diamond composites play an irreplaceable role in the precision manufacturing of hard and brittle materials owing to their excellent hardness and wear resistance. In recent years, with the development of the aerospace industry, electronic communication, and high-end manufacturing technologies, stringent requirements have been imposed on the precision grinding of difficult-to-process materials such as high-performance titanium alloys for aerospace applications, carbon fiber composite materials, 3C electronic-communication ceramics, and high-end chips (Lv et al. 2018). Laser additive manufacturing (LAM) can be employed (Peng et al. 2021) to form components with three-dimensional structures and complex shapes via the layer-by-layer accumulation of materials. This approach has the characteristics of a large design space, a simple procedure, and a high material utilisation rate, and it provides a new means of achieving structure-function integrated manufacturing of diamond abrasive tools (Gu et al. 2021), such as chip-holding holes and inner flow channel structures, which are important means of improving the performance of diamond tools with high precision, high efficiency, and low thermal damage (Wu et al. 2019; Tian et al. 2019).
The performance of metal-matrix diamond composites made by LAM is significantly affected by the interaction between the diamond and the laser or the molten pool, owing to the unique characteristics of diamond.
On the one hand, diamond interacts with direct laser irradiation. When diamond particles are directly irradiated by a high-energy laser beam, numerous electrons are ionised and a graphitisation transformation occurs (Olejniczak et al. 2019). Thus far, laser machining has been used in engineering applications to polish and cut diamond, mostly by using femtosecond and picosecond pulsed lasers (Cai et al. 2020; Zhang et al. 2021b; Li et al. 2020).
On the other hand, diamond is affected by contact with the high-temperature molten pool during LAM. With the development of additive manufacturing, metal-matrix diamond composites fabricated by LAM have gradually emerged as a research focus, involving processes such as powder bed fusion-laser beam (PBF-LB), laser welding, and laser cladding.
However, owing to the thermal instability of diamond, thermal damage such as oxidation, graphitisation, and ablation is likely to occur when diamond contacts the high-temperature molten pool. Several studies (Fang et al. 2020; Su et al. 2020; Zhang et al. 2021a) have shown that graphitisation occurs due to direct laser irradiation in diamond composites fabricated by LAM. More often, though, the thermal damage of diamond occurs during direct contact with the molten pool. The rapid melting and the uneven energy distribution of the Gaussian laser heat source in LAM lead to an extremely unbalanced temperature field in the molten pool. As the instantaneous temperature can exceed 2000°C, the diamond particles are subjected to severe high-temperature heat conduction and thermal shock. Zhou, Li, and Gao (2022) found that the remelting, heat accumulation, and secondary heating occurring in multi-track scanning increased the thermal damage to diamond. Rommel et al. (2016, 2017) studied the interfacial reactions of diamond and molten metal, showing that thermal damage and interfacial reactions occurred only in the diamond particles in contact with the molten pool, not in the diamond particles directly irradiated by the laser. Iravani et al. (2012) found that the presence of Fe and Ni as catalysts aggravates the graphitisation of diamond.
Clearly, the thermal damage behaviour of diamond is the key factor for LAM technologies to be widely applied in the fabrication of diamond composites. However, research on the thermal evolution of diamond particles and the damage mechanism during LAM is rarely reported. In the present study, CuSn10-diamond composites were prepared by employing PBF-LB technology. The CuSn10 alloy has no graphitisation catalyst or carbide-forming elements; therefore, the study of thermal damage to diamond can be focused on the high-temperature molten pool. The thermal evolution of diamond throughout the PBF-LB process was systematically investigated by building a simulated temperature field of single diamond particles. The effect of the molten pool temperature on the graphitisation of diamond abrasives was described quantitatively and verified by characterisation of microstructures and wear properties. The quantitative relationship of 'PBF-LB parameters - Temperature of diamond abrasives - Graphitisation degree - Wear resistance' of the diamond composites was established. This study not only reveals the thermal evolution and damage behaviour of diamond abrasives in PBF-LB, but also provides a theoretical model for the fabrication of metal-matrix diamond composites via LAM technologies.
The feedstock and PBF-LB
The gas-atomised CuSn10 alloy powders (15-53 μm) and diamond particles (MDB4, 75-90 μm) were used as the feedstock; their morphologies are shown in Figure 1. The CuSn10 alloy powders have good sphericity and fluidity, which is beneficial to efficient spreading during PBF-LB. The alloy powders and diamond particles were uniformly mixed in a 3D mixer for 5 h, with a diamond concentration of 12.5 vol%. The mixed powders were dried at 80°C for 24 h. The CuSn10-diamond composite samples (8 mm × 8 mm × 8 mm) were fabricated with PBF-LB equipment (WXL-120E, Xiamen Wuxinglong Technology Co., Ltd) using a continuous-wave laser beam with an emission wavelength of 1064 nm, a maximum power of 500 W, and a beam diameter of 50 μm. The composites were fabricated on a pure copper build platform (116 mm × 116 mm × 20 mm). During the PBF-LB process, the temperature of the substrate was maintained at about 100°C and the oxygen content in the build chamber was kept ≤400 ppm. The PBF-LB process parameters are listed in Table 1, and the corresponding laser energy densities were calculated according to the following equation:

E = P / (v × H × L),

where P is the laser power, L is the layer thickness, H is the hatch space, and v is the scanning speed. The samples were cut from the substrate parallel to the X-Y plane by wire electrical discharge machining.
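This volumetric energy density is a one-line calculation; the sketch below evaluates it for one parameter combination from Table 1 (the choice of combination is ours, for illustration).

```python
def laser_energy_density(P, v, H, L):
    """Volumetric laser energy density E = P / (v * H * L) in J/mm^3,
    with P in W, v in mm/s, and H (hatch space) and L (layer thickness)
    in mm."""
    return P / (v * H * L)

# Example evaluation for P = 160 W, v = 800 mm/s, H = 0.05 mm, L = 0.07 mm:
print(laser_energy_density(P=160, v=800, H=0.05, L=0.07))  # ~57 J/mm^3
```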
Numerical model of LAM
The ANSYS Workbench finite element software was used to simulate the temperature distribution of the diamond particle during PBF-LB; the specific steps of the finite element modelling are given in the Supplementary Information. The properties of the materials, including density, melting temperature, thermal conductivity and specific heat, were determined with the JMatPro software, as shown in Table S1.
The heat transfer was governed by the 3D transient heat conduction equation in a thermally isotropic material. The loaded heat source is a moving Gaussian heat source, which is the main heat source model in most simulations of laser processing (AlMangour et al. 2018; Yan et al. 2018).
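The paper defers the full model to the Supplementary Information, so the following is only a minimal sketch of one common form of a moving Gaussian surface heat flux; the absorptivity value and the use of a surface (rather than volumetric) source are our assumptions.

```python
import numpy as np

def gaussian_flux(x, y, t, P=180.0, v=0.7, A=0.35, r0=25e-6):
    """Moving Gaussian surface heat flux q(x, y, t) in W/m^2.

    P  : laser power, W
    v  : scanning speed, m/s (0.7 m/s = 700 mm/s)
    A  : powder-bed absorptivity (assumed value)
    r0 : effective beam radius, m (from the 50 um spot diameter)
    The beam centre travels along x at speed v.
    """
    rx = x - v * t
    return (2.0 * A * P / (np.pi * r0**2)) * np.exp(-2.0 * (rx**2 + y**2) / r0**2)

# Peak flux directly under the beam centre:
print(f"peak flux: {gaussian_flux(0.0, 0.0, 0.0):.3e} W/m^2")
```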
The model geometry consisted of a mixed powder bed on a pure copper build platform. The CuSn10 alloy powder bed model measured 1000 μm × 100 μm × 4000 μm; the diamond particle model is an 80 μm × 80 μm × 80 μm cube with its eight corners removed, embedded in the powder bed model.
Microstructures and properties characterisation
The microstructure and chemical composition of the feedstock and the PBF-LBed samples were analysed by scanning electron microscopy (SEM) in back-scattering mode (FEI Quanta FEG 250, Czech Republic) with an energy-dispersive spectrometry (EDS) probe. The Archimedes method was used to measure the real density of the samples, and the relative density was calculated by dividing the real density by the theoretical one. X-ray diffractometry (XRD, Bruker D8 Advance, Germany) with Cu-Kα radiation, scanning 30-100° at 8°/min, was used to analyse the phase structures of the composite samples.
The graphitisation of the diamonds was analysed with a Renishaw inVia Raman microscope system at a wavelength of 532 nm over a detection range of 500-2000 cm⁻¹ Raman shift. The wear properties were tested on a high-speed reciprocating friction testing machine with a friction time of 900 s, a 50 N load, a 15 Hz frequency and a 5 mm stroke at room temperature; Si3N4 balls (6 mm diameter) were used as counterparts. A NanoMap500DLS dual-mode profiler was used to measure two-dimensional wear scar profiles.
PBF-LB formability of the CuSn10-diamond composites
Figure 2 shows the PBF-LB formability of the CuSn10-diamond composites. Balling forms because the diamond particles increase the viscosity of the molten metal and limit its fluidity, and because the wettability between the CuSn10 alloy melt and diamond is poor (Constantin et al. 2021). According to the balling degree, the samples can be divided into three categories:

- Samples with spheroidised sizes of less than 500 μm were defined as well-formed (Figure 2(b)).
- Samples with spheroidised sizes of more than 500 μm and with clear pits and slag were defined as poorly formed (Figure 2(c)).
- Samples that fell off the substrate directly because of warping, cracking or powder scraping during forming were defined as not formed (Figure 2(d)).
Considering the 29 PBF-LBed samples in Figure 2(a), as the laser power (P) increases and the scanning speed (v) decreases, the higher laser energy input leads to a more intense flow and a longer duration of the molten pool. These characteristics resulted in more severe balling and poorer formation. According to the formability, the well-formed samples, obtained at 120-180 W laser power and 700-1100 mm/s scanning speed, were selected for further experimental exploration.

Table 1. PBF-LB process parameters for diamond composites.
Process parameters: Values
Laser power (P)/W: 120, 140, 160, 180, 200, 250, 300
Scanning speed (v)/(mm/s): 500, 700, 800, 900, 1100
Hatch space/mm: 0.05
Layer thickness/mm: 0.07
In addition, the well-formed samples exhibited a relative density of only 80.07-86.98%, far below that of many materials for which full density is pursued by PBF-LB (Khorasani et al. 2019). However, porosity plays an important role in the grinding performance of diamond abrasive tools. A porous structure has more interconnected microchannels and can therefore provide sufficient space for the grinding fluid and reduce the temperature of the grinding zone, which is beneficial for cooling and lubrication. Moreover, a porous structure increases the debris storage space (Xu, Liao, and Weng 2011; Hou et al. 2012). Therefore, unlike in the forming of metallic materials by PBF-LB, density is not the only criterion for assessing the forming quality of metal-diamond composites, and the >80% relative density of the CuSn10-diamond composites is acceptable.
Thermal evolution of the diamond particles
Because of the extremely rapid melting-solidifying process during PBF-LB, it is difficult to measure the temperature of the molten pool experimentally. Therefore, an ANSYS finite element simulation was performed to establish a 3D temperature field model of the molten pool and clarify the influence of the molten pool temperature on the thermal evolution of the diamond particle. Taking the sample with P = 180 W and v = 700 mm/s as an example, Figure 3 shows the temperature distribution of the diamond particle in the molten pool at specific times as the molten pool moves forward, which can be divided into three stages. Figure 4 presents the thermal evolution of diamond particles at different process parameters, revealing the changes in temperature over time. Two peak temperatures clearly appear, corresponding to Figure 4(b,c). The simulation results for the diamond peak temperature under different PBF-LB parameters in Table 2 show that the second peak temperature is distinctly higher than the first because of heat accumulation.
A relationship between temperature and process parameters was then established. The laser energy density has conventionally been adopted as the key factor for evaluating the temperature of the molten pool (Yang et al. 2018; Shi et al. 2022). However, the temperatures of the diamond particles differed significantly at the same energy density. As shown in Figure 5, samples Nos. 5, 10 and 15 share the same laser energy density of 57.1 J/cm³, yet the second peak temperature of the diamond particles in sample 15 was clearly the highest, reaching 1676.5 °C, which was 184.9 °C higher than that of sample 5. Therefore, the traditional method of evaluating the molten pool or diamond temperature by laser energy density is not suitable for the fabrication of diamond composites by PBF-LB. Thus, the temperature of the diamond particles, rather than the process parameters or the laser energy density, is used as the standard in the subsequent analysis of the thermal evolution and thermal damage of diamond.
Microstructures
To analyse the influence of the high molten pool temperature on the thermal damage of the diamond particles, the microstructures of the CuSn10-diamond composites prepared by PBF-LB were characterised. The microstructures of the diamond particles are shown in Figure 6. According to the morphology and degree of thermal damage of the diamond, the processing window of the 16 well-formed samples can be divided into three regions: Area 1 (no damage), Area 2 (light damage) and Area 3 (severe damage). The highest diamond temperatures in the three areas are 1080.0-1420.1 °C, 1491.6-1539.4 °C and 1585.3-1896.1 °C, respectively.
In Area 1, the diamond particles remained intact with smooth surfaces, and no obvious thermal damage was observed. With increasing diamond temperature (increasing P and decreasing v), local structural transformation of the diamond occurred, mainly reflected in local fragmentation of the diamond particles and the gradual disappearance of edges and corners (sample Nos. 5, 11 and 16 in Area 2). In Area 3, the diamond underwent a distinct structural transformation: the original hexa-octahedral crystal morphology was no longer retained and all crystal planes were coarsened (sample Nos. 10, 14 and 15). Most of the diamond particles exhibited serious thermal damage, mainly reflected in cleavage fracture of the whole crystal (sample Nos. 9 and 13).
To explore possible thermal damage or phase transformations in the CuSn10-diamond composite samples, samples Nos. 8, 13 and 16, corresponding to diamond temperatures of 1242.1 °C, 1539.4 °C and 1891.1 °C, were selected from the no damage, light damage and severe damage areas, respectively, for XRD characterisation, as shown in Figure 7. The CuSn10-diamond composite exhibited the α(Cu, Sn) solid solution and diamond phases. Since neither Cu nor Sn reacts with C, no carbide phase was formed during PBF-LB. However, XRD cannot distinguish the allotropes of carbon, including sp3-hybridised diamond, sp2-hybridised graphite and amorphous carbon phases; therefore, the graphitisation transformation of diamond was further characterised by Raman spectroscopy.
Raman spectroscopy was used to identify the graphitisation of the diamond in the CuSn10-diamond composites prepared by PBF-LB. As shown in Figure 8(a,b), the diamond in the composite from Area 1 (1242.1 °C) maintained an intact crystal morphology. Only the characteristic peak of sp3-hybridised diamond at 1331.9 cm⁻¹ was observed, indicating that no graphitisation of diamond occurred. However, the characteristic peak of sp2-hybridised graphite at 1580 cm⁻¹ was detected in the samples from both the light and the severe damage areas, as shown in Figure 8(c,e), demonstrating that graphitisation of the diamond particles had occurred. Figure 8(c) reveals that the diamond (1343.6 cm⁻¹) and graphite (1593.3 cm⁻¹) characteristic peaks are shifted to different degrees, which may be due to lattice distortion caused by the different rapid cooling rates during PBF-LB. Moreover, as the temperature of the diamond increased, the degree of graphitisation intensified, and the graphite characteristic peak is clearly visible in Figure 8(e). Figure 8(d,f) shows that the I_D/I_G area ratios are about 1.10 and 0.57.

The interfacial bonding state of CuSn10/diamond determines the retention force on the diamond and affects the performance of the composite. To further explore the interfacial diffusion behaviour of CuSn10/diamond, EDS line-scanning analysis was performed, and the results are shown in Figure 9. The diffusion zone gradually expanded with increasing temperature: when the temperature of the diamond was 1242.1 °C, the interfacial diffusion zone was approximately 1 μm wide, and when the temperature reached 1891.1 °C, the width of the diffusion zone reached approximately 2.5 μm. In theory there should be no phase transformation or diffusion at the interface, since Cu and Sn do not react with diamond, implying purely mechanical bonding of the CuSn10/diamond interface (Denkena et al. 2016). However, recent research has shown that the hybridisation between the s and p orbital electrons of Sn atoms and the p orbitals of C atoms is so strong that the relaxed structure containing Sn atoms has a high electron enrichment ability. This characteristic favours the formation of a strong covalent interaction between the intermetallic compounds in the CuSn10 alloy and diamond (Yu et al. 2022). Moreover, in the high-temperature molten pool of PBF-LB, free C atoms separated from the diamond surface dissolve directly into the melt, and the solubility of C atoms increases with temperature. The solubility x of C atoms in molten Cu and Sn as a function of temperature is given by equations (2) and (3), where x is the atomic fraction of C atoms dissolved in the melt and T is the temperature of the melt. According to equations (2) and (3), the solubility of C atoms in Cu/Sn increases gradually with increasing molten pool temperature. The C dissolved in the melt would re-precipitate when the melt solidified, in the form of graphite or amorphous carbon. However, the cooling rate of the high-temperature molten pool was approximately 10⁶ °C/s during PBF-LB. In this rapid solidification process, the C atoms had negligible time to re-precipitate and remained dissolved in the metal lattice in the form of interstitial atoms.
Thermal damage behaviour of diamonds
A schematic of the PBF-LB fabrication of the CuSn10-diamond composite is shown in Figure 10. After powder mixing, diamonds were evenly distributed in the powder bed (Figure 10(a)). During the PBF-LB process, as the molten pool formed, the diamonds were directly irradiated by the laser, and part of each diamond particle was immersed in the high-temperature molten pool owing to the low density of diamond, as shown in Figure 10(b). With the expansion of the molten pool (Figure 10(c)), diamond particles migrated within the pool under the influence of the Marangoni effect and gravity (Long et al. 2020; Bouabbou and Vaudreuil 2022; AlMangour, Grzesiak, and Yang 2016). These particles became completely immersed in the high-temperature molten pool, thereby 'escaping' direct laser irradiation. The thermal behaviour of diamond can therefore be divided into two kinds: irradiation contact with the laser beam (Figure 10(b)) and thermal contact with the molten pool (Figure 10(c)).

For the continuous-wave laser applied in PBF-LB, when the laser fluence reaches the ablation threshold of diamond, the diamond suffers permanent photo-induced damage and undergoes phase transformation, as in interaction type II reviewed in the Introduction. To evaluate the influence of continuous-wave laser irradiation on the structural stability of diamond, the classical laser ablation theory was used in this study. The ablation threshold of diamond can be expressed as (Jeschke and Garcia 2002; Li et al. 2020)

I_th = (λ(T_c − T_0)/(2A)) · √(π/(α·t_0))     (4)

where λ is the thermal conductivity of the diamond, A is the absorption rate of the diamond to the laser, α is the thermal diffusivity of the diamond, t_0 is the laser dwelling time, T_c is the critical temperature, and T_0 is the ambient temperature. The diamond particle size d is 90 μm, and t_0 is given by t_0 = d/v, where v is the scanning speed.
The lowest ablation threshold of diamond under direct laser irradiation at a wavelength of 1064 nm is approximately 1.21 × 10⁷ W/cm². To evaluate whether the laser fluence could cause photo-induced damage to the diamond, the equivalent laser fluence F_laser used in this study was calculated following Li et al. (2020), in terms of the laser power P, the ablated duration t, the laser spot radius r, and the scanning speed v. The calculations showed that the maximum equivalent laser fluence in this experiment was approximately 6.24 × 10⁶ W/cm², about half the ablation threshold of diamond. Therefore, direct laser irradiation cannot induce thermal damage of the diamond during PBF-LB.

As shown in Figure 10(c), several diamond particles were completely immersed in the high-temperature molten pool owing to the Marangoni effect and gravity. The contact between the particle surface and the molten pool exposed the diamond to the local high temperature; when the temperature exceeded the graphitisation temperature of diamond, the graphitisation transition occurred. In general, graphitisation occurs in an inert gas atmosphere at about 1500 °C (Iravani et al. 2012). According to Table 2 and Figure 6, sample No. 5 showed local graphitisation at a temperature of 1491.6 °C; below this temperature, the diamond escaped the thermal influence of the high molten pool temperature.
Under ambient conditions, the Gibbs free energy of diamond is 2.9 kJ/mol, whereas that of graphite is zero; consequently, graphite is the stable phase. The Gibbs free energy change of diamond graphitisation under ambient conditions is given by Formula (6):

ΔG = G_graphite − G_diamond     (6)

where G_graphite is the Gibbs free energy of graphite and G_diamond is the Gibbs free energy of diamond. The fact that ΔG is less than 0 shows that diamond tends to transform into graphite under ambient conditions. Nevertheless, diamond persists under ambient conditions owing to the high activation free energy barrier ΔG_a (Wang, Scandolo, and Car 2005). Consequently, the graphitisation transformation of diamond occurs only when the free energy of the C atoms in diamond exceeds ΔG_a. The transformation rate can be expressed by an Arrhenius-type equation, v = A·exp(−ΔG_a/(RT)), where A is the Arrhenius constant and T is the temperature. The transformation rate increases gradually with increasing temperature, which further corroborates the I_D/I_G results in the Raman spectra shown in Figure 8(d,f).
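To make the temperature sensitivity of this Arrhenius expression concrete, the sketch below compares relative rates at the three sample temperatures discussed above. The activation barrier value is an assumption for illustration only (literature values for diamond graphitisation are several hundred kJ/mol), and the pre-exponential constant cancels in the ratios.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def relative_rate(T_celsius, Ea=730e3):
    """Arrhenius factor exp(-Ea / (R T)); Ea is an assumed barrier in J/mol."""
    return math.exp(-Ea / (R * (T_celsius + 273.15)))

ref = relative_rate(1491.6)  # temperature where graphitisation first appeared
for Tc in (1242.1, 1491.6, 1891.1):
    print(f"{Tc:7.1f} C -> rate / rate(1491.6 C) = {relative_rate(Tc) / ref:.3g}")
```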
Wear properties
According to the wear test results in Figure 11, the friction coefficients of the composites prepared at 1242.1 °C and 1539.4 °C remained at 0.62 and 0.55, respectively, while the friction coefficient of the composite prepared at 1891.1 °C fluctuated significantly and showed a gradual upward trend. A small friction coefficient indicates a small cutting obstacle and good wear resistance (Yin et al. 2021). As Figure 11(b) shows, with increasing graphitisation degree the wear mark depth gradually increased, taking values of 97.16, 116.15 and 118.29 μm. Therefore, the wear properties of the composite deteriorated with increasing degree of graphitisation.
The worn surface morphologies of the composites are shown in Figure 12. Figure 12(a) shows the worn surface of the composite prepared at the lowest temperature of 1080 °C (P = 120 W, v = 1100 mm/s). Although the diamond suffered no thermal damage under these conditions, insufficient melting of the CuSn10 powder left many defects at the interface, with a poor retention force on the diamond abrasives; the abrasives peeled off easily during grinding, forming peeling pits and wide grooves. Figure 12(b) shows the worn surface of the composite prepared at 1242.1 °C, whose wear mechanism was typical abrasive wear. At this temperature, no graphitisation of the diamond abrasives occurred and the interface exhibited metallurgical bonding with a good retention force. As the temperature of the diamond increased to 1491.6 °C, graphitisation occurred and the wear resistance of the composite decreased; the wear mechanism changed to adhesive wear with partial abrasive wear, as shown in Figure 12(c). When the temperature increased to 1891.1 °C, the graphitisation of the diamond was aggravated, most of the diamond abrasives exhibited cleavage fracture, and the fragments of the fractured diamond participated in the friction process, resulting in three-body abrasion and violent fluctuation of the friction coefficient, as shown in Figures 11(a) and 12(d) (Mandal et al. 2020).
Quantitative relationship
Table 3 shows the quantitative 'PBF-LB parameters-temperature-graphitisation degree-wear resistance' relationship. With increasing peak diamond temperature, the graphitisation degree, bonding state, wear mechanism and wear mass loss all changed regularly and were correlated with each other. According to the temperature, the samples can be divided into four categories (a simple lookup implementing this categorisation is sketched after the list):

- 1080.0 °C. Although graphitisation did not occur, the interfacial bonding state was poor, and the diamond fell off easily during the friction process.
- 1192.9-1420.1 °C. As the temperature increased, the bonding state between the diamond and CuSn10 improved, which also improved the wear resistance of the CuSn10-diamond composite samples; in particular, at 1192.9, 1242.1 and 1248.7 °C, the rate of wear mass loss was only 0.002-0.003.
- 1491.6-1585.3 °C. When the temperature reached 1491.6 °C, graphitisation occurred and the wear mechanism changed to adhesive plus abrasive wear. Within this range, the degree of graphitisation, the adhesive wear and the rate of wear mass loss all increased with further increases in temperature.
- 1676.5-1896.1 °C. In this range, the graphitisation of the diamond intensified further and cleavage fracture occurred. Meanwhile, the retention force of the CuSn10 on the diamond deteriorated seriously, the wear mechanism changed to three-body abrasion, and the wear resistance decreased greatly.
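As referenced above, a simple lookup of the regimes reads as follows; the cut-off temperatures are illustrative midpoints chosen by us between the reported ranges, since the study does not define exact boundaries.

```python
def damage_regime(T_peak):
    """Map a simulated diamond peak temperature (deg C) to the regime
    reported in Table 3; thresholds between ranges are approximate."""
    if T_peak <= 1130:
        return "no graphitisation but poor interfacial bonding"
    if T_peak <= 1455:
        return "no damage, metallurgical bonding, lowest wear mass loss"
    if T_peak <= 1630:
        return "light graphitisation, adhesive + abrasive wear"
    return "severe graphitisation, cleavage fracture, three-body abrasion"

print(damage_regime(1242.1))  # -> no damage regime
```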
The establishment of this quantitative relationship provides a processing window for the fabrication of CuSn10-diamond composites and for the protection of diamond during PBF-LB. In addition, the method used to establish the relationship can be applied to the fabrication of diamond composites by any high-energy-beam additive manufacturing technology that forms a high-temperature molten pool.
Conclusion
In the LAM of diamond tools, the diamond and powder material parameters and the laser process parameters are the main factors affecting the forming quality. With its excellent thermal conductivity, the diamond changes the local heat conduction and temperature distribution of the alloy melt, which affects the melt pool morphology, the microstructure near the diamond and the forming quality. Moreover, owing to the thermal instability of diamond, thermal damage such as oxidation, graphitisation and ablation is likely to occur when the diamond contacts the high-temperature molten pool. It is therefore extremely important to establish a quantitative framework that accurately evaluates the thermal damage behaviour of diamond abrasives and the related microstructure-performance characteristics, providing basic support for relating process parameters to forming quality. In this study, by investigating the thermal evolution of the CuSn10-diamond composite during PBF-LB, the essential relationship among processing parameters, thermal behaviour of the diamond abrasives, interfacial bonding and wear properties was quantitatively revealed. The main findings are summarised as follows: (1) CuSn10-diamond composites were fabricated by PBF-LB, and the thermal evolution of the diamond particles during PBF-LB was obtained by simulation. The temperature of the diamond presented two peaks as the molten pool moved forward, with the second peak higher than the first owing to heat accumulation.
Figure 4. Maximum temperature versus time curves of the diamond particle obtained with different processes: laser power (a) 120 W; (b) 140 W; (c) 160 W; (d) 180 W.
Figure 6. SEM images of the microstructures of diamond particles with different PBF-LB parameters: the no damage, light damage and severe damage areas correspond to the purple Area 1, the green Area 2 and the red Area 3, respectively.
Figure 7. XRD patterns of CuSn10-diamond composite samples with different diamond temperatures.
Figure 10. Schematics of CuSn10-diamond composites produced via PBF-LB: (a) mixed powder bed; (b) irradiation contact with the laser beam; (c) thermal contact with the molten pool.
Figure 11. (a) The friction coefficient-time curves and (b) profiles of the wear depth.
Table 2. Numerical simulation results of diamond peak temperature under different processing conditions.
"Materials Science"
] |
Petrol Prices and Subjective Well-Being: Longitudinal Data Evidence From China
This paper studies the effects of petrol prices on individuals’ subjective well-being (SWB). Three waves of household data from the China Health and Retirement Longitudinal Study and petrol prices at the province level are used and ordered probit models are applied. The empirical results show that petrol prices are negatively associated with SWB due to income effects. The findings are robust to alternative independent variable measures and clustered standard errors.
Introduction
Countries around the world have attached great importance to the well-being of their citizens. Several studies have focused on the determinants of subjective well-being (SWB), such as income (Clark et al., 2008), personal characteristics (Dolan et al., 2008; Easterlin, 2006), and the economic and social environment (Alesina et al., 2004; Verme, 2011). However, the effects of petrol prices on SWB have not been well identified.
Petrol prices can affect individuals' SWB in opposite directions. On the one hand, petrol prices have income effects that reduce SWB: when petrol prices increase, people allocate more disposable income to fuel expenses and lower their expenses on other well-being-enhancing activities (Prakash et al., 2020). On the other hand, petrol prices can also induce health effects that enhance SWB: when petrol prices increase, individuals may turn to public transportation, cycling, or walking to commute. These physically demanding activities, together with the induced improvement in air quality, can ultimately benefit health and SWB (Ma et al., 2018; Shaw et al., 2018).
To uncover the effects of petrol prices on individuals' SWB in China, we use three waves of household data from the China Health and Retirement Longitudinal Study (CHARLS) along with province-level 92 petrol prices over the same period. Ordered probit models are applied to perform the regressions. We then use 95 petrol prices and clustered standard errors to verify the robustness of the empirical findings. We find that petrol prices have negative effects on SWB. By using longitudinal data to ease endogeneity issues, this paper provides insight into the relation between petrol prices and SWB in China, thus contributing to the research on energy and individual well-being (Boyd-Swan & Herbst, 2012; Prakash et al., 2020).
The remainder of this paper proceeds as follows. Section 2 describes the data source and the dependent, independent, and control variables. Section 3 reports the results of the baseline regressions and robustness checks. Section 4 concludes the paper.
Data and variables
Petrol prices at the province level are acquired from East Money (http://data.eastmoney.com), which integrates and provides data on stocks, funds, and the economy. Yearly weighted 92 petrol prices are used as the independent variable. Following Diener et al. (1985), we measure the dependent variable, SWB, by life satisfaction based on the 2013, 2015, and 2018 waves of CHARLS. The variable is coded from one (not at all satisfied) to five (completely satisfied) as individual SWB increases. Based on the CHARLS dataset, we include control variables at the individual level, including gender (male = 1, female = 0), age, education (elementary school or below = 0, middle school = 1, high school or vocational school = 2, college/associate degree or above = 3), marital status (married = 1, otherwise = 0), work status (employed = 1, otherwise = 0), self-reported health level (poor = 0, fair = 1, good = 2), and income (in logarithmic form). We also control for province heterogeneity, which could be correlated with individual SWB, using data on province-level gross domestic product per capita (PERGDP) and population density from the National Bureau of Statistics of China. Since the SWB variable is ordinal, we employ ordered probit models. To account for time trends, year dummies are included in all regressions. Moreover, given that the petrol price is at the province level and the number of data points is small, we exclude province fixed effects to avoid severe multicollinearity, following Verme (2011), for instance.
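A minimal sketch of this specification, assuming statsmodels' ordered-outcome model and hypothetical column names for the merged CHARLS/petrol-price panel (the actual variable construction follows the description above):

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("charls_petrol_panel.csv")  # hypothetical merged data file

y = df["swb"]  # life satisfaction, coded 1 (lowest) to 5 (highest)
X = df[["petrol_price",                       # province-level 92 petrol price
        "male", "age", "education", "married",
        "employed", "health", "log_income",   # individual-level controls
        "log_pergdp", "pop_density",          # province-level controls
        "year_2015", "year_2018"]]            # year dummies (2013 as base)

model = OrderedModel(y, X, distr="probit")    # ordered probit specification
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```

Note that no constant is included in X, since the ordered probit is identified through its estimated thresholds.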
We exclude observations that are missing information on the variables we use. Moreover, the price of 95 petrol is consistently higher than that of 92 petrol.
Results
Hierarchical regressions are performed, and the results are reported in Table 2. As Column (1) shows, the coefficient of petrol prices is significantly negative at the 1% level. When individual and province characteristics are gradually added to the regression, the effect of petrol prices decreases slightly in magnitude but remains significant at the 1% level. These results show that an increase in petrol prices results in a decrease in individuals' SWB, indicating that income effects may dominate the relation between petrol prices and SWB.
As for the control variables, health and income are positively related to SWB, consistent with findings in previous studies concerning SWB (Clark et al., 2008; Dolan et al., 2008). Unlike the nonlinear relation found by Easterlin (2006) and Ferrer-i-Carbonell & Gowdy (2007), we find the impact of age to be linear and significantly positive. In addition, the impact of marital status on SWB is positive, while the impact of education is negative.
To verify the robustness of the empirical findings, we first use 95 petrol prices over the same period, obtained from the same source as mentioned earlier. The regression results reported in Panel A of Table 3 show that the effect of 95 petrol prices on SWB is still significantly negative. We also note that the magnitude of the effect is smaller, and the significance level lower, than for the 92 petrol price. This could be because 95 petrol is invariably more expensive than 92 petrol: consumers who choose 95 petrol are less sensitive to petrol prices, leading to a weaker effect on SWB. This evidence supports the proposed underlying mechanism, namely the income effect, behind the effects of petrol prices on SWB.
In addition, we use clustered robust standard errors (see Panel B of Table 3) instead of default standard errors in the baseline regressions. The results indicate that our findings are robust and unchanged.
Conclusion
In June 2021, the number of motor vehicles in China reached 384 million, including 292 million cars, according to China's Ministry of Public Security. The increasing number of private cars has accentuated the influence of petrol prices on people's lives in China, in terms of both consumption and well-being. Based on data from CHARLS and province-level petrol prices from East Money, this paper studies the effects of petrol prices on individuals' SWB in China over the period 2013-2018. The empirical results show that higher petrol prices are correlated with lower SWB. This result is consistent with the findings of Boyd-Swan & Herbst (2012) and Prakash et al. (2020) on the effects of petrol prices on SWB in the United States and Australia, respectively. Although public transport is convenient in most areas of China, income effects still dominate, indicating that people's reliance on cars is increasing. To reduce the sensitivity of SWB to petrol prices, the government should provide more incentives for people to go green.
Table 1. Descriptive statistics
The original data sample includes 31,671 observations, which drops to 20,901 when individual and province characteristics are included. Descriptive statistics are reported in Table 1. We observe that the average 92 petrol price across China during the sample period is about CNY 7 per liter, ranging from CNY 5.59 to CNY 8.58 per liter, with a standard deviation of CNY 0.59 per liter. Notes: This table presents selected descriptive statistics (namely, the sample mean, its standard deviation (SD), and the minimum (Min.) and maximum (Max.) values of the data). The sample size is noted in column 2.
Table 3. Robustness checks
Notes: Ordered probit (Oprobit) models are employed and coefficients are reported. The t-statistics are presented in parentheses; *, **, and *** represent statistical significance at the 10%, 5%, and 1% levels, respectively.
"Economics",
"Business"
] |
Calibration for a count rate-dependent time correlation function and a random noise reduction in pulsed dynamic light scattering
A pulsed dynamic light scattering (DLS) system, which could potentially be applied to nonlinear DLS with molecular selectivity, was developed by combining a sub-nanosecond pulsed laser with a software-based detection system. The distortion of the time correlation function due to the clipping effect in the photon counting module, and the resulting underestimation of the particle size, were successfully calibrated based on a theoretical simulation. The effective removal of random noise was also demonstrated via time gating synchronized to the laser pulses. Supplementary Information: The online version contains supplementary material available at 10.1007/s44211-022-00071-0.
Introduction
Dynamic light scattering (DLS) is widely used to determine the size distributions of nanometer-scale objects in dispersions [1,2], the mesh size of gels [3], the zeta potential of colloidal particles [4], and the aspect ratio of rod-like particles [5] by measuring the time correlation function of the scattered light intensity. Conventional DLS uses a continuous wave (CW) laser as the light source, and the time correlation function is obtained using an autocorrelator. Recently, we developed a software-based DLS system [6], in which the arrival times of all the scattered photons are recorded [7-9]. This has enabled the application of DLS even to dispersions containing large pollutants, by calculating the time correlation function exclusively from the uncontaminated parts of the data in post-processing. This noise reduction scheme, however, can only be applied to transient noise; a technique to effectively remove random noise, such as dark counts from the detector and signals from background light, has not yet been achieved.
Employing a pulsed laser with a high peak intensity as the light source for DLS would enable molecularly selective DLS based on nonlinear optical processes, such as hyper-Rayleigh scattering [10,11] and coherent anti-Stokes Raman scattering [12]. In a previous attempt at pulsed DLS using a femtosecond laser as the light source [13], a nonlinear DLS measurement was not successful, predominantly because of the instability of the laser output. Moreover, even linear pulsed DLS had a significantly worse signal-to-noise ratio than that obtained with a CW laser, which was attributed to destructive interference among the spectrally broad scattered light. In addition, the nanoparticle size obtained from the linear pulsed DLS was underestimated for some unknown reason [13]. We speculate that the underestimation originated from miscounting of the scattered photons. Because of the high peak intensity of the incident laser pulse, it is highly probable that more than one scattered photon arrived at the photon counting module within the pulse duration (Fig. 1a), which was shorter than the dead time of the module. In this case, the photon counting module would fail to count the second and subsequent photons arriving within the dead time (Fig. 1b). This is known as the clipping effect [2], and it affects both linear and nonlinear pulsed DLS. This paper reports on nanoparticle size estimation using a linear pulsed DLS system that combines a sub-nanosecond laser with the software-based DLS system developed in our previous study [6]. Use of a narrowband pulsed laser enabled us to suppress the destructive interference. We found that the time correlation function depends on the count rate of the scattered light at the photon counting module, leading to a systematic underestimation of the particle size in dispersion, in accordance with the previous study [13]. Our numerical simulation quantitatively reproduced the count rate dependence by considering the clipping effect, thereby providing an effective calibration to recover the undistorted time correlation function and the precise particle size. Taking advantage of the pulsed light source, we further demonstrated an effective reduction of random noise via time-gating of the detected signals.
Experimental
All of the DLS measurements were performed at room temperature (23 °C). Monodisperse silica nanoparticles (803847-1ML, Sigma-Aldrich, guaranteed particle concentration: 1.2 × 10¹³/mL), whose average particle radius is estimated to be 101 ± 6 nm (202 nm in diameter) by transmission electron microscopy (TEM), were used as the dispersion sample. A representative TEM image of the silica nanoparticles and the probability density of the observed nanoparticle radius are shown in Fig. S1 in the Supporting Information. The dispersion was diluted with pure water to obtain 1 × 10⁴ particles/nL (1 nL is a typical irradiated volume viewed by the detector).
The developed pulsed DLS apparatus is shown schematically in Fig. 1c. The vertically polarized output of a master oscillator power amplifier (MOPA) laser (STA-01-MOPA-2, Standa, Lithuania), with a wavelength of 1064 nm, a pulse width of 400 ps, a repetition rate of f = 50 kHz, and a pulse energy of 50 μJ/pulse, was frequency-doubled by a lithium triborate crystal and used as the light source. The visible light pulse was focused onto a quartz cell filled with the sample dispersion after its polarization was rotated to vertical by a half-wave plate. The scattered light was collected at 90° and focused onto a photon counting module (C11202-050, Hamamatsu Photonics, Japan). The dead time of the module was 15 ns. The dark count rate of the module was less than 10 counts per second (cps), negligibly small compared to the typical count rate of the scattered photon signal of > 10³ cps. The count rate at the photon counting module was varied by adjusting the incident laser intensity with a neutral density filter. For comparison, a CW Nd:YAG laser at a wavelength of 532 nm (0532-04-01-0100-700, Cobolt, Sweden) was also used. Each measurement involved the detection of 10⁶ scattered photons.
The electronic signal pulses from the photon counting module were stretched by a homebuilt pulse stretcher circuit and recorded by a time-to-digital converter (TDC, NI-9402 & cDAQ-9174, National Instruments). The arrival times of the detected photons were converted into a normalized time correlation function, g^(2)(τ), as described in our previous paper [6]. The electronic signal pulses from the MOPA laser, synchronized to the laser firing time, were also recorded by the TDC.
Evaluation of nanoparticle size in dispersion by pulsed DLS system
We first evaluated the silica nanoparticle size in the dispersion with the pulsed DLS system. Figure 2(a) shows the time correlation functions of the scattered photons from the same dispersion obtained with a pulsed laser (colored curves) and a CW laser (black curve). The time correlation obtained with the pulsed laser deviates from that obtained with the CW laser. Moreover, the deviation becomes more significant with increasing incident laser intensity, and thus with increasing count rate, C_s, on the photon counting module.

Figure 2: (a) Time correlation functions obtained with the pulsed laser; the function obtained with a CW laser is also shown for comparison. (b) Hydrodynamic radius obtained from pulsed DLS measurements before and after calibration as a function of the count rate (symbols, left axis). The horizontal dashed line indicates the actual particle radius obtained from TEM (101 nm). The corresponding correction factor, R_h^sim/R_h0^sim, obtained from the simulation is plotted as a solid curve against the right axis.
In CW DLS, the normalized time correlation function of the scattered light intensity can be expressed as

g^(2)(τ) = 1 + β₀·exp(−2D₀q²τ)     (1)

where (2D₀q²)⁻¹ is the relaxation time and β₀ is a coherence factor [2]. D₀ is the diffusion constant, given by the Stokes-Einstein relation, and q is the momentum transfer:

D₀ = k_B·T/(6πηR_h0),  q = (4πn_r/λ₀)·sin(θ/2)     (2)

where k_B, T, η, R_h0, n_r, λ₀, and θ are the Boltzmann constant, absolute temperature, viscosity of the solvent, hydrodynamic radius, solvent refractive index, laser wavelength in vacuum, and the scattering angle, respectively. By fitting the time correlation function obtained with the CW laser to Eq. (1), the coherence factor for the present detection setup was found to be β₀ = 0.9, and R_h0 was estimated to be 104 ± 3 nm, in good agreement with the particle size obtained from TEM. We emphasize that all these quantities should be independent of the incident laser intensity and the count rate, C_s.
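As an illustration of this fitting step, the sketch below generates synthetic CW-DLS data from Eqs. (1) and (2) and recovers the hydrodynamic radius; the water viscosity at 23 °C is an assumed value, and the data are simulated rather than measured.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.constants import k as k_B

T, eta = 296.15, 0.933e-3        # 23 C; water viscosity in Pa s (assumed)
n_r, lam, theta = 1.33, 532e-9, np.pi / 2

q = 4 * np.pi * n_r * np.sin(theta / 2) / lam   # momentum transfer, Eq. (2)

def g2(tau, beta, D):
    """Normalized intensity correlation function, Eq. (1)."""
    return 1.0 + beta * np.exp(-2.0 * D * q**2 * tau)

rng = np.random.default_rng(1)
tau = np.logspace(-5, -1, 200)                         # lag times, s
D_true = k_B * T / (6 * np.pi * eta * 101e-9)          # 101 nm particles
data = g2(tau, 0.9, D_true) + rng.normal(0, 1e-3, tau.size)

(beta_fit, D_fit), _ = curve_fit(g2, tau, data, p0=[0.5, 1e-12])
R_h = k_B * T / (6 * np.pi * eta * D_fit)              # Stokes-Einstein
print(f"beta = {beta_fit:.2f}, R_h = {R_h * 1e9:.0f} nm")
```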
For the pulsed DLS measurements, the initial amplitude, β, of the correlation function, defined by g^(2)(τ → 0) ≡ 1 + β, which corresponds to the coherence factor in CW DLS, became smaller as the count rate increased, as plotted in Fig. 3a. The nominal diffusion coefficient, D, obtained by fitting to Eq. (1), increased simultaneously, as shown in Fig. 3b. The latter trend resulted in the nominal particle size, R_h, estimated from Eq. (2), becoming significantly smaller than the actual particle size, as shown in Fig. 2b with open circles. A possible origin of this count rate dependence is the clipping effect. To confirm this hypothesis, we performed a numerical simulation that considers the clipping effect; its details and flow charts (Figs. S2, S3 and S4) are given in the Supporting Information. We considered 1 × 10⁴ particles of an identical size R_h0 = 101 nm undergoing random walks with the diffusion constant D₀ = k_B·T/(6πηR_h0). We assumed that a light electric field, E_i(r, t), with a wavevector k_i, was incident onto the particles:

E_i(r, t) = E_i0·exp[i(k_i·r − ωt)]

Here, we assumed that E_i0 was time-independent, because it varies slowly compared with the carrier wave, exp[i(k_i·r − ωt)]. The electric field scattered by the particles at an angle of θ = 90°, E_s(t), could then be calculated as

E_s(t) ∝ Σ_j exp[i·q·r_j(t)]

where r_j(t) is the position of the jth particle and q ≡ k_i − k_s is the momentum transfer, with k_s being the wavevector of the scattered light. The calculated scattered light intensity, I(t) = |E_s(t)|², was then converted into the number of photons, n(t, Δt), detected between time t and t + Δt, such that the probability followed the Poisson distribution:

P(n, t, Δt) = e^(−aI(t)Δt)·(aI(t)Δt)^n / n!

where aI(t)Δt denotes the mean number of photons arriving at the detector between time t and t + Δt. In this simulation, the number of arriving photons was adjusted by varying the coefficient a. When the signal clipping effect was neglected, the count rate of the detector was given by C_s = aI(t). In this case, the time correlation function calculated from n(t, Δt) was independent of C_s, as shown in Fig. S5a in the Supporting Information, and agreed with the experimental time correlation function obtained from CW DLS. The signal clipping effect was introduced in the simulation by counting only the first photon and ignoring the rest within a given time interval, Δt. This makes the number of counted photons smaller than that of the arriving photons, C_s < aI(t). Figure S5b compares the calculated time correlation functions with the clipping effect at different count rates. In the present simulation, the interval was set to Δt = f⁻¹ = 20 μs to match the experimental interval of the laser pulses. The simulation can be applied to a pulsed DLS system with any laser repetition rate f, as long as the dead time of the detector is shorter than the interval of the laser pulses. The disparity between the number of counted photons and that of the arriving photons depends on the ratio C_s/f in the presence of the clipping effect. By fitting the numerically calculated time correlation function to Eq. (1), we obtained the simulated initial amplitude, β^sim, and diffusion coefficient, D^sim, as functions of C_s/f for the pulsed DLS. We also obtained the ratios β^sim/β₀^sim and D^sim/D₀^sim, with β₀^sim and D₀^sim being the simulated initial amplitude and diffusion coefficient without the clipping effect, respectively, as shown with curves in Fig. 3. From Eq.
(2), the corresponding ratio for the hydrodynamic radius can be expressed by R_h^sim/R_h0^sim = D₀^sim/D^sim, as shown with a curve in Fig. 2b. The calculation quantitatively reproduced the experimental underestimation of the particle size at high count rates, confirming that the clipping effect is the origin of the underestimation.
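A condensed sketch of this Monte-Carlo procedure is given below. It replaces the full 3D random walk by its 1D projection along q (sufficient for the phase factor exp(iq·r_j)), and the particle number, step count, and mean photon number are reduced assumed values chosen so that the script runs quickly.

```python
import numpy as np

rng = np.random.default_rng(0)

N, steps, dt = 2000, 20000, 20e-6   # particles, laser pulses, pulse interval f^-1
D0 = 2.3e-12                        # diffusion constant for R_h0 = 101 nm, m^2/s
q = 2.2e7                           # momentum transfer at 90 deg, 1/m
a = 0.5                             # mean detected photons per pulse (varied)

x = rng.uniform(0, 1e-4, N)         # 1D particle positions projected along q
ideal = np.empty(steps)
clipped = np.empty(steps)
for k in range(steps):
    x += rng.normal(scale=np.sqrt(2 * D0 * dt), size=N)
    I = np.abs(np.exp(1j * q * x).sum()) ** 2 / N   # speckle intensity, <I> = 1
    n = rng.poisson(a * I)          # photons arriving within one pulse
    ideal[k] = n                    # perfect counter
    clipped[k] = min(n, 1)          # clipping: at most one count per pulse

def g2(counts, max_lag=200):
    """Normalized intensity correlation estimated from photon counts."""
    m2 = counts.mean() ** 2
    return np.array([(counts[:-l] * counts[l:]).mean() / m2
                     for l in range(1, max_lag)])

print("g2 at first lag, ideal  :", g2(ideal)[0])
print("g2 at first lag, clipped:", g2(clipped)[0])  # reduced initial amplitude
```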
According to the simulation results, it is in principle desirable to measure pulsed DLS at the lowest count rate possible to minimize the clipping effect. When a high count rate is unavoidable, however, β^sim/β₀^sim and D^sim/D₀^sim can be used to calibrate the "clipped" experimental time correlation function, as demonstrated in Fig. S5c. The numerical values of the calibration factors are listed in Table S1 in the Supporting Information. Likewise, the underestimated experimental hydrodynamic radius can be calibrated; the result is shown in Fig. 2b with filled squares. We emphasize that our calibration scheme does not depend on the sample condition, such as the particle size and concentration, because the origin of the distortion in the time correlation function is the clipping effect, which is purely a detection issue. This is demonstrated by comparing the simulation results for different particle sizes, as shown in Fig. S6 and Table S2 in the Supporting Information. The calibrated hydrodynamic radius reproduced the actual particle size with a precision of better than 1% in the relatively low count rate regime of C_s/f < 0.5. In the high count rate regime, e.g., at C_s/f = 0.82 (41 kcps), the calibrated R_h exceeded the actual particle size by 10%. The remaining discrepancy may be attributed to too many (more than 80%) of the signals being clipped; in this case, the time correlation function deviated from Eq. (1) and was difficult to reconstruct. We therefore concluded that the results of pulsed DLS measurements can be safely calibrated below a count rate of C_s/f = 0.5.
Random noise reduction by the pulsed DLS system
In the above experiments, we did not set a time gate for the photon counting module, because its dark count rate was sufficiently low. When the DLS signal suffers from random noise, a time gate can be set to detect exclusively the scattered photons that are synchronized to the incident laser pulses (Fig. 4a). Here, we demonstrate the removal of random noise by intentionally introducing intense incoherent light into the detector during the DLS measurement, as shown schematically in Fig. 4b. The count rates of the signal light scattered from the nanoparticle dispersion and of the incoherent light source were 3 and 30 kcps, respectively. Figure 4c compares the time correlation function reconstructed from the ungated signal (gray line) with that from the time-gated signal (red curve), in which the gate is synchronized with the incident laser pulse to within ±25 ns. The ungated signal gives a time correlation function equal to unity at all correlation times, indicating that the detected signals have no correlation and are dominated by random noise. The time-gated signal, in contrast, gives a time correlation function with a clear decay. Fitting the latter to Eq. (1) yielded a hydrodynamic radius of 101 ± 1 nm after the calibration described above, in good agreement with the actual size obtained by TEM (101 ± 6 nm). We also performed the measurement using a CW laser at the same signal and noise count rates for comparison. As shown with a blue dashed line in Fig. 4c, the result is similar to that of the ungated signal with a pulsed laser, even though the transient noises were removed by the post-processing noise reduction scheme proposed in our previous paper [6]. The comparison demonstrates the efficiency of random noise removal achieved by combining a pulsed light source with time-gated detection synchronized to it.
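A sketch of how such a gate can be applied in post-processing to recorded arrival times is shown below; the arrival and sync times are synthetic stand-ins, and the ±25 ns half-width follows the text.

```python
import numpy as np

def gate_photons(photon_times, sync_times, half_width=25e-9):
    """Keep only photons within +/- half_width of the nearest laser sync pulse.

    Both arrays are in seconds and assumed sorted in ascending order.
    """
    idx = np.clip(np.searchsorted(sync_times, photon_times),
                  1, len(sync_times) - 1)
    left, right = sync_times[idx - 1], sync_times[idx]
    nearest = np.where(photon_times - left <= right - photon_times, left, right)
    return photon_times[np.abs(photon_times - nearest) <= half_width]

rng = np.random.default_rng(0)
sync = np.arange(0, 1e-2, 20e-6)                     # 50 kHz sync train
signal = sync[::3] + 5e-9                            # photons locked to pulses
noise = rng.uniform(0, 1e-2, 500)                    # random incoherent photons
photons = np.sort(np.concatenate([signal, noise]))

kept = gate_photons(photons, sync)
print(f"kept {kept.size} of {photons.size} photons")
```

Because the gate's duty cycle here is only 50 ns out of every 20 μs, almost all uncorrelated noise photons are rejected while the pulse-synchronized signal photons are retained.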
Conclusions
We demonstrated a pulsed DLS system using a narrowband pulsed laser and software-based detection. This scheme allowed us to obtain an accurate particle size at a relatively low count rate, once the clipping effect of the photon counting module was calibrated. The obtained knowledge is essential for developing novel nonlinear DLS techniques using a similar pulsed light source, which would add various selectivities to conventional DLS. For example, DLS combined with hyper-Rayleigh scattering could exclusively monitor non-centrosymmetric molecules, whereas DLS with coherent anti-Stokes Raman scattering could estimate the respective particle sizes in multi-component colloidal dispersions. We also demonstrated that, even in the presence of intense random noise, time-gating of the detected signal enables accurate DLS measurements. This would prove even more powerful when the pulsed DLS scheme is combined with an infrared laser, because photodetectors with low dark counts are not readily available in the infrared region [14,15]. Such infrared pulsed DLS would reduce multiple scattering from turbid dispersions, such as milk and ink [16], because the scattering intensity is inversely proportional to the 4th power of the wavelength of light [17]. The low absorption and scattering coefficients for infrared light would also increase the penetration depth in biological tissue, thereby enabling us to track diffusion dynamics in vivo and in situ [18], which plays an important role in biological functions.
"Physics",
"Chemistry"
] |
Wearable Emotion Recognition Using Heart Rate Data from a Smart Bracelet
Emotion recognition and monitoring based on commonly used wearable devices can play an important role in psychological health monitoring and human-computer interaction. However, existing methods cannot rely on common smart bracelets or watches for emotion monitoring in daily life. To address this issue, our study proposes a method for emotion recognition using heart rate data from a wearable smart bracelet. A 'neutral + target' pair emotion stimulation experimental paradigm is presented, and a heart rate dataset from 25 subjects was established, in which neutral-plus-target-emotion (neutral, happy, and sad) stimulation video pairs from China's standard Emotional Video Stimuli materials (CEVS) were shown to the recruited subjects. Features from the target-emotion data, normalized by the baseline data of the neutral mood, were adopted. Emotion recognition experiments confirmed the effectiveness of the 'neutral + target' video pair stimulation paradigm, the baseline setting using neutral mood data, and the normalized features, as well as of the AdaBoost and GBDT classifiers on this dataset. This method should promote the development of wearable consumer electronic devices for monitoring human emotional moods.
Introduction
Emotions significantly impact our daily lives and work. Not only do emotions reflect a person's mental state, but they also have a strong connection with physical health [1]. Negative emotions have become key factors affecting human health; studies have shown that long-term negative emotions can lead to various health problems such as headaches, asthma, ulcers, and heart disease [2]. Owing to the lack of diagnosis and treatment resources for psychological problems such as depression and anxiety, related social problems have been on the rise in recent years. Techniques for emotion recognition can, to some extent, improve human-computer interaction as well as psychological treatment [3]. Commonly used emotion recognition methods are based on behavioral parameters or physiological signals. Although emotion recognition based on behavior is intuitive and convenient, people can deliberately disguise their emotional states in some situations, which reduces reliability and accuracy. It is well known that physiological signals are governed by the human endocrine system and the autonomic nervous system. These systems are less affected by subjective consciousness and can reflect the real emotional state more objectively and accurately [4]. From this perspective, emotion recognition based on physiological signals yields more objective results.
Wearable emotion recognition devices using physiological signals have the potential for application in our daily lives [5]. Some wearable devices are applied to people who are depressed or mentally handicapped to monitor their emotional states, as well as in the field of gaming. Currently, there are 300 million people with depression in the world, and predicting their mood can provide better care and prevent dangerous events. In gaming, emotional changes can be used as an interactive means to change the game content, including the scene and background music, so that players have a better sense of immersion. It is hoped that more people will benefit from wearable emotion recognition technology. Among the physiological signals, heart rate is relatively easy to collect using various wearable devices such as smart watches, bracelets, chest belts, and headsets. At present, most manufacturers have released smart bracelet or watch products with heart rate monitoring functions via photoplethysmography (PPG) sensors or electrocardiograph (ECG) electrodes. Devices from Apple, Huawei, Fitbit, and Xiaomi provide a solid platform for wearable emotion recognition. Aiming at a simple and effective method for daily emotional monitoring, we take heart rate data captured by a smart bracelet as the research object. Moreover, heart rate is generated by the activity of the heart, controlled only by the human nervous system and endocrine system, and is little affected by subjective thinking. Human beings can hide emotions so that they do not show up in facial expressions and body movements; however, the heart rate changes caused by emotions are difficult to control. Compared with facial expressions and limb movements, heart-rate-based results are therefore more objective, and actual emotions are not easily hidden.
Studies have shown that heart rate varies with mood changes. In 1983, an experiment designed and conducted by Ekman et al. proved that physiological signals have distinct responses to different emotions: the heart rate increased significantly when people were angry or scared but decreased significantly in a state of disgust [6]. Britton's research showed that the heart rate during a happy mood was lower than that in a neutral mood [7]. Valderas showed that the effects of relaxation and fear on heart rate were significantly different, and that the average heart rate during happiness was lower than that in a sad state [8]. Using the IBPSO algorithm, Xu et al. collected ECG and heart rate signals for emotion recognition, achieving a highest recognition rate for sadness and joy of 92.10% [9]. Quiroz et al. used walking acceleration data and heart rate data from a smart watch to predict the emotional state of the subject; several time-series and statistical methods were adopted to analyze changes in mood. They found that the accuracy of an individual-specific emotion recognition model was higher than the individual's baseline level, and the classification accuracy for happiness and sadness was higher than 78% [10]. Pollreisz et al. used a smart watch to collect data on electrodermal activity (EDA), skin temperature (SKT), and heart rate (HR) from ten subjects, all of whom filled out the Self-Assessment Manikin (SAM) form after watching an emotional stimulus video. They built a simple emotion recognition solution based on the peaks in the EDA signal; the success rates of their algorithm and of SVM + GA were 64.66% and 90%, respectively [11]. Zhang et al. conducted an experiment in which 123 subjects wore smart bracelets with built-in accelerometers and attempted to identify emotions from walking data using the LibSVM algorithm. They achieved classification accuracies of 91.3% (neutral vs. angry), 88.5% (neutral vs. happy), and 88.5% (happy vs. sad), and the recognition rate for the three emotions (neutral, happy, and angry) reached 81.2%. These results demonstrated that emotion is reflected to some extent in walking, and that wearable smart devices can be used to recognize human emotions [12]. Covello et al. collected subjects' ECG data through wearable wireless sensors for detecting human emotions. Their CDR (cardiac defense response, a basic emotional response) algorithm detected non-stationary transitions that might indicate abrupt changes in heart rate regulation (specifically, autonomic nervous system regulation) due to a fear or startle event, achieving an overall accuracy of 65% on 40 subjects [13]. Table 1 summarizes studies of wearable devices for emotion recognition; many are too complex and time-consuming for real applications. To make emotion recognition feasible on wearable devices, our study proposes a method for recognizing the emotional states (happy, sad, and neutral) of subjects via heart rate signals from a wearable bracelet.
Subject Information
A total of 25 subjects (Chinese; 13 females and 12 males) were recruited for the experiment. They ranged in age from 22 to 25 years, with an average of 23.5 years [16-19]. All the subjects were in good health, without any psychiatric illnesses, and had been free of any alcohol or medications that could delay the emotional response for the prior 72 h. Written consent was obtained from each subject prior to the experiment. All of the procedures were approved by the Research Ethics Committee of the Body Data Science Engineering Center in Guangdong Province, China (BDS18-06).
Stimulation Materials
This experiment relied on videos as the emotional stimulus materials to induce the corresponding emotions. Fifteen videos were selected from the CEVS (China's Standard Emotional Video Stimuli Materials Library), portraying three categories of emotion: neutral, happy, and sad, as shown in Figure 1 [20]. The length of the videos ranged from 53 s to 3 min, as shown in Table 2. The materials in this database passed a standardized evaluation: 48 video clips covering six emotions (happiness, sadness, anger, fear, disgust, and neutrality) were collected; 30 clips were then selected according to length and comprehensibility, and 50 subjects evaluated them. Statistical analysis showed that, in terms of arousal, the main effect of emotion type was significant (F = 23.232, p < 0.001) [20].
Experiment Process
Before watching the videos, the subjects were informed about how the experiment would be conducted. The experiments took place in a 30-dB soundproof test room (Hengqi, Foshan, China). All subjects wore a smart bracelet (Algoband F8, Desay Electronics, Huizhou, China) and were required to watch three 'neutral + target emotion' video pairs, designed to evoke the following emotions: video set 1, neutral and neutral; video set 2, neutral and happy; video set 3, neutral and sad. All subjects rested for 5 min at the beginning of the experiment to reach a resting state. After each pair of videos, there was a break of at least 5 min to reduce emotional interference from the previous video on the response to the next one. In each video pair, the first portion was neutral so that the subject could return to a neutral mood before viewing the mood stimulus that followed; the data recorded in this neutral state were used as the baseline for the subject's heart rate. Physiological data were recorded throughout. The experimental setup is shown in Figure 2, and Figure 3 shows the flow chart of the experimental procedure.
The two-video pair design reduced interference between emotions induced by different videos. There was a significant emotional difference between the two portions of each video pair (except the neutral-neutral set), which made the subjects more likely to feel the change in emotional mood and therefore produced a more effective stimulation. Changes in heart rate during the videos were recorded, and the data from the latter part of each video pair were separated; analyzing this portion of the data was the main goal of the study. For each subject, the video pairs (neutral + happy, neutral + sad, neutral + neutral) were randomly selected from the CEVS material library to avoid the data imbalance and emotional decay caused by reusing a single piece of material. All video materials in the experiment were professionally evaluated.
Data Processing
Heart Rate Signal Pre-processing

Each segment of heart rate data covered two emotional states: the first corresponded to the neutral state and the second to the target emotion. The original heart rate of the latter part was therefore extracted with reference to the length of each video given in Table 2. Figure 4 shows typical separated heart rate signals of one subject in the three stimulated emotional states.
Because of individual differences, baseline heart rates varied widely between subjects. To explore subject-independent characteristics, we reduced the influence of individual differences [14,21] by defining the mean heart rate of the first (neutral) part of each segment as a baseline. The normalized heart rate is then the original data minus this baseline:

$$Rate_{normal} = Rate_{original} - Rate_{neutral\_mean} \quad (1)$$

where $Rate_{normal}$ is the heart rate after reducing the influence of individual differences, $Rate_{original}$ is the original heart rate, and $Rate_{neutral\_mean}$ is the baseline used to remove individual differences in heart rate.
Taking the average heart rate in the neutral state as the reference value and subtracting it from the original heart rate of the target emotion yields the normalized heart rate, as shown in Equation (1). The characteristics that reflect mood changes include both common (shared across subjects) and individual components. The former were extracted from the normalized heart rate; the latter were extracted from the original data to capture the variation in heart rate across moods. As shown in Figure 5, characteristics reflecting emotional changes were first extracted from the original heart rate data as feature subset one, and characteristics of the normalized signals, with individual differences removed, were then extracted as feature subset two [21].
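As a minimal sketch of this baseline operation (assuming numpy arrays of per-second heart rate samples; all names are illustrative, not from the paper):

```python
import numpy as np

def normalize_heart_rate(rate_original: np.ndarray, rate_neutral: np.ndarray) -> np.ndarray:
    """Equation (1): subtract the neutral-state mean from the target-emotion signal."""
    rate_neutral_mean = rate_neutral.mean()   # baseline from the neutral first half
    return rate_original - rate_neutral_mean  # Rate_normal

# Hypothetical samples in beats per minute:
neutral = np.array([72.0, 73.0, 71.0, 72.0])
target = np.array([78.0, 80.0, 79.0, 77.0])
print(normalize_heart_rate(target, neutral))  # [6. 8. 7. 5.]
```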
Feature Extraction
As changes in emotional mood cause changes in heart activity, the extracted features can be used to characterize different emotional states. Figure 6 shows some typical parameters used for feature extraction: parameters (1) and (3) represent amplitude changes of the heart rate; parameter (2) indicates the duration over which the heart rate continues to rise; parameter (4) is the slope of the heart rate change; and parameter (5) denotes the duration over which the heart rate remains unchanged. The features used in our study are divided into two parts: one from the original signal and the other from the normalized signal.
Features of the Original Signal
Rate_diff1_mean (Diff1) denotes the mean absolute value of the first-order difference of the heart rate, where $X_n$ is the heart rate from the original signal and $N$ is the total length of the discrete data [14,22]:

$$Diff1 = \frac{1}{N-1}\sum_{n=1}^{N-1}\left|X_{n+1}-X_{n}\right|$$

Rate_diff2_mean (Diff2) denotes the mean absolute value of the second-order difference of the heart rate:

$$Diff2 = \frac{1}{N-2}\sum_{n=1}^{N-2}\left|X_{n+2}-X_{n}\right|$$

Rate_range ($H_{range}$) denotes the variation range of the heart rate:

$$H_{range} = X_{max} - X_{min}$$

Rate_data_entropy denotes the information entropy of the heart rate, indicating the degree of dispersion of the data, where $p(x_i)$ is the probability of heart rate value $x_i$ [23]:

$$Entropy = -\sum_{i} p(x_i)\log_2 p(x_i)$$
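A sketch of these original-signal features follows (the paper does not specify the entropy discretization; rounding to integer heart rate values is our assumption):

```python
import numpy as np
from collections import Counter

def original_signal_features(x: np.ndarray) -> dict:
    """Diff1, Diff2, range, and information entropy of an original heart rate signal."""
    diff1 = np.abs(np.diff(x, n=1)).mean()      # mean first-order difference
    diff2 = np.abs(np.diff(x, n=2)).mean()      # mean second-order difference
    h_range = x.max() - x.min()                 # variation range
    counts = np.array(list(Counter(np.round(x).astype(int)).values()))
    p = counts / counts.sum()                   # empirical value distribution
    entropy = -(p * np.log2(p)).sum()           # information entropy
    return {"Diff1": diff1, "Diff2": diff2, "H_range": h_range, "entropy": entropy}
```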
Max_ratio (Ratio_max) denotes the ratio of the maximum heart rate value to the data length:

$$Ratio_{max} = X_{max}/N$$

Min_ratio (Ratio_min) denotes the ratio of the minimum heart rate value to the data length:

$$Ratio_{min} = X_{min}/N$$

Rate_Adjacent_data_root_mean (Radrm) denotes the root mean square of the difference between adjacent heart rate data elements in a sequence [4]:

$$Radrm = \sqrt{\frac{1}{N-1}\sum_{n=1}^{N-1}\left(X_{n+1}-X_{n}\right)^{2}}$$

Rate_Down_Time describes the time during which the heart rate decreases; its specific characteristics are Rate_down_time_max, Rate_down_time_min, Rate_down_time_median, Rate_down_time_mean, and Rate_down_time_std.
Rate_Up_Time describes the time during which the heart rate increases. The max, min, median, mean, and standard deviation of this feature were calculated, noted as Rate_up_time_max, Rate_up_time_min, Rate_up_time_median, Rate_up_time_mean, and Rate_up_time_std.
Rate_Time_continue denotes the duration over which the heart rate remains unchanged. Its five statistical characteristics are Rate_time_continue_max, Rate_time_continue_min, Rate_time_continue_median, Rate_time_continue_std, and Rate_time_continue_mean.
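The up/down/continue duration features reduce to run lengths of rising, falling, and flat segments of the signal; a sketch follows (durations counted in samples, which is our assumption, as the paper does not state the unit):

```python
import numpy as np

def duration_stats(x: np.ndarray) -> dict:
    """Statistics of run lengths where the heart rate rises (+1), falls (-1), or stays flat (0)."""
    sign = np.sign(np.diff(x))
    runs = {1: [], -1: [], 0: []}
    length = 1
    for prev, cur in zip(sign[:-1], sign[1:]):
        if cur == prev:
            length += 1
        else:
            runs[int(prev)].append(length)
            length = 1
    if len(sign):
        runs[int(sign[-1])].append(length)

    def stats(v):
        return {} if not v else {"max": np.max(v), "min": np.min(v),
                                 "median": np.median(v), "mean": np.mean(v),
                                 "std": np.std(v)}
    return {"up_time": stats(runs[1]), "down_time": stats(runs[-1]),
            "continue_time": stats(runs[0])}
```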
Features of the Normalized Signal
Rate_Down_Slope can be calculated using Equation (9), where $Down_{amplitude}$ is the amplitude decline of the normalized heart rate and $Down_{time}$ is the corresponding decrease time:

$$Rate\_Down\_Slope = \frac{Down_{amplitude}}{Down_{time}} \quad (9)$$

Its specific characteristics are Rate_down_slope_max, Rate_down_slope_min, Rate_down_slope_mean, Rate_down_slope_median, and Rate_down_slope_std.
Rate_amplitude_var denotes the variance of the normalized heart rate data. Rate_up_amplitude represents the amplitude change when the normalized heart rate increases; its specific characteristics are Rate_up_amplitude_max, Rate_up_amplitude_median, Rate_up_amplitude_mean, and Rate_up_amplitude_std. Rate_down_amplitude represents the amplitude change when the normalized heart rate declines; its five statistical characteristics are Rate_down_amplitude_max, Rate_down_amplitude_min, Rate_down_amplitude_median, Rate_down_amplitude_mean, and Rate_down_amplitude_std.
The normalized signal was processed by a moving average with a window length of 25 to obtain the 25_mean data, as seen in Figure 7. Five characteristics were extracted from it: 25_mean_max, 25_mean_min, 25_mean_median, 25_mean_mean, and 25_mean_std [11,12].
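A sketch of the 25_mean computation (boundary handling is not specified in the paper; we use numpy's "valid" mode, which drops the partial windows at the edges):

```python
import numpy as np

def mean25_features(x_normalized: np.ndarray, window: int = 25) -> dict:
    """Moving average with window length 25, then five summary statistics."""
    smoothed = np.convolve(x_normalized, np.ones(window) / window, mode="valid")
    return {"25_mean_max": smoothed.max(), "25_mean_min": smoothed.min(),
            "25_mean_median": np.median(smoothed),
            "25_mean_mean": smoothed.mean(), "25_mean_std": smoothed.std()}
```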
Rate_data_mean and Rate_data_var represent the mean and variance, respectively, of the normalized heart rate signal X.
Rate_diff1_normalization (Diff1_normalization) denotes the average of the first-order difference of the normalized signals [12,14]. It contains the following five characteristics: Rate_data_normalized_diff1_max, Rate_data_normalized_diff1_min, Rate_data_normalized_diff1_std, Rate_data_normalized_diff1_median, and Rate_data_normalized_diff1_mean. Rate_diff2_normalization (Diff2_normalization) denotes the average absolute value of the second-order difference of the normalized heart rate signals. Similarly, five features are extracted: Rate_data_normalized_diff2_max, Rate_data_normalized_diff2_min, Rate_data_normalized_diff2_std, Rate_data_normalized_diff2_median, and Rate_data_normalized_diff2_mean.
Selection of Features
As shown in Table 3, 53 features were extracted from each piece of data; explanations of the feature terms are given in Table A1. To simplify the recognition process, we reduced the feature dimension by selecting the most effective of the 53 features. We adopted SelectKBest for feature selection, which returns the top k features under a chosen evaluation function, here mutual_info_classif (for classification problems). SelectKBest is a library function of sklearn (version 0.19.1 in this paper), a machine learning library implemented in Python; we used the Anaconda distribution (an open-source Python distribution) and upgraded sklearn to this version via pip or conda. Table 4 shows the top five rated features, with their corresponding scores, identifying the features most important to each emotional category. The features data_mean, 25_mean_median, and data_entropy play the most important role in two or three of the emotional classifications; they correspond to the mean and median amplitude of the normalized signals and the information entropy of the original signals. In general, features extracted from the normalized signals obtained relatively higher scores (e.g., the data_mean, 25_mean, and data_normalized_diff1 series), indicating that the normalized heart rate data have a greater impact on classification. Fusing features from the normalized and original signals may therefore effectively improve recognition accuracy.
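The selection step can be reproduced with sklearn roughly as follows (the feature matrix here is random placeholder data; the paper used its 53 extracted features):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 53))        # placeholder: 50 samples x 53 features
y = rng.integers(0, 2, size=50)      # placeholder: binary emotion labels

selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_selected = selector.fit_transform(X, y)       # keep the 10 highest-scoring features
top5 = np.argsort(selector.scores_)[::-1][:5]   # cf. the top five features in Table 4
print(top5, selector.scores_[top5])
```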
Model Training
To distinguish among the three emotional states (happy, sad, and neutral), five classifiers were evaluated: KNN (k-nearest neighbors) [15,24], RF (random forest) [5,25], DT (decision tree) [12,26], GBDT (gradient boosting decision tree) [27], and AdaBoost (adaptive boosting) [28]. We adopted these classifier models from the sklearn library, which integrates common machine learning methods. In each evaluation there were 50 samples for the two-emotion classifications (25 subjects per emotion) and 75 samples for the three-emotion classification [16]. Leave-one-out cross-validation was used: each sample in turn served as the test set, with the remaining samples as the training set [29]. At the end of each round, the correctly classified samples were counted to compute the accuracy, and the predictions for all samples were used to evaluate the predictive performance of the model. Table 5 shows the parameter settings of the classifiers; the parameter definitions are given in Table A2. The accuracy defined in Equation (14) was used to evaluate the classification performance of the different models, where $N_{correct}$ is the number of correctly identified samples and $N_{total}$ is the total number of samples [16,22,27,[30][31][32]]:

$$Accuracy = \frac{N_{correct}}{N_{total}} \quad (14)$$
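A sketch of the evaluation loop for one of the five models (GBDT), using leave-one-out cross-validation and the accuracy of Equation (14); the data here are placeholders standing in for the selected feature matrix:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))        # placeholder: 50 samples x k selected features
y = rng.integers(0, 2, size=50)      # placeholder: two emotion tags

n_correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = GradientBoostingClassifier()           # GBDT; parameters per Table 5 in practice
    clf.fit(X[train_idx], y[train_idx])
    n_correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

accuracy = n_correct / len(y)                    # Equation (14)
print(f"leave-one-out accuracy: {accuracy:.3f}")
```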
Results
Our study calculated the accuracy of the two- and three-emotion classifications to evaluate classifier performance. As stated in Section 5, we used SelectKBest for feature selection and found that in many cases only the top 21 features had scores above zero; therefore, k was set below 21, to 20, 16, 12, 10, 8, or 5, and model performance was evaluated for each setting. Leave-one-out cross-validation was used, and ten runs were conducted; the mode of the ten results was taken as the final accuracy. Selecting the mode as the accuracy rate is conducive to predicting the approximate rate of a single future experiment, and in practice the average accuracy was very close to the mode. The subsequent analysis of the classifiers is therefore based mainly on comparison of mode results, which reflects classifier performance under the most probable conditions. Comparisons based on leave-one-out cross-validation and the mode value can reflect performance differences among the classifiers to some extent.
Categories of Neutral and Happy Emotions
The recognition results for neutral and happy emotions are given in Figure 8. Except for KNN, the accuracies of RF, DT, GBDT, and AdaBoost were all over 0.80; AdaBoost ranked first, with an average accuracy of around 0.96 across the different k configurations.
Categories of Neutral and Sad Emotions
The classification results for neutral and sad emotions are shown in Figure 9; accuracy was over 0.8 for all five classifiers, with RF, DT, GBDT, and AdaBoost performing adequately.

Categories of Happy and Sad Emotions

Figure 10 shows the classification results for happy and sad emotions, where the accuracy was clearly lower than in the previous two cases. The best performing classifier was GBDT, which achieved 0.84 at k = 8; AdaBoost also achieved an acceptable 0.80.
Classification of Three Emotions
In addition to the two-emotion classifications, this paper analyzed classification performance for three emotions (happy, neutral, and sad). Except for KNN, which used the parameter settings in Table 5, the classifiers used their default configurations. The results in Figure 11 show that the best performing model was GBDT, which achieved 0.84 (k = 10). Considering both the two- and three-emotion classifications, the GBDT and AdaBoost algorithms are recommended, as they achieved the best accuracy on our dataset.
Discussion
Our study showed that as human emotions are experienced and change (happy, sad, and neutral), the heart rate reflects the mood accordingly [5]. It also showed that real-time emotion recognition and monitoring can be achieved with available wearable devices. The method is simple, quick, and easy to deploy on many existing wearables. Several key issues are discussed below.
Experimental paradigm: we presented a 'neutral + target' paired-stimulation experimental paradigm. In addition to the regular rest period between stimulations, which let the subjects enter a resting state, a neutral video was added as the first part of each video pair as a control, which facilitated induction of the target emotion. We then used the neutral-mood data as a baseline, and the results confirmed that this baseline setting is applicable. For real emotion recognition applications with a smart bracelet, we suggest first presenting standard neutral stimulation videos to the wearer to obtain baseline neutral-mood heart rate data; this helps eliminate individual differences.
Baseline operation: the experimental results showed that features extracted from the normalized signals contributed more to emotion recognition, which further validates the effectiveness of the baseline operation. Some studies have adopted resting-state data as a baseline; the relative merits of eliminating individual differences using a resting state versus a neutral state as baseline deserve further study.
As for the classifiers, the recognition accuracies of RF, DT, GBDT, and AdaBoost were higher than that of KNN. This may be because the relationship between emotions and physiological signatures (one-to-one, one-to-many, or many-to-many) has not yet been established, making tree-based classifiers more suitable here.
Performance comparison: compared with other research on heart rate-based emotion recognition, we achieved adequate recognition accuracy. Guo's study [19] used a wearable electrocardiograph to collect single-lead ECG signals at a sampling frequency of 200 Hz, processed features with principal component analysis (PCA), and selected five significant characteristic values to classify emotional states; 13 HRV features were used to classify two and five emotional states, with accuracies of 70.4% and 52%, respectively. By this measure, our method performed better. Moreover, our work collected data directly through a wearable bracelet, which is simple and of practical significance.
Sample size: the number of 25 subjects was chosen with reference to the literature, in which subject counts were also around 25, suggesting that 25 subjects can capture group characteristics to some extent. DEAP, a standard database for emotion recognition research based on multi-channel physiological signals and facial expressions, collected physiological signals from 32 subjects, and many researchers have conducted emotion recognition experiments on it [17]. In Guo's study [22], HRV was used for emotion recognition with physiological data from 25 healthy people aged 29 to 39, achieving good recognition results. In another study [19], researchers collected data from 21 healthy subjects using a wearable ECG device and analyzed short-term HRV data for mood recognition. The SEED dataset is another standard collection of EEG data for emotion recognition, from 15 subjects [18]. Although our sample size of 25 is similar to previous studies, it may not be representative of population characteristics, and the method has been shown effective only on this dataset; its applicability to other datasets deserves further exploration.
Activity effect: daily activities will affect the accuracy of bracelet-based heart rate monitoring, owing to slip, friction, and sweat between the skin and the bracelet. In particular, motion artifacts increase measurement noise and can even prevent accurate data from being obtained. To mitigate these influences, we suggest that the bracelet be worn tightly on the wrist and that the activity-monitoring capability of the bracelet's integrated motion sensors be used, so that only heart rate data acquired in static (such as sleeping, standing, and sitting) or quasi-static (such as jogging) states are used in daily emotion monitoring, while data from dynamic states (such as running) are discarded. This mechanism permits a rough assessment of emotions throughout the day; more accurate analysis depends on improvements in wearable dynamic heart rate monitoring. Minimizing the activity effect in heart rate-based daily mood monitoring deserves further study.
PPG effect: we used a smart bracelet with PPG to collect the pulse rate at the wrist. Although in the absence of major disease the pulse rate equals the heart rate, PPG-derived heart rate is susceptible to motion artifacts. Therefore, during the experiment, subjects were required to stay still and to wear the bracelet as tightly as possible to reduce motion interference, and embedded noise-reduction algorithms reduced movement interference to a certain extent. Additionally, since the interference of blood pressure on the PPG sensor also carries some emotional information [33], it was included in the final emotion estimation.
Furthermore, this study used standard emotional stimulation videos to induce emotion passively. In real life, however, human emotions include both actively and passively induced emotions; whether the physiological representations of active and passive emotions differ is worth further study.
The duration of stimulation, the rest period, and the interval between different emotion stimulations were all determined according to the literature and experience. However, the cumulative and attenuating effects of emotions have not been fully confirmed, and because of individual differences, group norms for these parameters have not yet been established; these questions deserve further study.
Conclusions
In our study, we proposed a method of using heart rate data, collected by a wearable smart bracelet, to identify human emotions. The experimental results showed that the method is an effective means of recognizing human emotions from the heart rate signal. It is simple to implement on consumer wearable electronic devices and will help promote the application and development of wearables for monitoring human emotional moods in static or quasi-static states.
Table A1. Feature categories and explanations (features from the normalized signal).

Rate_down_slope_max/min/median/mean/std: the max, min, median, mean, and standard deviation of the ratio of decreased amplitude to the corresponding decrease time;
Rate_up_amplitude_max/median/mean/std: the max, median, mean, and standard deviation of the amplitude change when the normalized heart rate increases;
Rate_down_amplitude_max/min/median/mean/std: the max, min, median, mean, and standard deviation of the amplitude change when the normalized heart rate declines;
25_mean_max/min/median/mean/std: the normalized signal is processed by a moving average with a window length of 25 to obtain the 25_mean data, from which these five statistics are extracted;
Rate_data_mean, Rate_data_var: the mean and variance of the normalized heart rate signal;
Rate_data_normalized_diff1_max/min/std/median/mean: statistics extracted from the first-order difference of the normalized signals;
Rate_data_normalized_diff2_max/min/std/median/mean: statistics extracted from the second-order differences of the normalized heart rate signals;
"Psychology",
"Computer Science"
] |
Metabolic Heterogeneity of Cancer Cells: An Interplay between Reprogrammed and Oxidative Metabolism and Roles of HIF-1, GLUTs and AMPK
It has long been recognized that under hypoxic conditions cancer cells reprogram their metabolism, shifting from oxidative phosphorylation (OXPHOS) to glycolysis to meet elevated energy and nutrient requirements for proliferation, migration and survival. However, data accumulated over recent years increasingly show that cancer cells can revert from glycolysis to OXPHOS and maintain both reprogrammed and oxidative metabolism even within the same tumor. This phenomenon, denoted cancer cell metabolic plasticity or hybrid metabolism, depends on the tumor microenvironment, which is highly heterogeneous and shaped by vascular density and blood flow, oxygen concentration, and nutrient and energy supply, and it requires regulatory interplay among multiple oncogenes, transcription factors, growth factors, reactive oxygen species (ROS), etc. Hypoxia-inducible factor-1 (HIF-1) and AMP-activated protein kinase (AMPK) are key modulators of the switch between reprogrammed and oxidative metabolism. The present review focuses on cross-talk among HIF-1, GLUTs and AMPK and other regulatory proteins, including oncogenes such as c-Myc, p53 and KRAS, the growth factor-initiated PKB/Akt, PI3K and mTOR signaling pathways, and tumor suppressors such as LKB1 and TSC1, in controlling cancer cell metabolism. The multiple switches between metabolic pathways can underlie resistance to conventional anti-cancer therapy and should be taken into account when choosing molecular targets for the discovery of novel anti-cancer drugs.
Introduction
Cancer cells often suffer from hypoxia and from nutrient (glucose and amino acid) and energy deprivation resulting from insufficient vasculature and blood supply [1]. These stress conditions are key factors imposed on proliferating tumor cells that trigger their malignant transformation, enable them to overcome or escape antitumor immune surveillance, and allow them to avoid cellular senescence and apoptosis [2][3][4]. The result is tumor progression and aggressiveness, genetic instability, development of chemo- and radio-resistance, and poor prognosis [5,6].
Under physiological conditions, oxidative phosphorylation (OXPHOS), i.e., the coupling of oxidation reactions to the mitochondrial electron transport chain (ETC), is the most efficient route of ATP production, generating far more energy than anaerobic glycolysis; under hypoxic conditions, however, glycolysis is the only process providing cells with energy [7,8]. In the hypoxic microenvironment, cancer growth is maintained by metabolic and bioenergetic reprogramming characterized by an adaptive switch from OXPHOS to glycolysis with excessive glucose consumption and lactate production. This phenomenon was first described by the German scientist Otto Warburg in 1927 and was named the Warburg effect by Efraim Racker in 1972 [9][10][11].
Molecular mechanisms underlying cancer cell tolerance to prolonged hypoxia and nutrient/energy starvation are very complex and operate at both the transcriptional and post-translational levels. Hypoxia-inducible factor-1 (HIF-1), a master regulator of cellular oxygen sensing and adaptation to hypoxia, is a ubiquitous transcriptional activator that regulates the expression of numerous genes both directly at the DNA level and epigenetically, through chromatin remodeling and histone modifications [12][13][14]. Modulation of gene expression by HIF-1 alters mitochondrial oxidative metabolism, glucose uptake and oxidation, energy production and angiogenesis to enable cancer cell proliferation, migration and survival.
However, a large body of data has shown that most tumors grow in interaction with a highly heterogeneous microenvironment, with differing densities of blood and lymph vessels, amounts and types of infiltrating cells, extracellular matrix composition, content of signaling molecules, etc. [15] Moreover, many tumors are not monoclonal despite originating from a single cell; instead, they are composed of multiple distinct clones that differ in morphological and phenotypic features, which can vary with cancer type and stage, treatment regimen, etc. [16][17][18]. This phenomenon, denoted tumor heterogeneity, implies that within a given tumor a heterogeneous population of cell types co-exists, with distinct gene expression and metabolic profiles and distinct proliferative, angiogenic and metastatic potential.
Furthermore, results from experimental, bioinformatic and computational/mathematical modeling approaches increasingly show that cancer cells do not rely fully on glycolysis; instead, they preserve oxidative metabolism [19,20]. This indicates that cancer cells acquire hybrid or heterogeneous metabolism, which enables them to use both glycolysis and OXPHOS as sources of ATP, and that oxidative catabolic pathways, including the tricarboxylic acid (TCA, or Krebs) cycle, oxidative decarboxylation of pyruvate by the pyruvate dehydrogenase complex (PDC), glutaminolysis and fatty acid β-oxidation (FAO), can remain functional as sources of reducing equivalents (NADH and FADH2), carbon and nitrogen [20]. Moreover, multiple switches between these metabolic pathways can occur depending on nutrient and energy availability, micro-environmental factors, and clinico-pathological characteristics such as tumor stage, histological type, differentiation grade, lymph node involvement, depth of invasion, etc.
Enabling cancer cell metabolic plasticity requires the induction of numerous genes and the activation/inhibition of multiple oncogenes, growth factors and tumor suppressors [21]. A crucial role in this phenomenon belongs to the interplay between HIF-1 and AMP-activated protein kinase (AMPK), an energy sensor and master regulator of cellular metabolism and bioenergetics. AMPK is a heterotrimeric serine/threonine kinase activated by an increase in the AMP/ATP ratio to restore ATP production through both glycolysis and OXPHOS [22]. In general, AMPK maintains cellular ATP levels by switching from anabolic to catabolic metabolism through stimulation of glucose uptake, aerobic glycolysis and mitochondrial oxidative metabolism, mainly via fatty acid β-oxidation [23].
Additionally, both hypoxia and nutrient deprivation can cause elevated generation of reactive oxygen species (ROS) by the mitochondrial ETC and by Nox family NADPH oxidases, resulting in oxidative stress and alterations in cell signaling pathways [24]. Various ROS species can affect the activities of both HIF-1 and AMPK, along with intracellular effectors of cell signaling pathways and transcription factors, to trigger cancer progression and metastasis under hypoxia, nutrient/energy deprivation and oxidative stress. This review focuses on recent advances in understanding the mechanisms underlying the ability of cancer cells to maintain hybrid metabolism, both metabolic/bioenergetic reprogramming and oxidative metabolism, for growth, invasion and metastasis. We demonstrate the importance of considering cross-talk between HIF-1 and AMPK and the expression of GLUTs and of enzymes involved in glucose and fatty acid metabolism in cancer initiation and progression. Furthermore, we show that the growth factor-initiated phosphatidylinositol 3-kinase (PI3K), protein kinase B (PKB)/Akt, and mammalian target of rapamycin (mTOR) signaling pathways, along with oncogenes and transcription factors such as KRAS, c-Myc and p53, interplay with HIF-1, AMPK and ROS generation to enable cancer cell metabolic plasticity.
Hypoxia-inducible factor-1
Usually, under experimental in vitro conditions, oxygen concentrations up to 20% are used, a condition denoted normoxia [25]. Using a tumor metabolism modeling approach, it has been shown that in the hypoxic microenvironment both intracellular and environmental factors contribute to metabolic reprogramming of cancer cells, and that various growth factor-initiated signaling cascades and transcription factors can affect HIF-1 activity [40]. An interplay between HIF-1 and a variety of oncogenes such as Ras, c-Myc and p53, as well as AMPK and the PKB/Akt, PI3K and mTOR signaling pathways, has been observed (Figure 1) to control mitochondrial ETC function and energy production, maintaining cancer cell proliferation and survival [41][42][43][44][45][46][47][48].
Figure 1.
Regulation of HIF-1 and its implication in metabolic reprogramming in cancer cells. HIF-1 induces the expression of genes encoding the glucose transporters GLUT1 and GLUT3, enzymes of glycolysis and the pentose phosphate pathway, and pyruvate dehydrogenase complex kinase. HIF-1 activity is regulated by the Ras-PKB/Akt-mTOR axis.
For example, an interplay in response to hypoxia during carcinogenesis has been observed between HIF-1α and p53, two transcription factors regulated by the E3 ubiquitin ligase murine double minute 2 (Mdm2) [49]. Activation of p53 by the gamma-rays used in cancer treatment triggers Mdm2-mediated degradation of HIF-1α via the ubiquitin-proteasome system, leading to decreased inhibition of peroxisome proliferator-activated receptor gamma co-activator 1β (PGC-1β).
Interplay between HIF-1 and facilitative glucose transporters
The phenotypic hallmark of more than 90% of primary and metastatic tumors is an increase in glucose uptake from the blood, which to a great extent depends on facilitative glucose transporters (GLUTs). GLUT1 expression has been observed to correlate with that of HIF-1α in many cancer types, including colorectal and ovarian cancers, and to associate with clinicopathological characteristics of the tumor [74,75]. GLUT1 and HIF-1α expression were similar in relation to tumor size, location, and patient age and gender; however, the two proteins differed in intracellular localization.
GLUT1 immunoreactivity was significantly higher in node-positive than in node-negative colorectal cancer. Furthermore, GLUT1 was found in the membranes of multifocally necrotizing cancer cells and in the cytoplasm of cancer cells without necrosis, whereas HIF-1α was mostly cytoplasmic [75].
An interplay between GLUTs, HIF-1 and glycolytic enzymes has been observed in many cancer types. For example, in non-small cell lung carcinoma cell culture and in an in vivo model, increased glucose uptake involving GLUT3 and caveolin 1 (Cav1), an important component of lipid rafts, triggers tumor progression and metastasis. Interestingly, Cav1-GLUT3 signaling can be targeted by atorvastatin, an FDA-approved statin that inhibits cholesterol biosynthesis; this suppresses EGFR tyrosine kinase inhibitor (TKI)-resistant tumor growth and increases overall patient survival [82].
Higher expression of GLUT1 and GLUT3 has also been reported in papillary carcinoma compared with follicular carcinoma and non-neoplastic thyroid lesions [83]. Additionally, both GLUT1 and GLUT3 are up-regulated in poorly differentiated endometrial and breast cancers at both the mRNA and protein levels [84]. Transactivation of GLUT3 occurred in a Yes-associated protein (YAP)-dependent manner, suggesting that this pathway regulates metabolic reprogramming during cancer progression and can be considered a promising anti-cancer therapeutic target [85].
Enhancement of glycolysis
As early as 1925, C. Cori and G. Cori discovered that the glucose content in the axillary veins of hens with Rous sarcoma was 23 mg lower, whereas the lactate content was 16 mg higher, than in veins of normal tissue [86]. Afterwards, Otto Warburg and co-workers compared glucose and lactate concentrations in tumor veins and arteries and found 69 mg more lactate in the venous blood than in the same volume of aortic blood of rats with Jensen sarcoma, while glucose uptake by tumor tissue was 52-70% compared with 2-18% by normal tissues [9].
The Warburg effect has been confirmed experimentally by over-expression of glycolytic enzymes accompanied by a deficit in ATP production by OXPHOS in many cancer types, in both cultured cell lines and animal models [87,88]. Genes affected by HIF-1 and implicated in carcinogenesis include the solute carrier family SLC2A genes and those encoding glycolytic enzymes such as hexokinase II (HK II), phosphofructokinase 1 (PFK1), fructose-bisphosphate aldolase A (ALDOA), α-enolase (ENO1), pyruvate kinase M2 (PKM2) and lactate dehydrogenase A (LDH-A or LDH-5), as well as genes encoding pyruvate dehydrogenase complex kinase (PDK) and enzymes of the PPP [89,90].
The first reaction of glycolysis (Figure 1) is catalyzed by the key rate-limiting enzyme hexokinase, which has four isoforms in mammalian cells; among these, HK II over-expression at both the mRNA and protein levels has been reported for many tumor types, including hepatocellular carcinoma and ovarian cancer [91][92][93]. Furthermore, over-expression and co-localization of HK II and HIF-1α have been shown to correlate in cancer cells near regions of necrosis.
The second key rate-limiting glycolytic enzyme is PFK, a tetrameric enzyme in mammals that catalyzes the third reaction of glycolysis, phosphorylation of fructose-6-phosphate to fructose-1,6-bisphosphate (FBP) with consumption of ATP. Interplay between HIF-1α and the Ras and Src oncogenes in the tumor microenvironment has been suggested to regulate the PFK1 and PFK2 isoenzymes and thereby contribute to human cancer cell proliferation and survival [94]. PFK1 is an allosteric enzyme activated by fructose-2,6-bisphosphate, which is produced from fructose-6-phosphate by the bifunctional enzyme phosphofructokinase-2/fructose-2,6-bisphosphatase (PFK2/FBPase-2 or PFKFB2), which is induced by HIF-1α. Thus, targeting fructose-2,6-bisphosphate can be considered a promising therapeutic strategy against tumor growth, invasion and metastasis. For example, silencing of the PFKFB2 gene has been shown to significantly inhibit ovarian and breast cancer growth and to enhance paclitaxel sensitivity and patient survival [95].
In the glycolytic pathway, two enzymes catalyze transfer of a phosphoryl group from a substrate to ADP, producing ATP by substrate-level phosphorylation and serving as an energy source under hypoxic conditions. The first is phosphoglycerate kinase (PGK), which catalyzes the conversion of 1,3-bisphosphoglycerate to 3-phosphoglycerate. Several single-nucleotide polymorphism variants of PGK1 with decreased catalytic efficiency and thermodynamic stability, due to alterations in local protein conformation, have been found in carcinoma cells [96]. The second is pyruvate kinase, which catalyzes the last reaction of glycolysis, conversion of phosphoenolpyruvate (PEP) into pyruvate, and is allosterically activated by fructose-1,6-bisphosphate. Four mammalian PK isoforms differing in regulation and tissue specificity, designated PKM1, PKM2, PKR and PKL, have been described [97]. PKM2 is expressed in embryonic, proliferating and tumor cells and plays a role in progression of many cancer types, including ovarian, gastric and lung cancers [98,99]. PKM2 up-regulation has been shown to occur through mTOR-mediated HIF-1α stabilization and c-Myc-heterogeneous nuclear ribonucleoprotein (hnRNP)-dependent regulation [100].
Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) catalyzes the sixth reaction of glycolysis, oxidation of GAP to 1,3-bisphosphoglycerate accompanied by reduction of NAD+ to NADH. Regulation of GAPDH expression in cancer cells is not straightforward. For example, GAPDH has been shown not to be regulated in Hep-1-6 mouse hepatoma, Hep-3-B and HepG2 human hepatocellular carcinoma, A-549 human adenocarcinoma, and HT-29 and HCT-116 colon cancer cell lines [101]. This indicates that GAPDH is not an attractive target for anti-cancer therapy and underscores the importance of properly choosing housekeeping genes for correct interpretation of experimental results.
Indeed, different glycolytic enzymes can be differentially regulated in cancer cells under hypoxic conditions. For example, analysis of transcriptomic data with a Cytoscape network showed strong over-expression of glycolytic genes, including HK2, PFKP, ENO2, SLC2A3, SLC16A1 and PDK1, in patients with clear cell renal carcinoma [102]. However, in the same study, other glycolytic genes such as ALDOB, PKLR, PFKFB2, G6PC, PCK1, FBP1 and SUCLG1 were strongly down-regulated. Proteomic approaches can help explain the altered metabolic phenotype of cancer cells, including their bioenergetic signature and increased glucose uptake resulting from both activation of anaerobic glycolysis for cell proliferation and impairment of mitochondrial function [103].
Other metabolic pathways can contribute to the Warburg effect by producing intermediates that fuel glycolysis. For example, over-expression of PPP enzymes is associated with HIF-1α stabilization and tumor progression and serves as an indicator of poor prognosis. Since the PPP is linked to glycolysis, inhibiting PPP enzymes may be a promising strategy in anti-cancer therapy. For example, the natural peptide carnosine has been observed to decrease the activities of PPP enzymes, as well as of the malate-aspartate and glycerol-3-phosphate shuttle mechanisms that carry electrons from glycolysis to the ETC, in glioblastoma cell lines [104].
Tissue acidification and its role in reverse to OXPHOS
Under oxygen-deficient conditions (oxygen concentration below 2%), the efficacy of the mitochondrial ETC decreases, raising the NADH/NAD+ ratio, which triggers conversion of pyruvate into lactate instead of further oxidation through oxidative decarboxylation by the pyruvate dehydrogenase complex (PDC). NADH accumulation causes over-production of lactate through the activity of lactate dehydrogenase, which has five isoforms (LDH-1 to LDH-5) differentially expressed in normal tissues and over-expressed during carcinogenesis [105]. Indeed, increased glucose uptake, induction of glycolysis-related genes, excessive lactate production and HIF-1α activation, associated with an aggressive phenotype and poor prognosis, have been observed in patients with HCC and in Ewing sarcoma cells [106,107]. These observations have led to the conclusion that forcing cancer cells into mitochondrial metabolism can efficiently suppress tumor progression, while targeting glycolytic enzymes can be an effective strategy against cancer growth.
Accumulation of lactate in tissues, up to 40 mM, leads to acidosis (pH ≤ 6.8), a hallmark phenotypic feature of the tumor microenvironment affecting tumor progression, invasion and metastasis [108,109]. Normal cells cannot grow in an acidic microenvironment [110,111].
For cancer cells, however, acidity is a necessary condition promoting migration and invasion. Both endogenous and exogenous lactate have been shown to activate enzymes such as matrix metalloproteinases and to affect the expression of oncogenes (Myc, Ras), transcription factors (HIF-1, E2F1), tumor suppressors (BRCA1, BRCA2) and cell cycle genes [112].
In addition to lactate, carbon dioxide produced in catabolic pathways such as the PPP contributes to acidification of the tumor microenvironment [113]. For example, under hypoxic conditions tumor cells have been shown to produce more of the HIF-1-induced carbonic anhydrase isoforms IX and XII, which catalyze reversible hydration of carbon dioxide into bicarbonate and protons, contributing to acidification and tumor cell survival [114]. Moreover, in a mouse model of ductal carcinoma in situ, differences in GLUT1 and carbonic anhydrase IX expression between normal and precancerous cells, along with heterogeneity in intracellular pH values, have been demonstrated [115].
However, data obtained in recent years show that tumor cells increase proton export through up-regulation of proton transporters such as Na+/H+ exchanger 1 (NHE1), the H+-lactate co-transporter and monocarboxylate transporters (MCTs) to regulate intracellular pH [116,117]. The activity of these proton exchange systems represents an additional adaptation and selection mechanism that enables the emergence of chemo-resistant cell clones and tumor progression and metastasis.
Owing to lactate shuttle mechanisms, lactate can serve as an energy fuel, an important gluconeogenic substrate and a signaling molecule [118]. For example, lactate can fuel the TCA cycle, as shown in human non-small cell lung cancers [119]. More importantly, lactate accumulation has been shown to underlie the reversion from glycolysis to OXPHOS for ATP production in cancer cells. Quantification of ATP produced via glycolysis and OXPHOS in nine randomly selected cancer cell lines demonstrated that in a lactic acidosis microenvironment (20 mM lactate, pH 6.7) almost twice as much ATP was generated by OXPHOS, and almost four times less by glycolysis, than without lactic acidosis [120].
Moreover, in the same tumor cell lines, glucose consumption was much greater in the lactic acidosis environment than without it.
Oxidative metabolism and OXPHOS in cancer
Warburg wrote that "cancer cells can obtain approximately the same amount of energy from fermentation as from respiration, whereas the normal body cells obtain much more energy from respiration than from fermentation," and that uncoupling of respiration and phosphorylation, without diminishing oxygen consumption, causes a decrease in ATP production [10].
Currently, it is clear that molecular oxygen is an indispensable component of the mitochondrial ETC, serving as the final acceptor of electrons transferred through the ETC enzymatic complexes (I, II, III and IV) localized in the inner mitochondrial membrane (Figure 2). The energy of these electrons is used for ATP biosynthesis by ATP synthase (complex V) in the process denoted OXPHOS [121]. Data obtained over recent years show that elevated oxidative metabolism with increased uptake of mitochondrial fuels such as lactate, pyruvate and ketone bodies is characteristic of many cancer types, including head and neck cancer, breast cancer and lymphomas [122][123][124]. For example, up-regulation of mitochondrial OXPHOS, featuring activation of succinate dehydrogenase (complex II) and cytochrome c oxidase (complex IV) and allowing higher ATP production, has been observed in epithelial cancer cells [125,126]. Cancer stem cells resist glucose deprivation and over-express genes associated with oxidative metabolism, including OXPHOS, the PPP and FAO, along with higher levels of ROS generation [127]. Additionally, chemotherapy has been shown to induce a shift from glycolysis to OXPHOS mediated by SIRT1 and the transcriptional co-activator PGC1, promoting tumor survival during treatment [128].
Figure 2.
Oxidative metabolism, OXPHOS and ROS generation. NADH is produced mainly by glycolysis, the pyruvate dehydrogenase complex, fatty acid oxidation and the TCA cycle and fuels the ETC via Complex I, while FADH2 is produced predominantly by fatty acid oxidation and the TCA cycle and fuels the ETC via Complex III. The glycerol-phosphate and malate-aspartate shuttle mechanisms transfer reducing equivalents across the outer mitochondrial membrane from the cytoplasm to the ETC. OXPHOS is ATP biosynthesis by ATP synthase (Complex V). Superoxide anion radical, a primary ROS, is produced as a byproduct of the ETC.
Two co-enzymes, NADH and FADH2, produced by oxidation of various biomolecules in the cytoplasm or mitochondrial matrix, are the main suppliers of high-energy electrons for the ETC (reviewed in [3,33]). The major sources of NADH and FADH2 are FAO and the oxidative degradation of glucose, which proceeds through three sequential metabolic processes: (i) glycolysis, which occurs in the cytoplasm through 10 enzymatic reactions and gives rise to two molecules of pyruvate per glucose molecule; (ii) oxidative decarboxylation of pyruvate by PDC to form acetyl-CoA, which then enters (iii) the TCA cycle, a source not only of electrons but also of important intermediates such as the α-keto acids α-ketoglutarate (α-KG) and oxaloacetate, which are utilized in the biosynthesis of amino acids and other biomolecules.
PDC and TCA cycle
In many cancer types, mutations in Krebs cycle enzymes have been shown to cause accumulation of oncometabolites such as citrate and 2-hydroxyglutarate, which stabilize the HIF-1 and Nrf2 transcription factors and promote ROS generation, while inhibiting the tumor suppressor p53 and the PDC enzyme pyruvate dehydrogenase isoenzyme 3 (PDH3) [129,130]. Cancer-associated mutations have been found in genes encoding three TCA cycle enzymes, succinate dehydrogenase (SDH), fumarate hydratase (FH) and isocitrate dehydrogenase (IDH), leading to accumulation of succinate, fumarate and 2-hydroxyglutarate, respectively [131][132][133].
Indeed, multiple mutations in the IDH isoenzymes IDH1 and IDH2, which normally catalyze oxidative decarboxylation of isocitrate to α-KG, occur frequently in gliomas and acute myeloid leukemia [113,134]. These mutations decrease α-KG content while increasing the amount of its antagonist, 2-hydroxyglutarate [135]. 2-Hydroxyglutarate accumulated in tumor cells acts as a competitive inhibitor of multiple α-KG-dependent dioxygenases, including histone demethylases and the TET (ten-eleven translocation) family of 5-methylcytosine (5mC) hydroxylases. Fumarate and succinate have also been proposed to act as competitive inhibitors of α-KG-dependent oxygenases, including the HIFα hydroxylases, contributing to HIF stabilization [136,137].
Nevertheless, cancer metabolome analysis has demonstrated that proliferating tumor cells require more diverse and larger quantities of nutrients. Despite the earlier opinion that cancer cells bypass the TCA cycle, emerging evidence increasingly demonstrates that many cancer cells rely heavily on this process to meet their nutrient and energy requirements [138].
Moreover, most tumor cells retain functional mitochondria, and TCA cycle intermediates serve as substrates for nucleotide and nucleic acid, amino acid and fatty acid biosynthesis [139].
Indeed, higher pyruvate uptake and mitochondrial activity associated with increased ATP production have been observed in more invasive ovarian cancer cells compared with less invasive ones [140]. Additionally, activation of both PDC and TCA cycle enzymes, with about 50% of acetyl-CoA produced from glucose, along with synthesis of glutamine and glycine from TCA cycle metabolites, has been observed in brain cancers in both humans and animal models [141]. For example, metabolic complexity, including oxidation of glucose to pyruvate and further to acetyl-CoA by PDC followed by the TCA cycle, as well as glutamine metabolism, has been observed in a mouse in vivo model of genetically diverse primary human glioblastomas [142].
Glutaminolysis
It is recognized that cancer cells consume glutamine at a higher rate than normal cells, because glutamine catabolism can meet the carbon and nitrogen demands of nucleotide and nucleic acid biosynthesis required for cell division and proliferation [143]. This occurs through glutaminolysis, the catabolic pathway degrading glutamine to α-KG via the following reactions: (i) deamination of glutamine by glutaminase (GLS), yielding glutamate and ammonia, followed by (ii) oxidative deamination of glutamate by glutamate dehydrogenase (GDH), or (iii) transamination of glutamate with alanine by alanine aminotransferase or with aspartate by aspartate aminotransferase. α-KG can then enter the TCA cycle as an anaplerotic intermediate and serve as an energy fuel for cells [144].
Metabolic profiling studies have shown that glycolysis is decoupled from the TCA cycle in cancer cells, with glutaminolysis feeding the TCA cycle as an alternative carbon source (reviewed in [145]). Glutaminolysis yields lactate and pyruvate; the latter can be carboxylated by pyruvate carboxylase (PC) to oxaloacetate, which also anaplerotically fuels the TCA cycle for cancer growth and metastasis [146,147]. Thus, anaplerotic replenishment of the TCA cycle in cancer cells depends on both glutamine degradation and carboxylation of glucose-derived pyruvate to oxaloacetate. However, carbon can also travel through the TCA cycle in the reverse direction to feed fatty acid biosynthesis, while lactate and pyruvate can be used in gluconeogenesis for biosynthesis of non-essential amino acids [145].
Under glutamine deprivation, cancer cells have been shown to undergo c-Myc-driven up-regulation of GLS and GDH and cell cycle arrest [148]. Moreover, activation of the mTOR complex 1-ribosomal protein S6 kinase-β1 (mTORC1/S6K1) pathway has been observed to regulate c-Myc, promoting glutamine uptake and stimulating its catabolism via up-regulation of GLS in pancreatic cancer, through modulating phosphorylation of the eukaryotic translation initiation factor eIF4B, which is crucial for unwinding the 5'-untranslated region (5'UTR) [149]. Activation of GDH proceeds through suppression of the mitochondrial sirtuin SIRT4, which is over-expressed in human cancers [150].
Additionally, the glutamine transporter SNAT2 (Figure 1) facilitates glutamine transport into cancer cells to promote tumor growth. In breast cancer cells, SNAT2 can be induced by both HIF-1α and estrogen receptor-α (ER-α), whose binding sites overlap in cis-regulatory elements of the SNAT2 gene [151]. Up-regulation of SNAT2 can cause complete resistance to anti-estrogen therapy and partial resistance to anti-VEGF treatment, indicating that developing drugs targeting SNAT2 is a promising strategy against endocrine-resistant breast cancer.
Pentose phosphate pathway
Oxidative glucose metabolism can also proceed through the PPP, a cytoplasmic process with two branches: (i) an oxidative branch yielding ribose-5-phosphate, used in nucleotide and nucleic acid biosynthesis, and NADPH, utilized in fatty acid biosynthesis; and (ii) a non-oxidative branch giving rise to glyceraldehyde-3-phosphate (GAP) and fructose-6-phosphate, both of which can enter glycolysis [32]. The oxidative branch of the PPP contains two oxidation reactions, each of which yields NADPH.
The first is oxidation of glucose-6-phosphate to 6-phosphoglucono-δ-lactone, catalyzed by the key rate-limiting PPP enzyme glucose-6-phosphate dehydrogenase (G6PDH), which is up-regulated in many cancer types and has been considered a promising target for anti-cancer therapy and for reverting cancer cell chemotherapy resistance [152,153]. In human clear cell renal carcinoma, elevated glucose uptake and consumption and increased G6PDH activity, along with PPP-derived metabolites including NADPH, have been observed [154]. The second oxidation reaction is conversion of 6-phosphogluconate into ribulose-5-phosphate, catalyzed by 6-phosphogluconate dehydrogenase (6PGD), which is also over-expressed in many cancer types, including lung and ovarian cancer [155,156]. Up-regulation of PPP enzymes such as NADP-dependent G6PDH and the thiamine pyrophosphate (TPP)-dependent transketolase family enzymes TKTL, TKTL1 and TKTL2 has been reported in various cancer types, including breast, lung, gastric, endometrial, and head and neck cancer [157][158][159][160][161]. Importantly, NADPH is an essential substrate of the NADPH oxidases, which represent a major source of ROS and produce superoxide anion radical (O2•‾) as their primary product [162].
In addition to the PPP, two NADP-dependent enzymes produce NADPH: (i) IDH and (ii) decarboxylating malate dehydrogenase (malic enzyme). Both are associated with the TCA cycle and tumor growth. Over-expression of either the ME1 or ME2 isoform of malic enzyme reduces the level of the tumor suppressor p53; however, down-regulation of ME2 causes a more pronounced increase in ROS generation and in phosphorylation/activation of p53 by AMPK, followed by senescence, than does down-regulation of ME1 [163].
Fatty acid β-oxidation
The most efficient metabolic pathway producing NADH and FADH2 is FAO proceeding in mitochondrial matrix to yield acetyl-CoA, which further enters the TCA cycle (i) to provide a link between glucose and fatty acid metabolism, (ii) to enable generation of larger amount of ATP, and (iii) to produce important intermediates used in other metabolic pathways [164].
Energetically, FAO is more efficient and produces a greater amount of ATP per substrate molecule through OXPHOS than oxidative degradation of glucose. Triacylglycerols and fatty acids of adipose tissue have been shown to be potential fuel sources for cancer growth.
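To make this efficiency comparison concrete, a back-of-the-envelope calculation with textbook cofactor counts (our illustration, not from [164]; it assumes P/O ratios of ~2.5 for NADH and ~1.5 for FADH2) contrasts complete oxidation of one palmitate molecule with one glucose molecule:

```python
# ATP yield from complete oxidation, using textbook cofactor counts and
# assumed P/O ratios (NADH ~2.5 ATP, FADH2 ~1.5 ATP).
P_O_NADH, P_O_FADH2 = 2.5, 1.5

def atp_yield(nadh, fadh2, substrate_level_atp):
    return nadh * P_O_NADH + fadh2 * P_O_FADH2 + substrate_level_atp

# Palmitate (C16): 7 beta-oxidation cycles -> 7 NADH + 7 FADH2 + 8 acetyl-CoA;
# each acetyl-CoA in the TCA cycle -> 3 NADH + 1 FADH2 + 1 GTP; activation costs 2 ATP.
palmitate = atp_yield(nadh=7 + 8 * 3, fadh2=7 + 8 * 1, substrate_level_atp=8 - 2)

# Glucose (C6): glycolysis (2 NADH, 2 ATP), pyruvate dehydrogenase (2 NADH),
# TCA cycle (6 NADH, 2 FADH2, 2 GTP).
glucose = atp_yield(nadh=2 + 2 + 6, fadh2=2, substrate_level_atp=2 + 2)

print(f"palmitate: {palmitate:.0f} ATP ({palmitate / 16:.1f} per carbon)")  # ~106 (6.6/C)
print(f"glucose:   {glucose:.0f} ATP ({glucose / 6:.1f} per carbon)")       # ~32 (5.3/C)
```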
Up-regulation of FAO enzymes and their key roles in aerobic respiration have been observed in many cancer cell lines including human malignant gliomas, HCC and breast, lung and ovarian cancers [165][166][167][168][169].
Indeed, FAO activation was driven by over-expression of the c-Myc oncogene in triple-negative breast cancer [165], while AMPK- and liver kinase B1 (LKB1)-mediated phosphorylation (and hence inactivation) of acetyl-CoA carboxylase has been observed to increase intracellular ATP levels and to induce resistance to glucose deprivation in HCC cells and in MCF-7 and MDA-MB-231 breast cancer cell lines [166,170]. Up-regulation of carnitine palmitoyltransferase 1 (CPT1), a rate-limiting enzyme of FAO that transfers the acyl group of long-chain fatty acyl-CoA onto carnitine for import into the mitochondrial matrix, has been observed in ovarian cancer in mice [169]. Additionally, JNK- and p38 MAPK-mediated phosphorylation and activation of the FoxO transcription factor correlated with CPT1 inactivation and with activation of the cell cycle regulator, cyclin-dependent kinase inhibitor p21.
ROS generation
In addition to ATP production, the ETC serves as a primary endogenous source of ROS (Figure 2), generating superoxide anion radical in large amounts, albeit as a by-product rather than a primary product [171,172]. ROS generation by the ETC results from leak of electrons and incomplete reduction of molecular oxygen to yield O2•‾. In addition to the well-recognized ETC sites of ROS generation, enzymatic complexes I and III, the mitochondrial FAD-dependent glycerol-3-phosphate dehydrogenase (GPD2) and the system of electron transfer flavoprotein and electron transfer flavoprotein:ubiquinone oxidoreductase (ETF/ETF:QO system) have been identified. GPD2 participates in the glycerophosphate shuttle, which carries reducing equivalents produced in glycolysis by the cytoplasmic glycerol-3-phosphate dehydrogenase (GPD1) across the outer mitochondrial membrane to the ETC [173]. The ETF/ETF:QO system transfers electrons from 11 different mitochondrial flavoprotein dehydrogenases, including the FAD-dependent acyl-CoA dehydrogenases (ACADs), which catalyze dehydrogenation of acyl-CoA to enoyl-CoA during β-oxidation of fatty acids [174,175]. In the ETC, both systems transfer electrons from FADH2 to CoQ to yield FAD and CoQH2, respectively, and can donate one electron for the incomplete reduction of O2 to O2•‾. Thus, oxidative metabolism is associated with the generation of ROS, which can both alter redox homeostasis and underlie redox signaling that regulates the cell response to stress stimuli.
Various human cancer types produce a much greater amount of ROS than normal tissues (reviewed in [176]). Alterations in the signal transduction pathways that control mitochondrial bioenergetics and dynamics have been observed to cause mitochondrial dysfunction and elevated ROS production, which are implicated in determining whether a cancer cell survives or dies [177]. ROS production contributes to the tumor microenvironment, which is highly heterogeneous and can affect tumor growth in multiple ways depending on the interplay between various intracellular and environmental factors, among which a key role belongs to AMPK.
Role of AMPK in promoting cancer cell oxidative metabolism
AMPK is an energy and nutrient sensor activated in response to energy starvation to restore the ATP level in cells by switching from anabolic to catabolic metabolism (reviewed in [178][179][180]). High AMPK activity is associated with a variety of metabolic processes, including stimulation of glucose uptake by cells and of mitochondrial oxidative metabolism, i.e. glucose oxidation, FAO and OXPHOS. Additionally, AMPK activation leads to inhibition of fatty acid and protein biosynthesis, cell cycle progression and cell proliferation in both normal and tumor cells [181]. AMPK is a heterotrimeric serine/threonine protein kinase that is expressed in different tissues and exists in various combinations of a catalytic α-subunit and two regulatory β- and γ-subunits, providing diverse roles in regulating cell proliferation, autophagy and metabolism [22]. At low ATP levels, AMPK is allosterically activated by AMP/ADP binding to enable phosphorylation of specific enzymes. Adenine nucleotides bind to four tandemly arranged cystathionine-β-synthase (CBS) domains in the AMPK γ-subunit (Figure 3). Binding of AMP stimulates phosphorylation of the Thr172 residue in the kinase domain of the α-subunit by upstream kinases such as LKB1, which is activated by formation of a complex with STRAD (STE20-related kinase adapter protein-α) and the scaffolding protein MO25 [182,183]. The β-subunit contains a glycogen-binding domain and allows AMPK accumulation in large cytoplasmic inclusions.
Figure 3. Interplay between AMPK, HIF-1 and ROS-regulated growth factor/nutrient and energy stress/hypoxia-initiated cell signaling pathways in the regulation of both glycolysis and OXPHOS to produce ATP for cancer cell proliferation, invasion and migration. The involvement of AMPK in lysosomal complex formation is shown.
In some cell types, AMPK activation can occur through an AMP/ADP-independent mechanism, in which an intact AMP-binding site is not required. For example, fructose-bisphosphate aldolase (ALDO), a sensor of glucose availability, cannot promote AMPK activation when occupied by its substrate, fructose-1,6-bisphosphate (FBP), whereas FBP-free ALDO stimulates formation of a lysosomal complex composed of AMPK, V-ATPase, Ragulator, AXIN and the tumor suppressor kinase LKB1; this complex is required for AMPK phosphorylation and activation by LKB1 [184]. Inhibition of LKB1-AMPK signaling by G6PDH activation and ribulose-5-phosphate formation in the PPP, followed by activation of acetyl-CoA carboxylase, has been observed to link the PPP, lipid biosynthesis and tumor growth [185].
With the use of comprehensive proteomics and phospho-proteomics approaches, a large network of AMPK substrate proteins involved in cell migration, adhesion and invasion has been identified [186,187]. One of the key downstream signaling pathways regulated by AMPK is mTOR-mediated signaling, which controls the cellular response to environmental stress stimuli through the formation of two distinct complexes, mTORC1 and mTORC2 [188]. mTORC1 is sensitive to changes in cell growth conditions and contains the scaffolding protein Raptor and mTOR, triggering anabolic metabolism, i.e. protein, lipid and nucleic acid biosynthesis (Figure 3). It can be regulated by growth factors and by changes in cellular energy and nutrient concentrations to control numerous cellular processes at both the transcriptional and translational levels. mTOR-mediated signaling integrates with the PKB/Akt, HIF-1 and AMPK signaling pathways to control cell proliferation and survival under nutrient and energy deprivation conditions [189].
AMPK can inhibit mTORC1 through direct phosphorylation of several residues, including Ser1387 in the tumor suppressor TSC2, which forms a heterodimeric complex with TSC1 for activation [190]. The TSC1-TSC2 complex relays signals from diverse cellular pathways to properly modulate mTORC1 activity [191,192]. TSC2 contains a GTPase-activating protein (GAP) domain, which stimulates GTP hydrolysis by (and thereby inactivates) the small GTPase Ras homolog enriched in brain (Rheb); in its active, GTP-bound form, Rheb activates mTORC1. Rheb and the Rag small GTPases act together to localize mTORC1 to the lysosomal membrane and the Ragulator complex in response to amino acids, activating mTORC1 and driving maturation of endosomes into lysosomes [193].
Rag GTPases form A/C and B/D heterodimers, which use a unique mechanism to stabilize their active (RagA·GTP–RagC·GDP) or inactive (RagA·GDP–RagC·GTP) states. Ragulator and the lysosomal protein SLC38A9, an arginine sensor, act as guanine nucleotide exchange factors (GEFs) that control the nucleotide loading state [194].
The Warburg effect can be closely associated with the interplay between HIF-1 stabilization and a decrease in AMPK activity, which underlies cancer cell survival and chemoresistance. In tamoxifen-resistant LCC2 and LCC9 breast cancer cell lines, the rate of glycolysis was higher than in MCF-7S cells, HIF-1 was activated through the Akt/mTOR signaling pathway, and phosphorylated AMPK was decreased even in the absence of hypoxia. Specific inhibition of the glycolytic enzyme HK2 is associated with suppression of the Akt/mTOR/HIF-1 axis, and this, along with an increase in AMPK activity, led to reduced lactate accumulation and cell survival [195].
Using a combination of mathematical modeling, bioinformatics and experimental data, an association between ROS, HIF-1 and AMPK activities in breast cancer cell lines has been shown. A hybrid metabolic phenotype of cancer cells, comprising both aerobic glycolysis and OXPHOS to adapt to a varying microenvironment, has been reported [196]. Three stable steady states depending on the levels of HIF-1 and phosphorylated AMPK (pAMPK) have been described: HIF-1^high/pAMPK^low, HIF-1^low/pAMPK^high and HIF-1^high/pAMPK^high, which correspond to the glycolytic phenotype, the OXPHOS phenotype (oxidative glucose degradation and fatty acid β-oxidation) and the hybrid phenotype, respectively. Analysis of well-annotated metabolomics and transcriptomics data, along with mRNA sequencing data, has revealed an association of HIF-1/AMPK activities with an aggressive metastatic phenotype [197]. The authors concluded that targeting both glycolysis and OXPHOS is necessary to combat cancer aggressiveness.
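The three reported steady states can be summarized as a simple lookup; the sketch below (our illustration of the classification reported in [196], not code from that study, and the qualitative "high"/"low" labels stand in for model-derived thresholds) maps HIF-1/pAMPK levels to the corresponding metabolic phenotype:

```python
# Qualitative phenotype classification for the HIF-1/pAMPK system, following the
# three stable steady states reported in [196]. Thresholds for calling a level
# "high" or "low" are hypothetical and would come from the underlying model.
PHENOTYPES = {
    ("high", "low"): "glycolytic",
    ("low", "high"): "OXPHOS (glucose oxidation + fatty acid beta-oxidation)",
    ("high", "high"): "hybrid (glycolysis + OXPHOS)",
}

def classify(hif1_level: str, pampk_level: str) -> str:
    return PHENOTYPES.get((hif1_level, pampk_level), "not a reported stable state")

print(classify("high", "high"))  # -> hybrid (glycolysis + OXPHOS)
```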
Over-expression of AMPK contributes to tumor progression in multiple ways, including stimulation of EMT, cell migration and adhesion. Moreover, AMPK signaling can exert opposite effects on tumor growth depending on the cancer cell microenvironment, including ROS production (reviewed in [198]). For example, an ability of AMPK to inhibit cancer growth through mitochondria-mediated metabolism has been suggested [199].
AMPK is a major controller of fatty acid metabolism. It inhibits acetyl-CoA carboxylase (ACC) by phosphorylating ACC1 at Ser79 and ACC2 at Ser212. ACC catalyzes the conversion of acetyl-CoA into malonyl-CoA, a substrate for fatty acid biosynthesis [200]. Malonyl-CoA, in turn, inhibits carnitine palmitoyltransferase 1, the mitochondrial membrane enzyme that facilitates entry of fatty acids into mitochondria; AMPK-mediated inhibition of ACC therefore lowers malonyl-CoA levels and is associated with increased FAO [201]. On the other hand, elevated ROS production and inhibition of AMPK by isorhamnetin, which triggers cell cycle arrest at the G2/M phase due to increased expression of the cyclin-dependent kinase (Cdk) inhibitor p21^WAF1/CIP1, have been observed. Additionally, isorhamnetin induced apoptosis associated with down-regulation of Fas/Fas ligand, a reduced ratio of B-cell lymphoma 2 (Bcl-2) to Bcl-2-associated X protein (Bax) expression, release of cytochrome c from mitochondria, and activation of caspases [202].
Increased mitochondrial ROS generation and an increased AMP/ATP ratio causing AMPK activation, enhanced glycolysis and up-regulation of uncoupling protein 2 (UCP2) have been observed in human cholangiocarcinoma and are associated with poor prognosis [203]. Additionally, gemcitabine has been shown to induce ROS/KRAS/AMPK-mediated metabolic reprogramming, mitochondrial oxidation and aerobic glycolysis to promote stem-like properties of pancreatic cancer cells [204]. The small GTPase KRAS is involved in Ras-MAPK-mediated signal transduction; formation of its active, GTP-bound form is promoted by guanine nucleotide exchange factors, while GAPs dramatically accelerate its GTP hydrolysis. Active KRAS stimulates c-Raf and can contribute to the Warburg effect in cancer cells through up-regulation of GLUT1. Enhanced glucose uptake and glycolysis rate, along with increased cell survival associated with GLUT1 up-regulation, have been observed in colorectal cancer cell lines bearing mutations in the KRAS and BRAF genes under glucose deprivation conditions [205].
Smolkova and co-authors hypothesized that carcinogenesis may proceed in waves of metabolic changes, which start from alterations in oncogene expression followed by HIF-1 stabilization and metabolic reprogramming characterized by increased glycolysis and suppression of mitochondrial oxidation and OXPHOS [206]. A high rate of cell proliferation causes hypoxia and nutrient and energy deficiency, and this stimulates oxidative glutaminolysis and the involvement of LKB1-AMPK-p53 and PI3K/Akt-mTOR signaling along with c-Myc dysregulation [207]. This leads to resumption of mitochondrial OXPHOS, and each type of neoplasm is characterized by a distinct metabolic phenotype according to these waves of metabolic changes and oncogenic mutations.
Conclusions
Cancer is a complex disorder depending on multiple intracellular and micro-environmental factors influencing its initiation and progression. Tumor cells grow in a highly heterogeneous microenvironment characterized by both hypoxia and physioxia, and this requires the involvement of numerous regulatory proteins to control tumor growth, invasion and metastasis.
Under hypoxic conditions, HIF-1α serves as a key oxygen sensor and a major transcriptional regulator of numerous genes involved in glucose uptake and metabolism, switching ATP production from OXPHOS to glycolysis. However, in many cancers the reverse switch, from glycolysis to oxidative mitochondrial metabolism, has been observed. HIF-1α therefore interacts with another master regulator of ATP production, AMPK, which enables the switch from anabolic to catabolic metabolism and triggers oxidative degradation of glucose and β-oxidation of fatty acids, the major producers of NADH and FADH2 as sources of electrons for OXPHOS. This interplay involves growth factor-initiated signaling pathways, oncogenes and transcription factors, and these multiple cross-talks underlie uncontrolled cancer growth, invasion and metastasis as well as cancer chemoresistance to conventional anti-tumor drugs.
Thus, more investigations are needed to understand cancer complexity and the numerous interactions between various signaling pathways, which can cause switches between metabolic pathways that enable cancer cell tolerance of micro-environmental changes for proliferation and migration. This should also be taken into account in the discovery of novel molecular targets for anti-cancer agents.
| 9,105.6 | 2020-02-28T00:00:00.000 | [ "Biology", "Chemistry" ] |
Cultural, textual and linguistic aspects of legal translation: A model of text analysis for training legal translators
Legal translation training involves the acquisition and development of a set of sub-competences that constitute legal translation competence (Cao, Deborah. 2007. Translating law. Clevedon: Multilingual Matters; Prieto Ramos, Fernando. 2011. Developing legal translation competence: An integrative process-oriented approach. Comparative Legilinguistics. International Journal for Legal Communications 5. 7–21; Piecychna, Beata. 2013. Legal translation competence in the light of translational hermeneutics. Studies in Logic, Grammar and Rhetoric 34(47). 141–159; Soriano Barabino, Guadalupe. 2016. Comparative law for legal translators. Oxford: Peter Lang; Soriano Barabino, Guadalupe. 2018. La formación del traductor jurídico: Análisis de la competencia traductora en traducción jurídica y propuesta de programa formativo. Quaderns: Revista de Traducció 25. 217–229). The development of those sub-competences is part of a complex process in which students are faced with different concepts and translation strategies and techniques that are not necessarily easy for trainee translators to grasp (Way, Catherine. 2014. Structuring a legal translation course: A framework for decision-making in legal translation training. In Le Cheng, King Kui Sin & Anne Wagner (eds.), The Ashgate handbook of legal translation. Farnham: Ashgate), particularly when applied to a legal context. It is our experience that translation students tend to focus on the product (text production) and do not spend enough time analysing the source text, which results in obvious mistakes in mainly – but not only – cultural (legal), textual and linguistic aspects. The interdisciplinary nature of legal translation calls for an integrative model for teaching and learning. The model presented provides trainees with a framework for source text analysis that places the communicative situation and the translation brief at the core, from which three fundamental dimensions, based on the aspects mentioned above, develop. Elements such as the legal cultures involved, legal text typologies or the level of specialisation of terms and discourse are some of the aspects to be considered, allowing trainees to achieve a thorough understanding of the source text for a conscious translation. The model will be applied to a specific source text and translation brief.
Keywords: legal translator training, legal translation competence, source text analysis, text typologies
1 Legal translation training
Legal translation is interdisciplinary by nature and so is its training. Generally speaking, legal translation training can fit three different scenarios: undergraduate students following a translation degree, postgraduate modern languages students, and postgraduate law students (of course, other types of students can also be found, such as professional translators wishing to specialise in legal translation, for instance). Although students' backgrounds, baggage and interests are different, they normally share common features as far as legal translation training is concerned: they must grasp different concepts, translation strategies and techniques (Way 2014); they tend to focus on the product (target text production) rather than on the translation process; and they are inclined to concentrate on terminology (micro-level) rather than on the whole text (macro-level).
Among the different sub-competences that must be developed in legal translation training, the analysis of source texts is mainly related to the communicative and textual, intercultural and subject area sub-competences. Clearly, all other sub-competences are of utmost importance and must be combined within the training process (the professional, interpersonal and instrumental sub-competences are particularly important for source text analysis as regards the use of and access to specialised documentary sources in later stages), but they will be considered subsidiary for our aim of creating a model for source text analysis in legal translation training. I will refer briefly to those three sub-competences below.
To be competent from a communicative and textual point of view, legal translators must have a thorough knowledge of common/general and legal language in at least two legal cultures (or legal systems). This refers not only to mastering legal language (terminology, phraseology, concepts) but also common language (how to write properly, the ability to understand texts written in legalese). They must be familiar with textual conventions and text types as well as with legal discourse and the different registers found in legal texts.
Intercultural competence involves understanding legal systems as part of the culture of a particular society, as the law evolves in the same way as society. Translators must also be familiar with the social and political reality of a particular country or region, its traditions and customary law. The development of intercultural competence is also seen as the ability to transfer not only between legal systems but also between legal genres (Balogh 2019: 17).
Subject area competence comprises knowledge of legal families or traditions, legal systems, legal branches within the legal systems, sources of law, concepts, institutions, proceedings, substantive and procedural law, and divergences between legal systems. The degree of knowledge will vary greatly depending on the competence and training stage of the translator. It is to be expected that trainees are not experts in the legal systems involved in the translation process; however, they should have the skills necessary to access specialised documentation to solve translation problems.
When students are trained in legal translation, they normally already have a certain degree of translation competence (or are being trained as translators), which means that trainers do not have to teach translation from scratch, as some basic concepts and strategies have already been developed. Legal translation training must thus be seen as complementary to this basic training, and I will focus on the aspects specific to legal translation in the model suggested.
Before explaining the model, I will briefly concentrate on the main features of legal texts so that we can understand why analysing the source text is so important, not only in translation as a whole, but in legal translation in particular.
Main features of legal texts
The main aspects that differentiate legal translation from translation in a broad sense are legal language, asymmetries and incongruences between legal systems, and text typologies (Borja Albi 2000; Soriano Barabino 2020). Legal texts are written in a particular (legal) language and reflect the legal culture to which they belong, so that the analysis of legal texts prior to translation is not only useful but also necessary.
Law expresses itself with its own language formed by terms, expressions and different elements of style or register. Legal language has developed parallel to the history and culture of each society and "legal terminology is system-bound, tied to the legal system rather than to language" (Pommer 2008: 18). However, and as this same author states, "legal language is a technical language with particularly close ties to the common language, which significantly heightens its culture-specificity".
Law, as a socio-cultural phenomenon, "is always linked to the culture of a particular society and jurisdiction. Consequently, national legal systems are deeply rooted in a specific legal tradition and legal culture" (Pommer 2008:18). This results in asymmetries between legal systems, often considered to be the main challenge for legal translators (Šarčević 1997).
Given the countless legal situations that exist, there is an immense variety of legal texts. A vast number of them cannot be considered purely legal but are hybrid texts, not only because law permeates politics or economics (among other areas) but also because legal texts may include elements belonging to areas outside the law (Mayoral Asensio 2002, 2004). However, text awareness is essential for translators. As Trosborg (1997: 17) puts it, the "lack of relevant knowledge of genre, communicative functions, text types and culture may result in distorted translations". Translators should take conscious decisions and, to do so, they must master not only the two (or more) cultures involved in the translation process but also the textual conventions of both systems (Bhatia 1997).
All these elements that characterise legal translation are found in legal texts. When analysing a legal text, we should be able to identify a particular legal language and a particular legal system and culture. The text has a particular form and a series of features that identify it with a particular genre. Even if this may seem obvious to experienced translators, it is not always as clear to novice translators, and part of their training should consist in raising awareness of the characteristics of legal texts so that they can produce correct translations.
A model for source text analysis
As mentioned before, legal translation students are often overwhelmed by the countless aspects they must take into account when producing a target text and tend to concentrate on the final product, not necessarily paying enough attention to the process itself, thereby neglecting aspects related to culture, language or even textual conventions. Having a reference model with blank spaces to be completed makes them devote some time to analysing the text, which highlights the main elements they must take into account for target text production. Before explaining the model that I propose, it is necessary to describe the different levels into which I divide the translation process from a training perspective:
1. Source text analysis. This should be the first step in any translation process. The model I suggest in this paper aims at making that analysis easier for trainee translators. It is intended to offer a general overview of the text, covering general aspects related to the macro- and microstructures of the source text but also implicit cultural elements. It should also include a conscious choice of the general translation strategies to be applied to the translation process in a particular communicative situation and with a specific translation brief.
2. Access to documentary sources and documentary research. Although this level starts during the source text analysis, it is necessary to commit some time to accessing documentary sources and doing as much research as may be necessary to accomplish the translation process. Frequently this phase overlaps with the other stages and continues throughout the whole translation process.
3. Translation problem solving. At this stage, we must focus on solving specific problems at both the macro- and micro-levels. Comparative law is particularly useful at this point, and translation techniques must be used particularly to solve problems of asymmetry at the micro-level.
4. Translation in the proper sense (understood as the actual writing of the target text).
5. Proofreading and revision of the target text.
Based on the above, I suggest a model of source text analysis aimed at helping students at level 1. Although other models for text analysis exist, such as the one suggested by Elena García (2008), based on four levels (functional, situational, thematic and formal-grammatical) and resulting in an exhaustive analysis of the text, the one I suggest has several advantages that make it particularly useful for legal translation training, namely:
- It is a model specially designed for legal texts, focusing on the main difficulties of legal translation (as mentioned above).
- It pays particular attention to the communicative situation and translation brief (Nord 2018).
- It does not require a lot of time or documentary research, so students can easily complete the table below before each translation assignment.
- Although very general, the result is a sort of diagram of the source text in which the most relevant elements for its translation are highlighted.
The model is presented as a table with different blanks to be completed by the students before undertaking the next levels of the translation process (Table 1; not reproduced here). The first part of the table focuses on analysing the source text on its own. After indicating the name of the text or document to be translated, students are required to offer basic information about the communicative situation, specifically the producer, receiver and function of the source text. It is important to note that I follow a functionalist approach to translation. Therefore, function, understood as "the use a receiver makes of a text or the meaning that the text has for the receiver" (Nord 2018: 138), is the main guiding principle of the translation process.
The three columns below the communicative situation gather essential information for developing the communicative and textual, intercultural and subject area competences. These are of particular relevance in legal translation and may be totally irrelevant in other types of translation, such as the translation of scientific texts, for instance (especially and broadly speaking, those aspects related to culture).
The basic cultural elements of the source text that students must be familiar with, so that they can easily locate the text in its cultural environment, are the legal system and the legal family to which the text belongs. Although this may seem an obvious statement, and it is paramount in legal translation to know the legal system we are translating from, it is not always as clear to trainee translators. It is our experience that, during their first contacts with legal translation, students tend to associate everything written in a particular language with a specific country, which may undeniably result in inaccurate translations.
Although these are not strictly cultural aspects, after locating the text in its legal culture, students must provide information on aspects related to the subject area sub-competence: that is, the branch of law to which the text belongs, whether it develops private or public law, and whether it is a document of substantive or procedural law. The distinction between public and private law, although specific to civil law countries, can also be extended to common law countries. Even if this is not one of their characteristic features, and it is not often that the court system is divided into public and private courts, it is always possible to differentiate between matters of public law and those falling under the scope of private law.
Although normally more difficult for novice students to identify, another element is whether the text falls under the sphere of substantive law (the positive law applied) or of procedural or adjective law.
The second column is intended to offer information about text typologies. Students must identify the category and text genre of the text. As there are different text classifications in Translation Studies in general, and in legal translation in particular, and given the importance of mastering the textual conventions in both legal systems, a specific part of this paper (Section 4) is dedicated to this aspect and to explaining the classification that I suggest.
The third column focuses on linguistic aspects. Students must analyse three characteristics of the text as far as language is concerned. First, they must consider whether the text is written in legalese, that is, the formal and technical language used in legal documents, or in plain language, which is more accessible to the general public.
Secondly, they must concentrate on terminology. Generally speaking, legal terms can be specialised, semi-specialised or general (of frequent use in the area). Specialised terms are specific to a certain subject area (in this case, law), even if some of them are also used in general language with a different or similar meaning. Semi-specialised terms usually have a particular meaning in general language and a different one in the subject area (Alcaraz Varó 2001). This feature goes hand-in-hand with the one mentioned above: the higher the frequency of specialised terms in a text, the more specialised that text is. However, all legal texts include the three types of legal terms indicated, to a certain extent.
Thirdly, it is also possible to affirm that there are different legal languages, and this is the third aspect to be linguistically marked in our model. Just as texts can be classified according to their textual conventions, texts belonging to different textual categories generally share a language which is somewhat different from that of texts falling under a different textual category. For language purposes, we will be talking about five "legal languages" (even if I have identified more text categories):
- Normative: language used in statutes, codes, regulations, etc.
- Judicial: language used by the judiciary in the texts issued by the courts and the court administration.
- Administrative: language used by the administration/government in its relations with the general public and by citizens in their relations with the administration/government.
- Language used in public documents issued or authorised by a public officer (in some countries referred to as notarial language).
- Language used in private documents (such as contracts).
After this brief analysis of the source text, the focus moves to the translation brief. Even if this is not, strictly speaking, part of the analysis of the source text, it is essential to be aware of the requirements of the assignment and to take a series of aspects into account so that the result complies with the expectations of the client. Therefore, basic information about who is going to receive the document and who the potential readers are must be provided. Likewise, it is essential to know the function of the target text, as source and target texts frequently have different communicative goals.
As far as translation strategies are concerned, the communicative situation of the translation brief may call for interaction between different legal systems (a text produced in a particular legal system is translated to be read and/or to create effects in a different legal culture), in which case we would be talking about inter-systemic legal translation; or it may remain within one legal system (we must transfer a text from one language to another but within the same legal system, as is often the case in European Union law, for instance), in which case we would talk about intra-systemic legal translation. This distinction is important, as legal asymmetries between legal systems (the main feature of inter-systemic legal translation) are one of the most difficult aspects of this type of translation and also (one of) the most challenging to be understood (and mastered) by novice translators.
Strongly related to this is the fact that, during their first contacts with legal translation, students tend to convert a text belonging to a particular source culture into a totally different text written in a different language and adapted to a different culture. Although the particular strategies to be applied to and developed for each translation assignment depend on the translation brief itself, it is not often that such a shift in source and target culture is actually required. Students often have problems understanding that their task basically consists of "explaining", in a different language and to readers belonging to a specific target culture, a source text written in a particular language and belonging to a given source culture.
Another aspect to be taken into account, as far as the general strategy to be applied to the translation is concerned, has to do with the function of the translation process and the function of the target text resulting from it: that is, documentary versus instrumental translation (Nord 2018). Although most legal texts follow an instrumental process of translation, where "the result is a text that may achieve the same range of functions as an original text" (Nord 2018: 50), there may also be cases where the result of the translation process is a text "whose main function is metatextual", the target text being "a text about a text, or about one or more particular aspects of a text".
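To make the shape of the completed form concrete, the sketch below encodes the fields described above as a structured record (a minimal illustration in Python; all field names and option strings are ours, paraphrasing the table, not the author's published template):

```python
from dataclasses import dataclass, field

@dataclass
class SourceTextAnalysis:
    """Blank form for level-1 source text analysis (fields paraphrase the model)."""
    document_name: str
    # Communicative situation of the source text
    producer: str
    receiver: str
    function: str
    # Cultural / subject-area column
    legal_system: str
    legal_family: str
    branch_of_law: str
    public_or_private: str          # "public" | "private"
    substantive_or_procedural: str  # "substantive" | "procedural"
    # Textual column
    category: str                   # one of the seven genre categories (Section 4)
    genre: str
    # Linguistic column
    discourse: str                  # "legalese" | "plain language"
    terminology: str                # "specialised" | "semi-specialised" | "general"
    legal_language: str             # normative / judicial / administrative / notarial / private
    # Translation brief
    target_receiver: str
    target_function: str
    systemic_relation: str          # "inter-systemic" | "intra-systemic"
    translation_type: str           # "documentary" | "instrumental"
    remarks: str = field(default="")  # particular features of specific texts
```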
A final space is left in the table so that students can indicate particular features of specific texts. An example of this is whether the text is part of a certain proceeding and must be translated in accordance with particular requirements.
Legal typologies
Before undertaking the next levels of the translation process, students must have a clear image of the source text and, as such, the text typology must be clearly identified. This is why I have added a specific column on text typology to the model. However, as mentioned above, there is a vast array of legal texts, and they can be classified according to several criteria. In Translation Studies, several classifications have been suggested, based basically, although not only, on subject and professional areas (Delisle 1980; Snell-Hornby 1988 or Wilss 1988, among others), function (some of the most representative classifications being the ones suggested by Hatim and Mason 1990; House 1977 or Reiss and Vermeer 1996), or genres. Among the existing legal text classifications are the one suggested by Šarčević (1997), who classifies texts according to their function (primarily prescriptive, primarily descriptive but also prescriptive, and purely descriptive), and the one by Tiersma (1999), also based mainly on text functions (operative, expository and persuasive). It is my belief that these classifications do not offer a clear image of the source text, and I am more inclined to use a genre-based typology, such as the one created by the GENTT research group. Obviously, both typologies (function-based and genre-based) can be combined, thereby providing a clearer image of the source text.
Before explaining the classification that I suggest it is important to clarify some terms such as text type or genre. In line with Balogh (2019: 20-21), I understand genre as "text used in a particular situation for a particular purpose composed and structured according to the norms accepted by a particular discourse community and thus displaying differences in external format (e.g. newspaper article, essay, contract, etc.)" while text types would be distinguished according to their "rhetorical (and communicative) function (e.g. narrative, descriptive, argumentative, comparative, etc.)". Hence the possibility of combining function-based typologies (text types) and genre-based ones.
The model that I suggest aims to collect information about text genres. I have grouped text genres into seven broad categories, according mainly to their communicative situation, the producer of the text and the legal language used (see Section 3). As a result, and partly based on the categories suggested by the GENTT research group, I have identified the following:
1. Normative texts: this category includes all texts produced by the legislative or executive powers of the State that impose obligations and duties on the citizens. Some of the genres in this category are constitutions, statutes, regulations, codes, ministerial orders, etc.
2. Judicial documents: texts issued by the courts and the court administration. Judgments, court orders or petitions, among other texts, are included within this category.
3. Administrative texts and documents: texts and documents issued by the administration (government) in its relations with the general public or by the public in its relations with the government. Some genres in this category are application forms or reports, for example.
4. Public documents (issued or authenticated by a certifying officer): in some countries there are public officers, known as notaries, who are the only ones authorised to issue or authenticate certain documents, such as wills or deeds. Documents issued by other public officers, such as registrars, are also included in this category.
5. Private documents: documents that regulate certain private law issues and are agreed upon by private individuals, be they companies or citizens. The main text genre under this category is the contract.
6. Texts written by legal scholars: this category includes law textbooks, academic papers, legal opinions, etc. The language used in this genre is not purely legal but mostly academic.
7. Informative texts: all kinds of texts informing about any legal aspect. These texts are usually aimed at the general public, do not have a high level of specialisation and use plain language.
Application of the model
Our aim now is to apply the model explained above to a particular legal text and translation brief. The source text that I have used has been created ad hoc for this purpose (it is not reproduced here). Translation brief: the deceased Mary Stewart had a property in Spain. Her trustee needs her last will and testament to be translated into Spanish so that all the formalities required by the Spanish authorities can be completed.
As indicated above, the first level in the translation process should consist of the analysis of the source text, for which I will use the model suggested above (Table 3; not reproduced here). As the table shows, all the information required can easily be obtained from a thorough reading of the source text and does not require an amount of time that may discourage students. As mentioned before, this is just a first step in undertaking the translation of the text; it allows students to obtain a general overview of the source text so that they can start to make conscious decisions regarding its translation.
Conclusions
Converting a legal source text into a target text is a complex process. Diverse historical developments and different ways of understanding social phenomena give rise to dissimilar legal systems that express themselves in different languages and often refer to distinct realities. Legal translation implies deconstructing source legal texts into various elements, legal but also cultural, textual or linguistic, among others, and converting them into a new target text in which legal aspects from both the source and the target legal cultures co-exist.
This interdisciplinary nature of legal translation calls for an integrative model of translation, not only from a professional perspective but also from a training perspective. Offering trainees the tools to produce correct translations is the basis for having good professionals in the future.
Source text analysis is an aspect to which legal translation students do not necessarily pay enough attention. The model proposed allows for the deconstruction of source texts into the main elements (legal, cultural, textual, linguistic) that must be taken into account when transferring a legal source text into a target text. Although it may not be particularly useful for experienced translators, it is so for novice translators, who normally concentrate on the product rather than on the process.
| 6,066.2 | 2020-09-01T00:00:00.000 | [ "Linguistics", "Law" ] |
LED backlight designs with the flow-line method
An LED backlight has been designed using the flow-line design method. This method allows very efficient control of the light extraction. The light is confined inside the guide by total internal reflection, being extracted only by specially calculated surfaces: the ejectors. The backlight designs presented here have a total optical efficiency of up to 80% (including Fresnel and absorption losses) with an FWHM below 30 degrees. The experimental results of the first prototype are shown. ©2011 Optical Society of America. OCIS codes: (080.4298) Nonimaging optics; (220.2945) Illumination design.
References and links
1. J. C. Miñano, P. Benítez, J. Chaves, M. Hernández, O. Dross, and A. Santamaría, "High-efficiency LED backlight optics designed with the flow-line method," Proc. SPIE 5942, 594202 (2005).
2. D. Feng, G. Jin, Y. Yan, and S. Fan, "High quality light guide plates that can control the illumination angle based on microprism structures," Appl. Phys. Lett. 85(24), 6016–6018 (2004).
3. D. Feng, Y. Yan, X. Yang, G. Jin, and S. Fan, "Novel integrated light-guide plate for liquid crystal display backlight," J. Opt. A, Pure Appl. Opt. 7(3), 111–117 (2005).
4. H. Tanase, J. Mamiya, and M. Suzuki, "A new backlighting system using a polarizing light pipe," IBM J. Res. Develop. 42(3), 527–536 (1998).
5. N. Guselnikov, P. Lazarev, M. Paukshto, and P. Yeh, "Translucent LCDs," J. Soc. Inf. Disp. 13(4), 339–348 (2005).
6. S. R. Park, O. J. Kwon, D. Shin, S. H. Song, H. S. Lee, and H. Y. Choi, "Grating micro-dot patterned light guide plates for LED backlights," Opt. Express 15(6), 2888–2899 (2007).
7. W. J. Cassarly, "Backlight pattern optimization," Proc. SPIE 6834, 683407 (2007).
8. K. Imai and I. Fujieda, "Illumination uniformity of an edge-lit backlight with emission angle control," Opt. Express 16(16), 11969–11974 (2008).
9. R. Winston, J. C. Miñano, and P. Benítez, with contributions by N. Shatz and J. Bortz, Nonimaging Optics (Elsevier, Academic Press, 2004).
Introduction
Most existing LCDs are still backlit with cold cathode fluorescent lamps (CCFLs). These lamps are efficient in terms of light output, but LEDs are progressively gaining market share. LEDs have many advantages over fluorescent lamps, which is propelling this progression: the possibility of dynamic electronic control of the illumination for lower power consumption and higher contrast, higher compactness, less weight, lower voltage, greater reliability, instant start-up, and a larger colour range and brightness. These advantages have made them ideal for many applications, including monitors in notebook personal computers, screens for TV, and many portable information terminals.
To satisfy market trends, it is important to make backlights efficient, thin, light, and bright. There are many backlight designs that fulfill these requirements [1][2][3][4][5][6][7][8], designed using optimization algorithms and non-deterministic management of the light (via random scatterers), which usually sacrifice efficiency. In this paper we present several designs using the flow-line design method of Nonimaging Optics introduced in [1], which provides deterministic, efficient control of the light, together with the experimental results of the first manufactured prototype.
Backlight architecture
Figure 1 shows the cross section of our LED backlight architecture. It consists of two sections: a collimator and a beam slicer. In fact, both functions can be integrated into just one section [1], but they are considered separately for simplicity. In the 2D cross section, the top line of the beam slicer is labeled the "top guide line", since it guides the light all along the backlight, while the bottom line of the beam slicer is a microstructured line made up of two alternating types of segments, labeled "guide segments" and "ejectors" (Fig. 1). The ejectors "slice" the incoming bundle into small "light ribbons", which are ejected with high collimation (~20°-30° FWHM) through the top guide line toward the LCD. The input rays need to be collimated enough that all of them are totally internally reflected when they hit the top guide line, guide segments or ejectors. This is done by a collimator at the entrance of the slicer (for example a CPC).
Extended ray bundles and flow lines
Let the refractive index of the backlight, n, be known, and let the collimator illuminate the beam slicer (in the cross section of Fig. 1) within an angle ±θ. For the beam slicer design, consider an extended bundle of rays passing through the segment AB within an angle ±θ with respect to the x axis. The edge rays of that extended bundle [9] are shown in Fig. 2. For clarity, they have been represented as two subsets. Every point in the region bounded by the rays r1 and r2 is crossed by two edge rays (one ray of each edge ray subset). These subsets can be defined by two eikonal functions O1(x,y) and O2(x,y), whose constant values provide the wavefronts associated with those edge ray subsets. The lines j = const of the function
j(x,y) = (O1(x,y) − O2(x,y))/2    (1)
are the well-known flow lines of the bundle [1,9]. At each point they bisect the angle formed between the two edge rays at that point. Figure 3(a) shows the flow lines for the edge rays presented in Fig. 2. The flow lines have two useful properties [9]. First, since they bisect the bundle, a reflector can be placed coincident with a flow line without modifying the extended ray bundle as a whole: the edge rays associated with eikonal O2 are just transformed into edge rays associated with eikonal O1. This property will be used to design the top guide line and the guide segments of the bottom microstructured line to coincide with flow lines. The second interesting property of the flow lines is that the étendue ∆E of the rays crossing a segment whose edge points lie on flow lines j1 and j2 > j1 is independent of the coordinates of those points (see Fig. 3(b)), and is just given by
∆E = 2(j2 − j1).    (2)
The ejectors can be designed to reflect the rays so that the flow lines after the reflection are essentially perpendicular to the x-axis. Then the rays will be refracted at the top guide line, but since this line will be selected parallel or nearly parallel to the x-axis, the flow lines of the refracted bundle will remain essentially perpendicular to the x-axis.
Therefore, if all the rays have the same radiance R, the incremental power is ∆P = R∆E = 2R∆j. In order to provide uniform irradiance on the LCD (dP/dx = constant), the envelope of the microstructured line (in the limit of infinitesimally small ejectors) must fulfill the equation
dj/dx = −E_out/(2l),    (3)
where E_out is the 2D étendue of the exiting bundle and l is the length of the beam slicer. Extrapolated to finite ejector sizes, Eq. (3) means that, for instance, if all the ejectors have the same projected size ∆x, each ejector must intercept the same ∆j.
By introducing the equation of the envelope of the microstructured line as y = y(x) into Eq. (3), the envelope must fulfill the following differential equation:
∂j/∂x + (∂j/∂y)·(dy/dx) = −E_out/(2l).    (4)
Beam slicer design in two-dimensions
The design procedure consists of four steps: (1) choice of the ray bundle, (2) calculation of its flow lines, (3) selection of one flow line as the top guide line, and (4) construction of the microstructured line. In this paper, we will present two designs (called conical and linear), both associated with the ray bundles already presented in Fig. 3(a), so step (1) is completed. For step (2), the flow lines in Fig. 3(a) consist of 3 segments (with C1 continuity): a straight line (in region I of Fig. 3(a)), a parabola (in region II), and a hyperbola (in region III). This can easily be obtained by calculating the lines bisecting the edge rays in each region. In region I each point is intercepted by two edge rays forming the angle θ with respect to the x axis, thus the flow lines are straight lines y = const. In region II, one edge ray comes from an interior point of the strip AB, while the other comes from an extreme point of the strip (A or B). Therefore, the flow lines are parabolas with focus at A or B. Finally, in region III both edge rays come from points A and B, so the flow lines become hyperbolas with foci A and B. A numerical sketch of this bisector construction is given below.
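The following short Python sketch (our illustration, not the authors' code) traces a flow line through any point by stepping along the bisector of the two local edge rays; in region I it reproduces the straight lines, in region II the parabolas with focus A or B, and in region III the hyperbolas with foci A and B:

```python
import numpy as np

def edge_ray_dirs(p, y0, theta):
    """Unit direction vectors of the two edge rays through p = (x, y), x > 0,
    for the bundle radiated by the strip A = (0, y0), B = (0, -y0) within +/-theta."""
    x, y = p
    t = np.tan(theta)
    # Edge ray from the lowest admissible source point of the strip:
    if y - x * t >= -y0:                       # interior point -> plane wave at +theta
        u1 = np.array([np.cos(theta), np.sin(theta)])
    else:                                      # pinned to the strip edge B
        u1 = np.array([x, y + y0])
    # Edge ray from the highest admissible source point:
    if y + x * t <= y0:                        # interior point -> plane wave at -theta
        u2 = np.array([np.cos(theta), -np.sin(theta)])
    else:                                      # pinned to the strip edge A
        u2 = np.array([x, y - y0])
    return u1 / np.linalg.norm(u1), u2 / np.linalg.norm(u2)

def trace_flow_line(p0, y0, theta, x_end, ds=1e-3):
    """Trace the flow line through p0 by stepping along the edge-ray bisector."""
    pts = [np.asarray(p0, dtype=float)]
    while pts[-1][0] < x_end:
        u1, u2 = edge_ray_dirs(pts[-1], y0, theta)
        b = u1 + u2                            # bisector direction (flow-line property)
        pts.append(pts[-1] + ds * b / np.linalg.norm(b))
    return np.array(pts)

# Example: flow line through (x, y) = (1, -2) mm for AB = 5 mm (y0 = 2.5), theta = 10 deg.
line = trace_flow_line((1.0, -2.0), y0=2.5, theta=np.radians(10), x_end=60.0)
```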
In step (3), for both designs, we select the flow line y = 0 as the top guide line. Finally, in step (4) the microstructured line must be calculated. For very small ejector sizes, this could be done by solving the differential Eq. (4) with the contour condition y(0) = −d_0, where d_0 is the collimator exit aperture size (in Fig. 3 the aperture is given by the strip CD). However, for better performance, we perform the exact finite-facet construction. This is implemented by starting from the flow line passing through the point C = (0, −d_0) and alternating flow line segments (which are the guide lines) with the ejectors. We choose that these ejectors all intercept the same étendue ∆j, with a tilt such that all the rays are reflected by total internal reflection.
The input parameters for the finite facet-size design are the coordinate y_0 that defines the points A = (0, y_0) and B = (0, −y_0), the aperture size d_0 and angle θ of the collimation, and the minimum thickness of the beam slicer at its end, d_min, which is fixed by manufacturing constraints. The purely geometrical efficiency (i.e., without considering absorption or Fresnel losses) is given by the ratio E_out/E_in, where E_in = 2·n·d_0·sin θ is the étendue of the bundle exiting the collimator and entering the beam slicer. Note that fixing d_min > 0 implies some geometrical losses, given by the étendue E_l escaping at the end of the slicer (E_l = E_in − E_out).
Once the design is finished, the maximum thickness of the beam slicer, d_max, and the length l are computed. Since d_max and l are more practical input parameters, we match them by varying d_0 and θ.
When y_0 is small enough that the flow lines inside the beam slicer contain not only straight line segments but parabolic and hyperbolic ones too (i.e. the slicer is designed in regions I-III), the design will be called the conical backlight here. When y_0 tends to infinity, the bundle has only region I, so all the flow lines inside the beam slicer are straight lines; this design will be called the linear backlight.
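For the linear backlight the bookkeeping is particularly simple, since in region I the flow-line function reduces to j = n·y·sin θ, so slicing the guided étendue into equal pieces ∆j means equal steps in depth. The sketch below is our illustration under that assumption (the function name, and identifying d_0 with the maximum thickness d_max in the example call, are ours, not the paper's):

```python
import numpy as np

def linear_slicer(d0, d_min, theta, n=1.49, n_ejectors=100):
    """Equal-etendue slicing for the linear backlight (region-I bundle only).

    j = n * y * sin(theta), so equal Delta_j per ejector means equal depth steps.
    Returns the flow-line depths bounding each ejector, the etendue intercepted
    per ejector, and the purely geometrical efficiency E_out / E_in = 1 - d_min / d0."""
    sin_t = np.sin(theta)
    E_in = 2.0 * n * d0 * sin_t                # etendue entering the slicer
    E_leak = 2.0 * n * d_min * sin_t           # etendue escaping at the slicer end
    dE = (E_in - E_leak) / n_ejectors          # etendue intercepted by each ejector
    depths = np.linspace(-d0, -d_min, n_ejectors + 1)
    return depths, dE, 1.0 - d_min / d0

# Parameters close to the paper's linear design (PMMA, theta = 8 deg):
_, _, geom_eff = linear_slicer(d0=2.89, d_min=0.5, theta=np.radians(8))
print(f"geometrical efficiency ~ {geom_eff:.1%}")  # ~82.7%, near the quoted 82.0%
```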
Figure 4 shows the cross sections of two of these designs. The conical one has been designed with the following specifications: AB = 5 mm, θ = 10°, d_min = 0.5 mm, d_max = 2.9 mm, p = 1 mm (where d_min and d_max are the smallest and largest thicknesses, and p is the distance between two ejectors). A linear backlight design with similar specifications (θ = 8°, d_min = 0.5 mm, d_max = 2.89 mm, p = 1 mm) has also been modeled.
Three-dimensional designs
Figure 5 shows the 3D model of one of the backlights, obtained by linear symmetry along the z-axis of both the beam slicer and collimator sections. We have selected this simple symmetry to ease the prototype manufacturing (see Section 6). The LEDs will not be glued to the collimators, which will have a flat entry surface. Therefore, the light after the refraction will form an angle with the x-axis smaller than the critical angle. Note that, due to the linear symmetry of the collimators, the light exiting the backlight toward the LCD plane will be collimated only in the x-y plane, and not in the z-y plane. Collimation in both directions can be obtained using crossed-CPC type collimators (which collimate in both dimensions) and the same linear-symmetric beam slicer.
The conical design has been ray-traced in the commercial raytrace package LightTools®, using 7 equally spaced LEDs with dimensions 0.6 × 1.9 mm (the same dimensions as OSRAM microside LEDs). Besides collimation, the collimator's function is to mix the rays coming from different LEDs, so that the irradiance at the entrance of the slicer is uniform. This provides an equal étendue supply for the ejectors. The LightTools® simulation gives 92.0% geometric efficiency and 79.9% total efficiency (including Fresnel and absorption losses) for the conical backlight, and 82.0% geometric efficiency and 71.4% total efficiency for the linear backlight.
Simulations and experimental results for a linear backlight prototype
Considering the efficiency, the best design presented here is the conical backlight. However, the linear backlight is easier and cheaper to prototype, since its microstructured lines consist of straight lines, the thickness decreases constantly and all the ejectors have the same size (in this design, 22 microns of horizontally projected dimension). Therefore, we have chosen the linear backlight design for the first prototype (Fig. 6), manufactured by direct cutting of the PMMA, and using 7 OSRAM microside LEDs. The edge of the prototype has been observed under a microscope. It has been noticed that the curvature radii at the connections between guiding lines and ejectors are around 10 µm, which must certainly affect the performance. Therefore, an approximate model with all of the edges having the same radius of curvature of 10 µm has also been simulated. The surface scattering is considered negligible. The simulation shows an irradiance pattern with a maximum variation (with respect to the mean value) of 10.7% (Fig. 7(a)) and an efficiency of 61.2%. These values are poorer than the 6.8% non-uniformity and 71.4% efficiency predicted with the idealized model (with null radius) mentioned in Section 4. The simulated intensity distribution of the realistic model is also shown in Fig. 7(b), in which the lack of collimation in one dimension can be appreciated, due to the selected linear symmetry of the collimators. The experimental measurements of the irradiance at the backlight exit have been carried out with a luxmeter placed at seventeen points distributed throughout the prototype's exit aperture. These measurements show a non-uniformity of about 12%, compared to the 10.7% predicted by the simulation. This disagreement may be due to dispersion of flux between the 7 LEDs. The prototype's intensity distribution and optical efficiency were measured using a single LED and the LUCA optics measurement system. This system comprises a camera, lens, screen and software that processes the information collected by the CCD camera (Fig. 8). Since the camera aperture is very small, we can consider that the screen is placed at infinity from the camera, so the CCD directly reflects the power emitted by each point on the screen. The screen is located on the lens's focal plane, thus parallel rays from the source will focus on a point on the screen. The camera records the power of the light at each point on the screen. From these data the overall power has been calculated. Due to the sizes of the backlight and LUCA's lens, one measurement can only involve the light contained in a range of ±5° and ±10°. In order to measure the total output radiation, the backlight was set on the platform of a two-axis rotator and rotated from −90° to 90° in both axes with steps of 10° and 20°, as shown in Fig. 8. The measured efficiency is 51.7%. The additional efficiency drop of 10% (with respect to the simulated value of 61.2%) is caused by surface roughness and coupling losses between the LEDs and the backlight.
The measured and simulated cross sections of the intensity distribution are also shown on the right-hand side of Fig. 8(b); they show reasonable agreement and a FWHM < 30°. The intensity has been designed with an offset of 15° with respect to the normal to the backlight to guarantee total internal reflection at the ejectors.
Summary
The presented backlight designs contain a basic piece called a beam slicer, which guides the beam between flow lines and periodically ejects part of it, creating an output beam made up of small ribbons of light. All of the designs provide uniform radiation for the LCD with high efficiency (up to 80% including all losses). The experimental results differ from the theoretical ones due to surface errors and roughness.
Fig. 1. Cross section of a generalized LED backlight design based on the flow-line method.
Fig. 2. Edge rays of the bundle immersed in a medium of refractive index n and radiating from the strip AB within the angle ±θ. The j = const lines of the function
Fig. 3. Flow lines for the beam radiating from the strip AB (in red): (a) definition of the flow lines; (b) design procedure for the conical and linear backlights: (1) choice of the ray bundle, (2) calculation of its flow lines, (3) selection of one flow line as the top guide line and (4) construction of the
Fig. 8. LUCA measurement system, and simulated and measured intensity cross section profiles.
"Engineering",
"Physics"
] |
Approximating structured singular values for Chebyshev spectral differentiation matrices
In this article, we present the numerical computation of lower bounds of the structured singular value, known as the µ-value, for a family of Chebyshev spectral differentiation matrices. The µ-value is a versatile tool used in control theory to analyze the robustness, performance, stability, and instability of feedback systems. The proposed methodology is based on a low-rank ordinary-differential-equation technique and provides tight lower bounds of the µ-value when compared with the well-known MATLAB routine mussv available in the MATLAB control toolbox.
Introduction
The structured singular value, known as the µ-value and defined by Packard and Doyle (1993), is a valuable tool in system theory for analyzing the robustness and performance of uncertain control systems. The µ-value can be used to investigate the stability of a control system with the help of the main loop theorem discussed in Packard and Doyle (1993); however, further analysis is needed in order to deal with complex robustness questions.
The structures addressed by the µ-value are generic in nature. In principle, these structures allow us to cover all kinds of uncertainties and perturbations that can be included in linear control systems by means of both real and complex linear fractional transformations (LFTs). For applications and examples of structured singular values, interested readers may see Bernhardsson et al. (1998), Hinrichsen and Pritchard (2005), Chen et al. (1996), Zhou et al. (1996), Qiu et al. (1995), Karow (2011), and Karow et al. (2006) and the references therein.
Unfortunately, the computation of the exact structured singular value is not a trivial task and is an NP-hard problem; for more details, see Braatz et al. (1994). In the case of purely real perturbations, even approximating the µ-value is NP-hard. In practice, the computation of the µ-value therefore relies on the approximation of both lower and upper bounds of µ.
For the special case when only repeated parametric perturbations are allowed, it is particularly valuable to have lower bounds, because the upper bound can be conservative, especially when repeated parametric perturbations occur. The widely used MATLAB routine mussv approximates an upper bound by means of a diagonal balancing technique (for further details, readers may consult Young et al. (1992)) and a linear matrix inequality (LMI) technique developed in Fan et al. (1991). The lower bound of the µ-value is approximated by means of a power method; the interested reader may consult Packard et al. (1988) and Young et al. (1994). The algorithm resembles the power method for approximating the largest eigenvalue and the largest singular value of a matrix.
In this paper, we present numerical approximations of lower bounds of the µ-values of Chebyshev spectral differentiation matrices, considering perturbations associated with pure complex as well as mixed real and complex uncertainties. The proposed methodology for approximating the lower bounds of the µ-value is based on a two-level, inner-outer algorithm (Rehman and Tabassum, 2017).
In Section 2, we focus on the basic framework of the problem under consideration and describe how the approximation of the µ-value can be addressed by means of a two-level algorithm, consisting of an inner and an outer algorithm. In Section 3, we introduce the inner algorithm for the case of pure complex uncertainties. The outer algorithm is presented in Section 4. Finally, in Section 5, we give numerical experiments comparing the lower bounds of the µ-values of Chebyshev spectral differentiation matrices obtained with the algorithm of Rehman and Tabassum (2017) to those obtained with the MATLAB function mussv.
Framework
Let $M \in \mathbb{C}^{n,n}$ (or $\mathbb{R}^{n,n}$), where $\mathbb{C}^{n,n}$ denotes the complex $n \times n$ matrices and $\mathbb{R}^{n,n}$ the family of real $n \times n$ matrices throughout this article, and let $\mathbb{B}$ be an underlying perturbation set with prescribed repeated real scalar blocks, repeated complex scalar blocks and full blocks along the main diagonal:
$$\mathbb{B} = \left\{ \mathrm{diag}\left(\delta_1 I_{r_1}, \dots, \delta_s I_{r_s};\ \Delta_1, \dots, \Delta_F\right) :\ \delta_i \in \mathbb{R}\ \text{or}\ \mathbb{C},\ \Delta_j \in \mathbb{C}^{m_j,m_j} \right\}.$$
The following definition is given in Packard and Doyle (1993), where $I_r$ denotes the $r \times r$ identity matrix.
Definition 2.1. Let $M \in \mathbb{C}^{n,n}$, consider the set $\mathbb{B}$ of block diagonal matrices defined above, and let $\Delta \in \mathbb{B}$ be an admissible perturbation. Then the structured singular value of $M$, denoted by $\mu_{\mathbb{B}}(M)$, is defined as
$$\mu_{\mathbb{B}}(M) = \frac{1}{\min\{\|\Delta\|_2 :\ \Delta \in \mathbb{B},\ \det(I - M\Delta) = 0\}},$$
with $\mu_{\mathbb{B}}(M) = 0$ if $\det(I - M\Delta) \ne 0$ for all $\Delta \in \mathbb{B}$. Restricting $\Delta$ to a general structured set $\mathbb{B}$ makes the structured singular value smaller than the unstructured one, which therefore provides an upper bound. In the important case when the underlying perturbation set allows only pure complex perturbations, we write $\mathbb{B}^*$ instead of $\mathbb{B}$.
For $\Delta \in \mathbb{B}^*$, the perturbation $e^{i\theta}\Delta$ belongs to $\mathbb{B}^*$ for any value of $\theta \in \mathbb{R}$. Thus we may choose $\Delta \in \mathbb{B}^*$ such that the spectral radius attains the value one, $\rho(M\Delta) = 1$, which is possible exactly when there is a $\Delta \in \mathbb{B}^*$ of the same norm such that $M\Delta$ has an eigenvalue equal to one, so that the matrix $I - M\Delta$ is singular. This gives the following alternative characterization of the structured singular value:
$$\mu_{\mathbb{B}^*}(M) = \max_{\Delta \in \mathbb{B}^*,\ \|\Delta\|_2 \le 1} \rho(M\Delta). \tag{3}$$
In Eq. (3), the quantity $\rho(\cdot)$ denotes the spectral radius of a matrix.
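To make the characterization in Eq. (3) concrete, a naive way to obtain a (crude) lower bound on µ is to sample admissible unit-norm perturbations and maximize the spectral radius $\rho(M\Delta)$. The sketch below shows the principle only; it is far weaker than the power iteration in mussv or the ODE-based method discussed below, and the block structure used (a single full complex block) is an assumption for illustration:

```python
import numpy as np

def mu_lower_bound_sampling(M: np.ndarray, n_samples: int = 2000,
                            seed: int = 0) -> float:
    """Crude lower bound on mu for one full complex block:
    mu(M) >= max over unit-norm Delta of rho(M @ Delta).
    For this structure the exact value is sigma_max(M)."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    best = 0.0
    for _ in range(n_samples):
        D = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        D /= np.linalg.norm(D, 2)                   # ||Delta||_2 = 1
        rho = np.max(np.abs(np.linalg.eigvals(M @ D)))
        best = max(best, rho)
    return best

M = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(mu_lower_bound_sampling(M))   # approaches sigma_max(M) from below
```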
Reformulation of the definition of the SSV
The structured spectral value set of $M \in \mathbb{C}^{n,n}$ with respect to a perturbation level $\epsilon > 0$ is defined as
$$\Lambda_{\mathbb{B}}^{\epsilon}(M) = \{\lambda \in \Lambda(\epsilon M \Delta) :\ \Delta \in \mathbb{B},\ \|\Delta\|_2 \le 1\}, \tag{4}$$
where $\Lambda(\cdot)$ denotes the set of eigenvalues of a matrix. For pure complex uncertainties $\mathbb{B}^*$, the set in Eq. (4) is simply a disk centered at the origin. Thus, for pure complex uncertainties, Eq. (3) can be reformulated as
$$\mu_{\mathbb{B}^*}(M) = \frac{1}{\min\{\epsilon > 0 :\ \max_{\lambda \in \Lambda_{\mathbb{B}^*}^{\epsilon}(M)} |\lambda| = 1\}}. \tag{5}$$
Overview of the proposed methodology
We need to solve the maximization problem
$$\lambda(\epsilon) = \arg\max\{|\lambda| :\ \lambda \in \Lambda_{\mathbb{B}^*}^{\epsilon}(M)\} \tag{6}$$
for a fixed parameter $\epsilon > 0$. From the above discussion, it is clear that the quantity $\mu_{\mathbb{B}^*}(M)$ is the reciprocal of the minimal value of $\epsilon$ such that $|\lambda(\epsilon)| = 1$. In the inner algorithm, we solve the problem addressed in Eq. (6) by solving a system of ordinary differential equations (ODEs). In the outer algorithm, we vary the perturbation level $\epsilon$ by a fast Newton method, which exploits knowledge of the computed extremizers.
Computation of local extremizers
In this section, we consider the solution of the problem in Eq. (6) by means of the inner algorithm. We use the following standard eigenvalue perturbation result of Kato (1980): for a smooth matrix family $A(t)$ with a simple eigenvalue $\lambda(t)$,
$$\dot{\lambda}(t) = \frac{y(t)^* \dot{A}(t)\, x(t)}{y(t)^* x(t)},$$
where $x_0$ and $y_0$ are the right and left eigenvectors of $A_0 = A(0)$ associated with the simple eigenvalue $\lambda_0 = \lambda(0)$, that is, $(A_0 - \lambda_0 I)x_0 = 0$ and $y_0^*(A_0 - \lambda_0 I) = 0$.
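The perturbation formula is easy to verify numerically. The sketch below compares it against a finite difference for an illustrative matrix family $A(t) = A_0 + tE$ (the family itself is an assumption made only for the check):

```python
import numpy as np

# Numerical check of the first-order eigenvalue perturbation formula
# lambda'(0) = y* A'(0) x / (y* x) for a simple eigenvalue (Kato, 1980).

rng = np.random.default_rng(1)
n = 4
A0 = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))        # A(t) = A0 + t*E, so A'(0) = E

w, V = np.linalg.eig(A0)
k = np.argmax(np.abs(w))               # largest (assumed simple) eigenvalue
x = V[:, k]                            # right eigenvector
wl, Wl = np.linalg.eig(A0.conj().T)    # left eigvecs: eigvecs of A0^H
kl = np.argmin(np.abs(wl - w[k].conj()))
y = Wl[:, kl]

deriv_formula = (y.conj() @ E @ x) / (y.conj() @ x)

h = 1e-7                               # finite-difference step
w_h = np.linalg.eigvals(A0 + h * E)
lam_h = w_h[np.argmin(np.abs(w_h - w[k]))]
deriv_fd = (lam_h - w[k]) / h

print(abs(deriv_formula - deriv_fd))   # should be small (~1e-6 or less)
```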
Definition 3.1.1. An admissible perturbation $\Delta \in \mathbb{B}^*$ with $\|\Delta\|_2 \le 1$, such that the matrix $\epsilon M \Delta$, for some fixed parameter $\epsilon > 0$, has a largest eigenvalue $\lambda$ that maximizes the modulus over the structured spectral value set $\Lambda_{\mathbb{B}^*}^{\epsilon}(M)$, is known as a local maximizer.
In the next theorem, the full blocks of an extremizer are replaced by rank-1 matrices. Theorem 3.1.3. Let $\Delta \in \mathbb{B}^*$ with $\|\Delta\|_2 = 1$ be an extremizer of the structured spectral value set $\Lambda_{\mathbb{B}^*}^{\epsilon}(M)$, and consider $\lambda$, $x$, $y$ as given in Theorem 3.1.2. Assume additionally that the non-degeneracy condition of Eq. (10) holds and that every full block possesses a singular value exactly equal to 1. Then the matrix $\Delta = \mathrm{diag}(\delta_1 I_{r_1}, \dots, \delta_s I_{r_s};\ u_1 v_1^*, \dots, u_F v_F^*)$ is also a local extremizer of the structured spectral value set.
Remark 3.1.4. Theorem 3.1.3 allows us to restrict the admissible perturbations, that is, the uncertainties in the spectral value set given in Eq. (4). Using the fact that the Frobenius norm and the matrix 2-norm of a rank-1 matrix coincide, we may search for extremizers within the sub-manifold
$$\mathbb{B}^*_1 = \left\{ \mathrm{diag}\left(\delta_1 I_{r_1}, \dots, \delta_s I_{r_s};\ u_1 v_1^*, \dots, u_F v_F^*\right) :\ \delta_i \in \mathbb{C},\ u_j, v_j \in \mathbb{C}^{m_j},\ \|u_j\|_2 = \|v_j\|_2 = 1 \right\}. \tag{11}$$
System of ODEs to approximate extremal points of $\Lambda_{\mathbb{B}^*}^{\epsilon}(M)$
In order to approximate a local maximizer of the structured spectral value set $\Lambda_{\mathbb{B}^*}^{\epsilon}(M)$, we construct and then solve for a matrix-valued function $\Delta(t) \in \mathbb{B}^*_1$ such that the modulus of the largest eigenvalue $\lambda(t) \in \Lambda_{\mathbb{B}^*}^{\epsilon}(M)$ of the matrix $\epsilon M \Delta(t)$ achieves maximal growth. We then derive a gradient system of ODEs satisfied by the admissible perturbation $\Delta(t)$ for a given initial matrix.
The orthogonal projection onto $\mathbb{B}^*$
Lemma 3.2.1. For $C \in \mathbb{C}^{n,n}$, consider the entry-wise product of $C$ with the pattern matrix $I_{\mathbb{B}^*}$ of the set $\mathbb{B}^*$, which yields a block diagonal matrix; the pattern matrix has ones in the positions of the prescribed blocks and zeros elsewhere. The orthogonal projection of $C$ onto the family $\mathbb{B}^*$ (with respect to the Frobenius inner product) is then obtained by averaging the diagonal of each repeated scalar block, $d_i = \mathrm{tr}(C_i)/r_i$ for $i = 1{:}s$, and retaining the full blocks $C_{s+1}, \dots, C_{s+F}$ unchanged.
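A minimal sketch of this projection for a structure with repeated complex scalar blocks followed by full blocks (the particular block sizes used below are illustrative assumptions):

```python
import numpy as np

# Orthogonal projection P_B*(C) onto the block-diagonal structure:
# each repeated scalar block delta_i * I_{r_i} is obtained by averaging
# the corresponding diagonal block of C (trace/r_i); full blocks of C
# are kept unchanged. Off-structure entries are zeroed out.

def project_onto_B(C, scalar_sizes, full_sizes):
    P = np.zeros_like(C, dtype=complex)
    pos = 0
    for r in scalar_sizes:                        # repeated scalar blocks
        block = C[pos:pos + r, pos:pos + r]
        P[pos:pos + r, pos:pos + r] = (np.trace(block) / r) * np.eye(r)
        pos += r
    for m in full_sizes:                          # full blocks
        P[pos:pos + m, pos:pos + m] = C[pos:pos + m, pos:pos + m]
        pos += m
    return P

C = np.arange(25, dtype=float).reshape(5, 5) + 0j
print(project_onto_B(C, scalar_sizes=[2], full_sizes=[3]))
```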
The optimization problem
We consider the simple, largest eigenvalue $\lambda = |\lambda| e^{i\theta}$ with corresponding right and left eigenvectors $x$ and $y$, respectively, normalized so that $\|x\|_2 = \|y\|_2 = 1$ and $y^* x > 0$. From the result of Lemma 3.2.1, we obtain an expression for the change in the largest eigenvalue in terms of the orthogonal projection $P_{\mathbb{B}^*}(y x^*)$; the eigenvectors $x$ and $y$ are defined and normalized as in Theorem 3.1.2. Now, considering a suitable perturbation $\Delta \in \mathbb{B}^*_1$, with $\mathbb{B}^*_1$ as in Eq. (11), we search for the direction $Z = \dot{\Delta}$ that maximizes the growth of the modulus of the largest eigenvalue $\lambda$. This direction, given in Eq. (16), solves the optimization problem (17), whose linear constraints ensure that $Z$ lies in the tangent space of the manifold $\mathbb{B}^*_1$ at $\Delta(t)$; in particular, they ensure that the norm of each block of the admissible perturbation $\Delta(t)$ is conserved. The scaling quantity $\nu_i > 0$ in the first set of constraints is the reciprocal of the absolute value of the quantity appearing on the right-hand side of the expression for $\dot{\delta}_i$, if this is non-zero, and equals one otherwise. Similarly, the quantity $\zeta_j > 0$ is the reciprocal of the Frobenius norm of the quantity appearing on the right-hand side of the expression for $\Omega_j$, if this is non-zero, and equals one otherwise. Note also that if the quantities appearing on the right-hand sides are non-zero, then $Z \in \mathbb{B}^*_1$.
Corollary 3.3.2.
The result of the previous Lemma 3.3.1 can be written as in Eq. (18), in which the quantity $P_{\mathbb{B}^*}(\cdot)$ denotes the orthogonal projection onto the manifold of pattern matrices, and $D_1, D_2 \in \mathbb{B}^*$ are scaling matrices, with $D_1$ positive.
Gradient system of ordinary differential equations
Lemma 3.3.1 and Corollary 3.3.2 suggest focusing on the following differential equation, Eq. (19), on the manifold of rank-1 matrices $\mathbb{B}^*_1$.
The vector $x(t)$ is an eigenvector associated with the simple, largest eigenvalue $\lambda(t)$ of the matrix-valued function $\epsilon M \Delta(t)$ for some fixed parameter $\epsilon > 0$. Note that the quantities $\lambda(t)$, $D_1$ and $D_2$ depend on the choice of the matrix-valued function $\Delta(t)$. The differential equation (19) represents a gradient system of ODEs, because its right-hand side is precisely the projected gradient of the objective $\Delta \mapsto |\lambda(\epsilon M \Delta)|$.
Choice of the initial value matrix $\Delta_0$ and perturbation level $\epsilon_0$
In our two-level algorithm, when the perturbation level is updated, we use the admissible perturbation $\Delta$ obtained for the previous value of the perturbation level as the initial value matrix for the system of ODEs in Eq. (19).
Assume that the given matrix $M$ is invertible, so that $I - \epsilon_0 M \Delta_0 = M(M^{-1} - \epsilon_0 \Delta_0)$. To compute the initial choice of the admissible perturbation $\Delta_0$, we perform an asymptotic analysis around $\epsilon_0 \approx 0$. To this end, consider the matrix-valued function $B(\epsilon) = M^{-1} - \epsilon \Delta_0$ and let $\lambda(\epsilon)$ be the eigenvalue of $B(\epsilon)$ of smallest modulus. Let $x$ and $y$ denote the right and left eigenvectors associated with the initial eigenvalue $\lambda(0) = \lambda_0 = |\lambda_0| e^{i\theta_0}$, scaled such that $y^* x > 0$. From Lemma 3.1, to achieve the locally maximal decline of the function $|\lambda(\epsilon)|^2$ at $\epsilon = 0$, we take the initial perturbation as in Eq. (20), in which the matrix $D$ is diagonal and positive definite and the initial admissible perturbation satisfies $\Delta_0 \in \mathbb{B}^*_1$. On the other hand, a natural choice of $\epsilon_0$ is
$$\epsilon_0 = \frac{1}{\mu^{\mathrm{upper}}_{\mathbb{B}}(M)}, \tag{21}$$
where $\mu^{\mathrm{upper}}_{\mathbb{B}}(M)$ is the upper bound for the structured singular value approximated by the MATLAB routine mussv.
Outer algorithm
In the following, we let $\lambda(\epsilon)$ denote the extremal eigenvalue obtained by approximating the stationary points of the gradient system of ODEs in Eq. (19).
To apply the fast Newton method to the solution of the equation $|\lambda(\epsilon)| = 1$, we approximate the derivative of $|\lambda(\epsilon)| - 1 = 0$ with respect to the perturbation level $\epsilon$.
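A sketch of this outer iteration is given below, with the inner algorithm abstracted as a callable returning $|\lambda(\epsilon)|$ and the derivative approximated by a central finite difference; the toy model used in the demonstration is an assumption, not data from the paper:

```python
# Outer Newton iteration on the perturbation level eps.
# `largest_modulus(eps)` stands for the inner algorithm: it returns
# |lambda(eps)|, the modulus of the extremal eigenvalue of eps*M*Delta(eps)
# at a stationary point of the gradient system (abstracted here).

def outer_newton(largest_modulus, eps0, tol=1e-8, h=1e-6, max_iter=50):
    eps = eps0
    for _ in range(max_iter):
        f = largest_modulus(eps) - 1.0
        if abs(f) < tol:
            break
        # finite-difference approximation of d|lambda|/d(eps)
        df = (largest_modulus(eps + h) - largest_modulus(eps - h)) / (2 * h)
        eps -= f / df
    return eps               # the lower bound follows as mu_lower = 1/eps

# Example with a toy model |lambda(eps)| = 3*eps (so mu = 3):
eps_star = outer_newton(lambda e: 3.0 * e, eps0=0.2)
print(1.0 / eps_star)        # ~3.0
```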
Numerical experimentation
In this last section of the article, we present numerical experiments for pure complex as well as mixed real and complex admissible perturbations (uncertainties). A comparison of the lower bounds of the µ-values for a family of Chebyshev spectral differentiation matrices is presented.
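For reference, Chebyshev spectral differentiation matrices of the kind used as test inputs can be constructed by the standard recipe popularized by Trefethen; whether the paper uses exactly this normalization is an assumption:

```python
import numpy as np

def cheb(N: int):
    """Chebyshev spectral differentiation matrix D and grid x
    (standard construction on Chebyshev points, cf. Trefethen)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)        # Chebyshev points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                     # diagonal via row sums
    return D, x

D, x = cheb(3)   # 4x4 matrix, a candidate input M for the mu computation
print(D)
```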
Example 1: Consider a three-dimensional real Chebyshev spectral differentiation matrix together with a set of block diagonal uncertainties as input argument. Making use of the MATLAB function mussv, we obtain an admissible perturbation $\Delta$. In this case the admissible uncertainty has unit 2-norm, while the obtained lower bound of the structured singular value is $\mu^{\mathrm{lower}}_{\mathbb{B}}(M) = 5.4677$.
In Table 1, we give the comparison of lower bounds of structured singular values for 3-dimensional Chebyshev spectral differentiation matrices.
Example 2: Consider the following four-dimensional real Chebyshev spectral differentiation matrix, together with a set of block diagonal uncertainties as input argument. The uncertainty set is taken as $\mathbb{B} = \{\mathrm{diag}(\Delta_1) :\ \Delta_1 \in \mathbb{C}^{4,4}\}$.
Making use of the MATLAB function mussv, we obtain an admissible perturbation $\Delta$. In this case the admissible uncertainty has unit 2-norm, while the obtained lower bound of the structured singular value is $\mu^{\mathrm{lower}}_{\mathbb{B}}(M) = 6.4745$.
In Table 2, we give the comparison of lower bounds of structured singular values for 4-dimensional Chebyshev spectral differentiation matrices. Example 3: Consider a five-dimensional real Chebyshev spectral differentiation matrix, together with the set of block diagonal uncertainties $\mathbb{B} = \{\mathrm{diag}(\Delta_1) :\ \Delta_1 \in \mathbb{C}^{5,5}\}$ as input argument.
Making use of the MATLAB function mussv, we obtain an admissible perturbation $\Delta$. In this case the admissible uncertainty has unit 2-norm, while the obtained lower bound of the structured singular value is $\mu^{\mathrm{lower}}_{\mathbb{B}}(M) = 10.3961$.
In Table 3, we give the comparison of lower bounds of structured singular values for 5-dimensional Chebyshev spectral differentiation matrices. Example 4: Consider a six-dimensional real Chebyshev spectral differentiation matrix, together with the set of block diagonal uncertainties $\mathbb{B} = \{\mathrm{diag}(\Delta_1) :\ \Delta_1 \in \mathbb{C}^{6,6}\}$ as input argument.
Making use of the MATLAB function mussv, we obtain an admissible perturbation $\Delta$. In this case the admissible uncertainty has unit 2-norm, while the obtained lower bound of the structured singular value is $\mu^{\mathrm{lower}}_{\mathbb{B}}(M) = 15.3612$.
In Table 4, we give the comparison of lower bounds of structured singular values for 6-dimensional Chebyshev spectral differentiation matrices.
In Figs. 1-8, we present a graphical comparison of the bounds of the µ-value obtained by our algorithm with those obtained by the MATLAB function mussv.
Conclusion
In this article, we have considered the problem of computing lower bounds of the µ-values for a family of Chebyshev spectral differentiation matrices. The numerical computation of µ-values plays an important role in the stability analysis of linear systems in system theory.
The numerical experiments show that the lower bounds of the µ-values computed by the algorithm discussed in this article compare favourably with those obtained by the well-known MATLAB control toolbox routine mussv.
Nomenclature
$\mathbb{B}$: family of block diagonal matrices
$\epsilon_0$: perturbation level
$\Delta_0$: initial admissible perturbation
$\mu$: structured singular value
"Mathematics"
] |
Macroscopic tensile plasticity by scalarizing stress distribution in bulk metallic glass
The macroscopic tensile plasticity of bulk metallic glasses (BMGs) is highly desirable for various engineering applications. However, upon yielding, the plastic deformation of BMGs is highly localized into narrow shear bands, which leads to "work softening" behavior and subsequently catastrophic fracture, the major obstacle to their structural applications. Here we report that macroscopic tensile plasticity in a BMG can be obtained by designing the surface pore distribution using laser surface texturing. The designed surface pore array creates a complex stress field compared with the uniaxial tensile stress field of conventional glassy specimens, and this stress field scalarization induces the unusual tensile plasticity. By systematically analyzing the fracture behaviors and by finite element simulation, we show that stress field scalarization can resist main shear band propagation and promote the formation of larger plastic zones near the pores, which accommodate the homogeneous tensile plasticity. These results may shed light on the deformation mechanism and guide further improvement of the mechanical performance of metallic glasses.
indentation. Wang et al. found that surface mechanical attrition treatment could induce intense structural evolution and lead to the formation of gradient amorphous microstructures, which promotes multiple shear banding and thus superior tensile ductility 28 . However, whether there exists a method to realize homogeneous tensile deformation in BMGs, rather than the inhomogeneous deformation governed by shear bands (SBs), has seldom been investigated.
On the other hand, at the nano-scale the deformation mechanism undergoes a transition from inhomogeneous to homogeneous deformation that does not rely on SBs, which results in tensile ductility and even necking 12,29 . From this point of view, monolithic BMGs could be intrinsically malleable and ductile under tension. Meanwhile, when the energy state of a BMG is tuned to the higher energy state of the super-cooled liquid, large tensile plasticity can also be obtained 30 . Similarly, for oxide glass, it has been found that nanowires show superplastic elongation larger than 200% under moderate exposure to an electron beam 31 . These experimental results indicate that the plastic deformation carrier in BMGs may not rely solely on SBs but on more microscopic deformation units. Many experimental results imply that BMGs are not completely homogeneous at the nanoscale; they contain many dynamic defects, or flow units (also termed liquid-like zones or nanoscale SBs) [32][33][34] . These dynamic defects show low modulus, low viscosity and high atomic mobility. When the fraction of these dynamic defects increases (for example, by rejuvenation treatment), mechanical properties of BMGs such as plasticity can be largely improved 35,36 . A question is then raised: could we improve the tensile plasticity of BMGs by making the deformation units directly accommodate the plastic strain rather than the SBs? It is challenging to realize this idea, considering that SBs form along the main shear plane. However, recent research on densification and strain hardening under multiaxial loading 37 implies that tensile plasticity may be obtained by complicating the stress field in BMGs. Meanwhile, stress, which is equivalent to temperature in its effect on viscosity, plays a similar role, and yielding can be considered a stress-induced glass transition 38 . Thus, it is possible for the viscosity of the whole BMG to decrease and approach the liquid-like state under a certain applied stress mode, which leads to near-homogeneous deformation in BMGs.
Surface artificial defects such as notches, indentation prints and laser shock peening have been verified to induce stress concentration, which could be used to create a complex stress field. However, these methods are not readily controllable and do not allow systematic variation of microstructural features, such as phase spacing and volume fraction. As a highly controllable and precise technique, laser surface texturing treatment (LSTT) has been adopted in welding and surface modification of BMGs as well as in cladding of engineering materials with amorphous coatings 39,40 . Thus, LSTT can be an efficient tool for surface treatment and the creation of a complex stress field. In the present work, a series of designed LSTT pore arrays with different sizes are introduced into typical Zr-based BMG samples. The LSTT samples with different pore sizes display different tensile fracture behaviors, and appreciable tensile plasticity is obtained when the pore size is about 150-200 μm. Finite element simulations for the different LSTT pore arrays were performed to analyze the evolution of the stress distribution. A strategy of stress distribution scalarization is proposed to enhance the macroscopic tensile plasticity of BMGs.
Results
Laser surface texturing treatment (LSTT). We designed three kinds of LSTT pore arrays. One of them is shown in the lower part of Fig. 1(b), with the as-cast sample for comparison in the upper part of Fig. 1(b). Both the as-cast and LSTT samples are amorphous, as confirmed by X-ray diffraction in Fig. 1(c). The amorphous nature of the LSTT sample is maintained owing to the ultrafast cooling rate of the pulsed laser during LSTT. Figure 1(d) displays a magnified part of the LSTT sample circled by the blue dashed rectangle in Fig. 1(b); the pore arrangement is an AB-like pattern [shown in the inset of Fig. 1(b)], which more readily promotes the formation of multiple SBs 23 . From Fig. 1(e,f), the ratio of the depth to the size of the pores is about 280:150 ≈ 1.87 and lies in the range between 1 and 2, which meets our pore profile design. Note that the LSTT samples differ from the laser-ablated surface layers of previous research 41 , and the depth of the laser-heating influenced layer is only several hundred nanometers for metals, considering the ultrashort laser interaction time (10 fs) 42 . This thin influenced layer does not have a pronounced effect on the tensile mechanical behavior, in contrast to the molten layer of several to hundreds of micrometers produced by traditional laser ablation. Furthermore, we selectively designed the laser texturing pore pattern on the surface, and the shape of the pores was specially designed to a near-cylindrical profile [Fig. 1(f)] to enable a systematic finite element analysis of the stress field distribution near the pores.
Tensile plastic strain, elastic modulus and fracture strength. Figure 2(a) shows typical tensile stress-strain curves of the as-cast and LSTT specimens. The as-cast specimen shows no visible macroscopic tensile plasticity, and catastrophic fracture takes place when the tensile strain reaches about 2%; in sharp contrast, obvious tensile elongation appears in the LSTT sample marked with pattern C and a pore size of 150 μm. For the LSTT samples with pore sizes of 42 and 85 μm (patterns A and B), visible nonlinear tensile stress-strain behavior also appears. The enlarged tensile stress-strain curves corresponding to the parts circled by the green, magenta and blue dashed rectangles are also shown in Fig. 2(b). One can clearly see that nonlinear plastic deformation starts at a tensile strain of ~0.0195 and that the tensile plastic strain ε p is only about 0.11% for LSTT sample A. For LSTT sample B, the starting tensile strain of plastic deformation decreases to 0.0164 and ε p increases to 0.19%. For LSTT sample C, the starting tensile strain of plastic deformation decreases to 0.0128 and ε p increases to 0.51%. These results indicate that the tensile plastic strain ε T depends strongly on the pore geometry. In addition, there is no serrated flow in the plastic part of the stress-strain curve of LSTT sample C in Fig. 2(b); serrated flow is the direct signature of SB-governed plastic deformation in BMGs 26,27 . The nonlinear plastic part of the stress-strain curve is very similar to the tensile deformation of microscale or nanoscale BMGs 12 , which indicates that a homogeneous plastic deformation process may take place within the LSTT BMGs. With the increase of the LSTT pore size, the tensile plastic strain ε T increases, while the elastic modulus E and fracture strength σ f conversely decrease, as seen in Fig. 2(b). The values of ε T , E and σ f for the various LSTT pore sizes are listed in Table 1 and shown in Fig. 2(c). The evolution of ε T versus E and σ f displays an inverse trend with increasing pore size, which is consistent with previous research 23 . The surface pore array can be regarded as a second, soft phase, and the increase in the proportion of surface pores leads to the decrease of E. Although σ f decreases by about 30% compared with the as-cast sample, ε T increases to 0.51% from almost zero for the as-cast sample. These results indicate that, to some extent, we can tune the tensile plastic deformation ability by designing the LSTT pore stacking.
Fracture angle and fracture morphology. The LSTT treatment also induces marked changes in fracture angle and morphology, as shown in Fig. 3. The as-cast sample fails by a single main shear fracture, with a shear fracture angle of ~50.9°, consistent with previous research [43][44][45] . The fracture surface morphology is the typical tensile fracture morphology of firework-like patterns consisting of a core and a radial vein-like pattern, as seen in the first pictures of Fig. 3(b,c). This indicates that the normal tensile stress controls the fracture process. In contrast, the LSTT samples exhibit larger fracture angles than the as-cast samples; the fracture angles of patterns A, B and C are 51.5°, 55.5° and 62.9°, respectively [see Table 1], which implies that the LSTT pore array twists the propagation direction of the main SBs. Analogous to the as-cast sample, LSTT sample A with the smaller pore size displays a similar radial-like pattern of smaller size, which indicates that the influence of the pore arrays starts to work [second pictures of Fig. 3(b,c)]. For LSTT sample B, the fracture surface displays a vein-like pattern and a river-like pattern [third pictures of Fig. 3(b,c)], which is the typical fracture pattern of compressive deformation, where compressive and shear stresses play the dominant role during fracture. These results suggest that the fracture mode transitions from uniaxial tensile fracture to compression-like fracture with the change of pore array and size. For LSTT sample C, dense micro-scale cone-shaped structures with a size of 7.5 μm appear in the central part between the two opposing surface pores [marked by the green dashed circle in the fourth picture of Fig. 3(b)]; such structures otherwise appear only in microscopic BMG samples as a size effect, for example in micro-scale foils 46 and nanoscale samples 29 . These unique cone-shaped structures are reminiscent of the homogeneous tensile fracture morphology of BMGs in the supercooled liquid state 47 , and the central part between the two opposing surface pores appears liquid-like. Previous research 23,24,37 has shown that constraints induce stress concentration that activates the formation of multiple SBs, and SB-dominated fracture usually expresses a vein pattern on the main fracture surface 48 . These unexpected cone-shaped structures therefore indicate a transition of the fracture mode from the usual heterogeneous plastic deformation via shear banding to homogeneous deformation in BMGs. The evolution of the fracture angle, fracture morphology and fracture mode with the LSTT pore size is displayed in Fig. 4, based on the data of Table 1.
Stress field distribution of LSTT samples with different pore sizes D. Finite element simulations are adopted to explain the reduction in fracture strength and the appearance of homogeneous tensile plastic deformation. The numerical results for three LSTT samples (pore sizes of 50, 100 and 150 μm) at an elastic strain of 2% are displayed in Fig. 5(a-c), in which an elastic modulus of 78.41 GPa and a Poisson's ratio of 0.377 were used for the Zr-based BMG 8 . Figure 5(a) shows the stress distribution field for the LSTT sample with D = 50 μm. One can see that most of the external stress is borne by the BMG matrix and that stress concentration appears in the regions near the LSTT pores in both the plan and cross-sectional views.
The influence of the LSTT pores is localized to the regions near the pores, and the stress field is analogous to that of the as-cast sample. Thus, fracture features such as the fracture strength, fracture angle and fracture morphology do not change much compared with the as-cast sample. When D increases to 100 μm, the stress field distribution becomes markedly different, as shown in Fig. 5(b). The stress-concentrated regions near the pores become larger and begin to link up into a grid-like stress concentration zone in the plan view. The average stress value near the pores is comparable to that of the BMG matrix, and the grid-like stress concentration zone starts to carry more of the external stress, which indicates that the influence of the LSTT pores already competes with that of the BMG matrix. In the cross-sectional view, the central parts between opposing pores carry a larger stress than the BMG matrix, whereas the central parts between adjacent pores carry a smaller stress, producing a compression-like stress field. This complex stress field disturbs the usual deformation process along the main shear plane and twists the fracture angle away from the normal value (~50°). However, it does not change the heterogeneous deformation mode via the main SBs in Fig. 5(b), and the main fracture morphology remains the vein-like pattern governed by the tensile shear mode.
When D further increases to 150 μm, the regions both near the pores and between the opposing pores reach yielding before the BMG matrix, and the grid-like stress-concentrated regions grow larger. These stress-concentrated zones superimpose and form a yielding zone, in which the BMG enters the liquid-like state and exhibits homogeneous flow behavior 49 . From Fig. 5(c), one can see that the zones influenced by the LSTT pores have exceeded the BMG matrix, and a transition of the deformation and fracture mode occurs, from tensile shear fracture to a homogeneous plastic deformation fracture mode. This homogeneous plastic deformation at the mesoscopic scale gives rise to the formation of the microscopic cone-shaped structures on the fracture surface of LSTT sample C in Fig. 3.
Stress field evolution of the LSTT sample with D = 150 μm at different tensile strains.
We also studied the stress field evolution of the sample with D = 150 μm under different tensile strains (0, 2%, 4% and 6%) to understand the evolution of the stress field during tensile deformation, as shown in Fig. 6. One can clearly see that stress concentration first takes place in the regions near the pores, while the BMG matrix barely sustains the loading. As the strain increases to 2%, the stress-concentrated regions connect with each other and form a complex grid-like stress field. The influence of this grid-like stress field plays a dominant role in the subsequent tensile deformation. When the tensile strain reaches 6%, the zones influenced by the grid-like stress field expand to the whole region between the opposing pores in both the plan and cross-sectional views. In particular, in the cross-sectional view, the central regions between the opposing pores have entered the yielding state before the BMG matrix. These regions break up the main shear plane of the brittle fracture mode and lead to the macroscopic tensile plastic deformation of the LSTT BMG samples.
Discussions
The above experimental results and finite element analysis demonstrate that identical Zr-based BMG specimens with different LSTT pore arrays display quite different tensile fracture behaviors. Under uniaxial tension, the applied tensile stress is uniform and a single main SB forms easily along the main shear plane, leading to rapid SB propagation and subsequent brittle failure. For the LSTT samples in this work, the complex stress field (compressive shear stress and tensile shear stress) induced by the LSTT pore array plays a role similar to that of second, soft crystalline phases 21,22 in activating the production of stress-concentrated zones. This complex stress field leads to a complex plastic deformation mechanism in the LSTT samples, i.e., mesoscopic homogeneous plastic deformation near the LSTT pores together with heterogeneous, shear-banding-governed deformation. Thus, the whole stress field is disrupted by the complex local stress field induced by the pore array; this effect is equivalent to the transition of a single vectored stress to a multiaxially vectored field, i.e., stress field scalarization. From this viewpoint, stress field scalarization transforms the uniaxial tensile stress field into a multi-axial complex stress field, which prevents the fast propagation of the main SB and promotes the production of the mesoscopic yielding zone, enhancing the tensile ductility of BMGs.
Previous works [32][33][34][50][51][52][53] demonstrated that BMGs are heterogeneous at the nano-scale, consisting of flow units and an elastic matrix. Upon external loading, the flow units behave like inelastic inclusions and give birth to local plastic events, also known as shear transformation zones, which closely correlate with various mechanical behaviors. In the flow-unit picture, a SB can be considered the assembled consequence of many flow units along the main shear plane. Thus, to clearly understand the physical deformation mechanism of the LSTT BMG samples, a phenomenological picture of stress field scalarization based on the flow-unit image and the finite element analysis is displayed in Fig. 7. Under uniaxial tensile stress, the stress field displays a near-parallel distribution along the external loading direction for the as-cast sample [left part of Fig. 7(a)]. For this kind of stress field distribution, the total effect of the internal stress field is equivalent to a tensor stress, and it is this tensor stress that directly leads to the formation of a single main SB along the main shear direction, which is prone to induce catastrophic fracture. In contrast, for the LSTT samples, the stress field is twisted in the regions near the LSTT pores and the tensor stress with parallel distribution is scalarized [left part of Fig. 7(b)]. The scalarized stress field directly arouses stress concentration in the regions near the LSTT pores, which disrupts the flow-unit arrangement along the main shear plane. Thus, not only are the flow units near the main shear plane activated, but the hidden flow units away from the main shear plane can also be excited. The activated flow units aggregate into a mesoscopic yielding zone near the LSTT pores when D reaches a certain value (150 μm in this work). Previous research suggests that the stabilization of SB propagation requires the typical length of the artificial heterogeneous microstructure to satisfy D < R P 25,27,28,43 , where R P is the intrinsic crack-tip plastic zone radius, $R_P \approx (1/2\pi)(K_{IC}/\sigma_y)^2$ ($K_{IC}$ is the fracture toughness and $\sigma_y$ the yield strength). For Zr-based BMGs, the value of R P is about 150 μm. In our case, D is the size of the LSTT pores. As shown in Figs 5 and 6, when D < R P (pore size about 50 μm), the tensile plasticity is increased only to ~0.1% and the nominal tensile stress still dominates the fracture process. When D is about 100 μm, comparable to R P , the deformation mode becomes different and the shear stress starts to play the dominant role. When D reaches about 150 μm, the homogeneous plastic deformation near the LSTT pores becomes obvious, which induces the significant improvement of the tensile plasticity. This suggests that D/R P is in fact the prominent factor controlling the stress distribution, and thereby the fracture strength and tensile plastic strain, in BMGs.
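As a quick consistency check of the quoted R P ≈ 150 μm, the sketch below evaluates the plastic-zone formula with representative Zr-based BMG properties; the specific K IC and σ y values are assumptions chosen for illustration, not data from this work:

```python
import numpy as np

# Plastic zone radius: R_P ~ (1 / (2*pi)) * (K_IC / sigma_y)^2.
# Illustrative values (assumptions, not from the paper): a Zr-based BMG
# with K_IC ~ 55 MPa*sqrt(m) and sigma_y ~ 1.8 GPa reproduces R_P ~ 150 um.

K_IC = 55e6        # fracture toughness [Pa*sqrt(m)]
sigma_y = 1.8e9    # yield strength [Pa]

R_P = (1.0 / (2.0 * np.pi)) * (K_IC / sigma_y) ** 2
print(f"R_P = {R_P * 1e6:.0f} um")   # ~149 um
```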
We note that the depth of the LSTT pores is an important controlling parameter. The core idea of improving the ductility of BMGs by the LSTT technique is to tune the stress field distribution so as to activate more flow units to carry the external loading. Thus, this work represents one of a family of stress-field-engineering methods for improving mechanical properties. An LSTT pore array with the same pore size and arrangement may produce a different stress field distribution when the depth of the pores varies, and then lead to distinct mechanical behavior. Furthermore, the relative thickness of the LSTT pores compared with the thickness of the BMG sample may be a key factor when the pore size is comparable to the sample thickness. Therefore, various LSTT patterns could be applied to obtain the corresponding stress field distribution for a specific BMG sample with desired mechanical properties. It is worth mentioning that our strategy is significantly different from previous methods of enhancing the tensile plasticity by promoting multiple SBs 25,27,28,43 . In our case, the carrier of the tensile plasticity is the mesoscopic plastic zone near the LSTT pores, consisting of flow units, rather than multiple SBs. Before the main SB propagates, the regions near the LSTT pores have transformed from the solid-like state to the liquid-like state under the compression-shear complex stress field. Although there is only 0.51% tensile plastic strain, larger macroscopic tensile plasticity might be obtained by further optimizing the profile and spatial distribution of the LSTT pore array, which is our future work. In fact, the methods of introducing a second crystalline phase, artificial surface defects or notches into BMGs 26,28,29,48 to enhance the tensile plasticity can also be regarded as other forms of stress field scalarization.
Conclusions
A stress field scalarization strategy is proposed to improve the macroscopic tensile plasticity of BMGs, and the method is proved to be experimentally feasible by means of designed laser surface texturing treatments of the surface. The introduced surface pore array can activate the formation of microscopic plastic zones in the regions near the LSTT pores, which then connect into a mesoscopic zone when the pore size meets certain conditions. As a result, the mesoscopic zone carries the external stress and gives rise to the macroscopic tensile plasticity. Under complex stress field environments, BMGs display totally different mechanical behaviors compared with the uniaxial stress field, which provides in-depth understanding of the physical mechanisms under different external conditions. Owing to the superior forming ability of BMGs within the supercooled liquid region, the present strategy can also be readily realized by introducing various artificial defects on the surface using the superplasticity of the BMG in its supercooled liquid state.
Methods
Metallic glasses and specimen preparation. Zr-based BMG samples with a nominal chemical composition of Zr 64.13 Cu 15.75 Ni 10.12 Al 10 were prepared by induction melting a mixture of pure metal elements and then casting into a Cu mold to form plate-shaped specimens with dimensions of 1 × 10 × 50 mm³. The glassy nature of the BMG samples was confirmed by X-ray diffraction (XRD) using a BRUKER D8 ADVANCE diffractometer with a Cu Kα radiation source and by differential scanning calorimetry (DSC) performed under a purified argon atmosphere in a Perkin-Elmer DSC-7. The as-cast BMG plates were polished using 200, 600 and 1200 grit SiC paper successively to remove the thin crystalline surface layer caused by interaction with the mold. The final thickness of the polished plates was reduced to about 0.7 mm, with the upper and lower surfaces parallel.
Dog-bone-like specimens for tensile tests, with cross-section dimensions of 0.7 × 7.0 mm² and a total length of 42 mm, were cut from the BMG plates using an electric spark wire-cutting machine; the gauge dimensions were 0.7 × 3 × 22 mm³. All tensile specimens were polished with 1.5 μm diamond sandpaper to remove the corrosion pits induced by the wire cutting.
Laser surface texturing treatment. Before the tensile tests, the polished dog-bone-like specimens were pre-treated by the laser surface texturing treatment technology, LSTT, in the central gauge part; the LSTT set-up sketch is shown in Fig. 1(a). A picosecond laser (TruMicro 5025) was used. The laser produces a beam with a Gaussian energy distribution and operates at 515 nm with a maximum pulse energy of 150 μJ, a pulse duration of 0.01 ns and a frequency of 800 kHz. A scanner head combined with the laser allows high precision to be reached during texturing. The BMG specimen was fixed on a movable platform (including a cooling water system with a temperature range between 5 °C and 23 °C). The surface texture can take various forms, such as streaks, holes and other geometries; in this work, texturing was done in the form of circular pores. After LSTT, the surface micro-pores were observed by scanning electron microscopy (SEM) in a Philips XL30 instrument and with a white-light interference profiler (BRUKER Contour GT). Various laser-induced pore array patterns with different diameters and depths were designed on the tensile specimens. In practical industrial applications, the improvement of mechanical and physical properties by LSTT is largely influenced by the profile (shape, size, density and depth) of the pores induced by LSTT 54 . To isolate the LSTT effect on the mechanical properties, we controlled the ratio of the depth to the size of the pores between 2:1 and 3:1 by optimizing the laser parameters, and kept the spatial arrangement of the pores identical across the different sizes.
Tensile mechanical tests. Uniaxial tensile tests were conducted on the as-cast and LSTT BMG specimens at a constant quasi-static strain rate of about 1 × 10⁻⁴ s⁻¹ on an INSTRON ElectroPuls E10000 all-electric test instrument at room temperature. Strain was measured precisely and directly over the sample gauge length using a non-contacting video extensometer (INSTRON). At least three specimens were measured to ensure that the results were reproducible. The fracture features, such as the newly generated tensile fracture surfaces, the fracture side-surface morphology and the fracture angle, were observed by SEM.
Finite element simulation.
A series of finite element simulations were carried out to probe the mechanical mechanism giving rise to the dramatic enhancement of tensile ductility. The dimensions of the model system and pores were designed to be identical to the experimental values so that the simulation and experimental results could be compared conveniently. The number of pores in the tensile direction was reduced to save computing time, without changing the final simulation results. Specifically, we varied the size of the pores to investigate the effect of the heterogeneity induced by the LSTT pores on the tensile mechanical behavior. Tensile deformation was introduced by applying an X displacement on the right boundary while the left boundary was constrained in the X direction, as shown in Fig. 5. To gain insight into the evolution of the stress field, the displacement was imposed in increasing steps of 50, 100 and 150 μm, corresponding to nominal strains of 2%, 4% and 6%, respectively.
In the model, the material was treated as an isotropic elastic solid; the Young's modulus and Poisson's ratio of the BMG were taken to be 78.4 GPa and 0.377, respectively. Previous studies 55,56 have shown that the von Mises criterion is adequate for describing the yield response of amorphous alloys. Therefore, for ease of comparison among the different types of samples, the von Mises criterion was used in the present simulations. The basic element of the finite element simulations was a four-node linear element. The finite element program Abaqus (version 6.10, Dassault Systèmes Simulia Corp., Providence, RI, USA) was employed for the calculations in this work.
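For reference, the yield measure evaluated by such simulations is the equivalent von Mises stress, compared against the yield strength. A minimal sketch (the stress state shown is illustrative, not a value from the simulations):

```python
import numpy as np

# Minimal sketch of the von Mises criterion: yielding when sigma_vm >= sigma_y.

def von_mises(s: np.ndarray) -> float:
    """Equivalent von Mises stress of a symmetric 3x3 Cauchy stress tensor."""
    dev = s - np.trace(s) / 3.0 * np.eye(3)       # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

sigma = np.array([[1.8e9, 0.0, 0.0],
                  [0.0,   0.0, 0.0],
                  [0.0,   0.0, 0.0]])             # uniaxial tension, 1.8 GPa
print(von_mises(sigma) / 1e9, "GPa")              # equals 1.8 for uniaxial stress
```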
"Engineering",
"Materials Science",
"Physics"
] |
RESEARCH ON THE INFLUENCE OF MILD STEEL DAMPERS ON SEISMIC PERFORMANCE OF SELF-RESETTING PIER
In order to improve the energy dissipation capacity of assembled self-resetting piers, a mild steel damper is added to a prefabricated self-resetting pier to form a prefabricated self-resetting pier with an external mild steel damper. Two sets of pier models were established by numerical simulation. After verifying the correctness of the traditional prefabricated self-resetting pier model, the two sets of pier models were subjected to low-cycle reciprocating loading to study the influence of the yield strength of the mild steel damper and of the pier axial compression ratio on the seismic performance of the pier structure. The results show that, compared with traditional prefabricated self-resetting piers, the hysteresis curve of self-resetting piers with mild steel dampers is fuller, and the energy dissipation and bearing capacity are greatly improved. With the increase of the yield strength of the mild steel damper, the energy dissipation capacity decreases when the loading displacement is less than 25 mm, but the overall energy dissipation capacity increases. As the axial compression ratio of the pier column increases, the bearing capacity and energy dissipation capacity of the structure increase significantly, but the effect is no longer obvious when the axial compression ratio exceeds 0.052.
INTRODUCTION
Compared with cast-in-place concrete piers, fabricated self-resetting piers have the advantages of good seismic performance, high post-earthquake repairability and low damage. In particular, the combination of damping technology and self-resetting pier design forms a layered protection; a bridge system that can eliminate the plastic hinge and quickly restore its function after an earthquake has important research value. Mander et al. [1] first introduced the idea of the self-resetting structure in the design of bridge piers, using unbonded prestressed steel bars to improve the seismic performance of the piers. Their pseudo-static test results showed that prefabricated self-resetting bridge piers have small residual displacements but weak energy dissipation. To improve the energy dissipation capacity of bridge piers, Solberg et al. [2] proposed installing energy-dissipating steel bars; pseudo-static test results showed that these bars increased the energy dissipation capacity of the piers. Marriott et al. [3][4] proposed installing an external energy dissipation device to improve the post-earthquake recovery of self-resetting piers; pseudo-static and pseudo-dynamic test results show that the new prefabricated self-resetting pier has higher bearing capacity and energy dissipation, and is easy to repair after an earthquake. Trono et al. [5] confirmed through shaking table tests that self-resetting piers have obvious advantages in terms of damage and residual displacement. Guo Jia, Xin Kegui et al. [6][7] expounded the working principle of prefabricated self-resetting bridge piers through tests. Haitham [8] performed a numerical simulation of the performance of fabricated bridge piers under reciprocating load and verified the influence of the concrete constitutive model on the simulation results. Bu et al. [9] conducted pseudo-static tests on 5 circular cross-section piers; the results showed that, compared with bonded prestressed tendons, unbonded prestressed tendons suffer less prestress loss and provide more sustainable and effective prestress. Wang Junwen et al. [10] conducted pseudo-static tests on a hollow concrete pier and 3 precast, assembled hollow piers with prestressed sections; the results showed that the unbonded prestressed tendons reduced the residual displacement of the structure, but the energy dissipation capacity declined. Ge Jiping [11] conducted a pseudo-static test study on prefabricated self-resetting bridge piers, showing that the bottom of the pier and the cap separated and rocked; although the damage at the bottom of the pier column was reduced, the overall energy dissipation decreased. Guo et al. [12] set up replaceable external energy-dissipating devices; their research shows that different device parameters have a certain influence on the seismic performance of the pier structure. In summary, traditional prefabricated self-resetting piers have good self-resetting capability and small residual displacement, but poor energy dissipation, and they are not easy to repair after an earthquake. Fabricated self-resetting piers with external mild steel dampers have the advantages of good energy dissipation and easy repair after earthquakes, so it is necessary to conduct more in-depth research on them. This research proposes a fabricated self-resetting pier structure with mild steel dampers.
A refined numerical model is established and the validity of the numerical simulation is verified, in order to study the seismic performance of fabricated self-resetting piers with mild steel dampers and the influence of different yield strength parameters and different axial compression ratio parameters on the seismic performance of the structure.
Fabricated Self-resetting Pier Structure with Mild Steel Damper
An external mild steel damper is added at the foundation of the traditional prefabricated self-resetting pier, and the pier and cap are separated. The mild steel dampers are symmetrically arranged at the centre of the two sides of the pier, inclined at a 45-degree angle in the transverse direction. The dampers can effectively transfer stress and avoid slippage of the structure. The structural diagram of the assembled self-resetting pier with external mild steel damper is shown in Figure 1.
Basic Structure and Working Principle of Mild Steel Damper
The structure diagram of the mild steel damper is shown in Figure 2 and the main component size diagram is shown in Figure 3. The mild steel damper is mainly composed of 1. shaft sleeve hinge support, 2. shaft hinge support, 3. high-strength bolts, 4. mild steel rod, 5. shaft sleeve, and 6. shaft.
Yield Mechanism of Mild Steel Damper Prior to Energy Dissipation Steel Bar
Ignoring the second-order effect of the bridge pier, the schematic diagram of the rotation between the bottom of the bridge pier and the cap under earthquake action is shown in Figure 4; the deformation of the energy-dissipating steel bar is given by Equation (1). Provided the damper yields before the energy-dissipating steel bars, which this research takes as a design premise, the mild steel dampers act as the first line of defence for energy dissipation and the energy-dissipating steel bars as the second line. Since the mild steel dampers act earlier than the energy-dissipating steel bars, they protect the bars from premature yielding, which is conducive to the structure retaining its seismic performance under aftershocks. This improves the safety of the structure, and the external mild steel dampers are easy to replace after an earthquake.
Shear Capacity Analysis
The shear force in the horizontal direction of the pier is mainly borne by the pier column itself, the horizontal component of the prestressed tendons, the friction between the bottom of the pier and the cap, and the horizontal components of the mild steel dampers. The shear resistance mechanism is shown in Figure 5.
Fig. 5 -Shear Mechanism
When an angle opens between the pier bottom and the cap under the horizontal force at the pier top, the compressed concrete of the pier column forms a compression strut and the prestressed tendons form a tension tie. The shear capacity of the pier column is mainly divided into five parts, as in Equation (5).
$$V = V_y + V_l + V_f + V_1 + V_2, \tag{5}$$
where $V$ is the shear bearing capacity of the pier column; $V_y$ is the horizontal component of the inclined strut; $V_l$ is the horizontal component of the diagonal tie; $V_f$ is the horizontal friction between the pier bottom and the cap; and $V_1$ and $V_2$ are the horizontal components of the forces in the left and right mild steel dampers, respectively. Expanding each term, the above equation is written as Equation (6).
In the above equation, 0.8 is the concrete strength reduction factor; 0.2 is a constant coefficient; $f_c$ is the compressive strength of the concrete; $b_1$ and $b_2$ are the widths of the bridge pier in the transverse and longitudinal directions, respectively; $b_3$ is the equivalent width of the diagonal strut; $F_p$ is the tensile force in the prestressed tendons; $\mu$ is the friction coefficient between the bottom of the pier and the cap, taken as 0.5; $F_1$ and $F_2$ are the axial tension and compression forces of the left and right mild steel dampers, respectively; $\theta$ is the angle between the line connecting the top of the column to the edge of the compressed concrete at the bottom of the column and the centre line of the pier column; $\alpha$ and $\beta$ are the angles between the left and right mild steel damper axes and the horizontal direction; and $h$ is the height of the pier column. In general, $\theta$ is small, and the above equation simplifies to Equation (8).
Generally $b_1/h \le 1$; therefore, $V_y \le V_f$. Compared with the friction between the bottom of the pier and the cap, the shear capacity of the pier is mainly determined by the pier itself, as in Equation (9).
For the prefabricated self-resetting pier model with mild steel dampers designed in this research, $V_k = 472$ kN. This shear bearing capacity is much greater than the maximum horizontal force of 292 kN obtained in the simulation, so the designed pier shear capacity meets the requirements.
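The bookkeeping behind this check follows Equation (5). The sketch below illustrates the capacity-versus-demand comparison with placeholder component values; only the 45-degree damper inclination, the 0.5 friction coefficient, the 80 kN per-tendon prestress and the 292 kN demand come from the text, while the remaining numbers, including the tendon count, are hypothetical:

```python
import numpy as np

# Shear-capacity bookkeeping per Equation (5): V = V_y + V_l + V_f + V_1 + V_2.
# All component values below are hypothetical placeholders; only the
# structure of the capacity-vs-demand check follows the text.

F_p = 80e3 * 4            # total prestress, assuming 4 tendons at 80 kN each
mu = 0.5                  # pier/cap friction coefficient (from the text)
V_f = mu * F_p            # friction contribution: 160 kN

V_y, V_l = 100e3, 50e3    # strut/tie horizontal components (placeholders)
F1, F2 = 60e3, 60e3       # damper axial forces (placeholders)
alpha = np.deg2rad(45)    # damper inclination (45 degrees, from the text)
V_1, V_2 = F1 * np.cos(alpha), F2 * np.cos(alpha)

V = V_y + V_l + V_f + V_1 + V_2
V_demand = 292e3          # max simulated horizontal force (from the text)
print(f"V = {V/1e3:.0f} kN, demand = {V_demand/1e3:.0f} kN, "
      f"ok = {V >= V_demand}")
```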
Model Parameters and Unit Selection
In this paper, two sets of numerical models of bridge piers are established. The first group comprises M1, M2, M3 and M4 and mainly studies the influence on the seismic performance of the structure of dampers made of three mild steels with different yield strengths: BLY100, BLY160 and BLY225 [13][14][15]. The second group comprises M3, M5 and M6; taking the M3 pier as an example, it mainly studies the influence of the axial compression ratio of the pier column on the seismic performance of the structure. The pier columns, caps, reinforcement, cross-sectional areas of reinforcement and prestressing tendons of the two groups of piers are the same, and the reinforcement ratio, energy-dissipating steel and prestressing tendons of the pier models are all based on the data in reference [7]. The finite element software ABAQUS was used. The mild steel damper adopts hexahedral solid elements and an ideal elastoplastic model. The main parameters of the models and the corresponding model numbers are shown in Table 1. M1 is a traditional assembled self-resetting pier. The concrete grade of each specimen is C60, modelled with the plastic damage model, an elastic modulus of 3.8 × 10⁴ MPa, a Poisson's ratio of 0.3 and hexahedral solid elements. The steel bars are HRB335, modelled with truss elements and an ideal elastoplastic constitutive model. The prestressed tendons are made of grade 1860 steel strands; each prestressed tendon is given an initial prestress of 80 kN by the cooling method, with the expansion coefficient set to 1.2 × 10⁻⁵/°C. The main parameters of each component of the structure are shown in Tables 2 and 3, the mechanical properties of the reinforcement are shown in Table 4, and the reinforcement drawing of the assembled self-resetting bridge pier with mild steel damper is shown in Figure 6.
Boundary Conditions and Loading System
The bottom of the cap is fixed. Surface-to-surface contact is adopted between the bottom of the upper pier and the cap, with contact properties defined in both the normal and tangential directions: the tangential direction adopts the Coulomb friction model with a friction coefficient of 0.5, and the normal direction is "hard" contact. The bond-slip effect between the concrete and the steel bars is ignored, and the steel bars are embedded in the concrete members. The prestressed tendons are merged with the steel plate into a whole. Rigid connections, modeled as binding (tie) constraints, are adopted between the mild steel damper supports and the bridge pier and cap, and between the mild steel rods and the shafts and sleeves. The shafts, sleeves and supports are hinged with high-strength bolts, and the contact surfaces adopt surface-to-surface contact, neglecting Coulomb friction between the contact surfaces. The numerical experiments control the horizontal displacement in a low-cycle reciprocating loading scheme to simulate the cyclic motion of the bridge pier under earthquake action. Before the low-cycle reciprocating loading, concentrated loads of 220 kN, 440 kN and 660 kN are applied at the top of the pier to simulate the vertical load from the superstructure, corresponding to the M3, M5 and M6 piers with different axial compression ratios, and the self-weight of the pier column is added. A reference point is coupled to the top surface of the pier, and the displacement loading is applied to this reference point. The model loading diagram is shown in Figure 7. The maximum loading displacement is 60 mm; the loading is graded, with the first level at 5 mm, the second at 10 mm, the third at 15 mm, and so on, each level increasing by 5 mm (0.3%) and repeated 3 times. The loading method is shown in Figure 8.
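The displacement history just described is easy to generate programmatically; the sketch below builds it from the stated protocol (5 mm increments up to 60 mm, three cycles per level). The function name and the symmetric push-pull peaks per cycle are illustrative assumptions.

```python
def loading_protocol(step_mm=5, max_mm=60, cycles_per_level=3):
    """Peak displacements (mm) for graded low-cycle reciprocating loading."""
    history = []
    for level in range(step_mm, max_mm + 1, step_mm):
        for _ in range(cycles_per_level):
            history.extend([+level, -level])  # one push-pull cycle
    return history

peaks = loading_protocol()
print(len(peaks) // 2, "cycles; levels up to", max(peaks), "mm")  # 36 cycles; up to 60 mm
```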
Numerical Model Verification
The numerical model established according to the traditional prefabricated self-resetting pier tested in reference [7] is subjected to low-cycle repeated loading, and the resulting force-displacement curve is compared with the existing test results, as shown in Figure 9. The force-displacement curve obtained from the numerical test is basically consistent with the test result. Because the energy-dissipating steel bars in the numerical calculation adopt an ideal elastoplastic constitutive model that does not consider the strain hardening of the actual reinforcement, the computed ultimate bearing capacity is slightly lower than that obtained from the test.
Comparison of Hysteresis Curves of Bridge Piers with Mild Steel Dampers
The hysteresis curve is of great significance for analyzing the seismic performance of structures or components, as it comprehensively reflects that performance. The comparison of hysteresis curves is shown in Figure 10. From Figure 10(a), the following conclusions can be drawn: (1) Compared with the M1 pier, the force-displacement curves of the M2, M3, and M4 piers are generally shuttle-shaped and fuller, and the energy dissipation and bearing capacity are improved to a certain extent; the mild steel dampers have a good seismic effect. (2) The force-displacement curves of the M2, M3, and M4 piers are similar, indicating that the three have similar mechanical properties. In the initial stage of loading, the structure is in the elastic stage. As the displacement loading progresses, the hysteresis loop area continues to increase and energy is dissipated, while the stiffness of the specimen gradually degrades. At the later stage of loading, the force-displacement curve gradually becomes full and a pinching phenomenon appears, indicating that the prestressed tendons have played a good role. In Figure 10(b), as the pier-column axial compression ratio increases, the hysteresis curve of the structure becomes fuller, the bearing capacity and energy dissipation capacity improve, and the residual displacement increases.
Analysis of Energy Dissipation Capacity of Bridge Piers with Mild Steel Dampers
Energy dissipation capacity is of great significance for measuring the seismic performance of structural members. It is generally represented by the area enclosed by the load-displacement curve envelope. By analyzing the hysteresis curve of the structure, the cumulative energy dissipation can be calculated quantitatively. In this study, the cumulative energy dissipation is the running sum, over the load displacement levels, of the average hysteresis loop area of the 3 cycles at each level. The energy dissipation curves are shown in Figure 11. In Figure 11(a), as the loading displacement gradually increases, the cumulative energy dissipation of the piers gradually increases. Compared with the traditional pier M1, the maximum cumulative energy dissipation of the M3 pier is increased by a factor of 2.5, so its energy dissipation capacity is better than that of the traditional fabricated self-resetting pier.
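The cumulative measure described above is straightforward to compute from recorded hysteresis loops; the sketch below uses the shoelace formula for the loop area and averages the three cycles per level, following the procedure stated in the text. The data layout (one force-displacement polygon per cycle) is an assumption.

```python
def loop_area(disp, force):
    """Enclosed area of one hysteresis loop via the shoelace formula."""
    n = len(disp)
    s = sum(disp[i] * force[(i + 1) % n] - disp[(i + 1) % n] * force[i] for i in range(n))
    return abs(s) / 2.0

def cumulative_energy(levels):
    """levels: list of displacement levels, each a list of (disp, force) cycles."""
    total, curve = 0.0, []
    for cycles in levels:
        total += sum(loop_area(d, f) for d, f in cycles) / len(cycles)  # average of the cycles
        curve.append(total)
    return curve
```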
In Figure 11(b), compared with M3, the cumulative energy dissipation of the M5 pier is increased by 30%, while the energy dissipation capacities of the M5 and M6 piers differ little. This indicates that increasing the axial compression ratio of the pier column within a certain range can improve the energy dissipation capacity of the bridge pier, but beyond an axial compression ratio of 0.052 the effect is not obvious. Figure 12 shows the ratio of the energy dissipated by the mild steel dampers to the total energy dissipation. In Figure 12(a), for the three types of mild steel dampers with different yield strengths, this ratio gradually increases with the loading displacement and is always greater than 0.55, indicating that the external mild steel dampers effectively improve the energy dissipation capacity of the bridge piers. When the horizontal displacement load is less than 25 mm, the dampers of the M2 pier account for the largest share of energy dissipation and its cumulative energy dissipation is the largest, while the M4 pier has the smallest values. When the horizontal displacement load is greater than 25 mm, the dampers of the M4 pier account for the largest share and its cumulative energy dissipation is the largest, while the M2 pier has the smallest. This indicates that, as the damper yield strength increases, the damper share of total energy dissipation and the cumulative energy dissipation of the pier decrease at small loading displacements, whereas at large loading displacements the damper energy dissipation and the cumulative energy dissipation of the pier increase. Therefore, to increase the overall energy dissipation of the pier, the yield strength of the mild steel damper cannot be increased indefinitely, and the energy dissipation capacity of the structure under small displacement loading must also be considered. By comparison, the M3 pier with BLY160 mild steel dampers has the best seismic performance. To give the structure a more sustained and stable energy dissipation capacity, the energy sharing ratio between the mild steel dampers and the energy-dissipating steel bars at the limiting horizontal loading displacement of 25 mm is taken as the optimal sharing ratio; the calculated value is 1.75.
In Figure 12(b), when the loading displacement is less than 13 mm, the energy dissipated by the mild steel dampers gradually decreases as the axial compression ratio of the pier column increases. When the loading displacement is greater than 13 mm, the energy dissipated by the dampers of the M5 and M6 piers gradually increases and exceeds that of the M3 pier, and it remains stable in the later stage of displacement loading. This indicates that at middle and late loading displacements, within a certain range, the larger the pier-column axial compression ratio, the better the energy dissipation effect of the mild steel dampers. The damper energy dissipation ratios of the M5 and M6 piers are basically similar in the middle and later stages of displacement loading, indicating that increasing the pier-column axial compression ratio beyond a certain range has no obvious effect. Considering also the damper energy dissipation at small loading displacements, the recommended value of the pier-column axial compression ratio is 0.052.
Comparison of Stiffness Degradation of Piers with Mild Steel Dampers
Stiffness degradation is the phenomenon whereby, under cyclic loading at the same peak load, the peak-point displacement increases with the number of cycles; the stiffness degradation formula is given as Equation (10).
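Equation (10) itself is missing from the extracted text. Based on the symbol descriptions that follow, a plausible reconstruction is the standard ring-stiffness definition, with P and Δ denoting the peak load and peak displacement of each cycle; treat the exact form as an assumption:

```latex
% Hedged reconstruction of Equation (10): ring stiffness at level i, summing
% peak loads and peak displacements over the n cycles of that level.
K_i = \frac{\sum_{j=1}^{n} P_i^{\,j}}{\sum_{j=1}^{n} \Delta_i^{\,j}}
```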
In Equation (10), Ki is the ring stiffness of the structure; the numerator is the peak load of the i-th cycle; Δ is the deformation at the maximum point of the i-th cycle; n is the number of cycles. The stiffness degradation curves are shown in Figure 13.
Fig. 13 - Stiffness degradation curve
The stiffness degradation curves are roughly symmetrical, and the degradation is relatively uniform. The stiffness degradation of the M2, M3, and M4 piers is similar, indicating similar mechanical properties. Compared with the traditional prefabricated self-resetting pier, the prefabricated self-resetting piers with external mild steel dampers are stiffer, because the mild steel dampers increase the initial stiffness of the pier. The stiffness of the piers with external mild steel dampers decreases as the horizontal displacement increases, at almost the same rate, with the slope of the stiffness curve gradually decreasing. This shows that the stiffness degradation of the assembled self-resetting pier with external mild steel dampers is pronounced, continuous and stable, with no structural damage due to a sudden stiffness drop. Increasing the pier-column axial compression ratio raises the initial stiffness of the pier, but the axial compression ratio has little effect on the overall stiffness degradation of the pier.
Comparison of Residual Displacement of Bridge Piers with Mild Steel Dampers
The variation of the pseudo-static residual displacement of the specimens with the loading displacement level is shown in Figure 14. Under low-cycle reciprocating load, compared with the traditional prefabricated self-resetting pier, the residual displacement of the self-resetting piers with additional mild steel dampers is increased, because the external mild steel dampers increase the unloading stiffness of the pier and thus the residual displacement. According to reference [16], for the piers to be repairable after an earthquake without reconstruction, the residual displacement angle of the pier after the earthquake must not exceed 1%. The maximum residual displacement of the M2, M3, and M4 piers is 16.7 mm, which meets the requirement that the residual displacement angle be less than 1%. The residual displacement curves of the M2, M3, and M4 piers follow roughly the same trend. The residual displacement of the M4 pier is smaller, indicating that, within a certain range, the greater the yield strength of the mild steel dampers, the better the self-resetting ability of the pier, although the difference is small.
This paper defines a self-resetting coefficient to measure the self-resetting ability of the structure; the larger its value, the better the self-resetting ability. Generally, the coefficient lies between 0 and 1.
The self-resetting coefficient is determined by Equation (11).
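Equation (11) is likewise missing from the extracted text. Given the symbol descriptions below and the stated bounds (0 to 1, with larger values meaning better re-centering), a plausible reconstruction, writing the coefficient as η and restoring the lost subscripts as δ_i and δ_j, is:

```latex
% Hedged reconstruction of Equation (11): self-resetting coefficient from the
% residual displacement (delta_i) and the peak displacement (delta_j) of a cycle.
\eta = 1 - \frac{\delta_i}{\delta_j}, \qquad 0 \le \eta \le 1
```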
In Equation (11), δ_i is the residual displacement of the structure after loading; δ_j is the maximum displacement of the structure in the same load cycle. The comparison of self-resetting coefficients under different axial compression ratios is shown in Figure 15. In the initial stage of displacement loading, increasing the pier-column axial compression ratio within a certain range improves the self-resetting capability of the structure; in the later stage of displacement loading, the self-resetting ability is reduced. This shows that increasing the pier-column axial compression ratio makes the mild steel dampers enter the fully yielded energy dissipation state earlier, resulting in greater residual deformation at the later stage of displacement loading. The self-resetting coefficient curves of M5 and M6 are similar, indicating that increasing the pier-column axial compression ratio beyond a certain range has no obvious effect.
CONCLUSION
In this study, the numerical test method was used to analyze and compare the responses of fabricated self-resetting piers with additional mild steel dampers and traditional fabricated self-resetting piers under low-cycle reciprocating loads. The effects of the yield strength of the mild steel dampers and of the axial compression ratio on the seismic performance of the bridge piers were studied. The conclusions are as follows: (1) Fabricated self-resetting piers with external mild steel dampers have good self-resetting ability, high bearing capacity and energy dissipation capacity, and good seismic performance under low-cycle repeated loads. (2) Within a certain range, the greater the yield strength of the mild steel damper, the greater the bearing capacity, the better the overall energy dissipation capacity, and the smaller the residual displacement. When the loading displacement is less than 25 mm, the greater the damper yield strength, the smaller its share of energy dissipation. By comparison, the prefabricated self-resetting bridge pier with BLY160 dampers has the best seismic performance, and the optimal energy dissipation ratio between the mild steel dampers and the energy-dissipating reinforcement is 1.75.
(3) Within a certain range, the greater the axial compression ratio of the pier column, the greater the energy dissipation and bearing capacity of the structure; when the loading displacement is small, the self-resetting ability of the structure is better, and when the loading displacement is large, the mild steel dampers enter the fully yielded energy dissipation state earlier. When the axial compression ratio exceeds a certain range, the effect is not obvious; the recommended value of the pier-column axial compression ratio is 0.052. (4) The external mild steel dampers yield before the energy-dissipating steel bars, protecting them from premature yield failure, which is conducive to the continued seismic performance of the structure under aftershocks; their easy replacement is also conducive to post-earthquake repair. | 5,826.8 | 2021-04-09T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Effect of (Tb+Y)/Al Ratio on Microstructure Evolution and Densification Process of (Tb0.6Y0.4)3Al5O12 Transparent Ceramics
(Tb0.6Y0.4)3Al5O12 transparent ceramics were successfully fabricated by solid-state reactive sintering using Tb4O7, Y2O3, and α-Al2O3 powders as raw materials. The effect of the (Tb+Y)/Al ratio on microstructure evolution and the densification process was investigated in detail. The results showed that the grain growth kinetics were significantly affected by the (Tb+Y)/Al ratio. Al-rich and Tb-rich phases appeared in some of the samples with different ratios. In particular, excess aluminum accelerated the diffusion process, leading to a higher densification rate, while samples with excess terbium displayed a smaller grain size and lower relative density. The optical quality was strongly related to the amount of secondary phase produced at the different (Tb+Y)/Al ratios. Finally, (Tb0.6Y0.4)3Al5O12 transparent ceramics were fabricated by pre-sintering in vacuum followed by hot isostatic pressing (HIP), and the best transmittance of a 4 mm thick sample was approximately 78% at 1064 nm.
Introduction
Magneto-optical materials, including glasses, single crystals, and transparent ceramics, are crucial constituents of the optical isolators in high-power laser systems [1][2][3]. At present, due to the advantages of a large Verdet constant, high thermal conductivity, and low absorption, Tb3Ga5O12 (TGG) is one of the most commonly used commercial magneto-optical materials for Faraday isolators [4][5][6]. Compared to TGG, Tb3Al5O12 (TAG) has a higher Verdet constant, which makes it a highly sought magneto-optical isolator material for future applications [7]. However, it is difficult to obtain TAG single crystals due to incongruent melting [8][9][10]. Although many efforts have been devoted to solving this problem, the size of the crystals is still too limited to meet the requirements of practical applications [11,12]. This problem can be effectively avoided by fabricating TAG transparent ceramics below the phase transition point, thanks to the cubic structure.
TAG transparent ceramics have been studied for many years since they were first reported in 2011 [13]. A large number of studies have investigated preparation methods, ion doping, and magneto-optical property improvements. Importantly, it was found that Y-doping can prevent strain generation and crack initiation during the sintering process. Chen et al. [14] successfully fabricated (Tb1−xRx)3Al5O12 (R = Y, Ce) ceramics by a two-step sintering method and confirmed that Y3+ addition improved the optical quality of the TAG ceramics. Duan et al. [15] found that Y-doping can optimize the microstructure of TAG transparent ceramics and achieve a smaller average grain size. In 2017, Ikesue et al. [16] produced (Tb1−xYx)3Al5O12 transparent ceramics with ultralow optical loss for practical applications, promoting the commercial development of TAG transparent ceramics.
Even though most previous studies have claimed that highly transparent TAG or TAG-based ceramics were fabricated, numerous scattering centers (pores, second phases, impurities, grain boundaries) still existed in the samples, limiting further improvement of the transmittance. Generally, in order to avoid second phases, the RE/Al ratio (RE is a rare earth such as Y, Lu, or Dy) must be carefully controlled at 3/5, according to the binary RE2O3-Al2O3 phase diagram. Much research has been devoted to understanding the effect of composition deviation in several common garnet structures. Hu et al. [17] found that excess lutetium restrained abnormal grain growth by the impurity drag effect, while excess Al2O3 pinned at the grain boundaries limited their fast migration in Pr:LuAG transparent ceramics. Stanek et al. [18] studied the variation of the lattice parameter with stoichiometry deviation and showed that non-stoichiometry in YAG proceeds through cation antisite defects, which provides a theoretical foundation for vacancy diffusion during the densification process. Liu et al. [19] found that a small excess of yttrium was more tolerable for the optical quality of the ceramics than excess alumina; they deduced that the average grain size abruptly decreased and the porosity increased with increasing excess of both Al2O3 and Y2O3. However, related work has not been carried out for the TAG ceramic system, although it would be meaningful for obtaining transparent ceramics with excellent magneto-optical properties.
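For orientation, the stoichiometric garnet cation ratio, and the relative deviations spanned by the compositions studied in this paper, follow from simple arithmetic on the values quoted in the text:

```latex
% Stoichiometric (Tb+Y)/Al ratio and relative deviations of the studied samples.
\left(\frac{\mathrm{Tb}+\mathrm{Y}}{\mathrm{Al}}\right)_{\!\mathrm{stoich}} = \frac{3}{5} = 0.6000,
\qquad
\frac{0.5964-0.6000}{0.6000} = -0.6\%,
\qquad
\frac{0.6110-0.6000}{0.6000} \approx +1.8\%
```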
Generally, Tb4O7 rather than Tb2O3 is used as the raw material to prepare TAG transparent ceramics, owing to the instability of Tb2O3 at room temperature [20,21]. However, the precise contents of Tb3+ and Tb4+ in Tb4O7 powder can hardly be measured; in other words, Tb4O7 should actually be described as Tb4O7±x, making precise control of the Tb/Al ratio impossible. Therefore, investigating different ratios is significant for the fabrication of ceramics with high optical quality. In this paper, (Tb0.6Y0.4)3Al5O12 transparent ceramics were fabricated by reactive sintering in vacuum followed by hot isostatic pressing (HIP) treatment, with Tb4O7, Y2O3, and α-Al2O3 used as raw powders. The effect of the (Tb+Y)/Al ratio on phase formation, the densification process, and microstructure evolution was investigated in detail. The slurries were dried at 80 °C in an oven for 24 h and sieved through a 100-mesh screen. They were then uniaxially pressed into plates in Φ12 mm stainless steel molds at 20 MPa and cold isostatically pressed at 200 MPa for 5 min. To remove organics, the plates were calcined at 800 °C for 6 h in a muffle furnace. The green bodies were pre-sintered at various temperatures (from 950 to 1550 °C) in a vacuum furnace (ZW-50-20, Chenrong Corp. Ltd., Shanghai, China) under a vacuum of 10−3 Pa for 4 h, followed by HIP at 1600 °C under a 196 MPa Ar atmosphere. Finally, all samples were annealed at 1350 °C for 10 h in a muffle furnace (SSX-2-16, Yifeng Corp. Ltd., Shanghai, China) and mirror-polished on both sides to a thickness of 4 mm.
Characterization
The phase compositions of the pre-sintered plates were identified by X-ray diffraction (XRD; D2, Bruker, Hamburg, Germany) with Cu Kα radiation. The microstructures of the ceramics were characterized by scanning electron microscopy (SEM; JSM-6510, JEOL, Akishima, Japan). Elemental mapping was conducted with energy dispersive spectroscopy (EDS; SwiftED3000, HITACHI, Tokyo, Japan). The densities of the transparent ceramics were measured by the Archimedes method. The in-line transmittances of the polished samples were measured with a UV-vis-NIR spectrophotometer (Lambda 950; Perkin-Elmer, Waltham, MA, USA).
Phase Formation Process
The X-ray diffraction patterns in Figure 1 demonstrate the phase formation of the pre-sintered samples with the 0.6000 ratio. The results confirm that Tb and Y react with Al2O3 and form a solid solution of yttrium terbium aluminum garnet (YTbAG). Specifically, Tb4O7 deoxygenates to Tb2O3 above 950 °C, and Tb2O3, Y2O3, and Al2O3 can be detected at this temperature. The yttrium terbium aluminum monoclinic phase (YTbAM) forms at 1050 °C while the diffraction peaks of the raw powders still exist. As the temperature increases, YTbAM and the yttrium terbium aluminum perovskite phase (YTbAP) appear simultaneously at 1150 °C, and the diffraction intensity of YTbAM decreases. YTbAP and some YTbAG are detected at 1250 °C, while YTbAM has disappeared. When the temperature reaches 1350 °C, a pure YTbAG phase is generated and all peaks match well with the TAG standard card (PDF#17-0735); no residual intermediate phases can be detected. In summary, YTbAM, YTbAP, and YTbAG appear in order as the reaction proceeds, which can be described by the formulas YTbAM + Al2O3 → YTbAP (1050-1250 °C) and YTbAP + Al2O3 → YTbAG (1250-1350 °C).
Densification and Microstructure
The relationship between relative density and pre-sintering temperature is shown in Figure 2. The relative densities of all samples improve with increasing temperature. A rapid densification process between 1350 and 1450 °C can be observed, and the rate slows down from 1450 to 1550 °C; the density then levels off as the temperature continues to increase. As a rule, the relative density decreases as the (Tb+Y)/Al ratio increases: at 1350 °C, the relative density of the 0.5964 ratio sample is 78%, while that of the 0.6110 ratio sample is just 71%. The density differences among samples with different ratios steadily decrease as the pre-sintering temperature rises further. Finally, the densities of all the (Tb+Y)/Al ratios nearly coincide at 1550 °C, all above 99%.
Figure 3 shows the thermally etched surfaces of the as-prepared ceramics with different (Tb+Y)/Al ratios (0.5964, 0.6000, 0.6036, 0.6073, and 0.6110) pre-sintered from 1350 to 1500 °C. It is clearly observed that the average grain size of the ceramics increases and the porosity decreases with increasing sintering temperature, regardless of the (Tb+Y)/Al ratio. Open pores are easily observed at 1350 °C and become closed at around 1400 °C. When the sintering temperature reaches 1450 °C, the samples possess uniform grains as well as high density. The microstructure evolution and porosity changes are consistent with the results displayed in Figure 2. It is worth mentioning that second phases at the grain boundaries appear in some of the (Tb+Y)/Al ratio samples (0.5964, 0.6000, 0.6073, and 0.6110) when the sintering temperature reaches 1450 °C; they are marked with red circles. For the samples with (Tb+Y)/Al ratios of 0.5964 and 0.6000, the residual pores and average grain size are obviously larger than in the samples with the higher ratios (0.6073 and 0.6110) below 1450 °C, and the density differences decrease with further increasing temperature.
However, for the sample with a (Tb+Y)/Al ratio of 0.6036, the grain boundaries are clean and free from second phases and abnormal grains at every temperature. Unfortunately, intergranular pores appear at 1500 °C and cannot be removed by HIP treatment. SEM micrographs and the corresponding EDS elemental mapping images of the pre-sintered samples with two (Tb+Y)/Al ratios (0.6000 and 0.6073) are shown in Figure 4. Figure 4a indicates that Al2O3 second phases are detected in the sample with a ratio of 0.6000; due to the lower atomic number, this phase looks darker, and Tb and Y are absent from this area. The mapping result for the 0.6073 ratio is shown in Figure 4b, where excess Tb second phases exist in the bright area. Interestingly, the Si content is also enriched there, which may be caused by the formation of a terbium silicate compound; similar results have been reported for Nd:YAG transparent ceramics [22,23].
Rare earth-controlled densification and average grain size have already been reported for YAG [24,25]. In this investigation, the (Tb+Y)/Al ratio can be understood to affect the sintering behavior by generating structural defects that promote or inhibit densification and grain growth, depending on the diffusion kinetics of the respective atoms. When excess Al reacts with terbium in the system, it generates a high concentration of vacancies, which increases the diffusion rate during the sintering process; the second phase does not produce a pinning effect, and the grain boundaries migrate progressively at higher temperature. When Tb is in excess, it consumes the bulk concentration of rare earth vacancies, so the diffusion kinetics of the rare earth species are limited. This also indicates that excess terbium hinders grain growth, an effect that is especially pronounced at lower temperatures.
HIP treatment is a typical method for fabricating optical ceramics because it can remove residual intergranular pores and further improve densification. The pre-sintered ceramics have a microstructure well suited to HIP, with no intragranular pores, a relatively high density, and a small grain size. The microstructures of samples pre-sintered at 1450 °C and treated by HIP at 1600 °C are therefore displayed in Figure 5. It can be seen that the residual pores are removed by the HIP treatment, while the average grain size grows appreciably, although the second phases remain in some samples. Figure 6 shows the in-line transmittance curves of samples pre-sintered at 1450 °C followed by HIP treatment at 1600 °C. Excess aluminum greatly degrades the transmittance in both the visible and near-infrared regions, while the transmittance curves of the excess-terbium samples are lower than that of the 0.6036 ratio sample and fall off quickly with decreasing wavelength. This behavior is explained by Mie scattering from residual pores [26]. The absorption peak at 484 nm is attributed to the Tb3+: 7F6 → 5D4 transition. The sample with the best optical quality, whose transmittance reaches 78% at 1064 nm, is obtained with a (Tb+Y)/Al ratio of 0.6036. Photographs of the (Tb0.6Y0.4)3Al5O12 transparent ceramics with different (Tb+Y)/Al ratios are shown in the inset: the samples with 0.6036, 0.6073, and 0.6110 ratios are transparent, with the words beneath them clearly visible, while the 0.5964 and 0.6000 ratio samples are opaque. The bright yellow appearance is connected with the valence state of terbium.
Conclusions
(Tb0.6Y0.4)3Al5O12 transparent ceramics were fabricated by vacuum pre-sintering and HIP treatment. Because of the uncertain terbium content of the Tb4O7 raw powder, the influence of different (Tb+Y)/Al ratios on the densification process and optical properties of the (Tb0.6Y0.4)3Al5O12 transparent ceramics was studied in detail. The results indicated that excess aluminum ((Tb+Y)/Al = 0.5964 and 0.6000) accelerated the densification process, while excess terbium ((Tb+Y)/Al = 0.6073 and 0.6110) slowed it and hindered grain growth. More importantly, excess aluminum or terbium caused second phases to appear, which seriously affected the optical properties of the samples. Finally, the transparent ceramic with (Tb+Y)/Al = 0.6036, pre-sintered at 1450 °C in vacuum and followed by HIP treatment at 1600 °C, showed the best optical quality, with a transmittance of up to 78% at 1064 nm for a 4 mm thick sample. | 5,096 | 2019-01-01T00:00:00.000 | [
"Materials Science"
] |
Plasma etching and surface characteristics depending on the crystallinity of the BaTiO3 thin film
Due to its high dielectric constant (κ), the BaTiO3 (BTO) thin film has significant potential as a next-generation dielectric material for metal oxide semiconductor field-effect transistors (MOSFETs). Hence, evaluation of the BTO thin film etching process is required for such nanoscale device applications. Herein, the etching characteristics and surface properties are examined according to the crystallinity of the BTO thin film. The results demonstrate that the etching rate is lower in the high-crystallinity thin film and that the surface residues are much lower than on the low-crystallinity thin film. In particular, the accelerated Cl radicals in the plasma are shown to penetrate more easily into the low-crystallinity thin film than into the high-crystallinity thin film. After the etching process, the surface roughness is significantly lower in the high-crystallinity thin film than in the low-crystallinity thin film. These results are expected to provide useful information for the process design of high-performance electronic devices.
Introduction
Over the past several decades, metal oxide semiconductor field-effect transistors (MOSFETs) have been scaled down to increase the speed, power efficiency, and density of integrated circuits [1][2][3][4]. However, with the concurrent reduction in the input voltage, the thickness of the insulating layer must also be reduced. Because a very thin insulating layer causes leakage current and adversely affects device performance, an insulating layer with a high dielectric constant (κ) has become an important requirement for MOSFETs and for metal-insulator-metal (MIM) capacitors in memory devices [5][6][7]. In particular, the ternary perovskite barium titanate (BaTiO3 or BTO) has attracted attention as a next-generation insulator due to its high κ value of ∼1,700 compared to binary oxides such as SiO2 (κ = 3.9) and ZrO2 (κ = 2.9) [8,9]. In addition, BTO is widely used in various applications such as nanogenerators, photovoltaics, and sensors due to its piezoelectric, pyroelectric, and ferroelectric characteristics [10][11][12].
For application to nanoscale electronic devices, anisotropic nano-patterning of the BTO thin film is required. Although various nano-patterning methods exist and are under development, extreme ultraviolet (EUV) photolithography can reliably realize the finest patterns, with sizes of several nanometers. By contrast, wet etching is difficult to apply to such nano-sized patterns due to its isotropic etching characteristics [13]. Hence, it is essential to apply a plasma etching process that provides anisotropic and precise etching characteristics [14,15]. In addition, a post-deposition annealing process is essential to obtain high-κ, high-crystallinity BTO thin films [16,17]. Therefore, studies are needed to establish process strategies for nano-patterning. For example, the patterning may be performed either before or after the post-deposition annealing of the BTO thin film. If the patterning is performed after annealing, the crystallinity of the annealed thin film may cause differences in the etching rate, surface residues, surface doping by the plasma, and density of defects [18][19][20][21].
In this study, the Cl-based plasma etching characteristics and surface properties of BTO thin films are investigated according to the crystallinity of the thin films. The crystallinity and surface chemical states of the BTO thin films are examined via X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS), respectively, before and after the annealing and etching processes. The etch rate of the low-crystallinity thin film is shown to be higher than that of the high-crystallinity thin film. Further, the proportion of surface residues on the thin film after etching is significantly higher on the low-crystallinity thin film than on the high-crystallinity thin film. In particular, significant numbers of Cl radicals are found to be inserted deep into the low-crystallinity BTO thin film. Additionally, the surface roughness of the high-crystallinity BTO thin film is less affected by the etching process. This study reveals the effects of crystallinity on the etching characteristics of the thin film and is expected to provide useful information for the process design of high-performance electronic devices.
Experimental
The BTO thin films were deposited on P-type (100) silicon substrates via RF-magnetron sputtering using a 3-inch diameter, 1/8-inch thick BaTiO3 (99.9%) target with a bonded Cu backplate. The target-to-substrate distance was 57 cm. Before the sputtering process, the 4-inch silicon wafer was cleaned by sonication in isopropyl alcohol (IPA) for 10 min, rinsed with deionized (DI) water, and dried by N2 blowing. To minimize the effects of potential contamination, the base pressure of the chamber was maintained at 2 × 10−6 Torr using a turbomolecular pump for 1 h.
The BTO thin films were deposited on the Si wafer for 4 h at a substrate temperature of 300°C, an RF power of 140 W, a working pressure of 22 mTorr, and Ar and O2 flow rates of 12 and 4 standard cubic centimeters per minute (sccm), respectively. The thickness of the as-deposited BTO thin film was about 280 nm. After deposition, samples were annealed for 2 h under an oxygen atmosphere in a furnace at 600, 700, or 800°C to compare the effects of crystallinity on the subsequent Cl-based plasma etching.
The plasma etching characteristics according to the crystallinity of the BTO thin films were compared using a planar high-density plasma (HDP; SELEX 200, APTC, South Korea) system, which combines the high plasma density of an inductively coupled plasma (ICP) source with the processing reproducibility of a capacitively coupled plasma (CCP) source [22]. In detail, the design of the upper RF antenna for plasma generation combines the plate structure of the CCP source with the coil structure of the ICP source. An RF generator was connected to the bottom of the chamber to control the bottom (platen) RF power and, thus, the ion energy in the plasma. Frequencies of 13.56 and 2 MHz were used for the upper and lower generators, respectively. Prior to the plasma etching process, the chamber was maintained at a base pressure of 5 × 10−6 Torr for 30 min using a turbomolecular pump. A cooling system was connected to the wafer chuck to keep the substrate temperature constant during the process.
The BTO thin films were etched for 1 min at various Cl2/Ar gas mixing ratios of 0:100, 25:75, 50:50, 75:25, and 100:0 (with a total gas flow rate of 100 sccm in each case), while the other conditions were fixed at an RF power of 150 W, a bottom RF power of 50 W, a process pressure of 15 mTorr, and a substrate temperature of 21°C.
The etch rate was measured using a depth profiler (α-step 500, KLA Tencor, USA) after the etching process. The crystallinities of the as-deposited and annealed films were examined via X-ray diffractometer (XRD; New D8-Advance, Bruker, USA), while X-ray photoelectron spectroscopy (XPS; NEXSA, Thermo-Fisher Scientific, USA) was used to determine the atomic percentages, chemical bonds, chemical shifts, and depth profiles before and after plasma etching. For XPS analysis, the base pressure was maintained below 10−8 mbar using two turbomolecular pumps, an automated titanium sublimation pump, and a backing pump. The sample surface was then etched with an Ar+ ion gun for 10 s prior to XPS measurement in order to remove the surface contamination that occurred during transfer for measurement. The XPS spectra were recorded using monochromatic Al Kα radiation at 1486.6 eV with a 400 μm spot size in fixed delay ratio mode, and all binding energies were referenced to the adventitious C 1s peak at 284.8 eV. For curve fitting, a Gaussian-Lorentzian peak shape was used after subtracting the background signal by Shirley's method. To determine the contamination depth caused by the Cl-based plasma etching, the XPS depth profile was measured after etching with an Ar+ ion gun at intervals of 10 s for a total of 250 s.
In addition, atomic force microscopy (AFM; NX-10, Park system, Korea) was used to measure the surface roughness.
Results and discussion
The XRD patterns of the BTO thin films annealed at 600, 700, and 800°C for 2 h are presented in figure 1(a). All the deposited BTO thin films exhibit peaks in the (111), (200), and (211) orientations (indicated by the red triangles). After annealing, however, an additional peak appears in the (110) orientation of the perovskite structure [23,24]. This demonstrates that the annealing process leads to crystallization of the previously non-crystallized regions, mainly in the (110) orientation. However, no peaks are observed in the (100) and (210) orientations of the perovskite phase, which can be attributed to the specific deposition method [25][26][27]. The peaks marked with red rectangles and blue circles in figure 1(a) are due to TiO2 and Ti2O3, respectively, while the peak marked with a white diamond is due to the Si substrate [28,29]. In brief, the changes in the intensities of the (110), (111), and (211) peaks indicate that the annealing process promotes the crystallization of the perovskite phase in the BTO thin films.
The FE-SEM images in Fig. S1 of the Supplementary Material indicate that cracks and voids form in the BTO film when annealed at 800°C. Therefore, the annealing temperature of 700°C was selected for further investigation. The 700°C-annealed BTO thin film was then plasma etched at the condition giving the highest etching rate (i.e. 75:25 Cl2:Ar) under the conditions given in the Experimental section. The XRD results obtained before and after etching of the 700°C-annealed BTO thin film are presented in figure 1(b). A peak due to the (110) orientation of the perovskite phase is clearly seen in the BTO thin film before the etching process but is absent afterwards, while an intense silicon peak appears. The latter can be attributed to a decrease in the thickness of the BTO thin film through removal of the crystallized surface during the etching process. This indicates that the change in crystallinity during the annealing process occurs mainly at the surface of the BTO thin film.
The Cl2/Ar plasma-etching rates of the as-deposited and 700°C-annealed BTO thin films are compared in figure 2(a). In both cases, the etching rate steadily increases as the proportion of Cl2 gas is increased up to 75%, but then decreases under the pure Cl2 plasma. This decrease can be attributed to the absence of Ar ion bombardment (physical sputtering) [30,31], which is required to accelerate the removal of by-products and, thus, achieve a high etching rate [32]. With Ar ion bombardment, the etching rate of the as-deposited BTO thin film ranges from 14.2 to 87 nm min−1, while that of the annealed thin film ranges from 12.8 to 65.8 nm min−1. The generally lower etching rate of the annealed BTO thin film is attributed to the crystallization that occurs during the annealing process, as shown schematically in figure 2(b) [33]. Some reports indicate that higher crystallinity inhibits oxygen vacancy formation due to the lower structural flexibility and smaller atomic relaxation [34,35]. This means that there are more unbroken atomic bonds in high-crystallinity thin films than in low-crystallinity thin films. In turn, the larger number of broken chemical bonds in the low-crystallinity BTO thin film allows easy combination with the Cl radicals generated during the plasma etching process (figure 2(b)). As a result, the etching rate of the low-crystallinity thin film is higher than that of the high-crystallinity (i.e., annealed) thin film. The XPS narrow scans of the as-deposited and 700°C-annealed BTO thin films obtained before and after etching are presented in figure 3. The Ba 3d spectrum of the as-deposited BTO thin film exhibits a doublet of peaks at about 793.66 and 778.29 eV due to the Ba 3d3/2 and Ba 3d5/2 levels, respectively (figure 3(a)). These are deconvoluted into sub-peaks at 779.36 and 780.20 eV due to BaO, 779.93 eV due to BaCO3, and 780.74 eV due to BaO2. The BaCO3 bonds are due to contamination of the sample surface during thin film deposition and transfer for XPS measurement [36].
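Assuming the quoted maxima both occur at the 75:25 Cl2:Ar condition, as the trend described above implies, the crystallinity-dependent etch selectivity is simply:

```latex
% Ratio of the maximum etch rates of low- and high-crystallinity BTO.
\frac{r_{\mathrm{as\text{-}deposited}}}{r_{\mathrm{annealed}}}
= \frac{87\ \mathrm{nm\ min^{-1}}}{65.8\ \mathrm{nm\ min^{-1}}} \approx 1.32
```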
After etching of both the as-deposited and the 700°C-annealed BTO films, a new peak appears at 781.98 eV due to BaCl2, while the intensity of the BaCO3 peak is reduced. This indicates that the Ba on the surface of the BTO thin film is etched away after bonding with Cl radicals, thereby reducing the proportion of BaCO3 on the surface [15]. Residues that are not removed after the bonding of Ba with Cl radicals remain on the surface in the form of BaCl2.
Similarly, the XPS narrow-scan Ti 2p spectra each exhibit a doublet of peaks at 464 and 456 eV due to the Ti 2p1/2 and Ti 2p3/2 levels, respectively, and are deconvoluted into sub-peaks corresponding to TiO (459.05 eV), TiO2 (457.75, 458.24, and 458.68 eV), and Ti2O3 (456.53 eV) (figure 3(b)) [37]. An additional peak is observed at 455.0 eV due to residual TiClX on the surface of both samples after the etching process [38], indicating that the surface Ti atoms are etched by bonding with Cl radicals in the form of TiClX. Further, the deconvoluted sub-peaks of the O 1s spectra show Ti-O, Ba-O, and BaCO3 bonding (figure 3(c)). After the annealing process, the O 1s peak is shifted by about 0.2 eV towards lower energy due to compensation of the cation defects. The Ba-O bonds decrease significantly in both the as-deposited and 700°C-annealed BTO thin films after the etching process, while the Ti-O and Ba-C-O bonds are unaffected. This may be because Ba-Cl chemical bonding occurs more readily than Ti-Cl bonding, as can be seen from the Gibbs free energies of BaCl2 (-806.67 kJ mol−1) and TiCl4 (-737.2 kJ mol−1) [39]. In particular, the decrease in the Ba-O bonds is more significant in the as-deposited sample than in the 700°C-annealed sample, which may be due to the difference in crystallinity between the two thin films.
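The thermodynamic argument above amounts to a crude per-formula-unit comparison of the two quoted formation energies:

```latex
% Gibbs free energy difference between the two chloride etch products.
\Delta G_{\mathrm{BaCl_2}} - \Delta G_{\mathrm{TiCl_4}}
= (-806.67) - (-737.2)\ \mathrm{kJ\ mol^{-1}} = -69.5\ \mathrm{kJ\ mol^{-1}}
```

which favors Ba-Cl bond formation and is consistent with the larger drop in Ba-O bonding.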
As expected, no Cl 2p peak is observed in either the as-deposited or the 700°C-annealed sample before etching, whereas a Cl 2p peak is observed in both thin films after the Cl2/Ar plasma etching process (figure 3(d)). Moreover, the as-deposited BTO thin film exhibits a significantly larger Cl 2p peak than does the annealed thin film. This is due to the different crystallinities of the thin films: the penetration of Cl radicals into the BTO thin film surface under the bottom RF power occurs more easily in the low-crystallinity (as-deposited) thin film than in the high-crystallinity (annealed) thin film, as shown schematically in figure 2(b).
The XPS elemental contents of the as-deposited and 700°C-annealed BTO thin films before and after plasma etching are summarized in table 1. In each case, the proportion of Ba atoms decreases after etching. However, the as-deposited thin film exhibits a larger decrease in the proportion of Ba atoms, along with a larger increase in the proportion of Cl atoms, than does the annealed thin film. In addition, the as-deposited thin film exhibits a much larger decrease in surface Ti than does the annealed thin film after the etching process. In conclusion, the BTO thin film is etched mainly via BaCl2 and TiClX bonding in the presence of the Cl2/Ar plasma, with BaCl2 bonding proceeding more actively than TiClX bonding on the surface of the low-crystallinity BTO thin film.
The bottom RF power of the plasma etching process causes strong physical bombardment by radicals and ions in the plasma, which can penetrate into the bulk of the thin film and cause chemical contamination and defects. The depth variations in the chemical compositions of the as-deposited and 700°C-annealed BTO thin films after the etching process are revealed by the XPS profiles in figure 4. Both samples exhibit high surface concentrations of O 1s (black profile) and C 1s (blue profile) due to contamination and oxidation of the thin film surface upon exposure to the atmosphere during transfer for XPS analysis after Cl2/Ar plasma etching; the O 1s concentration decreases to about 50 at% after about 50 s of sputtering, as the bulk of the film is sampled. In the as-deposited BTO thin film (figure 4(a)), the atomic percentage of Ti 2p becomes higher than that of Ba 3d between 25 and 125 s, after which the atomic percentage of Ba 3d exceeds that of Ti 2p, while the atomic percentage of Cl 2p gradually decreases from 7 to 3% as the sputtering time increases. In the 700°C-annealed BTO thin film, however, the atomic percentage of Ba 3d is consistently higher than that of Ti 2p, both concentrations are maintained at constant levels after 50 s, and the atomic percentage of Cl 2p remains very low. Thus, the Cl radicals and Ar ions in the Cl2/Ar plasma have more pronounced effects in the bulk of the as-deposited BTO thin film than in the annealed thin film. This can be attributed to the difference in crystallinity: the Cl radicals penetrate into the crystal defects of the as-deposited BTO thin film, where they are trapped as residues and bond with Ti atoms [40,41], whereas the high surface crystallinity of the annealed BTO prevents deep penetration of the Cl radicals and Ar ions.
The surface morphologies of the as-deposited and 700°C-annealed BTO thin films before and after the Cl2/Ar plasma etching process are revealed by the AFM images in figure 5. Before etching, the as-deposited BTO thin film exhibits a very smooth surface, with a roughness of 0.5 nm (figure 5(a)), while the annealed thin film exhibits a slightly increased surface roughness of 7.6 nm due to the surface crystallization (figure 5(b)). After the etching process, however, the surface roughness of the as-deposited BTO thin film increases significantly to 31.7 nm (figure 5(c)), whereas that of the annealed BTO thin film decreases very slightly to 4.7 nm (figure 5(d)). These results suggest that an amorphous fraction is present in the as-deposited BTO thin film and that the etching rate is higher in the amorphous regions than in the crystalline regions, resulting in increased surface roughness during etching. Because the surface of the annealed BTO thin film is mostly crystallized, the etching rate is similar over most of the surface, leading to no significant change in surface roughness. Since the metal/insulator interface topography strongly affects the electric field strength and leakage current of devices such as transistors and capacitors, this indicates that annealing prior to etching can be advantageous for improving device performance [42,43].
Conclusions
Herein, the Cl2/Ar plasma etching characteristics of magnetron-sputtered BTO thin films were investigated as a function of their surface crystallinity. To increase the surface crystallinity, the as-deposited BTO film was annealed at 700°C for 2 h in an oxygen atmosphere, and XRD analysis confirmed that a (110) orientation was newly formed on the annealed surface. The etching rate was highest at a Cl2/Ar gas ratio of 75:25 for both the as-deposited and as-annealed BTO thin films, and was higher in the low-crystallinity (as-deposited) thin film than in the high-crystallinity (as-annealed) thin film. The XPS spectra indicated that the BTO thin film was etched in the form of BaCl2 and TiClx through bonding with Cl radicals, with BaCl2 bonding dominant over TiClx bonding. In addition, BaCl2 bonding occurred more actively in the as-deposited BTO film than in the as-annealed film. The XPS depth analysis confirmed that, under the bottom RF power, Cl radicals penetrated deep into the defects of the low-crystallinity thin film, where they remained in the form of TiClx. Further, AFM analysis revealed that the surface roughness was highest for the as-deposited, etched BTO thin film, while no significant change was observed in the surface roughness of the as-annealed BTO film before and after etching. These results demonstrate the effect of the surface crystallinity imparted by the annealing process on the subsequent plasma etching of BTO thin films, and are expected to guide the process design of thin films for high-performance electronic devices.
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files). | 4,820.2 | 2022-12-07T00:00:00.000 | [ "Materials Science" ] |
Beneficial effects of elafibranor on NASH in E3L.CETP mice and differences between mice and men
Non-alcoholic steatohepatitis (NASH) is the most rapidly growing liver disease, yet it remains without approved pharmacological treatment. Despite great effort in developing novel NASH therapeutics, many have failed in clinical trials, raising questions about the adequacy of preclinical models. Elafibranor is one of the drugs currently in late-stage development, with mixed results in its phase 2 trial and interim phase 3 analysis. In the current study we investigated the response to elafibranor in APOE*3Leiden.CETP mice, a translational animal model that displays histopathological characteristics of NASH in the context of obesity, insulin resistance and hyperlipidemia. To induce NASH, mice were fed a high-fat and cholesterol (HFC) diet for 15 weeks (HFC reference group) or 25 weeks (HFC control group), or the HFC diet supplemented with elafibranor (15 mg/kg/d) from week 15 to 25 (elafibranor group). The effects on plasma parameters and NASH histopathology were assessed, and hepatic transcriptome analysis was used to investigate the underlying pathways affected by elafibranor. Elafibranor treatment significantly reduced steatosis and hepatic inflammation and precluded the progression of fibrosis. The underlying disease pathways of the model were compared with those of NASH patients and showed substantial similarity in the molecular pathways involved, with 87% of the human pathways recapitulated in mice. We compared the response to elafibranor in the mice with the response in human patients and discuss potential pitfalls when translating preclinical results of novel NASH therapeutics to humans. Taking into account that, owing to species differences, the response to some targets, such as PPAR-α, may be overrepresented in animal models, we conclude that elafibranor may be particularly useful for reducing hepatic inflammation and could be a pharmacologically useful agent for human NASH, probably in combination with other agents.
Non-alcoholic fatty liver disease (NAFLD) is closely associated with obesity, insulin resistance and hyperlipidemia and is considered the hepatic manifestation of the metabolic syndrome. NAFLD is defined by the accumulation of fat in the liver in the absence of excessive alcohol consumption. A more severe form of NAFLD is non-alcoholic steatohepatitis (NASH), characterized by steatosis in concert with inflammation, which can lead to liver fibrosis and cirrhosis. Current management is primarily focused on promoting weight loss through lifestyle interventions. Although NASH has emerged as a major and rising form of chronic liver disease worldwide, there is still no approved pharmacotherapy for NASH. Therefore, a tremendous worldwide effort has been put into the development of novel NASH therapeutics 1 . This development requires animal models that adequately mimic the human disease. However, several preclinical models lack the metabolic syndrome-like context seen in most NASH patients or do not represent the underlying disease pathways 2,3 . Since many novel NASH therapeutics have failed in clinical trials 4 , questions have been raised about the adequacy of preclinical models. Elafibranor is one of the drugs that showed beneficial effects in different animal models 5 and was recently tested in patients with NASH and fibrosis in a clinical phase 3 trial. Elafibranor is a dual agonist of the peroxisome proliferator-activated receptors (PPARs) α and δ, nuclear receptors that play a key role in cellular processes regulating lipid metabolism and fatty acid transport and oxidation, but that affect glucose metabolism and inflammation as well [6][7][8][9] . Elafibranor showed beneficial effects in NASH patients during a phase 2 trial 10 , but interim results of the ongoing RESOLVE-IT phase 3 trial of elafibranor monotherapy reported a failure to demonstrate a significant effect on NASH resolution 11 . However, the RESOLVE-IT study will be continued, and combination therapies of elafibranor with other NASH therapeutics are still being launched 12 .
In the current study we investigated the response to elafibranor in APOE*3Leiden.human Cholesteryl Ester Transfer Protein (E3L.CETP) mice. E3L.CETP mice are a well-established model of hyperlipidemia and atherosclerosis 13,14 . When fed a high-fat diet, the mice display characteristics of the metabolic syndrome 15 , and with cholesterol supplementation they develop NASH in the context of obesity, insulin resistance and hyperlipidemia [16][17][18][19] . The model has been proven to respond to several hypolipidemic and anti-diabetic drugs similarly to humans [20][21][22][23][24][25][26][27] . Using this translational model, we evaluated the effects of elafibranor on plasma parameters and NASH histopathology, and hepatic transcriptome analysis was used to investigate the underlying pathways affected by elafibranor. The underlying disease pathways of the model were compared with those of NASH patients, and we discuss the response to elafibranor in the mice as compared with the response in human patients, as well as potential pitfalls when translating preclinical results of novel NASH therapeutics to humans.
Methods
Animals and experimental design. All animal care and experimental procedures were approved by the Ethical Committee on Animal Care and Experimentation (Zeist, The Netherlands) and were in compliance with European Community specifications regarding the use of laboratory animals. The study was carried out in compliance with the ARRIVE guidelines. Homozygous human cholesteryl ester transfer protein (CETP) transgenic mice (strain 5203) 15,28 were obtained from Jackson Laboratories (Bar Harbor, ME, USA) and cross-bred with E3L mice 29 in our local animal facility at TNO to obtain heterozygous E3L.CETP mice 14,30,31 . Mice were group-housed in a temperature-controlled room on a 12 h light-dark cycle and had free access to food and heat-sterilized water. For the experiment, 20-22-week-old male APOE*3Leiden.CETP mice were matched on age, body weight, blood glucose and plasma cholesterol and triglycerides into one age-matched healthy reference group of 8 mice that were kept on a healthy chow diet (R/M-H, Ssniff Spezialdieten GmbH, Soest, Germany) and a group of 36 mice that were given a high-fat and cholesterol (HFC) diet containing 45 kcal% fat derived from lard (Cat. no. 12451), supplemented with 1% (w/w) cholesterol (Research Diets, New Brunswick, NJ, USA), for 15 weeks to induce NASH. After 15 weeks, the mice on the HFC diet were matched on age, body weight, blood glucose and plasma cholesterol and triglycerides into one group that was left untreated (HFC control group, n = 15) and one group that was treated with the PPAR-α/δ agonist elafibranor (Bio-Connect, Huissen, The Netherlands) provided as diet admix (15 mg/kg/d) from week 15 to 25 (n = 15). In addition, a small (n = 6) HFC reference group was added that was sacrificed at t = 15 weeks to indicate the severity of NASH and fibrosis at the start of the treatment. Comparison of the elafibranor-treated group with this small reference group indicated whether elafibranor treatment could improve certain NASH/fibrosis characteristics beyond the levels at the start of treatment or merely blocked further progression. Animals were sacrificed unfasted by gradual-fill CO2 asphyxiation in week 15 (HFC reference group) or week 25 (other groups). Body weight and food intake per cage were measured regularly during the study (at t = 0, 15, 20 and 25 weeks). Blood samples were taken from the tail vein after 4 h of fasting (with blood withdrawn around 08.00 h) in EDTA-coated tubes (Sarstedt, Nümbrecht, Germany). Terminal blood was collected through cardiac puncture to prepare EDTA plasma, and livers and perigonadal, visceral and subcutaneous white adipose tissue (WAT) were collected, weighed and either fixed in formalin and paraffin-embedded (lobus sinister medialis hepatis and lobus dexter medialis hepatis) for histological analysis or (remaining liver lobes) fresh-frozen in N2 and subsequently stored at -80 °C for biochemical and gene expression analysis.
Plasma and liver biochemical analysis. Blood glucose was measured at the time of blood sampling using a hand-held glucometer (Freestyle Disectronic, Vianen, The Netherlands). Plasma insulin was analysed by ELISA (Mercodia AB, Uppsala, Sweden). Plasma cholesterol and triglycerides were determined using enzymatic assays (CHOD-PAP and GPO-PAP, respectively; Roche Diagnostics, Almere, The Netherlands). HDL-cholesterol was also quantified for each mouse individually in plasma after precipitation of apoB-containing lipoproteins using PEG/glycine, as previously described 24 . The distribution of cholesterol over the various lipoproteins was determined in plasma pooled per group after separation of lipoproteins by fast-performance liquid chromatography (FPLC) using a Superose 6 column 26 . Plasma alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were measured using a spectrophotometric activity assay (Reflotron-Plus, Roche). Hepatic collagen content was measured via a hydroxyproline-based colorimetric assay as a marker of fibrosis using the Sensitive total collagen assay (Quickzyme, Leiden, The Netherlands). Intrahepatic concentrations of triglycerides, free cholesterol and cholesteryl esters were determined as described previously 32 . Briefly, approximately 50 mg of tissue was homogenized in phosphate-buffered saline and samples were taken for measurement of protein content. Lipids were extracted and separated by high-performance thin-layer chromatography (HPTLC) on silica gel plates. Lipid spots were stained with colour reagent (5 g MnCl2·4H2O and 32 mL 95-97% H2SO4 added to 960 mL of CH3OH:H2O 1:1 v/v) and quantified using Image Lab software (version 5.2.1, Bio-Rad Laboratories B.V., Veenendaal, The Netherlands).
Histology. Liver samples (lobus sinister medialis hepatis and lobus dexter medialis hepatis) were collected (from non-fasted mice), fixed in formalin and paraffin-embedded, and 3 µm sections were stained with hematoxylin and eosin (H&E) and Sirius Red. NASH was scored blindly by a board-certified pathologist in H&E-stained cross-sections using an adapted grading system for human NASH 33,34 . In short, the levels of macrovesicular and microvesicular steatosis were determined (in two separate cross-sections of the medial lobe mounted on one slide) at 40× to 100× magnification relative to the total liver area analysed and expressed as percentages. Inflammation was scored by counting the number of aggregates of inflammatory cells per field at 100× magnification (view size of 4.2 mm2). The average of five random fields was taken within those two cross-sections, and values were expressed per mm2. Hepatic fibrosis was identified using Sirius Red-stained slides and was likewise evaluated in two cross-sections by computerized image analysis of hepatic collagen content (as a percentage of liver surface area, including blood vessels). In addition, a qualitative analysis of the fibrosis stage was performed by a certified pathologist using the protocol of Tiniakos et al. 35 , in which the presence of pathological collagen staining was scored within two cross-sections of the medial lobe as either absent (F0), observed within the perisinusoidal/perivenular or periportal area (F1), within both perisinusoidal and periportal areas (F2), bridging fibrosis (F3) or cirrhosis (F4).
Transcriptome analysis. Nucleic acid extraction was performed as described previously in detail 36 . Total RNA was extracted from individual lobus dexter lateralis samples using glass beads and RNA-Bee (Campro Scientific, Veenendaal, The Netherlands). RNA integrity was examined using the RNA 6000 Nano Lab-on-a-Chip kit and a Bioanalyzer 2100 (Agilent Technologies, Amstelveen, The Netherlands). The NEBNext Ultra II Directional RNA Library Prep Kit (NEB #E7760S/L, New England Biolabs, Ipswich, MA, USA) was used to process the samples. Briefly, mRNA was isolated from total RNA using oligo-dT magnetic beads. After fragmentation of the mRNA, cDNA synthesis was performed; the cDNA was ligated with the sequencing adapters and amplified by PCR. The quality and yield of the amplicon library were measured (Fragment Analyzer, Agilent Technologies, Amstelveen, The Netherlands) and were as expected (broad peak between 300 and 500 bp), and a concentration of 1.1 nM of amplicon-library DNA was used. Clustering and DNA sequencing on the Illumina NovaSeq6000 were performed according to the manufacturer's protocols by the service provider GenomeScan B.V. (Leiden, The Netherlands), yielding 15-30 million sequencing clusters per sample and 2 × 150 bp paired-end (PE) reads per cluster. The genome reference and annotation file Mus_musculus.GRCm38.gencode.vM19 was used for analysis in FastA and GTF format. The reads were aligned to the reference sequence using the STAR 2.5 algorithm with default settings (https://github.com/alexdobin/STAR). Based on the mapped read locations and the gene annotation, HTSeq-count version 0.6.1p1 was used to count how often a read mapped to each transcript region. These counts served as input for the statistical analysis using the DESeq2 package 37 . Selected differentially expressed genes (DEGs), corrected for multiple testing, were used as input for pathway analysis (P value < 0.000001) through the Ingenuity Pathway Analysis suite (www.ingenuity.com, accessed 2020).
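The alignment-and-counting steps described above follow a standard STAR/HTSeq workflow. The sketch below shows roughly how such a pipeline can be driven from Python; the file names and the thread count are placeholders, and options such as strandedness should be matched to the actual library protocol.

```python
import subprocess

# Placeholder paths; substitute the real index, reads and annotation.
genome_dir = "star_index_GRCm38_vM19"
reads = ["sample_R1.fastq.gz", "sample_R2.fastq.gz"]
gtf = "gencode.vM19.annotation.gtf"

# 1) Align paired-end reads with STAR (default settings, as in the paper).
subprocess.run(
    ["STAR", "--runThreadN", "8",
     "--genomeDir", genome_dir,
     "--readFilesIn", *reads,
     "--readFilesCommand", "zcat",
     "--outSAMtype", "BAM", "SortedByCoordinate"],
    check=True,
)

# 2) Count reads per gene with htseq-count on the sorted BAM.
with open("sample_counts.txt", "w") as out:
    subprocess.run(
        ["htseq-count", "-f", "bam", "-r", "pos",
         "Aligned.sortedByCoord.out.bam", gtf],
        stdout=out, check=True,
    )
```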
To evaluate the representation of human pathophysiological pathways in HFC-fed E3L.CETP mice, murine hepatic gene expression profiles were compared with published data on hepatic gene expression profiles of human NASH patients versus controls. To this end, hepatic gene expression data of NASH patients and controls from four different human studies in the Gene Expression Omnibus (GEO), with accession numbers GSE48452, GSE61260, GSE89632 and GSE33814 38-41 , were used. A unique gene symbol list across all studies was used to identify common expression results over the various studies, and 2logR values and P-values were calculated using NCBI GEO2R (https://www.ncbi.nlm.nih.gov/geo/geo2r/?acc=GSE48452, GSE89632 or GSE33814). For study GSE61260, normalised count data were used to calculate P-values and 2logR. Only the differentially expressed genes that were found in at least two studies AND had the same 2logR direction were used as input for pathway analysis (P values < 0.01) through the Ingenuity Pathway Analysis suite (www.ingenuity.com, accessed 2020). In addition, the representation in E3L.CETP mice of human pathophysiological pathways specific for severe fibrosis was evaluated by comparing the murine gene expression with published data of a study that differentiates NASH patients with severe fibrosis (fibrosis stage F3 or 4) from NASH patients with mild fibrosis (fibrosis stage F0 or 1) (GEO accession number GSE31803) 42 .
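The cross-study filter described above (a gene must be differentially expressed in at least two studies with the same 2logR direction) is straightforward to express with pandas; the sketch below assumes a hypothetical long-format table with columns gene, study, log2_ratio and p_value, with made-up values.

```python
import pandas as pd

# Hypothetical merged GEO results: one row per gene per study.
df = pd.DataFrame({
    "gene":       ["COL1A1", "COL1A1", "COL1A1", "FASN", "FASN", "ACTB"],
    "study":      ["GSE48452", "GSE61260", "GSE89632",
                   "GSE48452", "GSE33814", "GSE48452"],
    "log2_ratio": [1.8, 2.1, 1.5, -0.9, 0.7, 0.1],
    "p_value":    [0.001, 0.004, 0.02, 0.008, 0.003, 0.6],
})

sig = df[df["p_value"] < 0.01].copy()
sig["direction"] = sig["log2_ratio"].apply(lambda x: "up" if x > 0 else "down")

# Keep genes significant in >= 2 studies with the same direction.
counts = sig.groupby(["gene", "direction"])["study"].nunique()
consistent = counts[counts >= 2].reset_index()["gene"].unique()
print(sorted(consistent))  # -> ['COL1A1']
```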
Statistical analysis.
All values shown represent means ± SEM. Statistical differences between groups were determined using the non-parametric Kruskal-Wallis test followed by Mann-Whitney U tests for independent samples in SPSS. A P value < 0.05 was considered statistically significant. Two-tailed P values were used. In the case of transcriptome analysis, we selected differentially expressed genes using p-values adjusted for multiple testing.
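A minimal Python equivalent of this testing scheme, using SciPy instead of SPSS, is sketched below on made-up measurements; it is illustrative only and omits any correction for multiple comparisons across endpoints.

```python
from scipy.stats import kruskal, mannwhitneyu

# Made-up plasma measurements for three groups (illustrative only).
chow = [2.1, 2.4, 1.9, 2.2, 2.0]
hfc = [8.5, 9.1, 7.8, 8.9, 9.4]
elafibranor = [4.2, 3.8, 4.5, 4.0, 3.6]

# Omnibus test across all groups.
h_stat, p_kw = kruskal(chow, hfc, elafibranor)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")

# Pairwise two-tailed Mann-Whitney U follow-up.
u_stat, p_mw = mannwhitneyu(hfc, elafibranor, alternative="two-sided")
print(f"HFC vs elafibranor: U = {u_stat:.1f}, p = {p_mw:.4f}")
```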
Results
Elafibranor reduces features of the metabolic syndrome in E3L.CETP mice. E3L.CETP mice fed the HFC diet developed pronounced obesity (as compared with age-matched control mice fed a low-fat chow diet) within 15 weeks that remained stable until 25 weeks (Fig. 1A). Treatment with elafibranor resulted in a significant lowering of body weight (−24%, p < 0.001 at t = 25) as compared with the HFC control group (Fig. 1A), despite the mice receiving the same HFC diet and food intake being similar or slightly higher in the elafibranor group during the study (data not shown: 3.2 ± 0.1 g/mouse/day vs. 2.9 ± 0.2 g/mouse/day, p = 0.126, respectively; average food intake values of ≥ 5 cages at t = 20 and t = 25 weeks). The HFC diet resulted in a gradual increase in perigonadal, visceral and subcutaneous WAT weights after 15 weeks (HFC reference group) and 25 weeks (HFC control group), while treatment with elafibranor resulted in significantly lower WAT weights as compared with the HFC control group (−55%, −52% and −63%, all p < 0.001, for perigonadal, visceral and subcutaneous WAT, respectively; Fig. 1B). Plasma insulin levels significantly increased on the HFC diet after 15 weeks and then decreased again at t = 20 to remain at a stable hyperinsulinemic level until t = 25 weeks, while glucose levels remained similar to those of the chow-fed animals (Fig. 1C, D). Elafibranor treatment resulted in a significant decrease in both insulin and glucose levels as compared with the HFC control group (insulin: −71% and −78%, both p < 0.001, at t = 20 and t = 25, respectively; glucose: −18%, p < 0.01 and −11%, p = 0.026, at t = 20 and 25, respectively; Fig. 1C, D). In response to the HFC diet, mice developed stable hypercholesterolemia and severe hypertriglyceridemia (cholesterol: 11.6- and 12.2-fold increase, both p < 0.001, vs. chow diet at t = 20 and t = 25, respectively; triglycerides: 3.9-fold, p = 0.001 and 4.4-fold, p = 0.013, increase vs. chow diet at t = 20 and t = 25, respectively) (Fig. 1E, F). The increase in cholesterol was primarily due to an increase in very low-density lipoprotein (VLDL), although low-density lipoprotein (LDL) and high-density lipoprotein (HDL) cholesterol were increased as well (Fig. 1G, H). Elafibranor treatment significantly lowered plasma cholesterol and triglyceride levels as compared with the HFC control group (cholesterol: −45%, p = 0.009 and −52%, p = 0.001, at t = 20 and t = 25, respectively; triglycerides: −84%, p < 0.001 and −71%, p = 0.011, at t = 20 and t = 25, respectively; Fig. 1E, F). The decrease in cholesterol with elafibranor was primarily due to a reduction in VLDL and LDL, while HDL-cholesterol was significantly increased and a larger cholesteryl ester (CE)- and apoE-rich HDL particle 27,43 was formed (Fig. 1G, H).
The HFC diet led to an increased liver weight as compared with the chow diet (data not shown: 3.7 ± 0.3 g vs. 1.7 ± 0.1 g, p < 0.001), and analysis of liver enzymes showed a concomitant increase in plasma ALT (5.9-fold, p = 0.005 and 7.1-fold, p < 0.001, at t = 20 and t = 25, respectively) and AST (5.6-fold, p = 0.035 and 7.2-fold, p < 0.001, at t = 20 and t = 25, respectively) as compared with the chow diet, indicating that the HFC diet caused liver damage (Fig. 1I, J). Elafibranor treatment increased liver weight even further, as is typical for a compound with PPAR-α agonistic activity (to 5.2 ± 0.1 g, p < 0.001 vs. both chow and HFC control), and did not significantly affect plasma ALT levels, while plasma AST levels were significantly decreased (−42%, p = 0.029 and −60%, p = 0.004, at t = 20 and t = 25, respectively) as compared with the HFC control group (Fig. 1I, J).
Elafibranor reduces steatosis and hepatic inflammation and blocks progression of fibrosis in E3L.CETP mice. HFC feeding induced pronounced steatosis after 15 weeks (HFC reference group) and 25 weeks (HFC control group) of diet feeding, while treatment with elafibranor decreased steatosis beyond the levels present at the start of treatment at 15 weeks (Fig. 2A). Quantitative analysis (Fig. 2B, C) revealed that after 15 weeks about 54% of the surface area was steatotic, of which 25% consisted of macrovesicular steatosis and 29% of microvesicular steatosis. After 25 weeks, about 71% of the surface area was steatotic, of which 30% consisted of macrovesicular steatosis and 41% of microvesicular steatosis. Treatment with elafibranor fully blunted the microvesicular steatosis to 0.1%, and only a slight macrovesicular steatosis of 5% remained. Biochemical analysis of intrahepatic lipids (Fig. 2D, E and F) was in line with the histological analysis and revealed that HFC feeding resulted in a significant increase in hepatic triglycerides and cholesteryl esters as compared with the chow diet (2.1-fold and 3.5-fold increase after 25 weeks, both p < 0.001), while free cholesterol levels remained similar. Elafibranor treatment almost normalized the hepatic triglyceride levels (−44% vs. HFC control, p < 0.001) and significantly decreased hepatic cholesteryl esters as well (−29% vs. HFC control, p < 0.001), while free cholesterol levels remained unchanged.
HFC feeding also strongly induced lobular inflammation, characterized by aggregates of inflammatory cells comprising mononuclear cells and polymorphonuclear cells. Quantification of the lobular inflammation (Fig. 2G) showed that the HFC feeding resulted in a robust increase in the number of aggregates as compared to the chow diet (20.5-fold increase, p = 0.029 and 27.8-fold increase, p < 0.001, after 15 weeks and after 25 weeks, respectively). Treatment with elafibranor largely decreased the number of inflammatory aggregates (7.4-fold decrease vs. HFC control, p < 0.001).
Fifteen weeks of HFC feeding induced the onset of fibrosis, as shown by patches of collagen deposition, and after 25 weeks fibrosis was clearly established (Fig. 2A; Sirius Red staining). Quantification of fibrosis by computerized analysis of collagen deposition in histological slices (Fig. 2H) revealed a profound increase in collagen deposition after 25 weeks of HFC feeding (3.3-fold increase vs. chow diet, p = 0.011) that was confirmed by biochemical analysis of collagen (Fig. 2I; 3.5-fold increase vs. chow diet, p < 0.001). Treatment with elafibranor precluded this induction of fibrosis, as shown by significantly lower levels relative to the HFC control group in the biochemical analysis (2.9-fold lower, p < 0.001) and a corresponding tendency in histological collagen content (1.9-fold lower, p = 0.067). Fibrosis evaluation by a board-certified pathologist revealed that after 15 weeks of HFC feeding, fibrosis in most mice was primarily located within the perisinusoidal and/or periportal area (score F1-F2), while after 25 weeks of HFC feeding the majority (60%) of mice showed bridging fibrosis (F3), with the remaining mice having fibrosis in the perisinusoidal and/or periportal area (score F1-F2) (Fig. 2J). Almost all mice (87%) treated with elafibranor had fibrosis within the perisinusoidal and periportal areas (score F2), and the remaining mice showed bridging fibrosis (F3) (Fig. 2J).
Treatment with elafibranor normalizes metabolic, inflammatory and fibrotic gene expression.
To further investigate the mechanisms and pathways modulated by elafibranor, the differentially expressed pathways of E3L.CETP mice treated with elafibranor were compared with those of untreated mice. While the HFC diet led to a total of 327 differentially expressed pathways as compared with mice on a healthy chow diet, elafibranor treatment resulted in a total of 338 differentially expressed pathways, the majority of which (83%) overlapped with the differentially expressed pathways induced by the HFC diet (Fig. 3A). The vast majority of those overlapping pathways were also reversed by elafibranor treatment, and only a small portion of the pathways were attenuated or enhanced by elafibranor treatment per se. The top 15 most significantly enriched pathways in the overlapping part of the Venn diagram that were induced by the HFC diet and reversed by elafibranor treatment, as well as the top 15 most significantly enriched pathways that were not induced by the HFC diet but affected only by elafibranor treatment, are visualized in Fig. 3A. Among the pathways reversed by elafibranor treatment were pathways important for NASH development, such as inflammation pathways (NF-κB and IL-8 signalling), metabolism pathways (sirtuin signalling, mitochondrial function and oxidative phosphorylation, LXR/RXR activation) and hepatic fibrosis/hepatic stellate cell activation. The part of the hepatic fibrosis pathway analysis representing the statistically significant gene expression changes in activated stellate cells is shown in Fig. 3B and indicates an upregulation of most genes with the HFC diet and a reversal of this gene expression upon elafibranor treatment. We subsequently performed an upstream regulator analysis, which determines the activation state (z-score) of the transcription factors involved, based on the changes in expression of their target genes. All significant upstream regulators with a z-score < −6 or > 4 (arbitrary cut-offs to shorten the list) for elafibranor vs. HFC are shown in Fig. 3C. The majority of the upstream regulators belonged to biological process categories with high relevance to NASH development, such as 'Metabolism', 'Inflammation' and 'Connective tissue'. Elafibranor led to inhibition of inflammatory upstream regulators and, as expected from a PPAR-α/δ agonist, to upregulation of PPAR-α and PPAR-δ, as well as PPAR-γ and other metabolic upstream regulators.
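Applying such z-score cut-offs to an upstream-regulator table is a one-liner in pandas; the sketch below uses invented scores purely to illustrate the filtering.

```python
import pandas as pd

# Invented upstream-regulator results (illustrative only).
regs = pd.DataFrame({
    "regulator": ["TNF", "PPARA", "TGFB1", "PPARD", "IL1B"],
    "z_score":   [-7.2, 5.1, -6.5, 4.3, -3.9],
})

# Keep strongly inhibited (z < -6) or strongly activated (z > 4) regulators.
shortlist = regs[(regs["z_score"] < -6) | (regs["z_score"] > 4)]
print(shortlist)
```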
E3L.CETP mice on HFC diet have a transcriptomic profile similar to humans with NASH.
At the gene expression level, elafibranor in the E3L.CETP mice predominantly reversed pathways that were induced by HFC. For an appropriate judgement of the translational value of the elafibranor effects in E3L.CETP mice, it is important to investigate to what degree the induction of NASH by HFC feeding in these mice mimics human NASH. To investigate whether E3L.CETP mice on the HFC diet indeed recapitulate the underlying disease pathways of NASH patients, hepatic gene expression of the mice was compared with a representative human NASH signature. To this end, the published hepatic gene expression profiles of four independent human studies of NASH patients and controls [38][39][40][41] were merged such that only the differentially expressed genes that were found in at least two studies AND had the same 2logR direction were used for pathway analysis. In total, 160 differentially expressed pathways were identified in humans that distinguished NASH patients from controls. Of those, 139 (87%) were recapitulated in E3L.CETP mice on the HFC diet (Fig. 4A). As compared with humans, E3L.CETP mice on the HFC diet vs. chow diet had more (n = 327) differentially expressed pathways.
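The recapitulation percentage quoted above is a simple set computation; the sketch below reproduces the arithmetic with placeholder pathway sets sized to match the reported counts (the pathway names themselves are not reproduced here).

```python
# Placeholder pathway sets sized to the reported counts (160 human,
# 139 shared, 188 mouse-only); names are synthetic stand-ins.
human_pathways = {f"human_pathway_{i}" for i in range(160)}
shared = {f"human_pathway_{i}" for i in range(139)}
mouse_pathways = shared | {f"mouse_only_pathway_{i}" for i in range(188)}

overlap = human_pathways & mouse_pathways
pct = 100 * len(overlap) / len(human_pathways)
print(f"{len(overlap)}/{len(human_pathways)} pathways recapitulated "
      f"({pct:.0f}%)")  # -> 139/160 pathways recapitulated (87%)
```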
The top 15 most significantly enriched pathways for human NASH patients are visualized in Fig. 4B, and the enrichment of those pathways in E3L.CETP mice on the HFC diet is plotted therein. The top 15 consisted largely of pathways related to inflammation, followed by pathways involved in lipid metabolism. Furthermore, blood pressure- and cancer-related pathways ranked high, as did the hepatic fibrosis signalling pathway. In the E3L.CETP mice, all of the top 15 human NASH pathways were recapitulated. Overall, the enrichment of most inflammatory pathways and of the hepatic fibrosis signalling pathway was larger in the E3L.CETP mice, while the enrichment of the cancer-related pathway was lower. Additionally, the E3L.CETP mice on the HFC diet had more (n = 188) differentially expressed pathways that were not observed in humans. The top 15 most enriched pathways of this portion of the Venn diagram are shown in Fig. 4C and reveal predominantly pathways related to inflammation, but also pathways related to matrix proteins/collagens, mitochondria and signalling. Most of those pathways were enriched in humans as well, but did not reach the −log(p-value) cut-off of 2.
In addition to the human NASH gene signature, we investigated the recapitulation of fibrosis pathways in more detail. To this end, a published human gene profile that specifically differentiates NASH patients with severe fibrosis (stage F3 or F4) from NASH patients with mild fibrosis (stage F0 or F1) was used 42 . This differential gene set consists of genes that are all upregulated in NASH patients with severe fibrosis. This gene set was significantly upregulated as well, for all but one gene, in the E3L.CETP mice on HFC (Fig. 4D). Treatment with elafibranor in the E3L.CETP mice predominantly reversed the expression of this gene set (Fig. 4D).
Discussion
In this study, we demonstrate that treatment of obese, insulin-resistant and dyslipidemic E3L.CETP mice with elafibranor markedly ameliorated steatosis and lobular inflammation and blunted the progression of hepatic fibrosis. Bioinformatics analysis of gene expression identified regulatory pathways and upstream regulators in the liver that are specifically influenced by elafibranor.
To mimic the situation of human NASH patients as closely as possible, we used a transgenic mouse model with a lipoprotein metabolism resembling that of humans and used dietary induction to represent the obesogenic diets to which many NASH patients are exposed. E3L.CETP mice, owing to their APOE*3Leiden mutation, have an impaired clearance of apoB-containing lipoproteins, thereby mimicking the slow clearance observed in humans and resulting in a mouse model that develops hyperlipidemia and atherosclerosis upon saturated fat and cholesterol feeding 13,14,31 . The model is therefore in clear contrast to wild-type C57BL/6 mice, which have a very rapid clearance of apoB-containing particles, resulting in plasma cholesterol that is primarily contained in the HDL fraction (and which do not develop atherosclerosis). The high-fat and cholesterol diet used in the current study induced obesity, insulin resistance and hyperlipidemia in the model, and the hepatic phenotype of the mice resembled the human NASH pathology 33,34 . The model developed a substantial amount of macrovesicular steatosis, a hallmark of NASH patients, besides microvesicular steatosis. In addition, lobular (mixed) inflammation and increasing fibrosis, progressing to bridging fibrosis, developed in a pattern typical for dietary induction and resembling the human situation 34 . Another important feature of the E3L.CETP mice is that, in contrast to wild-type mice, they respond well to treatment with hypolipidemic drugs, including statins and fibrates, similarly to humans 13,15,[19][20][21][22][23][25][26][27]30,31 . There is increasing evidence that statins and fibrates may have beneficial effects on NASH and liver fibrosis 9,44,45 .
In phase 2a trials in obese and dyslipidemic or obese and prediabetic patients, 80 mg/d elafibranor consistently improved plasma lipids (decreased plasma triglycerides and LDL-cholesterol, increased HDL-cholesterol), improved glucose homeostasis (decreased plasma glucose, fructosamine, insulin and HOMA-IR; improved hepatic and peripheral insulin sensitivity during hyperinsulinemic-euglycemic clamps) and in addition improved the levels of liver enzymes (plasma alanine aminotransferase, alkaline phosphatase and γ-glutamyltransferase) 46,47 . In the subsequent phase 2 trial in NASH patients, 80 and 120 mg/d elafibranor were evaluated after 1 year for resolution of NASH without worsening of fibrosis, and the 120 mg/d dose (but not the 80 mg/d dose) improved the NAFLD activity score (NAS) without worsening of fibrosis 10 . A phase 3 trial with 120 mg/d elafibranor in NASH patients is currently ongoing; unfortunately, interim results of this trial reported that after 72 weeks the placebo arm had an unexpectedly high response and elafibranor failed to demonstrate a significant effect on the primary endpoint of NASH resolution without fibrosis worsening (19.2% of the patients met the primary endpoint in the elafibranor-treated group vs. 14.7% in the placebo-treated group, p = 0.0659) 11 .
In the current study in E3L.CETP mice, 15 mg/kg/d elafibranor administered after induction of disease significantly decreased plasma triglycerides and total cholesterol, increased HDL-cholesterol and decreased blood glucose and plasma insulin levels, similarly to what was seen in the human trials. In addition, we observed in the E3L.CETP mice a profound improvement in steatosis and hepatic inflammation, while fibrosis development was precluded once treatment was started. These results are in line with the improvement in NAS (or, more specifically, in the steatosis and lobular inflammation scores) in the GOLDEN-505 phase 2 trial and also corroborate the results of other reported preclinical rodent studies 5,48 . Although the responses to elafibranor in our study are in line with the phase 2 clinical trials and with the results of other preclinical studies, the recent interim results of the clinical phase 3 trial (which report a failure to meet the endpoint) demand a critical view of the discrepancy between the very promising results in many different preclinical models and the so far disappointing results of the phase 3 trial. In the clinical trials a dose of 120 mg/d was used. To translate this dose to the dosing used in mouse studies, the following simplified calculation can serve as a rough guide: 120 mg in a human of 80 kg corresponds to 1.5 mg/kg/d, which would be equivalent to a dose of 15 mg/kg/d in mice when taking into account the approximately 10-times faster metabolism of mice 49 . Although some preclinical studies used a relatively high dose (30 mg/kg/d) 5,48 , our study (15 mg/kg/d) and a db/db mouse study (1, 3 and 10 mg/kg/d) 5 used similar or lower doses, respectively, suggesting that the discrepancy cannot simply be explained by the dose.
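Written out, the rough allometric conversion used above is as follows (the 80 kg body weight and the factor of ~10 for the faster murine metabolism are the stated assumptions):

```latex
\[
\underbrace{\frac{120~\text{mg/d}}{80~\text{kg}}}_{\text{human dose}}
 = 1.5~\text{mg/kg/d}
\quad\Longrightarrow\quad
1.5~\text{mg/kg/d} \times 10 \approx 15~\text{mg/kg/d in mice.}
\]
```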
In the clinical trials, the NAS is used for the evaluation of NASH improvement. The NAS is a combined score based on the scoring of steatosis, ballooning and lobular inflammation. E3L.CETP mice on the HFC diet demonstrate ballooning 34 . However, the prevalence of ballooning is not the same as that seen in NASH patients, and, as for other mouse models 34,48,50 , only marginal ballooning was observed. It is therefore important to realize that rodent NASH models in general do not entirely meet the histomorphological criteria with respect to hepatocyte ballooning, and scoring of ballooning in mouse models in which ballooned cells are only occasionally found can be misleading. A direct comparison with the NAS of the clinical trials therefore remains difficult. The more detailed results of the GOLDEN-505 phase 2 trial report the results for a subset of patients with a baseline NAS of ≥ 4 (thus excluding the patients with a baseline NAS of 3, as has also been done in the phase 3 trial) and demonstrate large and significant effects on ballooning and lobular inflammation, while there is no significant effect on steatosis and fibrosis 10 . In contrast, in our study and all other preclinical studies a profound effect on steatosis is reported. A possible clue to this difference might be given by the other deviant parameters in our study, such as the decrease in body weight, which is not seen in humans but has been reported in other mouse studies as well 48 . While plasma AST decreased in our study, plasma ALT was not affected. In the reported clinical trials, plasma AST was not affected while plasma ALT was consistently decreased. It has been reported that the PPAR-α agonist fenofibrate decreases AST gene expression and plasma levels in mice, whereas the compound increases the expression and levels in human liver cells 51 , which may have counteracted a potential beneficial effect of elafibranor in NASH patients. Absolute liver weights were also increased by elafibranor in our study (data not shown: 1.4-fold increase vs. HFC control, p < 0.001). Weight loss, hepatomegaly and hepatocyte peroxisome proliferation, as well as increased plasma ALT levels, have been consistent findings in rodent studies with PPAR-α agonists, but not in humans 5,48,52,53 . Moreover, PPAR-α expression in rodents may be much higher than in humans [54][55][56][57] and PPAR-α activation is less pronounced in human liver than in mouse liver 58 ; this could profoundly affect lipid metabolism and inflammation 8,55,59 , and subsequently the development of fibrosis. It is conceivable that species-dependent metabolic effects of PPAR-α agonists explain the strong effects of elafibranor in preclinical models vs. the more modest effect of elafibranor in the clinical trials. An apparent contradiction to this postulate, however, is that in hAPOE2/PPAR-α knockout mice 5 steatosis has been reported to improve as well, suggesting that the effect on steatosis cannot fully be explained by PPAR-α activation. However, elafibranor also exhibits PPAR-δ agonistic activity, thereby inducing peripheral fatty acid oxidation and energy metabolism and having a positive effect on lipid metabolism. In summary, we reproduced NASH development with progression to fibrosis in HFC-fed E3L.CETP mice, a model characterized by obesity, metabolic anomalies, and histopathological features and underlying disease pathways similar to those observed in human NASH.
In this model, elafibranor exerted beneficial effects on steatosis and hepatic inflammation and a preventive effect on the progression of fibrosis. Taking into account that, owing to species differences, the response to some targets, such as PPAR-α, may be overrepresented in animal models, we infer that elafibranor will be particularly useful to reduce hepatic inflammation and could be a pharmacologically useful agent, probably in combination with other agents, for human NASH. | 8,036 | 2021-03-03T00:00:00.000 | [ "Biology", "Medicine" ] |
Using apparatus with vortex layer of ferromagnetic particles for production of unburnt synthanite
This paper presents a method of producing unburnt synthanite from natural minerals using an apparatus with a vortex layer of ferromagnetic particles. A comparative quality analysis was performed of anhydrite binding substances produced by milling anhydrite rock together with a complex hardening activator in traditional ball mills and of those produced by the proposed method. It was found that the increased hydration activity of the synthanite achieved by the joint milling of the raw-material components results from the hindered impact of ferromagnetic particles under the influence of a magnetic field. The reason for this activity is not only the increase in the specific surface area of the milled material but also changes in the anhydrite structure at the crystal-lattice level, which preserve numerous defects on the surfaces of the individual particles.
Introduction
The Russian gypsum industry is currently in a very difficult situation. It faces a number of serious problems [1]:
- a decrease in the sales and production volumes of its most popular materials, gypsum boards and gypsum fiber sheets;
- a constant rise in production expenses with a simultaneous reduction in prices;
- worsening technical and economic performance of producers because of reduced purchasing power, followed by a shift of demand into the cheap production segment.
At the same time, there is no basis to expect fundamental change in the near future; neither a rise in consumption in the industry nor growth in production and sales profits can be expected. It is unlikely that new manufacturing sites will be built, while low-margin businesses will inevitably close. In such a situation, operating businesses need to concentrate maximum effort on using the resources at their disposal, putting free sites and facilities to use, and taking resource-saving and sustainability measures. That is why processing anhydrite rock (a waste product of gypsum rock production) to produce unburnt synthanite, and subsequently wall stone, may now be considered rather promising. This path meets the requirements of the best available techniques [2, 3]. Cost-benefit analyses have shown that implementation is worthwhile when the technology uses existing manufacturing areas [4].
Synthanite was widely used at the end of the 19th and beginning of the 20th centuries for brickwork and plaster mortars and for wall stone production [5]. The increasing production volume of fast-hardening, low-temperature calcined gypsum binder forced synthanite out of the building market. It is now forgotten, and industrial building enterprises do not produce it. But more and more people are becoming interested in synthanite. Some scientists are working out improved technologies for synthanite production and use. The results are being patented and published [6, 7, etc.] and discussed at various conferences.
Researchers at the Don State Technical University have been investigating the feasibility and effects of anhydrite rock recycling in a series of comprehensive studies. The economic and ecological viability of wall stone production based on unburnt synthanite, as well as its technical realization, has been demonstrated [8,9]. It should be mentioned that not all results of these investigations have been published. One investigation was devoted to the production of unburnt synthanite with technical characteristics close to those of high-temperature burnt binder. The hypothesis tested was that an apparatus with a vortex layer of ferromagnetic particles, whose hindered impacts grind natural anhydrite rock and change the substance's structure at the crystal-lattice level, would make it possible to produce unburnt synthanite with high hydration activity. This would be achieved not only through the increase in specific surface area, but also through the presence of multiple defects on the surfaces of the individual particles.
Results confirming this hypothesis are given below. They also show that this direction of scientific research is both relevant and practical.
Synthanite can be produced from natural gypsum or anhydrite rock, as well as from chemical industry waste containing anhydrous calcium sulfate. Depending on the method of production, synthanites are divided into high-temperature burnt and unburnt types [5]. In the first case, the binding material is obtained by grinding gypsum rock burnt at 600-700 °C; in the second, by milling natural anhydrite in various mills, or by activating industrial waste that needs no grinding.
Synthanite produced by burning natural materials is of higher quality than the unburnt form. Russian state technical requirements specified grades for synthanite: 50, 100, 150 and 200. Grade 50 was used only for binder produced from natural anhydrite. The production of unburnt synthanite with higher characteristics is possible only if its activity is increased.
The hydration activity of anhydrite is increased by a higher degree of grinding and/or by adding a hardening activator. The main role in the hardening process is played by the condition of the particle surfaces, which depends on the nature and concentration of active surface centres [10]. During the grinding of natural anhydrous calcium sulfate, some chemical bonds break, forming unsaturated Ca2+ cations and tetrahedral SO4(2-) anion groups on the particle surfaces. The newly formed mineral particle surfaces are highly reactive.
To accelerate hardening, milling equipment that crushes the material by impact should be used. In contrast to abrading mills, where milling merely increases the specific surface area of the processed material, impact forces produce structural changes at the crystal-lattice level (chemical bonds break, phase changes occur, multiple structural defects appear, solid-phase reactions accelerate, etc.). This results in an increased hydration activity of the produced binding material [10][11][12][13].
In the building materials industry, mills can be divided into three groups according to how the dispersing stress is generated: by mechanical, aerodynamic or electromagnetic transfer of energy to the particles of the processed material. The first group comprises the traditionally used ball, rod, vibration and planetary mills. They are characterized by design complexity, high metal and energy consumption and low productivity, and they are a source of much noise and dust.
Various fluid-energy (jet) mills represent the second group of crushing machines. Among their advantages are high efficiency with fast material processing, the absence of wearing parts, and the possibility of combining milling with other processes (for example, drying). However, these mills require great power to maintain a stable aerodynamic operating mode. The particle-size distribution of the milled material is highly polydisperse, which negatively affects the hydration behavior of the resulting binder. With a narrow particle-size distribution, hydration of the binder particles proceeds more homogeneously, at the same rate, which accelerates the hardening of binding materials, including anhydrite [11].
The third group of mills is relatively new but has proven itself well. Electromagnetic mills and apparatus with a vortex layer of ferromagnetic particles belong to this type of equipment. They provide a high power density of milling, shorten the raw-material processing time and increase the fineness of the binder while simultaneously reducing power consumption.
Methods
Researchers at the Building Materials department of the Don State Technical University have studied unburnt synthanite produced by two alternative methods of raw-material milling:
- fine milling in a ball mill for several hours;
- short-time (several minutes) grinding in an apparatus with a vortex layer of ferromagnetic particles.
The raw material was natural anhydrite rock not used for gypsum binder production because of its low content of calcium sulfate dihydrate. Slaked non-hydraulic lime and a silica material were used as the hardening activator, in amounts of 5% and 15% of the anhydrite rock mass, respectively.
The fineness of the produced synthanite was estimated from its specific surface area using a PCH-11M(SP) device.
Cylinders 5.05 cm in diameter and of the same height were made from the produced synthanite. They were formed from a stiff mixture with a water content of 9% by pressing in special moulds at a pressure of 40 MPa. After 28 days of hardening under normal conditions, the cylinder samples were dried and tested.
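For orientation, the press force implied by this geometry and pressure follows directly from F = pA (the tonne-force figure is simply the same value re-expressed):

```latex
\[
A = \frac{\pi d^2}{4} = \frac{\pi (5.05~\text{cm})^2}{4} \approx 20.0~\text{cm}^2
= 2.00\times10^{-3}~\text{m}^2,
\qquad
F = pA = 40\times10^{6}~\text{Pa} \times 2.00\times10^{-3}~\text{m}^2
\approx 80~\text{kN} \;(\approx 8.2~\text{tonne-force}).
\]
```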
Results
The results of estimating the specific surface area of the synthanites produced by the different methods are given in Tables 1 and 2. It is known that once an experimentally determined optimum milling fineness is exceeded, the potential energy of the particle surfaces rises considerably [12]. This leads to particle aggregation, followed by a worsening of binder qualities. Analysis of the experimental results showed that the efficiency of anhydrite rock milling in the ball mill decreases when milling lasts longer than five hours. It should also be kept in mind that high milling fineness is accompanied by a considerable growth in mechanical and electrical energy consumption. We can therefore conclude that milling for longer than five hours is not practical.

The milling times needed to produce synthanite with an equivalent specific surface area in the apparatus with a vortex layer of ferromagnetic particles and in the ball mill were found to be incomparable. For example, to obtain a binder specific surface area of ~6000 cm2/g, the apparatus with a vortex layer of ferromagnetic particles needs about 3.5 min, while a ball mill takes 5 hours. It must also be mentioned that in the apparatus with a vortex layer of ferromagnetic particles, grinding results from the hindered impact of two ferromagnetic particles. The specific power at the impact points rises dramatically, and pressure values may reach thousands of megapascals. Such conditions lead to crystal-lattice deformation and a sharp increase in the free energy of the substance, followed by rising chemical activity.

The electromagnetic field significantly influences the activity of the substance processed in the vortex layer. The authors of [12][13][14][15] have shown that magnetic and electric fields influence both the physical properties of a material and the rates of chemical reactions. Joint investigations by the Kurchatov Institute and Lomonosov Moscow State University showed that when non-magnetic materials are ground under the influence of a magnetic field, crack nuclei appear that cannot heal because of the repulsive interaction of the crack edges [14]. This means that the magnetic field preserves structural defects, thereby increasing the reactivity of the binder, and prevents the aggregation of individual particles during grinding.
It can be concluded that the high-frequency impact forces applied to the small contact surfaces of the ferromagnetic elements under the influence of the rotating magnetic field lead to results exceeding those produced by the dispersion of materials in traditional mills [12].
Particles of a binding substance ground in the vortex layer should thus possess high energy, activating the hydration process. This hypothesis was confirmed by the results of compressive strength tests on the control groups of cylinder samples, which were made of unburnt synthanite produced in the ball mill and in the apparatus with a vortex layer of ferromagnetic particles (Figure 1). In both cases, the ultimate compressive strength of the artificial stone increases with the specific surface area of the binder. As the specific surface area grows from 2615 to 6085 cm2/g in the binder produced in the apparatus with a vortex layer of ferromagnetic particles, the ultimate compressive strength rises from 40.3 to 65.6 MPa. Binder produced in the ball mill with a comparable range of specific surface area (2171-6007 cm2/g) shows a smaller change in ultimate compressive strength (from 26.1 to 33.6 MPa). This confirms the hypothesis formulated earlier: the use of modern impact-milling equipment enables the production of unburnt synthanite with increased activity.
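Using the figures above, a quick back-of-the-envelope comparison of how much strength each milling route gains per unit of added specific surface area can be made; the linear slope below is only a coarse summary of the two reported end points, not a fitted model.

```python
# (specific surface area in cm^2/g, compressive strength in MPa)
vortex = [(2615, 40.3), (6085, 65.6)]
ball_mill = [(2171, 26.1), (6007, 33.6)]

def gain_per_1000(points):
    """Strength gain in MPa per 1000 cm^2/g of added surface area."""
    (s1, f1), (s2, f2) = points
    return 1000 * (f2 - f1) / (s2 - s1)

print(f"vortex layer: {gain_per_1000(vortex):.1f} MPa per 1000 cm^2/g")
print(f"ball mill:    {gain_per_1000(ball_mill):.1f} MPa per 1000 cm^2/g")
# -> roughly 7.3 vs. 2.0: the vortex-layer binder converts added fineness
#    into strength far more effectively.
```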
Discussion
The use of an apparatus with a vortex layer of ferromagnetic particles for the further development and refinement of synthanite production technologies can be considered a rational and promising direction. This kind of anhydrite grinding provides industry with a binding material of increased activity. Additional advantages of grinding anhydrite rock in an apparatus with a vortex layer of ferromagnetic particles are low power consumption and short grinding time.
Achieving this technical effect simultaneously addresses the problem of replenishing the raw-material supply for gypsum binder production and for the manufacture of building materials based on gypsum binders. As can be seen from the above, it is rational to process natural anhydrite rock that is unsuitable for gypsum binder production (because of its low calcium sulfate dihydrate content) into unburnt synthanite. This can serve as the base material for wall stones (partition and interior walls). For wider use of such materials, further investigation of ways to increase the water resistance of artificial stone based on the studied synthanite is necessary.
Fig. 1. Correspondence between the ultimate compressive strength of the control samples and the specific surface area of the synthanite (binder produced in the apparatus with a vortex layer of ferromagnetic particles vs. binder produced in a ball mill).
Table 1. Dependence of binder specific surface area on raw-material milling time in a ball mill.
Table 2. Dependence of binder specific surface area on raw-material milling time in an apparatus with a vortex layer of ferromagnetic particles.
| 2,716 | 2018-01-01T00:00:00.000 | [ "Materials Science" ] |
Counterfactuals and the fixity of the past
I argue that David Lewis’s attempt, in his ‘Counterfactual Dependence and Time’s Arrow’, to explain the fixity of the past in terms of counterfactual independence is unsuccessful. I point out that there is an ambiguity in the claim that the past is counterfactually independent of the present (or, more generally, that the earlier is counterfactually independent of the later), corresponding to two distinct theses about the relation between time and counterfactuals, both officially endorsed by Lewis. I argue that Lewis’s attempt is flawed for a variety of reasons, including the fact that his own theory about the evaluation of counterfactuals requires too many exceptions to the general rule that the past is counterfactually independent of the present. At the end of the paper, I consider a variant of Lewis’s strategy that attempts to explain the fixity of the past in terms of causal, rather than counterfactual, independence. I conclude that, although this variant avoids some of the objections that afflict Lewis’s account, it nevertheless seems to be incapable of giving a satisfactory explanation of the notion of the fixity of the past.
unique, settled, immutable actuality. These descriptions scarcely wear their meaning on their sleeves, yet do seem to capture some genuine and important difference between past and future. What can it be? (CDTA: 36)

Lewis provides an answer:

I suggest that the mysterious asymmetry between open future and fixed past is nothing else than the asymmetry of counterfactual dependence. The forking paths into the future - the actual one and all the rest - are the many alternative futures that would come about under various counterfactual suppositions about the present. The one actual, fixed past is the one past that would remain actual under this same range of suppositions. (CDTA: 38)

Can the 'asymmetry of openness' be explained, as Lewis here proposes, in terms of a temporal asymmetry of counterfactual dependence?[1] In this paper, I argue that Lewis's attempt to explain the intuition of the fixity of the past in terms of counterfactual independence (or a failure of counterfactual dependence) is unsuccessful.[2] It follows that, regardless of whether the notion of the counterfactual dependence of the future on the past is relevant to the conception of the future as open, Lewis's ingenious attempt to explain the asymmetry of openness in terms of an asymmetry of counterfactual dependence does not work.[3]

A preliminary comment is in order about the explanatory task concerning the fixity of the past. Lewis often writes - and, for the sake of brevity, I shall follow him in this - as if the task were to explain the fact that the past is fixed, rather than our intuition that it is. However, it is evident (as the first of the quotations above indicates) that for Lewis the principal explanandum is not the fact that the past is fixed, but rather our intuition that it is so - what I shall call 'the fixity intuition'. Some of my objections depend on the fact that it is the fixity intuition, rather than the existence of an objective correlate, that is the explanandum. The crucial issue is whether the way that we evaluate counterfactuals involves a temporal asymmetry that could explain our intuition that the past is fixed.[4]

[1] Most of Lewis's paper is devoted to the task of constructing a (possible worlds) analysis of the truth conditions for counterfactuals that will yield the desired temporal asymmetry of counterfactual dependence, yet without building a temporal asymmetry into the analysis merely by fiat.
[2] For the distinction between counterfactual independence and failure of counterfactual dependence, see §6 below.
[3] Lewis's account of the openness of the future (and, by implication, his account of the fixity of the past) has recently been criticized by Barnes and Cameron (2011: §3). However, my criticisms are quite different from theirs.
[4] Similarly, the corresponding task concerning the future is not to explain why the future is open, but rather our intuition that it is. Again, when Lewis offers his famous account of the criteria for the evaluation of the closeness of possible worlds that he takes to give 'the correct truth conditions' for counterfactuals, the principal desideratum for the correctness of his account is that it yield truth conditions that correspond to our firm intuitions about the truth values of relevant counterfactuals, such as the 'Nixon' counterfactual (CDTA: 46-48).
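For readers unfamiliar with the analysis mentioned in notes 1 and 4, the standard Lewisian possible-worlds truth condition for counterfactuals can be stated as follows (this is the familiar textbook formulation, not a claim about any further details of the account under discussion):

```latex
\[
A \mathbin{\Box\!\!\rightarrow} C \text{ is true at } w \iff
\begin{cases}
\text{(i) there are no } A\text{-worlds (vacuous truth), or}\\[2pt]
\text{(ii) some } (A \wedge C)\text{-world is closer to } w \text{ than any }
(A \wedge \neg C)\text{-world.}
\end{cases}
\]
```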
1. Lewis is wrong in claiming that we normally keep the past (relative to the time of the antecedent) fixed when evaluating backward counterfactuals (§3 below).

2. Lewis's claim that we normally keep the more remote past (relative to the time of the antecedent) fixed when evaluating forward counterfactuals is at least doubtful (§4).

3. Lewis's claim that, when evaluating forward counterfactuals, we assume a change in at least the immediate past (relative to the time of the antecedent) to allow for a smooth 'transition period' undermines his attempt to explain the fixity of the past in terms of counterfactual independence (§5).

4. A retreat from 'counterfactual independence' to 'failure of (systematic) counterfactual dependence' does not help Lewis (§6).

5. The substitution of 'causal independence' for 'counterfactual independence' would not help Lewis (§7).
Before proceeding to the objections, there is an important distinction to be made between two different elaborations-represented by Thesis F and Thesis B of the next section-of the idea that the past is counterfactually independent of the present and future.
2 Forward counterfactuals, backward counterfactuals, and counterfactual independence
Lewis claims that the past is counterfactually independent of the present-and, more generally, that the earlier is counterfactually independent of the later. But this claim is, as it stands, potentially ambiguous. To explain why, I employ terminology taken from Bennett (1984: 57). I use 'forward counterfactual' to refer to one whose consequent is about a later time than any that its antecedent is about; and 'backward counterfactual' to refer to one whose consequent is about an earlier time than any that its antecedent is about. (Note that a 'backward counterfactual' in this sense need not say that the earlier would have been different had the later been different: thus the notion must be distinguished from the narrower notion of a backward counterfactual that is based on a 'back-tracking argument'-on which I shall have more to say shortly. 5) I stipulate that if a counterfactual's consequent is about both an earlier and a later time than any that its antecedent is about, then it is neither a forward nor a backward counterfactual. Finally, I use 'the A-time' to refer to the time that the antecedent of a given counterfactual is about (thus ignoring, for simplicity, any counterfactual whose antecedent is about a multiplicity of times). With this terminology established, we can see that to say that the past is counterfactually independent of the present-or, more generally, that the earlier is counterfactually independent of the later-could imply a commitment to either or both of the following theses:

Thesis F: When we evaluate forward counterfactuals, we keep the past, relative to the A-time, fixed.
Thesis B: When we evaluate backward counterfactuals, we keep the past, relative to the A-time, fixed.
By 'we keep the past, relative to the A-time, fixed' when evaluating a counterfactual, I mean that we assume that, in the counterfactual situation, the past relative to the A-time is exactly the same as the actual past relative to the A-time. In terms of possible worlds: 'the closest possible A-worlds-i.e., worlds in which the antecedent is fulfilled-are worlds that share their past, relative to the A-time, with the actual world'.
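Put schematically-this compact notation is my own gloss, not Lewis's-let @ be the actual world, t_A the A-time, and f(A, @) the set of closest A-worlds used in evaluating a counterfactual at @. Both theses then impose the same constraint on pre-A-time history:

\[
\forall w \in f(A, @): \quad w \restriction (-\infty, t_A) \;=\; @ \restriction (-\infty, t_A),
\]

where \(w \restriction (-\infty, t_A)\) denotes the history of world \(w\) up to t_A. Thesis F applies the constraint when the counterfactual being evaluated is a forward counterfactual; Thesis B applies it when the counterfactual is a backward one.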
It is important to consider Thesis F and Thesis B separately, given the possibility that different standards apply to the evaluation of forward and backward counterfactuals, standards that involve different treatments of the past relative to the A-time. In particular, we need to take seriously the proposal that Thesis F is true but Thesis B false. 6 Suppose that Jane is a student who is both a perfectionist and bad at meeting deadlines. The deadline for essay submission-2 p.m. on Monday-approaches, but half an hour before the deadline Jane's essay is still only a rough draft. The proponent of the view that Thesis F is true and Thesis B false might hold that when we evaluate, with respect to this scenario, the forward counterfactual

(C1) If Jane had handed in her essay at 2 p.m. on Monday, it would have got a low mark,

we keep the past relative to the A-time fixed, including the (actual) fact that by Monday afternoon she had only a rough draft of the essay (so that (C1), we may suppose, comes out true), whereas when we evaluate the backward counterfactual
(C2) If Jane had handed in her essay at 2 p.m. on Monday, she would have revised it properly first, we do not keep the past relative to the A-time fixed, so (C2) may also come out true. What might explain this discrepancy in evaluation? The obvious thought is that when we consider the backward counterfactual (C2), our focus is on how the antecedent might or would have come about, whereas this is not our concern when considering the forward counterfactual (C1) (cf. Jackson 1977: 9, 11).
Obviously there is more to be said, since anyone who maintains that the standards for the evaluation of (C1) and (C2) differ in this way must confront the fact that (C2) appears to license the 'mixed' counterfactual 7

(C3) If Jane had handed in her essay at 2 p.m. on Monday, she would have revised it properly first, and it would not have got a low mark,

and hence the forward counterfactual

(C4) If Jane had handed in her essay at 2 p.m. on Monday, it would not have got a low mark,

which is in opposition to (C1) and to the thesis (Thesis F) that the past is kept fixed when evaluating forward counterfactuals. 8 I shall not discuss this issue here, save to remark that the tension created by endorsing Thesis F while denying Thesis B might be relieved by putting a contextual restriction on the application of Thesis F.
Given the distinction between Thesis F and Thesis B, which does Lewis endorse in holding that the past is counterfactually independent of the present? The answer provided by the text of CDTA is: both. More precisely, Lewis's answer is 'both, subject to three qualifications'. These qualifications are: (1) a restriction to contexts that Lewis describes as involving 'the standard resolution' of the vagueness of counterfactuals; (2) a restriction to 'the sorts of familiar cases that arise in everyday life'-for example, we are to ignore bizarre cases involving time machines, black holes, or weird possible worlds consisting of just one atom in the void (CDTA: 35); and (3) a class of exceptions (about which I shall say more later) that is generated by the need to provide a smooth 'transition period' between the actual past relative to the A-time and the fulfilment of the antecedent of the counterfactual.
A forward counterfactual such as (C1) does not, of course, explicitly say anything about the past relative to the A-time. However, Lewis's account of the semantics of counterfactuals (under the 'standard resolution' of their vagueness) has consequences for how the past relative to the A-time is envisaged as being in the hypothetical scenario in which the antecedent is fulfilled (cf. Lewis CDTA: 46-48). According to Lewis (assuming that the antecedent of (C1) is in fact false-let's suppose that in fact Jane handed in her essay on Wednesday, rather than Monday) the closest possible A-worlds relevant to the truth conditions of the forward counterfactual (C1) are worlds that share their past with the actual world at least up until very shortly before the A-time (2 p.m. on Monday in this case), and then diverge. 9 Since, in these late-diverging possible worlds, Jane still hasn't finished her essay by 2 p.m. on Monday, the essay she hands in is a mess. And since, according to Lewis, the counterfactual (C1) is true if and only if, in the closest possible worlds where Jane hands in the essay at 2 p.m. on Monday, the essay gets a low mark, the forward counterfactual comes out true.
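(For reference, the truth condition in the background here is the familiar one from Lewis's Counterfactuals (1973), which CDTA presupposes; for the non-vacuous case in which there is at least one A-world:

\[
A \mathbin{\square\!\!\rightarrow} C \text{ is true at } w \iff \text{some } (A \wedge C)\text{-world is closer to } w \text{ than any } (A \wedge \neg C)\text{-world}.
\]

All the work of the 'standard resolution' is then done by the similarity ordering, which determines which A-worlds count as closest.)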
8 cf. Downing (1959). The moral that Downing draws from the apparent conflict is that a statement such as (C2) that appears to be a 'back-tracking' backward counterfactual conditional (or subjunctive conditional, to use Downing's preferred terminology) is in fact a statement of a different kind altogether, what Downing calls a 'subjunctive implication' (1959: 131-132). Downing does not appear to have considered the possibility of salvaging the consistency of pairs such as (C1) and (C4) via an appeal to a shift of context.

9 Here and throughout this paper, I assume the truth of determinism, unless otherwise specified.

As for (C2), Lewis is quite explicit in stating that, under what he calls 'the standard resolution of the vagueness of counterfactuals', the backward counterfactual (C2) is false. According to Lewis, if we confine our attention to 'the sorts of familiar cases that arise in everyday life' (ignoring non-standard cases involving time machines, black holes, possible worlds consisting of one solitary atom in the void, etc.-none of which is relevant to the current example), and on the understanding that we are talking only about 'the standard resolution of the vagueness of counterfactuals', then his asymmetry claim is as follows:

Consider those counterfactuals of the form 'If it were that A, then it would be that C' in which the supposition A is indeed false, and in which A and C are entirely about the states of affairs at two times t_A and t_C respectively. Many such counterfactuals are true in which C also is false, and in which t_C is later than t_A. These are counterfactuals that say how the way things are later depends on the way things were earlier. But if t_C is earlier than t_A, then such counterfactuals are true if and only if C is true. These are the counterfactuals that tell us how the way things are earlier does not depend on the way things will be later. (CDTA: 35; bold emphasis mine)

The claim I have emphasized in bold type is an explicit endorsement of Thesis B: i.e., the thesis that backward counterfactuals are assessed in a way that keeps the past relative to the A-time fixed. 10 And the commitment to Thesis B is evident elsewhere in Lewis's paper: for example, when he says:

The past would be the same, however we acted now. The past does not at all depend on what we do now. It is counterfactually independent of the present. (CDTA: 38; bold emphasis mine)

Lewis is explicit in claiming that what he describes as 'the standard resolution' of the vagueness of counterfactuals does not allow for 'back-tracking' arguments, whether the counterfactuals are forward counterfactuals or backward counterfactuals, where back-tracking arguments appeal to the idea that since present conditions have their past causes, 'if the present were different then these past causes would have to be different, else they would have caused the present to be as it actually is' (CDTA: 33). Lewis does explicitly note that there are occasions when a back-tracking resolution of the vagueness of counterfactuals is appropriate, and that this is an obstacle to presenting 'a neat contrast between counterfactual dependence in one direction of time and counterfactual independence in the other direction' (CDTA: 33):

We know that present conditions have their past causes. We can persuade ourselves, and sometimes do, that if the present were different then these past causes would have to be different, else they would have caused the present to be as it actually is.
Given such an argument - call it a back-tracking argument - we willingly grant that if the present were different, the past would be different too. (CDTA: 33)

However, he claims that these contexts involve a special, non-standard resolution of the vagueness of counterfactuals:

Under [the] standard resolution [of the vagueness of counterfactuals], back-tracking arguments are mistaken: if the present were different, the past would be the same, but the same past causes would fail somehow to cause the same present effects. (CDTA: 34; bold emphasis mine)

3 Objection 1: Lewis, back-tracking, and the 'standard resolution of vagueness'

Why am I placing so much emphasis on Lewis's commitment to Thesis B, as a thesis about the evaluation of backward counterfactuals under the 'standard resolution' of their vagueness? The reason is that Thesis B seems to me plainly false as a description of the way in which we normally evaluate backward counterfactuals. Confronted with the 'rival' backward counterfactuals (C2) and (C2*):

(C2) If Jane had handed in her essay at 2 p.m. on Monday, she would have revised it properly first;

(C2*) If Jane had handed in her essay at 2 p.m. on Monday, it would have been only a rough draft just before she submitted it,

there seems no reason to say that the 'typical' or 'ordinary' verdict favours (C2*) rather than (C2). If anything, I think it is the other way round. 11 Nor is (C2) an isolated case. It is easy to produce examples concerning human behaviour where, in evaluating a backward counterfactual, we naturally suppose that if the agent had acted differently at a time t, then some features of the past relative to t (that are independent of the agent's character) would have been different, otherwise the agent would not have acted thus. 12 Downing's (1959) example involving prideful Jim, resentful Jack, and the quarrel, cited by Lewis (CDTA: 33), may be regarded (contrary to Downing's own verdict) as involving a backward counterfactual (or subjunctive) conditional of this 'character-accommodating' type. 13 Nor is the phenomenon of the naturalness of the back-tracking reading confined to backward counterfactuals that concern agency. Take the counterfactuals:

(C5) If there had been ice on the pond this morning, the temperature last night would have been lower than it actually was;

(C6) If the roof had been intact today, it would not have been hit by a falling tree yesterday.
According to Lewis's version of Thesis B, the back-tracking evaluation that is required to make the counterfactuals (C5) and (C6) come out true should require a context that is abnormal or atypical or non-standard. Yet, I submit, this is simply not the case. 14 But if Thesis B is to be rejected, this represents a serious problem for Lewis. We may agree with Lewis that counterfactuals are infected with vagueness, and that different ways of resolving the vagueness are appropriate in different contexts (cf. CDTA: 34). We may thus agree that one way of resolving the vagueness of (C2) is to keep the past relative to the A-time fixed, with the result that (C2) comes out false. (On this resolution, if Jane had handed the essay in at 2 p.m. on Monday, it is not the case that she would have taken the precaution of revising it properly first; rather, she would have handed it in despite its unfinished state.) However-and here is my problem-with what right does Lewis call this (non-backtracking) resolution 'the standard resolution'?
If 'standard' means, as Lewis appears to intend it to mean, 'typical' or 'ordinary' or 'normal', then, for the reasons that I have indicated, Lewis's claim seems to be false. 15 Nor is Lewis well placed to assert that the 'standard' (in the sense of ordinary or typical) resolution of the vagueness of a backward counterfactual is one that outlaws back-tracking, given his view that the very assertion of a backward counterfactual such as (C2)-a counterfactual that requires support from a back-tracking argument for its defence-may create a context that is hospitable to its truth (CDTA: 34). 16 I conclude that Lewis has taken a feature (the 'keeping fixed' of the past relative to the A-time) that may with some plausibility be regarded as a feature of the standard resolution of the vagueness of forward counterfactuals, and has extended it, quite implausibly (and with dubious consistency), to the standard resolution of the vagueness of backward counterfactuals.

13 The relevant backward conditional is (Jb) 'If Jim were to ask Jack for help today, there would have been no quarrel yesterday', which, I claim, is naturally taken to be true, given the background assumption that Jim's pride would be an almost insuperable obstacle to his asking for help after a quarrel. Note that I make this claim only for the backward conditional. I do not automatically extend the claim to a forward conditional that might be supposed to be derived from (Jb), such as (Jf) 'If Jim were to ask Jack for help today, Jack would help him'. Given my separation of Thesis B and Thesis F, I am willing to concede that the most natural evaluation of (Jf) is one that keeps fixed the past quarrel and Jack's consequent resentment, and thus supports the verdict that (Jf) is false.

14 Perhaps Lewis might claim that the plausibility of (C5) and (C6) depends on the fact that their consequents involve changes to the past before the A-time that are required to avoid an 'abrupt discontinuity' between the past before the A-time and the fulfilment of the antecedent (cf. CDTA: 40). However, this response would simply push the wrinkle in the carpet to another place. See my 'Objection 3' in §5 below.

15 Lewis suggests no other interpretation. And as well as describing back-tracking contexts as 'special', he uses the word 'ordinarily' when talking about the allegedly 'standard resolution' of backward counterfactuals (CDTA: 34).
Even if I am right about this, though, does it really matter? Perhaps Lewis should not have described the resolution of the vagueness of backward counterfactuals that rules out back-tracking arguments as 'the standard resolution'. But so what? As long as there is some clearly identifiable resolution of the vagueness of backward counterfactuals (call it 'the Lewis resolution') that rules out back-tracking arguments and keeps the past fixed, isn't that enough for his purposes, as long as this 'Lewis resolution' is one that we do employ at least some of the time in evaluating backward counterfactuals (which is not in dispute)? My answer is that it is not enough, given Lewis's ambition to explain the intuition of the fixity of the past. There is a dilemma here. Either Lewis's case for the fixity intuition rests partly on a version of Thesis B (concerning our treatment of the past when evaluating backward counterfactuals), or it does not. If it does not, then the appeal to the 'Lewis resolution' version of Thesis B is, of course, completely irrelevant to the explanatory project. But if it does, then the appeal to the 'Lewis resolution' version of Thesis B must be insufficient. If all that Lewis can maintain is that, when we evaluate backward counterfactuals, we sometimes keep the past fixed, although we sometimes do not, this is not a version of Thesis B that could help to explain the intuition of the fixity of the past.
At this point, it might be objected that the resolution that Lewis describes as 'the standard resolution'-the one that keeps the past fixed and outlaws back-tracking-is not just any old resolution that we happen to use some of the time in our counterfactual thinking, but is the resolution of the vagueness of counterfactuals that gives the result that the direction of counterfactual dependence is the (standard) direction of causal dependence. Even though a backward counterfactual that is asserted on the basis of a back-tracking argument [such as my (C2)] may be (as I think) perfectly respectable as an illustration of the counterfactual dependence of the earlier on the later, no one wants to say that it represents a case of the causal dependence of the earlier on the later. On the contrary, it is of the very nature of a back-tracking argument that it infers the counterfactual dependence of the earlier on the later from the causal dependence of the later on the earlier. I shall return (in §7) to the question of the relation between the allegedly 'standard' (Lewisian) resolution of the vagueness of counterfactuals and the temporal asymmetry of causal dependence. For the present, I set it aside, and proceed to my second objection to Lewis, which concerns Thesis F.

16 Lewis says that a 'counterfactual saying that the past would be different if the present were somehow different… [that comes out] true under the special resolution of its vagueness, but false under the standard resolution' may be called 'a back-tracking counterfactual' (CDTA: 34). However, this should not, I think, be taken as a definition, for reasons independent of (what I regard as) Lewis's tendentious use of the expression 'standard resolution'. Independently of my quarrel with Lewis on that issue, Lewis's characterization seems too narrow to serve as a definition of 'back-tracking counterfactual', since it does not apply to cases like my forward counterfactual (C4), which requires support from a back-tracking argument even though it does not explicitly say that the earlier would have been different if the later had been different. (Bennett, who introduced the term 'back-tracking', used it for the phenomenon of 'counterfactualizing back in time and then forward again' (Bennett 2003: 208), as exemplified by my 'mixed' counterfactual (C3). However, subsequent usage has not followed Bennett in this respect.)

4 Objection 2: Forward counterfactuals and keeping the past fixed

Thesis F appears considerably more plausible, as a claim about our ordinary (standard, normal, typical) use of counterfactuals, than does Thesis B. Or, at any rate, it is plausible when modified to allow for the possible exception of a 'transition period' leading from the actual past to the fulfilment of the antecedent. To accommodate this last point, let us consider, as an alternative to Thesis F, the following:

Thesis F*: When we evaluate forward counterfactuals, we keep the past, relative to the A-time, fixed, with the possible exception of a transition period leading from the actual past to the fulfilment of the antecedent.
Even if Thesis B is false, it might nevertheless be suggested that the truth of Thesis F or Thesis F* would be sufficient to secure, for the past, a counterfactual independence with respect to the present and future that could explain our intuition of the fixity of the past. 17 It appears to be undeniable that forward counterfactuals play a much more prominent role in our counterfactual thinking than do backward counterfactuals. Hence, if Thesis F (or Thesis F*) is true, then, even if Thesis B is false, it would follow that 'keeping the past fixed' is a feature of the evaluation of most of the counterfactuals that we are actually inclined to assert or consider, simply because of the fact that most such counterfactuals are forward counterfactuals rather than backward counterfactuals. 18 Now, it might be objected that, strictly speaking, to say that the past is counterfactually independent of the present requires the explicit endorsement of backward counterfactuals that say that the past would have been the same if the present had been different, and thus that the mere endorsement of Thesis F/F*, without the endorsement of Thesis B, is not sufficient. However, even if this is, strictly speaking, true (of what it would take to establish that the past is counterfactually independent of the present), the more interesting issue is whether there is a temporal asymmetry in our counterfactual thinking that could explain the fixity intuition, even if the asymmetry does not involve the explicit assertion of any backward counterfactuals that say that the past would have been the same had the present been different. So I shall not rely on this objection (to the employment of Thesis F/F* in isolation from Thesis B).
What I shall maintain, however-and this is my second objection to Lewis-is that, although Thesis F-or at least its modification Thesis F*-has some plausibility, it is not obviously correct. How are we to tell whether Thesis F (or Thesis F*) is true of our normal practice, especially if (as the arguments of the last section imply) we cannot appeal, in testing Thesis F (or F*) against our practice, to the backward counterfactuals that we accept? In his initial remarks about the (alleged) asymmetry of counterfactual dependence, Lewis claims:

In reasoning from a counterfactual supposition, we use auxiliary premises drawn from (what we take to be) our factual knowledge. But not just anything we know may be used… If the supposition were true, the future would be different and some things we know about the actual future might not hold in this different counterfactual future. But we do feel free, ordinarily, to use whatever we know about the past…. [I]n reasoning from a counterfactual supposition about any time, we ordinarily assume that facts about earlier times are counterfactually independent of the supposition and so may freely be used as auxiliary premises. (CDTA: 33; bold emphasis mine)

If we take Lewis's claims in this passage as restricted to our use of forward counterfactuals, then they have some plausibility. But the data to which Lewis appeals here are not conclusive in favour of Thesis F/F*. They are consistent with something much weaker-for example, that we keep the past relative to the A-time fixed in respect of its salient or significant features. As Jonathan Bennett convincingly remarks (convincingly, that is, if the topic is taken to be subjunctive conditionals of the 'forward' variety):

The plain person using a subjunctive conditional has a vague thought of a world that does not significantly differ from the actual one until a divergence leading to the truth of his antecedent. (Bennett 2003: 218)

Bennett goes on to say that although one way to sharpen this vague thought is with (Lewis's) idea of a world that is exactly the same as the actual world up until a divergence leading to the truth of the antecedent, it is not the only way. Another way to sharpen the vague thought is with the idea of an 'exploding difference', that is:

the idea of a world that is like [the actual world] in every respect we would ever think about and then suddenly, legally, and improbably embarks on a short course of events through which it becomes noticeably unlike [the actual world]. (Bennett 2003: 218)

In other words, whereas Lewis sharpens the vague 'does not significantly differ' into what we can call 'Exact Match', leading to Thesis F or F*, a rival way of sharpening the vague 'does not significantly differ' is into 'Exploding Difference'. And Exploding Difference, unlike Exact Match, does not support Thesis F or F*. What it supports is at most:

Weakened Thesis F: When we evaluate forward counterfactuals, we keep the past, relative to the A-time, fixed in salient respects, and

Weakened Thesis F*: When we evaluate forward counterfactuals, we keep the past, relative to the A-time, fixed in salient respects, with the possible exception of a transition period leading from the actual past to the fulfilment of the antecedent.
And neither Weakened Thesis F nor Weakened Thesis F* is sufficient to explain or support the intuition of the fixity of the past. Why so? Because the fixity intuition does not discriminate between salient features and insignificant features of the past. If you were asked: 'If a feature of the past is one ''that you would never think about'', does this mean that this feature of the past might not be fixed?', then surely your answer would be 'No'. (Nor, of course, is there any paradox in asking people whether features that they would never think about (cf. Bennett's phrase in the passage quoted above) are fixed, since they can coherently consider the general question without thinking about any of the relevant unthought-of features.) Yet if Bennett is right in suggesting that our ordinary counterfactual thinking (with regard to forward counterfactuals) does not discriminate between keeping the past before the A-time fixed in the sense required by Exact Match, and keeping the past before the A-time fixed in salient respects (Exploding Difference), then this casts doubt on the claim that our practice supports Thesis F/F*. Speaking for myself, my intuition that the past is fixed-an instance of the intuition that Lewis seeks to explain by reference to features of our counterfactual thinking-is significantly more robust than my intuition that the past is to be kept fixed in the 'Exact Match' sense when evaluating forward counterfactuals. I think I could be persuaded that the notion of an exploding difference is sufficient to match my intuitions about the extent to which the past is kept fixed in the evaluation of forward counterfactuals. If I were so persuaded, then I would reject Thesis F and Thesis F*. But this would not in the least tempt me to give up my intuition that the past is fixed; nor do I think it should.
5 Objection 3: The 'transition period' and keeping the past fixed
My first two objections have been that neither Thesis F/F* nor Thesis B is sufficiently plausible to warrant the claim that we keep the past fixed when evaluating counterfactuals, whether these are forward counterfactuals (Thesis F/F*) or backward counterfactuals (Thesis B).
But a further problem for Lewis is evident, connected with the transition period that has already been mentioned. Famously, Lewis's account of counterfactuals delivers the result that, in perfectly ordinary cases (no time machines, black holes, weird possible worlds consisting of just one atom in the void, etc.), there are portions of the past that are not kept fixed when evaluating counterfactuals even under what Lewis calls 'the standard resolution' of their vagueness. These are the portions of the past that, according to Lewis's account, concern the transition period (or 'ramp', as Bennett (2003) calls it) from an initial divergence from the actual course of events to the fulfilment of the antecedent.
According to Lewis, the closest possible worlds relevant to the evaluation of a counterfactual, under the 'standard resolution' of vagueness, and assuming determinism, are ones in which the fulfilment of the antecedent typically comes about via a transition period that starts with a divergence from the actual course of events at some time prior to the time of the antecedent. [Under determinism, the divergence has to involve a breach of the actual laws of nature (hence, a 'miracle' in Lewis's quasi-technical sense of the term). 19] One might ask why Lewis needs the 'transition period': why can't the 'divergence miracle' itself be the fulfilment of the antecedent? We need the transition period, according to Lewis, in order to

[avoid] abrupt discontinuities. Right up to t, the match was stationary and a foot away from the striking surface. If it had been struck at t, would it have travelled a foot in no time at all? No; we should sacrifice the independence of the immediate past to provide an orderly transition from actual past to counterfactual present and future. (CDTA: 39-40; bold emphasis mine) 20

There are two serious problems arising from this concession, given Lewis's ambition to explain the fixity of the past in terms of counterfactual independence.
First, although in this passage Lewis speaks of the transition period as involving only the immediate past, it seems impossible that, on his own account, the transition period can be so confined. For example (as Jonathan Bennett has pointed out), the details of Lewis's own theory (in particular, his insistence on the paramount importance of minimizing big miracles) appear to lead inevitably to the result that if we evaluate a counterfactual with the antecedent

(D) If dinosaurs had been roaming the earth today,

the closest possible A-worlds will be ones whose divergence from the actual course of events occurred millions of years ago (cf. Bennett 2003: 220).
The dinosaurs example is a dramatic case. But it is easy to construct examples where the requirement to ensure a smooth transition from the actual past to the fulfilment of the antecedent requires a 'ramp' from the actual course of events that starts months or years before the A-time. Consider, for example, counterfactuals with antecedents such as 'If the Duchess of Cambridge had given birth to a child in January 2012', 'If John F. Kennedy had been alive today', or 'If in 1930 the indigenous population of Japan had been three times what it actually was'.
If (like Lewis) we are committed to the idea that there must be a transition period in order to allow the fulfilment of the counterfactual antecedent to be 'smoothly grafted' on to the actual past, then it seems inevitable that in some cases this will require the transition period to begin days, months, years, centuries, or even millions of years before the A-time. At any rate, this consequence seems to be unavoidable on Lewis's own 'minimization of big miracles' account. If so, then it is impossible for Lewis to maintain, with an appropriate degree of generality, the thesis that the past relative to the A-time is kept fixed in the evaluation of counterfactuals (even forward counterfactuals). And if this is the case, then it seems impossible for him to maintain that our intuition of the fixity of the past is to be explained by the extent to which the past is kept fixed in the evaluation of counterfactuals.

19 Two possible worlds that have exactly the same past up until a certain time and then diverge cannot also have exactly the same deterministic laws. So, if the actual world is deterministic, any possible world that shares its past up until time t with the actual world and then diverges cannot have exactly the same laws as the actual world.

20 It is true that these remarks are made by Lewis in connection with an analysis of counterfactuals that he rejects ('Analysis 1'). However, he endorses this feature of it-that is, its appeal to the transition period. (He rejects Analysis 1 only because it builds the temporal asymmetry into the analysis by fiat, and favours his own analysis (Analysis 2) as yielding the asymmetry without doing so by fiat.)
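(The reasoning of note 19 can be compressed into a single line. Writing H_t(w) for the history of world w up to time t, L(w) for w's laws, and assuming those laws are deterministic:

\[
\big( H_t(w) = H_t(@) \;\wedge\; L(w) = L(@) \big) \;\Rightarrow\; H_{t'}(w) = H_{t'}(@) \text{ for all } t' > t,
\]

so, contraposing, any world that matches actual history up to t and then diverges must differ from the actual world @ in its laws-hence the need for a 'divergence miracle'.)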
The second problem is that even if the transition period were confined to the immediate past relative to the A-time, this would still undermine Lewis's explanatory project. Remember that Lewis aims to use considerations about the evaluation of counterfactuals in order to explain the fixity intuition-an intuition that entirely ignores the distinction between immediate and more remote past. Nobody thinks that although what happened last century, last year, last month, last week, or yesterday, is 'fixed', what happened in the past five minutes (or even the past five seconds) may still be 'open' simply because it occurred such a short time ago. The result is that there is an apparently fatal mismatch between, on the one hand, the extent of counterfactual independence that Lewis's theory can consistently attribute to the past, and, on the other hand, the fixity intuition that his account purports to explain. Even setting aside the arguments of §§3-4 above, the prospects look bleak for Lewis's attempt to explain the fixity of the past in terms of counterfactual independence. 21

21 Lewis is committed, not only to there being, under determinism, prior changes to the past, but also to there being prior 'miracles'. Thus, given his own treatment of backward counterfactuals, he is committed (with regard to the notorious Nixon example (cf. CDTA: 43-48)) to the assertibility, in standard contexts, not only of:

(N1) If Nixon had pressed the button at t, then some events prior to t would have been different in certain respects from the way that they actually were,

and perhaps:

(N2) If Nixon had pressed the button at t, then a few extra neurons would have fired in his brain shortly before t,

but also of:

(N3) If Nixon had pressed the button at t, then the laws of nature would have been different in some respect resulting in a difference in the course of events before t.

I agree that both (N1) and (N2) are plausible. But that is because I think that they are plausible examples of backward counterfactuals asserted on the basis of back-tracking arguments. Lewis can't say that. (N3), on the other hand, is something to which Lewis is firmly committed (on the assumption of determinism). But (N3) is an intuitively bizarre counterfactual, and not one that would be commonly accepted as true except as a consequence of a philosophical theory.

6 Objection 4: Lewis's retreat from 'counterfactual independence' to 'failure of (systematic) counterfactual dependence'

Lewis is curiously untroubled by the problem posed by the transition period. In discussing the class of exceptions that the transition period requires, he argues that, even though they do represent a 'sacrifice' of the counterfactual independence of the past, they do not bring with them a type of counterfactual dependence of past on future that would, under the 'standard resolution', and in conjunction with his counterfactual theory of event causation, imply a causal dependence of past events on future ones, thus introducing unwanted (and unacceptable) cases of backward causation 'even in cases that are not at all extraordinary' (CDTA: 40). Now, Lewis may or may not be right in claiming that he can avoid the conclusion that the transition period introduces cases of counterfactual dependence that his theory of event causation would be required to treat as cases of causal dependence, and hence as cases of backwards causation. 22 Leaving that aside, there is the further question why Lewis thinks that merely avoiding the kind of systematic counterfactual dependence that would (by his lights) amount to causal dependence defuses the problem posed by the transition period. In this connection, he writes:

[W]e should sacrifice the independence of the immediate past to provide an orderly transition from actual past to counterfactual present and future. That is not to say, however, that the immediate past depends on the present in any very definite way. There may be a variety of ways the transition might go, hence there may be no true counterfactuals that say in any detail how the immediate past would be if the present were different. I hope not, since if there were a definite and detailed dependence, it would be hard for me to say why some of this dependence should not be interpreted -wrongly, of course -as backward causation… in cases that are not at all extraordinary. (CDTA: 40; bold emphasis mine)

And again, in the opening paragraph of CDTA:

Suppose today were different. Suppose I were typing different words…. Would yesterday… be different? If so, how? Invited to answer, you will perhaps come up with something. But I do not think there is anything you can say about how yesterday would be that will seem clearly and uncontroversially true. (CDTA: 32; bold emphasis mine)

However, there are two reasons why this retreat does not help Lewis. The first is that it makes the letter of what Lewis says inconsistent. The retreat undermines his right to make assertions such as:

The past would be the same, however we acted now. The past does not at all depend on what we do now. It is counterfactually independent of the present. (CDTA: 38)

[I]f the present were different the past would be the same, but the same past causes would fail somehow to cause the same present effects. (CDTA: 34)

The second reason is that if the past would have been different in some way or other had the present been different, then there is a clear sense in which the past does depend counterfactually on the present. But if the past does depend counterfactually on the present, how can the thesis that the dependence is not systematic (even if correct) save the proposed explanation of the fixity of the past? 23
One might attempt to respond by pointing to the fact that, if the openness of the future really is a matter of the systematic counterfactual dependence of future events on the present, then, technically, there could still be an asymmetry between past and future as long as past events are not systematically counterfactually dependent on the present, even if they are not, strictly speaking, counterfactually independent of the present. And there is logical space for this in Lewis's account. 24 However, although this revision would introduce an asymmetry of counterfactual dependence, I do not believe that it is one that can play the role of explaining the fixity intuition. If someone were to tell me that my intuition that the past is fixed either amounts to, or can be explained by, the idea that although the past would indeed have been different had the present been different, there is no definite way in which it would have been different, then I would find the suggestion totally mysterious. How can the idea that, if today had been different, then yesterday would have been different, although not in any definite or specifiable way, coherently be invoked to explain the idea that the past is fixed? (Even the idea that the past might have been different had the present been different-let alone the idea that it would (though in no specifiable way) have been different-appears to conflict with the idea that failure of counterfactual dependence has anything to do with the fixity of the past. 25)

23 Remember that what Lewis is attempting to explain, in terms of an asymmetry of counterfactual dependence, is an intuition of the fixity of the past. If he concedes, as he appears to do, that the past in the transition period would have been different had the present been different, although there is no definite way in which it would have been different, the concession undermines this explanatory project.

24 See the characterizations of 'counterfactual dependence' and 'counterfactual independence' in Lewis (1973: 164-165, 168). Lewis there characterizes counterfactual dependence primarily as a relational characteristic of a family of propositions with respect to another family of propositions, whereas 'counterfactual independence' is characterized as a feature that a single proposition has with respect to a family of propositions. This leaves it open that a proposition about the past might not be counterfactually independent of a family of propositions about the present, and yet might also fail to belong to a family of propositions about the past that depends counterfactually on a family of propositions about the present. This would appear to be a case of failure of counterfactual independence without (systematic) counterfactual dependence.

25 Bennett (2003: 290-291) makes the further objection that even if Lewis can maintain that the admission of the transition period introduces no systematic counterfactual dependence of earlier events on later events of a type that would cause trouble for his counterfactual account of event causation, this does not give Lewis a justification for rejecting the thesis that there is a systematic counterfactual dependence of earlier facts or states of affairs on later ones. But to allow the latter, Bennett plausibly maintains, makes trouble for Lewis's attempt to explain the fixity of the past in terms of a failure of counterfactual dependence.

7 Objection 5: The substitution of 'causal independence' for 'counterfactual independence' would not help Lewis

Another possibility is that, in suggesting that the failure of counterfactual independence that is required by the transition period, as long as it does not also involve systematic counterfactual dependence, would not undermine his explanation of the fixity of the past, Lewis is, implicitly, appealing to the following: the idea that, even if the past may not be counterfactually independent of the present, it is nevertheless causally independent of the present-in the sense that, had the present been different, this would not have caused the past to be different. But could Lewis legitimately appeal to these considerations about causation in his explanation of the asymmetry of openness? I believe that he could not. Lewis makes it explicit that he wishes to explain both the temporal asymmetry in the direction of causation and the 'asymmetry of openness' in terms of an asymmetry of counterfactual dependence (CDTA: 35-36). If it should turn out that the asymmetry of openness is to be explained in terms of an asymmetry of causal dependence that is more fundamental than the asymmetry of counterfactual dependence, then Lewis's explanatory project would be undermined.
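(For comparison, the Lewisian route from counterfactuals to causation that this objection turns on can be stated compactly; this is the analysis of Lewis's 'Causation' (1973), not anything peculiar to CDTA. Where c and e are distinct actual events and O(x) is the proposition that x occurs:

\[
e \text{ causally depends on } c \iff \neg O(c) \mathbin{\square\!\!\rightarrow} \neg O(e),
\]

and c is a cause of e iff a chain of such dependences runs from c to e. Since causal asymmetry is thus derived from counterfactual asymmetry, Lewis cannot, on pain of circularity, treat causal independence as the more basic notion.)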
8 Concluding remarks
My criticisms of Lewis's attempt to explain the fixity intuition in terms of counterfactual independence naturally prompt two questions. One concerns the extent to which my criticisms can be generalized. Might there not be, for all that I have said here, some modification of Lewis's account that would escape my criticisms? The second question is: if the fixity intuition is not to be explained in terms of counterfactual independence, how is it to be explained? To conclude this paper, I briefly address these questions in turn.
Early in this paper I identified two elaborations-Thesis F and Thesis B-of the claim that the past is counterfactually independent of the present. I argued (§3) that Thesis B is false, and I expressed scepticism about Thesis F and its modification (favoured by Lewis), Thesis F* (§4). In addition, I argued that the adoption of Thesis F* is inimical to the project of explaining the fixity intuition in terms of the counterfactual independence of the past with respect to the present, on the grounds that Thesis F* requires exceptions to the 'counterfactual independence' rule that undermine the proposed explanation (§5). If my arguments against Thesis B and against the employment of Thesis F* are cogent, then the only option, for the defender of the claim that the fixity intuition is to be explained in terms of counterfactual independence, appears to be to abandon both Thesis B and Thesis F*, and argue (as Lewis does not) for the following two claims: (i) that the unmodified version of Thesis F represents our practice in the evaluation of forward counterfactuals, and (ii) that Thesis F (without the support of Thesis B) is sufficient to explain the fixity intuition. Although I shall not argue for it here, I think that the prospects for success in either of these undertakings are not at all promising.
The second question is: if the fixity intuition is not to be explained in terms of the counterfactual independence 26 of the past with respect to the present and future, what does explain the fixity intuition? In this paper, I have said little about what the fixity intuition involves, except for the following: The fixity intuition does not discriminate between the immediate past and the more remote past ( §5).
The fixity intuition does not discriminate between salient features of the past and insignificant features of the past ( §4).
The fixity intuition is not satisfied by the claim that although the past does depend counterfactually on the present, the dependence is not systematic ( §6).
The fixity intuition involves an asymmetrical attitude to past and future (although the asymmetry need not be totally exceptionless).
To these points, we might add, following Lewis: The fixity intuition associates fixity with uniqueness, and openness with multiplicity (cf. the quotations from Lewis in §1 of this paper).
The fixity intuition associates fixity with being settled, and openness with being unsettled (ibid.). 27

These points appear to leave open the possibility that the fixity intuition is simply the intuition that the past (unlike the future) is causally independent of the present and future. I have argued that Lewis himself cannot legitimately appeal to this 'causal independence thesis', because his project is to explain both causal independence and the fixity intuition by appeal to a more basic notion of counterfactual independence (§7). However, this leaves it open that the fixity intuition that Lewis tries and (if I am right) fails to capture is really nothing more than an intuition of causal independence.
I admit that it is natural to associate the idea that the past is fixed with the idea that nothing that we can now do, and nothing that can now happen, could have any effect on the past: that the past is, in that sense, immutable. Nor does the possibility of backwards causation necessarily threaten this association between causal independence and fixity, since it seems plausible to suppose that, were there to be a case of backwards causation, it would represent an exception to the general rule of the fixity of the past.
Nevertheless, I am sceptical about whether the notion of causal independence can be the key to the notion of the fixity of the past. The reason is that, intuitively, causal independence appears to be neither necessary nor sufficient for fixity.
Reflection on fatalistic thinking suggests that causal independence is not necessary for fixity-or, to put it another way: that fixity is compatible with causal dependence. By killing his father, Oedipus, we may suppose, brought it about that he married his mother: his incestuous marriage was causally dependent on his previous act of parricide. Yet (to the extent that we can take the fatalistic story seriously) it seems that we can accept that causal claim, yet still question whether Oedipus had before him, at any time in his existence, a future that was 'open' rather than 'fixed'.
In addition, it seems that causal independence is not sufficient for fixity. A future that is a completely random continuation of the present, if we can coherently envisage such a thing, is surely a future that is causally independent of the present. Yet such a 'random' future, far from being a 'fixed' future, would seem to be a paradigm of one type of openness, even if it represents a type of openness that brings with it no prospect of control over the course of events.
If this is right, and neither counterfactual independence nor causal independence is the key to the explanation of the fixity intuition, what is? Unfortunately, I do not have an answer to that question. However, my aim in this paper has been not to solve this problem, but to pursue the more modest project of attempting to show that the idea that the past is counterfactually independent of the present and future cannot be invoked to give a satisfactory account of the intuition of the fixity of the past. As a consequence, Lewis's claim that 'the mysterious asymmetry between open future and fixed past is nothing else than the asymmetry of counterfactual dependence' (CDTA: 38) cannot be sustained.
"Philosophy",
"Economics"
] |
Case Report: "ADHD Trainer": the mobile application that enhances cognitive skills in ADHD patients
We report the case of a 10-year-old patient diagnosed with attention deficit hyperactivity disorder (ADHD) and comorbid video game addiction, who was treated with medication combined with a novel video game-based cognitive training method called the TCT method. A high risk of developing video game or Internet addiction has been reported in children, especially in children with ADHD. Despite this risk, we hypothesize that appropriate use of these new technologies may support the development of new methods of cognitive training. The cognitive areas in which the greatest improvement was observed through the use of video games were visuospatial working memory and fine motor skills. The TCT method is a cognitive training method that enhances cognitive skills such as attention, working memory, processing speed, calculation ability, reasoning, and visuomotor coordination. The purpose of reviewing this case is to highlight that regular computerized cognitive training in ADHD patients may improve some of their cognitive symptoms and might be helpful for treating video game addiction.
Attention deficit hyperactivity disorder (ADHD) is the most commonly diagnosed neurodevelopmental disorder in childhood, affecting 3% to 7% of the population worldwide 1 . ADHD is characterized by distractibility, hyperactivity, and impulsivity. The standard treatment for ADHD mainly includes medication, psychosocial and behavioral treatment, and cognitive training exercises.
Cognitive training exercises are especially useful when cognitive impairment is observed and when regular, personalized cognitive training is performed 2 . Studies in participants with cognitive impairment have shown that regular, daily cognitive training can improve some of their cognitive symptoms 3,4 . In addition, recent studies have demonstrated that computerized working memory and executive function training programs lead to better results than ordinary cognitive training methods in children with ADHD 5-7 .
Children's use of electronic devices, the Internet, and video games has increased noticeably in the last 10 years. Since the first case of Internet addiction was described in 1996 by Young 6 , several related pathologies have been proposed, including pathological gambling and dependence 7 . Despite the extensive research literature available, the prevalence and proper diagnostic criteria for pathological gaming are still debated in the scientific community 8 . Gaming addiction represents part of the postulated construct of Internet addiction, and is the most widely studied specific form of Internet addiction to date 9 . Prevalence estimates range from 2% 10 to 15% 11 , depending on the respective socio-cultural context, sample, and assessment criteria utilized. A high risk of developing video game or Internet addiction has been reported in children, especially in those with ADHD 8 . Stimulants such as methylphenidate (MPH), given to treat ADHD, have been found to reduce Internet use and video game play in subjects with co-occurring ADHD and Internet video game addiction 9 .
Despite the risk of Internet addiction, we hypothesize that good use of these new technologies can support the development of new methods of cognitive training useful in treating both ADHD and Internet addiction.
Case report
This case study involves a 10-year-old child born in Madrid (Spain) who received treatment in a childhood psychiatry unit for 2 years due to behavioral disorders and ADHD. No other previous medical history was reported. His mother, aged 35, had received psychological treatment for anxiety 3 years earlier. His father, aged 36, works as an engineer and presented no relevant medical history. The patient was their only son. The parents described a severe addiction to video games over the last year, reporting 4 hours per day of video game playing, which affected his social interaction and caused a lack of imaginative play and poor academic scores. Teachers at the school reported deterioration in his academic performance over the past year. At that time, the child was being treated with methylphenidate 40 mg per day. The patient's parents reported to the psychiatrist that the only significant change from the previous year was a major addiction to a war video game.
To reduce the exposure to video games, we used a novel technique based on the Tajima Cognitive Method (TCT), called "ADHD Trainer". It consists of a cognitive stimulation program delivered through a mobile/tablet application designed specifically to treat ADHD.
Behavioral and academic improvements were rated on the Conners Parent and Teacher Rating Scales (brief version) and the Barkley School Situations Questionnaire.
ADHD diagnosis was made according to DSM-5 criteria 10,11 . Attention was rated with the Conners Continuous Performance Test (CPT).
Differential diagnosis between oppositional defiant disorder and ADHD was considered, because most of the symptoms were observed at home; however, no angry or irritable mood was observed.
The patient was treated with a combination of methylphenidate and a cognitive training method based on the TCT method. The patient received daily treatment with 40 mg of methylphenidate and at least 10 minutes of daily cognitive training with the "ADHD Trainer" app.
The TCT is a type of computerized adaptive test (CAT), as it adapts to the individual's cognitive strengths and weaknesses based on his own scores over time, as well as those of his peers. Users receive separate scores in different cognitive areas, including simple calculation, attention, perceptual reasoning, and visuomotor coordination (Figure 1). The goal of the daily training is to reach a pre-set individualized score in different cognitive domains in order to complete a week of successful training. The exercises comprising "ADHD Trainer" are described in Table 1. During the first month of cognitive training therapy, the patient was only allowed to play specific games based on the TCT method, using the "ADHD Trainer" (Figure 2). The patient had to use the app every day at the same time, provided that the other targets assigned in therapy were met, such as the progressive reduction in the number of hours playing other games, eventually limited to once a week. During the first month, he was allowed to play this game for a maximum of 4 hours per day.
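As a rough illustration only (the actual TCT scoring algorithm is not described in this report, and every name and constant below is a hypothetical placeholder), a CAT-style trainer of this kind might update its per-domain daily targets along the following lines:

```python
# Hypothetical sketch of a CAT-style daily-target update; not the
# published TCT/"ADHD Trainer" algorithm.
from dataclasses import dataclass, field

DOMAINS = ["calculation", "attention", "reasoning", "visuomotor"]

@dataclass
class DomainState:
    target: float = 50.0                        # pre-set individualized target
    history: list = field(default_factory=list)

    def update(self, score: float, peer_mean: float) -> None:
        """Nudge tomorrow's target using today's score and the peer average."""
        self.history.append(score)
        blend = 0.7 * score + 0.3 * peer_mean   # weight own score over peers
        self.target += 0.2 * (blend - self.target)  # small smoothing step

def week_completed(states: dict) -> bool:
    """Count a week as successful if every domain met its current target."""
    return all(s.history and s.history[-1] >= s.target for s in states.values())

states = {d: DomainState() for d in DOMAINS}
states["attention"].update(score=55.0, peer_mean=48.0)
print(round(states["attention"].target, 2))  # 50.58
```

The point of any such scheme is simply to keep the daily goal attainable but rising, which matches the report's description of reaching a pre-set individualized score in each domain to complete a week of successful training.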
No symptoms of addiction to this videogame (tolerance, withdrawal or functional impairment) were observed during the first month. The average amount of time the child played the video game was 1 hour a day. In the following months, the objective was to play the game at least 10 minutes per day.
The severity score for the Barkley School Situations Questionnaire was 70 before starting the training, and after the cognitive training the score was 66.
Both the school and the family reported a significant improvement in the patient after 6 months of TCT cognitive training, including important improvements in both academic and behavioral outcomes.
Discussion
Most of the studies reported so far emphasize the potential addictive risk of new technologies and the influence they have on children's interpersonal development, by reducing the time children spend outside the home and increasing the time they spend alone playing in front of a television or computer screen 12 . It is also known that new technologies may affect children's academic performance by reducing the number of hours they dedicate to studying.
There are few studies that focus on the positive aspects of new technologies and the opportunities they offer for new ways of interaction between professionals and users, as well as for the development of new therapeutic methods capable of reaching the young.
New technologies, in particular video games, can be used as therapeutic tools to train executive functions 6,7 . As they generate greater motivation in children and adolescents, they increase the frequency with which cognitive tasks oriented to enhance executive functions, especially working memory, are performed. Computerized methods have previously been proposed and have been shown to outperform traditional ones 13,14 .
There are key advantages for children practicing the TCT method relative to traditional cognitive training therapies, which include: 1) Increased motivation in children for completing cognitive training therapy. This increase in motivation comes from entertainment value (these games are designed to be similar to the regular video games that children enjoy) and from feedback on performance relative to one's own and peers' scores (which improves children's sense of agency and self-efficacy, as demonstrated by documented research on motivation and learning) 12,15 .
2) Ease of access to the application. Children can play the games at any place or time, day and night. In less than two months, the videogame abuse was substantially reduced, limiting use to weekends and always for periods not exceeding 4 hours in total. Although 4 hours is still a substantial amount of time for a single day, the global reduction of the time spent on videogames and its limitation to the weekend represents a significant improvement in this particular case.
Behavioral and academic improvement was rated on the Conners Parent and Teacher Rating Scales and the Barkley School Situations Questionnaire. The initial Conners score was 19 for the teachers and 20 for the parents, and after the cognitive training the scores were 15 for the teachers and 16 for the parents.
Conclusion
ADHD patients are especially vulnerable to developing video gaming addiction. ADHD patients often suffer from working memory and executive function dysfunctions, yet we have observed that very few cognitive training techniques have been developed for ADHD patients in recent years. Poor completion rates of cognitive training in children with ADHD have been observed. We suggest that daily computerized cognitive training in ADHD patients may improve some of their cognitive symptoms, and might be helpful for treating video gaming addiction.
Consent
Written informed consent to publish this report was obtained from the patient's parents.
Dr. Tajima takes responsibility for the integrity of the data and informed consent.
Author contributions
Dr. Gonzalo Ruiz wrote the manuscript, supervised by Dr. Kazuhiro Tajima-Pozo and Dr. Francisco Montañes-Rada. All authors agreed to the final content of the manuscript.
Competing interests
Dr. Kazuhiro Tajima-Pozo participated in the development of "ADHD Trainer" and other mental health applications at TKT Brain Solutions, a Spanish startup made up of physicians and engineers whose aim is to develop mental health applications.
Grant information
The author(s) declared that no grants were involved in supporting this work.
"found reduce" -> "found to reduce". Here is one potential way to edit the last sentence: "Despite the risk of Internet addiction, these new technologies can be useful as new methods of cognitive training to treat ADHD and Internet addiction."
3a. The exercises comprising "ADHD Trainer" are not described in any detail beyond listing the categories of mental functioning that each task is thought to reflect. For instance, what task is used for "Attention"? Has this implementation of the task been validated elsewhere, or adapted with unpublished modifications from a published task? And so on for each of the categories listed in the Table.
3b. As with any method, the authors need to provide some kind of information about where the reader can obtain "ADHD Trainer".
I have one additional comment that I forgot to add to the previous review.
5. The authors should acknowledge, perhaps in Conclusion, that behavioral interventions other than ADHD Trainer itself may account in part or in whole for the clinical improvement. Other interventions the child received include the following. "The patient was only allowed to play with" ADHD Trainer. "The patient had to use the app every day at the same time." The patient had to meet "the other targets that were assigned in therapy" including a "progressive reduction in the number of hours to play other games" and limiting other game play to once a week.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
No competing interests were disclosed.

This is an interesting and encouraging report, but I have the following reservations.
One case provides very limited evidence for efficacy and even less for safety. In this light, a couple of statements, including the following, are too enthusiastic and need to be toned down: "regular cognitive computerized training in ADHD patients can improve some of their cognitive symptoms and can help treating video game addiction"; "We conclude that a daily cognitive computerized training in ADHD patients can improve some of their cognitive symptoms, and can help treating the video gaming addiction."

The manuscript is understandable, but needs copy editing by a native English speaker. For instance, the first sentence of the abstract has 2 errors, and the following phrase is really hard to parse: "the method of Tajima Cognitive Method (TCT) cognitive training called 'ADHD Trainer'."

The exercises comprising "ADHD Trainer" are not described in any detail beyond listing the categories of mental functioning that the tasks were thought to reflect. If another publication or thesis describes them, a reference would suffice; otherwise a list of tasks would be a first step. Similarly, as with any method, the authors need to provide some kind of information about where the reader can obtain the TCT.
The Barkley School Situations Questionnaire was administered, but the scores are not reported.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
No competing interests were disclosed.

Second, what evidence do you have that the child is not addicted to the educational game?
Third, 4 hours of play of a videogame post-treatment is still a lot; this should be mentioned as a limitation.
Fourth, why were the Conners ratings after treatment for parents and teachers lower compared with pre-treatment?
Fifth, the authors should be commended for the use of advanced computer games for the treatment of ADHD. There are other tools for this purpose that are worth mentioning, such as ONTRAC (Mishra et al., 2013) and the game reported by Prins PJ et al., 2011.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
No competing interests were disclosed. | 3,365.6 | 2015-06-23T00:00:00.000 | [
"Computer Science",
"Medicine",
"Psychology"
] |
Exciton dynamics of C60-based single-photon emitters explored by Hanbury Brown–Twiss scanning tunnelling microscopy
Exciton creation and annihilation by charges are crucial processes for technologies relying on charge-exciton-photon conversion. Improvement of organic light sources or dye-sensitized solar cells requires methods to address exciton dynamics at the molecular scale. Near-field techniques have been instrumental for this purpose; however, characterizing exciton recombination with molecular resolution remained a challenge. Here, we study exciton dynamics by using scanning tunnelling microscopy to inject current with sub-molecular precision and Hanbury Brown–Twiss interferometry to measure photon correlations in the far-field electroluminescence. Controlled injection allows us to generate excitons in solid C60 and let them interact with charges during their lifetime. We demonstrate electrically driven single-photon emission from localized structural defects and determine exciton lifetimes in the picosecond range. Monitoring lifetime shortening and luminescence saturation for increasing carrier injection rates provides access to charge-exciton annihilation dynamics. Our approach introduces a unique way to study single quasi-particle dynamics on the ultimate molecular scale.
Supplementary Note 1: Scanning tunneling spectroscopy
In Supplementary Figure 2 we present two scanning tunneling spectra (dI/dU) obtained on the single-photon emission center characterized in Fig. 3 of the main text. The spectra presented in Supplementary Figure 2b were obtained on top of the positions marked with crosses of the respective colors in Supplementary Figure 2a. The green spectrum was obtained on one of the three molecules in the dislocation that excite photon emission with high yield. The blue spectrum was obtained on a non-emitting molecule.
The green spectrum presents a filled electronic state shifted into the bandgap by 0.2 eV with respect to the blue spectrum. This state inside the bandgap is the cause of the strong emission obtained when electrons are extracted from that molecule and is responsible for the single-photon emission. The splitting off of the hole trap is typically much more pronounced than that of the electron trap. The spectra qualitatively corroborate the energy level diagram presented in the main text. Spatial mapping of the defective charge trap states in ECs will be published elsewhere.
We would like to comment on the difference between the band gap observed in Supplementary Figure 2b and the luminescence photon energy of 1.7 eV. Two corrections have to be applied. The thick dielectric C60 layer penetrated by the strong electric field in the STM leads to a significant shift of the observed electronic states with respect to their true (field-free) energies. We found experimentally that under the given tunneling parameters the apparent widening of the band gap with respect to the true gap amounts to an increase of roughly 8% for each additional C60 layer. Comparison of the value in Supplementary Figure 2b with our data obtained for a double C60 layer on Ag(111) (apparent gap 2.8 eV) is thus in good agreement with an assumed thickness of 8 C60 layers in our study. The remaining difference between 2.8 eV and the classical literature value of 2.3 eV 1 results from the remaining field shift within the last two layers. The second correction comes from the known exciton binding energy (electron-hole attraction) of 0.46 eV. 2 Finally, a rather small residual contribution of 0.2-0.3 eV may be attributed to the exciton trapping in the ECs.
Supplementary Note 2: Obtaining the lifetime from the correlation measurements
The antibunching of sub-Poissonian emitters is described by a model containing only a ground and an excited state through a simple exponential recovery with the recovery time τ 3 :

A(Δt) = 1 − exp(−|Δt|/τ)  (1)

Models containing additional intermediate states may produce deviations from this simple shape. The broadening of function A in (1) by a normalized Gaussian detection function of width σ can be calculated analytically:

B(Δt) = 1 − (1/2) exp(σ²/2τ²) [ exp(−Δt/τ) erfc((σ/τ − Δt/σ)/√2) + exp(Δt/τ) erfc((σ/τ + Δt/σ)/√2) ]  (2)

where erfc(x) = 1 − erf(x) and erf(x) is the error function. Function B with the known Gaussian width of 1.2 ns FWHM (σ = 0.5 ns) could be used to fit the experimental g(2)(Δt) curves (Fig. 3d in the main text) and to obtain from these fits the recovery times τ. See Supplementary Figure 3a for examples with various σ.
The data in the paper are evaluated and fitted by going one step further. We measured the photon correlation of the two detectors in response to ps light pulses from a spectrally filtered (690 nm) supercontinuum light source. The measured correlation function, which reflects the detector characteristics, could be described very well by the sum of two Gaussians with different heights and widths. With this approximation it is again possible to obtain the analytical convolution of (1), because the convolution of a sum of two functions is simply the sum of the two convolved functions. The numerical fit of the analytical function to the measured data is straightforward and directly yields the recovery times τ of the experiment, which are the ones we present in the paper.
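As an illustration of this procedure, the short sketch below fits a synthetic antibunching dip with the single-Gaussian convolution of function B in Eq. (2); the function names, synthetic data and parameter values are our own illustrative assumptions, not the analysis code used for the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def g2_model(dt, tau, sigma, amp):
    """Exponential antibunching recovery (Eq. 1) convolved with a Gaussian IRF (Eq. 2)."""
    pre = 0.5 * np.exp(sigma**2 / (2.0 * tau**2))
    dip = pre * (np.exp(-dt / tau) * erfc((sigma / tau - dt / sigma) / np.sqrt(2.0))
                 + np.exp(dt / tau) * erfc((sigma / tau + dt / sigma) / np.sqrt(2.0)))
    return amp * (1.0 - dip)

# Synthetic "measurement": true tau = 0.6 ns, detector IRF sigma = 0.5 ns
dt = np.linspace(-10.0, 10.0, 400)                   # delay in ns
rng = np.random.default_rng(0)
g2_meas = g2_model(dt, 0.6, 0.5, 1.0) + rng.normal(0.0, 0.02, dt.size)

popt, _ = curve_fit(g2_model, dt, g2_meas, p0=[1.0, 0.5, 1.0])
print(f"fitted recovery time tau = {popt[0]:.2f} ns")
```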
Supplementary Note 3: Emission intensities and photon correlations in the three-state model
Correlation experiments with photo-excitation require a minimum of two states to account for the behavior: the ground state and an excited (singlet exciton) state. Including an additional long-lived excited state (e.g. a triplet exciton) 4 leads to a three-state model (see Supplementary Figure 4).
Electrical excitation requires a minimum of 3 states because the generation of an exciton cannot be achieved in one step as in optical excitation, but requires the successive creation of a hole and then the capture of an electron at the monitored site. In the discussed experiments the electron extraction by the tip always precedes the electron capture, so that a fourth (trapped electron) state will not be considered. The population of the trap by a hole is assumed to be linear in the tunnel current Itunnel, so that the first rate constant (the inverse of the time constant) of the model is given by:

k1 = α · Itunnel / e  (3)

where e is the elementary charge. In the model we consider, however, only those trapped holes that are converted into an exciton, because a separate detrapping process is not included. α is thus the exciton creation probability for each charge injected by the STM tip.
The capture of an electron requires the pre-existence of a trapped hole since, due to the energetic position of the electronic states, a negatively charged trap lies too high in energy. The existence of a negatively charged trap under typical experimental conditions can be excluded, since such an electron could tunnel to the STM tip and emit a photon in an inelastic tunneling process, producing a broad plasmonic light spectrum, which is, in fact, not observed.
The electron capture by the trapped hole is fast (<< 1 ns), as discussed in the main text. Its rate k2 is assumed to be constant. If a weak dependence on the current existed, we would expect a higher current to slightly increase the electron capture due to the increased driving force of the electric fields. The experimental observation is, however, the opposite, which is why we neglect the dependence of k2 on current.
The decay of the exciton occurs with its proper time constant τX:

k3 = 1/τX  (5)
As shown in an earlier publication, this rate constant may already contain non-radiative quenching, e.g. by the nearby metal electrodes. 5 This and another publication 6 suggest a lifetime of the lowest singlet exciton of the order of 1 ns. The rate equation model developed up to here can be solved analytically for the time-dependent occupation numbers of the three states: ng(Δt), nc(Δt), nX(Δt). When a recombination has taken place at time zero (initial condition of the solution), the probability for another exciton recombination at time delay Δt is obtained as:

R(Δt) = [k1k2k3 / (k1k2 + k2k3 + k3k1)] · [1 + (λ− exp(λ+Δt) − λ+ exp(λ−Δt)) / (λ+ − λ−)]  (6)

with the substitutions

λ± = −(k1 + k2 + k3)/2 ± √[(k1 + k2 + k3)²/4 − (k1k2 + k2k3 + k3k1)]  (6')

The curve in Supplementary Figure 3b exhibits the characteristic recovery time of anti-bunching as 0.62 ns, which is close although not identical to the inverse rate constant k3. The smallest of the rate constants, k1, enters into the intensity but does not appear as a time constant. Rate constant k2 appears through a small parabolic section at the minimum at time zero (compare to the minimum of the red curve in Supplementary Figure 3a). This parabolic section is absent in two-state models and is expected to become measurable only for high enough detector time resolution and excellent correlation statistics.
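For intuition, the sketch below numerically propagates the three-state rate equations from the post-recombination initial condition and evaluates the correlation shape as k3 · nX(Δt); the rate values are illustrative assumptions only, not the experimental parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.1, 20.0, 1.6     # rates in 1/ns; k1 is set by the tunnel current

def rates(t, n):
    n_g, n_c, n_X = n            # ground state, trapped hole, exciton
    return [-k1 * n_g + k3 * n_X,
            k1 * n_g - k2 * n_c,
            k2 * n_c - k3 * n_X]

# Initial condition: system back in the ground state right after a recombination
sol = solve_ivp(rates, (0.0, 10.0), [1.0, 0.0, 0.0], dense_output=True)
dt = np.linspace(0.0, 10.0, 200)
corr = k3 * sol.sol(dt)[2]       # recombination probability ~ k3 * n_X(dt), cf. Eq. (6)
corr /= corr[-1]                 # normalize to the long-delay plateau
```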
The detected correlation rate is much lower than the recombination rate (6) due to various losses. These comprise a constant fraction of non-radiative recombination, the finite coupling to plasmonic light emission and finally the optical transmission and detection efficiencies. We summarize the transmission remaining after all these losses in the constant η, so that the experimental correlation rate can be described by:

D(Δt) = η · R(Δt)  (7)

To account for the measured quantum yield of 2.5 × 10⁻⁵ photons/electron we assume an exciton creation efficiency of α = 2.5 × 10⁻³ excitons/charge and a detection loss factor η = 10⁻². This value of 10⁻² is based on the estimated optical transmission of the optical line (ca. 15% for optical transmission and detection 7 ) and the roughly 5% efficiency of plasmonic free-space emission. 8 A more precise separation of the measured quantum yield into the two factors cannot be provided here, as the two factors cannot be independently determined in the experiment.
From (7) we obtain the time-averaged photon intensity E in one detector as a function of tunnel current:

E(Itunnel) = lim(Δt→∞) D(Δt) = η · k1k2k3 / (k1k2 + k2k3 + k3k1)  (8)

which we plot in Supplementary Figure 5a for parameters similar to those used in Supplementary Figure 3b. We find that the function is dominated by a linear dependence on the current and that deviations from linearity would occur only if k1 became comparable to k3 and k2.
Eqns. (7) and (8) do not yet include the current-dependent reduction of the exciton lifetime. We introduce, parallel to process k3, the non-radiative charge-induced quenching of the exciton by k3'. This process is linear in the current, with a charge-exciton annihilation efficiency β:

k3' = β · Itunnel / e  (9)

This modification turns the list of substitution parameters (6') into

λ± = −(k1 + k2 + k3 + k3')/2 ± √[(k1 + k2 + k3 + k3')²/4 − (k1k2 + k2(k3 + k3') + (k3 + k3')k1)]  (10)

and the current-dependent light emission (8) into:

E(Itunnel) = η · k1k2k3 / (k1k2 + k2(k3 + k3') + (k3 + k3')k1)  (11)

We conclude with a simplified result by introducing two approximations. We assume (*) that the electron capture process k2 is by far the fastest of the three processes and (**) that the exciton creation efficiency α is much smaller than 1. Then we obtain

E(Itunnel) = α·η / (τtunnel + β·τX)  (12)

wherein we substituted for simplicity τtunnel = e/Itunnel, the average time between two tunneling charges. As τtunnel is given by the tunnel current and τX is known from the time-resolved correlation data, this formula has only two free parameters: the electroluminescence quantum yield α·η, which is the slope near zero current, and the annihilation quantum efficiency β, which accounts for the deviation from linearity. In Supplementary Figure 5b we plot the best fit to the experimental data using eq. (12).
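A minimal fitting sketch along the lines of eq. (12) is given below; the current values, count rates and starting parameters are illustrative assumptions, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

E_CHARGE = 1.602e-19        # elementary charge (C)
TAU_X = 0.6e-9              # exciton lifetime (s), taken from the g2 fits

def photon_rate(current, alpha_eta, beta):
    tau_tunnel = E_CHARGE / current          # average time between charges
    return alpha_eta / (tau_tunnel + beta * TAU_X)

current = np.array([0.1, 0.5, 1.0, 2.0, 5.0]) * 1e-9     # tunnel current (A)
counts = photon_rate(current, 2.5e-5, 0.05)              # synthetic "data"

popt, _ = curve_fit(photon_rate, current, counts, p0=[1e-5, 0.1])
print(f"alpha*eta = {popt[0]:.2e} photons/electron, beta = {popt[1]:.2f}")
```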
We note that for larger injection rates (i.e. tunneling currents) the photon count rate theoretically levels off completely and can even decline. Reaching these experimental conditions is a difficult task, however, since high tunneling currents result in more unstable tips and eventually in crashes destroying the ECs. Exploring these high-current ranges can nevertheless be of interest for future investigations.
"Physics",
"Materials Science"
] |
One Shot of the Hydrothermal Route for the Synthesis of Zeolite LTA Using Kaolin
Zeolite LTA (Z-LTA) was successfully synthesized from kaolin and was well crystallized using a hydrothermal method. The main purpose of synthesizing Z-LTA from kaolin was to illustrate the transformation to Z-LTA at different molarities of alkaline solution and crystallization times. The kaolin was heated in a furnace for 4 h at 500, 600, 700, and 800 °C, resulting in a metakaolinization process that transformed it into an amorphous state. The Z-LTA mixture was obtained by dissolving metakaolin in a sodium hydroxide (NaOH) solution without adding other silica or alumina sources. Before the hydrothermal synthesis process, the solution mixture was aged for 24 h. The crystal morphology and degree of crystallinity (%) of Z-LTA were evaluated for various molarities (M) of sodium hydroxide (NaOH) and crystallization times (h). The degree of crystallinity of the Z-LTA increases from 62.83 to 86.80% when the molarity of NaOH is increased from 0.5 to 1 M, but decreases when the molarity of NaOH is increased to 2 M and 3 M (80.77% and 67.78%, respectively). The degree of crystallinity (%) of Z-LTA at the selected molarity was then examined for various crystallization times (h): 1 M NaOH with 9 h gives the highest degree of crystallinity (88.45%) compared to the other times, 12 h (86.16%), 16 h (87.11%), 24 h (86.80%) and 30 h (86.59%). The crystal morphology with Na, Al, O, and Si was seen in SEM images of the Z-LTA obtained at 1 M NaOH and 9 h crystallization time (Z-LTA, 1 M 9 h). The particle size of Z-LTA, 1 M 9 h, 0.329 µm, is smaller than that of kaolin, 0.497 µm. Under low-alkali conditions and short crystallization times, the hydrothermal process successfully generates high crystallinity from natural kaolin. As a result, low-grade Malaysian kaolin can be successfully transformed into Z-LTA using the traditional hydrothermal process when the composition parameters are carefully controlled. Hydrothermal synthesis from natural kaolin was the most cost-effective and efficient method, with product quality comparable to zeolite synthesized from conventional raw materials.
Introduction
Zeolites are aluminosilicates with a three-dimensional (3D) network that forms an open framework structure with molecular-sized pores and cavities [1]. Z-LTA is a low-silica zeolite with a Si/Al molar ratio of one that has a high ion-exchange capacity [2,3] and good adsorption properties [4]. Since Z-LTA is used for a wide range of applications, including petroleum refining [5,6], water treatment [7,8], gas adsorption [9], agriculture [10], animal feed additives [11], and eco-friendly products [12,13], it is now prevalent in our modern lifestyle. There are approximately 234 different framework types of zeolites, which can be found naturally [14,15] or synthesized [16]. Most zeolites are aluminosilicate materials; however, zeolites made up of different framework elements, such as titanosilicates, are now common [17]. The market for synthetic zeolite materials is now worth $5.2 billion per year, and it is expected to grow to $5.9 billion by 2023 [18].
The most popularly used zeolites are Z-LTA, X, Y, USY, and ZSM-5. In terms of volume and value, Z-LTA is one of the most widely used zeolites. Commercially, Z-LTA is primarily used as a laundry detergent additive [19]. Zeolite Linde Type A (Z-LTA or Zeolite A) was the first synthetic zeolite to be commercialized, in 1956, according to Milton and colleagues [20]. It is also known as Zeolite 3A, 4A, or 5A, depending on the exchangeable cation in the zeolite structure and whether the material is potassium, sodium, or calcium ion-exchanged [21]. Na12[(AlO2)12(SiO2)12]·27H2O is the general formula for sodium-exchanged Z-LTA, where the framework silicon to aluminium (Si/Al) mole ratio is one and sodium ions are the exchangeable extra-framework cations. In terms of structure, the lattice of zeolite LTA (Z-LTA) has two cage types: the β-cage (sodalite cage) and the α-cage [22]. Eight sodalite cages, linked by double four-membered rings in the centre of eight-membered rings, form a large α-supercage. The Si/Al ratio of about 1 in the most common form of Z-LTA indicates that it has a high cation-exchange capacity. Thermal stability [23], high selectivity, non-toxicity, and superior mechanical strength [24,25] are some of the other commercial advantages. In addition to being used in laundry detergents, Z-LTA is used in industry for ethanol dehydration [26].
When Z-LTA is exchanged with silver ions, it can be used as an antibacterial material [27,28]. When medical devices, such as umbilical catheters, are impregnated with silver-Z-LTA, the incidence of catheter-related bloodstream infections is dramatically reduced [29]. This growing trend is highlighted by the following factors: (1) novel synthesis methods; (2) expansion of applications beyond detergents and ethanol dehydration; and (3) conversion of aluminium- and silicon-containing waste materials to Z-LTA. Regrettably, it appears that the use of novel Z-LTA technologies in industry has not progressed. One explanation is the lack of a compelling business case, in terms of manufacturing cost and market development, for the commercialization of next-generation Z-LTA materials. Another reason could be that no comprehensive analysis of Z-LTA synthesis and applications has been published which integrates data in a way that demonstrates cutting-edge achievements in this field. Furthermore, no extensive analysis has been conducted to determine the best path forward for turning exciting Z-LTA discoveries into commercially viable applications.
Clay minerals are prevalent in kaolin, the most important being kaolinite; kaolin is made up of clay minerals as well as non-clay minerals. Kaolinite (Al2O3·2SiO2·2H2O), with a Si/Al molar ratio of 1, similar to that of zeolite 4A, is considered the most suitable starting material for its production among all available raw materials [30-34]. The framework structure of kaolinite is made up of a 1:1 ratio of one alumina octahedron sheet and one silica tetrahedron sheet. The crystalline phase of kaolinite must be metakaolinized into an amorphous and reactive phase in order to properly exploit kaolin as a feedstock for zeolite synthesis. Kaolinite minerals are highly valued as low-cost raw materials because they are abundant around the world. Even so, established procedures using kaolin as a raw material after thermal activation and synthesis steps are still being researched due to differences in the conditions required to create metakaolin. In comparison to other countries, China was the first to develop and use kaolin resources, and it is also the world's largest source of kaolin [35]. China has six large kaolin mines, and there are also high-quality kaolin resources in the US, the UK, Brazil, India, and other countries [36]. The composition and structure of kaolin, on the other hand, vary depending on geological and weather conditions, which may affect its chemical reactivity [37,38]. The higher the kaolinite content in kaolin, the closer its chemical composition is to the theoretical one.
Metakaolinization is a process requiring thermal activation at high temperatures, typically in the 600-1100 °C range [39,40]. The hydrothermal synthesis stage then begins with mixing the metakaolin with an aqueous alkali medium at suitable reaction temperatures. Depending on the origin of the raw material, the disagreements over the conditions required to create metakaolin may be related to the activation temperature [41,42], impurities, particularly iron content [43,44], the Si/Al molar ratio [45-47], the quartz content [39,48], and the initial crystallinity of the raw material [49].
The hydrothermal synthesis method was the first and most widely used method for synthesizing zeolite, and it continues to play a significant role. It is commonly employed in the production of zeolites from kaolin. Based on the reaction temperature, the hydrothermal synthesis method is divided into subcritical and supercritical reactions. The hydrothermal synthesis method has several advantages, including high reactivity of the reactants, low energy consumption, and low air pollution [50]. Because of these benefits, researchers have used this approach to conduct extensive research on the synthesis of zeolite from kaolin. The whiteness and ion-exchange capacity of zeolite A have been greatly improved when compared to the old method of synthesis. This technique also has the advantage of being able to remove impurities in the kaolin or transform them into chemical elements of the desired product. The authors use alkaline kaolin initiation to reduce the pollution caused by traditional calcination processes, as well as to eliminate impurities in kaolin or convert them into components of the target product, allowing for more efficient use of natural resources.
Reagents and Materials
Delta Kaolin Sdn. Bhd., based in Selangor, Malaysia, provided the raw kaolin used in this study. The raw kaolin underwent a simple beneficiation procedure. A 15 L container was filled with 10 L of distilled water and 3 kg of crushed kaolin and left to soak for one week. The soaking kaolin was stirred on a regular basis. The beneficiation had a positive effect in that the kaolin settled after stirring, the floating dirt was decanted with the supernatant, and all solid particles were removed by handpicking during decantation. This process continued until there was no solid layer left at the bottom of the container. The fine particles suspended in water were dried for one day at room temperature before sieving with a 63-μm mesh. The sieved particles were then dried for 12 h in an oven. Bg Oil Chem Sdn Bhd (Malaysia) provided the sodium hydroxide used in the experiments. All reagents were of analytical grade. Distilled water was prepared in the laboratory. Z-LTA samples with various molarities and crystallization times were prepared from the kaolin in the laboratory.
Zeolite Preparation and Hydrothermal Synthesis
The production of several zeolites from kaolin has relied heavily on hydrothermal synthesis [51,52]. The calcination of kaolin to form metakaolin was the first step in the synthesis of zeolite. Metakaolin was obtained after 4 h of calcination at 500, 600, 700, and 800 °C. Table 1 shows the chemical compositions of raw kaolin and metakaolin. The hydrothermal technique was used in the experiment, as shown in Fig. 1. 3 g of metakaolin was mixed with 0.5, 1, 2, and 3 M NaOH, respectively. The mixture was then dispersed in 80 mL of distilled water and aged for 24 h at 40 °C with stirring. The mixture was placed in a 100 mL Teflon-lined stainless-steel autoclave and treated hydrothermally at 100 °C for 9, 12, 16, 24, or 30 h, depending on the crystallization time. The product was then filtered with a pump, washed several times with distilled water to remove any remaining NaOH, and dried in a 60 °C oven for 12 h.
Characterization
SEM, FTIR, XRD, PSA, and TGA were used to characterize the crystallization and chemical properties of raw kaolin and Z-LTA. X-ray diffraction was used to identify the crystalline phases of the samples in this study (XRD, D8-Advance, Bruker, Germany); Cu Kα radiation can be used to determine the crystallinity of zeolite. XRD (Ultima IV, Rigaku, UK) analyses with Cu Kα radiation, a fixed power source, and a diffraction angle (2θ) of 10-90° were used to determine the crystal phases and mineral compositions of the samples. The samples were evaluated structurally and Rietveld-fitted using specialized software (X'Pert HighScore). Using OriginPro software, the degree of crystallinity (%) was calculated as follows:

Degree of crystallinity (%) = (area of crystalline peaks / total area of crystalline and amorphous peaks) × 100

A Fourier transform infrared (FTIR) spectrometer (Nicolet iS50, Thermo Fisher, USA) was used to record the samples' infrared spectra in the range 4000-400 cm−1. Prior to FTIR characterization, the samples were ground with dried KBr powder and pressed into small discs. An X-ray fluorescence spectrometer (AXIOS-mAX, PANalytical B.V., Netherlands) was used to examine the samples' major chemical compositions, while scanning electron microscopy was used to examine the surface morphologies (SEM, SU8010, Hitachi, Japan). A thermogravimetric differential thermal analyzer was used to measure thermal stability (TG-DSC, STA 409 PC, Netzsch, Germany). A Malvern Mastersizer 2000 particle size analyzer was used to measure the particle size of the raw kaolin and the synthesized zeolite.
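As a rough numerical illustration of the area-ratio formula above, the following sketch estimates the degree of crystallinity from a synthetic diffractogram; the peak/background separation shown here is a simplifying assumption, not the OriginPro procedure used in this work.

```python
import numpy as np

def degree_of_crystallinity(two_theta, intensity, background):
    """Crystallinity (%) = crystalline peak area / total diffraction area x 100."""
    total = np.trapz(intensity, two_theta)
    amorphous = np.trapz(background, two_theta)   # amorphous halo + baseline
    return 100.0 * (total - amorphous) / total

# Synthetic diffractogram: two kaolinite-like peaks on a broad amorphous hump
two_theta = np.linspace(10.0, 90.0, 1600)
background = 50.0 + 20.0 * np.exp(-((two_theta - 25.0) / 10.0) ** 2)
peaks = (400.0 * np.exp(-((two_theta - 12.4) / 0.2) ** 2)
         + 350.0 * np.exp(-((two_theta - 24.9) / 0.2) ** 2))
print(f"{degree_of_crystallinity(two_theta, background + peaks, background):.1f} %")
```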
Characterization of Raw Kaolin
The raw kaolin's XRD pattern corresponded to kaolinite, which has a layered structure with diffraction planes (0 0 1) and (0 0 2) at 2-theta (2θ) = 12.398° and 24.944°, respectively, the characteristic kaolinite peaks (Fig. 2). The sharp peaks were found to match ICSD pattern number 98-0037558. XRF was used to determine the chemical composition of the kaolin, which is listed in Table 1. The kaolin contained SiO2 (48.72%) and Al2O3 (37.63%), which served as the Si and Al sources in the synthesis of zeolite. To synthesize the various Z-LTA samples from the kaolin, the silica and alumina composition was used without adjustment, based on the XRF analysis.
Phase Transformation of Kaolin to Metakaolin
After calcination, the metakaolin XRD pattern changes, with shifts and changes in peak intensities occurring between 2θ = 20.861° and 26.644° at 600, 700, and 800 °C, but not at 500 °C (Fig. 3). At 600, 700, and 800 °C, quartz is the primary remaining crystalline component. Figure 4 also includes SEM images of the kaolin and metakaolin morphologies. The raw kaolin morphology shows a lamellar structure (Fig. 4a). The metakaolin morphology, on the other hand, shows newly formed tiny crystals on the kaolin particle surfaces (Fig. 4b-e). The calcination treatment has resulted in a highly disordered metakaolin with a sheet-like structure, in agreement with [54] and El-Diadamony et al. [55]. As shown in Fig. 4a [56], kaolin comprises a morphological assemblage of plate-like hexagonal structures or book-like stacks. The raw kaolin clay appears to have a layered crystalline morphology, which supports earlier research [57,58] on the variation in mineralogy of Malaysian kaolin. Further examination of the micrograph revealed irregular platelets and poorly defined flakes containing sub-rounded particles. The kaolinite molecular structure corresponds to the 400-1300 cm−1 band region of the kaolin clay spectrum (Fig. 5) [59,60]. Si-O stretching is observed at 436 to 700 cm−1, in line with a previous study [61], which found a sharp Si-O peak at 700 cm−1. Furthermore, the bands at 700 cm−1 and 1033 cm−1 are described by Kovo and Holmes [62] as Si-O stretching, whereas bands due to hydroxyl groups are found between 3610 and 3695 cm−1 [59]. The bands depicting the hydroxyl groups of the kaolin samples were identified at 3620 to 3690 cm−1, indicating that most kaolin is hydrophilic [59]. After calcination, the two typical kaolinite vibration absorption peaks at 478 and 530 cm−1 disappear, and new characteristic vibrational bands appear at 1080, 790, and 470 cm−1. The metakaolin shows the effect of the metakaolinization treatment on kaolin samples heated to 500-800 °C for 240 min. Because a kaolinite peak remains at 2θ = 12.376° and 24.899°, the X-ray diffraction pattern obtained for treatment at 500 °C does not reveal a complete transformation of the kaolin to amorphous SiO2. However, owing to the decrease in kaolinite peak intensity at 2θ = 12.398° and 24.944°, the XRD patterns for 600 to 800 °C progressively show a complete transformation of kaolin to amorphous SiO2.
The complete absence of the kaolinite peaks at the 2θ = 12.398° and 24.944° positions indicated that this parameter is suitable for calcination. According to the TGA data, when the heating temperature was increased to 350 °C, the sample's initial weight decreased by 1.5% (Fig. 6). The decomposition and loss of crystallization water become noticeable as the heating temperature rises. Crystallization water is distinct from the "pore water" typically lost during the drying process; it combines chemically to form a fundamental unit of the crystal structure of kaolin. Temperatures above 400 °C affect the kaolin crystallization water [63].
The loss of crystallization water began at around 570 °C, as shown in the kaolin thermograms, indicating the start of the dehydroxylation reaction. When the heating temperature was increased to 570 °C, a weight reduction of 10.5% was observed, attributed to the structural loss of the hydroxyl groups present in the kaolinite layers [64]. From the TGA curves of the kaolin samples, the dehydroxylation reaction appears to be complete at about 650 to 750 °C. As the heating temperature continues to rise, another event that could lead to an exothermic reaction is expected, but this research is limited to the reaction that leads to metakaolin (dehydroxylation). At 710 °C, the mineral kaolinite disintegrates into free alumina, silica, and water [64]. Kaolin had lost 13% of its weight at this point. The degree of crystallinity obtained at the different NaOH molarities is summarized in Table 2 and Fig. 8. The molarity of 1 M NaOH increased the rate of dissolution of Si and Al ions, which helped form suitable Z-LTA crystal nuclei. The crystallization of the LTA zeolite is influenced by the formation of suitable nuclei.
Hydrothermal Synthesis of Zeolite
High supersaturation and steric stabilization of the nuclei are critical factors for minimizing the final zeolite crystal sizes [65]. Meanwhile, Reyes et al. [66] found that Z-LTA could be synthesized in less time using a lower concentration of NaOH (1.33 M); the crystallinity of the synthesis product was high, and the crystal size distribution was uniform (1.0 µm). Gougazeh and Buhl [67] made Na-A zeolite by heating kaolin with 1.0-4.0 M NaOH for 20 h at 100 °C. According to their findings, the crystallinity increased from 70.2 to 74.6% as the NaOH concentration increased from 1.5 to 3.5 M, and decreased from 68.2 to 50.8% when the NaOH concentration was increased from 3.5 to 4.0 M. 1 M NaOH was therefore chosen for further study of the formation of higher crystallinity percentages at different crystallization times (9-30 h). Figure 9 shows a higher intensity peak of Z-LTA at 9 h when compared to the other times. The crystallinity percentage of Z-LTA at 1 M NaOH is highest at 9 h (88.45%), compared to 12 h (86.16%), 16 h (87.11%), 24 h (86.80%), and 30 h (86.59%) (Table 2; Fig. 10). The mean diameter of the final Z-LTA 1 M, 9 h product was 329 nm, or 0.329 µm. Jawor and Jeong [68] confirmed this finding, which is almost identical to conventional zeolite, 0.427 µm. In addition, the mean diameter recorded for kaolin was 497 ± 0.4 nm, equivalent to 0.497 µm. This result agrees with Yahaya et al. [69], who reported kaolin sizes of 0.4-0.75 µm. This finding indicates that varying the synthesis time resulted in the production of different zeolite types and purities [70]. This phenomenon can also be explained by the fact that zeolite is a thermodynamically metastable phase, which means that the synthesis process is governed by Ostwald's step rule of successive reactions. The replacement of phases that occurred at different crystallization times confirmed the earlier assertion that zeolites are thermodynamically metastable phases [71]. At 3 M NaOH and 24 h crystallization time, zeolite crystallization failed.
According to the studies above, NaOH concentrations below 2 M are best for synthesizing Z-LTA, because higher concentrations can produce impurities like hydroxysodalite, which hinder crystallization (Fig. 11) [72]. Previous studies [73] have also shown the presence of hydroxysodalite when NaOH concentrations of over 3 M are used. Higher alkalinity promotes the formation of impurities and the decomposition of zeolite, with a decrease in the degree of crystallinity [74].
FTIR spectroscopy in the 400-4000 cm−1 range is used to characterize the structure of zeolites and to monitor reactions in the zeolite framework. The different bands are illustrated by the IR spectra of raw kaolin, metakaolin (700 °C, 4 h, Fig. 5), and Z-LTA (1 M, 9 h, Fig. 12). After calcination, the two typical kaolinite vibration absorption peaks at 478 and 530 cm−1 disappear, and new characteristic vibrational bands show up at 1080, 790, and 470 cm−1 (Fig. 5b). The vibration band appearing at 790 cm−1 indicates that the kaolinite structure has been broken, resulting in amorphous aluminosilicate. The broad band assigned to Al-O bonds in Al2O3, in the spectral range from about 930 cm−1 to about 700 cm−1, does not appear in zeolitic materials (Fig. 5a).
After 240 min of exposure time, broad bands of kaolin (Fig. 5b) appeared at around 790 cm−1 that had never been seen before. These were assigned to the Al-O bond in Al2O3, implying that free alumina is formed and that octahedrally coordinated Al is converted to Al with tetrahedral coordination [75]. The shoulder at 1080 cm−1 was also wide; the stretching bands of SiO2 are responsible for this. The broad shoulder peak in metakaolin, clearly distinct from the sharp Si-O bands of low quartz, again suggests the formation of free silica, as reported by Brindley and Nakahira [76]. The 470 cm−1 band in the metakaolinite structure was also assigned to 4-coordinated Al-O stretching [77]. This yielded a highly reactive substance capable of converting into zeolites. The bands obtained after rapid kaolin heating at 700 °C generally show the conversion of kaolinite to metakaolinite.
In aluminosilicates with the zeolite structure, the 1030 cm−1 band of kaolin was shifted to 995 cm−1, which could be attributed to antisymmetric stretching of T-O bonds (T = Si or Al) (Fig. 12). During the reaction between raw kaolin and NaOH, SiO2 and Al2O3 are transformed into aluminosilicates. In the literature, the characteristic band around 995 cm−1 is assigned to asymmetric stretching vibrations for all zeolitic materials. Bending vibrations of Al-O groups in the zeolite structure could be responsible for the vibration at 452 cm−1. Furthermore, FTIR analysis of the zeolite structure reveals that the sodium ions added during the reaction enter the pores or cages of the zeolite and do not alter the zeolite's original framework structure. Table 3 summarizes the FTIR wavenumbers for kaolin, metakaolin and Z-LTA 1 M, 9 h.
The detailed microstructure of the zeolitic materials with sheet structure was revealed using the SEM technique (Fig. 13a-h). The crystalline particles with lattice frames could also have the LTA zeolite cubic structure, as shown in Fig. 13. Sodalite has a spherical form surrounded by an array of long fibres, whereas zeolite A and LTA are cubic. The SEM micrographs in Fig. 13 demonstrate that the synthesized Z-LTA fits perfectly into the cubic crystalline system family, revealing well-crystalline zeolite crystals with the same cubic morphology and uniform sizes. The cubic morphology of the zeolite crystals became very pronounced, with an average length of about 2 µm. From 1 to 2 M and 9-30 h, it can be deduced that no significant amount of amorphous material is detected at the end of the synthesis process, indicating that low-grade Malaysian kaolin can be successfully transformed into Z-LTA using the traditional hydrothermal method when the composition parameters are carefully controlled. These Z-LTA materials are widely used in water treatment, membrane separation and water softening.
Sodalite is a spherical mineral surrounded by a ring of long fibres (see Fig. 13d). Due to the faster and more uniform heating, the 3 M condition produced smaller and more homogeneous grain sizes of zeolite A and sodalite. Inadequate mixing was blamed for the formation of sodalite as a byproduct of the Z-LTA synthesis. When a Z-LTA mixture moved into the HS range, dissolution and regrowth of the crystal composition occurred; for example, all of the LTA was eventually transformed into sodalite. The only sodalite that nucleated
Conclusion
Z-LTA 1 M, 9 h, with good crystallization, was hydrothermally synthesized using raw kaolin from Delta Kaolin, Selangor, Malaysia. Z-LTA was obtained under optimal conditions of 1 M NaOH and 9 h crystallization time. XRD analysis confirmed that Z-LTA 1 M, 9 h is zeolite type LTA with ICSD pattern number 98-0037558. The crystallinity percentage of Z-LTA at 1 M NaOH gives the highest reading, 86.80%, followed by 2 M NaOH (80.77%) and 3 M NaOH (67.78%). The SEM micrographs proved that the synthesized Z-LTA fits perfectly into the family of cubic crystalline systems. The presence of well-crystalline zeolite crystals with the same cubic morphology and uniform sizes revealed that the cubic morphology of the zeolite crystals became very pronounced, with an average length of about 2 µm. From 1 to 2 M and 9-30 h, no significant amount of amorphous material is detected at the end of the synthesis process, indicating that low-grade Malaysian kaolin can be successfully transformed into Z-LTA using the traditional hydrothermal process when the composition parameters are carefully controlled. As a result, hydrothermal synthesis from natural kaolin was the most cost-effective and efficient method, with product quality comparable to zeolite synthesized from conventional raw materials.
Funding This work was supported by the Ministry of Higher Education Malaysia and Universiti Tun Hussein Onn Malaysia (UTHM) with GPPS Grant H696.
Data Availability Data will be made available on request.
Code Availability Not applicable.
Conflict of interest
The authors declare that they have no conflict of interest that could have appeared to influence the work reported in this paper.
Ethical Approval Not applicable.
Informed Consent Not applicable.
Consent for Publication
Not applicable. | 5,978 | 2022-01-31T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
CryptoAnalytics: Cryptocoins Price Forecasting with Machine Learning Techniques
This paper introduces CryptoAnalytics, a software toolkit for cryptocoin price forecasting with machine learning (ML) techniques. Cryptocoins are tradable digital assets exchanged at specific trading prices. While history has shown the extreme volatility of such trading prices, the ability to efficiently model and forecast the time series resulting from this volatility remains an open research challenge. Good results can be achieved with state-of-the-art ML techniques, including Gradient-Boosting Machines (GBMs) and Recurrent Neural Networks (RNNs). CryptoAnalytics is a software toolkit to easily train these models and make inferences on up-to-date cryptocoin trading price data, with facilities to fetch datasets from one of the main leading aggregator websites, i.e., CoinMarketCap, train models and infer future trends. The software is implemented in Python. It relies on PyTorch for the implementation of RNNs (LSTM and GRU), while for GBMs it leverages XGBoost, LightGBM and CatBoost.
1 Motivation and Significance
Introduction to CryptoAnalytics
Cryptocoins are digitally-encrypted assets, used mostly in peer-to-peer networks. Depending on the underlying blockchain, cryptocoins are rewarded to nodes in the network. History has shown the extreme volatility of cryptocoin trading prices. At first glance, one could consider these price trends unpredictable and the resulting time series a random walk. However, recent studies [1,2] revealed the presence of co-movement among different coins and cross-correlation phenomena in cryptocoin market price trends. The main purpose of CryptoAnalytics is to provide third-party clients with a fast and reliable tool to leverage these co-movement patterns in order to forecast cryptocoin prices. CryptoAnalytics implements a wide range of state-of-the-art ML techniques for time series forecasting (LSTM [3], GRU [4], XGBoost [5], LightGBM [6] and CatBoost [7]) to predict the market price of the desired cryptocurrency based on other closely-related coins.
Objective of CryptoAnalytics
The goal of CryptoAnalytics is to provide a wide range of potential clients (such as investors, institutions and/or governments) with a reliable and easy-to-use service to forecast cryptocoin prices. The crypto market is characterized by extreme volatility, with sudden and continuous changes in trading prices, as we further discuss in subsection 1.5. With CryptoAnalytics it is possible to handle these complexities using an efficient and scalable tool. Moreover, we extend our analysis to the deployment of CryptoAnalytics into a production environment (section 4), making it possible to design a cryptocurrency prediction service suitable for both technical and non-technical users, with very limited (or no) knowledge of ML algorithms.
Scientific Contribution
CryptoAnalytics has been used in the context of two scientific papers, presented at the IFIP/DAIS 2022 [8] and ACM/DEBS 2023 [9] conferences, respectively. In [8], CryptoAnalytics was leveraged to investigate daily, weekly and monthly correlation patterns exhibited by the two main cryptocoins, Bitcoin and Ethereum, against a remaining set of 66 altcoins. Moreover, in [9], CryptoAnalytics was used to study the trend correlations between and across a large set of 62 cryptocoins and subsequently to forecast the Ethereum and Bitcoin price series based on the trends of strongly-correlated crypto-assets. The results showed that CryptoAnalytics was able to provide reliable price forecasts with all the proposed state-of-the-art ML models.
Theoretical Foundations
To implement CryptoAnalytics, we considered two families of ML models, Gradient-Boosting Machines (GBMs) and Recurrent Neural Networks (RNNs), both adapted to cryptocoin price series forecasting. In [9] it was demonstrated that both GBMs and RNNs achieve reliable estimations of cryptocoin price series. More specifically, gradient-boosting machines were able to predict with high accuracy both stable trends, with no trace of short-term peaks/falls, and unstable ones. On the other hand, recurrent neural networks were less accurate in modeling stable price trends. In the following paragraphs we describe these models in detail.
Gradient-Boosting Machines
GBMs are "ensembles" of classification and regression trees [10]. The main idea is to improve a single weak model by combining it with other weak models in order to generate a collectively strong model. In GBMs, the iterative generation of weak models is determined by minimizing the gradient of the chosen loss function. XGBoost, LightGBM and CatBoost are three notable state-of-the-art GBMs.
XGBoost is an open-source, scalable and distributed GBM that builds trees in parallel rather than sequentially. Microsoft's LightGBM, on the other hand, is characterized by fast training speed and efficiency, fairly low memory usage and scalability. CatBoost introduces ordered boosting to avoid the prediction shift of the learned model, a common problem in traditional GBM training.
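As a rough sketch of how a GBM can be applied to this task (not CryptoAnalytics' internal code), the following example trains an XGBoost regressor to predict a target coin's price from correlated feature coins; the file name and the precomputed avg_ohlc column are assumptions.

```python
import pandas as pd
from xgboost import XGBRegressor

# Assumed input: one row per (Date, Coin) with a precomputed avg_ohlc column
df = pd.read_csv("dataset.csv")
wide = df.pivot(index="Date", columns="Coin", values="avg_ohlc")

features, target = ["ETH", "XRP", "DOGE", "ADA"], "BTC"
split = int(len(wide) * 0.8)                     # chronological 80/20 split
X_train, y_train = wide[features][:split], wide[target][:split]
X_valid, y_valid = wide[features][split:], wide[target][split:]

model = XGBRegressor(n_estimators=500, learning_rate=0.05,
                     early_stopping_rounds=50)   # early-stop on validation stall
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=False)
preds = model.predict(X_valid)
```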
Recurrent Neural Networks
RNNs are a family of neural networks where the behavior of hidden neurons is determined not only by the activations in previous hidden layers, but also by earlier stages [11]. The training process of RNNs is usually complex, due to the unstable gradient problem: as a result of this phenomenon, vanilla RNNs are unable to model long-term dependencies, lacking predictive ability when dealing with long sequences of data. Gated RNNs (LSTMs and GRUs) circumvent this problem in practical applications. LSTMs embed cells with internal recurrence (a self-loop), in addition to the outer recurrence of the RNN. GRUs use a single gating unit that simultaneously controls the forgetting factor and the decision to update the state unit.
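The following sketch shows a minimal PyTorch LSTM regressor of the kind described above, mapping a window of feature-coin prices to a target-coin price; the network sizes and toy data are illustrative assumptions, not CryptoAnalytics' implementation.

```python
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    def __init__(self, n_features, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # regress from the last time step

model = PriceLSTM(n_features=4)           # e.g. 4 feature coins
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(32, 30, 4)                # 32 windows of 30 days, 4 feature coins
y = torch.randn(32, 1)                    # scaled target-coin prices
for _ in range(10):                       # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```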
Cryptocurrency Market Challenges
Cryptocurrencies surely represent a cutting-edge innovation in the field of financial technology. In [12], cryptocoins are compared with two traditional and massively adopted financial assets: foreign exchange and stocks. Based on a four-year analysis of daily close price trends, the authors conducted a comparative study of five properties: volatility, centrality, clustering structure, robustness and risk. The cryptocurrency market proved to be more similar to the stock market, but characterized by a higher rate of fragility and risk. Evidence of this fragility can be found in the extreme volatility of cryptocoin trading prices; possible reasons for this behavior include the lack of adequate regulation, the inherently speculative nature of these assets, the lack of an institutional guarantor and pump-and-dump actions enacted by large stakeholders (i.e., whales owning large percentages of the issued coins). The complex nature of this market, characterized by sudden and frequent price fluctuations, determines the need for reliable guidance for all potential investors. CryptoAnalytics can represent an ideal solution to address these open challenges, by providing a fast, easy-to-use and reliable way to forecast future cryptocoin prices based on past trend correlations between these crypto-assets.
Software Requirements
There are no minimum hardware requirements for CryptoAnalytics. Since the training process for RNNs can be computationally intensive, CryptoAnalytics automatically detects any CUDA-capable [13] hardware accelerator (i.e., GPU) present on the machine, and uses it if available. Otherwise, the training process is performed on the CPU. Python v3.9.10 is required to run our tool.
Software Description
This section provides an overview of the CryptoAnalytics software architecture with a short description of its functionalities.
Software Architecture
CryptoAnalytics implements a cryptocoin forecasting pipeline, driven by easy-to-use processes and exposed through a command line interface covering five major functionalities: (1) data pull, (2) data split, (3) model pretrain, (4) model forecast, and optionally (5) correlation analysis. The execution workflow is depicted in Figure 1 and the software functionalities are described in detail next.
Software Functionalities
The data pull functionality provides methods to generate a new dataset of Open-High-Low-Close (OHLC) cryptocoin prices from the leading aggregator CoinMarketCap [14]. OHLC charts are commonly used to illustrate the price movements of financial instruments (in this case cryptocurrencies) over time. Upon execution (Fig. 1-➊), data pull produces a new dataset ready to be used in the further analyses. User-specified arguments are: the destination directory, the file name, the list of coins to include in the dataset (in .json format), and the start and end dates of the pull (in %d-%m-%Y format). This dataset is sent as input (Fig. 1-➋) to the following step.

The data split functionality provides methods to generate train and validation sets from the original data. Upon execution, data split produces two dataset splits in .csv format: train and validation (Fig. 1-➌). The train and validation sets are used in the model training process (Fig. 1-➍). Moreover, the validation set is also used in the forecasting phase to predict the feature coins' price series over the forecasting horizon. User-specified arguments are: the destination directory, the file names, the path to the dataset gathered before, the price variable to consider (either the average OHLC price or just the Close price) and the ratios for the train/validation split (as floats).

The model pretrain functionality provides methods to efficiently pretrain ML models (RNNs and GBMs) for cryptocoin price forecasting. The execution of this command (Fig. 1-➎) produces a pretrained model that is used further (Fig. 1-➏) in the final forecasting phase. More specifically, pretrained RNNs are stored as .pth files, an extension designed to store serialized PyTorch state dictionaries, while pretrained GBMs are stored as .txt files. For all the considered ML models, the price forecast for a given user-specified cryptocurrency is obtained by observing the fluctuation of other highly-correlated coin series (namely, the "feature variables"). In order to pre-select these feature coins, the user might conduct a preliminary correlation analysis (detailed further below). For the experimental setting, a list of user-specified configurations is needed. These configurations vary with respect to the ML model adopted. For RNNs, the user must declare the size of the network (number of hidden layers and neurons), the number of training epochs and the batch size (namely, the number of samples processed before a model update). For GBMs, the user has to specify the number of tree splits generated by the model. Configurations common to all considered models are: the random seeds, the learning rate and the patience (i.e., the number of "tolerated" consecutive epochs/tree splits without a model improvement before early-stopping the training). User-specified arguments are: the destination directory, the file name, the paths to the train and validation sets, the ML model to use (either LSTM, GRU, XGBoost, LightGBM or CatBoost), the target coin to predict, the list of coins to use as feature/predicting variables (in .json format) and the list of configurations to use for the experimental setting (in .json format).

The model forecast functionality provides methods to predict future cryptocoin prices using the pretrained model. The validation set is used to fit a Holt-Winters Exponential Smoothing model [15] to forecast the feature coins' price series over the user-specified prediction horizon. The predicted feature coins are then fed to the pretrained ML model to generate price predictions for the selected target cryptocurrency. The resulting predictions are stored in .txt format (Fig. 1-➐). User-specified arguments are: the destination directory, the file name, the forecasting horizon (number of future daily prices to predict), the path to the validation set, the path to the pretrained model, the ML model to use, the target coin to predict and the list of coins to use as feature/predicting variables (in .json format). These arguments must be the same used for the pretraining process.

Finally, the correlation analysis functionality provides methods to analyze correlations among cryptocoin prices. It is not required for the main price prediction flow, but can be useful to pre-select highly correlated cryptocoins to use as feature variables. User-specified arguments are: the destination directory, the file names, the path to the original dataset, the price variable to consider (either the average OHLC price or the Close price), the time window to use for computations (either daily, weekly or monthly) and the correlation method to use (either Pearson, Kendall or Spearman [16]).
Illustrative example
This section provides a comprehensive illustrative example of the full price prediction flow with the CryptoAnalytics toolkit. File names and directories are listed as they appear in the project GitHub repository.
Data Pull
We start by pulling the dataset of cryptocoin OHLC market prices from CoinMarketCap. We pull data for a list of 8 cryptocoins (BTC, ETH, USDT, USDC, XRP, BUSD, ADA, DOGE), defined in the "coins.json" file inside the /examples directory. Moreover, we adopt a time frame of three months (15-08-2023 to 15-11-2023). A sketch of the command is shown below; it generates a "dataset.csv" file in the current working directory, made up of 6 columns (Date, Open, High, Low, Close and Coin).
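The invocation can be sketched as follows; the executable and flag names are assumptions inferred from the arguments described in the previous section, so the authoritative syntax is the one in the repository README:

```bash
# Hypothetical invocation of the data pull functionality (flag names assumed).
cryptoanalytics data_pull \
  --dest . \
  --filename dataset.csv \
  --coins examples/coins.json \
  --start 15-08-2023 \
  --end 15-11-2023
```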
Data Split
We then proceed by splitting the pulled data into train and validation sets. In this example we use a conventional split ratio of train = 80% and validation = 20%. The price variable considered for predictions is the average OHLC price. A sketch of the command is shown below.
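As before, this is a hypothetical sketch: the flag spellings and the output file names are assumptions, not the toolkit's verified CLI:

```bash
# Hypothetical invocation of the data split functionality
# (flag and output file names assumed).
cryptoanalytics data_split \
  --dest . \
  --filenames train.csv valid.csv \
  --dataset dataset.csv \
  --variable avg_ohlc \
  --ratios 0.8 0.2
```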
Correlation Analysis (optional)
The correlation analysis is optional, but can be useful to pre-select cryptocoins to use as feature variables for the model pretrain and forecast. In our example, we decide to use Bitcoin (BTC) as the target coin to predict, and we identify a set of 5 coins highly correlated (Pearson > 0.5) with BTC.
We use the Pearson coefficient to compute these correlations over a daily sliding window. A sketch of the command is shown below; it generates a "correlations.csv" file in the current working directory, containing a cross-correlation table with all the cryptocoins in the dataset. We exclude the stablecoins and pre-select ETH, XRP, DOGE and ADA as feature variables.
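A plausible shape of the invocation follows; the executable and flag names are assumptions, not verified against the toolkit:

```bash
# Hypothetical invocation of the correlation analysis functionality
# (flag names assumed).
cryptoanalytics correlation_analysis \
  --dest . \
  --filename correlations.csv \
  --dataset dataset.csv \
  --variable avg_ohlc \
  --window daily \
  --method pearson
```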
Model Pretrain
We can then pretrain our ML model to forecast the average OHLC price of Bitcoin. In this demonstration, we choose to train an LSTM (Long Short-Term Memory) neural network with a set of predefined configurations, specified in the "config_nn.json" file inside the /examples directory. In the same path, we define the set of pre-selected feature coins inside the "features.json" file. A sketch of the command is shown below; it generates an "lstm.pth" file in the current working directory, containing the pretrained LSTM model.
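Again, the sketch below uses assumed subcommand and flag names based on the arguments listed earlier:

```bash
# Hypothetical invocation of the model pretrain functionality (flag names assumed).
cryptoanalytics model_pretrain \
  --dest . \
  --train train.csv \
  --valid valid.csv \
  --model lstm \
  --target BTC \
  --features examples/features.json \
  --config examples/config_nn.json
```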
Model Forecast
Finally, we can use the pretrained model to make inference on unseen data. Our aim is to predict the price series of Bitcoin for the following week (namely, a forecasting horizon of 7 days). The configurations used in this phase must be the same as those adopted for pretraining. A sketch of the command is shown below; it generates the final output of the price prediction flow, that is, a "predictions.txt" file containing the forecasts of Bitcoin average OHLC prices for the weekly time horizon (16-11-2023 to 22-11-2023).
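As with the previous steps, this is a hypothetical sketch of the invocation, with assumed flag names:

```bash
# Hypothetical invocation of the model forecast functionality (flag names assumed).
cryptoanalytics model_forecast \
  --dest . \
  --horizon 7 \
  --valid valid.csv \
  --pretrained lstm.pth \
  --model lstm \
  --target BTC \
  --features examples/features.json
```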
The resulting predictions showed fairly good accuracy, with a Mean Absolute Percentage Error (MAPE) ≈ 6.57% and a Root Mean Squared Error (RMSE) ≈ 2438.37 USD with respect to the observed average OHLC prices from CoinMarketCap (Listing 6: Bitcoin weekly price predictions vs. real, in USD, as of 15-11-2023).
Software Deployment
In this section, we discuss how to deploy CryptoAnalytics to build fast and reliable cryptocurrency prediction services with ML algorithms. To conduct our analysis, we evaluated the performance of CryptoAnalytics using three benchmark frameworks: TorchServe [17], BentoML [18] and MLFlow [19]. Moreover, we considered two MLFlow scenarios: a base one, with a local Flask server, and a second one with MLServer [20]. We performed our benchmark study on Ubuntu 22.04.2 LTS, Linux kernel 5.15.0-88-generic, with a 64-core Intel™ Xeon™ E5-2683 v4 CPU clocked at 2.10 GHz and 128 GB of RAM. For the evaluation, we used oha [21] to submit multiple prediction requests to the CryptoAnalytics server for a total time of 2 minutes. We analyzed the response time in relation to the number of steps (future daily prices to predict), ranging from 0 (the baseline) to 32. The results are shown in Figure 2.
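For reference, a load test of this shape can be reproduced with an oha invocation along the following lines; the endpoint URL and port are placeholders, not the addresses used in our setup:

```bash
# Submit prediction requests for 2 minutes against a deployed model server
# (TorchServe, BentoML or MLFlow); replace the URL with the actual endpoint.
oha -z 2m http://localhost:8080/predict
```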
Overall, the number of steps has no significant impact on the response time. MLFlow shows the best performance, with almost all response delays distributed around the minimum value of ≈ 0.04 seconds. These results are consistent in both MLFlow scenarios.
Examples of ready-to-use CryptoAnalytics deployments for each framework are available in the project repository.
Impact
The analysis and forecasting of cryptocoin price trends still poses challenges in both academic and industrial contexts, due to the abrupt nature of their fluctuations over time. The rapid development of ML techniques has enabled new approaches to time series modeling, as a valid alternative to traditional econometric ones (e.g., ARMA and VAR processes [23]).
However, these algorithms rely on complex statistical and mathematical concepts, making it difficult for end users lacking prior technical knowledge to train and validate such models. Moreover, even skilled users could benefit from a time-saving solution that does not require any experimental setup, but just a simple command-line interface that embeds the entire ML workflow.
CryptoAnalytics aims to be a solution for both types of users, providing them with a simple, clear and effective interface that allows them to obtain cryptocoin price predictions with just a few lines of bash code. Moreover, we showed that CryptoAnalytics can be deployed efficiently using state-of-the-art frameworks to build fast and reliable cryptocoin prediction services for a wide range of users. The nature of this software also makes it a good fit for business-oriented applications. Indeed, fluctuations in the prices of digital currencies naturally raise concerns among investors, making the behaviour of crypto markets very difficult to predict. For this reason, a fast, easy-to-implement and reliable solution like CryptoAnalytics can help accurately forecast cryptocoin prices, in order not only to help investors make decisions, but also to help governments design regulatory policies [24]. As previously mentioned, the use of CryptoAnalytics is attested in [8] and [9], where it has been applied to the analysis of cross-correlation patterns over a large set of cryptocoins in order to forecast their future price series.
Related Work
Cryptocoin Prediction Tools
Since cryptocurrencies have become significant investment drivers in recent years, the ability to predict their prices is crucial for investors to make informed decisions. Here we briefly describe some of the most well-known tools on the online market that represent possible alternatives to CryptoAnalytics.
1. WalletInvestor [25] is an online prediction site that makes use of ML algorithms to produce price forecasts. WalletInvestor's cryptocurrency predictions are based on multiple economic factors, such as changes in exchange rates, trade volumes and volatilities of the past period. Long-term (3-month, 1-year and 5-year) forecasts for more than 800 coins are available for free.
2. CryptoPredictions [26] is an online tool that makes use of historical exchange rates and market data to predict the future price trend of a given coin. The forecasting algorithm uses a combination of linear and polynomial regressions. Daily, monthly and yearly predictions for over 8000 cryptocurrencies are available for free.
3. DigitalCoinPrice [27] is a price-tracking website for cryptocurrencies. It also provides a price prediction service for the listed cryptocoins based on historical data. Monthly and yearly predictions for over 8000 cryptocurrencies are available for free.
4. CryptoRating [28] is an online service for cryptocoin forecasting and analysis. It makes use of a unique and elaborate ML algorithm that takes into account multiple factors, such as the Crypto Volatility Index. A paid subscription is required to get full access to daily, monthly and yearly predictions for 100 cryptocurrencies.
Innovation and Advancements of CryptoAnalytics
Compared to other solutions already on the market, CryptoAnalytics has the advantage of being open-source, free and easy-to-access software. This makes CryptoAnalytics a useful tool for both industrial and research applications. Moreover, the public availability of the code makes it possible for other researchers and/or interested users to contribute and expand its scope. This can involve both the data sources (at the current state, just CoinMarketCap) and the ML algorithms to pretrain for practical forecasting. Also, the pool of models managed by CryptoAnalytics goes beyond traditional ML and time series modeling algorithms, including Deep Learning ones (like RNNs).
A noticeable difference with the current state of the art is that CryptoAnalytics provides individual price forecasts for each coin by looking at the time series of other highly correlated cryptocurrencies, instead of the historical (lagged) data of the target to predict. This approach, as anticipated in the previous section, was successfully adopted in [9]. Moreover, the study of co-movement and cross-correlation events in cryptocurrency market trends has been widely explored in the recent literature. In [1], the author found evidence of interdependencies between Bitcoin and Ether, with price responsiveness to major news in the market. In [2], the authors first showed that cryptocurrencies exhibit similar mean correlation among themselves, and then detected an independent behavior with respect to other financial markets.
Conclusions
Cryptocoins show very volatile trends. Despite this behavior, the presence of co-movement and cross-correlation patterns among cryptocoins suggests that it might be possible to forecast a coin's price evolution by observing fluctuations in other coins' trends. Machine Learning (ML) techniques like GBMs and RNNs nowadays constitute the state of the art in modeling and predicting complex, time-varying and large-scale price series. We presented CryptoAnalytics, a Python-based toolkit designed to easily train these models and make inference on up-to-date cryptocoin price data. Moreover, we discussed how to deploy CryptoAnalytics to build fast and reliable cryptocurrency prediction services using state-of-the-art frameworks (TorchServe, BentoML and MLFlow).
CryptoAnalytics can be a useful tool, in both business and academic applications, to gather information from cryptocoins and leverage their co-movement behaviors in order to model and forecast asset price trends.
PDALN: Progressive Domain Adaptation over a Pre-trained Model for Low-Resource Cross-Domain Named Entity Recognition
Cross-domain Named Entity Recognition (NER) transfers the NER knowledge from high-resource domains to the low-resource target domain. Due to limited labeled resources and domain shift, cross-domain NER is a challenging task. To address these challenges, we propose a progressive domain adaptation Knowledge Distillation (KD) approach – PDALN. It achieves superior domain adaptability by employing three components: (1) Adaptive data augmentation techniques, which alleviate cross-domain gap and label sparsity simultaneously; (2) Multi-level Domain invariant features, derived from a multi-grained MMD (Maximum Mean Discrepancy) approach, to enable knowledge transfer across domains; (3) Advanced KD schema, which progressively enables powerful pre-trained language models to perform domain adaptation. Extensive experiments on four benchmarks show that PDALN can effectively adapt high-resource domains to low-resource target domains, even if they are diverse in terms and writing styles. Comparison with other baselines indicates the state-of-the-art performance of PDALN.
Introduction
Named Entity Recognition (NER) is typically framed as a sequence labeling task that aims to locate and classify named entities in text into predefined semantic types, such as Person, Organization, Location, etc. NER is a fundamental task in information extraction (Karatay and Karagoz, 2015) and text understanding (Krasnashchok and Jouili, 2018). The effectiveness of most existing NER models depends on sufficient labeled data, which is time-consuming and labor-intensive to obtain. Current research proposes cross-domain NER, which enables NER on a low-resource target domain by transferring knowledge from other high-resource source domains.
However, it is challenging to build a cross-domain NER component with high precision and recall, due to the domain shift problem (Ben-David et al., 2010). When casting cross-domain NER as a transfer learning problem, most solutions (He and Sun, 2017; Yang et al., 2017; Aguilar et al., 2017; Liu et al., 2020b) require high-quality cross-domain features for knowledge transfer. Limited labeled data prohibit transfer learning from extracting informative features. Besides, it is hard to find a single training dataset covering all the required NER types. Even if words overlap across domains, their combinations and usage differ from each other.
Domain adaptation (Sun et al., 2015) is widely studied to solve the domain shift issue. Existing approaches mainly introduce either word-level or discourse-level domain adaptations to enable crossdomain NER. To mitigate the word-level discrepancy, previous endeavors propose distributed word embedding (Kulkarni et al., 2016), label-aware maximum mean discrepancy estimation (Wang et al., 2018), and projecting learning (Lin and Lu, 2018). As to the discourse-level discrepancy, existing approaches introduce multi-level adaptation layers (Lin and Lu, 2018), tensor decomposition (Jia et al., 2019), and multi-task learning with external information (Liu et al., 2020b;Aguilar et al., 2017). However, those methods require sufficient labeled data, which hinders their performances under low-resource scenarios. To tackle both label sparsity and domain shift problem, existing approaches (Liang et al., 2020;Simpson et al., 2020;Cao et al., 2020) exploit external resources to generate pseudo labels for the low-resource domain. Nevertheless, the less confident labels may deteriorate the robustness of models because of noise.
In this paper, we propose a progressive domain adaptation cross-domain NER model PDALN. It introduces a novel domain adaptation component, which is enhanced by a progressive KD framework.
PDALN addresses both word- and discourse-level domain adaptation in two low-resource scenarios: unsupervised and semi-supervised cross-domain NER. We first augment mix-domain training data by cross-domain anchor pairs, which alleviates the sparsity of the annotated target domain. Next, we enable knowledge transfer across domains through domain invariant features learned from a multi-grained MMD adaptation metric. Additionally, we fuse contrastive learning (Hadsell et al., 2006) with a pre-trained model to extract robust features. Finally, instead of directly fine-tuning the model on the augmented adaptive data under the MMD-based metric, we integrate the cross-domain NER model into a sequential KD framework to learn a low-capacity student model.
Base Model
To obtain expressive sentence features, we adopt a pre-trained language model (e.g. BERT (Devlin et al., 2018)) to encode the sentence $X = [x_{\mathrm{CLS}}, x_1, \ldots, x_N, x_{\mathrm{SEP}}]$ (after padding tokens in BERT) into the sentence representation $h = [h_{\mathrm{CLS}}, h_1, \ldots, h_N, h_{\mathrm{SEP}}]$. The task objective is the CRF loss $L_{crf} = -\log p(Y \mid X)$, where

$$p(Y \mid X) = \frac{1}{Z(X)} \prod_{i=1}^{N} \phi_n(y_i \mid h_i, V)\, \phi_e(y_{i-1}, y_i \mid A), \qquad (2)$$

with $\phi_n(y_i = j \mid h_i, V) = \exp(V_j^{\top} h_i)$. Here $h_i$ is the encoded contextualized word vector, $V$ is the weight matrix, $A$ is the parameter of the transition matrix $\phi_e$, and $Z(\cdot)$ is the normalization constant.
Maximum Mean Discrepancy (MMD) Measurement
MMD is defined on a particular function space $\mathcal{H}_k$ and measures the difference between the cross-domain distributions $(P_s, P_t)$. $\mathcal{H}_k$ is the Reproducing Kernel Hilbert Space (RKHS) endowed with a characteristic kernel $k$. The squared formulation of MMD is defined as

$$d_k^2(P_s, P_t) = \left\| \mathbb{E}_{x^s \sim P_s}[\varphi(x^s)] - \mathbb{E}_{x^t \sim P_t}[\varphi(x^t)] \right\|_{\mathcal{H}_k}^2,$$

where $\varphi : \mathcal{X} \to \mathcal{H}_k$ is the feature map. The most important property is that $P_s = P_t$ iff $d_k^2(P_s, P_t) = 0$. The characteristic kernel associated with the feature map $\varphi$ is the Gaussian kernel $k(\cdot, \cdot)$.

To calculate the MMD loss in cross-domain NER, we first compute the empirical estimate of the squared MMD between the BERT representations of source/target samples:

$$\hat{d}_k^2(H_s, H_t) = \left\| \frac{1}{N_s} \sum_{h^s \in H_s} \varphi(h^s) - \frac{1}{N_t} \sum_{h^t \in H_t} \varphi(h^t) \right\|_{\mathcal{H}_k}^2,$$

where $H_s$ and $H_t$ are the sets of BERT embeddings $h^s$ and $h^t$, with sizes $N_s$ and $N_t$ respectively.
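For concreteness, the following is a minimal PyTorch sketch of the empirical MMD estimate above with a single Gaussian kernel; the bandwidth value and any multi-kernel weighting used in the actual implementation are assumptions here, not details taken from the paper.

```python
import torch

def gaussian_gram(a: torch.Tensor, b: torch.Tensor, sigma: float) -> torch.Tensor:
    """Gram matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * sigma^2))."""
    sq_dists = torch.cdist(a, b) ** 2
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(h_s: torch.Tensor, h_t: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of the squared MMD between source embeddings h_s
    of shape (N_s, d) and target embeddings h_t of shape (N_t, d)."""
    return (gaussian_gram(h_s, h_s, sigma).mean()
            + gaussian_gram(h_t, h_t, sigma).mean()
            - 2.0 * gaussian_gram(h_s, h_t, sigma).mean())
```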
The Proposed Model
In this section, we present the structure of the proposed model. We first introduce the domain adaptation components. On the one hand, we design an adaptive data augmentation to tackle the label sparsity issue. On the other hand, we introduce a multi-grained MMD metric on the augmented adaptive data to extract domain invariant features. An intuitive illustration in Figure 1 shows how our domain adaptation approach mitigates the domain shift. Besides, we exploit the power of the pre-trained model to capture expressive data features. We integrate a sequential self-training strategy to progressively and effectively apply our domain adaptation components, as shown in Figure 2. We describe the details of cross-domain adaptation in Section 4.1 and progressive self-training for low-resource domain adaptation in Section 4.2.
Cross-domain Adaptation
When labels are insufficient in the target domain, most cross-domain NER models are vulnerable to over-fitting, thus yielding unsatisfactory performance. Therefore, we augment mix-domain data by cross-domain anchor pairs. These augmented data are defined as adaptive data, which can alleviate the data insufficiency problem. Our adaptive data are designed to simultaneously mitigate the domain gaps at both the word level and the discourse level. The adaptive data form an adaptive space, as shown in Figure 1, which bridges the two domains for cross-domain knowledge transfer.
Adaptive Data Augmentation
We first give the definition of a cross-domain anchor. An entity in the source domain is denoted by $e_s$, whose labels are $[y^s_{i_s}, \ldots, y^s_{j_s}]$. A target entity is $e_t$, whose labels are $[y^t_{i_t}, \ldots, y^t_{j_t}]$. The cross-domain anchor is a relationship between two entities from different domains: $y^s_{i_s} = y^t_{i_t}$ denotes that the two entities belong to the same entity type, i.e., their first labels coincide. Intuitively, the anchor pairs address the cross-domain word discrepancy by sharing words per NER type across domains.

Then, we use the cross-domain anchor pairs $M_{Anchor}$ to create the adaptive data $D_{aug}$. Suppose we have $e_p$, where $p \in \{s, t\}$ and $e_p \in X_p = [x^p_1, \ldots, x^p_{i_p}, \ldots, x^p_{j_p}, \ldots, x^p_{|X_p|}]$. Given an anchor pair $(e_p, e_q) \in M_{Anchor}$, where $q \in \{s, t\}$ and $q \neq p$, we replace $e_p$ in $X_p$ with $e_q$ to obtain the augmented adaptive sentence, as sketched below. Intuitively, the augmented adaptive sentences are regarded as mix-domain augmented data that share sentence patterns across domains. Such semantically or syntactically similar sentences constitute the adaptive data, which can explore the unknown area in the target domain. The grey space shown in Figure 1 (b) denotes the adaptive space, which is comprised of adaptive sentences like "The Australia firm's parent company." and "San Francisco will play three one-day internationals.". These two sentences are augmented by the cross-domain anchor pair ("Australia", "San Francisco"), both assigned the label "LOC". When model fine-tuning is performed on the adaptive data, the model can benefit from the cross-domain features acquired from the adaptive space to improve model generalizability on the low-resource target domain.
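A minimal sketch of this replacement step is shown below; the data representation (parallel token and label lists with an explicit entity span) is a hypothetical choice made for illustration, not the paper's actual data pipeline.

```python
def anchor_replace(tokens, labels, span, anchor_tokens, anchor_labels):
    """Replace the entity at tokens[span[0]:span[1] + 1] with an anchor
    entity of the same NER type taken from the other domain."""
    i, j = span
    new_tokens = tokens[:i] + anchor_tokens + tokens[j + 1:]
    new_labels = labels[:i] + anchor_labels + labels[j + 1:]
    return new_tokens, new_labels

# Example from the paper: swap "Australia" for its anchor "San Francisco".
tokens = ["The", "Australia", "firm's", "parent", "company", "."]
labels = ["O", "B-LOC", "O", "O", "O", "O"]
aug_tokens, aug_labels = anchor_replace(
    tokens, labels, (1, 1), ["San", "Francisco"], ["B-LOC", "I-LOC"])
```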
Multi-grained MMD for Domain-invariant Features
As aforementioned, the adaptive space functions as a cross-domain bridge. In this part, we seek to strengthen its domain adaptability and further aggregate the cross-domain features. We adapt domain-adaptation MMD (Long et al., 2015) to gather data points with similar word and sentence features, as shown in Figure 1 (c). Since MMD computes the norm of the difference between two domain means, an MMD-based NER objective can learn representations that are both discriminative and domain invariant. We propose the multi-grained MMD method to simultaneously alleviate both the word-level and discourse-level discrepancy.
To distinguish the adaptation at the word level and the discourse level, we propose a word MMD loss and a sentence MMD loss, denoted by $L^w_{MMD}$ and $L^d_{MMD}$ respectively. The sentence-level loss is

$$L^d_{MMD} = \hat{d}_k^2(H^s_{CLS}, H^t_{CLS}),$$

where $H_{CLS}$ is the set of CLS token embeddings; CLS is the sentence pool output for the token CLS in the pre-trained language model. The word-level MMD loss is computed per label $y \in \{B\text{-}X, I\text{-}X, O\}$:

$$L^w_{MMD} = \sum_{y} \mu_y\, \hat{d}_k^2(H^s_y, H^t_y),$$

where $\mu_y$ is the corresponding coefficient and $H_y$ is the set of token embeddings with label $y$.
Finally, the representations of a sentence and its tokens are the domain invariant features, which capture the cross-domain knowledge under the guidance of $L^d_{MMD}$ and $L^w_{MMD}$. As shown in Figure 1 (c), the domain invariant features work to gather samples around the adaptive space to assist adaptation on both the source and target domains.
Self-training for Low-Resource Domain Adaptation (DA)
Robust Feature Adaptation
Considering the limited vocabulary and noisy data samples in both the source and target domains, we adopt contrastive learning (Hadsell et al., 2006; Ye et al., 2020; Liu et al., 2020a; Wu et al., 2020) to extract robust features through text augmentation such as synonym replacement (Wu et al., 2020) and span deletion (Wei and Zou, 2019). We construct a distorted dataset, where $z = W h_{CLS}$ is the mapping vector of a sentence $X$ and $W$ is a trainable parameter. $\tilde{z} = W \tilde{h}_{CLS}$ is the mapping vector of $\tilde{X}$, which is augmented by applying synonym replacement or span deletion to $X$. $Z_{neg}$ is constructed from the other sentences in $D \cup D^c$ except $X$ and $\tilde{X}$. $\tau$ is a temperature hyper-parameter. A common formulation of such a loss is sketched below.
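As a reference point, the sketch below shows a standard InfoNCE-style formulation using the paper's temperature $\tau$; this is a common choice for this setup rather than PDALN's verified loss.

```python
import torch
import torch.nn.functional as F

def info_nce(z: torch.Tensor, z_pos: torch.Tensor,
             z_neg: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """InfoNCE-style loss for one sentence: z (d,) is the anchor mapping
    vector, z_pos (d,) its distorted view, z_neg (K, d) the negatives."""
    pos = F.cosine_similarity(z, z_pos, dim=0) / tau               # scalar
    neg = F.cosine_similarity(z.unsqueeze(0), z_neg, dim=1) / tau  # (K,)
    logits = torch.cat([pos.unsqueeze(0), neg])
    # Maximize the probability of the positive pair among all candidates.
    return -F.log_softmax(logits, dim=0)[0]
```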
Low-Resource Objectives
To address the low-resource scenarios, we consider both zero-resource and minimal-resource cross-domain NER training settings. We first train the base model on both the source domain and the target domain to seek the cross-domain bridge through multi-grained MMD adaptation. The unsupervised cross-domain NER loss is denoted as

$$L_{uns} = L_{crf}(D^s) + \alpha\, L^d_{MMD}(D^s, D^{tu}),$$

which is free of any annotated target examples but still enables domain adaptation through $L^d_{MMD}(D^s, D^{tu})$. The semi-supervised cross-domain NER objective is denoted as

$$L_{semi} = L_{crf}(D^s \cup D^{tl}) + \alpha\, L^d_{MMD} + \beta\, L^w_{MMD},$$

where $\alpha$ and $\beta$ are hyperparameters balancing the multi-grained MMD loss terms.
Progressive Joint KD and DA
We propose progressive domain adaptation by integrating a sequential teacher-student framework to prevent the model from over-fitting on the limited labeled data and the augmented adaptive data. The intuition is that the student easily overlooks "problematic" examples but learns things that generalize well; the KD framework therefore enjoys the merit that it progressively improves the domain adaptation confidence over the data. The cross-domain NER loss is also computed over the adaptive data $D_{aug}$. In the progressive KD framework, we use $f_{\theta_{tea}}$ and $f_{\theta_{stu}}$ to denote the teacher and student models, respectively. Suppose $\hat{\theta}$ is the base model learned by the objective in Equation 9; we initialize the teacher and the student as $\theta^{(0)}_{tea} = \theta^{(0)}_{stu} = \hat{\theta}$. At the $t$-th iteration, the student is trained with a distillation loss $L_{distill}$ that matches the student outputs to the teacher outputs over all $N$ entities of each $X \in D_{aug}$, where $f_{\cdot,n}(X)$ denotes the output for entity $n$. The updated model is $\hat{\theta}^{(t)}_{stu} = \arg\min_{\theta_{stu}} L_{distill}$. Finally, we promote the student to be the teacher for the $(t+1)$-th iteration: $\theta^{(t+1)}_{tea} = \hat{\theta}^{(t)}_{stu}$. A sketch of this loop is given below.
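The sketch below shows the sequential promotion scheme in Python; the distillation objective is abstracted behind distill_step, since its exact form is the paper's $L_{distill}$, and the number of rounds is a placeholder.

```python
import copy

def progressive_kd(base_model, adaptive_data, distill_step, rounds: int = 3):
    """Sequential teacher-student scheme: both models start from the base
    model, and after each round the trained student becomes the teacher."""
    teacher = copy.deepcopy(base_model)       # theta_tea^(0)
    student = copy.deepcopy(base_model)       # theta_stu^(0)
    for _ in range(rounds):
        # Minimize L_distill: match student outputs to teacher outputs
        # over every entity of every augmented sentence in D_aug.
        student = distill_step(student, teacher, adaptive_data)
        teacher = copy.deepcopy(student)      # promote for the next round
    return student
```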
Experiments
In this section, we evaluate PDALN and other baselines on four public benchmarks. We conduct two groups of comparison experiments for unsupervised and semi-supervised cross-domain NER separately. We also conduct further ablation studies and hyperparameter studies to validate the efficacy of the domain adaptation approaches.
Baselines
We compare PDALN with the following state-of-the-art cross-domain NER models. BiLSTM+CRF (Lample et al., 2016) harnesses character-level Bi-LSTMs to capture morphological and orthographic features and word-level Bi-LSTMs to integrate sentence grammar features; the model stacks a CRF layer on top to predict the labels while considering their dependencies. BERT+CRF replaces the traditional BiLSTM component with the powerful pre-trained language model BERT to obtain more informative, contextually enhanced word representations. La-DTL (Wang et al., 2018) proposes label-aware MMD metric learning to mitigate the word distribution discrepancy. DATNet (Zhou et al., 2019) proposes a generalized resource-adversarial discriminator to capture the shared feature space across different domains; the domain-shared space then guides the target domain prediction on the NER task. JIA2019 (Jia et al., 2019) combines a language model and the NER task in a multi-task learning structure, and then exploits tensor decomposition to learn task embeddings for cross-domain NER prediction. Multi-Cell (Jia and Zhang, 2020) proposes a multi-cell compositional LSTM structure for cross-domain NER under the multi-task learning strategy.
In addition, we evaluate two variants of PDALN, in which we replace the sequential KD framework in the self-training stage with MT and VAT, i.e., the Mean Teacher strategy (Tarvainen and Valpola, 2017) and Virtual Adversarial Training (Miyato et al., 2018), respectively.
Training and Implementation Details
We adopt the Adam optimization algorithm with a decreasing learning rate of 0.00005. We utilize the pre-trained BERT (BERT-base, cased), where the number of transformer blocks is 12, the hidden layer size is 768, and the number of self-attention heads is 12. Each batch contains 32 examples, with a maximum encoding length of 128. The coefficient $\mu_y$ in Equation 6 is 0.25. The temperature hyper-parameter is $\tau = 0.05$. We choose 100 labeled target examples and 500 labeled source examples to augment adaptive data of size 1400 (100*4 + 500*2): each target example undergoes 4 anchor word replacements, yielding 4 augmented sentences, while each source example undergoes 2 replacements. In particular, we take 10/100/240 target/source/adaptation examples for the Webpage dataset, due to its insufficient target examples.
Results and Discussion
Domain Adaptation on Unsupervised NER. The unsupervised NER follows the zero-shot paradigm, preventing model training from using any labeled test-domain data. Compared with the other unsupervised NER baselines, PDALN achieves the best F1 on all benchmarks, even where it falls short on precision scores. As the unsupervised NER results in Table 1 show, PDALN and BERT+CRF both attain competitive recall scores, which benefits from the powerful contrastive-learning-fused pre-trained language model. On WNUT2016 and Wikigold, however, PDALN surpassing BERT+CRF shows the benefits of sentence-level domain adaptation through $L^d_{MMD}$ and robust feature extraction through $L_c$.
Evaluation on Semi-supervised NER. As shown in Table 1, most of the baselines cannot achieve a decent performance gain by taking in limited annotated resources. In contrast, PDALN outperforms the best published baseline by 1.5% to 4.0% on all benchmarks. Most of the existing approaches adopt BiLSTM as their fundamental component to aggregate input information. Unfortunately, BiLSTM cannot capture expressive sentence features due to its intrinsic shortcomings, namely vanishing or exploding gradients. Therefore, these approaches are prone to increased false-positive predictions and suffer from unsatisfactory recall scores. Even though pre-trained language models can attain stunning recall scores, their precision scores fall dramatically behind the baselines; the main reason is that such powerful pre-trained models are prone to over-fitting on small annotated data. Compared with BERT+CRF, our promising precision gain and increased recall scores show that our model strikes a successful tradeoff between precision and recall. Besides, we compare with two variants (w/ MT and w/ VAT) of our model using different KD strategies, namely Mean Teacher and Virtual Adversarial Training. Their performance is close to ours on the high-quality labeled data, SciTech, but their performance on the other domains shows they are vulnerable to noise and easily overfit on limited annotated samples. PDALN overcomes this through progressive domain adaptation with moderate knowledge distillation from the teachers.
Ablation Study
We conduct ablation studies that quantify the contribution of each adaptation component in PDALN. As Table 2 shows, the removal of augmented data causes dramatic performance decreases on all four benchmarks, indicating that adaptive data augmentation plays the most vital role in the low-resource cross-domain NER task. Our progressive KD framework shows its importance for precision gain, as removing $L_{distill}$ causes the worst precision drop. Our multi-grained MMD methods (either the sentence-level or the word-level MMD) also make noteworthy contributions to cross-domain NER adaptation, as their removal likewise causes serious performance loss. The removal of $L_c$ attests that the robust feature extraction works well when the annotated data (e.g. Wikigold) are not very precise.
Evaluation on Entity Type We provide PDALN's performance on each entity type in
Related Work
Recently, tackling label sparsity has attracted great interest in many research frontiers (Liu et al., 2021; Zhang et al., 2020a,b,c; Xia et al., 2018, 2020, 2021). One of the widely adopted strategies is cross-domain transfer, which mainly deals with the domain shift problem. The causes of domain shift in NER are mainly twofold: discrepancies in word distributions and in sentence patterns between the source and target domains.
On the one hand, word distributions are not compatible between different domain datasets. Therefore, existing works equip the model with diverse domain adaptation components to alleviate domain shift. Kulkarni et al. (2016) propose distributed word embedding methods to leverage domain-specific knowledge to boost their cross-domain NER performance. Wang et al. (2018) introduce a label-aware mechanism into maximum mean discrepancy (MMD) to explicitly reduce domain shift between the same labels across domains in medical data. Lin and Lu (2018) employ projecting learning to obtain a transfer matrix that maps target domain words into the word space of the source domain.
On the other hand, diverse sentence patterns are usually caused by various factors, such as writing styles, publication categories, data quality, etc. The solutions for mitigating the discourse-level discrepancy mainly include multi-level adaptation layers (Lin and Lu, 2018), tensor decomposition (Jia et al., 2019) and multi-task learning with external information (Liu et al., 2020b; Aguilar et al., 2017). As mentioned before, Lin and Lu (2018) construct a word adaptation component in their model; in addition, they construct a sentence-adaptation layer, which takes in the adapted word embeddings to extract an additional adaptive sentence feature. Jia et al. (2019) use multi-task learning and tensor decomposition to extract latent factors, through which knowledge can be transferred across the source and target domains. Liu et al. (2020b) employ NER label experts to guide model learning between domains; the label-aware guidance layer is key to enabling domain adaptation. Jia and Zhang (2020) propose a multi-cell compositional LSTM structure for cross-domain NER under the multi-task learning strategy. Besides, other works (Liang et al., 2020; Simpson et al., 2020; Cao et al., 2020) exploit external resources to generate pseudo labels for the low-resource domain with the assistance of a pre-trained language model. However, those methods either lack the capability to capture expressive text features for the adaptation or require sufficient labeled target data, which impedes their performance under both zero-resource and minimal-resource scenarios. Moreover, the pre-trained-model-assisted approaches mainly rely on external knowledge bases, which introduces too much noise.
Conclusion
In this paper, we propose a progressive adaptation knowledge distillation framework, including anchor-guided adaptive data to address data sparsity, multi-grained MMD to bridge the domain adaptation, and progressive KD to stably distill cross-domain knowledge. The results exhibit the model's superiority over most state-of-the-art approaches.
Prescribing the scalar curvature problem on the four-dimensional half sphere
In this paper, we consider the problem of prescribing scalar curvature under minimal boundary conditions on the standard four-dimensional half sphere. We describe the lack of compactness of the associated variational problem and we give new existence and multiplicity results.
Introduction and main results
Let $(M^n, g)$ be an $n$-dimensional Riemannian manifold with boundary, $n \geq 3$, and let $\tilde{g} = u^{4/(n-2)} g$ be a metric conformal to $g$, where $u$ is a smooth positive function. Then the scalar curvatures $R_g$ and $R_{\tilde{g}}$ and the mean curvatures of the boundary $h_g$ and $h_{\tilde{g}}$, with respect to $g$ and $\tilde{g}$ respectively, are related by the following equations:

$$-\frac{4(n-1)}{n-2}\,\Delta_g u + R_g\, u = R_{\tilde{g}}\, u^{\frac{n+2}{n-2}} \ \text{ in } M, \qquad \frac{2}{n-2}\,\frac{\partial u}{\partial \nu} + h_g\, u = h_{\tilde{g}}\, u^{\frac{n}{n-2}} \ \text{ on } \partial M. \tag{1.2}$$

Prescribing the curvatures $R_{\tilde{g}} = K$ and $h_{\tilde{g}} = H$ thus amounts to finding a positive solution $u$ of (1.2). When $K$ and $H$ are constants, this problem is called "the Yamabe problem on manifolds with boundary"; it has been studied in the works [4,18,26,27,31,32]. When $K = 0$, the problem is called the boundary mean curvature problem, which has been studied by Escobar (see [28]) on manifolds that are not equivalent to the standard ball; on the ball, sufficient conditions in dimensions 3 and 4 are given in [1,2,25,29]. When $H = 0$, the problem is called the scalar curvature problem under minimal boundary conditions and has been studied in [14-17,23]. Previously, Cherrier [22] studied the regularity question for this equation: he showed that solutions of (1.2) which are of class $H^1$ are also smooth. We observe that the above problem is a natural generalization of the well-known scalar curvature problem on closed manifolds: to find a positive smooth solution of

$$-\frac{4(n-1)}{n-2}\,\Delta_g u + R_g\, u = K u^{\frac{n+2}{n-2}} \ \text{ on } M, \tag{1.3}$$

to which much work has been devoted (see [3,5-8,10,11,13,20,21,30,34,35,37]).
In this paper, we consider the case $H = 0$ on the standard four-dimensional half sphere under minimal boundary conditions. More precisely, let $K$ be a $C^2$ positive Morse function on $S^4_+$; we look for conditions on $K$ ensuring the existence of a positive solution of the problem

$$\begin{cases} L_g u := -\Delta_g u + 2u = K u^3, \ u > 0 & \text{in } S^4_+,\\ \partial u / \partial \nu = 0 & \text{on } \partial S^4_+, \end{cases} \tag{1.4}$$

where $g$ is the standard metric of $S^4_+ = \{x \in \mathbb{R}^5 : |x| = 1,\ x_5 > 0\}$. The main analytic difficulty of this problem comes from the presence of the critical Sobolev exponent on the right-hand side of the equation, which generates blow-up and lack of compactness. Indeed, due to the fact that the embedding $H^1(S^4_+) \hookrightarrow L^4(S^4_+)$ is not compact, the Euler-Lagrange functional $J$ associated with our problem fails to satisfy the Palais-Smale condition; that is, there exist noncompact sequences along which the functional is bounded and its gradient goes to zero. Therefore, it is not possible to apply the standard variational methods to prove the existence of solutions. There are also topological obstructions of Kazdan-Warner type to solving (1.4) [similar to the ones associated with (1.3)], and so a natural question arises: under which conditions on $K$ does (1.4) have a positive solution?
This problem has been studied by Li [33] and Djadli-Malchiodi-Ould Ahmedou [24] on the three-dimensional standard half sphere, using blow-up analysis of some subcritical approximations and topological degree tools. In [16,17], the authors gave some topological conditions on $K$ to prescribe the scalar curvature under minimal boundary conditions on half spheres of dimension greater than or equal to 4, using the method of "critical points at infinity" due to Bahri [9] and Bahri-Coron [11]. In particular, they obtained an Euler-Hopf-type criterion reminiscent of the formula obtained by Bahri-Coron [11] for the scalar curvature problem on $S^3$; see also Chang-Gursky-Yang [21].
In this paper, we give new existence as well as multiplicity results, extending all the previously known ones. To state our results, we need to introduce some notation and assumptions. We denote by $G$ the Green's function of the conformal Laplacian $L_g$ on $S^4_+$ and by $H$ its regular part, defined in (1.5). Let $0 < K \in C^2(S^4_+)$ be a positive Morse function satisfying the nondegeneracy condition $(H_0)$ at each critical point $y$ of $K$. Denoting by $\mathcal{K}$ the set of critical points of $K$ and by $\mathcal{K}^+$ the relevant subset of it, we associate to each $p$-tuple $\tau_p := (y_1, \ldots, y_p) \in (\mathcal{K}^+)^p$ a matrix $M(\tau_p) = (M_{ij})$, defined through $K$, $G$ and $H$ in (1.6). We denote by $\rho(\tau_p)$ the least eigenvalue of $M(\tau_p)$, and we say that a function $K$ satisfies condition $(H_1)$ if for every $\tau_p \in (\mathcal{K}^+)^p$ we have $\rho(\tau_p) \neq 0$. We then define the set $F_\infty$ in (1.7) and an index $i(\tau_p)$ in terms of the Morse indices $\mathrm{ind}(K, y_i)$ of $K$ at its critical points $y_i$. Now, we state our main result.
Then there exists a solution of problem (1.4) of Morse index less than or equal to $k + 1$.
Moreover, for generic $K$, a lower bound on $\#\mathcal{N}_{k+1}$ holds, where $\mathcal{N}_{k+1}$ denotes the set of solutions of (1.4) having Morse index less than or equal to $k + 1$. Observe that, taking $k$ above to be the maximal index over all elements of $F_\infty$, the second assumption is trivially satisfied. Therefore, in this case we have the following corollary, which recovers the previous existence result of Ben Ayed et al. [17].
then there exists at least one solution of (1.4). Moreover, for generic $K$, a corresponding lower bound on $\#\mathcal{S}$ holds, where $\mathcal{S}$ denotes the set of solutions of (1.4).
We point out that the main new contribution of Theorem 1.1 is that we address the case where the total sum in the above corollary equals 1, but a partial one does not equal 1; the main issue is the possibility of using such information to prove the existence of solutions of problem (1.4). Moreover, our result does not only give existence; under generic conditions, it also gives a lower bound on the number of solutions of (1.4). Such a result is reminiscent of the celebrated Morse theorem, which states that the number of critical points of a Morse function defined on a compact manifold is bounded below in terms of the topology of the underlying manifold. Our result can be seen as a sort of Morse inequality at infinity: indeed, it gives a lower bound on the number of metrics with prescribed curvature in terms of the topology at infinity. The remainder of this paper is organized as follows. In Sect. 2, we set up the variational structure and describe the lack of compactness of problem (1.4). In Sect. 3, we characterize the critical points at infinity associated with our problem. The last section is devoted to the proof of the main result.
Variational structure and lack of compactness
In this section, we recall the functional setting, the variational problem and its main features. Problem (1.4) has a variational structure, whose Euler-Lagrange functional we denote by $J$. We denote by $\Sigma$ the unit sphere of $H^1(S^4_+)$ and we set $\Sigma^+ = \{u \in \Sigma,\ u > 0\}$. Problem (1.4) is equivalent to finding the critical points of $J$ subject to the constraint $u \in \Sigma^+$. The Palais-Smale condition fails to be satisfied for $J$ on $\Sigma^+$. To describe the sequences failing the Palais-Smale condition, we need to introduce some notation. For $a \in S^4_+$ and $\lambda > 0$, let $\delta_{a,\lambda}$ denote the standard bubble, and let $P\delta_{a,\lambda}$ be the unique solution of the associated projected problem. We now define the set of potential critical points at infinity associated with the function $J$: for $\varepsilon > 0$, $p \in \mathbb{N}^*$ and $w$ either a solution of (1.4) or zero, we introduce the neighborhood $V(p, \varepsilon, w)$. The failure of the Palais-Smale condition can be described, following the ideas of [19,36,38], as follows:

Proposition 2.1 Let $(u_k)$ be a sequence in $\Sigma^+$ such that $J(u_k)$ is bounded and $\partial J(u_k)$ goes to zero. Then there exist an integer $p \in \mathbb{N}^*$, a sequence $(\varepsilon_k) > 0$ with $\varepsilon_k$ tending to zero, and an extracted subsequence of the $u_k$'s, again denoted $(u_k)$, such that $u_k \in V(p, \varepsilon_k, w)$, where $w$ is zero or a solution of (1.4).

If $u$ is a function in $V(p, \varepsilon, w)$, one can find an optimal representation, following the ideas introduced in Proposition 5.2 of [9] (see also pages 348-350 of [10]); the corresponding minimization problem has a unique solution $(\alpha, \lambda, a, h)$, up to a permutation. In particular, we can write $u$ as

$$u = \sum_{i=1}^{p} \alpha_i P\delta_{(a_i, \lambda_i)} + \alpha_0 (w + h) + v,$$

where $v$ satisfies the orthogonality conditions $(V_0)$ and $h \in T_w(W_u(w))$; here $T_w(W_u(w))$ and $T_w(W_s(w))$ are the tangent spaces at $w$ of the unstable and stable manifolds of $w$ for a decreasing pseudo-gradient of $J$. We write $P\delta_i = P\delta_{(a_i,\lambda_i)}$, and $\langle \cdot\,, \cdot \rangle$ denotes the scalar product defined on $H^1(S^4_+)$. Notice that Proposition 2.2 also holds if we take $w = 0$, in which case $h = 0$. In what follows, we say that $v \in (V_0)$ if $v$ satisfies $(V_0)$. Now, arguing as in [10, pages 326, 327 and 334], we have a Morse lemma which completely gets rid of the $v$ contribution and shows that it can be neglected with respect to the concentration phenomenon: there exists a change of variables under which $v$ is unique and satisfies suitable estimates.

We notice that in the $V$ variable we can define a pseudo-gradient by setting $\dot{V} = -\mu V$, where $\mu$ is a very large constant; then $V(s) = e^{-\mu s} V(0)$, which at $s = 1$ is as small as we wish. This shows that, to define our deformation, we can work as if $V$ were zero; the deformation then extends immediately, with the same properties, to a neighborhood of zero in the $V$ variable.
Characterization of critical points at infinity
Following Bahri [9], we introduce the following definition.

Definition 3.1 A critical point at infinity of $J$ in $\Sigma^+$ is a limit of a flow line $u(s)$ of the equation $\dot{u} = -\partial J(u)$ that remains in $V(p, \varepsilon(s), w)$ for $s \geq s_0$. Here, $w$ is either zero or a solution of (1.4), and $\varepsilon(s)$ is some function tending to zero when $s \to +\infty$. Using Proposition 2.2, $u(s)$ can be written in the form above. Denoting $a_i = \lim a_i(s)$ and $\alpha_i = \lim \alpha_i(s)$, we denote such a critical point at infinity by $(y_1, \ldots, y_p)_\infty$ or $(y_1, \ldots, y_p, w)_\infty$. If $w \neq 0$, it is called of $w$-type.

Proposition 3.2 For each $u \in V(p, \varepsilon, w)$, we have the following expansion:
Proof Since the function $h$ belongs to $T_w(W_u(w))$, which has finite dimension equal to the index of $w$, and using the fact that $\alpha_0^2 J(u)^2 = 1 + o(1)$, we obtain the stated expansion. Observe that, arguing as in [10] (page 354), the quadratic form involved is negative definite. Hence, our proof follows.
Proposition 3.3 For each $u \in V(p, \varepsilon)$, we have the following expansion (3.7), where $c_1$ and $c_2$ are some positive constants.
Proof Using [16], for $u = \sum_{j=1}^{p} \alpha_j P\delta_j \in V(p, \varepsilon)$, we have the expansion (3.8). The stereographic projection and a direct calculation show (3.9); similarly, we obtain (3.10). On the other hand, we have (3.11), and a straightforward computation yields (3.12) and (3.13). Using the above estimates and the fact that $J^2(u)\,\alpha_i^2 K(a_i) = 1 + o(1)$, Proposition 3.3 follows by arguments similar to those in [9].

Proposition 3.4 For each $u \in V(p, \varepsilon, w)$, we have the following expansion: (3.14)

Proof First observe that, arguing as in [9], easy computations show the estimates (3.15)-(3.21). Using these estimates and the fact that $J^2(u)\,\alpha_i^2 K(a_i) = 1 + o(1)$, Proposition 3.4 follows by arguments similar to those in [9].
3.2 Ruling out the existence of critical points at infinity in $V(p, \varepsilon, w)$ for $w \neq 0$

The aim of this section is to prove that, for $K$ a $C^2$ positive function satisfying the conditions of the theorem and $w$ a solution of (1.4), for each $p \in \mathbb{N}$ there is no critical point or critical point at infinity of $J$ in the set $V(p, \varepsilon, w)$.

Proposition 3.5 For $p \geq 1$, there exists a pseudo-gradient $W$ such that the following holds: there is a constant $c > 0$, independent of $u \in V(p, \varepsilon, w)$, such that the corresponding lower bound for the pseudo-gradient holds. This pseudo-gradient satisfies the Palais-Smale condition and increases the least distance to the boundary along any flow line.
Proof Observe first, from Proposition 3.2, the expansion of $\partial J$. For $i \in \{1, \ldots, p\}$, we introduce a suitable condition on $\lambda_i d_i$, where $d_i$ denotes the distance of $a_i$ to the boundary. Let $d_0 > 0$ be a fixed, small enough constant. We divide the set $\{1, \ldots, p\}$ into three subsets $T_1$, $T_2$ and $T_3$. In $T_2 \cup T_3$, we order the $\lambda_i$'s: $\lambda_{i_1} \leq \cdots \leq \lambda_{i_s}$. For a fixed constant $c > 0$ small enough, we define the corresponding vector fields $X$, $Y$ and $Z_1$. From Proposition 3.3, we obtain a first upper bound. Observe that if $i \in T_2 \cup T_3$, we have $w(a_i) > c d_i$, and therefore (3.25) holds. We need to add some more terms to our upper bound. Since $i_s \in I$, we can make the term $(\lambda_{i_1} d_{i_1})^{-3}$ appear in the last upper bound, and hence all the terms $(\lambda_i d_i)^{-3}$. Observe that, if $k \in T_1$ and $k \neq j$, the estimate (3.27) holds; thus, we can make the sum $\sum_{k \in T_1,\, j \neq k} \varepsilon_{ij}^{3/2}$ appear in the last upper bound. On the other hand, for $k, j \in I_{\lambda_{i_s}}$, we have $|\nu_k - \nu_j| = O(|a_k - a_j|)$. It remains to estimate the case where $k \in I_{\lambda_{i_s}}$ and $j \notin I_{\lambda_{i_s}}$: if $k \in T_2 \cup T_3$ or $j \in T_2 \cup T_3$, the required bound follows directly, while if $j, k \in T_1$, the claim follows from (3.30), and we deduce (3.39). Observe (3.40). For two fixed, large enough constants $m_3 > m_2 > 0$, one has (3.41). The pseudo-gradient $W$ is then defined by $W = m_4 (Y + m_2 X + m_3 Z_1) + h$, where $m_4 > 0$ is a large enough fixed constant. Thus, the first claim of the proposition follows; the second claim can be obtained by arguing as in [10].

Now, once mixed critical points at infinity are ruled out, it follows from [17] that the critical points at infinity are in one-to-one correspondence with the elements of the set $F_\infty$ defined in (1.7); that is, a critical point at infinity corresponds to a tuple $\tau_p := (y_1, \ldots, y_p) \in (\mathcal{K}^+)^p$ such that the related matrix $M(\tau_p)$ defined in (1.6) is positive definite. Such a critical point at infinity will be denoted by $\tau_p^\infty$. Like a usual critical point, it carries stable and unstable manifolds, $W_s^\infty(x_\infty)$ and $W_u^\infty(x_\infty)$, which can be easily described once a Morse-type reduction is performed; see [10]. In the following definition, we extend the notion of domination of critical points to critical points at infinity. Recall that the Morse index $i(x_\infty)$ of such a critical point at infinity is equal to the dimension of $W_u^\infty(x_\infty)$.

Definition 3.7 $x_\infty$ is said to be dominated by another critical point at infinity $x'_\infty$ if $W_u^\infty(x'_\infty) \cap W_s^\infty(x_\infty) \neq \emptyset$. If we assume that the intersection is transverse, then $i(x'_\infty) \geq i(x_\infty) + 1$.
Proof of Theorem 1.1
Setting $\ell := \max \{ i(\tau_p) : \tau_p \in F_\infty \}$ (4.1), it follows that the claimed lower bound holds, where $\mathcal{N}_{k+1}$ denotes the set of solutions of (1.4) having Morse index $\leq k + 1$. This concludes the proof of Theorem 1.1.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Determination and evaluation of minimum miscibility pressure using various methods: experimental, visual observation, and simulation
This research proposes a simultaneous technique using various methods to yield the most reliable Minimum Miscibility Pressure (MMP) value. Several methods have been utilized in this study, including the slim tube test, swelling test, vanishing interfacial tension test, visual observation during the swelling and vanishing interfacial tension tests, and simulation. The proposed method may reduce uncertainty and avoid doubtful MMP values, and it can also expose discrepancies among the results. Two samples were used in this study, namely Crude Oil AB-5 and Crude Oil AB-4. For Crude Oil AB-5, the discrepancies of the various results from that of the slim tube test were between 3.9% and 10.4% and between 0% and 5.9% for the temperatures of 60 °C and 66 °C, respectively. The highest discrepancy was shown by the visual observation during the vanishing interfacial tension test, and the lowest by the swelling test. The vanishing interfacial tension test was found to be the fastest method for predicting the MMP; it also consumes a smaller amount of oil and gas samples. The simultaneous method proposed in this study is considered more proper and constitutes a valuable method for predicting the MMP. To our knowledge, this technique has not been performed by previous researchers, and accordingly it is the strong point of this study's contribution to global research in the area of MMP determination.
In this study, we propose a new approach to produce the MMP with a high confidence level and less doubtful results. We use various methods, including the slim tube test, the vanishing interfacial tension (VIT) test, the swelling test, visual observation during the swelling and vanishing interfacial tension tests, and simulations, in order to obtain the MMP. Our approach is to simultaneously plot the results of slim tube test vs. swelling test, VIT test vs. swelling test, and slim tube test vs. VIT test in the same graph. In the analysis, we include the results of visual observation and simulations to obtain a more reasonable MMP value.
Preparation and procedures
Various stages are prepared to yield appropriate results for the slim tube test, swelling test, vanishing interfacial tension test, visual observation, and simulation. Afterwards, the entire set of results is compared in order to examine any discrepancies among them. Figure 1 exhibits the flowchart of our research methodology, while the following subsections describe each experimental stage conducted in the current study.
Light oil composition and CO 2 quality
Two types of crude oil are used in this study. The crude oil samples are taken from Layers AB-4 and AB-5 within the Air Benakat Formation of the South Sumatra Basin, located in Jambi Province, Indonesia. The composition and other properties of the oil samples are shown in Tables 1, S1a and S1b. The CO2 gas used in this study has a purity of 99.99%.
Slim tube test
The experimental procedure generally consists of three major stages: preparation, experiment, and cleaning. These stages apply to the slim tube test as well as to the other tests. The preparation stage for the slim tube test includes leak testing, slim tube saturation using up to 2.0 PV of the sample, setting the Back-Pressure Regulator (BPR) at the desired pressure, and setting the air bath temperature system at the reservoir condition. The experimental stage involves CO2 injection at a rate of 0.2 cc/min until it reaches 1.2 PV, effluent collection in the measuring cylinder, and oil volume measurement. At the same time, the change in oil color is observed in the visual cell to estimate the miscibility condition; the oil recovery is then calculated for each pressure and the results are plotted. The cleaning stage involves cleaning the oil sample from the slim tube using toluene and then nitrogen. The miscibility is estimated at the break-over point in the plot of recovery factor vs. pressure, as suggested by Yellig and Metcalfe (1980). The experimental diagram of the slim tube test is shown in Figure S1, and the description and specification of the slim tube are given in Table S2.
Swelling test
The main apparatus used for the swelling test in the present study consists of a high-pressure cell made of sapphire glass. To fill the cell with CO2, we use a precision pump, namely an ISCO 250DM pump. A heater is used to control the temperature of the air bath system, and a cooler helps keep the CO2 in the liquid state before it is injected into the cell. To obtain images and record the course of the experiment, we use a simple camera located outside the air bath system. A stirring bar located inside the cell is used to mix the CO2 and oil until the system reaches its equilibrium condition; a rare-earth magnet located within a slot outside the cell is used to control the movement of the bar. Other standard auxiliary equipment for measuring pressure and temperature is also included in the experimental system. Our swelling test experimental diagram is shown in Figure 2.
Vanishing interfacial test
The experimental diagram of the vanishing interfacial tension test is shown in Figure S2. Two syringe pumps from ISCO are used for water and CO2 injection, and a goniometer apparatus from Rame-Hart Instrument Co., combined with a visual cell, is used for this experiment. A high-pressure, high-temperature visual cell is equipped to measure the Interfacial Tension (IFT) at reservoir conditions. The cell diameter is 30 mm, its height is 60 mm, and its thickness is 16 mm. The maximum operating pressure and temperature of the visual cell are 3000 psi and 300°C, respectively. The needle, made from stainless steel, has an Outside Diameter (OD) of 0.91 mm and a length of 50 mm. A pair of face-to-face sapphire-glass windows, each with a thickness of 10 mm and a diameter of 30 mm, is attached to the visual cell. A certain volume of dead oil is mounted in the stainless-steel piston chamber, which has a 0-4000 psi operating pressure range. A metering valve and a check valve are applied to ensure a constant oil flow rate and to prevent flow-back within the cell. The temperature is measured with a calibrated thermocouple located inside the cell, and the pressure of the system is measured through a pressure indicator. All apparatus is connected using stainless-steel tubing lines.

Before initiating the measurement, all lines and the apparatus are cleaned using toluene, dried using nitrogen, and vacuumed. The pressure inside the cell is conditioned by injecting some CO2 into the cell, and the temperature is kept constant by the heater. A sequential experiment is run over a series of conditions, with pressure ranging from 700 to 2500 psi and temperature ranging from 60°C to 66°C. The pressure and temperature inside the cell are then maintained constant at the desired condition, which usually takes about 20-30 min. Water is pumped into the chamber at a rate of 0.1 cc/min, and the piston pushes the dead oil up inside the chamber. The dead oil flows from the chamber through the tubing line until it reaches the needle's tip. When the oil drop reaches the needle's tip, it hangs there, and this condition is kept stable for a certain time by adjusting the metering valve. In this experiment, the stable condition of the drop should be maintained for between 40 and 60 s, a time range suggested by previous researchers including Yang and Gu (2005) and Yang et al. (2015).
Visual observation
Visual observation during swelling test
Visual observation is performed through images captured as videos or pictures, similar to the method previously used by Wang (1986) and also performed by Abdurrahman et al. (2015). This method aims to visually observe the change in color of the oil as the pressure increases. The observation is made during the extraction-condensation stage, when the swelling factor begins to decrease. The MMP should be obtained when the interface between the CO2-rich phase and the CO2 vapor disappears. According to Abdurrahman et al. (2015), this method is obviously not accurate and should be regarded only as an approximate means of estimating the MMP.
Visual observation during VIT test
Visual observation through videos or pictures during the vanishing interfacial tension test is also used to determine the MMP in this study. The aim is to observe the change in the shape of the oil drop as the pressure increases. When the pressure in the view cell is increased, the CO2 dissolves in the crude oil and the shape of the oil drop at the tip of the needle gradually changes. This continues until the oil drop disappears from the needle tip. The MMP is obtained when the oil and the CO2 become one phase, i.e. when the oil drop disappears from the needle tip at some higher pressure. This phenomenon can be easily observed by visual means. Again, although this method is effective in recognizing when miscibility occurs, it is obviously not accurate and should be regarded only as an approximation.
Simulation
Zick (1986) identified a combined condensing/vaporizing gas drive mechanism in the development of miscibility between CO2 and crude oil. This phenomenon, however, cannot be observed on a ternary diagram. Consequently, various numerical simulation methods have been proposed by Jaubert et al. (1998a, 1998b, 2002). Jaubert et al. (1998a) presented a study on predicting the MMP using a compositional slim tube simulator; the results showed excellent accuracy, and the compositional slim tube simulator was about 18-80 times faster than a 1D simulator. Jaubert et al. (1998a) devoted their study to a real petroleum fluid model and noted that a one-cell simulator has some limitations for estimating the MMP. Because experiments for determining the MMP are time consuming and very expensive, Jaubert et al. (2002) applied swelling and multiple-contact tests in their study and concluded that these two tests are faster and cheaper for predicting the MMP.
Nowadays, numerical simulators offer several options for predicting the MMP. This work uses the CMG simulator, whose WinProp module provides options to calculate the MMP. The program can compute the Multiple-Contact Miscibility (MCM) or First-Contact Miscibility (FCM) pressure for a given oil and solvent at a particular temperature, or the Minimum Miscibility Enrichment (MME) level required for multiple- or single-contact miscibility at a given temperature, pressure, oil composition, and primary and make-up gas composition. A C7+ characterization has to be made, because the oil compositional data of the samples cannot be split directly while the simulator requires components to be defined up to C35. The MMP can then be determined for a given solvent composition by entering a range of pressures to be tested. The program reports the MMP if it is found, together with the mechanism by which miscibility is achieved, i.e. a vaporizing or condensing drive.
For the MMP calculation in the present study, the pressure range is divided into 10 increments. The calculation begins at the lowest pressure of 500 psi and terminates at the maximum pressure of 2500 psi. The results, including ternary diagrams, are collected at each pressure step and used to bracket the pressure range and values corresponding to the MMP. The required input data for the simulation include the temperatures of 60°C and 66°C, pressures ranging from 500 psi to 2500 psi, the oil composition, and the primary gas composition, i.e. CO2 with a purity of 99.99%. The equation of state used in this work is the Peng-Robinson EOS. The viscosity model is that of Jossi-Stiel-Thodos, with the aqueous-phase salinity (NaCl concentration) set to zero. The simulator version is CMG Software (2014).
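As an illustration of how such a stepped pressure scan can be refined once a bracketing interval is found, the sketch below bisects on a black-box miscibility predicate. The predicate merely stands in for a call to the phase-behaviour simulator; its interface is hypothetical, not WinProp's actual API.

```python
# Hedged sketch: refine a stepped pressure scan by bisection.  `is_miscible`
# is a hypothetical stand-in for a simulator call, not a real WinProp API.

def find_mmp(is_miscible, p_low=500.0, p_high=2500.0, tol=5.0):
    """Bisect for the lowest pressure (psi) at which miscibility is reported.
    Assumes is_miscible(p_low) is False and is_miscible(p_high) is True."""
    if is_miscible(p_low) or not is_miscible(p_high):
        raise ValueError("MMP does not lie inside the scanned pressure range")
    while p_high - p_low > tol:
        mid = 0.5 * (p_low + p_high)
        if is_miscible(mid):
            p_high = mid   # miscible: MMP is at or below mid
        else:
            p_low = mid    # immiscible: MMP is above mid
    return p_high

# Example with a toy predicate whose threshold plays the role of the MMP.
print(find_mmp(lambda p: p >= 1670.0))  # -> ~1670 psi
```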
Estimating MMP by slim tube test
The slim tube test uses Crude Oil AB-5 at the two reservoir temperatures of 60°C and 66°C. The slim tube results for both temperatures are shown in Figure S3. The MMP is determined using the break-over point technique suggested by Yellig and Metcalfe (1980). The results show that miscibility occurs at 1540 psi and 1700 psi for the two temperatures of 60°C and 66°C, respectively.
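To make the break-over construction concrete, the sketch below fits straight lines to the two branches of a recovery-versus-pressure curve and intersects them. The data points are synthetic placeholders (chosen so the lines cross near 1540 psi), not the measured slim tube results; the same construction applies to the swelling-factor-versus-pressure plots in the next subsection.

```python
import numpy as np

# Minimal sketch of the break-over-point construction: fit straight lines to
# the low- and high-pressure branches of recovery (or swelling factor) vs.
# pressure and intersect them.  All data below are synthetic placeholders.

def line_intersection(p1, y1, p2, y2):
    """Least-squares fit y = a*p + b to each branch; return intersection pressure."""
    a1, b1 = np.polyfit(p1, y1, 1)
    a2, b2 = np.polyfit(p2, y2, 1)
    return (b2 - b1) / (a1 - a2)

# Synthetic branches that break over near 1540 psi.
p_lo = np.array([900.0, 1100.0, 1300.0, 1500.0])
rec_lo = 40.0 + 0.025 * p_lo                  # steep rise below the MMP
p_hi = np.array([1600.0, 1900.0, 2200.0])
rec_hi = 78.5 + 0.001 * (p_hi - 1540.0)       # near-plateau above the MMP

print(f"break-over pressure ~ {line_intersection(p_lo, rec_lo, p_hi, rec_hi):.0f} psi")
```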
Estimating MMP by plots of swelling factor vs. pressure
The swelling test offers a graphical technique for estimating the MMP by plotting the measured swelling factor as a function of pressure. Tsau et al. (2010) suggested predicting the MMP from swelling test data by locating the intersection between the extraction-condensation line and the extraction line. Figure S4 shows that this intersection occurs at 1600 psi for 60°C and at 1700 psi for 66°C; these pressures are taken as the MMP estimates. Due to some technical limitations, this method could only be applied to Crude Oil AB-5.
Estimating MMP by plots of IFT vs. pressure
An experimental study by Yang and Gu (2005) explained that during the diffusion process the light and moderate components are rapidly extracted from the oil drop, enriching the CO2 phase in oil. This leads to a decrease in the Interfacial Tension (IFT) between the oil and the CO2. However, when the pressure increases toward the near-miscibility condition, mainly the heavy components remain in the crude oil; the oil drop then begins to shrink and the IFT decreases only slowly. Based on this explanation, two regions are recognized in the present vanishing interfacial tension tests: Region A, representing the diffusion stage, and Region B, representing the shrinkage stage. The MMP is determined by linear extrapolation of the Region A (diffusion) IFT-versus-pressure line to zero IFT. A linear regression analysis for estimating the MMP of Crude Oil AB-5 at 60°C and 66°C yields Equations (1) and (2). The first equation, for 60°C, has a correlation coefficient of R² = 99.99%; the second, for 66°C, also has R² = 99.99%. Given these R² values, the estimated MMPs are considered acceptable. Nevertheless, these equations apply only over the pressure range of 700-1500 psi at 60°C and 700-1550 psi at 66°C; above these ranges the equations may not be applicable because different phenomena occur.
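The zero-IFT extrapolation is a plain linear regression. The sketch below regresses IFT on pressure over the diffusion region and extrapolates to IFT = 0; the data points are synthetic stand-ins chosen to roughly mimic the reported Region A behavior of AB-5 at 60°C, not the measured values.

```python
import numpy as np

# Sketch of the zero-IFT extrapolation: regress IFT on pressure over the
# diffusion region (Region A) and extrapolate to IFT = 0.  Data are
# synthetic stand-ins, not the measured AB-5 values.

def mmp_from_ift(pressure, ift):
    slope, intercept = np.polyfit(pressure, ift, 1)
    fitted = slope * pressure + intercept
    ss_res = np.sum((ift - fitted) ** 2)
    ss_tot = np.sum((ift - np.mean(ift)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return -intercept / slope, r2   # pressure at IFT = 0, goodness of fit

p = np.array([700.0, 900.0, 1100.0, 1300.0, 1500.0])
ift = np.array([24.5, 19.2, 13.8, 8.4, 3.1])   # dyne/cm, illustrative only
mmp, r2 = mmp_from_ift(p, ift)
print(f"extrapolated MMP ~ {mmp:.0f} psi (R^2 = {r2:.4f})")
```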
Similarly, a linear regression analysis for estimating the MMP of Crude Oil AB-4 at 60°C and 66°C yields Equations (3) and (4), the latter being IFT = −0.0258 × P + 53.44. The first equation, for 60°C, has a correlation coefficient of R² = 98.7%; the second, for 66°C, has R² = 99.1%. Given these R² values, the estimated MMPs are again considered acceptable. Because different phenomena occur at other pressures, these equations apply only over 700-1800 psi at 60°C and 700-1900 psi at 66°C.
The MMP estimates at elevated pressure and temperature are shown in Figure S5 for Crude Oil AB-5 and in Figure S6 for Crude Oil AB-4. Setting the IFT in Equations (1) and (2) to zero shows that miscibility in Crude Oil AB-5 occurs at 1611 psi and 1777 psi at 60°C and 66°C, respectively. The MMP thus increases with temperature: raising the temperature from 60°C to 66°C increases the MMP by about 166 psi, or 27.7 psi/°C. Similarly, setting the IFT in Equations (3) and (4) to zero shows that miscibility in Crude Oil AB-4 occurs at 1918 psi and 2072 psi at 60°C and 66°C, respectively, an increase of about 154 psi, or 25.7 psi/°C. These results are reasonably consistent with those of Hemmati-Sarapardeh et al. (2013), who reported an MMP increase of about 22.6 psi/°C. At higher temperature, the CO2 solubility in the crude oil is lower, which results in a higher MMP.
Visual observation during swelling test
The MMP may be estimated by visual observation during the swelling tests, as suggested by Wang (1986). In the present study, the observation focuses on the point at which the swelling factor begins to decrease, which happens in the extraction-condensation stage. It should be noted that an MMP estimate from visual observation is necessarily subjective and may not reflect the true MMP, and the timing of the extraction-condensation and extraction stages in the cell must be determined carefully. Consistent with the definition used here, the MMP is estimated when the interface between the CO2-rich phase and the CO2 vapor disappears. Figures S7 and S8 depict the swelling test for Crude Oil AB-5. The oil color changes slightly as the pressure increases, and the more notable color change occurs when the swelling factor starts to decrease, suggesting that miscibility of the CO2 and the oil has been achieved. As noted by Huang et al. (1989) and Wang (1986), the stage at which the oil color starts to change from its original color with increasing pressure is known as the transition zone. As the pressure keeps increasing in this stage, the CO2 and the oil dissolve into each other and eventually become one phase, and the oil color looks brighter. In this experiment, the oil color starts to change at 1600 psi and 1700 psi for 60°C and 66°C, respectively. In the extraction process, where the pressure is higher than the MMP, the oil color becomes even brighter, as can be seen in Figure S9. This results from the large fraction of moderate components that has been extracted from the oil, leaving mainly the heavier components at the bottom of the cell. The heaviest components subsequently precipitate as black asphaltic flakes, as also observed by Wang (1986). Assuming the yellow color observed within the cell represents the moderate components, this finding provides clear evidence that the CO2 extracts only the intermediate components of the oil. Similar observations were reported by previous investigators, who noted that CO2 can extract only oil components from C5 to C30 (Stalkup, 1984). Photographic sketches illustrating the development of miscibility as the oil shrinks to its minimum volume were given by Wang (1986), who suggested that the MMP can be estimated visually during a swelling test experiment; the method was also used by Abdurrahman et al. (2015) to predict the MMP in their experiments. In the present work, the MMP is likewise estimated by visual observation. Figure S10 depicts Crude Oil AB-5 during the vanishing interfacial tension test at 60°C: at 1700 psi, the oil drop shape can no longer be recognized, as can be seen in the figure. Figure S11 shows the same test for Crude Oil AB-5 at 66°C; the drop shape changes slightly as the pressure increases, becomes irregular above 1650 psi, and can no longer be clearly recognized at 1800 psi. Figure S12 depicts the vanishing interfacial tension test at 60°C for Crude Oil AB-4, in which the oil drop changes only slightly as the pressure increases.
It is clearly seen during the experiment that the oil drop shape does not become irregular until the pressure reaches 2600 psi, where the IFT between the oil and CO2 is 0.91 dyne/cm, a value that can be categorized as ultra-low IFT. Figure S13 depicts similar behavior for Crude Oil AB-4 during the vanishing interfacial tension test at 66°C: the drop shape does not become irregular even when the pressure is increased to 2600 psi, the highest pressure achievable in the experiment (measurements above this pressure were not possible for technical reasons). At 2600 psi the drop shape is still regular and the interfacial tension between the oil and CO2 is 2.41 dyne/cm.
Determining MMP by simulations
Our simulations using the Peng-Robinson EOS give MMPs for Crude Oil AB-5 of 1670 psi and 1790 psi at 60°C and 66°C, respectively; for Crude Oil AB-4, the MMP is found to be 2030 psi and 2240 psi at 60°C and 66°C, respectively. The cell-to-cell method is used to detect miscibility, identified as the pressure at which the tie line reaches the critical point. The simulation method is much faster than the experiments and does not require much input data. In this work, the MMP calculation is performed without any tuning of the phase-behavior prediction, as suggested by Danesh (1998). The critical properties of the hydrocarbon and non-hydrocarbon components are defined in the simulator as shown in Table S3, and the binary interaction coefficients between the hydrocarbon and non-hydrocarbon components are given in Tables S4 and S5.
Comparing MMP estimations between slim tube and swelling factor vs. pressure plots
In this analysis, the MMP is determined graphically from the swelling test results by plotting the swelling factor as a function of pressure. This can only be done for Crude Oil AB-5 because of the limited slim tube data. As suggested by Tsau et al. (2010), the MMP is obtained where the straight lines of the extraction-condensation stage and the extraction stage intersect. The two lines representing these stages are therefore essential for estimating the onset of miscibility, and the MMP cannot be estimated with a high level of certainty if the straight lines cannot be well developed from the experimental data. In the condensation-extraction stage, also called the near-miscibility condition, the light-to-moderate components vaporize quite rapidly; in the extraction stage, fewer moderate components vaporize because their amount has already been reduced. The swelling factor decreases in both stages because the CO2 has extracted the light-to-moderate components. The intersection between the extraction-condensation and extraction straight lines is therefore crucial to construct in order to obtain the correct miscibility pressure. During the condensation stage, the solubility of CO2 in oil increases with pressure, and so does the swelling factor. At some point the oil is rich with CO2 and reaches its maximum swelling factor. Figure S4 clearly shows that the swelling factor reaches its maximum value of about 1.3 at 1400 psi; miscibility, however, has not yet occurred. After the oil is fully saturated with CO2, the extraction-condensation stage begins and some moderate components of the oil move into the CO2 phase, as indicated by the decrease in swelling factor. The extraction stage follows as the pressure increases further: more moderate components leave the oil phase, causing the swelling factor to keep decreasing but at a different rate. As a result, for the temperature of 60°C, the MMP is determined as 1600 psi (see Fig. S4).
Using the same procedure, the result for the other temperature of 66°C is also shown in Figure S4; the MMP in this case is 1700 psi. One important point to note is the effect of CO2 solubility on the swelling factor. At higher pressure the CO2 solubility is higher, so its effect on the swelling factor is more significant. The swelling factor of 1.4 at 66°C and 1600 psi, which exceeds the 1.3 observed at 60°C and 1400 psi in these experiments, is therefore mainly a pressure effect, pressure being more dominant here than temperature. It is also clear from Figure S4 that the MMP cannot be determined if the extraction stage does not occur, because the intersection between the straight lines of the extraction-condensation and extraction stages cannot then be identified. Because extraction proceeds much faster than swelling once miscibility has occurred, extraction causes the swelling factor to decrease rapidly. This phenomenon was also reported by previous investigators such as Tsau et al. (2010) and Harmon and Grigg (1988).
Regardless of the dominant effect of pressure, temperature in fact plays an important role in the extraction-condensation stage as well as in the extraction stage that follows. Because the solubility of CO2 in oil is lower at higher temperatures, the extraction of oil components is also lower. Thus, during CO2 injection at the higher temperature of 66°C, the CO2 dissolves only slightly in the oil, as indicated in Figure S4, so that only a low concentration of hydrocarbon can be extracted. As a result, the oil shrinkage is also small during the extraction-condensation stage and the oil volume returns to its initial value, as indicated by a swelling factor of unity. When injection is conducted at the lower temperature of 60°C, a different behavior is observed: more CO2 dissolves into the oil and the oil shrinkage is quite large, as shown in Figure S4. Because more oil components are extracted during the extraction-condensation stage, the swelling factor decreases to a value below 1.0. As the temperature increases, the MMP also increases; in the present study, the 6°C increase in temperature raises the MMP by 100 psi, or about 16.7 psi/°C. This result is very close to that of Elsharkawy et al. (1992), who reported temperature-induced MMP increases in the range of 18.10 to 27.02 psi/°C. Figure S4 clearly shows this temperature effect on the MMP: at higher temperatures, the condensation-extraction and extraction lines are slightly flatter than at lower temperatures. The likely reason is that at higher temperature the CO2 extracts fewer oil components, so a higher pressure is required to achieve miscibility, resulting in a higher MMP. Figure S14 displays combined plots comparing the slim tube and swelling test results at 60°C and 66°C. Analyzed graphically, the MMP from the swelling test is clearly in good agreement with that from the slim tube experiment; in other words, the MMP obtained from the swelling test in the present study is essentially correct. This result therefore counters the doubt previously raised by Harmon and Grigg (1988) about the relationship between MMPs obtained from slim tube experiments and swelling tests. The difference between the MMP from the slim tube and that from the swelling test is about 3.9% at 60°C and essentially zero at 66°C (see Table S6).
Comparing MMP estimations between swelling test vs. vanishing interfacial test
In this analysis, the results for Crude Oil AB-5 are examined. Figure S15 shows combined plots comparing the vanishing interfacial tension test and swelling test results at 60°C and 66°C. A graphical analysis of the plots clearly shows that the MMP from the swelling test is also in good agreement with that from the vanishing interfacial tension test; the MMPs obtained from the two tests in the present study are thus mutually consistent. As shown in Table S7, the differences between the swelling test and vanishing interfacial tension test results are about 0.7% and 4.3% at 60°C and 66°C, respectively. The MMPs from the two tests are therefore closer to each other at the lower temperature than at the higher one. The MMP obtained from the vanishing interfacial tension test is slightly higher than that from the swelling test, but the difference is small and still reasonable.
Following the explanation provided by Yang and Gu (2005), two regions are recognized: Region A for the diffusion stage and Region B for the shrinkage stage. In Region A the CO2 diffuses into the crude oil, causing the oil to swell and the Interfacial Tension (IFT) to decrease. In this study, the IFT in Region A decreases from 24.5 to 2.93 dyne/cm at 60°C and from 24.9 to 3.4 dyne/cm at 66°C. As the pressure continues to increase, the oil drop becomes depleted in moderate components, leaving the heavy components as its main constituents; the oil volume then decreases owing to shrinkage of the drop. In Region B, with mainly heavy components remaining in the drop, the IFT decreases only slowly: from 2.6 to 2.5 dyne/cm at 60°C and from 3.0 to 2.1 dyne/cm at 66°C. This behavior is clearly shown in Figure S15. In molecular terms, the reasoning may be expressed as follows. As the CO2 injection pressure increases in Region A, more CO2 molecules diffuse into the oil, so the oil density decreases rapidly, while the CO2 itself becomes denser at higher injection pressure. This minimizes the density difference between the CO2 and the oil drop. A smaller density difference means the intermolecular forces acting between the CO2 and the oil become comparable in Region B, and the interface between the CO2 and the oil disappears when the intermolecular forces between the two phases are balanced. Near-miscibility is believed to occur at the intersection of the Region A and Region B lines in the IFT-versus-pressure plot. Before the near-miscibility region the IFT decreases rapidly with pressure, whereas after it (in the shrinkage region) the IFT continues to decrease but only slowly, probably because the heavy components dominate.
It follows from the above that the intersection of the extraction-condensation and extraction lines is crucial for estimating the MMP through a swelling test. During the swelling test, three regions, namely condensation, condensation-extraction, and extraction, occur and can be recognized easily. These regions do not exist, or at least cannot be recognized, during the vanishing interfacial tension test. The likely reason is as follows: in the swelling test the oil volume is as much as 2.1 mL, which allows more CO2 to dissolve into the oil, and the three regions are easy to recognize through the view cell. In the vanishing interfacial tension test, however, the oil drop surrounded by CO2 is very small, averaging only 3-8 µL; with such a small oil volume the three regions cannot be resolved as they are in the swelling test. Nevertheless, the similarity of the curve trends and slope changes between the swelling test and the vanishing interfacial tension test during the extraction process suggests that, at pressures above the MMP, the heavy components dominate the oil composition. Figures S4, S5, and S6 may be useful to illustrate this behavior.
It is unfortunate that, for technical reasons, the swelling test and slim tube test could not be performed on Crude Oil AB-4, so the analysis made for Crude Oil AB-5 cannot be repeated for it. However, since composition strongly affects the MMP, as explained above, the higher molecular weight of the heptane-plus fraction in Crude Oil AB-4 compared with Crude Oil AB-5 is likely responsible for the higher MMP of Crude Oil AB-4 in the IFT experiments at both 60°C and 66°C. In short, the different compositions of the two oil samples lead to different MMPs; in this case, the higher heavy-component content of Crude Oil AB-4 results in the higher MMP. Line B (the shrinkage line) of the IFT results confirms the heavy-component richness of Crude Oil AB-4, and a finite IFT persists even at high pressures. Hence, the MMP of Crude Oil AB-4 is logically higher than that of Crude Oil AB-5. Table S8 compares the MMPs of the two oil samples based on the VIT test results.
Comparing MMP estimations by slim tube vs. VIT test
Because both slim tube and VIT test results are available, further analysis can be done for Crude Oil AB-5. Figure S16 shows combined plots comparing the slim tube and VIT test results at 60°C and 66°C. The MMP from the VIT test is in good agreement with that obtained from the slim tube test, so the VIT-based MMP can be considered satisfactorily correct. Plotting the slim tube and VIT test results on the same graph reduces the uncertainty of the VIT-derived MMP and refines the MMP estimate. Table S9 lists the MMPs from the two tests and their differences: the discrepancies between the two methods are 4.5% at 60°C and 4.6% at 66°C, respectively. In other words, the effect of temperature on the discrepancy is not significant.
Comparing MMP estimations by experiments vs. simulation
The slight disagreement between the experimental and simulation results is most likely caused by the different oil samples used in each method: the slim tube, vanishing interfacial tension, and swelling tests use dead oil samples, while the simulation uses live oil samples. Gas components in the live oil, such as methane and nitrogen, make the MMP slightly higher (Dong et al., 2000). Table S10 shows that for Crude Oil AB-5 the differences between the swelling test and the simulation are about 4.2% and 5.0% at 60°C and 66°C, respectively; between the vanishing interfacial tension test and the simulation, about 3.5% and 0.7%; and between the slim tube test and the simulation, about 7.8% and 5.0%. It follows that all the experimental methods, including the swelling test, the vanishing interfacial tension test, and the slim tube test, provide satisfactory MMP estimates. The table also shows that the MMP estimated by the VIT method is closest to the MMP from the EOS method.
In contrast, the slim tube result shows the largest difference from the EOS method. The difference between the swelling test MMP and the EOS value is consistent at both temperatures. In conclusion, the MMP at a given temperature obtained from the various methods used in the present study is never exactly the same; there is always some discrepancy among the results, however small. In general, the MMPs obtained from the EOS are higher than those obtained from the experimental methods, including the VIT test, the swelling test, and the slim tube test, particularly for Crude Oil AB-4. As shown in Table S11, the difference between the vanishing interfacial tension test and the simulation is about 5.5% at 60°C and 7.5% at 66°C; clearly the EOS-based discrepancy is larger for Crude Oil AB-4 than for Crude Oil AB-5. Slim tube and swelling test data for Crude Oil AB-4, had they been available, might have allowed further analysis and additional conclusions.
Conclusion
Drawing from the results of the present study, the following conclusions are summarized.
1. Very few investigators have used simultaneous methods to predict the MMP.
2. Analysis based on plotting data from simultaneous methods has not previously been examined in detail.
3. The use of simultaneous methods reduces the uncertainties and doubts in the resulting MMP.
4. The MMP obtained from the slim tube test is used as the standard result, or baseline, in the present study.
5. Visual observation during either the swelling test or the interfacial tension test is valuable for recognizing when miscibility occurs during the experiments.
6. The interfacial tension test is the most efficient method in its use of oil and gas samples; its main advantages are low time consumption and the small amounts of oil and gas required, even though the analysis shows slightly larger discrepancies than the slim tube test.
7. Despite the closeness of the results, each method in fact gives a somewhat different value. In general, however, all the methods can be properly used for predicting the MMP.
"Chemistry",
"Engineering",
"Environmental Science"
] |
Creating students’ communities of Inquiry (COI) in online learning using the Moodle Learning Management System
The issue of promoting high levels of interactivity in online learning is important and topical. There is always a need to provide opportunities for online learners to work with others and feel a sense of belonging. This desktop review paper explores the possibility of creating communities of inquiry using the Moodle learning management system. In this discussion, we review the general use of a learning management system in an institution of higher learning, and we discuss the advantages and disadvantages of online learning. The concept of a community of inquiry is unpacked, with emphasis on its three presences, namely the cognitive, social and teaching presences. Drawing on the interactive features of the Moodle LMS, we discuss how the three presences can be promoted. Conclusions and recommendations are drawn from the discussion.
Introduction
The utilisation of digital learning management systems for online learning has brought with it calls for increased interactivity. Among the criticisms levelled against online learning are challenges with student interactivity (Larson, 2002). MacKinnon (2002) also notes that online course instructors work under increased pressure to design and implement online programmes that are comparable to, if not better than, face-to-face programmes in all aspects, such as learning outcomes, course content, teaching and learning activities and assignments. Making opportunities available for students to work collaboratively online, as they would in a face-to-face contact class, is very important. Espasa and Meneses (2010) identify three forms of interactivity in online learning, namely student-student, student-instructor and student-content interaction. Of particular importance in communities of inquiry is student-student interaction, which allows learners to work collaboratively with their peers. As observed by Salmon (2013), when learners work together they cease to rely on the course instructor and are provided with opportunities to co-construct and share knowledge. Through peer interaction, learners create and share meaning from the course content, and in the process of co-constructing and sharing knowledge, communities of practice are built. To this end, course instructors need to enhance collaborative learning by leveraging the features of a learning management system.
Two modern learning theories underpin the significance of collaboration in online learning, namely the online collaborative learning theory and connectivism. According to Harasim (2017), the online collaborative learning theory is rooted in social constructivism: it assumes that learners discuss and work together in their learning, and it values the process of working together in a technology-mediated environment. Harasim (2017) identifies three stages in the theory: idea generation, idea organisation and intellectual convergence. The features of a learning management system should be utilised to allow learners to generate and organise ideas collaboratively. The connectivism theory of Siemens (2005) advances the view that learning is a process of creating networks: the learner creates networks with other learners by working with them online (Boitshwarelo, 2011, p.162).
The central idea in connectivism is that of learners connecting to a learning community and benefiting from it while also feeding it with information. The learning community is a group of people learning together through continuous dialogue driven by their shared interests. This idea of the learning community is the same as that of communities of practice, which provide learners with great opportunities for creating networks: learners work together and mutually benefit from knowledge sharing. Siemens (2005) further notes that in connectivism, learning resides outside the learner, which means the learner has to utilise online learning and social media tools to learn. To this end, knowledge is viewed as not "only residing in the mind of an individual nor in one location but as being distributed across an information network or multiple individuals" (Boitshwarelo, 2011, p.162). The importance of working collaboratively with others by utilising online learning tools cannot be overemphasised.
Use of learning management systems
Turnbull, Chugh and Luck (2020, p.1) define learning management systems as "online learning technologies for the creation, management, and delivery of course material." A learning management system performs a number of tasks in managing students' learning. Juhary (2014, p.23) notes that a digital learning platform "can be a singularly critical platform to report on students' learning progress and to monitor students' learning engagement". The view of managing learning through a digital learning platform is shared by Dalsgaard (2006), who observes that such a platform allows course instructors to integrate the different elements of the teaching and learning process. One of the functions of a learning management system is to allow course instructors to organise and manage content (Martin-Blas & Serrano-Fernandez, 2009). A learning management system supports students' learning: in line with the connectivism learning theory, the student is connected to the learning material, fellow learners, and course instructors through an LMS. Learning in a networked environment promotes student-centredness and "promotes inquiry-based learning and digital literacy, empowers the learner, and offers flexibility as new technologies emerge" (Drexler, 2010, p.371). Furthermore, an LMS creates learning environments where learners can regulate and pace their learning, allowing them to take control of it.
Apart from the delivery of content, a learning management system is useful in the administration of the learning process (Sallum, 2008). Course instructors are able to make learning content accessible and manageable for students. Other processes such as student registration, communication, testing, scheduling, student tracking, and monitoring are possible through a learning management system (Cavus, 2013). Cavus and Alhih (2014, p.520) note that an LMS "manages, tracks and reports on the interaction between the learner and the content and the learner and the instructor." It is clear from the foregoing that the course instructor is able to manage and be in charge of the teaching and learning processes.
Advantages of online learning
Kattoua, Al-Lozi and Alrowwad (2016) studied e-learning systems in higher education and identified some advantages and disadvantages of e-learning. They state that online learning is less expensive to deliver, affordable, and saves time. These advantages have been echoed by other researchers: Muruthy and Yamin (2017) argue that e-learning saves cost, time and space in the learning process and in that way benefits online users. Kattoua et al. (2016) also assert that online learning allows students to access materials from anywhere at any time, including global resources that match their level of knowledge and interest. There is also self-pacing for slow or quick learners, which reduces stress and increases satisfaction and retention; in other words, online learning offers flexibility. Dumford and Miller (2018), however, argue that while online education has the potential to reach a wider audience, the unique needs and situations of these students can greatly impact their educational experiences. According to them, students' different background characteristics influence their preference for an online course format and their success or otherwise in any academic setting. Hence, Dumford and Miller (2018) caution that institutions of learning should be careful not to aggravate existing gaps among students.
Other advantages of e-learning cited by Kattoua et al. (2016) are that it allows more effective interaction between learners and their instructors; learners can track their progress and communicate using emails, discussion boards and chat rooms; learners can learn through a variety of activities that suit their different learning styles; it helps learners develop knowledge of the latest technologies and the Internet; and it can improve the quality of teaching and learning by supporting face-to-face teaching approaches. The interactions provided by e-learning through discussion boards and chat rooms can be exploited for effective student collaboration and the building of communities of learning. Chen, deNoyelles, Patton, and Zydney (2017, p.165) explain that asynchronous discussions in online learning "provide a space for instructors and students to form a community, to engage in dialogue about the course content, and to co-construct knowledge". In asynchronous discussions, students have sufficient time to think before responding, can form new knowledge and ideas through writing, and, given the nature of discussion forums, can always return to their original contributions, which promotes reflection and self-assessment.
Disadvantages of online learning
Despite the aforementioned pros of online learning, some disadvantages have been identified. Online students may have feelings of isolation, as there is little or no "in-person" contact with the faculty member (Kattoua et al., 2016). Wijekumar et al. (2006), as cited in Dumford and Miller (2018), share a similar view about isolation in online learning, positing that online learners may feel isolated from their course instructors if traditional assessments like multiple-choice quizzes and exams are used too heavily. Dumford and Miller (2018) further highlight issues of cheating and an overreliance on summative feedback from graded quizzes and exams, which might limit the formative feedback given to students during the learning process.
Some problems are technical, in the sense that students might face a difficult learning curve in navigating the system or problems with the technology itself (Kattoua et al., 2016). These authors add that some disadvantages seem common in developing countries, such as a lack of funds to purchase new technology, a lack of adequate e-learning strategies, insufficient training for staff members and, most importantly, learner resistance to using e-learning systems. Similar findings were reported by Mthethwa-Kunene and Maphosa (2020): ODL students' utilisation of a learning management system was hindered by institutional factors such as inadequate technological infrastructure, insufficient student training and support, and limited utilisation of the LMS by course instructors. These factors limit students' interactions on the LMS and the formation of effective communities of learning.
Kattoua, Al-Lozi and Alrowwad (2016) also highlighted learners' dissatisfaction with e-learning: the lack of a firm framework to encourage learners to learn, the high level of self-discipline or self-direction required, the risk that learners with low motivation or poor study habits fall behind, the absence of a learning atmosphere in e-learning systems, the minimal level of contact inherent in the distance-learning format, the lack of interpersonal and direct interaction among learners and teachers, and a learning process that is less efficient than face-to-face learning. Similarly, to be successful in online learning, students may need additional motivation, organization, and self-discipline (Dumford & Miller, 2018). Dumford and Miller (2018, p.462) concluded that "If a primary goal of online learning is to reach a wider range of students and provide educational opportunities for those who might not otherwise have such access, then it is important to ensure that online education students are partaking in equally engaging educational experiences that contribute to their learning and success".
Unpacking the community of inquiry framework
Garrison, Anderson and Archer (2000) developed the "community of inquiry" (CoI) model for online learning environments. They described an educational community of inquiry as a group of people who work collaboratively, engaging in purposeful critical discourse and reflection to construct personal meaning and confirm mutual understanding. An educational community of inquiry is defined as "a group of individuals who collaboratively engage in purposeful critical discourse and reflection to construct personal meaning and confirm mutual understanding" (Garrison, 2011, p.2). According to Garrison (2009), the Community of Inquiry theoretical framework represents a process of creating a deep and meaningful (collaborative-constructivist) learning experience through the development of three interdependent elements: social, cognitive, and teaching presence.
The community of inquiry has become one of the more popular models for online and blended courses that are designed to be highly interactive among learners and faculty, using discussion boards, blogs, wikis, and videoconferencing. The CoI model encompasses social, teaching, and cognitive presences. Social presence is "the ability of participants to identify with the community (e.g., course of study), communicate purposefully in a trusting environment, and develop interpersonal relationships by way of protecting their individual personalities" (Garrison, 2009). Teaching presence is the design, facilitation, and direction of cognitive and social processes for realizing personally meaningful and educationally worthwhile learning outcomes (Anderson, Rourke, Garrison, & Archer, 2001). Cognitive presence is the extent to which learners are able to construct and confirm meaning through sustained reflection and discourse (Garrison, Anderson, & Archer, 2001).
The CoI framework postulates that teaching presence, social presence and cognitive presence are interrelated and interact in constantly changing ways throughout the educational experience (Nolan-Grant, 2019). Common issues in online learning, such as limited interactivity, may prevent a true community of inquiry from forming. Nolan-Grant (2019) used a postgraduate online module to demonstrate how the CoI framework can be employed to address issues of engagement; that is, using CoI as a learning design model can mitigate engagement challenges.
Ways of promoting social presence on the Moodle LMS
Moodle is an example of an LMS whose features can facilitate the components of the CoI framework for distance learners. As a teaching platform (Thomas, Herbert & Teras, 2014), Moodle provides features that support social presence. The Moodle forum is used to encourage student-to-student as well as student-to-instructor interaction. To encourage social presence, instructors can create a "common room" forum (Richardson, Ice, & Swan, 2016) in the general section of their Moodle course page. This forum is used to introduce instructors to students and for students to introduce themselves to one another. Other discussion forums can be created around meaningful and stimulating questions.
In order to establish an online community of trust on the LMS, instructors develop initial course activities such as ice breakers (Fiock, 2020, p.141). The Moodle chat activity can be used to introduce an ice-breaker such as a word game. Richardson et al. (2016) point out that instructors using the Moodle discussion forum can require learners to respond to their peers' postings or to their own postings; they further state that students establish their own presence when they serve as experts leading discussions. Another action step that promotes social presence is the grading of forums (Richardson et al., 2009): the Moodle forum's sum-of-ratings option can make participation in discussions a significant part of course grades. However, facilitating forums comes with its own inherent challenges; training and general awareness of these challenges can make forums a successful pedagogical technique.
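For instructors who want to script such activities rather than click through the web interface, Moodle exposes REST web services. The sketch below seeds an ice-breaker thread programmatically; the endpoint URL, token, forum id, and even the availability of the `mod_forum_add_discussion` web-service function all depend on the site's configuration and Moodle version, so treat every value here as an assumption to verify against your own installation.

```python
import requests

# Hedged sketch of seeding an ice-breaker discussion through Moodle's REST
# web services.  URL, token, ids, and the enabled web-service functions are
# all site-specific assumptions, not guaranteed defaults.

MOODLE_URL = "https://moodle.example.edu/webservice/rest/server.php"
TOKEN = "replace-with-a-real-webservice-token"

def add_discussion(forum_id, subject, message):
    params = {
        "wstoken": TOKEN,
        "wsfunction": "mod_forum_add_discussion",  # must be enabled on the site
        "moodlewsrestformat": "json",
        "forumid": forum_id,
        "subject": subject,
        "message": message,
    }
    response = requests.post(MOODLE_URL, data=params)
    response.raise_for_status()
    return response.json()

# Example: an ice-breaker thread in a (hypothetical) common-room forum, id 42.
add_discussion(42, "Introduce yourself",
               "Share your name, location and one thing you hope to learn.")
```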
Instructors could also consider incorporating the Moodle wiki into their course activities. Wheeler, Yeomans, and Wheeler (2008, p.989) suggested that wikis enable students to collaboratively generate, mix, edit, and synthesize subject-specific knowledge within a shared and openly accessible digital space. Such activities are known to enhance learners' social presence in an online learning environment (Peacock & Cowan, 2016). Joyce and Brown (2009) pointed out that wikis allow learners to co-construct knowledge while also remedying isolation, particularly in distance learning. In addition, the Moodle journal is another activity that promotes social presence; Richardson et al. (2016) note that it enhances learner-to-instructor interaction on an individual basis. Nevertheless, wikis and journals should be used purposefully, and instructors are encouraged to use these tools to give meaningful assignments to learners. Dunlap and Lowenthal (2018) noted that learners may post video responses using apps such as screencasting tools. In that respect, the Moodle HTML editor provides features that allow video to be incorporated within the course content. Furthermore, Moodle can be integrated with third-party applications: learners and instructors can create multimedia resources outside Moodle and then upload them using Sharable Content Object Reference Model (SCORM) and IMS content packages. Pi, Hong and Yang (2017, p.347) highlighted the importance of nonverbal and relational cues in online learning (e.g. an instructor's image); including an instructor's video enhances social presence and the achievement of learning outcomes.
Ways of promoting cognitive presence on the Moodle LMS
Cognitive presence can be enhanced by allowing students to interact and share ideas. Moodle facilitates such sharing through the discussion forum, wiki, and journal (Peacock & Cowan, 2016; Stewart, 2017). Moderating forums is an important strategy for supporting cognitive presence; however, it is important to note that interaction among students does not automatically translate into meaningful knowledge construction.
Cognitive presence can also be modeled using Moodle groups, in which learners discuss, brainstorm, and reflect collectively (Dunlap et al., 2016). Group tasks allow students to drill down on a topic of interest, providing opportunities for integration with previous knowledge, and such activities often lead learners to become critical thinkers. While Moodle supports groups, it should be noted that teamwork becomes more complex as group size increases. Akcaoglu and Lee (2016) explain that this complexity is caused by the larger number of individuals in a group, which may also negatively affect group members' attention. Instructors are therefore encouraged to play a facilitation role and implement strategies such as weaving and summarising discussions; learners may also assist in the process.
Dunlap and Lowenthal (2018) articulate a further view on promoting cognitive presence, suggesting that learners should be given space to create and post resources. Moodle allows files to be added to activities, which enables learners to independently share resources related to concepts discussed in class. Furthermore, instructors should build opportunities for students to think about and apply the course content together; Moodle wikis can be used to let learners collaborate and create content.
Another perspective discussed by Richardson et al. (2009) is developing grading rubrics for discussions and assignments that reward desired cognitive behaviors; the authors further suggest that learners can help develop the rubrics. The focus is to help students understand and apply new concepts. Furthermore, self-testing, practice assignments, simulations, and other interactive activities can be used to support skill development and convergent thinking (Richardson et al., 2009). The Moodle quiz supports ungraded assessments for this purpose.
Ways of promoting teaching presence on the Moodle LMS
First, Preisman (2014) suggests that one of the roles instructors play in creating teaching presence is designing and organizing the learning experience, which should be done both before the course begins and while it runs. Teaching presence is enhanced by personalising the Moodle course page, for example by choosing a different format for the Moodle site. Video introductions help learners connect to the instructor and let them know there is a real person behind the course; a video introduction and tour of the Moodle course should be provided to help learners navigate it. Second, instructors are responsible for creating and monitoring activities that stimulate interaction between the learners, instructors, and content resources (Preisman, 2014). The Moodle LMS allows for regular communication, which helps the instructor build a strong teaching presence within a Moodle course. Giving feedback on assignments is a critical part of the direct-instruction component of teaching presence. The Moodle assignment activity has a "remind me to grade by" setting; once enabled, a reminder is sent to the instructor so that they can grade and provide feedback promptly. Feedback on Moodle can be provided in several ways, including the Moodle HTML editor, an inbox message, or group feedback via the announcements or a discussion forum. The announcements forum can also be used to send regular email updates that help learners remember due dates.
Finally, Preisman (2014) suggests that instructors must contribute academic knowledge and relevant experiences through forms of direct instruction. Anderson (2004) also notes that students contribute as well, because they bring their own knowledge and experience to the course. Instructors may draw on students' views and comments in Moodle forum conversations (Stewart, 2017). Learning experiences that address all learning styles also need to be considered (Dunlap & Lowenthal, 2018; Stephens & Roberts, 2017). Instead of just posting lecture notes, instructors may use audio or video to post weekly lectures as narrated presentations. Dunlap and Lowenthal (2018) suggested that, to increase teaching presence, universal design for learning (UDL) principles must be addressed in all created materials. Narrated slides customize the learning experience for learners and create a feeling of connection with the instructor; within them, the instructor can review assessments and demonstrate the solutions to question items. Although narrated slides require considerable effort to make, they can be archived and used for several semesters without being edited.
Conclusions
It is clear from the foregoing discussion that online learning platforms should be utilised to enhance collaborative learning experiences for learners. Course instructors are required to understand the features of a learning management system and plan for the creation of communities of inquiry. Deliberate plans are needed for implementing opportunities for collaborative learning online.
Recommendations
In the light of the foregoing discussion, we make the following recommendations:
a) Course design for online learning programmes should provide opportunities for high learner interactivity with content, course instructors and fellow learners.
b) The features of a digital learning platform should be studied carefully, and those that help students work collaboratively should be optimised.
c) Course instructors should undergo periodic online pedagogical training in order to understand and appreciate the role of online pedagogies in the selection and use of relevant technologies to foster collaborative learning.
d) The Moodle LMS, as open-source software, is accessible and affordable as a learning platform, and course instructors should maximise the use of its functions to promote online collaborative learning.
e) All the presences advanced by the Communities of Inquiry theory, namely the cognitive, teaching and social presences, should be promoted in different ways by utilising the Moodle LMS where it is the preferred digital learning platform.
f) Course instructors should promote high-level interactivity in online learning, utilising the available technologies to offer learners very rich learning experiences.
"Education",
"Computer Science"
] |
X-ray photoelectron spectroscopy study of high-k CeO2/La2O3 stacked dielectrics
This work presents a detailed study of the chemical composition and bonding structures of CeO2/La2O3 stacked gate dielectrics based on x-ray photoelectron spectroscopy (XPS) measurements at different depths. The chemical bonding structures in the interfacial layers were revealed by Gaussian decompositions of the Ce 3d, La 3d, Si 2s, and O 1s photoemission spectra at different depths. We found that La atoms can diffuse into the CeO2 layer and that a cerium-lanthanum complex oxide is formed between the CeO2 and La2O3 films. Ce3+ and Ce4+ states always coexist in the as-deposited CeO2 film. Quantitative analyses were also conducted: the amount of the CeO2 phase decreases by about 8% on approaching the CeO2/La2O3 interface. In addition, compared with a single-layer La2O3 sample, the CeO2/La2O3 stack exhibits a larger extent of silicon oxidation at the La2O3/Si interface. For the CeO2/La2O3 gate stack, the out-diffused lanthanum atoms can promote the reduction of CeO2, which produces more atomic oxygen. This result con...
I. INTRODUCTION
Rare-earth (RE) lanthanum oxide (La2O3) has attracted extensive attention as a promising gate dielectric candidate for next-generation deca-nanoscale complementary metal-oxide-semiconductor (CMOS) applications. Lanthanum oxide has several outstanding features, such as a high permittivity (k ~ 27), a large energy gap (5.8-6.55 eV), and a suitable conduction band offset with silicon (> 2 eV). 1,2 However, some fundamental problems associated with La2O3 films, such as their hygroscopic nature, thermal instability, and poor interface properties with the Si substrate, need to be resolved in order to achieve better electrical and material properties for high-performance devices. 3,4 In particular, the high density of oxygen vacancies has been recognized as one of the key issues behind the degraded material stability and device reliability. A large amount of oxygen vacancies in the bulk of the La2O3 film results in channel mobility degradation as well as threshold voltage shifts. Additionally, oxygen vacancies can induce out-diffusion of substrate Si into the La2O3/Si interface and into the bulk oxide. These effects impede realization of the smallest equivalent oxide thickness (EOT) owing to the formation of a low-k silicate layer. [5][6][7] Several methods, such as element doping, thermal annealing, and the adoption of alloyed complex oxides, have been proposed to resolve these issues. 8 Recently, a novel CeO2/La2O3 stacked gate structure was proposed to control the level of oxygen vacancies in the La2O3 film. The multivalent cerium oxides (CeO2 and Ce2O3 phases) have a smaller oxygen chemical potential and thus a low amount of oxygen vacancies; cerium oxide can therefore serve as a self-adapting oxygen reservoir, supplying extra oxygen atoms to the La2O3 film so as to reduce the oxygen vacancies therein. It has already been confirmed that more favorable electrical performance can be achieved with this structure. [5][6][7] However, the interface interactions of CeO2/La2O3 and La2O3/Si in the CeO2/La2O3 stacked structure have not yet been explored. Only a few very preliminary works on the bonding structures of CeO2/La2O3 stacks have been reported, and they do not seem to provide sufficient information to support the observed electrical results. To further improve device performance and reliability, it is critical to better understand the chemical reactions taking place at the interfaces. In this connection, this work presents a detailed study of the bonding structure and chemical composition at different depths of the as-deposited CeO2/La2O3 stack using x-ray photoelectron spectroscopy (XPS) measurements. Using a Gaussian deconvolution technique, we further analyze the distribution of Ce3+ and Ce4+ states in the CeOx layer so as to investigate the material interactions occurring at the CeO2/La2O3 and La2O3/Si interfaces.
II. EXPERIMENT
The tungsten/CeO2/La2O3 gate stack was deposited on n-type Si (100) substrates as follows. A La2O3 layer about 5 nm thick and then a CeO2 layer about 2 nm thick were prepared by electron beam evaporation in an ultra-high-vacuum chamber at a pressure of about 10⁻⁷ Pa. A tungsten gate electrode about 3 nm thick was then deposited in situ by magnetron sputtering to avoid moisture absorption and potential contamination. The film thicknesses were measured with an ellipsometer and confirmed by transmission electron microscopy (TEM). The chemical composition and bonding structures of the as-deposited W/CeO2/La2O3/Si gate stack at different depths were revealed by x-ray photoelectron spectroscopy (XPS). The XPS machine is a Physical Electronics Model PHI 5802 spectrometer with monochromatic Al Kα radiation of 1486.6 eV; the energy resolution is 0.1 eV. Depth profiling was done by Ar+ sputtering at a rate of about 0.67 nm/min. Fig. 1 shows a typical atomic concentration profile of the stack we grew and a schematic diagram of the sample. With argon sputtering, we are able to register the composition change along the depth. As shown in Fig. 1(b), in addition to the W, CeO2, and La2O3 bulk layers, interfacial layers between CeO2/La2O3 and La2O3/Si, identified respectively at sputtering times of 11-16 min and 20-30 min, are also quite obvious. As will be shown later, CeO2 and Ce2O3 coexist in the CeO2/La2O3 interface region. The reduction of the CeO2 phase can be understood with the following reaction: 5,7 2CeO2 → Ce2O3 + [O], where [O] denotes released atomic oxygen.
III. RESULTS AND DISCUSSION
For sputtering times of 16 ≤ t ≤ 20 min, the Ce content is very low and La dominates; this is the bulk region of the La2O3 film. On sputtering deeper, there is a region (sputtering time between 20 and 30 min) with notable La and Si contents, which is attributed to the interfacial silicate layer at the La2O3/Si interface. This region appears quite thick, but a TEM image shows that the La2O3/Si interface of this sample is quite sharp (see Fig. 1(c)). The interfacial silicate layer may be formed by the recoil of La ions into the substrate and also by substrate oxidation from decomposed oxygen during argon profiling. Fig. 2 depicts a Ce 3d spectrum taken from the bulk CeO2. The spectrum exhibits the Ce 3d5/2 and Ce 3d3/2 spin-orbit doublet peaks at 881.4 and 899.9 eV, respectively. The strong satellite peaks located at around 885.7 eV and 904.2 eV are due to the Ce3+ bonding of Ce2O3. These findings agree well with data reported in the literature. [11][12][13][14] The bulk Ce 3d spectrum thus indicates the coexistence of Ce3+ and Ce4+ bonding states. The significant reduction of CeO2 recorded here may be partially due to reduction produced by the Ar sputtering. 14,15 In addition, the as-deposited CeO2 layer should also contain a high amount of Ce2O3 phase. Nevertheless, depth profiling of the relative change of the Ce3+ bonding states should still reveal the additional reduction effect due to the La2O3 layer. 16 Fig. 3 shows the Ce 3d XPS spectra taken at different depths, with sputtering times ranging from 9.5 min to 16 min. As the sputtering proceeds, a slight low-energy shift of the Ce 3d5/2 peak from 881.4 eV to 881.2 eV is first observed for sputtering times of 9.5 to 13 min. Further sputtering, however, results in a high-energy shift instead (see Fig. 3(b)). These phenomena are attributed, respectively, to Ce-O-La bonding and Ce-O-Si bonding. Unlike the bulk Ce-O-Ce bonding, the electron cloud on O moves closer to the Ce side in a Ce-O-La bond because Ce has a slightly larger electronegativity (χCe = 1.12) than La (χLa = 1.10). 5 Thus Ce 3d5/2 has a slightly lower binding energy during the period 9.5 ≤ t ≤ 13 min because of the formation of Ce-O-La complex bonds. On sputtering closer to the bulk La2O3 (during the period of 13 to 16 min), the high-energy shift of the Ce 3d5/2 peak may be due to Si atoms in the La2O3; it has been reported that Si can readily diffuse into La2O3 via oxygen vacancies. 8 The formation of Ce-O-Si bonds causes the high-energy shift, as Si has a much larger electronegativity (χSi = 1.9). 17 Ce 3d XPS spectra are much more complicated than those of other high-k materials because of the hybridization between the Ce 4f levels and the O 2p states. 14 Both the Ce 3d5/2 and Ce 3d3/2 levels are composed of five different states; the labels V and U refer, respectively, to the Ce 3d5/2 and Ce 3d3/2 spin-orbit components. Using the Gaussian deconvolution technique, we found that the recorded spectra can be decomposed into nine peaks, namely V0 (881.05 eV), V′ (885.83 eV), U0 (899.65 eV), and U′ (904.3 eV), corresponding to Ce3+ species, and V (882.52 eV), V′′ (888.2 eV), V′′′ (898.0 eV), U (901.13 eV), and U′′ (907.2 eV), corresponding to Ce4+ species (see Fig. 3(a)). These fitting results are consistent with other published reports.
11,18,19 The decomposed spectra further confirm that the Ce3+ and Ce4+ states coexist in the CeO2 layer. As shown in Fig. 3, on sputtering deeper toward the CeO2/La2O3 interface, the peaks of the Ce3+ states (V0, V′, U0, and U′) clearly become stronger relative to those of the Ce4+ states (V, V′′, V′′′, U, and U′′).
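As a rough illustration of such a decomposition, the Python sketch below fits one Gaussian per component at the binding energies quoted above and reports the Ce3+ area fraction used in the quantitative analysis that follows. The fixed peak centers, purely Gaussian line shapes, and the absence of background subtraction are simplifying assumptions for illustration only, not the authors' exact fitting protocol.

import numpy as np
from scipy.optimize import curve_fit

# Peak centers (eV) quoted in the text; kept fixed for simplicity
CE3_CENTERS = [881.05, 885.83, 899.65, 904.3]        # V0, V', U0, U'
CE4_CENTERS = [882.52, 888.2, 898.0, 901.13, 907.2]  # V, V'', V''', U, U''
CENTERS = np.array(CE3_CENTERS + CE4_CENTERS)

def model(E, *params):
    # Sum of one Gaussian per fixed center; params = (A1, s1, A2, s2, ...)
    y = np.zeros_like(E, dtype=float)
    for i, c in enumerate(CENTERS):
        A, s = params[2 * i], params[2 * i + 1]
        y += A * np.exp(-0.5 * ((E - c) / s) ** 2)
    return y

def ce3_fraction(E, intensity):
    # Crude uniform initial guess: amplitude 1, width 1 eV for every peak
    p0 = [1.0, 1.0] * len(CENTERS)
    popt, _ = curve_fit(model, E, intensity, p0=p0, maxfev=20000)
    # Area of a Gaussian = amplitude * width * sqrt(2*pi)
    areas = np.array([popt[2 * i] * abs(popt[2 * i + 1]) * np.sqrt(2 * np.pi)
                      for i in range(len(CENTERS))])
    a3 = areas[:len(CE3_CENTERS)].sum()
    a4 = areas[len(CE3_CENTERS):].sum()
    return a3 / (a3 + a4)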
To obtain a clearer picture of the amount of CeO2 reduction, we conducted a quantitative analysis of the Ce3+ to Ce4+ bonding composition. As mentioned, the V0, V′, U0, and U′ peaks constitute the Ce3+ states, so the amount of this state is governed by the total area of these peaks, i.e., A(Ce3+) = A(V0) + A(V′) + A(U0) + A(U′).
Similarly, for the Ce4+ state we have A(Ce4+) = A(V) + A(V′′) + A(V′′′) + A(U) + A(U′′). The total fraction of cerium in the Ce3+ state (also referred to as the degree of reduction) is then [Ce3+] = A(Ce3+) / [A(Ce3+) + A(Ce4+)]. Note that the U′′′ state (∼916 eV) of Ce4+ was not taken into account due to the limited energy range of the experiment; this introduces some error in the figures given above. Using this approach, we obtained the Ce3+ fraction at different depths, as indicated in Fig. 4. In the vicinity of the W/CeO2 interface (t = 9.5 min), the Ce3+ content is about 71.9%, slightly smaller than in the CeO2 bulk (77.6%, see the trace for a sputtering time of 11 min). At the CeO2/La2O3 interface, the Ce3+ content increases to over 80%. It is noted that the smallest percentage of Ce3+ is still over 71.9%. This high Ce3+ content should be partially due to an artifact produced by Ar sputtering during the XPS measurements; it has been reported that Ar or other ion beams can reduce CeO2. 14,15 A large amount of Ce2O3 should also have formed during deposition. The further increase of the Ce3+ content on sputtering closer to the CeO2/La2O3 interface indicates that the lanthanum oxide facilitated the reduction of cerium oxide because of the larger amount of oxygen vacancies in the La2O3 film. 5,18

Figure 5 shows the La 3d spectra taken by sputtering at different locations of the W/CeO2/La2O3 stack. The La 3d3/2 spectra show a double-peak structure with a main peak at 851.0 eV and a satellite peak at 855.6 eV. 1 On sputtering deeper into the film, the doublet shows a slight high-energy shift to 851.3 eV. This shift is attributed to the presence of Ce-O-La bonds in the CeOx-La2O3 mixture layer, in agreement with the Ce 3d results given above. After 13 min of sputtering, i.e., close to the bulk La2O3, the La 3d spectra shift to the higher-energy side because of the presence of Si neighbors in the La bonding. In the La2O3/Si interface region, at sputtering times between 20 and 30 min, the main peak of La 3d3/2 shifts to an even higher energy of 852.6 eV and the intensity of the satellite peak weakens, indicating that more silicate bonds (La-O-Si) formed at the interface. No obvious signal corresponding to La-Si bonding was detected at the La2O3/Si interface.

Figs. 6(a) and 6(b) depict the Si 2s XPS spectra and Gaussian deconvolution results for the as-deposited W/CeO2/La2O3/Si sample. The peak at 150.5 eV is due to Si-Si bonds, and the high-energy shifts of the peaks are attributed to La-rich or Si-rich lanthanum silicate. The Si 2s peak shifts to 153.8 eV after 20 min of sputtering, representing SiO2 bonding at the SiO2/Si interface. In order to further study the effect of CeO2 on the La2O3/Si interface, we compared the Si 2s spectra with those of a sample without CeO2 capping. A detailed comparison and peak decomposition at three different locations, corresponding to the bulk La2O3, the La-rich silicate near the La2O3/Si interface, and the Si-rich silicate near the La2O3/Si interface, are shown in Fig. 6.

FIG. 4. Deconvoluted Ce 3d spectra obtained by Gaussian decomposition. The percentages of Ce3+ content indicated in the figure were calculated from the total area of the peaks corresponding to Ce3+ ions (i.e., the V0, V′, U0, and U′ peaks) as a fraction of the total peak area corresponding to both Ce3+ and Ce4+ ions.
The excess atomic oxygen released by this reduction can oxidize Si at the interface; this reaction would help to improve the quality of the La2O3/Si interface layer and thus leads to better electrical characteristics of the devices. 6 The conjectures given above are further confirmed by the O 1s spectra shown in Fig. 7. As shown in Fig. 7(a), several different kinds of bonding are observed on sputtering from the CeOx layer (9.5 min) toward the silicon substrate (27 min). Along the depth direction, we first observe a slight low-energy shift from 530.5 eV to 530.3 eV (9.5 ≤ t ≤ 13.5 min) due to the appearance of La-O-La bonding (with an O 1s energy of 528.8 eV) in the CeOx film, and then a high-energy shift to 531.0 eV (for sputtering times >13.5 min) because of the formation of silicate. Fig. 7(b) also depicts the decomposed O 1s peaks at different depths. For t = 11 min, the O 1s spectrum of the CeOx layer can be decomposed into three peaks corresponding to Ce3+ (530.6 eV) or Ce4+ (529.8 eV and 531.8 eV). 17 For t = 13.5 min, the broad O 1s peak is constituted by both Ce-O and La-O bonding, further verifying that a cerium-lanthanum complex oxide was formed. On etching closer to the CeO2/La2O3 interface, the intensities of the Ce(III)-O and La-O bonding become stronger while the intensity of the Ce(IV)-O bonding decreases, indicating more reduction of Ce4+ to Ce3+ near the La2O3 film. This result agrees with the [Ce3+] fraction calculated from the Ce 3d spectra shown in Fig. 3. At the La2O3/Si interface (t = 20 min), the O 1s spectrum can be deconvoluted into La-O-Si (530.6 eV) and Si-O (531.5 eV) components. This confirms that the excess oxygen from the CeO2 layer can cause interface oxidation, consistent with the Si 2s spectra given in Fig. 6.
IV. CONCLUSION
The chemical composition and bonding structure of the CeO2/La2O3/Si stack at different depths have been studied in detail by x-ray photoelectron spectroscopy (XPS). Gaussian deconvolutions of the Ce 3d, Si 2s, and O 1s spectra at different depths reveal the material interactions in this stacked structure. The results indicate that a cerium-lanthanum complex oxide was formed at the CeO2/La2O3 interface. Ce3+ and Ce4+ states always coexist, and the amount of Ce2O3 in the as-deposited CeO2 film was over 70%, which may be partially due to the reduction induced by Ar sputtering during the XPS measurements. Near the CeO2/La2O3 interface, the Ce2O3 content increases to over 80%, indicating that the serious oxygen deficiency in the La2O3 film has caused the reduction of cerium oxide to the lower oxidation state. In contrast to the La2O3 sample without the CeO2 capping, the CeO2/La2O3 stack exhibits interface oxidation at the La2O3/Si interface due to the presence of excess oxygen from the capping CeO2 layer. These observations explain the improved electrical characteristics reported earlier for MOS transistors using CeO2/La2O3 as the gate dielectric. 9
Removal of Arsenic Using Hydrated Mixed Trivalent Iron-Aluminum Oxide Adsorbent: Prediction of Column Performance
The performance of column experiments for the removal of arsenic from groundwater by adsorption, using hydrated mixed trivalent Fe-Al oxide in agglomerated nanoparticle form as the adsorbent, was explored. The efficiency of the adsorbent was scrutinized by carrying out the experiments with a field groundwater sample spiked with arsenic solution of a particular concentration, at pH 7.5 and 30 °C, under variable experimental conditions. For characterization, FTIR spectra were recorded for the mixed binary oxide and for pure Fe2O3 and Al2O3. Breakthrough curves were plotted by varying the bed depth of the adsorbent and the outflow rate to ascertain the conditions for maximal adsorption. The kinetic parameters from the breakthrough curves were evaluated using Thomas and Adams-Bohart model analyses. The results of the column study showed that the adsorbent performed efficiently as a cost-effective scavenger of toxic arsenic from groundwater.
INTRODUCTION
Human suffering from groundwater pollution caused by arsenic poisoning has become a worldwide environmental threat, especially in the Bengal belt. In West Bengal, predominantly in the areas of Malda, Murshidabad, 24-Parganas, Howrah and Hooghly, unfortunately more than 40 million people live above the recommended level of arsenic according to the World Health Organization (WHO) guideline 1. The poor are mostly the sufferers; through their ingestion of arsenic-containing drinking water they show the symptoms of arsenicosis 2. In recent times, therefore, arsenic-related incurable health problems and the remedy of its poisoning effects have become a matter of serious concern.
Arsenic is a carcinogenic crystalline metalloid existing in the form of three allotropes. In the environment, it occurs mostly in four oxidation states, viz. −III, 0, +III and +V, of which +III and +V are the most common. Even a 0.05 ppm concentration of arsenic is now considered unsafe for mankind; WHO has lowered the recommended permissible limit of arsenic in drinking water from 0.05 ppm to 0.01 ppm 3. Keeping human health issues in mind, many countries have already implemented this as the safe guideline value.
In the Bengal belt (Bangladesh, West Bengal and its adjoining areas in India), there has been a huge burden of arsenic-induced disease due to continuous exposure at elevated concentrations over the long term. In these areas, the arsenic originates mainly from geological sources, and its elevated concentration is concomitant with the reductive dissolution of iron pyrites or iron oxyhydroxide, which promotes the mobilization of sorbed arsenic in the alluvium of the Ganga-Brahmaputra river system 4,5. The presence of arsenic in groundwater is made all the more likely by the immoderate use of groundwater for irrigation.
The toxicity of inorganic arsenite and arsenate is much greater than that of the organic methylated arsenicals, and trivalent arsenicals are even more toxic than pentavalent ones. In terms of molecular biology, the carcinogenic effect of arsenic involves inhibition of DNA replication and interruption of the repair mechanism through linkage with thiol groups. Repeated exposure to arsenic via tube-well drinking water affects a large number of human organs, showing acute symptoms of malignancy in the lungs, liver, bladder, kidney, urinary tract and skin. Prolonged arsenic ingestion at higher concentrations has adverse effects on the human cardiovascular system; it shows clinical symptoms of arsenical dermatitis and hyperkeratosis and may cause symptomatic Blackfoot Disease 6. Some widely used conventional low-cost arsenic treatment technologies, viz. oxidation, co-precipitation, coagulation followed by flocculation, membrane filtration modified as electro-ultrafiltration, adsorption using different solid materials, flotation and ion exchange, have been reported in the developing countries [7][8][9][10][11]. Special priority has been given to the method of adsorption using solid materials because of its easy handling and the smaller volumes required for treating higher arsenic concentrations in groundwater. Several solid sorbent materials 12,13, viz. activated carbon, agricultural residues and their by-products, industrial waste, biomasses and metal oxide nanoparticles, are used extensively for the removal of arsenic contamination. Various mineralogical forms of mixed trivalent iron-aluminum oxide and hydroxide [14][15][16][17][18][19][20], rare earth oxides 21,22 and Ce(IV)-doped iron oxide 23 have been used on a large scale as adsorbents for the removal of poisonous arsenic from groundwater.
This work mainly concerns the removal of deadly arsenic from contaminated groundwater using a potent low-cost adsorbent, hydrated mixed trivalent iron-aluminum oxide, in column experiments under different operating conditions. The efficiency of the adsorbent has been judged on the basis of varying the column bed height and the flow rate of the spiked effluent. The resulting data have been plotted as breakthrough curves, from which kinetic parameters have been analyzed using the Thomas and Adams-Bohart models.
Preparation of hydrated mixed trivalent Iron-Aluminum oxide
An equimolar (0.5 M) mixture of FeCl3 and AlCl3 was prepared in an acidic solution of 0.1 M HCl. It was stirred thoroughly and heated to 60 °C. The solution was made ammoniacal by adding NH4OH solution slowly with continuous stirring until the pH of the mixture reached near neutrality, whereupon a dark brown gel-type slurry formed. The solution along with the slurry was aged for 30 hours. It was then filtered, and the gel-type precipitate was washed four to five times with deionized water to free it from other impurities. The slurry was dried completely in a hot air oven. The solid product obtained was ground into fine grains with mesh size in the range 0.14-0.29 mm. It was further heated for 3 h at about 120 °C for re-drying. Finally, the grains were homogenized to pH 7.5 and were ready for use as adsorbent in the column experiments.
Reagents
Ferric chloride hexahydrate (FeCl3·6H2O) and potassium iodide (KI) were purchased from Merck, India. Aluminum chloride hexahydrate (AlCl3·6H2O) for adsorbent preparation and sodium borohydride (NaBH4) for arsine formation were obtained from Loba Chemie, India. Silver diethyldithiocarbamate (SDDC) for arsine absorption was procured from E. Merck, Germany. Ascorbic acid was purchased from SD Fine Chemicals, India. All other solvents and chemicals were of reagent or analytical grade and were used as received.
Instruments
A digital electronic balance (Mettler AE-240) was used for the various weighings required for the experiment. A pH meter (Elico LI 127) was used to determine the pH of the solutions. A Fourier transform infrared spectrophotometer (Jasco 680 Plus) was used to identify the functional groups present in the oxides. A UV-Vis spectrophotometer (Hitachi U3210) was used for the spectral analysis of arsenic throughout the study.
Source of the field sample
Groundwater from an approximately 50-55 m deep tube well at M. G. Road, Kolkata, West Bengal (India) was collected and analyzed for arsenic. After triplicate determinations, the arsenic concentration in the field groundwater was confirmed to be 2.2 × 10⁻³ mg/L. An As(III) solution was then spiked into the field groundwater sample until the concentration reached 1.3 × 10⁻¹ mg/L.
Analytical methods
In the field groundwater sample, the total dissolved inorganic arsenic was determined by the addition of 32% hydrochloric acid, 10% potassium iodide solution and 1% ascorbic acid solution, whereupon arsenic was reduced from its pentavalent to its trivalent state. It was then converted to arsine using a 3% solution of sodium borohydride. The arsine gas generated was driven by a flush of nitrogen gas into the absorber assembly, where it was absorbed in a chloroform solution of silver diethyldithiocarbamate (SDDC). The absorbance was measured at a wavelength of 520 nm against a reagent blank using a quartz cuvette of 1 cm path length. The absorbance data were compared with a standard calibration curve to compute the arsenic concentration. The detection limit and accuracy of the method were found to be 1 µg and >90%, respectively.
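The last step, converting absorbance to concentration via the calibration curve, can be sketched as follows; the calibration standards below are hypothetical values for illustration, not the calibration actually used.

import numpy as np

# Hypothetical calibration standards (assumed linear Beer-Lambert regime)
cal_conc = np.array([0.00, 0.05, 0.10, 0.15, 0.20])   # As concentration, mg/L
cal_abs = np.array([0.00, 0.11, 0.22, 0.34, 0.45])    # absorbance at 520 nm

slope, intercept = np.polyfit(cal_conc, cal_abs, 1)   # least-squares line

def arsenic_conc(absorbance):
    # Invert the calibration line to obtain the concentration in mg/L
    return (absorbance - intercept) / slope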
Column experimental procedure
In order to carry out column experiments 24 for the removal of arsenic from groundwater by adsorption, the adsorbent, hydrated mixed trivalent iron-aluminum oxide, was uniformly packed over a glass wool sheet in glass tubes of 7 mm internal diameter and 250 mm height, to the bed height required for the accumulation of arsenic from the polluted spiked water. The packing of this type of binary oxide must be handled with utmost care; otherwise cracks and void spaces may form in the column bed and hamper the steady flow of the effluent.
Differential bed height of adsorbent
Three different bed heights were chosen for this adsorption experiment. To obtain bed heights of 5, 6 and 7 cm, the glass columns were packed with 4.1, 5.1 and 6.1 g of hydrated mixed trivalent iron-aluminum oxide, respectively. The flow rate of the effluent was kept fixed at 1 mL/min. The arsenic concentration (C0) in the influent was 1.3 × 10⁻¹ mg/L; the effluents were collected in fractions at regular intervals in 50 mL volumetric flasks, and the absorbance was measured.
Outflow rate variation
One particular bed height was considered for the variation of the effluent flow rate. Three glass columns were packed with 6.1 g of hydrated mixed trivalent iron-aluminum oxide to give a bed height of 7 cm. The field groundwater sample, spiked with arsenic at 1.3 × 10⁻¹ mg/L, was passed through each column at outflow rates of 1, 3 and 5 mL per minute, respectively. Effluents were collected at regular intervals in 100 mL volumetric flasks, and the absorbance was measured in each case.
Adsorption kinetic modeling
To elucidate the functioning and dynamic behavior of the column studies, two adsorption kinetic models, the Thomas and Adams-Bohart models, were considered for the analysis. The models are described below.
Thomas model
One of the most extensively used fundamental kinetic models for analyzing the theoretical background of column performance is the Thomas model 25. The model is based on some basic assumptions: it assumes the Langmuir isotherm of adsorption with reversible pseudo-second-order reaction kinetics; axial and radial dispersion arising from adsorption kinetics in the column bed is negligibly small; the column void fraction is assumed to remain unchanged; the physical properties of the solid adsorbent and the adsorbate are considered constant; and intraparticle diffusion and external resistance during mass transfer are ignored. The mathematical expression for the Thomas model, in its linearized form, is

ln(C0/Ct − 1) = kTh q0 x / ν − kTh C0 t,

where kTh is the Thomas rate constant (mL/min·mg), q0 is the equilibrium arsenic uptake per g of adsorbent, or adsorption capacity (mg/g), x is the total mass of the adsorbent (g), ν is the flow rate of the effluent (mL/min), Ct is the effluent metal concentration (mg/L) at any time t (min), C0 is the influent metal concentration (mg/L), Veff is the outflow volume (mL), and t = Veff/ν. The kTh and q0 values can be determined from the slope and intercept of the linear plot of ln(C0/Ct − 1) vs. t. This helps in explaining the experimental data of the breakthrough curves.
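A minimal sketch of this parameter extraction is given below; the breakthrough data are synthetic placeholders, and only the operating conditions (C0, flow rate, adsorbent mass) are taken from this study.

import numpy as np

C0 = 0.13      # influent As(III) concentration, mg/L (from this study)
nu = 1.0       # effluent flow rate, mL/min
x = 6.1        # adsorbent mass, g (7 cm bed)

# Synthetic breakthrough data for illustration only
t = np.array([1000, 2000, 3000, 4000, 5000, 6000, 7000], dtype=float)  # min
Ct = C0 / (1.0 + np.exp(4.0 - 8e-4 * t))                               # mg/L

# Linearized Thomas model: ln(C0/Ct - 1) = kTh*q0*x/nu - kTh*C0*t
y = np.log(C0 / Ct - 1.0)
slope, intercept = np.polyfit(t, y, 1)
k_th = -slope / C0                  # Thomas rate constant, mL/(min*mg)
q0 = intercept * nu / (k_th * x)    # adsorption capacity, mg/g
print(f"kTh = {k_th:.3g} mL/(min*mg), q0 = {q0:.3g} mg/g")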
Adams-Bohart model
The Bohart and Adams 26 model is used to describe the effective behavior of the column. An elementary equation was established to describe the relationship between Ct/C0 and t. This model was originally set up for gases and was later transposed to liquids by changing the mathematical terms used in the expression. It is used for the interpretation of the preliminary part of the breakthrough curve, and it is based on the assumption that the rate of adsorption is proportional to both the residual adsorbent capacity and the adsorbate concentration. The Adams-Bohart equation, in linearized form, is

ln(Ct/C0) = kAB C0 t − kAB N0 Z / F.
Here kAB is the kinetic constant (L/mg·min), F is the linear flow rate (cm/min), or superficial velocity (the volumetric flow rate divided by the column cross-sectional area), Z is the bed height (cm) of the column, and N0 is the saturation concentration (mg/L). The remaining parameters are as described for the Thomas model. The constants are calculated from the plot of ln(Ct/C0) against time t (min) for the breakthrough curves.
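An analogous sketch for the Adams-Bohart parameters, again with synthetic early-breakthrough data; the column geometry (7 mm internal diameter, 7 cm bed) is taken from the experimental section.

import numpy as np

C0 = 0.13                           # influent concentration, mg/L
Z = 7.0                             # bed height, cm
area = np.pi * (0.7 / 2.0) ** 2     # column cross-section, cm^2
F = 1.0 / area                      # superficial velocity at 1 mL/min, cm/min

# Synthetic data from the initial part of the breakthrough curve
t = np.array([200, 400, 600, 800, 1000], dtype=float)   # min
Ct = C0 * np.exp(-3.0 + 2e-3 * t)                        # mg/L

# Adams-Bohart model: ln(Ct/C0) = kAB*C0*t - kAB*N0*Z/F
y = np.log(Ct / C0)
slope, intercept = np.polyfit(t, y, 1)
k_ab = slope / C0                   # kinetic constant, L/(mg*min)
N0 = -intercept * F / (k_ab * Z)    # saturation concentration, mg/L
print(f"kAB = {k_ab:.3g} L/(mg*min), N0 = {N0:.3g} mg/L")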
Fig. 1. FTIR spectra of (a) equimolar mixed trivalent Fe2O3-Al2O3, (b) pure Fe2O3, (c) pure Al2O3
The Fourier transform infrared (FTIR) spectra a, b and c in Fig. 1 represent equimolar mixed trivalent Fe2O3-Al2O3, pure Fe2O3 and pure Al2O3, respectively. The spectra of the three oxides show a large number of peaks and bands of variable intensity within the range 4000-500 cm⁻¹, but no attempt has been made to assign every separate band to a specific wavenumber. A broad band above 3300 cm⁻¹ is assigned to the symmetric and asymmetric stretching of the O-H bonds of bound H2O molecules. A strong band in the range 1630-1650 cm⁻¹ is assigned to the bending mode of the hydroxyl (-OH) group. The bands at 694 and 478 cm⁻¹ in spectrum a (Fig. 1), that of the binary oxide, are attributed to the symmetric and asymmetric stretches of M(metal)-O bonds. The corresponding bands in spectrum b are found at 672 and 465 cm⁻¹, and in spectrum c at 732 and 580 cm⁻¹. Furthermore, the bands at about 981 and 1467 cm⁻¹ are assumed to arise from hydroxide bridging between the two hetero-metal ions present in the mixed trivalent oxide, referred to as symmetric and asymmetric bending frequencies.
Impact on breakthrough curve for variation of adsorbent bed height
The dependence of the breakthrough curve on bed depth was investigated by passing influent of 1.30 × 10⁻¹ mg/L concentration through the three columns packed with 4.1, 5.1 and 6.1 g of hydrated trivalent iron-aluminum mixed oxide as the adsorbent. The column bed heights were thus 5.0, 6.0 and 7.0 cm, respectively, and the flow rate of the effluent was maintained at 1 mL/min. The breakthrough curves of Ct/C0 vs. time (min) are shown in Fig. 2, where Ct is the outflow concentration at time t and C0 is the initial input concentration of trivalent arsenic. The columns with lower bed heights get saturated faster than the higher ones. The breakthrough volumes for the columns with bed heights 5.0, 6.0 and 7.0 cm are 3900, 5100 and 6600 mL, respectively. This is due to the rise in the empty bed contact time (EBCT) with increasing bed depth: the EBCT values for the columns with bed heights 5.0, 6.0 and 7.0 cm were 1.82, 2.23 and 2.91 min, respectively. With increased EBCT, the diffusion process becomes more effective, so the breakthrough volume (Vb) increases and the breakthrough time (tb) is reached later 27,28. With increasing EBCT, the contact time between the influent and the adsorbent is extended and a greater amount of adsorbate is adsorbed by the column bed; hence Vb increases with bed height.
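The EBCT values quoted above follow from the bed geometry; a quick check, assuming the 7 mm internal column diameter, is sketched below. The small deviations from the reported values may reflect bed porosity or the effective bed diameter.

import numpy as np

d, flow = 0.7, 1.0                            # bed diameter (cm), flow (mL/min)
for h in (5.0, 6.0, 7.0):                     # bed heights, cm
    ebct = np.pi * (d / 2.0) ** 2 * h / flow  # bed volume (mL) / flow = min
    print(f"h = {h:.0f} cm -> EBCT ~ {ebct:.2f} min")
# Prints roughly 1.9, 2.3 and 2.7 min, versus the 1.82, 2.23 and 2.91 min
# quoted above.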
The parameters calculated from the breakthrough curves, varying the adsorbent bed height from 5-7 cm at a fixed outflow rate, using the Thomas and Adams-Bohart kinetic model analyses are presented in Table 1 and Table 2, respectively. For the Thomas model, the Thomas rate constant (kTh) was found to increase, and the equilibrium arsenic uptake per gram of adsorbent, i.e., the maximum adsorption capacity (q0), to decrease, with rising column bed depth. The rise in the kTh values can be justified by the decrease in mass transport resistance with increasing adsorbent bed height 29. The changes in the parameters are related to the increase in the empty bed contact time (EBCT) of the adsorbate with the active sites of the adsorbent, since the rate of the sorption process increases with increasing EBCT. For the Adams-Bohart model, the values of the kinetic constant (kAB) and the saturation concentration (N0) were evaluated. The changes in the values of kAB and N0 reflect the influence of mass transfer, particularly in explaining the preliminary part of the adsorption process and the breakthrough curve analysis 30,31.
Variation of flow rate and its impact on the breakthrough curve
The results demonstrated in Fig. 5 show the influence of the flow rate of the effluent on the breakthrough curve for the removal of arsenic at a fixed bed depth (7 cm) of the hydrated mixed trivalent iron-aluminum oxide column.
Fig. 2. Influence on breakthrough curve with variable bed heights of hetero-oxide adsorbent
It is observed from the plot that the shape and inclination of the curves differ somewhat from each other with bed height variation. The break point is reached faster in the columns with lower bed depth. Higher uptake and a gradual increase in the slope of the breakthrough curves were observed at the initial stage of the curves. This gradual increase continued up to the break point of the curve, after which the arsenic concentration in the outflow, and with it the slope of the curve, increased rapidly. In the columns with lower bed depth, a lesser extent of adsorption resulted 27,28, so a longer time is needed for the adsorbent to bind the metal ion effectively. The column studies further show that the plateau of the breakthrough curve is reached faster with increasing effluent flow rate. This is because of the much reduced contact time between the solute present in the influent and the surface of the adsorbent column bed, so the adsorption front reaches the bottom of the column quickly.
The values determined from the breakthrough curves using the two kinetic models at variable flow rates are summarized in Table 1 and Table 2. As in the case of variable bed heights, here too the adsorption model parameters at increasing outflow rates can be correlated with the decrease in the EBCT values and the lowering of the mass transport resistance in the liquid film. For the Thomas model, kTh rises and q0 decreases with increasing volumetric flow rate at a given bed height [29][30][31]. For the Adams-Bohart model, the variation in the kAB and N0 values with increasing effluent flow rate and decreasing EBCT is quite significant. Thus kinetic modeling can be employed successfully to describe the adsorption behavior in the column experiment. The breakthrough volumes Vb (mL) for the variable outflow rates of 1.0, 3.0 and 5.0 mL/min were 6600, 3500 and 1600 for arsenic (Fig. 3), respectively.
The increase in the rate of outflow results in a decrease of both the breakthrough volume and the breakthrough time. This arises from the gradual decrease of the EBCT (min) from 2.91 to 1.57 to 0.65 as the influent arsenic flow rate increases through 1.0, 3.0 and 5.0 mL/min, respectively. With lower EBCT, the diffusion process becomes less effective, resulting in a lesser extent of adsorption.

Scrutinization of some parameters for quality analysis of water

Some parameters for checking the water quality of the field sample were analyzed and are summarized in Table 3. Their corresponding values after passing through adsorbent column beds of different heights, taken at the break point, are collectively shown in Table 4.
CONCLUSION
The breakthrough analysis is the preliminary investigation needed to carry the experimental work from the batch study to its further application. In this work, a column study was carried out to establish a suitable and effective system that can be employed for the removal of arsenic present at concentrations well above its permissible limit in contaminated groundwater. Hydrated mixed trivalent iron-aluminum oxide was successfully utilized as the column bed adsorbent to eliminate arsenic from the influent. The breakthrough curves were studied thoroughly and the maximum adsorption capacities were evaluated while varying both the adsorbent bed height and the effluent flow rate. With decreasing outflow rate and rising column bed height, the breakthrough volume was found to increase, owing to the enhancement of the empty bed contact time (EBCT). The column performance investigations using hydrated trivalent iron-aluminum hetero-oxide packed beds indicate that its effectiveness for arsenic removal is appreciable. Thus hydrated iron-aluminum binary oxide can be used as a potent, cost-effective agent for eliminating this toxic metalloid from groundwater. The kinetic parameters of the Thomas and Adams-Bohart models were predicted successfully and were in good agreement with the experimentally determined EBCT values. This adsorption method is a green-chemistry-based technology because it requires no extra energy to run the removal process. This remediation technique can therefore be considered a safe option for removing arsenic from polluted groundwater.
ACKNOWLEDGEMENT
The author thanks the authorities of her college for various kinds of support. The author also acknowledges Prof. U. C. Ghosh, Department of Chemistry, Presidency College, Kolkata (now recognized as Presidency University), for his help.
Coupling matrix synthesis of general Chebyshev filters
A single optimization algorithm based on SolvOpt that synthesizes coupling matrices for cross-coupled microwave filters is presented. Rules for setting the initial values of SolvOpt are proposed to find the global minimum of the cost function. The SolvOpt method provides faster convergence and higher accuracy in finding the final solution compared with hybrid optimization algorithms. Application examples illustrate the excellent performance and validity of this method.
Introduction
Filtering structures with increasingly stringent requirements can often be met only by using cross-coupled resonators to generate finite transmission zeros. Both analytical and numerical methods for the synthesis of coupling matrices corresponding to cross-coupled filters have been extensively studied. A fundamental analytical theory of cross-coupled resonator bandpass filters was developed in the 1970s by Atia and Williams [1]. A slightly different, widely used analytical technique based on generating the Chebyshev filtering functions with prescribed transmission zeros was advanced by Cameron [2]. Cameron further proposed "N + 2" CM synthesis techniques for microwave filters with source/load-multiresonator coupling [3]. These analytical techniques produce a full coupling matrix (CM) that must be transformed, by repeated matrix similarity transformations, into a form suitable for realization. The main difficulty with these methods is that the sequence of matrix transformations is not known in advance and may be difficult to derive, for example for box-section configurations. Numerical methods, in contrast, can strictly enforce the desired topology. Amari proposed CM synthesis of microwave filters based on a local optimization method [4,5], which relies upon the provision of a good initial guess; however, how to set the initial guess values was not discussed. Recently, a class of hybrid optimization methods combining local search methods with global methods has been reported [6,7]. For example, the paper in [6] presented a method consisting of a Levenberg-Marquardt algorithm as the local optimizer and a genetic algorithm as the global optimizer. In [7], a genetic algorithm is combined with a sequential quadratic programming local search method to form a hybrid method. These hybrid optimization methods can find a global minimum; however, they need more iterations, and the process of synthesizing the CM becomes very complex.
A single optimization method based on SolvOpt that synthesizes the CM for cross-coupled microwave filters is presented in this paper. SolvOpt is a solver for local optimization problems, and local optimization methods rely upon the provision of a good initial guess at the solution. Synthesizing a CM by optimization, however, is not a purely mathematical problem: since the filters must be realizable as physical structures, the limited range of values of the CM elements is known in advance, so good initial values for the SolvOpt algorithm can easily be chosen. Rules for setting the initial values of the SolvOpt optimization method are proposed. One can judge whether a final solution is a global optimum from the cost function value of the solution, because the value of the cost function is zero in theory; hence the local search method based on SolvOpt can also be guaranteed to find a global solution.
Coupling matrix synthesis using the SolvOpt method
For any two-port lossless filter network, the transmission function S21 and reflection function S11 may be expressed as

|S21(ω)|² = 1 / [1 + ε² C_N²(ω)], |S11(ω)|² = 1 − |S21(ω)|², (1)

where ω is the normalized frequency variable, C_N(ω) is the Nth-degree general Chebyshev filtering function with prescribed transmission zeros, and ε is a ripple constant related to the passband return loss RL (in dB) by ε = 1/√(10^(RL/10) − 1). s_n denotes the location of the nth transmission zero in the complex s-plane. Cameron has proved that the number m of transmission zeros with finite locations must satisfy m ≤ N; the zeros without finite locations must be placed at infinity. However, two-port networks without source/load-multiresonator coupling can realize a maximum of N − 2 finite-location transmission zeros [2,3]. Amari has given a rigorous proof of the maximum number of finite transmission zeros of cross-coupled filters with a given topology [8,9].
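Equation (1) can be evaluated numerically as sketched below, assuming Cameron's closed form of the general Chebyshev filtering function, C_N(ω) = cosh[Σ_n cosh⁻¹(x_n)] with x_n = (ω − 1/ω_n)/(1 − ω/ω_n) and x_n = ω for zeros at infinity; this closed form and the normalization C_N(±1) = 1 are assumptions of the sketch rather than a reproduction of [2].

import numpy as np

def chebyshev_response(w, tz, N, RL):
    # |S21|^2 and |S11|^2 of (1) on the normalized frequency grid w
    eps = 1.0 / np.sqrt(10.0 ** (RL / 10.0) - 1.0)  # ripple constant
    w = np.asarray(w, dtype=complex)
    acc = np.zeros_like(w)
    for n in range(N):
        if n < len(tz):                 # prescribed finite transmission zero
            x = (w - 1.0 / tz[n]) / (1.0 - w / tz[n])
        else:                           # remaining zeros placed at infinity
            x = w
        acc += np.arccosh(x)            # complex arccosh handles |x| < 1
    CN = np.abs(np.cosh(acc))
    s21_sq = 1.0 / (1.0 + (eps * CN) ** 2)
    return s21_sq, 1.0 - s21_sq

# Example: filter 1 from the Examples section (RL = 20 dB, four finite zeros)
w = np.linspace(-3.0, 3.0, 1201)
s21_sq, s11_sq = chebyshev_response(
    w, [1.592692, -1.592692, 2.132335, -2.132335], N=6, RL=20.0)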
The S-parameters describe the response of a two-port filter network. The relation between the S-parameters and the CM can be expressed, for the case of the "N" CM [4], as

S11(ω) = 1 + 2jR1[A⁻¹]11, S21(ω) = −2j√(R1RN)[A⁻¹]N1, with A = ωU − jR + M, (2)

where U is the N×N identity matrix and R is the matrix whose only nonzero entries are R11 = R1 and RNN = RN; and, for the case of the "N+2" CM with source/load-multiresonator coupling [5, eq. (4)], as

S11(ω) = 1 + 2j[A⁻¹]11, S21(ω) = −2j[A⁻¹]N+2,1, with A = ωW + M − jR, (3)

where W equals the identity matrix except that its first and last diagonal entries are zero, and R is nonzero only at R11 = RN+2,N+2 = 1. The normalized load and source resistors R1 and RN are accurately calculated in this paper using Cameron's analytical method [2]; this differs from other optimization methods.
The elements Mi,j of the CM are known as the coupling coefficients, and varying their values changes the response. The aim of the CM synthesis process is to select a CM that causes (2) or (3) to produce a filter response coinciding with the response obtained from (1).
The selection of an appropriate cost function is important for the success of any optimization method. The cost function given by Amari [4] is used in the current work:

C(x) = Σk=1..P |S21(x, ωtz,k)|² + Σk=1..Q |S11(x, ωrz,k)|² + Σω=±1 (|S11(x, ω)| − ε/√(1+ε²))², (4)

where P and Q are the numbers of finite transmission and reflection zeros, respectively, ωtz,k and ωrz,k are the locations of the kth transmission and reflection zeros at normalized frequency, and the variable x represents the set of control variables at the current iteration, that is, the elements of the CM. The nonzero CM elements Mi,j are used as independent variables in the optimization process. The gradient of the cost function is needed in the SolvOpt algorithm. The gradients of |S11| and |S21| with respect to Mi,j were given in [4] for the "N" CM case and in [5] for the "N+2" CM case; the gradient of the cost function with respect to an independent variable Mi,j then follows from (4) by the chain rule,

∂C/∂Mi,j = Σk=1..P 2|S21(x, ωtz,k)| ∂|S21|/∂Mi,j + Σk=1..Q 2|S11(x, ωrz,k)| ∂|S11|/∂Mi,j + Σω=±1 2(|S11(x, ω)| − ε/√(1+ε²)) ∂|S11|/∂Mi,j.

Although the SolvOpt optimization method relies upon the provision of a good initial guess at the solution, the filters must be realizable as physical structures: generally, the magnitudes of the direct coupling coefficients are bounded by 0.1 and 1, and the cross couplings by 0 and 0.8. The rules for setting initial values for SolvOpt are proposed as follows: for the "N" CM, all cross and self couplings are set to a specific value in the range 0 to 0.2 and all direct couplings to a specific value in the range 0.4 to 0.6.
For "N+2" CM, direct couplings MS,1 (source to resonator 1 ) and ML,N (load to resonator N) set to 1, the rules of setting all remaining CM elements are the same as those of "N" CM.
We can thus synthesize the "N" or "N+2" CM easily and efficiently by minimizing the cost function, using the above rules for setting the initial values.
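A compact sketch of the response computation (3) and the cost function (4) is given below; the sign conventions for S11 and S21 follow one common choice in the literature and may differ from those of [4,5]. Minimizing cost over the nonzero entries of M, initialized according to the rules above, outlines the synthesis procedure.

import numpy as np

def s_params(M, w):
    # "N+2" formulation: A = w*W + M - j*R, with W the identity matrix whose
    # source/load diagonal entries are zero, and R nonzero only at those entries
    n2 = M.shape[0]
    W = np.eye(n2)
    W[0, 0] = W[-1, -1] = 0.0
    R = np.zeros((n2, n2))
    R[0, 0] = R[-1, -1] = 1.0
    Ainv = np.linalg.inv(w * W + M - 1j * R)
    s11 = 1.0 + 2j * Ainv[0, 0]
    s21 = -2j * Ainv[-1, 0]
    return s11, s21

def cost(M, tz, rz, eps):
    c = 0.0
    for w in tz:                                  # |S21|^2 at transmission zeros
        c += abs(s_params(M, w)[1]) ** 2
    for w in rz:                                  # |S11|^2 at reflection zeros
        c += abs(s_params(M, w)[0]) ** 2
    for w in (-1.0, 1.0):                         # equiripple band-edge terms
        c += (abs(s_params(M, w)[0]) - eps / np.sqrt(1.0 + eps ** 2)) ** 2
    return c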
The SolvOpt optimization algorithm begins with an initial set of control variables x, consisting of the elements of the CM chosen according to the rules given in this paper. The optimization can be performed repeatedly to obtain a more accurate solution, with each solution used as the initial values for the next iteration. The algorithm terminates when the value of the cost function reaches a target value. If the maximum number of iterations has been performed and the target value of the cost function has not been reached, the set of control variables x is re-initialized according to the setting rules. Usually, with a good initial guess, the value of the cost function reaches a value below 1.0×10⁻¹⁰ after two or three iterations have been performed. Generally, the desired accuracy of the cost function is obtained when the number of initial guesses of the control variables is one or two, following the rules proposed in this paper.
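The restart strategy just described can be outlined as a short driver loop. SolvOpt itself is not assumed to be available here, so scipy's BFGS minimizer serves as a stand-in local optimizer for illustration; the packing of the vector x into the nonzero CM entries is left to the caller.

import numpy as np
from scipy.optimize import minimize

def optimize_with_restarts(cost_fn, init_fn, target=1e-10, max_guesses=5):
    # Re-initialize the control variables until the target cost is reached
    best = None
    for k in range(max_guesses):
        res = minimize(cost_fn, init_fn(k), method="BFGS")
        if best is None or res.fun < best.fun:
            best = res
        if best.fun < target:
            break
    return best

def init_fn(k, n_direct=5, n_cross=2, seed=0):
    # "N" CM rules: direct couplings in [0.4, 0.6], cross/self in [0, 0.2]
    rng = np.random.default_rng(seed + k)
    return np.concatenate([rng.uniform(0.4, 0.6, n_direct),
                           rng.uniform(0.0, 0.2, n_cross)])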
Examples
In this section, to verify the presented method, it is applied to two examples of filter synthesis. The coupling schemes of the filters are shown in Fig. 1, where a solid circle represents the source or load, a hollow circle a resonator, a dashed line a cross coupling and a solid line a direct coupling.
Symmetric 6th-order filter (Filter 1)
This is an example of synthesizing an "N" CM. We consider a symmetric 6th-order filter with four transmission zeros at ±1.592692 and ±2.132335 and a passband return loss of 20 dB (filter 1). The six reflection zeros are located at ±0.9734, ±0.7498 and ±0.2893, and R1 = RN = 0.9904, calculated using Cameron's method [2]. The coupling scheme of this filter is shown in Fig. 1(a). The initial guess of the control variables x for this example consists of the following 7 variables: all direct couplings Mi,i+1, for i = 1, 2, ..., 5, set to 0.5, and the cross couplings M2,5 and M1,6 set to zero. The value of the cost function in (4)
Asymmetric 8th-order filter (Filter 2)
This is an example of synthesizing an "N+2" CM. We consider an asymmetric 8th-order filter with seven transmission zeros (in this case, three on the real axis and two complex pairs) at 1.196, 1.45, 1.62, 0.148±j0.9040 and 0.49±j0.955, and a passband return loss of 20 dB (filter 2); these specifications are given in [6]. The eight reflection zeros are located at 0.97428, 0.78139, 0.98691, 0.87098, 0.61416, 0.46504, 0.26391 and 0.10520, calculated using Cameron's method [2]. The coupling scheme of this filter is shown in Fig. 1(b). The initial guess of the control variables x for this example consists of the following 24 variables: all direct couplings Mi,i+1, for i = 1, 2, ..., 7, set to 0.6, and the cross couplings M1,3, MS,3, MS,4, ML,4, ML,5, M5,8 and M6,8 set according to the rules above. Both the frequency response of the prototype as computed from (1) and that computed directly from the CM are shown in Fig. 3. The excellent agreement between the two (the difference is not visible in the figure) shows the accuracy of the SolvOpt method.
For comparison, the value of the cost function calculated by substituting the CM of [6, 17(a), p. 2164] into (4) is 1.203×10⁻⁶. More than 50 iterations are needed to converge for this example using the hybrid method in [6], whereas the SolvOpt method needs only two iterations to converge to 5.738×10⁻¹³.
As can be seen, the proposed method provides faster convergence and higher accuracy in finding the final solution than the hybrid methods in [6,7].
Summary
A single SolvOpt algorithm that synthesizes coupling matrices for cross-coupled microwave filters with or without source/load-multiresonator coupling has been presented, together with rules for setting its initial values for fast convergence and good accuracy. The method has been applied to the synthesis of filters of varied order and symmetry and has yielded excellent results, demonstrating the simplicity, efficiency and accuracy of the SolvOpt method, even for filter responses with large numbers of control variables to be optimized. The proposed SolvOpt algorithm simplifies the process of extracting the CM and provides faster convergence and higher accuracy in finding the final solution compared with hybrid optimization methods.
Polymorphonuclear granulocytes in human head and neck cancer: Enhanced inflammatory activity, modulation by cancer cells and expansion in advanced disease
The progression of epithelial cancer is associated with an intense immunological interaction between the tumor cells and immune cells of the host. However, little is known about the interaction between tumor cells and polymorphonuclear granulocytes (PMNs) in patients with head and neck squamous cell carcinoma (HNSCC). In our study, we investigated systemic PMN-related alterations in HNSCC, the role of tumor-infiltrating PMNs and their modulation by the tumor microenvironment. We assessed the infiltration of HNSCC tissue by PMNs (retrospectively) and systemic PMN-related alterations in blood values (prospectively) in HNSCC patients (n = 99 and 114, respectively) and control subjects (n = 41). PMN recruitment, apoptosis and inflammatory activity were investigated in an in vitro system of peripheral blood PMNs and a human HNSCC cell line (FaDu). HNSCC tissue exhibited considerable infiltration by PMNs, and strong infiltration was associated with poorer survival in advanced disease. PMN count, neutrophil-to-lymphocyte ratio and serum concentrations of CXCL8 (interleukin-8), CCL4 (MIP-1β) and CCL5 (RANTES) were significantly higher in the peripheral blood of HNSCC patients than in that of controls. In vitro, HNSCC-conditioned medium inhibited apoptosis of PMNs, increased chemokinesis and chemotaxis of PMNs, induced release of lactoferrin and matrix metalloproteinase 9 by PMNs and enhanced the secretion of CCL4 by PMNs. Our findings demonstrate alterations in PMN biology in HNSCC patients. In vitro, tumor-derived factors modulate cellular functions of PMNs and increase their inflammatory activity. Thus, the interaction between HNSCC and PMNs may contribute to host-mediated changes in the tumor microenvironment.
Worldwide, head and neck cancer is one of the six most common cancers. More than 90% of head and neck cancers are squamous cell carcinomas (HNSCCs) that primarily originate in the oral cavity, the pharynx and the larynx. [1][2][3] HNSCCs display an inflammatory microenvironment with frequent infiltration by large numbers of immune cells. This infiltration results in a reciprocal interaction between the malignant tissue and the immune cells that causes local and systemic alterations, often resulting in the downregulation of immune functions and tumor escape from immune control. [4][5][6] Accumulating evidence suggests that polymorphonuclear granulocytes (PMNs) and other myeloid cells play an important tumor-promoting role during tumor progression. [7][8][9] High numbers of PMNs before treatment, as determined in the peripheral blood of patients with malignant melanoma, 10 and an increased neutrophil-to-lymphocyte ratio (NLR), as demonstrated in ovarian cancer, 11 have been proposed as independent prognostic factors for short overall survival. Tumor-infiltrating PMNs have been linked to a poorer prognosis for patients with lung adenocarcinoma of the bronchioloalveolar subtype, 12 but they seem to be associated with a reduced mortality for patients with gastric carcinoma. 13 PMN functions are modulated by a variety of cytokines and chemokines, 14 and many of those factors have been implicated in tumor progression: CXCL8 (interleukin-8) promotes the tumor infiltration by PMNs 15 and has been implicated in the modulation of the tumor microenvironment. 16,17 In HNSCC patients, serum CXCL8 has been suggested as a possible biomarker for response and survival. 18,19 CCL4 (macrophage inflammatory protein 1β) is produced by a variety of cells including PMNs. 20 It regulates the recruitment of both myeloid and lymphoid immune cells 21 and their intratumoral infiltration. 22 In contrast to some antitumor activity of CCL4, 22 recent reports have linked an overexpression of CCL4 with tumor recurrence or progression. 23,24 CCL5 (RANTES) promotes PMN chemotaxis, 25 seems to be a marker of disease progression for breast cancer patients 26 and has been shown to increase the production of matrix metalloproteinase 9 (MMP-9) by oral cancer cells. 27 It has been reported that tumor cells actively modulate the functions of PMNs. For example, the expression of CXCL8 by tumor cells promotes the recruitment of PMNs to the tumor and their activation. 28 Recruited PMNs exhibit increased production of reactive oxygen species, NADPH oxidase and myeloperoxidase (MPO). 29 In bronchioalveolar carcinoma, the local survival of PMNs is prolonged by the production of antiapoptotic factors by the tumor microenvironment. 30 In our study, we investigated local and systemic PMN-related alterations in patients with HNSCC. We used an in vitro system to investigate the functional interaction between HNSCC cells and PMNs. We found that HNSCC cells upregulate inflammatory activity and also upregulate the production of factors in PMNs with the possible ability to promote tumor progression. HNSCC patients exhibit increased expression of the chemokines that regulate PMN biology. The intratumoral accumulation of PMNs was associated with poor survival in advanced disease.
Study subjects and tumor characteristics
The experiments were approved by the local ethics committee, and written informed consent was obtained before sample collection. Blood samples were prospectively collected from patients before oncologic therapy and from 41 healthy volunteers as controls. Altogether, 114 patients (median age, 63 years; range, 41-86 years) with HNSCC of the oral cavity, oropharynx, hypopharynx or larynx were enrolled from 2008 to 2009 (survival analysis not yet available). For characteristics of patients and tumors, see Table 1. All consenting patients were included in the study unless they had HNSCC in other locations, radiotherapy or chemotherapy within the past 5 years, synchronous carcinoma in another location or severe concomitant infectious disease. HNSCCs were staged according to the tumor-node-metastasis (TNM) system. 1 For tissue analysis, we retrospectively analyzed paraffin-embedded sections collected from 99 patients (median age, 59 years; range, 36-87 years) with HNSCC. No restriction of selection was used except for localization (oropharynx or hypopharynx), availability (tissue and clinical data) and date of first diagnosis (between 1995 and 2001) (see Table 1). Our focus was on advanced disease (Stage III or IV). Retrospective analysis of clinical courses shows that surgery alone was performed in 9%, surgery combined with adjuvant radio(chemo)therapy in 22% and primary radio(chemo)therapy in 69% of the patients (followed by salvage surgery in 36% of these). The median follow-up period for surviving patients was 69 months (range, 43-124 months).
Culture of HNSCC cell line
In our study, we used the human hypopharyngeal carcinoma cell line FaDu (American Type Culture Collection, ATCC). FaDu cells were cultured in RPMI-1640 (Invitrogen) supplemented with 10% fetal calf serum and antibiotics (Biochrom). Quality and identity of the cell line were validated consistently according to ATCC Technical Bulletin No. 8, including regular microscopic controls of morphology, growth curve recordings and PCR-based testing for mycoplasma infection. FaDu-conditioned medium was produced by incubating 2 × 10⁶ FaDu cells per milliliter for 24 hr at 37 °C in RPMI-1640. Cellular debris was removed by centrifugation.
Isolation of PMNs from peripheral blood
We used previously established protocols for isolation of PMNs. 31 Peripheral blood from healthy subjects was diluted (1:1 v/v) with PBS and separated by gradient centrifugation with 1077-Lymphocyte Separation Medium (PAA). Erythrocytes were sedimented with 1% polyvinyl alcohol solution (1:1 v/v) (Sigma-Aldrich). Remaining erythrocytes were lysed with Aqua Braun (B. Braun). The resulting PMNs (purity of >98%) were cultured in RPMI-1640 supplemented as above.
Migration and apoptosis of PMNs
Directed migration (chemotaxis) and random migration (chemokinesis) of PMNs were examined by using 3-µm cell culture inserts in 24-well companion plates (BD Bioscience).
The companion plates were filled with 800 µl of medium with 5 ng/ml recombinant CXCL8 (R&D Systems) or FaDu supernatant, in the presence or absence of anti-CXCL8-neutralizing antibodies (R&D Systems) or isotype control (BD Pharmingen), respectively. PMNs (5 × 10⁵ cells per 200 µl) were placed in the inserts and allowed to migrate for 3 hr at 37 °C, and migrated cells were counted (CASY Model TT; Innovatis). The migration/chemotactic index is calculated according to the following formula: chemotactic index = migration induced by the chemoattractant / spontaneous migration toward control medium. By definition, spontaneous migration of PMNs in control medium has a chemotactic index of 1.
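As a worked example of this definition, with hypothetical cell counts:

# Hypothetical counts of migrated PMNs after 3 hr
migrated_to_supernatant = 2.0e5   # toward FaDu-conditioned supernatant
migrated_to_control = 0.5e5       # spontaneous migration toward medium

chemotactic_index = migrated_to_supernatant / migrated_to_control   # = 4.0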
To measure apoptosis, we stained PMNs with PE-Annexin V and 7-amino-actinomycin D according to the manufacturer's instructions (BD Pharmingen). Quantification was performed with a FACSCanto II flow cytometer (BD). We performed independent experiments using PMNs from n = 15 (for chemotaxis), n = 7 (for chemokinesis) and n = 3 (for apoptosis) donors.
Statistical analysis
Standard descriptive statistics were used (e.g., means and standard deviations). To assess between-group differences, we used nonparametric exact tests throughout (Mann-Whitney-Wilcoxon U-tests for two groups or Kruskal-Wallis tests for more than two groups). Correlation coefficients reported are (Spearman) rank correlations. Survival time in months was calculated as the difference between diagnosis date and date of death independent of the cause, or the last observation date in case of censoring. Although survival probabilities were graphically assessed by the Kaplan-Meier method, univariate and multivariate Cox regression analyses were used for inference, both in the total tissue sample (n = 99) and in a more homogeneous subsample (n = 40) (see Results). For the in vitro experiments, we used two-sample Student's t-tests.
All reported p values are nominal, two-sided with an α significance level of 0.05, and not adjusted for multiple testing.
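A minimal Python sketch of this workflow, assuming a pandas DataFrame with illustrative column names ('months', 'dead', 'pmn_score', 'cxcl8', 'tumor_size') rather than the actual study variables; note that scipy's mannwhitneyu supports exact p-values via method="exact" in recent versions.

from scipy.stats import mannwhitneyu, spearmanr
from lifelines import KaplanMeierFitter, CoxPHFitter

def analyze(patients, controls, df):
    # Two-group comparison, e.g., of NLR values in patients vs. controls
    _, p_mwu = mannwhitneyu(patients, controls, alternative="two-sided")

    # Rank correlation, e.g., serum CXCL8 vs. endoscopic tumor size
    rho, p_rho = spearmanr(df["cxcl8"], df["tumor_size"])

    # Kaplan-Meier estimate and a univariate Cox regression on survival
    km = KaplanMeierFitter().fit(df["months"], event_observed=df["dead"])
    cox = CoxPHFitter().fit(df[["months", "dead", "pmn_score"]],
                            duration_col="months", event_col="dead")
    return p_mwu, (rho, p_rho), km, cox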
PMN-related alterations in the peripheral blood of HNSCC patients
In a first attempt to assess the role of PMNs in the biology of HNSCC, we analyzed differences related to blood count. Study subjects included 41 controls and 114 patients with HNSCC. Patient and tumor characteristics are reported in Table 1, whereas descriptive summaries of blood count-related markers are displayed in Table 2 (for male patients). Peripheral blood data of female patients (n = 15) are shown separately in Supporting Information Table S1 in consideration of the well-known gender-specific (patho)physiological differences in blood count and immune function. 32,33 Despite those differences and the small cohort examined, similar tendencies were observed for females.
The percentages of PMNs and the leukocyte counts were higher in HNSCC patients than in control subjects (p < 0.001) (Table 2). Because the total lymphocyte counts were similar in HNSCC patients and in controls, this resulted in a significantly higher NLR in HNSCC patients than in controls (p < 0.001). Further analysis indicated that PMN numbers, leukocyte counts and NLR were associated with the T stage of the tumor (with significantly higher numbers in T4 stage) (see Table 2) and with the lymph node involvement (N) stage (PMNs, p = 0.048; leukocytes, p = 0.018; NLR, p = 0.027; data not shown). These results were supported by correlation testing using the tumor size as measured either by endoscopic examination or by radiologic imaging (data not shown). Taken together, these findings indicate that the increase of PMN count is the most important difference in the leukocyte composition of peripheral blood obtained from healthy donors and HNSCC patients. Next, we assessed the presence of inflammatory mediators in the peripheral blood. The serum concentrations of the chemokines CXCL8, CCL3, CCL4 and CCL5 in HNSCC patients and in control subjects are shown in Figure 1; the serum concentrations of CRP are shown in Table 2. We found that the serum concentrations of CXCL8, CCL4 and CCL5 were significantly higher in HNSCC patients than in controls. In contrast, there was no significant difference between the groups for CCL3. CRP concentrations were positively and significantly associated with the T stage (p = 0.004) and the N stage of the tumor (p = 0.001). In a subgroup of patients with tumors of the pharynx and the oral cavity, tumor size as determined by endoscopy (Spearman rank correlation), T stage (Kruskal-Wallis test) and presence of nodal metastasis (Mann-Whitney U-test) were positively and significantly associated with CXCL8 (tumor size, r = 0.44, p = 0.023; T stage, p = 0.018; N0 versus N1-N3, p = 0.017). Except for N stage, such associations were also observed for CCL4 (tumor size, r = 0.51, p = 0.006; T stage, p = 0.049; N0 versus N1-N3, p = n.s.) (data not shown). Taken together, these findings indicate that inflammatory activity in the peripheral blood of HNSCC patients is enhanced and that this enhanced activity is positively correlated with disease stage.
Increased tumor infiltration by PMNs in advanced disease
Increased chemokine concentrations may result in further activation of immune effector cells and in the recruitment of these cells to the tumor site. To investigate whether HNSCC tissue is infiltrated by PMNs, we stained tissue from 99 HNSCC patients for the granulocyte marker CD66b. We found that 93% of the tissue samples exhibited PMN infiltration, in tissue areas consisting primarily of carcinoma cells or in stromal tissue regions (Fig. 2a). We further characterized the HNSCC tissue-associated PMNs by immunohistochemical staining for the azurophilic (primary) granule marker MPO. We found that HNSCC tissues were infiltrated by MPO-positive cells (Fig. 2b). However, we also found cells that were morphologically recognizable as PMNs but did not stain positive for MPO. These results could indicate that some of the tumor-infiltrating PMNs may already have executed their phagosomal oxidative burst and degranulation activity, although experimental artifacts cannot be excluded. Upon scoring the extent of PMN infiltration by anti-CD66b staining (Fig. 2a), we observed weak infiltration in 44% of the tissues, medium infiltration in 30% and strong infiltration in 19% (data not shown). When we compared PMN infiltration with tumor stage, most of the T4 tumors displayed medium or strong infiltration, whereas smaller and less-invasive tumors exhibited a lower degree of PMN infiltration (Fig. 2c).
These findings demonstrate that a considerable number of HNSCC cancers are infiltrated by PMNs and suggest a functional interaction between HNSCC cells and PMNs.
Tumor infiltration by PMNs is associated with poor survival in advanced disease
To assess the clinical and pathophysiological relevance of HNSCC-infiltrating PMNs, we analyzed the relationship between the extent of PMN infiltration and the clinical outcome. To eliminate variables associated with differences in disease stage and general health condition, 34 we included only patients with advanced disease (Stage III or IV) and excluded those aged 70 years or more, those with synchronous or metachronous cancers and those with severe systemic disease (ASA > 2) at the time of initial diagnosis. For this subgroup, the 5-year survival rate was 40% (Fig. 2d). Although analysis of the whole cohort (n = 99) yielded no association between PMN infiltration and survival, univariate and multivariate Cox regression analyses of patients with advanced disease (n = 40) demonstrated that the 5-year survival rate for patients with medium or strong PMN infiltration was significantly lower than that of patients with weak or no infiltration (p = 0.045 and p = 0.048, respectively) (Fig. 2d and Supporting Information Table S2). These findings indicate that strong infiltration of HNSCC tissue by PMNs may represent a negative prognostic factor for HNSCC patients with advanced disease.
Modulation of PMN functions by HNSCC cells
In our final series of experiments, we set up an in vitro system to investigate cell biological mechanisms of HNSCC-PMN interaction. To this end, peripheral blood PMNs were stimulated with supernatant obtained from a human HNSCC cell line (FaDu).
We initially investigated the effect of FaDu-HNSCC cells on the migration and recruitment of PMNs. To assess random migration (chemokinesis), PMNs were stimulated with FaDu-conditioned supernatant or with control medium and allowed to migrate toward control medium. To assess directed migration (chemotaxis), PMNs incubated in control medium were allowed to migrate toward FaDu-HNSCC-conditioned supernatant or control medium. Counting the migrated cells after 3 hr of incubation demonstrated that, compared to the control medium, the tumor supernatant induced PMNs to respond with higher random migration (threefold) and higher directed migration (fourfold) (Fig. 3a). To investigate the mechanism of PMN chemotaxis, we used neutralizing antibodies against CXCL8, a chemokine that we found in high amounts (around 1 ng/ml) in FaDu-conditioned supernatant, in contrast to CXCL1 and CXCL6, which were not detectable (data not shown). The results show that neutralizing CXCL8 reduced PMN chemotaxis toward FaDu supernatant (Fig. 3b). Control experiments with recombinant CXCL8 demonstrated the potency of the inhibitory antibody, as it reduced CXCL8-induced PMN chemotaxis to background levels (chemotactic index of 1). Fluorescence immunohistochemistry confirmed considerable expression of CXCL8 in tissue sections from HNSCC patients as well (Supporting Information Fig. 1).
Next, to determine the effect of HNSCC cells on PMN survival, PMNs were stimulated with FaDu supernatant or control medium, and apoptosis was determined 8 and 24 hr later. The survival of PMNs was significantly higher in the presence of tumor cell supernatant compared to control medium: more than 80% of cells remained viable even after 24 hr of culture (Fig. 3c).
Upon activation, PMNs are known to release a multitude of inflammatory factors, such as cytokines or chemokines. We investigated the effect of the HNSCC cell line on the release of CCL4 by PMNs. We chose CCL4 because PMNs produce substantial amounts of CCL4 only after appropriate activation. Additionally, we have demonstrated that serum concentrations of this chemokine were elevated in HNSCC patients (Fig. 1). We observed that the tumor cell supernatant induced secretion of CCL4 by PMNs to levels similar to those induced by LPS.

[Fragment of the Figure 3 legend: Recombinant CXCL8 (r-CXCL8, 5 ng/ml) was used as a positive control. (c) For analyses of the tumor-dependent survival of PMNs, PMNs were incubated in HNSCC-conditioned medium for the indicated times; cells were stained with Annexin V-phycoerythrin and 7-aminoactinomycin D, and the percentage of living cells was measured by flow cytometry (n = 3 individual donors). (a-c) Means ± SD from independent experiments, with p values determined by Student's t-test.]
In addition, tumor cell supernatant enhanced LPS-induced CCL4 release by PMNs (Fig. 4a). Because activated PMNs can also release factors contained in their granules, we determined the effects of HNSCC cell line supernatant on the release of lactoferrin (a marker for secondary granules) and MMP-9 (a marker for tertiary granules). The results indicated that the levels of both lactoferrin and MMP-9 in the supernatant were elevated as early as 15 min after stimulation with tumor supernatant. These results show that PMNs are activated and rapidly degranulate after exposure to HNSCC cells (Figs. 4b and 4c).
In sum, these findings demonstrate that FaDu-HNSCC cells can modulate important cellular responses of PMNs, such as migration, apoptosis and the release of inflammatory factors, all of which may ultimately have important consequences for the tumor microenvironment and for tumor-associated inflammation.
Discussion
Recent studies using murine tumor models or involving cancer patients have provided evidence for an important functional role of PMN during tumor progression. 35,36 In our study, we investigated the modulation of granulocyte immunobiology in human HNSCC. Our findings suggest that the infiltration of HNSCC tissue by PMNs is associated positively with tumor stage and negatively with overall survival times. We also observed systemic differences in the PMN compartment in HNSCC patients, which correlated positively with tumor size. In vitro experiments demonstrated that HNSCC cells directly recruit PMNs, prolong their survival and promote their inflammatory activity. The findings of these in vitro experiments are supported by the finding that the serum concentrations of the inflammatory chemokines CCL4, CCL5 and CXCL8 are higher in the peripheral blood of HNSCC patients than in that of controls. Thus, our study indicates that PMNs are important mediators of tumor-associated inflammation and may influence the survival of HNSCC patients.
Solid tumors often show a high degree of leukocytic infiltration and a state of so-called cancer-related inflammation. As part of this inflammatory process, various leukocyte subsets are recruited to the malignant tissue, where they contribute to tumor progression. 5 Although the roles of tumor-associated macrophages, regulatory T cells, tumor-infiltrating lymphocytes and, more recently, myeloid-derived suppressor cells (MDSCs) in tumor progression have been intensively investigated, 4,6,9 the impact of PMNs is less clear. 37 This fact is surprising, because PMNs are the most abundant leukocyte population in the peripheral blood. They secrete many potential immunomodulators, activate various other immune cells and, thus, play a role in many inflammatory diseases. 20,38 However, recent reports have provided strong evidence for an important role of PMNs in tumor-host interaction. In a murine model, two differing polarized populations of tumor-associated neutrophils (TANs) were characterized. Transforming growth factor-β within the tumor microenvironment induces a protumorigenic population of TANs, whereas its blockade results in the recruitment and activation of TANs that are switched to an antitumor phenotype. 35 Indeed, the impact of PMNs on tumor growth is characterized by dichotomous effects: PMNs function as antitumor effector cells 39 in therapeutic settings such as bacterial immunotherapy 31 or antibody-dependent cellular cytotoxicity. 40 As tumor-promoting effector cells, infiltrating PMNs may, for example, contribute to tumor angiogenesis. 41 Furthermore, activated PMNs seem to influence the course of the disease through their immunosuppressive effects and inhibition of T-cell functions. 42 Recently, intratumoral CD66b-positive PMNs have been described as an independent negative prognostic factor for renal cell carcinoma patients. 36 In our study, we observed several alterations in HNSCC patients related to peripheral blood and tumor-infiltrating PMNs. Most HNSCC tissues were infiltrated by PMNs, and the degree of infiltration was positively correlated with the local tumor stage. Additionally, HNSCC cells seem to be a crucial trigger for the recruitment of PMNs. Our in vitro experiments showed that the mobility and migration of PMNs toward an HNSCC-conditioned medium are higher than toward a control medium. PMNs exposed to a culture environment conditioned with HNSCC cells demonstrated prolonged survival and enhanced inflammatory activity. This increased PMN activity may result in both protumor and antitumor effects. In particular, we observed that the release of lactoferrin, CCL4 and MMP-9 is increased when PMNs are exposed to HNSCC-conditioned medium. Lactoferrin is an important component of secondary granules in PMNs and has primarily been associated with antitumor effects. 43 Although the role of CCL4 during tumor development is controversial at present (see Introduction), MMP-9 produced by tumor-infiltrating PMNs may play a crucial role in activating tumor angiogenesis 41 and may contribute to carcinogenesis and further tumor progression. 44 In addition to evaluating the direct and local interaction between HNSCC cells and PMNs, we also analyzed systemic differences related to PMNs and inflammation in our patient cohort. Elevated serum concentrations of factors used as markers of inflammation, such as CRP and amyloid A, may be associated with poorer long-term survival, as has recently been shown for breast cancer. 45
Cytokine profiles including proinflammatory IL-6 are modulated in patients with advanced HNSCC. 46 Furthermore, serum CXCL8, an important cytokine in PMN biology, 15 is found in HNSCC cell lines, tissue and serum 47 and has been suggested as a possible biomarker for response and survival in HNSCC patients. 18,19 Our findings further support the relevance of inflammatory biomarkers in HNSCC. We found that the serum concentrations of CRP and CXCL8 are higher in cancer patients than in controls and correlate positively with the size of the tumor. Furthermore, we have shown that an HNSCC-conditioned culture environment promotes the secretion of CCL4 by PMNs. Both the serum concentration of CCL4 and the infiltration of tumor tissue by PMNs are associated positively with tumor stage. Taken together, these findings suggest that PMNs recruited by the tumor may be a source of inflammatory mediators in HNSCC and may influence disease progression. CCL4 released at the tumor site may further recruit mononuclear immune cells, such as T lymphocytes, natural killer cells and immature dendritic cells, thereby increasing the inflammatory tumor-host interaction. 21,48 A recent report documented the presence of activated CD66b-positive PMNs in the peripheral blood of renal cell carcinoma patients and related those PMNs to the so-called MDSCs. 49 MDSCs are a heterogeneous, partly granulocytic population of myeloid cells found in the spleens and tumors of mice with cancer. In mice, they are believed to suppress antitumor immunity by mechanisms involving arginase 1, nitric oxide and reactive oxygen intermediates, among others. 9,35 However, the functional relevance of those putative peripheral blood MDSCs to tumor progression in humans remains unclear at present. We have observed that HNSCC tissue is infiltrated by CD66b-positive PMNs and that human HNSCC cells prolong the survival of PMNs in vitro. Our finding that the degree to which PMNs infiltrate advanced HNSCC tissues is negatively associated with the survival times of these patients indicates that PMNs may play an important pathophysiological role in HNSCC. We also show that tumor-derived factors directly modulate the cellular functions of PMNs and increase their inflammatory activity. Thus, tumor-infiltrating PMNs may be essential contributors to the inflammatory tumor-host interaction, and consequently, HNSCC patients may benefit from direct or indirect targeting of the inflammatory functions of PMNs.
"Medicine",
"Biology"
] |
Mechanical and Hydraulic Behaviors of Eco-Friendly Pervious Concrete Incorporating Fly Ash and Blast Furnace Slag
Eco-friendly pervious concretes containing fly ash (FA) and blast furnace slag (BFS) were prepared in this study. The compressive strength and hydraulic behaviors were investigated to explore the effect of the replacement content of FA and BFS. Rheological tests of the cementitious pastes were first conducted; the results showed that FA increased the apparent viscosity, whereas BFS did not change the rheological performance. Compared to the control mix, FA and BFS both decreased the compressive strength of pervious concrete at 28 d, while pervious concrete incorporating FA and/or BFS presented comparable strength at 60 d. At the same replacement rate, FA changed the compressive strength more markedly than BFS. FA and BFS both decreased the effective porosity and permeability coefficient of pervious concrete. However, at the same replacement rate (30%), concretes with ternary blends presented obviously larger porosity than those with binary blends. The relationships of porosity with permeability and with strength were also established.
Introduction
Pervious concrete is a special type of Portland cement concrete composed of rationally graded coarse aggregate and cementitious materials, which provide the mixture with an interconnected macro-pore internal structure [1,2]. Because of these structural characteristics, many benefits can be achieved by using pervious concrete, including quick water drainage, abatement of tire-pavement interaction noise and reduction of the urban heat island effect [2][3][4][5][6][7]. Because of these benefits, pervious concrete is widely used nowadays and is attracting extensive attention. In the US, pervious concrete pavements are considered a structural infiltration best management practice (BMP) [8,9].
Generally, the porosity of a typical pervious concrete varies in the range of 15-25% [10] and the water permeability coefficient is about 2-6 mm/s [11]. The aggregate gradation for pervious concrete typically consists of single-sized coarse aggregates, and cementitious material is used to coat and bond the aggregates together. At present, no standard specifies the optimal cement content in pervious concrete design; cement contents in the literature vary from 150 [12] to 500 kg/m³ [13] according to different design purposes. The cementitious material coating thickness has been found to be a very important factor in assessing the structural and hydrological performance of pervious concrete [14,15]. An increase in the cement content generally increases the paste thickness around the aggregate, which may lead to a higher strength of pervious concrete but will defeat the purpose of using pervious concrete pavements in providing better permeability.
It has been reported that 4200 million metric tons of cement were produced worldwide in 2016 [16]. The production of cement increases carbon dioxide emissions, which presents a serious environmental burden. Using alternative materials (e.g., fly ash and blast furnace slag) to partially replace the cement is a sustainable approach that reduces carbon dioxide emissions and is more environmentally friendly.
Fly ash (FA) is a by-product of coal burning. Fly ash cannot be directly released into the atmosphere because it would cause serious air pollution. All around the world, fly ash is generally stored at coal power plants or placed in landfills, which occupies a great quantity of land and induces soil contamination. Therefore, the comprehensive utilization of fly ash is imperative. Fly ash generally includes substantial amounts of silicon dioxide (SiO₂), aluminum oxide (Al₂O₃), calcium oxide (CaO), etc., which makes it possible to use it in concrete preparation because these components also extensively exist in cement. Blast furnace slag (BFS) is a by-product of iron- and steel-making. The disposal of BFS has become a thorny and expensive process as a result of increasingly strict environmental regulations. BFS generally consists primarily of silicates, alumino-silicates, and calcium-alumino-silicates. Nowadays, the utilization of FA and BFS in the cement concrete industry brings substantial environmental and economic benefits.
The use of FA and BFS in ordinary concretes has been confirmed as a sustainable way to provide better or comparable concrete properties in some aspects. FA and BFS have been reported to increase the early-age thermal cracking resistance because of their lower hydration speed in comparison to ordinary Portland cement [17]. On the other hand, the slow hydration leads to slow strength gain, so FA and BFS generally have a negative effect on short-term strength development, while both can slightly increase the long-term strength [18]. Generally, the FA and BFS used to partially replace cement are very fine and show a glassy texture. The small size and glassy texture of FA and BFS make it possible to reduce the water consumption needed to reach the required workability of the fresh concrete [17,19,20]. As to durability, FA and BFS can both improve the resistance to diffusion of chloride ions, which may be due to the fact that FA and BFS improve the pore size distribution and more C-S-H gels are formed to adsorb chloride ions and block the diffusion path [21,22]. As far as pervious concrete is concerned, replacement of cement by fly ash (≤20%) reduced the compressive strength and the total porosity of pervious concrete [23]. The compressive strength of pervious concrete with a cementless binder (FA, BFS) decreased in comparison to ordinary pervious concrete, but the difference was insignificant [24]. FA and BFS were also found to reduce the relative dynamic modulus of pervious concrete [24]. This literature review shows that although the utilization of FA and BFS in pervious concrete is attracting attention, studies of the properties of pervious concrete containing FA and/or BFS are still very limited, especially regarding the coupled effect of FA and BFS on the various properties of pervious concrete.
Objective
The primary objective of this study was to evaluate the possible use of FA and BFS in pervious concrete and investigate the effects of FA and BFS on the mechanical and hydraulic properties of pervious concrete. Binary blends and ternary blends of cementitious materials were prepared. The rheology tests were first conducted to obtain the rheological behavior of cementitious pastes. Response of the mechanical and hydraulic performance to the content of FA and BFS was further studied.
Materials
The materials for the concrete in the paper are as follows:
Sample Preparation
The specimens for the mechanical and hydraulic tests were prepared in two layers with standard rodding effort in accordance with the Chinese specification GB/T 50081-2002 [25]. Fresh concretes were prepared based on the design ratios given later. Cube samples of 150 × 150 × 150 mm were prepared for the mechanical tests, while cylinder samples of φ150 × 150 mm were made for the hydraulic tests. Moulds of the corresponding sizes were selected to make the samples. After moulding, waterproof membranes were used to cover the surface of the concrete until the specimens were demolded 24 h after casting. The specimens were cured at an air temperature of 20 ± 2 °C and a relative humidity of 95%; the relative humidity was provided and controlled by a humidifier. The mechanical tests were performed after 28 d and 60 d of curing. The hydraulic-performance tests, such as permeability and effective porosity, were conducted after 28 d of curing. For each type of test, triplicate specimens were used.
Rheology Test
All cementitious materials were dry blended prior to wet mixing. An RS/SST rheology tester was used to obtain the rheological properties of the cementitious pastes. The length and diameter of the spindle are 8 cm and 4 cm, respectively, and the size of the cylinder containing the cementitious materials is φ12 × 16 cm. The amount of cementitious material and its rheological behavior affect the hydraulic and mechanical properties of pervious concrete [26]. The viscosity and shear stress of the cementitious materials were recorded as the shear rate changed from 0 to 100 s⁻¹. The mix proportions of the cementitious pastes are given in Table 2; there are no fine aggregates in M1 to M6. The rheology test was performed immediately after preparation of the cementitious paste.
Hydraulic Tests
The effective porosity was determined by testing the volume of water displaced by the samples according to ASTM C1754/C1754M-12 [27]. The sample was first oven dried at 110 °C for 24 ± 1 h. Specimens after drying were not used to determine other properties. Hydraulic tests were conducted after the specimen had cooled at room temperature for 1 to 3 h. Specimens were then immersed in water for up to 24 h. By measuring the difference in the water level before and after immersing the sample, the volume of water displaced by the sample (V_d) can be readily determined. Subtracting V_d from the sample bulk volume (V_b) yields the volume of open pores, so the effective porosity was expressed as P = (V_b − V_d)/V_b × 100%. Water permeability of the pervious concrete was measured using the constant head method, similar to ASTM D2434 [28], as shown in Figure 1. To prevent water leakage between the sample and the test device, the cylindrical specimen was wrapped with a rubber tube and tightened by circular clamps. Water was allowed into the specimen until a steady-state flow was obtained. The time in seconds (t) required for the water volume (Q) in the tube to drop from top to bottom was recorded. The coefficient of water permeability (k) in centimeters per second (cm/s) was calculated using Darcy's Law as shown in Equation (1).
k = QL / (AHt)    (1)
where k is the coefficient of permeability; Q is the quantity of water discharged; L is the height of the specimen; H is the head difference between the two water surfaces; A is the cross-sectional area of the specimen; and t is the time in seconds.
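A minimal Python sketch of the two computations above, using the reconstructed expressions P = (V_b − V_d)/V_b × 100% and k = QL/(AHt); the numeric inputs are illustrative, not measured data:

```python
import math

def effective_porosity(v_bulk_cm3, v_displaced_cm3):
    """Effective (open) porosity in %: open-pore volume is the bulk
    volume minus the water volume displaced by the immersed sample."""
    return (v_bulk_cm3 - v_displaced_cm3) / v_bulk_cm3 * 100.0

def permeability_constant_head(q_cm3, length_cm, head_cm, area_cm2, t_s):
    """Darcy coefficient k = Q*L / (A*H*t) in cm/s (constant-head method)."""
    return q_cm3 * length_cm / (area_cm2 * head_cm * t_s)

# Illustrative values for a phi150 x 150 mm cylinder (not measured data).
area = math.pi * (15.0 / 2) ** 2            # cross-section, cm^2
v_bulk = area * 15.0                        # bulk volume, cm^3
print(effective_porosity(v_bulk, 2120.0))   # ~20% effective porosity
print(permeability_constant_head(q_cm3=2000.0, length_cm=15.0,
                                 head_cm=30.0, area_cm2=area, t_s=38.0))
```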
Mechanical Tests
In this study, strength tests were conducted on pervious concrete specimens following the testing procedures specified in GB/T 50081-2002 [25]. Compressive strength tests were performed on the specimens at 28 and 60 days of curing.
To determine the optimal amount of cement, trial samples with different amounts of cementitious material were prepared first; there were no FA and BFS in the trial samples. The mix proportions and the 28 d compressive strengths, permeability coefficients and effective porosities are shown in Table 3. The aggregate content was 1450 kg/m³, the aggregate size was in the range 16-19 mm, and the water to cementitious material ratio was kept at 0.35. As expected, as the cement content increased, the compressive strength increased and the permeability coefficient decreased. In this study, the cement content was set at 280 kg/m³ by considering both the compressive strength and the permeability coefficient. To evaluate the effects of FA and BFS on the mechanical and hydraulic properties, concretes with binary and ternary blends of Portland cement, FA and BFS were prepared, keeping the total amount of cementitious material at 280 kg/m³ as determined in Table 3. The mix proportions are shown in Table 4; the aggregate content and the water-cement ratio were the same as in Table 3.
Rheology Test
Viscosity and shear stress under different shear rates were obtained and are shown in Figures 2 and 3. The viscosities of all cementitious pastes showed a general decreasing trend as the shear rate increased from 0 to 100 s⁻¹. However, the viscosity increased at shear rates between 10 and 40 s⁻¹ for the 10% FA-20% BFS, 15% FA-15% BFS and 20% FA-10% BFS pastes. There are two main causes of these humps. First, initial inadequate mixing during paste preparation increased the apparent viscosity; as the shear rate increased, the cementitious paste became relatively homogeneous. Second, particle migration may occur in rheology tests because of the shear gradient [29], which can cause abnormal changes of viscosity in certain ranges of shear rate. After the shear rate exceeded 80 s⁻¹, the viscosities reached stable values. It can be clearly observed that the incorporation of FA significantly increased the apparent viscosity, whereas the sample with 30% BFS showed no viscosity increase once the shear rate exceeded 40 s⁻¹.
The non-Newtonian curves of shear stress versus shear rate shown in Figure 3 were used to interpret the relationship between shear rate and shear stress. The curves were fitted by least squares to the Bingham model, as shown in Equation (2) [30].
τ = τ₀ + μ_p γ̇    (2)
where τ is the shear stress, τ₀ is the yield stress, μ_p is the plastic viscosity and γ̇ is the shear rate.
All the pastes showed an essentially linear proportionality between shear stress and shear rate over the range of shear rates selected. Replicated measurements on three separately prepared samples of cementitious paste indicated coefficients of variation of 16% and 9% for the yield stress and plastic viscosity, respectively. From Figure 2, it can be observed that high shear rates reduced the viscosity of all mixtures. At low shear rates (<40 s⁻¹), the viscosities of pastes with binary and ternary blends of FA and BFS were larger than that of the control paste. At large shear rates (≥40 s⁻¹), the paste containing 30% BFS presented the same viscosity as the control mix, while the incorporation of FA significantly increased both the apparent viscosity and the shear stress. Research [31] showed that the particle size and content of FA significantly affect the rheological behavior of cementitious pastes. It is the particle morphological difference between FA and BFS that led to the different rheological behavior of the six mixtures. A reconciling effect could be found by combining the effects of FA and BFS at all shear rates when the cementitious pastes were made with ternary blends of FA and BFS. In Figure 3, shear stresses were plotted against shear rates. Linear regression was conducted, and the fitted curves showed that all the mixtures conformed to the Bingham model. It can be observed that there was no obvious difference between the control mix and the mix with 30% BFS.
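As an illustration of the fitting procedure, the Bingham parameters can be recovered by ordinary least squares on a flow curve; the following Python sketch uses synthetic data (the values are invented for demonstration, not the measured pastes):

```python
import numpy as np

# Synthetic flow curve for illustration only (not the measured pastes).
gamma_dot = np.linspace(5.0, 100.0, 20)   # shear rate, 1/s
tau = 12.0 + 0.45 * gamma_dot + np.random.default_rng(1).normal(0, 1.0, 20)

# Bingham model tau = tau0 + mu_p * gamma_dot -> linear least squares.
mu_p, tau0 = np.polyfit(gamma_dot, tau, 1)   # slope, intercept
tau_fit = tau0 + mu_p * gamma_dot
r2 = 1 - np.sum((tau - tau_fit) ** 2) / np.sum((tau - tau.mean()) ** 2)
print(f"yield stress tau0 = {tau0:.1f} Pa, "
      f"plastic viscosity mu_p = {mu_p:.3f} Pa.s, R^2 = {r2:.3f}")
```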
Compressive Strength
The cubic compressive strengths of all the mixtures are shown in Figure 4. It can be clearly observed that the incorporation of FA and BFS had an adverse effect on the compressive strength at 28 d, which was caused by the fact that FA and BFS hydrate more slowly than cement [32]. In hardened concretes, it is the C-S-H that provides the mechanical properties [17]. C-S-H is the reaction product of SiO₂ and Ca(OH)₂. For concrete incorporating FA and/or BFS, the formation of C-S-H was still in progress and the strength was still developing at 28 d [32]. As the content of FA or BFS increased, the compressive strength decreased. Compared to BFS, concrete with the same content of FA showed a lower compressive strength, which was caused by the different chemical components of FA and BFS. As shown in Table 1, BFS contained a larger content of calcium oxide (CaO) than FA, which provided a more suitable alkaline environment for the pozzolanic reaction, so FA had a more adverse effect on the 28 d strength than BFS. However, Figure 4a shows that A6 (10% FA-20% BFS) presented a slightly larger strength than A5 (30% BFS). The increase is about 1.3%; compared to the standard deviation of A5, this increase is negligible, and some uncontrollable test errors may also have contributed to this result. At 60 d, there was a slight increase in the compressive strength when FA and/or BFS were added. Linear regression (Figure 4b) was conducted to reveal the relationship between the cementitious components and the compressive strength, which is shown as Equation (3). Equation (3) also shows that FA and BFS have a negative effect at 28 d, while FA and BFS could slightly increase the compressive strength at 60 d. In contrast with BFS, the larger coefficients of FA indicate that FA plays a more significant role in affecting the compressive strength.
where S is the compressive strength, M_FA is the mass fraction of FA and M_BFS is the mass fraction of BFS. For ordinary Portland cement concrete (OPC), the strength at 28 d is a commonly used parameter to evaluate the mechanical properties; many standards and guidelines [17,33,34] are based on this parameter. Research from ACI Committee 209 shows that the strength of OPC at 28 d is about 85% of its final strength under moist-curing conditions. For pervious concrete, because of its high porosity, the long-term strength development differs from that of OPC. For concrete incorporating mineral additives (FA, BFS, etc.), the strength development is closely related to the properties of the mineral additives, such as their components and particle morphology. The effect of FA and/or BFS on the long-term strength of pervious concrete is beyond the scope of this study and will be explored in future work.

Hydraulic Performance

Figure 5 shows the effective porosity of the seven concrete mixtures. At the age of 28 days, the porosities of concretes containing FA and/or BFS are lower than that of the control mixture. In addition, the fine particles of FA and BFS cause segmentation of large pores and increase the nucleation sites for precipitation of hydration products in the cement paste [35]. On the other hand, considering the binary blends, concretes with the same percentage of FA or BFS replacement showed nearly the same effective porosity. As the content of FA or BFS increased, the effective porosity decreased. However, when the replacement rate was 30% (A4, A5, A6, and A7), concretes with ternary blends presented obviously larger effective porosity. This may be caused by the interaction between FA and BFS. On the other hand, differences in absorption capacity among cement, FA and BFS may also cause the porosity difference, since published results have verified that absorption capacity affects porosity [36,37]. The effect of absorption capacity on the effective porosity was beyond the scope of this study and will be investigated in the future. Figure 6 gives the permeability coefficients of all concrete mixtures. Similar to the effective porosity, FA and BFS both decreased the permeability, and as the replacement of FA and BFS increased, the reduction in permeability was larger. Compared to FA, the permeability of concrete containing BFS was lower at the same replacement content. As the effective porosity increases, the permeability increases correspondingly [38,39]. Figure 7 shows the relationship between effective porosity and permeability coefficient. It should be noted that the data in Table 3 were also included in Figure 7. From these results it can be concluded that pervious concrete samples with higher average porosity also had higher permeability. Neithalath et al. [1] set up an exponential equation to represent the relationship between porosity and the permeability coefficient; however, the permeability of a pervious concrete is also affected by many other factors, such as the pore structure. In this study, an exponential equation was generated to represent the relationship between the permeability coefficient and the effective porosity. Although the addition of FA and/or BFS decreased the permeability and porosity, the permeability coefficient and effective porosity were still in the general ranges [10,11]. The presence of pores can adversely affect the material's mechanical properties such as failure strength, elasticity and creep strains [40].
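The exponential porosity-permeability relationship mentioned above can be obtained by linearizing in log space; a minimal Python sketch with invented data points (not the measured values) is:

```python
import numpy as np

# Effective porosity (fraction) vs. permeability coefficient (cm/s);
# illustrative points only, not the measured data.
phi = np.array([0.13, 0.15, 0.17, 0.19, 0.21, 0.24])
k = np.array([0.12, 0.17, 0.24, 0.33, 0.46, 0.75])

# k = a * exp(b * phi)  ->  ln k = ln a + b * phi (ordinary least squares).
b, ln_a = np.polyfit(phi, np.log(k), 1)
a = np.exp(ln_a)
print(f"k ~= {a:.4f} * exp({b:.1f} * phi)  cm/s")
```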
Besides the requirements on hydraulic performance, pervious concrete also needs to be able to withstand some traffic loads. Quantitatively establishing the relationship between porosity and compressive strength is therefore important in characterizing the behavior and in the mix design of pervious concrete. Figure 8 shows the relationship between the effective porosity and the compressive strength of the pervious concretes in Table 4. It can be clearly observed that as the porosity increased, the compressive strength decreased. For simple homogeneous materials, the relationship between porosity and strength can be expressed as the following equation [17]:
S = S₀ exp(−kp)
where S is the strength of the material at a given porosity p, S₀ is the intrinsic strength at zero porosity, and k is a constant parameter. In this study, regression was conducted using the same procedure. The fitted equation is shown in Figure 8; R² = 0.976 shows that the equation accurately represents the relationship between porosity and strength.
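Assuming the classical exponential form reconstructed above, the parameters S₀ and k can likewise be fitted by linear regression on ln S; a Python sketch with illustrative values (not the measured data):

```python
import numpy as np

# Effective porosity (fraction) vs. compressive strength (MPa);
# illustrative points only.
p = np.array([0.14, 0.16, 0.18, 0.20, 0.22, 0.25])
S = np.array([24.0, 21.5, 19.0, 16.8, 15.0, 12.4])

# S = S0 * exp(-k * p)  ->  ln S = ln S0 - k * p (linear in p).
slope, intercept = np.polyfit(p, np.log(S), 1)
S0, k = np.exp(intercept), -slope
S_fit = S0 * np.exp(-k * p)
r2 = 1 - np.sum((S - S_fit) ** 2) / np.sum((S - S.mean()) ** 2)
print(f"S0 = {S0:.1f} MPa, k = {k:.1f}, R^2 = {r2:.3f}")
```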
Conclusions
In this study, eco-friendly pervious concrete containing FA and BFS was prepared to investigate the mechanical and hydraulic behaviors. Rheological tests were first conducted to explore the effect of FA and BFS on the behaviors of the fresh pastes. Compressive strength and permeability coefficient were selected as the indicators in the analysis of the mechanical and hydraulic performance. Based on the laboratory tests, the following conclusions can be drawn.
• FA increased the apparent viscosity, while BFS did not significantly change the rheological performance. The rheological behavior of cementitious pastes containing FA and/or BFS conformed to the Bingham model.
• FA and BFS both decreased the compressive strength of pervious concrete at 28 d, while FA and/or BFS could slightly increase the compressive strength at 60 d. Compared to BFS, FA plays a more significant role in the compressive strength.
• The effective porosity and permeability coefficient both decreased with the incorporation of FA and/or BFS in the pervious concrete. As the content of FA or BFS increased, the reduction was larger. However, when the replacement rate was 30%, concretes with ternary blends presented larger porosity than binary blends.
• With increasing effective porosity, the permeability coefficient increased while the compressive strength decreased.
• The use of FA and BFS in pervious concrete is a sustainable approach that takes account of both the mechanical and the hydraulic properties.
Author Contributions: For this paper, H.P. formulated the research ideas and conducted the laboratory tests, result analysis and manuscript writing. J.Y. and W.S. made revisions to the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
"Materials Science",
"Engineering"
] |
Investigation of surface settlement of ground caused by cut-and-cover tunneling in Urmia Interchange
— Surface settlement is one of the most important consequences of tunnel excavation and has been studied in many international investigations aimed at controlling its effects. This paper investigates the effect of cut-and-cover tunnel construction on the ground surface adjacent to the non-surface (grade-separated) interchange of the city of Urmia, Iran. At the beginning of this research, measurements of the ground settlement at several sections of the interchange, obtained from local surveys, are provided. These measurements are then compared with the numerical results of PLAXIS 3D based on local data and soil parameters. A precision level, obtained from the regional survey organization, was used to measure the ground settlement. According to the measurements, the maximum settlement is 9.95 mm. The settlement values calculated by the numerical modelling are lower than the results of the local surveys, which may be due to inaccuracies in the laboratory soil parameters. At the end of the research, the actual soil parameters were obtained using back (recursive) analysis. The measured settlement values are within the range of results reported by other researchers.
I. INTRODUCTION
Tunnels are fundamental structures used in many countries. From a structural viewpoint, they can be constructed with various geometries and materials, each of which can influence the dominant behavior of the structure [1], [2]. One of the main differences between tunnels and other structures lies in their structural restraints. In most building structures, the supports are at discrete points and their configuration can play a significant role in the overall behavior of the structure [3]; tunnels, however, are continuously supported by the ground. The interaction between the tunnel and the soil in which it is buried is mostly governed by the geotechnical properties of the soil. Nowadays, with increasing demand for in-city trips, there is an urgent need for shallow, easy-to-construct tunnels. Since tunnels are completely buried in the ground, the surrounding soil has a significant effect on the construction process from both geotechnical and environmental perspectives. From an environmental viewpoint, the surrounding soil could be contaminated by toxic materials such as arsenic, which should be stabilized using methods such as lime dust and cement kiln dust [4]. From a geotechnical viewpoint, understanding the ground's response to tunnel excavation is essential for safe and affordable construction. This response, which appears as changes in the stress field and displacement of the soil mass around the tunnel, depends on various factors including the geology, the geotechnical properties, the excavation method, and the tunnel support and equipment facilities. A precise understanding of this response can be achieved through local measurements such as those in this article. Adequate attention to, and study of, the interaction between the excavation and its surrounding soil is an essential prerequisite for providing a valid prediction of shallow-tunnel excavation based on numerical analysis. On the other hand, the safety and resistance of structures against various hazards and loads are vital for the economy and industrial development, and they should be included in the design of structures as well [5]. This paper presents an investigation of ground-surface behavior and deformation due to excavation and construction in urban areas, based on numerical modeling of tunneling.
II. BORING THE TUNNEL
In general, tunnel drilling is done in two ways: open front and closed front. The following is a summary of each of these two methods.
A. Open-front Tunneling Method
Open-front tunnel construction includes tunneling techniques without permanent support of the excavated tunnel face. Shielded mechanized tunneling can also be operated in open-front mode. In this case, the main sources of settlement are: 1) movement of the ground toward the unsupported part of the tunnel; 2) radial movement of the ground toward the deformed lining; 3) radial movement of the ground toward the lining as it consolidates. The first source of settlement can be reduced by shortening the unsupported length of the tunnel through restraint at the working face. The second is usually large, which is why a primary shotcrete lining is used for initial support; various additives are used to accelerate the hardening of the concrete, allowing the excavation speed to increase. When tunneling is done in low-permeability soils, some consolidation may occur after tunneling. In cases where the completed tunnel acts as a drain or impedes further consolidation of the surrounding soil, delayed radial movement may occur. In highly permeable ground, the drainage pressure drops and consolidation occurs ahead of the tunnel face, and ground movement may occur rapidly during consolidation [6].
B. Closed-Front Tunneling Method
This tunneling method involves continuous support of the tunnel face. In contrast to open-front tunneling, ground deformation is less pronounced in this method, which is particularly important in urban and shallow areas. There is a great deal of variability in the support tools used; in unstable ground, the working face can be sustained by installing restraints or soil nails after each support sequence. In some cases, compressed air can be used to maintain a closed working face. Along with the use of shotcrete and containment, rapid ring closure greatly helps to stop ground deformation. The small ground deformations that result from closed-front tunneling produce high forces in the tunnel lining, but if the tunnel is shallow, as in urban areas, the loads on the lining are relatively small. Meyer and Taylor described the deformations associated with shield tunneling as follows: 1) movement of the ground toward the working face, thereby releasing stress; 2) radial movement of the ground toward the shield as a result of over-excavation ahead of the tunnel; 3) radial movement of the ground toward the tail void, creating a gap between the shield and the lining; 4) radial movement of the ground toward the lining, thereby deforming the lining; 5) radial movement of the ground toward the lining resulting from consolidation. One of the methods of tunnel excavation is the cut-and-cover method. In this method, trenches are excavated from the surface to the desired depth and width so that the floor of the trench coincides with the floor of the tunnel. The desired facility is then installed in the tunnel, walled in with retaining elements, and backfilled to ground level. This method is possible where there are no surface structures, or where damage at the site in question is acceptable. According to experience in different cities around the world, it can generally be said that for tunnels at depths of 10 to 14 meters the cut-and-cover method is cheaper and easier than other methods, and the construction of subway tunnels down to a depth of 18 meters is also quite practical and affordable.
C. Types of cut-and-cover tunneling methods
Depending on the type of execution, the cut-and-cover method is categorized as follows: 1) side piles as retaining wall; 2) side piles as soil retaining wall with strut instrumentation; 3) side and middle column piles with a cast-in-situ roof; 4) side and middle column piles with a prefabricated roof.
III. EXCAVATION
The adverse effects that tunnel or trench excavation can have on surface structures have led researchers to conduct extensive studies to develop methods for estimating and evaluating ground-surface settlement. In this context, not only the magnitude of the final settlement was examined, but also the settlement at various stages, so that a settlement-prediction procedure could be prepared. In addition, the questions of which surface structures will be affected by excavation operations, and to what extent, have been among the major issues raised by many researchers. Past tunneling research can be divided into four main groups: experimental research, analytical research, laboratory research, and numerical research.
IV. NUMERICAL METHODS
The use of the finite element method in geotechnical engineering began in 1966, and it has proved to be a robust method for analyzing the behavior of different structures in civil engineering using software such as ABAQUS, PLAXIS, PFC2D, and so on [7]. These software packages make such analyses simple and quick for both two-dimensional and three-dimensional structures [8]. Clough and Woodward [9] used this method to characterize stresses and displacements in embankments, and Deere and Reyes explained its use for analyzing tunnels and underground excavations in rock. In his 1994 doctoral dissertation, "Predicting Surface Settlement as a Result of Tunneling in Soft Ground," Cho used two-dimensional finite element analysis to investigate the impact of different soil behavioral models on the shape of the subsidence trough. Fowell and Karakus [10] investigated the effects of excavation on the amount of subsidence using the finite element method. Underground structures are one of the most important ways of dealing with traffic in big cities today; important underground structures in cities include tunnels built by the cut-and-cover method. In related research, the static analysis of tunnels in coarse-grained wet ground has been investigated using numerical modeling with discrete elements (DEM) and the PFC2D software, including the effect of tunnel depth on the land-surface profile [10]. Meanwhile, these underground structures can affect the performance of the overlying pavement as well. In this regard, investigations of methods for reinforcing/stabilizing pavement layers illustrated that reinforcement and increasing the resilient modulus of pavement layers reduce the permanent deformation (rutting) of flexible pavement, especially for pavements constructed over weak subgrade layers [11]; subsequent work developed a step-by-step framework and general guidelines for evaluating existing pavement conditions following six proposed steps, and developed methods for selecting feasible maintenance/rehabilitation alternatives for the pavement [12]. In addition, in finite element studies performed in this field, the results of the analysis software have been compared with analytical, FEM, and PLAXIS solutions [8]-[11], [13]-[16].
In 2001, Mahmoud Vafaian et al. [17] compared the Mohr-Coulomb and hardening soil behavior models for estimating the maximum surface settlement and assessing underground stability in shallow tunnels using the PLAXIS software. If the Mohr-Coulomb behavior model is used, the maximum surface settlement increases with increasing tunnel depth, which may not be acceptable in some cases; with the advanced hardening soil model, as the excavation depth increases, the maximum surface settlement decreases and the stability factor of the tunnel increases, which is acceptable.
A. Geotechnical studies of the area
The geological and geotechnical information of the study area places the project site, based on the Stocklin classification, in the Alborz zone. The Alborz Mountains connect in the east to the Pamir Mountains through the Hindu Kush, but the western and northwestern extensions of the range are ambiguous. The geological map of Azerbaijan shows that sedimentary and volcanic rocks cover much of the area; in some places, such as Tabriz and Maku, igneous rocks such as syenite are also exposed. Fig. 1 shows the location of the project study area.
To determine the engineering parameters of each layer, the results of laboratory and field experiments were analyzed according to the location of each layer, and proposed values for each parameter were then derived from this analysis. In selecting the engineering parameters of each layer, scattered data whose results were far from realistic were excluded.
The soil at the project site consists of loam from the ground level to a depth of 1.5 m; the layers below this depth are summarized in Table 1.
B. Dimensions and specifications of retaining wall
The excavation width is 28 meters (two 14 m spans) and the excavation depth in the study area is about 5.5 m. A temporary retaining structure of bored piles was installed to support the pit before excavation, following Figs. 2 and 3: piles with diameters of 1 meter (side piles) and 1.5 meters (intermediate piles), 15.5 meters in height and spaced 3 meters apart, were executed at the project site. In the excavation process, three rows of piles are first installed in the ground, as shown in Fig. 4, and the pit is then excavated with a shovel and loader.
C. Settlement measurement
A precision level obtained from the province's survey organization was used to measure the ground settlement. In total, 28 points were selected, of which 6 points were lost during the excavation. The points covered a total length of 80 m, as shown in Fig. 4 and Table 2; they were spaced 10 m apart, and the settlement was measured in four steps. A surveying instrument was used to measure the settlements, which were read at the survey station created at the project site.
D. Discussion of results
As mentioned, the displacements of the points at various stages of excavation were read using the surveying instrument. To investigate the results, five transverse settlement profiles were plotted at different stages along the excavation axis, as shown in Figs. 5 to 9. As these profiles show, with increasing distance from the excavation edge, the settlement decreases and tends to zero; as the depth of excavation increases, the settlement increases. According to the measurements, the maximum settlement, recorded at the point nearest to the excavation edge (3 m away), was 9.95 millimeters. In the figures, series 1 corresponds to a 2 m excavation depth, series 2 to 3.5 m, and series 3 to 5.5 m.
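For readers wishing to reproduce settlement-trough shapes of this kind, a common empirical description (not the one used in this paper) is a Gaussian, Peck-type trough. The following Python sketch uses the measured maximum of 9.95 mm with an assumed trough-width parameter:

```python
import numpy as np

def gaussian_trough(x_m, s_max_mm, i_m):
    """Peck-type settlement trough: S(x) = S_max * exp(-x^2 / (2 i^2)),
    where x is the distance from the settlement maximum and i is the
    trough-width parameter."""
    return s_max_mm * np.exp(-x_m ** 2 / (2 * i_m ** 2))

# Illustrative: S_max = 9.95 mm (maximum reported above); the trough
# width i = 8 m is an assumed value, not a fitted one.
x = np.linspace(0.0, 40.0, 9)
print(np.round(gaussian_trough(x, 9.95, 8.0), 2))
```

As the sketch reproduces, the settlement decays smoothly toward zero away from the point of maximum settlement, consistent with the measured profiles.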
A. General Model Specifications
As mentioned before, to verify the results of the numerical modeling, the excavation of the cross-section at chainage 12+600 km was modeled with PLAXIS 3D, and the numerical results were then compared with those obtained from the survey readings. Figs. 10 to 14 show the ground settlement after the piles were placed in the soil and after excavation to 2 m, 3.5 m, and 5.5 m, respectively; Fig. 14 shows how the deflections develop in the piles. As represented in Figs. 10 to 14, the maximum ground-surface settlement occurs at the edge of the pit at the 5.5 m excavation depth. According to the diagram, the maximum settlement is 7.1 mm, which is a reasonable estimate compared to the value observed during excavation. The diagrams also show that the settlement decreases with increasing distance from the excavation edge and increases with increasing excavation depth. The bottom-heave phenomenon, which is one of the causes of damage, is also clearly visible; the amount of heave increases as the excavation proceeds, reaching 16 millimeters at the end of the excavation. It should be noted, however, that this value would be reduced if the problem were modeled with the hardening soil (HS) model. In the early phase of excavation, the modeled settlements were larger than those measured on site, a difference that diminished with distance from the excavation edge. In the second step, the measured settlement is higher than the computed one, which may be due to the effect of the Mohr-Coulomb model. In the third stage, the computed settlement was again larger than the measured one, which may be due to inaccuracy of the soil parameters obtained from the laboratory.
VIII. DETERMINE THE ACTUAL SOIL PARAMETERS
To improve the modeling results and the prediction of excavation-induced settlement, we varied the soil parameters within ranges intended to bring the numerical results closer to the observed behavior. Table 3 shows the ranges of the soil parameters. From the results obtained, it is clear that changes in the friction angle and cohesion have a greater impact on the settlement, while changes in the elastic modulus have a very limited effect. The back-calculated parameters of the project soil are an elastic modulus of 40,000 kN/m², a friction angle of 38°, and a cohesion of 11 kN/m².
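The back analysis can be sketched as a parameter sweep that minimizes the misfit between computed and measured maximum settlement. In the illustrative Python snippet below, forward_model is a stand-in for a PLAXIS 3D run and its formula is invented for demonstration; a real back analysis would call the finite element model at each grid point:

```python
import itertools

MEASURED_MM = 9.95  # maximum settlement from the field survey

def forward_model(E_kpa, phi_deg, c_kpa):
    """Stand-in for a PLAXIS 3D run: returns a predicted maximum
    settlement (mm). The formula is invented for demonstration only."""
    return 7.1 * (40000.0 / E_kpa) ** 0.3 * (38.0 / phi_deg) * (11.0 / c_kpa) ** 0.2

best = min(
    itertools.product([30000.0, 40000.0, 50000.0],   # E, kN/m^2
                      [34.0, 36.0, 38.0],             # friction angle, deg
                      [8.0, 11.0, 14.0]),             # cohesion, kN/m^2
    key=lambda p: abs(forward_model(*p) - MEASURED_MM),
)
print("best-fit (E, phi, c):", best,
      "-> settlement", round(forward_model(*best), 2), "mm")
```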
IX. CONCLUSION
In this paper, we compared the results of numerical modeling with local measurements of ground settlement. A precision level, obtained from the provincial survey organization, was used to measure the settlement; the readings were taken with a surveying instrument at the survey station created at the project site. To investigate the results, transverse settlement profiles were plotted at different stages along the excavation axis. Examination of the graphs shows that: 1) moving away from the edge of the excavation, the settlement decreases and tends to zero; 2) as the depth of excavation increases, the settlement increases; 3) according to the measurements, the maximum settlement, at the point nearest to the excavation edge (3 m away), is 9.95 millimeters.
Modeling the problem in PLAXIS 3D at excavation depths of 2, 3.5, and 5.5 m shows that: 1) the maximum ground-surface settlement occurs at the edge of the pit at the 5.5 m excavation depth; the maximum settlement in the diagram is 7.1 mm, which is a reasonable estimate compared to the value observed during excavation; 2) the diagrams also show that the settlement decreases with increasing distance from the excavation edge and increases with increasing excavation depth; 3) the bottom-heave phenomenon, which is one of the causes of damage, is clearly visible in the numerical modeling results; the amount of heave increases as the excavation proceeds, so that it reaches 16 millimeters at the end of the excavation.
It should be noted, however, that this value would decrease if the problem were modeled with the hardening soil (HS) model; 4) in the early phase of excavation, the modeled settlements were larger than those measured on site, a difference that diminished with distance from the excavation edge; in the second stage of excavation, the measured settlement is higher than the computed one, which may be due to the effect of the Mohr-Coulomb model; in the third stage, the computed settlement was again larger than the measured one, which may be due to inaccuracy of the soil parameters obtained from the laboratory.
"Engineering",
"Environmental Science",
"Geology"
] |
Benefits and Limitations of Common Directional Microphones in Real-World Sounds
We present extensive experimental data to objectively evaluate the benefits and limitations of common directional microphones in real-world sound fields. The microphones include a conventional directional microphone (DM), a balanced DM, etc., plus the Omni microphone (mic) as a benchmark. The evaluation focuses on noise outputs, signal-to-noise ratios (S/Ns) and distortions; the real-world sounds include male voices, female voices, babble noises, white noises and talking interferences. Each type of noise is presented at 4 or 5 levels, from 30 to 70 dB SPL in 10 dB steps, and each talking interference is at 3 levels: 50, 60 and 70 dB SPL. The research methods include analytically deriving sensitivity gains, statistically calculating the three mics' outputs, experimentally viewing waveforms and spectra, and using large-sample wave files for a high confidence level. According to the experimental results, this paper concludes that 1) for a conversation in a quiet, soft or low-noise field, the common DMs achieve comfortable S/Ns of 7 to 33 dB, similar to what the Omni mic does; 2) for a conversation in low, competing or strong talking-interference fields, the common DMs achieve about 16 dB better S/N than the Omni mic does; 3) for a conversation in a competing or strong surrounding-noise field, the common DMs do not achieve an S/N beneficial for understanding speech, and the common DMs' noise outputs are close to the Omni mic's; 4) in the various experiments, the balanced DM preserves speech fidelity as well as the Omni mic does, while the conventional DM does poorly. This paper further introduces the Simulink experimental manipulations, such as digital FIR filter design and stereo-channel wave file creation.
Introduction
Directional microphones (DMs) in hearing aids have been researched and developed for more than 20 years. While the noise-suppression benefits for hearing-impaired persons are established, the limitations are not widely known, and the DMs have not yet been researched thoroughly [1]. Audiologists and hearing professionals have been developing new DM technologies. SpeechFocus provides an adaptive beam-former [2]: when the speech sound comes from the back, or from the left or right side, the lobe of the beam-former faces the back, or the left or right side. A similar technology, called auto ZoomControl, was proposed earlier [3,4]; when combined with binaural wireless communication, auto ZoomControl is nearly optimal. A super-directional beam-former is able to increase S/N enough to achieve normal speech understanding in noise [5]; this beam-former forms a beam of suitable width so as to attenuate off-beam signals while preserving the spatial cues of the environment. Across six different experiments, the conclusion was that this beam-former outperformed the Omni mic in noisiness and acceptance, but the specific conditions of the experimental noise sources were not described. These new technologies are upgraded, approximately adaptive beam-formers, offering S/N improvement and better speech intelligibility; hopefully they will be implemented in available products in the near future.
However, audiologists, manufacturers and researchers never stop testing and evaluating the performance of existing directional hearing products and technologies [6][7][8]. A three-year investigation of DMs' effectiveness on 94 subjects was conducted [9]; it concluded that directional hearing aids performed better in objective S/N measurements in the laboratory, but the advantages were less clear in subjective measurements in real environments. A DM evaluation study based on a nine-article literature review [10] concluded that the evidence of DMs' effectiveness provided only weak support and encouraged careful consideration of assessment methodologies. Simulink experiments using wave files of real-world voices and noises were proposed [11]. Based on experimental polar plots of a conventional DM, it was suggested that the DM can obtain much more S/N benefit than the Omni mic for beamed noise sources, such as a talking interference, but not for a surrounding noise.
A balanced DM is a conventional DM whose frequency response is balanced by multi-band gains. In a practical directional hearing aid, balanced processing minimizes spectrum distortion. Conventional DMs and balanced DMs have therefore become the common DMs in real-life hearing aids. Here we evaluate the common DMs' performance, using the Omni mic as a benchmark, in terms of speech enhancement, noise suppression, S/N improvement and spectrum distortion; for a high confidence level, we used various noises and speech voices from large-sample, real-world sources.
Internal Noises of DMs
Usually, a DM is composed of two or three Omni microphones (mics) located on a line array plus an operation circuit. When we measured the equivalent input noise level (EINL) of a directional hearing aid, the EINL in directional mode was always 5~7 dB higher than in Omni mode. DM product specifications likewise indicate that the EINL of a DM, e.g., 32 dB SPL, is larger than the EINL of its Omni mics, e.g., 26.5 dB SPL. This evidence tells us that the internal noise level of a DM is higher than that of an Omni mic.
Internal noises of the Omni mics are a critical factor affecting the DM output noise. Miniature microphones used for hearing aids are extremely refined, as required by hearing aid manufacturers. A mic is usually made up of an electret condenser sensor and an integrated-circuit (IC) amplifier. Figure 1 shows an anatomy diagram of a typical electret condenser microphone [12]. The diaphragm is metalized on the outside or inside surface, which conducts electrically to part of the mic case. The metal back-plate is coated with electret material. The diaphragm and back-plate form a parallel-plate capacitor, which is why the mic output is capacitive in character. When a signal is generated between the diaphragm and back-plate, it is delivered to the gate of a field-effect transistor (FET) in the IC, which has a very high input impedance and low output impedance. The IC amplifier has an extremely wide, flat frequency response and very low noise [13]; thus, the frequency response of an electret mic is dominated by the electret sensor response. Obviously, the electret mic internal noise originates from its sensor and IC amplifier. Air flows move along three parts of the electret sensor: the front volume, the gap between the diaphragm and back-plate, and the back volume. When air molecules impact the back-plate, the diaphragm and the electret, these parts produce noises. The noises have a white spectrum because samples of each noise output are independent over time. Evidence that the noises originate from the air flows is that the mic noise output measured in a vacuum container is significantly lower than that measured in normal air [13].
An electronic device usually produces three types of noise: thermal, shot and flicker. Thermal noise originates from a heating element, usually a heating resistor, and its spectrum is constant, related to the element's temperature and resistance. Shot noise originates only from electric current crossing potential barriers in a semiconductor element, so the FET in the IC is the only source of shot noise. Flicker noise, also called 1/f noise, concentrates its energy in the low-frequency region. The former two noises in the IC have white spectra and are dominant, so the total output noise of the electret microphone can be considered approximately white.
Sonion manufactures various miniature acoustic devices, including hearing-aid microphones with low noise levels. The Sonion data sheets provide a "Typical response curve" and a "Typical 1/3 octave equivalent noise" [14]. The former is a sensitivity curve (dB re 1V/Pa), and the latter is an equivalent input noise curve (dB SPL). In order to obtain a curve of microphone output noise, we first transform the unit of the Y-axis sensitivity into dB re 1V/SPL. Since 0 dB SPL is equivalent to 20 µPa, 0 dB SPL is equivalent to -94 dB re 1 Pa; in addition, (-94 dB re 1V/SPL) = (-34 dB re 1mV/SPL). We selected three Sonion microphones, models 6922, 6913 and 6295. A microphone's output noise equals its equivalent input noise times its sensitivity, so the mics' output noises were calculated as shown in Figure 2. The spectra of the output noises are not quite white, owing to flicker noise and the acoustic resonance of the mics. We also smoothed the equivalent noise of the 6922 at 10k Hz. The graphs are bar plots in 1/3-octave bands, and their unit is dB re 1 mV.
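As a worked illustration of this calculation, the following MATLAB sketch converts an equivalent-input-noise curve and a sensitivity curve into an output-noise bar graph; the band values below are made-up placeholders, not the Sonion data, which must be read from the data sheets [14]:

fc   = [500 1000 2000 4000 8000];   % 1/3-octave band centers, Hz (subset, for illustration)
S    = [-38 -38 -37 -35 -33];       % sensitivity, dB re 1V/Pa (assumed placeholder values)
EIN  = [24 21 19 20 23];            % equivalent input noise, dB SPL (assumed placeholder values)
% 0 dB SPL = 20 uPa = -94 dB re 1 Pa, and 1 V = +60 dB re 1 mV, hence the fixed -34 dB offset:
Nout = EIN + S - 34;                % output noise per band, dB re 1 mV
bar(Nout); set(gca, 'XTickLabel', fc); ylabel('Output noise (dB re 1 mV)');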
Basic Behaviors of a Few DMs
When two Omni mics are combined with a delay filter and a subtracter, as shown in Figure 3, they build a simple sound beam-former, i.e., a conventional DM, also called a 1st-order DM. In practical applications, the conventional DM has evolved into multiple varieties, such as the balanced DM, 2nd-order DM, adaptive DM, etc. Practical DM circuits are complex, but their performance is still determined by these basic configurations.
A conventional DM
In Figure 3, the solid arrows represent the 0° (front) incidence, and the dashed arrows represent non-zero-degree incidence. Without loss of generality, the Omni mics' sensitivities are set to 1 and the A/D converters are ignored. For an incoming pure tone, assume the front mic output is y_F(t) = sin(2πft), where f is the tone frequency and t is time. The rear mic output is y_R(t) = y_F(t − δ(θ)), where δ(θ) is the external delay time between the rear and front mic signals. Depending on the two mic ports' spacing d_p and the incident angle θ, δ(θ) = ∆cos(θ), where ∆ is the delay time of the ports' spacing. A delay filter sits in the rear mic output circuit; its parameter τ, called the internal delay time, controls the DM polar pattern shape. The DM output is the front mic output minus the filter output:

y(t) = sin(2πft) − sin(2πf(t − δ(θ) − τ)) = 2 sin(πf(τ + δ(θ))) cos(2πft − πf(τ + δ(θ)))    (1)

The DM output is still a tone, with an additional phase −πf(τ + δ(θ)) and an amplitude 2 sin(πf(τ + δ(θ))). The DM spatial performance is characterized by its gain polar pattern; here we are concerned only with the DM amplitude, which depends on the ports' spacing, the filter delay and the incident angle. We retain the strict concept of sensitivity-gain (S-gain) [11]. When τ = ∆, the DM of Figure 3 is a typical cardioid DM, and from (1) its S-gain polar pattern is

g(θ, f) = 2 sin(πf∆(1 + cos θ))    (2)

Assuming d_p = 16 mm, or ∆ = 0.04662 ms, we can plot the polar patterns of the cardioid DM, as shown in Figure 4, for three tones of frequencies 5k, 2k and 500 Hz; each pattern has a zero notch at incidence 180°. In all the experiments below, we used the 5k Hz tone to represent the high-frequency region; 2k Hz, the mid-frequency region; and 500 Hz, the low-frequency region. In Figure 4, the outer pattern (5k Hz) has a maximum gain of 2 (6 dB) at 0°, and the inner one (500 Hz) has a 0° gain of 0.292 (-10.7 dB). The lower the frequency, the smaller the gain; showing only the 5k Hz polar pattern would misrepresent the DM's performance.
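A minimal MATLAB sketch of Eq. (2), reproducing the three polar patterns of Figure 4 (the spacing delay is recomputed from 16 mm and a nominal 343 m/s speed of sound, so it differs negligibly from the quoted 0.04662 ms):

Delta = 0.016/343;                 % spacing delay of the 16 mm ports, ~0.0466 ms
theta = linspace(0, 2*pi, 361);
for f = [500 2000 5000]
    g = abs(2*sin(pi*f*Delta*(1 + cos(theta))));   % Eq. (2), cardioid DM with tau = Delta
    polarplot(theta, g); hold on;
end
legend('500 Hz', '2 kHz', '5 kHz');
% At 0 degrees: about 2 (6 dB) at 5 kHz and about 0.292 (-10.7 dB) at 500 Hz, as in Figure 4.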
A 2nd-order DM
Based on the DM configuration of Figure 3, we can build a higher-order sound beam-former by combining more Omni mics on a line array with more delay filters and subtracters. Figure 5 shows a 2nd-order DM configuration, composed of three conventional 1st-order DMs. The front mic, mid mic, subtracter1 and delay1 form a front 1st-order DM. The mid mic, rear mic, subtracter2 and delay2 form a rear 1st-order DM. The output of the front DM, the output of the rear DM, subtracter3 and delay3 form a third 1st-order DM, whose output is the 2nd-order DM output. Assume the front mic output is y_F(t) = sin(2πft), the mid mic output is y_M(t) = sin(2πf(t − δ(θ))), and τ1 = ∆, the delay time of the spacing between the front and mid mic ports. The output of the front 1st-order DM is then

y1(t) = sin(2πft) − sin(2πf(t − δ(θ) − τ1)) = 2 sin(πf∆(1 + cos θ)) cos(2πft − πf(∆ + δ(θ)))    (3)

When τ2 = ∆, the output of the rear 1st-order DM is

y2(t) = y1(t − δ(θ))    (4)

Comparing equations (4) and (3), the amplitude of the rear 1st-order DM output equals that of the front 1st-order DM output, and their time functions differ only by a delay of δ(θ). When these two outputs are used as the inputs of the third 1st-order DM, the output amplitude of the third 1st-order DM can be derived as

g(θ, f) = 2 sin(πf∆(1 + cos θ)) · 2 sin(πf(τ3 + δ(θ)))    (5)

where 2 sin(πf∆(1 + cos θ)) has a cardioid pattern and 2 sin(πf(τ3 + δ(θ))) has a flexible pattern depending on the value of τ3. When τ3 = ∆, the pattern of the 2nd-order DM is a 2nd-order cardioid, plotted by means of

g_2nd(θ, f) = 4 sin²(πf∆(1 + cos θ))    (6)

Figure 6 shows the S-gain polar patterns of the 2nd-order cardioid DM for the three tones of 5k, 2k and 500 Hz. Each pattern has a zero notch at incidence 180°. The outer pattern, from the 5k Hz tone, has a maximum gain of 4 (12 dB) at 0°; the inner one, from the 500 Hz tone, has a 0° gain of 0.0853 (-21.4 dB). The resulting pattern has a narrower lobe than the 1st-order DM.
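Evaluating Eq. (6) numerically at the three test tones confirms the 0° gains read off Figure 6; a short check:

Delta = 0.04662e-3;                              % spacing delay, s
f     = [500 2000 5000];                         % test tones, Hz
g0    = 4*sin(pi*f*Delta*(1 + cos(0))).^2;       % Eq. (6) at theta = 0
% g0 is approximately [0.085 1.22 3.96], i.e. -21.4 dB, +1.8 dB and +12 dB,
% matching the 500 Hz and 5 kHz values quoted for Figure 6.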
A Balanced DM
The frequency response of a conventional DM has a 6 dB/octave upward slope in the low-mid frequency region, as shown in Figure 9. In practice, the response slope can be reduced by balancing the response with multi-band multipliers, as shown in Figure 7. We designed eight-band multipliers, each composed of a band-pass filter and a multiplier. The eight bands together cover 200~8000 Hz, meeting the related requirements of standards ANSI S3.22 and IEC 60118. The input in Figure 7 is connected to the conventional DM output. In a modern hearing aid, advanced features such as noise reduction and feedback cancelation are also implemented in such a multi-band processor; thus, in a practical hearing aid, the multi-band processor operates in Omni mode too. The multiplier value of each band depends on the corresponding band gain of the conventional DM: the lower the gain, the larger the multiplier. The design of these band-pass filters focused on balancing the outputs and keeping the delay times consistent, both of which affect speech-signal fidelity.
The frequency-response ripples were tested to within ±1.2 dB, and the delay times were about 80 samples, 1.8 ms. For details of the band-pass filter design, refer to the Appendix. Figure 8 shows the polar patterns of a balanced DM for the three tones of 5k, 2k and 500 Hz. The patterns have a zero notch at incidence 180° and nearly the same gain, 6 dB, at 0°. Thus the balanced DM exhibits good directivity and a balanced frequency response in all frequency regions, benefiting both the spatial and frequency domains.
Analytical Study of Speech Enhancement
DM S/N improvement can be studied in two aspects: speech enhancement and noise suppression. Here we study the former. Using Eqs. (2) and (6) and the multi-band multipliers of Figure 7, we calculated the S-gains of the three DMs, plus the Omni mic as a benchmark. Figure 9 shows the resulting S-gain frequency responses at incidence 0°. The test conditions were a sampling rate of 44.1k Hz and a DM port spacing of 16 mm. From this figure we observe that (1) the Omni mic has a flat curve at 0 dB; (2) the conventional DM has a 6 dB/octave upward slope in the low-mid frequency region; (3) the 2nd-order DM has a 12 dB/octave upward slope in the low-mid frequency region; (4) the balanced DM has a saw-like, flat curve around 6 dB. Note that the saw fluctuation is related to the number and centers of the frequency bands, as well as the operation word-length; a hardware DM will smooth the curve well. Figure 9 also shows that 1) the curves of the conventional and 2nd-order DMs cross the Omni mic curve at 1.78k Hz, so speech signals passing through the conventional or 2nd-order DM may not be enhanced as well as through the Omni mic; 2) the balanced DM provides around 6 dB of speech enhancement; and 3) the summit frequencies of the conventional and 2nd-order DMs are the same, 5.36k Hz. The conventional DM may cause severe speech spectrum distortion [11]. From the slopes in Figure 9, the spectrum distortion of the 2nd-order DM is worse than that of the 1st-order DM, which eliminates the need for further study of the 2nd-order DM.
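The crossing and summit frequencies quoted above follow directly from the conventional DM's 0° gain, 2 sin(2πf∆): it equals 1 (0 dB, the Omni level) when 2πf∆ = π/6, and peaks at 2 when 2πf∆ = π/2. A one-line MATLAB check:

Delta    = 0.04662e-3;      % s, for 16 mm ports
f_cross  = 1/(12*Delta)     % ~1.79 kHz, cf. the 1.78 kHz crossing in Figure 9
f_summit = 1/(4*Delta)      % ~5.36 kHz summit, as quoted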
Furthermore, Figure 10 shows the S-gain frequency responses of the three DMs and the Omni mic at incidence 90°. Compared to Figure 9, we can read off how many dB the main lobe drops at ±90°. For example, at a lobe width of ±90° and a frequency of 2k Hz, the conventional DM gain drop is about 5.6 dB; the balanced DM, 6 dB; and the 2nd-order DM, 11.3 dB. The data indicate that the DMs' spatial resolution is not high enough to differentiate among sounds.
Simulating Experiments for Speech Enhancement
A pure tone is a sound signal with an impulse autocorrelation and a line spectrum. In Figure 9, the gain response of the conventional DM crosses the response of the Omni mic at 1.78k Hz, so we cannot tell from the curves alone whether the DM performs better speech enhancement. Simulating experiments with real-world speech help figure this out. For a high confidence level, a large-sample speech time-series is required. We acquired English speech composed of an 11-word phrase spoken by a female announcer, Amy [15]: "Hi, one of the available high quality texts to speech voices". Its wave file lasts about 3.8 s and contains about 167,500 samples at a 44.1k Hz sampling rate and 16-bit word length. Figure 11 shows the original Amy speech spectrum. If the cut-off frequency is defined as a 30 dB spectrum drop, the speech spectrum width is about 8k Hz; the high-energy part of the spectrum lies below 500 Hz. The speech RMS was recorded as 0.0484. Figure 12 shows the Omni mic output spectrum with the Amy speech. Compared to Figure 11, the spectrum does not change significantly except above 8k Hz, because the mic IC amplifier and the pre-amplifier cut off components above 8k Hz. The RMS of the Omni mic output was recorded as 0.0472. Figure 13 shows the conventional DM output spectrum with the Amy speech. Compared to Figure 11, the spectrum drops significantly below 2k Hz, is enhanced in the range 2k~8k Hz, and disappears above 8k Hz. The RMS of this DM output was recorded as 0.0275, about 58% of the Omni mic output, indicating that this DM loses part of the speech. Figure 14 shows the balanced DM output spectrum with the Amy speech. Compared to Figure 13, the spectrum energy in the low-frequency region is recovered, and this DM enhances energy in the range 0~3k Hz; the spectrum above 8k Hz again disappears. The RMS of the balanced DM output was recorded as 0.0570, about 120% of the Omni mic output, so this DM does not lose but rather enhances the speech. In short, relative to the Omni mic, the conventional DM performs speech enhancement of -4.70 dB, and the balanced DM, of 1.64 dB.
Common DMs' Performance in Noises and Interference
Usually, noise fields mean that peripheral noises or interferences intrude on the listener, such as party noise, equipment noise, or talking interference [16]. The noises and interference may be soft or strong in listening situations. Hearing-aid wearers are also concerned about the weak internal noises of the common DMs and the Omni mic. At present, large-sample real-world noises can easily be acquired from wave files in many online references, containing noise time-series lasting several seconds or longer. Such noise resources can be imported to make our experiments realistic. We acquired voices from Amy and Brian (a male announcer) in a quiet room; the wave files of both voices contain a 12-word phrase of about 176,400 samples over a 4 s period. Considering that the two announcers' voices represent average speech in conversation, the sound pressure of their voices should be 60 dB SPL. Statistics of the two acquired time-series were Amy RMS = 0.0485 and Brian RMS = 0.0385. Their mean, 0.0435, is used as the criterion RMS equivalent to a sound pressure of 60 dB SPL. Logically, if the RMS of another large-sample sound is 0.0435, its SPL is 60 dB. Thus a noise or interference from other wave files can be calibrated conveniently against the criterion RMS.
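The calibration step can be written in a few MATLAB lines; the file name below is a hypothetical placeholder, and sqrt(mean(x.^2)) is the RMS of the time-series:

critRMS = 0.0435;                          % criterion RMS, equivalent to 60 dB SPL
target  = 40;                              % desired level of this sound, dB SPL
x = audioread('babble.wav');  x = x(:,1);  % hypothetical noise wave file, one channel
x = x * critRMS * 10^((target - 60)/20) / sqrt(mean(x.^2));
% e.g. a 30 dB SPL target gives RMS = 0.0435*10^(-30/20) = 0.00138, the value used in Section 5.1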
The input configuration of a DM is shown in the upper part of Figure 15. Because these devices are linear and time-invariant, we can change the connection order for convenient measurement. We move the mic IC into the Pre-amplifier box; the IC plus Pre-amplifier box can then be represented by a low-pass filter, as shown in the lower part of Figure 15. The low-pass filter covers the entire speech frequency range, 0~8k Hz. Its pass-band is actually narrow relative to the 44.1k Hz sampling rate used in our experiments, so it cuts off much of the white-noise energy at mid-high frequencies. Based on the principles of Figure 3, Figure 7 and Figure 15, when configuring the Omni mic's and common DMs' experiments, we designed a low-pass filter and eight band-pass filters of Chebyshev II direct type. Details on the Chebyshev filtering blocks are provided in the Appendix. Figure 16 shows the configuration of the Omni mic experiment. The input block SpchAmy60dBL.mat provided the mic output time-series. There were two recording blocks: AmySpch.mat for the Amy original speech and AmyOmni.mat for the Omni mic output. In addition, two pairs of Time Scope and Spectrum Scope were used to monitor the waveforms, statistics and spectra of the input and output of the Omni mic processor, respectively.
Based on Figure 16, we inserted DM operation blocks between the two low-pass filter outputs and the multi-band processor input; the configuration of the conventional DM experiment is shown in Figure 17. The upper SpchAmy60dBL.mat became the front mic output; the lower SpchAmy60dBL.mat was used for the rear mic output. A spacing delay between the mic ports was connected to the rear mic output to control the back sound orientation. An extra Time Scope was connected to the input end of the internal delay to monitor the rear mic output. Additionally, the block AmyConv.mat was used to record the conventional DM output.
Based on Figure 17, we inserted eight multipliers between the multi-band outputs and the Adder inputs; the configuration of the balanced DM experiment is shown in Figure 18. The eight gain values were 5.4, 2.36, 1.49, 1.17, 1.03 and three 1s for the bands centered at 600, 1.5k, ..., 7.5k Hz, respectively. The gain values were calculated from the slope of the conventional DM frequency response and the balancing requirements. The block AmyBalcd.mat was used to record the balanced DM output.
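The eight gains are consistent with simply inverting the conventional DM's 0° response toward the 6 dB (gain 2) target and capping the multiplier at 1 above the summit frequency. The sketch below reproduces the quoted values closely though not exactly (e.g., it gives about 5.7 rather than 5.4 for the 600 Hz band), so treat it as an assumed reading of the design rule rather than the paper's exact procedure:

Delta = 0.04662e-3;
fcent = [600 1500 2500 3500 4500 5500 6500 7500];   % band centers, Hz
gDM   = 2*abs(sin(2*pi*fcent*Delta));               % conventional DM 0-degree gain per band
m     = 2 ./ gDM;                                   % multiplier toward the 6 dB target
m(fcent > 1/(4*Delta)) = 1;                         % cap at 1 beyond the ~5.36 kHz summit
% m is approximately [5.7 2.4 1.5 1.17 1.03 1 1 1], cf. the quoted 5.4, 2.36, 1.49, 1.17, 1.03 and three 1s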
DMs in a Quiet Field
In a quiet field, the output noise of a practical hearing aid is its internal noise, related to the elements, devices and processors inside. The noise sources in the Omni mic or the DMs are critical, particularly those from the mic IC, the electret sensor and the pre-amplifier. Recalling Section 2, we calculated the output noise for the Omni mics; for the DMs, we obtained the results by experiment. A white-noise time-series of about 4 s was taken from Simulink, its white spectrum verified. It was calibrated to RMS = 0.00138 so that the sound pressure was 30 dB SPL. When internal noise takes effect in the DMs, it has no orientation, so we used noises at eight angles 0°, 45°, ..., 315° (the more angles, the more exact) to represent the internal noise. The mean power of the output noises over all the angles was taken as the final DM output. Usually, when the S/N is 9 dB, no listening effort is needed to understand speech in noise. We specified output S/N criteria to evaluate the mics' performance: S/N < 1 dB, very poor; 1 dB < S/N ≤ 4 dB, poor; 4 dB < S/N ≤ 7 dB, fair; 7 dB < S/N ≤ 11 dB, good; 11 dB < S/N ≤ 15 dB, very good; and 15 dB < S/N, excellent. Table 1 lists the outputs and S/Ns of the common DMs and the Omni mic in the quiet field. The conventional DM output noise was 0.001, the middle value; the balanced DM, 0.00115, the highest; and the Omni mic, 0.000873, the lowest. With a speech signal added, the S/N of the conventional DM output was 27.3 dB; the balanced DM, 33.5 dB; and the Omni mic, 33.6 dB; all the mics achieved excellent S/Ns. The noise drop of the common DMs resulted not only from polar-pattern suppression but also from the low-pass filter effect.
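The S/N bookkeeping used throughout Sections 5.1-5.5 reduces to a ratio of RMS values plus the rating scale above; a sketch with assumed example numbers consistent with Table 1's Omni row:

sigRMS   = 0.0418;                         % speech output RMS (assumed example value)
noiseRMS = 0.000873;                       % Omni mic output noise in the quiet field
SN = 20*log10(sigRMS/noiseRMS);            % ~33.6 dB
labels = {'very poor','poor','fair','good','very good','excellent'};
rating = labels{find(SN <= [1 4 7 11 15 Inf], 1)}   % 'excellent'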
DMs in Soft Noises
When the sound pressure of a noise in a listening field is around 40 dB SPL, it is a soft level for average conversations. We used a soft babble noise and a soft white noise. The former was acquired from a wave file, a multi-person talking record simulating a party noise [17]; the latter was taken in the same way as in Section 5.1. The noises were calibrated to a sound pressure of 40 dB SPL. When an aid wearer enters a party, the noise surrounds the hearing aid from all orientations. We selected eight angles 0°, 45°, 90°, ..., 315° to represent the surrounding noise intrusion, but three of them, 225°, 270° and 315°, were ignored because of the shading effect of the wearer's head. The mean power of the output noises at the five angles was taken as the final DM output. Since the output noises were independent time-series, their mean power was calculated from the sum of their powers. Table 2 lists the outputs and S/Ns of the common DMs and the Omni mic in the soft noises. The results indicate that 1) in the babble noise, the conventional DM output was 0.00117, the lowest; the balanced DM, 0.00388, the middle; and the Omni mic, 0.00435, the highest. With a speech signal added, the S/N of the conventional DM was 25.9 dB; the balanced DM, 22.9 dB; and the Omni mic, 19.7 dB; all the mics achieved excellent S/Ns. 2) In the white noise, the conventional DM output was 0.00317, the middle; the balanced DM, 0.0037, the highest; and the Omni mic, 0.00276, the lowest. With a speech signal added, the S/N of the conventional DM was 17.3 dB; the balanced DM, 23.3 dB; and the Omni mic, 23.6 dB; all the mics achieved excellent S/Ns.
DMs in Low Noises and Low Interference
When the sound pressure of a noise or interference in a listening field is around 50 dB SPL, it is a low level for average conversations. We used a low babble noise, a low white noise and a low talking interference. The two noises were acquired as in Section 5.2 and calibrated to a sound pressure of 50 dB SPL. We used two talking interferences, acquired from Amy's and Brian's voices in wave files [15] and calibrated in the same way as the noises. Since a hearing aid wearer can turn his or her head so that the aid backs onto the interference, we selected the individual talking interferences at five angles 135°, 157.5°, ..., 225° to represent the back intrusion, but two of them, 202.5° and 225°, were ignored because of the shading effect of the wearer's head. Since the speech outputs at the three remaining angles were correlated time-series, their mean power was calculated from the sum of their RMS values. We used the mean of the outputs of two experiments, with Amy's and Brian's voices, as the final interference output. Table 3 lists the outputs and S/Ns of the common DMs and the Omni mic in the low noises and low interference. The results indicate that 1) in the babble noise, the conventional DM output was 0.0037, the lowest; the balanced DM, 0.0123, the middle; and the Omni mic, 0.0138, the highest. With a speech signal added, the S/N of the conventional DM was 15.9 dB, excellent; the balanced DM, 12.9 dB, very good; and the Omni mic, 9.6 dB, good. 2) In the white noise, the conventional DM output was 0.01, the middle; the balanced DM, 0.0117, the highest; and the Omni mic, 0.00873, the lowest. With a speech signal added, the S/N of the conventional DM was 7.3 dB, good; the balanced DM, 13.3 dB, very good; and the Omni mic, 13.6 dB, very good. 3) In the talking interference, the conventional DM output was 0.00126, the lowest; the balanced DM, 0.00234, the middle; and the Omni mic, 0.0132, the highest. With a speech signal added, the S/N of the conventional DM was 25.3 dB, excellent; the balanced DM, 27.3 dB, excellent; but the Omni mic, only 10 dB, good.
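The two combination rules used above can be written explicitly; the per-angle RMS vectors below are hypothetical placeholders:

rmsN = [0.0040 0.0038 0.0035 0.0041 0.0039];   % noise output RMS at the 5 angles (assumed)
rmsS = [0.012 0.011 0.010];                    % speech output RMS at the 3 angles (assumed)
noiseOut = sqrt(mean(rmsN.^2));   % independent time-series: powers average, then back to RMS
sigOut   = mean(rmsS);            % correlated time-series: the RMS values themselves average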
DMs in Competing Noises and Competing Interference
When the sound pressure of a noise or interference in a listening field is around 60 dB SPL, it is a competing level for average conversations. We used a competing babble noise, a competing white noise and a competing talking interference, acquired as previously described and calibrated to a sound pressure of 60 dB SPL. Table 4 lists the outputs and S/Ns of the common DMs and the Omni mic in the competing noises and competing interference. The results indicate that 1) in the babble noise, the conventional DM output was 0.0117, the lowest; the balanced DM, 0.0388, the middle; and the Omni mic, 0.0435, the highest. With a speech signal added, the S/N of the conventional DM was 5.9 dB, fair; the balanced DM, 2.9 dB, poor; and the Omni mic, -0.35 dB, very poor. 2) In the white noise, the conventional DM output was 0.0317, the middle; the balanced DM, 0.037, the highest; and the Omni mic, 0.0276, the lowest. With a speech signal added, the S/N of the conventional DM was -2.75 dB, very poor; the balanced DM, 3.32 dB, poor; and the Omni mic, 3.6 dB, poor. 3) In the talking interference, the conventional DM output was 0.00397, the lowest; the balanced DM, 0.0074, the middle; and the Omni mic, 0.0418, the highest. With a speech signal added, the S/N of the conventional DM was 15.3 dB, excellent; the balanced DM, 17.3 dB, excellent; but the Omni mic, 0 dB, very poor.
DMs in Strong Noises and Strong Interference
When the sound pressure of a noise or interference in a listening field is around 70 dB SPL, it is a strong level for average conversations. We used a strong babble noise, a strong white noise and a strong talking interference. The noises were applied at 5 incident angles and the interferences at 3 incident angles, as described in Section 5.4, and all were calibrated to a sound pressure of 70 dB SPL. Table 5 lists the outputs and S/Ns of the common DMs and the Omni mic in the strong noises and strong interference. The results indicate that 1) in the babble noise, the conventional DM output was 0.037, the lowest; the balanced DM, 0.123, the middle; and the Omni mic, 0.138, the highest. With a speech signal added, the S/N of the conventional DM was -4.1 dB; the balanced DM, -7.1 dB; and the Omni mic, -10.4 dB; all the mics achieved very poor S/Ns. 2) In the white noise, the conventional DM output was 0.101, the middle; the balanced DM, 0.117, the highest; and the Omni mic, 0.0873, the lowest. With a speech signal added, the S/N of the conventional DM was -12.8 dB; the balanced DM, -6.7 dB; and the Omni mic, -6.4 dB; all the mics achieved very poor S/Ns. 3) In the talking interference, the conventional DM output was 0.0126, the lowest; the balanced DM, 0.0234, the middle; and the Omni mic, 0.132, the highest. With a speech signal added, the S/N of the conventional DM was 5.26 dB, fair; the balanced DM, 7.3 dB, good; but the Omni mic, -10 dB, very poor.
Distortion of Common DMs
Significant spectrum distortion of a conventional DM was illustrated with two-word English phrases in [11]. Here we selected the large-sample, real-world speech of 3.8 s described in Section 4.2. In order to compare the waveforms and spectra of the original speech with those of the Omni mic, conventional DM and balanced DM outputs, we recorded data at four test points across Figure 16 to Figure 18. No. 1 was the original speech output, recorded into block AmySpch.mat; No. 2 was the Omni mic output, recorded into block AmyOmni.mat; both were recorded in the Omni mic experiment of Figure 16. No. 3 was the conventional DM output, recorded into block AmyConv.mat placed at the Adder output in Figure 17. No. 4 was the balanced DM output, recorded into block AmyBalcd.mat placed at the Adder output in Figure 18. The four blocks were of the To File type in Simulink and were saved in the Matlab Workspace after running; they then needed to be written into wave files for later viewing and listening. For details of the wave-file creation, refer to the Appendix. It is convenient to inspect the waveforms and spectra using Adobe SoundBooth. Figure 19 shows the waveform (upper) of the original Amy speech, which was the criterion waveform for our evaluations. The gaps between word waveforms are silent and the envelope of the speech waveform is deep, indicating clean speech; the spectrum (lower) shows that the high energy lies in the low-mid frequency region. Figure 20 shows the waveform and spectrum of the Omni mic output; they show little distortion compared to the criterion waveform, so the Omni mic preserves the input speech fidelity very well. Figure 21 shows the waveform and spectrum of the conventional DM output. The waveform is significantly distorted: some word waveforms are attenuated while others are expanded, depending on the frequency components of the word waveforms [11]. Figure 22 shows the waveform and spectrum of the balanced DM output. The entire waveform is enhanced by at most 6 dB compared to the criterion waveform, and the magnified waveform of Figure 22 preserves the fidelity of the original speech. Furthermore, we also listened to the speech during all the playbacks and could not perceive distortion except for the speech of Figure 21, which sounded much different from the original speech of Figure 19: the high pitches were significantly increased. These findings are consistent with the frequency responses of the common DMs.
Conclusions
The data, waveforms, spectra and graphs acquired through our experiments facilitated our evaluation of the benefits and limitations of the common DMs.
(1) The common DMs' effectiveness at suppressing a surrounding noise, e.g., party noise, improves only a little on the Omni mic. However, when an interference intrudes as a beamed sound, e.g., an individual talking interference, the DMs suppress it effectively, even at a strong level.
Appendix 1. Design of the Low-pass and Band-pass Filters
A low-pass filter can take an FIR direct-type structure, as shown in Figure A1. Simulink 2017b provides a Low-pass Filtering block. Before running it, we needed to set its parameters: Type, Chebyshev II; Pass-band gain, 0 dB; Ripples, 0.1 dB; Pass-band edge, 8k Hz; Stop-band edge, 10k Hz; and Stop-band attenuation, -40 dB. By clicking the left box Review Response, we verified the characteristics of the resulting filter that concerned us, e.g., the delay time of 20 samples, 0.453 ms. The multi-band filters for a balanced DM are composed of several band-pass filters, which can have an FIR filter-bank or FFT structure; in fact, each channel of an FFT is itself an FIR filter. We did not use IIR (infinite impulse response) or octave filtering structures because of their long group delays; long and staggered delays may cause severe waveform distortion when the bands are summed. For our experiments, eight band-pass filters were designed; their center frequencies are 600, 1.5k, ..., 7.5k Hz, and their bandwidths are equal at 1k Hz, except that the first has a width of 800 Hz. The eight filters cover a frequency range of 200~8k Hz, wide enough to pass almost all components of speech spectra [18]. When setting up the band-pass blocks in Figure 16, Figure 17 and Figure 18, we needed to specify many parameters and select options. Figure A2 shows the response of one of the designed band-pass filters, with a center frequency of 2.5k Hz. We can also view the delay time, order number, etc., by clicking the specification in the menu.
When the outputs of the filters are summed, interactions of the output phases may cause a big difference between the designed ripples and the tested ripples. Thus the integrated frequency response of the multi-band filters must be verified before applying them: Fpass1 and Fpass2 of each filter were adjusted until the ripples of the integrated response met ±1.2 dB. The resulting delay was about 80 samples, 1.8 ms. When we instead selected octave-filter or IIR filter blocks, the delay time was about 200 samples.
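For reference, the low-pass specification of Appendix 1 can be realized directly in MATLAB with the classic Chebyshev II design route; this is an illustrative equivalent of the Simulink block settings, not the FIR direct-type realization mentioned above:

Fs = 44100;
[n, Ws] = cheb2ord(8000/(Fs/2), 10000/(Fs/2), 0.1, 40);  % pass edge 8 kHz, stop edge 10 kHz
[b, a]  = cheby2(n, 40, Ws);                             % 40 dB stop-band attenuation
freqz(b, a, 1024, Fs);                                   % inspect the response and group delay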
Appendix 2. Manipulations of Reading and Writing Wave Files
When starting an experiment of Figure 16, Figure 17 or Figure 18, the From File input blocks in the Workspace must be invoked as sound sources. To prepare the sources, we needed readable time-series files of speech voices, field noises, device noise, etc. Here we give an example. In Matlab 2017b, some syntaxes for manipulating wave files have been updated from previous versions. Let the Amy wave file be named Amy60dB4s.wav; the following syntaxes are then available. Because the Amy wave file was recorded in stereo channels, we had to read the data of both channels into a two-column array with [y, Fs] = audioread('Amy60dB4s.wav'); and split it into two vectors, AmyL = y(:,1); and AmyR = y(:,2);. Then we needed to build a simple model in Simulink composed of only two directly connected blocks: a From Workspace block, AmyL, read the vector AmyL, and a To File block, SpchAmy.mat, recorded the data from the block AmyL when the model was run. In each of our experiments, two pairs of Time Scope and Spectrum Scope were connected to the input and output ends of the multi-band processing, so it was easy to view the behaviors of the Omni mic and the DMs while running. To back up the experimental results, we needed to record these data into wave files, which requires the reverse of the above syntaxes. For example, when the experiment of Figure 16 was done, all the Omni mic output data had been recorded into the block AmyOmni.mat. The following syntaxes write a wave file from the block's mat-file: load('AmyOmni.mat'); y3(1,:) = AmyLPOmni.data; yb = y3(1,:)'; soundsc(yb,44100); pause; audiowrite('AmyLOmni.wav', yb, 44100); where load opens the mat-file AmyOmni.mat in the Workspace. The syntax y3(1,:) = AmyLPOmni.data; takes only the data from the time-series AmyLPOmni; the name AmyLPOmni was assigned when we set up the parameters of the block AmyOmni.mat. The syntax audiowrite('AmyLOmni.wav', yb, 44100) uses the data of vector yb and the 44.1k Hz sampling rate to write a wave file named AmyLOmni.wav; soundsc(yb,44100) and pause let us listen to the sound before writing the wave file. In the same way, we can write the other three wave files from the mat-files AmyConv.mat, AmyBalcd.mat and AmySpch.mat, acquired in the experiments of Figure 17, Figure 18 and Figure 16, respectively. Adobe SoundBooth CS4 is an excellent audio analyzer and audio signal editor, and imports sounds by reading wave files. For details of listening to sounds, viewing waveforms and analyzing spectra, refer to the SoundBooth Help.
Appendix 3. Wave File Creation and Playback of Double Channels
After opening a wave file, its waveform appears in a track. To compare another waveform, we can select Add an Audio Track from the small menu in the upper left of the Editor panel, then open a second wave file so that its waveform appears in another track below. If there is a big difference between the two waveforms, it is easy to observe: for example, between the waveforms of Figure 21 and Figure 19, some word waveforms are expanded significantly and some are attenuated significantly. However, if the differences between two waveforms are very small, it is difficult to see where they are. For example, the Amy original speech of Figure 19 and the Omni mic output of Figure 20 are almost the same, so we cannot distinguish the waveforms by viewing them separately. Instead of multi-track playing, we can write the two waveforms into the two channels of a "stereo" sound and then play and view them in the same track. The following syntaxes write the Amy original speech into the left channel and the Omni mic output into the right channel: y4 = [y3(1,:)' y3(2,:)']; audiowrite('AmyLSD&Omni3_8s.wav', y4, 44100); where y4 is a two-column array; y3(1,:) contains the Amy original speech, and y3(2,:) contains the Omni mic output. The resulting wave file AmyLSD&Omni3_8s.wav contains the "stereo" sounds, as shown in Figure A3, and can be viewed and differentiated. Before playing the stereo sound, we selected View/Channel/Layer on the main menu of SoundBooth CS4. We can observe that the two waveforms overlap, but their spectra are separated; the green waveform is the original speech, the blue one is the Omni output, and the dodger-blue one is the overlapping area. As a result, the differences between the two waveforms appear clearly: the two are very close (the dodger blue covers almost 100% of the area) but not identical. Using such stereo creation, we can also easily recognize artifacts caused by a DSP processor.
Biography
Xubao Zhang received his doctorate in electronics from Xi'an Electronic Science and Technology University in China and was a postdoctoral fellow at McMaster University in Canada. He is interested in hearing-aid technology strategies and performance evaluation. He worked as an EA and EMC engineer with Sonova Unitron and with Oticon Canada, and as an Associate Professor in the EE department of Xidian University, researching radar signal processing. He is the author of one book and more than 40 articles. | 9,844.8 | 2018-10-26T00:00:00.000 | [
"Physics"
] |
Measurement of the polarization of W bosons with large transverse momenta in W+jets events at the LHC
: A first measurement of the polarization of W bosons with large transverse momenta in pp collisions is presented. The measurement is based on 36 pb-1 of data recorded at √s=7 TeV by the CMS detector at the LHC. The left-handed, right-handed, and longitudinal polarization fractions (fL, fR, and f0, respectively) of W bosons with transverse momenta larger than 50 GeV are determined by using decays to both electrons and muons. The muon final state yields the most precise measurement: (fL-fR)-=0.240±0.036(stat)±0.031(syst) and f0-=0.183±0.087(stat)±0.123(syst) for negatively charged W bosons and (fL-fR)+=0.310±0.036(stat)±0.017(syst) and f0+=0.171±0.085(stat)±0.099(syst) for positively charged W bosons. This establishes, for the first time, that W bosons produced in pp collisions with large transverse momenta are predominantly left-handed, as expected in the standard model. Published by the American Physical Society under the terms of the Creative Commons Attribution 3.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI. © 2011 CERN, for the CMS Collaboration https://doi.org/10.1103/PhysRevLett.107.021802
The measurement of the kinematic properties of W bosons produced at hadron colliders provides a stringent test of perturbative quantum chromodynamics (QCD) calculations as well as being an important prerequisite to searches for physics beyond the standard model. The pp collisions at the Large Hadron Collider (LHC) offer both a new environment and higher energy to study W bosons with large transverse momenta recoiling against several energetic jets. The sizable production cross section results in significant samples of W bosons, while the nature of the initial state leads to an enhancement of the quark-gluon contribution to W+jet production when compared to the Tevatron pp̄ collider, where quark-gluon and antiquark-gluon processes contribute equally. This dominance of quark-gluon initial states, along with the V − A nature of the coupling of the W boson to fermions, implies that at the LHC W bosons with high transverse momenta are expected to exhibit a sizable left-handed polarization. A significant asymmetry in the transverse momentum spectra of the neutrino and charged lepton from subsequent leptonic W decays is therefore expected. This Letter reports the first measurement of the polarization of W bosons with large transverse momenta at the LHC, using a data sample of pp collisions corresponding to an integrated luminosity of 36 ± 1.4 pb−1 at a center-of-mass energy of 7 TeV, recorded with the Compact Muon Solenoid (CMS) detector.
We measure the polarization of the W boson in the helicity frame, where the polar angle (θ*) of the charged lepton from the decay in the W rest frame is measured with respect to the boson flight direction in the laboratory frame. The azimuthal angle (φ*) is defined to be zero for the proton which has the smaller θ* in the boson rest frame. The cross section for W production at a hadron collider with a subsequent leptonic decay, dN/dΩ, is given by [1]

dN/dΩ ∝ (1 + cos²θ*) + A0 (1 − 3cos²θ*)/2 + A1 sin2θ* cosφ* + A2 (sin²θ* cos2φ*)/2 + A3 sinθ* cosφ* + A4 cosθ*,    (1)

where the coefficients A_i (i = 0, ..., 4) depend on the W boson charge, transverse momentum and rapidity, and make up the elements of the polarization density matrix. Integrating Eq. (1) over φ* yields

dN/d cosθ* ∝ (1 + cos²θ*) + A0 (1 − 3cos²θ*)/2 + A4 cosθ*.

The fractions of left-handed, right-handed, and longitudinal W bosons (f_L, f_R and f_0, respectively) are related to the A_i parameters. The values of the f_i parameters are not expected to be the same for both charges, since for partons which carry a large fraction of the proton's momentum, the ratio of valence u quarks to sea quarks is higher than that for valence d quarks.
The amount of W boson momentum imparted to the charged decay lepton is determined by cos θ*, and hence an asymmetry in the cos θ* distribution leads to an asymmetry between the neutrino and charged-lepton momentum spectra. This can be quantified via a measurement of the A4 parameter. However, the inability to determine the momentum of the neutrino along the beam axis introduces a two-fold ambiguity in the determination of the momentum of the W boson. Therefore, it is not possible to precisely determine the W boson rest frame required to extract the W decay angles. To overcome this, a variable which exhibits a strong correlation with cos θ* is introduced. The lepton projection variable, L_P, is defined as the projection of the scaled transverse momentum of the charged lepton, p_T(ℓ)/|p_T(W)|, onto the normalized transverse momentum of the parent W boson, p_T(W)/|p_T(W)|:

L_P = p_T(ℓ) · p_T(W) / |p_T(W)|²

In the above expression, p_T(W) is estimated from the vectorial sum of the missing transverse energy E/T and p_T(ℓ) in the event. Experimentally, E/T is reconstructed as the negative vector sum of the transverse energy vectors of all particles identified in the event using a particle flow algorithm [2]. In the limit of very high p_T(W), L_P lies within the range [0,1] and cos θ* = 2(L_P − 1/2) (see the numerical sketch below). The central feature of the CMS apparatus is a superconducting solenoid, 13 m in length and 6 m in diameter, which provides an axial magnetic field of 3.8 T. The bore of the solenoid is instrumented with various particle detection systems. Charged particle trajectories are measured by the silicon pixel and strip tracking detectors, covering 0 < φ < 2π in azimuth and |η| < 2.5, where the pseudorapidity is defined as η = −ln[tan(θ/2)], and θ is the polar angle of the trajectory of the particle with respect to the counterclockwise beam direction. A crystal electromagnetic calorimeter (ECAL) and a brass/scintillator hadron calorimeter (HCAL) surround the tracking volume and cover the region |η| < 3. The steel return yoke outside the solenoid is in turn instrumented with gas detectors which are used to identify muons. The detector is nearly hermetic, allowing for energy balance measurements in the plane transverse to the beam direction. A more detailed description of the CMS detector can be found elsewhere [3].
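As a toy numerical illustration of the lepton projection variable (the momentum vectors below are invented for the example and carry no physical significance):

% MATLAB sketch: L_P from the charged-lepton pT and the missing transverse energy
ptL = [40; 10];                         % lepton transverse momentum vector, GeV (assumed)
met = [25; -5];                         % missing transverse energy vector, GeV (assumed)
ptW = ptL + met;                        % estimated W-boson pT; |ptW| ~ 65 GeV, above the 50 GeV cut
LP  = dot(ptL, ptW) / dot(ptW, ptW);    % L_P ~ 0.62
cosThetaStar = 2*(LP - 0.5);            % ~0.25 in the high-pT(W) limit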
The trigger providing the data sample used in this analysis is based on the presence of at least one charged lepton, either an electron or a muon, with a minimum transverse momentum of 22 (15) GeV for the electron (muon). Events passing this trigger are required to have at least one good reconstructed pp interaction vertex [4]. Electrons and muons are reconstructed and selected using the procedure and requirements described in the measurement of the inclusive W/Z boson cross section [5]. The selection of W boson candidates requires one electron (muon), with p_T > 25 (20) GeV in |η| < 2.4 (2.1). High-p_T leptons are also found in events in which hadronic jets mimic the lepton signature. Such misidentified leptons, as well as nonprompt leptons arising from decays of heavy-flavor hadrons or decays of light mesons within jets, are suppressed by imposing limits on the additional hadronic activity surrounding the lepton candidate in an event. The scalar sum of the transverse momenta of all charged particle tracks and the transverse energy in the ECAL and HCAL in a cone of ∆R = √((∆φ)² + (∆η)²) = 0.3 centered on the lepton candidate is calculated, excluding the contribution from the candidate itself. The candidate is retained if this sum is less than 4 (10)% of the electron (muon) p_T. Electrons (muons) from decays of Z bosons are suppressed by vetoing events containing a second lepton with p_T > 15 (10) GeV passing looser isolation criteria.
Since the analysis measures the lepton and neutrino momenta from W boson decays, there is no requirement on the E/T in the event. Instead, to further reduce backgrounds from QCD multijet production, the selection requires M_T > 50 (30) GeV for the electron (muon) channel, where M_T = √(2 p_T(ℓ) E/T (1 − cos ∆φ)) and ∆φ is the angle between the missing transverse momentum and the lepton transverse momentum. The requirement on M_T is higher in the electron channel to compensate for the larger QCD multijet background. Given that the polarization and the correlation of L_P with cos θ* increase with p_T(W), while the number of available events decreases sharply with p_T(W), we require p_T(W) > 50 GeV as the result of an optimization study based on the expected statistical uncertainty of the (f_L − f_R) measurement. As high-p_T W bosons are also produced in top quark decays, only events with up to three reconstructed jets are retained. The jets considered are particle-flow based [6] with p_T > 30 GeV, |η| < 5, and are clustered using the anti-k_T algorithm [7] with a distance parameter of 0.5. In data, a total of 5485 (8626) events pass the selection requirements in the electron (muon) channel. These events are almost entirely W+jets events, with a small contamination from the processes tt̄+jets, Z+jets and photon+jets. All these processes, and their expectations, are produced using the MADGRAPH [8,9] generator, with the CTEQ6L [10] parton distribution function set, and are passed through a full simulation of the CMS detector based on the GEANT4 [11] package. There are 252 ± 93 (266 ± 84) estimated background events from simulation in the electron (muon) channel, where the uncertainty corresponds to the theoretical uncertainty on the relevant cross sections.
In the muon channel, the background from QCD multijet and heavy flavor production is expected to be negligible. In the electron channel, the simulation predicts a higher level of multijet background, and therefore the distribution of the L P variable for the surviving background events is needed. This distribution is obtained using data enriched in misidentified electrons by reversing some of the electron selection requirements, as in [5]. We refer to this as the "antiselected sample". As a cross-check, the procedure is also applied to simulated samples. The L P distribution from the QCD multijet background after all selection cuts is found to be well reproduced by the antiselected electron sample.
The polarization fraction parameters ( f L − f R ) and f 0 are measured using a binned maximum likelihood fit to the L P variable, separately for W + and W − bosons in the electron and muon final states. The L P distribution for each of the three polarization states of the W boson is extracted from Monte Carlo samples which are reweighted to the angular distributions expected from each polarization state in the W boson center-of-mass frame. The L P distributions are simulated in the presence of pile-up events matching the vertex multiplicity distribution observed in data, corresponding to an average of 2.8 reconstructed vertices per event.
The L P distributions for electrons and muons are shown in Figs. 1 and 2, respectively. Also shown are the results of the fit to the individual components corresponding to the three W polarization states, and to the background. The background consists of an electroweak component and a QCD multijet component, which is negligible in the muon sample. The fit is carried out by keeping the electroweak background contribution fixed to the value predicted by simulation, whereas all other components, including the QCD multijet background, are allowed to vary. The results of the fits, along with the correlations between these extracted parameters, are listed for positively and negatively charged electrons and muons in Table 1. For each W boson charge, the results for electrons and muons are self-consistent. The correlations differ due to the QCD multijet component included in the fit to the electron final state. Also shown are the results from performing a combined fit, simultaneously to both the electron and muon data.
Several experimental and theoretical effects are considered as sources of systematic uncertainty. The most significant sources, which are listed in Table 2, stem from the recoil energy scale and resolution [12] uncertainties, which enter in the measurement of the transverse momentum of the W boson. The recoil energy scale is varied by its measured uncertainty [13] and the effect is propagated through the analysis, resulting in modified L_P distributions. The measurement is repeated and the full difference from the nominal value is quoted as the systematic uncertainty from this source. The effect is smaller for values of L_P close to one, corresponding to low values of E/T, and hence the uncertainty is smaller for W− relative to W+. The same procedure is followed for the recoil resolution, electron energy, and muon momentum scale. Decays of Z bosons to electrons are used to derive corrections, in bins of the electron pseudorapidity, which calibrate the electron energy scale. An uncertainty of ±50% on these corrections is assumed, in order to cover the full range of variations. Decays of Z bosons to muons are used to constrain the muon momentum scale and an uncertainty of 1% at 100 GeV is found. The fit range of the lepton projection variable is restricted to 0.0 < L_P < 1.3, as a result of the minimization of the combined statistical and systematic uncertainties of the measurement.
Table 2: Summary of the leading systematic uncertainties for the electron and muon final states, as well as for the combined measurement. The total systematic uncertainties are also shown for reference.
The uncertainty on the modeling of the QCD background in the electron channel is estimated using the sample of antiselected electrons which yields the shape of the L P distribution for this background. The fit is repeated multiple times, whilst varying the L P distribution of the antiselected sample within its statistical uncertainties. The variation in the fit results is then used as an estimate of the systematic uncertainty, which is found to be negligible when compared to the leading systematic uncertainties.
A mismeasurement of the lepton charge dilutes the measurement of the W boson polarization. The misidentification rate is studied as a function of pseudorapidity using Z bosons decaying into a pair of oppositely charged leptons. This effect is found to be negligible for both electron and muon channels.
The systematic uncertainty arising from matching the vertex multiplicity distribution in the simulation to that observed in the data is estimated by varying the former within the statistical uncertainty of the latter, and is found to be negligible.
The effect of the theoretical uncertainties on the normalization of the electroweak background distributions, corresponding to 25% for the Z boson and 50% for the top quark, is included in the fit and found to contribute a negligible systematic uncertainty to the W boson polarization measurement. The lepton projection variable also depends weakly on the values of the polarization parameters A 1 , A 2 and A 3 , which are not measured. In order to evaluate the magnitude of the effect, these coefficients are varied by ± 10% with respect to recent standard model calculations at leading-order QCD [14]. These variations produce a negligible change in the W boson polarization measurement. A similar result is obtained for the shape of the L P distributions by varying the parton distribution functions using the CTEQ6.6 PDF error set. The muon fit result, having the smallest total uncertainty, is shown in the (( f L − f R ), f 0 ) plane for each W charge in Fig. 3. The 68% confidence level contours for both the statistical and total uncertainties are also shown. With the current sensitivity, the values of ( f L − f R ) and f 0 do not differ significantly for W + and W − . When compared to recent standard model calculations [14], the results agree well.
In conclusion, the first measurement of the polarization of W bosons with large transverse momenta at a pp collider has been presented. Using a sample of collision data corresponding to an integrated luminosity of 36 pb^-1, the measurement is performed for both charges of the W boson, in the electron and muon final states. The results from both of these channels are consistent, as are the combined fit results. The muon fit result yields the most precise measurement, (f_L − f_R)− = 0.240 ± 0.036 (stat.) ± 0.031 (syst.) and f_0− = 0.183 ± 0.087 (stat.) ± 0.123 (syst.) for negatively charged W bosons, and (f_L − f_R)+ = 0.310 ± 0.036 (stat.) ± 0.017 (syst.) and f_0+ = 0.171 ± 0.085 (stat.) ± 0.099 (syst.) for positively charged W bosons. This measurement establishes a difference between the left-handed and right-handed polarization parameters with a significance of 7.8 standard deviations for W+ bosons and 5.1 standard deviations for W− bosons. This is the first observation that high-p_T W bosons produced in pp collisions are predominantly left-handed, as expected in the standard model.
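The quoted significances can be verified by dividing each (f_L − f_R) value by its statistical and systematic uncertainties added in quadrature:

$$S^{+}=\frac{0.310}{\sqrt{0.036^{2}+0.017^{2}}}\approx\frac{0.310}{0.040}\approx 7.8,\qquad S^{-}=\frac{0.240}{\sqrt{0.036^{2}+0.031^{2}}}\approx\frac{0.240}{0.047}\approx 5.1.$$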
We wish to congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC machine. We thank the technical and administrative staff at CERN and other CMS institutes, and acknowledge support from: FMSR (Austria); FNRS and FWO (Belgium); [funding agency list truncated in the source]
"Physics"
] |
Catching Common Cold Virus with a Net: Pyridostatin Forms Filaments in Tris Buffer That Trap Viruses—A Novel Antiviral Strategy?
The neutrophil extracellular trap (ET) is a eukaryotic host defense machinery that operates by capturing and concentrating pathogens in a filamentous network manufactured by neutrophils and made of DNA, histones, and many other components. Respiratory virus-induced ETs are involved in tissue damage and impairment of the alveolar–capillary barrier, but they also aid in fending off infection. We found that the small organic compound pyridostatin (PDS) forms somewhat similar fibrillary structures in Tris buffer in a concentration-dependent manner. Common cold viruses promote this process and become entrapped in the network, decreasing their infectivity by about 70% in tissue culture. We propose studying this novel mechanism of virus inhibition for its utility in preventing viral infection.
Text
Extracellular traps (ETs) are fibrillary networks formed from ~16 nm diameter filaments constituted mainly of nuclear or mitochondrial DNA from mast cells, eosinophils, macrophages, and neutrophils [1]. ETs carry histones, gelatinase, proteinases, elastase, cathepsins, lactoferrin, myeloperoxidase, and other defensins and are part of the innate immune system [2]. ET production is triggered by bacteria, but more recently, viruses were found to also induce ETs [3]. For example, influenza virus infection gives rise to specific ETs that cannot cross-neutralize bacteria but generate inflammation that undermines the alveolar-capillary barrier function and thereby promotes secondary bacterial infection. However, neutrophil ETs also inhibit virus infection [4]. We here demonstrate, via immunolabeling, that a rhinovirus interacts with ETs (Figure S1 and Movie S1). Viruses cannot escape from ETs, as this would require nucleic acid hydrolysing enzymes at their surfaces, as are present, for example, in group A Streptococcus [5].
A recent in silico study of all available human virus genomes revealed the presence of multiple G-quadruplexes (G4s), repeats of Hoogsteen-paired guanosines, in all genomes [6], including those of rhinoviruses, the main cause of the common cold. This prompted us to study whether G4-stabilizing compounds, such as pyridostatin (4-(2-aminoethoxy)-N2,N6-bis[4-(2-aminoethoxy)-2-quinolinyl]-2,6-pyridinedicarboxamide; PDS), might interfere with the release of the ssRNA genome from a prototype rhinovirus, RV-A2, and thus prevent infection by impeding G4 unfolding. PDS and similar compounds are being investigated as anticancer drugs, as they stabilize G4s in telomeres, impacting cellular DNA replication [7]. We found that RV-A2 infection was indeed inhibited upon preincubation with PDS at room temperature. For control purposes, the same incubations of virus and PDS were also carried out at 4 °C, a temperature that greatly reduces the diffusion of the compound through the viral protein shell to reach the RNA; capsid breathing dynamically opens conduits in the viral capsid, but this phenomenon is highly temperature-dependent [8]. We were intrigued to see that co-incubation at 4 °C also inhibited infection, but only in Tris buffers. Inhibition of infection upon incubation at room temperature was, however, independent of the buffer and due only to the above G4 stabilization (manuscript under submission). Since PDS contains planar rings (Figure 1a), we wondered whether individual molecules might stack on top of each other and act by aggregating the virus. Such aggregates might bind the virus and reduce its effective concentration. To reveal a putative higher-order structure at an ultrastructural level, 4 µM PDS in water was applied onto freshly cleaved mica and left for 5 min. The PDS solution was then replaced with phosphate-buffered saline (PBS). Using an atomic force microscope (AFM) equipped with a fluidic cell in a Pico-SPM (Molecular Imaging, Phoenix, AZ, USA) [9], we saw that the PDS molecules attached to the mica and aggregated into long fibers with a height of 1.8 ± 0.2 nm (Figure 1b). Such fibers did not form when the PDS was dissolved in PBS (Figure 1c).
The above observations pointed to a dependence of fiber generation on the buffer components. To investigate this, we dissolved PDS at 200 µM in different buffers, applied the solutions onto glow-discharged carbon-coated electron microscopy (EM) grids, subjected them to negative staining with phosphotungstate, and observed the samples in an FEI Morgagni 268D electron microscope at 80 kV (Figure 1d). In contrast to the sample adsorbed to mica and observed with AFM, PDS formed only small amorphous adducts in water and very few aggregates (white arrowheads) on the carbon-coated grids. 'Protofibrils' (black arrows) were also seen with PDS dissolved in Dulbecco's modified Eagle medium (DMEM, Sigma Aldrich; St. Louis, MO, USA) supplemented with 10% fetal bovine serum (Gibco; Thermo Fisher Scientific, Waltham, MA, USA). However, fibers formed in 50 mM NaCl, 25 mM Tris-HCl (pH 7.5), but not in PBS.
To further study the PDS fiber-generating conditions, we dissolved PDS at various concentrations in the above Tris buffer and observed the samples by TEM (Figure 1e) as above. We noticed a clear concentration dependence of fiber generation, with no fibers occurring at or below 20 µM PDS. 'Protofibrils' appeared at 40 µM, and well-defined fibers from 60 µM PDS onwards.
We then investigated the influence of 20 and 100 µM PDS in the above Tris buffer on the light-up of SYTO82 (Thermo Fisher Scientific), a fluorescent probe diluted to 5 µM in the same buffer (Figure 1f). Nucleic acids can dramatically increase the fluorescence of such probes due to a forced planarization or rigidification of the probe [10]. For example, production of ETs by different cells has been monitored with SYTOX, a fluorescent probe that lights up upon interaction with nucleotide polymers [11]. The fluorescence emission intensity (excitation 541 nm / emission 560 nm) was acquired at room temperature using a Jasco 6500 fluorometer and plotted as relative fluorescence units (RFU). We observed a strong increase of SYTO82 fluorescence emission upon adding increasing concentrations of PDS. This might be taken to indicate that PDS can arrange into higher-order structures, probably due to π-π stacking and a hydrophobic effect similar to that observed in nucleotide polymers (reviewed in Friedman and Honig [12] and references therein). Taken together, the AFM and TEM observations of PDS producing networks of fibers and the nucleic acid-like light-up of SYTO82 led us to ask whether the PDS fibers might be capable of trapping viruses similarly to ETs, and thereby inhibiting viral infectivity, despite being completely different with respect to their composition. To test this in a physiologically relevant system, we measured the infectivity of PDS-treated virus in HeLa cells. We are aware that such an experiment does not take into account that the fibers might damage the cells similarly to natural ETs (see above); if so, the decreased cell survival could be misinterpreted as increased infectivity. To avoid this, we first decreased the PDS concentration to 20 µM and incubated 1 µg/mL RV-A2 in Tris buffer for 30 min on ice to prevent capsid breathing and, thus, the interaction of PDS with the viral genome within the protein shell. TEM observation suggested that the presence of virus particles induced the formation of fibers, as they were already observed at the low concentration of 20 µM PDS (Figure 2a). Note that the shape of the PDS fibers differs to some extent in different experiments (left panel).

[Figure 2 caption, truncated: '...NH4Cl and added to the cells. One hour post-challenge, the medium was replaced with fresh infection medium without NH4Cl to initiate uncoating. As a second control, NH4Cl was maintained throughout the experiment. At 8 h post-infection, the cells were prepared for immunofluorescence, and the number of cells producing viral antigen, indicating infection, was determined in a TissueFAXS. The average and standard error of the mean of infected cells from three independent assays were plotted. The figure was prepared and the significance levels determined using GraphPad Prism 6.0 with one-way ANOVA. * p < 0.0001 vs. RVs without PDS.']
To test whether the PDS fibers would indeed trap the virus and thus reduce infection, we grew HeLa cells until sub-confluent on coverslips and challenged them with RV-A2, RV-B14, and RV-A89, respectively, at a multiplicity of infection (MOI) of 100, as in Real-Hohn et al. [13]; the viruses had been diluted in Tris buffer with 20 µM PDS as above and incubated on ice for 30 min. As a control, the viruses were incubated on ice for 30 min in the same buffer but in the absence of PDS. The mixture was then diluted ten times with infection medium (DMEM plus 2% fetal bovine serum) and transferred onto the cells on the coverslips. The respective virus was allowed to enter the cells, but not to uncoat, in the presence of 25 mM NH4Cl for 1 h at 34 °C; ammonium ions neutralize the acidic endosomal pH, preventing the structural changes necessary for the release of viral RNA and infection. NH4Cl was then washed away, setting the time for synchronized productive infection. At 8 h post-infection, the cells were washed, fixed, permeabilized, and blocked with goat serum. The RV-A2-infected sample was incubated with monoclonal antibody 8F5, diluted to 10 µg/mL in goat serum, and the RV-B14- and RV-A89-infected samples with specific rabbit antisera diluted 1:500 in goat serum. For detection, secondary antibodies labeled with Alexa Fluor (Life Technologies, Carlsbad, CA, USA) were used at 1 µg/mL, followed by extensive washing. The slides were mounted, and fluorescence microscopy images were recorded on a TissueFAXS automated microscope (TissueGnostics, Vienna, Austria) as in Ganjian et al. [14]. The presence of PDS diminished the number of virus-positive cells in all cases by about 70% compared with cells identically infected with the respective untreated virus. In the continuous presence of the uncoating inhibitor NH4Cl, virus production was reduced by about 99% in each instance (Figure 2b).
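The readout boils down to comparing infected-cell counts across conditions. A minimal sketch of that comparison is given below; the counts are made-up placeholders for the TissueFAXS data, and scipy's f_oneway stands in for the one-way ANOVA run in GraphPad Prism.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical infected-cell counts from three independent assays
# (placeholders for the TissueFAXS readout).
control = np.array([1000, 950, 1020])  # virus without PDS
pds     = np.array([310, 280, 295])    # virus preincubated with 20 uM PDS
nh4cl   = np.array([12, 9, 14])        # NH4Cl maintained throughout

# Percent reduction relative to the untreated-virus control.
reduction = 100 * (1 - pds.mean() / control.mean())
print(f"PDS reduction: {reduction:.0f}%")  # ~70%, as reported

# One-way ANOVA across the three conditions.
f_stat, p_value = f_oneway(control, pds, nh4cl)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
```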
Finally, we induced release of neutrophil ETs from isolated neutrophils with phorbol 12-myristate 13-acetate (PMA) and observed significant capture of added RV-A2 particles on elastase-decorated chromatin-based nets (Figure S1 and Movie S1). Taken together, we conclude that the PDS fibrils might act similarly to ETs by trapping the virus and preventing it from binding to and being taken up by the cells. However, more work is required to find out whether the observed viral trapping could indeed be used in antiviral therapy. It has been shown that Tris (called 'tromethamine' in the citation below), administered through nebulization, alleviated respiratory problems resulting from cystic fibrosis [15]; adding the minimally toxic PDS to such solutions might be one route of application.
Supplementary Materials: The following are available online at http://www.mdpi.com/1999-4915/12/7/723/s1, Figure S1: Neutrophils were prepared from freshly collected blood from a healthy donor by using centrifugation on a density gradient; Movie S1: Freshly collected neutrophils from a healthy donor and purified as above by density gradient separation were placed on coverslips for attachment and maintained in HBSS +/+.
"Biology"
] |
Molecular characterisation and protective immunity evaluation of the Eimeria maxima surface antigen gene
Coccidiosis is recognised as a major parasitic disease in chickens. Eimeria maxima is considered a highly immunoprotective species among the Eimeria spp. that infect chickens. In the present research, the surface antigen gene of E. maxima (EmSAG) was cloned, and the ability of EmSAG to stimulate protection against E. maxima was evaluated. Prokaryotic and eukaryotic plasmids expressing EmSAG were constructed. Transcription and expression of EmSAG in vivo were assessed by RT-PCR and immunoblot analysis. The expression of EmSAG in sporozoites and merozoites was detected through immunofluorescence analyses. Immune protection was assessed in challenge experiments. Flow cytometry assays were used to determine T cell subpopulations, and serum antibody and cytokine levels were evaluated by ELISA. The open reading frame (ORF) of the EmSAG gene comprised 645 bp encoding 214 amino acid residues. The immunoblot and RT-PCR analyses indicated that the EmSAG gene was transcribed and expressed in vivo. The EmSAG protein was shown by immunofluorescence assay to be expressed in the sporozoite and merozoite stages of E. maxima. Challenge experiments showed that both pVAX1-EmSAG and the recombinant EmSAG (rEmSAG) protein were successful in alleviating jejunal lesions and decreasing loss of body weight and the oocyst ratio; both immunised groups reached anticoccidial indices (ACI) of more than 170. Higher percentages of CD4+ and CD8+ T cells were detected in both EmSAG-inoculated groups than in the negative control groups (P < 0.05). The EmSAG-specific antibody concentrations of both the rEmSAG and pVAX1-EmSAG groups were much higher than those of the negative controls (P < 0.05). Higher concentrations of IL-4, IFN-γ, TGF-β1 and IL-17 were observed in both the rEmSAG protein and pVAX1-EmSAG inoculated groups than in the negative controls (P < 0.05). Our findings suggest that EmSAG is capable of eliciting moderate immune protection and could be used as an effective vaccine candidate against E. maxima.
Background
Coccidiosis is recognised as a major parasitic disease in chickens, seriously affecting the efficiency of feed conversion and leading to decreased production. Eimeria maxima has been recognised as one of the most economically significant species of Eimeria [1]. Currently, prophylactic chemotherapy with anticoccidial drugs is the major control strategy for coccidiosis, but traditional anticoccidial drugs and live vaccines have their own defects [2]. Subunit vaccines encoding Eimeria proteins that stimulate protective immunity are accepted as effective vaccines against coccidiosis [3][4][5]. Recently, many reports have shown that cell-mediated immunity can be stimulated by DNA vaccines [6][7][8][9][10].
Surface antigens have been proven to confer protection against coccidiosis by altering key processes in host cell invasion [11]. The SAG proteins of Eimeria tenella are capable of inducing an immune response against coccidiosis in chickens [12]. Therefore, surface antigens and cell adhesion proteins have been suggested as promising vaccine candidates against parasitic infections [13,14].
Eimeria maxima is regarded as a highly immunoprotective species within the family of Eimeria spp. affecting chickens [15][16][17][18][19]. In this study, subunit and DNA vaccines made from EmSAG were evaluated for their protection against E. maxima.
Chickens and parasites
Eimeria-free birds at one day of age were reared in captivity with water and feed provided ad libitum. The birds were placed in a coccidia-free environment. The Jiangsu strain of E. maxima was developed and maintained in Eimeria-free birds in our laboratory. Oocysts of E. maxima were cleaned and sporulated, and sporozoites were prepared, as previously described [20].
Expression of the recombinant EmSAG protein
The sequence identity of EmSAG was compared to the known SAG sequences of other Eimeria spp. and assessed using the BLASTx and BLASTp search tools (http://blast.ncbi.nlm.nih.gov/Blast.cgi). The amino acid sequence of EmSAG was used to predict N-terminal signal peptides through a bioinformatics online program (http://www.cbs.dtu.dk/services/SignalP/). The cladogram was made using the MEGA 6.0 programme with the neighbour-joining method. The pET-32a/EmSAG was expressed in E. coli BL21 (DE3) as described previously [21]. The recombinant protein was purified and the concentration of the sample was determined using the Bradford method [22]. The rEmSAG protein was kept frozen (-70°C) until further analysis.
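As an illustration of the tree-building step, a neighbour-joining construction of the kind performed in MEGA 6.0 can also be reproduced with Biopython; the alignment file name below is hypothetical, and the distance measure (percent identity) is only one reasonable choice.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical multiple alignment of EmSAG with SAG/TA4 homologues from
# other Eimeria species (e.g. produced with Clustal).
alignment = AlignIO.read("eimeria_sag_alignment.fasta", "fasta")

# Pairwise distance matrix from percent identity, then neighbour joining.
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)

Phylo.draw_ascii(tree)
```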
Development of anti-EmSAG antibodies against the rEmSAG protein
Rat polyclonal anti-EmSAG antibodies were generated in Sprague-Dawley rats at 4 weeks of age. Rats were subcutaneously immunised with a total of 0.3 mg of rEmSAG protein mixed with Freund's complete adjuvant. Fourteen days after the first immunisation, the rats were given a booster injection of 0.3 mg of rEmSAG protein in Freund's incomplete adjuvant. Three booster doses were given at 1-week intervals. Finally, rat serum containing antibodies was obtained after the last booster injection and kept frozen (-70°C) until subsequent analysis. Pre-immunisation serum was obtained for later use as the negative control [23].
Construction of eukaryotic plasmid of EmSAG
The eukaryotic plasmid of EmSAG was constructed and purified as previously described [20]. Briefly, the EmSAG fragment was inserted into pVAX1; following sequence analysis (Invitrogen Biotech) and verification, the positive clones were confirmed as pVAX1-EmSAG. The plasmids encoding EmSAG were extracted using the EndoFree Plasmid MEGA Kit (Qiagen, Valencia, CA, USA). The concentration of the sample was determined as per the method described earlier [20]. Finally, the plasmids were kept frozen (-20°C) until subsequent analysis.
Immunoblot analysis of native EmSAG and rEmSAG proteins
Immunoblot analyses of rEmSAG and native EmSAG were performed as described in an earlier work [20]. Rat anti-rEmSAG sera (diluted 1:200) were used to detect native EmSAG in sporozoites. Chicken anti-E. maxima sera (diluted 1:100) were used to detect the rEmSAG protein. Goat anti-rat HRP-IgG and donkey anti-chicken HRP-IgG (Sigma-Aldrich, St. Louis, MO, USA) were used as secondary antibodies.
Transcription and expression of pVAX1-EmSAG in vivo
Transcription of EmSAG in vivo was examined by RT-PCR and immunoblot analysis, as previously described [24]. Briefly, a total of 100 μg of the pVAX1-EmSAG plasmid was intramuscularly injected into the legs of coccidia-free chickens. In the pVAX1 control group, 100 μg of the pVAX1 plasmid was injected into the legs. One week post-immunisation, tissues from vaccinated and non-vaccinated chickens were collected for both RT-PCR and immunoblot analyses. EmSAG-specific primers were utilised for the RT-PCR assays. Rat anti-rEmSAG sera (diluted 1:200) were used to detect pVAX1-EmSAG expression. The secondary antibody was HRP-conjugated goat anti-rat IgG (Sigma-Aldrich).
Experimental design
Chickens at 14 days of age, negative for Eimeria, were placed in six groups, each including 30 birds. The chickens were inoculated by intramuscular injection with pVAX1-EmSAG (100 μg/chick) or rEmSAG protein (200 μg/chick). In the pVAX1 control group, a total of 100 μg of the pVAX1 plasmid was injected into the legs. In the pET-32a control group, a total of 200 μg of pET-32a protein was injected as above. The challenged and unchallenged control birds were injected with PBS. One week later, the birds were boosted by the same route as the primary immunisation. Seven days after the last immunisation, 1 × 10^5 sporulated oocysts of E. maxima were given to all birds except the negative control birds. Seven days later, the birds were euthanised to measure their immune response and degree of coccidial protection. Moreover, some birds (n = 5 per group) were placed in another coccidia-free room. Finally, 10 days after the last immunisation, serum samples were harvested and kept frozen (-20°C) until antibody and cytokine production analyses could be conducted.
Assessment of immune protection
The chickens were monitored for body weight gain and signs of immune protection (jejunal lesion score, survival rate, and change in oocyst ratio). Lesion scrapings were microscopically examined for coccidia whenever there was doubt that lesions were truly coccidia-induced. The jejunal lesion scores of the birds were evaluated as described in previous research [23]. Body weight gains were measured at different time-points: on the days of vaccination, at the time of the coccidia challenge, and at the end of the test. All the jejunal contents from each bird were harvested and used to evaluate the oocyst counts as described in a previous study [26]. Using McMaster's counting method, oocysts and the oocyst ratio were assessed as previously described [27]. Anticoccidial index (ACI) values were evaluated as per the standard formula for assessing protection against E. maxima [20], as sketched below.
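The ACI formula itself is not written out in the text; the sketch below implements the commonly used variant, ACI = (relative weight-gain rate + survival rate) − (lesion value + oocyst value), with the lesion value taken as the mean lesion score × 10 and the oocyst value assigned from bands of the oocyst ratio. The exact variant applied in this study follows reference [20], so the banding here is an assumption for illustration.

```python
def anticoccidial_index(rel_weight_gain_pct, survival_pct,
                        mean_lesion_score, oocyst_ratio_pct):
    """Commonly used ACI variant (the study itself follows ref. [20]).

    rel_weight_gain_pct : weight gain relative to unchallenged control, %
    survival_pct        : survival rate, %
    mean_lesion_score   : mean jejunal lesion score (0-4 scale)
    oocyst_ratio_pct    : oocyst output relative to challenged control, %
    """
    lesion_value = 10 * mean_lesion_score
    # One common banding of the oocyst ratio into an oocyst value.
    if oocyst_ratio_pct <= 1:
        oocyst_value = 0
    elif oocyst_ratio_pct <= 25:
        oocyst_value = 5
    elif oocyst_ratio_pct <= 50:
        oocyst_value = 10
    elif oocyst_ratio_pct <= 75:
        oocyst_value = 20
    else:
        oocyst_value = 40
    return (rel_weight_gain_pct + survival_pct) - (lesion_value + oocyst_value)

# Example: 85% relative weight gain, full survival, mean lesion score 1.0,
# oocyst output at 30% of the challenged control.
print(anticoccidial_index(85, 100, 1.0, 30))  # 165
```

With these conventions, an ACI above 180 is usually read as high protection and 160-180 as moderate, consistent with the ACIs above 170 reported here.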
ELISA analysis of the serum antibody and cytokine
EmSAG-specific IgY/IgG antibodies were detected by ELISA using the rEmSAG protein as a coating antigen, following previous protocols [28]. The serum samples (1:50 dilution) were detected using the secondary antibodies of donkey anti-chicken HRP-conjugated IgG monoclonal antibody (Sigma-Aldrich). The experiment was completed in duplicate.
For cytokines level analysis, serum samples were obtained and measured as previously described [29]. Briefly, 10 days after the last inoculation, the serum samples of the birds (n = 5) per group were harvested to evaluate the cytokines. The titers of IL-4, IL-17, IFN-γ and TGF-β1 were measured using ELISA kits (CUSABIO, Wuhan, China). The data was pooled from three independent experiments.
Determination of T-cell response
The counts of T cells in the treatment groups were evaluated by flow cytometry analysis as previously described [30,31]. Spleens were extracted from 5 chickens of each group at pre-, first-, and second-vaccination days. Lymphocytes obtained from the spleens were stained with SPRD-conjugated CD3 monoclonal antibodies. The cells were then probed with PE-conjugated mouse monoclonal anti-chicken CD4 or PE-conjugated mouse monoclonal anti-chicken CD8 (Southern Biotechnology Associates, Birmingham, AL, USA). Using a FACS flow cytometer, the stained cells were analysed with Cell Quest software (BD Biosciences, San Jose, CA, USA).
Statistical analysis
All data were expressed as the mean ± standard deviation using SPSS Statistical Software (SPSS Inc., Chicago, IL, USA). The data were analysed by one-way ANOVA with Duncan's post-hoc test and considered statistically significant at P < 0.05.
EmSAG sequence analysis
Using E. maxima cDNA as a template, the PCR product of EmSAG was isolated and ligated into pMD19-T. Sequence analysis showed that the EmSAG ORF encoded a protein of 24.73 kDa with a pI of 4.808. As shown in Fig. 1, the phylogenetic tree indicated that the EmSAG protein was most closely related to EtTA4 and EnNA4 when compared with other Eimeria spp. (E. mitis, E. brunetti, E. praecox, E. acervulina and E. necatrix). The amino acid sequence was analysed with the SignalP programme; the findings suggested an obvious signal peptide with a cleavage site between positions 21 and 22.
Purification of the rEmSAG protein
The pET32a/EmSAG plasmids were expressed in E. coli BL21. After IPTG induction, the rEmSAG proteins were harvested. The purified fusion rEmSAG protein was approximately 43 kDa (Fig. 2), consistent with the sum of the approximately 20 kDa pET-32a(+) tag and the approximately 23 kDa EmSAG protein.
Immunoblot analysis of native and rEmSAG proteins
The native and rEmSAG proteins were evaluated by western blot (Fig. 3). The rEmSAG protein was recognised by chicken E. maxima-specific antibodies, but not by antibodies from unimmunised chickens. Furthermore, the western blot assay also showed a band of almost 26 kDa belonging to the sporozoite protein, detected by rat anti-rEmSAG antibodies (Fig. 3); in contrast, serum from the negative control rats did not display any bands.
Identification of EmSAG location in sporozoites and merozoites
The location of EmSAG in sporozoites and merozoites of E. maxima was confirmed using immunofluorescence analyses (Fig. 4). The EmSAG protein was detected using rat anti-rEmSAG antibodies, with Cy3-conjugated goat anti-rat IgG as secondary antibodies (shown in red).

[Fig. 1 caption: The phylogenetic tree was constructed using CLUSTAL W alignment and the neighbour-joining method of the software MEGA 6.0.]
Identification of transcription and expression of pVAX1-EmSAG in vivo
Transcription of pVAX1-EmSAG in vivo was evaluated through RT-PCR using the EmSAG-specific primers. A specific DNA band belonging to pVAX1-EmSAG was detected in the tissues of the injection site (Fig. 5a, Lane 4). The RNA samples from non-inoculated and pVAX1-inoculated tissues did not yield any band in the RT-PCR analyses (Fig. 5a, Lanes 1, 2 and 3).

In addition, expression of pVAX1-EmSAG in vivo was detected through immunoblot analysis. A unique band of approximately 26 kDa was detected in the pVAX1-EmSAG-vaccinated muscle sample. In contrast, no band was detected in the pVAX1-immunised muscle samples (Fig. 5b). These results indicate the successful transcription and expression of the EmSAG gene in vivo.
Determination of IgY/IgG and cytokines levels using ELISA
To evaluate the titers of IgY/IgG and the cytokines, serum samples from the immunised birds (n = 5 per group) were harvested at 10 days after the last vaccination. The anti-EmSAG IgY/IgG titers of each group are shown in Fig. 6. The IgY/IgG titers of both EmSAG-immunised groups were much higher (ANOVA, F (4, 20) = 77.78, P < 0.0001) compared to the controls.
The titers of cytokines were measured using ELISA (Fig. 7). The serum samples in both pVAX1-EmSAG and rEmSAG-immunised chickens displayed higher titers of IFN-γ (ANOVA, F
Immune protection of EmSAG against E. maxima
To analyse the immune protection of EmSAG against E. maxima, challenge experiments were performed. The degrees of immune protection conferred by vaccination with pVAX1-EmSAG and rEmSAG proteins were measured, and the resulting ACIs are given in Table 2. Birds inoculated with EmSAG exhibited higher weight gains (ANOVA, F(5, 174) = 27.67, P < 0.0001) and greater decreases in oocyst ratios when compared to all other groups. The ACIs of the EmSAG-immunised chickens were more than 170, indicating moderate protective immunity.
Discussion
In this research, both DNA and recombinant protein vaccines encoding EmSAG of E. maxima were compared regarding their abilities to induce protection against E. maxima infection. The results indicated that inoculation with EmSAG promoted IgG levels in the sera and upregulated the titers of IL-4, IFN-γ, IL-17 and TGF-β1. Furthermore, the data from the animal experiments proved that the EmSAG-immunised groups produced ACIs of more than 170. Taken together, these data demonstrate that EmSAG vaccines can stimulate moderate protection against E. maxima. Higher body weight gain, lower fecal oocyst shedding and reduced intestinal pathology were taken as indicators of immune protection. Jang et al. [32] reported that birds had lower oocyst concentrations in droppings and reduced intestinal pathology after vaccination with Gam82 and challenge with E. maxima, when compared with non-vaccinated and parasite-challenged groups. Xu et al. [33] determined that pcDNA3.0-TA4-IL-2 could decrease caecal lesions and body weight loss as well as produce an ACI of 192. Song et al. [34] reported that chickens immunised with the pMP13 plasmid showed significantly lower numbers of oocysts following challenge with E. acervulina compared to those in the negative controls. Similar results were detected in this study: both pVAX1-EmSAG and rEmSAG vaccines were successful in alleviating jejunal lesions and decreasing loss of body weight and the oocyst ratio.

[Fig. 4 caption: Expression of EmSAG protein in sporozoites and merozoites at 100× magnification. a The sporozoites were detected by rat anti-rEmSAG antibodies. a1 Sporozoites were dyed by Cy3. a2 The nuclei were probed by DAPI. a3 Overlap of Cy3 and DAPI. b The sporozoites were detected by unimmunised rat antibodies. b1 Cy3 stains. b2 DAPI stains. b3 Merge. c Merozoites were detected by rat anti-rEmSAG antibodies. c1 Cy3 stains. c2 DAPI stains. c3 Merge. d The merozoites were detected by unimmunised rat antibodies. d1 Cy3 stains. d2 DAPI stains. d3 Merge. Scale-bars: 10 μm.]
Chicken anti-Eimeria specific antibodies have previously been documented to provide minor protection against coccidiosis; however, humoral immunity may also contribute to the formation of protective immune responses [35]. Furthermore, Wallach [36] pointed out that antibodies could inhibit parasite development and provide passive immune protection. Lin et al. [37] reported that birds immunised with the E. tenella rEF-1α protein exhibited higher specific antibody concentrations than the negative controls. In this research, the antibody titers of the EmSAG-immunised animals were higher than those of the negative controls. These findings confirm that EmSAG can induce a humoral immune response.
IFN-γ is an important cytokine involved in the Th1-mediated immune response. Chicken IFN-γ can activate lymphocytes and enhance expression of MHC class II antigens [38]. IFN-γ can also reduce sporozoite development without affecting sporozoite invasion of host cells [39]. In previous research, higher titers of IFN-γ were detected in EmMIC7-vaccinated birds [20]. In this study, higher IFN-γ titers were likewise detected in the vaccinated birds than in the control birds. These results demonstrate that EmSAG can elicit Th1 cellular immune responses against E. maxima.
It has been noted that cell-mediated immunity is the most important immune response to Eimeria infection. In this study, the CD4+ and CD8+ percentages were higher in the groups immunised with pVAX1-EmSAG and rEmSAG protein when compared to the control groups. This demonstrated that EmSAG might be able to stimulate cellular immunity.
IL-4 is known as a marker of the Th2 immune response [40] and has been reported as an important factor in protective immunity against parasite infections [41]. Tian et al. [42] reported that groups vaccinated with EmGAPDH exhibited higher concentrations of IL-4 compared to control groups injected with PBS and pVAX1 alone. The results of this study demonstrated an increased IL-4 level in the EmSAG-vaccinated birds compared to those in the negative control. Coupled with the high antibody concentration, these data indicate that EmSAG could stimulate humoral immune response to E. maxima.
A class of T-helper cells known as Th17 cells is associated with production of interleukin 17 (IL-17) [43]. In the avian immune system, IL-17 functions as a stimulator of cytokine production [44]. It has been confirmed that co-vaccination of IL-17 with the 3-1E protein induced better protection against E. acervulina than 3-1E alone [4]. Previously, it was reported that immunisation of animals with DNA vaccines produced higher levels of IL-17 [45]. However, birds treated with IL-17-neutralising antibody showed enhanced IL-12 and IFN-γ expression [46]. In this research, a significant increase in IL-17 concentrations was detected ten days after the last immunisation. This finding, coupled with the high IFN-γ titers, indicates that EmSAG can induce Th1 and Th17 responses. However, the exact function of Th17 in immunisation against Eimeria spp. needs further investigation.
TGF-β is a cytokine that has been recognised as part of the immune suppression mechanism [47,48]. TGF-β has been reported to induce protective immunity and increased TNF-α production [49,50]. Hoan et al. [51] also reported that EbAMA1 induced significantly higher concentrations of TGF-β1 and IL-17 in the vaccinated groups. Likewise, in the current research, birds vaccinated with the rEmSAG protein and pVAX1-EmSAG showed higher concentrations of TGF-β1 than the control groups. However, the exact function of TGF-β in protecting against coccidiosis needs further investigation.
Antibodies and cytokines have been shown to influence protective immunity against coccidiosis infections. In previous reports, monoclonal antibodies showed the ability to reduce oocyst shedding and provide partial protection against E. maxima or E. tenella challenge infections [52,53]. IL-4 can enhance the production of antibody [54]. Chickens injected with recombinant IFN-γ showed improved protective immunity following E. acervulina infection [55][56][57]. Rose et al. [58] found that neutralising IFN-γ with a monoclonal antibody could increase the output of oocysts and the loss of body weight. Additionally, oocyst shedding was decreased in birds co-injected with IFN-γ or TGF-β together with the 3-1E DNA vaccine compared to birds inoculated with the DNA vaccine alone [59]. Lillehoj et al. [60] reported that co-vaccination with EtMIC2 and TGF-β significantly reduced oocyst shedding and enhanced weight gains beyond those injected with EtMIC2 alone. Zhang et al. [46] found that IL-17-neutralised birds showed decreased fecal oocyst output and caecal lesion scores, as well as increased body weight gain. Geriletu et al. [44] reported that vaccination with IL-17A and MZP5-7 reduced oocyst shedding and decreased intestinal lesions following E. tenella challenge compared to inoculation with MZP5-7 alone. In this study, challenge experiments showed that the concentrations of anti-EmSAG antibodies, IFN-γ, IL-4, TGF-β and IL-17 were increased in both the rEmSAG protein and pVAX1-EmSAG immunised groups. Additionally, the jejunal lesions, loss of body weight and oocyst production ratio were all decreased.

[Table 2 note: In each column, different letters indicate a significant difference (P < 0.05) between numbers; there is no significant difference (P > 0.05) between numbers with the same letter.]
These results indicate that the antibodies and cytokines played a role in the immune protection induced by the rEmSAG protein.
Localisation of the proteins is critical to understanding the role which they play in parasite binding and the invasion of the host cell [61,62]. Previous studies reported that monoclonal antibodies were able to detect proteins on the parasite surface, such as EtSAG1 and the micronemes of the sporozoites and merozoites [63][64][65]. Jenkins et al. [66] showed the immune-mapped protein 1 could be detected in the sporozoites. Zhang et al. [31] found EaMIC3 on the apical tip of E. acervulina sporozoites. Our findings suggest that EmSAG is expressed in the sporozoite and merozoite stages of E. maxima, and might play an important role in the host invasion mechanism.
Conclusions
In conclusion, our findings indicate that vaccination with EmSAG is capable of eliciting both humoral and cell-mediated immunity, conferring moderate protective immunity against E. maxima. This work suggests that EmSAG could be used as an effective vaccine candidate against E. maxima infection.
"Biology",
"Medicine"
] |
Femtosecond XUV induced dynamics of the methyl iodide cation
Ultrashort XUV wavelength-selected pulses obtained with high harmonic generation are used to study the dynamics of molecular cations with state-to-state resolution. We demonstrate this by XUV pump - IR probe experiments on CH3I+ cations and identify both resonant and non-resonant dynamics.
Introduction
Ultrashort extreme ultraviolet (XUV) pulses obtained from high harmonic generation (HHG) allow for studies of the dynamics in molecular cations. Typical pump-probe experiments using HHG have excellent time resolution and can access processes such as charge migration [1], evolution of electronic and vibrational wavepackets [2], and fast decays through conical intersections [3]. However, in experiments using the full bandwidth of HHG sources, signals from different states are often mixed, which makes interpretation difficult. Here, we employ a time-delay-compensating XUV monochromator [4] which defines the photon energy and allows us to study the state-to-state dynamics and also to distinguish processes in cations from those in highly excited neutrals, because the former are present with all harmonics of sufficient energy whereas the latter are resonant and thus only present for a certain XUV photon energy. The monochromator delivers selected harmonics, obtained from the output of an HHG source, with a spectral bandwidth of 300 meV and a time duration of 20-25 fs. To perform pump-probe experiments, the wavelength-selected XUV beam is recombined with 25 fs IR pulses. In the present contribution we investigate the dynamics of cationic states of the CH3I molecule. The dissociation of the CH3I+ cation along the C-I axis is mainly accompanied by a geometry change in the methyl part from pyramidal to planar and by the spin-orbit splitting in the iodine atom [5]. The latter induces avoided crossings in the manifold of potential energy surfaces which make CH3I+ dissociation complex and interesting.
Results and Discussion
In the experiments, we used three harmonics of the 800 nm fundamental driving pulse: the 7th, 9th, and 11th. With photon energy Eph = 10.9 eV (7th harmonic), only the spin-orbit split ground state (X 2E3/2,1/2) can be populated. With the next harmonic, Eph = 14.0 eV (9th harmonic), the weakly bound first excited state (Ã 2A1) is populated as well, while the 11th harmonic (Eph = 17.1 eV) also reaches the repulsive B 2E state. Note that at these photon energies neutral Rydberg series converging to the respective excited cations can also be resonantly excited, leading to fluorescence, auto-ionization, or neutral dissociation processes. For each harmonic we record ion velocity map images for both the CH3+ and I+ fragments, observing both dynamics in the cation and dynamics of resonant neutral states.
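For reference, these photon energies follow directly from the 800 nm fundamental, whose photon energy is 1240 eV·nm / 800 nm ≈ 1.55 eV:

$$E_{\mathrm{ph}}(n)=n\times 1.55\ \mathrm{eV}\;\Rightarrow\; E_{\mathrm{ph}}(7)\approx 10.9\ \mathrm{eV},\quad E_{\mathrm{ph}}(9)\approx 14.0\ \mathrm{eV},\quad E_{\mathrm{ph}}(11)\approx 17.1\ \mathrm{eV}.$$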
With Eph = 10.9 eV photons, no decay dynamics in the cation are expected upon ionization to the X 2E3/2,1/2 spin-orbit ground states, since both states are bound and their vibrational activity is negligible [6]. However, in the experiment we observe an enhancement of the signal at temporal overlap followed by a decay with a time constant of 90 fs. This behavior cannot be explained by cation dynamics and therefore must come from dynamics in the neutral molecule upon resonant absorption. This is supported by the fact that these dynamics are not observed for higher harmonics. Previous photoabsorption studies have indeed identified Rydberg series converging to the Ã 2A1 and B 2E potential energy surfaces [7]. Calculations are currently in progress to establish with certainty which of these states are responsible for the observed behavior and to explain the observed 90 fs decay timescale. For photon energies of 14.0 and 17.1 eV we obtain very comparable data, both different from 10.9 eV. Figure 1 displays maps of the total kinetic energy release (KER) as a function of time delay for dissociation into I + CH3+ and CH3 + I+, obtained for the Eph = 17.1 eV pump and IR probe scheme. When the IR pulse arrives after the XUV, the yield of CH3+ fragments decreases while the yield of I+ fragments is enhanced. On top of this long-lived contribution, clear oscillatory signals are observed in both ionic fragment yields. The period of this oscillation is 127 ± 3 fs, and the oscillations in the CH3+ and I+ yields are in opposite phase with respect to each other. The observed frequency closely corresponds to the frequency of the C-I stretching mode in the Ã 2A1 state [8]. Therefore we interpret the observation as follows. The Ã 2A1 state is populated at the inner turning point of the potential well, launching a wavepacket composed of several vibrational states. In the XUV-only experiment the most common dissociation pathway would be a non-adiabatic coupling to hot vibrational bands in the spin-orbit split ground state of the cation, X 2E3/2,1/2. The molecule then dissociates statistically, producing CH3+. The latter mechanism is consistent with the discrepancy between the lifetime of the Ã 2A1 state (10^-10 s) and the dissociation rate constant (10^-7 s) [8,9]. The low total KER for the dissociation into I + CH3+ is also consistent with this scenario.
The IR pulse couples the Ã 2A1 state population to a repulsive potential energy surface, leading to dissociation into CH3 + I+. As this state serves as a common final state for the coherently populated vibrational levels in the Ã 2A1 state, these pathways interfere with a beating frequency equivalent to the energetic separation between those states. The fact that dissociation into CH3 + I+ occurs with a nonzero KER indicates a prompt dissociation process. Inherently, the I + CH3+ dissociation channel should exhibit the same oscillation, but with opposite sign.
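The stated correspondence can be checked by converting the beat period into the energy spacing of the coherently populated vibrational levels:

$$\Delta E=\frac{h}{T}=\frac{4.136\ \mathrm{eV\,fs}}{127\ \mathrm{fs}}\approx 33\ \mathrm{meV},\qquad \tilde{\nu}=\frac{1}{cT}\approx 263\ \mathrm{cm^{-1}},$$

an energy spacing consistent with a low-frequency C-I stretching vibration in the Ã 2A1 well.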
To complete the picture, it is necessary to know the exact probing mechanism induced by the IR. The potential energy surfaces of reference [5] can qualitatively describe the probing step. Further calculations are underway to obtain further insight into the role of the IR field on the coupling to dissociative states.
Conclusion
We used ultrafast wavelength-selected XUV pulses to study the dynamics of the lowest excited electronic states of the CH3I+ cation. We observed dynamics upon resonant absorption by members of Rydberg series converging to these ion states, as well as vibrational wavepacket dynamics that can be used to steer the dissociation to either the I + CH3+ or the CH3 + I+ channel depending on the phase of the wavepacket at the arrival time of the probe pulse. In the future, we aim to quantify this control process with the help of potential energy surface calculations on which a wavepacket will be propagated in the presence of the IR field. This work has been supported by Laserlab Europe.
"Chemistry",
"Physics"
] |
Tactics for Developing a Management Reform System Based on Enterprise Competitiveness
To provide for an enterprise's long-term survival, its goods (services) must be attractive enough for buyers to pay for them, and they should be more attractive to buyers than products (services) of the same or similar consumer quality manufactured by other companies. Thus, managerial decisions with regard to production and sales of products should be based on full knowledge of market factors and taken with account of the impact that these decisions may have on the market. The results of this analysis have a direct impact on the adoption of optimal decisions associated with the following: formation of assortment programs; renewal of products; change in the specificity of the enterprise; change in the profile of production; ensuring timely sale of products; obtaining necessary investments for the development of production on beneficial terms, etc.
Introduction
Management of competitiveness is a strategic objective of any enterprise; it is central to marketing research on competitiveness and to the development and implementation of a comprehensive concept of competitiveness management.
An organizational and economic mechanism of support of competitiveness management is required, which would be based on target-oriented complex components reflecting the interdependence of organizational, economic, technical, and technological activities, the implementation of which contributes to the efficient implementation of managerial decisions in this field.
Reforming or improving an enterprise's management system is possible through changes in its characteristics or through qualitative and quantitative modification of the control system.
Identification of all possible environmental factors caused by the unstable economic situation is followed by their screening to select the most significant ones. Any effective methods can be used for this purpose: analytical, statistical, and economic-mathematical ones (Nikitina, 2013).

In developing a model for reforming the enterprise's management system under these conditions, a mechanism is needed for taking into account the effect of the selected factors on the enterprise and its management system during reform. The reforming process itself consists of changes in the components of the management system and in its characteristics.
Method
The system analysis allows evaluating the whole set of factors determining the integral capacity of an enterprise and its competitiveness. But for effective management of an enterprise's competitiveness, it is necessary to adhere to a number of principles.

The orientation of the country's economy to market relations and the rapid development of foreign economic relations necessitate radical changes in views on production management and create prerequisites for the development and implementation of methods of competitiveness management as the most powerful tool for eliminating controversies between customers' needs and the capabilities of enterprises. Competitiveness acquires special status now that foreign products, usually of much better quality than domestic ones, are appearing in the domestic market.

Improvement of the market situation presupposes a radical improvement in the effectiveness of the economy through implementation of new production and management technologies focused on constant renewal of products and significant improvement of their quality. Today, therefore, considerable research efforts are directed at the essence of competitiveness, the factors that affect its level, the methods of influencing it, and the tools of competitiveness management.

Competitiveness is systemic if its determinants can be understood only through the interrelated exchange between elements formed at different levels of the social system. Therefore, it is not sufficient to consider only the micro-level (enterprises, consumers, and market transactions) and the macro-level (trade and exchange rate, the state budget, and the foreign trade policy), which, of course, does not mean that these levels are less important. It is necessary to study the meta-level issues in order to understand why the state creates general conditions that are more or less favorable for sustainable economic development, what roles different actors in society play, how the interaction of state and non-state institutions takes place, and what goals of economic development are pursued in the course of this interaction; and the meso-level issues in order to analyze the measures that have a decisive influence on the performance of individual sectors and territories. The resulting model of the 'systemic competitiveness' of the national economy includes four levels of analysis.

In order to form the system of evaluation of enterprise competitiveness along the main activity lines of an enterprise and its main competitors, a clearly defined, limited number of indicators can be selected. The list of the parameters used and the degree of their specification were defined by the following methodological assumptions.
Firstly, the number of evaluated characteristics should be rather limited in order to ensure the effectiveness of the adopted managerial decisions.
Secondly, because of the complexity and diversity of the problem and the lack of generally accepted approaches to the assessment of competitiveness, which requires extensive independent research, the proposed model uses results obtained previously by domestic and foreign authors. The indicators were grouped based on the analysis of a wide range of problems of a technical, economic, and social nature, which results in identifying the variables that provide for competitiveness. The starting point of this analysis is to determine the list of technical and economic factors of competitiveness. A factor of competitiveness is an immediate cause whose presence is necessary and sufficient to change one or more criteria of competitiveness. The factors identified are the production, sales, service, and market factors, which reflect the influence of the internal and external environment on the enterprise.
During the reforming of its control system in conditions of economic instability, an industrial enterprise encounters the following problems: 1) The impact of environmental factors caused by economic instability, which manifest themselves differently and can be estimated differently, must be taken into account.

2) A mechanism must be developed with whose help it would be possible to decide on the order of reforming the management system elements in the short term, i.e., to develop the reforming tactics for the control system of an industrial enterprise.
For the solution of these problems, it is possible to use mathematical models and methods, and decision-making methods as the most promising ones (Belousov, 2008).
Decision-making in real management tasks is a complex problem burdened by a variety of alternatives and by restrictions on the capabilities of the person making the decision. Moreover, when analyzing the tasks facing the control systems of industrial enterprises, it is possible to assert with confidence that all of them are of a multicriteria nature. Even a simple statement about achieving maximum economic benefit at minimum expense already includes two criteria: maximum profit and minimum investment. In practice, it is necessary to link numerous operational objectives at the same time, for example minimum environmental pollution (Polzunova & Kraev, 2006); one common way of doing so is sketched below.
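One standard way to fold such conflicting criteria into a single ranking, used here purely as an illustration since the text does not commit to a specific method, is the weighted-sum model over min-max-normalized criteria:

```python
import numpy as np

# Hypothetical alternatives scored on three criteria:
# profit (maximize), investment (minimize), pollution (minimize).
alternatives = np.array([
    [120.0, 40.0, 0.8],  # option A
    [100.0, 25.0, 0.5],  # option B
    [140.0, 60.0, 1.2],  # option C
])
maximize = np.array([True, False, False])
weights = np.array([0.5, 0.3, 0.2])  # importance, chosen by the analyst

# Min-max normalization to [0, 1], flipping criteria to be minimized.
lo, hi = alternatives.min(axis=0), alternatives.max(axis=0)
norm = (alternatives - lo) / (hi - lo)
norm[:, ~maximize] = 1.0 - norm[:, ~maximize]

scores = norm @ weights
print("best option:", "ABC"[int(np.argmax(scores))], scores.round(3))
```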
Clearly, for the successful operation of an enterprise as a whole, and of the enterprise management system in particular, it is necessary to adhere to a methodology of reforming the enterprise management system that promptly takes into account the negative manifestations of economic instability in the environment.
We believe that the main criterion for the process of reforming the enterprise management system under these conditions is to achieve the highest possible effectiveness of management.
V. I. Mukhin believes that the effectiveness of management (of control actions) is the degree of compliance of the (actual or expected) result with the required (desired) one or, in other words, the degree of goal achievement. The degree of adaptation to goal achievement was also considered by V. S. Anfilatov.

The team of authors headed by Yu. V. Vasilyev expresses the view that "the effectiveness of management is a comparative characteristic of the operational effectiveness of a particular control system, which is reflected in various indicators of both the control object and the management activities (the subject of management), and these indicators can be both quantitative and qualitative". Thus, effectiveness is a measure of the implementation of the functions of the system as a whole; the result of the system's operation is the fact of implementation of these functions. The effectiveness of an enterprise's operation is determined by how fully the market opportunities of the enterprise are identified and used, with maximum utilization of its potential.

In the course of studying the evaluation of the effectiveness of management systems, we identified the following main approaches to evaluating the effectiveness of the organizational management system: 1) The effectiveness of the management system is evaluated by indicators characterizing the effectiveness of the enterprise's performance. The full range of indicators that describe the financial and economic activities is analyzed.
2) The effectiveness of a management system is evaluated by a comprehensive indicator that combines characteristics of both the cost effectiveness of the management system and the production effectiveness. Here, the cost effectiveness of the management system (Es) is defined as the ratio of management costs to the value of fixed assets and working capital. The indicator of production effectiveness (Ep) is calculated as the ratio of labor productivity to the workforce. The general criterion of the management system effectiveness is the ratio Es/Ep (written out below).
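Written out (with symbols introduced here only for notation), the indicators of approach 2 read

$$E_s=\frac{C_m}{F_{fa}+W},\qquad E_p=\frac{P_l}{N},\qquad K=\frac{E_s}{E_p},$$

where C_m are the management costs, F_fa is the value of fixed assets, W is the working capital, P_l is labor productivity, N is the workforce, and K is the general criterion of the management system effectiveness.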
3) The effectiveness of the management system is evaluated through an expert method, preferably using qualitative criteria, a set of which is rather versatile (expenses for the management apparatus, the general and specific objectives and functions of management, the organizational structure of management, the characteristics of the management process, the methods of management and development of managerial decisions, etc.). 4) The effectiveness of the management system is a function of the target (R/T) and the resource (R/C) effectiveness: E = f(R/T, R/C), where R is the result of the enterprise operation, T are the targets, and C are the costs of the enterprise.
5) The effectiveness of the control system is evaluated using three interrelated groups of indicators: -Resource effectiveness; -Qualitative parameters characterizing the organization of the management process; -Parameters characterizing the rationality of the organizational structure and its technical and organizational level.
6) The effectiveness of the control system is evaluated based on the resource-potential approach, according to which the level of use of the system's potential is taken into account.
The classical method of determining cost-effectiveness is based on the ratio of economic results to the cost of labor. As it is often difficult to provide a direct assessment of the results in management, an indirect assessment is applied that allows finding the share of the contribution of each employee in the final performance of the managerial staff: E = P_m / S, where E is the management effectiveness, P_m is the performance of management, and S are the specific management costs.
In addition, there is a modification of this method for the analysis of the effectiveness of collective managerial labor (E_c): E_c = V / (C_l + F_rev + E · F_fa), where V is the volume of production, rubles; C_l are the labor costs of employees, rubles; F_rev are the operating costs for the revolving funds, rubles; F_fa is the cost of industrial fixed assets, rubles; and E is the effectiveness coefficient with regard to the use of production assets.
7) From the perspective of the system analysis, all systems (including the enterprise management system) can be characterized based on the parameters of the material properties of the systems: -The general system properties: integrity, stability, observability, controllability, determinacy, openness, dynamism, etc.; -The structural properties: structure, coherence, organization, complexity, scale, spatial scale, centralization, volume, etc.
However, achieving the required level of effectiveness of the management system of an industrial enterprise in the current circumstances often becomes quite a complex problem.
As we have noted above, all of the construction materials industry enterprises in Russia as a whole, and in the Belgorod Region in particular, face certain difficulties. These negative phenomena were caused by external factors existing in the country. Because of the global crisis, stable economic relations between whole sectors of the economy interconnected by various commodity-money relations were undermined.
The stability of the economy was broken; the economic crisis led to the so-called unstable economic situation. In addition, using the example of the construction materials industry enterprises, we can see specific manifestations of market fluctuations, which can be treated as disorders of stability (the seasonal nature of demand, the emergence of substitute products).
To develop a methodology for solving this problem, we propose to resort to system analysis.
System analysis is the most constructive line of system studies. At the same time, to this day the term "system analysis" is construed very broadly. It is known as a problem-solving methodology. In addition, system analysis is considered the methodology for creating systems or improving a specific area of activity. Thus, system analysis is a set of specific research methods and practical devices for solving various problems in all spheres of human activity, based on the systematic approach and on the presentation of the object of study as a system. System analysis is characterized by an orderly, logically substantiated approach to the study of problems and by the use of existing methods for solving them, which may have been developed in other sciences.
There are several approaches to solving problems using the system analysis.For their classification, we will expand the list compiled by V. N. Volkova and A. A. Denisov.
Thus, summarizing the stages of techniques of the system analysis, we can arrange them as follows: 1) Study of the problem's relevance.
2) Definition of goals that arise at solving the problem.
3) Decomposition of tasks assigned to solve the problem.
4) Synthesis of the key aspects of the problem solution.
5) Building the problem solution model.
6) Implementation of the problem solution model.
7) Verification.
For this study, the solution of the following problem is relevant: reforming (or improving) the enterprise management system in the circumstances of economic instability in the optimal way. In view of the main provisions of the system analysis, the following mechanism to solve this problem is proposed: 1) Study of the relevance and assignment of goals of reforming the management system in the circumstances of the economic instability.
2) Decomposition of tasks arising at reforming the management system.
3) Synthesis of the key aspects of the problem solution.
4) Development and verification of the found model of the enterprise reforming.
When the goals for reforming of the management system are being set, the most relevant task is to improve its effectiveness.
Results
Let us consider the application of the decision-making methods possible in these conditions. It should be noted that decision-making tasks are a rather numerous class of tasks encountered in many subject areas. Thus, the person who is responsible for the design and solution of such a task must complete several stages of actions: 1) Prepare a set of admissible solutions.
2) Formulate goals of the decision-making.
3) Select the necessary decisions, which are optimal for achieving the decision-making goals.
In this case, we are inevitably confronted with the so-called multicriteria tasks, in which the relative importance of private criteria of optimality is considered (Nogalski, 2005).
The multicriteria decision-making task model can be represented as follows:

⟨t, S, K, X, f, R, r⟩,

where t is the setting of the (type of) task; S is the set of solutions; K is the set of criteria; X is the set of criteria scales; f is the mapping of the set of feasible solutions to the set of vector estimates; R is the system of preferences of the decision-maker; r is the decision rule.
The statement of the problem is consistent with the objectives of the decision-maker. The decision-maker is understood as an individual (administrator, manager, engineer, designer) who seeks to formulate and solve the problem (i.e., make the necessary decisions) in a particular domain based on his perceptions of the importance of the parameters of the described control system (Nikitina, 2013).
At the same time, we need to define the concept of the mathematical model of a system, by which we mean the dependence of the characteristics of a system on its parameters (5). This dependence can be described in various ways: analytical expressions, tables, algorithms, etc.
The set S represents a set of decisions that satisfy the restrictions of each task and are considered as possible ways of achieving the goal. The elements of the set S can also be called decision variants, strategies, actions, alternatives, options, etc. (Russell & Taylor, 2005). In this case, each decision should lead to a specific outcome, the consequences of which are evaluated by criteria K_1, K_2, ..., K_m. In some tasks, the set of criteria can be given in advance, but usually it is formed during the study of the assigned task.
The indicators that the decision-maker considers important with regard to the goal are called criteria. They are common for all admissible decisions and characterize the general value of a decision, so that the decision-maker seeks to receive the most preferable estimates on them. Thus, they cannot be presented in the form of restrictions.
A scale should be built for each criterion; such a scale represents a set of estimates on which a complete order has to be constructed. The scales X_1, X_2, ..., X_m forming the set X can be numerical and non-numerical; numerical scales can be discrete and continuous. The set X may contain scales of various types. In decision theory, it is assumed that each decision-maker has a system of preferences, which is necessary for rational action. The system of preferences of a decision-maker is understood as a set of his structured representations associated with the advantages and disadvantages of the compared solutions (Worthington, 2010).
The Cartesian product X_1 × X_2 × ... × X_m of the scales forms the set of all vector estimates; to find the admissible decisions, S is put into correspondence with the set of admissible vector estimates Y_A by the mapping f: S → Y_A.
Stages of the solution of multicriteria tasks can be presented as the algorithm shown in Figure 1. Specifically, in our case there is a task of tactical (operational) reforming of the elements of the control system of an industrial enterprise in the circumstances of instability (Shchetinina & Polarus, 2012). The set of alternatives will be represented by separate elements of the control system, and the set of criteria by the factors of the environment reflecting the influence of the economic instability. Graphically, the stages of our multiobjective task are presented in Figure 1.
Currently, there are several approaches to solving multiobjective problems: 1) An abstract choice model with multiscale extremal mechanisms that allow for a rational choice with respect to the vector criterion.
2) Application of the theory of utility for the multicriteria selection of a discrete set of alternatives in the circumstances of risk and uncertainty.
3) The decision based on the set of axioms established in advance.
4) Converting the multicriteria choice task into a scalar optimization task by vector convolution of the criteria. 5) Construction of a compromise area and the corresponding set of Pareto-optimal decisions for some classes of multicriteria optimization tasks.
Discussion
Analyzing the characteristics of enterprise management, we should identify a number of so-called criteria, or properties, which, in our opinion, the chosen method for solving the set multicriteria task should have (6). Firstly, it must conform to the course of human thought; thus, the mathematical foundations laid in it should not replace the human mind and experience in interpreting the real world. Secondly, the method must take into account the fact that, as a rule, there are many opinions and many styles of decision-making, and conflicts are possible in the process of developing a unified solution; therefore, mechanisms for achieving consent are necessary (Snitko, Alyabyeva, & Dotsenko, 2011). The method must take into account the fact that often (especially for large-scale tasks) there are multiple solutions; as a result, an unsystematic decision-making process brings uncertainty, affecting the quality of decisions. Besides, when only one of two options can be chosen and compromises are not admissible, it is not always possible to build a logical chain of reasoning and to choose the best solution. Therefore, a mechanism of quantitative ranking (prioritization) of possible solutions is needed to ensure clarity (Problems and Trends of Economy, 2012).
Figure 1. Stages of the solution of multicriteria tasks

Also, the method should serve as a universal basis for systematic decision-making, allowing the decision-making process to be put on a routine, systematic footing. (Instead of brainstorming organized spontaneously and without a clear plan, we obtain a clear algorithm for an organization's reflection on decision-making in any field of activity) (Gorbashko & Maksimtsev, 2014). The method must provide a reasonable and comprehensible way of rating possible decisions; otherwise, the process of decision-making may be uncertain, and potential opportunities may be lost. The method must take into account both the available quantitative information and qualitative information about the preferences of decision-makers (like - dislike, better - worse, etc.), which is extremely important for the economy, policy, management, and the social sphere. In this regard, the procedure of paired comparisons can be useful.
Conclusion
One of the promising methods for solving multicriteria problems (one that satisfies the requirements listed above) is the analytic hierarchy method. The analytic hierarchy method is a tool of the system approach to the solution of complex decision-making problems. Early contributions to this line of work were made by R. Bellman, B. N. Brooke, and V. N. Burkov (Bertsekas, 2007). The method became very popular after the publication of the works by T. Saaty, who called this procedure the method of analytic hierarchy (MAH) (Bichoyeva, 2011). T. Saaty's publications revealed the great capacity of the MAH for solving various tasks of both theoretical and practical nature. The peculiarity of the MAH is that it does not offer the decision-maker a single "correct" decision, but allows him to find, in an interactive mode, the option (alternative) that best conforms to his understanding of the nature of the problem and the requirements for its solution. Along with the mathematics, it is based on psychological aspects (Rudychev, Nikitina, & Levchenko, 2013). The MAH allows a thorny decision-making problem to be structured clearly and rationally as a hierarchy, and alternative decision options to be compared and assessed quantitatively (Roberts, 1965). The analytic hierarchy method is used worldwide for making decisions in a variety of situations: from management at the state level to solving industry-specific problems in business, industry, health care and education (Chizova, 2002). Computer support of the MAH is ensured by software products developed by various companies. The analysis of a decision-making problem in the MAH begins with building a hierarchical structure, which includes the purpose, the criteria, the alternatives and other factors influencing the choice (Feldman & Audretsch, 1999). Each element of the hierarchy can represent various aspects of the task to be solved; moreover, both material and intangible factors, quantitative and qualitative characteristics, and objective data and subjective expert judgment can be taken into account (Porter, 1998). In other words, the situation of choosing a decision with the MAH resembles the procedures and methods of argumentation that are used at the intuitive level. The next step of the analysis is the definition of priorities, representing the relative importance or preference of the elements of the constructed hierarchical structure, using the procedure of pairwise comparisons. The dimensionless priorities allow diverse factors to be compared reasonably, which is a distinctive feature of the MAH (European Commission, 2002). At the final stage of the analysis, the synthesis (linear convolution) of priorities in the hierarchy takes place, in which the priorities of alternative solutions are calculated with respect to the main objectives. The alternative with the highest priority value is considered the best one.
Figure 2. Steps for solving the problem of forming the tactics of reforming the management system of an industrial enterprise. The figure shows the following stages: setting goals and objectives (ranking the management system elements for reforming under the influence of the factors of economic instability); selecting the factors of economic instability; selecting the elements of the control system of an industrial enterprise as the objects of reform; developing rating scales; evaluating the control system elements on the scales of the criteria; selecting the decision rules; ordering the management system elements of the industrial enterprise; if the ordering is satisfactory, reforming the management system elements in the resulting order, otherwise analyzing the reasons for the model's insufficiency and taking the necessary corrective actions. The criteria include: maximum profit; maximum utilization of equipment; maximum utilization of the workers' useful working time; maximum demand; maximum market share; maximum profitability; minimum stock of finished products; minimum contingencies; minimum order backlog.
Alterations in Gene Expression of Proprotein Convertases in Human Lung Cancer Have a Limited Number of Scenarios
Proprotein convertases (PCs) are a protein family that includes nine highly specific subtilisin-like serine endopeptidases in mammals. The system of PCs is involved in carcinogenesis, and the levels of PC mRNAs alter in cancer, which suggests the expression status of PCs as a possible marker for cancer typing and prognosis. The goal of this work was to assess the information value of expression profiling of PC genes. Quantitative polymerase chain reaction was used for the first time to analyze mRNA levels of all PC genes as well as matrix metalloproteinase genes MMP2 and MMP14, which are substrates of PCs, in 30 matched pairs of samples of human lung cancer tumor and adjacent tissues without pathology. Significant changes in the expression of PCs have been revealed in tumor tissues: an increased FURIN mRNA level (p<0.00005) and decreased mRNA levels of PCSK2 (p<0.007), PCSK5 (p<0.0002), PCSK7 (p<0.002), PCSK9 (p<0.00008), and MBTPS1 (p<0.00004), as well as a tendency toward an increase in the level of PCSK1 mRNA. Four distinct groups of samples have been identified by cluster analysis of the expression patterns of PC genes in tumor vs. normal tissue. Three of these groups, covering 80% of samples, feature a strong elevation in the expression of a single gene in cancer: FURIN, PCSK1, or PCSK6. Thus, the changes in the expression of PC genes have a limited number of scenarios, which may reflect different pathways of tumor development and cryptic features of tumors. This finding allows us to consider the mRNAs of PC genes as potentially important tumor markers.
Introduction
Proprotein convertases (PCs) are a protein family that includes nine highly specific subtilisin-like serine endopeptidases in mammals (reviewed in [1]). The key function of these enzymes is the processing and/or activation of numerous proteins and peptides. Endogenous substrates of PCs include neuropeptides, peptide hormones, growth and differentiation factors, adhesion molecules, extracellular matrix proteins, receptors, enzymes, blood coagulation factors, and plasma proteins. In addition, pathogenic viruses and bacteria can use host PCs to switch on their proteins, such as viral coat proteins or bacterial toxins. Since the activation of proproteins in the right time and place is clearly crucial for homeostasis, PCs are involved in the control of various physiological processes in health and disease. PCs as a processing system feature a combination of specificity and redundancy [2]: each protein of the group has unique structural and functional properties; at the same time, the properties of PCs overlap. The specificity and redundancy are observed not only at the levels of substrate specificity and cellular localization but also for the temporal/tissue profiles and, possibly, for the regulation mechanisms of gene expression. In this context, the identification of individual physiological properties and natural partners of enzymes of this group is not an easy matter, which can be properly solved only when PCs are considered as an integrated system.
Many substrates of PCs are associated with malignant diseases. For instance, the direct involvement in tumor progression and metastasis has been demonstrated for insulin-like growth factor 1 (IGF-1) and its receptor (IGF-1R), transforming growth factor b (TGF-b), vascular endothelial growth factor C (VEGF-C), and matrix metalloproteinases (MMPs) (reviewed in [3]). By activating the key cancer-associated proteins, PCs have an effect on cell proliferation, motility, and adhesion as well as tumor invasion, which suggest PCs as promising therapeutic targets [4].
The first data on the association of PCs with cancer were published in 1987 [5]. Since then, numerous studies analyzed the expression of PCs in cancer and the correlations between the PC expression levels and cancer properties using various experimental approaches [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21]. Overall, the data obtained demonstrated altered levels of PC mRNAs in cancer. Correlations between PC expression profiles and cancer aggressiveness [9,12,16], survival rate [18], and neuroendocrine differentiation of cancer cells [6,7,13] were shown. This allows us to propose the expression status of the PC system as a possible marker for cancer typing and prognosis.
Lung cancer is the most widespread oncological disease, which causes 1.4 million deaths annually [22]. Not surprisingly, PC expression data were first obtained for this cancer type [5][6][7]. These as well as more recent publications [8,[11][12][13] demonstrated altered expression of FURIN, PCSK1, PCSK2, and PCSK6 genes in lung cancer. (Hereafter, gene symbols follow the recommendations of the HUGO Gene Nomenclature Committee, www.genenames. org. The corresponding common protein designations are given in Table 1.) High FURIN expression was found in non-small cell lung carcinomas (NSCLCs) vs. small cell lung carcinomas (SCLCs) [5,8] and correlated with the aggressiveness of lung cancer cell lines [12]. PCSK1 and PCSK2 expression is largely detected in cancers with neuroendocrine features, in particular, SCLC [6][7][8]11,13]. At the same time, PCSK6 expression is not necessarily observed in lung cancer, although it is more common in NSCLC than in SCLC [8]. Thus, the response of the PC system varies with lung cancer types. Gene expression surveys involving microarray analysis (including whole-transcriptome ones) demonstrate a substantial heterogeneity of lung cancer samples [13,[23][24][25][26], and the revealed differences correlate with the patients' survival rate [23,24,26]. Overall, this proposes lung cancer as a test system to assess the information value of the approach based on expression profiling of PC genes.
In this context, the present work for the first time evaluated mRNA levels of all PC genes in lung cancer using reverse transcription followed by a quantitative real-time polymerase chain reaction (qPCR). In addition, we studied the gene expression of two matrix metalloproteinases (MMP2 and MMP14), which are key factors in cancer invasion and metastasis [27] and substrates of PCs [28][29][30].
Ethics Statement
The research was approved by the Institutional Review Board of Blokhin Cancer Research Center (Moscow, Russia), and written informed consent was obtained from each patient involved in the study.
Collection of tissue samples
Specimens of cancer tumor tissues and of adjacent tissues without histological pathology (further referred to as normal tissue) were taken from 30 patients with diagnosed small cell lung carcinoma or non-small cell lung carcinoma (tumor stage I-III) during surgery (Figure 1, Table S1). In every case, the localization of the primary tumor node was determined. If a tumor originated from the smallest bronchi in peripheral segments of the lung and had no connection with the bronchi lumen, its localization was considered peripheral. If a tumor originated from a large bronchus, its localization was considered central. The normal tissue specimens were taken from the edge of the resections (the distance between tumor and normal tissues was no less than 20 mm). All patients were under medical supervision in the Blokhin Cancer Research Center (Moscow, Russia) during the period from May 2004 to November 2005. None of these patients had received radiotherapy or chemotherapy up to the moment of the investigation.
Each specimen was split into two portions. The first one was immediately frozen in liquid nitrogen for mRNA isolation. The second portion was used for histological examination after hematoxylin and eosin staining of the paraffin sections. Tumor tissue specimens contained more than 70% malignant cells. In normal tissue specimens, no malignant cells were found. For SCC samples, the presence of keratinization was determined; the existence of keratinization allowed a sample to be referred to the group of well-differentiated cancer.
RNA isolation and purification
The total RNA was isolated from homogenized tumor or normal tissues by guanidine isothiocyanate lysis and acid-phenol extraction with subsequent removal of polysaccharide admixtures [31]. Additional purification was performed by RNA precipitation using an RNeasy Mini kit (Qiagen, USA). Further treatment with DNase I (Promega, USA) was done according to the supplier's recommendations. The obtained RNA samples were characterized
Double-stranded cDNA synthesis
Oligonucleotides AAGCAGTGGTATCAACGCAGAGTACGCrGrGrG and AAGCAGTGGTATCAACGCAGAGTACT(30)VN (V = C, G, or A) (Syntol, Russia) were used in the reverse transcription reaction. For the first-strand cDNA synthesis, 1 µg of isolated RNA was incubated with reverse transcriptase PowerScript (Clontech, USA) as described by Y. Zhu et al. [32]. The obtained reaction mixture was used for the second-strand synthesis followed by PCR using the Advantage 2 DNA polymerase (Clontech, USA) and primer AAGCAGTGGTATCAACGCAGAGT under the following conditions: 95 °C for 1.5 min; up to 17 cycles of 95 °C for 20 s, 65 °C for 20 s, and 72 °C for 3 min. To obtain equal amounts of all amplification products, the number of cycles varied (commonly, 15 cycles).
Real-time PCR
Real-time PCR was performed using the primers and probes of the TaqMan Gene Expression Assays system (Applied Biosystems, USA) (Table 1). TaqMan Pre-Developed Assay Reagent GAPDH 20× (Applied Biosystems, USA) was used to quantify the reference gene, glyceraldehyde 3-phosphate dehydrogenase (GAPDH). PCR was conducted using a Chromo4 Dyad Disciple cycler (BioRad, USA) according to the supplier's recommendations with the following program: 50 °C for 2 min; 95 °C for 10 min; 45 cycles of 95 °C for 15 s and 60 °C for 60 s; the reaction volume was 20 µl. Every sample was tested at least twice in duplicates. The threshold cycle was defined using the Opticon Monitor 3 software (BioRad, USA).
Experimental data processing
The experimental data obtained for the genes under study were normalized to GAPDH mRNA levels using the formula

Expr = 2^(C_T(GAPDH) − C_T(gene)),

and the results were averaged (Table S1). The values for tumor and normal tissues were designated as Expr_T and Expr_N, respectively. The Expr_T to Expr_N ratio (Ratio_T/N) and 95% confidence intervals were calculated for each gene.
In some samples, mRNAs of certain genes were not detected in tumor or normal tissues in one of two independent experiments. In these cases, the Expr and Ratio values were calculated from the data of the other experiment. In some samples, real-time PCR failed to detect mRNAs of certain genes in tumor or normal tissues in both experiments; in these cases, C_T was set equal to 42 in the Ratio_T/N calculations. If mRNAs were undetectable in tumor and normal tissues in both experiments, the Ratio_T/N values were not calculated.
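The processing steps above can be reproduced with a short script. The following Python sketch is our illustration: it assumes the standard 2^(−ΔCt) form of the GAPDH normalization and uses invented Ct values; the fallback Ct = 42 for transcripts undetectable in both experiments follows the text:

UNDETECTED_CT = 42.0  # fallback Ct used when a transcript is not detected

def expr(ct_gene, ct_gapdh):
    # Expression normalized to GAPDH: 2^(Ct(GAPDH) - Ct(gene))
    if ct_gene is None:          # undetectable in both experiments
        ct_gene = UNDETECTED_CT
    return 2.0 ** (ct_gapdh - ct_gene)

def ratio_tn(ct_t, gapdh_t, ct_n, gapdh_n):
    # Tumor-to-normal expression ratio for one gene in one sample pair
    return expr(ct_t, gapdh_t) / expr(ct_n, gapdh_n)

# Hypothetical sample pair: gene detected in tumor, not in normal tissue
print(ratio_tn(ct_t=29.1, gapdh_t=18.0, ct_n=None, gapdh_n=18.4))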
Statistical analyses
Wilcoxon matched-pairs rank-sum test was used to evaluate the significance of difference between the mRNA levels of genes in tumor and normal tissues. Kruskal-Wallis one-way analysis of variance was performed to evaluate the influence of tumor type, stage, and TNM characteristics on mRNA levels of the studied genes. Spearman rank correlation coefficients were calculated to evaluate the relationship between pairs of gene expression profiles. Cluster analyses of gene expression patterns and expression profiles were performed for the Expr and Ratio T/N values by the Ward method using Spearman rank correlation coefficients as the distance measure. All above statistical analyses were performed using the Statistica 8.0 software (StatSoft, USA). Heat maps were built using the Matrix2png software tool [33].
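For readers who wish to reproduce the pipeline, the analyses map onto standard SciPy calls. The following sketch is our illustration with random stand-in data instead of Table S1; Ward linkage over a 1 − Spearman-rho distance approximates the clustering described (the exact implementation in Statistica may differ):

import numpy as np
from scipy.stats import wilcoxon, spearmanr
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
tumor = rng.normal(size=(30, 9))    # 30 sample pairs x 9 genes (stand-in data)
normal = rng.normal(size=(30, 9))

# Wilcoxon matched-pairs test, tumor vs. normal, for one gene
stat, p = wilcoxon(tumor[:, 0], normal[:, 0])

# Spearman correlation between the expression profiles of two genes
rho, p_rho = spearmanr(tumor[:, 0], tumor[:, 1])

# Ward clustering of samples by their differential expression patterns,
# using 1 - Spearman rho between samples as the distance measure
patterns = tumor - normal                     # T/N patterns (log scale)
corr, _ = spearmanr(patterns.T)               # 30 x 30 sample correlations
dist = squareform(1.0 - corr, checks=False)   # condensed distance vector
groups = fcluster(linkage(dist, method="ward"), t=4, criterion="maxclust")
print(p, rho, groups)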
Results and Discussion
In this work, quantitative PCR was used for the first time to analyze mRNA levels of all PC genes (listed in Table 1) as well as matrix metalloproteinase genes MMP2 and MMP14 in 30 matched pairs of samples of human lung cancer tumor and adjacent normal tissues (Table S1, Figure 1). Expression of MBTPS1, PCSK7, MMP2, and MMP14 was observed in all tumor and normal tissue samples; and PCSK5, in almost all samples. Conversely, PCSK4 mRNA was detected in two tumor samples only. The expression profiles of other genes were more complex. The PCSK9 transcript was found in 29 normal tissue samples but only in 18 tumor ones. PCSK6 expression was detected in about two thirds of normal and tumor tissues; and PCSK2, in 15 and 11, respectively. The most pronounced differences between normal and tumor tissue expression were observed for FURIN and PCSK1. The samples where mRNAs of these genes were detected were twice as frequent in tumor as in normal tissues, while their expression was undetectable in a substantial fraction of both tumor (21/30 for PCSK1 and 8/30 for FURIN) and normal samples (26/30 and 20/30, respectively). Overall, these findings demonstrate significant differences in the expression of individual PC genes in the human lung, on the one hand, and high variation in their expression patterns (i.e., combinations of expression levels of the genes) between individuals, on the other hand.
The data obtained are, in general, in good agreement with published results. PCSK4 expression was shown to be largely limited to testicular and ovarian germ cells [34-36]. PCSK1 and PCSK2, the principal activators of prohormones and proneuropeptides within the regulated secretory pathway, are largely detected in neural and endocrine cells [37]. All other genes studied (encoding both PCs and MMPs) are commonly reported as ubiquitously or widely expressed. As have other authors, we found mRNA of MBTPS1, PCSK5, MMP2, and MMP14 in all or practically all the samples of tumor and normal lung tissues (e.g., [38,39], datasets GDS1650, GDS1673, and GDS2491 in the Gene Expression Omnibus database at www.ncbi.nlm.nih.gov/geo/). In the case of PCSK6, in full conformity with other published data ([8], GDS1650, GDS1673, and GDS2491), we found the mRNA not in all, but in a major portion of the samples analyzed. Our results concerning PCSK9 are also in agreement with the published data (GDS1673), although the current information about the expression of this gene in the lung is scarce. We found FURIN mRNA in approximately half of all samples, which is in agreement with data obtained by microarray technology (GDS1650, GDS1673, and GDS2491), but in contradiction to published data acquired by Northern blot analysis [5,8]. This discrepancy may be explained by features of the methods used. The largest inconsistency concerns PCSK7. There are few data about its expression in the lung. The data presented so far were obtained with microarray technology and do not match each other. We found PCSK7 mRNA in all our samples; Gruber and coworkers found it in 14 out of 40 not-diseased lung samples ([40] and GDS1673), and Stearman and colleagues did not find it in any of 20 tumor and 19 normal lung tissue samples ([41] and GDS1650). It does not seem possible to explain the reasons for these distinctions, but they are most probably due to differences in the experimental platforms used: qPCR and various generations of Affymetrix chips.
Thus, the obtained and published data demonstrate a high variation in PC gene expression between individuals. This gives no grounds to expect that mRNA levels of PCs in tumor or normal tissue alone can have any prognostic value or can be used for cancer typing. Indeed, no significant differences between the expression levels of the genes analyzed or their expression patterns have been revealed for groups of normal or cancer samples with similar clinical features. Likewise, cluster analysis failed to reveal groups of samples with similar gene expression patterns.
At the same time, we have found moderate but significant (p < 0.05) pairwise correlations between the expression profiles of the genes studied (Table 2). Note that the sets of correlated profiles substantially differed for tumor and normal tissues. Assuming that the revealed correlations indicate the coordinated regulation of gene expression, one can propose that the regulation of expression of PCs and MMPs is modified in lung cancer. It is important to note that this applies to the great majority of PC genes. Moreover, the correlations between the profiles of changes in the expression in tumor vs. normal tissues (differential expression profiles) can be attributed to mechanisms underlying expression changes common to several genes.
Table 2. Correlations between expression profiles of the genes studied.

Analysis of mRNA levels of the studied genes demonstrated significant differences between tumor and normal tissues: the average level of FURIN mRNA increased (p < 0.00005); mRNA levels of PCSK2 (p < 0.007), PCSK5 (p < 0.0002), PCSK7 (p < 0.002), PCSK9 (p < 0.00008), and MBTPS1 (p < 0.00004) decreased; while the PCSK1 mRNA level showed a tendency to increase (Figure 1). Thus, the expression of seven out of eight PC genes (except PCSK6), whose mRNA is detectable in the lung, demonstrated unidirectional changes in lung cancer in our samples. Although the expression of PCs has been analyzed in many publications, this is an original finding, since mRNA levels of most PC genes in tumor vs. normal tissues had not been quantified previously. At the same time, the high level of FURIN expression in NSCLC [5,8] and other cancer types [10,16,18] has been reported previously. The role of MMPs in cancer progression as regulators of the tumor microenvironment is currently receiving much attention (reviewed in [42,43]). In this study, we analyzed the expression of two MMP genes of different types: the secreted MMP2 and the membrane-anchored MMP14. These proteases are the major MMPs involved in cancer cell invasion and proliferation, tumor angiogenesis and vasculogenesis, cell adhesion and migration, as well as in immune surveillance. Taking this into account, the absence of significant differences between MMP2 and MMP14 expression levels in tumor and adjacent tissues without histological pathology may look surprising. However, this result is in agreement with ample evidence for high levels of their expression both in cancer and stromal cells in NSCLC [38,39,44-52]. At the same time, a direct comparison of MMP2 and MMP14 expression in cancer tumor vs. adjacent normal tissues was reported only in two publications, and their conclusions are at variance. The former, similar to our study, observed significant differences for neither MMP2 nor MMP14 [46]. The latter publication demonstrated an elevated expression of MMP14 in cancer relative to normal lung specimens [39]. Most likely, this discrepancy is due to the different specimen types analyzed: squamous cell carcinomas (SCCs) prevailed in the former study [46] and in our work, while more adenocarcinomas (AdCs) were analyzed in the latter report [39].
In our view, the most prominent result was obtained by comparing the patterns of changes in the expression of PC genes between tumor and normal tissues. Cluster analysis divided studied samples into four groups (Figure 2), which did not correlate with the available clinical features of tumors. Three of these groups (C1, C2, and C3) cover 80% of samples. Each group is rather homogeneous and has a single key gene: FURIN in C1, PCSK1 in C2, and PCSK6 in C3. Usually, the key gene expression is elevated in cancer; although, it can be unaltered or slightly decreased against the background of a substantial decrease in mRNA levels of other PCs (Figure 1). Undetectable PCSK6 expression in most samples is an extra character of C1, while C3 features undetectable expression of PCSK1 and/or FURIN in more than a half of samples. C4 is more heterogeneous. The samples in this group share similar expression changes of PCSK5, PCSK7, PCSK9, and MBTPS1 as well as undetectable mRNAs of FURIN and PCSK1 in most cases. Thus, the changes in the expression of PC genes in lung cancer have a limited number of scenarios, which may correspond to previously undetected NSCLC types. It is of interest that the enzymes encoded by the key genes of the revealed groups belong to different types of PCs [53]. Furin, PCSK1, and PCSK6 have different kinds of C-terminal extensions. These PCs have different expression profiles: PCSK1 is localized in neural and endocrine cells, PCSK6 occurs widely, and furin is ubiquitous. Finally, they exhibit different secretion patterns: PCSK1 follows the regulated secretory pathway, while furin and PCSK6 are constitutively secreted. In this context, one can propose that different scenarios of alteration in the expression of PC genes induce different changes in the range of activated substrates.
The data available to date can provide only hints about the origin of the revealed groups. For instance, C2 with active PCSK1 can correspond to NSCLCs with signs of neuroendocrine differentiation [54][55][56][57][58][59][60][61][62][63][64][65], which can point to the origin of these tumors. The formation of C1 (FURIN) and C3 (PCSK6) can be mediated by the E2F1 transcription factor, which specifically upregulates PCSK6 but not FURIN or PCSK5 [66]. However, these data provide no reliable explanation of the mechanisms underlying the typical scenarios of changes in the transcription of PCs in lung cancer. Still, at least two radically different considerations can be advanced. On the one hand, the observed effects can stem from the differences that existed before malignant transformation, e.g., genotype differences between individuals or cell type differences within an individual. On the other hand, it can be due to the alterations emerged during cancer formation, in particular, local disorders in the expression of individual PC genes or varieties of the global dysregulation of gene expression. Moreover, the observed events can result from the effect of several factors at the same time.
Overall, analysis of the patterns of changes in the expression of PC genes in individuals allowed us to reveal several NSCLC types and to demonstrate that the expression changes have a limited number of scenarios, which may reflect different pathways of tumor development and cryptic features of tumors. This finding warrants further investigation and allows us to consider the mRNAs of PC genes as potentially important tumor markers.
Supporting Information
Table S1. Characteristics of specimens and gene expression data. (XLS)
Throughput Evaluation of Downlink Multiuser-MIMO OFDM-LTE System
Recently, the mobile communication industry has been moving rapidly towards long-term evolution (LTE) systems. LTE aims to provide improved service quality over 3G systems in terms of throughput, spectral efficiency, latency, and peak data rate, and the MIMO technique is one of the key enablers of the LTE system for achieving these diverse goals. Among the several operational modes of MIMO, multiuser MIMO (MU-MIMO), in which the base station transmits multiple streams to multiple users, has received much attention as a way of achieving performance improvements. In this paper we present a multiuser MIMO-OFDM-based simulator that includes the main physical-layer functionalities, and we calculate the throughput of LTE Frequency Division Duplex (FDD) and Time Division Duplex (TDD) systems. The simulator has been used to evaluate the performance of the 3GPP Long-Term Evolution (LTE) technology.
Introduction
LTE is one of the most promising wireless-technology platforms for the future. The version being deployed today is just the beginning of a series of innovations that will increase performance, efficiency, and capabilities. To address the growing mobile broadband demand, the 3GPP standards body released the next technological step, Long Term Evolution (LTE) [1] [2]. LTE is designed to substantially improve end-user throughputs, increase sector capacity and reduce user-plane latency. Among the many features of LTE, which supports up to 3 Gbps throughput in the downlink, the multiuser multiple-input multiple-output (MU-MIMO) scheme has been identified as one of the key enablers for achieving a high spectral efficiency. From both theory and design perspectives, MU-MIMO systems have several unique features distinct from single-user MIMO (SU-MIMO) systems [3]. To make up for the shortcomings of SU-MIMO, early LTE [4] standards (Rel. 8 and 9) defined a primitive form of the MU-MIMO mode. Many of us might have heard that the LTE peak throughput is 300 Mbps, but how many of us know how this number is calculated? This paper provides that information. In this paper, we explain the calculation of the theoretical throughput for both the LTE Frequency Division Duplex (FDD) and Time Division Duplex (TDD) systems [5] [6].
System Model
We consider a downlink MIMO-OFDM system with m users, N_f subcarriers, n_T transmit antennas at the base station, and n_Rm receive antennas at the m-th mobile station. The data for a particular user, for example m, are transmitted in packets and denoted as s_m[n], where N_m is the number of spatial subchannels that are offered by the multiple transmit antennas. Since the channels are assumed to be quasi-static fading from one OFDM symbol to another, the time index is omitted for simplicity. We also assume that the elements in s_m[n] are independent with unit power. With proper guard timing and cyclic prefix, the estimated frequency-domain signal is

s^_m[n] = R_m^H[n] H_m[n] T_m[n] s_m[n] + R_m^H[n] ( Σ_{k≠m} H_m[n] T_k[n] s_k[n] + w_m[n] ),   (2)

where w_m[n] is the additive noise vector. With the assumption that the interference terms in (2) are Gaussian and independent, from the information-theoretic viewpoint the achievable aggregate rate for user m, denoted υ_m, becomes

υ_m = Σ_{n=1}^{N_f} log2 det( I + T_m^H[n] H_m^H[n] Φ_m^{−1}[n] H_m[n] T_m[n] ),

where Φ_m[n] is the covariance matrix of the interference plus noise. Therefore, the system throughput is

υ = Σ_m υ_m.
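For a given channel realization, the rate expression above can be evaluated numerically. The following Python sketch is only an illustration under simplifying assumptions (random Rayleigh channels, a trivial beamformer, and interference lumped into a scalar noise variance); all dimensions and values are invented:

import numpy as np

def user_rate(H_list, T_list, noise_var=1.0):
    # Achievable rate of one user: sum over subcarriers of
    # log2 det(I + H T T^H H^H / sigma^2), interference treated as noise
    rate = 0.0
    for H, T in zip(H_list, T_list):
        G = H @ T                                   # effective channel
        m = G.shape[0]
        rate += np.log2(np.linalg.det(np.eye(m) + (G @ G.conj().T) / noise_var).real)
    return rate

rng = np.random.default_rng(1)
n_rx, n_tx, n_streams, n_sub = 2, 4, 2, 8           # assumed dimensions
H = [(rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
     for _ in range(n_sub)]                         # Rayleigh channels
T = [np.eye(n_tx, n_streams) for _ in range(n_sub)]  # trivial beamformer
print(user_rate(H, T))   # bits/s/Hz aggregated over the 8 subcarriers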
Maximum Throughput with Maximum Bandwidth
For any system, throughput is first calculated in symbols per second; it is then converted into bits per second depending on how many bits a symbol can carry. For an LTE [7] system with 4 × 4 MIMO (4T4R), the throughput will be four times the single-chain throughput, i.e., 403.2 Mbps. Many simulations and studies show that about 25% of this is overhead used for control and signalling, so the effective throughput will be 300 Mbps. The 300 Mbps number is for the downlink and is not valid for the uplink. In the uplink we have only one transmit chain at the UE end, so with 20 MHz we can get a maximum of 100.8 Mbps, as per the single-chain calculation. After considering 25% overhead, we get 75 Mbps in the uplink. This is how the throughput figures of 300 Mbps for the downlink and 75 Mbps for the uplink, quoted everywhere, are obtained.
Duplex Schemes
Spectrum flexibility is one of the key features of LTE. In addition to the flexibility in transmission bandwidth, LTE also supports operation in both paired and unpaired spectrum by supporting both FDD- and TDD-based duplex operation with the same time-frequency structures. Although the time-domain structure is, in most respects, the same for FDD and TDD, there are some differences, most notably the presence of a special subframe in the case of TDD. The special subframe is used to provide the necessary guard time for downlink-uplink switching, as shown in Table 1.
DL and UL Throughput Calculation for LTE FDD
The FDD system has a paired spectrum: the same bandwidth for the downlink as for the uplink. A 20 MHz FDD system has 20 MHz for the downlink and 20 MHz for the uplink. For the throughput calculation, suppose: Bandwidth: 20 MHz; Multiplexing scheme: FDD; UE category: Cat.
LTE TDD and Its Frame Structure
Before starting the throughput calculation, let us become familiar with LTE-TDD [6]. As stated earlier, TDD uses unpaired spectrum: the same bandwidth must be used for DL and UL on a time-sharing basis. If we have 20 MHz of spectrum, this 20 MHz bandwidth serves both DL and UL. The LTE TDD frame structure is shown in Figure 1. The TDD frame consists of downlink subframes, uplink subframes and special subframes. There are seven possible configurations for the LTE TDD frame; here D denotes downlink, S the special subframe and U uplink. Frames with 5 ms switching periodicity have two "S" subframes, and frames with 10 ms periodicity have only one "S" subframe.
The special subframe has 9 different configurations [8]. A special subframe is divided into the Downlink Pilot Time Slot (DwPTS), the Guard Period (GP) and the Uplink Pilot Time Slot (UpPTS), depending upon the number of symbols.
DL and UL Throughput Calculations for LTE TDD
TDD system throughput calculations are somewhat more complex than for an FDD system, as the same spectrum is used by the uplink, the downlink, and the guard period (used for the transition from downlink to uplink) [9]. For the throughput calculation, suppose: Bandwidth: 20 MHz; Multiplexing scheme: TDD; TDD configuration: 2 (D-6, S-2 and U-2); Special subframe configuration: 7 (DwPTS-10, GP-2 and UpPTS-2); UE category: Cat. 3.
Conclusion
In this paper, we discussed the LTE system throughput calculation for both TDD and FDD systems.
H_m[n] denotes the MIMO channel matrix from the base station to user m at subcarrier n. The data symbol vector s_m[n] is post-multiplied by the transmit beamforming matrix T_m[n] ∈ C^(n_T × N_m) before transmission from the antennas, with the transmit power set to a fixed value. The same holds for the receive beamforming matrix R_m[n]. The noise elements are i.i.d. complex Gaussian with zero mean and variance σ².
3GPP LTE technology supports both TDD and FDD multiplexing. The paper describes all the factors that affect the throughput, such as bandwidth, modulation, UE category and multiplexing, and shows how the figures of 300 Mbps in DL and 75 Mbps in UL are obtained, together with the assumptions made to calculate them. The paper describes the steps and formulae to calculate the throughput for an FDD system and for a TDD system with Configuration 1 and Configuration 2.
In LTE, for 20 MHz there are 100 resource blocks, and each resource block has 12 × 7 × 2 = 168 symbols per ms in the case of normal CP. So there are 16,800 symbols per ms, i.e., 16,800,000 symbols per second or 16.8 Msps. If the modulation used is 64-QAM (6 bits per symbol), the throughput will be 16.8 × 6 = 100.8 Mbps for a single chain.
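The arithmetic of this calculation can be checked with a few lines of code; the following sketch simply reproduces the numbers of the text, with the 25% overhead figure taken as the stated assumption:

# LTE FDD peak throughput, normal cyclic prefix
rbs          = 100          # resource blocks in 20 MHz
re_per_rb_ms = 12 * 7 * 2   # subcarriers x symbols x slots = 168 per ms
bits_per_sym = 6            # 64-QAM
chains_dl    = 4            # 4x4 MIMO

msps      = rbs * re_per_rb_ms * 1000 / 1e6   # 16.8 Msps
single    = msps * bits_per_sym               # 100.8 Mbps per chain
dl_peak   = single * chains_dl                # 403.2 Mbps raw
dl_usable = dl_peak * 0.75                    # ~302 Mbps after 25% overhead
ul_usable = single * 0.75                     # ~75.6 Mbps (one UL chain)
print(single, dl_peak, dl_usable, ul_usable)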
Modulation supported: as per Cat. 3, TBS index 26 for DL (75,376 bits for 100 RBs) and 21 for UL (51,024 bits for 100 RBs). Throughput in TDD can be calculated by the following formulae: DL throughput = number of chains × TB size × (contribution by DL subframes + contribution by DwPTS in the SSF); UL throughput = number of chains × TB size × (contribution by UL subframes + contribution by UpPTS in the SSF). The TB size for DL is 75,376 bits and for UL it is 51,024 bits for a category 3 UE. A 20% contribution comes from the 2 UL subframes per 10 ms frame, and a factor of 0.2 × (2/14) is contributed by the special subframe, which occurs twice per frame and has 2 of its 14 symbols allocated to the uplink. So UL throughput = 1 × 51,024 bits/ms × 0.228571 ≈ 11.66 Mbps ≈ 12 Mbps.
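The TDD case can be scripted in the same way. The following sketch encodes TDD configuration 2 with special subframe configuration 7 as described above; the assumption of two DL chains for a Cat. 3 UE is ours:

# LTE TDD throughput, configuration 2 (6 DL, 2 SSF, 2 UL per 10 ms frame),
# special subframe configuration 7 (DwPTS 10, GP 2, UpPTS 2 of 14 symbols)
tb_dl, tb_ul = 75376, 51024     # Cat. 3 transport block sizes, bits per ms
dl_sf, ul_sf, ssf = 6, 2, 2     # subframes per 10 ms frame

dl_fraction = dl_sf / 10 + ssf / 10 * (10 / 14)   # DL subframes + DwPTS share
ul_fraction = ul_sf / 10 + ssf / 10 * (2 / 14)    # UL subframes + UpPTS share

dl_mbps = 2 * tb_dl * 1000 * dl_fraction / 1e6    # two DL chains assumed
ul_mbps = 1 * tb_ul * 1000 * ul_fraction / 1e6    # single UL chain
print(dl_mbps, ul_mbps)    # UL ~ 11.66 Mbps, matching the text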
"Engineering",
"Computer Science"
] |
Analysis of 3D crack patterns in a free plate caused by thermal shock using FEM-bifurcation
Damage to components made of brittle material due to thermal shock represents a high safety risk. Predicting the degree of damage is therefore very important to avoid catastrophic failure. An energy-based linear elastic fracture mechanics bifurcation analysis using a three-dimensional finite element model is presented here, which allows the determination of crack length and crack spacing for a defined thermal load in a free plate. It is assumed that a hierarchical crack pattern is formed due to cooling penetration. The constant growth of the ideal regular pattern of hexagons can change into a pattern with a different symmetry by slightly changing the cooling conditions. This bifurcation point is determined by the second derivative of the mechanical potential with respect to the geometry of the crack front. The very high computational effort for the second derivative is reduced by describing the three-dimensional crack front with a limited number of Fourier coefficients. A one-dimensional transient temperature field at a sufficient distance from the plate edge is assumed. For alumina, the crack length and crack spacing curves are computed for different quenching temperatures and heat transfer coefficients. The corresponding final crack lengths are also calculated as a measure of damage. Comparison with a two-dimensional model confirms the expected 1/2 difference in crack spacing. Data from thermal shock experiments are also presented. However, due to the cracks caused by the strong cooling at the edge, these correspond to the results of the two-dimensional model.
Introduction
The study of thermal shock in brittle materials and the accompanying damage in the form of cracks has occupied scientists for decades. Fascinating, complex crack pattern formations arise during thermal shock. Besides this, it is also important to know the residual strength after a thermal shock event in order to avoid catastrophic failure in technical systems such as gas turbines. Engineering ceramics are widely used as thermal barriers, as they can withstand very high temperatures. This application is always critical due to the relatively low fracture toughness of ceramics. It was analyzed in Kingery (1955) and Hasselman (1963, 1969) which material properties affect thermal shock damage. This was investigated experimentally by quenching alumina rods in water (Hasselman 1970). An overview of various failure phenomena and related thermal stress resistance parameters is given in Hasselman (1985). The first investigations of 2D hierarchical parallel crack patterns and the associated analysis with a bifurcation instability criterion were carried out in Nemat-Nasser et al. (1978) and Bažant et al. (1979). Based on this criterion, the 2D crack patterns that occur when quartz glass plates are quenched in water (Bahr et al. 1986) were qualitatively investigated in Bahr et al. (1988) using the Boundary Element Method (BEM). It was found that after thermal shock the final crack length has a significant influence on the strength degradation. Nemat-Nasser et al. (1980) then published another bifurcation instability criterion, which was also used in later publications (Bahr et al. 1996, 2010; Hofmann et al. 2011). In Bahr et al. (1992), another bifurcation criterion based on the mechanical potential and an eigenvalue analysis was shown. This criterion is equivalent to that of Nemat-Nasser et al. (1980). Furthermore, in Bahr et al. (1992) the correctness of the assumption of the hierarchical crack pattern, namely that one crack stops and another one grows, and the influence of a disturbance on it were shown. Many publications investigated crack patterns by experiments and by various simulation methods, e.g., Shao et al. (2011) and Xu et al. (2016a). However, they all focus on 2D crack patterns.
In Bourdin et al. (2014), a scalar damage variable within a non-local damage model was used to simulate the growth of 3D columnar joints in thermally shocked ceramics. It was shown that this can explain the formation of imperfect polygonal patterns and their selective coarsening during propagation, but an analysis of the 3D crack pattern geometry is still missing. 3D crack patterns of basalt columns were investigated with the Finite Element Method (FEM) in Bahr et al. (2009) by a fracture-mechanics bifurcation analysis based on the local energy release rate. This research is similar to the investigated 2D cases in Bahr et al. (1988, 2009, 2010). In order to determine the 3D crack contour, an elaborate gradient method was used for the bifurcation analysis. To avoid this cumbersome iteration of the crack front, a new effective calculation method was developed based on the global energy release rate G and on a Fourier series expansion of the 3D crack front in order to simplify the bifurcation analysis. The application of this method has been demonstrated in Anderssohn et al. (2018). To enable a comparison with Bahr et al. (2009), the growth of 3D crack patterns of basalt columns was also investigated with a steady-state temperature field. The method of Anderssohn et al. (2018) can now be used to investigate the growth of other crack patterns, e.g., those generated by drying or, as here, by a transient thermal shock load case for a finite length.
In the present work, we use the ideal hexagonal three-column pattern from Bahr et al. (2009) and the Fourier series expansion of the 3D crack front contour from Anderssohn et al. (2018). An FE model for an infinite free plate of thickness 2b is built, which allows for calculating the resulting normalized crack spacing L/b and crack length a_0/b due to thermal shock. Finally, the goal of the present study is to predict the normalized final crack length a_end/b, which is crucial for the residual strength of ceramic components that were damaged by thermal shocks. This paper is organised as follows: Sect. 2 presents the analytical temperature field and the basic thermomechanical equations. In addition, the underlying model of the periodically continuable ideal hexagonal three-column pattern is described. The presentation of the theory of the eigenanalysis of the global mechanical potential, together with the development of the crack front geometry as a Fourier series, which was used to find the bifurcation points, concludes the section. In Sect. 3, the numerical parameters of the FE model and the central Finite Difference Method (FDM) used to determine the derivatives of the mechanical potential are explained. A convergence analysis and the effects of different numbers of Fourier series coefficients on the result are shown. Calculations for different temperature differences ΔT and heat transfer coefficients h were performed in Sect. 4. The influence of ΔT and h on the crack length a_0/b and the crack spacing L/b is discussed. For the validation of the 3D model, thermal shock experiments and calculations for an already existing 2D model were performed, evaluated and compared with the data of the 3D model in Sect. 5. Furthermore, a novel evaluation method based on Computed Tomography (CT) was applied to the thermally shocked specimens in this work. Note: similar experiments have already been performed, for example, in Shao et al. (2010) and Xu et al. (2016b); unfortunately, no material data are given in these publications, so they cannot be used for validation. Finally, in Sect. 6, several conclusions are drawn and an outlook on possible future applications of the model presented in this paper is given.
Temperature profile
First, a plate is heated to the temperature T_0 and then quenched in a cooling liquid of temperature T_1. The area under consideration is the center of the plate, far away from the edges. In the center, the temperature field can be described by a simple 1D analytical solution, while at the edges a more complex temperature field is expected. It is assumed that the cracking process has no influence on the temperature field. This assumption is supported by two points: firstly, the cracks are orthogonal to the isotherms, which means that they do not interfere with the heat flux; secondly, the crack opening is very small, so the heat transport through the air-filled cracks is very small compared to the heat transport through the solid. Due to the short duration of the thermal shock process, this is a common hypothesis (Bahr et al. 1988; Li et al. 2013). Therefore, the transient temperature profile is used for an infinite plate with a thickness of 2b and with symmetrical boundary conditions (Tautz 1971):

T(z, δ) = T_1 + ΔT Σ_{n=1}^{∞} [2 sin(μ_n) / (μ_n + sin(μ_n) cos(μ_n))] cos(μ_n z/b − μ_n) exp(−μ_n² δ²/(4b²)),   (1)

where δ = √(4Dt), μ_n = (hb/κ) cot(μ_n) and ΔT = T_0 − T_1. Here, μ_n, n = 1...∞, are the positive eigenvalues, κ the thermal conductivity, h the heat transfer coefficient and D the thermal diffusivity. In Tautz (1971), the origin of the z coordinate is in the centre of the plate, whereas in the present study it is at the surface. This shift of the coordinate system is implemented in Eq. (1) by the −μ_n term in the second cosine function (see also Bahr et al. 1987, 1988). For short times δ²/4b² ≪ 1 (it should be noted that in the literature τ is often used for δ²/4b²), the number of eigenvalues μ_n used must be high enough; otherwise Eq. (1) is not valid (Bahr et al. 1987; Martin et al. 2019). 50 and 100 eigenvalues were calculated with the mathematics program MATLAB. For δ/b = 0.01, the maximal difference of the summation term in Eq. (1) between these two cases was approx. 3.6 · 10⁻⁴. In the simulations the smallest time is δ/b = 0.1; therefore 100 eigenvalues are more than enough.
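Eq. (1) is straightforward to evaluate numerically. The following Python sketch is our illustration (the original computations used MATLAB): the eigenvalues μ_n are found as the roots of μ tan μ = hb/κ, bracketed in the intervals (nπ, nπ + π/2), and the series is then summed:

import numpy as np
from scipy.optimize import brentq

def eigenvalues(biot, n_max=100):
    # Positive roots of mu * tan(mu) = biot, one in each (n*pi, n*pi + pi/2)
    f = lambda mu: mu * np.tan(mu) - biot
    eps = 1e-9
    return np.array([brentq(f, n * np.pi + eps, n * np.pi + np.pi / 2 - eps)
                     for n in range(n_max)])

def theta(z_over_b, delta_over_b, biot, n_max=100):
    # Dimensionless temperature (T - T1)/dT from Eq. (1)
    mu = eigenvalues(biot, n_max)
    coeff = 2 * np.sin(mu) / (mu + np.sin(mu) * np.cos(mu))
    return np.sum(coeff * np.cos(mu * z_over_b - mu)
                  * np.exp(-mu**2 * delta_over_b**2 / 4))

print(theta(z_over_b=0.0, delta_over_b=0.1, biot=10.0))  # surface cools first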
In Fig. 1 the temperature function Eq. (1) is plotted for different hb/κ and δ/b. The greater hb/κ is, the better is the heat transport between the heated plate and the cooling liquid. An increase of hb/κ leads to an increase of the temperature gradient and therefore the thermal stresses also increase. Because of that, the value of the heat transfer coefficient h is very important for the determination of the damage by the thermal shock. It should be noted that the experimental determination of h is very difficult. In Singh et al. (1981), Zhou et al. (2012), and Jiang et al. (2012) different methods were used to determine h in the case of a thermal shock of alumina in water. The results for the heat transfer coefficient h range from 10 3 Wm −2 K −1 to 10 5 Wm −2 K −1 .
Fundamental thermoelastic equations
Due to the thermal shock, the ceramic plate experiences shrinkage, leading to mechanical stresses. In Takeuti and Furukawa (1981) it is shown that the ratio V = v_p b/D decides whether acceleration terms have to be considered for the penetration of a temperature field. The thermal diffusivity can be calculated as D = κ/(ρ C_p). However, for the materials used in this work, neither D nor the specific heat capacity C_p is given, see Table 1. From Zhou et al. (2012), D = 5.4 · 10⁻⁶ m²s⁻¹ for alumina with similar purity (99.5%) and density (ρ = 3.98 · 10³ kg m⁻³) can be taken. For the Al₂O₃ material with 99.7% purity used in the experiments, with a plate thickness of b = 3.5 · 10⁻³ m, V = v_p b/D = 6.9 · 10⁶ is obtained. For such large values of V, according to Takeuti and Furukawa (1981), p. 118, the ratio of dynamic and quasi-static stresses is equal to one; thus the acceleration terms can be neglected. Volume forces are not present, and the local mechanical equilibrium conditions read

σ_ij,j = 0,   (2)

where σ_ij are the mechanical stresses. Eq. (2) must be fulfilled at every point of the structure. We assume an isotropic linear thermo-elastic material with linearised kinematics, for which the strain ε_ij can be separated into an elastic and a thermal part:

ε_ij = (u_i,j + u_j,i)/2 = (1+ν)/E σ_ij − ν/E σ_kk δ_ij + α (T(z, δ) − T_0) δ_ij.   (3)

Here, u_i denotes the displacement, E is the Young's modulus, ν the Poisson's ratio and δ_ij the Kronecker symbol in the elastic strain ε_ij^el. The third term describes the thermal strain ε_ij^th. Here, α is the thermal expansion coefficient and T(z, δ) − T_0 the temperature difference according to Eq. (1). T_0 is the reference temperature at which no thermal stresses are present.
For a plate having a 3D crack pattern, to which an instationary temperature field like Eq. (1) is applied, unfortunately no analytical solution of Eq. (2) is available. Therefore, the solution must be obtained numerically. In the present research, this is done by the FEM through the program Ansys. Please note that an analytical solution of the stress field which fulfills Eq. (2) for a plate without cracks is given in Timoshenko and Goodier (2017) and Parkus (1968).
Model development and boundary conditions
In experimental investigations on dried starch-water suspensions, it was found that in a regular hexagonal crack pattern three columns merge into a larger one, see Fig. 2a. A similar type of column can be observed in basalt columns, which were formed by thermal contraction of solidified lava. In both cases, the tensile stress caused by drying or cooling is the reason for the appearance of the cracks (see also Goehring et al. 2006; Goehring 2008). The process of merging three columns into one repeats periodically in the depth (z-direction) and also across the width of the material. This observation was used to construct the model described below. Fig. 2b shows the idealised model with three regions. In region (I) there are three hexagonal columns of equal size with column diameter L. These columns are produced by the steady propagation of the crack front; this state is referred to as the fundamental solution. The model then assumes that the crack stops growing at the point (bp) in the middle of the three columns, so that another solution exists besides the fundamental one. The point (bp) is hence a bifurcation point in the mathematical sense.
In region (II), the crack front between s/p = 0 and s/p = 1 in Fig. 3a stops growing. With this arrest, the high symmetry of the hexagons is lost and mixed-mode crack propagation with curved crack faces occurs. In region (III), the three columns merge to form a larger column with a new column diameter of √3 L. The solution in region (II), after the bifurcation point, is called the post-critical solution.
As in Anderssohn et al. (2018) and Bahr et al. (2009), we use symmetry and periodicity to reduce the computational cost. The representative volume element used for the calculation in the case of a bifurcation is one half of a hexagonal column (grey area in Fig. 3b). This is sufficient due to the periodic repetition; in Fig. 3b this is indicated by the three points 0 at which the crack stops in the two surrounding three-column configurations. This leads to a point symmetry with respect to point 2, with a corresponding coupling of the displacements in the ligament surfaces E and F of the representative volume element in Fig. 3b (Eq. (4)). Here, the index n stands for the normal direction of the ligament surfaces and s is the coordinate along the crack front. Furthermore, the stresses must be consistent at the surfaces E and F. The plate is not subjected to external kinematic constraints, so the two surfaces G and the ligament surface at D are unloaded in the normal direction, i.e.

0 = ∫_A σ_ij n_j dA and u_i n_i = constant.  (5)
This condition was implemented with the Ansys command CE, requiring all FE nodes of a surface to have the same displacement in the surface normal direction. The surface of the plate at z = 0 and the crack surfaces are unloaded, i.e. σ_ij n_j = 0. The lower face, at z = b, is a symmetry plane, at which the boundary conditions u_i n_i = 0 and σ_ij n_j = 0 (i ≠ j) apply.
In the case of uniform column growth, a twelfth of the column is sufficient for the calculation, since a higher symmetry is present than in the case of column merging. The ligament face B, shown in Fig. 4, is force-free in the normal direction according to Eq. (5). The faces A and C and the lower face, at z = b, are symmetry planes for which the boundary conditions u_i n_i = 0 and σ_ij n_j = 0 (i ≠ j) hold. The top surface and the crack surfaces are free of forces, σ_ij n_j = 0, as in the half-column model.
A Fourier cosine series,

a(s) = a_0 + Σ_{i=1}^{j} C_i cos(iπs/(2p)),  (6)

is used for the parameterization of the crack front. Here, a_0 is the crack length, j the number of cosine terms, C_i the time-invariant coefficients, p = L/√3 the column side width shown in Fig. 3b, and s the crack front curve variable. The product 2p is included so that the Fourier series can serve as the crack front geometry for the fundamental as well as for the bifurcation solution. The function a(s) is the crack length measured from the surface of the plate, as shown in Fig. 4.
For steady growth, the crack front geometry of the three-column configuration must match that of the opposite column. Due to the symmetry and periodicity of this configuration, i = 4, 8, ... are the only coefficients in Eq. (6) that can be nonzero. Because increasing the number of coefficients greatly increases the computation time while having little effect on the result (see Fig. 9), the series is truncated at the first admissible coefficient, C_4.
For the case of quasi-static crack growth we introduce the change in the global mechanical potential (see for example Kuna (2013, pp. 42-44)),

Π̇ = U̇_el + Ẇ_fr − Ẇ_a,  (7)

where the elastic strain energy reads

U_el = ∫_V ½ σ_ij ε^el_ij dV,  (8)

W_fr = G_C ∫ a(s) ds is the fracture energy and W_a is the external work. In Eq. (7), the dot above the symbols represents the change between two states. These states can be two different times or, as in this study, the state before the crack extension and the state after the crack extension ∂A. As there are no external forces or supports acting on the free plate, Ẇ_a = 0. Furthermore, the kinetic energy equals zero. The thermal energy has no influence on the crack growth, and thus the change in the thermal energy is negligible. Consequently, Eq. (7) has the character of a potential. The integral over the crack front a(s) in W_fr is equal to the full crack area, and G_C is the critical energy release rate of the material. Integration of the strain energy density ½ σ_ij ε^el_ij over the full volume leads to the elastic strain energy U_el, which depends implicitly on a(s). Using the Fourier series (Eq. (6)), the mechanical potential Π(a(s)), which is a functional, is thus converted into the function Π(a_0, C_i). The fracture mechanics length l_0 (Eq. (11)) is the corresponding characteristic length in the 3D case: it contrasts the fracture energy per crack area, G = G_c, necessary for crack propagation, with the stored elastic energy per volume. According to Irwin (1958, p. 560), the mode I energy release rate for the 3D case can be calculated from G = K_I²(1 − ν²)/E. The lengths l_0 and κ/h are given by the mechanical and thermal material properties; the geometrical length b is given by half of the plate thickness. We consider the problem quasi-statically at a time δ = √(4Dt). For this time, a_0 and L can be calculated using the two conditions of stationary crack propagation, G = G_c, and the bifurcation criterion, which will be discussed below.
In the following we give a short overview of the bifurcation analysis considerations according to Anderssohn et al. (2018).
For stationary crack propagation, it is necessary that

G = G_c  (13)

is fulfilled.
The system tends towards a minimum of the mechanical potential. This minimum can be found by setting the first partial variation of the mechanical potential, Eq. (7), to zero, i.e.

δΠ = 0.  (14)
Due to the mechanical potential Π(a_0, C_i), Eq. (14) can be rewritten as

∂Π/∂a_0 = 0, ∂Π/∂C_i = 0, i = 1, ..., j,  (15)

which is the fundamental solution of the problem. This solution provides j + 1 equations. G_c and l_0 are both known from Eq. (11), and if we assume that L is also given, we can use Eq. (15) to determine a_0 and C_i at a time δ. Note that changes of the pattern perpendicular to the crack growth direction are excluded in the case of the fundamental solution. This is ensured by the fact that only a mode I stress intensity factor acts along the entire crack front. The phenomenon of the merging of the three hexagonal columns, as described in Sect. 2.3, can be understood as a mathematical bifurcation problem. This means that there exists a second solution in addition to the fundamental one. Our goal is to find this bifurcation solution for a set of critical characteristic lengths. Starting from Eq. (15), we form the second derivative, which leads to the Hessian matrix

H_ik = ∂²Π/(∂C_i ∂C_k), i, k = 0, ..., j,  (16)

where C_0 = a_0. The eigenvalue problem Hv = λv (17) has a nontrivial solution when the coefficient determinant is zero,

det(H − λI) = 0.  (18)

This leads to j + 1 eigenvalues λ_i. All eigenvalues are real because of the symmetry of the Hessian matrix, resulting in j + 1 real eigenvectors v_i computed from (H − λ_i I)v_i = 0. If Eq. (18) yields only positive eigenvalues, a minimum of the mechanical potential Π (Eq. (7)) has been found and the system is stable; this is the case for the fundamental solution, see Sect. 2.3. When one or more of the eigenvalues become negative, the system is unstable and bifurcation occurs, which leads to the merging of the three columns into one. The behavior after the bifurcation can be determined from the eigenvector v in conjunction with the fundamental solution a(s) (Nguyen 1987). A set of characteristic lengths is sought at which the smallest eigenvalue becomes zero, i.e.

λ_min = 0.  (19)
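This eigenvalue analysis can be sketched in a few lines (our own minimal Python illustration; Pi is a hypothetical wrapper that runs one full FE evaluation of Π(a_0, C_1, ..., C_j) for a given coefficient vector):

    import numpy as np

    def hessian(Pi, x, h=5e-4):
        # Central-difference Hessian of Pi at x = (a0, C1, ..., Cj); cf. Eq. (16) with C0 = a0.
        m = len(x)
        H = np.zeros((m, m))
        f0 = Pi(x)
        I = np.eye(m)
        for i in range(m):
            for k in range(i, m):
                if i == k:
                    H[i, i] = (Pi(x + h*I[i]) - 2.0*f0 + Pi(x - h*I[i]))/h**2
                else:
                    H[i, k] = H[k, i] = (Pi(x + h*I[i] + h*I[k]) - Pi(x + h*I[i] - h*I[k])
                                         - Pi(x - h*I[i] + h*I[k]) + Pi(x - h*I[i] - h*I[k]))/(4.0*h**2)
        return H

    def lambda_min(Pi, x, h=5e-4):
        # Smallest eigenvalue of the symmetric Hessian; bifurcation when it reaches zero, Eq. (19).
        return np.linalg.eigvalsh(hessian(Pi, x, h)).min()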
When Eq. (19) is satisfied, the bifurcation point is reached. The crack grows in a stationary manner as long as Eq. (13) is satisfied. It stops growing when the condition (see also Bahr et al. 1988)

∂G/∂δ ≤ 0 for fixed a_0  (20)

is fulfilled. This condition states that, for a fixed crack length a_0, the energy release rate G does not increase even though the temperature field (the cooling) continues to penetrate. If the condition remains fulfilled as the temperature field penetrates further, crack growth does not resume. Thus, the fixed crack length a_0 in Eq. (20) is the final crack length a_end.
Finite element model
In this section we first describe the general procedure for determining a_0/b and L/b from the FE analysis. We then give more details about the numerical methods used and the influence of the number of Fourier series coefficients. As described in Sect. 2.4, there are six characteristic lengths, of which four are given. The remaining two can be determined with the condition for stationary crack growth, Eq. (13), and the condition for the occurrence of a bifurcation point, Eq. (19).
For fixed hb/κ and δ/b, the energy release rate G_cFEM is calculated by FE analysis for different crack lengths a_0/b and crack spacings L/b according to Eq. (15)_1, and the smallest eigenvalue λ_min is determined according to Eqs. (16)-(18). Due to the restriction to linear elastic material and small deformations, normalised material values E = 1, α = 1 and a normalised temperature difference ΔT = 1 can be used in the FE calculations. The dependence on ν will be discussed below. Using G_cFEM together with the normalised E, α, ΔT and a given ν, the fracture mechanics length l_0FEM can be obtained according to Eq. (11). In a further step, the sought a_0/b and L/b are determined where the bifurcation condition, Eq. (19), is fulfilled. Furthermore, l_0Mat is calculated for a given material, e.g. from Table 1. The fracture mechanics length l_0FEM is then linearly interpolated so that it matches l_0Mat; this corresponds to the condition for steady-state crack growth, Eq. (13). Thus, the crack length a_0/b and crack spacing L/b are determined for a given δ/b and a given hb/κ. This process is then repeated for further values of δ/b and hb/κ.
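The search just described can be sketched as follows (a hedged illustration with hypothetical names; lam_tab[i, j] and l0_tab[i, j] stand for the tabulated FE results λ_min and l_0FEM/b at crack length a_grid[i] and spacing L_grid[j], for one fixed pair hb/κ, δ/b):

    import numpy as np

    def bifurcation_curve(a_grid, L_grid, lam_tab, l0_tab):
        # For each spacing L/b, locate by linear interpolation the crack length a0/b
        # at which lambda_min changes sign (Eq. (19)), and record the l0/b there.
        pts = []
        for j in range(len(L_grid)):
            lam = lam_tab[:, j]
            for i in range(len(a_grid) - 1):
                if lam[i]*lam[i + 1] < 0.0:          # sign change brackets lambda_min = 0
                    t = lam[i]/(lam[i] - lam[i + 1])
                    a0 = (1 - t)*a_grid[i] + t*a_grid[i + 1]
                    l0 = (1 - t)*l0_tab[i, j] + t*l0_tab[i + 1, j]
                    pts.append((a0, L_grid[j], l0))
                    break
        return np.array(pts)

    def match_material(pts, l0_mat):
        # Interpolate along the bifurcation curve so that l0_FEM = l0_Mat (Eq. (13));
        # assumes l0 varies monotonically along the curve.
        a0 = np.interp(l0_mat, pts[:, 2], pts[:, 0])
        L = np.interp(l0_mat, pts[:, 2], pts[:, 1])
        return a0, L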
Note that ∂G/∂δ is always determined with the FE analysis; during the evaluation it can be checked whether the condition for crack arrest, Eq. (20), is fulfilled. Furthermore, Eq. (11) depends on the Poisson's ratio ν. In Fig. 5 the normalised fracture mechanics length l_0/b is plotted against ν, and it can be seen that ν has a non-negligible influence. The dependence on ν also appears in the analytical solution for the temperature-loaded free plate in Timoshenko and Goodier (2017). Therefore, the Poisson's ratio ν of the material must be taken into account in the FE analysis.
Geometric limits result from the variation of the crack spacing, L/b = 0.05 ... 1. A change of L changes the model size, and due to the meshing, the crack length is thus limited to the range a_0/b = 0.1 ... 0.9. For some combinations of a_0/b and L/b, crack closure occurs when δ/b < 0.5. A more detailed investigation of the values a_0/b < 0.1, L/b > 1 and δ/b < 0.5 would be possible by adapting the FE mesh.
The derivatives in the fundamental solution, Eq. (15), in the bifurcation solution, Eq. (19), and in the change of the energy release rate over δ/b, Eq. (20), were determined with the finite difference method (FDM), analogous to Anderssohn et al. (2018). The central difference quotient, with step size h_a for the crack length and h_δ for the time, was used for this purpose.
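For concreteness, the central difference quotients used here look as follows (a minimal sketch; f stands for a hypothetical wrapper returning, e.g., the strain energy or the energy release rate from one FE run):

    def first_derivative(f, x, h):
        # Central difference quotient, O(h^2) accurate.
        return (f(x + h) - f(x - h))/(2.0*h)

    def second_derivative(f, x, h):
        # Central second difference quotient, O(h^2) accurate.
        return (f(x + h) - 2.0*f(x) + f(x - h))/h**2

    # Example: crack-arrest check of Eq. (20) with step size h_delta/b = 5e-4:
    # arrest = first_derivative(G_of_delta, delta, 5e-4) <= 0.0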
Isoparametric hexahedral elements with quadratic shape functions were chosen for the FE mesh. To guarantee the independence of the results from the step size h_a in the FDM for the first and second derivatives, small step sizes compared to the crack spacing are required. This means that the strain energies must be determined as accurately as possible. Especially at the crack front, increased stresses and strains occur due to the singularity. Therefore, the crack tip was meshed with singular elements (Barsoum 1976) and a much finer discretisation was carried out in the vicinity of the crack tip (see Fig. 1). A convergence analysis confirmed that the FE mesh is fine enough and the step sizes h_a/b and h_δ/b are sufficiently small for the FDM. Fig. 6 shows the results for the fundamental solution, Eq. (15)_1, using the twelfth-hexagon model, and Fig. 7a shows the results for λ_min (Eqs. (16)-(18)) using the half-hexagon model. No differences in the computed l_0/b and λ_min are discernible over the presented range of a_0/b for increasingly finer FE meshes and smaller step sizes. For the calculation of λ_min, the second derivative of the strain energy is needed according to Eq. (16); thus λ_min is more sensitive to numerical inaccuracies than l_0/b. Therefore, for λ_min, the range in which the curve crosses the abscissa (where the bifurcation point occurs according to Eq. (19)) is shown in detail in Fig. 7b. Because of the many possible configurations of L/b, δ/b and hb/κ, the following settings were chosen for the further analyses, to be on the safe side: 65,000 nodes for the twelfth-hexagon model, 380,000 nodes for the half-hexagon model and h_a/b = 5·10⁻⁴ for the step size of the crack length. For the choice of h_δ/b, a consideration similar to that for λ_min was made for ∂G/∂δ, as shown in Fig. 8. Again, the differences due to the various step sizes h_δ/b are negligible. Based on Fig. 8b, the step size h_δ/b = 5·10⁻⁴ was chosen for the further calculations.
The Hessian matrix, Eq. (16), was passed to MATLAB, where the eigenvalues λ_i were calculated.
The 1D temperature field Eq. (1) was loaded into the nodes of the 3D FE mesh using a special routine.
To determine C_4/b (according to Eq. (15)_2), five FEM calculations were performed with given crack length a_0/b and given crack spacing L/b using the twelfth-column model (fundamental solution). From these calculations with different prescribed values of C_4/b, five values of the elastic strain energy, Eq. (8), are obtained. According to the principle of minimum potential energy, the correct value of C_4/b is the minimum of the least-squares interpolation function. As an example, the calculated minimum and maximum values of C_4/b with the corresponding settings are given in Table 2.

Fig. 6 Results of the convergence test: l_0/b according to Eq. (15)_1 with Eq. (11) for given L/b = 0.5, δ/b = 1, ν = 0.22 and hb/κ = 1.0833, testing three different mesh qualities (n: mean number of FE nodes) and three different step sizes h_a
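The minimisation over C_4/b can be reproduced with a least-squares parabola through the five (C_4/b, U_el) pairs (a sketch; the data arrays are the results of the five FE runs described above):

    import numpy as np

    def optimal_C4(C4_vals, U_vals):
        # Least-squares parabola U(C4) = a*C4^2 + b*C4 + c through the five FE results;
        # its vertex is the C4/b of minimum potential energy.
        a, b, c = np.polyfit(C4_vals, U_vals, 2)
        return -b/(2.0*a)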
The influence of the number of coefficients j in the Fourier series, Eq. (6), is shown in Fig. 9. The calculations were performed for the Al₂O₃ 99.8% plate (Table 1) with h = 10⁴ Wm⁻²K⁻¹, resulting in hb/κ = 1.0833, and ΔT = 800 K. The crack spacing L/b versus the crack length a_0/b is presented in Fig. 9a. In Fig. 9b, the time variation of the energy release rate, ∂G/∂δ, versus the crack length a_0/b is shown. As can be seen, the differences between the results are small, in good agreement with the results of Anderssohn et al. (2018). In Fig. 9b, the final crack length a_end/b can be read off where the curves cross the abscissa. The relative difference between j = 1 (a_end,C1/b = 0.4839) and j = 4 (a_end,C4/b = 0.4936) is about 2%. To save computational time, j = 1 (a_0 and C_1) was chosen for the further calculations.
Crack pattern in a thermally shocked free alumina plate
It is assumed that an alumina plate is heated to a temperature T_0. Then the plate is quenched in a cooling liquid of constant temperature T_1. At a sufficient distance from the edges, the temperature field of Eq. (1) holds in the plate. Table 1 shows the material, thermal and geometrical properties of the alumina plates.

Table 2 Calculated minimum and maximum values of C_4/b with the corresponding settings for hb/κ = 1.0833 and ν = 0.23

Fig. 7 Results of the convergence test: λ_min according to Eqs. (16)-(19), with j = 1 (a_0 and C_1), for given L/b = 0.5, δ/b = 1, ν = 0.22 and hb/κ = 1.0833, testing three different mesh qualities (n: mean number of FE nodes) and three different step sizes h_a; b detailed plot at the intersection with the abscissa

Fig. 9 Calculations for the Al₂O₃ 99.8% plate (Table 1) with h = 10⁴ Wm⁻²K⁻¹ → hb/κ = 1.0833 and ΔT = 800 K
Extensive simulations were performed for the Al₂O₃ 99.7% plate (Table 1). The results are illustrated in Figs. 10 and 11.
If a_end/b or L_end/b, or better both, are known from experiments, and all parameters listed in Table 1 are also known, it is generally possible to determine the heat transfer coefficient h between any brittle material and the cooling medium from diagrams like Fig. 10.
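A sketch of this inverse use of the diagrams (hypothetical names; hbk_tab and a_end_tab are the tabulated curve of Fig. 10a, assumed monotonic over the tabulated range):

    import numpy as np

    def biot_from_a_end(hbk_tab, a_end_tab, a_end_meas):
        # Read hb/kappa off the simulated curve a_end/b(hb/kappa) for a measured a_end/b.
        return np.interp(a_end_meas, a_end_tab, hbk_tab)

    # The heat transfer coefficient then follows from the Biot number: h = (hb/kappa)*kappa/b.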
Here, the value hb/κ = 1 represents a kind of transition point. For hb/κ < 1, small changes of hb/κ have a very strong influence on the temperature gradient and thus on the stress; the final crack length (Fig. 10a) and the final crack spacing (Fig. 10b) then differ strongly. For hb/κ > 1, only large changes in hb/κ have a noticeable effect on a_end/b and L_end/b. For hb/κ > 2, the values of a_end/b and L_end/b quickly approach the asymptotic values at hb/κ = ∞. For the 99.7% alumina plate (Table 1) this means that the final results depend only weakly on the heat transfer coefficient in the range from h = 16,000 Wm⁻²K⁻¹ to h = ∞. This could explain why in the literature, e.g. Hasselman (1970), Jiang et al. (2012), Li et al. (2013) and Xu et al. (2016a), the heat transfer coefficient h between water and ceramics was found to vary over a large range.
The progression of L/b over a_0/b for hb/κ = 1.0833 and hb/κ = ∞, as representative examples, is shown in Fig. 11. As can be seen, L/b decreases with increasing ΔT. This was also observed in the experiments, see Figs. 12, 13 and 14, and by Bahr et al. (1986, Fig. 2) and Xu et al. (2016b, Fig. 2) on the outside of the quenched plates. Furthermore, Fig. 11a shows that the crack spacing L/b hardly changes as the crack length a_0/b grows. This indicates that there is no merging of three columns as described in Sect. 2.3 and that the columns therefore grow steadily (fundamental solution). The calculated bifurcation points are not valid in this case, because the fixed point 0 in the model (Fig. 3) wants to catch up again; a post-critical analysis could show this (see Bahr et al. 2009, p. 4). For hb/κ = ∞ in Fig. 11b, however, the crack spacing L/b almost triples for ΔT = 400 K as well as for ΔT = 450 K. For the other ΔT, L/b also changes significantly with increasing crack length. If the perturbation of the crack pattern is small enough, it will show the same trend as in Fig. 3, and the new diameter should be close to √3 L. With increasing ΔT the tensile stress increases according to Eq. (3). This leads to more cracks and thus to a decrease in the crack spacing L/b. This was also found in 2D thermal shock experiments with 50 × 10 × 1 mm³ thin ceramic specimens in Jiang et al. (2012, Table 1).
Consequently, the competition between the cracks increases: short cracks have a smaller crack spacing, while long cracks have a larger one.
Comparison with 2D model and experiments
To validate the developed model, analyses were also performed with a comparable 2D FE model. Furthermore, alumina plates were thermally quenched for different ΔT and evaluated. In Fig. 12 the results from the thermal shock experiments and from the 2D and 3D FEM bifurcation analyses are plotted. First, a brief explanation of the experiments and their evaluation is given. This is followed by a description of the 2D model with a comparison to the data from the 3D model and the experiments. At the end of this section, possible reasons why the 2D model fits the experimental data better than the 3D model are discussed.
The plates for the experiments were made of DOCERAM A-132 Al₂O₃ 99.7% with the properties given in Table 1 and dimensions of 40 × 40 mm². The plates were heated to T = 550 °C, 750 °C and 950 °C and quenched in boiling water at T = 100 °C. The quenching temperature differences were thus ΔT = 450 K, 650 K and 850 K. The plates were then treated with a contrast agent (Schilling et al. 2005) to achieve a better separation of ceramic and crack in the evaluation. Next, the plates were scanned by CT in High Aspect Ratio Tomography mode.
Through the CT scan and the contrast agent, it is possible to produce images not only of the surface of the sample but also of its deep interior, with the cracks highlighted, see Figs. 13 and 14.
The crack length a_0/b and the crack spacing L/b in the thermal shock experiments were measured from Fig. 14. The procedure is exemplified by Fig. 14e. The sectional view was loaded into the program Engauge Digitizer, version 10.10. Based on the scale line, the thickness of the sample was determined (2b = 6.97 mm). Then horizontal lines were inserted from the top to the middle; these lines start and end at the outermost cracks. The intersections of the lines with the cracks were counted. The crack spacing is given by L = (width of the line)/(number of intersections − 1), and the associated crack length a_0 is the z-position of the line. With the measured b, the ratios L/b and a_0/b can then be determined. All data are collected in Table 3. The 2D FE model is described in Bahr et al. (1988). For the analysis, the bifurcation criterion from Nemat-Nasser et al. (1980) and Bahr et al. (1992) was used. In order to compare the results with the 3D model, the same material, thermal and geometrical settings (see Table 1, Al₂O₃ 99.7%) were used in the 2D model as in the 3D model. For the 2D model, the same temperature field, Eq. (1), was applied.
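The conversion from one measured line to the normalised quantities is straightforward (a minimal sketch of the evaluation step described above; names are ours):

    def line_to_ratios(line_width, n_intersections, z_position, b):
        # One horizontal line of the sectional CT view: crack spacing from
        # L = width/(intersections - 1); the line's depth z gives the crack length a0.
        L = line_width/(n_intersections - 1)
        return L/b, z_position/b          # (L/b, a0/b)

    # Example for Fig. 14e with 2b = 6.97 mm: line_to_ratios(width_mm, n_cuts, z_mm, 3.485)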
The results for the final crack length a_end/b, with the corresponding final penetration depth of the temperature field δ_end/b, and for the final crack spacing L_end/b in the case of ideal cooling (hb/κ = ∞) are presented in Fig. 15 for the 2D and 3D FE models. The values of δ_end/b match quite well. The values of a_end/b are slightly larger in the 2D FE model than in the 3D FE model; the difference in L_end/b is somewhat larger. A factor of 2 or less between the results of the 2D and 3D FE models agrees well with the findings in Bahr et al. (2009). This kind of difference in a_end/b and L_end/b was also found in the experiments by Shao et al. (2010). In our investigations, similar results were obtained for other values of hb/κ, as shown in Fig. 12.
As shown in Fig. 12, the results of the 2D analysis between hb/κ = 0.5416 and hb/κ = 1.0833 agree well with the measured values of the samples. Thus, a heat transfer coefficient between h = 4.3·10³ Wm⁻²K⁻¹ and h = 8.7·10³ Wm⁻²K⁻¹ can be inferred. This is slightly lower than the h measured in Zhou et al. (2012). The reason is that, at the lower water temperature of 20 °C used in Zhou et al. (2012), nucleate boiling occurs at low sample temperatures, which increases the heat transfer coefficient, whereas quenching in boiling water favours the formation of a vapour film, which decreases it (Singh et al. 1981).
The question arises why the 2D bifurcation analysis seems to predict the crack spacing L/b better than the 3D one. Two possible explanations are given here.
First, due to the fine grain of the material, the cracks can close again after unloading. It is therefore possible that the contrast agent does not reach every crack, so that some cracks, or their lengths, are poorly detectable. For more details see Zielke et al. (2021).
Second, the 2D and 3D models do not predict cracks that run through the entire body, see Fig. 14. In Fig. 13, especially in Fig. 13b, e, h, it can be seen that cracks start from the edge and run through the whole body. It can be assumed that there is a surplus of energy at the edge due to the high temperature gradient. This causes the cracks to grow unstably into the body. The fact that the cracks also branch is an indication of this (Kanninen and Popelar 1985, pp. 205-207). These cracks relieve the body and convert the 3D stress state into a 2D stress state.

Fig. 10 Final crack length and final crack spacing for the 99.7% alumina plate (Table 1) for different values of hb/κ and quenching temperature differences ΔT of 400 K, 450 K, 650 K, 850 K and 1050 K

Fig. 11 Normalised crack spacing L/b versus normalised crack length a_0/b (up to crack arrest) for a hb/κ = 1.0833 and b hb/κ = ∞, for the material parameters of the 99.7% alumina plate (Table 1)
Thus, there is no longer a closed plate when the cooling penetrates further from the surface: the penetrating cracks have cut the plate into many small slices. As shown by Shao et al. (2010, Figs. 3e and f), the crack pattern on the cooled surface looks similar when a whole plate and stacked individual plates are compared at high temperature differences ΔT. So, even for the stacked individual plates, the crack pattern is not invariant in the x- or y-direction (see also Bahr et al. 1986). However, the patterns on the interior surfaces of the stacked single plates differ from those on cut surfaces through the whole plate: more cracks, and thus a shorter crack spacing L/b, are found in the whole plate than on the interior surfaces of the stacked individual plates. This is not addressed in Shao et al. (2010); only the fact that crack branching occurs in the stacked single plates and not in the whole plate is explained there, by a different stress field caused by edge effects in the stacked single plates.
One of these reasons, or a combination of both, may explain why the 2D bifurcation analysis fits the measured data from the experiments well.
Nevertheless, in the following respect the data from the 3D model, as well as from the 2D model, match the experiments: both models predict that the crack spacing changes only slightly. This means that the cracks grow stably and do not recede. This agrees well with the measured data shown in Fig. 12, although it should be noted that the increase of L/b in the experimental data is due to the cracks penetrating from the edge.
Conclusion and future work
The model developed by Bahr et al. (2009) was applied and adapted to predict the complex growth of the hierarchical 3D crack pattern of a brittle plate under thermal shock conditions without external constraints. This model is based on the assumption that the cracks form ideal hexagonal columns and that, in this idealised crack pattern, the merging of three columns into a larger column is the mechanism for increasing the crack spacing. For the penetration of the cooling, it was assumed that the 1D transient temperature field (Eq. (1)) holds far enough away from the edges. Based on these assumptions, a 3D FEM bifurcation analysis with the Fourier expansion of the crack front (Anderssohn et al. 2018) was performed. Besides the different type of temperature field, the model used in this work differs from, and extends, those of Bahr et al. (2009) and Anderssohn et al. (2018) in the following respects.
The model no longer has infinite extent in the z-direction but the finite thickness b. The temporal penetration of the symmetrical temperature field, Eq. (1), with the associated reduction of the temperature gradient and thus of the mechanical stress, leads to crack arrest and hence to a final crack length a_end. The final crack length was determined by evaluating the change in the energy release rate as a function of time (Eq. (20)). a_end is a measure of the damage to the structure and can later be used for a residual strength analysis. Because the plate can contract without hindrance, freedom from forces in the lateral direction of the model is required.
Through a convergence study, which included the optimisation of the FE mesh and the influence of the FDM step size, the computational effort could be reduced. Furthermore, it was shown that the results converge rapidly with an increasing number of Fourier coefficients. To reduce the computational effort even further, the calculations were carried out with only one Fourier coefficient (j = 1).
Due to the linear nature of the problem and the use of characteristic lengths describing the material and the thermal load, it was possible to determine the evolution of the crack spacing L/b over the crack length a_0/b efficiently by a parametric analysis.
The 3D model was verified by a comparison with the existing 2D model of Bahr et al. (1988). As in Bahr et al. (2009), the difference between the 2D and the 3D FE model in L/b was smaller than a factor of 2, as expected. Due to the intense cooling at the edge of the specimens and the associated formation of cracks through the whole specimen, a validation of the 3D model by experimental data was unfortunately not possible. In further research, additional experimental investigations are planned in which the edges of the specimens will be thermally insulated. It should be noted that, because of the evaluation by CT scan, the dimensions of the specimens are limited, and larger specimens cannot simply be used.
Further investigations with the presented 3D model are possible. For example, by modifying the FE mesh towards very short cracks, the critical temperature difference ΔT_c for thermal shock, below which no crack growth occurs, could be analysed. It should be noted that good knowledge of the mechanical and thermal material data is important for an accurate analysis. Conversely, it is possible to determine the critical stress intensity factor K_IC and the heat transfer coefficient h using the 3D FEM bifurcation analysis; both values are difficult to determine by other methods. For this, the curves of a_0/b and L/b must be known from thermal shock experiments.

Fig. 14 Sectional views: panel a corresponds to specimen I, b to II, c to III, d to IV, e to V and f to VI

Funding Open Access funding enabled and organized by Projekt DEAL.
Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Engineering",
"Materials Science",
"Physics"
] |
Renormalization group improvement of the effective potential in a (1+1) dimensional Gross-Neveu model
In this work, we investigate the consequences of the Renormalization Group Equation (RGE) for the determination of the effective potential and the study of Dynamical Symmetry Breaking (DSB) in a Gross-Neveu (GN) model with N fermion fields in (1+1)-dimensional space-time, which can be applied as a model to describe certain properties of polyacetylene. The classical Lagrangian of the model is scale invariant, but radiative corrections to the effective potential can lead to dimensional transmutation, whereby a dimensionless parameter (coupling constant) of the classical Lagrangian is exchanged for a dimensionful one, a dynamically generated mass for the fermion fields. For the model we consider, perturbative calculations of the effective potential and of the renormalization group functions up to three loops are available; we use the RGE and the leading-logs approximation to calculate an improved effective potential, including contributions up to six-loop order. We then perform a systematic study of the general aspects of DSB in the GN model at finite N, comparing the results with those derived from the original, unimproved effective potential we started with.
I. INTRODUCTION
In quantum field theory, Dynamical Symmetry Breaking (DSB) is a key mechanism with applications in particle physics and condensed matter systems [1-3], in which quantum corrections are entirely responsible for the appearance of nontrivial minima of the effective potential. In particle physics, for example, the Higgs mechanism plays a fundamental role in the Standard Model: there, the symmetry breaking requires a mass parameter in the tree-level Lagrangian, but Coleman and Weinberg (CW) demonstrated [4] that spontaneous symmetry breaking may occur due to radiative corrections even when this mass parameter is absent from the Lagrangian (which is, therefore, scale invariant). For the study of the CW mechanism, we need to calculate the effective potential, a powerful tool for exploring many aspects of the low-energy sector of a quantum field theory. In many cases the one-loop approximation is good enough, but it can be improved by adding higher-order contributions in the loop expansion. A standard tool for improving a perturbative calculation performed up to some loop level is the Renormalization Group Equation (RGE), which, together with a reorganization of the perturbative results in terms of leading logs, has been shown to be very effective [5-10]. We refer the reader to section 3 of [8] for a short review of the method, and to [11-14] for some of the interesting results that have been reported with the use of the RG improvement.
The Gross-Neveu (GN) model with N = 2 fermions is highly relevant to the study of polyacetylene, (CH)_x, a polymer that acquires conductive properties through doping [15]. Polyacetylene is a straight chain that occurs in two forms, trans and cis. The trans form (trans-polyacetylene), which is the most stable, has a doubly degenerate ground state. This allows the existence of topological excitations, which gives this type of model great phenomenological richness. In [3,16] it was shown that, in the continuum limit and in the approximation where the dynamical lattice vibrations (phonons) are ignored, the metal-insulator transition in polyacetylene can be described by the GN model with N = 2. In addition, polyacetylene exhibits some remarkable effects, such as the Peierls mechanism [17], the generation of an energy gap for the electrons through the coupling with phonons. This mechanism is analogous to the Yukawa interaction in the Standard Model.
The GN model can be seen as an effective low-energy theory for polyacetylene. This was shown through the Takayama-Lin-Liu-Maki (TLM) model [16], in which the effective low-energy theory of the Su-Schrieffer-Heeger (SSH) model [18] is described by a theory of four-fermion fields in (1+1) dimensions. In this model, the behavior of the energy gap Δ is described by Eq. (1), where W is the width of the energy band, v_f the Fermi velocity and w_g²/g²_TLM the coupling constant between the electrons and the phonons. If the adiabatic approximation is used in the TLM model, it can be related to the GN model, and we can therefore find an expression, Eq. (2), analogous to Eq. (1), related to the mass obtained in the GN model by symmetry breaking, with σ_0 a constant scalar field and Λ a renormalization scale. The scale Λ is not a physical parameter, so the only measured quantity is the mass m; in Eq. (1), by contrast, Δ and W are parameters measured in (CH)_x. Comparing Eqs. (1) and (2), we can read off a relationship between the coupling constants, Eq. (3), in which we have set N = 2, the value relevant for the description of polyacetylene.
Our goal here is to study, via radiative corrections, the generation of mass by DSB. In this case the mass is obtained from Eq. (4), where µ is the renormalization scale introduced in our model by regularization and V_eff(σ) is the effective potential, a function of the (classical) scalar field σ.
In this paper, we consider the three-loop calculation of the renormalization group functions and of the effective potential for the (1+1)-dimensional GN model at finite N (that is to say, without recourse to the 1/N expansion) described in [19]. The RGE is then used to improve this calculation, incorporating terms that originate from higher loop orders (up to six). We then study the DSB properties of the model using the unimproved (directly obtained by perturbative calculations) and RGE-improved effective potentials, and we observe that the improvement of the effective potential leads to relevant differences in comparison with the unimproved one found in the literature.
There have been many studies of the (1+1)-dimensional GN model in the literature, usually within the 1/N expansion. In this regard, Ref. [20] presents a nice review of the leading and sub-leading orders in this expansion at finite temperature. The phase diagram of the model was first established in Ref. [21] and has recently been revised by lattice computations [22-25]. Another recent study of this phase diagram, using mean-field techniques, is presented in [26]. Finally, studies using the functional renormalization group have also been reported [26]. Our approach is complementary: it does not resort to the 1/N expansion, and is thus particularly adequate for models with small N; on the other hand, it is inherently perturbative. It is also interesting to notice that we work at zero temperature and chemical potential, so we investigate a single point of the phase diagram discussed in the above-mentioned works; at this single point, however, we are able to perform the calculations analytically. Generalizations to finite temperature and chemical potential are possible, but not trivial, and are left for future work. This paper is organized as follows: in Section II we present our model together with the renormalization group functions and the unimproved effective potential found in the literature. In Section III we calculate the improvement of the effective potential using the standard RGE approach, and Section IV is devoted to the study of DSB in our model. In Section V we present our conclusions and future perspectives.
II. RENORMALIZATION GROUP FUNCTIONS AND UNIMPROVED EFFECTIVE POTENTIAL FOR GROSS-NEVEU MODEL
We start with the Euclidean formulation of the massless (1+1)-dimensional GN model studied by Luperini and Rossi [19], whose Lagrangian with N fermion fields and U(N) symmetry is given by Eq. (5). This model has a discrete γ₅ invariance, ψ → exp[(iπ/2)γ₅]ψ, whose spontaneous breakdown leads to a nonzero vacuum expectation value ⟨ψ̄ψ⟩ and thus to dynamical mass generation [1]. The model is known to be asymptotically free in two dimensions, and can be extended to the equivalent form of Eq. (6), where σ is the scalar field, ψ the fermion field, and g and h dimensionless coupling constants that appear with the introduction of the auxiliary field σ, which carries the same quantum numbers as ψ̄ψ, i.e. σ = −g ψ̄ψ. The Lagrangians L₁ and L₂ are equivalent at the classical level (using the equations of motion for σ in (6) to obtain (5)) as well as at the quantum level, since a Gaussian integration over σ in the partition function calculated from (6) leads to the same partition function derived from (5). The renormalization group functions β and γ were calculated for this model up to three-loop order (see Ref. [19] for more details), and we quote the result in Eqs. (7)-(14). In these equations, the superscript indicates the global power of the coupling constants in each term. This notation will be useful for organizing the terms in the calculation of the improved version of the effective potential in the next section.
The effective potential was also calculated up to three loops in the minimal subtraction (MS) scheme, Eqs. (15)-(16), where µ is the mass scale introduced to keep the dimensions of the relevant quantities unchanged, and ζ(3) ≈ 1.202 is the Apéry constant.
III. IMPROVEMENT OF EFFECTIVE POTENTIAL FOR THE GN MODEL
In this section we compute the improvement of the effective potential of the model defined by the Lagrangian (6). We start from Eq. (19), which expresses the improved effective potential in terms of a function S^I_eff that remains to be determined. On dimensional grounds, we can assume the Ansatz of Eq. (20), where L is given by Eq. (17), and the coefficients A, B, C and D are functions only of the (dimensionless) coupling constants. The main idea behind the method is the observation that the coefficients in (20) are not all independent, since changes in µ must be compensated by changes in the other parameters, according to the renormalization group. This is the same as saying that the effective potential has to satisfy a RGE. Following the procedure in [5,8,10], and using the conventions given in [19] and quoted in the last section, we can write the RGE for S_eff in the form of Eq. (21), where the renormalization group functions are defined by Eqs. (7), (9), (11) and (13). One should note, at this point, that these functions were computed in [19] in the MS scheme. In principle, they should be adapted to a different scheme for our applications; however, as discussed in [8], this is not necessary when UV divergences appear only at second or higher loop level, as is the present case. Therefore, this issue does not have to be dealt with, and for our purposes we can directly apply the renormalization group functions obtained in [19] for the RGE improvement.
If we use the Ansatz of Eq. (20) together with Eq. (21), it is possible to calculate recursively, order by order in the coupling constants, the functions A(g,h), B(g,h), C(g,h) and D(g,h). In particular, A(g,h) is fixed by the tree-level effective potential, Eq. (18), in the form of Eq. (22), where the A^(i), i = 0, 1, 3, are known functions, and again the superscript represents the global power of the coupling constants in each term. Following the same pattern, we expand the remaining functions order by order in the coupling constants, Eqs. (23)-(25). Terms of O(L⁰) in the RGE correspond to the function B(g,h) in the Ansatz (20). These can be calculated from our knowledge of A(g,h) and the renormalization group functions. To do so, we substitute (20) into (21) and separate the terms proportional to L⁰, obtaining Eq. (26). Substituting (22) and (23), together with the renormalization group functions, Eqs. (7), (9), (11) and (13), into (26) leads to an expression from which the B's can be read off. For the purposes of this paper, we consider terms only up to sixth order in the coupling constants, because we know the β function only up to fourth order. Terms of O(L) in the RGE lead to the calculation of the C's in (24) from what we already know from the perturbative calculations, as well as from the B's just obtained; repeating the same procedure as before, we find the corresponding results. Going to O(L²) in the RGE, we can find all the D's in (25). Finally, with the values of A, B, C and D that have been obtained, we arrive at V^I_eff(σ), which we call the improved effective potential, since it contains higher-order (in the coupling constants) terms obtained from the RGE, beyond what can be obtained by direct loop calculation, as presented in Sec. II. Notice that it is possible to recover the unimproved version of the effective potential, V^U_eff(σ), calculated up to three-loop order in [19], by setting B₄ = B₅ = B₆ = 0, C₄ = C₅ = C₆ = 0 and D₄ = D₅ = D₆ = 0. This serves as a consistency check of our calculations.
IV. DYNAMICAL SYMMETRY BREAKING
We start this section by analyzing the DSB for the unimproved and improved versions of the effective potential, Eq. (15). First, one has to recognize that the effective potentials we computed actually correspond to the regularized effective potential, and we still need to fix a finite renormalization constant ρ; it can be fixed with the Coleman-Weinberg (CW) condition [4], Eq. (32). The second step is to enforce that V^{U/I}_{eff,R}(σ) has a minimum at σ = µ. This is done by imposing the condition of Eq. (33), together with Eq. (34), where m²_σ is the mass generated by radiative corrections for the σ scalar field. It is interesting to notice that here this last condition, Eq. (34), is actually equivalent to the CW condition; that is to say, Eq. (34) is automatically satisfied once (32) is enforced. The same does not happen in other models studied within this approach, such as [8,11,27], where Eq. (34) provides an additional selection rule to be considered when looking for solutions of Eq. (33).
From a computational point of view, since we want to study the general properties of the DSB mechanism in this model for a wide range of values of its coupling constants, we use Eq. (33) to fix the value of the constant g^I as a function of h and N, which remain free parameters. At this point, the rescalings g → g/π and h → h/π suggested in [19] were also implemented. Upon explicit calculation, Eq. (33) turns out to be a polynomial equation in g^I, and among its solutions we look for those that are real and positive and lie in the perturbative regime, g < 1.
To analyze the DSB in our model, both for the unimproved and the improved case, we created a Mathematica program to systematically apply the previous steps for arbitrary values of the free parameters. In other words: for any reasonable value of h and N, we apply the CW condition, Eq. (32), to fix the renormalization constant ρ; then we use Eq. (33) to find solutions for g in terms of h and N, from which we select the physical solutions that are real and positive and also satisfy g < 1, ensuring we stay within the perturbative regime. Any solution with g > 1 is discarded as nonphysical, since our approach is inherently perturbative. This procedure is applied both to the unimproved and to the improved regularized effective potential, for the sake of comparison, and we denote by g^I the value of the coupling constant g obtained with the improved potential, and by g^U the one obtained with the unimproved potential.
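The root-selection step can be illustrated with a short Python sketch (our own illustration of the procedure; the actual computations were done in Mathematica, and the h- and N-dependent polynomial coefficients of Eq. (33), not reproduced here, are the input):

    import numpy as np

    def physical_couplings(poly_coeffs, tol=1e-10):
        # Roots of the polynomial Eq. (33) in g (coefficients given highest power first)
        # that are real, positive and perturbative (g < 1).
        roots = np.roots(poly_coeffs)
        real = roots[np.abs(roots.imag) < tol].real
        return sorted(g for g in real if 0.0 < g < 1.0)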
As a first step, the region of parameter space in which DSB is operational was found by scanning the whole parameter space determined by 0 ≤ h ≤ 1 and 0 ≤ N ≤ 1000 and obtaining a region plot showing where DSB occurs (i.e., the region where the previously explained procedure yields consistent minima away from the origin). These plots are presented in Figure 1, both for the unimproved (Figure 1a) and the improved (Figure 1b) case. As we can see, the parameter space for which DSB is possible in the improved case is much smaller than in the unimproved one, which is consistent with previous results in this type of study, for example in three- and four-dimensional space-time models [8,9]. We also observed the existence of more than one possible solution for g^I and g^U for given values of h and N. To see this, the coupling constants for both cases were plotted as functions of h in the interval 0 ≤ h ≤ 0.8 for different fixed values of N, as shown in Figure 2. It is interesting to note that for N = 3 (Figure 2a) there is a single solution for g in both cases, and that the difference between g^I and g^U is very small in the interval 0 ≤ h ≤ 0.1. The behavior of the minima of the improved and unimproved effective potentials can nevertheless be compared for the values g^I = 0.9073 and g^U = 0.9579, corresponding to N = 3 and h = 0.05, as shown in Figure 3; the potentials of Eqs. (15) and (19), respectively, are evaluated in the interval 0 ≤ σ/µ ≤ 1.8.
On the other hand, if we analyze the cases N = 10, N = 20 and N = 30, shown in Figures 2b to 2d, we observe that they present more than one value of g^I, while g^U still has a single value. We note that the additional values of g^I appear only for small values of h and tend to decrease as N increases. To observe the effect of these values on the minima of the potentials, we consider an example with N = 10, h = 0.06, g^I_1 = 0.6771, g^I_2 = 0.3554 and g^U = 0.3610, as shown in Figure 4, where the potentials of Eqs. (15) and (19) are evaluated in the interval 0 ≤ σ/µ ≤ 1.8. We observe that there is little difference between the minima of the effective potentials for the values g^I_2 and g^U considered in this example. Finally, the plot in Figure 4 also exemplifies the fact that, for several of the solutions defined by g^U and g^I, the point σ = µ is actually a meta-stable local minimum, and not the global minimum, which appears for 0 < σ < µ.
It is interesting to note the deep differences in the general properties and quantitative aspects of the DSB mechanism in the case of the improved effective potential. We also point out that our results are in general compatible with those obtained in three- and four-dimensional space-time models, where the improved effective potential was also calculated from the RGE in the leading-logarithm approximation [6-10,27].
We close this section by pointing out that common artifacts of the perturbative calculation of the effective potential are non-convexity and even instabilities (i.e., a potential that is not bounded from below). One notable case of the latter problem is the so-called conformal limit of the Standard Model, where the inclusion of the top quark contribution to the one-loop perturbative effective potential leads to an unstable potential, a problem that was solved by summing the leading-log corrections using the RGE [11]. Further refinements of this idea eventually led to a calculation of the Higgs mass of M_H = 141 GeV, not far from the experimental value of 125 GeV [14]. We also quote [28,29], which show how an improved calculation of the effective potential may cure these ailments.
V. CONCLUSIONS
In this paper we have studied the behavior of the unimproved and improved effective potentials in a massless (1+1)-dimensional Gross-Neveu model with N fermion fields. We have observed that the improvement of the effective potential, which we calculated up to the sixth power of the coupling constants, leads to different results in comparison with the unimproved case. As a general rule, the use of the RGE allows us to obtain higher-order corrections to the effective potential based on the knowledge of the renormalization group functions calculated up to some loop level (three, in the case considered here [19]), and this can lead to a better understanding of the DSB mechanism.
We notice that the improvement performed in this work has not been able to fully remove such problems from the perturbative effective potential. In Fig. 4, one of the improved potentials fails to be convex in the region between two local minima, and these potentials might also become unstable for larger values of σ. We believe this comes from the fact that we were able to sum only contributions up to six-loop order in V^I_eff(σ). A different summation scheme, closer to the one adopted in [11,27], might allow summing infinite sub-series of higher-loop contributions to V^I_eff(σ), which would probably eliminate at least some of these problems. This is one topic we want to address in a future work.
Another interesting perspective is to incorporate a term associated with the chemical potential: usually this appears as a mass parameter associated with the fermions, and it was not considered in the model studied here, since the RGE improvement is simpler when the starting Lagrangian is scale invariant. It has been reported in the literature that the chemical potential is a key ingredient in the study of polyacetylene properties, corresponding for example to the doping concentration, as discussed up to one-loop order in [2,30-33]. The idea, therefore, would be to observe the behavior of the effective potential when it has an explicit dependence on the chemical potential at higher loop orders. This problem would involve a multi-scale approach to the RGE improvement, as discussed, for example, in [34,35]. The presence of a dimensionful constant in the starting Lagrangian leads to the appearance of two independent logarithms in the perturbative expression for the effective potential, since there would be, in general, contributions involving also ln(m/µ), with m the fermion mass related to the chemical potential. That is another topic we intend to investigate further.
"Physics"
] |
Hot electron generation by aluminum oligomers in plasmonic ultraviolet photodetectors
We report on an integrated plasmonic ultraviolet (UV) photodetector composed of aluminum Fano-resonant heptamer nanoantennas deposited on a gallium nitride (GaN) active layer grown on a sapphire substrate, which generates significant photocurrent via the formation of hot electrons by nanoclusters upon the decay of nonequilibrium plasmons. Using plasmon hybridization theory and the finite-difference time-domain (FDTD) method, it is shown that the generation of hot carriers by metallic clusters illuminated by a UV beam leads to a large photocurrent. The induced Fano resonance (FR) minimum across the UV spectrum allows for a noticeable enhancement of the absorbed optical power, yielding a plasmonic UV photodetector with high responsivity. It is also shown that varying the thickness of the oxide layer (Al₂O₃) around the nanodisks (t_ox) in a heptamer assembly adjusts the generated photocurrent and responsivity. The proposed plasmonic structure opens new horizons for designing and fabricating efficient optoelectronic devices with high gain and responsivity.
Zhao, “Solar-blind Avalanche photodetector based on single ZnO-Ga2O3 core-shell microwire,” Nano Lett. 15(6), 3988–3993 (2015). 13. B. Mallampati, S. V. Nair, H. E. Ruda, and U. Philipose, “Role of surface in high photoconductive gain measured in ZnO nanowire-based photodetector,” J. Nanopart. Res. 17(4), 176 (2015). #261332 Received 17 Mar 2016; revised 21 May 2016; accepted 25 May 2016; published 10 Jun 2016 © 2016 OSA 13 Jun 2016 | Vol. 24, No. 12 | DOI:10.1364/OE.24.013665 | OPTICS EXPRESS 13665 14. J. Yu, C. X. Shan, X. M. Huang, X. W. Zhang, S. P. Wang, and D. Z. Shen, “ZnO-based ultraviolet avalanche photodetectors,” J. Phys. D Appl. Phys. 46(30), 305105 (2013). 15. Y. Yu, Y. Jiang, K. Zheng, Z. Zhu, X. Lan, Y. Zhang, Y. Zhang, and X. Xuan, “Ultralow-voltage and high gain photoconductor based on ZnS:Ga nanoribbons for the detection of low-intensity ultraviolet light,” J. Mater. Chem. C Mater. Opt. Electron. Devices 2(18), 3583–3588 (2014). 16. W. Y. Weng, T. J. Hsueh, S. J. Chang, S. B. Wang, H. T. Hsueh, and G. J. Huang, “A high-responsivity GaN nanowire UV photodetector,” IEEE Sel. Top. Quantum Electron. 17(4), 996–1001 (2011). 17. Y. Q. Bie, Z.-M. Liao, H.-Z. Zhang, G.-R. Li, Y. Ye, Y.-B. Zhou, J. Xu, Z.-X. Qin, L. Dai, and D.-P. Yu, “Selfpowered, ultrafast, visible-blind UV detection and optical logical operation based on ZnO/GaN nanoscale p-n junctions,” Adv. Mater. 23(5), 649–653 (2011). 18. M. W. Knight, N. S. King, L. Liu, H. O. Everitt, P. Nordlander, and N. J. Halas, “Aluminum for plasmonics,” ACS Nano 8(1), 834–840 (2014). 19. M. W. Knight, L. Liu, Y. Wang, L. Brown, S. Mukherjee, N. S. King, H. O. Everitt, P. Nordlander, and N. J. Halas, “Aluminum plasmonic nanoantennas,” Nano Lett. 12(11), 6000–6004 (2012). 20. V. S. Kortov, S. V. Zvonarev, and A. Medvedev, “Pulsed cathodoluminescence of nanoscale aluminum oxide with different phase compositions,” J. Lumin. 131(9), 1904–1907 (2011). 21. Q. Xu, F. Liu, Y. Liu, W. Meng, K. Cui, X. Feng, W. Zhang, and Y. Huang, “Aluminum plasmonic nanoparticles enhanced dye sensitized solar cells,” Opt. Express 22(S2), A301–A310 (2014). 22. J. Becker, Plasmons as Sensors (Springer, 2012). 23. J. A. Fan, C. Wu, K. Bao, J. Bao, R. Bardhan, N. J. Halas, V. N. Manoharan, P. Nordlander, G. Shvets, and F. Capasso, “Self-assembled plasmonic nanoparticle clusters,” Science 328(5982), 1135–1138 (2010). 24. P. Nordlander, C. Oubre, E. Prodan, K. Li, and M. I. Stockman, “Plasmon hybridization in nanoparticle dimers,” Nano Lett. 4(5), 899–903 (2004). 25. B. Luk’yanchuk, N. I. Zheludev, S. A. Maier, N. J. Halas, P. Nordlander, H. Giessen, and C. T. Chong, “The Fano resonance in plasmonic nanostructures and metamaterials,” Nat. Mater. 9(9), 707–715 (2010). 26. J. B. Lassiter, H. Sobhani, J. A. Fan, J. Kundu, F. Capasso, P. Nordlander, and N. J. Halas, “Fano resonances in plasmonic nanoclusters: geometrical and chemical tunability,” Nano Lett. 10(8), 3184–3189 (2010). 27. Z. Fang, Y. Wang, Z. Liu, A. Schlather, P. M. Ajayan, F. H. L. Koppens, P. Nordlander, and N. J. Halas, “Plasmon-induced doping of graphene,” ACS Nano 6(11), 10222–10228 (2012). 28. S. Golmohammadi and A. Ahmadivand, “Fano resonances in compositional clusters of aluminum nanodisks at the UV spectrum: A route to design efficient and precise biochemical sensing,” Plasmonics 9(6), 1447–1456 (2014). 29. V. Giannini, A. I. Fernández-Domínguez, Y. Sonnefraud, T. Roschuk, R. Fernández-García, and S. A. 
Maier, “Controlling light localization and light-matter interactions with nanoplasmonics,” Small 6(22), 2498–2507 (2010). 30. V. Giannini, A. I. Fernández-Domínguez, S. C. Heck, and S. A. Maier, “Plasmonic nanoantennas: fundamentals and their use in controlling the radiative properties of nanoemitters,” Chem. Rev. 111(6), 3888–3912 (2011). 31. P. Reineck, G. P. Lee, D. Brick, M. Karg, P. Mulvaney, and U. Bach, “A solid-state plasmonic solar cell via metal nanoparticle self-assembly,” Adv. Mater. 24(35), 4750–4755 (2012). 32. A. Manjavacas, J. G. Liu, V. Kulkarni, and P. Nordlander, “Plasmon-induced hot carriers in metallic nanoparticles,” ACS Nano 8(8), 7630–7638 (2014). 33. M. W. Knight, H. Sobhani, P. Nordlander, and N. J. Halas, “Photodetection with active optical antennas,” Science 332(6030), 702–704 (2011). 34. F. Wang and N. A. Melosh, “Plasmonic energy collection through hot carrier extraction,” Nano Lett. 11(12),
Introduction
In recent years, there has been growing interest in plasmonic photodetectors across a wide spectral range covering terahertz (THz) to visible frequencies [1-3]. In these works, several strategies have been used to enhance the absorption, responsivity, and quantum efficiency of photodetectors. For instance, graphene plasmonics has been introduced as a promising platform to enhance the absorption of metal-semiconductor-metal (MSM) photodetectors including Schottky contacts at optical and THz frequencies [4]. Moreover, plasmonic nanoparticles with absorptive characteristics (ohmic losses) have widely been utilized to improve the spectral response of detectors [5]. Previously reported techniques for enhancing the detection performance of plasmonic photodetectors include raising the Schottky barrier height at the metal-semiconductor interface, which provides a wider depletion region [6,7], and exciting surface plasmon resonances based on collective, coherent hot electron oscillations [8,9]. Ultraviolet (UV) detectors are useful for applications in UV astronomy, environmental monitoring, missile warning, and biotechnology and medicine. However, in spite of extensive research, UV photodetectors suffer from dissipative losses, large dark currents, and limited responsivity and quantum efficiency [10,11]. To address these challenges and improve the performance of UV detectors, two major methods have been proposed: (1) avalanche multiplication [12], and (2) photoconductive gain [13]. However, highly responsive GaN-based avalanche detectors suffer from increased noise [14], and photoconductive UV detectors are slow and noisy [15]. As another solution, GaN-based UV photodetectors with silver (Ag) plasmonic nanoparticles have been introduced to enhance the responsivity [6,16,17]. The major problem with this method is the performance of the utilized metals in the UV band. The plasmon resonances of subwavelength structures based on conventional noble metals (e.g., Au, Ag, and Cu) can be tuned from visible wavelengths to the near-infrared (NIR) region. However, extending these plasmonic properties into the UV spectrum is highly challenging due to intrinsic limitations in the chemical characteristics of these metals. For instance, silver shows a dramatic degradation in plasmonic properties because of rapid oxidation, and gold suffers from interband transitions in the UV band [18]. Lately, aluminum (Al), rhodium (Rh), gallium (Ga), chromium (Cr), and indium (In) have been introduced as potential plasmonic materials for the UV spectrum [19]. Aluminum has been widely employed in light-harvesting devices, nanoantennas, cathodoluminescence spectroscopy, and antireflective surfaces [18-21], in spite of its inherent and rapid oxidation.
Aluminum also shows significant EM field localization because of its low screening ($\varepsilon_\infty \approx 1$) in comparison to gold ($\varepsilon_\infty \approx 9$) and silver ($\varepsilon_\infty \approx 5$). In addition, aluminum has a high electron density, since a single aluminum atom contributes three electrons compared to a single electron per atom for gold and silver [22]. Because interband transitions have a negligible influence in aluminum across the UV spectrum, the geometry of the nanoscale structure plays the major role in plasmon decay and the generation of photoexcited hot carriers during light-matter interactions.
Closely packed and strongly coupled plasmonic nanoparticle assemblies in symmetric and antisymmetric orientations, known as plasmonic oligomers, can be tailored to support strong resonances from the visible to the NIR [23]. These nanoparticle clusters show significant absorption cross-sections in the visible and NIR ranges, including strong plasmon resonance hybridization in the offset gaps between proximal particles [24]. Depending on their shape and orientation, nanoparticle clusters can show unique spectral lineshapes, called "Fano resonances (FR)", characterized by narrow spectral windows in which scattering maxima are suppressed and absorption peaks are enhanced [23-25]. The physical mechanism behind the formation of the plasmonic FR is a weak, destructive coupling between a spectrally broad superradiant mode and a narrow subradiant mode. When a plasmonic oligomer is excited at the frequency of the bonding mode, the incident light couples directly into the bonding mode via light-matter interaction, resulting in a robust indirect excitation of the antibonding resonant mode. In the nonretarded limit, the antibonding mode is dark, without a net dipole moment, and hence cannot couple directly to the incident beam. In the retarded limit, by contrast, the bonding mode becomes bright, and a weak coupling mediated by strong near-field interaction gives rise to the interference between the bonding and antibonding resonant modes, inducing an FR dip in the bonding continuum at the energy level of the dark mode [26]. Excitation of a plasmonic FR mode leads to significant absorption compared to excitation of a usual bright resonant mode, which can be exploited for hot electron generation and to enhance the photocurrent in plasmonic photodetectors [27]. However, inducing FR dips in the UV band is challenging due to the limitations imposed by the chemical properties of conventional noble metals in this domain. Recently, it has been shown that Al/Al2O3 nanodisk clusters in symmetric and antisymmetric orientations can support strong FR modes with excellent absorption coefficients in the UV spectrum [28]. Low cost, CMOS compatibility, and support of strong plasmon resonances in the UV spectrum are some of the unique features of aluminum-based molecular clusters that make these nanoscale assemblies suitable for designing efficient nanoplasmonic devices.
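The asymmetric dip described above can be illustrated with the standard Fano lineshape. The following sketch is purely illustrative: the resonance energy, linewidth, and asymmetry parameter are placeholder values, not quantities fitted to the aluminum heptamer studied here.

```python
import numpy as np

def fano_lineshape(omega, omega0, gamma, q):
    """Normalized Fano profile: interference between a narrow (dark/subradiant)
    mode and a broad (bright/superradiant) continuum. q is the Fano asymmetry
    parameter; q -> 0 gives a pure dip (antiresonance)."""
    eps = (omega - omega0) / gamma          # reduced detuning
    return (eps + q) ** 2 / (1.0 + eps ** 2)

# Illustrative values only (photon energies in eV); ~325 nm corresponds to ~3.8 eV.
photon_energy = np.linspace(3.0, 4.5, 500)
profile = fano_lineshape(photon_energy, omega0=3.8, gamma=0.15, q=0.3)
# Near omega0 the scattering is suppressed (Fano dip), which is where the
# absorption -- and hence hot-electron generation -- peaks.
```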
In this paper, we propose a novel device based on plasmonic Al/Al2O3 nanoparticle assemblies integrated into a GaN UV photodetector. To this end, we utilize seven-member heptamers with the symmetry of a benzene molecule as Fano-resonant plasmonic nanoclusters. All of the aluminum particles are deposited between Ni/Au fingers on a GaN active layer grown on a sapphire substrate. The presented results show that nanoplasmonic aluminum assemblies can generate hot electrons and enhance the absorption by inducing FR modes across the UV spectrum. The proposed structure could realize UV photodetectors with significantly improved responsivity.
The proposed device
Radiative and non-radiative excitation of plasmons in metallic components and their decay lead to the generation of hot carriers at metal-semiconductor interfaces [6,29-31]. The surface modes are important for achieving the plasmon resonant behavior and the hot carrier distribution at the metal-semiconductor interfaces [8,36,37]. Plasmonic photovoltaic devices [32,33] and photodetectors [34,35] employ the decay of plasmons to generate hot carriers. Aluminum, which generates a continuous energy distribution of hot electrons, is one of the most suitable metals for this process [6]. In our proposed system, the confined plasmons also lead to hotspot formation with extremely intense local fields in the capacitive regions between proximal particles. Assuming electrons have an isotropic momentum distribution, approximately half of the photoexcited electrons are expected to be transported to the aluminum-GaN interface. Due to the continuous distribution of highly energetic hot electrons in aluminum nanostructures [6,38], we expect a larger number of charges to reach the metal/semiconductor interface than with conventional noble metals. Figure 1(a) shows a three-dimensional schematic of the proposed plasmonic UV detector (not to scale). The device comprises arrays of Al/Al2O3 heptamer antennas between two Ni/Au fingers (electrodes) deposited on an undoped n-type GaN epilayer with a thickness of 4 μm, grown on a sapphire substrate. The inset shows the geometry of the heptamer assembly. The spacing between two neighboring heptamers is set to 250 nm to prevent destructive optical interference between the scattered fields associated with the hybridized modes of nearby antennas. In Fig. 1(b), we show the important geometrical dimensions of the metallic electrodes, the overall size of the proposed photodetector, and the distance between the two fingers. Using plasmon hybridization theory to analyze closely packed nanoscale assemblies, the plasmon responses of various types of aluminum-based nanodisk oligomers and monomers have already been investigated numerically and analytically [28,39]. It has also been shown that aluminum nanodisk heptamer antennas with a thin oxide layer (2-25 nm, depending on the size of the constituent particles) can be tailored to support a strong plasmonic FR mode across the near-UV (λ~350 nm) band [39]. However, this wavelength is not unique, and the position of the FR minimum can be tuned via modifications of the geometrical, chemical, and environmental parameters of the assembly. In the proposed UV detector, we used nanodisks with the geometrical dimensions calculated by Golmohammadi et al. [28]: a nanodisk radius of R = 70 nm, a thickness of t = 35 nm, and an offset gap of $D_{7h}$ = 12 nm. It should be noted that while the thickness of the oxide layer around the nanoparticles is varied, the size of the offset gap is kept fixed to satisfy the required near-field coupling strength. To provide a detailed study and assess the effect of the aluminum heptamer arrays on the responsivity and performance of the structure, we also present the spectral response of the structure without antennas on the GaN as the non-plasmonic regime. To this end, we used empirically measured values for a GaN-based UV detector reported by Li et al. [10].
The frequency-dependent absorption mechanism of the proposed plasmonic UV detector is based on the hybridized plasmon resonant modes arising from the interaction of an incident beam with the metallic antenna. The maximum absorption is achieved at the spectral position of the antisymmetric FR dip, because the bright dipolar scattering peak is suppressed by the narrow antibonding dark mode. This significant absorption leads to the generation of a large number of hot carriers at the metal-dielectric interface, which are transferred to the semiconductor by surmounting the Schottky barrier and are collected by the electrodes, resulting in a remarkable photocurrent and hence high responsivity. Figure 1(c) shows a two-dimensional (xz-view) cross-sectional schematic of the proposed UV detector, displaying hot electron transport in the GaN layer toward the adjacent electrodes. It is well understood that in a metal-semiconductor system, reduced electron-electron scattering in the metallic part of the nanoantenna increases the number of hot electrons transferred to the semiconductor layer [33,41]. This ultrafast transfer of plasmonic charges leads to the accumulation of more hot electrons and allows them to be swept away before immediate recombination. Figure 1(d) shows the schematic band diagram of the proposed plasmonic device, illustrating the carrier formation and transfer mechanisms and the sweeping of opposite charges to the nearby electrodes. When a bias (here 5.0 V) is applied between the metallic contacts, one forward-biased and one reverse-biased Schottky junction are formed. The large electric field in between sweeps the photogenerated hot electrons to the positive electrode, producing a photocurrent. However, due to losses via back-scattering, inelastic collisions, and conversion to heat (internal damping), not all photoexcited electrons are injected into the semiconductor [44,45]. Thus, only the hot carriers within a mean-free-path (MFP) length ($l_p$) of the interface are considered for transfer to the semiconductor over the Schottky barrier [45-47]. In this picture, photoexcited electrons are promoted from energy states below the Fermi level (d-band) to higher energy levels, and once they arrive at the interface with an energy larger than the Schottky barrier height, they are injected into the GaN. Experimental results show that the MFP of electrons is strongly energy-dependent, and minor changes in the energy level result in significant changes in the MFP [48-50]. In our analysis, we assumed $l_p$ = 25 nm for electrons 5 eV above the Fermi level, as reported in the literature [49]. Considering the depicted band diagram for the aluminum-GaN-Ni/Au structure, the electrical simulation results verify the formation of a Schottky barrier with a height of $\Phi_B$ = 0.87 eV. In this regime, hot electrons produced by the decaying plasmons that arrive at the interface with energies above ~0.87 eV can pass the barrier and transit to the biased electrode. For the examined device, the hot electron generation rate ($G_{he}$) by the aluminum nanoantennas at the Fano dip wavelength due to photoexcitation can be calculated as [27]: $G_{he} = P\,C_{abs}(\lambda)/(\hbar\omega A_h)$, where P is the incident light power (20 μW), $C_{abs}(\lambda)$ is the absorption cross section as a function of the resonant wavelength, and $A_h$ is the metallic nanoantenna area; we find $G_{he} = 5 \times 10^{17}\ \mathrm{s^{-1}}$. The electron concentration then follows as [27]: $n_e = \tau G_{he}/A_h$, where τ is the relaxation time, estimated to be $0.825 \times 10^{-6}$ s (see Methods). The approximate electron concentration at the aluminum-GaN interface is $n_e = 1.04 \times 10^{17}\ \mathrm{cm^{-2}}$. Comparing the hot electron generation rate and the associated concentration in the proposed system with gold nonamers and gratings used for the same purpose [27,33,40], we find a significant enhancement, owing to the inherently strong absorption of aluminum across the UV spectrum and its continuous electron energy distribution [6].
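The two rate expressions above can be evaluated directly. In the following minimal sketch, the incident power and relaxation time are taken from the text, while the absorption cross section and antenna area are placeholder values; it illustrates the formulas rather than reproducing the reported numbers.

```python
import numpy as np

hbar = 1.054571817e-34  # J*s

def hot_electron_rate(P, C_abs, wavelength, A_h):
    """G_he = P * C_abs(lambda) / (hbar * omega * A_h), as used in the text [27]."""
    omega = 2 * np.pi * 2.99792458e8 / wavelength  # angular frequency, rad/s
    return P * C_abs / (hbar * omega * A_h)

def electron_concentration(tau, G_he, A_h):
    """n_e = tau * G_he / A_h, as used in the text [27]."""
    return tau * G_he / A_h

# Values from the text where available; C_abs and A_h are placeholders.
P = 20e-6            # incident power, 20 uW (from the text)
tau = 0.825e-6       # relaxation time, s (from the Methods)
wavelength = 325e-9  # Fano-dip wavelength, m
C_abs = 1e-13        # absorption cross section, m^2 (placeholder)
A_h = 1e-13          # heptamer antenna area, m^2 (placeholder)

G = hot_electron_rate(P, C_abs, wavelength, A_h)
n = electron_concentration(tau, G, A_h)
```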
For conventional semiconductor layers broadly utilized for photocurrent generation in plasmonic devices (e.g., silicon, cadmium selenide), the carrier lifetime is in the range of ~100 μs. A long carrier lifetime prevents immediate recombination and facilitates the generation of a large photocurrent. In contrast, the carrier lifetime and recombination time of UV-compatible GaN are in the range of a few nanoseconds. In this regime, an extremely short transit time (in the range of a few picoseconds) is required to overcome the immediate recombination of the carriers. Using experimentally and theoretically obtained values for the saturation velocity ($V_{sat}$) in n-type GaN [42,43], the transit time ($t_{tr}$) can be determined from $t_{tr} = L/V_{sat}$, where L is the pitch (see Methods). For a saturation velocity of $10^7$ cm/s and a pitch of 500 nm, the transit time is $5 \times 10^{-12}$ s (5 ps), which is extremely short ($t_{tr} \ll \tau_n$) compared to the carrier lifetime in GaN of about 6.5 ns (see Methods). Therefore, a large number of electrons can be collected before they recombine, resulting in photocurrent with gain. Additionally, in the uncovered parts of the photodetector, incident photons with energies larger than the bandgap of GaN are absorbed and generate electron-hole pairs. These electron-hole pairs add to the hot electrons from the clusters and contribute to the photocurrent [see Fig. 1(d)]. The fast relaxation and transit times form the basis for the very fast temporal response of the proposed device; using standard methods [51,52], we estimate rise and fall times in the sub-microsecond range.
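The transit-time argument can be checked numerically. A short sketch, using only the values quoted above:

```python
# Transit time vs. carrier lifetime in GaN, following t_tr = L / V_sat.
L_pitch = 500e-9        # electrode pitch, m (from the text)
V_sat = 1e7 * 1e-2      # saturation velocity, 1e7 cm/s converted to m/s
tau_n = 6.5e-9          # carrier (recombination) lifetime in GaN, s

t_tr = L_pitch / V_sat  # -> 5e-12 s = 5 ps, matching the value in the text
assert t_tr < tau_n     # carriers are collected long before they recombine
```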
Results and discussion
Figure 2(a) shows the scattering and absorption cross-sections of the aluminum heptamer. A narrow, antisymmetric, and tunable plasmonic FR mode is induced around λ~325 nm, between two distinct shoulders corresponding to the bonding and antibonding plasmon modes at λ~250 nm and λ~385 nm, respectively. These spectral responses were calculated by selecting the geometrical dimensions of the molecular heptamer so as to tune the UV detector close to the FR dip frequency. Using the previously discussed geometries for the heptamer clusters, we set the oxide (Al2O3) layer thickness to t_ox = 2 nm. In our analysis, we observe a distinct absorption extremum at the FR position, with a pair of absorption shoulders in the vicinity of the bonding and antibonding modes, for both examined heptamers with the two different aluminum types [see Fig. 2(a)]. However, the absorption at these wavelengths is not as high as at the FR dip position. When the incident UV beam is resonant with the induced absorption window, the strong near-field coupling between the light and the cluster gives rise to the generation of hot electron-hole pairs. Figure 2(b) shows the normalized E-field map of the plasmon resonance excitation and hybridization at the plasmonic Fano dip wavelength in an isolated aluminum heptamer; the formation of hotspots at the offset gaps between nanodisks is clearly visible. The full optical responses of simple and complex aluminum antennas, such as scattering profiles and extinction spectra, have been discussed in detail in previous studies [28,53]; here we consider only the absorption properties of the proposed heptamer assemblies. Since oxidation of aluminum in the subwavelength regime also affects the dielectric permittivity of the particles, the thickness of the oxide layer plays an important role in the hybridization of plasmon resonances in closely spaced nanoparticle assemblies. This effect arises from variations in both the chemical and geometrical features of the oxidized nanoparticle assembly. The chemical influence of the oxide layer is discussed in the numerical setup part of the Methods section. Geometrically, increasing the oxide layer directly increases the overall size of the cluster, and we therefore expect noticeable variations in its spectral response. The effect of the oxide layer on the scattering efficiency profile of an isolated nanodisk with an oxide thickness of t_ox ~2-3 nm has been examined theoretically and experimentally [18]. It has also been shown that increasing the thickness of the oxide layer red-shifts the scattering dipolar resonance peak to longer wavelengths (from the UV to the visible band) [53], accompanied by a dramatic decrease in the scattering efficiency peak. Therefore, an acceptable trade-off between scattering and absorption efficiency must be found by choosing appropriate dimensions for the oxide layer around the nanodisks in the heptamer assembly. To this end, we investigated the effect of oxide coverage on the plasmon response of the proposed heptamer. The absorption spectra for an isolated aluminum nanodisk heptamer on a glass host are plotted in Fig. 3(a).
Increasing the thickness of the oxide layer enhances the absorption efficiency owing to the formation of a narrower FR minimum, resulting from the suppression of the scattering extremum by the antibonding dark resonant mode [26,39,53]. This is accompanied by a red-shift of the peaks to longer wavelengths, originating from the strong EM-field hybridization of plasmonic resonances in larger heptamer clusters. As a result, a deeper Fano minimum is induced in the extinction profile, including a significant enhancement in the ratio of absorbed power. It is well accepted that Fano dips are very sensitive to minor alterations in the structural properties of nanoparticle clusters [25,26]. As seen in the absorption profile in Fig. 3(a), for the heptamer assembly with the thicker oxide layer the absorption peak shifts into the visible spectrum, which is not desired for our UV photodetector. In the ideal case of an entirely aluminum cluster without an oxide layer (t_ox ~0 nm), a noticeable extremum appears at short wavelengths around λ~280 nm, close to the deep-UV band. For t_ox = 2 nm and 4 nm, two absorption extrema of almost equal amplitude are obtained at λ~320 nm and 345 nm, respectively. The profile also shows the absorption spectrum for the UV detector without nanoparticle clusters: due to the absence of metallic components and plasmonic effects, we observe only the natural absorption of the incoming UV beam by the GaN substrate, which drops off dramatically beyond the UV band (λ > 400 nm). Significant enhancement of the electric field is observed at the offset spots between aluminum nanodisks because of hybridization and strong confinement of the plasmonic resonant modes. Comparing the two types of antennas with different oxide thicknesses, a slight difference in the enhancement is noticed. This can be attributed to the effect of the oxide layer thickness on the Bruggeman dielectric function of the entire composite aluminum antenna [18], yielding different real and imaginary permittivities at different wavelengths. Hence, modifying the oxide thickness can lead to severe changes in the spectral response of the structure, as shown in the preceding profiles. The plot also shows that no distinct shoulder is observed in the electric field profile at the illumination spots in the absence of metallic heptamers, and therefore in the absence of plasmonic effects. Without the heptamers, a thin layer of electric field appears at the surface of the GaN (with a magnitude of 1.15 × 10^5 V/cm), whereas for the plasmonic case a much larger electric field is observed below the cluster due to plasmon hybridization (with a magnitude of 3.95 × 10^6 V/cm).
Fig. 4. a,b) Carrier concentration for the detector system without heptamers under 5.0 V bias, with the UV light OFF and ON, respectively; c,d) carrier concentration for the system with heptamers under 5.0 V bias, with the UV light OFF and ON, respectively; e,f) E-field enhancement maps for the device with and without clusters, with the UV light ON and 0.0 V bias.
Figures 4(a) and 4(b) display the electron concentration for the proposed UV photodetector without aluminum heptamers, under an applied bias (5.0 V) with the light source OFF and ON, respectively [the states are indicated in the corresponding panels of Fig. 4]. With both bias and UV beam applied, a noticeable electron concentration is obtained under the electrodes [Fig. 4(b)] compared to the case without the beam [Fig. 4(a)], resulting in a photocurrent. By adding the metallic nanoscale heptamers between the electrodes, we observe a dramatic enhancement in the carrier concentration produced by the metallic clusters, as shown in Figs. 4(c) and 4(d). To show the effect of the plasmonic clusters on carrier generation, we used aluminum nanoparticles with an oxide coverage of t_ox = 2 nm. In this regime, the large generated carrier concentration causes Schottky barrier lowering, which can further contribute to the enhancement of the photocurrent [54,55]. The comparison between the non-plasmonic and plasmonic UV detectors is further illustrated by the corresponding maps of the electric field excited at the surface of the GaN, below the antennas, shown in Figs. 4(e) and 4(f), where the effect of the plasmonic antennas in forming a large field at the GaN-aluminum interface is obvious.
Further, we study the electrical response of the plasmonic GaN photodetector. Figure 5(a) illustrates the current-voltage (I-V) characteristic calculated at the peak of the absorption profile, along with that of the device without plasmonic assemblies. The voltage is varied between 0 V and 5.0 V. At a bias of 5.0 V, the plasmonic detector with heptamer arrays with t_ox = 2 nm and 4 nm yields photocurrents of 88.56 μA and 90.25 μA, respectively. For the non-plasmonic case (absence of metallic nanoparticle clusters), the photocurrent is 1.72 μA under 5.0 V bias. The dramatic enhancement in photocurrent due to the plasmonic heptamers is clearly visible. The inset of Fig. 5(a) shows the extracted dark current as a function of bias voltage, which reaches 47.95 nA, 52.5 nA, and 55.25 nA under 5.0 V bias for the non-plasmonic case, t_ox = 2 nm, and t_ox = 4 nm, respectively. Figure 5(b) shows the photocurrent as a function of the polarization angle of the incoming light; hollow and solid circles represent the calculated photocurrents for different incident polarization modes in the two heptamer types with different oxide thicknesses. The response of the proposed structure is insensitive to variations of the polarization angle of the incident EM energy, owing to the inherent symmetry of the molecular heptamer cluster. In addition to supporting a pronounced Fano dip in the UV spectrum, it should be noted that antisymmetric structures with more complex geometries cannot provide such high and polarization-independent absorption [56].
Figure 6(a) shows the spectral response of the proposed UV detector with and without aluminum antenna arrays for different oxide layer thicknesses, with the bias kept at 5.0 V. In the plasmonic regime, the responsivity peaks are at λ~325 nm and λ~330 nm, and the cutoff wavelengths are at λ~335 nm and λ~345 nm, for t_ox = 2 nm and 4 nm, respectively. The peak responsivity ($R_{ph}$) corresponds to the position of the FR dip of the heptamers. At these peaks, the responsivity of the proposed plasmonic UV photodetector exceeds 20.8 A/W and 21.9 A/W for t_ox = 2 nm and 4 nm, respectively. This demonstrates the superior responsivity of the examined UV detector in comparison to analogous nanoscale devices [6,17]. For the non-plasmonic case, we observe a conventional responsivity with a distinct shoulder in the UV range of λ~300-350 nm, with a highest peak of approximately 0.13 A/W [see the inset of Fig. 6(a)]. Using the calculated responsivity data, we extracted the external quantum efficiency (EQE) of the structure in the two regimes from the conventional relation [11]: $\mathrm{EQE} = hcR/(e\lambda)$, where h is Planck's constant, c is the speed of light, e is the electron charge, R is the responsivity of the device, and λ is the wavelength of the incoming optical power. The calculated EQE for the non-plasmonic UV detector is 64.5%, while the EQE is 8065% and 8116% for the devices with aluminum clusters with t_ox = 2 nm and 4 nm, respectively. This dramatic enhancement in responsivity and efficiency on going from the non-plasmonic to the plasmonic regime originates from the generation of hot carriers by the strong hybridization of plasmons at the resonant frequencies. As another important parameter, we estimated the internal quantum efficiency (IQE) of the proposed UV photodetector, i.e., the number of produced charge carriers per absorbed photon, which can be calculated from the computed photocurrent profile together with the incoming photon energy flux on the subwavelength heptamer antennas; here, ε is the effective permittivity of the semiconductor substrate and the metallic heptamer contributing to the absorption mechanism. Accordingly, the number of absorbed photons per second is obtained by dividing the absorbed optical power by the photon energy, and the IQE is defined as [41]: IQE = (number of hot electrons per second)/(total absorbed photons per second). (2) Figure 6(b) shows the numerically obtained IQE of the proposed device as a function of the incident UV light, with peaks of 38% and 40% induced at the FR dip positions for t_ox = 2 nm and 4 nm, respectively. It is also worth noting that at these short wavelengths, the generation of hot electrons by the metallic nanodisk heptamers has an undeniable impact in producing such a large photocurrent and significant IQE. The corresponding IQE for the UV detector without nanodisk clusters, displayed as the dotted curve, is 15.6%. A comparison of all the examined regimes of the proposed UV photodetector shows that introducing the plasmonic effect via aluminum clusters enhances the responsivity and photocurrent of the device at the expense of a slightly larger dark current of a few tens of nanoamperes. Additionally, compared with the IQEs reported in recent works [6,11], the proposed plasmonic UV photodetector shows significant efficiency.
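The EQE relation can be evaluated directly from the reported responsivity peaks; the following sketch reproduces the order of magnitude of the quoted EQEs (small differences arise from the exact wavelengths used):

```python
def external_quantum_efficiency(R, wavelength):
    """EQE = h*c*R / (e*lambda); R in A/W, wavelength in meters."""
    h = 6.62607015e-34   # Planck constant, J*s
    c = 2.99792458e8     # speed of light, m/s
    e = 1.602176634e-19  # elementary charge, C
    return h * c * R / (e * wavelength)

# Responsivity peaks reported in the text:
print(external_quantum_efficiency(20.8, 325e-9))  # ~79.4, i.e. ~7940%
print(external_quantum_efficiency(21.9, 330e-9))  # ~82.3, i.e. ~8230%
```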
The response of the plasmonic UV photodetector is comparable with that of more complex proposed designs, such as coupling plasmons between aluminum particles and zinc oxide (ZnO) nanoparticles, or using a multilayer substrate to enhance electron-hole confinement and improve the generated photocurrent [6,57-63]. Finally, we estimated the corresponding gain ($\Gamma_{ph}$) of the investigated photodetector using the relation of [64,65], expressed in terms of the elementary charge q and the speed of light c. The corresponding gain is found to be $\Gamma_{ph} = 2.1 \times 10^2$ for the aluminum cluster with an oxide thickness of t_ox = 2 nm.
Conclusion
In conclusion, we have proposed a method to enhance the photoresponse of a GaN UV detector using the plasmon hybridization mechanism. Using aluminum-based symmetric heptamer clusters deposited on a GaN active layer, we developed a structure with enhanced photocurrent due to a significant increase in the generation of hot electrons under the heptamer clusters, resulting from the decay of plasmons on the Al disks. We also investigated the effect of variations in the oxide layer thickness on the characteristics of the UV photodetector. By inducing a pronounced Fano dip in the UV region, we obtained significant absorption of the incoming light power through suppression of the scattering maxima. Calculating the key parameters of the proposed photodetector, we demonstrated its superior performance in comparison to analogous devices without plasmonic structures. Its high responsivity, quantum efficiency, internal gain, and significant photocurrent across the UV spectrum make this structure a promising platform for designing and fabricating optoelectronic UV devices for various sensing applications.
Methods
Definition of the optical response of the proposed device. To extract the optical properties of the proposed UV photodetector, we investigated the excitation of bright and dark plasmon resonant modes and their interference using the finite-difference time-domain (FDTD) method (Lumerical FDTD). In the simulations, the following parameters were employed to determine the plasmonic responses: the spatial cell sizes were set to $d_x = d_y = d_z$ = 0.8 nm, with 48 perfectly matched layers (PMLs) as boundaries, and the simulation time step was set to 0.01 fs according to the Courant stability criterion. The light source was a linearly polarized plane-wave source with a pulse length of 2.6533 fs, an offset time of 7.5231 fs, and an illumination power of P = 20 μW/mm². Using ellipsometric data for Al thin films obtained in recent works [18,19], we calculated the dielectric function of the nanostructures with a thin oxide coverage using the Bruggeman dielectric model applied to a modified Drude model: $\varepsilon(\omega) = \varepsilon_\infty - \omega_p^2/(\omega^2 + i\Gamma\omega)$, where $\varepsilon_\infty$ (~2-3) is the high-frequency response, $\omega_p$ (~13.9 eV) is the bulk plasmon frequency, and Γ (~1.2 eV) is the damping constant [19]. It should be underlined that in our FDTD simulations, empirically determined settings and conditions were employed to study the features and plasmon responses of the proposed UV detector [6]. Definition of the electrical response of the proposed device. To determine the electrical properties, such as the responsivity and the dark current characteristic (I-V) diagrams, we used the fully physics-based Lumerical DEVICE solver. To this end, we applied the finite element mesh (FEM) generation method to solve the electrical properties of the plasmonic UV detector numerically. For the GaN substrate with the metallic electrodes and clusters, the work function was taken as 5.85 eV following Kim et al. [65], with a dc permittivity of 9.7. The bias voltage during the simulations was set to 0.0 V < $V_G$ < 5.0 V. The majority carrier (electron) lifetime and diffusion length were taken from experimental work on n-type GaN epilayers on sapphire substrates with a dislocation density of $10^8\ \mathrm{cm^{-2}}$, where the recombination lifetime was set to $\tau_n$ = 6.5 ns [66-68].
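The modified Drude model above can be evaluated as follows; this sketch uses the aluminum constants quoted in the Methods but omits the Bruggeman mixing with the oxide shell, which would additionally be required for the oxidized nanodisks:

```python
import numpy as np

def drude_permittivity(E_photon_eV, eps_inf=2.5, E_p_eV=13.9, gamma_eV=1.2):
    """Drude dielectric function eps(w) = eps_inf - wp^2 / (w^2 + i*Gamma*w),
    evaluated directly in photon-energy units (eV). Parameter values follow
    the aluminum constants quoted in the Methods [19]."""
    E = E_photon_eV
    return eps_inf - E_p_eV**2 / (E**2 + 1j * gamma_eV * E)

# Permittivity of aluminum across the UV band (~250-400 nm -> ~3.1-5.0 eV).
energies = np.linspace(3.1, 5.0, 100)
eps = drude_permittivity(energies)
# Re(eps) stays strongly negative in the UV, which is why aluminum supports
# plasmon resonances there while gold and silver do not.
```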
Saturation velocity ($V_{sat}$) calculation. As a general rule for III-V materials, the bulk saturation velocity in the high-field regime can be modeled as a function of lattice temperature as [69]: $v_{sat}(T) = V_{sat}/\left((1-A) + A\,T/300\,\mathrm{K}\right)$, where $V_{sat}$ is the saturation velocity at the reference lattice temperature (T = 300 K), and A is a temperature coefficient capturing the strong material dependence of the mechanism (Monte Carlo simulation model). Relaxation time estimation. For the examined aluminum antennas, the relaxation time is estimated as $\tau = R_L C$ [70,71], where $R_L$ is the aluminum antenna resistance and C is the gate-interface capacitance, determined by the vacuum permittivity $\varepsilon_0$, the GaN dc permittivity, the pitch L, and the electrode width W. For L = 500 nm, W = 200 nm, $R_L$ = 250 kΩ, and $\varepsilon_{GaN}$ = 9.7, the capacitance is calculated as 3.3 pF. Using these values, the relaxation time is $\tau = 0.825 \times 10^{-6}$ s.
Fig. 1. a) Schematic of the plasmonic photodetector composed of aluminum nanodisk clusters deposited on GaN-sapphire substrates; the insets define a nanodisk and a heptamer cluster with their geometrical parameters, b) top view of the photodetector with the geometrical dimensions identified, c) cross-sectional view of hot electron generation and transport under the aluminum-based nanodisk clusters at the GaN-metal interface, d) schematic band diagram of the aluminum-GaN interface, showing the carrier formation mechanism in the device.
Fig. 2. a) Scattering and absorption cross-sectional profiles for an aluminum heptamer antenna with a 2 nm oxide layer around the nanoparticles, using the aluminum data of Knight et al., b) E-field map of the plasmon resonance excitation and hybridization in the antenna, showing the formation of energetic hotspots between proximal nanodisks.
Fig. 3. Plasmon responses of the UV photodetector: a) absorption spectra for heptamer clusters deposited on the GaN epilayer with varying oxide thickness and without metallic heptamers, b) E-field enhancement diagram for the UV device with and without aluminum clusters, c) numerically computed absorption spectra versus oxide layer thickness as a function of the incident UV beam in a heptamer nanocluster.
Figure 3(b) shows the numerically obtained absorption spectra for variations of the Al2O3 thickness as a function of the incident beam. The absorption ratio increases significantly, including a red-shift toward the visible spectrum, as the oxide thickness and hence the overall size of the heptamer increase. The enhancement of the electric field |E| at the gap spots between the central nanodisks of the heptamer cluster is shown in Fig. 3(c).
Fig. 5. Electrical response of the UV photodetector: a) numerically obtained photocurrent-voltage (I-V) curves for two different oxide thicknesses of a heptamer cluster and without heptamers; the inset shows the dark current-voltage (I-V) curves for the non-plasmonic and plasmonic UV photodetector at λ = 325 nm and 335 nm for t_ox = 2 nm and 4 nm, respectively, b) polarization independence of the generated photocurrent (blue spheres) for varying polarization angle of the incident UV beam.
Fig. 6. Spectral responses of the UV detector in the non-plasmonic and plasmonic regimes with varying Al2O3 thickness of the heptamer clusters: a) responsivity profile under 5.0 V applied bias (inset: responsivity profile for the non-plasmonic regime), b) internal quantum efficiencies (IQEs) for the different regimes of the UV detector. | 9,521.4 | 2016-06-13T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Integrated Pedal System for Data Driven Rehabilitation
We present a system capable of providing visual feedback for ergometer training, allowing detailed analysis and gamification. The presented solution can easily upgrade any existing ergometer device. The system consists of a set of pedals with embedded sensors, readout electronics, and wireless communication modules, plus a tablet device for interaction with the users; it can be mounted on any ergometer, transforming it into a full analytical assessment tool with interactive training capabilities. The methods for capturing the forces and moments applied to the pedal, as well as the pedal's angular position, were validated using reference sensors and high-speed video capture systems. The mean absolute error (MAE) for the load is found to be 18.82 N, 25.35 N, and 0.153 Nm for $F_x$, $F_z$, and $M_x$, respectively, and the MAE for the pedal angle is 13.2°. A fully gamified ergometer training experience has been demonstrated with the presented system, enhancing the rehabilitation experience with audio-visual feedback based on measured cycling parameters.
Introduction
A growing global population combined with higher life expectancy has increased the number of elderly people in the world to unprecedented levels. As a consequence, the demand for healthcare services and the expenditure of national health services have increased dramatically. Physiotherapy is a major part of these services needed by the elderly population for physical rehabilitation, injury prevention, and well-being. In Europe, the market for physiotherapy services was forecast to grow 7.7% annually from 2018 to 2023, with the global market expected to reach over $165 billion by 2023 [1]. This also increases the load on hospitals and clinics for such services. There is an immediate need for new technologies to efficiently handle the needs of the aging population.
The aim of rehabilitation is to enable a person to regain their health after an injury, disease, or surgery [2]. Successful rehabilitation leads to higher independence for the individual, decreasing the load imposed on caretakers (e.g., nursing homes) and on their families. Studies have shown that rehabilitation is most effective when it is tailored to the individual. Thus, it is of paramount importance that training programs and intervention strategies are planned on a patient-by-patient basis. Currently, this often is not the case, due to the high costs and limited availability of specialized caregivers [3,4].
Ergometer training has been regularly utilized for rehabilitation and is shown to provide many benefits to patients, such as increased muscle strength, reduced risk of cardiovascular disorders, and significant improvements in metabolic responses [5]. Patients suffering from neurological or physiological conditions that impair coordination, strength, or conditioning, as well as patients suffering from cardiopulmonary diseases, benefit significantly from rehabilitation with ergometer training [6-8]. Today, most ergometer devices do not provide advanced analytics, and training sessions are dull for the patient, with progress either not monitored or tracked only through total power output and average cadence. Most rehabilitation exercises, including ergometer training, are repetitive and require a long-term commitment to see any benefits. However, less than half of the patients actually perform the training exercises prescribed by their therapists [9]. Psychological encouragement is important to motivate participants to train regularly, and an individual's motivation has been shown to be strongly linked to training participation, likelihood to continue the rehabilitation, and overall performance [10,11]. Gamifying rehabilitation and athletic exercises with interactive games has been attempted to motivate patients to perform the necessary training, and video games have been utilized with success for both athletic and rehabilitation purposes [12]. Overall, ergometer training for rehabilitation would benefit from a refined, individualized training approach with a motivational stimulus.
Ergometer training is performed by a large number of athletes seeking to maximize their performance. Their post-session analyses are often more detailed, including, but not limited to, tracking the power output of each leg individually, the applied forces, and the joint angles for each phase of a pedaling cycle. This detailed analysis allows athletes to fine-tune their training sessions to specifically work on weaknesses and imbalances for better performance. The recent tools and technologies utilized by athletes have not been adopted by rehabilitation programs for patients, mainly due to the complexity of integrating different systems and the costs associated with upgrading equipment.
In this work, we present a system capable of providing advanced feedback for ergometer training, allowing for detailed analysis and gamification. The presented solution can easily upgrade any existing ergometer device. The system consists of a set of pedals with embedded sensors, readout electronics, and a wireless communication module, which can be mounted on any ergometer, transforming it into a full analytical assessment tool with interactive training capabilities. A complete analysis of ergometer training can be performed by capturing training parameters not measured by standard ergometers. The large number of captured parameters also allows for the gamification of rehabilitation exercises. By measuring the user's output continuously and giving feedback to the user via an attached tablet device, the pedals can be used as controllers to play a game on the tablet. Our system augments the rehabilitation experience by providing a motivational stimulus, through the gamification of the training process, and by providing in-depth analytics.
See Figure 1 for a depiction of the developed system. The contribution of the design is of a practical nature, meaning that the ultimate goal is to perform experiments with the developed system and assess its contribution to the rehabilitation process. This paper is structured as follows: we first describe the pedal system and present its operating principle. We evaluate a method to extract the applied forces and torques from the raw sensor data. We then explore the methods with which a patient's cycling parameters are estimated. Specifically, the methods for estimating the pedal and crank angle are presented. The former is of importance as it is closely linked to the foot's ankle angle, a key metric for assessing a person's joint control. The crank angle, on the other hand, provides the ability to compare the consistency of one's pedaling patterns during a session, allowing for a more refined post-session analysis. We then give an overview of all analytical outputs generated by the system. Lastly, an implementation of an interactive training program utilizing the pedal system and a game displayed on a tablet device is presented.
Hardware
The developed pedals comprise an inductive sensor measuring the applied load and an inertial measurement unit (IMU) consisting of an accelerometer and a gyroscope. The pedal's sensor suite thus measures the experienced load, acceleration, and angular velocity, each along three axes. A full breakdown of the components in the pedal system can be seen in Figure 2. The system further includes: an nRF52832 (Nordic Semiconductor, Trondheim, Norway) SoC running custom C firmware handling sensor readout and BLE data communication to a smartphone/tablet; a rechargeable, single-cell LiPo battery; and a battery charger.
Our inductive sensor consists of a copper plate (target) and an inductive coil wired in parallel with a capacitor. This creates an LC resonant circuit with variable inductance L, which we refer to as an LC-tank. By changing the relative position of the target and the coil, the resonance frequency of the LC-tank shifts as a consequence of Faraday's law of induction [13,14]. By measuring the resonance frequency of the LC-tank, the distance between the target and the coil can be calculated. The target is mounted on a spring, so a load applied to the target is translated into a displacement and hence, via the LC-tank, into a change in resonance frequency. This process is illustrated in Figure 3. By utilizing four LC-tanks and placing them in a particular orientation with respect to the target, the displacement of the target along three axes can be calculated, and thus loads exerted along three axes can be measured.
Figure 3. Load sensor working principle. By applying a force F to the target T, the spring S gets displaced by δx. This change in target position causes the inductance L of the LC-tank to change, and thus causes a change in resonance frequency δf. By measuring the resonance frequency f of the LC-tank for various forces F, one can construct a mapping from f to F and thus estimate the forces based on the LC-tank's resonance frequency.
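A toy numerical model, shown below, illustrates this sensing chain from displacement to resonance-frequency shift; all component values and the linearized inductance model are placeholders, not the actual sensor parameters:

```python
import numpy as np

def resonance_frequency(L, C):
    """Resonance frequency of an LC-tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * np.pi * np.sqrt(L * C))

# Toy model: the coil inductance drops as the copper target approaches,
# because eddy currents in the target oppose the coil field (Faraday's law).
L0 = 10e-6    # nominal coil inductance, H (placeholder)
C = 100e-12   # tank capacitance, F (placeholder)
k = 0.02      # relative inductance change per um of displacement (placeholder)

def tank_frequency(displacement_um):
    L = L0 * (1.0 - k * displacement_um)  # linearized L(x) around rest position
    return resonance_frequency(L, C)

# A load compresses the spring by dx, shifting the resonance frequency;
# inverting this mapping (in practice via calibration) yields the applied force.
df = tank_frequency(5.0) - tank_frequency(0.0)
```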
A spring that deforms in 3D was designed to measure the most significant forces and torques for the intended application, i.e., pedalling. These are the normal force $F_z$, the forward shear force $F_x$, and the torque around the x-axis $M_x$. The shear force in the y-direction and the torques in the y- and z-directions are either negligible or not of interest for the intended application. The configuration of the coils and a qualitative depiction of the behavior of the resonance frequencies as a function of the three typical load cases can be seen in Figure 4.
Figure 4. Working principle for 3D load detection using three coils. Here, a forward shear force $F_x$ will cause a change in $f_0$ and the opposite change in $f_1$ and $f_2$. A torque around the x-axis $M_x$ causes a change in $f_1$ and the opposite change in $f_2$, with $f_0$ remaining constant. Finally, a normal force $F_z$ causes all frequencies to change equally. With this, all load types can be differentiated.
Definitions and Notation
In this section, we explain the calibration procedures used for obtaining the desired information from the raw sensor data. First, we illustrate the procedure for mapping the LC-tank resonance frequencies f to load values F. Second, we show how we obtain the kinematic parameters of interest for our application: the crank angle φ, the pedal angle θ, and the cadence $\dot\varphi$. Everything reported herein applies to both sides, left and right, but each side is evaluated independently of the other. For details on the notation used, please refer to Appendix A.1.
Crank Angle Definition
We define the world frame I to be centered on the axle of the crank arm with the x- and y-directions parallel to the floor, the x-direction pointing towards the 'direction of cycling', and the y-direction pointing left. The z-axis is normal to the ground and points upwards. The crank angle φ is defined as the angle between the z-axis of the world frame and the line connecting the crank axle with the pedal's axle. See Figure 5 for reference.
Pedal Angle Definition
The pedal body frame B is centred on the axis of rotation of the pedal, with the x-direction running towards the 'direction of cycling'. The z-axis is normal to the surface of the pedal, pointing downwards when the pedal's x-axis is aligned with the world frame's x-axis. The pedal angle θ is defined as the angle between the x-axis of the pedal frame and the x-axis of the world frame (Figure 5). According to our definition, the pedal angle is expected to be constrained to a subrange of [−90°, 90°], depending on the user's ankle flexibility. Note that the pedal angle θ is defined with respect to the world-horizontal plane (perpendicular to the gravity vector g) and is independent of the crank angle φ. The gyroscope measures the pedal's angular rate $\dot\theta = \omega$, while the accelerometer measures the pedal's acceleration biased by gravity, ${}_B\ddot{x} = {}_B a + {}_B g$. Please also note that both frames of reference, I and B, are 3D orthonormal, right-handed frames, with their y-axes pointing inward and outward, respectively. These axes are not depicted in the image to avoid overcrowding. The pedal's motion is mechanically constrained to the xz-plane, and it can thus be assumed, without loss of generality, that the y-position of the pedal is constant at 0.
Load Sensor Calibration
We calibrate each axis of the load sensor individually by collecting data with our sensor and a reference sensor (OMD-45-FH-2000N; OptoForce Ltd., Budapest, Hungary) with 1 N resolution and a 1000 N maximum (compressive) load. We map the resonance frequency readouts f to the load readouts F using linear machine-learning models and cross-validation.
A custom calibration setup was built for collecting calibration data. The calibration setup consists of a mounting tower, a reference sensor, and a load lever. The calibration procedure involves an operator handling the data acquisition with the two sensors and applying loads to the system. In the following, we detail the calibration protocol and the models used for mapping from frequency readouts to force/torque readouts. A schematic representation of the calibration setup is depicted in Figure 6.
For each desired output F ∈ U = {$F_x$, $F_z$, $M_x$}, the sensor is mounted on the calibration device such that the dominant applied load is the desired output. After both systems, the reference sensor and the pedal, have started logging, the system is loaded eight times in cycles within the calibration range. We denote with T = {k : k ∈ calibration time} the set of all time samples occurring during the calibration run. We thus obtain two time-aligned datasets, the frequency readouts f[k] and the reference loads F[k], for k ∈ T; e[k] denotes the estimation error. From here on, we drop the time-sample dependence k, as the mapping is algebraic and does not depend on time.
Thus, we define the three estimates $\hat{F}_x$, $\hat{F}_z$, and $\hat{M}_x$ to be $\hat{F}_x = \mathcal{M}_{F_x}(f)$, $\hat{F}_z = \mathcal{M}_{F_z}(f)$, and $\hat{M}_x = \mathcal{M}_{M_x}(f)$. As mappings $\mathcal{M}_i$, we use cross-validated LASSO estimators. The models are trained using scikit-learn [15] with 10-fold cross-validation, a 30% test fraction, and a 25-element regularization log-space from $10^{-10}$ to $10^2$.
Figure 6. Schematic representation of the force-calibration setup. Depending on the mounting mode of the pedal P, the operator can apply forces F to P and simultaneously collect data from P and the reference sensor O. Three mounting modes are possible, enabling loading in $F_x$, $F_z$, and $M_x$.
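A minimal sketch of this calibration fit is given below, using the scikit-learn settings stated above (10-fold cross-validation, 30% test fraction, 25-element regularization log-space); the synthetic data stand in for an actual calibration run:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

# f: (n_samples, n_coils) resonance-frequency readouts; F: reference loads for
# one output channel (e.g. F_x). Placeholder random data stand in for a run.
rng = np.random.default_rng(0)
f = rng.normal(size=(1000, 4))
F = f @ np.array([3.0, -1.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=1000)

# 30% test fraction; 10-fold cross-validation over a 25-element regularization
# log-space from 1e-10 to 1e2, as described in the text.
f_train, f_test, F_train, F_test = train_test_split(f, F, test_size=0.3,
                                                    random_state=0)
model = LassoCV(alphas=np.logspace(-10, 2, 25), cv=10).fit(f_train, F_train)
F_hat = model.predict(f_test)
mae = np.mean(np.abs(F_test - F_hat))  # held-out mean absolute error
```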
Pre-Processing
In order to limit the effect of noise and improve results, the raw data were low-pass filtered before further processing. All analysis-relevant signals were passed through a second-order Butterworth [16] low-pass filter with the cutoff frequency set at $f_C$ = 2.2 Hz. The filter was applied using the forward-backward filtering function filtfilt() implemented in the scipy.signal [17] Python module.
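For reference, the described filter corresponds to the following few lines (the sensor sampling rate of 25 Hz is taken from the pedal angle estimation section):

```python
from scipy.signal import butter, filtfilt

fs = 25.0   # sensor sampling rate, Hz
fc = 2.2    # cutoff frequency, Hz (from the text)

# Second-order low-pass Butterworth filter, applied forward-backward
# (zero-phase) with filtfilt, as described above.
b, a = butter(N=2, Wn=fc, btype='low', fs=fs)

def lowpass(signal):
    return filtfilt(b, a, signal)
```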
Kinematic Model
For slowly changing cadences, i.e., $\ddot\varphi \approx 0$, we can write the acceleration of the pedal represented in the world frame as
$$ {}_I\ddot{x} = \begin{pmatrix} -r\dot\varphi^2 \sin\varphi \\ -r\dot\varphi^2 \cos\varphi - g \end{pmatrix}, $$
where r is the crank-arm length, i.e., the distance between the crank-axle center and the pedal-axle center, and g = 9.81 m/s² is the gravitational acceleration. By applying the rotation matrix $R_{BI}$ to the acceleration, we get the theoretical output of the IMU as
$$ {}_B\ddot{x} = R_{BI}\,{}_I\ddot{x} = \begin{pmatrix} -r\dot\varphi^2(\sin\varphi\cos\theta + \cos\varphi\sin\theta) - g\sin\theta \\ -r\dot\varphi^2(\sin\varphi\sin\theta - \cos\varphi\cos\theta) + g\cos\theta \end{pmatrix}. $$
For more details on the derivation of these results, the interested reader is referred to Appendix A.
Crank Angle Estimation
The squared magnitude of the acceleration for crank revolutions performed at approximately constant rates can be expressed as
$$ \|\ddot{x}\|^2 = r^2\dot\varphi^4 + g^2 + 2rg\dot\varphi^2\cos\varphi, $$
which is maximal when the crank arm is at top dead center (TDC), φ = 0°, and minimal at bottom dead center (BDC), φ = 180°. Given that, for our application, the approximation of constant cadence, $\ddot\varphi \approx 0$, is reasonable, this allows us to detect the crank angle at TDC and BDC by identifying the maxima and minima of the acceleration magnitude signal. To obtain the crank angle φ(t) for every time stamp t, we apply linear interpolation for all points between successive maxima and minima. This approach is aligned with the assumption of slowly changing cadences: each individual crank revolution is assumed to be carried out at a constant rate, but steady-state pedalling, i.e., a constant cadence throughout the measurement session, is not required.
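A simplified sketch of this procedure is shown below; it assumes cleanly detected, alternating extrema, whereas real data would additionally require prominence thresholds in the peak detection:

```python
import numpy as np
from scipy.signal import find_peaks

def crank_angle(acc_sq, t):
    """Estimate the crank angle from the squared acceleration magnitude.
    Maxima correspond to TDC (phi = 0 deg), minima to BDC (phi = 180 deg);
    the angle is linearly interpolated in between, assuming each revolution
    is carried out at an approximately constant rate."""
    maxima, _ = find_peaks(acc_sq)    # TDC candidates
    minima, _ = find_peaks(-acc_sq)   # BDC candidates
    # Interleave the events and assign cumulative angles in 180 deg steps.
    events = np.sort(np.concatenate([maxima, minima]))
    angles = np.arange(len(events)) * 180.0
    if len(events) and events[0] in set(minima):
        angles += 180.0               # first detected event is a BDC, not a TDC
    phi = np.interp(t, t[events], angles) % 360.0
    return phi
```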
To validate this method, we check the offset between the left and right pedal: ideally, the crank angle should be offset by $\Delta\varphi_{LR} = 180^{\circ}$ between the two sides. This is done by taking the crank angle of one side at a specific time stamp and computing the offset with respect to the measured value on the other side. Since the time stamps are not perfectly synchronized, the crank angle on the other side is estimated by linear interpolation between the values at the closest time stamps before and after the time stamp in question. The average offset is then taken over the session; the average crank angle offset is computed to be $\Delta\varphi_{LR} = 175.75^{\circ} \pm 0.920^{\circ}$.
Pedal Angle Estimation
The overall pedal angle estimation method is detailed in Figure 7. It consists of a first rough estimate computed from the accelerometer measurements, which is then fed to a Kalman filter (KF) in which the gyroscope data are integrated. This method has already been proposed in [18] for high sampling rates ($f_s = 500$ Hz). Since our sensors sample at $f_s = 25$ Hz, the method's effectiveness at these lower frequencies has to be evaluated.

Figure 7. Flow chart outlining the process used to derive the pedal angle. After passing the analysis-relevant signals through a second-order Butterworth low-pass filter (LPF), we compute a rough estimate of the pedal angle $\hat{\theta}_a$ based on the accelerometer measurements and then fuse the gyroscope measurements $\omega$ with $\hat{\theta}_a$ using a KF to get the refined estimate $\hat{\theta}$.
Rough Estimate-Acceleration Angle
We compute a first rough pedal angle estimate $\hat{\theta}_a(t)$ using the accelerometer measurements:

$$ \hat{\theta}_a(t) = \operatorname{arctan2}\!\left({}^{B}\ddot{x}_x,\; {}^{B}\ddot{x}_z\right). \qquad (8) $$

This estimate is exact in the case $\dot{\varphi} = 0\ \forall t$. However, it is only accurate when either the crank arm is stationary ($\dot{\varphi} = 0$) or the crank arm is at $\varphi \in \{0^{\circ}, 90^{\circ}\}$, as can be seen in (6). We define the uncertainty of $\hat{\theta}_a$ as $\rho(t)$ and thus write $\hat{\theta}_a(t) = \theta(t) + \rho(t)$.

Refining the Estimation-Kalman Filter

As the pedals feature a gyroscope, we can also estimate the pedal angle by integrating the angular velocity signal $\omega(t) = \dot{\theta}$ of the pedal. Dead reckoning is a notoriously difficult task, and naively integrating noisy IMU data is as unreliable as the acceleration-based estimate introduced above [18]. But by fusing the two approaches by means of a KF, an improved estimate of the pedal angle $\hat{\theta}$ can be obtained.
The KF fuses the acceleration-derived data with the angular velocity data. The KF [19] yields the best estimate of the pedal angle by accounting for noise, approximating the sensor outputs as Gaussian distributions.
As done in [18], we define the underlying state-space model used in our KF implementation as

$$ \theta[k+1] = \theta[k] + T_s\,\omega[k] + \nu[k], \qquad \hat{\theta}_a[k] = \theta[k] + \rho[k], $$

where $[k]$ denotes the discrete sample index, $T_s$ is the sampling interval, $\nu(\cdot) \sim \mathcal{N}(0, Q)$ is the process noise, and $\rho(\cdot) \sim \mathcal{N}(0, R)$ is the measurement noise (comprising both actual sensor noise and the rough-estimate uncertainty due to pedalling). It shall be noted that the assumption $\rho(\cdot) \sim \mathcal{N}(0, R)$ is strong and does not reflect the reality of the system, as the uncertainty due to pedalling is actually correlated. Nevertheless, this is the simplest model, and the goal herein is to investigate the limits of this simplification.
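A minimal scalar Kalman filter implementing this model might look as follows; the noise covariances Q and R are assumed placeholder values (the text tunes the filter parameters against video ground truth).

```python
import numpy as np

def kf_pedal_angle(theta_a, omega, Ts=1/25.0, Q=1e-3, R=1e-1):
    """Fuse the accelerometer-based rough angle theta_a with the gyro rate omega."""
    theta, P = theta_a[0], R           # initialize from the first rough estimate
    est = np.empty_like(theta_a)
    for k in range(theta_a.size):
        # Predict: propagate the angle with the integrated gyroscope reading.
        theta += Ts * omega[k]
        P += Q
        # Update: correct with the rough accelerometer estimate theta_a[k].
        K = P / (P + R)
        theta += K * (theta_a[k] - theta)
        P *= (1.0 - K)
        est[k] = theta
    return est
```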
Calibration Results
For any estimate $\hat{x}$ of a reference value $x$, we define the estimation error to be $e_x = x - \hat{x}$. We further denote the average error of said estimate by $\mu_x = \mathbb{E}[e_x]$ and the error standard deviation by $\sigma_x$. The mean absolute error (MAE) is defined as $\mathrm{MAE}_x = \mathbb{E}[|e_x|]$, and the root mean squared error (RMSE) is $\mathrm{RMSE}_x = \sqrt{\mathbb{E}[e_x^2]}$.
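These definitions translate directly into code; a small helper, assuming NumPy arrays of reference and estimated values:

```python
import numpy as np

def error_stats(x_ref, x_hat):
    e = x_ref - x_hat                        # estimation error e_x = x - x_hat
    return {
        "mu":   e.mean(),                    # average error
        "std":  e.std(ddof=1),               # error standard deviation
        "MAE":  np.abs(e).mean(),            # mean absolute error E[|e_x|]
        "RMSE": np.sqrt((e**2).mean()),      # sqrt(E[e_x^2])
    }
```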
Load
We execute the force calibration procedure for all loads of interest and obtain three linear models, one per load, relating the resonance frequencies f of the LC tanks to the loads F. The performance of the $F_z$ model on the test dataset is shown in Figure 8, and the full force-calibration statistics are reported in Table 1. The proposed models are capable of reliably mapping the resonance frequencies f to the loads of interest.
Kinematic Parameters
The final pedal angle estimate $\hat{\theta}$ for the validation dataset has the following error ($e = \theta - \hat{\theta}$) statistics: root mean squared error $\mathrm{RMSE}_\theta = 16.72^{\circ}$; mean absolute error $\mathrm{MAE}_\theta = 13.20^{\circ}$; average error $\mu_\theta = -2.85^{\circ}$; error standard deviation $\sigma_\theta = 16.48^{\circ}$; and coefficient of determination $R^2_\theta = 0.601$. It shall be noted that the performance is suboptimal compared to that presented in [18], mainly due to two facts. First, the sampling rate of the IMU is only 25 Hz. The system would greatly benefit from higher sampling rates, but this is problematic from a raw-data transmission point of view, as the bandwidth of the BLE protocol used is limited. Second, another source of error is the assumption that $\rho$ is normally distributed, $\rho(\cdot) \sim \mathcal{N}(0, R)$. While this may well be a good approximation for the sensor noise, it certainly does not reflect the character of human pedalling. As a consequence, the residual estimation error is not normally distributed, as can be seen in Figure 9. Despite these shortcomings, it is worth noticing that the KF improves the pedal angle estimate $\hat{\theta}$ significantly compared to the accelerometer-based estimate $\hat{\theta}_a$. The validation dataset was collected using video footage at 120 fps to track the orientation of the pedal throughout a session; with this ground truth, the parameters of the KF were adjusted to increase accuracy. A comparison between the acceleration-based estimate $\hat{\theta}_a$ and the KF estimate $\hat{\theta}$ is shown in Figure 9. The majority of the error is attributed to areas near the extrema of the extracted pedal angle, i.e., moments of large pedal angle change, which degrade accuracy at low sampling rates, and regions where the rough estimate $\hat{\theta}_a(t)$ is least accurate, i.e., $\varphi \in \{90^{\circ}, 270^{\circ}\}$. Tracking of the peaks can be improved with a dynamic noise covariance and higher sampling rates.
Applications
We present a system that allows for a refined analysis of ergometer training exercises. The system works with any ergometer device and can be seamlessly integrated into any ergometer training setup for individual or clinical use. It can extract an individual's cycling parameters in real time and allows for comprehensive data visualization and gamification of ergometer exercises. Possible implementations of data visualization and gamification are given in the next two subsections.
Data Visualization
In order to provide a database and a data visualization tool for individuals and doctors, an iPad (Apple Inc., Cupertino, CA, USA) app was developed. It provides a live feed of a patient's current pedalling performance (forces and cadence), as well as a gamified experience while conducting a session. Additionally, the data collected during the sessions are stored for future use by the therapist; in particular, means for visualizing the patient's performance are implemented.
The metrics include commonly used features such as cadence, as well as less common metrics such as the normal force, shear force, or pedal angle, shown either for the session as a whole or for different regions of the session, separately for the left and right foot (Figure 10). With the crank angle, we can also provide insight into these parameters for different portions of the pedalling phase, as illustrated for the force magnitude in Figures 11 and 12. The bio-mechanics of cycling is well known for athletes, and such information could prove useful when diagnosing and treating patients. This is largely because the general profile of these parameters as a function of the crank angle is linked to features such as the work done by certain muscle groups, joint torques, symmetry between the left and right side, and overall performance [20,21].
Gamification
Using the presented pedal system and a tablet device for providing audio-visual feedback, a gamified experience was realized to add a motivational aspect for physical rehabilitation. This system can be utilized for providing a personalized training experience by actively tracking certain cycling parameters and adjusting the training settings to specifically improve those parameters.
In the developed app, the patient's motions while pedalling control a kite flying along a trajectory. The cadence controls the speed of the kite, and the force ratio between the left and the right foot controls the yaw of the kite (e.g., if the total force exerted on the left pedal is higher than that on the right, the kite will yaw towards the left).
The patient sees a path to follow on the screen and, by controlling their cycling parameters, tries to follow it. A number of circuits have been designed for the patients to play in. In addition to following the circuit, the patients have to collect golden coins along the way; the more coins they collect, the higher their final score. The coins are generated along the path and stimulate the users to take more control over the kite's position, improving their balance. The circuits are essentially 2D paths rendered in a 3D world. To prevent users from drifting off course, the kite's position is constrained by an (invisible) tube along the path.
Depending on the type of therapy, one could modify the controls of the kite in the app so that different features (force, cadence, pedal angle, etc.) take control over specific game parameters. For example, a patient with significant foot-drop could be motivated by coupling their pedal angle with the kite's pitch angle, and the positioning of coins in the top part of the circuit-tube.
A rendering of the live view of the game can be found in Figure 13. An overview of the methods used for the post-session analysis as well as the gamified display is shown in Figure 14.
Conclusions
We presented an advanced ergometer rehabilitation system that provides a detailed analysis of exercises and enables more interactive training sessions with audio-visual feedback. We studied the sensing characteristics of the system. The methods for extracting the forces and torques applied to the pedal, along with the angular position of the pedal, were evaluated against their respective reference datasets. The MAE load estimation errors are found to be 18.82 N, 25.35 N, and 0.153 Nm for $F_x$, $F_z$, and $M_x$, respectively. The use of the linear KF decreased the RMSE of the estimated angle and the standard deviation of its error by 32.2% and 32.8%, respectively, relative to the rough estimate. We successfully demonstrated the usage of the developed system on an ergometer and explored the limits of the models used. This first implementation will serve as a benchmark for future improvements. Force and cadence measurements can be used for providing feedback to the user in the game, while the pedal angle, due to its relatively higher error, is better suited to qualitative feedback.
We also presented a new data visualization interface giving more insight into ergometer training sessions. An app was developed that utilizes the feedback from the sensing system to gamify rehabilitation exercises. With the system in place, the effectiveness of, and patients' response to, the gamified experience need to be further explored. This includes, but is not limited to, the importance of the visual and audio components of the game and its ability to steer patients to cycle optimally in individual sessions and over multiple sessions in time.
Outlook
The area where the system exhibits the largest room for improvement is the pedal angle estimation. The KF used improves the naive accelerometer-only estimate $\hat{\theta}_a$ significantly, but the result remains suboptimal. We believe the main reasons for this are the low sampling rate of 25 Hz and the strong assumption $\rho \sim \mathcal{N}(0, R)$. More comprehensive estimators, such as an extended Kalman filter or an unscented Kalman filter (UKF) [22], shall be implemented to achieve better results. In particular, the UKF is promising, as it allows sampling of the actual uncertainty rather than having to assume Gaussian random variables.
It will be interesting to investigate whether our simplification of constant-rate crank revolutions is robust enough for rehabilitation applications. While this is a reasonable assumption for healthy users pedalling regularly, it might not be a good approximation for impaired users, who might display nonlinear crank-revolution patterns that are not approximated well by our linear model.
Another path for improvement is to include the ergometer's dynamics in the model so that it better captures the nonlinearities arising from the user's pedalling and the ergometer's response. Ergometers possess safety features such as viscous feedback forces, and the coupling of these dynamic effects with the user could be an interesting research topic; modeling these effects could improve the presented system's accuracy.
We developed the system with real-time, gamified feedback to have an impact on rehabilitation. Due to its design, it can upgrade any existing ergometer, and patients undergoing rehabilitation stand to benefit from it. The future goal is to quantify the benefits the system brings to actual patients; this shall be achieved by performing clinical trials in which patients' performance is monitored over extended periods.
Abbreviations
Appendix A

Hence, for every revolution of the crank, each of which can be seen to be performed at an approximately constant rate $\dot{\varphi}$, the squared acceleration magnitude has the form $\|\ddot{x}\|^{2} = \alpha\cos\varphi + \beta$ with $\alpha, \beta > 0$, and it is therefore maximal at $\varphi = 0$ rad and minimal at $\varphi = \pi$ rad. It shall be noted that the requirement that each revolution be carried out at a constant rate, $\dot{\varphi}(t) = \text{const}\ \forall t \in [t_{\mathrm{TDC}_i}, t_{\mathrm{TDC}_{i+1}}]$, is weaker than the requirement of steady-state pedalling throughout a session, $\dot{\varphi}(t) = \text{const}\ \forall t$.
Moreover, it shall be noticed that the pedal, i.e., the IMU in our system, experiences only translations, plus an oscillation about its own axis (the pedal angle $\theta$), which is the source of the non-zero gyroscope measurements. (Imagine keeping the pedal flat with respect to the ground while rotating the crank: the sensor's axes stay aligned with the world frame, meaning there is no rotation at all, and the gyroscope will read out 0, despite the pedal moving on a circle.)
Appendix A.3. A Note on the Chosen Frames of Reference and the Transformation Matrix $R_{BI}$
The careful reader will have observed that the depicted frames I and B cannot be related by a rotation matrix $R_{IB} \in SO(2)$, as one would expect for two orthonormal frames. This can be dealt with by expanding the frames of reference to 3D and applying a first rotation of $\pi$ rad about the x-axis when going from I to B, which causes the two frames to have their y-axes pointing in opposite directions. In fact, multiplying the rotation matrix $R_x(\alpha) \in SO(3)$, describing the first rotation about x by the angle $\alpha = \pi$ rad, with the rotation matrix $R_y(\beta) \in SO(3)$ yields a matrix that matches the definition of $R_{BI}$ given in (A6) with an added dimension (y). We could write all kinematics in 3D with the y components constrained to 0, but we opted for the more compact 2D notation. This choice is admittedly debatable, as the right-handed system I appears to be transformed into a left-handed system B (the 3D rotation corresponds to a 2D reflection); however, one can always imagine the third component to be present but set to 0, and since $e^B_y = -e^I_y$, B is 'restored' to a right-handed system.

| 7,547.4 | 2021-12-01T00:00:00.000 | ["Computer Science"] |
Qualitative investigation of Hamiltonian systems by application of skew-symmetric differential forms
A great number of works is devoted to the qualitative investigation of Hamiltonian systems. One of the tools of such investigation is the method of skew-symmetric differential forms. In the present work, Hamiltonian systems are investigated using, in addition to skew-symmetric exterior differential forms, skew-symmetric differential forms that differ in their properties from exterior forms. These are skew-symmetric differential forms defined on manifolds that are nondifferentiable. Such manifolds arise, for example, when physical processes are described by differential equations. This approach to the investigation of Hamiltonian systems enables one to understand the connection between Hamiltonian systems and the partial differential equations that describe physical processes, and to see the peculiarities of Hamiltonian systems and of the relevant phase spaces connected with this fact.
Connection between Hamiltonian systems and partial differential equations
The connection of Hamiltonian systems with partial differential equations can be understood if one performs the analysis of partial differential equations by means of skew-symmetric differential forms. Such a method of investigation was developed by Cartan [2] in his analysis of the integrability of differential equations. In the present work we call attention to some new aspects of such an investigation. Let

$$ F(x_i, u, p_i) = 0, \qquad p_i = \partial u / \partial x_i \qquad (1) $$

be a partial differential equation of the first order. Let us consider the functional relation

$$ du = \theta, \qquad (2) $$

where $\theta = p_i\,dx_i$ (summation over repeated indices is implied) is a differential form of the first degree. The specific feature of functional relation (2) is that in the general case, for example when differential equation (1) describes a physical process, this relation turns out to be nonidentical.
The left-hand side of this relation involves a differential, whereas the right-hand side contains the differential form $\theta = p_i\,dx_i$. For this relation to be identical, the differential form $\theta$ must also be a differential (like the left-hand side of relation (2)), that is, it has to be a closed exterior differential form. This requires that the commutator $K_{ij} = \partial p_j/\partial x_i - \partial p_i/\partial x_j$ of the differential form $\theta$ vanish.
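This closure test is straightforward to carry out symbolically. The sketch below evaluates the commutator for a hypothetical derivative field $p_i(x)$ in two variables, chosen only for illustration; the nonvanishing $K_{12}$ shows that the corresponding form $\theta$ is unclosed and hence not a differential.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# Hypothetical derivative field p_i(x), chosen only for illustration.
p1 = x2**2
p2 = x1 * x2

# Commutator K_12 = dp2/dx1 - dp1/dx2 of the 1-form theta = p1 dx1 + p2 dx2.
K12 = sp.diff(p2, x1) - sp.diff(p1, x2)
print(K12)   # -> -x2, nonzero: theta is unclosed, hence not a differential
```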
In the general case, it does not follow (explicitly) from equation (1) that the derivatives $p_i = \partial u/\partial x_i$ that obey the equation (and the given boundary or initial conditions of the problem) make up a differential. Without supplementary conditions the commutator $K_{ij}$ of the differential form $\theta$ is not equal to zero. The form $\theta = p_i\,dx_i$ turns out to be unclosed and is not a differential, unlike the left-hand side of relation (2). Functional relation (2) thus appears to be nonidentical: its left-hand side is a differential, whereas its right-hand side is not.
[The nonidentity of such a relation was pointed out in the work [5], where the possibility of using the symbol of a differential in the left-hand side of this relation was allowed.] [Functional relation (2) can be written in the form

$$ du - p_i\,dx_i = 0. \qquad (2') $$

This is the well-known Pfaff equation for a partial differential equation. However, the relation cannot be treated as an equation. To solve the equation means to find the derivatives in (2') that make up a differential (so that (2') becomes an identity). In that case the derivatives of equation (1) that do not obey these conditions are ignored, although they satisfy the original equation (1) and the boundary and initial conditions. In the relation, by contrast, all derivatives that satisfy the original equation and the boundary or initial conditions are accounted for, and their role in the physical process under consideration can be analyzed.] The nonidentity of functional relation (2) points to the fact that without additional conditions the derivatives of the original equation do not make up a differential. This means that the corresponding solution u of the differential equation will not be a function of the variables $x_i$ alone: the solution will depend on the commutator of the form $\theta$, that is, it will be a functional.
To obtain a solution that is a function (i.e., one whose derivatives compose a differential), it is necessary to add the closure conditions for the form $\theta = p_i\,dx_i$ and for the corresponding dual form (in the present case the functional F plays the role of the form dual to $\theta$) [2]:

$$ d\theta = 0, \qquad dF = 0. \qquad (3) $$

[The dual form corresponding to an exterior differential form defines the manifold or structure on which the exterior form is defined.] If we expand the differentials, we get a set of homogeneous equations (4) with respect to $dx_i$ and $dp_i$ (in the 2n-dimensional tangent space). The solvability conditions for this system, namely the vanishing of the determinant composed of the coefficients of $dx_i$ and $dp_i$, yield relations (5). The relations obtained establish a connection between the differentials of the coordinates $\{dx_i\}$ and the differentials of the derivatives $\{dp_i\}$ that satisfy the original equation. It is clear that these differentials specify integral curves on which the derivatives of the original equation form a differential. By their properties, the integral curves in phase space are pseudostructures. The differential defined only on an integral curve is an interior differential; it makes up a closed inexact exterior form, namely an exterior form closed only on some pseudostructure (the dual form is assigned to the pseudostructure).
Since on the integral curves defined by relations (5) the derivatives of equation (1) constitute a differential, the corresponding solution to the original equation is a function rather than a functional. Such solutions, which are functions (i.e., depend only on the variables) and are defined only on pseudostructures, are the so-called generalized solutions [6]. The derivatives of a generalized solution constitute an exterior form that is closed on the pseudostructure.
If conditions (5) are not satisfied, the differential form $\theta = p_i\,dx_i$ is unclosed and is not a differential. The derivatives do not form a differential, and the solution corresponding to such derivatives will depend on the commutator $K_{ij}$ composed of the derivatives. This means that the solution is a functional rather than a function.
Relations (5), which are integrating relations, can be obtained by other means: if one finds the characteristics of equation (1), it turns out that relations (5) are exactly the conditions that specify the characteristics of the equation under consideration. That is, the integrating relations (5) are the characteristic relations of the partial differential equation (in what follows such relations will be referred to as characteristic relations).
Here we call attention to some points that will be needed in the further investigation.

Firstly, the characteristic relations were obtained from the requirement that the determinant of the set of equations (4) vanish. This means that the change from the original equation to the equation that obeys the characteristic relations is a degenerate transformation. One can see that this degenerate transformation is a transition from derivatives of the original equation in the tangent space to derivatives in the cotangent space.
And secondly, to obtain a solution that is a function, it is necessary to impose two (rather than one) additional conditions on the original equation: 1) the closure condition for the differential form composed of the derivatives of the original equation, and 2) the closure condition for the relevant dual form. The first condition states that the closed form is a differential. In this case, as one can see from the results obtained, the closed form can only be an inexact form, that is, a form closed only on some pseudostructure. The second condition is just what allows one to obtain such a pseudostructure. Now assume that equation (1) does not depend explicitly on u and is resolved with respect to some variable, for example t, that is, the equation has the form

$$ \frac{\partial u}{\partial t} + E(t, x_j, p_j) = 0. \qquad (6) $$

In this case the differential form $\theta$ in functional relation (2) takes the form $\theta = -E\,dt + p_j\,dx_j$. Relations (5) (the closure conditions of the differential form $\theta$ and the corresponding dual form) can be written down (in this case $\partial F/\partial p_1 = 1$) and reduced to the form

$$ \frac{dx_j}{dt} = \frac{\partial E}{\partial p_j}, \qquad \frac{dp_j}{dt} = -\frac{\partial E}{\partial x_j}. \qquad (7) $$

These are the integral (characteristic) relations for equation (6), which are the conditions of integrability of this equation.
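To make relations (7) concrete, the sketch below numerically integrates the characteristic system for an assumed illustrative choice of $E(x, p)$ (not taken from the text); because this $E$ has no explicit time dependence, it is conserved along the computed integral curve.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative choice E(x, p) = p**2/2 + x (not from the text).
E = lambda x, p: 0.5 * p**2 + x

def rhs(t, y):
    x, p = y
    return [p, -1.0]          # dx/dt = dE/dp = p,  dp/dt = -dE/dx = -1

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 1.0], max_step=0.01)
x, p = sol.y
# E has no explicit t-dependence, so it is conserved along the integral curve:
print(np.allclose(E(x, p), E(x[0], p[0])))   # -> True
```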
From relations (7) it follows that on the integral curves the differential of the form $\theta$ equals zero: $d\theta_\pi = d(-E\,dt + p_j\,dx_j)_\pi = 0$ (here the index $\pi$ denotes restriction to the integral curve, namely to the pseudostructure). This means that the derivatives $p_1 = \partial u/\partial t$, $p_j = \partial u/\partial x_j$ of equation (6), on the integral curves obtained from relations (7), make up a closed inexact exterior form $\theta_\pi = (-E\,dt + p_j\,dx_j)_\pi$, namely an interior (on the integral curves) differential. The solution to equation (6) corresponding to such derivatives will be a (generalized) function rather than a functional.
The equations of field theory have a form similar to that of equation (6):

$$ \frac{\partial s}{\partial t} + H(t, q_j, p_j) = 0, \qquad p_j = \frac{\partial s}{\partial q_j}, \qquad (8) $$

where s is the field function (the state function) for the action functional $S = \int L\,dt$. Here $L(t, q_j, \dot{q}_j)$ is the Lagrange function and H is the Hamilton function, $H = p_j\dot{q}_j - L$. The corresponding characteristic relations for equation (8) have the form

$$ \frac{dq_j}{dt} = \frac{\partial H}{\partial p_j}, \qquad \frac{dp_j}{dt} = -\frac{\partial H}{\partial q_j}, \qquad (9) $$

that is, they are Hamiltonian systems.
As is well known, the canonical relations have just such a form. The analogy between Hamiltonian systems and characteristic relations, as will be shown below, allows one to see the peculiarities of Hamiltonian systems.
It should be emphasized here that, although equations (6) and (8) have the same form, they differ fundamentally from one another. As is known, equation (8) is referred to as the Hamilton-Jacobi equation. Unlike equation (6), where no restrictions are imposed on the function E and this function is defined on the tangent manifold, in equation (8) the function H is the Hamilton function, defined on the cotangent manifold; that is, additional conditions are already imposed on this function. These specific features will be considered below.
Properties and peculiarities of characteristic relations
As shown above, the characteristic relations were obtained from a first-order partial differential equation under the conditions that the differential form composed of the derivatives of this equation and the corresponding dual form be closed. Only under such conditions do the derivatives of the original equation make up a differential, and the corresponding solutions prove to be functions rather than functionals, that is, they depend only on the variables.

It should be remembered here that we analyze partial differential equations which arise in the description of physical processes. Without additional conditions such equations are nonintegrable. The characteristic relations are precisely the additional conditions under which the derivatives of the original equation make up a differential. They define the integral curves $\{x_i(t), p_i(t)\}$ on which the derivatives of the original equation make up a closed inexact form, namely an interior differential.
What are the peculiarities of the characteristic relations? We will analyze this using the example of relations (5).

As pointed out above, the characteristic relations (5) were obtained from the condition that the determinant composed of the coefficients of $dx_i$, $dp_i$ in the set of equations (4) vanish.

This means that the characteristic relations are conditions of a degenerate transformation. (It turns out that only when some determinant vanishes, that is, only under the condition of a degenerate transformation, can the derivatives of the original equation make up a differential.) The peculiarities of the characteristic relations (and, as will be shown below, of Hamiltonian systems) are connected precisely with the properties of the degenerate transformation.
The degenerate transformation can be presented mathematically as a transition from one coordinate system to another, nonequivalent one. In the case under consideration this is a transition from the tangent space, in which the derivatives of the original equation are defined, to the cotangent space, in which the derivatives of the original equation make up a differential.

In the case under consideration the tangent space is not a differentiable manifold. (If the tangent space were differentiable, the differential of the form $\theta = p_i\,dx_i$ would be equal to zero, that is, this form would be closed.) The frame of reference connected with such a manifold cannot be an inertial system. In the case of degeneration, a transition takes place from the tangent space to a manifold made up of pseudostructures (integral curves). The frame of reference connected with such a manifold is locally inertial. That is, in this case the degenerate transformation is a transition from a frame of reference which is not inertial (and cannot even be locally inertial) to a locally inertial frame of reference.

(It should be pointed out that, if the tangent manifold were differentiable, the transition from the tangent space to the cotangent one would not be a degenerate transformation; it would be a transition from one inertial frame of reference to another.) It turns out that the transition from the derivatives of the original equation, which are defined in the tangent space and do not make up a differential, to derivatives that are defined in the cotangent space and do make up a differential, is possible only as a degenerate transformation. Since in this case the cotangent manifold and the tangent manifold are not in one-to-one correspondence, only sections of the cotangent bundle, namely pseudostructures, can serve as the integral curves.
[Examples of pseudostructures and the surfaces generated by them are characteristics, cohomologies, eikonal surfaces, surfaces made up of shock-wave fronts, potential surfaces, pseudo-Euclidean and pseudo-Riemannian spaces, and so on.] Since the differential form composed of the derivatives of the original equation can be closed only on a pseudostructure, this form is an inexact exterior differential form, that is, only an interior (on the pseudostructure, or on the integral curve) differential. This means that the corresponding solutions to the original equation, which are functions, are defined discretely, namely only on the pseudostructures.

As already pointed out, the solutions that are defined on pseudostructures and are functions are the so-called generalized solutions. The derivatives of a generalized solution make up an exterior form that is closed on the pseudostructure. (In the description of physical processes in material systems such solutions are state functions, because they have a differential.) Since the functions that are the generalized solutions are defined only on the pseudostructures (on the integral curves), their derivatives have discontinuities in the directions normal to the pseudostructures.
To understand what such discontinuities are connected with and how large they are, one has to focus attention on the following fact. The derivatives of the original equation simultaneously make up two skew-symmetric differential forms: one is an unclosed differential form composed of the derivatives of the original equation and defined on the tangent manifold, and the second is a closed inexact exterior form defined on sections of the cotangent bundle (on pseudostructures). The form closed on the pseudostructure is an interior differential, and this enables one to obtain the (generalized) solution to the original equation on the pseudostructure. The discontinuities that the derivatives of this solution have in the direction normal to the pseudostructure are specified by the commutator of the first, unclosed differential form. (The nonclosure of the first differential form is connected with the fact that the tangent manifold corresponding to the original equation is not differentiable. When the tangent manifold is differentiable, both differential forms are closed and the derivatives of the solutions to the original equation have no discontinuities. It should be noted that, for differential equations describing physical processes, the tangent manifold cannot be differentiable; for this reason the solutions of all differential equations describing physical processes have the functional properties described above [4].)

[The functional properties of the set of differential equations considered above follow from the kinematic and dynamical conditions of consistency [7]. The appendix to the work [8] presents the results of calculating the values of the discontinuities of the derivatives for the entropy and the sound speed in a gas-dynamics problem.] Thus, it turns out that in the general case the derivatives of partial differential equations compose a differential only under a degenerate transformation. The characteristic relations are the conditions of such a degenerate transformation. The derivatives of a differential equation obeying the characteristic relations make up an interior differential (on the pseudostructure defined by the characteristic relations), and the relevant solutions to the original equation are functions on the pseudostructure. In this case the derivatives normal to the pseudostructure undergo a discontinuity. These specific features of the characteristic relations and of the solutions corresponding to them enable one to see some peculiarities of Hamiltonian systems and their relation to the equations of mathematical physics.
Analysis of Hamiltonian systems
Hamiltonian systems arise in problems of functional extremum, which have wide application in quantum field theory and in problems of classical mechanics resting on such dynamic principles as the principle of least action, the principle of virtual motions, and so on.
Hamiltonian system (9) appears under the Legendre transformation $H(t, q_j, p_j) = p_j\dot{q}_j - L$, $p_j = \partial L/\partial\dot{q}_j$, which converts the Lagrange function $L(t, q_j, \dot{q}_j)$ defined on the tangent manifold $\{q_j, \dot{q}_j\}$ into the Hamilton function $H(t, q_j, p_j)$ defined on the cotangent manifold $\{q_j, p_j\}$.
The Hamiltonian system is connected with the Lagrange equation, which specifies a curve that is an extremal of the functional. The connection of Hamiltonian systems with the Lagrange equation can be traced by comparing the differential of the Hamilton function $H(p, q, t)$ with the differential of the function $p\dot{q} - L$. (Such a comparison is presented in the work [1]; here we focus on some points of it.) The total differential of the Hamilton function $H(p, q, t)$ is written in the form

$$ dH = \frac{\partial H}{\partial p}\,dp + \frac{\partial H}{\partial q}\,dq + \frac{\partial H}{\partial t}\,dt, $$

while the total differential of the Hamilton function expressed in terms of the Lagrange function, $H = p\dot{q} - L$ with $p = \partial L/\partial\dot{q}$, has the form

$$ dH = \dot{q}\,dp - \frac{\partial L}{\partial q}\,dq - \frac{\partial L}{\partial t}\,dt. $$

These expressions are identical under the conditions

$$ \dot{q} = \frac{\partial H}{\partial p}, \qquad \frac{\partial L}{\partial q} = -\frac{\partial H}{\partial q}. \qquad (11) $$

From the Lagrange equation (10), $\frac{d}{dt}\frac{\partial L}{\partial\dot{q}} - \frac{\partial L}{\partial q} = 0$, it follows that $\partial L/\partial q = \dot{p}$. Replacing $\partial L/\partial q$ by $\dot{p}$ in the second relation of (11), we obtain

$$ \frac{dp}{dt} = -\frac{\partial H}{\partial q}, $$

which corresponds to the second relation of the Hamiltonian system. That is, the second relation of the Hamiltonian system is just the Lagrange equation. But from relations (11) one can see that, in changing from the Lagrange function to the Hamilton function, in addition to the relation corresponding to the Lagrange equation one more relation arises, namely the first relation of (11), which corresponds to the first relation of Hamiltonian system (9). The physical meaning of this difference between the Hamiltonian system and the Lagrange equation will be analyzed below.
Thus, the connection of the Lagrange equation with the Hamiltonian system is seen. The transition from the Lagrange equation to the Hamiltonian system is a transition from the tangent manifold to the cotangent one. [The tangent and cotangent manifolds of a Lagrangian system are the tangent and cotangent bundles of the configuration space.] When the tangent manifold is differentiable, the transition from the tangent manifold to the cotangent one is a one-to-one mapping, and the Hamiltonian system and the Lagrange equation are identical.

In deriving the Lagrange equation for a mechanical system it was assumed that the constraints are ideal holonomic ones. In this case the configuration space and the tangent manifold of the Lagrangian system are differentiable manifolds [1], and the transition from the tangent space to the cotangent one is a nondegenerate transformation.

In the case of nonholonomic constraints the tangent manifold of the Lagrangian system will not be a differentiable manifold. In this case the transition from the tangent manifold to the cotangent one, that is, the transition from the Lagrange function to the Hamilton function and, correspondingly, from the Lagrange equation to the Hamiltonian system, is possible only as a degenerate transformation. This means that only the transition to a subset of the cotangent manifold composed of pseudostructures (sections of the cotangent bundle) is possible. That is, the Hamiltonian system can be realized only discretely, namely on pseudostructures.
In essence, the Hamiltonian system turns out to be a characteristic relation for the Lagrange equation, and it has the same peculiarities as the characteristic relations.
In the general case (when the constraints are nonholonomic) the Lagrange equation is a nonintegrable equation. The solutions to the Lagrange equation define a curve which is an extremal of the functional. But for these curves to be integral curves, the conditions of integrability have to be satisfied. The existence of a closed exterior form serves as the integrability condition.

The extremum condition for the action functional S, from which the Lagrange equation was obtained, is one of the conditions entering the definition of a closed form. But for the differential form to be closed, it is necessary that the relevant dual form (determining the manifold or structure on which the skew-symmetric differential form is defined) be closed. The first relation of the Hamiltonian system is just such a condition. When the tangent manifold of the Lagrangian system is differentiable (which is possible only for holonomic constraints), this condition is satisfied automatically, and the Hamiltonian system is equivalent to the Lagrange equation. In the general case the tangent manifold of the Lagrangian system is not a differentiable manifold, and hence the Lagrange equation can become integrable only under additional conditions. In this case the first relation of the Hamiltonian system proves to be just such an additional condition of integrability of the Lagrange equation. (In the calculus of variations the condition of transversality corresponds to such an additional condition.) Thus, one can see that the Lagrange equation is equivalent to the Hamiltonian system only if the conditions of integrability are satisfied. In the case of nonholonomic constraints, when the tangent manifold of the Lagrangian system is not differentiable, such correspondence holds only under a degenerate transformation: the correspondence between the Hamiltonian system and the Lagrange equation is not identical.
What peculiarities appear in the Hamiltonian system when the tangent manifold is a nondifferentiable manifold and the transition from the Lagrange function to the Hamilton function turns out to be a degenerate transformation?

As already pointed out, under a degenerate transformation the transition from the tangent space is possible only to pseudostructures. This means that only the subset of the cotangent manifold composed of pseudostructures (sections of the cotangent bundle) can be formed as the phase space. That is, in this case only the cotangent bundle sections of the manifold of the Lagrangian system can serve as the phase space.

What properties does such a phase space have?

To answer this question, let us study the relation between the Hamiltonian system and the Hamilton-Jacobi equation.
The analogy between the Hamiltonian system and the characteristic relation for a first-order partial differential equation was shown above. The Hamilton-Jacobi equation is an equation of a similar type. However, in the Hamilton-Jacobi equation the fulfilment of additional conditions, namely the conditions of integrability, is assumed a priori. In this equation the function H is the Hamilton function, that is, a function defined on the cotangent rather than the tangent manifold. This fact points to a correspondence between the Hamilton-Jacobi equation for the state function and the Hamiltonian system. Since the Hamiltonian system is fulfilled only on pseudostructures, the solutions to the Hamilton-Jacobi equation, which define the state function, can only be generalized functions. That is, the state function is defined only on pseudostructures, and the derivatives of the state function have discontinuities in the direction normal to the pseudostructure, namely to the phase trajectory. It is with this fact that the peculiarities of the phase trajectories and the phase space of the Hamiltonian system are connected.
It is known that when the tangent manifold is differentiable, and hence the transition from the tangent space to the cotangent space is a one-to-one mapping, there exists in the extended phase space $\{t, q_j, p_j\}$ the Poincare invariant $ds = -H\,dt + p_j\,dq_j$ (the differential ds follows directly from the Hamilton-Jacobi equation). When the tangent manifold is not differentiable (and hence the transition from the tangent space to the cotangent space is degenerate), both the Hamiltonian system and the Hamilton-Jacobi equation are fulfilled only on pseudostructures, and the Poincare invariant likewise holds only on pseudostructures, namely on integral curves. In the directions normal to the integral curves the differential ds corresponding to the Poincare invariant is discontinuous.
Invariants on pseudostructures set up invariant structures, which are connected with conservation laws.
Closed exterior forms are invariants. A closed exterior form is a conservative quantity, because the differential of a closed form equals zero; this means that the closed form reflects conservation laws. A closed inexact exterior form (a form closed on a pseudostructure) describes a conservative object, namely a pseudostructure with a conservative quantity. Such an object is a physical structure and corresponds to a conservation law. Phase trajectories with invariants (with closed forms) make up invariant structures, which are physical structures corresponding to conservation laws. The discontinuities (jumps) of the invariants explain the discreteness of physical structures. (It can be noted that such invariant structures are an example of a differential-geometric G-structure.)
As already pointed out above, a Hamiltonian system is nothing more than the canonical relations.
It is known that the canonical relations execute nondegenerate transformations, namely transformations which conserve a differential. The connection of the Hamiltonian system with the characteristic and canonical relations discloses a duality of the Hamiltonian system. On the one hand, when the tangent manifold is not differentiable, the Hamiltonian system represents a characteristic relation, which is obtained as a condition of a degenerate transformation. On the other hand, the Hamiltonian system consists of canonical relations, which execute a nondegenerate transformation. The degenerate transformation is a transition from the tangent space $(q_j, \dot{q}_j)$ to the cotangent manifold $(q_j, p_j)$. The nondegenerate transformation is a transition in the cotangent space from one pseudostructure (phase trajectory) $(q_j, p_j)$ to another pseudostructure $(Q_j, P_j)$. [The formula of the canonical transformation can be written as $p_j\,dq_j = P_j\,dQ_j + dW$, where W is the generating function.]
Thus, it turns out that Hamiltonian systems, on the one hand (when the tangent manifold $\{q_j, \dot{q}_j\}$ is not differentiable), are characteristic relations, which execute degenerate transformations describing a transition from the tangent manifold, on which there is no invariant structure, to the cotangent space, on which there is an invariant structure. On the other hand, Hamiltonian systems are canonical relations, which execute nondegenerate transformations of invariant structures.
The transition from the tangent space to the cotangent one under a degenerate transformation, in which the closed exterior form is realized, describes the origination of an invariant structure. The nondegenerate transformation (by means of the canonical relations) is a transition from one invariant structure to another. (This demonstrates the connection between degenerate and nondegenerate transformations.) Nondegenerate transformations can be described by pseudogroups, in particular by Lie pseudogroups. But group theory alone is not sufficient for describing the behavior of Lagrangian systems in the case of real physical processes.

| 6,216 | 2005-03-14T00:00:00.000 | ["Mathematics", "Physics"] |
The Effect of Module-Assisted Direct Instruction on Problem-Solving Ability Based on Mathematical Resilience
Mathematical resilience is necessary for learning mathematics because mathematics is, by nature and reputation, a complex subject for most students. This study aimed to determine the effect of the module-assisted Direct Instruction model and mathematical resilience on the problem-solving abilities of prospective teacher students at a private university in Yogyakarta. This research is a quantitative descriptive study with a quasi-experimental, nonequivalent control group design. The sample comprised 40 students: 19 students in the experimental class, which received the module-assisted Direct Instruction learning model, and 21 students in the control class, which received the expository learning model. Data were collected with a problem-solving ability test and a mathematical resilience questionnaire and analyzed with quantitative descriptive techniques and the ANCOVA test. The results show that the module-assisted Direct Instruction learning model effectively supports problem-solving abilities, controlling for mathematical resilience, in discrete mathematics lectures.
INTRODUCTION
Discrete mathematics is one of the subjects that prospective teacher students must take. Discrete mathematics material is loaded with proof (Mujib, 2019); apart from training abstract thinking, it can also train higher-order thinking skills (HOTS) (Nopriana & Noto, 2017; Rahmawati et al., 2018).
One of the higher-order thinking skills that prospective teacher students must possess is problem-solving skill (Kuncoro et al., 2018; Sulistyowati et al., 2017). Beyond problem solving, the abilities expected to develop in prospective teacher students through discrete mathematics learning include understanding concepts and reasoning patterns, thinking creatively and flexibly, and making mathematical connections in solving everyday problems (Oktaviana, 2017).
However, achieving all these discrete mathematics learning objectives is not easy (Mujib, 2019). Students must have a persistent, diligent, and unyielding attitude to stay focused in facing the challenges and difficulties they encounter. This attitude is called mathematical resilience (Attami et al., 2020; Hafiz & Dahlan, 2017).
Mathematical resilience is not acquired instantly but needs to be trained and developed. Growing strong mathematical resilience starts with being ready to face risks, viewing challenges as opportunities to learn, and strengthening the belief that, through the learning process, students can develop further (Dilla et al., 2018; Zhanty, 2019).
Mathematical resilience is essential in learning mathematics because mathematics is, by nature and reputation, a challenging subject for most students, owing to a lack of learning and of practice in solving mathematical problems (Fathonah et al., 2018; Istiqomah, 2016; Rofi'ah et al., 2019). Initial interviews with students revealed that high competitiveness with peers creates anxiety, so they tend to avoid anything related to challenges and difficulties in learning mathematics that could interfere with achievement. One way to overcome this is to provide assistance and guidance to students so that they stay motivated to learn (Iswara & Sundayana, 2021).
One suitable learning model is the Direct Instruction model. Through Direct Instruction, lecturers help students learn step by step, starting from simple concepts and moving to more abstract and complex ones (Salam et al., 2019). Likewise, examples and practice questions are staged to assist students in constructing knowledge. The researchers combined all these steps with a module so that students could follow and understand each level of the discrete mathematics material better.
Based on this explanation, this study aimed to determine the effect of the module-assisted Direct Instruction model and mathematical resilience on the problem-solving abilities of prospective teacher students at a private university in Yogyakarta.
METHODS
This research is a quantitative descriptive study with a quasi-experimental, nonequivalent control group design (Sugiyono, 2014). The research subjects are even-semester teacher candidates taking discrete mathematics courses at a private university in Yogyakarta. The sample comprised 40 students: 19 in the experimental class, which received the module-assisted Direct Instruction learning model, and 21 in the control class, which received the expository learning model.
This study involved three variables: the independent variable, the dependent variable, and the covariate. The independent variable is the learning model: module-assisted Direct Instruction in the experimental class and expository learning in the control class. The dependent variable is problem-solving ability, while the covariate is mathematical resilience. The indicators of problem-solving ability used in this study are Polya's four steps for problem solving: (1) understanding the problem; (2) devising a plan (composing a strategy or settlement plan); (3) carrying out the plan (solving the problem according to the plan that has been made); and (4) looking back (checking and interpreting) (Polya, 2004).
This research was conducted to determine the effectiveness of the module-assisted Direct Instruction learning model and the expository learning model on problem-solving abilities, controlling for the mathematical resilience variable, in discrete mathematics lectures. The effectiveness indicators in this study are based on the results of the ANCOVA statistical testing.
Data were collected by administering the mathematical resilience questionnaire to the prospective teacher students before the treatment and the problem-solving ability test after the treatment. The problem-solving questions are six essay questions for the discrete mathematics course, validated by three material experts.
The data analysis technique uses quantitative descriptive analysis supported by the parametric-analysis prerequisite tests (Arikunto, 2012). The hypothesis test is an analysis of covariance (ANCOVA), used to determine the difference in problem-solving abilities between the class that received the module-assisted Direct Instruction learning model and the class that received the expository learning model. The results of the linearity test in Table 3 show a deviation-from-linearity significance value of 0.767 > 0.05, meaning that a linear relationship exists between the mathematical resilience data and the problem-solving ability data.
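For readers who want to reproduce this kind of analysis, the sketch below runs a one-way ANCOVA in Python's statsmodels on synthetic stand-in data; the column names and generated values are placeholders, only the group sizes (19 and 21) are taken from the study, and the study's actual software is not stated here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data with the study's group sizes (19 DI, 21 expository).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": ["DI"] * 19 + ["EXP"] * 21,
    "resilience": rng.normal(70, 8, 40),
})
df["score"] = (40 + 12 * (df["group"] == "DI") + 0.3 * df["resilience"]
               + rng.normal(0, 6, 40))

# One-way ANCOVA: score by group, controlling for mathematical resilience.
model = smf.ols("score ~ C(group) + resilience", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # C(group) row: adjusted group effect
```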
FINDINGS AND DISCUSSION
Mathematical resilience needs to be controlled and used as a covariate so that it does not confound the results on problem-solving abilities. After all the prerequisite tests were met, the ANCOVA test was carried out.
The results of the one-way ANCOVA hypothesis test are presented in Table 4.
The analysis shows a significance value of 0.007, less than 0.05. Thus H1 is accepted, which means that, after controlling for the mathematical resilience covariate, there are differences in the problem-solving abilities of students in the experimental and control classes. The difference in problem-solving abilities between the Direct Instruction and expository classes can also be seen from the means of the two classes in Table 5.
The average value of the Direct Instruction class is 72.62, and that of the expository class is 60.53; the Direct Instruction average is therefore higher. Whether the module-assisted Direct Instruction learning model is effective can be seen from the output in Table 6: the value Sig = 0.007 < 0.05 means that the module-assisted Direct Instruction learning model is effective for problem-solving abilities. The ANCOVA results in Tables 4 and 6 show that the module-assisted Direct Instruction learning model affects students' problem-solving abilities; the magnitude of this influence can be seen in Table 7. The Partial Eta Squared value for the problem-solving ability variable is 0.178, i.e., the influence of the module-assisted Direct Instruction learning model on students' problem-solving abilities is 17.8%. Furthermore, correlation analysis and simple regression reveal the relationship and the effect of resilience on problem-solving ability; the test results for the two variables can be seen in Table 8. From Table 8, the coefficient of determination (R2) is 0.301, which means that mathematical resilience has an effect of 30.1%, the rest being influenced by other variables.
DISCUSSION
The results of the ANCOVA test showed differences in problem-solving abilities between the experimental class, taught using module-assisted Direct Instruction, and the control class, given expository learning, after controlling for the mathematical resilience variable.
The effect of module-assisted Direct Instruction on the problem-solving ability of prospective teacher students is 17.8%. The average value of the experimental class is higher than that of the control class. This shows that module-assisted Direct Instruction can affect the problem-solving ability of prospective teacher students, although not strongly.
Resilience itself affects problem-solving ability by 30.1%; the results showed that the effect of resilience on students' problem-solving abilities was not very large. One cause of this modest influence of mathematical resilience on problem-solving abilities is that the lectures were still conducted online.
Although online learning has many advantages, its implementation still has many shortcomings; one obstacle encountered during the research was the lack of opportunities for direct discussion. Students in the lecture process are therefore required to develop mathematical resilience in addressing this problem. This is in line with previous research (Mujib, 2019). Students who have high resilience will be able to solve problems well (Juniasani et al., 2022) and carry out the proving process appropriately, in a structured and systematic way.
CONCLUSION
Module-assisted Direct Instruction has been shown to positively affect the problem-solving abilities of prospective teacher students, especially in discrete mathematics lectures. This can be seen from the average problem-solving score in the experimental class, which is higher than that in the control class. The results also show that the module-assisted Direct Instruction learning model effectively supports problem-solving abilities, controlling for the mathematical resilience variable, in discrete mathematics lectures.

| 2,269.4 | 2023-01-18T00:00:00.000 | ["Mathematics", "Education"] |
Refined Spherulites of PP Induced by Supercritical N2 and Graphite Nanosheet and Foaming Performance
The isothermal crystallization behavior of polypropylene/graphite nanosheet (PP/GN) nanocomposites under supercritical N2 was systematically studied with a self-made in situ high-pressure microscope system. The results showed that the GN caused irregular lamellar crystals to form within the spherulites due to its effect on heterogeneous nucleation. The grain growth rate was found to first decrease and then increase with increasing N2 pressure. Using the secondary nucleation model, the secondary nucleation rate of the spherulites in PP/GN nanocomposites was investigated from an energy perspective; the increase in free energy introduced by the desorbed N2 is the essential reason for the increase in the secondary nucleation rate. The results from the secondary nucleation model were consistent with those acquired through the isothermal crystallization experiments, suggesting that the model can accurately predict the grain growth rate of PP/GN nanocomposites under supercritical N2 conditions. Furthermore, these nanocomposites demonstrated good foaming behavior under supercritical N2.
Introduction
Many functional lightweight microcellular foams have been prepared based on developed microcellular foaming technology [1,2], which are widely used in industrial applications requiring properties such as sound insulation [3], thermal insulation [4,5], and electromagnetic shielding [6,7]. The crystallization kinetics of composites directly affect cellular structure, which in turn affects the physical properties of the microcellular foam.
Primary nucleation and secondary nucleation together constitute the entire crystallization behavior of the polymer. Primary nucleation is a "from nothing to something" process, i.e., the formation of ordered regions in the disordered phase, which can be characterized by grain density. Secondary nucleation describes the process of continued growth in the nucleus [8] and is usually quantified by the grain growth rate [9,10]. Currently, studies on the crystallization behavior mainly focus on the total crystallization kinetics [11][12][13], without separating the primary and secondary nucleation [14,15]. Numerous scholars have explored the total crystallization kinetics based on the Avrami equation [16], which determines the crystallization behavior through the crystallization rate constant as well as the Avrami index [17][18][19]. However, the crystallinity increases due to further refinement of crystals; in practice, the calculated Avrami index is not an integer [20]. Therefore, the quantitative interpretation of the nucleation and crystal growth patterns by Avrami's index is not reliable. A secondary nucleation model of the crystal growth frontier was later developed by Lauritzen and Hoffmann and was mainly used to determine the regime transition in crystallization behavior.
The presence of supercritical gas increases the free volume, thereby enhancing the mobility of molecular chains and exerting a strong plasticizing effect on the polymer [21,22]. The improvement of polymer crystallization behavior by high-pressure gas is related to its solubility in the polymer [23,24]. To obtain a clearer view of the crystal growth of polymer composites under supercritical fluids, research cannot be limited to the total crystallization kinetics [25,26]. It is therefore essential to employ in situ high-pressure visualization devices to characterize the crystallization behavior online. However, the crystallization kinetic models above are not applicable to describing crystallization behavior under supercritical fluid. The current rapid development of supercritical fluid foaming technology makes it important to develop a secondary nucleation model applicable to crystallization behavior under high-pressure gas.
In this paper, supercritical N2 is used as the high-pressure medium, with which excellent microcellular foams with dense and fine cell structures can be prepared. The inexpensive and widely used thermoplastic polypropylene and common graphite nanoflakes were adopted as the research objects. The effects of high-pressure N2 on the crystallization behavior and grain morphology of PP/GN nanocomposites were systematically researched using a homemade in-situ high-pressure microscope (HPM-2) system. A secondary nucleation model for the three-phase system was successfully developed, in which the free energy induced by GN was considered, and the inherent mechanism of N2 pressure on the crystal growth rate of PP/GN nanocomposites was investigated. Finally, the feasibility of PP/GN nanocomposite foaming was verified using the mold opening foam injection molding technique (MOFIM).
Materials and Preparation
Homopolymerized polypropylene (J-150) was supplied by Lotte Chemical Co.; it has a density of 0.90 g/cm³ and a melt index of 10 g/10 min. The number-average molecular weight (Mn) is approximately 250,000, and its crystallization and melting temperatures are 117.7 °C and 168.7 °C, respectively. The graphite nanoflake (XF011) was prepared by XFNANO Tech. Co., Ltd. (Nanjing, China); its diameter is between 3 and 6 microns, and its thickness is about 40 nm. N2, with a purity of 99.99%, was used as the supercritical fluid. Prior to the formal experiments, PP was kept at 80 °C for 4 h to remove moisture. A twin-screw extruder (SJZS-10B, Wuhan, China) was then employed to prepare the PP/GN nanocomposites: homogeneous PP/GN nanocomposites with a GN content of 0.1% were fabricated under the shearing action of the twin screws, as displayed in Figure 1a. A hot-press device was used to prepare the films for visualization and observation. A small amount of PP/GN was placed between two clean glass sheets to form a sandwich structure, as shown in Figure 1b, which was then hot-pressed at 190 °C and 2000 psi for 5 min. The thickness of the thermoformed film is approximately 10 µm.
Online Characterization of Crystallization Behavior
The crystal morphology and crystal growth behavior of PP/GN nanocomposites under different N 2 pressures were observed by a self-developed in situ high-pressure visualization system, the schematic diagram of which is shown in Figure 1c. Figure 1d displays an N 2 and temperature treatment diagram during isothermal crystallization. Samples were first held at 190 • C for 5 min to eliminate thermal history. The samples were then cooled to different heat treatment temperatures (T 2 ) using alcohol as the cooling medium with a cooling rate of approximately 10 • C/min, at which the nucleation and growth of the crystals were observed. N 2 was introduced prior to specimen heating and subsequently drained to remove N 2 after sufficient crystallization.
MOFIM Fabrication Process
The mold opening foam injection molding technology was adopted to fabricate lightweight PP/GN nanocomposite foams. The melt temperatures from the loading port to the injection end were 200 °C, 210 °C, 220 °C, 220 °C, 210 °C, and 200 °C, respectively, and the mold temperature was 90 °C. The injection rate and shot size were 100 mm/s and 60 mm³, respectively. The packing pressure was held at 40 MPa for 26 s to ensure that the cells formed during the filling process were recompacted and dissolved into the polymer melt. The samples were removed after a cooling time of 40 s.
Characterizations
The Nanomeasure software was used to measure the size of the same spherulites at different moments; the slope of its variation with time is the spherulite growth rate. Grain density was defined as the number of grains in the observation area divided by that area.
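An illustrative sketch of this measurement is given below: the growth rate is obtained as the slope of a linear fit of spherulite radius against time, and the grain density as a count per area. The measurement values are made up for demonstration.

```python
# Illustrative sketch: spherulite growth rate as the slope of radius vs. time.
import numpy as np

time_s    = np.array([0, 30, 60, 90, 120])         # s
radius_um = np.array([2.1, 5.0, 7.8, 10.9, 13.7])  # µm, same spherulite over time

growth_rate, intercept = np.polyfit(time_s, radius_um, 1)  # linear fit
print(f"growth rate G = {growth_rate:.3f} µm/s")

# Grain density: number of grains per observation area
n_grains, area_um2 = 46, 250.0 * 250.0
print(f"grain density = {n_grains / area_um2:.2e} grains/µm²")
```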
After the isothermal crystallization experiments of the PP/GN nanocomposites on the visualization device were completed, the fully crystallized sample was transferred to the center of the carrier stage. The spherulite morphology of the PP/GN nanocomposites was then observed by polarized light microscopy (POM, BX53, Olympus, Tokyo, Japan).
The three-dimensional grain morphology of PP/GN nanocomposites was recorded using a confocal laser microscope (CLSM, LSM 800, Carl Zeiss, Oberkochen, Germany). The spherulite morphology was first observed under the microscope, then switched to scanning mode with the laser turned on to get a clear confocal scan image.
X-ray diffraction (XRD) was employed to determine the crystal structure of PP and PP/GN nanocomposites. The voltage and current used for the tests were 40 kV and 100 mA, respectively. Fourier transform infrared spectroscopy (FTIR) was conducted to investigate the functional groups of PP and PP/GN nanocomposites in the range of 500 cm −1 to 4000 cm −1 .
The cell structure of PP/GN nanocomposite foams was studied with field emission scanning electron microscopy (FE-SEM). The samples were first immersed in liquid nitrogen for 30 min, followed by rapid fracture to keep the section intact. The surface of the sample was sprayed with platinum, and the morphology was observed under SEM.
Secondary Nucleation Model
The growth process of spherulites includes the formation of nuclei and the growth of grains. The density of spherulites represents the primary nucleation rate, while the growth rate of spherulites reflects the secondary nucleation rate. To quantify the secondary nucleation rate of PP/GN nanocomposites, a secondary nucleation model is usually used, in which nkT/h is the prefactor, T is the isothermal treatment temperature, k and h are the Boltzmann and Planck constants, respectively, n stands for the number of kinetic units capable of nucleation, ∆E and ∆G* represent the diffusion activation energy and the critical nucleation free energy, respectively, and ∆G_N is the additional free energy caused by N2. The detailed calculation of ∆G_N is discussed in our previous articles [27,28]: ∆G_m represents the mixing free energy of the polymer/N2 system, ∆G_t is the translational free energy of N2, and ∆G_as and ∆G_aw stand for the stronger hydrogen-like interaction energy and the relatively weak interaction energy (such as the dispersion force) required for the desorption of N2 from the homogeneous polymer system, respectively. The introduction of GN changes the crystallization behavior as well as the crystal morphology of PP. Generally speaking, the addition of GN lowers the nucleation barrier of PP crystals, and the entanglement between PP and GN also affects the nucleation of PP molecular chains. Therefore, the free-energy variation attributed to GN, ∆G_f, cannot be neglected; in its expression, φ3 is the filler volume fraction, D_t and L_t stand for the diameter and length of the snake-tube model, respectively, γ represents the interfacial energy between the matrix and the filler, and f(θ) is a coefficient accounting for the interfacial wetting angle. Finally, after the free-energy changes caused by N2 and GN are considered, the nucleation rate of the PP/GN/N2 system can be determined [29].
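The model equations themselves were lost in extraction; a hedged reconstruction from the variable definitions above and the classical Lauritzen–Hoffman form is given below. The exact grouping of terms in the original paper may differ.

```latex
% Hedged reconstruction from the variable definitions above; the exact
% grouping of terms in the original paper may differ.
\[
  I \;=\; \frac{nkT}{h}\,
  \exp\!\Big(-\frac{\Delta E}{kT}\Big)\,
  \exp\!\Big(-\frac{\Delta G^{*} + \Delta G_{N} + \Delta G_{f}}{kT}\Big),
\qquad
  \Delta G_{N} \;=\; \Delta G_{m} + \Delta G_{t} + \Delta G_{as} + \Delta G_{aw}.
\]
```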
Structure and Morphology
In order to explore the effect of GN on the crystal structure of PP, the XRD spectra of PP, GN, and PP/GN nanocomposites are given in Figure 2a. A strong diffraction peak of GN was found only at 26.52° over the entire test range for the (002) crystal plane, which represents the characteristic π-π stacking [30-32]. Pure PP exhibited three strong diffraction peaks at 13.8°, 16.6°, and 18.26°, which correspond to the (110), (040), and (130) crystallographic planes of PP α grains, respectively. Three weaker diffraction peaks were also found at 20.86°, 21.54°, and 25.14°, belonging to the (111), (−131), and (060) crystal planes, respectively. After the incorporation of GN, in addition to the conventional six diffraction peaks, the PP/GN nanocomposites exhibited a diffraction peak corresponding to GN at 26.38°. However, no new crystal structures were induced at the current GN content, which also implies that GN is uniformly dispersed in the PP/GN matrix.

The FTIR spectra of PP, GN, and PP/GN nanocomposites are shown in Figure 2b. Four sharp and strong peaks are found in the 2951 cm−1 to 2838 cm−1 region, which are characteristic peaks of PP. Among them, the asymmetric stretching vibration peaks of −CH3 and −CH2 were detected at 2951 cm−1 and 2917 cm−1, respectively, and the symmetric stretching vibration peaks of −CH3 and −CH2 were observed at 2871 cm−1 and 2838 cm−1, respectively. The peaks at 1456 cm−1 and 1375 cm−1 are due to the bending vibration of −CH2 and the symmetric deformation vibration of −CH3. No significant peaks were found in the FTIR spectrum of GN [33], which further indicates that GN is pure graphite and does not carry any oxygen-containing functional groups [34]. As expected, the incorporation of GN does not affect the chemical structure of PP. Figure 2c gives the cross-sectional morphology of the PP/GN nanocomposites. Apparently, GN is uniformly dispersed in the PP matrix; this good dispersion provides the basis for the subsequent analysis of the crystallization behavior.

Figure 3 shows the crystallization behavior of PP and PP/GN nanocomposites under air. Larger and sparser spherulites are observed in Figure 3a. After the introduction of GN, the spherulite size is significantly refined and the spherulite density is greatly increased, as shown in Figure 3b. According to the POM graphs in Figure 3c,d, the PP spherulites show a clear cross-extinction phenomenon and no β-type grains were found [35], which is consistent with the XRD results in the last section. In addition, the refining effect of GN on PP grains is shown more clearly in the POM diagram. As shown in Figure 4a, the distribution of GN within the PP matrix is relatively uniform. Moreover, it can be seen from Figure 4b that many spherulites are nucleated around GN, which fully demonstrates the heterogeneous nucleation effect of GN.
Due to the facilitative nucleation effect of GN and the reduction of the system's nucleation energy barrier, GN particles act as nucleation sites and induce a large number of ordered structures. Consequently, the PP molecular chains on the surface of GN preferentially begin the orderly chain arrangement and form spherical crystals first. With the passage of time, spherulites centered on GN were eventually formed, as shown in Figure 4c,d. An N2 exclusion phenomenon is also observed where the red arrow in Figure 4d points. This is due to the different solubility of N2 in the crystalline and amorphous regions: N2 can only dissolve in the amorphous region, not in the crystalline phase. Above the melting points of PP and PP/GN, the polymer, GN, and dissolved N2 form a homogeneous three-phase system. After dropping to the isothermal treatment temperature, more and more amorphous regions transform into crystalline ordered regions as crystallization proceeds, and the N2 in the shrinking amorphous region is continuously expelled, which appears macroscopically as the region indicated by the red arrow.

Figure 5a-j plots POM images of PP/GN nanocomposites crystallized isothermally at 130 °C and 140 °C under elevated N2 pressure. It can be noticed that the cross-extinction (Maltese-cross) phenomenon of the grains is not obvious, and the spherulites also appear coarser under all experimental conditions. This may be due to the disordered arrangement of the stacked sheet crystals of the spherulites, which diminishes their optical anisotropy. It is the heterogeneous nucleation effect of GN that enables the disorderly arrangement of such irregularly shaped, laminated stacked crystals; as shown in Figure 4, the spherulites grow gradually from the GN surface. Moreover, the disorderly stacking of lamellae is further accelerated by the exclusion of N2 during grain growth with pressure, which makes this phenomenon more obvious. To quantitatively analyze the effects of T2 and N2 pressure on the crystallization behavior of PP/GN nanocomposites, the average grain size of the spherulites was calculated from the POM images, and the grain density as well as the grain growth rate were obtained from the visualization results.
Figure 5k,l provides the dependence of grain size, grain density, and grain growth rate of PP/GN nanocomposites on pressure. PP/GN nanocomposites exhibit the same crystallization behavior at the two T2 values. As the N2 pressure increases, the grain size gradually decreases, while the grain density shows the opposite trend. This indicates that although GN has already refined the PP grains, supercritical N2 still further refines the crystallization behavior. This is due to the plasticizing effect of supercritical N2 on PP/GN nanocomposites, which enhances the mobility of crystallizable molecules. In addition, the free volume of the material is increased by the dissolved N2, which induces the rearrangement of molecular chains into a crystal structure with lower free energy and thus easier nucleation. Seeger et al. [36] found that the Tm of PP under supercritical N2 increases slightly with increasing N2 pressure. The enhancement in Tm implies the formation of more perfect, thicker lamellar crystals and more stable grains [37,38]. These more perfect crystals with thicker lamellae usually take longer to melt. At the same melting time, the crystalline residues increase with increasing N2 pressure due to the melt memory effect. These crystalline residues then act as athermal nucleation sites [39,40], which further increase the nucleation density.

Compared with the variation of grain size and grain density with pressure, the grain growth rate demonstrates two different trends within the pressure range. When the pressure was lower than 13.79 MPa, increasing the N2 pressure inhibited the grain growth rate, while a further increase in pressure promoted it. This interesting phenomenon may be due to the nucleation-limiting effect and the entropy-increasing effect of N2 at high pressure. However, the phenomenon is not pronounced, and the grain growth rate remains lower at higher temperatures, such as 140 °C. This may be due to the weaker self-folding driving force of the molecular chains caused by the lower supercooling, which results in a lower growth rate. To further explain this phenomenon from an energy point of view, a secondary nucleation model for the three-phase system was developed.

Crystalline Morphology of PP/GN
In order to observe the crystal morphology of PP/GN nanocomposites more clearly, the CLSM scanning images are given in Figure 6. From the 3D image, it can be seen that the spherulites exhibit a radial shape. It is also observed that the Z-axis thickness of the spherulites crystallized isothermally at 140 °C is greater than that of those crystallized at 130 °C, indicating that higher lamellar thicknesses are formed at higher isothermal treatment temperatures. In addition, based on L-H theory, samples treated at higher isothermal treatment temperatures also have higher melting points. As crystallization proceeds, the PP molecular chains are progressively consumed and N2 is continuously discharged. When all molecular chains are depleted, concave molecular dissipation regions as well as the expulsion of N2 are found.
Secondary Nucleation Rate of PP/GN Nanocomposite
Based on the established secondary nucleation model, the mechanism for the effect of high-pressure N2 on the grain growth behavior was explored in detail, with the addition of 0.1% GN as an example. The experimental results and model predictions regarding the grain growth rate are shown in Figure 7. The calculated results match the trends of the experimental results well, indicating that the established PP/GN/N2 secondary nucleation model can predict the crystallization behavior of PP/GN nanocomposites under supercritical N2. Compared with the previously studied crystallization behavior of pure PP under supercritical N2, there was no significant difference in the growth rate trend, except that growth became slower: N2 still shows a nucleation-limiting effect at relatively low pressure and a nucleation-promoting effect at higher pressure. As shown in Figure 7a, an interesting phenomenon is that the secondary nucleation rate of PP/GN nanocomposites at 5 MPa is significantly greater than that at 22.5 MPa at 130 °C, whereas the difference between 5 MPa and 22.5 MPa was not considerable at 140 °C. This implies that the re-promotion of the secondary nucleation rate by N2 allows the material to grow at the same rate at higher pressures as at lower pressures at the higher crystallization temperature (Tc). This stronger re-promotion at high temperatures may be attributed to the relatively high solubility of N2 under high temperatures and pressures. In order to elucidate the re-promoting effect of supercritical N2 on the grain growth rate, Figure 8 shows the variation of each free energy with pressure, and the percentage of each free energy at a given pressure is given in Table 1. After the addition of GN, it is still ∆E and ∆G_N that have the largest effect on the secondary nucleation rate. ∆G_f, the free energy caused by GN, accounts for only 0.03%, which is due to the relatively small amount of GN added. It is important to note that the positive value of ∆G_f is a resistance to secondary nucleation, which limits the grain growth rate; this is mainly due to the interfacial energy existing between GN and PP.
For the PP molecular chain to detach itself from the PP/GN/N2 three-phase system and nucleate, it must overcome not only the influence of N2 but also the interfacial energy between PP and GN. ∆G_N shows a tendency to increase and then decrease with increasing pressure, and the proportion of each free-energy contribution to ∆G_N is shown in Table 2. It can be found that ∆G_t rises at each temperature; for example, ∆G_t accounts for only 19.52% at 6 MPa, while 21.8% is recorded as the pressure increases to 23 MPa at a Tc of 130 °C. It is this increase in ∆G_t that weakens the nucleation-limiting effect of N2 and consequently produces a re-promotion of the secondary nucleation rate. Therefore, the increase in ∆G_t induced by N2 desorbing from the homogeneous system is the underlying reason for the re-promotion effect exhibited by N2 at higher pressures, i.e., the entropy increase induced by N2 promotes crystallization.
Foaming Performance of PP/GN Nanocomposite
The PP/GN nanocomposite microporous plastic parts were prepared by MOFIM technology, and the foaming feasibility of this material was investigated. Figure 9 demonstrates the cell morphology of the PP/GN nanocomposite microporous parts. It can be noticed that the introduced supercritical N2 produces the cell structure of the PP/GN nanocomposites and, in addition, the addition of GN refines the cell structure of the PP material. At an incorporation of 0.05% GN, the cell size is still large and the cell density is low. However, the cell size is reduced and the cell structure is denser at the same opening distance after adding 0.1% GN. Furthermore, 0.1% GN widens the opening distance of the PP/GN nanocomposites. This implies that compliant products can be prepared with less raw material, which saves resources.
Conclusions
The isothermal crystallization behaviors of PP/GN nanocomposites at different treatment temperatures and N2 pressures were studied with a self-made in-situ high-pressure microscopy system. Within the pressure range examined, the PP/GN nanocomposites exhibit a decrease in spherulite size and an increase in spherulite density with increasing N2 pressure, while the grain growth rate displays a trend of inhibition followed by promotion. Based on the secondary nucleation model and the proportions of the respective free-energy contributions at different N2 pressures, the increased ∆G_t in the homogeneous system under higher-pressure N2 is found to be the essential reason for the increased secondary nucleation rate, meaning that the entropy-increasing effect caused by N2 promotes crystallization. Moreover, PP/GN nanocomposites exhibit good foaming ability under supercritical N2.
"Materials Science"
] |
APPLICATION OF VIRTUALISATION ENVIRONMENT FOR DATA SECURITY IN OPERATIONAL DATA PROCESSING SYSTEMS
This paper presents a concept, developed and tested by the authors, of a virtualisation environment enabling the protection of aggregated data through the use of high availability (HA) of IT systems. The presented solution allows securing the central database system and virtualised server machines by using a scalable environment consisting of physical servers and disk arrays. The authors of this paper focus on ensuring the continuity of system operation and on minimising the risk of failures related to the availability of the operational data analysis system.
INTRODUCTION
The use of virtualisation technology is having a significant impact on the entire IT industry. Its use is particularly evident in data centres to reduce costs and increase efficiency. [7] Virtualisation technology is developing rapidly and more and more developers are aiming to create cloud applications. All these advances can be used in the future to simplify data handling techniques and enhance IT security. [7] Virtualisation is a technology used to share the capabilities of physical computers by sharing resources between operating systems. Currently, there are several virtualization techniques that can be used to support the creation of entire operating systems in a scalable environment. We classify virtualization techniques from an operating system point of view: operating system level virtualization and paravirtualization. [1,5,6] The figure above shows a diagram of the architecture of a basic virtualisation server, which works on the basis of a master server or hypervisor. The solution that is shown demonstrates the principle of sharing the resources of the master server for virtualised machines.
Virtualisation technology provides an alternative technical approach to delivering infrastructure, platforms, operating systems, servers, software and applications. Most virtualised computing environments have much in common with conventional data centres, using high-performance hardware and specialised software that allows a single physical server to function as multiple instances running in parallel. Virtual environments allow organisations to utilise IT resources more efficiently by scaling up or down depending on business needs. Auditing virtual environments uses many of the same procedures and criteria as data centre audits, with additional emphasis on provisioning, deprovisioning, managing and maintaining multiple virtual servers that share compute, network and infrastructure resources. [6][7][8]
PROCESSING SYSTEM
The IT system, the protection of which is analysed in this study, is presented in Fig. 2. The presented architecture takes into account the use of a central database system, access to which is provided from devices located in places with access to the Internet by means of appropriately configured VPN network protocols. Thanks to the indicated solution, the system will be secured both in terms of hardware and software. The authors of the article, while carrying out research related to the subject of exploitation data analysis, have observed that the constantly developing railway market sector requires undertaking work related to computerisation of maintenance processes. Therefore, the proposed database system architecture is based on a professional virtualised IT environment from VMware. These systems, when properly configured, can allow the availability of services in a high reliability mode. The use of multiple master servers called hosts and an extensive disk array can ensure the availability of the offered services and access to the database system with minimum failure times.
High availability (HA) systems [2] continuously monitor all servers in the resource pool and detect server failures. An agent located on each server maintains constant communication with the other servers in the resource pool, and loss of communication initiates the process of restarting all affected virtual machines on other servers. This type of solution can help ensure the security of the exploitation data processing systems analysed in this paper. [3] The HA solution ensures that sufficient resources are available in the resource pool at all times to restart VMs on different physical servers in the event of a server failure. VM restart is made possible through the use of a clustered Virtual Machine File System (VMFS), which gives multiple instances simultaneous read and write access to the same virtual machine files. High availability systems can be easily configured with the appropriate virtual environment management software. [3] The decision to consider a solution based on virtualisation systems is related to the need for scalability: dynamic load distribution in such systems allows smooth scaling of the virtual environment as the demand for resources grows for the application responsible for storing and processing operational data.
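As a rough illustration of the capacity reasoning behind HA admission control, the following sketch checks whether the VMs of any single failed host could be restarted on the spare capacity of the remaining hosts. The numbers are hypothetical, and VMware's actual admission-control policies are more elaborate (per-host placement, slot sizes, reservations).

```python
# Minimal sketch of HA failover-capacity reasoning: after losing any single
# host, can the remaining hosts restart all of its virtual machines?
# Numbers are hypothetical; VMware's admission control is more elaborate.

hosts = {  # host -> (capacity in GB RAM, list of VM RAM demands in GB)
    "esx01": (256, [32, 48, 16]),
    "esx02": (256, [64, 32]),
    "esx03": (256, [16, 16, 32]),
}

def can_tolerate_single_failure(hosts):
    for failed, (_, failed_vms) in hosts.items():
        spare = sum(cap - sum(vms) for h, (cap, vms) in hosts.items() if h != failed)
        if sum(failed_vms) > spare:
            return False  # not enough spare capacity to restart these VMs
    return True

print(can_tolerate_single_failure(hosts))  # True -> HA restart is possible
```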
The implementation of the solution proposed by the authors in a virtualisation environment may ensure a quick response during a possible system failure. The use of a virtual environment makes it possible to create snapshots of the entire virtualised machine. Thanks to this solution, while making frequent security copies on external carriers and NAS disks, it is possible to restore a fully functional system without having to install the entire environment from scratch.
The suggested solution presented in this article consists of three DELL PowerEdge M620 physical servers running VMware ESXi Server 6.7 installed in a blade enclosure chassis M1000e. The servers are located in a single data centre and form a private computing cloud based on a VMware cluster called "Cluster". The cluster provides the functionality of VMware HA increased availability and constitutes a platform for the virtual machine environment.
The servers were named esx01.pr.radom.pl, esx02.pr.radom.pl and esx03.pr.radom.pl respectively and use shared LAN and SAN resources provided on a Hitachi HUS 100 array.
For central management of the entire virtual infrastructure, VMware vCenter Server was used, installed as a virtual machine (appliance). vCenter Server uses the SUSE 11 operating system, a preinstalled vPostgreSQL database and the Single Sign-On (SSO) management service. Figure 3 shows the logical architecture of the virtual environment developed by the authors.
II. TRANSMISSION PERFORMANCE
In order to provide redundancy of the high performance LAN connection for the entire virtualisation environment, the authors applied the use of two physical interfaces with a capacity of 10 Gbps each. The high performance of the connection is necessary to maximise the efficiency of the operating data processing environment.
The cards indicated in Figure 4 in the proposed solution should be combined into the following functional pairs:
• vmnic0 (active) and vmnic1 (standby) for production traffic, MGMT and vMotion
• vmnic1 (active) and vmnic0 (standby) for iSCSI SAN support
Fig. 4. Network interfaces required in a virtualisation environment (own elaboration)
Master servers, which perform the function of virtualisation hosts, should be equipped with two SAS disks connected to the local PERC H310 Mini RAID controller in a RAID1 (mirror) arrangement. In such a configuration it is possible to create a virtual disk volume of 278 GB. These resources may be used for the virtualisation platform and for storing low-criticality virtual machines. The RAID controller configuration is shown in Figure 5. The basis of the production disk subsystem must be a disk array connected redundantly to each ESXi server via a single data network. The network in the proposed environment consists of two switches with a capacity of 10 Gbps each. The storage should be directly connected via two 10GbE Fibre Channel cables to the switches supporting the indicated throughput of 10 Gbps.
It is necessary to create logical volumes on the disk array, which can then be configured by the ESXi servers.
Each server can include a software-implemented iSCSI initiator. The iSCSI operations are in this case performed by the processor (and not by a separate PCI-X/PCI-Express card). Thanks to the high performance of currently available processors, this solution does not noticeably reduce server performance. Normal network cards are used to transmit SCSI commands. This way of connecting the disk subsystem provides access to disk resources over one data path to each LUN.
III. DISK RESOURCES -USE OF ARRAYS FOR THE SECURITY OF PROCESSED DATA
A server in the storage network is referred to as an iSCSI target. One iSCSI target can provide one or more Logical Units (LUs). Logical Units are often abbreviated as LUNs (although this abbreviation stands for Logical Unit Number).
Within the array designated as 001, configured in RAID5 mode (Fig. 6) with 7 disks for data and 1 disk for parity, 3 LUNs were created for the virtual environment with the characteristics illustrated in Table 5.
IV. SUMMARY
The study shows that the use of virtualisation technology operating in high availability (HA) mode increases data security in the exploitation data analysis system under consideration, both when the server environment needs to be expanded and in the event of a failure of the master server responsible for virtualisation. Using a high-reliability mode for safety-related systems is a priority, because appropriate analysis of operational data can directly contribute to increased safety during all transport processes.
"Computer Science"
] |
On (p, q)-Analogues of Laplace-Typed Integral Transforms and Applications
In this paper, we establish (p, q)-analogues of Laplace-type integral transforms by using the concept of (p, q)-calculus. Moreover, we study some properties of (p, q)-analogues of Laplace-type integral transforms and apply them to solve some (p, q)-differential equations.
Introduction
Integral transform techniques are very important for solving many problems in applied mathematics, physics, astronomy, economics and engineering. The integral transform techniques have contributed largely to a variety of theories and applications, such as the Laplace, Sumudu, σ-Integral Laplace, Mohand, Sawi, Kamal and Pourreza transforms. In the sequence of such integral transforms, in 2017, H. Kim [1] introduced the Laplace-typed integral transform, or αG-transform, which is defined by

G_α(f(t); u) = u^α ∫₀^∞ f(t) e^(−t/u) dt,  where α ∈ R.

The αG-transform can be applied directly to a suitable problem by choosing α appropriately. Table 1 lists a few such transforms with their definitions and the settings of u and α that convert the αG-transform into them:

Mohand [6]: s² ∫₀^∞ f(t) e^(−st) dt, with u = 1/s and α = −2
Sawi [7]: (1/s²) ∫₀^∞ f(t) e^(−t/s) dt, with u = s and α = −2
Kamal [8]: ∫₀^∞ f(t) e^(−t/s) dt, with u = s and α = 0
Pourreza [9]: s ∫₀^∞ f(t) e^(−s²t) dt, with u = 1/s² and α = −1/2
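As a quick sanity check, the following sympy sketch verifies that choosing u = 1/s and α = −2 reproduces the Mohand transform for a sample function f(t) = t. The definition G_α(f; u) = u^α ∫₀^∞ f(t) e^(−t/u) dt used here is inferred from the table rows above, not quoted directly from [1].

```python
# Sanity check (sympy) that u**alpha * Int f(t) e^{-t/u} dt reduces to the
# Mohand transform s**2 * Int f(t) e^{-s t} dt for u = 1/s, alpha = -2.
import sympy as sp

t, s = sp.symbols("t s", positive=True)
f = t  # sample function

def G(f, u, alpha):
    return u**alpha * sp.integrate(f * sp.exp(-t / u), (t, 0, sp.oo))

mohand = s**2 * sp.integrate(f * sp.exp(-s * t), (t, 0, sp.oo))
print(sp.simplify(G(f, 1 / s, -2) - mohand) == 0)  # True
```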
Preliminaries
In this section, we give basic knowledge that will be used in our work. Throughout this paper, let 0 < q < p ≤ 1 be constants.
Let us introduce the (p, q)-analogue, or (p, q)-number, for n ∈ N, which is defined by [n]_{p,q} = (p^n − q^n)/(p − q). (1)
If p = 1 in (1), then (1) is q-analogue of n or q-number; see [26] for more details.
Definition 1 ([35]). If f is an arbitrary function, then its (p, q)-derivative is D_{p,q} f(x) = (f(px) − f(qx)) / ((p − q)x), x ≠ 0. (4)
If p = 1 in (4), then D_{p,q} f(x) = D_q f(x), which is the q-derivative of the function f; in addition, if q → 1 in (4), then we recover the classical derivative.
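A small sympy check of Definition 1, using the standard (p, q)-derivative formula given above, verifies the expected identity D_{p,q} x^n = [n]_{p,q} x^(n−1):

```python
# Symbolic check (sympy) of the (p,q)-number and (p,q)-derivative:
# D_{p,q} x^n should equal [n]_{p,q} x^(n-1).
import sympy as sp

x, p, q = sp.symbols("x p q", positive=True)

def pq_number(n):
    return (p**n - q**n) / (p - q)

def pq_derivative(f):
    return sp.simplify((f.subs(x, p * x) - f.subs(x, q * x)) / ((p - q) * x))

n = 3
lhs = pq_derivative(x**n)
rhs = sp.simplify(pq_number(n) * x**(n - 1))
print(sp.simplify(lhs - rhs) == 0)  # True

# Setting p = 1 recovers the q-derivative of x^3.
print(pq_derivative(x**n).subs(p, 1))
```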
Proposition 1. The (p, q)-derivatives of the product and quotient rules of functions f and g are as follows: The proof of Proposition 1 is given in [35].
If p = 1 in (7), then (7) reduces to the q-integral of the function f ; also, if q → 1 in (7), then we get the classical integral.
Proposition 2. If f and g are arbitrary functions, then b a f (px)(D p,q g(x))d p, is the (p, q) integration by parts. Note that b = ∞ is allowed.
The proofs of Proposition 2 are given in [35].
Proposition 3 ([35]
). If n ∈ R, then the following identities hold: The proofs of the following Propositions are given in [48].
Definition 5 ([38]
). For s, t ∈ N, the (p, q)-beta function is defined by Theorem 1. For s, t ∈ N, the relation between the (p, q)-gamma function and the (p, q)-beta function is The proof of this Theorem is given in [38].
Properties of (p, q)-Analogues of Laplace-Typed Integral Transform
In this section, we introduce (p, q)-analogues of the Laplace-typed integral transform, denoted 1αG_{p,q} and 2αG_{p,q} and called the αG_{p,q}-transform of type one and type two, respectively. Definition 6. The 1αG_{p,q}(f(t); u) over the set A in (26) and the 2αG_{p,q}(f(t); u) over the set B in (27) are defined in (28) and (29), respectively. If u = 1/s, p = 1 and α = 0, then (28) and (29) reduce to the transforms that appeared in [28]; if u = s and α = −1, then (28) and (29) reduce to the transforms that appeared in [49].
Theorem 2. (Linearity):
If f 1 , g 1 ∈ A and f 2 , g 2 ∈ B, then for constants c and d, we have Proof. The theorem follows immediately from Definition 6.
Theorem 3. (Scaling):
If f 1 ∈ A and g 1 ∈ B, then the following formulas hold for non-zero constants β and γ: Proof. Using (28) and Proposition 7, we have The proof of (33) is similar to (32), and therefore the proof is completed.
Theorem 4. Let α ∈ R; then the following formulas hold: Proof. Using (28) and (12) to prove (34), we get The proof of the part (35) utilizes a similar process as for (34). Therefore, the proof is completed.
Theorem 5. If n ∈ N, then the following identities hold: .
Proof. Using (8) and (28) to prove (i), we have We prove (iii) by mathematical induction: obviously, (iii) is true for n = 1. Assuming that (iii) is true and using the (p, q)-integration by parts, we obtain .
The proofs of (ii) and (iv) use (8) and (29); then we follow a similar process for (i) and (iii), respectively. Therefore, the proof is completed.
respectively, which appeared in [28]. If u = s and α = −1, then Theorem 5 (i) and (iii) reduce to S p,q (t; s) = s p and S p,q (t n ; s) = s n [n] p,q !
Theorem 9. (Transforms of integrals):
Let f ∈ A andf ∈ B, then the following identities hold: Proof. Using (8) and (28) to prove (i) − (iii), we have We give g(t) = E p,q − t u , h(t) = ∞ 0 f (x)d p,q x and apply the formula of (p, q)integration by parts, we obtain Next, we get Consequently, After continuing this process, we obtain the sequence The proofs of (iv) − (vi) utilize (8) and (29), and then follow the similar process for (i) − (iii). The proof is completed. Remark 6. If p = 1, then Theorem 9 (iii) reduces to Furthermore, if q → 1, then (36) reduces to the α G-transform of integrals, which appeared in [12].
Theorem 10. (Transforms of derivatives):
If f ∈ A and D n p,q has the 1 α G ( p, q)-transform of type one for each n ∈ N, then the transforms of the first, second and n-th derivatives of f can be written in the following forms: Proof. Using (8) and (28) to prove (i), we have Applying the equation above with n = 2 to prove (ii), we get 1 α G p,q (D (2) p,q f (t) In (iii), if n = 1, it is not difficult to see that 1 α G p,q (D (n) p,q f (t); u) = 1 α G p,q ( f (t); up n ) u n p nα p ( n+1 The proof of α G p,q -transform of the type two in (29) is similar to the one for Theorem 10, and is therefore is omitted.
Theorem 11. (Derivative of transforms):
For n ∈ N, the following formulas hold: Proof. Using (28) Taking (p, q)-derivative on both sides with respect to 1/u, we get From (37), taking the second (p, q)-derivative on both sides with respect to 1/u to prove (ii), we have Following the same process, we can prove (iii). Therefore, the proof is completed.
The proof of α G p,q -transform of the type two in (29) is similar to the one for Theorem 11 and therefore is omitted.
The proof of α G p,q -transform of the type two in (29) is similar to the one for Corollary 2, but changes e p,q (at) to E p,q (at), and therefore is omitted.
Theorem 13. (Transforms of the Dirac delta function): For a ≥ 0, let If δ(t − a) denotes the limit of f k as k → 0, then we have where δ is the Dirac delta function.
Proof. Using (28) to prove (44), we obtain If we take the limit of f k as k → 0, then The proof of (45) uses (29), and then follows the similar process for (44). Therefore, the proof is completed.
Therefore, the proof is completed.
The proof of α G p,q -transform of the type two in (29) is similar to one for Theorem 14 and therefore is omitted.
Examples
In this section, we solve some (p, q)-differential equations using the definition and properties of the αG_{p,q}-transform of type one. We consider the (p, q)-Cauchy problem and two second-order (p, q)-differential equations, where c is a constant.
"Mathematics"
] |
Eating Healthy Might Help the Immune System Fight Cancer
Obesity has been known for years to be a major health problem. Rates of obesity have been steadily increasing all over the world. Many factors, including healthy eating habits and exercise, play important roles in controlling obesity. In our study, we compared the function of cells of the immune system called natural killer (NK) cells between healthy and obese groups of mice. We found that obese mice have lower numbers of NK cells and that NK cells from obese mice are less functional. Lower NK cell activity is related to a higher risk of infections and cancer in the obese group. This research could show a relationship between what we eat and our ability to defend ourselves against diseases like cancer.
Where do immune cells come from? The immune system contains many types of cells, all of which originate from cells called multipotent hematopoietic stem cells (HSCs), found in the bone marrow. As HSCs divide, they develop into the many cell types shown in this diagram. In our research, we were interested in NK cells, which are one of the three kinds of lymphocytes.
contribute to a person's likelihood of becoming obese, but other things, like lack of exercise and eating a high-fat diet, can also contribute to obesity. The number of obese people in the world is steadily increasing, and obesity is now treated as a disease. We know that obesity causes many health issues. For example, obesity can cause heart problems, because the excess weight makes the heart work too hard, so it gets tired sooner. When the heart gets too tired and weak, it might stop beating and pumping the blood, which can cause death (cardiac arrest or heart failure). While we know obesity affects the heart, we do not know everything that it does to other parts of our body. Obese people have also been found to have a higher risk of cancer. Increased risk of cancer could be related to a decreased ability of the immune system to protect us from cancerous cells. Therefore,
IMMUNE SYSTEM
A system of the body that protects it from dangers, such as bacteria, viruses, etc.
we decided to study cells of the immune system, to see if they are somehow di erent in healthy mice than in obese mice.
WHAT IS THE IMMUNE SYSTEM?
The immune system is extremely important to keep us healthy. It helps defend against infections and diseases caused by bacteria and viruses, and it is also believed to help defend us against cancer. Since we depend on the immune system so much, it makes sense to keep it working at its best. The immune system needs several different kinds of cells to successfully defend our bodies. The Figure shows a list of the many kinds of immune cells the body uses to defend itself. All immune cells originate from cells called hematopoietic stem cells, which are found in the bone marrow. All of these immune cells are important, but our study focused on one type of cells, called lymphocytes. Lymphocytes can remember what they attack. These memories allow the immune system to respond faster the next time they see the same threat. Faster responses mean we get sick less. Did you ever
PROTEASES
A type of protein that assists in breaking down other proteins.
Perforins are used to punch holes in the outside walls of cancer cells. The holes allow proteases to invade the cancer cells. Once inside, the proteases attack and break down important cell parts that the cancer needs to spread and survive. If cancer is not able to spread and renew itself, it is much less dangerous. To successfully fight cancer cells, NK cells need to be healthy and fully functional (Figure ).
OBESE MICE HAVE FEWER, LESS ACTIVE NK CELLS
To see if diet could affect the number of NK cells, we fed mice either a high fat-calorie diet (HFCD) or a control, healthy diet (CD). In -weeks, mice fed HFCD gained -times more weight compared to mice fed CD. Using flow cytometric analysis, we counted the number of NK cells in both obese HFCD mice and CD mice and found that there were fewer NK cells in obese mice ( Figure A) [ , ].
To see whether the NK cells from healthy and obese mice could kill cancer cells, we ran a cytotoxicity assay in which NK cells are the effector cells and tumor cells are the targets. We found that NK cells from obese mice were less capable of killing cancer cells ( Figure B). We think this means that, if the obese mice had cancer, the cancer could spread very easily, because the NK cells of those mice are weaker and would not be able to destroy the cancer cells [ ]. Although this finding still needs to be confirmed in humans, the results obtained from mice suggest that obese people might also have a smaller number of NK cells and that those NK cells might be less functional than NK cells from non-obese people. This could help explain why obese people have a higher risk of getting cancer compared to healthy people.
PUTTING IT ALL TOGETHER
The immune system is important. We use it every day for everything from fighting off infections that cause the sniffles up to more serious things, like protecting us from cancer. So, it is in our best interest to keep the immune system in great shape. When our bodies get out of shape and become obese, our natural killer cells may show signs of being out of shape, too. In obese mice, there are fewer NK cells, and those cells do not do their job of defending the mice very well. If these findings hold true in humans, this could make obese humans more vulnerable to cancer, perhaps making it easier for the cancer to spread from one organ to another. It is possible that simply eating a little less fat every day and keeping your body a little healthier may be enough to help your NK cells fight off your body's enemies, even something as dangerous as cancer.
CONFLICT OF INTEREST:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
"Medicine",
"Biology"
] |
Improving production and quality of life for smallholder farmers through a climate resilience program: An experience in the Brazilian Sertão
We use a combination of economic and wellbeing metrics to evaluate the impacts of a climate resilience program designed for family farmers in the semiarid region of Brazil. Most family farmers in the region are on the verge of income and food insufficiency, both of which are exacerbated in prolonged periods of droughts. The program assisted farmers in their milk and sheepmeat production, implementing a set of climate-smart production practices and locally-adapted technologies. We find that the program under evaluation had substantive and significant impacts on production practices, land management, and quality of life in general, using several different quasi-experimental strategies to estimate the average treatment effect on the treated farmers. We highlight the strengths and limitations of each evaluation strategy and how the set of analyses and outcome indicators complement each other. The evaluation provides valuable insights into the economic and environmental sustainability of family farming in semiarid regions, which are under growing pressure from climate change and environmental degradation worldwide.
Introduction
Family farming, both in Brazil and globally, is under tremendous pressures from climate change and environmental degradation, both of which are often in a positive (detrimental) feedback cycle [1]. In semiarid regions, where social and climate vulnerability tends to be widespread, minor changes in the environment may have harmful impacts on water supply and local food security [2,3]. Economic pressures (most family farmers are poor) also mean that transient shocks can lead to long-run negative impacts, where farmers must deplete natural capital stock to cope with decreasing productivity.
Studies have highlighted how adaptive and mitigative strategies may simultaneously build climate resilience and improve farmers' quality of life, and farmers may adapt in response to such interventions. Evaluating social programs in the developing world creates a series of empirical challenges similar to the ones we faced; we provide a roadmap for leveraging existing programs to learn more about which programs work and which do not, even when experimental implementation falls apart.
Building sustainable and resilient agriculture in Sertão
The Brazilian Sertão provides a unique opportunity to analyze how specific climate resilience strategies may improve the quality of life of small-scale farmers. It is the most populous semiarid region in the world, home to roughly 20 million people in 2010 [12]. The main biome of the Sertão, known as caatinga, extends over 900,000 km² (10% of the country) and presents patterns of anomalous precipitation with occasional multi-year droughts [13]. The biome presents rich biodiversity, with high levels of endemism [14]. However, poor land use practices, growing rates of deforestation, and prolonged periods of drought have compromised human development in the region. Long-term sustainable development in the Sertão means finding a balance between ecosystem conservation and agricultural production. The region is home to 1.8 million farmers (36% of Brazil's total of 5.1 million), and two-thirds of them had a total value of production lower than 5,000 Brazilian reais (1,500 dollars) in 2016 (versus 26% nationally) [15]. Most family farmers in the region are on the verge of income and food insufficiency, both of which tend to be exacerbated by climate change. In the Jacuípe Basin, a subset of the region in Bahia State, the average temperature increased by more than 2°C over the past 40 years, while average precipitation fell by between 300 and 450 mm, a reduction of 30% [11]. The main economic activity in the region is livestock and dairy farming, which is directly exposed to climate conditions in several ways. First, the animals themselves are exposed to droughts and heat stress, and dairy production in particular is sensitive to temperature changes [6]; second, environmental conditions determine the amount of forage produced, which affects animal health and growth. In the long run, the progressive replacement of the native caatinga vegetation with grass pasture has also reduced farmers' resilience to climate change by decreasing the water retention capacity and microbial biomass of the soil, and by further exposing animals through the removal of tree cover [16,17].
Climate change also means that farmers must adapt to new environmental conditions through climate-resilient agriculture, using strategies that can recover from climate impacts in an effective manner. Resilient agriculture in the Sertão requires, above all, farming systems that optimize the use of the water generated by low and unpredictable rainfall, increase water storage in the soil, and use drought-tolerant crops [18]. For example, the integration of agriculture, livestock, and forest in the Sertão has been shown to produce soils that are more resistant and resilient to prolonged droughts, especially the surface soil [19]. The extensive livestock production prevailing in the Sertão has the lowest stocking rates in Brazil and is primarily dependent on local natural resources [20]. Therefore, the preservation of the natural vegetation may be fundamental to sustaining long-term cattle raising in the region.
The MAIS program is an example of climate-resilient strategies for agriculture in the Sertão. The MAIS is a set of climate-smart production practices and locally adapted technologies designed as a whole to be both resilient to climate variations and regenerative of the natural ecosystem [9]. In terms of land management, the MAIS defines a minimum area of production (20 hectares) to guarantee a sustainable provision of pastures over seasonal and 2-3-year droughts. Farmers set aside an area for Livestock-Forest-Pasture integration (silvopasture) and intensively cultivate hay and forage, mainly Opuntia ficus-indica (prickly-pear cactus). Livestock management includes optimal herd sizing to ensure sustainable production in the long run without the depletion of natural resources, especially soil, and a set of best animal management practices. Farmers also organize their farms to include a management center designed to promote sustainable intensification of livestock production and reduce animal heat stress. As needed, they construct wells, water cisterns, and earth dams to meet family and animal needs during prolonged droughts. Finally, depending on their local conditions, they may purchase recommended small-scale and low-cost machinery, especially tools with high aggregated labor value, to reduce manual work; this includes technologies like mechanical feed crunchers to process Opuntia. All MAIS farmers received technical assistance and training over months in the proper implementation and management of the production system.
The benefits of climate resilience interventions
A growing number of studies have evaluated the impacts of agricultural interventions on indicators of agricultural welfare, including increases in production, farm income, and profits, and decreases in production costs [21]. In general, access to technical training and the adoption of basic technologies have been shown to have a positive impact on agricultural production and farm income. In China, for example, the assistance provided by agrarian scientists in rural communities generated significant benefits for agricultural outcomes [22]. The assistance enabled the diffusion of adequate management practices and helped overcome multifaceted yield-limiting factors involving agronomic, infrastructural, and socioeconomic conditions. In semiarid Ghana, the combination of credit supply and access to irrigation effectively reduced poverty and the risks associated with climate vulnerability in drought years [23]. In the Brazilian Sertão, access to water for irrigation is scarce and most wells yield saline or brackish water, but the adoption of basic types of machinery and fertilizers has been shown to have remarkable impacts on family farming production in the region [6].
The adoption of sustainable agricultural practices also benefits the environment and, indirectly, agricultural production. For example, silvopastoral systems in the Caatinga have been able to minimize soil degradation processes and reduce water erosion and losses of nutrients and carbon [24]. In the medium and long term, improved environmental conditions may increase yields and farm income. For example, Araujo et al. [18] show how increasing the percentage of natural lands in the Caatinga may increase biomass energy production, maintain the flow of essential ecosystem services (such as groundwater stocks), and improve food production.
The benefits of a climate resilience program like the MAIS may also go beyond those captured by traditional indicators of agricultural production and income. For example, the access to information about soil and water conservation, local market opportunities, livelihood diversification, and adaptive household capacity can improve social capital and have spillover effects on life conditions [25]. Strategies that increase agricultural labor productivity may also reduce the farmers' exposure to high temperatures, reducing heat stress, and improving working conditions [26]. For those farmers who rely on subsistence agriculture, the adoption of adaptive strategies may also be a key factor to increase land productivity and guarantee food security locally [27,28].
Although household income has been widely used to evaluate the impacts of public policies on quality of life, measures of subjective wellbeing (SWB) have attracted growing interest in the literature. Wellbeing encompasses multi-dimensional aspects and gives a sense of how people's lives are evolving. SWB refers to self-reported measures of individuals' perceptions of their living conditions that simultaneously incorporate subjective and objective dimensions of life, such as health, comfort, and wealth [29]. One main advantage of SWB measures in the evaluation of social programs is that they assess more general aspects of social life, such as life satisfaction and concerns about past and future life conditions [30]. Measures of SWB have been shown to be an effective way to evaluate the perceived benefits of policies targeted at the poorest in developing countries [31]. Subjective measures have also been successfully employed to assess food security among impoverished family farmers [5]. Because family farmers in the Sertão are subject to unpredictable production conditions and environmental risks that are not easily captured by traditional socioeconomic indicators, we expect that SWB measures may better capture farmers' perceptions of improvements in their income and food security, quality of work, and life in general.
The indicators of SWB can be used for the evaluation of policies in many domains, but they are not without caveats. One main concern is that the participation in the program may alter preferences, perceptions, and expectations, which are related to the subjective evaluations of wellbeing (Hawthorne effect) [32]. In this respect, SWB evaluations should also be validated by comparing the convergence of responses with other indicators related to the same concept [33]. For example, changes in the subjective evaluations of income satisfaction and working conditions may be related to improvements in the adoption of new production practices, such as hiring labor and labor-saving technologies.
Sample design
The survey used in this study involved no risk of physical, informational, or psychological harm to the individuals who participated in the interviews. Data were stripped of identifying information, and research subjects did not include vulnerable or dependent groups. Respondents voluntarily agreed (verbal consent) to participate in the survey and answer questions about their agricultural practices, with an option not to respond available for all questions. We did not seek formal ethics approval because the institutional review board in Brazil did not require it for surveys in applied social sciences at the time this survey was developed.
Between 2016 and 2018, the MAIS program assisted 100 family farmers in their milk and sheepmeat production. The non-profit organization responsible for the MAIS program (Adapta Sertão) conducted a survey in the Jacuípe Basin in 2015 (henceforth, survey 2015), one year before the implementation of the MAIS program. The aim was to understand production practices in the region and better select the 100 farmers to receive the MAIS program. The selection of the MAIS farmers was partially random. Fifty farmers were strategically (nonrandomly) selected among those with the best perceived likelihood of success. Based on information collected in the survey 2015, Adapta Sertão ranked the farmers using a score (henceforth, Adapta score) containing seven main dimensions, each ranging from 0 to 10: education; family structure; technical training; financial resources; market integration; access to water; and land area and management (described in Table 2). The selection of the other 50 farmers was based on: i) a random selection of farmers who met threshold criteria determined by Adapta Sertão but who were not among the Adapta selection; and ii) farmers recommended by the local cooperative and rural associations.
We conducted a follow-up survey among the MAIS and non-MAIS farmers between October 2017 and January 2018 (henceforth, survey 2018), a few months before Adapta Sertão ended its technical intervention. Participation in the survey was voluntary and did not involve any risk of informational harm to individuals. The initial idea was to follow 100 adopters (treatment group) and a group of 100 non-adopters (control group) before (survey 2015) and after the technical intervention (survey 2018). A secondary goal of the project at its outset was to assess the targeting ability of the organization by comparing the 50 randomly selected farmers with the 50 Adapta-selected farmers (and comparing both to control farmers). But most farmers selected to participate in the MAIS withdrew from the program even before receiving the treatment (non-compliers) because, in the wake of the national financial crisis, the Brazilian Government failed to provide subsidized credit to finance the activities (technical assistance and loans for purchases and on-farm improvements). New farmers were then selected to participate in the MAIS, with selection now based mainly on the recommendation of other farmers and local leaders. These new MAIS farmers were not interviewed in the survey 2015, compromising our study design.
To address these issues as well as possible, we surveyed 201 farmers in 2018 (survey 2018): 94 MAIS and 107 non-MAIS farmers. We pre-selected the non-MAIS farmers to minimize the selection bias caused by the non-random designation of the treatment. We used the data provided by the survey 2015 to fit a logistic regression for the probability of being selected into the MAIS system. Our dependent variable was the log-odds of participation in the MAIS system, where participation was measured by a binary treatment variable (MAIS). The independent variables were the Adapta scores and a binary variable equal to 1 when the farmer was a member of the local cooperative. Next, we predicted the probability of participation in the MAIS program for all farmers. The survey 2018 prioritized the selection of the non-MAIS farmers with the highest predicted probabilities, i.e., the non-MAIS farmers most similar to the MAIS farmers along the program's selection criteria. We provided a list of 130 non-MAIS farmers to be interviewed, but only 87 were found in the field survey. We then randomly selected another 20 non-MAIS farmers living in nearby localities.
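As a concrete illustration of this pre-selection step, a minimal sketch follows; the file name and the column names (the seven Adapta scores, cooperative membership, and the MAIS indicator) are hypothetical stand-ins for the authors' actual variables.

```python
# Sketch of the logit-based pre-selection of control farmers.
# All file and column names below are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_2015.csv")  # hypothetical survey-2015 extract
scores = ["education", "family", "training", "finance", "market", "water", "land"]
X = sm.add_constant(df[scores + ["coop"]])   # Adapta scores + cooperative dummy
logit = sm.Logit(df["mais"], X).fit(disp=0)  # log-odds of MAIS participation

df["p_hat"] = logit.predict(X)
# Rank non-MAIS farmers by predicted participation probability and keep the
# 130 most similar to MAIS farmers for the 2018 field survey.
shortlist = df[df["mais"] == 0].sort_values("p_hat", ascending=False).head(130)
```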
Here we present an analysis based on two non-mutually exclusive data sets derived from the 2015 and 2018 surveys. The first is the cross-sectional data set from the survey 2018; the second is the panel of farmers interviewed in both the survey 2015 and the survey 2018.
Outcome variables
We examine the impact of the MAIS on three sets of outcomes that range from very proximate to the intervention (are farmers using the technologies, as intended?) to further downstream (has the MAIS translated into improved welfare?) ( Table 1). These outcome categories are (i) production practices; (ii) land management; (iii) income and subjective wellbeing.
The first group of indicators focuses on the main production practices among family farmers in the Brazilian semiarid region: hiring farm workers, use of brush cutters and shredders, access to a structure to store hay, soil treatment in pastures and control of diseases in the animals. These management adaptations are all directly encouraged under the MAIS system, so these outcomes represent, to some extent, a technical validity check. The share of MAIS farmers with permanent or temporary workers was 27 percentage points higher than that of non-MAIS farmers in the survey 2018 (62 of 94 MAIS farmers versus 42 of 107 non-MAIS farmers). Both milk and sheep meat production are labor-intensive activities, and the hiring of manual labor is a vital component to increase production. The use of brush cutters or shredders may to some extent replace the use of labor and is also higher among MAIS farmers: 85% of the MAIS farmers (80 farmers) had access to one of these technologies in the survey 2018, versus 62% of non-MAIS farmers (66 farmers). These technologies are widespread in the region, and the adoption was promoted as part of the MAIS system as a cost-effective strategy to reduce manual labor requirements. The use of a structure for hay storage, which is crucial for feeding animals during prolonged periods of drought, was 22 percentage points higher among MAIS versus non-MAIS farmers (60 MAIS versus 45 non-MAIS). We also found that MAIS farmers are far more likely to have adopted soil treatment in pastures and control of diseases in the
animals than non-MAIS farmers in the survey 2018: 79 MAIS farmers (84%) used fertilizer, manure, or soil corrective (versus 60 non-MAIS farmers, or 56%); and 23 MAIS farmers (24%) controlled diseases in the animals through deworming, sanitation, vaccination, and medication (versus 11 non-MAIS farmers, or 11%).
The second group of outcomes encompasses variables related to land management practices targeted as part of the environmental component of the MAIS system. We analyzed the impacts on (i) area of capoeira, the secondary vegetation of the Sertão formed mainly by grass and bushes; (ii) area of forage, or planted vegetation used for grazing or cut to feed livestock; (iii) area of preserved caatinga; (iv) area of Opuntia; and (v) reforestation.
The third group of outcomes concerns income and subjective wellbeing. The average farm income of MAIS farmers was about USD 4,200 higher than that of non-MAIS farmers (95% higher) in the survey 2018. We also asked farmers to self-report their perceptions of change in the sufficiency of income, the quantity of food consumed, quality of work, and quality of life in general (these variables were only collected in the survey 2018). MAIS farmers reported better perceptions of change over the last two years for the subjective measures of income (22 percentage points higher), quality of work (30 percentage points higher), and quality of life in general (16 percentage points higher). In turn, there was no significant difference in satisfaction with the change in the quantity of food over the last two years.
Table 2 presents basic descriptive statistics for the explanatory variables used in the sample selection. The scoring rules underlying Table 2 include: Training score (10 for yes, 0 for no); Finance score: access to credit (10 for yes, 0 for no) and paid debts (10 for yes or no debt, 0 otherwise) with weight 1, prior farm income (10 for R$ 24,000 or more, 8 for R$ 12,000-24,000, 6 for R$ 7,200-12,000, 4 for R$ 2,400-7,200, 2 for R$ 0-2,400) with weight 3, and non-farm income (same brackets) with weight 1; Market score: milk sold to the cooperative (10 for 71-100%, 8 for 51-70%, 6 for 31-50%, 4 for 11-30%, 2 for 0-10%) and milk sold to the market (10 for 0-10%, 8 for 11-30%, 6 for 31-50%, 4 for 51-70%, 2 for 71-100%); Water score: months with water in the dam (10 for 13 months or more, 8 for 9-12, 6 for 7-8, 4 for 5-6, 2 for 3-4, 0 for 0-2) with weight 3, plus the flow of fresh water in the well. The variables are divided into three main groups of analysis: the Adapta scores, variables related to both participation in the MAIS and farmers' outcomes (henceforth, vector x); location, a variable related to farmers' outcomes but with no direct (causal) relation to participation in the MAIS (henceforth, Z1); and cooperativism, a variable related to participation in the MAIS but with no direct relation to farmers' outcomes (henceforth, Z2).
The balance of covariates between MAIS and non-MAIS farmers
The Adapta scores were chosen among those indicators that best identified the most productive family farmers in the region. As a result, MAIS farmers tended to be positively selected along some dimensions. For example, the financial score of MAIS farmers was 10% higher than that of non-MAIS farmers in the survey 2018. This score includes access to credit, debts, non-farm income, and farm income in the five years prior to the program. MAIS farms also tended to have larger areas in the survey 2018 (the land score of MAIS farmers was 6% higher than that of non-MAIS farmers). The project prioritized farmers with an area larger than 20 hectares to guarantee a minimum sustainable agricultural production.
In turn, we identified a negative relation between the water score and participation in the program. This score was defined by a weighted average of variables related to the current structure of water storage: months in the year with water in the dam; the flow of fresh and brackish water in the well; and the number of small and large cisterns. Access to water during prolonged droughts is the essential resource for agriculture in the Sertão. The water score of MAIS farmers was 12% lower than that of non-MAIS farmers in the survey 2018: the average water score was 5.8 for MAIS farmers and 6.5 for non-MAIS farmers. The other Adapta scores (education, family structure, training, and access to market) did not statistically differ between MAIS and non-MAIS farmers. These results suggest that Adapta Sertão did not target as effectively as might be expected. Factors that were not initially controlled by the organization may also have influenced the selection of MAIS farmers.
The most striking difference between MAIS and non-MAIS farmers was related to membership in a local cooperative: 66% of the MAIS farmers were members of a local farmer cooperative in the survey 2018, versus 46% of the non-MAIS farmers. To account for this, we created a binary variable to help explain unobservable factors related to participation in the program. Once we control for the Adapta scores, this variable is not expected to have a direct (causal) impact on agricultural production. The region does not have a tradition of cooperativism and rural associativism [34]. The main action of the local cooperative is to facilitate the commercialization of agricultural products into the market, which we already account for by controlling for market access (market score).
Finally, we used the distance to the nearest urban center as a proxy for access to urban commercialization channels. Access to paved roads in the Sertão is scarce, and long distances to commercialization channels may largely increase the costs of production. The average distance between the farms and urban centers was 17 km for MAIS farmers and 15 km for non-MAIS farmers. Despite this small difference between MAIS and non-MAIS farmers, this variable did not play any major role in the selection into the program.
Difference-in-differences.
We want to estimate δ, the average impact of the treatment (T = 0 for non-MAIS and 1 for MAIS farmers) on the outcome Y, controlling for farmers' unobserved heterogeneity c_i:
Y_it = α + δ T_it + x_it β + θ_1 Z_1i + d P_t + c_i + ε_it   (1)
where α is the intercept; x is a vector of control variables that are jointly related to Y and T (the Adapta scores), and β its respective vector of coefficients; Z_1 is an exogenous determinant of Y (distance to the nearest urban center), and θ_1 its respective coefficient; d is the coefficient on the time-period dummy P_t; and ε is the idiosyncratic error. Controlling for Z_1 allows us to obtain unbiased estimates of δ even if distance and the designation of the treatment are weakly correlated.
The first-differenced equation gives the DID estimator of δ:
ΔY_i = d + δ ΔT_i + Δx_i β + Δε_i   (2)
where Δ denotes the change between 2015 and 2018. The DID estimator controls for the farmers' unobservable characteristics c_i that may affect both Y and selection into the treatment and that are assumed constant over time (agricultural skills, for example). The main limitation of the DID estimator in our study relates to efficiency, because the precision of our DID estimates is limited by the small sample of MAIS farmers in the 2015-2018 panel. The consistency of the DID estimator also relies on the assumption of parallel trends: that MAIS and non-MAIS farmers would have evolved in the same way over time in the absence of the intervention. Nonetheless, MAIS and non-MAIS farmers may differ in ways that affect their trends over time, and their composition may change over time.
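A minimal sketch of the first-differenced estimator in Eq. (2) is given below, assuming a balanced two-period panel; the file layout and variable names are illustrative, and the time-varying controls Δx are omitted for brevity.

```python
# Sketch of the DID estimator via first differences (hypothetical names).
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("panel_2015_2018.csv")  # hypothetical two-period panel
wide = panel.pivot(index="farmer_id", columns="year", values=["y", "mais"]).dropna()

fd = pd.DataFrame({
    "dy": wide[("y", 2018)] - wide[("y", 2015)],
    "dT": wide[("mais", 2018)] - wide[("mais", 2015)],  # equals T for adopters
})
# The farmer fixed effect c_i drops out of the differenced equation;
# delta is the coefficient on the change in treatment status.
did = smf.ols("dy ~ dT", data=fd).fit(cov_type="HC1")
print(did.params["dT"], did.bse["dT"])
```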
Propensity score matching.
We applied propensity score (PS) methods to the 201 family farmers of the survey 2018: 94 MAIS and 107 non-MAIS. The PS approach minimizes the selection bias in the designation of the treatment [35]. Techniques based on the PS have been widely used to evaluate the impacts of policies on a treatment group [36][37][38]. The method uses the propensity score p to balance the treatment and control groups based on a set of observable characteristics that are related to the outcome and to the designation of the treatment [39].
We estimated the average treatment effect on the treated (ATT) using three PS methods: nearest-neighbour matching (NN), kernel matching (Kernel), and inverse-probability-weighted regression adjustment (IPWRA). The former two methods (NN and Kernel) estimate the ATT by differencing the average value of Y for treatment and control groups conditioned on p, i.e., the difference between the average Y of matched MAIS and non-MAIS farmers. The selection models used in matching methods must include all variables related to the outcome, whether or not they are related to the treatment, to minimize potential bias in the ATT estimates [39]. Our selection model was then defined by a probit function given by p = P(T = 1) = Φ(x, Z1, Z2).
The latter method (IPWRA) estimates the ATT using weighted regression coefficients, where the weights are the estimated inverse probabilities of treatment. The method obtains consistent estimates even when only one of the two equations (selection or outcome model) is correctly specified; that is, the IPWRA is a doubly robust strategy [41]. The variables included in the selection and outcome models need not be the same. We included the explanatory variables x and Z1 in the outcome model, which are directly related to Y, and the explanatory variables x and Z2 in the selection model, which are directly related to T.
Two main hypotheses must be satisfied to obtain unbiased ATT estimates using PS methods: (i) the balancing hypothesis; and (ii) the conditional independence assumption (CIA). The first assumes that the pre-treatment values of the observable characteristics are independent of the treatment, conditional on the values of p [42]. The CIA assumes that if the potential outcomes are independent of participation in the program conditional on the observable characteristics, then they are also independent of participation conditional on p [36].
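A minimal sketch of the NN variant follows, assuming a probit selection model as in the text; it matches each treated farmer to the closest control on the estimated propensity score, and it omits the balancing and Rosenbaum-bounds diagnostics discussed later.

```python
# Sketch of a 1-nearest-neighbour ATT estimate on the propensity score.
import numpy as np
import statsmodels.api as sm

def att_nn(y, T, X):
    """y: outcomes; T: 0/1 treatment; X: columns [x, Z1, Z2] (illustrative)."""
    p = sm.Probit(T, sm.add_constant(X)).fit(disp=0).predict()
    treated = np.where(T == 1)[0]
    control = np.where(T == 0)[0]
    # Match with replacement to the control unit with the closest score.
    gaps = [y[i] - y[control[np.argmin(np.abs(p[control] - p[i]))]]
            for i in treated]
    return float(np.mean(gaps))
```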
Two-stage estimators.
We applied two strategies based on 2S regressions to control for endogeneity in our survey 2018: the local average treatment effect (LATE) and the control function (CF). One main limitation of cross-sectional studies is the lack of control for unobservables that may be endogenous to the variable T representing designation into the program (selection bias). In our case, the variable T tends to be endogenous because unobservable factors affecting the designation may also affect Y; i.e., we tend to have a correlation between the error ε_i and the regressor T in the structural equation
Y_i = α + δ T_i + x_i β + θ_1 Z_1i + ε_i   (3)
The LATE estimator was obtained through two-stage least squares using an exogenous variable as an instrument for T in Eq. (3) [43]. The consistency of the LATE estimator relies on three main assumptions. First, the relevance assumption: the instrument has a causal effect on T. Second, the exclusion restriction: the instrument affects Y only through T; i.e., the instrument has no direct impact on Y once we control for T. Third, the monotonicity assumption: all those who are affected by the instrument (positively or negatively) are affected in the same direction. In our study, the instrument was Z_2 (membership in the cooperative), which determined participation in the MAIS (assumption 1) but has no direct impact on the outcome Y (assumption 2). Membership in the cooperative may only increase the probability of participation in the MAIS for all farmers (assumption 3). Under these assumptions, the LATE estimates the average causal effect of treatment on an instrument-specific subpopulation; in our case, the subpopulation of MAIS farmers who were members of the cooperative.
In turn, the CF controls for the endogeneity of T by including in the regression model a proxy for the correlation between the unobservable factors and T [44]:
Y_i = α + δ T_i + x_i β + θ_1 Z_1i + ρ v_i + ε_i   (4)
where v is a proxy for the unobservable factors affecting T, obtained from the selection equation
T_i = π_0 + x_i π_1 + π_2 Z_2i + v_i   (5)
The idea behind the CF estimator is that, by including v in Eq. (4), we obtain an error term ε that is uncorrelated with T. But we also need an additional exogenous regressor in the selection model (Eq. 5), which in our case was the same variable used as an instrument in the LATE estimator: membership in the local cooperative. One main advantage of the CF over the PS methods is its greater robustness to misspecification of the conditioning variables, i.e., our variables x, Z_1, and Z_2 [45]. We also checked to what extent endogeneity may be a major concern by testing the significance of the coefficient ρ in Eq. (4): if ρ = 0, then T is exogenous, and the ATT can be estimated by controlling only for the observable variables x and Z_1.
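Both 2S strategies can be sketched compactly; the version below is implemented manually with ordinary least squares for transparency rather than with a dedicated econometrics package, and W stands for the design matrix [1, x, Z1] under our naming assumptions.

```python
# Manual sketches of the LATE (2SLS) and control-function estimators.
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def late_2sls(y, T, W, Z2):
    """First stage predicts T from (W, Z2); second stage uses fitted T."""
    F = np.column_stack([W, Z2])
    T_hat = F @ ols(F, T)                            # first stage
    return ols(np.column_stack([W, T_hat]), y)[-1]   # coefficient on T_hat

def control_function(y, T, W, Z2):
    F = np.column_stack([W, Z2])
    v = T - F @ ols(F, T)                     # first-stage residual, as in Eq. (5)
    beta = ols(np.column_stack([W, T, v]), y) # include v as regressor, as in Eq. (4)
    return beta[-2], beta[-1]                 # (delta, rho); rho != 0 flags endogeneity
```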
Oaxaca-Blinder decomposition.
Our final empirical strategy applied the Oaxaca-Blinder (OB) decomposition to our survey 2018 [46,47]. The OB decomposition estimates one outcome model each for the treatment (subscript T) and control (subscript C) groups:
Y_T = w_T β_T + ε_T,   Y_C = w_C β_C + ε_C   (6, 7)
The vector w includes the determinants of Y, i.e., our variables x and Z_1. To account for selectivity, we weighted the observations of the treatment group by 1/p and the observations of the control group by 1/(1−p). This strategy gives higher weight to the treated observations that are most similar to the control group and to the control observations that are most similar to the treated group. The next step is to decompose the average difference between MAIS and non-MAIS farmers (D = Ȳ_T − Ȳ_C) into
D = w̄_T (β_T − β_C) + (w̄_T − w̄_C) β_C   (8)
where the first component is the unexplained effect, which represents differences due to unobservable characteristics (for example, the knowledge acquired through the technical assistance provided by the MAIS program, which was not measured in our survey), and the second component is the explained effect, which represents differences between the outcome indicators that are explained by observable characteristics (human capital and technology, for example). The unexplained effect offers a robust estimate of the ATT, while the explained effect absorbs the selection bias [48][49][50].
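A compact sketch of this weighted twofold decomposition might look as follows; the 1/p and 1/(1−p) weights mirror the scheme described above, and all names are illustrative.

```python
# Sketch of the weighted Oaxaca-Blinder decomposition in Eq. (8).
import numpy as np

def wls(X, y, w):
    Xw = X * w[:, None]
    return np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)[0]

def oaxaca_blinder(y, T, W, p):
    """W: design matrix [1, x, Z1]; p: estimated propensity scores."""
    t, c = T == 1, T == 0
    bT = wls(W[t], y[t], 1.0 / p[t])
    bC = wls(W[c], y[c], 1.0 / (1.0 - p[c]))
    wT, wC = W[t].mean(axis=0), W[c].mean(axis=0)
    explained = (wT - wC) @ bC    # differences in observables
    unexplained = wT @ (bT - bC)  # robust estimate of the ATT
    return explained, unexplained
```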
One main advantage of the OB decomposition is that it allows us to estimate the direct and indirect impacts of the MAIS program on quality of life. The MAIS program may directly impact quality of life through improvements in agricultural production and farm income. The indirect impacts would be related, for example, to improvements in the use of better production practices and land management. The idea of this empirical strategy is to compare the unexplained component using different groups of control variables in the vector w. For example, we can compare the unexplained component of the income differences between MAIS and non-MAIS farmers without and with controls for production practices. While the former estimate (without controls) represents the total impact of the MAIS program (direct + indirect impacts), the latter (with controls) represents the direct impact. The difference between these estimates represents the indirect impact of the MAIS on farm income resulting from improvements in production practices. A second advantage of the OB decomposition is that it can validate the SWB evaluations. For example, we can check to what extent differences between the subjective assessments of MAIS and non-MAIS farmers are related to differences in the objective indicators of production practices and land management.
The impacts of the MAIS program
Table 3 presents the ATT estimates using the panel 2015-2018 (DID estimator) and the survey 2018 (PS and 2S estimators). Estimates in S1 Table refer to the selection model used in the PS methods. We also included traditional Ordinary Least Squares (OLS) estimates of Eq. (3) using the survey 2018; the idea is to evaluate to what extent the OLS estimates may be biased due to selectivity. The DID estimates are not available for the SWB indicators because those questions were only asked in the survey 2018.
As a result of the small sample size, most DID estimates were insignificant at 10%. The exceptions are the use of brush cutters or shredders (the MAIS would increase the use by 39%) and farm income (the MAIS would increase income by R$ 20,203 annually, nearly USD 5,000). This variation in farm income is a meaningful result because it represents a twofold increase relative to the income of non-MAIS farmers.
The PS methods using the cross-section 2018 (NN, Kernel, and IPWRA) provided more precise estimates than those obtained by the DID and 2S estimators (LATE and CF). As a result, most estimates were significant at 5%, suggesting that the MAIS generated positive impacts on several indicators. The significance of the LATE and CF estimates was largely compromised by high standard errors, nearly five times larger than those obtained by PS. Nonetheless, the magnitudes of the LATE and CF estimates tend to be consistent with those obtained by the PS methods.
The impacts of MAIS on farm income were significant at 1% for all PS estimates (ATT around R$ 17,000, nearly USD 4,250). The PS estimates also suggest that the MAIS improved perceptions of improvement in income (ATT between 19 and 28 percentage points), quality of working conditions (between 30 and 35 percentage points), and satisfaction with quality of life in general (between 16 and 29 percentage points). Positive and significant estimates were also obtained by the CF method. In other words, MAIS farmers tended to report better perceptions of change in their wellbeing over the last two years. But the perception of change in the quantity of food they consumed was not affected. This may be related to the fact that the MAIS program focused on the production of milk and lamb, which does not necessarily have a direct impact on the diversity of food or farmers' diets.
The PS estimates were also positive and significant for most indicators of labor and technology, production practices, and land management. The CF estimates were considerably larger than those obtained by PS, while the LATE estimates were inconclusive due to large standard errors. The most meaningful results were obtained for: use of hired workers (PS estimates between 18 and 27 percentage points; CF estimate of 56 percentage points); use of brush cutters or shredders (PS estimates between 12, insignificant at 10%, and 23 percentage points; CF estimate of 49 percentage points); use of hay storage (PS estimates between 17 and 22 percentage points; CF estimate of 48 percentage points); use of
soil treatment (PS estimates between 28 and 32 percentage points; CF estimate of 49 percentage points); area of caatinga (PS and CF estimates between 4 and 7 hectares); area of forage (PS and CF estimates between 4 and 5 hectares); and area of Opuntia (PS and CF estimates between 0.5 and 0.9 hectares). Insignificant impacts were observed for control of diseases in animals, area of capoeira, and reforestation. The validity of the PS estimates relies on two main assumptions: (i) the balancing hypothesis, i.e., the ability of the PS to match MAIS and non-MAIS farmers with similar observable characteristics; and (ii) the CIA, i.e., that participation in the program is not strongly influenced by unobservable variables. The statistics used to test the balancing hypothesis (S2 Table) indicate that the average differences between MAIS and non-MAIS farmers are nearly zero after matching. We used the method of Rosenbaum bounds to test the CIA for the NN and Kernel matching strategies [51] (S3 Table). Our results suggest that most ATT estimates are robust to the effects of omitted factors, regardless of the matching strategy.
Decomposing the impacts on quality of life
Finally, we decomposed the differences between the indicators of quality of life (farm income and measures of SWB) of MAIS and non-MAIS farmers (Eq. 8) into: (i) explained differences due to observable (control) variables; and (ii) unexplained differences due to unobservable factors. Fig 1 summarizes the estimates combining different sets of control variables (more information is provided in S4 Table). We present estimates only for the variables with at least one component significant at 10%: farm income and the change in satisfaction with income, work, and life in general. Model 1 controls exclusively for the determinants of Y that were not targeted by the MAIS system (x and Z1). The unexplained component in Model 1 can be interpreted as the total impact of the MAIS; for example, the unexplained difference in Model 1 suggests that the total impact of the MAIS on farm income was R$ 15,296.
Model 2, which adds controls for production practices proposed by the MAIS, presents an unexplained component of R$ 13,218 for farm income. The difference between the unexplained components of Models 1 and 2 (R$ 15,296−13,218 = 2,078) is a proxy for the indirect benefit of the MAIS program on farm income through improvements in the number of hired workers and better access to technology. Model 3 controls for the whole set of control variables and indicates that nearly 1/3 of the total impact of the MAIS program on farm income (or R$ 5,449) could be indirectly explained by changes in the production practices and land management.
Similar results were obtained for the indicators of SWB. The total impact of the MAIS program on the SWB indicators ranged from 18 (change in life satisfaction) to 28 (change in working conditions) percentage points (Model 1). The indirect impacts of the program through changes in production practices and land management (Model 3) ranged from 8 (change in life satisfaction) to 9 (change in working conditions) percentage points.
Discussion and conclusion
This paper adds both empirical and analytical contributions to the literature about the impacts of climate-smart strategies on agriculture production in the developing world. Our main empirical contribution is to emphasize the challenges inherent in, and best solutions for, accurately estimating the impacts of the MAIS program using a small sample of farmers that was not merely a random selection of the population, as would be expected in a pure experimental design. The MAIS program illustrates a relevant case of policy in the developing world, where experimental designs can fall victim to severe budget constraints and political mismanagement, or where randomization is unfeasible more generally.
We present estimates of the impacts of the program using different identification strategies and indicators of agricultural production and quality of life. The indicators include both measures of economic welfare, such as income and production practices, and measures of subjective wellbeing, such as life satisfaction. Each strategy has its associated strengths and limitations, and the set of analyses and outcome indicators complement each other and should be viewed as a whole. The methods based on PS provided more precise estimates, although their consistency relies on the strength of the observable variables to control for selection bias. The DID estimates control for farmers' unobservable factors that are constant over time, but their precision was compromised by the low number of treated farmers in the follow-up study. The accuracy of the estimates based on 2S strategies relies on the strength of our instrumental variable, membership in the cooperative. However, the consistency of these estimates may also be compromised by the small sample size.
We did not find remarkable differences between the OLS and the PS estimates, suggesting that selection bias on observable characteristics may not be a severe threat in the impact evaluation. One hypothesis is that the pre-selection of the control group largely attenuated the observable differences between MAIS and non-MAIS farmers. The local cooperative played a major role in the selection of MAIS farmers and is likely the main source of bias from unobservables. The DID, LATE, and CF strategies controlled for selection on unobservables and reinforced many of the positive impacts of the MAIS program, although the precision of these estimates was compromised by the small sample size.
Our main substantive, policy-oriented contribution lies in the finding that basic and low-cost adaptive strategies may have remarkable impacts on the income and quality of life of smallholder farmers. The most consistent achievements were increases in farm income and in access to essential agricultural technologies. This is the first study to evaluate the impacts of a climate resilience program in the Brazilian semiarid region, where family farmers have historically suffered from recurrent and prolonged droughts that have worsened in recent decades. The study provides evidence that fairly simple farm management strategies may be an effective tool for building resilience into rural agricultural systems. MAIS farmers fare better than non-MAIS farmers across several indicators of agricultural production and income, and they also reported greater improvements in their work and life conditions. One caveat of the MAIS program is the null impact on perceptions of improvements in food security. This may be because the program prioritized cash crop production (milk and sheep meat) rather than the food sufficiency of impoverished farmers; more research is needed into the pathways of impact of cash versus non-cash agriculture for household food security.
We were not able to evaluate the middle-and long-term environmental benefits of the MAIS program. The MAIS also stimulated the reforestation of the native vegetation and the adoption of agroforestry systems, which tend to minimize soil degradation, water erosion, and losses of nutrients and carbon storage in the soil. More sustainable agricultural practices are an urgent need in the Sertão, where degraded pastures have extensively replaced the native caatinga vegetation.
A general conclusion is that, albeit still limited in the region, institutional policies aimed at promoting access to basic technical guidance and measures to change production practices should be prioritized. Nearly one-third of the impacts of the MAIS on farm income and the SWB indicators were related to differences in the use of hired labor, technology, production practices, and land management. For example, one primary strategy disseminated in the program was how to properly cultivate densely spaced Opuntia cactus. This low-cost strategy focuses on growing forage crops with much higher yields than traditional cultivation, allowing farmers to feed their animals during prolonged droughts and to avoid overgrazing, land degradation, and land-use change. The other two-thirds of the impact of the MAIS were related to differences that were not directly measured, for example, the technical knowledge disseminated by the program. The MAIS trained extension personnel who were responsible for visiting and advising farmers on a regular and individualized basis, helping farmers to solve several production constraints. This highlights the potentially large role for agricultural extension services in adapting agricultural production systems to changing conditions.
Supporting information
S1 Table.
"Environmental Science",
"Agricultural and Food Sciences",
"Economics"
] |
High static low dynamic stiffness outriggers effects on vibration control on cantilever Timoshenko Beam under earthquake excitation
The High Static Low Dynamic Stiffness (HSLDS) device is a kind of nonlinear visco-elastic device with the features of passive control systems. It has the main advantages of working without external energy and of low maintenance cost. This paper thus deals with the effects of HSLDS-outriggers at a predefined location on a high-rise building subjected to earthquake excitation. Partial differential equations based on Timoshenko theory are used to model the tall building as an elastic continuum beam. A nonstationary random approach is used to represent the dynamics of earthquake excitation with repeated sequences. The stochastic averaging method generalized by harmonic functions, a powerful analytical tool currently applicable to a variety of stochastic and deterministic problems, is developed to linearize the modal equation of the structural system. It is shown that direct simulation is in good agreement with the equivalent linearization technique; in doing so, it appears that this approximate analytical technique is very convenient for quantifying the threshold values of the parameters of the HSLDS control device. The results show that the control device significantly improves the seismic performance of the structural system to an acceptable level.
Introduction
The reduction of earthquake-induced vibration of tall buildings is an important research topic in the area of structural reliability. Hence, various sophisticated methods have been developed to guarantee the safety and stability of these structures. The outrigger system was developed to this end, and its design is currently regarded as one of the most promising alternative solutions [1,2]. It traditionally consists of a core wall, perimeter columns, and outriggers that are rigidly connected to the former two elements [3][4][5][6][7]. Notable implementations around the world include the 212.88 m St. Francis Shangri-La Place in the Philippines [8], the Shanghai Center in Shanghai, 632 m in height [9], and the Burj Khalifa in Dubai, 828 m in height [10]. Although the typical configuration of the outrigger system provides sufficient means to mitigate undesirable vibration, it is convenient to insert control devices into the outriggers, since their presence provides additional energy dissipation to the whole structure [11] and structures are not initially designed to withstand all possible external loads [12]. This is the main reason the concept of damped outriggers has been widely explored in a great number of works [13,14].
In the literature there are three kinds of control devices [15]. First, passive devices, which do not need an external power source to operate. Second, active devices, which, unlike the passive case, need a large external power source to operate. Finally, semi-active devices, which need only a low external power source and exhibit combined passive and active properties. Researchers and engineers continue to devote intensive research effort to reinforcing the capabilities of passive devices, owing to their low maintenance cost. It is in this vein that the choice of the High-Static Low-Dynamic Stiffness (HSLDS) device studied in this paper falls. This control system is one of the most promising devices for improving vibration-isolator performance. It combines positive and negative stiffness elements at a static equilibrium position. It follows that the nonlinear stiffness of this system strongly influences its dynamic responses and vibration-isolation performance [16]. In the same context, considerable research attention has been devoted to the isolation performance of the device through theoretical and experimental analyses [17,18]. Wang et al. [19] explored the effects of the stiffness range parameter and the static-equilibrium-position stiffness on the dynamic responses of the system. According to the authors, increasing the stiffness range parameter and reducing the other parameter improves the isolation performance of the device.
In addition to the above reports, a common assumption when evaluating the dynamic behaviour of the outrigger system is to neglect the influence of the perimeter columns. On the theoretical side, Pin et al. [20] investigated the effect of a damped outrigger modelled as a general rotational spring acting on a Bernoulli-Euler beam. The authors showed that the modal damping ratio is significantly influenced by the stiffness ratio of the core to the column, and is more sensitive to damping than to the position of the damped outrigger. Chen et al. [21] studied the free vibration of a Bernoulli-Euler beam with two intermediate cantilever-attached viscous dampers. They obtained a transcendental equation that governs the complex eigenvalues of the system, from which pseudo-undamped natural frequencies, corresponding damping ratios, and mode shapes can be obtained. Lin et al. [22] studied the damped outrigger incorporating the buckling-restrained brace (BRB) as an energy dissipation device. They pointed out that a properly designed BRB-outrigger system can behave like a traditional elastic outrigger through the BRB's elastic response.
Note that none of the aforementioned studies included shear and rotary deformation in the assumed dynamic behaviour of the frame core-tube.
In the present work, the frame core-tube is considered as a continuum cantilever Timoshenko beam with a constant cross-section; this model is a mathematical extension of the Euler-Bernoulli beam [23].
In this paper, the performance of the HSLDS device on the outrigger system is studied theoretically. The stochastic averaging method [24][25][26], extensively used in engineering applications, is developed and applied to linearize the modal equation of the structural system. Our main objective is to find suitable values of the control parameters of this passive energy-dissipation device that lead to an acceptable level of earthquake-induced vibration.
Earthquakes are among the most powerful catastrophic events influencing the dynamics of structures and buildings worldwide [27]. An earthquake with two repeated sequences is explored in this work [28], because the structure gets damaged in the first sequence and additional damage accumulates from the second sequence before any repair is possible. The efficiency of the control device is thus evaluated over the two intervals.
The rest of the paper is structured as follows. Section 2 describes the structural system; the mathematical model used to investigate its dynamics is developed, and the linearized form of the modal equation is illustrated. Section 3 is devoted to numerical results and discussion. Section 4 concludes the article.
Description of the system
The simplified schematic of the structural system under ground earthquake excitation is represented in Fig. 1a. It consists of a uniform cantilever beam, representing the dynamic behaviour of the core-tube, and the damped outriggers. These are configured so that they are forced to work as a group. As the figure shows, elements such as the core-tube and perimeter columns are rigidly connected to work together in order to resist lateral forces.
The damped outriggers behave as a rigid body and are located at a point x_a along the height of the core tube. Note that the damped outriggers communicate with the perimeter columns through the vertically installed visco-elastic devices (HSLDS), as illustrated in Fig. 1a. The simple schematic of these devices is displayed in Fig. 1b. The system is a nonlinear isolator with high-static low-dynamic stiffness comprising three linear springs (two horizontal springs k_1 and one vertical spring k_0) and a dashpot with linear damping coefficient c_0. Adding these devices should enhance the dynamic performance of the structural system by providing supplementary energy dissipation [11]. In passing, it is important to point out that the outriggers and the exterior columns commonly have high stiffness; in this context, the bending stiffness E_0 I_0 is assumed to be infinitely rigid.
Mathematical model
Note that a common assumption in the literature is to consider the structure displayed in Fig. 1a as a uniform cantilever beam. Here, m_1 defines the mass per unit length; EI is the flexural rigidity, where I is the moment of inertia of the cross-section about the neutral axis and E is Young's modulus; G is the shear modulus of elasticity; and r_a is the radius of gyration of the cross-section. These geometrical and material properties are assumed constant.
The lateral displacement is defined by the variable y(x, t), which varies with the coordinate x along the beam and with time t. Recall, as mentioned above, that the influence of the perimeter columns on the dynamics of the core is not taken into consideration. As a result, the governing equation describing the dynamics of the cantilever Timoshenko beam with damped outrigger subjected to horizontal earthquake loading can be written as [29]
EI ∂⁴y/∂x⁴ + m_1 ∂²y/∂t² − m_1 r_a² (1 + E/(k_s G)) ∂⁴y/(∂x² ∂t²) = −m_1 ÿ_g(t) − ∂M_c(x, t)/∂x   (1)
In this formulation, the third term on the left-hand side represents the correction for rotary inertia plus the shear-deformation effect; for convenience, the joint action of the rotary-inertia and shear-deformation effects is neglected. The dimensionless quantity k_s is the shear coefficient, which depends on the geometry of the cross-section of the beam as well as on Poisson's ratio. The random function ÿ_g(t) represents the ground acceleration; the dot denotes the derivative with respect to t. The seismic events are described through the analytical expressions [30]
ÿ_g(t) = −e(t) [ω_g² x_g(t) + 2ζ_g ω_g ẋ_g(t)]   (2)
ẍ_g(t) + 2ζ_g ω_g ẋ_g(t) + ω_g² x_g(t) = w_a(t)   (3)
Here, x_g(t) is the filter response, obtained numerically from (3); this allows the function ÿ_g(t), which represents the earthquake dynamics, to be determined.
w_a(t) is a stationary Gaussian white noise process with the statistics
E[w_a(t)] = 0,   E[w_a(t) w_a(t + τ)] = 2π S_0 δ(τ)   (4)
Fig. 1 Simple structural model with nonlinear isolator
S_0 is the constant power spectral intensity of the noise. The evolutionary power spectrum is described as [28]
S(ω, t) = |e(t)|² S(ω)   (5)
in which e(t) is a deterministic envelope function of time, defined per acceleration sequence by Eq. (6), where e_0i and α_i are positive constants that control, respectively, the intensity and the nonstationarity trend of the ith acceleration sequence. Note that the function e(t) in (5) is introduced to represent the nonstationarity of the process.
Expression (6) was suggested by Abbas and Takewaki [28]; it illustrates the repeated sequences of the earthquake excitation.
Based on the study of the frequency content of a number of strong ground-motion records [31], the spectral density for the ground acceleration of the earth surface layer was suggested by Kanai and Tajimi [32]. The mathematical formulation is expressed as
S(ω) = S_0 (ω_g⁴ + 4ζ_g² ω_g² ω²) / [(ω_g² − ω²)² + 4ζ_g² ω_g² ω²]   (7)
where ω_g is the dominant frequency of the soil site and ζ_g is the associated damping ratio of the soil layer, representing the spectral characteristics of the ground excitation.
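For illustration, the sketch below simulates a nonstationary ground acceleration by passing white noise through a Kanai-Tajimi filter of the form of Eqs. (2)-(3) and modulating it with a two-sequence envelope; the envelope shape and every numerical value are assumptions chosen only to show the procedure, not parameters identified in this paper.

```python
# Sketch: nonstationary Kanai-Tajimi ground motion with two sequences.
import numpy as np

dt, n = 0.005, 12000
wg, zg, S0 = 12.0, 0.6, 0.01                  # assumed soil parameters and intensity
w = np.random.normal(0.0, np.sqrt(2*np.pi*S0/dt), n)   # discretized white noise

t = np.arange(n) * dt
# Assumed two-sequence envelope built from modulated exponentials.
e = 1.0*(np.exp(-0.15*t) - np.exp(-0.60*t)) \
  + 0.8*(t > 30)*(np.exp(-0.15*(t - 30)) - np.exp(-0.60*(t - 30)))

x = v = 0.0
yg = np.empty(n)
for k in range(n):                            # Euler integration of the filter
    a = -2*zg*wg*v - wg**2*x + w[k]
    x, v = x + v*dt, v + a*dt
    yg[k] = -e[k]*(2*zg*wg*v + wg**2*x)       # filtered, envelope-modulated accel.
```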
Note that the modelling of earthquake excitation has received considerable attention because it is an analytically convenient approach with many advantages for assessing structural behaviour. It allows an accurate characterisation of recorded motions from different sites by adjusting their intensity and frequency content or their statistical properties, and it can help estimate the past nonstationary ground excitations that have hit structures in different countries over many decades.
Note that the outrigger system works by transferring global bending load from the core of the building to the outside columns [33]. Hence, the last term of Eq. (1) shows that the induced effects of the outriggers on the core tube are treated as resistant moments [13]. Consequently, the concentrated moment generated by the control device is expressed as
M_c(x, t) = 2 r f_H(t) δ(x − x_a)   (8)
where δ(x − x_a) denotes the Dirac function, which marks the predefined location where the damped outrigger is installed; the point x_a indicates the distance from the bottom of the tall building. The function in (8) has the property
∫ δ(x − x_a) g(x) dx = g(x_a)   (9)
The distance from the control devices to the centre of the core is denoted r and is also the length of each outrigger.
The factor of two introduced in Eq. (8) denotes the number of HSLDS devices installed, since the damped outrigger is symmetric with respect to the core tube. The force f_H in Eq. (8) is given as follows [34].
The expression in Eq. (10) has been studied in ref. [34], where it was pointed out that the two additional horizontal springs, each with stiffness k_1, create an adjustable nonlinearity by automatically modifying the linear natural frequency of the structural system.
The vertical spring has linear stiffness k_0. The free length of the lateral springs is denoted s_a, and s is the length of each spring in the horizontal position.
To reduce the mathematical difficulty of Eq. (10) within the analytical framework, it is convenient to perform a Taylor expansion, at the cost of neglecting higher-order information. The approximate polynomial form then reads as in (11) [34]. Eq. (11) is thus rewritten as a third-order polynomial, a step that greatly facilitates analysing the effects of the different parameters of the control device, as sketched below.
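A minimal sketch of the isolator restoring force and its cubic approximation; the exact force law below follows the usual quasi-zero-stiffness construction and is an assumption here, as are the parameter values.

```python
import numpy as np

def hslds_force(y, k0=1.0, k1=0.25, s=1.0, sa=1.2):
    """Restoring force of an HSLDS isolator: a vertical spring k0 plus two
    oblique springs k1 of free length sa whose horizontal projection is s."""
    return k0 * y + 2.0 * k1 * y * (1.0 - sa / np.sqrt(s**2 + y**2))

def hslds_force_cubic(y, k0=1.0, k1=0.25, s=1.0, sa=1.2):
    """Third-order Taylor expansion of hslds_force about y = 0, mirroring
    the polynomial form of Eq. (11):
    f ~ (k0 + 2*k1*(1 - sa/s))*y + (k1*sa/s**3)*y**3."""
    return (k0 + 2.0 * k1 * (1.0 - sa / s)) * y + (k1 * sa / s**3) * y**3
```

With sa > s the linear coefficient is reduced below k0 (high static, low dynamic stiffness), while the cubic term k1*sa/s³ carries the adjustable nonlinearity discussed above.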
By introducing the dimensionless variables defined with respect to the beam length L, Eq. (1) is rewritten as (12) and Eq. (11) becomes (13). For analytical purposes, it is convenient to reduce the partial differential Eq. (12) to a set of ordinary differential equations. To that end, the transverse deflection of the beam Y(X, t) can be rewritten as a product of two variables in the following form (14), where Γ_j is the modal participation factor of the jth mode of vibration, determined through the form given in [36]; N is the total number of modes, z_j(t) is the relative displacement response of a SDOF system, and φ_j(X) is the amplitude of the jth mode at the nondimensional height X, defined by the shape-function expression below, in which the parameters are those listed in Table 1. A sketch of this expansion is given after this paragraph.
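A sketch of the mode shapes and participation factors entering Eq. (14); Euler-Bernoulli cantilever shapes are used as a stand-in, since the shear and rotary-inertia corrections of the paper's Timoshenko shapes are omitted in this illustration.

```python
import numpy as np

def cantilever_modes(X, n_modes=3):
    """Cantilever mode shapes phi_j(X) on 0 <= X <= 1 (Euler-Bernoulli form,
    used here as a stand-in for the Timoshenko shapes of the paper)."""
    lams = np.array([1.8751, 4.6941, 7.8548])[:n_modes]  # roots of 1 + cos(l)cosh(l) = 0
    phi = []
    for l in lams:
        s = (np.sinh(l) - np.sin(l)) / (np.cosh(l) + np.cos(l))
        phi.append(np.cosh(l * X) - np.cos(l * X) - s * (np.sinh(l * X) - np.sin(l * X)))
    return np.array(phi)  # shape (n_modes, len(X))

def participation_factors(phi, X):
    """Modal participation factors Gamma_j = int(phi_j) / int(phi_j**2)."""
    return np.trapz(phi, X, axis=1) / np.trapz(phi**2, X, axis=1)

X = np.linspace(0.0, 1.0, 201)
phi = cantilever_modes(X)
gammas = participation_factors(phi, X)
```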
Modal equation
To assess the dynamic response of the structural system, it is worth reducing the partial differential equation to a modal equation. For analysis purposes, the expansion (14) is substituted into (12); performing the integration from 0 to 1 and some algebraic manipulation yields (17), where z_j(t) is the displacement of the whole system corresponding to the jth mode.
The above Eq. (17) is the modal equation of the structural system subjected to earthquake excitation,
with the damping coefficient given by
Analytical approach
The stochastic averaging method is a powerful approximate technique for predicting the response of linear or nonlinear systems under random vibration, and it is widely used in the literature; its application has proved to be a useful tool for deriving approximate solutions to vibration-response problems [38]. The purpose here is to develop a theoretical investigation, through the stochastic averaging method, that provides a good estimate of the effects of the control-device parameters, such as the stiffness and damping coefficients, on the vibration amplitude of the whole structure.
Before starting, it is important to apply the equivalent statistical linearization method. This stochastic technique, discussed by Kougioumtzoglou et al. [37], approximates the nonlinear system by a linear form. Its application to Eq. (17) leads to the following suitable transformation.
The next step of this analysis is to consider that the response of the structural system can be decomposed in amplitude and phase as in (21)-(22), with the natural frequency given by (23). It is observed that the natural frequency in (23) is a function of the amplitude, as illustrated below.
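A minimal sketch of the two frequency expressions used in this step, assuming a Duffing-type modal restoring term ω_0² z + ε z³ (the cubic term coming from Eq. (13)); both closed forms below are the standard results of statistical linearization and of averaging over a nearly harmonic cycle, assumed here to match Eq. (23).

```python
def omega_eq_sq_gaussian(w0_sq, eps, var_z):
    """Statistical linearization of a cubic term eps*z**3 for Gaussian z:
    E[z**3 * z] / E[z**2] = 3*var_z, hence w_eq**2 = w0**2 + 3*eps*var_z."""
    return w0_sq + 3.0 * eps * var_z

def omega_eq_sq_amplitude(w0_sq, eps, A):
    """Averaging over one nearly harmonic cycle of amplitude A gives the
    amplitude-dependent natural frequency: w_eq**2(A) = w0**2 + 0.75*eps*A**2."""
    return w0_sq + 0.75 * eps * A**2
```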
Combining Eqs. (21) and (22) yields (24); then, by differentiating (22) with respect to time and combining with (21) into (20), the following averaged equations can be derived, where w_a(t) represents a stationary, zero-mean Gaussian white noise process of unit intensity [25]. It is noted that the amplitude A_j in (26) is decoupled from the phase φ_j, which is the reason why these variables can be treated separately.
In what follows, the Fokker-Planck equation associated with (26) reads as (27), where P(A_j, t) denotes the amplitude-dependent probability density.
The above equation determines the nonstationary response amplitude density P(A_j, t); a solution of Eq. (27) can be approximated as (28) [24,39], where the function c_j(t) accounts for the time-dependent variance of the response process z_j.
To determine the function defined in (28), substituting Eq. (28) into (27) and manipulating yields

ċ_j(t) = −η_j c_j(t) + 2π S_g(Ω_eq(c_j(t)), t) / Ω²_eq(c_j(t)),   (29)

with the equivalent time-dependent stiffness Ω²_eq(c_j(t)) given by (30) and the moment of the amplitude by (31); here η_j denotes the modal damping coefficient of (18). The reader is reminded that the subscript j = 1, 2, 3, … denotes the mode of vibration of the structural system.
Note in passing that Eq. (29) is a first-order nonlinear ordinary differential equation, which can be solved numerically, as sketched below.
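A sketch of that numerical solution; the Rayleigh form assumed for Eq. (28), the reconstructed right-hand side of Eq. (29), and the placeholder evolutionary spectrum and coefficients are all assumptions of this illustration (it reuses omega_eq_sq_gaussian from the earlier sketch).

```python
import numpy as np
from scipy.integrate import solve_ivp

def rayleigh_pdf(A, c):
    """Assumed form of Eq. (28): P(A, t) = (A / c(t)) * exp(-A**2 / (2*c(t)))."""
    return (A / c) * np.exp(-A**2 / (2.0 * c))

def response_variance(Sg, omega_eq_sq, eta, T=60.0):
    """Integrate the reconstructed Eq. (29):
    dc/dt = -eta*c + 2*pi*Sg(w_eq(c), t) / w_eq(c)**2."""
    def rhs(t, c):
        w2 = omega_eq_sq(max(c[0], 1e-12))
        return [-eta * c[0] + 2.0 * np.pi * Sg(np.sqrt(w2), t) / w2]
    sol = solve_ivp(rhs, (0.0, T), [1e-9], max_step=0.05)
    return sol.t, sol.y[0]

def Sg(w, t, S0=0.02, wg=3.0, zg=0.5):
    """Placeholder evolutionary Kanai-Tajimi spectrum |H(w)|**2 * S0 * e(t)**2."""
    H2 = (wg**4 + 4*zg**2*wg**2*w**2) / ((wg**2 - w**2)**2 + 4*zg**2*wg**2*w**2)
    env = (t - 5.0) * np.exp(-0.30 * (t - 5.0)) if t > 5.0 else 0.0
    return H2 * S0 * env**2

t, c1 = response_variance(Sg, lambda c: omega_eq_sq_gaussian(9.0, 2.0, c), eta=0.3)
```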
Floor displacement
In tall buildings combining shear and flexural effects, the interstory drift ratio (IDR) could be explored; it is defined as the difference in displacement between the floors above and below the story of interest, normalized by the interstory height [36]. Although the IDR seems to correlate well with the seismic damage potential of buildings [40], it is not developed here, because the results obtained by Xie and Wen [41] indicated that Timoshenko theory is of restricted validity for the evaluation of lateral drifts in shear-wall structures and might not be adequate.
In our context, the floor displacement is defined as the displacement of each floor of the tall building. It is a powerful observation tool that also allows the effect of the control device on each floor of the tall building to be analysed. The floor displacement can be computed through Eq. (14) for the transverse deflection; hence the equation below can be deduced, where N_m is the number of modes considered in this work. As the tall building illustrated herein has a finite number of stories, the value for each story is found within the nondimensional height interval 0 < X < 1, as in the sketch that follows.
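Reusing cantilever_modes and participation_factors from the earlier sketch, this fragment assembles an RMS floor-displacement profile; treating the modal responses as uncorrelated, and the variance values themselves, are assumptions for illustration.

```python
import numpy as np

def rms_floor_displacement(X, c_modal, gammas):
    """RMS transverse deflection at heights X from the modal sum of Eq. (14),
    assuming uncorrelated modal responses with variances c_modal[j]."""
    phi = cantilever_modes(X, n_modes=len(c_modal))
    var = np.sum((gammas[:, None] * phi) ** 2 * np.asarray(c_modal)[:, None], axis=0)
    return np.sqrt(var)

# Sixty stories spread over the nondimensional height 0 < X <= 1
X = np.linspace(1.0 / 60.0, 1.0, 60)
gammas = participation_factors(cantilever_modes(X), X)
profile = rms_floor_displacement(X, c_modal=[4e-3, 6e-4, 1e-4], gammas=gammas)
```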
Numerical analysis and discussions
It is well known that tall buildings primarily consist of structural members (columns, walls, floors) with a considerable number of degrees of freedom. To reduce this complexity, the structure can be approximately described by an equivalent homogeneous elastic continuum; Deng et al. [35] found that the results obtained from such a simplified model agree well with those obtained from a finite element model. Note that the main objective is to find the threshold values of K_0 and C_0 that limit the seismically induced structural vibration to an acceptable level.
To investigate the dynamic response of the structure, the simplified cantilever-beam model is specified as a concrete core with cross-section 12 m × 12 m, a thickness of 0.5 m, and sixty stories for a total building height of 210 m [29]. The mass per unit length is m_1 = 62,500 kg/m. Note that the structural system described here has a single outrigger; the effect of the distance from the core to the perimeter columns on the dynamic response will therefore be analysed.
Through the constitutive relationships existing between the coefficients, the parameter values presented in Table 1 are obtained; the table lists the parameters of the shape function and the frequencies of the modal equation for the first three modes of vibration.
In this paper, the mathematical model of the earthquake acceleration sequences is governed by Eqs. (2)-(6). Hence, the simulated nonstationary ground acceleration is shown in Fig. 2. The intensities of the acceleration at the first and second sequences are S_0 = 0.02 m²/s³ and S_0 = 0.01 m²/s³, respectively. The parameters of the envelope functions are adopted as 0.30 and 0.35, and the separating time interval between the sequences is 15 s. The interval times of the envelope function are t_1 = 5 s, t_2 = 25 s, t_3 = 40 s and t_4 = 60 s. Figure 2 explicitly displays the temporal dynamics of the seismic events from Eqs. (2) and (3); it can be seen that, depending on the envelope function (see Eq. (6)), the dynamics exhibit two sequences with a separating time interval between them.
The stiffness coefficient must be selected suitably so that the nonlinear stiffness is always positive. Hence, the values of the different stiffnesses are chosen as K_1 = K_0/4 and K_2 = 55 K_0.
To quantify the effect of the stiffness coefficient K_0 on the response of the structural system, Fig. 3 shows the variation of the peak response amplitude with stiffness for different values of the dimensionless damping coefficient C_0. It is clearly observed that the response amplitude decreases as C_0 increases; in each of the cases shown, the influence of K_0 automatically reinforces the reduction of the structural response. Figure 4 shows the influence of the damping coefficient on the response amplitude in the first three modes of vibration: increasing the damping coefficient automatically decreases the response amplitude. Figure 4a corresponds to the first vibration mode, where a rapid reduction of the amplitude occurs as the damping coefficient C_0 increases, while the attenuation observed in Fig. 4b and c arises especially at the frequency values corresponding to the second and third modes of vibration, respectively. As further information, C_0 considerably influences the amplitude.
The result is evaluated in the first mode of vibration; similar information would be obtained for the other modes of vibration, so it is not necessary to display it here, since the analysis of the modes is independent.
In Fig. 5, a comparison is made with results based on Monte Carlo simulations of the linear and nonlinear systems. More specifically, the response amplitude obtained by the linearization approach (see (20)) is qualitatively very similar to that obtained by nonlinear simulation (see (17)). The nonlinear response amplitude is in good agreement with the linearized estimates, with a notable discrepancy when the linear stiffness coefficient K_0 of the vertical spring increases; for the low values of K_0 computed in practice, the response amplitude calculated from the linearized equation is accurate. The results shown in Fig. 5a indicate, in the first sequence, an error of 1.6% on the peak value of the amplitudes, and an error of 0.75% in the second sequence of ground excitation, while Fig. 5b exhibits an error of 3.9% in the first sequence and of 1.9% in the second. According to Fig. 5c, an error of 5.4% between the peak amplitudes in the first sequence and of 2.7% in the second sequence is observed.
Fig. 2 The simulated acceleration sequences, ω_g = 3 rad/s
Note that the errors become extremely small when K_0 decreases, which causes a softening nonlinearity. When K_0 increases, it instead causes a hardening nonlinearity, an undesirable effect as mentioned in ref. [34], where it was indicated that the presence of such effects in the isolator bends the resonance peak towards a higher frequency range, over which the isolation is violated. Next, to analyse the influence of the control device on each floor, the maximal lateral deflection versus the stories is presented in Fig. 6: Fig. 6a illustrates the influence of the stiffness coefficient, while Fig. 6b exhibits the influence of the damping coefficient of the control device on the vibration control of the structural system. The supplementary information from Fig. 6 allows the threshold values of the coefficients K_0 and C_0 to be found; these parameters should lead the control device to provide a high capability of mitigating transverse displacements against seismic events. As a result, Table 2 lists the percentage reduction of the vibration of each floor of the tall building as the stiffness coefficient K_0 of the control device increases, as seen in Fig. 6a; its influence affects the displacement response considerably more at the bottom than at the top of the structural system.
Unlike the observation made in Table 2, the results of Table 3 list the percentage reduction of the vibration of each floor of the tall building as the damping coefficient C_0 of the control device increases, as seen in Fig. 6b. Its variation significantly affects the transverse displacement more at the top than at the bottom of the tall building, compared with the case displayed in Table 2.
Thus, by combining the information from Tables 2 and 3, it can be concluded that the variation of the coefficients K_0 and C_0 significantly reduces the transverse response at the top and the bottom of the structural system.
As mentioned above, the outrigger without the control device is one of the structural elements connecting the core tube to the perimeter columns; their association forms a unit block able to provide dynamic action in resisting lateral loads. On top of that, it is important to note in passing that the position of the outrigger along the height of the structure is a major factor that significantly affects the dynamics of the tall building [29]. A further major step is therefore to evaluate the effect of the outrigger's length on the transverse deflection of the whole system. Figure 7 displays the influence of the outrigger's length on the maximal deflection of each floor; it comes out that increasing the outrigger's length significantly reduces the earthquake-induced structural vibration.
In what follows, the results shown in Tables 4, 5 and 6 list the data from Fig. 7a-c, respectively. It can be seen that the variation of the outrigger's length improves the dynamic response by reducing the excessive vibration of the whole structure. Moreover, the results indicate that the control device and the core tube should not be too close to each other: for a sound design of the outrigger system, a minimum distance must be respected to accentuate the effectiveness of the dynamic response of the structure. Figure 8 shows the temporal evolution of the root-mean-square acceleration. The results indicate that the peak root-mean-square percentage is 3.5% in the first sequence and 4.43% in the second. It comes out that the variation of the parameters of the control device affects the acceleration amplitude only slowly; nevertheless, they allow the amplitude reduction to be reached rapidly during each sequence of the earthquake excitation.
Conclusion
The current paper investigated the effects of a high-static-low-dynamic-stiffness (HSLDS) outrigger on a cantilever beam under earthquake loads. A Timoshenko model based on partial differential equations is used to describe the dynamic response of the core tube of the structural system. A numerical comparison between the equivalent linearization method and the direct simulation approach is made to justify the precision of the analytical averaging method used. This allowed the threshold values of the stiffness and damping coefficients of the nonlinear control device to be determined. Moreover, it is shown that the performance of the control device depends strongly on the stiffness and damping coefficients. The results clearly reveal that the control device has the potential to reduce the excessive lateral deflection by up to 10% at the top and bottom of the structural system. It was also possible to assess the impact of the distance between the control device, vertically installed at the column, and the centre of the core tube; the variation of this distance can greatly influence the dynamics of the outrigger system, with a vibration reduction of up to 39% at the middle of the structure. This suggests that a good compromise should be found between the control devices and the centre of the core tube to optimise the performance of the structural system. Another conclusion of this work is that analytical investigation is necessary to estimate the threshold parameters of the nonlinear control device that lead to an acceptable reduction of the response amplitude of the structural system. Future work will focus on investigating the delay effects of HSLDS on the structural response.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Synthesis, structure and optical properties of Indium filled CoSb3 nanomaterials
The nano-sized InyCoSb3 skutterudites (y = 0.0125, 0.025, 0.0375 and 0.0625) were synthesized by a solvo-/hydrothermal method at 240 °C for 24 hours. The surface morphology of the as-synthesized samples, analyzed by Field Emission Scanning Electron Microscopy (FESEM), shows nanoparticles with sizes of around 50 nm and a narrow size distribution, and the energy-dispersive X-ray spectroscopy (EDX) spectrum confirms the purity of the In-filled CoSb3 nanomaterials. The powder X-ray diffraction (pXRD) patterns reveal that all the compositions show diffraction peaks indexed to the cubic phase of CoSb3 with space group Im3̄. The absence of significant variation in the pXRD pattern of In-filled CoSb3 compared with pure CoSb3 is consistent with the successful filling of the voids in the cage-like structure of CoSb3 with indium. The FTIR spectra of In-filled CoSb3 present vibration modes below 1000 cm−1 corresponding to Co-Sb bonding and a cobalt complex; a signature of In filling in the CoSb3 structure is also evident. A wide absorption is witnessed from the UV region into a large part of the visible region, indicating the effects of nanoparticle size, agglomeration and filling.
Introduction:
The field of thermoelectrics is growing progressively because of its capability for direct and reversible conversion of thermal energy into electricity, and it has been recognized as a potentially transformative power-generation technology. Thermoelectric devices have many advantages, such as no moving parts, and they are inexpensive and environmentally friendly. The efficiency of TE materials is defined by the dimensionless figure of merit, ZT = S²σT/k, where S is the Seebeck coefficient, σ is the electrical conductivity, T is the absolute temperature and k is the total thermal conductivity (k = k_el + k_lat, where k_el and k_lat are the electronic and lattice thermal conductivities, respectively) [1,2]. Because of their low energy-conversion efficiency, the applications of TE devices are still limited; hence, to improve the efficiency of thermoelectric devices, materials with high ZT values are required. Minimizing k and maximizing the power factor, defined as S²σ, can result in a high ZT value [3]. Many researchers have developed various strategies to reduce the lattice thermal conductivity, such as creating structural disorder, complex crystal structures and nanostructuring [4]. Based on the above criteria, among the various TE materials, skutterudite compounds are promising candidates because of their tuneable transport properties. A skutterudite compound can be represented by MX3, where M is a metal atom and X is a pnictogen atom; it adopts a cubic structure with space group Im3̄ and contains two relatively large voids per unit cell. When a third atom is inserted into the void, a filled skutterudite, represented by RMX3, is formed, which supports the phonon-glass electron-crystal (PGEC) concept. This filler atom introduces additional phonon scattering to reduce the lattice thermal conductivity and also donates electrons into the CoSb3 structure, thereby enhancing the electrical conductivity of the material [5]. Various elements, such as rare-earth elements (La, Ce, Yb), alkaline-earth elements (Ba, Sr, Ga) and others (Sn, In, Ge), have been used to fill the large voids in the crystal in the search for better thermoelectric materials [6-8]. The basic criterion for choosing indium as a filler is that the covalent radius of indium (1.06 Å) is smaller than the void radius (1.892 Å), so it easily enters the void site and rattles well; indium-containing CoSb3 also shows a large negative Seebeck coefficient [9]. Over recent years, many researchers have synthesized In-filled CoSb3 by various methods [11]. These synthesis techniques require long durations, high synthesis temperatures and ultra-high vacuum conditions of the order of 10−5 to 10−6 Torr, and they are also costly. For commercial applications, the cost of TE materials is important, and short-duration synthesis is also required [12]. The solvothermal technique is a reliable synthesis route because of its comparatively low synthesis temperature, low cost and high reliability [13,14]. In the present study, indium-filled CoSb3 nanomaterials were prepared by the solvo-/hydrothermal method and their structural and optical properties are studied. The variation of the lattice parameter as a function of the In filling ratio of InyCoSb3 is investigated.
Experimental Procedure:
Indium-filled CoSb3 skutterudites (InyCoSb3, with y = 0.0125, 0.025, 0.0375, 0.0625) were prepared by the solvo-/hydrothermal method. CoCl2·6H2O, SbCl3, InCl3 and NaBH4 were used as starting materials. The precursors CoCl2·6H2O and SbCl3 were weighed in the stoichiometric ratio (1:3) along with InCl3 as filler and dissolved in water; the solutions were then sonicated for 20 minutes for better dispersion. The NaBH4 solution, taken in a burette, was added dropwise and the reduction reaction was carried out for 15 minutes. After completion of the reduction reaction, the solution was transferred to a PPL-lined autoclave (50 mL) and filled with DMF as solvent up to 70% of its volume. The autoclave was sealed properly and placed in a wide-mouth muffle furnace maintained at 240 °C for 24 h. After the reaction was completed, the furnace was brought down to room temperature naturally. The samples were filtered and washed with distilled water and ethanol in sequence several times, then dried at a temperature slightly above room temperature for 30 minutes. The obtained black powder was taken in a quartz boat, placed in a tubular furnace and annealed at 300 °C for 5 h in an inert atmosphere (argon). Structural characterization was performed by powder X-ray diffraction (pXRD) using a Rigaku Ultima IV powder X-ray diffractometer with CuKα radiation (λ = 1.54178 Å). The surface morphology of the as-synthesized samples was investigated by Field Emission Scanning Electron Microscopy (FESEM; JEOL JSM-7100F) equipped with energy-dispersive X-ray (EDX) analysis to obtain chemical-composition information. The UV-VIS absorption spectra were measured using a Perkin Elmer UV/VIS LAMBDA 365 instrument, and a Perkin Elmer spectrometer was used to record the IR spectra of the compounds in the range 400 to 4000 cm−1 in KBr medium at room temperature. Figure 1(a) shows the surface morphology of the CoSb3 nanomaterials synthesized by the solvo-/hydrothermal method; the nanoparticles are uniform in shape, with particle sizes varying from 50 to 100 nm and some agglomeration. Figure 1(b) presents the XRD pattern of the CoSb3 nanomaterials, with the EDX spectrum in the inset. Almost all the peak positions and hkl values match very well with the binary skutterudite compound CoSb3 and were indexed to JCPDS File No. 76-0470, with a cubic phase and space group Im3̄. A peak corresponding to a CoSb2 secondary phase exists owing to the chemical synthesis technique [15]; J. L. Mi et al. [16] reported that this secondary phase (CoSb2) acts as an intermediate product during the formation of the CoSb3 phase. However, such secondary phases are often reduced, or can be completely removed, by high-temperature and high-pressure treatment before transport measurements [15]. The inset of Fig. 1(b) shows the EDX spectrum of the CoSb3 nanomaterials, indicating the presence of Co and Sb and a small amount of oxygen impurity that is anticipated to originate during sample processing. The powder XRD patterns of InyCoSb3 (y = 0.0125, 0.025, 0.0375, 0.0625) are shown in Fig. 2; all the main diffraction peaks are indexed to the CoSb3 phase with space group Im3̄ (JCPDS File No. 76-0470). In addition, peaks corresponding to CoSb2, InSb and Sb secondary phases were observed in all the filled samples.
Guanghe et al. reported that the filling-fraction limit of the voids in InxCo4Sb12 lies in the range 0.05 < x < 0.10, and that when the filling exceeds this limit, InSb, CoSb2 and Sb impurity phases are formed [17]. However, the formation of an InSb phase on the nanoscale is expected to enhance the TE properties [18]. Using the pXRD data, parameters such as the d-spacing, lattice parameter (a) and crystallite size (D_p) can be calculated, as shown in Table 1. The crystallite size D_p, lattice parameter a and d-spacing were calculated using the equations D_p = kλ/(β cos θ), a = d(h² + k² + l²)^(1/2) and nλ = 2d sin θ, respectively. Table 1 shows the variation of the lattice parameter with increasing In filling fraction: the lattice parameter a decreases as the filling concentration increases. It is also observed from Table 1 that the crystallite size and d-spacing decrease significantly compared with unfilled CoSb3, owing to the increasing grain boundaries in CoSb3; the reduction in crystallite size occurs because the atomic radius of In (1.06 Å) is smaller than the void radius (1.892 Å). A sketch of these calculations is given below.
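A minimal sketch of the three relations just quoted; the reflection index, peak position and peak width in the usage example are hypothetical values chosen only for illustration.

```python
import numpy as np

WAVELENGTH = 1.54178  # Cu K-alpha, in angstrom (value quoted in the text)

def d_spacing(two_theta_deg, n=1):
    """Bragg's law: n*lambda = 2*d*sin(theta)."""
    theta = np.radians(two_theta_deg / 2.0)
    return n * WAVELENGTH / (2.0 * np.sin(theta))

def lattice_parameter(two_theta_deg, hkl):
    """Cubic lattice: a = d * sqrt(h**2 + k**2 + l**2)."""
    h, k, l = hkl
    return d_spacing(two_theta_deg) * np.sqrt(h**2 + k**2 + l**2)

def crystallite_size(two_theta_deg, fwhm_deg, k=0.9):
    """Scherrer equation: D_p = k*lambda / (beta * cos(theta)),
    with beta the peak FWHM in radians; result in angstrom."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return k * WAVELENGTH / (beta * np.cos(theta))

# Hypothetical (310) reflection near 2-theta = 31.2 deg with 0.18 deg FWHM
a = lattice_parameter(31.2, (3, 1, 0))   # ~9.0 angstrom, cubic CoSb3 scale
D = crystallite_size(31.2, 0.18)         # ~460 angstrom, i.e. ~46 nm
```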
Structural analysis of Indium-filled CoSb3 nanomaterials
Typical FESEM images of the indium-filled CoSb3 nanomaterials with various concentrations are shown in Fig. 3. These images show that the In-filled CoSb3 particles have slightly irregular shapes but are uniformly distributed, with sizes of 50-70 nm. The insets of Fig. 3(a-d) show the EDX spectra, which confirm the presence of Co, Sb and In in all the In-filled CoSb3 nanomaterials. According to theoretical studies, the vibrational modes of CoSb3 lie below 400 cm−1 [21,22]. Owing to measurement limitations, the low-frequency phonon vibration modes are not accessible, and hence the information is shown in the range 400 to 4000 cm−1, as in Fig. 4. In both filled and unfilled CoSb3 nanomaterials, a broad absorption band and a small peak found between 2000 and 4000 cm−1 indicate the presence of O-H bonding, which arises from moisture in the KBr pellet or surface absorption of moisture during sample processing [23]. For the samples (y = 0 to 0.0625), weak peaks are found at 1600, 1625, 1623, 1631, 1628 and 1767 cm−1, respectively, corresponding to metal-oxygen bonding. The peaks at 1387, 1384, 1383 and 1384 cm−1 for the samples (y = 0 to 0.0625) are assigned to the cobalt complex and O-H in-plane bonding. All the other peaks below 1000 cm−1 are assigned to Co-Sb bonding. The peaks at 739, 738, 740 and 741 cm−1 can be attributed to the cobalt complex, which also supports indium filling of the void site of CoSb3 [24]. From the FTIR spectra, it is also clear that increasing the indium content has no substantial impact on the vibrational modes of the CoSb3 structure, apart from slight variations in position and intensity. In further studies, FTIR spectra below 400 cm−1 are expected to provide more information about the phonon vibration modes of the CoSb3 samples.
Optical Characterization by UV-Visible Absorption Spectroscopy
The UV-Visible absorption spectra of indium-filled CoSb3 (y = 0 to 0.0625), recorded in the wavelength range 200 to 800 nm, are shown in Fig. 5. No absorption peaks appear in the wavelength range 300-800 nm; the material is therefore suitable for the manufacture of NLO (nonlinear optical) devices [25]. The unfilled CoSb3 sample shows a broad absorption peak near 270 nm, and the filled samples show a single absorption peak in the range 271-282 nm, indicating that the material is suitable for UV filters [24]. It is also observed from Fig. 5 that, with increasing filling fraction, the absorption peak shifts and its intensity decreases in all the filled samples, owing to nanostructuring [26] and to the indium filling fraction exceeding the limit for the CoSb3 voids. Our further UV-VIS studies will address the band-gap evaluation; a common route is sketched below.
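As a pointer towards that band-gap evaluation, the sketch below applies the standard Tauc construction; the exponent choice (direct allowed transition), the use of absorbance as a proxy for the absorption coefficient, and the fitting window are all assumptions for illustration, not taken from the text.

```python
import numpy as np

def tauc_gap(wavelength_nm, absorbance, r=0.5, fit_window=(3.8, 4.4)):
    """Estimate an optical band gap from UV-Vis data with a Tauc plot:
    (alpha*h*nu)**(1/r) versus photon energy, extrapolated to zero.
    r = 0.5 assumes a direct allowed transition; absorbance stands in for
    the absorption coefficient alpha in this sketch."""
    E = 1239.84 / np.asarray(wavelength_nm)          # photon energy in eV
    y = (np.asarray(absorbance) * E) ** (1.0 / r)
    m = (E >= fit_window[0]) & (E <= fit_window[1])  # linear-rise region
    slope, intercept = np.polyfit(E[m], y[m], 1)
    return -intercept / slope                        # x-intercept gives E_g
```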
Conclusion:
The solvo-/hydrothermal method was employed to prepare InyCoSb3 (y = 0, 0.0125, 0.025, 0.0375, 0.0625) nanomaterials, and their structural and optical properties were investigated. The powder X-ray diffraction (pXRD) patterns show that both the unfilled and filled samples have a cubic structure with space group Im3̄. The FESEM study shows the surface morphology of the InyCoSb3 nanomaterials, with particle sizes in the range 50-70 nm. FTIR analysis shows chemical bonds below 1000 cm−1 corresponding to Co-Sb, with the effect of indium filling. The UV-absorption data reveal a shift of the absorption peak towards longer wavelengths and a reduction in the peak intensity in all the filled samples.
"Materials Science"
] |
Probing Electron Excitation Characters of Carboline-Based Bis-Tridentate Ir(III) Complexes
In this work, we report a series of bis-tridentate Ir(III) metal complexes, comprising a dianionic pyrazole-pyridine-phenyl tridentate chelate and a monoanionic chelate bearing a peripheral carbene and carboline coordination fragment that is linked to the central phenyl group. All these Ir(III) complexes were synthesized with an efficient one-pot and two-step method, and their emission hue was fine-tuned by variation of the substituent at the central coordination entity (i.e., pyridinyl and phenyl group) of each of the tridentate chelates. Their photophysical and electrochemical properties, thermal stabilities and electroluminescence performances are examined and discussed comprehensively. The doped devices based on [Ir(cbF)(phyz1)] (Cb1) and [Ir(cbB)(phyz1)] (Cb4) give a maximum external quantum efficiency (current efficiency) of 16.6% (55.2 cd/A) and 13.9% (43.8 cd/A), respectively. The relatively high electroluminescence efficiencies indicate that bis-tridentate Ir(III) complexes are promising candidates for OLED applications.
Introduction
Organic light-emitting diodes (OLEDs) have been widely employed in the fabrication of flat-panel displays and solid-state lighting luminaires. In this regard, Ir(III) phosphors have received special attention for their capability of harvesting both the singlet and triplet excited states formed in the devices [1]. The triplet states account for 75% of the total excited states generated; hence, the strong spin-orbit coupling exerted by the Ir(III) metal atom can reduce the radiative lifetime of the triplet excited states, resulting in a significant improvement of the overall efficiency of OLEDs. This has triggered numerous studies in the quest for chemically and photochemically stable Ir(III) metal complexes, in which efficient phosphorescence from the coupled ligand-centered (LC) ππ* and metal-to-ligand charge transfer (MLCT) excited states tends to fulfil the criteria for higher OLED efficiency [2-7].
Traditionally, these Ir(III) emitters were constructed using bidentate cyclometalates such as 2-phenylpyridine or functional analogues (CˆN) and/or a monoanionic ancillary chelate, denoted (LˆX). The tris-homoleptic and heteroleptic Ir(III) complexes [Ir(CˆN)3] and [Ir(CˆN)2(LˆX)] have been extensively designed and studied [8]. In theory, both of them are capable of affording at least two stereoisomers, whose formation is controlled by intrinsic kinetic and thermodynamic factors; in the case of the homoleptic complexes [Ir(CˆN)3], they are named fac- (facial) and mer- (meridional) isomers. Generally, these stereoisomers possess distinctive chemical and physical properties and, hence, their interconversion should be limited during preparation. One possible method of preventing the formation of multiple stereoisomers is to employ bis-tridentate architectures, in which the planar, meridionally coordinating tridentate chelates enforce a single stereochemical arrangement around the Ir(III) centre.
General Information
All solvents were dried and degassed before use, and commercially available reagents were used without further purification. 2,6-Dibromo-4-methoxypyridine [36,37], 2,6-dibromo-N,N-dimethylpyridin-4-amine [38,39] and 6-(tert-butyl)-9H-pyrido[2,3-b]indole [40] were prepared using methods reported in the literature. All reactions were conducted under an N2 atmosphere and monitored on precoated TLC plates (0.20 mm, with fluorescent indicator F254). 1H and 19F NMR spectra were recorded with a Bruker 400 MHz AVANCE III nuclear magnetic resonance system. Elemental analysis was performed with an elemental carbon-hydrogen-nitrogen analyzer (Elementar). Mass spectra were obtained on a 4800 Plus MALDI TOF/TOF Analyzer (ABI), with 2,5-dihydroxybenzoic acid applied as the matrix. TGA measurements were performed on a TA Instruments TGA Q50 at a heating rate of 10 °C min−1 under an N2 atmosphere. The X-ray intensity data were measured using phi and omega scan modes (APEX3) at 233 K on a Bruker D8 Venture Photon II diffractometer with microfocus X-ray sources.
After that, the preparation of the bis-tridentate Ir(III) complexes Cb1-5 was conducted using a one-pot, two-step method. As a generalized protocol, the carboline chelate (cbF)H·HF6 (or (cbB)H·HF6) was first heated with [Ir(COD)Cl]2 and sodium acetate in degassed acetonitrile. The intermediate was next reacted with one of a series of second chelates (phyzn)H2 (n = 1, 2 and 3) in decalin to afford the desired Ir(III) complexes in moderate yields. Mass spectrometry and 1H and 19F NMR spectroscopy, together with a single-crystal X-ray diffraction study of Cb1, provided the needed characterization. Their structural drawings are depicted in Scheme 2 for scrutiny.
Scheme 2. Structural drawings of the bis-tridentate Ir(III) complexes Cb1-5.
Figure 1 depicts the molecular drawing of Cb1, with thermal ellipsoids drawn at the 30% probability level. The crystal of Cb1 for X-ray diffraction was obtained via slow diffusion of hexane into a saturated CH2Cl2 solution of Cb1 at RT. The Ir(III) metal atom adopts a slightly distorted octahedral coordination arrangement with two mutually orthogonal tridentate chelates. The phyz1 chelate is essentially planar, while the tridentate chelate cbF undergoes a slight distortion at the outer hexagonal ring of the carboline unit, which can be attributed to the unfavourable steric interaction between the carboline and central benzene fragments. In agreement with the prediction of the trans-influence [42], the carbene Ir-C distance (Ir-C(39) = 2.004(3) Å) is relatively shorter than the typical Ir-C distances observed in other bis-tridentate Ir(III) complexes bearing symmetrically arranged carbene pincer chelates (2.043-2.062 Å). Concomitantly, the Ir-C distance of the central benzene group (Ir-C(31) = 2.011(3) Å) is slightly elongated in comparison with that of the corresponding carbene pincer chelates (1.950-1.960 Å). Photophysical data are summarized in Table 1. All Ir(III) complexes give similar absorption patterns: the higher-energy bands below 380 nm are attributed to the spin-allowed ππ* transition, while those occurring in the longer-wavelength region of 380-450 nm are assigned to the singlet metal-to-ligand charge transfer (1MLCT). The next lower absorption bands, spanning the region from 450 nm up to the onset, are ascribed to mixed spin-forbidden ligand-centered ππ* and MLCT transition processes.
Photophysical and Electrochemical Properties
Upon photoexcitation, an intense green emission was observed for Cb1, Cb2 and Cb3 in degassed CH2Cl2 solution, with peak wavelengths at 525, 521 and 529 nm, respectively. The slight shifting of the peak reflects the substituent effects of the pyridinyl coordination unit. It is worth noting that the shoulder on the right of the emission profile gradually vanishes along the sequence of hydrogen, methoxy and dimethylamino substituents, manifesting an increased MLCT contribution that yields a structureless profile. In addition, the radiative rate constants (k_r) for Cb1 to Cb3 (2.0, 3.2 and 3.4 × 10^5 s−1), calculated as the quantum yield (Φ) divided by the observed lifetime (τ_obs), reveal a trend ascending with the increased MLCT contribution, as it fosters stronger spin-orbit coupling and faster phosphorescence. This tendency is also observed on comparing the second set of Ir(III) complexes, Cb4 and Cb5, whose radiative rate constants are 2.9 × 10^5 s−1 and 3.3 × 10^5 s−1, respectively. Furthermore, for Cb3 and Cb5, the bathochromic shift can also be rationalized by the electron-donating effect of the NMe2 substituent at the 4-position of the pyridinyl group, giving a higher-lying HOMO level and hence a narrower energy gap. Figure 3 shows the electrochemical properties of the bis-tridentate Ir(III) complexes Cb1-5, with numerical data listed in Table 2. All complexes present reversible oxidation and irreversible reduction waves. Replacing CF3 with the tert-butyl substituent in the monoanionic carbene pincer chelate induces a cathodic shift of the oxidation potential, e.g., from Cb1 (0.56 V) to Cb4 (0.35 V). For Cb1, Cb2 and Cb3, the oxidation potentials decrease from 0.56 V and 0.53 V to 0.45 V on changing the 4-hydrogen atom on the pyridinyl fragment to methoxy and dimethylamino substituents. A similar trend is also observed between Cb4 and Cb5, which vary from 0.35 V to 0.25 V after the introduction of the dimethylamino group. Meanwhile, the reduction potentials are also influenced by the substituent effects mentioned earlier: among the Ir(III) complexes Cb1-3, Cb3 exhibits the most destabilized LUMO, giving the most negative potential at −2.48 V, which can be explained by the strongest electron-donating ability of the dimethylamino group.
Moreover, both of the Ir(III) complexes Cb4 and Cb5 (−2.50 V and −2.56 V, respectively), bearing the tert-butyl substituent on the monoanionic tridentate chelate, display more negative reduction potentials than the CF3-substituted counterparts Cb1, Cb2 and Cb3 (−2.42 V, −2.45 V and −2.48 V, respectively), showing that the LUMO is not associated with this pyridinyl coordination unit.
Table 2. Electrochemical data of the Ir(III) metal complexes Cb1-5 in acetonitrile at RT.
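As a small numerical companion to the photophysical discussion above, the sketch below recovers radiative and nonradiative rate constants from quantum yields and lifetimes; the lifetime values are back-computed placeholders chosen to reproduce the quoted k_r of Cb1 and Cb3, not measured data.

```python
# k_r = Phi / tau_obs and k_nr = (1 - Phi) / tau_obs
data = {  # Phi (fraction), tau_obs (microseconds); placeholder values
    "Cb1": (0.41, 2.05),
    "Cb3": (0.69, 2.03),
}
for name, (phi, tau_us) in data.items():
    tau_s = tau_us * 1e-6
    print(f"{name}: k_r = {phi / tau_s:.2e} s^-1, k_nr = {(1 - phi) / tau_s:.2e} s^-1")
```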
Theoretical Calculation
We then conducted density functional theory (DFT) calculations at the PBE0/LANL2DZ (Ir) and PBE0/6-31g(d,p) (H, C, N, F, O) levels, using CH2Cl2 as the solvent, to optimize the ground-state (S0) geometries of all molecules. In addition, time-dependent (TD) DFT calculations at the same levels were performed to optimize the geometries of the excited states and to probe the transition characteristics of the studied Ir(III) complexes. The calculated transition energies and major assignments of the Ir(III) complexes Cb1-5 in CH2Cl2 solution are summarized in Tables 3 and S1-S5, respectively. The frontier molecular orbitals involved in the major transitions are depicted in Figures 4 and S1-S5 (the calculated values include 587.7 nm and, for Cb5, 520.2 and 598.9 nm). For Cb1-5, the calculated S1 → S0 wavelengths were all close to the onset of the emission spectra, while the T1 → S0 wavelengths were akin to the experimental emission peaks recorded in Figure 2. The trends of the S0 → S1 absorption and T1 → S0 emission were in good agreement with the corresponding absorption and phosphorescence spectra, respectively. Moreover, the S0 → S1 absorption derived mainly from HOMO → LUMO+1 for Cb1 and Cb4 and from HOMO → LUMO for Cb2, Cb3 and Cb5 (Table 3), while the S1 → S0 and T1 → S0 emissions were all assigned to LUMO → HOMO for Cb1-5. For the ground state S0 of Cb1-5, the electron density distribution of the HOMO was mainly localized on the central Ir(III) metal atom (31-34%) and delocalized over the chromophoric chelate 2-phenyl-6-(3-(trifluoromethyl)-1H-pyrazol-5-yl)pyridine (phyz) and carbene-benzene-carboline (cb), while the electron density distributions of the LUMO and LUMO+1 were mainly localized on the cb or phyz chelate, respectively, with a small contribution from the Ir(III) atom (1-3%) (Figures 4 and S1-S5). For the excited states S1 and T1 of Cb1-5, the electron density distribution of the HOMO was mainly localized on the central Ir(III) metal atom (29-36%) and delocalized over the phyz and cb fragments, while the electron density distribution of the LUMO was mainly localized on the cb or phyz chelate, together with a small contribution from the Ir(III) atom (2-4%). Moreover, it is notable that the LUMO is partially shifted to the carboline moiety in Cb3, while it moves completely to the carboline moiety in Cb5. We attribute this to the introduction of the dimethylamino substituent at the pyridinyl unit of the dianionic chelate, which greatly increases the associated π* orbital energy, such that the LUMO becomes dominated by the relatively unaffected carboline π* orbital. Overall, the S0 → S1, S1 → S0 and T1 → S0 transitions were all mainly ascribed to the metal-to-ligand charge transfer (MLCT) process (19-31%), accompanied by minor ligand-to-ligand charge transfer (LLCT) or intraligand charge transfer (ILCT). These high MLCT characters are consistent with the moderate emission quantum yields (41-69%) of the emissive complexes Cb1-5 in Tables 1 and 3. Furthermore, with regard to the calculated HOMO energy levels of S0, S1 and T1, that of Cb3 was higher than those of Cb1 and Cb2, owing to the electron-donating effect of the NMe2 substituent at the 4-position of the pyridinyl group in Cb3, and that of Cb5 was higher than that of Cb4 (Table 2 and Figures S1-S5). The trend of the calculated HOMO energy levels is in good agreement with the experimental results (vide supra).
Fabrication of OLED Devices
All these new Ir(III) complexes showed high decomposition temperatures (>283 °C, Figure S6), suitable for device fabrication via thermal deposition. In view of their better photophysical properties, Cb1 and Cb4 were selected as dopant emitters for the fabrication of OLED devices with the architecture ITO/TAPC (40 nm)/TCTA (10 nm)/mCP (10 nm)/8 wt.% dopant in mCP (20 nm)/TmPyPB (45 nm)/LiF (1 nm)/Al. Figure 5 presents the chemical structures of the employed materials and the device configuration; the obtained device characteristics and key parameters are summarized in Figure 6 and Table 4 for scrutiny. Here, 1,1-bis((di-4-tolylamino)phenyl)cyclohexane (TAPC) and tris(4-carbazoyl-9-ylphenyl)amine (TCTA) serve as the hole-transporting and electron-blocking layers. 1,3-Bis(N-carbazolyl)benzene (mCP) serves as both the hole-blocking layer and the host in the emissive layer, while 1,3,5-tri(3-pyridyl-3-phenyl)benzene (TmPyPB), LiF and Al act as the electron-transporting layer, electron-injection layer and cathode, respectively. As shown in Figure 6, the normalized EL spectra resemble the PL spectra recorded in degassed CH2Cl2 solution, confirming that the emission is generated solely by the emitters; the EL of Cb4 is also red-shifted compared with that of Cb1. Moreover, the Cb4-based device shows a relatively lower current density at the same voltage than the Cb1-based device, which can be ascribed to the carrier-trapping effect of Cb4, whose energy gap is narrower than that of Cb1 [45,46]. The Cb1-based device exhibited a bright green emission with an EL peak at 530 nm and a maximum luminance of 12,420 cd/m² at 11.5 V, while the Cb4-based device delivered a yellow EL peak centered at 559 nm with a maximum luminance of 21,480 cd/m² at 13.0 V. Maximum external quantum efficiencies (current efficiencies) of 16.6% (55.2 cd/A) and 13.9% (43.8 cd/A) were observed for the Cb1- and Cb4-based devices, respectively. More importantly, both OLED devices present a small efficiency roll-off at 1000 cd/m² (15.4% and 12.1% for the Cb1- and Cb4-based devices, respectively), evidencing good carrier balance during device operation.
Conclusions
In summary, by introducing varied substituents at the 4-position of the central pyridinyl fragment of the dianionic chelate, or on the central phenyl coordination unit of the carboline-based monoanionic pincer chelate, a series of five bis-tridentate Ir(III) complexes were successfully designed and synthesized, with isolated yields higher than 50% and without any isomeric product. This result is consistent with those documented in the literature [37,43]. The addition of methoxy and dimethylamino substituents at the 4-position of the central pyridinyl fragment of the dianionic chelate effectively increased the electron density at the Ir(III) metal center, which increased the MLCT contribution to the excited states and gave a structureless emission profile. As for the Ir(III) complexes Cb4 and Cb5, the tert-butyl substituent at the 4-position of the phenyl ring also red-shifted the emission and gave slightly reduced emission quantum yields. Next, Cb1 and Cb4 were doped into the emission layer for the fabrication of OLEDs, achieving maximum external quantum efficiencies (current efficiencies) of 16.6% (55.2 cd/A) and 13.9% (43.8 cd/A), respectively. These favourable electroluminescence efficiencies indicate that the studied bis-tridentate Ir(III) complexes and their future derivatives are promising candidates for OLED applications.
Supplementary Materials: The following are available online. General experimental procedures for all measurements and calculations, synthetic protocols of the chelates, original electrochemical data and detailed TD-DFT results of the studied Ir(III) metal complexes. Scheme S1. Synthetic protocol for the employed dianionic chelates (phyz)H2; Scheme S2. Synthetic protocol for the employed carboline chelates (cbF)H·HF6 and (cbB)H·HF6; Figure S1. Frontier molecular orbitals pertinent to the optical transitions for the ground state S0 and excited states T1 and S1 of Ir(III) complex Cb1. The electron density distributions of the Ir atoms in each molecular orbital are shown; Figure S2. Frontier molecular orbitals pertinent to the optical transitions for the ground state S0 and excited states T1 and S1 of Ir(III) complex Cb2. The electron density distributions of the Ir atoms in each molecular orbital are shown; Figure S3. Frontier molecular orbitals pertinent to the optical transitions for the ground state S0 and excited states T1 and S1 of Ir(III) complex Cb3. The electron density distributions of the Ir atoms in each molecular orbital are shown; Figure S4. Frontier molecular orbitals pertinent to the optical transitions for the ground state S0 and excited states T1 and S1 of Ir(III) complex Cb4. The electron density distributions of the Ir atoms in each molecular orbital are shown; Figure S5. Frontier molecular orbitals pertinent to the optical transitions for the ground state S0 and excited states T1 and S1 of Ir(III) complex Cb5. The electron density distributions of the Ir atoms in each molecular orbital are shown; Figure S6. Thermal gravimetric analysis of the studied Ir(III) complexes Cb1-5, with the decomposition temperature (T_d) defined at a 5% weight loss; Table S1. The calculated wavelengths, transition probabilities and charge-transfer character of the optical transitions for Ir(III) complex Cb1 in CH2Cl2; Table S2. The calculated wavelengths, transition probabilities and charge-transfer character of the optical transitions for Ir(III) complex Cb2 in CH2Cl2; Table S3. The calculated wavelengths, transition probabilities and charge-transfer character of the optical transitions for Ir(III) complex Cb3 in CH2Cl2; Table S4. The calculated wavelengths, transition probabilities and charge-transfer character of the optical transitions for Ir(III) complex Cb4 in CH2Cl2; Table S5. The calculated wavelengths, transition probabilities and charge-transfer character of the optical transitions for Ir(III) complex Cb5 in CH2Cl2.
"Chemistry",
"Materials Science"
] |
Topological Photonic Media and the Possibility of Toroidal Electromagnetic Wavepackets
This study aims to present a theoretical investigation of a feasible electromagnetic wavepacket with toroidal-type dual vortices. The paper begins with a discussion on geometric phases and angular momenta of electromagnetic vortices in free space and periodic structures, and introduces topological photonic media with a review of topological phenomena of electron systems in solids, such as quantum Hall systems and topological insulators. Representative simulations demonstrate the characteristics both of electromagnetic vortices in a periodic structure and of exotic boundary modes of a topological photonic crystal, on a Y-shaped waveguide configuration. Those boundary modes stem from photonic helical surface modes, i.e., a photonic analog of the electronic helical surface states of topological insulators. Then, we discuss the possibility of toroidal electromagnetic wavepackets via topological photonic media, based on the dynamics of an electronic wavepacket around the boundary of a topological insulator and a correspondence relation between electronic helical surface states and photonic helical surface modes. Finally, after introducing a simple algorithm for the construction of wavepacket solutions to Maxwell's equations with multiple types of vortices, we examine the stability of a toroidal electromagnetic wavepacket against reflection and refraction, and further discuss the transformation laws of its topological properties in the corresponding processes.
Introduction
Physical concepts proposed for one system are sometimes applicable to other systems that initially look considerably different from the original system. Such concepts can be used to predict novel phenomena in the latter system and to explain the mechanisms governing those phenomena; such a mechanism could conversely be applied back to the original system to predict similar phenomena there. Finally, we come to realize their universality. Here we discuss some interconnections among such concepts and mechanisms, e.g., band theory, the geometric phase, the Hall effect, topological phases and so on. "Energy bands" and "band gaps" were originally cultivated in the field of condensed matter theory, which concerns electron systems in solids, e.g., natural crystals or artificial periodic structures. These concepts were applied to the older research theme [1] of electromagnetic waves in periodic structures composed of different kinds of dielectric and magnetic materials, consequently establishing the concept of "photonic crystals" [2-4], which plays an important role in the realization and extension of "metamaterials" [5-8]. The concept of the "geometric phase" was initially introduced in an electromagnetic system [9] and became clearly recognized in quantum systems with spin degrees of freedom (DOF), namely electron systems in solids [10]. Interestingly, this clear-cut recognition was reapplied to an electromagnetic system, i.e., a photon system, and its validity became even clearer there than in electron systems [11]. Meanwhile, the vortex structure of an electromagnetic wave became widely recognized as closely related to the orbital angular momentum of photons [12], a view currently being implemented in optical and quantum information communication technology [13-15]. Moreover, electromagnetic vortices can appear in periodic structures such as photonic crystals [16], suggesting a new kind of internal orbital angular momentum of photons in such systems. These internal orbital angular momenta may be interpreted as quasi-spin DOF and can potentially cause a variety of geometric-phase effects. Specifically, an electromagnetic wavepacket composed of wave modes with such vortices can have an orbital angular momentum perpendicular to its propagation direction. This relation between angular momentum and propagation direction is similar to that of an atmospheric tornado, which shows unexpected, exotic motions.
Herein, we theoretically investigate a possible electromagnetic wavepacket with toroidal-type dual vortices, i.e., having a ring vortex inside the wavepacket and a line vortex along its propagation direction. The line vortex resembles that of a Laguerre-Gaussian beam and implies a finite orbital angular momentum of the wavepacket. This paper is organized as follows. In Section 2, we review the relation between geometric phases and angular momenta, followed by a discussion of electromagnetic vortices in periodic structures in Section 3, which further demonstrates the propagation characteristics of such electromagnetic vortices through numerical simulations on Y-shaped waveguides. In Section 4, we introduce topological photonic media, while reviewing topological phenomena of electron systems in solids, such as quantum Hall systems and topological insulators. Herein, a class of topological photonic media is interpreted as a photonic version of the topological insulator and can be realized as an extension of photonic crystals supporting electromagnetic vortices; moreover, we present another simulation of waveguide propagation via the exotic boundary modes of such a medium. In Section 5, referring to the dynamics of an electronic wavepacket around the boundary between a topological insulator and a conductor, we consider the possibility of toroidal electromagnetic wavepackets, with an argument on the correspondence relation between the electronic helical surface states of topological insulators and the photonic helical surface modes of topological photonic media. In Section 6, we present an algorithm for constructing wavepacket solutions of Maxwell's equations with multiple types of vortices. Next, we numerically investigate the stability of the toroidal electromagnetic wavepacket under reflection and refraction at interfaces between homogeneous isotropic dielectrics, and reveal the transformation laws of the topological charges of the line and ring vortices.
In the next section and beyond, we adopt natural units with ħ = 1 (ħ: Dirac constant, or reduced Planck constant) and c = 1 (c: speed of light), unless these symbols are stated explicitly. We will not distinguish between the wavevector k of a plane wave and the momentum ħk of the quantum particle derived from second quantization of the wave. Likewise, the frequency ω of a harmonic wave and the energy ħω of the corresponding quantum particle will not be distinguished. For convenience in later discussions, we introduce a spherical basis {e_k, e_θ, e_φ} (e_k = k/k) in wavevector space.
2. Geometric Phases and Angular Momenta of Electromagnetic Vortices
In this section, we first review the relation between the spin angular momentum and the geometric phase of an electromagnetic wavepacket, following the relations between polarization state and spin angular momentum and between polarization vector and geometric phase. Figure 1 describes the relation between polarization state and angular momentum [17] of a right circularly polarized wavepacket propagating in the z-direction (upward in the drawing). The arrows represent the deviation of the wavepacket energy flux density from the product of the energy density and the averaged velocity vector, whereas the hue represents the energy density (cold color < warm color). For simplicity, we considered the situation wherein the wavepacket spread is sufficiently large with respect to its central wavelength, so that its deformation can be ignored. In Figure 1, we can find a clockwise vortex in the view facing the propagation direction (z-direction) of the wavepacket, indicating that the right circularly polarized wavepacket has a spin angular momentum in the propagation direction. By contrast, no such structure of the energy flux density appears in a similar plot of a linearly polarized wavepacket at the same scale as Figure 1. Furthermore, once the well-known indications, i.e., the geometric phase appearing due to a change in polarization state [9] and the geometric phase due to an orbital change of a polarized beam [18], are accepted, the relation between spin and geometric phase looms into view. Here, by looking specifically at the relation between polarization vector and geometric phase, we confirm the relation between the three concepts more explicitly. For that purpose, we introduce two quantities, the Berry connection and the Berry curvature, defined as
$$ (\Lambda_{\mathbf{k}})_{\alpha\beta} = -i\, \mathbf{e}_{\mathbf{k}\alpha}^{\dagger} \cdot \frac{\partial \mathbf{e}_{\mathbf{k}\beta}}{\partial \mathbf{k}}, \qquad \boldsymbol{\Omega}_{\mathbf{k}} = \nabla_{\mathbf{k}} \times \boldsymbol{\Lambda}_{\mathbf{k}} + i\, \boldsymbol{\Lambda}_{\mathbf{k}} \times \boldsymbol{\Lambda}_{\mathbf{k}}, $$
where {e_kα} is an orthonormal basis of polarization vectors normal to the wavevector k (e†_kα · e_kβ = δ_αβ, and e†_kα · e_k = 0); the symbol α is an indicator of the polarization state; and e†_kα is the complex conjugate transpose (the Hermitian conjugate) of e_kα. We also introduce the Jones vector corresponding to this orthonormal basis as |z_k). With (z_k| as the Hermitian conjugate of |z_k), they satisfy (z_k|z_k) = 1. Although the representations of Λ_k and Ω_k depend on how the basis is selected, (z_k|Ω_k|z_k) is uniquely determined once a state is given. On the basis of right and left circular polarizations, the Berry curvature is represented as (k/k³)σ₃, where σ₃ is the third component of the Pauli matrices σ = (σ₁, σ₂, σ₃) and is diagonal in the standard representation. In turn, (z_k|Ω_k|z_k) can be expressed as (|z_kR|² − |z_kL|²)/k² e_k using the corresponding representation [z_kR, z_kL] of |z_k). Since the expected value of the spin angular momentum per photon is evaluated as s_k = (|z_kR|² − |z_kL|²) e_k, we find the simple relation s_k = k² (z_k|Ω_k|z_k). We can extend this discussion to a case accompanied by orbital angular momentum. A close relation between the internal orbital angular momentum and the Berry curvature is derived in the same way as above, although the discussion becomes slightly more complex. Here, we consider a beam of central wavevector k_c and introduce the extension in which each polarization component carries an azimuthal vortex phase of integer winding number l_α, where l_α corresponds to the vorticity of the α-polarized component, i.e., l_α ∈ Z.
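To make the circular-polarization Berry curvature above concrete, the following short Python sketch (ours, purely illustrative, not from the original paper) computes the discrete geometric phase accumulated by the helicity eigenvector e_kσ = (e_θ + iσ e_φ)/√2 as k traces a circle of constant polar angle θ, and compares it with the value 2πσ cos θ (mod 2π) implied by the curvature σ k/k³.

```python
import numpy as np

def helicity_vector(theta, phi, sigma):
    """Helicity basis vector e_{k,sigma} = (e_theta + i*sigma*e_phi)/sqrt(2)."""
    e_th = np.array([np.cos(theta) * np.cos(phi),
                     np.cos(theta) * np.sin(phi), -np.sin(theta)])
    e_ph = np.array([-np.sin(phi), np.cos(phi), 0.0])
    return (e_th + 1j * sigma * e_ph) / np.sqrt(2)

def berry_phase(theta, sigma, n=2000):
    """Discrete Berry phase around a circle of constant polar angle theta."""
    phis = np.linspace(0, 2 * np.pi, n + 1)
    vs = [helicity_vector(theta, p, sigma) for p in phis]
    overlaps = [np.vdot(vs[i], vs[i + 1]) for i in range(n)]
    return -np.angle(np.prod(overlaps))

theta = np.radians(70.0)
for sigma in (+1, -1):
    num = berry_phase(theta, sigma)
    ana = 2 * np.pi * sigma * np.cos(theta)      # from Omega = sigma * k / k^3
    ana = np.mod(ana + np.pi, 2 * np.pi) - np.pi  # fold into (-pi, pi]
    print(f"sigma={sigma:+d}: numeric={num:+.4f}, expected={ana:+.4f}")
```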
For simplicity, we consider only the class of beams that are a superposition of a given polarization component with vorticity l and its orthogonal component with vorticity l′. In other words, we restrict ourselves to a finite subspace of the infinite whole space of states. On the basis of right and left circular polarizations, the Berry connection and curvature of this subspace acquire, in addition to the spin part, an orbital contribution characterized by l_OAM, where l_OAM is a 2 × 2 Hermitian matrix with a pair of integer eigenvalues (l, l′). On the other hand, the expected value of the total angular momentum per photon j_k is evaluated by adding the internal orbital part to the spin part, and we can again find the close relation j_k = k² (z_k|Ω_k|z_k). Next, we consider the meaning of the form of the Berry curvature, (k/k³)σ₃, on the basis of circular polarizations, which we shall call the helicity basis hereafter. As we shall see, this form reflects the photon characteristics of a spin-1, gauge-symmetric, massless boson. To this end, we introduce a degenerate two-band model of a spin-1/2 fermion system with conical dispersions, similar to relativistic electrons. The Hamiltonian of this model and its projection onto the subspace of definite wavevector k are given by
$$ H = v\, \boldsymbol{\alpha} \cdot \mathbf{p} + \Delta \beta, \qquad H_{\mathbf{k}} = v\, \boldsymbol{\alpha} \cdot \mathbf{k} + \Delta \beta, $$
where α and β are Dirac matrices and v and Δ are parameters of the effective phase velocity and the band gap, respectively. The band gap of this model plays a role similar to the mass gaps of relativistic theories; hence, we shall refer to the limit Δ → 0 as the massless limit. The eigenvalue problem of H_k can be easily solved.
The eigenvalues of the upper and lower bands are obtained as ±√(v²k² + Δ²); only the upper bands will be considered hereafter. The degenerate eigenstates of the upper band |u_kλ⟩ can be written in terms of the spherical coordinates (k, θ, φ) of wavevector space, where the index λ corresponds to the helicity, i.e., the spin component in the direction of k. Note that such expressions are well-defined only in regions excluding the points θ = 0 and π. However, we shall not step into the regularization at these points, but only mention that expressions valid at θ = 0 or π are obtained by gauge transformations of the original expressions. One can easily confirm that these eigenstates satisfy the time-independent Schrödinger equation in the k-subspace,
$$ H_{\mathbf{k}} |u_{\mathbf{k}\lambda}\rangle = \sqrt{v^2 k^2 + \Delta^2}\, |u_{\mathbf{k}\lambda}\rangle. $$
At this point, we would like to mention that the electromagnetic polarization vectors e_kλ can be interpreted as solutions of a similar Schrödinger-type matrix equation, which is a transcription of Maxwell's equations. The correspondence is confirmed by the replacements E_k = k/(√ε√μ) and (α_i)_{jk} → −i ε_{ijk}, where ε, μ, and ε_{ijk} are the relative permittivity, the relative permeability, and the Levi-Civita symbol, respectively. For simplicity, we assumed a homogeneous and isotropic background medium here. Therefore, by means of the state vectors |u_kλ⟩, we can define the Berry connection in a form common to electronic and electromagnetic systems,
$$ (\Lambda_{\mathbf{k}})_{\lambda\lambda'} = -i\, \langle u_{\mathbf{k}\lambda} | \frac{\partial}{\partial \mathbf{k}} | u_{\mathbf{k}\lambda'} \rangle. $$
Now, let us get back to the degenerate two-band model. On the helicity basis, the Berry curvature of the upper band takes the photonic form multiplied by a Δ-dependent suppression factor. In the massless limit Δ → 0, it coincides with the Berry curvature of photons except for the overall coefficient due to the different spin magnitude. In the non-relativistic limit, i.e., for increasing |Δ| at fixed k, the Berry curvature decreases as 1/Δ², similar to the scale of the spin-orbit interaction. Next, let us consider the relation between the spin angular momentum and the geometric phase in this spin-1/2 massive fermionic system. The spin operator is s = Σ/2 with Σ = diag(σ, σ), and its projection onto the upper band defines s_k. In the massless limit Δ → 0, s_k matches its photonic version except for the coefficient 1/2 that comes from the spin-1/2 nature of the present fermionic system. We can again find a simple proportionality between the Berry curvature and the spin angular momentum. As for electromagnetic waves in periodic structures, we can develop the same discussion by replacing the polarization vectors with eigenstate vectors expressed using Bloch wave functions, as in the example of the spin-1/2 fermion system discussed above [19,20]. We do not intend to have an unnecessarily abstract debate using the "Berry connection" and "Berry curvature". Rather, whereas they were introduced in a way that makes the theories under consideration abstract, we can use a common principle that is independent of the details of electron or photon systems for better understanding. Although the definition of angular momentum in a periodic system is accompanied by ambiguity, the Berry curvature can be uniquely defined apart from the freedom of choice of basis. Our knowledge of phenomena or effects in a given system is then easily applicable to the realization of analogous phenomena or effects in other systems. From this viewpoint, information on the Berry curvature of each band helps us organize the relation between photonic bands in wavevector space and vortices in real space, and serves as a guide for controlling vortices.
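As a quick numerical check of the two-band model above, the following Python sketch (an illustrative script of our own, with arbitrary sample parameters) builds H_k = vα·k + Δβ in the standard Dirac representation, confirms the doubly degenerate eigenvalues ±√(v²k² + Δ²), and verifies that the helicity operator Σ·k̂/2 commutes with H_k, so the upper band can indeed be labeled by helicity.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = [sx, sy, sz]

# Dirac matrices in the standard (Dirac) representation
beta = np.kron(sz, s0)                     # diag(I, -I)
alpha = [np.kron(sx, s) for s in pauli]    # off-diagonal sigma_i blocks

def H_dirac(k, v=1.0, delta=0.5):
    """H_k = v * alpha . k + delta * beta (hbar = 1)."""
    return v * sum(ki * ai for ki, ai in zip(k, alpha)) + delta * beta

k = np.array([0.3, -0.4, 0.5])
v, delta = 1.0, 0.5
H = H_dirac(k, v, delta)
E = np.sqrt(v**2 * (k @ k) + delta**2)
print("eigenvalues:", np.round(np.linalg.eigvalsh(H), 6))  # -E, -E, +E, +E
print("expected +/- sqrt(v^2 k^2 + Delta^2):", round(E, 6))

# Helicity operator Sigma . k_hat / 2 with Sigma = diag(sigma, sigma)
khat = k / np.linalg.norm(k)
Sigma = [np.kron(s0, s) for s in pauli]
hel = 0.5 * sum(ki * Si for ki, Si in zip(khat, Sigma))
print("[H, helicity] norm:", np.linalg.norm(H @ hel - hel @ H))  # ~ 0
```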
3. Electromagnetic Vortices in Periodic Systems
The spin DOF of a photon is no longer well-defined in the presence of a periodic structure, yet we shall need an alternative concept of photonic spin to discuss the possibility of toroidal electromagnetic wavepackets in Section 5, based on the electron dynamics around the interface of a topological insulator, where the concept of electronic spin still works well. (The photonic version of the topological insulator will be introduced in Section 4.) One candidate for this alternative is the internal rotational motion of electromagnetic Bloch modes with vortices. In this section, we step further into electromagnetic vortices in periodic structures such as photonic crystals. After looking back briefly at the relation between the Berry curvature and the real-space vortex structure in periodic systems, we present a numerical simulation of propagation modes around a standing vortex mode in a photonic crystal, indicating the effect of the internal rotation in the propagation process.
Useful functionalities of a photonic crystal, such as light confinement and waveguiding, are realized by adjusting the energy (frequency) dispersion relations and forbidden bands through the periodic structure and the symmetry design of the system. The required design procedures stem from the band theory common to wave phenomena in periodic structures. Moreover, based on the studies of the geometric Hall effect in electron systems [21,22], we can also use such designs to control the Berry curvature of each band [23,24]. For instance, by employing a two-dimensional (2D) periodic system in the xy-plane, we can consider a situation where two bands are just about to touch each other at a point k₀ in wavevector space. In an approximate two-level description of the local energy dispersions, the z-component of the Berry curvature of the two bands near k₀ is estimated as
$$ \Omega_z \approx \mp \frac{v^2 \Delta}{2 \left( v^2 |\mathbf{k} - \mathbf{k}_0|^2 + \Delta^2 \right)^{3/2}}, $$
the standard two-level (massive-Dirac) form, where v parameterizes the local dispersion and 2Δ is the level repulsion. In other words, we can control the Berry curvature by adjusting the level repulsion Δ. We confirmed this mechanism in a stricter theory treating the periodic structure exactly, as well as the relation among the Berry curvature, the angular momentum [19,20], and the real-space vortex structure [16,25]. The electromagnetic vortex can also be controlled through the adjustment of Δ. Figure 2 shows an example of such a set of periodic structure, band diagram, and electromagnetic vortex. As for 2D photonic crystals, we considered only photonic modes of two distinct polarizations, i.e., transverse-electric (TE) and transverse-magnetic (TM), which propagate strictly parallel to the plane with 2D periodicity. Herein, we adopted this definition: TE modes have magnetic fields normal to the plane and electric fields in the plane; conversely, TM modes have electric fields normal to the plane and magnetic fields in the plane. The electromagnetic vortex shown in Figure 2c corresponds to a standing wave mode of zero group velocity. On the other hand, modes around it have finite group velocities in addition to the vortex structure, and can propagate through the crystal with rotational motion. Next, we describe the propagation characteristics of such modes in the Y-shaped waveguide in Figure 3a, composed of the crystal in Figure 2a and a block layer with a sufficiently large band gap covering the relevant frequency range. Since it was not easy to excite a specific electromagnetic vortex mode in a real-time-and-space simulation, we adopted an excitation using a linear source with a line width of a few percent around the central frequency of the targeted vortex modes. Moreover, the linear source was set at the left end of the left branch of the waveguide, and the vortex modes were excited by electric field oscillations along the source. Figure 3b shows the z-component of the magnetic field H_z, and Figure 3c displays the transmission spectra measured at the ends of the upper right and lower right branches. As the vertical axis is in arbitrary units, we also plotted the spectra (blue and red broken lines) for the case where the Y-shaped region is replaced by vacuum, for comparison. The two broken lines overlap each other, and only the blue broken line is visible. The transmission spectra of the target system in Figure 3c are extremely asymmetric, whereas we find only weak asymmetry in the real-space image of Figure 3b.
Figure 3b also shows that this system contains accidental edge modes localized around the interfaces, aside from the bulk vortices; therefore, the asymmetry cannot be attributed solely to the bulk vortex modes. An additional simulation (not shown here) confirmed that the edge modes in this system are strongly reflected at the bends of the Y-shaped waveguide; we therefore conclude that the bulk vortex modes contribute primarily to the asymmetric propagation.
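Returning to the Berry-curvature estimate given earlier in this section, the following Python sketch (ours, purely illustrative) evaluates the curvature of a generic two-level model H(k) = d(k)·σ with d = (v k_x, v k_y, Δ) by a finite-difference solid-angle formula and compares it with the closed form, showing how increasing the level repulsion suppresses the curvature.

```python
import numpy as np

def berry_curvature_z(kx, ky, v=1.0, delta=0.2, dk=1e-4):
    """Upper-band Berry curvature Omega_z of H = d(k).sigma,
    d = (v kx, v ky, delta), via Omega_z = -(1/2) dhat.(dx_dhat x dy_dhat)."""
    def dhat(kx, ky):
        d = np.array([v * kx, v * ky, delta])
        return d / np.linalg.norm(d)
    dx = (dhat(kx + dk, ky) - dhat(kx - dk, ky)) / (2 * dk)  # d(dhat)/dkx
    dy = (dhat(kx, ky + dk) - dhat(kx, ky - dk)) / (2 * dk)  # d(dhat)/dky
    return -0.5 * np.dot(dhat(kx, ky), np.cross(dx, dy))

v, kx, ky = 1.0, 0.1, -0.2
k2 = kx**2 + ky**2
for delta in (0.1, 0.2, 0.4, 0.8):   # increasing level repulsion
    num = berry_curvature_z(kx, ky, v, delta)
    ana = -v**2 * delta / (2 * (v**2 * k2 + delta**2) ** 1.5)
    print(f"delta={delta:4.1f}  numeric={num:+.6f}  closed form={ana:+.6f}")
```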
4. Topological Photonic Media
Based on the pioneering studies of the quantum Hall effect in 2D electron systems [26-28], a topological invariant, known as the Chern number or index, is assigned to an isolated band via integration of its Berry curvature over the entire first Brillouin zone. Depending on the total Chern number of the bulk bands below a bulk gap, exotic states localize at the edge of a finite system and form edge bands traversing the bulk gap [29,30]. Furthermore, each such state works as a one-way waveguide and is therefore called a chiral edge state. For the Chern number of an isolated band to be nonzero, time-reversal symmetry breaking is necessary. This symmetry may be broken not only by an externally applied magnetic field but also by a spontaneously induced magnetic order [28]. When the quantum Hall effect is induced by the latter mechanism, it is sometimes called the spontaneous quantum Hall effect, as distinguished from the original one. To realize a similar situation in photon systems, the above mechanism requires isolated bands to form with band gaps stemming from time-reversal symmetry breaking. For instance, a magnetic body with a complex permittivity tensor with imaginary off-diagonal components breaks time-reversal symmetry for the photon system. At least the necessary conditions are satisfied by designing a 2D periodic structure made of such a material to form isolated photonic bands. This analogy was the basis for a photonic version of the quantum Hall system, theoretically proposed [31-33] and experimentally confirmed [34,35]. More recently, a clear-cut demonstration has also been made of nonreciprocal lasing from chiral edge modes surrounding a network of topological cavities of arbitrary geometry [36].
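To make the band-resolved Chern number concrete, here is a short Python sketch (illustrative, not from the paper) that computes the lower-band Chern number of a minimal two-band lattice model (the Qi-Wu-Zhang model, used here purely as a stand-in) with the gauge-invariant lattice method of Fukui, Hatsugai, and Suzuki.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_qwz(kx, ky, m):
    """Two-band Qi-Wu-Zhang model; the gap closes at |m| = 0 and 2."""
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def chern_lower_band(m, n=60):
    """Fukui-Hatsugai-Suzuki lattice Chern number of the lower band."""
    ks = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)      # lower-band eigenvectors
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h_qwz(kx, ky, m))
            u[i, j] = v[:, 0]
    c = 0.0
    for i in range(n):
        for j in range(n):
            u00, u10 = u[i, j], u[(i + 1) % n, j]
            u11, u01 = u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n]
            # gauge-invariant plaquette field strength from link variables
            c += np.angle(np.vdot(u00, u10) * np.vdot(u10, u11)
                          * np.vdot(u11, u01) * np.vdot(u01, u00))
    return c / (2 * np.pi)

for m in (-1.0, 1.0, 3.0):
    # |C| = 1 in the topological phase (|m| < 2), 0 otherwise
    print(f"m = {m:+.1f}: C = {chern_lower_band(m):+.3f}")
```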
In reverse, the one-way propagation of a chiral edge mode inevitably breaks time-reversal symmetry. (Time-reversal symmetry breaking is a necessary condition for the presence of a single chiral edge state; conversely, the presence of a single chiral edge state is a sufficient condition for the symmetry breaking.) Fortunately, however, chiral edge states and their time-reversal partners can simultaneously exist in a single system that preserves time-reversal symmetry in its entirety [37]. As an extension of the quantum Hall system to the time-reversal-symmetric case, an insulator with a topologically-protected pair of edge states has been proposed [38]. Such an insulator and its edge states are called a topological insulator and helical edge states, respectively [39]. The propagation direction of a helical edge state is selectively governed by its spin polarization. In a naive picture, a topological insulator is understood as a superposition of a spontaneous quantum Hall system spin-polarized in a specific direction and its time-reversal partner spin-polarized in the opposite direction, the latter being necessary to maintain time-reversal symmetry. Based on this picture, it was initially called a quantum spin Hall system. More precisely, the parity of the number of Kramers pairs of helical edge states is critical [38] and corresponds to the topological invariant called the Z₂ index. As the index can be calculated by means of the bulk states of a system with periodic boundary conditions, topological insulators can be distinguished from non-topological insulators by bulk information alone. Furthermore, topological crystalline insulators were proposed by introducing combinations of crystalline symmetries and time-reversal symmetry [40], and have been extended to their photonic versions [41-43]. A wider range of topological materials, including superconductors, has been systematically classified based on symmetry and dimensionality [44,45].
Typical physical conditions under which topological insulators emerge are as follows: (1) two pairs of bands with mutually opposite parities, which are energetically close to each other, hybridize through a strong spin-orbit interaction; and (2) the resultant level repulsion forms a sufficiently large bulk gap [46]. By contrast, for photon systems in periodic structures, what kind of DOF should be regarded as the spin DOF remains unclear. Nevertheless, if the difference between certain degenerate modes is approximately regarded as a pseudo-spin DOF, and the coupling between electric and magnetic fields introduced by an artificial chiral medium is regarded as an effective spin-orbit interaction of the photon, then a similar mechanism can be applied to photon systems in periodic structures. Photonic versions of the topological crystalline insulator using metamaterials as artificial chiral media have been proposed [41,42]. After such proposals, it was pointed out that these topological photonic media can be realized even by photonic crystals composed of only ordinary dielectrics [43], although such systems need to introduce some complication into the unit cell of the crystal. Therefore, we should regard the chiral medium as one example of an implementation of the effective spin-orbit interaction, not as a necessity. Figure 4a,b show examples closely related to the all-dielectric topological photonic crystals in Ref. [43]. The structure of Figure 4a is an inversion-asymmetric deformation of a topological photonic crystal; hence, we shall call it a quasi-topological photonic crystal for convenience. Compared to the case of Figure 2a, the degree of symmetry breaking is so weak that it is not easy to distinguish the two kinds of rods colored dark gray and black. The bulk bands are almost unchanged from the symmetric case, as depicted in Figure 4c. However, as we shall see below, this symmetry breaking clearly resolves the degeneracy of the edge modes and opens a recognizable gap in the edge bands. The crystal enters the topological phase when the inversion symmetry is restored by setting the same value of the relative permittivity, i.e., 10, in the dark gray and black regions. Contrastingly, the crystal of Figure 4b is in the non-topological phase. Figure 4c is the band diagram of the TM modes of the quasi-topological photonic crystal in Figure 4a. We can find a sufficiently large band gap at approximately 0.5 in units of ωa/(2πc). Figure 5a displays the unit cell of the superlattice composed of the crystals of Figure 4a,b. Figure 5b is a closeup of the projected band diagram of the TM modes in the superlattice. The edge modes are emphasized in red. The red-dotted lines are the edge modes when the inversion symmetry of the middle part is restored. In this superlattice, the structure around the edge part also breaks the inversion symmetry of the whole system, as evidenced by a small gap in the edge-band modes even when the middle part is in the topological phase. The explicit breaking of the inversion symmetry in the middle part increases the size of this gap, as well as resolving the degeneracy of the edge modes. Figure 5c gives the energy flux density of an edge mode belonging to the lowest branch. We can see that the mode is well confined around the boundary and accompanies some eddies. The propagation characteristics of the edge modes were demonstrated in the Y-shaped waveguide in Figure 6a, where the Y-shaped part is composed of the crystal in Figure 4a, whereas the block layer is composed of the crystal in Figure 4b
with a sufficiently large band gap covering the relevant frequency range. The linear source was set at the left end of the left branch of the waveguide, and the vortex modes were excited by electric field oscillations along the rods, or specifically, in the direction normal to the page. Figure 6b shows the z-component of the electric field E_z. Here, we can see an extremely anisotropic propagation through helical edge channels. Figure 6c shows the transmission spectra (blue and red solid lines) measured at the ends of the upper and lower right branches, along with the spectra (blue and red broken lines) for the case where the Y-shaped region is replaced by vacuum, for comparison. The blue and red broken lines should overlap in an idealized simulation treating each rod as a material with an exactly sharp boundary. However, a smearing of each boundary was introduced in the actual simulation, and the order of introducing the parts influenced the actually simulated structure. For the present case, the simulated structure of the block layer weakly broke the space-inversion symmetry; hence, a small discrepancy appeared between the blue and red broken lines. By contrast, a large difference appeared between the blue and red solid lines: the transmission characteristics of the target system were quite asymmetric.
5. Electronic State with Twisted Spin-Polarization
Focusing on the highly resolved spin selectivity of the helical edge states of topological insulators, we proposed a spin filter using one of the edge states as a conduction channel, and a spin-control method using hybridization between the edge states and conduction electrons, in References [47,48]. For example, we simulated the reflection of an electronic wavepacket at a boundary between a 2D conductor (left side) and a topological insulator (right side), as shown in Figure 7, using an effective tight-binding lattice model. For the convenience of numerical treatment, the model was constructed on a simple square lattice, r = n₁a₁ + n₂a₂, where n₁ and n₂ are integers and a₁ and a₂ are primitive lattice vectors of the square lattice. As our focus was on a single-particle state, the first-quantization formalism is in principle sufficient. Nevertheless, we introduced the second-quantized formalism as a convenient representation method, which enables us to represent operators in compact forms. The Hamiltonian of the conductor part follows a simple tight-binding model with nearest-neighbor hopping t₀,
$$ H_{\mathrm{2DC}} = -t_0 \sum_{\mathbf{r},\, \mu = 1,2} \left( c_{\mathbf{r} + \mathbf{a}_\mu}^{\dagger} c_{\mathbf{r}} + \mathrm{H.c.} \right) + 4 t_0 \sum_{\mathbf{r}} c_{\mathbf{r}}^{\dagger} c_{\mathbf{r}}, $$
where c†_r and c_r are the creation and annihilation operators at a lattice site, and H.c. denotes the Hermitian conjugate. Here, c_r is a spinor operator consisting of up- and down-spin components, i.e., c_r = (c_r↑, c_r↓)ᵀ. We introduced the last term to adjust the bottom of the conduction band to the origin of energy, E_{k=0} = 0. Under periodic boundary conditions, the energy dispersion of this model is derived as
$$ E_{\mathbf{k}} = 2 t_0 \left( 2 - \cos k_x a - \cos k_y a \right), $$
which mimics the conventional k-square dispersion around the origin of k-space, i.e., E_k ≅ k²/(2m*) with m* = 1/(2t₀a²). In modeling the topological insulator, we introduced a spin-dependent π-flux per square plaquette by the nearest-neighbor hopping of magnitude |t_n|, and classified all the lattice points alternately into the sub-lattices A (n₁ + n₂ ∈ even) and B (n₁ + n₂ ∈ odd). (The unit cell is doubled, and the primitive vectors of each sub-lattice are given by a₁ + a₂ and −a₁ + a₂.) Next, we introduced the next-nearest-neighbor hopping of magnitude |t_nn| with alternating signs depending on the sub-lattices, and a staggered potential of magnitude |v_s|. The Hamiltonian of the topological insulator part, H_2DTI, collects these three terms, where (−1)^r = (−1)^(n₁+n₂) = ±1 for the A and B sub-lattices, respectively. The Hamiltonian H_2DTI is time-reversal invariant as a whole, as is H_2DC, because each of the spin sectors is the time-reversal partner of the other. In the parameter range 4|t_nn| > |v_s|, each of the spin sectors enters a quantum Hall phase. The spin-resolved quantized Hall conductances have a common absolute value and opposite signs, which cancel each other so as to preserve time-reversal symmetry.
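The following minimal Python sketch (ours; the hopping value and lattice constant are arbitrary assumptions) checks the conductor-part dispersion above and its small-k effective-mass approximation m* = 1/(2t₀a²).

```python
import numpy as np

t0, a = 1.0, 1.0   # hopping and lattice constant (arbitrary units, hbar = 1)

def E(kx, ky):
    """Tight-binding dispersion of the conductor part, shifted so E(0, 0) = 0."""
    return 2 * t0 * (2 - np.cos(kx * a) - np.cos(ky * a))

m_star = 1 / (2 * t0 * a**2)
for k in (0.05, 0.1, 0.2):
    exact = E(k, 0.0)
    parabolic = k**2 / (2 * m_star)   # k^2 / (2 m*) approximation
    print(f"k = {k:.2f}  exact = {exact:.6f}  parabolic = {parabolic:.6f}")
```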
A homogeneously spin-polarized incident wavepacket is illustrated in Figure 7a. The packet is incident from the conductor (left) perpendicularly to the boundary (red vertical line); its spin polarization points uniformly in the incident direction. The spin-polarization state of the reflected wavepacket is depicted in Figure 7b. The spin density of the top white area points out of the page, whereas that of the bottom black area points into it. The polarization in the vicinity of the wavepacket center is in the same state as before the incidence.
(See Figure 7c for details of the correspondence between spin density and color space.) These results suggest that an electronic state with twisted spin-polarization can be generated from a homogeneously spin-polarized state via a topological interface between a conductor and a topological insulator, where helical edge states run along the boundary. The concept of the topological insulator extends to three-dimensional (3D) systems, where helical surface/interface states traversing bulk band gaps emerge [49,50]. Figure 8a shows a conceptual diagram of idealized helical surface states. The green balls depict the electrons, whose propagation direction and spin angular momentum are indicated by each set of green and black arrows, respectively. For example, the 2D topological insulator model in Equation (12) can also be extended to 3D versions with some generalizations. A sequence of 3D models is constructed on a simple cubic lattice r = Σ_μ n_μ a_μ (n_μ ∈ Z) with an orthogonal set of unit lattice vectors a_μ (μ = 1, 2, 3). Every site is classified into either the A or B sub-lattice, as r ∈ A(B) when Σ_μ n_μ = even (odd), for which a sign symbol (−1)^r can be introduced as (−1)^r = (−1)^(Σ_μ n_μ). Moreover, each sub-lattice forms a face-centered cubic lattice. The sequence of 3D models is characterized by three types of parameters, t_μ, t_μν (= t_νμ), and v_s, along with SU(2) matrices {U_μ} (μ, ν = 1, 2, 3) representing the spin-precession processes in the μ-directional nearest-neighbor hoppings; these together define the Hamiltonian of the sequence. This sequence is advantageous in that the edge states of a member with open boundary conditions can be analytically investigated in some parameter regions, provided that {U_μ} satisfies suitable compatibility conditions, where U†_μ and Ū_μ denote the Hermitian and complex conjugates of U_μ, respectively, and the symbol σ₀ stands for the 2 × 2 unit matrix in spin space. Unfortunately, there remain unresolved issues in the plausible modeling of the interface between a member of this sequence and a conductor. The analysis is also accompanied by technical complications and will be given elsewhere. Besides, 3D versions of topological photonic crystals have also been proposed [51,52]. Although the relation between a photon's pseudo-spin and its actual angular momentum in a periodic structure currently remains ambiguous, the examples of the energy flux densities of photonic chiral edge modes in References [25,33] and of the photonic helical edge modes in Figure 5 suggest that the former corresponds to a vortex structure stemming from the latter. A schematic of ideal photonic helical surface modes is shown in Figure 8b. Here, each set of yellow and black arrows represents the propagation direction and local angular momentum density of a photonic helical surface mode, respectively, whereas the yellow circles containing arrows represent the local vortex structures of the modes. Let us look back at the electronic wavepacket with a twisted spin structure in Figure 7b and consider its 3D extension. Suppose a wavepacket homogeneously polarized in the propagation direction is perpendicularly incident on the surface of the idealized 3D topological insulator depicted in Figure 8a. Since in this case we can find rotational symmetry around the incident axis, the reflected wavepacket is expected to accompany a toroidally-twisted spin texture derived by rotating Figure 7b around the incident axis. The question that then arises is how to extend the above discussion of the electronic wavepacket to its photonic version. Section 4 argued that various types of topological photonic media can be proposed based on the same ideas as in electron systems. As speculated in Figure 8b, the electronic spin would be replaced by a photonic vortex structure. It is reasonable to replace the incident electronic wavepacket homogeneously spin-polarized in the propagation direction by an electromagnetic wavepacket with photonic orbital angular momentum in the propagation direction, i.e., by one as depicted in Figure 9. Similarly, a simple consideration of the reflected wavepacket suggests that the toroidally-twisted spin structure can be replaced by a toroidal vortex structure, as depicted in Figure 10. (The hue and the arrows in Figures 9 and 10 represent the energy density and the deviation of the energy flux density, respectively, as in Figure 1.) This speculation may appear extremely naive because, in general, the vortex structure of a photonic helical surface mode is complicated, as displayed in Figure 5c. Nevertheless, as long as the focus is on the topological information of wavepackets, e.g., a set of topological charges of a multiple-vortex structure, realistic vortices are very likely to belong to the same topological class as in Figure 5c, as is the case for Laguerre-Gaussian beams. Therefore, studying the possibility and stability of a photonic/electromagnetic wavepacket with such a toroidal vortex structure is worthwhile, not only from an academic point of view but also from an application perspective.
6. Propagation Characteristics of the Toroidal Electromagnetic Wavepacket
The electromagnetic vortices shown in Figure 9 can be implemented as quantum digits for information communication [13] and for mode-division multiplexing in telecommunication technology [14,15]. These applications use the fact that there are multiple quasi-orthogonal modes in a narrow frequency band. Hence, it is a meaningful task to devise various extensions of such electromagnetic vortices. This section takes up the toroidal vortex structure in Figure 10 as one of those extensions. Moreover, it aims to answer questions such as whether an electromagnetic wavepacket with a toroidal vortex can exist as a solution of Maxwell's equations, and how stable it is when it does exist.
To answer the first question, we shall present a procedure to construct wavepacket solutions with generic vortex structures. In the process, we assume that we already have the Fourier components e_kα of the plane wave solutions of mode α with eigenfrequency ω_kα, and that the set {e_kα} (α = 1, 2, ...) constitutes a complete orthonormal system, at least at a practical approximation level. The construction procedure consists of the following four steps:
1. Construct normalized scalar wavepackets {f_α(r)} with trial vortex structures for mode α.
2. Calculate the Fourier transforms {f̃_kα} of {f_α(r)}.
3. Construct the solution of the electromagnetic field Ẽ(k, t) in k-space by
$$ \tilde{\mathbf{E}}(\mathbf{k}, t) = \sum_{\alpha} \tilde{f}_{k\alpha}\, z_{k\alpha}\, \mathbf{e}_{k\alpha} \exp\left( i \omega_{k\alpha} t \right), $$
where the set of parameters {z_kα} reduces to the Jones vector in simple cases.
4. Calculate the inverse Fourier transform of Ẽ(k, t), and take its real part as the solution E(r, t).
Generally, the above procedure can involve numerical calculations and is inevitably accompanied by approximations due to the discretization of both real and wavevector space. Nonetheless, the obtained solution in principle converges to an exact solution in the continuous limit. Electromagnetic wavepackets with any vortex structure can therefore actually exist, whereas their stability remains uncertain. In other words, this procedure is applicable as long as the quasi-complete set {e_kα} is obtained by either an analytical or a numerical method. A typical case of the former is the reflection and refraction of linearly polarized plane waves at a flat interface between two different media of homogeneous isotropic permittivity ε and permeability μ. Analytical expressions of {e_kα} are given by Fresnel's equations, where the index α stands for either P- or S-polarization, while the index k can represent the wavevector of an incident plane wave. As for the latter, we can consider an extension to periodic systems by replacing the momentum k by the crystal momentum and making the mode index α include a band index along with a degenerate mode index, as in α → nλ (n: band index; λ: degenerate mode index). Finally, note that the center of a wavepacket can easily be shifted by r₀ through the replacement {f_α(r)} → {f_α(r − r₀)}.
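As an illustration of the four-step procedure, the following Python sketch (our own minimal free-space example; all parameter values are chosen arbitrarily) builds a linearly polarized Gaussian wavepacket carrying a line vortex (m_line = 1), transforms it to k-space, attaches transverse polarization vectors and the phase factor exp(iω_k t), and returns the real part of the inverse transform.

```python
import numpy as np

# step 0: real-space grid and parameters (arbitrary illustrative values)
n, L = 64, 16.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
kc, R, c = 4.0, 2.0, 1.0                      # carrier wavenumber (+z), width, light speed

# step 1: scalar trial wavepacket with a line vortex along z (m_line = 1)
f = (X + 1j * Y)                              # vortex factor ~ rho * exp(i*phi)
f *= np.exp(-(X**2 + Y**2 + Z**2) / (2 * R**2)) * np.exp(1j * kc * Z)
f /= np.sqrt(np.sum(np.abs(f) ** 2))          # normalize

# step 2: Fourier transform
ft = np.fft.fftn(f)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K = np.sqrt(KX**2 + KY**2 + KZ**2)
K[K == 0] = 1e-12                             # avoid division by zero at k = 0

# step 3: attach a transverse polarization vector and exp(i*omega*t);
# here a fixed x-polarization is projected onto the plane normal to each k
ex = np.stack([1 - KX * KX / K**2, -KX * KY / K**2, -KX * KZ / K**2])
ex /= np.sqrt(np.sum(np.abs(ex) ** 2, axis=0)) + 1e-12
t = 0.0
E_k = ft * ex * np.exp(1j * c * K * t)        # omega_k = c|k| in vacuum

# step 4: inverse transform; the real part is the field
E = np.real(np.fft.ifftn(E_k, axes=(1, 2, 3)))
print("field array shape:", E.shape)          # (3, n, n, n)
```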
To simplify the discussion, we shall omit the mode dependence of the trial functions introduced above and consider only linearly-polarized wavepackets here. Figure 10 provides a sample electromagnetic wavepacket constructed by the procedure above, which propagates in the positive z-direction. The wavepacket has an energy density distributed in a hollow torus shape; it carries a ring-shaped vortex along the internal hollow part of the torus, in addition to a line-shaped vortex associated with the orbital angular momentum directed along the positive z-direction, which penetrates the central hole of the torus. Figure 10a represents the xy-cross-section of the wavepacket, where we can find the eddy structure of the energy flux density corresponding to the orbital angular momentum. On the other hand, Figure 10b represents the yz-cross-section of the wavepacket, where we can find another vortex structure whose core corresponds to the hollow part inside the torus. Let us take a closer look at a trial scalar wavepacket with a toroidal-type vortex structure, f_TWP(r). This function contains seven types of parameters, namely m_line: vorticity of the line vortex; m_ring: vorticity of the ring vortex; k_c: central wavevector; R: radius of the central ring inside the hollow region; r: radius of the torus-type tube; Δ: thickness of the surface layer of the hollow torus; and v: size of the vortex core, where we set the core sizes of the two kinds of vortices to be the same. By introducing a right-handed basis set {e₁, e₂, e₃} with the condition e₃ = k_c/|k_c|, we can give an explicit example of f_TWP(r), with N a normalization factor. Figure 10 corresponds to the case where m_line = 1, m_ring = −1, λ = 2π/|k_c| = 0.445ℓ₀, R = 6ℓ₀, r = 3ℓ₀, Δ = ℓ₀, and v = 0.25ℓ₀, where ℓ₀ is a unit of length scale. We shall consider only this set of parameters for toroidal wavepackets, as our focus is limited to the topological properties of toroidal wavepackets and does not extend to the details of their shape deformations.
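The explicit form of f_TWP used in the paper is not reproduced here, but the following Python sketch gives one assumed realization consistent with the parameter list above: a tanh-profiled hollow-torus envelope carrying the phase m_line·φ around the axis and m_ring·χ around the tube, where χ is the poloidal angle. All functional choices in this block are our own assumptions, not the authors' formula.

```python
import numpy as np

def f_twp(x, y, z, m_line=1, m_ring=-1, kc=2 * np.pi / 0.445,
          R=6.0, r=3.0, Delta=1.0, v=0.25):
    """Assumed toroidal trial wavepacket (lengths in units of l_0).

    A hollow-torus envelope times three phase factors: a line vortex
    (azimuth phi about the z-axis), a ring vortex (poloidal angle chi
    about the tube center line), and the carrier exp(i*kc*z)."""
    rho = np.sqrt(x**2 + y**2)
    phi = np.arctan2(y, x)                       # line-vortex angle
    chi = np.arctan2(z, rho - R)                 # ring-vortex (poloidal) angle
    d = np.sqrt((rho - R) ** 2 + z**2)           # distance from tube center line
    env = 0.5 * (1 + np.tanh((r - d) / Delta))   # smooth torus-shell envelope
    core = np.tanh(d / v) * np.tanh(rho / v)     # suppress both vortex cores
    return env * core * np.exp(1j * (m_line * phi + m_ring * chi + kc * z))

# sample the assumed trial function on the yz-plane (x = 0)
y = np.linspace(-12, 12, 5)
print(np.round(f_twp(0.0, y, 1.0), 3))
```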
For a better understanding by way of comparison, we also use trial scalar functions f_GWP(r) and f_LGWP(r) for Gaussian and Laguerre-Gaussian wavepackets, respectively. The Gaussian wavepacket in Figure 1 and the Laguerre-Gaussian wavepacket in Figure 9 correspond to the cases with R = 6ℓ₀ and with m_line = 1, R = 6ℓ₀, r = 2ℓ₀, v = 0.25ℓ₀, respectively. In both cases, the wavelength is set at λ = 2π/|k_c| = 0.445ℓ₀.
For the second question, we consider the stability of the toroidal wavepacket against reflection and refraction at flat interfaces between different kinds of homogeneous isotropic media. The dielectric constants on the lower and upper sides are represented by the symbols ε₁ and ε₂, respectively. Figures 11-13 show the time lapses for the cases ε₂/ε₁ = 0.40, 0.75, and 2.50, respectively. The incident angle is set at 45°. Time is measured in units of ℓ₀√(ε₁μ₀). The dimensionless time τ of each frame is τ = −16, −8, 0, +8, +16 from left to right. Figure 14 shows the incident-angle dependence for ε₂/ε₁ = 2.50. The incident angle θ of each frame is θ = 0°, 15°, 30°, 45°, 60° from left to right, and the dimensionless time is τ = +16 in every frame. In all cases, the magnetic permeability is set at μ = μ₀ everywhere, and only the xz-cross-sections are depicted. (The x- and z-axes correspond to the horizontal and vertical directions, respectively.) We adopt a quasi-P-type configuration for the polarization state of every incident wavepacket. (The mean magnetic field of every incident wavepacket is parallel to the interface and normal to the quasi-incident-plane.) From the result in Figure 11, we can presume that the toroidal vortex is stable against reflection. For refraction, two cases emerge. First, when the refractive index on the transmission side is lower (Figure 12), the wavepacket shape stretches along with its central wavelength, leading to an unstable ring vortex. Second, when the refractive index on the transmission side is higher (Figure 13), the wavepacket compresses along with its central wavelength, resulting in a ring vortex that is stable at least up to θ = 60° (Figure 14).
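The single-interface behavior invoked here can be checked with the standard Fresnel formulas; the short Python sketch below (illustrative only) evaluates the P-polarization amplitude reflection coefficient for ε₂/ε₁ = 2.50 and locates the Brewster angle arctan(n₂/n₁) ≈ 57.7°, at which the reflected packet mentioned at the end of this section nearly vanishes.

```python
import numpy as np

def r_p(theta_i, n1=1.0, n2=np.sqrt(2.5)):
    """Fresnel amplitude reflection coefficient for P-polarization
    (non-magnetic media, so n = sqrt(eps))."""
    ct_i = np.cos(theta_i)
    st_t = n1 * np.sin(theta_i) / n2          # Snell's law
    ct_t = np.sqrt(1 - st_t**2)
    return (n2 * ct_i - n1 * ct_t) / (n2 * ct_i + n1 * ct_t)

for deg in (0, 15, 30, 45, 57.7, 60):
    print(f"theta = {deg:5.1f} deg:  r_p = {r_p(np.radians(deg)):+.4f}")

print("Brewster angle:", np.degrees(np.arctan(np.sqrt(2.5))), "deg")  # ~57.69
```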
Finally, we would like to mention the transformation laws of the topological properties of the wavepacket. The vorticity m_line of the line vortex corresponding to the orbital angular momentum changes as m_line → −m_line in reflection, while it does not change in refraction, as suggested by the analogy with a Laguerre-Gaussian beam carrying orbital angular momentum. By contrast, the vorticity m_ring of the ring vortex changes in neither reflection nor refraction. In general, recognizing a wavepacket as a particle-like object may give odd results at first glance. However, the wavepacket is actually a wave phenomenon, and it gets turned inside out in reflection. Since both the rotational flow stemming from the ring vortex and the propagation direction change, the vorticity m_ring, defined with respect to the propagation direction, remains unchanged. We would like to conclude this section with a note. In the cases of partial reflection in Figures 12 and 13, it is not easy to identify the reflected wavepackets in the present color contrast because of their weak intensities. Faint reflected wavepackets would appear after greatly increasing the contrast of these figures. On the other hand, in the incident-angle dependence of Figure 14, it becomes possible to recognize the reflected wavepackets as the incident angle moves away from Brewster's angle for ε₂/ε₁ = 2.50 (≈57.7°).
7. Discussion
Inspired by electromagnetic vortices in free space and in periodic structures, and by the exotic boundary modes of topological photonic media, we theoretically investigated the topological characteristics and feasibility of a toroidal electromagnetic wavepacket. Our proposal was also based on the numerical analysis of an electronic wavepacket with a toroidally-twisted spin structure, generated by reflection at the interface between an electronic topological insulator and a conductor. We recognized a class of topological photonic media as the photonic version of the electronic topological insulator and further interpreted it as an extension of a class of photonic crystals in which various types of electromagnetic vortex modes emerge together with their time-reversal partners. Furthermore, we noted that modern information transmission technology via electromagnetic waves has started to pay attention to photonic orbital angular momentum in a unique way. For instance, optically-based communication technologies have demonstrated the use of photonic orbital angular momentum in the realization of quantum digits for single-photon communication and in the development of a new scheme for multiplexing signals in telecommunications. A key concept common to both examples is the presence of multiple nearly-orthogonal modes within a narrow range of frequencies. From this point of view, we stressed the benefits of investigating extensions of this concept, and proposed the toroidal electromagnetic wavepacket as an application fusing it with the exotic surface modes of topological photonic media. The electromagnetic wavepacket with toroidal-type dual vortices is an extension of the Laguerre-Gaussian wavepacket, whose line vortex corresponds to the photonic orbital angular momentum. We presented a procedure to construct solutions of Maxwell's equations with multiple types of vortices. Afterward, we numerically examined the stability of the toroidal electromagnetic wavepacket against reflection and refraction at flat interfaces between homogeneous isotropic media. Finally, we derived the transformation laws of the topological charges of the line and ring vortices in these processes.
Figure 2. (a) A sample inversion-asymmetric 2D photonic crystal with relative permittivities of 1, 3, and 12 in the white, gray, and black regions, respectively. (b) Band diagram of the TE modes for (a). Dotted lines show the case where the relative permittivity of the gray rods is set to one. The vertical axis represents the dimensionless frequency ωa/(2πc) (a: lattice constant; c: speed of light). (c) A sample optical tornado: energy flux density of a state at a K-point of the TE 2nd band.
Figure 3. (a) Y-shaped waveguide composed of the photonic crystal in Figure 2. The relative permittivity of the gray region of the block layer around the waveguide is set to 9. (b) z-component of the magnetic field H_z. (c) Spectra of the transmissions to the upper right (blue line) and lower right (red line) branches. The broken lines represent the transmission spectra for a vacuum Y-shaped region.
Figure 4. (a) A sample inversion-asymmetric 2D quasi-topological photonic crystal with relative permittivities of 1, 9, and 11 in the white, dark gray, and black regions, respectively. When the inversion symmetry is restored, the crystal can be in the topological phase. (b) A sample 2D photonic crystal in the non-topological phase with relative permittivities of 1 and 10 in the white and black regions, respectively. (c) Band diagram of the TM modes of the photonic crystal in (a); dotted lines show the case where the relative permittivities of the gray and black rods are set to 10. The vertical axis represents the dimensionless frequency ωa/(2πc) (a: lattice constant; c: speed of light).
Figure 6. (a) Y-shaped waveguide composed of the quasi-topological and non-topological photonic crystals in Figure 4. (b) z-component of the electric field E_z. (c) Spectra of the transmissions to the upper right (blue line) and lower right (red line) branches. The broken lines represent the transmission spectra for a vacuum Y-shaped region.
Figure 7. (a) An incident wavepacket with homogeneous polarization along the x-direction and (b) a reflected wavepacket with a twisted spin texture. (c) Correspondence between the spin density S and the hue, lightness, and saturation (HLS) color space. Approximately, H, L, and S correspond to the azimuthal angle, polar angle, and magnitude of the spin density, respectively.
Figure 8. Conceptual diagrams of (a) electronic helical surface states and (b) photonic helical surface modes.
Figure 11. Reflection of a toroidal vortex at an interface with ε₂/ε₁ = 0.40 at an incident angle of 45°.
Figure 12. Refraction of a toroidal vortex at an interface with ε₂/ε₁ = 0.75 at an incident angle of 45°.
"Physics"
] |
Philosophy Study in the Development of Reading Teaching Materials Based on Cultural Wisdom of Betawi Fairy Tales
The purpose of this study was to examine the philosophical foundations of the development of reading teaching materials based on the cultural wisdom of Betawi fairy tales for grade X science students at SMAN 107 Jakarta. This was qualitative research with a descriptive method. Data collection was carried out through library research on references related to the topics discussed, namely teaching materials, reading, and Betawi fairy tales. The results concerning the development of reading teaching materials based on the cultural wisdom of Betawi fairy tales were then analyzed using three philosophical lenses, namely ontology, epistemology, and axiology. The ontological study was employed to examine the development of reading teaching materials embodying the cultural wisdom of Betawi tales. Next, epistemology was used to design and make prototypes of the teaching materials so that they could be evaluated and receive feedback from the parties involved. Subsequently, the axiology, which focused on the value of educational technology, was intended to intrigue students and promote a sense of humanity among them through the depiction of the characters in the fairy tales.
INTRODUCTION
Enormous advancements in technology can be seen as two sides of the same coin. On one side, they help meet the increasing and complex needs of humans, but on the other, they can be a source of distraction and addiction, mainly for the young generation. The flood of information that can be accessed easily can overwhelm their common sense, so that they are unable to filter out what is appropriate, especially in relation to their own culture. As a result, it imperils the existence of local wisdom values. This disadvantage can be minimized by means of teaching and learning activities that expose the true value of one's own culture.
In real-life situations, learning is often self-motivated, driven by intrinsic curiosity about a particular topic rather than by external reward (Ryan & Deci, 2000). It can be concluded that curiosity makes our brains more responsive to learning. Therefore, telling about a local culture through a story can invoke students' curiosity and make the learning process more effective and enjoyable. Curious students not only ask questions but also actively seek out the answers. Instilling in students a strong desire to learn is thus one of the teacher's responsibilities.
Human curiosity about a phenomenon is the pioneer of knowledge. Humans develop their knowledge as an effort to survive. Therefore, they continually refine and expand their experience, whether in bad or good situations, so that it continues to grow; in this way, life goes on. Knowledge is sought tirelessly in order to satisfy curiosity about something. To understand "knowledge" more deeply, it is necessary to understand the act of "knowing" (Wahana, 2008). Meanwhile, referring to Notoatmodjo (2012), there are many approaches to acquiring knowledge, namely (1) by trial and error, (2) by chance, (3) by power or authority, (4) based on personal experience, (5) by common sense, (6) by accepting revealed truth, (7) by intuitive truth, and (8) by research methods. However, the last approach, namely the scientific method, is the one most widely used by scientists to prove that the knowledge obtained contains truth. This method is rational and reliable for establishing facts.
Questions about the nature of justice, knowledge, or being are of great interest and debate in philosophy. Finding appropriate answers is, in a sense, a problem of understanding the question, and doing so requires a good command of language.
Language is used to convey or disseminate the knowledge that humans have acquired throughout the journey of life to other people or to a society. Mastery of language skills is required, although language acquisition is an innate property, an ability a person is born with; humans are able to speak as a God-given gift. Using language to speak, write, listen, and read is a skill special to humans. Human language therefore cannot be considered apart from the people who use, create, develop, and change it. Language is a questioned concept not only in philosophy but also in the social and educational sciences: for some sciences, language is a major object of research, while for others it is a secondary topic of discussion when handling another problematic object. The nature of language is dynamic, not static; it keeps growing or dying down in accordance with the communication needs of a given generation to express thought and feeling, and even to influence others.
Therefore, language skill is unquestionably needed for the dissemination of knowledge. It makes communication well-directed and easy to understand. Language is a tool for sharing our ideas, feelings, and desires. It helps us to reveal our thoughts and communicate with others. Human language is one of the most developed and complicated means of transmitting knowledge. Language is a must for poetry, prose, and drama. It is what makes a community out of a group of people.
In English, there are four basic language skills to be developed, namely reading, writing, speaking, and listening. Reading is a skill that can be mastered through various processes. One of these processes is reading fairy tales. Fairy tales are a type of folklore and are basically almost the same as legends: objects, animals, or humans (with human nature) form the core of the story to be presented, and the author gives the story meaning by applying imagination (Susena & Rudito, 2017). Usually, fairy tales are written following the structure of narrative texts.
During the teaching and learning process, several elements are involved, including the teacher, who is an indispensable agent in narrating content materials of the local culture that symbolize appropriate values for the younger generation (Pudjiati & Zuriyati, 2022). According to the syllabus, there are several genres learned by students of grade X of SMAN 107 Jakarta. One of them is the narrative text. The narrative text "deals with problematic events which lead to a crisis or turning point of some kind, which in turn finds a resolution", and its social function is to entertain or amuse the reader (Gerot & Wignell, 1995). This type of text is also known as a story or, more specifically, fiction. It can be in the form of novels, short stories, legends, and fairy tales. Anderson and Anderson (2003) explain that the generic structure of a narrative text is divided into orientation, complication, coda, sequence of events, and resolution. Furthermore, Chatman and Attlebery (1993) divide the narrative text into the following four basic parts: characters, settings, plot, and conclusion. Based on the above explanation, the narrative text is expected to amuse students and, in turn, invoke their interest in personally engaging with the text.
In learning English at the high school level or equivalent, narrative texts in the form of fairy tales are studied in class X. Teaching materials for learning English that include fairy tales, in this case local fairy tales, are still not widely available. According to Lwin (2017), some of the obstacles that keep educators from using fairy tales in teaching are that local fairy tales are translated into English not for learning purposes, and that there is no information about the level of English language skill required of students who would use these local fairy tales. Davidsen and Cuandani (2021) present traditional Indonesian stories from a foreigner's point of view, written in two languages, namely English and Indonesian. The stories are taken from various provinces in Indonesia, such as Java, North Sumatra, Kalimantan, South Sulawesi, and Papua. This book is intended for foreigners learning Indonesian and is equipped with Indonesian audio. Meanwhile, Amandangi et al. (2020) presented Central Javanese folklore texts in Indonesian in web-based media for intermediate-level foreign speakers. They are "The Legend of Rawa Pening", "The Legend of Telaga Warna", "The Legend of the Sikidang Crater", "The Legend of Mount Tidar", "Goa Kreo", and "Jaka Linglung", which have cultural and tourism content. They use computer-assisted language learning (CALL) and online courses to enhance students' experience of using technology and facilitate self-directed learning.
One of the benefits of reading fairy tales is increased critical thinking, because the reader is encouraged to cultivate imagination. By reading fairy tales, critical reading skills can be developed to the fullest (Abidin, Mulyati, & Yunansah, 2015). Critical thinking is the ability to think clearly and rationally, understanding the logical connections between ideas. Critical thinkers rigorously question ideas and assumptions rather than accepting them at face value. They always seek to determine whether the ideas, arguments, and findings represent the entire picture, and they are open to finding that they do not. Moreover, as we have acknowledged, the millennial problem lies in the search for self-identity, and thus in resisting the influence of negative Western values and ways of life that might not be suitable to the Indonesian cultural wisdom of life. Learning models implementing the cultural wisdom of Betawi fairy tales to improve critical thinking skills are still not widely applied. Therefore, the research question is stated as: "How is the philosophical study in the development of reading teaching materials examined based on the cultural wisdom of Betawi fairy tales?"
Previous Related Research
There is previous research conducted based on the philosophical concepts of ontology, axiology, and epistemology. First, Rangel (2019) employed a qualitative approach and multidisciplinary lenses to examine the testimonios of five Faculty of Color attentive to racial dynamics within academic institutions. The research delves into: (1) the ontological, epistemological, and axiological principles that shape the ways these five social-justice-oriented Faculty of Color approach their work; (2) the strategies they use in professional training and credentialing at the schools of education of the colleges where they work; and finally (3) the ways they navigate the racial dynamics in their academic institutions.
Second, Chesky and Wolfmeyer (2015) used the philosophical concepts of ontology, axiology, and epistemology to understand STEM (science, technology, and engineering with mathematics) more completely. Ontology relates to the conceptual assumptions we have about what STEM is about (e.g., for mathematics, what numbers are, and how functions and geometric properties interact with the empirical world). Epistemology relates to pedagogical theories of how best to teach STEM, which are based on theoretical and/or research-driven approaches claiming that children learn mathematics, science, engineering, and technology knowledge in a certain way. Axiology relates to the objectives of STEM education regarding why children should learn STEM content. These are based on broader normative views of what STEM knowledge ought to be used for.
Third, Setiawan (2015) elaborated that there is still a difference of opinion among experts in the field of management about what is meant by management, namely whether management is a science, an art, or a profession. In addition, management theory and studies have experienced rapid growth, especially from the 19th century to the present. These developments have given rise to various schools of thought about management: the classical management perspective, the behavioral management perspective, and the quantitative management perspective. Therefore, it is necessary to study the development of management from the perspective of the philosophy of science. In such an assessment, management is studied ontologically, epistemologically, and axiologically. Ontologically, management is the science, art, and profession of work done through others. In the development of management, most experts ontologically view the social reality in management as something objective, not subjective. Epistemologically, the approach most widely used by management experts in the development of management is the deductive approach. Related to axiology, when considering a policy, the manager of a company should pay attention to the values of ethics and humanity. Fourth, Franzen (2012) completed research using principles in metaethics, ontology, and epistemology to examine the quest for the institutionalization of sociology in America. Metaethical commitments to moral realism inform our moralistic identity and our particular approach to interventionism. Ontological commitments to ideal types and universal laws imagined a mechanistic social world. Epistemological commitments to unit homogeneity, simple causation, deductive nomological logic, and radical decontextualization led to sociology's variant of the scientific method. These ontological and epistemological commitments combined to provide a scientific rationale for the discipline of sociology that is reflected in our methods to this day.
Based on these four previous studies, we conclude that epistemology, ontology, and axiology can be considered among the main structures of education, research, and academic discourse. Accordingly, we employed these three philosophical pillars to examine and develop reading teaching materials based on the cultural wisdom of Betawi fairy tales.
RESEARCH METHODS
This research uses a qualitative method with a descriptive design. The qualitative method was chosen because this research aims to provide case insight into the philosophical study contained in the development of reading teaching materials based on the cultural wisdom of Betawi fairy tales.
The descriptive qualitative method is a research method based on the philosophy of post-positivism, used to examine the condition of natural objects. Qualitative research deals with naturalistic fields, where different techniques of data collection may need to be followed. According to Creswell (1994), qualitative research begins with assumptions, a worldview, the possible use of a theoretical lens, and the study of research problems inquiring into the meaning individuals or groups ascribe to a social or human problem. Hesse-Biber and Leavy (2006, p. 49) suggest that qualitative research seeks to discover, explain, and generate ideas or theories about the phenomenon under investigation and to understand and explain social patterns (the 'how' questions).
As claimed by Berg (2007), qualitative researchers are most interested in how humans arrange themselves and their settings, and how the inhabitants of these settings make sense of their surroundings through symbols, rituals, social structures, social roles, and so forth. Through qualitative techniques, Berg (2007) suggests, researchers are able to share in the understandings and perceptions of others and to explore how people structure and give meaning to their daily lives. What all of these approaches have in common, according to Creswell (1994), is a set of characteristics including: natural setting, researcher as key instrument, multiple sources of data, inductive data analysis, participants' meanings, emergent design, theoretical lens, interpretive inquiry, and a holistic account.
There are many ways to collect data: interviewing individuals, holding focus groups, participant or non-participant observation, content analysis, or a combination of various methods. As mentioned above, this study uses a descriptive method with library research techniques applied to references related to the development of reading teaching materials based on the cultural wisdom of Betawi fairy tales. The data are collected from various literatures related to the philosophy of science, especially literature on the development of teaching materials, reading, and the cultural wisdom of Betawi fairy tales.
The literature on the development of teaching materials includes books, articles, and relevant research results. The collected data are then analyzed inductively. In this study, the philosophical study of the development of reading teaching materials based on the cultural wisdom of Betawi fairy tales is viewed from three points of view: ontology, epistemology, and axiology.
Ontology, epistemology, and axiology lay the foundations for how we, as individuals, understand the world we live in, the determinations we make about issues relating to truth, and the matters we consider to be of value to us individually, and to society at large.
Ontology is about what exists and what does not exist; it is a sub-branch of metaphysics concerned with being. Ontology, or the study of being, creates the framework for how we, as individuals connected in societies, make sense of the reality in which we live. The power of ontology is that it gives us the keys to unlock the way reality is understood, by taking as its object of study the actual being of things, matters, concepts, experiences, and words: essentially, of everything.
Epistemology is concerned with knowledge. We humans do not have access to the actual world, so we build models in order to make sense of it. Epistemology gives us the perimeters of knowledge concerning our models of the world and the methodologies for knowing the world. Epistemology, or the study of knowledge, receives more emphasis in our rationalist society because it sets out to explain why we jointly decide that certain things are true and others are not. Science, and the interpretation of scientific results, changes the way society acts at all stages of life.
Axiology is about values such as good and bad, moral and immoral; in other words, axiology is concerned with values, including questions about the meaning of life and how we should live. Axiology, or the study of value or goodness, is the philosophical strain of these three that has received the least attention, even though it is fundamentally linked to our actions in daily life. The value of something can be intrinsic, valuable in its own right, or extrinsic, valuable for the sake of something else, which in turn can have intrinsic value.
RESULTS AND DISCUSSION
The development of reading teaching materials based on the cultural wisdom of Betawi fairy tales will be studied ontologically, epistemologically, and axiologically.
Ontology
According to Tiswardini (2019), ontology is the science that describes the nature of what exists, including its various concrete or abstract forms. Other sources state that ontology discusses what one wants to know and how far one wants to know it (Nursalim, 2017). In other words, ontology is a discussion aimed at finding the essence of something, whether material or non-material.
The ontology of this study is formulated to examine reading teaching materials based on the cultural wisdom of Betawi fairy tales, intended for high school students. The reading teaching materials developed take audiovisual form. Betawi cultural wisdom is reviewed as the original knowledge of a society, derived from the noble values of its cultural traditions, that regulates the order of people's lives (Inriani, 2017). The research subjects are grade X science program students at SMA Negeri 107 Jakarta. The researchers identified the problem that the available reading teaching materials have not raised many local cultural themes, in this case Betawi cultural wisdom.
One of the inspirations for the teaching material is the book "Dongeng Betawi Tempo Doeloe" by Abdul Chair, published in 2017, especially the story Angan-Angan Si Muin. Angan-Angan Si Muin, translated as Muin's Wishful Thinking, is a story about a young man called Muin who lived alone in a village in Jakarta a long time ago. He stayed in a hut and ate raw vegetables growing nearby; for drinking and other water needs, there was a well that he had dug close to his dwelling. In other words, his way of life was very simple. He had two beehives, one in his house and the other in a jackfruit tree. One day a traveler dropped in at his hut and told him that honey is "valuable" because it can be sold. Knowing this, Muin became extremely excited about earning a lot of money. Unfortunately, his wishful thinking and uncontrolled actions led him to ruin: he spilled a pot of honey and scattered it on the ground. In the end he was deeply regretful and sad. The moral that can be drawn from this story is that daydreaming, or wishful thinking, tends to be a useless activity, as proven by Muin's unintended action while daydreaming, i.e., the spilled honey. This moral is reflected in a pantun at the end of the story: "Don't eat too much cucumber, / Cucumbers have a lot of sap, / Don't sit and be a daydreamer, / Daydreaming is very bad." In Betawi culture, the pantun is usually recited to disseminate local wisdom, and it can be found in both oral and written forms, as in this story.
Through the introduction of local cultural fairy tales, it is hoped that students can directly appreciate the character education values in the tales. In addition, the available reading teaching materials do not employ many reading questions that address higher-order thinking skills. The need for teaching materials that promote Betawi culture is therefore urgent. The teaching materials developed by the researchers implement the relatively new SAM (successive approximation model) of Allen (2012), which is expected to overcome this problem.
Epistemology
Epistemology is a theory that examines the roots of science, or the philosophy of knowledge; in other words, how knowledge is acquired is also a matter of epistemology (Mustasyir, 2002). Another definition of epistemology is the direction of human thinking in finding and obtaining knowledge by using the ability of reason (Suriasumantri, 1990). Thus, it can be concluded that epistemology is the human effort to find knowledge by utilizing the ability to think rationally. Furthermore, the search for scientific knowledge has limits. As stated by Surajiyo (2019), the determination of the scope and limits of empirical scientific study is in accordance with the principles of scientific epistemology, which requires empirical evidence in the process of discovering and compiling scientifically correct statements.
Epistemology here concerns the method used by the researchers to come to know the object of research. As elaborated above, epistemology has a close relevance to the object of this research, which involves several steps for developing teaching materials. In connection with the development of reading teaching materials based on the cultural wisdom of Betawi fairy tales, the researchers implement the SAM model, as shown in Figure 1. It has eight small iterative steps spread across three stages. First, the preparation stage includes information gathering and the Savvy Start (brainstorming, sketching, and prototyping), involving contributors to the material development, in this case reading teaching materials, such as peers, expert advisors, and students. After that, the iterative design stage aims to design and prototype the teaching materials so that they can be evaluated and receive feedback from the parties involved. Lastly, in the iterative development stage, the prototype is developed and implemented thoroughly. Once the teaching materials have been used, they can, if necessary, be evaluated and returned to the development and implementation stage.
Axiology
Axiology is closely related to the moral principles governing the use of acquired knowledge (Ginting, 2008). Surajiyo (2007) describes axiological values as a benchmark for truth, ethics, and morals, serving as a normative basis for research and exploration as well as for the application of science. Suriasumantri (1996) explains that axiology is a theory of value related to the usefulness of acquired knowledge. Thus, it can be stated that axiology discusses moral rules, benchmark values of truth, and the theory of value related to the benefits of acquired knowledge.
The axiological study in this research examines two values: values related to technological advances in learning, especially in learning English, and human values in language teaching. The value of technological progress here means that this research will result in reading teaching materials based on stories of Betawi cultural wisdom in audiovisual (video) form, stored on the web. Lestari (2013) notes that non-printed teaching materials include audio materials such as cassettes, radio, vinyl records, and audio compact discs, as well as audiovisual materials such as CAI (computer-assisted instruction) and web-based learning materials. This is supported by the research of Jufriadi et al. (2019), who explained that the use of video as a teaching medium is very helpful in creating a pleasant learning atmosphere in which students watch videos on interesting topics. The skills developed from watching such videos are not limited to listening but also extend to writing and reading.
Meanwhile, the reading teaching materials developed from tales of Betawi cultural wisdom can be used in an enjoyable teaching and learning process because they are authentic. They can thus foster a sense of humanity among students through the depiction of the characters in the fairy tales. In addition, these reading materials also help teachers and students achieve thorough reading comprehension by appreciating literary works, namely Betawi fairy tales.
CONCLUSION
Three branches of the philosophy of science, namely ontology, epistemology, and axiology, are used to examine research on the development of reading teaching materials based on fairy tales of Betawi cultural wisdom. Ontologically, the study concerns the development of reading teaching materials intended for grade X science program students at SMA Negeri 107 Jakarta. Epistemologically, the method used is the SAM (successive approximation model) teaching material development model of Allen (2012). Axiologically, two values are involved, namely educational technology and human values, which inform the model of the teaching materials. These values are included because the development of reading teaching materials based on fairy tales of Betawi cultural wisdom can make the classroom learning process more meaningful through the character education conveyed by the characters in the fairy tales.
"Philosophy",
"Education"
] |
On a new four-dimensional model of memristor-based chaotic circuit in the context of nonsingular Atangana–Baleanu–Caputo operators
A memristor is an inherently nonlinear memory element that may substitute for resistors in next-generation nonlinear computational circuits that can show complex behaviors, including chaos. A four-dimensional memristor system with the Atangana-Baleanu fractional nonsingular operator in the sense of Caputo is investigated. The Banach fixed point theorem for the contraction principle is used to verify the existence and uniqueness of solutions of the fractional representation of the given system. A newly developed numerical scheme for fractional-order systems, introduced by Toufik and Atangana, is utilized to obtain the phase portraits of the suggested system for different fractional derivative orders and different parameter values. Analysis of the local stability of the fractional model via the Matignon criterion shows that the trivial equilibrium point is unstable. The dynamics of the system are investigated using Lyapunov exponents to characterize the nature of the chaos and to verify the dissipativity of the system. It is shown that the proposed system is chaotic and significantly sensitive to parameter variation and to small changes in the initial conditions.
Introduction
In the last decade, fractional differential equations have gained much attention for modeling several real-world problems in different areas, including mathematical epidemiology, physics, engineering, and many others, in which the fractional-order operators have either singular kernels (the Caputo and Riemann-Liouville fractional derivatives) or nonsingular kernels (the Atangana-Baleanu and Caputo-Fabrizio derivatives).
The difference between integer- and fractional-order derivatives is that the integer-order derivative indicates some properties at a particular time of a system, while the fractional-order derivation operator describes a certain feature of a dynamical system over the whole time. Moreover, the integer-order derivative describes the local properties of a certain dynamic system, whereas the fractional-order representation of a dynamic system involves the whole space of the process [33]. In other words, applying derivation operators of non-integer orders in modeling real-world problems is essential for describing hereditary characteristics and memory effects, important features of many mechanisms [34,35].
The emergence of different definitions of the fractional derivative is interesting and offers an opportunity to address the complexity of nature, in the sense that some problems in nature follow the power law (the case of the Riemann-Liouville fractional operator), others follow the Mittag-Leffler law (the case of the Atangana-Baleanu fractional derivative operator), others the exponential decay law (the case of the Caputo-Fabrizio fractional operator), or a combination of the above [36,37].
The recent increase in the study of different dynamical systems using fractional-order derivatives is attributed to the fact that most of the dynamic systems in relation to complex systems are found to be nonlocal having long memory in time, and intrinsically fractional derivation operators can describe such systems more accurately than the integer derivatives [38]. In other words, important features of many physical systems are best described or exposed by using fractional-order operators.
Chaos theory has attracted many researchers and has applications in the fields of encryption and secure communication [39], the modeling of financial systems, the representation of circuit diagrams [40], and many others [37]. It is believed that the first chaotic phenomenon was observed when the three-body problem was studied by Poincare in the 1880s. In his study, Poincare realized that the three-body problem is not integrable and that the numerical solution of the system is highly sensitive to initial conditions. Later, in the middle of the twentieth century, Edward Lorenz modeled atmospheric dynamics using three coupled nonlinear differential equations and also noted the sensitivity of his atmospheric model to initial conditions. This sensitivity to initial conditions makes predicting the weather over a long enough time span very difficult or impossible [41].
Besides, Lorenz recognized that the trajectories of a chaotic system are not scattered all over the place but approach an attractor in the state space. He also detected that sensitivity to initial conditions reveals itself by generating instability in the attractor; as a result, such an attractor is named a 'strange' attractor [42].
Nowadays there is a large body of literature dedicated to the analysis of chaotic systems using fractional derivative operators, including chaotic systems based on Chua's electrical circuits and memristor-based systems. Some of this literature is reviewed in what follows.
Sene applied the Caputo fractional derivative operator to detect the chaotic behavior of different 3D and 4D chaotic systems, using Lyapunov exponents and bifurcation diagrams to identify the nature of the chaos and the impact of parameter variation for the chaotic models investigated in [40, 43-46]. Different chaotic systems, including Chua's electric circuit, were analyzed using fractional-order mathematical models by Petras [47]. Bifurcation and chaotic behaviors in a fractional-order simplified Lorenz system, using the Adams-Bashforth-Moulton predictor-corrector scheme, are considered in [48]. A fractional-order dynamical model of HIV-1 in the Caputo sense that leads to chaotic behavior is considered in [49].
Atangana-Baleanu fractional derivative operators were used for modeling and analysis of different chaotic and hyperchaotic systems, with solutions approximated by implementing a two-step Adams-Bashforth numerical algorithm in [50]. An intra-specific predator-prey model based on the Atangana-Baleanu fractional derivation operator, whose numerical simulation led to a more chaotic dynamic system for different fractional derivative orders, was considered in [51].
It was in 1971 that Chua, a circuit theorist, proposed the memristor as the missing two-terminal nonlinear electrical component, the three basic components of a circuit being the resistor, the capacitor, and the inductor. The memristor is famous for its memory effect and nonlinear characteristics and is considered the fourth circuit component. A memristor relates magnetic flux and electric charge: when the flux is a function of the charge (φ = φ(q)) it is called a charge-controlled model, and when the charge is a function of the flux (q = q(φ)) it is called a flux-controlled model [39,52].
At present, several studies have been conducted on memristor-based chaotic circuits using both integer- and fractional-order derivatives. In [53], a conformable model of the simplest fractional memristor-based chaotic circuit is designed and studied by means of the conformable ADM (Adomian decomposition method), bifurcation diagrams, Lyapunov exponents, and Poincare sections. Buscarino et al. introduced a chaotic circuit based on a realistic model of an HP memristor, with numerical results showing the generation of chaotic attractors [52]. A novel five-dimensional chaotic system with a flux-controlled memristor and integer-order derivatives, extracted from Wang's 4D hyperchaotic system, was proposed by Wang et al. [54]. A memristor-based chaotic circuit, obtained by modifying Chua's circuit with a flux-controlled memristor and a negative conductance in place of the nonlinear resistor, is analyzed for its chaotic dynamics using Lyapunov exponents, bifurcation maps, Poincare maps, and power spectra, along with laboratory experiments, using integer-order derivatives in [39]. Petras [47] then investigated a memristor-based Chua's oscillator in the framework of fractional orders.
The objective of this work is to investigate the memory-effect properties and to detect chaos in a four-dimensional memristor-based system. Accordingly, an integer-order model of a memristor-based circuit is represented with Atangana-Baleanu fractional derivatives of the Caputo type (ABC), and the existence and uniqueness of solutions of the resulting ABC fractional model are established based on the Banach fixed point theorem (BFPT) for the contraction principle. Numerical approximation of the ABC fractional model is performed using the newly developed numerical scheme for fractional derivatives by Toufik and Atangana [55]. The local stability of the fractional model is analyzed using the Matignon stability criterion. The existence and nature of chaos in the fractional model are checked using Lyapunov exponents, and bifurcation diagrams for different fractional derivative orders and parameter variations are produced. Several phase portraits are depicted as verification of the impact of different parameter values and different fractional orders. The impact of initial conditions on the solution trajectories of the system is also investigated by simulating the trajectories for different initial conditions. All the phase portraits and solution trajectories in this work are obtained from the numerical scheme of Toufik and Atangana adapted to the memristor chaotic model considered here. The computing software Matlab 2019a is used for the simulations.
A Matlab code for the Lyapunov exponents and bifurcation diagrams of fractional systems, the Danca algorithm [56], is used to quantify the chaos by calculating Lyapunov exponents and obtaining bifurcation diagrams for different fractional orders and different parameter values of the model. Evidence for the originality of this work includes the application of the Atangana-Baleanu fractional operator to the memristor-based system considered in this study, the application of the newly developed numerical approximation by Toufik and Atangana to the fractional-order system, and the derivation of the phase portraits of the system from this numerical scheme. The Lyapunov exponents are calculated and bifurcation diagrams are depicted for different fractional orders and different ranges of parameter values of the model, respectively. The impact of small changes in the initial conditions on the dynamics of the chaotic system is also investigated through simulation results for different initial values.
The remaining part of the research manuscript is arranged as follows: In the second part of the paper, the fractional representation of the memristor-based systems is made following some recapping of preliminary concepts and definitions of memristor circuit and basic properties of Atangana-Baleanu derivatives. The existence-uniqueness of the solution for the fractional representation of such a model are accomplished in this same section of the paper. The numerical algorithm applied to get the phase portrait of the memristor-based system is developed in the third part of the paper. The fourth part of the paper is concerned with the local stability analysis of the fractional model, and then, Lyapunov exponents, bifurcation diagrams with different fractional order and parameter variation are considered in the fifth section. Investigation of the impact of small change in the initial conditions on the dynamics of the chaotic system is considered in the sixth section, and the conclusions and references are presented in the subsequent sections.
Mathematical model describing the memristor-based circuit
Memristor-based circuit
A memristor is regarded as a two-terminal device in which the magnetic flux φ between the terminals is considered as a function of the electric charge q that passes through the device [57]. Here, the memristor is of the flux-controlled type, specified by its incremental memductance function

W(φ) = dq(φ)/dφ.    (1)

From (1) we reach a relation between the current I_M(t) through the memristor and the voltage V1(t), presented as

I_M(t) = W(φ(t)) V1(t).    (2)

By (2) and W(φ(t)) = W(∫₀ᵗ V1(τ) dτ), the integral inside the memductance function indicates that this function remembers the history of the voltage values. On the other hand, if W is constant, then the memristor reduces to a resistor. The memristor-based circuit is derived from the circuit attributed to Chua by replacing the Chua diode with the flux-controlled memristor [47,57]. Chua's electric circuit involves the resistor R, the inductor L, the capacitors C_j, j = 1, 2, and a nonlinear resistor (NR). The dynamic equation for the memristor-based chaotic circuit is formulated as system (3), in which V1 and V2 stand for the voltages over the capacitors C_j, j = 1, 2, I_L stands for the current through the inductance L, and I_M(t) is defined in (2). Based on the motivation that smooth nonlinearity does give rise to chaos, we choose a smooth, continuous, cubic, monotone increasing nonlinearity q(φ) = aφ + bφ³, so that the memductance function becomes W(φ) = a + 3bφ². The system of differential equations (3) can then be converted to the dimensionless structure given in (6).
Fractional representation of the memristor-based circuit
In this subsection, we recall the definition and basic properties of the Atangana-Baleanu derivative of Caputo type (the ABC fractional derivative). For q ∈ (0, 1) and f ∈ H¹(0, T), it is defined by

ABC_0 D_t^q f(t) = (B(q)/(1 − q)) ∫₀ᵗ f′(τ) E_q(−q (t − τ)^q / (1 − q)) dτ,

where B(q) is a normalization function satisfying B(0) = B(1) = 1 and

E_q(z) = Σ_{k≥0} z^k / Γ(qk + 1)

is the Mittag-Leffler function.
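Since the Mittag-Leffler kernel is what distinguishes the ABC operator from the classical Caputo derivative, it is convenient to be able to evaluate E_q numerically. Below is a minimal Python sketch assuming a plain truncated power series; this is adequate only for moderate |z|, and robust evaluation at large arguments would require specialized algorithms.

```python
from math import gamma

def mittag_leffler(z, q, n_terms=120):
    """Truncated series E_q(z) = sum_{k>=0} z**k / Gamma(q*k + 1)."""
    return sum(z**k / gamma(q*k + 1) for k in range(n_terms))

print(mittag_leffler(-0.5, 0.98))  # near exp(-0.5) ~ 0.607, since q is close to 1
```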
We now proceed to formulate model (6) in terms of ABC fractional derivatives.
Lemma 2.5 states that, for 0 < q < 1 and a continuous right-hand side u, the fractional differential equation ABC_0 D_t^q x(t) = u(t), x(0) = x₀, admits the unique solution

x(t) = x₀ + ((1 − q)/B(q)) u(t) + (q/(B(q)Γ(q))) ∫₀ᵗ (t − τ)^{q−1} u(τ) dτ.

We can describe the result in Lemma 2.5 in the form of the Banach fixed point theorem as follows. We begin by defining a suitable Banach space X with its norm, a closed ball Ω_r ⊂ X of radius r, and the operators O_i, i = 1, …, 4, given in (12)-(15), one for each state variable of the model. We are now ready to describe the dynamic equation for the aforesaid memristor-based circuit given in (6) using the ABC fractional derivative, which yields the fractional model (16).
Results on the existence-uniqueness
Here, the existence and uniqueness of the solution of the ABC fractional model (16) are proved using the Banach fixed point theorem (BFPT) for contraction mappings. The following two theorems are worth recalling before proceeding further.
Theorem 2.6 (BFPT, [61]) Let Ω ≠ ∅ be a closed subset of a Banach space X. Then any contraction mapping O : Ω → Ω admits a unique fixed point.
Theorem 2.7
Assume that x, y, z, φ are continuous mappings satisfying conditions that guarantee the contraction constants H appearing below are strictly less than 1. Then the ABC fractional derivative system given by (16) has a unique solution in the region X.
Proof. We first show that the operator O₁ defined in (12) is well defined, in the sense that O₁x(t) ∈ Ω_r and that the corresponding ABC derivative is continuous on I. For any x ∈ Ω_r, the estimate obtained from (12) gives O₁x(t) ∈ Ω_r, and the continuity of the integrand on I yields continuous differentiability; hence O₁Ω_r ⊂ Ω_r. To show that O₁ has a fixed point, we apply Theorem 2.6, for which it suffices to show that O₁ is a contraction. Indeed, for x₁, x₂ ∈ X and t ∈ I, the corresponding estimate gives a Lipschitz bound with constant H; since H < 1 by the hypothesis of the theorem, O₁ is a contraction. The same argument applies to the operator O₂ defined in (13): for any y ∈ Ω_r we obtain O₂y(t) ∈ Ω_r and continuous differentiability on I, so O₂Ω_r ⊂ Ω_r, and for y₁, y₂ ∈ X, t ∈ I, the contraction estimate again holds with constant H < 1, so O₂ is a contraction.

In the sequel, we verify in the same way that the operator O₃ defined in (14) is well defined: for any z ∈ Ω_r we have O₃z(t) ∈ Ω_r and continuous differentiability on I, so O₃Ω_r ⊂ Ω_r; and for z₁, z₂ ∈ X, t ∈ I, the contraction estimate holds with H < 1, so O₃ is a contraction.

Finally, for any φ ∈ Ω_r, it follows from (15) that O₄φ(t) ∈ Ω_r and that the corresponding ABC derivative is continuous on I; as a result, O₄Ω_r ⊂ Ω_r, and it follows in the same way that O₄ is a contraction. Hence, by the BFPT (Theorem 2.6), system (16) admits a unique solution in X.
Numerical approximation of solutions of a given model
In this section, the numerical algorithm utilized to obtain the phase portraits of the dynamic ABC system (16) is introduced. In the context of chaotic or hyperchaotic fractional differential equations, analytical methods such as the Laplace transform method, the Sumudu transform, the HATM, and the homotopy perturbation technique cannot easily be applied because of the nonlinearities of the system [36,37]. This leads to the need for numerical methods to approximate the solutions of systems of fractional differential equations. Numerical methods applicable in this case include the Adams-Bashforth and Toufik-Atangana schemes [38], both of which are based on Lagrange interpolation polynomials.
In this study, the newly developed numerical approximation for fractional derivatives due to Toufik and Atangana is employed. The scheme is developed specifically for the approximation of Atangana-Baleanu fractional derivatives, and it has been proved to be convergent, stable, and consistent.
For convenience, let us write (16) in the form (21), in which each state equation reads ABC_0 D_t^q x(t) = f(t, x(t)) with the appropriate right-hand side f. Now, from Lemma 2.5 and the first equation of (21), we obtain the fractional initial value problem (22), whose solution (23) is

x(t) = x(0) + ((1 − q)/B(q)) f(t, x(t)) + (q/(B(q)Γ(q))) ∫₀ᵗ (t − τ)^{q−1} f(τ, x(τ)) dτ,

with analogous expressions for y(t), z(t), and φ(t). With the help of Lagrange's interpolation polynomial of f on [t_k, t_{k+1}], with step size h = t_k − t_{k−1}, we obtain the approximation (24). Substituting (24) into (23) yields (25), whose coefficients are the integrals (26) and (27). Inserting t_m = mh into (26) and (27) gives the closed-form coefficients (28) and (29), so that expression (25) can be written in terms of (28) and (29) as the update formula (30). Accordingly, we obtain the corresponding update equations for the remaining variables y, z, and φ.
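To make the scheme concrete, the following Python sketch implements the Toufik-Atangana update for a generic vector field. Two assumptions are made explicit: the normalization function is taken as B(q) = 1, and the f(t_{k−1}, x_{k−1}) term at k = 0 is approximated by f(t_0, x_0). The right-hand side memristor_rhs is only a hypothetical stand-in from the Bao-type flux-controlled memristive family, with memductance W(φ) = a1 + 3 a2 φ²; the exact dimensionless equations of system (21) are not reproduced in this excerpt, so this function should be replaced by them.

```python
import numpy as np
from math import gamma

def toufik_atangana(f, x0, q, h, n_steps, B=1.0):
    """Toufik-Atangana scheme for ABC D^q x(t) = f(t, x(t)), x(0) = x0."""
    x0 = np.asarray(x0, dtype=float)
    xs = np.zeros((n_steps + 1, x0.size))
    fs = np.zeros_like(xs)
    xs[0], fs[0] = x0, f(0.0, x0)
    c = h**q / gamma(q + 2.0)
    for n in range(n_steps):
        k = np.arange(n + 1, dtype=float)
        w1 = (n + 1 - k)**q * (n - k + 2 + q) - (n - k)**q * (n - k + 2 + 2*q)
        w2 = (n + 1 - k)**(q + 1) - (n - k)**q * (n - k + 1 + q)
        f_prev = np.vstack([fs[0:1], fs[0:n]])   # f(t_{k-1}); f_{-1} taken as f_0
        acc = c * (w1[:, None] * fs[:n + 1] - w2[:, None] * f_prev).sum(axis=0)
        xs[n + 1] = x0 + (1 - q)/B * fs[n] + q/(B * gamma(q)) * acc
        fs[n + 1] = f((n + 1)*h, xs[n + 1])
    return xs

def memristor_rhs(t, s, b1=10.0, b2=13.0, b3=0.1, a1=0.3, a2=0.8, zeta=1.4):
    """Hypothetical stand-in for system (21), not the paper's exact equations."""
    x, y, z, phi = s
    W = a1 + 3.0 * a2 * phi**2                   # memductance W(phi)
    return np.array([b1 * (y + zeta * x - W * x), x - y + z,
                     -b2 * y - b3 * z, x])

traj = toufik_atangana(memristor_rhs, [0.11, 0.11, 0.11, 0.11],
                       q=0.98, h=0.005, n_steps=20000)
```

Note that the full history enters every step, so the cost grows quadratically with the number of steps; this is intrinsic to the nonlocal memory of the ABC operator.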
Analysis on local stability
In this part, the local stability analysis of the fractional model (21) is performed. It is known that, in general, the equilibrium points (EPs) of chaotic systems are not stable. Standard methods of stability analysis in fractional calculus include the Matignon criterion and Laplace transform methods; in this work, the Matignon method is used for its simplicity, being also the most commonly used in the literature for this purpose [1,35,37,45]. The Matignon condition is

|arg(λ)| > qπ/2 for all λ ∈ λ(J),    (35)

in which J represents the Jacobian matrix, λ(J) is the set of all eigenvalues of J, and q is the fractional derivative order. In the fractional setting, an EP of (21) is called locally stable provided that the Matignon criterion (35) is satisfied by each eigenvalue of J.
To determine whether the equilibrium points (EPs) of (21) are stable, we proceed as follows.

(I) The equilibrium points (EPs): The equilibrium points of (21) are obtained by setting the ABC derivatives of all state variables to zero, which yields E_eqpts = (0, 0, 0, δ) for an arbitrary constant δ. Hence, (0, 0, 0, 0) is the trivial equilibrium point, and the nontrivial equilibria form the line E_eqpts = {(0, 0, 0, δ) : δ ≠ 0}.
On the other side, |arg(λ_{2,3})| = 2.0133 > qπ/2 holds for any q ∈ (0, 1). It can then be concluded that the equilibrium point E_eqpts = (0, 0, 0, 0) is locally unstable for the considered parameter values. Furthermore, since the real part of one of the eigenvalues of the Jacobian matrix is positive, we infer that system (21) fulfills the necessary criterion for exhibiting a double-scroll attractor [1].
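The Matignon check reduces to an eigenvalue computation, as in the following sketch; the Jacobian J used here is a hypothetical placeholder, since the entries of the Jacobian of system (21) at the origin are not reproduced in this excerpt.

```python
import numpy as np

def matignon_stable(J, q):
    """Matignon criterion: locally asymptotically stable iff every eigenvalue
    lambda of the Jacobian satisfies |arg(lambda)| > q*pi/2 (0 < q <= 1)."""
    eigs = np.linalg.eigvals(np.asarray(J, dtype=float))
    return bool(np.all(np.abs(np.angle(eigs)) > q * np.pi / 2)), eigs

J = np.array([[-1.0, 10.0,  0.0, 0.0],   # hypothetical Jacobian,
              [ 1.0, -1.0,  1.0, 0.0],   # for illustration only
              [ 0.0, -13.0, -0.1, 0.0],
              [ 1.0,  0.0,  0.0, 0.0]])
stable, eigs = matignon_stable(J, q=0.98)
print(stable, np.abs(np.angle(eigs)))
```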
Lyapunov exponents, bifurcation, and chaos via different values of q and parameters
In this section, the level of chaos in system (21) is quantified using Lyapunov exponents, and bifurcation analyses of system (21) with respect to the fractional derivative order q and three of the model parameters, β1, β2, and β3, are performed. In addition, a Matlab code for the Lyapunov exponents of fractional systems, the Danca algorithm [56], is used to quantify the chaos by calculating Lyapunov exponents for different fractional orders of model (21). The parameter values are a1 = 0.3, a2 = 0.8, ς = 1.4, β1 = 10, β2 = 13, and β3 = 0.1, the same as those used above for calculating the Jacobian matrix. The initial conditions used in this part of the work are (0.11, 0.11, 0.11, 0.11), and the final simulation time is t_end = 300 seconds. The corresponding Lyapunov exponents (LEs) for the values q = 0.95, 0.96, 0.98, 0.99 can be found in Table 1. As can be observed, in each row of Table 1 one of the Lyapunov exponents is positive, so the dynamic system (21) is chaotic; and since the sum of the LEs in each row is negative, we conclude that the system is dissipative.
Bifurcations due to variation of q
To obtain bifurcation diagrams due to the variation of the fractional order q, the values of all the parameters are kept fixed and the order of the fractional derivative q is made to vary in the interval (0.95, 1) with an increment of 0.01. The bifurcation diagram is illustrated in Fig. 1.
As shown in Fig. 1, the system starts to bifurcate at q = 0.95, and for q ∈ (0.95, 1] the system is chaotic. This observation is verified by the phase portraits of model (21) depicted in Figs. 2 and 3 for q = 0.95 and q = 0.99, respectively. It is observed from the figures that as the order of the derivative increases toward 1, the chaotic nature of dynamic system (21) becomes more and more pronounced. Based on the LEs shown in Table 1 for the fractional order q = 0.95, the Kaplan-Yorke dimension of system (21) can be calculated as follows. Since the sum of the first three LEs is positive, i.e., LE1 + LE2 + LE3 > 0, while the sum of all the LEs is negative, the system is dissipative, and the Kaplan-Yorke dimension equals D_KY = 3 + (LE1 + LE2 + LE3)/|LE4|. The Kaplan-Yorke dimensions corresponding to q = 0.98 and q = 0.99 are obtained in the same way. The overall conclusion from the LEs is that the system is chaotic with one positive LE, and the Lyapunov dimension is non-integer for all the fractional orders considered above. Another observation is that, although the fractional order q increases from 0.95 to 0.99, the LEs remain approximately equal to each other to three decimal places.
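The Kaplan-Yorke computation above is easy to mechanize. The helper below sorts the exponents and accumulates partial sums until adding the next exponent would make the sum negative; the input values are illustrative only, since Table 1 is not reproduced in this excerpt.

```python
def kaplan_yorke(lyap_exps):
    """D_KY = j + (LE_1 + ... + LE_j) / |LE_{j+1}|, with j the largest index
    for which the partial sum of the sorted exponents stays non-negative."""
    les = sorted(lyap_exps, reverse=True)
    partial, j = 0.0, 0
    for le in les:
        if partial + le < 0:
            break
        partial += le
        j += 1
    return float(j) if j == len(les) else j + partial / abs(les[j])

print(kaplan_yorke([0.31, 0.05, -0.12, -12.3]))  # about 3.02 (illustrative)
```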
In the next part of this section, the chaos and bifurcation behavior under different parameter values are investigated. The three parameters considered are β1, β2, and β3.
Bifurcation diagram due to variation of β1
One can observe from Fig. 4 that for the parameter β1 ∈ [9.5, 10.5] system (21) becomes significantly chaotic. To verify this observation, the phase portrait for β1 = 10.09 is depicted in Fig. 5.
Bifurcation diagram due to variation of β2
To obtain the bifurcation diagram as a result of the variation of parameter β2, the other parameter values are kept unchanged and the order is taken to be q = 0.98. The parameter β2 is made to vary in the interval (12.5, 13.5) with an increment of 0.001. The relevant diagram is illustrated in Fig. 6.
One can infer from Fig. 6 that when β2 ∈ [12.5, 13.5], system (21) remains chaotic. To verify this observation, the phase portraits of system (21) for β2 = 12.5 and β2 = 13.02 are illustrated in Figs. 7 and 8, respectively. Based on the bifurcation diagram (Fig. 6) and the figures for β2 = 12.5 and β2 = 13.02, it can be observed that the significance of the chaos becomes weaker and weaker as the value of the parameter increases toward 13.5 in the interval (12.5, 13.5).
This observation is also supported by the LEs and the corresponding Kaplan-Yorke dimensions of the attractors shown in Table 2. As observed from Table 2, the positive LEs and the corresponding dimension decrease as the value of β2 increases, and because of this, the significance of the chaos decreases with increasing parameter value in the interval (12.5, 13.5).
Bifurcation diagram due to variation of β3
To obtain the bifurcation diagram as a result of the variation of parameter β3, the values of the other parameters are kept unchanged and the order is taken to be q = 0.98. The parameter β3 is made to vary in the interval (0, 1) with an increment of 0.001. The bifurcation diagram is illustrated in Fig. 9.
Impact of initial conditions
The impact of applying different initial conditions to dynamic system (21) is addressed in this section using simulation results. Since chaotic systems are well known to be sensitive to initial conditions, it is worthwhile to investigate dynamic system (21) under different initial conditions. The parameter values used are q = 0.98, β1 = 10, β2 = 13, β3 = 0.1, a1 = 0.3, a2 = 0.8, and ς = 1.4, and the reference initial condition is X0 = [x0, y0, z0, φ0] = [0.11, 0.11, 0.11, 0.11].
In the first case of investigating the influence of the initial condition on the dynamics of system (21), x0 is varied from 0.11 to 0.001, 0.1, 0.2, and 0.12. The simulation result is shown in Fig. 12, which depicts the solution trajectory φ(t) of system (21) for the different initial conditions obtained by varying the first coordinate of X0; the other solution components of system (21) show a similar pattern in this case. Secondly, keeping all parameter values fixed as in the first case, the initial value y0 is varied from 0.11 to 0.001, 0.1, and 0.12. The simulation result is shown in Fig. 13, which depicts the solution trajectory φ(t) of system (21) for the different initial conditions obtained by varying the second coordinate of X0; again, the other solution components of system (21) show a similar pattern.
In general, we can conclude that variation of the initial conditions of a chaotic system generates different solution trajectories. This happens because a positive Lyapunov exponent implies that trajectories starting from nearby initial conditions diverge rapidly.
Conclusions
In this study, a four-dimensional memristor-based circuit system with a flux-controlled memristor is investigated. The integer-order system is represented with the Atangana-Baleanu fractional derivative, and the Banach contraction theorem is utilized to show the existence and uniqueness of solutions of the fractional representation of the mathematical model. The memristor fractional model exhibits chaotic behavior for different fractional-order derivatives and different parameter values. One can say that the fractional representation reveals additional chaotic behaviors that may not be apparent using integer derivatives, as verified by simulations of the system for different fractional-order derivatives. Lyapunov exponents were applied to characterize the nature of the chaotic system: one positive Lyapunov exponent was found, so the system is chaotic. The memristor-based chaotic system is found to be sensitive to small variations of the parameters, as depicted by the bifurcation diagrams, and its dynamics are also sensitive to the initial conditions, as shown by the different simulation results. In future work, other fractional operators, such as the Caputo-Fabrizio operator, could be applied to the same system and the results compared with those obtained in this work.
"Mathematics",
"Physics",
"Engineering"
] |
A Novel Feature-Map Based ICA Model for Identifying the Individual, Intra/Inter-Group Brain Networks across Multiple fMRI Datasets
Independent component analysis (ICA) has been widely used in functional magnetic resonance imaging (fMRI) data analysis to evaluate the functional connectivity of the brain; however, there are still limitations when ICA must simultaneously handle neuroimaging datasets with diverse acquisition parameters, e.g., different repetition times, different scanners, etc. It is therefore difficult for the traditional ICA framework to handle ever-increasing big neuroimaging datasets effectively. In this research, a novel feature-map based ICA framework (FMICA) is proposed to address these deficiencies, aimed at exploring brain functional networks (BFNs) at different scales, e.g., the first level (individual subject level), second level (intragroup level of subjects within a certain dataset), and third level (intergroup level of subjects across different datasets), based only on the feature maps extracted from the fMRI datasets. FMICA is presented as a hierarchical framework that effectively combines ICA and constrained ICA to identify the BFNs from the feature maps. Simulated and real experimental results demonstrate that FMICA has excellent ability to identify intergroup BFNs and to characterize subject-specific and group-specific differences of BFNs from the independent component feature maps, which sharply reduces the size of the fMRI datasets. Compared with traditional ICAs, FMICA as a more generalized framework can efficiently and simultaneously identify the variant BFNs at the subject-specific, intragroup, intragroup-specific, and intergroup levels, implying that FMICA is able to handle big neuroimaging datasets in neuroscience research.
INTRODUCTION
Blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) has been used as an effective neuroimaging tool to study functional connectivity among multiple cortical brain regions, which can reveal the neural correlates of cognitive processes (Biswal et al., 1995, 1997; Kawashima et al., 2000; Greicius et al., 2003; Yang et al., 2014; Shi et al., 2015b). A recent research interest in the literature is to study functional connectivity at multiple levels using the fMRI technique (Yeo et al., 2011), and for this purpose a variety of well-known methods have been utilized, e.g., the general linear model (GLM; Friston et al., 1994; Bagarinao et al., 2003), clustering methods (Golay et al., 1998; Fadili et al., 2000; Cordes et al., 2002; Zhang et al., 2011; Ren et al., 2014; Tang et al., 2015), principal/independent component analysis (PCA/ICA; McKeown et al., 1998; Biswal and Ulmer, 1999; Baumgartner et al., 2000; Kiviniemi et al., 2000; Beckmann and Smith, 2004), sparse dictionary learning (Georgiev et al., 2007; Lv et al., 2015a,b; Wang et al., 2016a), etc. As a representative model-based method, GLM requires prior knowledge of the design matrix; therefore, GLM is not able to detect intrinsic brain functional networks (BFNs) at the resting state, where no design matrix is available. On the contrary, since no prior knowledge of the spatial or temporal patterns of the BFNs is required, data-driven methods are more widely used in functional connectivity studies. Examples of such data-driven methods include spatial ICA (McKeown et al., 1998) and temporal ICA (Biswal and Ulmer, 1999), assuming spatial and temporal independence, respectively, while probabilistic ICA (PICA) carries out probabilistic modeling to achieve an asymptotically unique decomposition of the fMRI data (Beckmann and Smith, 2004). Other ICA methods for fMRI data analysis include an approach making use of the spatial regularity of sources (Valente et al., 2009) and models combining sparsity with the mutual independence of components (Wang et al., 2013, 2015), to improve the accuracy of the estimated brain sources.
In order to investigate the commonality of the functional connectivity inferred by ICA across a group of subjects, roughly five group analysis methods have been developed (Calhoun and Adali, 2012). The first performs ICA on the average of the fMRI data across all subjects, with the underlying assumption that all subjects have common time courses (TCs) and spatial maps (SMs) (Schmithorst and Holland, 2004). The second, the temporal concatenation group ICA model (TCGICA), performs ICA on the temporal concatenation of the fMRI data of all subjects, which allows unique TCs for each subject but assumes common group SMs (Calhoun et al., 2001). The third is the spatial concatenation group ICA model (SCGICA), which allows unique SMs but assumes common TCs (Svensén et al., 2002). However, for most resting-state fMRI functional connectivity studies, SCGICA does not perform as well as TCGICA (Schmithorst and Holland, 2004), possibly because the assumption of unique time courses across subjects is more appropriate than the common-SM assumption. The fourth group ICA method, tensor-ICA, concatenates the multi-subject fMRI data along a separate third dimension and estimates a single spatial, temporal, and subject-specific mode for each component, in an attempt to capture the multidimensional structure of the data under the assumption of both temporal and spatial consistency across subjects. The fifth approach performs a post-hoc analysis of single-subject ICAs, combining the components into groups by spatial correlation (Schöpf et al., 2010; Wang et al., 2012), self-organized clustering (Esposito et al., 2005), or retrospective matching of the components (Langers, 2010). Additionally, by incorporating intragroup sources as a prior of the ICA model, an approach called ICA-R (ICA with references; Lu and Rajapakse, 2006; Shi et al., 2015a), more accurate subject-specific brain sources can be expected. For example, a group information guided ICA model (GIG-ICA), using spatial references of the intragroup sources generated by TCGICA (Calhoun et al., 2001), was able to extract more accurate subject-specific brain sources than traditional ICA (Du and Fan, 2013).
Though ICA and ICA-based models have been widely used to analyze fMRI data, the aforementioned methods have several deficiencies. For example, the multistep PCA operations used for data reduction in TCGICA, SCGICA, and GIG-ICA possibly eliminate subtle signals (Cordes and Nandy, 2004), which is likely not appropriate for handling big neuroimaging data; since tensor-ICA assumes common TCs among subjects, it is inappropriate when the TCs differ, such as in a resting-state study or when events are randomized between subjects; and single-subject ICAs have the disadvantage that, since the data are noisy, the components are not necessarily unmixed in the same way for all subjects. Moreover, to our knowledge, these methods have only been applied to BFN identification at the individual or/and intragroup levels, and there is a need to further investigate BFN identification at the intragroup-specific and intergroup levels, especially across multiple fMRI datasets with different acquisition parameters, such as variant repetition times (TRs) and different kinds of scanners. In this study, a generalized feature-map based ICA model (FMICA) is proposed to address the aforementioned deficiencies, which can be used to analyze big fMRI datasets at the individual, intragroup, and intergroup levels.
The remainder of this paper is organized as follows. The theory and methods of FMICA are presented in the next section, followed by a description of the experimental designs and a subsequent section on validation of the BFN identification ability using both simulated data and real task and resting-state fMRI datasets at the subject-specific, intragroup, and intergroup levels. Results and discussion are then presented, followed by conclusions on the advantages and limitations of FMICA.
THEORY AND METHODS
In this section, the related theory of ICA and ICA-R is presented, followed by the detailed procedures of FMICA and some key issues in FMICA implementation.
BFNs Extraction and ICA/ICA-R
BFN extraction has been formulated as a source separation problem, based on the functional integration property of the brain (McKeown et al., 1998; Du and Fan, 2013; Shi et al., 2015a). This source separation problem can usually be divided into blind source separation (BSS) and semi-blind source separation (SBSS), depending on whether a prior is given or not. With respect to the ICA model, as a representative of BSS, it is assumed that the observed fMRI mixtures (denoted as X) are linear mixtures of a set of non-Gaussian sources, namely the BFNs (denoted as S), which can be formulated as

X = AS,    (1)

where A is the unknown mixing matrix. The goal of ICA is to estimate an unmixing matrix W such that the estimated sources Y computed by

Y = WX    (2)

are good approximations of the true sources S. To solve Equation (1), many ICA algorithms have been proposed, e.g., the commonly used Infomax (Bell and Sejnowski, 1995) and FastICA (Hyvärinen and Oja, 1997). The ICA-R model of SBSS incorporates a prior spatial reference (denoted as r) and can be modeled in a constrained ICA framework as the constrained optimization problem

maximize J(y), s.t. g(y) ≤ 0 and h(y) = E(y²) − 1 = 0,    (3)

where J(y) is the contrast function of a standard ICA algorithm, g(y) = ε(y, r) − ξ, with ε(y, r) denoting the closeness between y (the estimated BFN) and the reference signal r, and ξ a threshold parameter used to restrain the distance between y and r. To solve Equation (3), the Lagrange multiplier method can be utilized, searching for the solution using Newton-like learning (Lu and Rajapakse, 2006), fixed-point learning (Lin et al., 2007), or multi-objective optimization (Du and Fan, 2013).
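To make Equations (1)-(2) concrete, the following is a minimal NumPy sketch of whitening followed by symmetric fixed-point ICA with a tanh contrast (one common choice of J(y)). It assumes full-rank data and is only a sketch, not the exact FastICA/Infomax implementations used in fMRI toolboxes such as GIFT.

```python
import numpy as np

def fastica_unmix(X, n_iter=200, tol=1e-6, seed=0):
    """Estimate Y = W X for mixtures X (signals x samples)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))            # assumes full-rank data
    K = E @ np.diag(d ** -0.5) @ E.T            # whitening matrix
    Z = K @ X
    u, _, vt = np.linalg.svd(rng.standard_normal((Z.shape[0], Z.shape[0])))
    W = u @ vt                                  # orthogonal initialization
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W_new)
        W_new = u @ vt                          # symmetric decorrelation
        done = np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1.0)) < tol
        W = W_new
        if done:
            break
    return W @ Z, W @ K                         # sources Y, total unmixing
```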
FMICA
Supposing that there are m scanned fMRI datasets, i.e., Dataset_k, 1 ≤ k ≤ m, the fMRI data of subject i in Dataset_k are denoted as Sub_i^k, where 1 ≤ i ≤ n_k, 1 ≤ k ≤ m, with n_k signifying the number of subjects in Dataset_k. As described in Figure 1, the FMICA model mainly consists of three levels of ICA decomposition and two re-estimations of group-specific and subject-specific feature maps using ICA-R: (1) the first (single-subject) level ICA decomposition on Sub_i^k to obtain the feature maps, i.e., the independent components ICS_i^k, where 1 ≤ i ≤ n_k and 1 ≤ k ≤ m; (2) the second (intragroup) level ICA decomposition on the aggregated feature maps ICS_Agg^k of Dataset_k, to obtain the intragroup-level feature maps GICS_k for Dataset_k, 1 ≤ k ≤ m; (3) the third (intergroup) level ICA decomposition on the feature maps aggregated across the datasets, i.e., GICS_Agg^{1:m}, to extract the intergroup feature maps GICS; and (4) the ICA-R algorithm, which first runs on each GICS_k (1 ≤ k ≤ m) to extract the corresponding intragroup-specific feature maps, denoted GICS′_k, and then on each ICS_i^k regarding Sub_i^k (1 ≤ i ≤ n_k, 1 ≤ k ≤ m) to obtain the corresponding subject-specific feature maps, denoted ICS′_i^k. To make the last procedure more explicit: on the one hand, for extracting the intragroup-specific feature maps GICS′_k, 1 ≤ k ≤ m, the ICA-R algorithm (Du and Fan, 2013) is applied to each GICS_k, with the intergroup feature maps GICS used as spatial references; on the other hand, to extract the subject-specific feature maps ICS′_i^k corresponding to the subject data Sub_i^k, 1 ≤ i ≤ n_k, 1 ≤ k ≤ m, a similar ICA-R procedure with spatial references is applied to each ICS_i^k. It is noteworthy that the spatial references here have two possible choices: the intergroup feature maps GICS or the corresponding intragroup-specific feature maps GICS′_k. Repeating the above procedure for each Dataset_k (1 ≤ k ≤ m), all the corresponding intragroup-specific maps GICS′_k and subject-specific feature maps ICS′_i^k (1 ≤ i ≤ n_k) are retrieved. However, when the number of involved datasets is less than 2, i.e., m = 1, the third-level ICA decomposition is not performed, and neither the intergroup nor the intragroup-specific feature maps are generated; in this situation, the subject-specific feature maps ICS′_i^1 are still obtained by ICA-R, with the intragroup feature maps GICS_1 determined by the second-level ICA decomposition used as the required spatial references. A sketch of the first two decomposition levels is given below.
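As a rough illustration of the first two levels described above, the sketch below runs spatial ICA per subject and then again on the stacked feature maps of each dataset, using scikit-learn's FastICA. The helper name, the fixed component number, and the omission of the third level and of the ICA-R re-estimation steps are all simplifying assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def fmica_first_two_levels(datasets, n_components=20, seed=0):
    """Levels 1-2 only: per-subject spatial ICA gives feature maps ICS_i^k
    (components x voxels); stacking and re-decomposing them per dataset
    gives the intragroup maps GICS_k."""
    group_maps = []
    for subjects in datasets:                    # each subject: (time x voxel)
        subject_ics = [FastICA(n_components=n_components, random_state=seed,
                               max_iter=1000).fit_transform(sub.T).T
                       for sub in subjects]      # level 1: ICS_i^k, (C x V)
        agg = np.vstack(subject_ics)             # aggregated maps ICS_Agg^k
        gics = FastICA(n_components=n_components, random_state=seed,
                       max_iter=1000).fit_transform(agg.T).T
        group_maps.append(gics)                  # level 2: GICS_k
    return group_maps
```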
Further, the corresponding statistical parametric maps (SPMs) for the subject-specific, intragroup, intergroup, and intragroup-specific feature maps (ICS′_i^k, GICS_k, GICS, and GICS′_k, 1 ≤ i ≤ n_k, 1 ≤ k ≤ m) are obtained by z-score transformation, and the BFNs are generated by thresholding the corresponding SPMs with cluster-size control.
Finally, it is worth noting that in FMICA the intragroup and intergroup BFNs are identified by the second- and third-level ICA decomposition procedures, respectively, while the intragroup-specific and subject-specific BFNs are both identified by the ICA-R procedure, using the intragroup- or intergroup-level maps as references. It is expected that the BFNs at the intergroup, intragroup-specific, and subject-specific levels have some degree of spatial similarity to each other, but respectively capture the commonality across different groups, the specific activation parts of the spatial distribution within a certain group, and those within a single subject's data. In short, for a given intergroup BFN, the corresponding intragroup-specific or subject-specific one belongs to the same kind of BFN, but captures the group-specific or subject-specific differences in its spatial distribution.
Based on the above description, FMICA is a generalized feature-map-based framework that effectively captures the common BFNs (i.e., GICS, GICS_k), the subject-specific ones (i.e., ICS_i^k), and the intragroup-specific ones. This implies that FMICA can be used not only to explore subject-specific differences within a group, but also to reveal intragroup-specific differences across multiple datasets.
Some Key Points in FMICA Implementation
With respect to the FMICA implementation, the number of independent components (ICs) in the ICA decomposition at each level must first be addressed. For the first-level ICA decomposition (depicted in Figure 1), the Laplace approximation (Minka, 2000), previously used in probabilistic ICA for fMRI data analysis (Beckmann and Smith, 2004), is used to estimate the number of components for each single-subject fMRI dataset. For the second-level ICA decomposition, the mean order across all subjects within the same dataset is used, just as the average order is usually used as the number of intragroup-level components in TCGICA (Calhoun et al., 2001;Li et al., 2007) as implemented in the GIFT software (http://mialab.mrn.org/software/gift/index.html). For the third-level ICA decomposition, the average number of components in GICS_k (1 ≤ k ≤ m) is used when the datasets are different session scans of the same subjects under the same condition (for example, Experiment 2 of Section Experimental Designs); otherwise, the stability measure provided by ICASSO (Himberg et al., 2004) is used to determine the optimal number of components, with ICASSO run from the minimum candidate order min{x | x = #(GICS_k), 1 ≤ k ≤ m} to the maximum one max{x | x = #(GICS_k), 1 ≤ k ≤ m} to obtain stability values for each order, where #(·) returns the number of components of the intragroup feature maps GICS_k (for example, Experiment 3 of Section Experimental Designs). Moreover, in this work FMICA uses FastICA (Hyvärinen and Oja, 1997) and GIG-ICA (Du and Fan, 2013) to perform the ICA and ICA-R decompositions, respectively. Additionally, since the performance of ICA-R depends on the accuracy of the spatial references (Du and Fan, 2013;Wang et al., 2014;Shi et al., 2015a), lightly thresholded feature maps, which are more similar to the true activated BFNs, are used as spatial references; the corresponding z-threshold is set empirically to 1.0. Finally, the z-threshold and the cluster-size threshold are set to 2.0 and 10 voxels, respectively, to obtain the final BFNs.
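The ICASSO scan over candidate orders can be approximated as in the sketch below: for each order between the minimum and maximum intragroup component counts, ICA is rerun with different random seeds and scored by how reproducible the components are across runs. This is a simplified stand-in (real ICASSO clusters the components of all runs jointly), and every name and default here is an illustrative assumption.

```python
# A rough stand-in for stability-based order selection: rerun FastICA with
# several seeds per candidate order and score the runs' mutual similarity.
import numpy as np
from sklearn.decomposition import FastICA

def components(X, order, seed):
    return FastICA(n_components=order, random_state=seed,
                   max_iter=1000).fit(X).components_

def stability(X, order, n_runs=5):
    runs = [components(X, order, s) for s in range(n_runs)]
    ref, scores = runs[0], []
    for other in runs[1:]:
        # cross-correlations between the two runs' spatial maps
        cc = np.abs(np.corrcoef(ref, other)[:order, order:])
        scores.append(cc.max(axis=1).mean())  # best-match similarity
    return float(np.mean(scores))

def pick_order(X, lo, hi):
    """Scan orders from min to max #(GICS_k) and keep the most stable one."""
    return max(range(lo, hi + 1), key=lambda k: stability(X, k))
```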
EXPERIMENTAL TESTS
In this section, the efficacy of the proposed FMICA model was validated on a simulation dataset, task-related fMRI data, and resting-state fMRI data. The details of the designed experiments are presented as follows.
Simulation Dataset
The SimTB toolbox (http://mialab.mrn.org/software; Allen et al., 2012;Erhardt et al., 2012) was used to generate a simulation dataset of 20 subjects. Each subject's data had V = 148 × 148 voxels, 12 spatial sources, and 120 time points at TR = 2 s. The baseline intensity was set to 800, and the baseline map is shown in Figure S1A. Each source, depicted in Figure S1B, represented a spatial pattern that underwent a certain activation over time. Two sources (10 and 12) shared task-related modulation in addition to having unique fluctuations. For source 10, the strength of task modulation (expressed as the ratio between task-event amplitude and unique-event amplitude) was set to 4, while task-relatedness was smaller for source 12, set to 2. Task modulation was introduced with a block design (24 s on, 24 s off, five blocks), convolved with a canonical hemodynamic response function to simulate the slow dynamics of the vascular response (Friston et al., 1995). Activation of the other 10 sources was driven solely by unique hemodynamic fluctuations with no task-related variation. All sources had unique events that occurred with a probability of 0.2 at each TR. For the task-modulated sources (10 and 12), unique events were added with smaller amplitudes (0.2 and 0.4, respectively); for the sources not of interest (no task modulation), the unique amplitude was 1. For all sources, the percent signal change was centered at 3 with a standard deviation of 0.25. Additive noise was included to reach a specified contrast-to-noise ratio of 1. The time courses corresponding to the 12 simulated sources are depicted in Figure S1C. To simulate subject-specific variations in the spatial domain, modifications such as translation, rotation, expansion, and contraction were randomly added to each source of each subject, with the corresponding parameters depicted in Figure S1D.
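The task-related part of this simulation can be reproduced in outline as follows: a 24 s on / 24 s off box-car is convolved with a canonical double-gamma hemodynamic response function. The HRF parameters follow common SPM-style defaults and are an assumption; SimTB's internal implementation may differ.

```python
# A hedged sketch of the simulated task time course: block design convolved
# with a double-gamma HRF (shape parameters 6 and 16 are SPM-style defaults).
import numpy as np
from scipy.stats import gamma

TR, n_timepoints = 2.0, 120
t = np.arange(n_timepoints) * TR

box = ((t % 48.0) < 24.0).astype(float)      # 24 s on, 24 s off, five blocks

hrf_t = np.arange(0.0, 32.0, TR)             # HRF support in seconds
hrf = gamma.pdf(hrf_t, 6) - gamma.pdf(hrf_t, 16) / 6.0
hrf /= hrf.sum()

task_timecourse = np.convolve(box, hrf)[:n_timepoints]  # expected modulation
```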
Visual Task Dataset
Six subjects (4 males and 2 females) took part in this visual task experiment; all were informed about the purpose of this study, and all participants provided written informed consent according to procedures approved by the IRB of East China Normal University (ECNU). The visual paradigm was a two-state (OFF, ON) × 3 block design with a block duration of 40 s. In the "ON" state, the visual stimulus was a radial blue/yellow checkerboard reversing at 7 Hz, while in the "OFF" state, participants were required to focus on a cross presented at the center of the screen. The BOLD fMRI data were acquired in the Shanghai Key Laboratory of Magnetic Resonance of ECNU on a Siemens 3.0 Tesla scanner using a gradient-echo EPI sequence with 36 slices providing whole-brain coverage, TR = 2.0 s, scan resolution = 64 × 64, in-plane resolution = 3.75 × 3.75 mm, slice thickness = 4 mm, and slice gap = 1 mm. This dataset was also used in our previous study (Ren et al., 2014).
Test-Retest Task-Related Datasets for Motor, Language, and Spatial Attention
These test-retest fMRI datasets for motor, language, and spatial attention functions were downloaded from the OpenfMRI website (https://openfmri.org/dataset/; Gorgolewski et al., 2013). Three task-related fMRI time series (motor, covert verb generation, and landmark tasks) were selected to validate the proposed FMICA model. The ten healthy subjects (median age 52.5 years, min = 50, max = 58) included four males and six females, of whom three were left-handed and seven right-handed. Each subject was scanned twice, either 2 (eight subjects) or 3 (two subjects) days apart. All subjects provided written informed consent, and the study was approved by the South East Scotland Research Ethics Committee 01. The fMRI acquisition parameters were: GE Signa HDxt 1.5 T MRI scanner, FOV = 256 × 256 mm, in-plane matrix = 64 × 64, slice thickness = 4 mm, slice number = 30, TR = 2.5 s, flip angle = 90°. The numbers of volumes in the time series for the motor, covert verb generation, and landmark tasks were 173, 184, and 238, respectively. For convenience of description, the motor, covert verb generation, and landmark tasks are denoted as Task1, Task2, and Task3, respectively.
Test-Retest NYU Resting-State Datasets
The test-retest resting-state fMRI datasets of 25 normal participants were drawn from the NYU test-retest repository (http://www.nitrc.org/projects/nyu_trt; Zuo et al., 2010). All participants provided written informed consent according to procedures approved by the IRB of New York University (NYU), and the fMRI data were collected according to protocols approved by the institutional review boards of NYU and the NYU School of Medicine. Each participant was scanned three times at rest on a Siemens Allegra 3.0 Tesla MRI scanner, and the fMRI data for each subject consisted of 197 contiguous EPI functional volumes (TR = 2 s, TE = 25 ms, flip angle = 90°, slice number = 39, matrix = 64 × 64, FOV = 192 × 192 mm², acquisition voxel size = 3 × 3 × 3 mm³). Sessions 2 and 3 were collected 45 min apart, 5-16 months (mean 11 ± 4 months) after session 1. A high-resolution T1-weighted magnetization-prepared gradient-echo sequence was also obtained for each participant (MPRAGE, TR = 2500 ms, TE = 4.35 ms, TI = 900 ms, flip angle = 8°, slice number = 176, FOV = 256 × 256 mm²).
Data Preprocessing
All computations in this study were performed on a personal computer with an Intel(R) Core(TM) i5-3210M 2.5 GHz CPU and 4 GB RAM running Windows 7. All preprocessing and processing steps were run on the Matlab platform (Matlab 2012b, MathWorks Inc., Natick, MA, USA).
No preprocessing was applied to the simulation dataset. For the real data, the widely used DPARSF (Yan and Zang, 2010) batch-processing pipeline, built on the SPM8 software (http://www.fil.ion.ucl.ac.uk/spm/), performed the preprocessing operations, including slice-timing correction, motion correction, spatial normalization to the Montreal Neurological Institute (MNI) EPI template, and spatial smoothing with a full width at half maximum (FWHM) of 6 mm. Considering magnetization equilibrium, the first ten volumes were discarded for the test-retest NYU datasets; for the task-related datasets, no initial volumes were discarded.
The z threshold of the z-scored SPMs from all real fMRI datasets was set to 2.0, and the minimum cluster size was set to 10 voxels. The BFNs were displayed with the MRIcroN software (https://www.nitrc.org/projects/mricron), and their locations were assessed with the PickAtlas toolbox (Maldjian et al., 2003, 2004).
Experimental Designs
Three kinds of experiments were designed to validate the effectiveness of FMICA in this study.
Experiment 1: the simulation dataset, with only one session, was used to validate the effectiveness of FMICA in two respects, namely its BFN identification ability at the subject-specific and intragroup levels. The third-level ICA step was not involved in this experiment. The corresponding pipeline consisted of three procedures: the first-level ICA on the simulation dataset to obtain the initial feature maps (ICs), the second-level ICA to extract the intragroup feature maps, and the ICA-R procedure, using the intragroup feature maps as references, to obtain the subject-specific BFNs.

Experiment 2: the NYU resting-state datasets, with three sessions, were used to validate the effectiveness of FMICA in three respects, namely its BFN identification ability at the subject-specific, intragroup-specific, and intergroup levels. The pipeline consisted of the first-level ICA on the fMRI datasets of each rest session to obtain the initial feature maps, the second-level ICA to extract the second-level feature maps, the third-level ICA to obtain the intergroup feature maps, and the ICA-R procedure, using the intergroup feature maps as references, to obtain the intragroup-specific and subject-specific feature maps.

Experiment 3: the ability of FMICA to capture group differences of intrinsic BFNs across multiple kinds of datasets was validated on a combination of the test-retest NYU resting-state datasets, the test-retest task-related datasets for motor, language, and spatial attention, and the visual task dataset. The pipeline consisted of the first-level ICA on each of the aforementioned datasets to obtain the initial feature maps, the second-level ICA to extract the intragroup feature maps, the third-level ICA to retrieve the intergroup feature maps, and the ICA-R procedure, using the intergroup feature maps as references, to obtain the intragroup-specific feature maps.
Results of Experiment 1
In Experiment 1, the BFN detection ability of FMICA at the subject-specific and intragroup levels was validated on the simulation dataset. The order of both the first (individual) and second (intragroup) level ICA decompositions was set to 13 (twelve designed sources and one background source). First, the 12 sources determined by FMICA at the intragroup level are displayed in Figure 2; they closely approximate the simulated ground-truth sources. The Pearson correlation coefficients between the 12 estimated intragroup sources and the corresponding 12 ground-truth sources were 0.9783, 0.9672, 0.9890, 0.9858, 0.9634, 0.9687, 0.9687, 0.9877, 0.9717, 0.9670, 0.9868, and 0.9703, respectively, quantitatively confirming the effectiveness of FMICA in intragroup BFN identification. Moreover, to investigate how the intragroup BFN estimation depends on the ratio of the number of retained ICA components at the intragroup level to that at the individual level, FMICA was run on the simulation dataset with a variety of such intragroup-to-individual ratios, and the mean and standard deviation (std) of the Pearson correlation coefficients between the estimated intragroup sources and the corresponding ground-truth sources were calculated for each run. As shown in Table S1, increasing the number of retained components at the individual level had no effect on performance, while greatly increasing the number of retained components at the intragroup level had a certain negative impact on the estimated intragroup sources, possibly due to over-splitting effects in the ICA decomposition of the simulation dataset. Table S1 also shows that using 13 components in both the first-level and second-level ICA yielded good BFN identification performance on this simulation dataset.
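The matching behind these coefficients can be sketched as follows: each ground-truth source is paired with the estimated component having the largest absolute Pearson correlation. Variable names and shapes are illustrative.

```python
# A minimal sketch of source matching: pair each ground-truth source with its
# best-correlated estimated component and report the matched coefficients.
import numpy as np

def match_sources(ground_truth, estimated):
    """Both arguments: (n_components x n_voxels) arrays of spatial maps."""
    n = ground_truth.shape[0]
    cc = np.corrcoef(ground_truth, estimated)[:n, n:]  # truth x estimated
    best = np.abs(cc).argmax(axis=1)
    return [abs(cc[i, best[i]]) for i in range(n)]

# coeffs = match_sources(true_sources, gics_1)  # e.g., values near 0.96-0.99
```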
The subject-specific BFNs for each simulated subject were also estimated by FMICA, and the correlation coefficients between these estimated subject-specific BFNs and the corresponding ground-truth sources were calculated for each subject. The mean correlation coefficient across the 12 sources for each subject was taken as that subject's BFN identification accuracy and compared to that of the traditional ICA model; as shown in Figure 3, FMICA demonstrated superior subject-specific BFN identification ability.
Results of Experiment 2
In this experiment, the test-retest NYU resting-state datasets with three sessions were used to validate the effectiveness of the proposed FMICA model in identifying BFNs at the intergroup, intragroup-specific, and subject-specific levels. At the subject-specific level, the intrinsic BFNs from different sessions of the same subject should be more correlated with each other than the intrinsic BFNs from different subjects. Moreover, the intragroup-specific intrinsic BFNs from different sessions should also be highly correlated with each other, owing to the high reproducibility of the intrinsic BFNs (Shehzad et al., 2009;Zuo et al., 2010;Wang et al., 2016b).
Eighteen intrinsic BFNs at the intergroup and intragroup-specific levels for resting sessions 1 (S1), 2 (S2), and 3 (S3) were selected visually by experts from the components estimated by FMICA, as displayed in Figure 4, with the involved Talairach Daemon (TD) lobes, Brodmann areas, Automated Anatomical Labeling (AAL) atlas regions, and representative MNI coordinates presented in Table 1. Components IC1-IC4 correspond to the well-known default mode network (DMN; Raichle et al., 2001;Damoiseaux et al., 2006;De Luca et al., 2006), which was divided into four sub-networks (Zuo et al., 2010); IC5, IC6, IC7, and IC10 are the auditory network, predominant visual network, lateral visual network, and sensorimotor network, respectively (Damoiseaux et al., 2006;De Luca et al., 2006;Schöpf et al., 2010;Wang et al., 2012); IC8 and IC9 involve brain regions related to working memory function (Wang et al., 2012, 2013;Iraji et al., 2016); IC11 and IC12 involve dorsal parietal and lateral prefrontal cortex, two separately split components of a dorsal pathway network (Damoiseaux et al., 2008;Schöpf et al., 2010;Wang et al., 2012, 2013); IC13 is the salience network as reported by Menon and Uddin (2010) and Uddin (2015); IC14 is the basal ganglia network, involving mainly the caudate nucleus and putamen (Iraji et al., 2016); IC15 involves the cerebellum posterior lobe and a portion of the calcarine area in the occipital lobe; IC16 is located at the brainstem and cerebellum, e.g., the cerebellar vermis; IC17 involves mainly Brodmann areas 47 and 34, e.g., the superior temporal pole; and IC18 is located in a portion of the limbic and frontal cortex, e.g., the hippocampus and some areas of the superior frontal gyrus. The successful identification of these well-known intrinsic BFNs at the intergroup and intragroup-specific levels demonstrates the effectiveness of the proposed FMICA model.
FIGURE 4 | The spatial map distribution of the intrinsic BFNs at the intergroup and intragroup-specific levels on the test-retest resting-state datasets: the first column depicts the intergroup intrinsic BFNs from the three rest sessions; the second, third, and fourth columns display the intragroup-specific intrinsic BFNs from the first (S1), second (S2), and third (S3) rest sessions, respectively.
TABLE 1 | Location information for the 18 intrinsic BFNs from the test-retest resting-state datasets shown in Figure 4: the MNI coordinates (in mm) and the involved brain lobes, Brodmann areas, and AAL atlas regions for each network.

FIGURE 5 | The spatial map correlation curves among the corresponding intragroup-specific intrinsic BFNs identified by FMICA from the first (S1), second (S2), and third (S3) sessions of the resting-state datasets.

From Figure 4, it can be observed that the intrinsic BFNs at the intragroup-specific level from each session closely approximate the corresponding ones at the intergroup level. Meanwhile, the three pairwise correlations among the intragroup-specific BFNs estimated from the three sessions were calculated, as shown in Figure 5; the high correlations clearly demonstrate the great reproducibility of the intrinsic BFNs across sessions. At the subject-specific level, the BFNs identified by FMICA were compared among sessions of the same subject and among different subjects, and the mean correlation coefficients between the pairs of BFNs under comparison for each subject are shown in Figure 6, where Figures 6A,B are the across-sessions and across-subjects comparisons, respectively. Similarly, the BFNs identified by the first-level ICA were compared among sessions of the same subject and among different subjects, with the mean correlation coefficients shown in Figures 6C,D for the across-sessions and across-subjects comparisons, respectively. It is worth noting that the 18 intrinsic BFNs at the intergroup level were used as templates to match the best-corresponding individual feature maps generated by the first-level ICA decomposition on each session of each subject, in order to overcome the random ordering of the components. Moreover, based on the values in Figures 6A,C and those in Figures 6B,D, two-sample t-tests at a significance level of 0.05 were performed: the mean value (0.5644, marked in Figure 6A) of all points in Figure 6A was significantly larger than that (0.3168, marked in Figure 6C) of all points in Figure 6C, with p = 6.1630 × 10⁻⁸⁰, and the mean value (0.3591, marked in Figure 6B) of all points in Figure 6B was significantly larger than that (0.1853, marked in Figure 6D) of all points in Figure 6D, with p = 2.1511 × 10⁻¹⁷². The mean values in Figures 6B,D for the first-level ICA were relatively small, possibly because some BFNs could be identified at the intergroup level but could not be separated at the single-subject level by traditional ICA. In contrast, the ICA-R re-estimation procedure in FMICA could identify most BFNs at the single-subject level, demonstrating that the proposed FMICA identifies subject-specific BFNs more effectively than traditional ICA.
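The significance tests reported above can be sketched as a two-sample t-test between the two sets of mean correlation values; the variable names are illustrative, and scipy's default two-sided p-value is assumed.

```python
# A minimal sketch of the comparison between FMICA and first-level ICA:
# t-test over the per-subject mean correlation values from the two methods.
from scipy import stats

def compare_consistency(fmica_corrs, ica_corrs, alpha=0.05):
    t, p = stats.ttest_ind(fmica_corrs, ica_corrs)
    # FMICA is judged significantly more consistent if t > 0 and p < alpha.
    return t, p, (t > 0) and (p < alpha)
```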
To intuitively compare the performance of the proposed FMICA and traditional ICA, the spatial maps of IC1 (the DMN) identified by FMICA and FastICA for all three sessions of the first four subjects (shown in Figures 7A,B, respectively, due to space limitations) were taken as an example. The DMNs identified by FMICA showed higher across-sessions than across-subjects consistency, and much higher consistency in both respects than the FastICA results, implying that FMICA has a higher subject-specific BFN identification capability than traditional ICA.
In summary, results from the test-retest resting-state datasets demonstrated that the proposed FMICA model has high BFN identification capability at the intergroup, intragroup-specific, and subject-specific levels.
Results of Experiment 3
In this experiment, the intergroup and intragroup-specific analysis ability of FMICA was further validated by combining the test-retest resting-state datasets, the test-retest task-related datasets for motor, language, and spatial attention (i.e., Task1, Task2, and Task3), and the visual task dataset. In total, ten datasets were input to FMICA: the resting-state datasets with three sessions, the three test-retest task-related datasets (Task1, Task2, and Task3) with two sessions each, and the visual task dataset with one session. As described in Section Some Key Points in FMICA Implementation, the ICASSO method was used to determine the optimal order for the intergroup-level analysis based on the stability measure of the estimated components at each candidate order; as shown in Figure S2, the estimated components had the highest mean/median stability and relatively small standard deviation (STD) and inter-quartile range (IQR) values when the order was 57. Therefore, the order was set to 57 in Experiment 3.
Intragroup-specific BFNs for each of the 10 datasets and the corresponding intergroup BFNs, selected visually by experts from the components estimated by FMICA, are shown in Figure 8 for the first five BFNs due to space limitations; results for the remaining 25 BFNs are shown in Figure S3. The TD lobes, Brodmann areas, AAL regions, and representative MNI coordinates involved in these BFNs are recorded in Table S2. Most of the intrinsic BFNs extracted in Experiment 2 showed high reproducibility in Experiment 3, and better across-sessions than across-datasets consistency of the BFNs was also observed. Correlation analyses were performed to quantify this consistency in various cases. First, the mean and std of the correlation coefficients among the intragroup-specific BFNs from different sessions of the resting-state datasets under the same condition (e.g., sessions 1 and 2 of the resting-state datasets) were calculated and presented in Figure 9A, showing a high mean correlation of 0.8464 and thus implying high across-sessions reproducibility of the BFNs in the resting-state datasets (Wang et al., 2016b). Second, the same correlation analysis was performed across the test-retest task datasets of Tasks 1, 2, and 3 from the same subjects, with results presented in Figure 9B, again showing a high mean correlation of 0.8314 and thus implying across-tasks similarity of the intrinsic functional connectivity architecture (Finn et al., 2015). Finally, correlation analysis was performed on entirely different kinds of datasets sharing neither sessions nor tasks, with results shown in Figure 9C; the lower correlation of 0.5814, inferior to those in Figures 9A,B, demonstrates that the proposed FMICA can effectively capture the differences of intragroup BFNs across different kinds of datasets.

FIGURE 8 | The spatial map distribution of the first five BFNs at the intergroup and intragroup-specific levels in Experiment 3: each column depicts a BFN at the intergroup and intragroup-specific levels; Rest_Si denotes the ith session of the test-retest resting-state datasets; Taski_Sj denotes the jth session of Taski from the test-retest task-related datasets; Visual denotes the visual task dataset.

FIGURE 9 | The spatial correlation curves of the intragroup-specific BFNs generated by FMICA among the test-retest resting-state datasets, the three test-retest task-related datasets, and the visual task dataset: (A) the spatial map correlation curves among the intragroup-specific BFNs from the same kinds of datasets with different sessions; (B) the spatial map correlation curves among the intragroup-specific BFNs for the test-retest task-related datasets; (C) the spatial map correlation curves among the intragroup-specific intrinsic BFNs from different kinds of datasets.
In summary, the proposed FMICA was effective for intergroup and intragroup-specific analysis and could characterize group-specific differences.
DISCUSSION
In this paper, a feature-map-based BFN parcellation model, called FMICA, was proposed and its effectiveness demonstrated. FMICA consists of four main procedures: (1) the first-level ICA decomposition to extract independent-component feature maps for each subject of each dataset; (2) the second-level ICA decomposition to obtain the intragroup feature maps for each dataset; (3) the third-level ICA decomposition to acquire intergroup BFNs across multiple datasets; and (4) the ICA-R decomposition to extract intragroup-specific and subject-specific BFNs based on the intragroup feature maps and the individual IC feature maps, respectively. On one hand, because FMICA uses only the feature maps identified by single-subject ICA and incorporates the subsequent hierarchical processing steps for multi-level analysis, it can effectively handle big neuroimaging datasets with different acquisition parameters. On the other hand, the experimental results showed that FMICA has strong capability for brain network identification at the subject-specific, intragroup, intragroup-specific, and intergroup levels. For example, the results in Figures 3, 6, and 7 demonstrated FMICA's more effective identification of subject-specific BFNs in contrast to the traditional ICA method, and, based on the results in Figure 9, the intragroup-specific BFNs showed better across-sessions than across-datasets consistency while also uncovering group-specific differences in spatial distribution compared to the intergroup BFNs (shown in Figure 8 and Figure S3).
Comparison with Other Feature-Based ICA Methods
Calhoun and Allen (2013) proposed performing ICA analysis by summarizing the fMRI data of each subject as a feature map and subsequently applying traditional ICA algorithms to these feature maps, where the features could be amplitude of low-frequency fluctuations (ALFF) maps for resting-state data or T-statistic maps for task-related data; this yielded BFNs strikingly similar to, but slightly noisier than, the results of spatiotemporal group ICA analysis (i.e., TCGICA). Very recently, another feature-based ICA model using seed-based functional connectivity as the summarizing feature was proposed (Iraji et al., 2016), whose performance depends heavily on the choice of seeds. The FMICA proposed in this paper takes the spatial maps of the subjects' independent components as feature maps for group analysis. FMICA produces intragroup BFNs comparable to those of spatiotemporal-domain group ICA, as shown in Figure S4, implying that it is more effective than the first feature-based ICA model. Meanwhile, FMICA, which requires no seed-based functional connectivity identification procedure, is more flexible than the second feature-based ICA model, and it has the additional unique advantage of identifying subject-specific, intragroup-specific, and intergroup BFNs thanks to the hierarchical processing incorporated in the model.
Limitations and Future Research
Single-subject independent components were used as the input feature maps in the proposed FMICA model. However, the edges and shapes of the feature maps can be susceptible to the preprocessing steps in fMRI data analysis, such as spatial smoothing with FWHM kernels of different sizes. Therefore, one future research topic on FMICA might be to develop a more robust model to deal with the effects of the preprocessing steps on the feature maps.
Brain activity at both resting and task states is non-stationary, and it is important to characterize the dynamics of brain networks (Calhoun et al., 2014). Although only static brain functional activity is considered in the current study, FMICA also has the potential to provide new options for investigating the dynamic characteristics of brain networks.
CONCLUSION
In this study, we proposed a generalized feature-map-based ICA model, named FMICA, aimed at the ever-growing neuroimaging datasets with diverse acquisition parameters. The proposed model effectively characterizes BFNs at the subject-specific, intragroup, intragroup-specific, and intergroup levels. The success of FMICA also implies that the feature maps used as single-subject representatives can not only reduce the high dimensionality of the original fMRI data to a much smaller one, but also capture the useful common and distinct properties embedded in each original dataset. In summary, the proposed FMICA is expected to have wide applications in neuroimaging research, e.g., determining individual brain functional ROIs and characterizing differences of BFNs among individual subjects or among contrast groups.
ETHICS STATEMENT
In this study, all participants in the visual dataset, the test-retest task-related datasets, and the test-retest NYU resting-state datasets provided written informed consent according to procedures approved by the IRB of East China Normal University (ECNU), the South East Scotland Research Ethics Committee 01, and New York University (NYU), respectively. All subjects gave written informed consent in accordance with the Declaration of Helsinki.
AUTHOR CONTRIBUTIONS
Collection of fMRI data: NW, HY, WZ, and YS. Design of the work: NW and HY. Analysis and interpretation: NW, CC, WZ, YS, and HY. Drafting the article: NW, HY, and CC.
"Computer Science"
] |
Can EEG devices differentiate attention values between incorrect and correct solutions for problem-solving tasks?
ABSTRACT The affective state of an individual can be determined using physiological parameters; an important metric that can then be extracted is attention. Looking more closely at compact EEGs, such devices implement algorithms that can measure the attention and other affective states of the user. However, no information about these algorithms is available; are these feature-classification algorithms accurate? An experiment was conducted with 23 subjects who used a pedagogical agent to learn the syntax of the programming language Java while having their attention measured by the NeuroSky MindWave Mobile 2. Using a concurrent validity approach, the measured attention values were compared to band powers as well as to measures of task performance. The results of the experiment were in part successful and support the claim that the EEG device's attention algorithm does in fact represent a user's attention accurately: the analysis based on raw data captured from the device was consistent with previous literature, while the results relating task performance to attention were inconclusive.
Introduction
Learning analytics is concerned with collecting and analysing data during the learning process in order to predict, inform stakeholders, and consequently improve learning outcomes (Sinha et al., 2014). Thus, one of the challenges of learning analytics is collecting data about learners and developing data-intensive analytics methods (Knight & Buckingham Shum, 2014). In addition to cognitive data, physical data (e.g. clicks), or social network data (i.e. data related to building communities; Hoppe, 2017), learning analytics may exploit physiological data in order to better understand the cognitive learning process of learners.
According to James (1890), a psychologist and philosopher, attention is 'the taking possession by the mind, in clear and vivid form, of one out of what may seem several simultaneously possible objects or trains of thought … It implies withdrawal from some things in order to deal effectively with others.' There are many dimensions of attention, resulting in its categorization into four main types (McDowd et al., 1991): sustained attention, selective attention, alternating attention, and divided attention. Sustained attention, otherwise known as 'vigilance', can be defined as maintaining focus with a moderate level of mental effort over an extended period of time (Oken et al., 2006). Selective attention is the process of actively selecting focus on one stimulus, whether from the external environment or internal sources, while filtering out others (Johnston & Dark, 1986). Alternating attention is the ability to switch back and forth between tasks that require different cognitive processes (Sohlberg & Mateer, 1987). Finally, divided attention, commonly known as 'multi-tasking', is the activity of processing more than one stimulus at a time or reacting to multiple stimuli simultaneously.
'Sustained attention' is the focus of investigation in this research, as this type of attention is most highly related to learning and education. It has been shown in various studies (Gould et al., 2011;Klimesch et al., 1998;Makeig & Jung, 1996;O'Connell et al., 2009), not to mention by common human experience, that being able to focus and concentrate on a task results in greater task performance, whether at school, on the job, during free time, or while driving. Attention is a crucial factor in the advancement of an individual's cognitive skills, which is one reason it has been so extensively studied in the fields of psychology, neurology, biology, and physiology.
Monitoring attention along with processing its data has been accomplished by means of self-reports (along with reports from others) and brain-computer interfaces (BCI). Many applications have been developed that use data pertaining to attention in the fields of education, healthcare, and entertainment, to name a few (Al-Nafjan et al., 2017). In this paper, we will mainly focus on the realm of education and learning.
With the popularization of 'flipped' learning (Szafir & Mutlu, 2013), in which one learns via online tools such as online lectures or intelligent tutoring systems rather than by traditional methods, BCI applications, namely 'bio-cybernetic loops' (Pope et al., 2014), can be utilized to promote focused learning in (and outside) the classroom. This bio-cybernetic loop corresponds to the retrieval and processing of physiological signals (in this case, the electroencephalography signals indicating attention) and the subsequent production of biofeedback. The user can, prompted by the biofeedback, change behavior and consequently their cognitive state. Using technology that can measure a student's attention can appropriately guide how the learning style should be adapted to increase vigilance and thereby deliver optimized results for the individual. This design demonstrates the great potential of measuring physiological signals, such as those that measure attention, and then providing feedback to effectively increase information retention and improve concentration in students. However, if the values used in neurofeedback are a false representation of the student's attention, the system could be adapted in a way that does not fit the student's needs; the learning process could be hindered by altering tasks to those that are either too difficult or too easy, preventing a proper balance of engagement and motivation.
The implementation of BCIs has made it possible to connect physiological signals to technology, and the accuracy of such algorithms pertaining to attention is of utmost importance. Affordable compact physiological sensors have the potential to help bring educational tools to a wider range of users, thus supporting the notion that algorithms to calculate attention levels should be as accurate as possible.
Research question
Many compact EEGs are available on the market today, including NeuroSky, Emotiv EPOC, Muse, and OpenBCI (Farnsworth, 2017). Such devices have been tested in the past (Crowley et al., 2010;Maskeliunas et al., 2016;Rebolledo-Mendez et al., 2009;Sałabun, 2014); however, very little testing has been conducted on the accuracy of the algorithms implemented in these technologies that determine the attention level of the user. In particular, the NeuroSky biometric 'eSense Attention' algorithm has not yet been examined extensively.
One study that evaluated the accuracy of the attention algorithm implemented in the NeuroSky's EEG concluded that an accuracy of 78% was reached while conducting a psychological stress-inducing test (Crowley et al., 2010). Despite this, no correlation between low cognitive performance (i.e. making errors during the test) and the change in attention was found.
The aim of the research work presented in this paper is to test if the 'eSense Attention' algorithm corresponds to other physiological metrics, as well as see how it correlates to performance while conducting a cognitive problem-solving task to therefore judge whether it can be considered accurate. Thus, the research question to be proposed is: Can EEG devices differentiate attention values between incorrect and correct solutions for problem-solving tasks?
State of the art of physiological approaches to measuring attention

There have been many attempts at determining the correlation between physiological metrics and attention. Some examples of physiological metrics that have played a part in measuring attention levels are heart rate variability, frontal EEG asymmetry, and EEG power bands including EEG-Alpha, EEG-Beta, EEG-Delta, and EEG-Theta. Heart rate variability is the changing variance between consecutive heartbeats. Frontal asymmetry is the difference between the total power in the EEG-Alpha band of the right and left hemispheres, which can be used as a physiological response pattern to detect whether a learner has an approaching or avoiding attitude (Karran & Kreplin, 2014).
Using Google Scholar, candidate publications were found using keywords such as 'attention', 'physiology', 'alpha oscillations', 'vigilance', 'ADHD', 'frontal asymmetry', and 'EEG'. To use relevant information and sources, only literature from the year 1980 onwards was considered, as newer technology and methods of measuring attention were introduced at that time. Next, the author(s) must have either explicitly declared which of the four named forms of attention was at hand (namely sustained, selective, alternating, or divided attention), or it had to be clear from context which form of attention was implied. Additionally, the paper must have been based on scientific research, rather than intended for commercial purposes, and must have been either a literature review or a study whose methodology included a procedure for the execution of an experiment. The paper must also have described the relationship between attention and one (or more) physiological parameter(s) where a physiological signal and metric were at hand. Based on these criteria, out of the set of 75 papers, 24 were deemed relevant for the state of the art and the other 51 did not fulfill the requirements. Table 1 shows that, concerning EEG signals, multiple authors concur that EEG-Alpha activity is related to sustained attention (O'Connell et al., 2009;Başar, 2012;Ray & Cole, 1985;Aftanas & Golocheikine, 2001). It is also concluded that increasing EEG-Beta reflects an increase in sustained attention (Linden et al., 1996;Oken et al., 2006), whereas the role of EEG-Theta in attention is disputed: although Oken et al. (2006) and Linden et al. (1996) agreed that EEG-Theta increases with increased attention, Makeig and Jung (1996) concluded the opposite.
There has been extensive research on the connection between EEG signals and selective attention. Concerning the EEG-Alpha metric, alpha desynchronization (the decrease in the amplitude of the alpha waves and increase in frequency) is said by most authors to reflect attentional processes (Aftanas & Golocheikine, 2001;Gould et al., 2011;Herrmann & Knight, 2001;Herrmann et al., 2016;Klimesch et al., 1998). Other studies concluded that EEG-Alpha activity increases when rejection tasks are performed whereby someone completes a cognitive task and is internally attentive (Ray & Cole, 1985). EEG-Gamma waves are said to increase as a result of cognitive processing in response to a stimulus (Herrmann et al., 2016;Herrmann & Knight, 2001).
To the best of our knowledge, little research has coupled physiological signals to alternating attention. Maunsell (2015) reviewed studies on the correlation between alternating attention and neural activity and suggested that neural response latency decreases with an increase in attention. Similarly, research concerning the association between divided attention and physiological parameters is rare. The only research work investigating divided attention in correlation with EEG power bands was conducted by Rodrigue et al. (2015). That study aimed to determine the level of divided attention of users using the Emotiv EPOC device and concluded that the (black-box) algorithm implemented in this device was reliable; however, it was not explicitly stated to which physiological parameter divided attention was correlated.

TABLE 1 | (excerpts recovered from the original layout) Alpha band activity increases roughly 20 s before a missed target; increasing alpha corresponds to decreasing attention (temporal expectancy). Ray and Cole (1985): alpha changed during intake or rejection (attentional) activities in both hemispheres; alpha activity is associated with attention. A meditation study (27 right-handed participants who regularly practice meditation: 5 male and 6 female short-term and 7 male and 9 female long-term meditators; EEG measured across income, deep-meditation, and outcome phases, with a post-session self-report): theta band power increases with increased cognitive processing and concentration; low-alpha desynchronization correlates with vigilance; high-alpha desynchronization correlates with cognitive processing. An ADHD group showed significantly larger theta increases and significant low-beta decreases in the right frontal region than the control group during cognitive tasks, plus a delta decrease in frontal regions (Mann et al.). In 15 young adults over five half-hour sessions, who pushed one button for an above-threshold auditory stimulus and another on detecting a visual pattern on a computer screen, increases in theta- and gamma-band activity indicated increased attention (Makeig and Jung, 1996). Guich et al. (1989): in a visual vigilance task with intake of 18-F-deoxyglucose, 9 subjects without schizophrenia (3 males, 6 females; mean age 27.8 ± 8.9 years) showed a task-related delta decrease in the Cz and C4 regions, while 15 subjects with schizophrenia (14 males, 1 female; mean age 27 ± 6.4 years) showed a delta decrease in the inferior frontal regions Fp2 and F8 and higher delta levels overall than the control group. Deutsch et al. (1987): in 22 participants (ages 19-25, balanced male/female) measured by regional cerebral blood flow across 10 task conditions of differing difficulty (121 observations), right frontal cortex activation during attention-demanding tasks related to the amount of attention/vigilance needed to complete the task.
Research hypotheses
Based on the findings in the state of the art of neuroscience regarding attention, the 'black-box' algorithm for classifying attention implemented in a compact EEG device (e.g. NeuroSky's MindWave) can be examined. The hypotheses regarding EEG band power and attention are below: (1) Delta power increases with attention (Harmony, 2013;Harmony et al., 1996). (2) Theta power decreases with attention. (3) Low alpha power decreases with attention. (4) High alpha power decreases with attention.
Materials
To conduct the study, the MindWave Mobile 2 was used to capture neural oscillations from the user's scalp. The ThinkGear Connector software development kit (provided by NeuroSky) then sent the digitized neural data from the serial port to an open network socket, where the open-source software OpenViBE (Renard et al., 2010) was used to display band power and attention as well as to record the data, with accompanying timestamps, into CSV files. The device reports eight band powers, calculated for the frequency ranges delta (1-3 Hz), theta (4-7 Hz), low alpha (8-9 Hz), high alpha (10-12 Hz), low beta (13-17 Hz), high beta (18-30 Hz), low gamma (31-40 Hz), and mid gamma (41-50 Hz), respectively (NeuroSky Inc., 2009). In addition to these data, the MindWave Mobile 2 provides attention levels. No details are provided by the company NeuroSky about how the attention levels are computed, how the algorithm was developed, or how data artefacts are filtered. Thus, it is the motivation of this paper to examine whether the attention classification algorithm provided by NeuroSky corresponds to findings in neuroscience.
In order to induce mental effort and sustained attention, a cognitive task had to be chosen that could be performed over an extended period of time. As mentioned earlier, in the field of education, measuring attention and incorporating its metric into biofeedback can be used to enhance learning abilities and increase the student's concentration and focus. It was therefore deemed a good choice to use a pedagogical agent for the experiment.
The pedagogical agent chosen is called 'SYNJA'. It is an intelligent tutoring system aimed at teaching Java syntax to those without prior experience. It consists of explanations and clarifications of concepts along with follow-up tasks such as multiple-choice questions, fill-in-the-blanks, and coding exercises. SYNJA can be used in either German or English.
Two parameters from the pedagogical agent, a timestamp and a boolean value, were recorded in a separate CSV file. These parameters pertain to the time at which a question was answered while using the pedagogical agent and whether it was answered correctly or incorrectly. These data were recorded in each session to be later cross-referenced with the CSV file from OpenViBE.
Additionally, a self-report for the user was used to evaluate their subjective attention. The pre-test was completed before and the post-test after the interaction with the pedagogical agent. The pre-test and post-test questionnaires consisted of six questions which were in accordance with the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) (American Psychiatric Association, 1994) pertaining to ADHD. The pre-test questions pertained to the user's general qualities and behaviours as well as how they would gauge themselves, with respect to attention, in everyday tasks. The post-test questions pertained to the user's behaviours specifically while using SYNJA (see Appendix A for questions, Appendix B for results).
Participants
Although the trial originally consisted of 27 volunteers in total, 23 trials were deemed valid for further evaluation: two participants completed no tasks while using the pedagogical agent SYNJA, one subject experienced an extended loss of connection during the interaction, and one participant withdrew consent to having their data used. Of the remaining participants, 14 were female and 9 were male. The participants were between 19 and 30 years old, with a mean age of 24.17 ± 3.68 years. Roughly two thirds of the participants were university students.
Four of the 23 subjects chose to interact with SYNJA in the German language, all being native speakers, and the remaining 19 chose to use the English version of SYNJA. Fourteen of those participants speak English at a native level.
For optimal results, only individuals with little to no Java experience were considered, ensuring that the task would not be repetitive or familiar and thus imply 'automatic processing' (Norman & Shallice, 1986), which could prevent the individual's full concentration while completing the tasks. All participants claimed to be mentally healthy, and none had ever been diagnosed with ADHD; one participant, however, had a family history of ADHD. Before completing the trial, each participant was asked to give consent to having their information and data used for this experiment.
Procedure
In order to have consistent results across trials, a quiet and solitary place was provided for each participant to complete the questionnaires and interact with the pedagogical agent. Each trial was conducted as follows: (1) The participant was instructed on how to use the pedagogical agent (10 min). (2) The participant completed the pre-test questionnaire. (3) The device was placed on the participant's head and the OpenViBE Acquisition Server was opened; the preferences were set to ensure that the proper ports were used and that the attention and band-power metrics were collected by the MindWave Mobile 2. A user ID was assigned, OpenViBE Designer was opened, and the program was run to collect the neural waves and record them in a CSV file labelled with the participant's ID (2 min). (4) The participant interacted with the pedagogical agent and learned one to two concepts (10 min). (5) The retrieval of cerebral oscillations was stopped and the CSV file from OpenViBE was written. (6) The participant completed the post-test questionnaire (2 min).
To test the reliability and effectiveness of the procedure, a pilot test was conducted with two subjects. Trials of the procedure were run so that potential technical difficulties could be anticipated and an approach to deal with faulty data could be established.
Step 4, the intervention with the pedagogical agent that stimulates sustained attention, takes 10 minutes. This period is based on established studies of sustained attention. Early studies on sustained attention involved relatively long tasks (>10 minutes) to examine task effects (Cristofori & Levin, 2015) and focused on performance variability across time; these tasks required continuous responses to targets and non-targets, or responses only to infrequent targets. Robertson et al. (1997) proposed a 'sustained attention to response task' that differs from typical vigilance tasks in that it is brief (3 minutes), requires frequent responses (90% of trials), and does not require participants to suppress inappropriate stimuli. The tasks provided by the pedagogical agent in our experiment have similar characteristics, i.e., users are not required to suppress inappropriate stimuli. Thus, we chose a period of 10 minutes for learning one to two Java concepts.
Data analysis
This paper adopts a concurrent validity method, validating test results against a similarly conducted test with previously validated measures. This principle was used to support or reject the proposed hypotheses relating the attention level measured by the MindWave Mobile 2 to the different band powers.
Attention and band powers
Band power reflects the dominance of a certain band wave, or frequency, in a signal. Band power is calculated by averaging the square of the samples, with units of volts squared per Hertz. However, as stated by the developers of NeuroSky, the power values calculated by its software are only relative to one another and therefore have no units; instead, they are expressed in decibels. Hence, the band powers are only used to compare the strength of certain frequencies to others, and to track the change in one frequency band over a range of time.
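For comparison, a conventional relative band-power computation from a raw EEG trace might look like the sketch below, using Welch's method and the band limits given in the NeuroSky documentation cited above; the sampling rate and normalization are assumptions, and NeuroSky's own (undisclosed) on-board computation need not match this.

```python
# A hedged sketch of relative band power from raw EEG via Welch's method;
# the 512 Hz rate is an assumption about the raw stream, and NeuroSky's
# own (undisclosed) computation may differ.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 3), "theta": (4, 7), "low_alpha": (8, 9),
         "high_alpha": (10, 12), "low_beta": (13, 17), "high_beta": (18, 30),
         "low_gamma": (31, 40), "mid_gamma": (41, 50)}

def band_powers(raw, fs=512):
    f, psd = welch(raw, fs=fs, nperseg=2 * fs)   # 2 s windows
    total = np.trapz(psd, f)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        sel = (f >= lo) & (f <= hi)
        powers[name] = np.trapz(psd[sel], f[sel]) / total  # relative power
    return powers
```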
The best-fitting statistical method to analyse the relationship between the various band powers and attention was correlation, as what was sought was the association between the band-power value and the attention value. The attention value has an unknown composition, and it was therefore examined whether its values corresponded to previous research.
To obtain a normal distribution of the band-power values for each sample, a log transformation was applied to the band powers. A Pearson correlation was then used to compare the attention values with the respective band powers. If the band powers correlate with the attention values in the ways stated above, the null hypothesis can be refuted and the attention algorithm implemented by NeuroSky can be confirmed to be accurate with respect to band-power activity.
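Per subject, that analysis reduces to a few lines; the small constant added before the logarithm and the names below are illustrative.

```python
# A minimal sketch of the per-subject test: log-transform a band-power series
# toward normality, then Pearson-correlate it with the attention series.
import numpy as np
from scipy import stats

def attention_band_correlation(attention, band_power):
    log_bp = np.log(np.asarray(band_power, dtype=float) + 1e-12)
    r, p = stats.pearsonr(np.asarray(attention, dtype=float), log_bp)
    return r, p   # sign of r is checked against each hypothesis at p < 0.05
```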
Comparing attention at events of correctly vs. incorrectly answered questions
When using the pedagogical agent, the user was faced with questions based on the material they had just learned through explanations and lessons. As task performance is associated with attention, it can be deduced that high performance (responding correctly) corresponds to high attention and, likewise, low performance (responding incorrectly) corresponds to lower attention (Sykes et al., 1973). This assumption is based on previous studies in which sustained attention was measured by precision of response and reaction time during a concentration task (Falkenstein et al., 1991;Gould et al., 2011;Klimesch, 2012). '[S]peed and accuracy are used to determine an individual's ability to sustain concentration' (Flehmig et al., 2007, p. 134). Therefore, an increased number of correct responses within a given span of time is used to assess vigilance.
Additionally, according to Ballard (2001), participant characteristics are something to keep in mind when conducting a continuous performance task. To prevent biases, two different groups were observed as skill sets between subjects differed. The first group consisted of those subjects that had already had previous Java or general programming experience, and the second group consisted of those without any programming experience. As those already familiar with programming would understand basic concepts more easily, it is more likely that they would more quickly and accurately be able to answer questions using previous knowledge and experience.
The timeframe of 10 seconds leading up to the event of answering a question was considered, since before a response is given the brain is already activated in anticipation of the event of answering (i.e. contemplating the response and typing). The accuracy of the attention estimate should therefore not be hindered by taking values prior to the events of incorrect and correct answers. In addition, this allows a better estimate of the average attention at such events, as more data points can be used when a subject has only a few events. To analyse these data and draw conclusions about the relationship between attention level and correct and incorrect answers, an independent-samples t-test was performed for each subject, comparing the mean attention values leading up to the two types of event, namely correctly and incorrectly answered questions.
The average attention level for the time leading up to correctly answered questions is expected to be greater than the average attention level for the time leading up to incorrectly answered questions. Equivalently, the difference between the two means of the attention values before the respective events will be significantly greater than 0.
From the t-statistic, the p-value can be obtained by calculating the area under the tail of the t-distribution. Should the p-value be less than 0.05 and the sign of the t-statistic match that of the hypothesis (in this case positive), it can be deduced that the MindWave Mobile 2 accurately portrays a high level of attention.
Note that not all users may have both types of events during their interaction with SYNJA: some may have only correctly answered questions and others only incorrectly answered questions. Therefore, only those subjects who answered at least one question correctly and at least one question incorrectly were considered.
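A minimal sketch of this comparison, assuming each subject's 10-second pre-event attention values have already been pooled into two arrays (the names are hypothetical), and using Welch's unequal-variance form of the independent samples t-test as one reasonable reading of the procedure:

```python
from scipy import stats

def compare_event_attention(att_before_correct, att_before_incorrect):
    """One-tailed comparison of attention preceding correct vs. incorrect answers."""
    # Subjects lacking one of the two event types are excluded from the analysis.
    if len(att_before_correct) == 0 or len(att_before_incorrect) == 0:
        return None
    t, p_two_sided = stats.ttest_ind(att_before_correct, att_before_incorrect,
                                     equal_var=False)  # Welch's t-test
    # Directional hypothesis: mean(correct) > mean(incorrect), so halve the p-value.
    p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    return t, p_one_sided
```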
Pre-test and post-test questionnaires
The pre-test and post-test questionnaires are used to help explain why certain phenomena occurred. In order to analyse the questionnaires completed by the subjects, each category in the Likert-type scale was assigned a number (Very often: 5; Often: 4; Sometimes: 3; Rarely: 2; Never: 1). The higher the score, the less attentive the subject judged themself to be (see Appendix A). A paired two-sample t-test was performed on each participant's scores from the pre-test, concerning attention in general circumstances, and the post-test, concerning attention while interacting with SYNJA.
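A corresponding sketch for the questionnaire comparison, assuming pre_scores and post_scores hold one summed Likert score per participant (the variable names are ours):

```python
from scipy import stats

def questionnaire_shift(pre_scores, post_scores):
    """Paired two-sample t-test on per-participant pre- and post-test scores."""
    t, p = stats.ttest_rel(pre_scores, post_scores)
    return t, p
```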
Attention and band powers
The correlations between the various band powers and the attention value implemented by NeuroSky were calculated over 23 subjects, disregarding subjects 4, 17, 21 and 23. To evaluate the proposed alternative hypotheses, both the sign of the correlation coefficient and the p-value, which indicates significance, must be taken into consideration. Table 2 displays the results of the Pearson's correlation performed on the attention values and log-normalized band powers for each subject.
Delta band power
As seen in Table 2, the correlation coefficient of attention and delta band power was significantly different from 0. For each subject, the p-value was less than 0.05, showing a clear correlation between attention and delta band power. As the results indicated a negative correlation between the delta band power and the attention level, the alternative hypothesis, namely that the attention algorithm implemented in the MindWave Mobile 2 correlates positively with the delta band power, must be rejected.
Theta band power
The theta band power was expected to decrease with an increase in attention, and therefore result in a negative correlation coefficient. As the correlation coefficient for all 23 subjects was below zero and the p-value was less than 0.05, it can be deduced from the data that the null hypothesis is rejected and the alternative hypothesis for the relationship between theta band power and attention is supported.
Low alpha band power
Low alpha band power was expected to decrease with an increase in attention. For subjects 6, 9, 10, and 14, the relationship was not strong enough to reject the null hypothesis; for the other 19 subjects, the null hypothesis was rejected. In the two subjects, 6 and 10, where a positive correlation was calculated, the significance values were not high enough to confidently confirm the nature of the relationship. It can therefore be deduced from the rest of the results that the correlation between low alpha band power and attention is negative. This is consistent with the assumption made on the basis of previous studies.
High alpha band power
The alternative hypothesis regarding the high alpha band power was that it is negatively correlated to attention. The subjects 6, 10 and 13 displayed correlation coefficients of a positive sign. However, all three of these subjects had a p-value of greater than 0.05, indicating that these results were not significant. Aside from these three subjects, subjects 11 and 14 also did not show significant correlations. Therefore, the null hypothesis, that there is no significant correlation between attention and high alpha band power, for the subjects 6, 10, 11, 13, and 14 cannot be rejected. The remaining 18 subjects did indeed display a significant negative correlation. The results of these subjects reject the null hypothesis and support the alternative hypothesis.
Low beta band power
The low beta band power was expected to increase with an increase in attention. Significant results were found for only roughly half of the subjects. Subjects 2, 5, 6, 7, 8, 10, 12, 14, 16, 18, 19, 26, and 27 had significant correlations with p-values less than 0.05; for the remaining subjects, the null hypothesis could not be rejected on the basis of the p-value. Of the subjects with significant correlation coefficients, those with negative correlations were subjects 2, 7, 8, 12, 16, 18, 19, 26, and 27, and those with positive correlations were subjects 5, 6, 10, and 14. Based on this data, a definite conclusion cannot be drawn, as 10 of 23 correlation coefficients were not significant, and those that were significant did not share the same sign.
High beta band power
Lastly, high beta band power was expected to increase with an increase in attention. For all subjects, the correlation coefficient was positive, and 19 of 23 subjects had a significant correlation with a p-value lower than 0.05. These results are in keeping with the alternative hypothesis regarding the high beta band power; the null hypothesis can be rejected for these subjects. Subjects 7, 16, 18 and 27 did not have a significant correlation coefficient, and therefore the null hypothesis cannot be rejected for them. Nonetheless, as the vast majority of values were significant, there is strong evidence of a positive correlation.
Comparing attention at events of correctly vs. incorrectly answered questions
To compare the means of attention leading up to correctly answered questions and incorrectly answered questions, an independent samples t-test was performed in which the sample sizes were assumed to differ. Some sets of data had to be removed, as only datasets from subjects who answered at least one question correctly and one question incorrectly were included. Based on these criteria, subjects 11, 13, 15, 16 and 20 were removed from this analysis in addition to those whose data had already been removed, namely subjects 4, 17, 21 and 23, leaving 18 subjects to analyse.
As seen in Table 3, the null hypothesis was rejected in the cases of subjects 1, 2, 5, 18 and 25. The t-statistic for these subjects was greater than 0 and the p-value was less than 0.05, indicating significance. This entails that the mean attention preceding correctly answered questions was significantly greater than the mean attention preceding incorrectly answered questions. In the other cases, the null hypothesis could not be rejected.
This result shows weak evidence of a difference in attention between correctly and incorrectly answered questions, as only 5 of 18 subjects' results rejected the null hypothesis.
Discussion and limitations of the study
The results of the testing of association between certain band powers were to an extent inconsistent with the hypotheses proposed. When taking delta band power into consideration, there was a significant negative correlation with attention for every subject. This is inconsistent with previous research from Harmony et al. (1996) and Harmony (2013), where delta power is said to increase with internal concentration. However, as delta oscillations have an inhibitory effect, as demonstrated in the case of deep sleep (Amzica & Steriade, 1998; Banquet & Sailhan, 1974), different attentional networks are inhibited while others are not. As in the studies mentioned, internal processing was favoured while external stimuli were inhibited. Depending on the task at hand, and the subsequent activation of different areas of the brain, an inhibitory effect can be observed where the sensor measures the brain oscillations. It can then be assumed that the attention algorithm from NeuroSky anticipates this inhibitory effect. (Notes to Table 3: n1 indicates the number of observations before correctly answered questions and n2 the number before incorrectly answered questions; μ1 and μ2 indicate the mean attention of the time leading up to correct and incorrect answers, respectively; df indicates the degrees of freedom.)
As for the band powers of theta, low alpha and high alpha, the majority of the subjects' results (and, in the case of theta, all of them) were significant enough to reject the null hypothesis, indicating that these did indeed correlate with attention as seen in previous studies. The attention algorithm implemented by NeuroSky does indeed reflect the relationship between these specific band powers and attention.
Regarding low beta band power, only 4 of 23 subjects had significant results that supported the alternative hypothesis. In speculating as to why such weak results were obtained, one must consider that NeuroSky differentiates between low and high beta band power, whereas previous studies did not. As high beta band power did significantly and positively correlate with attention in most cases, supporting the alternative hypothesis, an explanation is needed for why low beta band power did not correlate in the same way. Perhaps the developers did not take low beta into consideration when calculating the attention value. Another study suggested that attention could be measured using a ratio between the sums of the power spectral densities of the alpha and beta bands (Liu et al., 2013). It was also suggested that beta does not directly affect attentiveness; rather, the relationship between the alpha and beta bands is of high importance. In the case of the MindWave Mobile 2, this could explain the unexpected results for the correlation between low beta band power and attention, since the alpha band power was not taken into consideration when observing the beta band power. In conclusion, to account for the discrepancy between the expected and actual correlation between low beta power and attention, the developers of NeuroSky may have placed more weight on the high beta band than the low when computing the attention value, or, as suggested by Liu et al. (2013), a ratio rather than a direct relationship between band powers and attention may have been used.
Based on the results regarding the anticipated relationships between band powers and attention, and considering the potential reasons for discrepancies with the proposed hypotheses, the accuracy of the attention algorithm can be validated. Concerning the relationship between the success of the tasks completed by the subjects while using SYNJA and the attention level recorded by the MindWave Mobile 2, the results did not match expectations. There are many possible reasons why this was the case. One factor could be the time constraint of the interaction with the pedagogical agent. Some participants took more time than others to read and understand the lessons; with only ten minutes to interact with SYNJA, there was potentially not enough time to learn and understand Java concepts well enough to complete the tasks correctly. This also did not give the user much of a chance to become accustomed to how SYNJA works. Only simple instructions for the pedagogical agent were given, and no practice run was performed beforehand for the participant to become comfortable with the software. Had a practice trial been given, the results could have better represented the relationship between task performance and the attention level preceding correctly answered questions.
In addition, each subject had a different skill set when it came to programming in Java. By only allowing subjects with limited or no Java experience to participate, biases were avoided to a small degree. Despite this, some participants, for example those who study in a scientific faculty, had more general programming knowledge than others. By using two different analyses, this bias was partially removed. Nevertheless, some subjects were able to understand concepts more quickly than others despite not having programming experience. After the interaction with SYNJA, some participants commented that the formulation of explanations of Java concepts was not clear. Moreover, subject-specific vocabulary was not understood, especially by those who had no previous programming knowledge. Some participants had trouble understanding the language and wording, whether English or German, depending on which version of SYNJA they chose to use. As five of the participants using SYNJA did not speak English at a native level, this could account for some of the results and comments made by the participants.
In another study, Chen and Wu (2015) obtained similar results in which sustained attention did not correlate with learning performance. Therefore, it can more plausibly be assumed that sustained attention is related to cognitive load rather than to learning performance. 'While referring to the mental effort imposed by instructional activities, their design, and presentation, extraneous load does not contribute directly to an understanding of material' (Chen & Wu, 2015, p. 109). This could explain why the mean attention prior to incorrect answers was, in most cases, not significantly lower than that prior to correct answers. Although cognitive load was not measured, the loads for completing the different tasks given by SYNJA were comparable, independent of the response of the participant. When the material was not understood, cognitive effort was still applied. This can, to some extent, be seen in the subjective data recorded by the pre- and post-tests, as around one third of the participants perceived their attention to be higher while using SYNJA than in normal circumstances (see Appendix B).
In conclusion, the results based on the relationship between the eSense attention metric and band powers are in favour of the accuracy of the NeuroSky MindWave Mobile 2. Still, the conclusions drawn from the problem-solving tasks prompted by SYNJA did not produce significant results that back up the hypothesis about the relationship between task performance and attention.
Conclusion
In order to collect and analyse physiological signals to interpret the affective state of the user, wearable and compact physiological devices (e.g., ECG and EEG) can be used. However, the accuracy of the classification algorithms of those devices should be concurrently validated. This paper has proposed a concurrent validity approach using findings in neuroscience regarding the physiological metric 'attention'. This proposed approach has been demonstrated with the wearable EEG device, the NeuroSky MindWave Mobile 2, which is the second contribution of the paper. Based on the results of the correlation between the different band powers and the attention values calculated by the device, it can be concluded that NeuroSky's attention algorithm accurately classifies the attentional states of learners. NeuroSky's EEG device has been validated for the first time in the context of learning, which is the third contribution of this paper. Important physiological indicators of sustained attention, relevant to the research question, include the alpha, beta, delta, and theta bands captured by EEGs. As many compact EEGs suitable for educational settings are on the market today, it is important to investigate the accuracy of such attention metrics because of their application in many domains, including education and learning. Advantages of the concurrent validity approach include flexibility in the choice of the task used to induce attention or other physiological states. In this case, it proved to be a good choice to home in on the use of EEGs in the education sector and to compare the neurological signals with performance on attention-related tasks. A disadvantage of this method is that the user's perception of their attention may differ from the calculated value, causing discrepancies in the analysis of the accuracy of such algorithms.
In order to take advantage of the great potential of using physiological data to improve learning, more research and testing should be conducted regarding classification algorithms implemented in BCIs, including attention and meditation, among others. The use of concurrent validity, such as in the experiment conducted, is a good starting point to further assess, and therefore make improvements on, more commercial EEG devices, as well as other BCIs, as their use in educational settings is gaining popularity with good reason.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Appendices
Appendix A
Pre-test and post-test questionnaires
Pre-test questionnaire
- Do you usually avoid or delay starting new tasks?
- How often do you find that work or assignments are boring or repetitive, making it difficult to complete them?
- Do you make careless mistakes when performing a boring, repetitive, or difficult task?
- How often do you feel restless or fidgety?
- Are you usually distracted or find difficulty concentrating when there is activity or noise around you?
- Do you have difficulty concentrating on people when they talk to you?
Post-test questionnaire
- Did you tend to avoid or delay getting started when learning a new concept in SYNJA?
- Did you find that the majority of the tasks while using SYNJA were boring or repetitive, making them difficult to complete?
- Did you find yourself making careless mistakes when using SYNJA?
- How often did you feel restless or fidgety while using SYNJA?
- Were you distracted (or did you find it hard to focus) when there was activity or noise around you while using SYNJA?
- Did you have difficulty concentrating on SYNJA when she was interacting with you?
All of the questions above were to be answered with one of five choices: (1) Very often, (2) Often, (3) Sometimes, (4) Rarely, (5) Never.
"Computer Science"
] |
Application of knowledge for automated land cover change monitoring
This paper outlines an approach for updating baseline land cover datasets. Knowledge about land cover, as used during manual mapping, is combined with simple remote sensing analyses to determine land cover change direction. The philosophy is to treat reflectance data as one source of information about land cover features. Applying expert knowledge with reflectance and biogeographical data allows generic solutions to the problem. The approach is demonstrated in areas of semi-natural vegetation and shown to differentiate ecologically subtle but spectrally similar land cover classes. Further, the advantages of manual mapping techniques and of high resolution remotely sensed imagery are combined. This approach is suitable for incorporation into automated approaches: it makes no assumption about the distribution of land cover features, can be applied to different remotely sensed data and is not classification specific. It has been incorporated into SYMOLAC, an expert system for monitoring land cover change.
Introduction
Before satellite imagery became so freely available in the 1970s, aerial photography was commonly used to map land cover. During aerial photograph interpretation (API), land cover is mapped manually. The interpreter combines specific expertise, such as knowledge of the relations between land cover features and various biogeographical gradients and of the landscape context in which different land covers are found, with the features' appearance in the aerial photograph. By using contextual information, much greater land cover detail is captured (Paine 1981, Lillesand and Kiefer 1987). However, photographic data are relatively expensive and require much human effort to extract thematic information. Now land cover is more usually mapped from remotely sensed imagery recorded by sensors mounted on satellites. Satellite imagery is cheap compared to alternative data sources such as aerial photography, covers large areas and has a high temporal frequency. However, the granularity of the land cover information derived from such imagery is limited by the spatial resolution of the data and the number of land cover feature classes that can be reliably identified by their reflectance properties alone. Typically, spectrally distinct cover types are easily classified, whilst other more spectrally heterogeneous land covers are less reliably identified. Improvements can be made by fine tuning the analysis, but the results are frequently instance specific and subjective.
The problem we addressed was how to use satellite imagery to update an ecologically detailed land cover dataset. The cost of a repeat aerial photograph survey with API is prohibitive, the extent of cloud free coverage provided by very high resolution (<5 m pixel) satellite data is poor, and the granularity of land cover information that can be extracted reliably from medium resolution satellite imagery such as Landsat Thematic Mapper (TM) is low.
In this paper we present an approach for determining land cover change direction that uses API knowledge of land cover biogeographical characteristics and class specific knowledge combined with simple remote sensing analyses. We show how this approach: (a) marries the benefits of API with those of satellite remotely sensed data; (b) avoids the specificity of many remote sensing analyses; (c) is generic in terms of its applicability to other change direction problems.
Land cover of Scotland 1988
The Land Cover of Scotland 1988 (LCS88) survey (MLURI 1993) provides a baseline census of land cover information. It was manually classified from an aerial photograph survey at 1:24 000 scale, before being digitized into a Geographic Information System (GIS). The objective of LCS88 was to record information specific to the Scottish landscape, particularly upland semi-natural vegetation and to this end it describes the distribution of 126 land cover classes.
Mapping semi-natural vegetation from satellite imagery
Mapping semi-natural land cover from remotely sensed imagery is difficult. A review of the remote sensing literature specific to upland semi-natural vegetation supports this statement. Belward et al. (1990), studying semi-natural vegetation using Landsat TM data, concluded that it would be inappropriate to try to match spectral classes with semi-natural land cover classes. Baker et al. (1991) found that spectral classification of SPOT HRV (Systeme Probatoire pour l'Observation de la Terre High Resolution Visible instrument) data alone would not discriminate between semi-natural vegetation types. Whilst Weaver (1987), using simulated Landsat data, concluded that discrimination of moorland vegetation was possible, her conclusions have not been endorsed by more recent work that has examined the use of actual Landsat TM data with reference to semi-natural moorland vegetation, such as Wright and Morrice (1997), Gauld et al. (1997), Bird et al. (2000) and Taylor et al. (2000). Wright and Morrice (1997) found it difficult to match LCS88 land cover features to Landsat TM spectral capabilities. Gauld et al. (1997) concluded that unsupervised segmentation of Landsat TM imagery bore little relation to the ecological classes on the ground. Work on monitoring landscape change in the UK National Parks has shown that current satellite data are not suitable for mapping land cover features and analysing land cover change in UK National Parks containing a large amount of semi-natural moorland and heath land covers.
The use of auxiliary data in remote sensing analyses
Consistent and explicit calls have been made for remote sensing analyses to incorporate knowledge or ancillary data into the classification process (e.g. Green et al. 1994, Mattikalli 1995, Foody and Hill 1996, Stuckens et al. 2000). Mapping of land cover features would be improved if other data were applied (e.g. Holmgren and Thuresson 1998). Due to the classification detail of LCS88, this trend is reflected in work that has considered how LCS88 may be updated (Birnie 1996, Horgan et al. 1997, Wright and Morrice 1997). This makes sense for two reasons. First, subtle variations in land cover botany may be obscured by sensor specifications such as pixel size (Fisher 1997) and may be difficult to discern due to image-specific characteristics (Verstraete et al. 1996) or the nature of the landscape under investigation. Secondly, land cover classes are commonly defined by their biophysical properties such as species composition, biogeographic position and landscape context (Comber et al. 2001).
Summary
The difficulties of identifying detailed semi-natural land cover features from data such as Landsat TM arise because they are: (a) spectrally indistinct (Wright and Morrice 1997); (b) not necessarily defined on their physical reflectance properties alone, rather by other objectives such as policy (e.g. MLURI 1993); (c) only subtly different in botanical terms and class identification may depend on biogeographic context (Comber et al. 2001).
Therefore, in semi-natural environments, the advantages of using remotely sensed satellite data (speed of image capture, data cost, areal coverage, repeatability) are offset by difficulties in reliably identifying semi-natural land cover features. In these situations traditional data oriented change methodologies may be inappropriate. Typically, the conclusion about analyses that proceed in this way is that they work for some sets of classes and in some areas, and not in others (for example, Lyon et al. 1998, Macleod and Congalton 1998, Mas 1999). A further problem is that their specificity makes them difficult to incorporate into generic, expert systems for monitoring land cover change, such as SYMOLAC (Skelsey 1997).
Materials and methods
In this section we describe how knowledge of land cover features from different sources can be identified and then combined. Necessarily this involves some data analysis. The data is described followed by descriptions of land cover knowledge and an outline of how all the information in this section was applied to the change direction problem.
Data
The area of analysis was a 40 km by 41 km area around Elgin in north eastern Scotland. This area contained some 3996 LCS88 polygons. Of these, the classes with populations of more than 20 polygons were used in the analyses described below: some 3465 polygons, or 91.4% of the test area in total.
A 20 m binary raster grid of each LCS88 polygon was generated using ArcInfo's POLYGRID command (ESRI 2001). Landsat TM data of the area from 1987 (the nearest date to the air photograph survey for which cloud free coverage could be obtained; see Wright and Morrice 1997) and Landsat Enhanced Thematic Mapper (ETM) data from 2000 were registered to the British National Grid from Ordnance Survey point map data and resampled to 20 m. The 20 m cell size was chosen as a compromise between minimizing information loss during LCS88 land cover parcel conversion to raster format and maximizing the information content of the Landsat imagery.
Soil Quality and Soil Wetness datasets were derived from the digital 'Quarter Million' soil series produced by the Macaulay Institute in 1984 (Macaulay Institute For Soil Research 1984). 1 km Mean Annual Rainfall data for the area was obtained. This data is described in Matthews et al. (1994). Ordnance Survey's 50 m DEM was used to generate a Slope dataset using ArcInfo's SLOPE command (ESRI 2001). All of these datasets were resampled using a cubic convolution to 20 m rasters from their original resolutions for ease of data overlay in the analysis using the RESAMPLE command in the GRID module of ArcInfo (ESRI 2001).
LCS88 land cover knowledge
Land cover knowledge is given in three parts. First, we describe the information used during API by experts. This includes the position of land cover features in various environmental gradients. Secondly, we detail how simple descriptions of land cover class reflectance properties can be derived from remotely sensed data. Thirdly, an approach for extracting information about an individual region of land cover change is given.
API
Air-photograph interpreters involved in the LCS88 project were interviewed. Knowledge of how they mapped different land cover classes and the nature of their expert knowledge was identified. This included land cover related facts or principles, rules and heuristics. They described their mapping processes (e.g. which features were identified first and why), class specific information about how they mapped and differentiated amongst each of the LCS88 land cover classes, and the class to class transitions that were possible and under which scenarios. The resulting information, specific to individual land cover classes, included descriptions of the feasible changes, the scenarios under which the changes might occur and information about the typical biogeographical position of each class in a range of dimensions. In API the interpreter identifies specific classes by bringing together all this information. An example of this knowledge for different LCS88 grassland classes is illustrated in table 1.
Reflectance
The objective was to assess the reflectance characteristics of LCS88 land cover classes to determine the extent to which they are separable using Landsat TM data. Each LCS88 polygon grid was used as a template to punch out the appropriate portions of the 1987 Landsat TM imagery in PV-WAVE (Visual Numerics 2001). A histogram of the reflectance properties of each land cover polygon, excluding edge pixels, was generated for each band and for a standard Normalized Difference Vegetation Index (NDVI) value. For each polygon, in each band, the median value was determined. The median values for the polygons of each class were placed in a histogram, and from that the median and inter-quartile range (IQR) of the class medians were extracted. The median and IQR give an indication of the typical spectral characteristics of all the polygons in a given class. The extent to which the reflectance values of the different land cover classes in Landsat TM band 2 were separable is shown in figure 1. Whilst only band 2 is illustrated, the same trends were shown in the other bands and the NDVI. Two clear patterns were evident: spectral overlap between individual land cover classes, and the similarity of Summary class elements, as indicated by the IQRs and medians respectively.
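The per-class characterization described above can be sketched as follows, assuming the per-polygon pixel values have already been extracted for one band (the data layout and names are our assumption):

```python
import numpy as np

def class_spectral_summary(pixels_by_class):
    """Median and IQR of per-polygon band medians for each land cover class.

    pixels_by_class: {class_name: list of 1-D arrays, one array of band
    reflectances per polygon (edge pixels already excluded)}.
    """
    summary = {}
    for cls, polygons in pixels_by_class.items():
        medians = np.array([np.median(px) for px in polygons])  # one median per polygon
        q1, q3 = np.percentile(medians, [25, 75])
        summary[cls] = {"median": float(np.median(medians)), "iqr": float(q3 - q1)}
    return summary
```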
Generating change area information
An area of change was identified which in 1988 formed part of an LCS88 polygon of 'Dry Heather Moorland, no rocks, no scattered trees, no muirburn'. The change area location and context are shown in figure 2 and its spectral properties in figure 3.
The knowledge acquired from the air-photograph interpreters described the typical positions of different land cover classes in different environmental gradients: slope, soil wetness, soil quality and climate (rainfall). The different component soil types were allocated 'wetness' and 'quality' scores from 1 (driest and poorest) to 5 (wettest and richest) by one of the expert soil surveyors at the Macaulay Institute, Aberdeen. The slope values were allocated slope scores of 'very steep' (>25°), 'steep' (16-25°), 'tractor accessible' (9-15°), 'gentle' (3-8°) and 'flat' (0-2°). The mean annual rainfall values were allocated wetness scores of 'very wet' (>1600 mm year⁻¹), 'wet' (1200-1600 mm year⁻¹), 'average' (1000-1199 mm year⁻¹), 'dry' (800-999 mm year⁻¹) and 'very dry' (<800 mm year⁻¹). These ranges were identified from the API knowledge acquisition exercise (§3.2.1). The median position of the change area in each of these environmental gradients was determined so that its characteristics could be compared with the API expert descriptions. The median and IQR positions of the change area in six bands and a standard NDVI were extracted from 2000 Landsat ETM data to determine the band in which the change area was least variable.
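The ordinal scoring of the biogeographical gradients can be expressed directly as threshold rules; a minimal sketch using the ranges quoted above (the handling of values exactly on class boundaries is our assumption):

```python
def slope_score(degrees):
    """Ordinal slope class, per the ranges elicited from the API experts."""
    if degrees > 25: return "very steep"
    if degrees >= 16: return "steep"
    if degrees >= 9: return "tractor accessible"
    if degrees >= 3: return "gentle"
    return "flat"

def rainfall_score(mm_per_year):
    """Ordinal climate wetness class from mean annual rainfall."""
    if mm_per_year > 1600: return "very wet"
    if mm_per_year >= 1200: return "wet"
    if mm_per_year >= 1000: return "average"
    if mm_per_year >= 800: return "dry"
    return "very dry"
```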
Outline approach
The analysis and application of knowledge in the walkthrough was partitioned into three general stages, as follows.
Stage 1: Generate a large set of all possible change hypotheses (SET 1). Reduce this set to a smaller set (SET 2) by relegating some of the possible land cover change directions. This stage uses expert API knowledge to identify possible transitions.
Stage 2: Compare the reflectance characteristics of the change area with those of the remaining candidate land cover classes to narrow the set of candidate hypotheses down further (SET 3). This stage uses simple analysis of change area spectral properties to identify the change area summary class. The approach described in §3.2.2 with 1987 Landsat TM data (used there to establish the difficulty of identifying LCS88 land cover classes from their spectral characteristics alone) is now applied to 2000 data.
Stage 3: Apply land cover class specific knowledge to differentiate amongst the hypotheses contained in SET 3. At this stage the expert knowledge is returned to in order to determine the land cover change direction.
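The three stages above amount to successive filtering of a hypothesis set; one way to sketch this, with rule predicates and class spectral medians as inputs (all names and the exact data structures are our assumptions):

```python
def narrow_hypotheses(set1, rules, change_area_median, class_medians):
    """SET 1 -> SET 2 via expert rules, then SET 2 -> SET 3 via spectral closeness.

    set1: candidate change-direction classes; rules: predicates returning True
    when a hypothesis should be relegated; class_medians: {class: median band-2
    reflectance of its LCS88 polygons}; change_area_median: band-2 median of
    the change area.
    """
    set2 = [h for h in set1 if not any(rule(h) for rule in rules)]
    if not set2:
        return []
    distances = {h: abs(class_medians[h] - change_area_median) for h in set2}
    cutoff = max(distances.values()) / 2  # the 'half the maximum distance' criterion
    return [h for h in set2 if distances[h] <= cutoff]
```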
Results
In this section we describe how the methods described in § 3 were applied to an actual change problem. After describing the 'Walkthrough' example, a series of other results are presented in tabular form. The change area was introduced in § 3.2.3 and is LCS88 class 'Dry Heather Moorland, no rocks, no scattered trees, no muirburn'.
Walkthrough example
Stage 1: Generate a large set of all possible change hypotheses (SET 1). Reduce this set to a smaller set (SET 2) by relegating some of the possible land cover change directions.
The API expert described the class to class land cover transitions that were possible and under which scenarios. For a polygon of 'Dry Heather Moorland, no rocks, no scattered trees, no muirburn', some 66 initial change directions are possible (SET 1). The set is reduced to 12 competing hypotheses (SET 2) by applying some of the API knowledge. The rules, and the number of hypotheses each causes to be relegated, are shown in table 2.
Stage 2: Compare the reflectance characteristics of the change area with those of the remaining candidate land cover classes now to narrow the set of candidate hypotheses down further (SET 3).
The spectral characteristics of the change area were extracted and then compared with LCS88 land cover populations. The lowest IQR for the change area determined the Landsat TM band in which the change area showed the greatest homogeneity. Table 3 shows that the change area was least variable in band 2, with the lowest IQR. The median positions of the remaining hypothesized land cover change directions were compared with the position of the change area in Landsat TM band 2. Those that were closest form SET 3. Closeness was arbitrarily set at half the maximum distance to avoid specifying a numeric threshold. From table 4, SET 3 contains five elements.
Stage 3: Specific land cover knowledge is applied to differentiate amongst the hypotheses contained in SET 3. Table 5 details the biophysical evidence about the change area and the remaining five hypotheses in SET 3, including expert knowledge about the origins of each of the five candidate changes. According to the experts, the likely transitions from Heather Moorland were to Undifferentiated Rough Grassland and Undifferentiated Smooth Grassland. Of these, the change hypothesis with the most support from all the different sources of evidence and land cover knowledge was a change to 'Undifferentiated Smooth Grassland: no rocks, no scattered trees'. (The rules of table 2 include 'There will be no changes to Peatland vegetation', 'Changes to bracken and agriculture are from adjacent areas', 'No changes in scattered tree status in 20 years' and 'Forestry will not be planted and felled in <30 years', together with the number of hypotheses each relegates.) Although formal methods for combining such evidence exist (for instance Dempster-Shafer, Bayesian probabilities, endorsement theory), these are not within the scope of this work and are presented elsewhere (see Comber et al., in press).
Validation by field visit
A field visit to the change area was undertaken in June 2001 and the change area was photographed. The photographs were examined in the laboratory by an expert (familiar with the area, field mapping and LCS88 land cover classes) to identify the species composition and land cover present. The photographs of the change area are presented in figures 4(a) to (e). The ecologist considered the change area to have been over-burned in terms of intensity, and in too concentrated an area, in 1997 or 1998. As a result the heath is regenerating very slowly, and there is a grassier flush than would normally be expected in a post-burn environment of this age. What these images show is the extent to which the classic dwarf shrub heath found in Dry Heather Moorland (Calluna vulgaris) has been knocked out by the burn. In the ecologist's opinion the land cover of this area has changed to the single feature class of 'Undifferentiated Smooth Grassland: no rocks, no scattered trees' and may eventually revert to 'Dry Heather Moorland, no rocks, no scattered trees, no muirburn' provided that it is not overstocked.
Other results
The results of three further examples are described in table 6. In each case land cover knowledge was successfully applied to augment simple remote sensing analyses and identify the land cover change direction; the change direction was correctly identified as one of the two hypotheses, and the change was due to mis-management over two different land cover parcels.

Discussion

Discussion of results

There are two general problems with the approach. First, despite identifying the 'correct' land cover change direction, it is always possible that, due to socio-cultural norms, the land use gets mapped rather than the land cover. All land cover classifications confuse the differences in ontology between land cover and land use. So in the walkthrough described in §4.1, for instance, it is possible that the area of change may be re-mapped as moorland with muirburn, rather than the actual cover present on the ground. Secondly, land management has caused all of the changes considered in the walkthroughs. Whilst this has long been recognized, it presents problems when seeking to discern subtle shifts between semi-natural land cover classes. Dramatic changes in management, by design or by error, are by their nature difficult to predict and model. Yet despite these problems, the approach has shown that it is possible to separate spectrally similar land cover classes (for instance grassland) by applying some general common sense and some land cover class-specific ecological knowledge.
Remote sensing issues
Since satellite imagery first became available in the 1970s there have been considerable developments: increased data availability, more sensors, many resolutions, and variable frequencies in the electromagnetic spectrum. However, these have not been matched by developments in image processing. Analyses remain specific to the data, the application or the area under investigation, with the result that algorithms developed for one application will produce different results with another image scene. This is part of a land cover mapping paradox that is avoided by the remote sensing community: land cover features identified in remote sensing analyses are commonly described in terms of their botanical, floristic, ecological, biogeographical or other biological characteristics, yet they are defined from the image data on their reflectance characteristics alone. The reason for this is the primacy given to the remotely sensed data itself. In this work we have taken steps to address this paradox by focusing on how best to achieve the aims of the analysis. This task oriented approach to land cover mapping, as introduced by Skelsey (1997), considers remotely sensed data as only one of a number of useful datasets to be used to solve the change direction problem. In one sense this is already implicitly acknowledged by many land cover mapping exercises that define the classes they identify in terms of their biology.
Generic solutions
The methodology presented here for determining change directions to LCS88 can be readily adapted to other baseline land cover surveys given some signal of change, an expert familiar with the land cover concepts and some environmental data. The stages in this are: (a) identify the land cover transition pairs that are possible; (b) identify the defining biogeographical characteristics of the land cover classes; (c) elicit some simple rules from experts that are familiar with the data and map concepts to eliminate some of the transitions; (d) use simple remote sensing analyses to characterize the land cover classes (we used means and inter-quartile ranges) and to identify the general land cover change direction (in this case at a summary class level), eliminating some further candidate change directions; (e) compare the change area characteristics with those of the remaining possible change directions.
These steps make no assumptions about any underlying distributions of the data. The results presented here have been successfully implemented inside SYMOLAC, an automated land cover monitoring system developed by Skelsey (1997) and extended by Comber (2002).
Conclusions
The main findings of this work are that the integrated approach combining API expert knowledge with simple reflectance characterizations of land cover classes from satellite imagery allows ecologically and spectrally subtle shifts in land cover type to be identified. This method produces a solution that is both inexpensive and at a fine degree of thematic land cover detail, thereby maximizing the advantages of both types of mapping approach. It also suggests that more meaningful environmental monitoring is possible than current estimations of gross land cover stocks such as 'forest' and 'rangeland'. Of perhaps wider significance are, first, the applicability of this approach to automated and semi-automated land cover monitoring exercises and, secondly, the preservation of the value of original baselines such as the Land Cover of Scotland 1988 Survey, which would otherwise be lost due to their unrepeatability.
"Environmental Science",
"Mathematics"
] |
Minimal path decomposition of complete bipartite graphs
This paper deals with the subject of minimal path decomposition of complete bipartite graphs. A path decomposition of a graph is a decomposition of it into simple paths such that every edge appears in exactly one path. If the number of paths is the minimum possible, the path decomposition is called minimal. Algorithms that derive such decompositions are presented, along with their proofs of correctness, for three of the four possible cases of a complete bipartite graph.
Introduction
A path decomposition of a graph is a decomposition of it into paths such that every edge appears in exactly one path. If the number of paths is the minimum possible, the path decomposition is called minimal.
A complete bipartite graph is a graph with its nodes partitioned in two sets, such that no edge that connects nodes of the same set exists in the graph, and all edges that connect nodes of the two sets exist in the graph.
In this paper, the subject of minimal path decomposition of complete bipartite graphs is investigated. The complete bipartite graphs are split into four cases that cover every possible instance of them. Algorithms that provide the actual paths of a minimal path decomposition are presented for three of the four possible cases. A proof of correctness is also given for the presented algorithms.
To the best of our knowledge, no algorithms can be found in the literature that provide minimal path decompositions of complete bipartite graphs. Relevant work can be found in Alspach (2008) and Bryant (2010), where the cases of complete graphs of even and odd order, respectively, are investigated. The subject of decomposing a graph into paths of a certain length is investigated in Parker (1998), Truszczyski (1985), Zhai and Lu (2006). Work concentrated on the theoretical analysis of the subject of path decomposition can be found, among others, in Haggkvist and Johansson (2004), Thomassen (2008a), Thomassen (2008b), Heinrich (1992), Dean and Kouider (2000), Tarsi (1983), Lovasz (1968), Fan (2005), Pyber (1996), Harding and McGuinness (2014), Donald (1980). The remainder of the paper consists of the following sections: the necessary notation and definitions are given in Sect. 2; the general framework applied for the derivation of the proposed algorithms is presented in Sect. 3; the proposed algorithms are presented in Sect. 4; and the conclusions and ongoing research are given in Sect. 5.
Preliminaries
The graphs considered in the current paper are undirected, connected, without multiple edges between the same pair of nodes and without self-loops (i.e., without edges that connect a node to itself). The notation G = (V, E) stands for a graph with the aforementioned characteristics, consisting of n = |V| nodes and m = |E| edges. The notation x ↔ y represents the (undirected) edge that connects nodes x and y. The nodes are labeled with the numbers 1 to n. The difference between the labels of two nodes x and y is defined as |x − y|. Two edges x ↔ y, x' ↔ y' are identical if x = x' and y = y', or if x = y' and y = x'. For this case, obviously, |x − y| = |x' − y'|.
By the notation simple path we mean a path where each node appears at most once. The Path Decomposition (PD) of a graph consists of a set of simple paths (PD-paths) that are edge-disjoint and every graph edge appears in exactly one of them. If the number of these paths is the minimum possible, the decomposition is called Minimal PD (MPD), and the corresponding paths are called MPD-paths.
For the derivation of the MPD-paths, a Path Matrix (PM) is created. The elements of this matrix are the graph nodes. Therefore, the notions element and node are used interchangeably throughout the paper. The position (or place) of the element found in the ith row and jth column of the PM is denoted by . If the two ending nodes of the path found in the ith row are the first and last element of this row, then we say that this path consists of the complete row i. For complete bipartite graphs, set V is split in two sets V 1 , V 2 such that V 1 ∪V 2 = V , V 1 ∩V 2 = ∅, |V 1 | = n 1 , |V 2 | = n 2 , (therefore n 1 +n 2 = n). Without loss of generality, throughout the paper it is assumed that n 2 ≤ n 1 . Set E consists of all edges x ↔ y such that x ∈ V 1 and y ∈ V 2 . Nodes of set V 1 are labeled with the numbers from 1 to n 1 , and nodes of set V 2 with the numbers from n 1 + 1 to n. The complete bipartite graph, using the aforementioned notation, is denoted by K n 1 ,n 2 . It can be easily verified that every possible instance of a complete bipartite graph belongs in one of the four cases presented in Table 1.
General framework
The proposed algorithms that are presented in Sect. 4 are derived using the general framework presented here. The derived paths must have the following properties in order to constitute an MPD: (A) every path is a simple path; (B) no edge appears in more than one path; (C) every graph edge appears in some path; (D) the number of paths is the minimum possible. Necessity of property A is obvious, since the solution must consist of simple paths. Property B states that no edge is used more than once. Property C (under the validity of property B) states that the solution includes all the edges. If properties A-C are valid, then the solution constitutes a PD. For an MPD, property D must be valid as well.
Steps of the general framework
(i) Create the PM. (ii) Locate the part of the PM that must be manipulated, and derive the corresponding PD.
(iii) Verify that properties A-C are valid for the derived PD.
(iv) If property D is not valid for the derived PD, modify the paths of the latter in order to derive an MPD, while preserving the validity of properties A-C.
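Properties A-C lend themselves to a mechanical check; a minimal sketch, assuming paths are given as node sequences and the graph as an iterable of edges (Python is used here purely for illustration):

```python
def check_pd(paths, edges):
    """Check properties A-C for a candidate path decomposition.

    paths: list of node sequences; edges: iterable of (u, v) pairs.
    """
    norm = lambda u, v: (u, v) if u < v else (v, u)
    used = []
    for p in paths:
        if len(set(p)) != len(p):           # property A: every path is simple
            return False
        used.extend(norm(u, v) for u, v in zip(p, p[1:]))
    if len(used) != len(set(used)):         # property B: no edge used twice
        return False
    return set(used) == {norm(u, v) for u, v in edges}  # property C: all edges covered
```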
The steps of the general framework are detailed and easily understood in the following section, where the proposed algorithms are presented.

Proposed algorithms

Complete bipartite graphs K_{n1,n2} with even n1, even n2 and n1 = n2

Consider the case of the complete bipartite graph K_{n1,n2} where n1 and n2 are even, and n1 = n2 = n/2 (i.e., K_{n1,n2} = K_{n/2,n/2}). Obviously, for this case, n ≥ 4. The graph consists of n1 · n2 = n²/4 edges. The application of the general framework is as follows.
Step GM-I The PM is created using the proposed Algorithm 1. Algorithm 1 creates a PM consisting of n/2 rows and n columns (shown in Table 2).
Algorithm 1 (K_{n1,n2} with even n1, even n2 and n1 = n2)
1. Create row 1 of the PM:
(a) Place nodes 1, …, n/2 in the odd cells, sequentially, in increasing order
(b) Place nodes (n/2 + 1), …, n in the even cells, sequentially, in decreasing order
2. Create rows 2 to n/2 of the PM. Create each row from the previous one by adding one to the label of each node. For each cell of the row under creation:
(a) If it belongs to an odd column and the resulting label is greater than n/2, subtract n/2 from the label
(b) If it belongs to an even column and the resulting label is greater than n, subtract n/2 from the label
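A direct transcription of Algorithm 1 (our code, with 0-indexed lists standing in for the 1-indexed matrix):

```python
def build_pm(n):
    """Path Matrix for K_{n/2,n/2} per Algorithm 1; n divisible by 4."""
    assert n % 4 == 0
    row = [0] * n
    for col in range(1, n + 1):              # 1-indexed columns
        if col % 2 == 1:
            row[col - 1] = (col + 1) // 2    # odd cells: 1..n/2, increasing
        else:
            row[col - 1] = n + 1 - col // 2  # even cells: n..n/2+1, decreasing
    pm = [row]
    for _ in range(n // 2 - 1):
        new = []
        for col, v in enumerate(pm[-1], start=1):
            v += 1
            if col % 2 == 1 and v > n // 2:
                v -= n // 2
            if col % 2 == 0 and v > n:
                v -= n // 2
            new.append(v)
        pm.append(new)
    return pm
```

For n = 16 (i.e., K_{8,8}), this reproduces the 8-row, 16-column matrix of the worked example later in this section.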
Derivation of cell content from cell coordinates
Note that the odd cells of row 1 (step 1a of Algorithm 1) contain [1][k] = (k + 1)/2, and the even cells (step 1b) contain [1][k] = n + 1 − k/2. Since the labels for each upcoming row are increased by one compared to the previous row, and the number n/2 is subtracted if the resulting label is, for odd k, larger than n/2 and, for even k, larger than n, the general equations for row i, 1 ≤ i ≤ n/2, are as follows:
For odd k,
[i][k] = (k + 1)/2 + i − 1, if (k + 1)/2 + i − 1 ≤ n/2    (1)
[i][k] = (k + 1)/2 + i − 1 − n/2, otherwise    (2)
For even k,
[i][k] = n − k/2 + i, if n − k/2 + i ≤ n    (3)
[i][k] = n − k/2 + i − n/2, otherwise    (4)
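These closed forms can be cross-checked against the row-by-row construction; a small sketch (cell is our name):

```python
def cell(n, i, k):
    """Closed-form content of PM cell [i][k] for K_{n/2,n/2} (Eqs. 1-4)."""
    if k % 2 == 1:
        v = (k + 1) // 2 + i - 1
        return v if v <= n // 2 else v - n // 2
    v = n - k // 2 + i
    return v if v <= n else v - n // 2

# Agreement with Algorithm 1 (build_pm from the previous sketch):
# all(cell(16, i, k) == build_pm(16)[i - 1][k - 1]
#     for i in range(1, 9) for k in range(1, 17))
```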
Step GM-II The following part of the PM is selected for the derivation of the PD-paths: 1. Each of the paths i, 1 ≤ i ≤ n/4, consists of the complete row i. 2. Each of the paths i, n/4 + 1 ≤ i ≤ n/2, consists of a single edge, namely the edge [i][n/2] ↔ [i][n/2 + 1].
Step GM-III Here, it is verified that properties A-C are valid for the derived PD-paths.
Proposition 1 Property A is valid.
Proof If this property is valid for the whole PM, it is valid for the derived PD-paths. To prove that it is valid for the whole PM, it is sufficient to show that no row contains the same node more than once. This is proven using mathematical induction: 1. Prove that it is true for the first row: this is trivial, as it is an immediate result of the way the first row was created. 2. Assume that it is true for the ith row. 3. Prove that it is true for the (i + 1)th row: let v1, v2 represent two nodes on the ith row (v1 ≠ v2) and v1', v2' represent the corresponding nodes on the (i + 1)th row (i.e., the ones that belong to the same columns as v1, v2). The following cases can occur. If v1' belongs to an odd column and v2' to an even column: 1 ≤ v1' ≤ n/2 and n/2 + 1 ≤ v2' ≤ n, hence v1' ≠ v2'. If v1' belongs to an even column and v2' to an odd column, the case is symmetric. If v1', v2' both belong to odd (or both to even) columns, then v1, v2 also both belong to odd (or even) columns and, by the construction of Algorithm 1, the following cases are possible: v1' = v1 + 1 (5) or v1' = v1 + 1 − n/2 (6), and v2' = v2 + 1 (7) or v2' = v2 + 1 − n/2 (8). If Eqs. 5 and 7 (or 6 and 8) are valid, v1' − v2' = v1 − v2 ≠ 0. If Eqs. 5 and 8 are valid, v1' − v2' = v1 − v2 + n/2 ≠ 0, since |v1 − v2| ≤ n/2 − 1 for two distinct nodes in columns of the same parity. If Eqs. 6 and 7 are valid, v1' − v2' = v1 − v2 − n/2 ≠ 0 for the same reason. To prove that property B is valid, Proposition 2 is used.
Proposition 2 All the nodes of a column (of the whole PM) are unique.
Proof Consider that for an odd (or even) column the nodes from 1 to n/2 (or from n/2 + 1 to n) are arranged circularly, in increasing order according to their labels, and node 1 (or n/2 + 1) is found after node n/2 (or n). Then, the creation of a column can be seen as the selection of n/2 sequential nodes found on the aforementioned circle. Regardless of the first node of a column, since the number of elements in the column is equal to the number of elements on the circle, all the selected nodes are unique. Therefore, all the nodes of a column are unique.
Proposition 3 Property B is valid.
Proof First it is proven that property B is valid for the paths i, 1 ≤ i ≤ n/4, i.e., for the upper half of the derived PM.
Consider two edges e = a ↔ b and e' = a' ↔ b'. Their possible cases can be found in Table 3, as derived from Eqs. 1-4. These edges have either a' ≠ a or a' = a. If a' ≠ a, then e ≠ e', since a, a' ∈ {1, …, n/2} and b, b' ∈ {n/2 + 1, …, n}. If a' = a, then according to Propositions 1 and 2 the nodes a, a' occupy different rows and different columns, i.e., i' ≠ i and k' ≠ k for the contents of Table 3.
In Table 3, 1 ≤ k, k' ≤ n − 1, since columns k + 1, k' + 1 can take values up to n, according to the way the PM is created. For the cases with |a − b| = 3n/2 − k, since |a − b| ≤ n − 1, it follows that k ≥ n/2 + 1.
For equality of the two edges e, e', apart from a = a', |a − b| must be equal to |a' − b'|. Consequently, the cases where |a − b| = n − k and |a' − b'| = n − k', or |a − b| = 3n/2 − k and |a' − b'| = 3n/2 − k', are omitted, since for them |a − b| ≠ |a' − b'|, due to the fact that k ≠ k'. The rest of the cases are investigated in the same way: to prove that b ≠ b', we assume that b = b' and derive a non-valid result. In case 1a-2c, for instance, the assumption b = b' leads to n = 4(i' − i), which is not valid since 1 ≤ i, i' ≤ n/4 implies 4(i' − i) ≤ n − 4 < n. Therefore b ≠ b' and, consequently, e ≠ e'. Cases 1a-2f and 1a-2g are resolved analogously.
For brevity, the investigation of the rest of the cases is omitted; it can be easily verified that, using the aforementioned framework, Proposition 3 is valid for them as well.
Subsequently, Proposition 3 has been proven for the PD-paths found in the upper half of the PM, i.e., for 1 ≤ i ≤ n/4 and 1 ≤ k ≤ n (result 3a). For the PD-paths found in the lower half (each one consisting of a single edge), i.e., for n/4 + 1 ≤ i ≤ n/2 and k = n/2: k is even, therefore either Eq. 3 or Eq. 4 applies. The one that applies is Eq. 4, since n − k/2 + i = 3n/4 + i > n for i ≥ n/4 + 1, giving a = n/4 + i. Column k + 1 is odd, therefore either Eq. 1 or Eq. 2 applies. The one that applies is Eq. 2, since (k + 2)/2 + i − 1 = n/4 + i > n/2 for i ≥ n/4 + 1, giving b = i − n/4. Consequently, |a − b| = n/2. According to Table 3, an edge e' = a' ↔ b' = [i'][k'] ↔ [i'][k' + 1] can only satisfy |a' − b'| = n/2 for k' = n/2 = k. But for e' = e, a' must be equal to a, and, according to Proposition 2, for k' = k (and i' ≠ i), a' ≠ a; this is not possible.
The aforementioned analysis has proven that the edges that constitute the paths i, n/4 + 1 ≤ i ≤ n/2, do not exist anywhere else in the PM (result 3b). Results 3a and 3b together constitute the proof of Proposition 3.
Proposition 4 Property C is valid.
Proof According to Step GM-II: 1. Each of the paths i, 1 ≤ i ≤ n/4, consists of n − 1 edges. 2. Each of the paths i, n/4 + 1 ≤ i ≤ n/2, consists of one edge. Therefore, the PD consists of (n/4)(n − 1) + (n/4)(1) = n²/4 = m edges.
Since the number of the derived PD-paths is larger than the minimum possible, we modify the PD as follows, in order to derive an MPD from it.
Derivation of MPD from the derived PD
-Path 1 of the MPD is equal to path 1 of the PD.
-Paths of the MPD from 2 to n/4 are derived from the corresponding paths of the PD, neglecting the last edge of each one of them.
-Path (n/4 + 1) of the MPD consists of the edges of the single-edge paths n/4 + 1 to n/2 of the PD, and of the edges that were removed from paths 2 to n/4 of the PD. In other words, the edges that were removed from paths 2 to n/4 of the PD are used to connect the edges of the single-edge paths n/4 + 1 to n/2 of the PD (in increasing order according to the row they belong to), so as to construct a single path (i.e., path (n/4 + 1) of the MPD) from them. More precisely, paths i and i + 1 of the PD (n/4 + 1 ≤ i ≤ n/2 − 1) consist of single edges given by Eqs. 2 and 4, and the last edge of path i', 2 ≤ i' ≤ n/4 (which has been removed from path i' of the PD, with i' = i − n/4 + 1), can be used to connect them. Under this transformation, it is obvious that properties A-C are still valid. Property D is also valid, since the number of MPD-paths is equal to n/4 + 1, i.e., the minimum possible according to Eq. 17.
The following part presents an example of the proposed procedure for K₈,₈. The PM as derived by Algorithm 1 is given in Table 4, and Table 5 gives the derived MPD-paths. K₈,₈ consists of 64 edges, and this is exactly the number of edges found in Table 5. According to Eq. 17, the minimum number of decomposition paths is 5, equal to the number of MPD-paths found in Table 5.
Complete Bipartite Graphs K_{n₁,n₂} with Even n₁ and 1 ≤ n₂ ≤ n₁ − 1
Consider the case of the complete bipartite graph K_{n₁,n₂} where n₁ is even and 1 ≤ n₂ ≤ n₁ − 1 (with n₂ either odd or even). Therefore, n = n₁ + n₂ ≥ 3. The graph consists of n₁ · n₂ edges. The application of the general framework is as follows.
Step GM-I: The PM is created using Algorithm 2. It consists of n₁/2 rows and 2n₂ + 1 columns.
Algorithm 2 (K_{n₁,n₂} with even n₁ and 1 ≤ n₂ ≤ n₁ − 1):
1. Create row 1 of the PM:
(a) Place nodes 1, …, (n₂ + 1) in odd cells, sequentially, in increasing order.
(b) Place nodes (n₁ + 1), …, n in even cells, sequentially, in increasing order.
2. Create rows 2 to n₁/2 of the PM. For each cell of the row under creation:
(a) If it belongs to an odd column, add 2 to the label of the node found in the same column in the previous row; if the resulting label is greater than n₁, subtract n₁ from it. Place the result in this cell.
(b) If it belongs to an even column, place the label of the node found in the same column in the previous row.
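The following is a minimal Python sketch of Algorithm 2, together with a check that every edge of K_{n₁,n₂} appears exactly once across the rows (properties B and C). The function name and the 0-based indexing are illustrative choices, not from the original paper.

```python
def build_pm(n1: int, n2: int) -> list[list[int]]:
    """Path matrix (PM) for K_{n1,n2} per Algorithm 2: n1/2 rows, 2*n2+1
    columns. Nodes 1..n1 form one partition, n1+1..n1+n2 the other."""
    assert n1 % 2 == 0 and 1 <= n2 <= n1 - 1
    rows, cols = n1 // 2, 2 * n2 + 1
    pm = [[0] * cols for _ in range(rows)]
    odd_val, even_val = 1, n1 + 1
    for c in range(cols):              # column c+1 is odd iff c is even
        if c % 2 == 0:
            pm[0][c], odd_val = odd_val, odd_val + 1
        else:
            pm[0][c], even_val = even_val, even_val + 1
    for r in range(1, rows):
        for c in range(cols):
            if c % 2 == 0:             # odd column: add 2, wrap above n1
                v = pm[r - 1][c] + 2
                pm[r][c] = v - n1 if v > n1 else v
            else:                      # even column: copy previous row
                pm[r][c] = pm[r - 1][c]
    return pm

pm = build_pm(8, 5)
edges = {frozenset(row[c : c + 2]) for row in pm for c in range(len(row) - 1)}
assert len(edges) == 8 * 5             # each edge of K_{8,5} appears once
```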
Derivation of cell content from cell coordinates
-For even k, the node found in cell [i][k] is n₁ + k/2 (the even columns are copied unchanged down the rows).
-For odd k, the node found in cell [i][k] is (k + 1)/2 + 2(i − 1) when this value does not exceed n₁ (Eq. 23), or (k + 1)/2 + 2(i − 1) − n₁ otherwise (Eq. 24).
Step GM-II The complete PM is selected for the derivation of the PD. Therefore, the derived PD consists of n₁/2 paths, and path i (1 ≤ i ≤ n₁/2) consists of the complete row i.
Step GM-III Here, it is verified that properties A-C are valid for the derived PD.
Proposition 5 Property A is valid.
Proof It is sufficient to show that no row contains the same node more than once. This is proven using mathematical induction:
1. Prove that it is true for the first row: this is trivial, as it is an immediate result of the way the first row was created.
2. Assume that it is true for the ith row.
3. Prove that it is true for the (i + 1)th row: let v₁, v₂ represent two nodes on the ith row (v₁ ≠ v₂) and v′₁, v′₂ represent the corresponding nodes on the (i + 1)th row (i.e., the ones that belong to the same columns as v₁, v₂). The following cases are possible.
- v₁, v₂ belong to odd columns: then v′₁, v′₂ are obtained from v₁, v₂ by adding 2 (and subtracting n₁ when the result exceeds n₁). This map is a bijection on {1, …, n₁}, so v₁ ≠ v₂ implies v′₁ ≠ v′₂.
- v₁, v₂ belong to even columns: then v′₁ = v₁ and v′₂ = v₂, so v′₁ ≠ v′₂.
- One of v₁, v₂ belongs to an odd column and the other to an even column: odd columns contain only labels at most n₁ and even columns only labels greater than n₁, so v′₁ ≠ v′₂.
Proposition 6 Properties B and C are valid.
Proof To prove that properties B and C are valid, it is sufficient to prove that for each node x, n₁ + 1 ≤ x ≤ n (which, according to Algorithm 2, can be found in even columns), each of the edges x ↔ y, 1 ≤ y ≤ n₁, exists exactly once in the derived PD.
In other words, it must be proven that for an arbitrary even column k, every node z₁ such that z₁ is odd and 1 ≤ z₁ ≤ n₁ − 1 can be found in column k − 1 exactly once, and every node z₂ such that z₂ is even and 2 ≤ z₂ ≤ n₁ can be found in column k + 1 exactly once (or vice versa). Consider an even column k with odd k/2. Then the node found in row i and column k − 1 is k/2 + 2i − 2 or k/2 + 2i − 2 − n₁, according to Eqs. 23 and 24. Since n₁/2 rows exist, this means that every node z₁ such that z₁ is odd and 1 ≤ z₁ ≤ n₁ − 1 can be found in column k − 1 exactly once. The node found in row i and column k + 1 is k/2 + 1 + 2i − 2 or k/2 + 1 + 2i − 2 − n₁. This means that every node z₂ such that z₂ is even and 2 ≤ z₂ ≤ n₁ can be found in column k + 1 exactly once. For even k/2, the opposite analysis holds.
Step GM-IV Up to this point, the derived solution constitutes a PD. To verify that this is also an MPD, Proposition 7 is proven.
Proposition 7 Property D is valid.
Proof Each path can consist of at most 2n₂ edges. Therefore, the minimum number of paths is n₁n₂/(2n₂) = n₁/2 (26). Since the derived PD consists of exactly n₁/2 paths, property D is valid, i.e., the derived PD is also an MPD.
Note that for n₂ = n₁ (i.e., for the case investigated in Sect. 4.1), Algorithm 2 cannot be applied, since in cell [1][2n₂ + 1] the node n₂ + 1 = n₁ + 1 > n₁ would be reduced to 1 and placed there, i.e., the first path would not be simple, since this node is also placed in cell [1][1]. The same holds for the rest of the rows.
The following part presents illustrative examples. | 5,609.8 | 2017-11-14T00:00:00.000 | ["Mathematics"] |
Endophytic Bacterial Communities Associated with Roots and Leaves of Plants Growing in Chilean Extreme Environments
Several studies have demonstrated the relevance of endophytic bacteria to the growth and fitness of agriculturally relevant plants. To our knowledge, however, little information is available on the composition, diversity, and interaction of endophytic bacterial communities in plants struggling for existence in the extreme environments of Chile, such as the Atacama Desert (AD) and Patagonia (PAT). The main objective of the present study was to analyze and compare the composition of endophytic bacterial communities associated with roots and leaves of representative plants growing in Chilean extreme environments. The plants sampled were Distichlis spicata and Pluchea absinthioides from the AD, and Gaultheria mucronata and Hieracium pilosella from PAT. The abundance and composition of their endophytic bacterial communities were determined by quantitative PCR and high-throughput sequencing of 16S rRNA genes, respectively. Results indicated that there was a greater abundance of 16S rRNA genes in plants from PAT (10¹³ to 10¹⁴ copies g⁻¹ DNA) compared with those from the AD (10¹⁰ to 10¹² copies g⁻¹ DNA). In the AD, a greater bacterial diversity, as estimated by the Shannon index, was found in P. absinthioides compared with D. spicata. In both ecosystems, the greater relative abundances of endophytes were mainly attributed to members of the phyla Proteobacteria (14% to 68%), Firmicutes (26% to 41%), Actinobacteria (6% to 23%), and Bacteroidetes (1% to 21%). Our observations revealed that most operational taxonomic units (OTUs) were not shared between tissue samples of different plant species in either location, suggesting an effect of the plant genotype (species) on bacterial endophyte communities in Chilean extreme environments, where Bacillaceae and Enterobacteriaceae could serve as keystone taxa, as revealed by our linear discriminant analysis.
plant (e.g., Arabidopsis thaliana), commercially relevant plants for agriculture (e.g., wheat, soybean, rice, maize, etc.), and wild plant species (e.g., weeds and trees) grown under laboratory, greenhouse, and field conditions 1,2,7,8 . Consequently, we have only limited knowledge of the composition and interactions of microbiota and plants, especially endophytic bacterial communities, in native vegetation growing in extreme environments, such as hot and/or cold deserts. Thus, understanding microbial interactions in the plant holobiont will be key to developing efficient strategies for native plant conservation and/or exploiting the full yield potential of crop plants under a climate change scenario 9 .
The country of Chile is long (4,270 km) and narrow (mean width 177 km) and harbors a great variety of pristine ecosystems. The Atacama Desert (AD) is located in the northern region of Chile (from 18°24′S to 29°55′S) and is considered among the driest places on Earth. In contrast, Chilean Patagonia (from 41°08′S to 56°30′S) is located in the far south of the country and is a sub-Antarctic region. Both regions have extreme environments, and their plant-associated bacterial communities have barely been studied thus far. In this context, we have reported that members of the orders Enterobacteriales, Actinomycetales, and Rhizobiales comprise dominant groups of the bacterial communities in the rhizosphere (the soil influenced by plant roots) of shrubs grown in the AD and Patagonia (PAT), namely Atriplex sp. and Chuquiraga sp., respectively 10 . Results of this study also suggested that some isolates, belonging to the genera Enterobacter, Pseudomonas, and Bacillus, were putative PGPB. The ability of the native isolates from the AD to act as PGPB was confirmed by formulation and inoculation of a bacterial consortium onto plants. These studies revealed that wheat plants inoculated with the consortium produced greater biomass under water shortage and field conditions, compared with uninoculated seedlings 11 . A recent study also showed greater protection against salt stress in wheat plants inoculated with rhizosphere bacteria isolated from an Andean Altiplano native plant (Parastrephia quadrangularis) in the AD 12 . However, these studies did not take into account the composition and interactions of native endophytic bacteria in Chilean extreme environments, or their potential use as PGPB.
During the last several years, advances in high-throughput DNA sequencing (HTS) technologies (e.g., Illumina®, PacBio®, and Oxford Nanopore®) have opened new windows into the microbial ecology of a variety of environments, allowing the detailed study of complex bacterial communities in nature as never before. Thus, HTS platforms have been widely used to decipher the structure and function of microbiota in different compartments of plants, including the rhizosphere, endosphere (inner tissues of plants), and phyllosphere (the aerial part of plant leaves) 1,13 . Results of 454-pyrosequencing studies showed that Proteobacteria (mainly Gammaproteobacteria) were the dominant taxa in the rhizospheres of Atriplex sp. and Stipa sp. (shrubs) grown in the AD 14 . These authors also postulated that native plants from Chilean extreme environments may attract, select, and conserve specific bacterial groups in order to sustain plant growth and tolerance to local harsh conditions. Based on this supposition, the main goal of the present study was to describe and compare the relative abundances and composition of bacterial communities associated with roots and leaves of plants grown in the AD and PAT regions of Chile by using HTS of 16S rRNA genes.
DNA extraction. Root and leaf samples were separated and surface sterilized by repeated immersion in 70% (v/v) ethanol for 3 min, followed by 2.5% (v/v) sodium hypochlorite (NaOCl) for 5 min, as described by Barra et al. 15 . Roots were exhaustively rinsed with sterile distilled water. Triplicate portions of roots and leaves were aseptically cut, frozen in liquid nitrogen, macerated and homogenized with a mortar and pestle, and stored at −80 °C until DNA extraction. Samples of the homogenized tissues (0.25 g) were used for DNA extraction with the Quick-DNA™ Plant/Seed Miniprep kit according to the manufacturer's instructions (Zymo Research, CA, USA). The quantity and purity of DNA extracts were determined by measuring absorbance at 260 nm and 280 nm using a microplate spectrophotometer (Multiskan GO, Thermo Fisher Scientific, Inc., MA, USA).
Quantitative PCR. The abundance of endophytic bacteria in each tissue sample was determined by quantitative PCR (qPCR) using a universal primer set for the bacterial 16S rRNA gene (Bac1369F, 5′-CGG TGA ATA CGT TCY CGG-3′; Prok1492R, 5′-GGW TAC CTT GTT ACG ACT-3′), as previously described 16,17 . Briefly, PCR was run with an enzyme activation step at 95 °C for 10 min, followed by 40 cycles of 15 s at 95 °C and 1 min of annealing plus extension at 60 °C. PCR reactions were performed in triplicate per plant species (including technical triplicates) with 20 µg L⁻¹ of total DNA in a StepOnePlus™ Real-Time PCR System (Applied Biosystems, Inc., CA, USA) using PowerUp™ SYBR™ Green Master Mix (Applied Biosystems, Inc.), following the manufacturer's instructions. The numbers obtained were normalized and analyzed by one-way ANOVA, and comparisons were done using Tukey's post-hoc test. Differences were considered significant when the P value was ≤0.05.
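To make this statistical step concrete, the sketch below runs a one-way ANOVA followed by Tukey's post-hoc test on log-transformed copy numbers, assuming scipy and statsmodels are available. The replicate values are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# log10(16S rRNA gene copies g^-1 template DNA); three invented
# biological replicates per root sample
copies = {
    "D_spicata": [10.6, 10.7, 10.5],
    "P_absinthioides": [11.4, 11.5, 11.3],
    "G_mucronata": [13.1, 13.0, 13.2],
    "H_pilosella": [13.9, 13.8, 14.0],
}

print(f_oneway(*copies.values()))  # one-way ANOVA across the four groups

values = np.concatenate(list(copies.values()))
labels = np.repeat(list(copies), [len(v) for v in copies.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # Tukey's post-hoc test
```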
High-throughput DNA sequencing. The distribution and relative abundances of endophytic bacteria in root and leaf tissues were assessed by HTS using triplicate samples of each plant species, as follows. The V4 hypervariable region of the 16S rRNA genes was amplified, for bacteria and archaea, using primer set 515F (5′-GTG CCA GCM GCC GCG GTA A-3′) and 806R (5′-GGA CTA CHV GGG TWT CTA AT-3′). Sequencing was done by the University of Minnesota Genomics Center (UMGC, Minneapolis, MN, USA) 18 using barcoded primers and the dual indexing method. Amplicons were gel purified, pooled, and paired-end sequenced at a read length of 300 nt on the Illumina MiSeq platform (Illumina, Inc., San Diego, CA, USA) at UMGC.
Bioinformatics and statistical analysis. Sequences were analyzed using the mothur program ver. 1.34.0 (https://www.mothur.org) 19 . The first 150 nt were trimmed from sequences to remove low-quality regions at the ends of reads. Fastq-join software was used to join paired-end sequencing reads 20 ; the joined reads were trimmed to maintain an average quality score >35 and a maximum homopolymer length of 8 nt. Sequences with >2 mismatches in primer sequences and ambiguous bases were removed. High-quality sequencing reads were aligned against the SILVA database ver. 123 21 and subjected to a 2% pre-clustering step to remove possible sequence errors 22 . The UCHIME software was used to identify and remove probable chimeric sequences 23 . To avoid the influence of non-microbiota sequences (e.g., chloroplast and mitochondria), the sequences were further filtered with QIIME to remove non-microbiota taxa before subsequent analysis. Sequence data were rarefied to 700 and 4,500 sequence reads per data set prior to statistical analysis for AD and PAT, respectively. Raw sequencing data were deposited in the Sequence Read Archive (SRA) of NCBI under Accession Number SRP156290.
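The rarefaction step amounts to subsampling each OTU count vector, without replacement, down to the chosen depth (700 reads for AD, 4,500 for PAT). The sketch below is a minimal stand-in for what mothur does internally; the count vector is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def rarefy(counts, depth):
    """Subsample an OTU count vector to a fixed read depth without replacement."""
    counts = np.asarray(counts)
    if counts.sum() < depth:
        raise ValueError("sample is shallower than the rarefaction depth")
    pool = np.repeat(np.arange(counts.size), counts)  # one entry per read
    keep = rng.choice(pool, size=depth, replace=False)
    return np.bincount(keep, minlength=counts.size)

sample = [120, 30, 0, 410, 220, 55]  # invented OTU counts (835 reads total)
print(rarefy(sample, 700))           # AD samples were rarefied to 700 reads
```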
For statistical analysis, alpha diversity was summarized by the Shannon index and the abundance-based coverage estimator (ACE), and Good's coverage was calculated, all through the mothur program. Visualization of the taxonomic distribution of microbial communities was performed using the "ggplot2" package in R 24 . Differences in beta diversity were evaluated using analysis of similarity (ANOSIM) and permutational multivariate analysis of variance (PERMANOVA). Principal coordinate analysis (PCoA) was performed based on unweighted UniFrac distances for the ordination 25 . The VennDiagram package in R was used to identify shared OTUs of endophytic bacterial communities between root and leaf tissues 26 . Variations in taxa associated with root and leaf tissues were evaluated using linear discriminant analysis (LDA) of effect sizes 27 , which employs Kruskal-Wallis and Wilcoxon rank-abundance tests and then utilizes LDA to estimate effect sizes of the features.
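For reference, the Shannon index used throughout is simply H' = −Σ p_i ln p_i over the OTU relative abundances of a sample; a minimal sketch with an invented count vector:

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero OTU counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

print(shannon([120, 30, 410, 220, 55]))  # invented OTU table row
```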
Results
Abundances of Bacteria. In general terms, plant tissues from Patagonia (G. mucronata and H. pilosella) had greater abundances of bacteria (from 10¹³ to 10¹⁴ 16S rRNA gene copies g⁻¹ template DNA) compared with those from the AD (D. spicata and P. absinthioides) (from 10¹⁰ to 10¹² 16S rRNA gene copies g⁻¹ template DNA), except roots from G. mucronata (Table 1). In AD plants, a significantly (P ≤ 0.05) greater abundance of bacteria in both tissues was found in P. absinthioides (2.6 × 10¹¹ and 2.7 × 10¹² copies g⁻¹ in roots and leaves, respectively) compared with D. spicata (4.4 × 10¹⁰ and 5.4 × 10¹¹ copies g⁻¹ in roots and leaves, respectively). In PAT plants, a significantly (P ≤ 0.05) greater abundance of bacteria in both tissues was found in H. pilosella (8.1 × 10¹³ and 5.4 × 10¹⁴ copies g⁻¹ in roots and leaves, respectively) compared with G. mucronata (1.3 × 10¹³ and 3.0 × 10¹⁰ copies g⁻¹ in roots and leaves, respectively).
Composition of endophytic bacterial communities in extreme environments. Sequence analyses showed a lower estimated coverage in AD (from 91 to 95%) compared with PAT (from 98 to 99%) (Table 2). The numbers of observed OTUs (S_obs) were lower in plant tissues from the AD compared with those from PAT, ranging from 68 to 113 and 208 to 220 in roots, and 103 to 152 and 151 to 160 in leaves, respectively. Similarly, lower ACE values were also observed in AD compared to PAT plants, ranging from 148 to 188 and 263 to 314 in roots, and 139 to 211 and 190 to 245 in leaves, respectively. However, as revealed by the Shannon index, there were significant differences (Tukey's post-hoc test, P < 0.05) in bacterial diversity in both tissues of P. absinthioides compared with tissues from D. spicata in the AD (Table 2). In contrast, no significant differences in bacterial diversity were found in plant tissues from PAT plants.
In both ecosystems, the assignment of taxonomic affiliation to endophytic bacterial communities at the phylum level indicated high relative abundances of Proteobacteria (14.88% to 68.53%), Firmicutes (26.03% to 41.59%), Actinobacteria (6.45% to 23.69%), and Bacteroidetes (1.09% to 21.21%) in both tissues (Fig. 2A). It is noteworthy that the lowest relative abundance of Bacteroidetes was found in roots from D. spicata. This tissue, however, presented a relative abundance of 31.01% of members belonging to the phylum Euryarchaeota (Fig. 2A).
With respect to minor taxa, broad taxonomic diversity among samples was found. The tissues of AD plants (D. spicata and P. absinthioides) showed large relative abundances of Cyanobacteria, Lentisphaerae, and Chloroflexi in roots, and Verrucomicrobia, Fusobacteria, and Cyanobacteria in leaves (Fig. 2B). The tissues from PAT plants (G. mucronata and H. pilosella) showed large relative abundances of Elusimicrobia, Fusobacteria, Spirochaetes, and Acidobacteria in roots, and Elusimicrobia, Fusobacteria, TM7, and Verrucomicrobia in leaves (Fig. 2B). At the family level, a wide taxonomic diversity among samples was also found. In AD plants, higher relative abundances of Halobacteriaceae (31.01%), Bacillaceae (24.67%), and Nocardiopsaceae (17.78%) stood out in roots of D. spicata, whereas a higher relative abundance of Halomonadaceae (25.81%) stood out in leaves of P. absinthioides (Fig. 3). In PAT plants, a higher relative abundance of members belonging to the Pseudomonadaceae was found in roots (21%) and leaves (57.93%) from G. mucronata.
Differences between tissues and plant species were also confirmed by PCoA. In AD plants, a clear grouping between roots and leaves from D. spicata, and roots from P. absinthioides was observed (Fig. 4, AD). Similarly, a clear grouping between roots and leaves from H. pilosella, and roots from G. mucronata was also observed in PAT plants (Fig. 4, PAT).
Shared and unique operational taxonomic units and keystone taxa in extreme ecosystems. In relation to the distribution of shared and unique OTUs among endophytic bacterial communities, 53 out of 1,075 OTUs were shared in the AD, while 115 out of 1,713 OTUs were shared in the PAT ecosystem (Fig. 5 and Table 3). In AD plants, among these 53 shared OTUs, most belonged to the Firmicutes (20), followed by Proteobacteria (15) and Actinobacteria (13). In PAT plants, among these 115 shared OTUs, most belonged to the Firmicutes (37), followed by the Actinobacteria (29), Bacteroidetes (25), and Proteobacteria (24). In PAT, there was a greater number of unique OTUs relative to the AD: 305 and 341, and 306 and 195 plant-specific OTUs were found in roots and leaves from G. mucronata and H. pilosella, respectively (Table 3 and Fig. 5). In the AD, 234 and 224, and 218 and 93 plant-specific OTUs were found in roots and leaves from D. spicata and P. absinthioides, respectively (Table 3 and Fig. 5). In AD plants, most unique OTUs belonged to the Firmicutes (353), followed by the Proteobacteria (213), Actinobacteria (90), and Bacteroidetes (87). Similarly, in PAT plants, most unique OTUs belonged to the Firmicutes (512), followed by Proteobacteria (397), Actinobacteria (159), and Bacteroidetes (119) (Table 3).
Linear discriminant analysis (LDA) of effect size was performed to determine which taxa varied among plant compartments in the two ecosystems. Several bacterial taxa belonging to the Bacillaceae, Nocardiopsaceae, Ectothiorhodospiraceae, and Moraxellaceae (at the family level) served as the keystone taxa in roots of D. spicata, while only the Propionibacteriaceae and Corynebacteriaceae could be used to indicate leaves of D. spicata and P. absinthioides, respectively (Fig. 6). Moreover, the Bacillaceae had the greatest effect size among the plant compartments in the AD. Contrastingly, taxa belonging to the families Enterobacteriaceae, Pseudomonadaceae, Dermabacteraceae, Coriobacteriaceae, and Bacteroidaceae were the keystone taxa of the endophytic microbiota among plant compartments in the PAT ecosystem (Fig. 6). Additionally, the Enterobacteriaceae had the highest effect size in PAT.
Discussion
Microbial endophytes play a central role in the ecology, evolution, and growth promotion of plants 2,4,28 . However, despite their importance, there is scant knowledge concerning endophytic microbial populations in plants living in extreme environments, including those of Chile. Our study showed that the abundances of bacteria in the endosphere of root and leaf tissues ranged from 10¹⁰ to 10¹² and from 10¹⁰ to 10¹⁴ 16S rRNA gene copies g⁻¹ template DNA in AD and PAT, respectively, corresponding to 10³ to 10⁷ 16S rRNA gene copies g⁻¹ fresh tissue.
A wide range of endophytic prokaryote densities has been reported in tissues of different plants thus far. Similar to our findings, 10¹⁴ 16S rRNA gene copies g⁻¹ template DNA of total endophytic prokaryotes were reported in endospheres of olive tree (Olea europaea L.) leaves collected from diverse Mediterranean ecosystems 29 . Other abundances were reported in the endosphere of rice (10⁷ to 10⁸ 16S rRNA gene copies g⁻¹ root) and crops (10¹⁰ to 10¹³ 16S rRNA gene copies g⁻¹ root) by Ruppel et al. 30 and Breidenbach et al. 31 , respectively. Under stress conditions, Blain et al. 32 recently reported abundances of endophytic bacteria from 10³ to 10⁵ 16S rRNA gene copies g⁻¹ fresh root in natural vegetation growing in a hydrocarbon-contaminated site. In addition, to our knowledge, there are no studies reporting the abundances of endophytic bacterial populations in plants from Chilean extreme environments. It should be noted, however, that a previous study of the rhizosphere of plants from AD and PAT revealed values of 10⁹ and 10¹¹ 16S rRNA gene copies g⁻¹ of soil, respectively 33 .
With respect to analyses of alpha diversity of endophytic bacterial communities in root and leaf tissues, the numbers of observed OTUs (97% similarity) ranged from 68 to 152 and from 151 to 220 in the endospheres of plants from AD and PAT, respectively. These values are in accordance with other studies reporting OTU counts from plant endospheres ranging from 100 to 300 2,28,34,35 . However, significantly greater numbers of OTUs (from 450 to 3,700) have also been reported in plant endospheres by other authors 29,36,37 .
The Shannon index values ranged from 2.28 to 4.41 and from 2.76 to 4.41 in the endospheres of plants from AD and PAT, respectively. The bacterial communities in root and leaf tissues of tree peony (Paeonia sect. Moutan) had greater Shannon index values (7 to 9) than those in our study 37 . In contrast, a recent study reported Shannon index values similar to ours (3 to 4) in root and leaf endospheres of groundsel (Senecio vulgaris L., Asteraceae), also using the Illumina platform 34 . Similarly, but using 454-pyrosequencing, Correa-Galeote et al. 35 reported Shannon index values of 3 to 4 in the root endosphere of maize cultivated at 3,537 meters above sea level in Peru. Interestingly, the Shannon index values found in endosphere tissues from AD plants were generally lower than those reported in other AD habitats, such as soils, lakes, and sediments of flat mats, with Shannon index values ranging from 3 to 9 [38][39][40][41] . This suggests that endospheres of AD plants may harbor less bacterial diversity than other niches in the AD, which is considered the oldest and driest place on Earth 42 .
Interestingly, compared with G. mucronata, H. pilosella had higher bacterial abundance and diversity in the PAT ecosystem, as determined by qPCR and the Shannon index, respectively. H. pilosella is an exotic weed in Chilean Patagonia, recognized for its explosive expansion in Patagonian grasslands, often replacing forage plants, with concomitant economic losses for livestock and soil degradation by overgrazing 43 . Therefore, the higher abundance and diversity of endophytic bacteria in H. pilosella might give this plant species a competitive advantage over other Patagonian plants. Thus, alteration of the composition and activity of endophytic bacteria may be a useful strategy for biological control of invasive plants. However, our study is limited to a few sampled plants, and major efforts are required to validate this statement and evaluate the potential for biocontrol of H. pilosella expansion in Chilean Patagonian grasslands.
Our Illumina-based analyses revealed the dominance of members of the phyla Proteobacteria, Firmicutes, Actinobacteria, and Bacteroidetes in the endosphere. It was previously reported that Proteobacteria and Firmicutes are common inhabitants of plant endospheres, with relative abundances ranging from 39% to 97% and 14% to 44%, respectively 4,28,36 . Studies analyzing root and leaf endosphere tissues have also shown a great dominance (over 86%) of the Proteobacteria, Firmicutes, and other phyla (such as Bacteroidetes, Acidobacteria, and Actinobacteria) in olive trees, peony, and groundsel 29,34,37 . Interestingly, a high relative abundance (31%) of Euryarchaeota was found in roots of D. spicata. Members of the Euryarchaeota have also been found colonizing the endospheres of Mediterranean olive trees 29 and compartments (rhizosphere, endosphere, and phyllosphere) of a halophyte plant (Salsola stocksii) from Pakistan 44 .
Regarding the minor taxa seen in tissues from the AD and PAT, both ecosystems showed great diversity, represented by members of the Lentisphaerae, Chloroflexi, Verrucomicrobia, Elusimicrobia, Fusobacteria, Spirochaetes, and Acidobacteria. Low numbers of sequences of these phyla were observed by Hardoim et al. 4 when endophytic data sets from all peer-reviewed publications in the ISI Web of Science and PubMed databases were reviewed. Similarly, our analyses revealed the presence of members of the phylum Cyanobacteria in roots and leaves of AD plants. High abundances (24% to 47%) of members of this phylum have also been reported in the endospheres of grasses (Spartina alterniflora) and mangrove (Kandelia obovata) 45 . Lower abundances (1.7%) of Cyanobacteria have also been reported in the endosphere and other compartments (rhizosphere and rhizoplane) of the medicinal perennial plant Stellera chamaejasme L. 46 . It is noteworthy that Cyanobacteria are | 5,047 | 2019-03-20T00:00:00.000 | ["Environmental Science", "Biology"] |
Why Fuel Economy Fraud Happened in the Japanese Automotive Industry?
Fuel economy competition has heated up as a result of the oil crises of the 1970s, the environmental issues arising since the 1990s, and the Japanese government's economic policies, so that fuel economy has become a key competition index. However, for engineers who measure fuel economy, it is (i) a vague and unstable metric that fluctuates because of a number of factors and (ii) a quality that does not affect safety and so is not subject to recall. Competitive pressure regarding fuel economy led to arbitrary measurements. This eventually became normalized, and since 2016, cases of organizational corruption in the Japanese automotive industry have been uncovered one after another.
Introduction
An automobile, which consists of about 30,000 parts, has a closed integral architecture and is a product that epitomizes the strength of Japanese-style manufacturing (Fujimoto, 2004). However, among Japanese automakers, many cases of institutionalized organizational corruption with respect to fuel economy measurement have come to light since 2016.
However, for engineers who measure fuel economy, it is (i) a vague and unstable metric that fluctuates because of a number of factors, and (ii) a quality that does not affect safety and so is not subject to recall. In fact, catalog fuel economy differs from the actual fuel economy when the consumer drives the car, and so the two diverge. Such gaps create room for fraudulent activity to occur. For example, in the case of emissions, diesel vehicles manufactured by Volkswagen AG typically emitted up to 40 times the volume of nitrogen oxides permissible under the environmental standards. Yet, Volkswagen achieved compliance with the U.S. Clean Air Act of 1963 by installing defeat-device software in its diesel vehicles so that the volume of pollutants would be significantly reduced only during testing. In 2015, the U.S. Environmental Protection Agency announced that this was a violation of its diesel engine emissions rules (Hotten, 2015). This was the first in a series of incidents worldwide, and since 2016, many Japanese automakers have been found to have regularly engaged in organizational corruption with respect to fuel economy measurement. So, how did it come about that Japanese automakers were involved in organizational corruption with regard to the measurement of fuel economy?
Why fuel economy fraud happened in the Japanese automotive industry?
Focus on fuel economy
Fuel economy (nenpi in Japanese) means the rate at which fuel is consumed. It is quantified as the distance (in kilometers) that an automobile can travel on one liter of gasoline. 1 The term "fuel economy" has been widely used in Japan since the 1970s, when the country was dealing with the problems of pollution from auto emissions and the oil crises (Figure 1). In the United States, the Energy Policy and Conservation Act was enacted in 1975, and in Japan, the Act on the Rational Use of Energy (Energy Conservation Act) was enacted in 1979 (Nishino, 2015). Around this time, the Honda Civic, which was equipped with a compound vortex controlled combustion (CVCC) engine, was developed in Japan (Kawamoto, Sato, Minowa, Nisimura, & Hashimoto, 1975).
1 Honda Motor Co., Ltd. (Honda) website. "Honda Cars 'Frequently asked questions when thinking about buying a new car. Q. What is fuel economy?'" (in Japanese). https://www.honda.co.jp/hondacars/hajimete/faq/select/02fuel/
Gasoline prices in the U.S. were low, so not much importance was placed on American cars' fuel economy, but gasoline was relatively expensive in Japan because of taxes and other factors, so Japanese cars pursued better fuel economy (Itami, 2017). A ground-breaking vehicle was the Prius, the world's first mass-produced hybrid car, introduced by the Toyota Motor Corporation in 1997.
Accelerating fuel economy competition
The 1990s was the era when fuel economy came to the fore in
The Ministry of Land, Infrastructure, Transport and Tourism therefore set forth rules for fuel economy measurement. Automakers typically measure fuel economy by mounting their vehicles on a device called a chassis dynamometer. The chassis dynamometer is equipped with rollers that move the vehicle's tires. The vehicle is run in a test lab under a load that simulates actual driving conditions. The fuel economy thus obtained is called the catalog fuel economy, which is the fuel economy advertised in the automaker's catalog and elsewhere. However, if we liken this to humans, it would be as though fuel economy were measured while running on a treadmill at the gym, 9 so it cannot be equated with running a marathon or running outdoors on a road. In fact, the catalog fuel economy is a different value from the actual fuel economy when the consumer is driving, so there is inevitably a gap between the two. Therefore, the measurement regulations have been continuously changing in order to bring the measurement as close as possible to the consumer experience, and in 2011, the regulation was changed from the previous 10-15 mode to the JC08 mode. 10 However, room for fraudulent activity will crop up when there is a discrepancy between the catalog fuel economy and the actual operating fuel economy.
9 Japan Automobile Transport Technology Association website. "Technical explanation: Vehicle assessment 1 using chassis dynamometer" (in Japanese). http://www.ataj.or.jp/technology/chdy_technology.html
10 Due to the fuel economy fraud problems among automobile companies since 2015, regulators are also considering a shift to the worldwide harmonized light vehicles test procedure (WLTP), which is an international standard. Ministry of Land, Infrastructure, Transport and Tourism website. "International standard (WLTP) to be adopted for passenger car emissions and fuel consumption test methods" (in Japanese). https://www.mlit.go.jp/report/press/jidosha10_hh_000172.html
Engineer judgment based on ambiguity
From the beginning, engineers in the field thought that fuel economy measurement was ambiguous and existed in a gray zone. In fact, engineers at Company A made the following comment regarding fuel economy measurement (Tokubetsu Cyousa Iinkai, 2016, pp. 218-219): "Theoretically, there is no difference in the running resistance ultimately achieved with the coasting method or the high-speed coasting method, so using the high-speed coasting method is not a very serious issue." Therefore, the judgment was that there is theoretically no difference even when using the unapproved high-speed coasting method, so it is not a problem.
Moreover, "Since the data from the measurements are affected by the external environment at the time the measurement is taken, one can say that the values that can be obtained 'in theory' (the true values) are the correct values for the measured data." In other words, the theoretical values obtained while sitting at one's desk were held to be more accurate than the fuel economy measurements prescribed by law. Therefore, to measure the running resistance for fuel economy performance, Company A typically (a) used the high-speed coasting method instead of the legally stipulated coasting method and (b) calculated only the theoretical values without taking any actual measurements. This was eventually exposed as a form of organizational corruption in the automobile industry.
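To make the coasting (coast-down) idea concrete, the sketch below recovers a running-resistance curve F(v) = f0 + f2·v² from a simulated coast-down speed trace via m_e·dv/dt = −F(v). The quadratic load model, the numbers, and the variable names are illustrative assumptions for exposition; this is not the regulated Japanese test procedure or Company A's method.

```python
import numpy as np
from scipy.optimize import curve_fit

m_e = 1100.0                        # effective vehicle mass [kg] (assumed)
f0, f2 = 120.0, 0.40                # "true" rolling / aerodynamic terms
t = np.linspace(0.0, 60.0, 601)     # time stamps of the speed log [s]

v = np.empty_like(t)
v[0] = 30.0                         # initial coasting speed [m/s]
for i in range(1, t.size):          # simulate the coast-down in neutral
    dt = t[i] - t[i - 1]
    v[i] = v[i - 1] - (f0 + f2 * v[i - 1] ** 2) / m_e * dt

F = m_e * -np.gradient(v, t)        # running resistance from deceleration
(f0_hat, f2_hat), _ = curve_fit(lambda vv, a, b: a + b * vv**2, v, F)
print(f0_hat, f2_hat)               # recovers roughly 120 and 0.40
```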
Organizational corruption regarding fuel economy
The first global case of this organizational corruption was the 2015 discovery of Volkswagen's violation of the diesel engine emissions rules (Hotten, 2015). In Japan, the first instance of fuel economy fraud was uncovered in 2016 at Mitsubishi Motors Corporation (Tokubetsu Cyousa Iinkai, 2016), and that year Suzuki Motor Corporation also disclosed that it had found fraud (Suzuki Motor Corporation, 2016).
In 2018, Subaru Corporation and Nissan Motor Co., Ltd. announced that inappropriate actions, such as rewriting fuel economy measurement data, had occurred during completed-vehicle inspections (Nagashima Ohno & Tsunematsu, 2018; Nissan Motor Co., Ltd., 2018). Immediately before this, Subaru and Nissan had uncovered irregularities in the inspection of pre-shipment vehicles by unqualified persons, and the upshot was that further problematic behavior was identified and disclosed. 11 Although Japan has fewer than ten automakers, many of them were involved in fraudulent activity in this area.
Conclusion
The meaning of fuel economy is vague in the first place, 12 and even if the actual fuel economy while driving is measured, the results will vary considerably, so they are unstable. Nevertheless, fuel economy became a competitive metric because of such social issues as the environment and the oil crises. If too much emphasis is placed on one competition index, people will tend to feel that the metric in question needs to be achieved by whatever means possible (Takahashi, 2015). 13 Fuel economy measurement standards contained ambiguities, and development targets were raised many times owing to severe fuel economy competition in the kei vehicle market; when this happened, there were cases where the approach taken evolved into fraudulent acts (Tokubetsu Cyousa Iinkai, 2016).
Also, violations in the area of fuel economy performance do not subject vehicles to recalls, as they do not affect people's lives or their safety. The severity differs from such cases as Takata Corporation, which collapsed in 2017 due to an airbag recall problem. The investigative committee that interviewed the engineers who committed the fraud was critical, saying, "There is insufficient awareness that the laws are being violated, because no one is paying attention to the laws," and that the engineers had a "sanctimonious attitude." However, the engineers' standpoint was that there was no serious violation of the law (Tokubetsu Cyousa Iinkai, 2016). Still, not all companies that were competing on fuel economy committed fraud. 14 Future studies will need to verify the internal organizational factors by conducting a case study on each company.
11 This investigation targeted fraud in the inspection of completed vehicles, but no fraud was reported by Toyota Motor Corporation, Honda Motor, Mitsubishi Motors Corporation, or Daihatsu Motor Co., Ltd.
12 Engineers adhere to the concept of tolerance, which refers to the functionally allowable difference between a product's maximum and minimum dimensions (Byun, 2019).
13 Takahashi (2015) discusses the results-oriented approach adopted by Japanese human resources departments in the 1990s. | 2,284.8 | 2020-06-15T00:00:00.000 | ["Economics"] |
Molecular cloning and functional characterization of a novel human CC chemokine receptor (CCR5) for RANTES, MIP-1beta, and MIP-1alpha.
Chemokines affect leukocyte chemotactic and activation activities through specific G protein-coupled receptors. In an effort to map the closely linked CC chemokine receptor genes, we identified a novel chemokine receptor encoded 18 kilobase pairs downstream of the monocyte chemoattractant protein-1 (MCP-1) receptor (CCR2) gene on human chromosome 3p21. The deduced amino acid sequence of this novel receptor, designated CCR5, is most similar to CCR2B, sharing 71% identical residues. Transfected cells expressing the receptor bind RANTES (regulated on activation normal T cell expressed), MIP-1β, and MIP-1α with high affinity and generate inositol phosphates in response to these chemokines. This same combination of chemokines has recently been shown to potently inhibit human immunodeficiency virus replication in human peripheral blood leukocytes (Cocchi, F., DeVico, A. L., Garzino-Demo, A., Arya, S. K., Gallo, R. C., and Lusso, P. (1995) Science 270, 1811-1815). CCR5 is expressed in lymphoid organs such as thymus and spleen, as well as in peripheral blood leukocytes, including macrophages and T cells, and is the first example of a human chemokine receptor that signals in response to MIP-1β.
Chemokines mediate the migration and activation of leukocytes at sites of inflammation. Chemokines are 70-90 amino acids in length and are subdivided into two gene families based on the presence or absence of an amino acid between the first two of four conserved cysteines (1, 2). The CXC chemokines predominantly activate neutrophils, while the CC chemokines generally activate monocytes, lymphocytes, basophils, and eosinophils.
Genes encoding closely related G protein-coupled receptors are often closely linked. For example, the IL-8 receptor genes (IL8RA, IL8RB, and a related pseudogene) are all encoded by human chromosome 2q34-q35 (16). In addition, the receptor genes for chemoattractants C5a and fMet-Leu-Phe, as well as two closely related orphan receptor genes, cluster at chromosome 19q13.3 (17). Recently, the genes for CCR1, CCR2, and the related sequence V28 (18) have been mapped to human chromosome 3p21 (8, 18-20). In our efforts to further analyze the genetic linkage of these CC chemokine receptor genes, we have identified a novel G protein-coupled receptor on chromosome 3 with significant homology to this gene family. This receptor binds and functionally responds to a unique combination of CC chemokines and is therefore termed CCR5.
Screening YAC Library-Two yeast artificial chromosome (YAC) clones encoding CCR1 were identified by PCR on DNA pools from the human CEPH "B" YAC library (Research Genetics Inc., Huntsville, AL). DNA from YAC clones 881F10 and 941A7 were used as templates for PCR reactions with degenerate primers designed from homologous regions of CCR1, CCR2, and V28 (18). The primers corresponded to regions in the second intracellular loop and the sixth transmembrane domain of the receptor proteins: sense, GACGGATCCAT(T/C)GA(T/
* This work was supported by ICOS Corporation and by National Institutes of Health Grant HL52773 (to I. F. C.). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
The nucleotide sequence(s) reported in this paper has been submitted to the GenBank TM /EBI Data Bank with accession number(s) U54994.
‡ The first two authors contributed equally to this work.
The PCR reactions were performed using a single 4-min denaturing step at 94 °C followed by 33 cycles of denaturing for 1 min at 94 °C, annealing for 45 s at 55 °C, and extension for 1 min at 72 °C. The resulting PCR products were digested with BamHI (site underlined in sense primer) and HindIII (site underlined in antisense primer) and cloned into pBluescript (Stratagene, La Jolla, CA) for sequence analysis.
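The "(T/C)"-style degenerate positions above correspond to IUPAC ambiguity codes (Y = C/T, R = A/G, and so on). The minimal sketch below enumerates the concrete sequences a degenerate primer encodes; since the full primer sequences are truncated in this text, the example string is hypothetical.

```python
from itertools import product

# IUPAC nucleotide ambiguity codes (subset sufficient for typical primers)
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
         "K": "GT", "M": "AC", "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand(degenerate: str) -> list[str]:
    """Enumerate every concrete sequence a degenerate primer encodes."""
    return ["".join(p) for p in product(*(IUPAC[b] for b in degenerate.upper()))]

# hypothetical short primer: Y = C/T and R = A/G give 4 concrete sequences
print(expand("GACGGATCCATYGAR"))
```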
Isolation of CCR5 cDNA-A human macrophage cDNA plasmid library (22) in pRcCMV (Invitrogen, San Diego, CA) was screened by PCR with primers specific for CCR5 (sense, TGTGTTTGCTTTAAAAGCC; antisense, TAAGCCTCACAGCCCTGTG). The PCR reactions were performed as above except 30 cycles were used with an annealing step of 60°C. Two plasmids were obtained which contained a full length open reading frame that included the CCR5 PCR fragment sequence. The cDNA clones were sequenced by dideoxy chain termination using an automated sequencer.
RACE PCR-RACE PCR was performed on human spleen 5′-RACE-ready cDNA (Clontech, Palo Alto, CA) using antisense primers specific for CCR5 according to the manufacturer's directions. The resulting PCR product was ligated into vector pCR using a TA cloning kit (Invitrogen) and sequenced.
Mapping CCR Locus-Three P1 clones (Genome Systems, St. Louis, MO) were identified using PCR primers specific for CCR2. Genomic mapping was performed by Southern blotting of restriction endonuclease digests of P1 and YAC DNA. Restriction fragments were separated by pulsed-field or conventional electrophoresis, transferred to nylon membranes, and hybridized with ³²P-labeled DNA probes from different regions of the CCR genes.
Northern Blots-The expression of CCR5 mRNA in human tissues was examined by Northern analysis using blots purchased from Clontech. The blots were hybridized in 0.75 M sodium chloride, 50 mM sodium phosphate (pH 6.8), 5 mM EDTA, 0.2% Ficoll, 0.2% polyvinylpyrrolidone, 0.2% bovine serum albumin, 100 µg/ml sheared salmon sperm DNA, 2% sodium dodecyl sulfate, and 50% formamide at 42 °C overnight using a partial CCR5 cDNA as probe (including approximately 700 bases of coding region and 300 bases of 3′ non-coding region). Blots were washed extensively at 50 °C in 30 mM sodium chloride, 3 mM sodium citrate, 0.1% sodium dodecyl sulfate. CD4⁺ and CD8⁺ T lymphocytes were isolated from normal human blood by immunomagnetic negative selection. Purity of each T cell preparation was >90%. RNA was isolated from T cells and from hematopoietic cell lines using RNA STAT-60 (Tel-Test "B", Friendswood, TX). Ten µg of total RNA from each cell type was electrophoresed, blotted, and hybridized as described (18). A 500-base PCR fragment corresponding to the 5′ half of the CCR5 coding region was used as a probe.
Expression of CCR5-The CCR5 coding region was subcloned into the mammalian cell expression vector pBJ1, which is derived from pcDL-SRα296 (23). This construct contains a signal sequence followed by the FLAG epitope (DYKDDDDK) at the amino terminus of CCR5 to facilitate quantitation of surface expression (24). For signaling studies in COS-7 cells, CCR5 was co-transfected with the chimeric G protein Gqi5 (21) using LipofectAMINE following the manufacturer's instructions. For binding studies, the FLAG-CCR5 sequence was subcloned into pcDNA3 (Invitrogen) and transfected into HEK-293 cells using LipofectAMINE according to the manufacturer's instructions. Cells were expanded in the presence of G418 (800 µg/ml). Transfected cells were evaluated for expression of CCR5 at the cell surface by enzyme-linked immunosorbent assay using the M1 antibody (Eastman Kodak Co.) to the FLAG epitope.
Phosphoinositol Hydrolysis-Transfected cells were assayed for agonist-dependent phosphoinositol turnover as described (25). Briefly, approximately 24 h after transfection, COS-7 cells were labeled for 20-24 h with myo-[2-³H]inositol (1 µCi/ml) in inositol-free medium containing 10% dialyzed fetal calf serum. Labeled cells were washed and then treated with agonist for 1 h at 37 °C in inositol-free Dulbecco's modified Eagle's medium containing 10 mM LiCl. Cells were lysed by addition of 0.75 ml of ice-cold 20 mM formic acid for 30 min. Supernatant fractions were loaded onto AG1-X8 Dowex columns (Bio-Rad) followed by immediate addition of 3 ml of 50 mM NH₄OH. The columns were then washed with 4 ml of 40 mM ammonium formate followed by elution with 2 M ammonium formate. Total inositol phosphates were quantitated by counting β emissions.
Binding Assay-The radiolabeled chemokine binding assay was a modification of the procedure described by Ernst et al. (26). Chemokines were labeled using the Bolton and Hunter reagent (diiodide, DuPont NEN) according to the manufacturer's instructions. Unconjugated iodide was separated from labeled protein using a PD-10 column (Pharmacia Biotech Inc.) equilibrated with phosphate-buffered saline and bovine serum albumin (1% w/v). The specific activity was typically 2200 Ci/mmol. Equilibrium binding was performed by adding ¹²⁵I-labeled hMIP-1β with or without a 100-fold excess of unlabeled ligand to 5 × 10⁵ cells in a total volume of 300 µl of binding buffer (50 mM Hepes pH 7.4, 1 mM CaCl₂, MgCl₂, 0.5% bovine serum albumin) and incubating for 90 min at 27 °C with shaking at 150 rpm. The cells were collected using a Skatron cell harvester (Skatron Instruments Inc., Sterling, VA) on glass fiber filters presoaked in 0.3% polyethyleneimine and 0.2% bovine serum albumin. Bound ligand was quantitated by counting γ emissions. Competitive binding was determined by incubation of 5 × 10⁵ transfected cells (as above) with 2 nM radiolabeled hMIP-1β and the indicated concentrations of unlabeled ligand. The data were analyzed using the curve-fitting program Prism (GraphPad Inc., San Diego, CA) and the iterative nonlinear regression program LIGAND (27).
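As a modern restatement of the Scatchard treatment used here, the sketch below fits the one-site saturation-binding model B = Bmax·L/(Kd + L) by nonlinear regression, the kind of computation Prism and LIGAND automate. The data points are invented to be roughly consistent with a Kd near 1.6 nM; they are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

L = np.array([0.1, 0.3, 1, 2, 4, 8, 16])         # free 125I-ligand [nM]
bound = np.array([7, 19, 46, 67, 86, 100, 109])  # specific binding [fmol]

def one_site(L, Bmax, Kd):
    """Single-class receptor binding isotherm."""
    return Bmax * L / (Kd + L)

(Bmax, Kd), _ = curve_fit(one_site, L, bound, p0=(120.0, 2.0))
print(f"Bmax ~ {Bmax:.0f} fmol, Kd ~ {Kd:.1f} nM")  # roughly 120 and 1.6
```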
RESULTS AND DISCUSSION
Previously characterized CC chemokine receptors are structurally similar, sharing 46-62% amino acid identity (19). In addition, CCR1, CCR2, and the closely related G protein-coupled receptor V28 are known to be encoded by human chromosome 3p21 (8, 18-20). The close proximity of these CC chemokine receptor genes suggested that related genes might be clustered nearby. Two overlapping human YAC clones were identified by PCR with primers specific for the CCR1 gene. The clones, 881F10 and 941A7, were 640 and 700 kb, respectively, and both mapped to human chromosome 3 (28, 29). Amplification with primers specific for the other CC chemokine receptor genes demonstrated that both YAC clones also encoded CCR2 but not V28.
To look for novel receptor genes, YAC DNA was used as a template for PCR with degenerate oligonucleotides. The beginning of the second cytoplasmic domain and the sixth transmembrane domain, which are highly conserved among CCR1, CCR2, and V28, were used to design the degenerate oligonucleotides for PCR. In addition to CCR1 and CCR2, two novel G protein-coupled receptor genes were amplified. One of the genes, 88-2B, was subsequently reported to be the eotaxin receptor gene CCR3 (13, 14, 19). The other novel sequence was utilized to isolate two full-length cDNA clones from a macrophage cDNA library (Fig. 1). Clone 134 is 1.6 kb in length and extends 45 bases upstream of the putative initiating methionine. Clone 101 is 3.4 kb in length, extends 25 bases upstream of the initiating methionine, and includes a poly(A) tail. A consensus Kozak sequence (30) surrounds the putative initiating methionine codon; however, neither clone contains an in-frame stop codon in the 5′-untranslated region. To confirm that the putative initiating methionine is the true translational start, RACE PCR was performed on human spleen cDNA. The fragment amplified by RACE PCR extends 9 nucleotides farther upstream than clone 134; however, these nucleotides do not encode another methionine or a termination codon. Therefore, the originally designated initiating methionine is assumed to be correct.
The deduced amino acid sequence of the novel gene shares significant sequence identity with all four of the known CC chemokine receptors (Fig. 2). It is most similar to CCR2B with 71% identical residues and shares 55, 49, and 48% identity with CCR1, CCR3, and CCR4, respectively. The sequences diverge significantly in their amino-terminal extracellular domains, a region that has been implicated in determining ligand specificity (31). As shown below, the receptor encoded by this novel sequence binds and responds to a unique set of CC chemokines and is therefore termed CCR5.
The position of CCR5 was mapped relative to the other CC chemokine receptor genes at this locus on chromosome 3p21. As shown in Fig. 3A, four receptor genes are closely linked, mapping within approximately 150 kb of each other as determined by pulsed-field electrophoresis and Southern blotting of YAC clones. The fifth CC chemokine receptor gene, CCR4 (15), was not found on either of the YAC clones. Significantly, CCR5 maps within 18 kb of CCR2, the gene to which it is most similar. Overlapping P1 clones were used to restriction map and define the intron/exon structure of these two closely related genes (Fig. 3B). CCR5 contains a single intron of 1.9 kb between nucleotides −11 and −12 in the 5′-untranslated region of the cDNA. CCR2 contains at least two introns, an alternately spliced 1.2-kb intron in the coding region and an intron greater than 2.7 kb that interrupts the 5′-untranslated region between nucleotides −51 and −52 of the cDNA.
The expression of CCR5 mRNA in human tissues was examined by Northern blot (Fig. 4A). A transcript of approximately 3.5 kb was found at highest levels in thymus and spleen, at medium levels in peripheral blood leukocytes and small intestine, and at low levels in ovary and lung. CCR5 expression in hematopoietic cell lines and in human T lymphocytes was also determined by Northern blot analysis (Fig. 4B). The transcript is present at highest levels in the myeloid cell line THP-1 and in CD4⁺ and CD8⁺ T cells. It was also detectable at lower levels in the myeloid cell line HL-60, in the B cell line Jijoye, and in the T cell line HUT 78. In addition, the cDNA was an abundant transcript in our human macrophage cDNA library.
CCR5 was transfected into COS-7 cells for intracellular signaling studies to determine ligand specificity. The FLAG epitope (DYKDDDDK) was added to the amino terminus of CCR5 to facilitate detection of receptor expression (24). Previous experiments using CCR2B have shown that addition of this epitope does not affect ligand binding or receptor signaling and may increase surface expression. 2 Quantitative enzyme-linked immunosorbent assays confirmed that CCR5 was expressed at the cell surface in transiently transfected COS-7 cells; however, no phosphoinositol hydrolysis was detected in response to CC or CXC chemokines (data not shown). Other laboratories have shown that some chemokine receptors such as IL8RA and IL8RB require cotransfection with exogenous G proteins before signaling can be detected in COS-7 cells (32). To optimize signaling through CCR5 in the COS-7 cells, the receptor was co-expressed with the chimeric G protein Gqi5 (in which the carboxyl-terminal five amino acids of Gαi2, which mediate receptor binding, replace those of Gαq; see Ref. 21). Previous results have shown that Gqi5 significantly potentiates signaling by CCR1 and CCR2B. 3 Co-transfection of CCR5 with Gqi5 revealed that CCR5 signaled well in response to RANTES, hMIP-1β, and hMIP-1α in phosphoinositol hydrolysis assays (Fig. 5A). The murine chemokines MIP-1α and MIP-1β also stimulated inositol phosphate release in transfected cells. No signaling was measured in response to hMCP-1, hIL-8, or the murine MCP-1 homologue JE. Dose-response curves indicated EC₅₀ values of 1 nM for RANTES, 6 nM for hMIP-1β, and 22 nM for hMIP-1α (Fig. 5B). The murine homolog of CCR5 has recently been isolated and recognizes a similar set of ligands (33).
Binding of radiolabeled hMIP-1β to 293 cells expressing CCR5 was examined. Equilibrium binding experiments showed that hMIP-1β bound CCR5-transfected cells in a specific and saturable manner (Fig. 6A). Scatchard analysis of the binding data revealed a dissociation constant (Kd) of 1.6 nM and an average number of sites per cell of 1.2 × 10⁵ (data not shown). Competition of hMIP-1β binding was observed with hMIP-1β, hMIP-1α, and RANTES (Fig. 6B). IC₅₀ values obtained from competition binding curves were 6.9 nM for RANTES, 7.4 nM for MIP-1α, and 7.4 nM for MIP-1β. Interestingly, the affinity of MIP-1α binding to CCR5 is as high as that of the other ligands, even though it is less potent at inducing inositol phosphate release. This disparity between binding and signaling potency has also been observed with MIP-1α and RANTES interactions with CCR1 (7).
CCR5 is the first cloned human receptor that responds to MIP-1β. Although Combadiere and colleagues originally reported that CCR3 signaled in response to MIP-1β (34), they subsequently reported that this response was not due to CCR3 (35). In contrast to MIP-1α, relatively little is known of potential biological roles for MIP-1β. Sporn and colleagues have demonstrated an increase in MIP-1β mRNA following monocyte adhesion to substrates (36). Evidence for a role in mediating lymphocyte migration was provided by Tanaka and colleagues, who found that immobilized MIP-1β induced activation and adhesion of CD8⁺ T cells to vascular cell adhesion molecule (VCAM) (37). It is intriguing that the same three chemokines that activate CCR5 have recently been shown by Cocchi and colleagues (38) to potently inhibit replication of human immunodeficiency virus types 1 and 2 in human peripheral blood leukocytes. This raises the possibility that activation of CCR5, which is expressed in T lymphocytes and macrophages, may play a protective role in human immunodeficiency virus infection. The availability of the CCR5 cDNA represents an important tool for elucidating the roles of MIP-1β and related chemokines in lymphocyte activation, trafficking, and human immunodeficiency virus infection. | 3,873.8 | 1996-07-19T00:00:00.000 | ["Biology", "Medicine"] |
Low-frequency vibrational modes of stable glasses
Unusual features of the vibrational density of states D(ω) of glasses allow one to rationalize their peculiar low-temperature properties. Simulational studies of D(ω) have been restricted to poorly annealed glasses that may not be relevant to experiments. Here we report on D(ω) of zero-temperature glasses with kinetic stabilities ranging from poorly annealed to ultrastable glasses. For all preparations, the low-frequency part of D(ω) splits between extended and quasi-localized modes. Extended modes exhibit a boson peak crossing over to Debye behavior (D_ex(ω) ~ ω²) at low frequency, with a strong correlation between the two regimes. Quasi-localized modes obey D_loc(ω) ~ ω⁴, irrespective of the stability. The prefactor of this quartic law decreases with increasing stability, and the corresponding modes become more localized and sparser. Our work is the first numerical observation of quasi-localized modes in a regime relevant to experiments, and it establishes a direct connection between glasses' stability and their soft vibrational modes.
Amorphous solids exhibit universal low-temperature properties, seen for instance in the heat capacity and thermal conductivity 1, that differ remarkably from those of crystals. These properties are related to the vibrational density of states D(ω). For a continuous elastic medium in three dimensions, the low-frequency excitations are phonons, and the density of states follows D(ω) = A_D ω², where A_D is given by Debye theory 2. A well-known universal feature of amorphous solids is an excess of vibrational modes over the Debye prediction that results in a peak in D(ω)/ω² at an intermediate frequency, called the boson peak [3][4][5][6].
More recently, another source of 'excess modes' has been identified in computer simulations of model glasses [7][8][9][10][11][12]. It is composed of quasi-localized low-frequency modes with a density obeying D_loc(ω) ~ ω⁴. Quasi-localized modes are observed at frequencies significantly lower than the boson peak, and the link between the two phenomena is not immediate, despite some indications that they may be connected 8,13. The quartic law was predicted long ago using phenomenological models 14,15, reanalyzed over the years [16][17][18], and remains the focus of intense research 19,20. These predictions differ from two recent mean-field approaches 21,22, which predict instead a universal non-Debye behavior that is quadratic in all spatial dimensions, also reported numerically 23. Interest in the low-frequency localized modes extends beyond connections to theoretical models and the boson peak. It was suggested that these modes are correlated with irreversible structural relaxation in the supercooled liquid state 24, and that the spatial distribution of these soft modes is correlated with rearrangements upon mechanical deformation and plasticity [25][26][27][28]. Localized defects are also central to theoretical descriptions of glass properties at cryogenic temperatures 29,30.
Recent numerical insights were obtained for glasses that are very different from the ones studied experimentally, since they are prepared with protocols operating on timescales that differ from experimental ones by as many as ten orders of magnitude 31. It is therefore unknown whether any of the vibrational, thermal, or mechanical properties derived from earlier computational studies of the density of states is experimentally relevant. For example, it was reported 9,32 that D_loc(ω) ~ ω^β, with β ranging from 3 to 4 depending on the glass's stability, and with β = 4 for the two most stable simulated glasses created by cooling at a constant rate. It remains unclear, however, whether β would be different for glasses with stability comparable to that of experimental glasses.
Our main achievement is to extend studies of the vibrational density of states of computer glasses to an experimentally relevant regime of glass stability for the first time. To this end, we build on the recent development of a Monte Carlo method that allows us to equilibrate supercooled liquids down to temperatures below the experimental glass transition [33][34][35] to prepare glasses that cover an unprecedented range of kinetic stability, from extremely poorly annealed systems to ultrastable glasses. We thus bridge the large gap between previous numerical findings and the experimental regime 36. Recent studies have shown that such stable glasses may differ qualitatively from ordinary computer glasses 35,37,38. For example, qualitatively different yielding behavior of well-annealed glasses compared to that of poorly annealed glasses was reported in ref. 38. Since rearrangements upon mechanical deformation are correlated with the spatial distribution of soft modes, this result suggested that the density of states could also evolve dramatically with the stability.
Results
System preparation. We prepare glasses by instantaneously quenching supercooled liquids equilibrated at a parent temperature T_p to T = 0, so that T_p uniquely controls the glass stability. We find that the low-frequency part of the vibrational density of states changes considerably when T_p varies, thus offering a direct link between soft vibrational modes and kinetic stability. Following earlier work 8,10, we divide modes into extended and quasi-localized ones. As found for high-parent-temperature glasses 7-10, the density of states of the quasi-localized modes follows D_loc = A_4 ω⁴, with the same quartic exponent for all glass stabilities. Our work thus establishes the relevance of earlier findings about quasi-localized modes and their effect on the density of states in the experimentally relevant regime of glass stability. In addition, we find that the overall scale A_4 decreases surprisingly rapidly when T_p decreases, showing that the density of the quasi-localized modes is highly sensitive to the glass stability. This rapid decrease contrasts with the modest changes found for other structural quantities, such as the mechanical moduli, sound speeds, and Debye frequency. Quasi-localized modes also become sparser and increasingly localized at low T_p, so the identification of soft localized modes as relevant glassy defects controlling the physics of amorphous solids becomes more convincing near the experimental glass transition. Our results also suggest that ultrastable glasses contain significantly fewer localized excitations than ordinary glasses, which appears consistent with recent experiments [39][40][41].
We simulate a polydisperse glass-forming system in three dimensions, which is a representative glass-forming computer model 33. We use the swap Monte Carlo algorithm to prepare independent equilibrated configurations at parent temperatures T_p ranging from above the onset temperature of slow dynamics, T_o ≈ 0.200, down to T_p = 0.062, which is about 60% of the mode-coupling temperature T_c ≈ 0.108 (T_c marks a crossover to activated dynamics and corresponds typically to the lowest temperature accessed by standard molecular dynamics). Importantly, our lowest T_p is lower than the estimated experimental glass temperature T_g ≈ 0.072 33, and no previous computational study has explored such a range of glass stability. In addition, we also use a very high parent temperature, which we refer to as T_p = ∞. We then probe the vibrational properties of zero-temperature glasses produced by an instantaneous quench from equilibrated configurations at different T_p. The specific simulation details are provided in Methods.
Macroscopic properties. We begin by presenting macroscopic properties of the glasses as a function of the parent temperature T_p. The inherent structure energy E_IS is directly related to the mobility of the particles 42, and thus we show E_IS in Fig. 1a as an indicator of the increased stability of the glass. E_IS deviates from its high-temperature plateau when T_p becomes smaller than the onset temperature and decreases further with decreasing T_p 43. Similarly, the bulk modulus B decreases modestly with decreasing T_p (Fig. 1b). By contrast, the shear modulus G in Fig. 1c remains nearly temperature-independent down to the mode-coupling temperature, which is below the onset temperature, and then increases with decreasing T_p. Associated with the increase of the shear modulus is a decrease of the Debye level, which is controlled mainly by the shear modulus because the transverse speed of sound c_t = √(G/ρ) is 2.4-2.6 times smaller than the longitudinal speed of sound c_l. The overall relative variations of the mechanical moduli and Debye frequency are, however, relatively mild given the broad range of glass stabilities covered in Fig. 1.
Classification of quasi-localized and extended modes. By examining the participation ratio P(ω) as a function of ω at different parent temperatures (see Fig. 2), we observe all the features that characterize the T_p dependence of the density of states. A value of P(ω) = 1 indicates a mode in which all particles participate equally, P(ω) = N⁻¹ indicates a mode in which only one particle participates, and P(ω) = 2/3 indicates an ideal plane wave. The sharp peaks in P(ω) at low frequencies are due to the phonon modes, with the first peak corresponding to the first allowed transverse phonon at ω_t = c_t 2π/L, L being the box length. An increase in ω_t indicates an increase in c_t = √(G/ρ). The low-frequency modes can be naturally divided into quasi-localized modes (small P) and extended modes (large P) through an appropriate thresholding procedure 8,10, this decomposition becoming sharper as L increases and T_p decreases. The threshold value P_0 = 0.006 is appropriate, as shown in Fig. 2, but we checked that our results are not qualitatively affected by a reasonable change of P_0. As T_p decreases, phonon modes shift to larger frequencies, as expected from the evolution of the mechanical moduli, whereas quasi-localized modes become increasingly localized and well separated from the phonons. We also checked that our results hold for small system sizes, where the allowed phonon modes are shifted to much higher frequencies 7.
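For concreteness, a minimal sketch of this thresholding classification, assuming mode frequencies and eigenvectors are already available from a normal-mode calculation (array names and shapes are illustrative, not the paper's code):

```python
import numpy as np

def participation_ratio(e_l):
    """P(omega_l) = (sum_i |e_{l,i}|^2)^2 / (N * sum_i |e_{l,i}|^4)."""
    amp2 = np.sum(e_l**2, axis=1)          # |e_{l,i}|^2 for each particle i
    return amp2.sum()**2 / (len(amp2) * np.sum(amp2**2))

def classify_modes(e, omega, P0=0.006):
    """Split modes into quasi-localized (P < P0) and extended (P >= P0).

    e: eigenvectors with shape (n_modes, N, 3); omega: frequencies.
    """
    P = np.array([participation_ratio(e_l) for e_l in e])
    localized = P < P0
    return omega[localized], omega[~localized]
```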
Properties of quasi-localized modes. We examined the density of states of the quasi-localized modes, D_loc(ω), shown in Fig. 3a for a few representative T_p. At low frequencies, D_loc(ω) = A_4 ω⁴ for each parent temperature, with a prefactor A_4 that depends on the glass stability. We show the resulting A_4(T_p) in Fig. 3b. The prefactor A_4 stays nearly constant for high enough T_p, but decreases sharply when T_p decreases below the mode-coupling temperature T_c. This observation is robust against changing the system size. The decrease of A_4 at low T_p correlates well with the evolution of the shear modulus and Debye level in Fig. 3c, d. We note that a study of less stable glasses 32 found an increase in the lowest frequency of quasi-localized modes with decreasing parent temperature, which, under certain assumptions, may be related to the change of A_4 reported here. A major result of our study is that the quartic law governing D_loc(ω) is obeyed irrespective of the glass stability, thus extending the validity of previous findings to the experimentally relevant regime.
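A minimal sketch of how the quartic prefactor can be extracted from the localized-mode frequencies, assuming they have been pooled over many independent glass samples (binning choices are illustrative, and the result is defined up to the overall normalization of D(ω)):

```python
import numpy as np

def quartic_prefactor(omega_loc, omega_max, bins=20):
    """Least-squares estimate of A4 in D_loc(omega) = A4 * omega^4."""
    w = omega_loc[omega_loc < omega_max]         # low-frequency tail only
    hist, edges = np.histogram(w, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # minimize sum_k (hist_k - A4 * centers_k^4)^2 over A4
    return np.sum(hist * centers**4) / np.sum(centers**8)
```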
In Fig. 3c we show the probability distribution of the participation ratio P for the modes with P < P_0 for N = 48,000 particles. With decreasing T_p, the distribution becomes narrower and the peak position shifts to smaller P values; accordingly, the average participation ratio decreases with decreasing T_p. This confirms that these modes become more localized with decreasing parent temperature, as had been observed for less stable glasses 9,13,32. Since the density of states is a function of the structure of the quenched system, we conclude that subtle local structural changes occur for T_p below T_c that strongly affect soft vibrational motion in the quenched glass.
To visualize the increasing mode localization, we define a 'softness' A(i) for each particle i 25; soft particles with large A(i) stand out against an essentially immobile background. To quantify these observations, we measured the probability distribution of A(i) (Fig. 4c). These distributions show a power-law tail at large A values, P(A_i) = λ(T_p) A_i^(−α) with α ≈ 3.7. At low T_p the tail is well separated from the core of the distribution at small A, and mobile particles with large A are better defined. There is also a pronounced decay of the probability of finding large A values at low T_p: λ(0.2)/λ(0.062) ≈ 4.3, which indicates a greater than fourfold decrease in the number of soft particles with large vibrational amplitudes. The interpretation of quasi-localized modes as relevant glassy defects controlling mechanical and thermal properties of glasses is therefore more convincing for stable glasses than it is for conventional computer glasses.
Properties of extended modes. Next, we examine the density of states of extended modes, D_ex(ω), i.e. modes with a participation ratio greater than P_0. In Fig. 5a, b we show the reduced density of states D_ex(ω)/ω² for two parent temperatures. For each temperature, the Debye level is reached at low enough ω and a boson peak is observed at larger frequencies. Using our localization criterion, we find that modes near the boson peak are not localized, but this does not imply that they have a phononic character. The boson peak narrows slightly with decreasing T_p. The Debye level and the boson peak location, height, and width all change modestly as T_p is varied over the entire range studied. The changes observed in our study agree qualitatively with those found by Grigera et al. 4.
In Fig. 5c we examine scaling properties of the density of states of extended modes. We rescale ω by the boson peak frequency ω_BP and plot the rescaled density of states D_ex/(A_D ω²). We observe an excellent collapse on the low-frequency side of the boson peak. This shows that in this frequency range the reduced frequency dependence has a universal shape, as reported before 44.
Second, the collapse also shows that the height of the boson peak correlates with the Debye level A_D. These results agree with experiments on molecular glass formers [45][46][47]. However, some of the same experiments report that the boson peak position scales with the Debye frequency 45,47, which is not consistent with our results. We also find that a scaling of ω_BP with the bulk modulus, suggested in ref. 48, is inconsistent with our results. Note that we study the evolution of the boson peak as a function of the preparation temperature, while experiments sometimes examine the temperature evolution of the boson peak for a given glass preparation. We also note that a correlation between the boson peak and quasi-localized modes has been proposed by studying systems at different pressures around the unjamming transition 49.
Since the boson peak occurs in a different frequency range than the ω⁴ scaling of D_loc(ω), it is not clear whether there is a relationship between the boson peak and the low-frequency quasi-localized modes. Simulations close to jamming suggest that A_4 ~ ω_BP⁴ 8, but we do not find that this relation holds with changing T_p. An alternative possibility follows from dimensional analysis, where a characteristic frequency for quasi-localized modes can be defined as A_4^(−1/5) 32. We find that A_4^(−1/5) ~ ω_BP for glasses with T_p < T_c (Fig. 6), but this relation does not hold for glasses created with T_p > T_c. We note that ω_BP is constant for T_p > T_c (see the inset to Fig. 6) and only changes for T_p < T_c. Again we find that T_c marks a change in the behavior of D(ω).
Given the relatively small changes in both ω_BP and A_4^(−1/5) over the entire range of parent temperatures studied, it is not clear that a power law is the proper relationship between these quantities, and further work is needed to verify it.
Discussion
In summary, we report the first characterization of the vibrational density of states of computer glasses prepared over a range of glass stability that bridges the gap between ordinary simulations and experimental studies. At low frequency, extended and quasi-localized modes coexist, and the two types of modes evolve differently when the glass stability is varied. We find a relatively mild temperature dependence of the extended modes, with a strong correlation between the Debye level and the boson peak. By contrast, quasi-localized modes evolve more strongly when T_p decreases below the mode-coupling temperature, but their density of states is always described by D_loc ~ A_4 ω⁴. Unexpectedly, the temperature dependence of the prefactor A_4(T_p) is more interesting than the value of the quartic exponent, which is insensitive to the degree of annealing. The increasing localization of the modes implies that subtle yet significant changes occur in the local structure of the glass that are not reflected in the pair correlation function, which is nearly identical for parent temperatures below T_c. Since soft modes have been linked to irreversible relaxation 24 and to rearrangements under shear [25][26][27][28], the reduction of these soft modes can have significant implications for glassy dynamics. In turn, this reduction indicates that there are fewer soft spots, which should increase the strength of the glass. This hypothesis is supported by the observation that the decrease in D_loc(ω) mirrors the increase of the shear modulus, and also correlates very well with the evolution of the ductility of the produced glasses 38,50. Since we can now equilibrate amorphous systems at temperatures low enough that they do not flow, another perspective would be to analyze the density of states at finite temperature through the Fourier transform of the velocity autocorrelation function 51, or by diagonalizing the covariance matrix of displacements 52. Future studies should examine the differences between these procedures to provide insight into thermal anharmonicities of stable glasses and, more generally, into their low-temperature transport properties.
Methods
Simulations. We simulate a polydisperse model glass former of sizes between N = 48,000 and 450,000 particles with equal mass at a number density ρ = 1.0 33. The interaction between two particles i and j is given by V(r_ij) = (σ_ij/r_ij)¹² + v(r_ij) when their separation r_ij ≤ r_c,ij = 1.25 σ_ij, and zero otherwise. We use v(r_ij) = c_0 + c_2 (r_ij/σ_ij)² + c_4 (r_ij/σ_ij)⁴, where the coefficients c_0, c_2, and c_4 ensure the continuity of V(r_ij) up to the second derivative at the cutoff r_c,ij. The probability of particle diameters σ is P(σ) = A/σ³, where σ ∈ [0.73, 1.63], and we use a non-additive mixing rule, σ_ij = ((σ_i + σ_j)/2)(1 − 0.2|σ_i − σ_j|). For N ≤ 192,000 we use the swap Monte Carlo algorithm to prepare independent equilibrated configurations at parent temperatures T_p ranging from above the onset temperature of slow dynamics (T_o ≈ 0.200) down to T_p = 0.062, which is about 60% of the mode-coupling temperature (T_c ≈ 0.108) and lower than the estimated experimental glass temperature (T_g ≈ 0.072) 33. In addition, we also use a very high parent temperature, which we refer to as T_p = ∞. Because of the very long equilibration times for systems of more than 192,000 particles, we only study systems with N > 192,000 at T_p = ∞.
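As a concrete illustration, here is a minimal sketch of this pair potential, with the cutoff coefficients obtained by imposing the stated continuity conditions (variable names are ours, not the paper's):

```python
import numpy as np

XC = 1.25  # reduced cutoff r_c/sigma_ij

# Solve for c0, c2, c4 so that V = V' = V'' = 0 at x = XC,
# where V(x) = x**-12 + c0 + c2*x**2 + c4*x**4 and x = r/sigma_ij.
A = np.array([[1.0, XC**2,     XC**4],
              [0.0, 2.0 * XC,  4.0 * XC**3],
              [0.0, 2.0,      12.0 * XC**2]])
b = -np.array([XC**-12, -12.0 * XC**-13, 156.0 * XC**-14])
C0, C2, C4 = np.linalg.solve(A, b)

def sigma_ij(si, sj, eps=0.2):
    """Non-additive mixing rule for the pair diameter."""
    return 0.5 * (si + sj) * (1.0 - eps * abs(si - sj))

def pair_potential(r, sij):
    """Inverse-power-law potential with smooth quartic cutoff."""
    x = np.asarray(r) / sij
    return np.where(x < XC, x**-12 + C0 + C2 * x**2 + C4 * x**4, 0.0)
```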
Density of states calculation. Following equilibration at a temperature T_p, zero-temperature glasses are produced by instantaneously quenching equilibrium configurations to their inherent structures using the Fast Inertial Relaxation Engine algorithm 53. We then calculate the modes by diagonalizing the Hessian matrix using the Intel Math Kernel Library (https://software.intel.com/en-us/mkl/) and ARPACK (http://www.caam.rice.edu/software/ARPACK/). We calculate all the normal modes for the 48,000-particle systems, but only the low-frequency part of the spectrum for systems with N > 48,000. We characterize the modes through the density of states D(ω) = (1/(3N−3)) Σ_{l=1}^{3N−3} δ(ω − ω_l) and the participation ratio P(ω_l) = (Σ_{i=1}^{N} |e_{l,i}|²)² / (N Σ_{i=1}^{N} |e_{l,i}|⁴), where e_{l,i} is the polarization vector of particle i in mode l with frequency ω_l. For a mode localized on one particle P(ω) = N⁻¹, and for an ideal plane wave P(ω) = 2/3. The phonon modes occur at discrete frequencies, and care has to be taken in the binning procedure used to calculate the density of states of extended modes, D_ex(ω). To perform this calculation, we determine the phonon frequencies from the peak positions of the participation ratio versus frequency and tune the bin size to smooth D_ex(ω). To obtain the shear modulus G and the bulk modulus B we use the method described in ref. 54.
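A minimal sketch of the corresponding mode computation, using SciPy's sparse eigensolver in place of MKL/ARPACK (the Hessian construction itself, and the (x1, y1, z1, x2, ...) coordinate ordering, are assumptions):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def low_frequency_modes(hessian, n_modes=100):
    """Lowest modes of a (3N x 3N) sparse Hessian at an inherent structure.

    Returns frequencies and eigenvectors reshaped to (n_modes, N, 3).
    Unit particle mass is assumed; the three zero translational modes
    appear at omega ~ 0 and should be discarded downstream.
    """
    evals, evecs = eigsh(hessian, k=n_modes, which='SA')  # smallest algebraic
    evals = np.clip(evals, 0.0, None)   # remove tiny negative numerical noise
    omega = np.sqrt(evals)
    return omega, evecs.T.reshape(n_modes, -1, 3)
```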
Data availability
All data will be available from the authors upon request. | 5,007.8 | 2018-03-05T00:00:00.000 | [
"Physics"
] |
Design and Implementation of a Hardware-in-the-Loop Simulation System for a Tilt Trirotor UAV
The tilt trirotor unmanned aerial vehicle (UAV) is a novel aircraft with broad application prospects in transportation. However, development of the aircraft has been slow because of the complicated control system and the high cost of flight experiments. This work attempts to overcome the problem by developing a hardware-in-the-loop (HIL) simulation system based on the mature, commercially available flight simulator X-Plane. First, the tilt trirotor UAV configuration and dynamic model are presented, and the parameters are obtained by conducting identification experiments. Second, taking the configuration of the aircraft into account, a control scheme composed of a mode transition strategy, a hierarchical controller, and control allocation is proposed. Third, a full-scale flight model of the prototype is designed in X-Plane, and an interface program is completed to connect the autopilot and X-Plane. Then, the HIL simulation system consisting of the autopilot, ground control station, and X-Plane is developed. Finally, the results of the HIL simulation and flight experiments are presented and compared. The results show that the HIL simulation system can be an efficient tool for verifying the performance of the proposed control scheme for the tilt trirotor UAV. The work contributes to narrowing the gap between theory and practice and provides an alternative verification method for the tilt trirotor UAV.
Introduction
The tiltrotor UAV is an aircraft with three flight modes: hover, transition, and forward flight. Therefore, it enjoys many advantages, such as long endurance, high mobility, and few site limitations [1,2]. In the hover mode, it can take off and land vertically (VTOL), so no runway is needed. In the forward mode, long-distance transportation can be achieved thanks to its long endurance and high cruise speed. Because of these advantages, the tiltrotor UAV has received considerable attention in recent decades [3,4]. It can be applied to aerial photography, target identification and localization, environmental protection, and so on. It is worth mentioning that the tiltrotor UAV may play an impressive role in the field of urban air traffic [5]. Airbus has proposed a tilt-wing aircraft named Vahana for the urban air mobility passenger transportation mission [6]. Besides, Uber has taken an interest in urban air traffic based on VTOL aircraft [7]. However, development of the tiltrotor UAV has been slow because of the high cost and risk of flight experiments, and few mature platforms are available for engineering applications [8].
At present, many countries, such as the United States, Korea, Israel, and China, are devoted to developing the tiltrotor UAV for its outstanding advantages. The Eagle Eye developed by the United States is one of the few successful tiltrotor UAVs in operation. However, several serious accidents were caused by the complexity of the control system and aerodynamic model, and the aircraft has not come into extensive use [9]. Korea designed a tiltrotor UAV named Smart UAV, whose configuration is similar to that of the Eagle Eye. Although many experiments with the Smart UAV have been carried out, the aircraft is still in the test stage [10,11]. Israel Aircraft Industries designed a tilt trirotor UAV named Panther, whose rear rotor can tilt for the control of yaw motion. Note that Panther is the first tilt trirotor UAV to have been delivered to the army as equipment [12]. The VTOL aircraft FireFLY6 developed by Birds Eye View Aerobotics has six propellers. The two rotors fixed at the rear of the aircraft are used only for hovering and remain off during forward flight. The other rotors in the front are driven by the same servo and can synchronously tilt 90 degrees for mode transition [13]. Panther and FireFLY6 are two typical aircraft that have been applied in practice; however, their cruise efficiency and flight stability still need to be improved. The development of the tiltrotor UAV suffers from various difficulties, including the control scheme and the verification method, and a growing body of research addresses these two problems. Kang et al. built the mathematical model of the tiltrotor UAV and designed a neural network controller for stable flight control; the control performance under turbulent wind conditions was validated through nonlinear simulation [14]. Papachristos et al. proposed an explicit model predictive control scheme relying on constrained multiparametric optimization, and the effectiveness of the scheme was demonstrated on a tri-tiltrotor equipped with rotor-tilting mechanisms [15]. Yucel et al. designed a tiltrotor UAV named TURAC using a cheap, rapid, and easily reproducible prototyping methodology; mathematical and CFD analyses were performed to optimize the design, and the low-cost prototyping methodology was verified by ground and flight experiments [16,17]. Many control algorithms have been proposed for the tiltrotor UAV, while few efficient verification methods have been developed.
Due to the cost of building a prototype and the high risk of conducting flight experiments, most of the advanced algorithms proposed for the tiltrotor UAV are validated by software simulation. It is well known that the effectiveness of simulation relies on an accurate mathematical model, which is difficult to obtain. HIL simulation, whose fidelity approaches that of an actual flight, contributes to verifying the control scheme and improving development efficiency. The highly realistic flight simulator X-Plane is widely used for developing and testing flight control schemes [18]. Adriano et al. developed a HIL simulation system consisting of an academic autopilot and X-Plane to verify and optimize the hardware. A fixed-wing attitude control scheme was proposed and verified by the HIL simulation, in which X-Plane was used to simulate the aircraft dynamics, sensors, and actuators [19,20]. Sergio et al. designed a quadrotor using Plane Maker, provided with the X-Plane flight simulator, and proposed a novel approach to design an attitude controller for the quadrotor based on a learning algorithm.
The simulation system composed of Simulink and X-Plane was built to investigate and verify the control algorithm [21]. Zhang et al. developed a test system, including an autopilot and X-Plane, to validate the control structure and narrow the gap between theory and practice. The results show that an autopilot that passed validation in HIL simulation can be directly applied to real flight [22]. Due to the variable structure of the tilt trirotor UAV during flight, its aerodynamic model is too complex to be acquired. However, a tilt trirotor UAV with a unique configuration can be designed with the help of Plane Maker, and the aircraft model imported into the X-Plane flight simulator is capable of simulating real flight with a high degree of accuracy. It is evident that HIL simulation based on X-Plane is an effective, rapid, and low-cost method to develop and verify control schemes for a tilt trirotor UAV.
In this paper, the motivation is to provide an efficient method for verifying the performance of the hardware and software designs of the control scheme for a tilt trirotor UAV. An X-Plane-based HIL simulation system that contributes to developing and verifying the control scheme of the aircraft is presented. First, we design a tilt trirotor UAV with three rotors, two servos, an elevator, and an aileron. For the designed aircraft, the control principle and mathematical model are presented. The critical parameters, such as the moments of inertia, mass, and rotor coefficients, are obtained through identification experiments. Second, in order to achieve stable control of the aircraft in different modes, a complete control scheme consisting of the mode transition strategy, hierarchical controller, and control allocation is developed. The control scheme is coded in the C programming language and embedded in the autopilot designed for the HIL simulation and flight experiment.
Third, the advantages of the X-Plane flight simulator are illustrated in detail. According to the tilt trirotor UAV parameters, the three-dimensional (3D) full-scale model used to simulate actual flight dynamics is developed using Plane Maker. Then, the HIL simulation system consisting of the autopilot, ground control station, and X-Plane is developed. Finally, the HIL simulation and flight experiment results are presented and compared to illustrate the reliability of the HIL simulation system. The control scheme validated by the HIL simulation can be moved directly on to the flight experiment, with only the control gains needing adjustment. Based on the HIL simulation system, the risk and cost of the flight experiment are reduced; meanwhile, the development efficiency is improved.
The remaining sections are arranged as follows. In Section 2, the configuration and control principle of the tilt trirotor UAV are demonstrated, and a series of identification experiments is conducted. Section 3 details the control scheme of the aircraft. In Section 4, an accurate full-scale 3D flight model based on the prototype is designed using Plane Maker, and the HIL simulation system is developed. In Section 5, the results of the HIL simulation and flight experiment are presented. In Section 6, the conclusion and future research work are outlined.
Tilt Trirotor UAV
The designed tilt trirotor UAV is a novel aircraft with a unique structure. To further understand the aircraft, we briefly describe its configuration and dynamic model. Then, the parameter identification is completed by conducting different experiments.
The tilting servos are embedded in the nacelles. The two front rotors are able to tilt from 30° to −90° with respect to the vertical axis, and the rear rotor is installed vertically. The right rotor and the rear rotor rotate counter-clockwise, while the left rotor rotates clockwise.
Taking the hierarchical control structure into account, position control is achieved by adjusting the attitude of the aircraft. In the hover mode, the three rotors and two servos are used to control attitude. The thrust difference between the left and right rotors is used to control the roll motion. The rear rotor can compensate for the moment generated by the two front rotors to stabilize the pitch. The yaw moment is created by the difference in tilting angles between the two front rotors. In the forward mode, the elevator and aileron are used for pitch and roll control, respectively. The aircraft is designed without a rudder, so the yaw motion is achieved by adjusting the thrust of the two front rotors, and the rear rotor remains off in the forward mode. The principles of flight control in the hover and forward modes are shown in Figure 2. The tilting angles of the two front rotors are denoted by α_1 and α_2, respectively. The thrust F_i of rotor i is determined by its rotation speed ω_i. The symbol δ_a represents the angle of aileron deflection, and δ_e the angle of elevator deflection. The mathematical model of the tilt trirotor UAV contains kinematic equations, navigation equations, force equations, and moment equations [23,24]. In this model, the position of the center of gravity in the world frame is expressed as χ = [X, Y, Z]ᵀ, V = [u, v, w]ᵀ denotes the linear velocity of the aircraft in the body frame, m denotes the mass, ω_b = [p, q, r]ᵀ represents the rotational angular velocity, Θ = [ϕ, θ, ψ]ᵀ is the vector of Euler angles, and I_b is the inertia matrix with respect to the body frame. The rotation matrix from the body frame to the world frame is R_BER, and R_BET denotes the transformation matrix from body-frame angular rates to Euler-angle rates. The force and moment imposed on the aircraft are denoted by F_b and τ_b; they are generated by the rotors, wings, and control surfaces. It should be pointed out that the aerodynamics of the tilt trirotor UAV is difficult to obtain, yet the mathematical model is an important part of the simulation. To tackle this issue efficiently, the X-Plane flight simulator is introduced to provide the dynamics of the aircraft; to this end, the parameters of the prototype must be acquired for designing a 3D flight model.
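Since the cited model takes the standard rigid-body form, a hedged sketch of these equations is given below; the ZYX Euler convention and the helper functions are our assumptions, not a reproduction of the paper's (elided) equations:

```python
import numpy as np

def euler_to_rotation(Theta):
    """Body -> world rotation matrix, ZYX (yaw-pitch-roll) convention."""
    phi, th, psi = Theta
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(th), np.sin(th)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([
        [cp*ct, cp*st*sf - sp*cf, cp*st*cf + sp*sf],
        [sp*ct, sp*st*sf + cp*cf, sp*st*cf - cp*sf],
        [-st,   ct*sf,            ct*cf]])

def body_rates_to_euler(Theta):
    """Maps body angular rates [p, q, r] to Euler-angle rates."""
    phi, th, _ = Theta
    cf, sf = np.cos(phi), np.sin(phi)
    ct, tt = np.cos(th), np.tan(th)
    return np.array([[1.0, sf*tt, cf*tt],
                     [0.0, cf,   -sf],
                     [0.0, sf/ct, cf/ct]])

def rigid_body_derivatives(V, omega_b, Theta, F_b, tau_b, m, I_b):
    """Newton-Euler 6-DOF derivatives in the body frame (sketch)."""
    chi_dot   = euler_to_rotation(Theta) @ V            # navigation equation
    V_dot     = F_b / m - np.cross(omega_b, V)          # force equation
    Theta_dot = body_rates_to_euler(Theta) @ omega_b    # kinematic equation
    omega_dot = np.linalg.solve(I_b, tau_b - np.cross(omega_b, I_b @ omega_b))
    return chi_dot, V_dot, Theta_dot, omega_dot
```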
Parameter Identification.
In this section, the parameter identification and related work on the tilt trirotor UAV are presented. On the one hand, constant parameters, including the moments of inertia and the rotor coefficients, are obtained. On the other hand, the method used for estimating the tilting angle is demonstrated. The compound-pendulum method and the bifilar torsion pendulum method are used for measuring the moments of inertia of the aircraft [25,26]. In both methods, the measurement of the oscillation period is the most important step. To obtain the oscillation period, the autopilot with its sensors is introduced into the experiment, and the period is obtained by recording the change of attitude. The partial results are shown in Figure 3.
Three groups of experiments are conducted for each axis, and the accurate period is obtained by averaging the three measurements. From the value of the oscillation period, the moments of inertia about the three axes can be acquired. The rotor is one of the most important actuators of the tilt trirotor UAV. In our work, the rotor of the aircraft is composed of a 13 × 6 inch propeller and a motor. A novel intelligent ergometer that can record the value of force, moment, and rotation speed is applied to test the rotor. The test system, which contains a rotor, the ergometer, a lithium battery, and a computer, is shown in Figure 4.
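As an illustration of the compound-pendulum reduction, a short sketch under the standard small-oscillation assumption (all numerical values are placeholders, not the paper's measurements):

```python
import numpy as np

def inertia_from_period(T, m, d, g=9.81):
    """Compound pendulum: I_cg = m*g*d*T^2/(4*pi^2) - m*d^2.

    T: oscillation period (s); m: mass (kg); d: distance from the
    pivot to the center of gravity (m). The second term is the
    parallel-axis correction back to the CG axis.
    """
    I_pivot = m * g * d * T**2 / (4.0 * np.pi**2)
    return I_pivot - m * d**2

# Average of three measured periods, as described in the text:
T_avg = np.mean([1.42, 1.40, 1.41])           # s, illustrative values
I_xx = inertia_from_period(T_avg, m=5.0, d=0.3)
```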
Assuming that the air density is constant, there is a linear relationship between the thrust and the square of the rotation speed, so the thrust and torque of rotor i can be written as F_i = k_f ω_i² and τ_i = k_d ω_i² [27].
Data are acquired from the rotor test, and the thrust coefficient k_f and torque coefficient k_d are obtained using the least squares method. According to the specifications of the motor and propeller, the maximum thrust that a rotor can provide is around 53.3 N. The curve-fitting results are shown in Figure 5. The yaw control is achieved by adjusting the tilting angles of the rotors. Taking the structure of the tilting mechanism into account, it is difficult to install a sensor for measuring the tilting angle.
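A minimal sketch of this least-squares fit, assuming the quadratic rotor model F = k_f ω² and τ = k_d ω² stated above (the data arrays are placeholders, not the recorded ergometer values):

```python
import numpy as np

def rotor_coefficient(omega, y):
    """Least-squares slope of y against omega^2 (zero-intercept model)."""
    x = omega**2
    return np.dot(x, y) / np.dot(x, x)

omega  = np.array([200.0, 400.0, 600.0, 800.0])   # rad/s (placeholder)
thrust = np.array([1.9,   7.7,   17.2,  30.5])    # N     (placeholder)
torque = np.array([0.04,  0.16,  0.35,  0.63])    # N*m   (placeholder)

k_f = rotor_coefficient(omega, thrust)
k_d = rotor_coefficient(omega, torque)
```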
Therefore, a practical approach is needed to estimate the tilting angle for flight control. In practice, the functional relation between the angle and the command is built first, and the tilting angle is then estimated online from the command. The calibration is divided into two steps. In the first step, an angle measuring instrument is fixed to the rotor, the tilting angle is changed by adjusting the output command of the autopilot, and the tilting angle corresponding to each command is recorded. Because of installation errors, the two rotors are calibrated separately. The measurement process is presented in Figure 6.
In the second step, the functional expression relating the tilting angle and the command is acquired by fitting the recorded data. The fitting results are shown in Figure 7, and the resulting function is given as equation (6). It is evident that the two rotors have different tilting angles when given the same command. This problem is caused by the installation error and can be corrected based on the calibration result. The contributions of equation (6) are twofold. On one side, a decoupled control allocation algorithm can be designed based on the estimated tilting angle. On the other side, precise control of the tilting angle is achieved without an angle sensor. It is noteworthy that the commands η_1 and η_2 are solved from the desired tilting angles, and the two front rotors are then driven by the servos to their respective desired angles. In real flight, the tilting angle error can be limited to no more than 2 degrees using this estimation method. It should be mentioned that, to build an accurate mathematical model in X-Plane, parameters such as the wingspan and rotor positions are also obtained. Based on the model described above, the control scheme and a full-scale 3D flight model are developed.
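A sketch of this calibration and its online inversion, assuming a linear command-to-angle map per rotor (the calibration pairs below are illustrative; the actual fitted coefficients are those of equation (6), which we do not reproduce):

```python
import numpy as np

def fit_tilt_map(commands, angles_deg):
    """Fit angle = slope*command + intercept for one rotor."""
    slope, intercept = np.polyfit(commands, angles_deg, 1)
    return slope, intercept

def command_for_angle(angle_deg, slope, intercept):
    """Invert the fitted map to get the servo command for a desired angle."""
    return (angle_deg - intercept) / slope

# Separate fits absorb the installation error of each front rotor:
s1, b1 = fit_tilt_map(np.array([1100.0, 1400.0, 1700.0]),   # PWM, placeholder
                      np.array([25.0, -30.0, -85.0]))       # deg, placeholder
eta_1 = command_for_angle(-45.0, s1, b1)   # command driving rotor 1 to -45 deg
```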
Control Design
The control scheme consists of the mode transition strategy, hierarchical controller, and control allocation. Considering the existence of the hover, transition, and forward modes, the mode transition strategy is designed to achieve safe mode conversion. The tilt trirotor UAV is an underactuated system, and a hierarchical PID controller is developed to control position and attitude. The control allocation provides the mapping from the virtual control commands to the manipulated inputs of the aircraft. For the tilt trirotor UAV, all three parts are necessary for flight control.
Mode Transition Strategy.
The altitude and attitude are the key factors for a stable mode transition, and the control system aims to stabilize both during flight. As a matter of fact, it is difficult to obtain an accurate aerodynamic model in the transition mode, so the airspeed should be increased as quickly as possible to ensure flight safety. A phased mode transition strategy is therefore designed according to the mode transition command and flight airspeed. To illustrate the transition process clearly, the transition from the hover mode to the forward mode is called the conversion phase, and the transition from the forward mode to the hover mode is called the reconversion phase. The conversion phase is completed as the airspeed increases. First, after receiving the conversion command, the two front rotors tilt forward to an angle P_f within T_f1 seconds. As the tilting angle increases, the airspeed increases and the wings start generating lift. Second, once the airspeed reaches V_f, at which the aerodynamic lift can compensate part of the gravity, all rotors are shut down; meanwhile, the two front rotors tilt forward to the horizontal position within T_f2 seconds. Finally, the aircraft enters the forward mode and cruises at V_c. In this work, the reconversion phase is a relatively simple process: after receiving the reconversion command, the two front rotors are shut down and tilted backward to the vertical position within T_b seconds, and the aircraft then enters the hover mode and begins to decelerate. Note that T_f2 and T_b should be set small enough to ensure the stability of the transition flight. The conversion and reconversion phases are shown in Figure 8.
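As an illustration, the conversion phase can be written as a small state machine. The sketch below uses the parameter names from the text with the values later reported in the Results, treated here as placeholders:

```python
import math

P_F, T_F1, T_F2, V_F = -0.46, 3.0, 0.2, 10.0   # rad, s, s, m/s (placeholders)
HORIZONTAL = -math.pi / 2                       # -90 deg from the vertical axis

def conversion_step(state, t_in_state, airspeed):
    """One tick of the hover -> forward conversion; returns (state, tilt)."""
    if state == "TILT_PARTIAL":
        tilt = P_F * min(t_in_state / T_F1, 1.0)    # ramp to P_f over T_f1 s
        nxt = "TILT_HORIZONTAL" if airspeed >= V_F else "TILT_PARTIAL"
        return nxt, tilt
    if state == "TILT_HORIZONTAL":
        frac = min(t_in_state / T_F2, 1.0)          # rotors off, finish tilt
        tilt = P_F + (HORIZONTAL - P_F) * frac
        return ("FORWARD" if frac >= 1.0 else "TILT_HORIZONTAL"), tilt
    return "FORWARD", HORIZONTAL
```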
Hierarchical Controller.
Because the tilt trirotor UAV is an underactuated system, a PID-based hierarchical controller is introduced to achieve the position and attitude control. For the hover and forward modes, two controllers are applied, respectively. Note that the hover controller is also used in the transition mode. Figure 9 shows a block diagram of the flight control system.
For the hover controller, the outer loop is used for position control; it receives the desired position from the navigator and outputs the desired thrust vector [U_x, U_y, U_z]ᵀ in the world frame. From the inverse transformation and the desired yaw angle ψ_d, the desired pitch angle θ_d and roll angle ϕ_d can be solved, and the virtual thrust T_H in the body frame is obtained [28]. The inner-loop attitude controller receives the desired angles and provides the virtual control torque [R, P, Y]ᵀ. The virtual thrust and control torque are the inputs of the allocator, and the control inputs of the actuators are then obtained by the control allocation algorithm. For the forward controller, the outer-loop position controller is composed of longitudinal control and lateral control. L1 navigation logic is used for the lateral control, yielding the desired yaw angle ψ_d and roll angle ϕ_d from the desired and current positions [29]. The longitudinal control is based on the total energy control method; the desired pitch angle θ_d and virtual thrust T_F are acquired from the altitude and airspeed [30]. The attitude controller in each flight mode has two loops, an angular loop and an angular rate loop. The angular loop uses a P controller to produce the desired angular rate for the angular rate loop, and the virtual control torque [R, P, Y]ᵀ is then provided by the angular rate loop, which is based on a PID controller.
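A minimal sketch of one axis of this two-loop structure (gains and the anti-windup limit are illustrative, not the paper's values):

```python
class RatePID:
    """PID on the angular rate error; produces one virtual torque component."""
    def __init__(self, kp, ki, kd, i_limit=1.0):
        self.kp, self.ki, self.kd, self.i_limit = kp, ki, kd, i_limit
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, rate_err, dt):
        self.integral = max(-self.i_limit,
                            min(self.i_limit, self.integral + rate_err * dt))
        deriv = (rate_err - self.prev_err) / dt
        self.prev_err = rate_err
        return self.kp * rate_err + self.ki * self.integral + self.kd * deriv

def attitude_step(angle_des, angle, rate, rate_pid, kp_angle, dt):
    """P angular loop feeding a PID rate loop; returns a virtual torque."""
    rate_des = kp_angle * (angle_des - angle)       # angular loop (P)
    return rate_pid.update(rate_des - rate, dt)     # angular rate loop (PID)
```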
Control Allocation.
In this section, the control allocation algorithm that provides the mapping from the virtual control inputs to the manipulated inputs of the aircraft is presented [31]. It should be pointed out that the control allocation in the forward mode is the same as for a traditional fixed-wing aircraft, so we focus only on the control allocation for the hover controller.
The tilt trirotor UAV has four virtual control inputs R, P, Y, and T_H, where R is directly linked to roll control, P is used for pitch control, Y is related to yaw control, and the altitude is controlled by T_H. There are five actuators, three rotors and two servos, that can be used for flight control. It is known that the rotation speed changes much faster than the tilting angle. According to the response speeds of the actuators, the calculation of actual outputs is divided into two parts. For the first part, the rotation speeds of the three rotors are obtained from R, P, and T_H. The right rotor, left rotor, and rear rotor are labeled 1, 2, and 3, respectively. To calculate the rotation speeds from the virtual commands, the tilting angles are first estimated from equation (6). This yields a linear system relating the squared rotation speeds [ω_1², ω_2², ω_3²]ᵀ to [R, P, T_H]ᵀ through a mixing matrix H built from the rotor coefficients, tilting angles, and rotor positions, where the vector [r_ix, r_iy, r_iz] denotes the position of rotor i in the body frame. Since H is a square matrix, the key point in obtaining the rotation speeds [ω_1², ω_2², ω_3²]ᵀ is to verify the invertibility of H. Taking the symmetry of the aircraft into account, we have r_2y = −r_1y and r_2x = r_1x, from which the determinant of H can be written out explicitly. For the hover and transition modes, the tilting angles are limited to (−90°, 30°); considering the differential control of the two front rotors, we have α_1 + α_2 ≤ 0, and we can conclude that the determinant is nonzero. The invertibility of H being proved, the rotation speeds can be obtained by inverting the system. For the second part, the two tilting angles are determined from Y. The yaw control of the aircraft is achieved by the differential tilting angles generated by the two front servos, taking advantage of the fact that the yaw motion can be adjusted effectively by tilting the two front rotors in opposite directions. From this point of view, the two tilting angles are offset in opposite directions from an original angle α_0 by an amount determined by Y and a small constant δ, where δ depends on the maximum tilting angle and the corresponding yaw moment, and α_0 depends on the flight mode, with α_0 = 0 in the hover mode. The proposed control allocation is very useful for an aircraft without tilting angle sensors and is easy to apply in real flight.
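A hedged sketch of this two-part allocation; the construction of H and the linear yaw-to-tilt offset below are illustrative stand-ins for the paper's elided expressions, not a reproduction of them:

```python
import numpy as np

def allocate(R, P, T_H, Y, H, alpha_0=0.0, delta=0.05):
    """Two-part hover-mode allocation (sketch).

    Part 1: invert the 3x3 mixing matrix H (assembled elsewhere from the
    rotor coefficients, tilt estimates, and rotor positions) to get the
    squared rotation speeds from [R, P, T_H].
    Part 2: map Y to opposite-sign tilt offsets around alpha_0; the
    linear form delta*Y is an assumption for illustration.
    """
    w_sq = np.linalg.solve(H, np.array([R, P, T_H]))
    w = np.sqrt(np.clip(w_sq, 0.0, None))   # rotation speeds w1, w2, w3
    alpha_1 = alpha_0 + delta * Y            # front rotor 1 tilts one way
    alpha_2 = alpha_0 - delta * Y            # front rotor 2 tilts the other
    return w, alpha_1, alpha_2
```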
HIL Simulation System
X-Plane is a commercially available flight simulator developed by Laminar Research that can be installed on computers running Windows, Linux, or macOS. In this work, X-Plane is used to provide the model of the tilt trirotor UAV and simulate its dynamics. Its advantages and communication interface are illustrated below, and a full-scale 3D flight model is then designed.
X-Plane Flight Simulator.
The flight simulator X-Plane is chosen because of its ability to predict an aircraft's flying qualities with high accuracy. This is done using blade element theory: the aircraft is divided into many small elements, and the forces acting on each element are calculated several times per second. Compared to other flight simulators such as FlightGear and FSX, X-Plane is more flexible and advanced. Moreover, it is the only flight simulator that has obtained certification from the Federal Aviation Administration (FAA), which makes the simulation results more credible [32]. Numerous companies, such as Cessna, Cirrus, and Boeing, have purchased X-Plane as an engineering tool; in addition, many researchers have applied X-Plane to develop and test control schemes. X-Plane provides Plane Maker and Airfoil Maker to enable users to create aircraft and airfoils as easily as possible. The benefits of using X-Plane are as follows.
(a) Realistic simulation environment: Laminar Research claims that using blade element theory to calculate the forces on the aircraft is more accurate than the stability-derivative method. Cloud cover, rain, wind, thermals, microbursts, and fog can all be simulated in X-Plane. The certification from the FAA allows researchers to achieve high levels of confidence in the simulation results, and many papers have verified that simulation based on X-Plane approaches the accuracy of an actual flight.
(b) Database of aircraft models: thousands of manned and unmanned vehicle models are freely available for download from the X-Plane forum. Although not necessarily certified, these models provide quick starting points for testing. Most important of all, Plane Maker makes it possible for researchers to design their own aircraft models, so that particular aircraft such as the tilt trirotor UAV can be developed and tested.
(c) Communication: X-Plane can communicate with external processes via the User Datagram Protocol (UDP). The UDP protocol is well suited to simulation because there is no delivery check, detection, or error correction, so high-speed data traffic is possible. X-Plane accepts control signals to drive actuators and outputs flight information. Note that X-Plane is capable of outputting all the navigation data necessary to perform a simulation and allows users to adjust the update rate from 1 to 99 packets per second.
The data packet of X-Plane has 41 bytes and follows a standard format [33]. The first four bytes contain the characters "DATA", indicating that this is a data package. The fifth byte is a code used for internal purposes. The remaining 36 bytes are divided into nine groups: the first group of four bytes is a label identifying the set of data, and the other eight groups carry the data to be sent. The first byte of each group is the sign bit, which tells whether the number is positive or negative. To further illustrate the data packet, a packet containing the Euler angles is shown in Figure 10. This packet is labeled 17 in X-Plane (version 10.42), and the values of the Euler angles are selected. Not all groups are filled with data in practice, and detailed information can be acquired from the manual.
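A sketch of decoding one such 41-byte group, assuming the common little-endian single-precision layout; the interpretation of label 17 follows the text, while the ordering of the fields within the group is an assumption:

```python
import struct

def parse_xplane_data(packet: bytes):
    """Decode one X-Plane 'DATA' group: tag, internal byte, label, 8 floats."""
    assert packet[:4] == b"DATA"
    label, = struct.unpack_from("<i", packet, 5)   # 4-byte group label
    values = struct.unpack_from("<8f", packet, 9)  # eight 32-bit floats
    return label, values

def euler_from_packet(packet):
    """Return the Euler angles if this is the label-17 group, else None."""
    label, values = parse_xplane_data(packet)
    if label == 17:               # Euler-angle group in X-Plane 10.42
        return values[:3]         # assumed field order within the group
    return None
```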
Design of the Flight Model.
The 3D model of the tilt trirotor UAV is built using Plane Maker, which allows the user to create nearly any type of aircraft [34]. The program provides a graphical interface to design an aircraft according to its physical specifications (weight, engine power, wingspan, wing area, control surfaces, and center of gravity), and X-Plane is then capable of predicting how that aircraft will fly. With Plane Maker, the fuselage cross section can be edited at up to 20 locations along the fuselage length, with up to nine points at each cross section. This allows much more accurate modeling of the fuselage than simply stating a radius and length, as is the case in most software, which can only model simple bodies of revolution.
To build an accurate X-Plane model, information such as the dimensions and weight of the aircraft is measured, as shown in Table 1, and the design process is divided into three parts. First, the fuselage, the basic frame to which the other parts are added, is modeled relatively simply from the prototype. Second, the wings and control surfaces are designed according to the airfoil and actual parameters. Finally, the rotor, consisting of a motor and a 13 × 6 propeller, is modeled. For the motor, an input value of +1 provides 1500 watts of power to the aircraft, and the motor turns off when given 0. Figure 11 shows the prototype and the X-Plane model. It should be mentioned that the X-Plane model has three rotors, two servos, an elevator, and an aileron; therefore, it is able to drive its actuators as the prototype does when given the same commands.
HIL Simulation System Setup.
The HIL simulation system consists of a dedicated autopilot, the X-Plane simulator, and a ground control station. A diagram of the HIL simulation system for the tilt trirotor UAV is shown in Figure 12. An open-source ground control station named QGroundControl, which can parse and package UDP packets, is applied. We have completed a secondary development of QGroundControl so that it can connect to the tilt trirotor UAV in the X-Plane environment. The control scheme is embedded and realized in the autopilot using the C programming language. The autopilot hardware is placed inside the simulation loop in order to test and validate both the hardware and the control scheme; it receives the states of the aircraft simulated by X-Plane and outputs control commands. The control commands are sent to X-Plane and used to control the rotation speeds, surfaces, and tilting angles of the flight model. Note that all the state information necessary for flight control can be provided by X-Plane. The autopilot shown in Figure 12 was developed for the control of the tilt trirotor UAV. It embeds two ARM Cortex-M4 microcontrollers working at clock rates of up to 168 MHz. In addition, a triaxial gyroscope, triaxial accelerometer, triaxial magnetometer, barometric altimeter, and airspeed meter are integrated into the autopilot. Linked with an external GPS module, the autopilot can be applied to perform predefined missions in flight experiments. The interface resources of the autopilot comprise 1 serial peripheral interface (SPI), 1 S-bus (RC receiver), 2 CAN bus, 4 serial ports, and 16 PWM outputs. In HIL simulation, a serial port is used to communicate with the ground control station, and the RC receiver transmits manual control signals to the autopilot through the S-bus port.
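A minimal skeleton of the resulting loop on the autopilot side, with placeholder ports and a generic controller callback; the actual system runs in C on the embedded autopilot, so this is only an illustration of the data flow:

```python
import socket

XPLANE_ADDR = ("127.0.0.1", 49000)       # default X-Plane UDP port (assumed)

def hil_loop(controller):
    """Receive state packets from X-Plane, run the controller, send commands.

    `controller(label, values)` is a placeholder returning an encoded
    command packet (bytes) or None; parse_xplane_data is the decoding
    sketch given earlier.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 49005))        # placeholder listen port
    while True:
        packet, _ = sock.recvfrom(1024)
        label, values = parse_xplane_data(packet)
        commands = controller(label, values)
        if commands is not None:
            sock.sendto(commands, XPLANE_ADDR)
```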
It should be pointed out that X-Plane is capable of providing a realistic simulation environment. Moreover, the autopilot and ground control station used in HIL simulation are the same as those in the flight experiment, so both the autopilot and the control scheme can be tested in HIL simulation. An autopilot that has passed validation in HIL simulation can be moved directly onto the prototype, and the flight experiment can be conducted with minimal modification.
Results
In this section, the mode transition flight is first conducted in the HIL simulation environment to verify and improve the control scheme. A flight experiment using the same autopilot and control scheme is then carried out. The effectiveness of the control scheme and of the HIL simulation system is demonstrated by comparing the results of the simulation and the experiment.
HIL Simulation Results.
In order to verify the control scheme and test the HIL simulation system, we set an oblong-airline flight mission in which the mode transition process can be demonstrated clearly. After receiving the takeoff command, the aircraft carries out the mission automatically. The predefined oblong airline contains fifteen waypoints, and the aircraft switches modes and completes the mission as shown in Figure 8. At the first waypoint, the aircraft receives the conversion command and switches flight mode by tilting the two front rotors; it then conducts the mission in the forward mode. When the aircraft arrives at the fourteenth waypoint, it receives the reconversion command and enters the hover mode. Finally, the aircraft flies to the fifteenth waypoint and lands in the hover mode.
Mode transition flight is the most important part of the flight mission. The key parameters for mode transition in HIL simulation are shown in Table 2. Based on the mode transition strategy described earlier, the conversion phase is completed by tilting the two front rotors, as shown in Figure 13. It is noteworthy that we focus mainly on the conversion phase, because the reconversion phase is a relatively simple process.
To further present the flight mission, the trajectory of the aircraft in HIL simulation is shown in Figure 14. The flight mission consists of fifteen waypoints, of which waypoint 1 and waypoint 14 are introduced for mode transition. The aircraft takes off and flies to waypoint 1 in the hover mode, then enters the conversion phase according to the conversion strategy. It enters the forward mode when the airspeed reaches V_f and flies from waypoint 2 to waypoint 13 in the forward mode. After arriving at waypoint 14, the altitude decreases to 25 m, and the aircraft enters the hover mode according to the reconversion strategy. In the last stage of the flight mission, the aircraft flies to the final waypoint and lands. The airspeed and altitude are shown in Figure 15, where the yellow shaded areas denote the conversion and reconversion phases. Note that the cruise airspeed in the forward mode is set to V_c = 20 m/s, obtained from the aerodynamic analysis and flight tests of the prototype. It should be pointed out that the conversion phase takes about 5 s, from 20.5 s to 25.5 s: the time used for tilting to the angle P_f = −0.46 rad is T_f1 = 3.0 s, the aircraft then takes about 1.8 s to accelerate to V_f = 10 m/s at tilting angle P_f, and the two rotors finally tilt to the horizontal position within T_f2 = 0.2 s. The reconversion phase is completed within T_b = 0.4 s. Because of the low airspeed when entering the forward mode, the altitude of the aircraft first decreases by about 5 meters; it then climbs and accelerates with sufficient thrust in the forward mode. There are two peaks in the airspeed curve between 200 s and 260 s: the aircraft needs to turn left and reduce altitude significantly when passing waypoints 11 and 12, so the airspeed and altitude curves fluctuate. Around 284 s, the reconversion phase is completed; the aircraft then begins to decelerate by increasing the pitch angle in the hover mode, and the increase in altitude around 285 s is caused by the aerodynamics. After the airspeed decreases to 0 m/s, the aircraft flies to the last waypoint at a velocity of less than 5 m/s. The roll, pitch, and yaw angles of the aircraft in HIL simulation are shown in Figure 16. The aircraft achieves good attitude control performance with the hierarchical controller. The desired pitch angle between 284 s and 289 s is about 0.52 rad, the maximum desired pitch angle in the hover mode; the aircraft is therefore capable of decelerating with a positive pitch angle. The control input signals of the two front tilting servos are shown in Figure 17. Pulse width modulation (PWM) signals are used to drive the actuators, with a range of 1000-2000 (duty cycle 0%-100%). The differential control of the tilting angles for yaw motion in the hover and transition modes is demonstrated clearly. Note that the two rotors tilt forward simultaneously in the conversion phase, whereas they tilt backward to the vertical position directly in the reconversion phase. In the hover mode, there is a constant differential angle to counteract the torque generated by the rear rotor. The HIL simulation results illustrate that the proposed control scheme is effective for the control of the tilt trirotor UAV. The attitude and altitude of the aircraft in the transition mode are two key indicators of the control performance, and Figure 16 shows that the attitude tracks the reference angles accurately in the transition mode.
Moreover, the altitude fluctuation after the reconversion phase is limited to 10 m. The proposed HIL simulation system thus contributes to testing and validating both the autopilot and the control scheme.
Flight Experiment Results.
Based on the autopilot and control scheme that passed validation in HIL simulation, a flight experiment is conducted to further demonstrate the effectiveness of the HIL simulation system and the control scheme. The autopilot and the ground control station used in HIL simulation are directly applied to the flight experiment. The parameters for the mode transition in the flight experiment are the same as in HIL simulation (Table 2), and only some gains of the hierarchical controller are adjusted to improve the control performance. To assess the realism of the HIL simulation with X-Plane, a flight mission similar to the one used in HIL simulation is designed for the flight experiment.
Considering the importance of the mode transition in the whole flight, the mode transition process of the tilt trirotor UAV in the air is recorded by a camera fixed on the aircraft. Comparing Figures 18 and 13, we can conclude that the mode transition strategy is carried out effectively in both the HIL simulation and the flight experiment. The trends of airspeed and altitude are similar to the curves depicted in Figure 15. However, two main differences can be pointed out. On the one hand, due to the increase in airspeed, the altitude increases significantly around 40 s. On the other hand, the airspeed is generally greater than 5 m/s during the hover mode. It is noteworthy that the airspeed meter used in the experiment has errors at low airspeeds, and its readings are also influenced by gusts during the flight experiment.
The attitude of the aircraft in the flight experiment is shown in Figure 21. It should be mentioned that the pitch angle is not tracked well during the conversion phase; however, the pitch changes relatively smoothly. For the decelerating flight between 284.5 s and 288.5 s, the desired pitch angle provided by the outer-loop position controller is 0.2 rad. The control input signals of the tilting servos are shown in Figure 22; note that the curves match the results depicted in Figure 17.
As shown in the results of the flight experiment, the predefined flight mission can be carried out well using the proposed mode transition strategy, hierarchical controller, and control allocation. Moreover, the results of the flight experiment are remarkably similar to those of the HIL simulation. The flight experiment not only demonstrates the effectiveness of the control scheme but also verifies the reliability of the developed HIL simulation system. The control scheme and autopilot that passed validation in the HIL simulation can be applied directly to the flight experiment; the only extra work required is adjusting the control gains of the controller to the physical platform.
Conclusions
This paper presents the design and implementation of an HIL simulation system for a tilt trirotor UAV that combines the advantages of helicopter and fixed-wing aircraft. Several parameter identification experiments are completed on the developed prototype to obtain the parameters of the aircraft. A control scheme consisting of the mode transition strategy, hierarchical controller, and control allocation is proposed for the control of the tilt trirotor UAV. A full-scale 3D flight model that can simulate actual flight dynamics is then built from the acquired parameters. To improve efficiency and reduce the risk of flight experiments, the HIL simulation system, including the autopilot, ground control station, and X-Plane, is developed. The HIL simulation and flight experiment results are presented and compared to demonstrate the performance of the control scheme. The HIL simulation reproduces actual flight with a high degree of accuracy. With the developed HIL simulation system, defects and problems can be found and corrected before the flight experiment, so that the workload and risk of flight testing are reduced. This work provides an alternative verification method for the development of the tilt trirotor UAV and can serve as a guide for research on novel aircraft with special configurations. In future work, we will focus on improving the control system based on HIL simulation.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
"Engineering"
] |
The Russian-Ukraine Conflict, Crude Oil Price Fluctuation, and Dynamic Changes in China’s and American Manufacturing
This study examines how responsive the manufacturing indices of China and the United States are to changes in the price of WTI crude oil over the three months prior to and following the outbreak of the Russia-Ukraine conflict, paying particular attention to changes in the manufacturing indices brought on by volatile oil prices. To assess the static and dynamic effects of changes in crude oil prices on the manufacturing indices in China and the US, this study uses a time series model. The distinguishing feature is the use of a VAR(p) model to explicitly link WTI crude oil price volatility with the Russia-Ukraine conflict. The empirical results demonstrate a strong correlation between geopolitical shocks and the magnitude of the oscillations in the Chinese and US manufacturing indices. Both countries' manufacturing indices are highly susceptible to changes in WTI crude oil prices.
Introduction
As an input to industrial production, oil affects industrial output through its price fluctuations. Over the past six months, a number of factors have come together to push the price of crude oil to almost record levels, and high energy prices increase the cost of manufacturing and transportation. The rise of international crude oil prices raises industrial production costs, which drives up costs along the whole industry chain and thus contributes to macroeconomic pressures such as inflation. At the time of writing, the war between Russia and Ukraine was the biggest factor pushing crude oil prices higher.
The potential impact of the war between Russia and Ukraine on the manufacturing industry operates through two channels: the direct destruction of manufacturing capacity by the war, and the disruption of trade and production following sanctions. The war strongly disrupted the production and trade of international bulk commodities, including grains, crude oil, and fertilizers. Wheat costs have increased by about 112 percent in the past 12 months, while corn, soybean, and vegetable oil costs have also increased by about 80 percent [1]. Because Russia and Ukraine are major exporters of these bulk commodities, and because the war has halted normal trade between the two nations while governments have imposed import and export restrictions, the prices of bulk commodities and crude oil will rise in the short term, adding more uncertainty and concern to the current situation; in the long term, with the likelihood that the war will end, those prices could shift downward again.
Furthermore, it is important to consider the long-term effects of changes in the price of oil on the industrial sector. More people are beginning to understand the impact that potential changes in the oil market will have on industry. According to Takuji Fueki, long-term aggregate demand shocks and expected future oil supply shocks have a substantial impact in comparison with realized supply and demand shocks; future shocks to supply and demand account for about 23% of the fluctuation in crude oil prices over a 12-month period [2]. However, only a few studies investigate this subject from the perspective of rising crude oil prices, and currently available research mostly concentrates on the macro and industry levels when investigating the influence of crude oil price variations on the manufacturing index.
This paper aims to fill the gap in the literature on the relationship between oil price fluctuations and the manufacturing industries of China and the United States by examining the effects of the global crude oil price fluctuation caused by the Russia-Ukraine conflict on the manufacturing indices of the two countries.
Although oil is a key strategic resource and the price elasticity of demand is modest, governments frequently view it as a significant political weapon. As a result, numerous geopolitical events or tensions invariably upset the oil market, and extreme geopolitical events frequently result in investor panic and abnormal volatility in the oil market [3]. The VAR model is used in this study to examine how changes in crude oil prices affect the returns of the Chinese and American manufacturing industries, and the GARCH model is used to examine how these changes affect volatility. Both manufacturing industries are examined, along with the macroeconomic effects of these changes over the short and long term. To study the impacts of crude oil price fluctuations on the manufacturing industries of China and the United States, the author analyzes present oil price shocks and forecasts future oil price shocks to the manufacturing industry index. The mechanisms by which oil price volatility affects industrial development, and the effects of these shocks on industrial output in the United States and China, are then revealed; this analysis builds on research suggesting that expectations of oil supply and demand play a key role in oil price volatility. The analysis of changes in global crude oil prices and manufacturing performance in China and the United States not only fills a research gap in this area but also provides a solid micro-empirical basis for the development of current and future macroeconomic policy. It is also useful for advising industrial companies on how to effectively hedge the external risk caused by changes in global crude oil prices. The proposed model is closely related to the research on VAR model analysis of oil prices, which is discussed in the literature review that follows.
There has been much research on how oil price shocks affect industry. According to Cheng Dong's research, growth in global oil price volatility would severely hamper the ability of China's whole manufacturing sector to produce goods efficiently [4]. According to Takuji Fueki's research, each type of shock has a distinct impact on global production [5]. Jinyu Chen and Xuehong Zhu's research indicates that China's industrial PPI suffers when oil prices increase due to supply shocks, but that both the industrial PPI and oil prices increase under overall market volatility and similarly behaving fuel demand shocks [6]. From a macroeconomic view, variations in crude oil prices affect the growth of China's manufacturing industry as a whole. Milani made the point that an increase in the price of crude oil in the global market would affect both the supply of and demand for the commodity, resulting in inflation; the more a country's economy depends on oil, the more significant the inflation, and the greater the impact of oil price changes on the industrial sector [7].
Much research in this field has examined oil price volatility. More precisely, according to John Chatziantoniou, Michail Filippidis, George Filis, and David Gabauer, increased realized oil price volatility, particularly in the short term, can be attributed to changes in oil supply and demand, oil inventories, and uncertainty in the financial markets [8]. Bourghelle, Jawadi, and Rozin claim that the conflict between the major oil-producing nations resulted in a demand shock that decreased global demand for crude oil, increased volatility, and led to severe economic depression in the majority of industrialized and developing nations [9].
Most academics also ignore the link between regional wars and oil price changes, notably the link between oil price changes and geopolitical conflicts in a time series context. In addition, earlier research on the correlation between manufacturing indices and oil prices tended to ignore the medium- and long-term effects of oil price variations in favor of concentrating on the immediate effects on the manufacturing index. Thus, this paper analyzes the dynamic relationship between oil price volatility caused by geopolitical conflicts and the manufacturing index within a time series framework.
The remainder of this paper is divided into four parts. The methodology, including the VAR model and the ARMA-GARCH model, is briefly introduced in Section 2; Section 3 analyzes the primary empirical findings; Section 4 discusses the findings of this study; and Section 5 concludes.
Data source
This study intends to shed light on how the fluctuation in oil prices affected the manufacturing index during the Russia-Ukraine conflict. Thus, the author uses daily data over the period November 2021 to May 2022 (starting three months before the Russia-Ukraine war).
This data sample is used to study the fluctuation in oil prices from November 2021 through the ongoing conflict between Russia and Ukraine. Daily closing prices are used to record important details and track the development of oil prices. For the oil data, the author uses the West Texas Intermediate (WTI) price as the benchmark for oil prices.
Unit Root Test
The ARMA model is built using the unit-root test as a foundation. The unit root test determines whether a sequence contains a unit root; the presence of a unit root indicates a non-stationary time series. If a unit root exists in the sequence, regression analysis will produce spurious regression, indicating that the process is unstable. Therefore, the unit root test is required to guarantee the sequence's stationarity.
The decision rule in this paper uses the ADF test, which gives a more accurate result. The augmented Dickey-Fuller (ADF) statistic for the test takes a negative value: the more negative it is, the stronger the rejection of the hypothesis that there is a unit root at a particular level of confidence [10,11]. In the test, the null hypothesis is $H_0: \rho = 1$ and the alternative hypothesis is $H_1: \rho < 1$. The unit root test in this paper defines the null hypothesis $H_0$ as non-stationarity and compares the probability value from the ADF test to decide whether the null hypothesis is rejected, thereby testing the sequence for stationarity.
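As a concrete illustration, the ADF test described above can be run with the statsmodels package. The sketch below is a minimal example; the series name in the usage comment is illustrative rather than taken from the study's actual dataset.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def adf_report(series: pd.Series, name: str) -> None:
    """Augmented Dickey-Fuller test: H0 = the series has a unit root
    (non-stationary); reject H0 when the p-value is below the chosen level."""
    stat, pvalue, usedlag, nobs, crit, _ = adfuller(series.dropna(), autolag="AIC")
    print(f"{name}: ADF statistic = {stat:.3f}, p-value = {pvalue:.4f}, lags = {usedlag}")
    for level, cv in crit.items():
        print(f"  critical value ({level}): {cv:.3f}")

# Example usage (column name illustrative):
# adf_report(wti["close"].pct_change(), "WTI daily returns")
```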
VAR Model
The Vector Autoregressive (VAR) model, which allows a variable to depend on both its own lagged values and those of other explanatory variables, is used to analyze the interdependence between time series.
The author uses the following three-dimensional VAR(p) model for the dynamics of oil price volatility:
$$y_t = c + \sum_{i=1}^{p} \Phi_i\, y_{t-i} + \varepsilon_t,$$
where $y_t$ is the $3 \times 1$ vector containing the WTI crude oil price series and the Chinese and US manufacturing index series, $c$ is a constant vector, $\Phi_i$ are $3 \times 3$ coefficient matrices, and $\varepsilon_t$ is the disturbance vector. To estimate the model parameters accurately, the variables in a VAR model must be stationary.
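A minimal sketch of this estimation with statsmodels is shown below; the DataFrame layout and column names are assumptions for illustration, not the study's actual data handling.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# `data`: DataFrame of stationary series, e.g. daily returns of the WTI price
# and the Chinese and US manufacturing indices (column names illustrative):
# data = pd.DataFrame({"wti": ..., "mfg_cn": ..., "mfg_us": ...}).dropna()

model = VAR(data)
print(model.select_order(maxlags=15).summary())  # AIC/BIC/FPE/HQIC per lag
results = model.fit(11)  # lag order 11, as selected by the LR criterion below
```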
ARMA-GARCH model specification
The author extends the method by including two extra variables in the ARMA-GARCH model, which makes it possible to identify shocks in the price of international crude oil and forecast the future volatility of the manufacturing index.
The expression of the ARMA(m, n) model is as follows:
$$r_t = \phi_0 + \sum_{i=1}^{m} \phi_i\, r_{t-i} + a_t - \sum_{j=1}^{n} \theta_j\, a_{t-j},$$
where $\phi_0$ is the constant term, the orders m and n are non-negative integers, $r_t$ is the daily WTI crude oil price series, $\phi_i$ are the parameters of the autoregressive component of order m, $\theta_j$ are the parameters of the moving average component of order n, and $a_t$ is the error term. The derivation for the MA component is as follows: writing the model in autoregressive form with coefficients $\psi_i$, where for $i \ge 1$ the coefficients satisfy $\psi_i = -\theta_1^{\,i}$, the model is a stationary series only if the absolute value of $\theta_1$ is smaller than 1. Because $|\theta_1| < 1$, $\theta_1^{\,i} \to 0$ as $i \to \infty$, and the contribution of $r_{t-i}$ to $r_t$ decays exponentially as i increases.

Then $r_{t-1}$ can be expressed by lagging the MA(1) form $r_t = c_0 + a_t - \theta_1 a_{t-1}$ (equations (6) and (7)). Multiplying both sides of equation (7) by $\theta_1$ and subtracting equation (6) shows that, in addition to the constant term, $r_t$ is the weighted average of the two perturbation terms $a_t$ and $a_{t-1}$; therefore, the MA model is a white-noise-driven stationary series. It is generally accepted that time-series data frequently exhibit autocorrelation, while cross-sectional data are more likely to exhibit heteroscedasticity; a time series whose conditional variance depends on its own past disturbances exhibits autoregressive conditional heteroskedasticity, denoted ARCH.

For the general form of the linear regression model
$$y_t = x_t'\beta + \varepsilon_t,$$
the conditional variance of the perturbation term is $\sigma_t^2 \equiv \operatorname{Var}(\varepsilon_t \mid \varepsilon_{t-1}, \ldots)$, where the subscript t of $\sigma_t^2$ indicates that the conditional variance can change over time. Inspired by the phenomenon of volatility clustering, the ARCH(1) hypothesis is that $\sigma_t^2$ depends on the square of the disturbance term in the prior period:
$$\sigma_t^2 = \alpha_0 + \alpha_1\, \varepsilon_{t-1}^2.$$
To estimate the ARCH model by maximum likelihood (MLE), the original equation ($y_t = x_t'\beta + \varepsilon_t$) and the conditional variance equation ($\sigma_t^2 = \alpha_0 + \alpha_1 \varepsilon_{t-1}^2$) are estimated at the same time. In comparison to the ARCH model, the GARCH model is more parsimonious because it uses fewer parameters [12-14]. The GARCH model is divided into two parts: the mean equation ($y_t = x_t'\beta + \varepsilon_t$) and the variance equation. The GARCH(p, q) model has the following general form:
$$\sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i\, \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j\, \sigma_{t-j}^2,$$
and the most commonly used GARCH model is the GARCH(1, 1):
$$\sigma_t^2 = \alpha_0 + \alpha_1\, \varepsilon_{t-1}^2 + \beta_1\, \sigma_{t-1}^2.$$
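A minimal sketch of fitting such a model with the Python arch package follows. One simplification should be noted: arch supports autoregressive terms in the mean equation, so the MA part of the ARMA mean is omitted here; the series name and the percent scaling are likewise illustrative assumptions.

```python
from arch import arch_model

# GARCH(1,1) with an AR(1) mean for daily WTI returns, scaled to percent
# for better optimiser behaviour (series name illustrative).
returns = 100 * data["wti"].dropna()
am = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.summary())  # mean and variance equation estimates
```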
VAR order identification
"LL" means log-likelihood. "LR" means likelihood-ratio test (likelihood ratio tested for the joint significance of the last order coefficients). Where if use "LR" as the standard, the VAR level should be ordered 11. According to the estimation in Table 1, it can be determined that the order of VAR is 11. And the stability of order 11 is confirmed in the stability test in Figure 2.
Define three $Kp \times 1$ column vectors as follows:
$$Y_t = \begin{pmatrix} y_t \\ y_{t-1} \\ \vdots \\ y_{t-p+1} \end{pmatrix}, \quad C = \begin{pmatrix} c \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad E_t = \begin{pmatrix} \varepsilon_t \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$
Define the $Kp \times Kp$ companion matrix as follows:
$$\Gamma = \begin{pmatrix} \Phi_1 & \Phi_2 & \cdots & \Phi_{p-1} & \Phi_p \\ I & 0 & \cdots & 0 & 0 \\ 0 & I & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & I & 0 \end{pmatrix}.$$
Equation (13) can then be written in VAR(1) form as
$$Y_t = C + \Gamma\, Y_{t-1} + E_t.$$
Thus, the stationarity of the VAR(p) model requires that all eigenvalues of its companion matrix $\Gamma$ fall within the unit circle.
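The stationarity check described above translates directly into code. The helper below builds $\Gamma$ from the estimated coefficient matrices; it is a sketch that assumes the statsmodels coefficient layout.

```python
import numpy as np

def companion_matrix(coefs: np.ndarray) -> np.ndarray:
    """Build the (Kp x Kp) companion matrix Gamma from VAR(p) coefficient
    matrices `coefs` of shape (p, K, K), as in the VAR(1) form above."""
    p, K, _ = coefs.shape
    top = np.hstack(list(coefs))          # [Phi_1  Phi_2 ... Phi_p]
    bottom = np.eye(K * (p - 1), K * p)   # identity blocks shifting the lags
    return np.vstack([top, bottom])

# Stability check: all eigenvalues strictly inside the unit circle.
# With statsmodels, `results.coefs` already has shape (p, K, K):
# gamma = companion_matrix(results.coefs)
# stable = np.all(np.abs(np.linalg.eigvals(gamma)) < 1)
```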
Impulse response
The impulse response describes how a one-unit increase in the disturbance term of the j-th variable in period t (while other variables and the disturbance terms of other periods are unchanged) influences the value of the i-th variable in period t + s, that is, $\partial y_{i,t+s} / \partial \varepsilon_{j,t}$. Considering $\partial y_{i,t+s} / \partial \varepsilon_{j,t}$ as a function of the time interval s defines the impulse response function (IRF). In the figures, the green line is the point estimate of the impulse response, and the gray lines are the 95% confidence interval of the point estimate. Because this paper uses daily data, the horizontal axis represents days.
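With the fitted statsmodels VAR, the impulse responses and their confidence bands can be obtained as follows (a sketch reusing the `results` object from the earlier snippet):

```python
# Impulse responses of the fitted VAR(11) over s = 0..19 days.
irf = results.irf(periods=20)
irf.plot(orth=False)  # point estimates with (by default 95%) error bands

# irf.irfs[s, i, j] is the response of variable i at horizon s
# to a one-unit shock in the disturbance of variable j at time t.
```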
From the graph, one can intuitively see that during the first ten days of the Russia-Ukraine conflict, the manufacturing indices of China and the United States fluctuated with the sharp fluctuations of international crude oil prices; as time passed, however, the fluctuations brought by crude oil prices stabilized and the manufacturing index developed smoothly. The partial autocorrelation function (PACF) is used to analyze the correlation structure of the time series data; this function is crucial when determining the lag order in autoregressive models.
ARMA identification
The AR(p) model or the extended ARIMA(p, d, q) model can be identified by plotting the partial autocorrelation function [15]. DJI: a large spike at lag 33, otherwise a damped wave with intermittent positive and negative correlations.
Manufacturing, CN: a large spike at lag 36, followed by a damped wave with alternating positive and negative correlations.
ARMA-GARCH estimation results
The estimation results of the variance equation show that the increase in the international crude oil price did not cause intraday fluctuations in the Dow Jones index or the China manufacturing index.
Discussion
This research extends the analysis of the macroeconomic and manufacturing impacts of international crude oil price changes on the manufacturing sectors of China and America to the micro-manufacturing level, and adds regional political and military conflicts as an influencing factor, partially making up for the lack of microanalysis in earlier studies. In this work, a time series model is also employed to describe the static and dynamic effects of changes in crude oil prices on the manufacturing indices of China and America.
Given that the Russia-Ukraine conflict and other unpredictable and volatile events continue to affect crude oil markets, policymakers and authorities should assess and put in place appropriate limits to prevent rapid and disproportionate changes in oil prices brought on by geopolitical worries. The fluctuation of oil prices caused by the Russia-Ukraine conflict will also weaken overseas market demand, reduce its driving effect on the economic growth of China and the United States, and directly increase supply-side risk for the manufacturing industry. The continued rise in international prices of crude oil, agricultural products, non-ferrous metals, and other commodities will increase cost pressure on the manufacturing industry and trigger the risk of an overall decline in the sector [16].
The conflict between Russia and Ukraine severely dampened investor confidence and greatly reduced the investment willingness of multinational companies, which will be a severe challenge for China's manufacturing industry. International capital's demand for safe havens, which accelerates the return of emerging-market funds to the U.S. market, will bring a respite for the U.S. manufacturing industry. On the basis of this analysis, future studies can investigate how additional risks or unknown factors relate to the world's crude oil markets.
Conclusion
Concerns regarding the relationship between the manufacturing sector and crude oil markets have grown as a result of the conflict between Russia and Ukraine, which has exacerbated financial and geopolitical uncertainty in the energy markets. By utilizing a VAR model based on daily data, this study contributes to the existing body of research on the relationship between the manufacturing industry index and the WTI crude oil price.
According to the analysis, there is a strong linear relationship between the manufacturing industry index and oil prices. Crude oil is an essential raw material for manufacturing businesses, the oil market is sensitive to changes brought on by outside shocks, and the crude oil price index is closely related to the manufacturing index. Thus, when geopolitical risk rises, investors should take corresponding action to lessen the losses that crude oil volatility brings and to moderate the dramatic swings in the global oil market.
"Economics",
"Political Science"
] |
Model Compensation Approach Based on Nonuniform Spectral Compression Features for Noisy Speech Recognition
This paper presents a novel model compensation (MC) method for the features of mel-frequency cepstral coefficients (MFCCs) with signal-to-noise-ratio- (SNR-) dependent nonuniform spectral compression (SNSC). Though these new MFCCs derived from an SNSC scheme have been shown to be robust features under the matched case, they suffer from serious mismatch when the reference models are trained at different SNRs and in different environments. To address this drawback, a compressed mismatch function is defined for the static observations with nonuniform spectral compression. The means and variances of the static features with spectral compression are derived according to this mismatch function. Experimental results show that the proposed method provides recognition accuracy better than conventional MC methods using uncompressed features, especially at very low SNR under different noises. Moreover, the new compensation method has a computational complexity only slightly above that of conventional MC methods.
INTRODUCTION
The problem of achieving robust speech recognition in noisy environments has aroused much interest in the past decades. However, drastic degradation of performance may still occur when a recognizer operates under noisy circumstances. Solutions to this problem can generally be divided into three categories: inherently robust feature representation [1], speech enhancement schemes [2], and model-based compensation [3-6]. More details are reviewed in [7]. Recently, different speech analyses based on psychoacoustics have been reported in the literature [8]. The well-known perceptual linear prediction (PLP) [9] uses critical band filtering followed by equal-loudness pre-emphasis to simulate, respectively, the frequency resolution and frequency sensitivity of the auditory system. Cubic-root spectral magnitude compression with a fixed compression root is subsequently used to approximate the intensity-to-loudness conversion. However, it is suboptimal to use a constant root for compressing all the filter bank outputs, because a constant compression root would over-compress some outputs and under-compress others at the same time.
A new kind of noise-resistant feature employing an SNR-dependent nonuniform spectral compression scheme was presented in [1], which compresses the corrupted speech spectrum by an SNR-dependent root value. It was shown in [1] that the SNSC-derived mel-frequency cepstral coefficient (SNSC-MFCC) features provide recognition accuracy better than conventional MFCC features and cubic-root compressed features. In an SNSC scheme, the compressed speech spectrum in the linear-spectral domain is expressed as
$$\tilde{Y}_k = \left(Y_k\right)^{\alpha_k}, \qquad (1)$$
where $Y_k$ is the kth mel-scale filter bank output of a corrupted speech segment and $\alpha_k$ is the compression root for the kth filter band, which is SNR-dependent. However, since $\alpha_k$ is SNR-dependent, estimation of the noise is required in the training session to find $\alpha_k$ under a particular noise type and global SNR. Models estimated by training in this way should therefore only be used for recognition tasks under the same global SNR and noise environment. To avoid re-estimating the models when adopting an SNSC scheme, we need to compensate the models for the mismatch caused by the compression root. This paper presents a scheme to compensate recognition models trained with clean and uncompressed training data for SNSC-MFCC features in various noisy environments. In this scheme, we start by using conventional MC methods, such as the PMC method [3,4] or the VTS approach [6], to produce compensated models for features with no compression. The means and variances of the compressed mismatch function are derived in the paper. With the use of Gauss-Hermite numerical integrals [10], a model compensation procedure is developed. Most importantly, the new compensation scheme is applicable to any conventional model compensation method. The experimental results of the paper show that the new compensated models provide very good accuracy in recognizing SNSC-MFCC features at different SNRs in different noisy environments, and the computational complexity of the proposed MC-SNSC method is comparable with that of conventional MC methods. We call our new scheme the model compensation approach based on SNR-dependent nonuniform spectral compression (MC-SNSC).
The structure of this paper is as follows. The SNSC method is briefly reviewed in Section 2. In Section 3, we introduce the MC-SNSC approach. A series of experimental results, along with discussion and analysis, is presented in Section 4. Our conclusions are given in the final section.
SNR-DEPENDENT NONUNIFORM SPECTRAL COMPRESSION
The functional diagram of the generation of SNSC-MFCC features is depicted in Figure 1: the windowed noisy speech signal y(n) passes through the DFT and the mel-scaled band-pass filter, the filter bank outputs are compressed using the filter-bank output energies of the noise estimate, and the log followed by the DCT yields the SNSC-derived MFCC (static features). The testing utterance is segmented into frames using a Hamming window. The frequency spectra of the speech segments are computed via the discrete Fourier transform (DFT). Their squared magnitude spectra are passed to the mel-scaled filter bank. After the mel-scaled bandpass filtering, the spectral compression of (1) is applied to the outputs. Taking the log of the compressed outputs and then the discrete cosine transform, we obtain the SNSC-MFCC features. Motivated by the spectrally partial masking effect, the compression function $\alpha_k$ is defined in (2), where $A_0$ is the floor compression root, $\beta$ is the cutoff parameter functioning as the just-audible threshold, $\gamma$ is the parameter controlling the steepness of the compression function, and $u(\cdot)$ is the unit step function. For band SNR less than the cutoff, (2) yields the floor compression value. The compression function produces small $\alpha_k$ with a steep rate of change for small band SNR above the cutoff, and large $\alpha_k$ asymptotically close to one with a gradual rate of change for large band SNR. This SNSC scheme makes the filter bank outputs of low SNR contribute less to the resulting speech features, while the outputs of high SNR are strongly emphasized.

The mismatch function of the kth mel-filter bank output, which is modeled as the sum of the noise energy $N_k$ and the clean speech energy $X_k$ in the linear-spectral domain, is expressed as
$$Y_k = X_k + N_k. \qquad (3)$$
Defining the clean speech and noise segments in the log-spectral domain as $X^{(l)}_k$ and $N^{(l)}_k$, respectively, the mismatch function in the log-spectral domain is expressed as
$$Y^{(l)}_k = \log\!\left(e^{X^{(l)}_k} + e^{N^{(l)}_k}\right). \qquad (4)$$
Thus the compressed mismatch function for the SNSC in the log-spectral domain is expressed as
$$\tilde{Y}^{(l)}_k = \alpha_k\, Y^{(l)}_k = \alpha_k \log\!\left(e^{X^{(l)}_k} + e^{N^{(l)}_k}\right). \qquad (5)$$
In this paper, we make the following assumptions in order to facilitate the derivations of the MC procedures. (1) The recognition model is a standard HMM with mixture Gaussian output probability distributions; the transition probabilities and mixture component weights of the models are assumed to be unaffected by the additive noise. (2) The background noise is additive, stationary, and independent of the speech.
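To make the compression step concrete, the sketch below applies equation (1) with an assumed saturating-exponential form for the compression function. The exact published form of equation (2) in [1] is not reproduced in the text above, so the functional form here is only a stand-in matching its verbal description.

```python
import numpy as np

def compression_roots(band_snr_db, A0=0.75, beta=-0.4, gamma=1.0):
    """Illustrative SNR-dependent compression roots alpha_k. Assumed form:
    alpha = A0 at/below the cutoff beta, rising steeply just above it and
    approaching 1 asymptotically for high band SNR, per the description."""
    s = np.asarray(band_snr_db, dtype=float)
    alpha = A0 + (1.0 - A0) * (1.0 - np.exp(-gamma * (s - beta)))
    return np.where(s > beta, alpha, A0)  # unit step u(SNR - beta)

def compress_filterbank(Y, band_snr_db, **kw):
    """Apply equation (1): each mel filter-bank output Y_k is raised to its
    SNR-dependent root alpha_k before the log and DCT stages."""
    return np.power(Y, compression_roots(band_snr_db, **kw))
```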
The notation for the variables in the paper is as follows: the superscript (l) denotes the log-spectral domain. Figure 2 shows the functional diagram of the recognition system using model compensation for SNSC-MFCC features.
MODEL COMPENSATION APPROACH BASED ON THE SNSC SCHEME
In the training phase, clean speech HMMs are trained from standard MFCC features to which no compression is applied (equivalently, the compression root equals one). During feature extraction in the testing phase, the SNSC scheme described in (1) is used to compress each filter bank output. The clean HMMs are combined with the noise model to construct the corrupted speech models used to recognize the SNSC-MFCC features with the MC-SNSC approach.
There are no closed-form solutions for the moments of the mismatch functions in (5) and (6). The expectations are multidimensional integrals, which would require computationally expensive numerical integration to calculate the model parameters. Using assumption (2) and the additional assumption that the two random variables $Y^{(l)}_k$ and $N^{(l)}_k$ are uncorrelated, we can reduce the dimensionality of the integration. Using the Gauss-Hermite numerical integration method, we derive the procedures for computing the means and variances of the static features in the log-spectral domain in the next subsections.
Mean compensation
Using the compressed mismatch function described in (5), the mean of the static SNSC-MFCC feature in the log-spectral domain is given by the expectation
$$\tilde{\mu}^{(l)}_{Y_k} = E\!\left[\alpha_k\, Y^{(l)}_k\right].$$
For the sake of simplifying the expression, we define an auxiliary function $g(\cdot)$ whose expectation yields the mean parameters of the static corrupted and compressed features. Using the Gauss-Hermite integral, the expectation of a function $g$ of the Gaussian variable $Y^{(l)}_k$ with mean $\mu^{(l)}_{Y_k}$ and variance $\Sigma^{(l)}_{Y_{kk}}$ is calculated as
$$E\!\left[g\!\left(Y^{(l)}_k\right)\right] \approx \frac{1}{\sqrt{\pi}} \sum_{i=1}^{n} \omega_i\, g\!\left(\mu^{(l)}_{Y_k} + \sqrt{2\,\Sigma^{(l)}_{Y_{kk}}}\; t_i\right),$$
and $\mathrm{erf}(\cdot)$ is the error function. The parameters $t_i$ and $\omega_i$ for $i = 1$ to $n$ are, respectively, the abscissas and the weights of the nth-order Hermite polynomial $H_n(t)$ [10].
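The Gauss-Hermite quadrature rule used here has a compact implementation. The sketch below evaluates E[f(X)] for a Gaussian X and can be applied to the compressed mismatch function, with `f` standing in for the paper's g(.).

```python
import numpy as np

def gauss_hermite_expectation(f, mu, sigma2, order=4):
    """E[f(X)] for X ~ N(mu, sigma2) via nth-order Gauss-Hermite quadrature:
    E[f(X)] ~= (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2*sigma2) * t_i)."""
    t, w = np.polynomial.hermite.hermgauss(order)
    return (w * f(mu + np.sqrt(2.0 * sigma2) * t)).sum() / np.sqrt(np.pi)

# Example: first and second moments of a compressed feature alpha_k * Y,
# with alpha_k held fixed for illustration:
# m1 = gauss_hermite_expectation(lambda y: alpha_k * y, mu, var)
# m2 = gauss_hermite_expectation(lambda y: (alpha_k * y) ** 2, mu, var)
```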
Variance compensation
The diagonal elements of the covariance matrix of the SNSC-MFCC static features are given by the second-moment relation
$$\tilde{\Sigma}^{(l)}_{Y_{kk}} = E\!\left[\left(\tilde{Y}^{(l)}_k\right)^2\right] - \left(\tilde{\mu}^{(l)}_{Y_k}\right)^2,$$
where the expectation is again evaluated with the Gauss-Hermite integral. The computation of the off-diagonal elements of the covariance matrix of the static models involves two-dimensional Gauss-Hermite numerical integrals. To reduce the computational complexity, the off-diagonal elements are approximated by scaling the uncompressed off-diagonal elements,
$$\tilde{\Sigma}^{(l)}_{Y_{lk}} \approx \lambda_{lk}\, \Sigma^{(l)}_{Y_{lk}},$$
where $\lambda_{lk}$ is a scaling factor defined so as to ensure that the off-diagonal elements remain smaller than the corresponding diagonal elements.
Corrupted models of noncompressed features
The above MC-SNSC procedures need the compensated static models of the noncompressed corrupted speech in the log-spectral domain, $\{\mu^{(l)}_{Y_k}, \Sigma^{(l)}_{Y_{kl}}\}$. They can be obtained from any conventional model-based compensation method, such as the PMC method [3,4] or VTS (vector Taylor series) [6].

In the log-normal PMC method, the kth elements of the mean vectors and the (k, l)th elements of the covariance matrices of the clean speech models in the linear-spectral domain are related to the log-spectral domain as
$$\mu_{X_k} = e^{\mu^{(l)}_{X_k} + \Sigma^{(l)}_{X_{kk}}/2}, \qquad \Sigma_{X_{kl}} = \mu_{X_k}\,\mu_{X_l}\left(e^{\Sigma^{(l)}_{X_{kl}}} - 1\right).$$
In the linear-spectral domain, the noise is assumed to be additive and independent of the speech, so the corrupted speech model parameters in this domain are obtained by combining the clean speech models and the noise model:
$$\mu_{Y_k} = \mu_{X_k} + \mu_{N_k}, \qquad \Sigma_{Y_{kl}} = \Sigma_{X_{kl}} + \Sigma_{N_{kl}}.$$
For the log-add PMC, the mean compensation is described as
$$\mu^{(l)}_{Y_k} = \log\!\left(e^{\mu^{(l)}_{X_k}} + e^{\mu^{(l)}_{N_k}}\right).$$
This method compensates only the mean and not the variance, and thus has low computational complexity; however, its performance becomes unsatisfactory at low SNR. This scheme can be viewed as the zeroth-order VTS (denoted VTS-0). The VTS method approximates the mismatch function by a finite-length Taylor series, and the expectation of this Taylor series is taken to find the corrupted speech model parameters. A higher-order Taylor series yields a better solution, but its computational complexity is very high; thus VTS-0 and the first-order VTS (VTS-1) [6] are commonly employed. Using the VTS-1 method, the compensation of the mean is the same as for the log-add PMC, and the covariance is compensated as
$$\Sigma^{(l)}_{Y} = M\,\Sigma^{(l)}_{X}\,M^{T} + (I - M)\,\Sigma^{(l)}_{N}\,(I - M)^{T},$$
where $M$ is the diagonal matrix whose elements are expressed as
$$M_{kk} = \frac{e^{\mu^{(l)}_{X_k}}}{e^{\mu^{(l)}_{X_k}} + e^{\mu^{(l)}_{N_k}}}.$$
As a brief summary, the MC-SNSC method uses the background noise model and the uncompressed corrupted-speech models to compute the compressed corrupted speech models. The band-SNR-dependent SNSC is employed in this scheme to compress the features so as to emphasize the signal components of high SNR and de-emphasize the highly noisy ones. The compressed corrupted speech models are then used to recognize the SNSC-compressed testing features. (Table notes: for the Gauss-Hermite integral, n = 4 is employed; * denotes the average WRR (%) between -5 and 5 dB.)
EVALUATION
In this section, three noise types from the NOISEX-92 database are used in the evaluation experiments: white, pink, and factory noise. The speech database used for the evaluation of the MC-SNSC techniques is the TI-20 database from TIDIGITS, which contains 20 isolated words: the digits "0" to "9" plus ten extra commands such as "help" and "repeat." The speech database was spoken by 16 speakers (8 males and 8 females); we select 2 and 16 utterances for training and testing, respectively, from each speaker and each word (641 utterances for training and 5081 utterances for testing). The length of the analysis frame (Hamming windowed) is 32 milliseconds, and the frame shift is 9.6 milliseconds. The feature vector is composed of 13 static cepstral coefficients.
A word-based HMM with six states and four mixture Gaussian densities per state is used as the reference model. In the training mode, we train the system with clean speech utterances to produce clean models, and with corrupted speech for the matched case. In the testing, the ten speech recognition methods listed in Table 1 are used for the performance evaluation: two mismatched and two matched cases; three conventional model-based compensation methods, namely the log-normal PMC, the log-add PMC, and the first-order VTS (denoted VTS-1); and these three conventional methods combined with the MC-SNSC method.
For our MC-SNSC approach, an average background noise power spectrum is needed to estimate the background noise model and to estimate the band SNR for calculating the SNSC-derived features in the testing phase. The average noise power spectrum is calculated using 200 nonoverlapping frames of noise data and is scaled according to a specified global SNR. The global SNR for an utterance is defined as
$$\mathrm{SNR}_{\mathrm{global}} = 10\log_{10}\frac{\sum_{m=1}^{O}\sum_{k=1}^{Q} P_m(k)}{g\sum_{m=1}^{O}\sum_{k=1}^{Q} \overline{N}(k)},$$
where $\{P_m(k)\}$ is the clean speech power spectrum of the mth frame, $\{\overline{N}(k)\}$ is the nonscaled average noise power spectrum, O is the total number of frames in the utterance, Q is the FFT size, and g is the scaling factor that scales the ratio according to the specified $\mathrm{SNR}_{\mathrm{global}}$. Thus, the corrupted speech is produced by adding the correspondingly scaled noise to the clean speech,
$$y(i) = x(i) + \sqrt{g}\; n(i),$$
where y(i) is the corrupted speech, and x(i) and n(i) are the clean speech and the nonscaled noise signal, respectively. The recognition results are listed in Table 2. For the MC-SNSC method, the parameters $(A_0, \beta, \gamma)$ are set based on extensive testing experiments. The method obtains good performance when the parameters are set in the region $A_0 \in [0.7, 0.9]$, $\beta \in [-0.6, 0.6]$, and $\gamma \in [1, 2]$. In this work, we fix the parameter set as $A_0 = 0.75$, $\beta = -0.4$, and $\gamma = 1$.
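A minimal sketch of the noise-scaling step is given below; it uses total time-domain power as a stand-in for the paper's frame-based spectral definition of the global SNR, so the scaling factor here is an approximation of the g defined above.

```python
import numpy as np

def scale_noise_to_snr(x, n, snr_db):
    """Return y = x + g*n such that the ratio of total clean-speech power to
    total scaled-noise power matches the specified global SNR (in dB)."""
    px = np.sum(np.asarray(x, dtype=float) ** 2)   # clean speech power
    pn = np.sum(np.asarray(n, dtype=float) ** 2)   # nonscaled noise power
    g = np.sqrt(px / (pn * 10.0 ** (snr_db / 10.0)))  # amplitude scale
    return x + g * n
```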
The results show that all MC methods achieve good performance for the three additive noises at low SNR. For the sake of comparison, we define the average performance gain G_ave of an MC method as the average, over the noises, of the difference in recognition rate (in absolute percentage) between the MC method combined with MC-SNSC and its original counterpart. For the -5 dB case, the G_ave values of MC-SNSC plus the log-add PMC, MC-SNSC plus the log-normal PMC, and MC-SNSC plus VTS-1 are 11%, 10.5%, and 5%, respectively. For the 0 dB case, the G_ave values of the three methods are 9.5%, 7%, and 4.3%, respectively. The experimental results also show that the MC-SNSC scheme enhances the performance of the original method under all the noises for all SNR cases. It is worth noting that at SNRs as low as 0 and -5 dB, MC-SNSC even outperforms the matched case based on MFCC features.
These experimental results reveal that the new MC-SNSC scheme can deal with different types of additive noise and yields remarkable recognition performance, which is attributed to the noise-resistant feature extraction (SNSC scheme) [1] and the pertinent model compensation. Table 3 lists the number of multiplication, division, logarithm, and exponential operations required by each technique to update the parameters of a single mixture density for the static parameters, where N and M are the dimensions of the features in the cepstral domain and the log-spectral domain, respectively. It can be seen that the computational complexity of MC-SNSC combined with the conventional MC methods is comparable to that of the conventional MC methods alone, while MC-SNSC is more effective.
CONCLUSION
A novel model compensation approach for robust SNSC-MFCC features is presented in this paper. A compressed mismatch function is defined for the static observations with nonuniform spectral compression. The model-based compensation method for compressed features has been derived, employing the Gauss-Hermite integral and a conventional MC approach. The experimental outcome demonstrates that the MC-SNSC approach can cope with different kinds of noise automatically with substantially enhanced recognition accuracy, especially at low SNR, in comparison with conventional MC approaches. In addition, the complexity of a conventional MC approach plus the MC-SNSC method is not very high and is comparable with that of the corresponding MC approach alone.
"Computer Science"
] |
Solar Wind Protons Forming Partial Ring Distributions at Comet 67P
We present partial ring distributions of solar wind protons observed by the Rosetta spacecraft at comet 67P/Churyumov-Gerasimenko. The formation of ring distributions is usually associated with high-activity comets, where the spatial scales are larger than multiple ion gyroradii. Our observations are made at a low-activity comet at a heliocentric distance of 2.8 AU on 19 April 2016, and the partial rings occur at a spatial scale comparable to the ion gyroradius. We use a new visualization method to simultaneously show the angular distribution of median energy and differential flux. A fitting procedure extracts the bulk speed of the solar wind protons, separated into components parallel and perpendicular to the gyration plane, as well as the gyration velocity. The results are compared with models and put into the context of the global comet environment. We find that the formation mechanism of these partial rings of solar wind protons is entirely different from the well-known partial rings of cometary pickup ions at high-activity comets. A density enhancement layer of solar wind protons around the comet is a focal point for proton trajectories originating from different regions of the upstream solar wind. If the spacecraft location coincides with this density enhancement layer, the different trajectories are observed as an energy-angle dispersion and manifest as partial rings in velocity space.
Introduction
Comets are a highly diverse group of solar system bodies that are mainly composed of ice and organic material (Filacchione et al., 2019). They are known for their vast tails, which result from the material on their surface sublimating as the comets approach the Sun. Cometary activity can be defined by the amount of volatiles that a comet releases into space. A well-studied high-activity comet is 1P/Halley, which has been the target of several space missions, e.g., ESA's Giotto mission (Reinhard, 1987). The atmosphere of such high-activity comets, especially at perihelion, can extend millions of kilometres from the nucleus. Low-activity comets (Hansen et al., 2016), such as 67P/Churyumov-Gerasimenko (hereafter 67P), have only a tenuous atmosphere that might span no more than a few thousand kilometres. Cometary activity is driven by the strength of the solar radiation and varies strongly over time due to the comet's highly elliptical orbit. The significant change in activity also changes the plasma environment around the comet, with different plasma boundaries forming at certain heliocentric distances (Mandt et al., 2016).
The Rosetta mission has so far been the only mission to orbit a comet. It accompanied comet 67P for two years and observed large variations in its cometary activity as the heliocentric distance changed from about 3.6 AU to 1.24 AU. This provided us with unique measurements of the evolving plasma environment (Glassmeier, Boehnhardt, et al., 2007; Taylor et al., 2017). In the beginning of the mission, the low cometary activity presented no significant obstacle to the solar wind, which was observed from the anti-sunward direction with little to no deflection (Behar et al., 2016). At heliocentric distances between approximately 3 AU and 2.2 AU the cometary activity increases, and with it the flux of cometary water-group ions (Nilsson et al., 2017). This also coincides with observations of a more deflected, but still beam-like, solar wind (Behar et al., 2017).
Closer to perihelion the deflection increases even further, until Rosetta enters a region completely devoid of solar wind protons, the solar wind cavity, at around 1.7 AU (Nilsson et al., 2017). During the outbound leg, observations show that the plasma environment evolves in reverse order. This paper focuses on observations from April 19th, 2016, when comet 67P was at 2.8 AU on its outbound journey. Contrary to the expected beam-like and slightly deflected solar wind, observations show partial ring distributions in the proton data. Ring distributions can be formed by two interacting plasma populations; at a comet these are typically the solar wind ions and the cometary ions. When the cometary activity is low, the solar wind flow is almost undisturbed and newly born cometary ions are picked up by this flow. The cometary ions then form a ring distribution in velocity space if the spatial scales are larger than multiple ion gyroradii (A. Coates, 2004). As the activity increases and the densities of the two particle populations become comparable, the situation is more complex: the two populations then gyrate around a common gyrocentre and both form ring distributions in velocity space (Behar et al., 2018).
Ring distributions of cometary ions have been observed at 1P/Halley. Water-group ions from the comet were picked up by the solar wind, and pitch angle scattering in the solar wind turbulence transformed the initial ring distribution into a shell distribution (A. J. Coates et al., 1989). In the case of comet Halley, the spatial scale of the coma is large enough to allow protons released in the photo-dissociation of cometary water ions to be picked up and form rings as well. Such proton ring distributions were observed (Neugebauer et al., 1989), but these protons were of cometary origin, not solar wind protons. At 67P, a considerable deflection of the solar wind together with an acceleration of the cometary ions along the solar wind electric field is observed at low to moderate activities (Nilsson et al., 2017). This deflection is the beginning of gyration, due to the small spatial scales at comet 67P. Reports on ring distributions are rare, but Williamson, H. N. et al. (2022) present a case (at higher activity) where both cometary ions and solar wind protons form partial rings in velocity space. These observations have been interpreted as indicative of cometosheath formation.

Numerical models serve to set the local in situ measurements of Rosetta at 67P in a global context and help explain observed phenomena. Hybrid models, for example the one presented by Koenders et al. (2015) in the context of 67P, are frequently used to model the interaction between the solar wind and the cometary plasma. There are, of course, limitations. Many models simplify the cometary environment by, for instance, assuming spherically symmetric outgassing. They also require solar wind conditions and cometary activity as input parameters to produce relevant results. Additionally, the spatial resolution of the models is often not high enough to resolve processes occurring close to the nucleus. Nonetheless, hybrid models have been used to aid in understanding unique cometary phenomena, such as the infant bow shock (Gunell et al., 2018). Sometimes very simple models are helpful for interpretation. Behar et al. (2018) developed a 2D semi-analytical model to provide a view of single-particle dynamics at the comet. Among other things, it suggests the existence of a solar-wind-depleted region and a local density enhancement of the solar wind along the boundary layer (titled 'caustic' in the paper). Although this model does not include electric fields, the particle trajectories result in features similar to those seen in hybrid models. Such density enhancements have also been reported, e.g., downstream of the Earth's bow shock (Sckopke et al., 1983). In this paper we will compare our observational results to models in order to explain the occurrence of partial ring distributions of solar wind protons.
Instrument Description
The main data sources for this study are the two ion spectrometers on the Rosetta spacecraft: the Ion Composition Analyser (ICA) and the Ion and Electron Sensor (IES). Both instruments are part of the Rosetta Plasma Consortium (RPC; Carr et al., 2007). IES and ICA are mounted at different locations with different orientations on the spacecraft and provide partially complementary fields-of-view, which we will make use of in this paper. A signal outside one sensor's field-of-view can therefore be picked up by the other, and the overlapping part of the fields-of-view serves as a validation of the observations.
ICA
ICA is a mass-resolving ion spectrometer with a field-of-view of 360° × 90°. The field-of-view is subdivided into 16 equally spaced azimuth and elevation bins, giving an angular resolution of 22.5° in azimuth and approximately 5.6° in elevation (Nilsson et al., 2007). The mass resolution makes it possible to distinguish between H+, He2+, He+, and heavier ions. The energy range of the instrument is between a few eV and 40 keV, logarithmically distributed over 96 energy bins. Each observation consists of 16 consecutive elevation scans, one for each elevation bin. An elevation scan is made at a fixed elevation and sweeps over the entire energy range, while azimuth and mass bins are observed continuously. Such a full scan of all variables takes 192 s, which is the nominal time resolution of the instrument. To improve data compression for the downlink to Earth, a background count reduction was applied on board; this removes both noise and very weak signals. The dataset used here is mass-separated into H+, He2+, and heavy ions.
IES
IES is a combined ion and electron spectrometer with a field-of-view of 360° × 90°, with a high-resolution sector subdivided into 5° × 5° sectors. The angular resolution for electrons is 22.5° × 5° over the entire field-of-view. Both sensors cover the energy range from 1 eV to 22 keV in 124 energy steps and have an energy resolution of 4%. The time resolution can be varied and ranges from 128 s to 1024 s.
To comply with telemetry requirements, the data were binned on board and transmitted at a lower resolution than measured. The available angular resolution of the data used in this study is 45° × 10° for both the ion and the electron sensor. For the energy resolution, two successive measurements were binned together, and the time resolution is 256 s (Burch et al., 2007). IES does not apply a background reduction, and the data appear noisier than ICA data.
Other Instruments
In addition to data from the ion spectrometers, we use data from the magnetometer (MAG) and the Langmuir probes (LAP), which are also part of RPC. MAG measures the magnetic field vector with a sampling frequency of 20 Hz; the range is ±16384 nT with a resolution of 31 pT (Glassmeier, Richter, et al., 2007). The LAP instrument consists of two spherical Langmuir probes placed at the ends of two booms extending 1.6 and 2.2 m from the spacecraft body (A. Eriksson et al., 2007). From LAP we retrieve the electron density. Finally, we estimate the neutral gas cometary production rate using data from the COmet Pressure Sensor (COPS, part of the ROSINA package; Balsiger et al., 2007). COPS consists of two pressure gauges giving the neutral density and dynamic pressure of the gas streaming out from the comet.
Dual Colourmap Plots
Commonly used heatmaps allow for a graphical representation of only one variable (e.g., flux). An example is the energy-time spectrogram (top panel in Figure 2), displaying the differential flux of ions as a function of energy and time, summed over the entire field-of-view. Similarly, one can make a heatmap of the differential flux as a function of the field-of-view, summed over all energies for a certain time interval. To simultaneously study the dependence on both energy and flow direction of the ions, we use a dual colourmap showing both the differential flux and the median energy of the ions as a function of the instruments' field-of-view at the highest possible time resolution (see, e.g., Figure 3).
To combine two quantities into one dual colourmap with intuitive identification of both individual variables, we use the CIECAM02 colour appearance model (Moroney et al., 2002). CIECAM02 computes so-called perceptual attribute correlates from perceived colours and is based on experimental data (Luo & Hunt, 1998). For simplicity, we will refer to the perceptual attributes as hue, brightness, and chroma (often also called saturation). These independent variables create a three-dimensional colour space. The dual colourmap plots are a two-dimensional slice of this colour space at a fixed chroma value.
Our two variables of interest, the median energy and the differential flux, are mapped onto the two axes of this colour slice: different values of the median energy are represented by different hues, while the differential flux determines the brightness of each data point. The colour obtained in CIECAM02 variable space is then converted to an RGB triple using colorspacious, cropping any values that fall outside the minimum/maximum boundaries. A similar approach to fuse two images containing complementary data has been used in medical science (Li et al., 2014).
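A minimal sketch of this mapping using the colorspacious package is given below. The lightness and hue ranges and the fixed chroma value are illustrative choices, not the values used for the figures in this paper.

```python
import numpy as np
from colorspacious import cspace_convert

def dual_colourmap(median_energy, diff_flux, e_range, f_range, chroma=30.0):
    """Map two scalar fields onto one RGB image: median energy -> CIECAM02
    hue h, differential flux -> lightness J, at fixed chroma (a 2-D slice
    of the CIECAM02 colour space)."""
    # Normalise both variables to [0, 1] over their display ranges.
    e = np.clip((median_energy - e_range[0]) / (e_range[1] - e_range[0]), 0, 1)
    f = np.clip((diff_flux - f_range[0]) / (f_range[1] - f_range[0]), 0, 1)
    J = 15.0 + 75.0 * f           # brightness from differential flux (assumed span)
    h = 30.0 + 300.0 * e          # hue angle in degrees from median energy
    C = np.full_like(J, chroma)   # fixed-chroma slice of the colour space
    jch = np.stack([J, C, h], axis=-1)
    rgb = cspace_convert(jch, "JCh", "sRGB1")
    return np.clip(rgb, 0.0, 1.0)  # crop out-of-gamut values
```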
Partial Ring Fits
To characterise the observed partial rings, we fit a circle to the data in velocity space.
For each scan covering the full field-of-view (corresponding to 192 s for ICA and 256 s for IES), we convert the median energy of each azimuth-elevation pixel into a velocity vector with an associated differential flux. Depending on the precise time, there are usually 15 to 25 velocity vectors with a differential flux larger than a threshold value (nonzero for ICA, and 1.5 orders of magnitude lower than the maximum value for IES, due to the higher noise level of IES). The circle is found through a non-linear least-squares fitting process divided into two steps: (1) fit a plane to all data points; (2) fit a sphere to the data points, where the centre of the sphere must lie on the plane determined in step 1. The two-step process improves the robustness of the fitting procedure compared to a one-step fit and restricts the number of free variables to match the degrees of freedom in the system.
In the first step, we retrieve $u_{\mathrm{bulk},\parallel}$, a vector normal to the plane best describing the location of the velocity vectors. In an ideal case with a uniform magnetic field, $u_{\mathrm{bulk},\parallel}$ would be along the ambient magnetic field. We find $u_{\mathrm{bulk},\parallel}$ by minimising
$$S_1 = \sum_i w(u_i)\left(u_i \cdot \hat{u}_{\mathrm{bulk},\parallel} - \left|u_{\mathrm{bulk},\parallel}\right|\right)^2,$$
where $u_i$ are the velocity vectors with differential fluxes above the threshold value and $\hat{u}_{\mathrm{bulk},\parallel}$ is the unit vector along $u_{\mathrm{bulk},\parallel}$. The weighting function $w(u_i)$ is the logarithm of the differential flux associated with the vector $u_i$.
In the second step we find the centre $u_0$ and radius $u_\perp$ of the sphere that best represents the velocity vectors, requiring the centre of the sphere to lie on the plane determined in the first step. The fitting parameters are obtained by minimising
$$S_2 = \sum_i w(u_i)\left(\left|u_i - u_0\right| - u_\perp\right)^2,$$
with the same weighting as in step 1. The fit parameter $u_\perp$ corresponds to a gyration speed, and the difference between the centre of the sphere and $u_{\mathrm{bulk},\parallel}$ is the drift velocity in the plane of the velocity vectors, $u_{\mathrm{drift}} = u_0 - u_{\mathrm{bulk},\parallel}$ (see Figure 1). This additional drift motion, e.g. due to an E×B drift, means that $u_{\mathrm{bulk},\parallel}$ is not necessarily the centre of gyration.
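The two-step fit can be implemented with a standard non-linear least-squares routine. The sketch below (SciPy) parameterises the plane normal by two angles and the sphere centre by two in-plane coordinates; the initial guesses and the assumption that all fluxes exceed one (so the log-weights are positive) are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_partial_ring(u, flux):
    """Two-step weighted fit of a ring to velocity vectors u (N x 3).
    Step 1 fits a plane (normal direction + offset); step 2 fits a sphere
    whose centre is constrained to that plane. Weights w = log10(flux),
    assumed positive (flux > 1)."""
    sw = np.sqrt(np.log10(flux))

    # Initial plane guess from an (unweighted) PCA of the centred points.
    _, _, vt = np.linalg.svd(u - u.mean(axis=0))
    n0 = vt[-1]
    p0 = [np.arccos(n0[2]), np.arctan2(n0[1], n0[0]), u.mean(axis=0) @ n0]

    def normal(th, ph):
        return np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])

    # Step 1: minimise the weighted squared point-plane distances (S1).
    th, ph, d = least_squares(lambda p: sw * (u @ normal(p[0], p[1]) - p[2]), p0).x
    n = normal(th, ph)
    u_bulk_par = d * n  # bulk velocity component normal to the ring plane

    # In-plane basis for parameterising the sphere centre.
    e1 = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(e1) < 1e-8:        # n parallel to z: pick another axis
        e1 = np.cross(n, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)

    # Step 2: sphere centre u0 = u_bulk_par + a*e1 + b*e2, radius r (S2).
    def sphere_res(p):
        u0 = u_bulk_par + p[0] * e1 + p[1] * e2
        return sw * (np.linalg.norm(u - u0, axis=1) - p[2])

    r0 = np.linalg.norm(u - u_bulk_par, axis=1).mean()
    a, b, r = least_squares(sphere_res, [0.0, 0.0, r0]).x
    u0 = u_bulk_par + a * e1 + b * e2
    return u_bulk_par, u0, r, u0 - u_bulk_par  # bulk, gyrocentre, u_perp, drift
```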
Partial Ring Extent
We define the extent of the partial ring as the angle corresponding to the arc along the fitted ring spanned by the observed data points with fluxes above the threshold value (see Figure 1); a complete ring would correspond to 360°. To find the extent of the partial ring, we take 100 equally spaced points along the fitted ring and map each velocity vector onto the closest sampled point. Using the same weighting as for the ring fits, we search for the shortest arc that contains 80% of the weighted sum of all the data points. For each scan (that is, at the highest time resolution possible) we find the start and stop points of the arc using an iterative process. With this method, the extent of the partial ring is always underestimated; however, the chosen threshold value of 80% provided excellent results in terms of robustness, efficiently excluding noise and other small signals not connected to the partial ring while keeping the underestimation to a minimum.
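A sketch of this arc search is given below; it uses a brute-force scan over circular windows instead of the paper's iterative process and assumes the plane normal n and gyrocentre u0 from the ring fit above.

```python
import numpy as np

def ring_extent_deg(u, flux, u0, n, n_samples=100, frac=0.80):
    """Angular extent of the partial ring: the shortest arc of the fitted
    circle containing `frac` of the weighted sum of the data points."""
    w = np.log10(flux)
    e1 = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(e1) < 1e-8:        # degenerate case: n parallel to z
        e1 = np.cross(n, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)

    # Phase angle of each point, mapped onto the nearest of n_samples bins.
    ang = np.arctan2((u - u0) @ e2, (u - u0) @ e1)
    bins = np.round(ang / (2 * np.pi) * n_samples).astype(int) % n_samples
    hist = np.bincount(bins, weights=w, minlength=n_samples)

    # Shortest circular window holding at least `frac` of the total weight.
    total = hist.sum()
    best = n_samples
    for start in range(n_samples):
        acc = 0.0
        for length in range(1, n_samples + 1):
            acc += hist[(start + length - 1) % n_samples]
            if acc >= frac * total:
                best = min(best, length)
                break
    return best * 360.0 / n_samples
```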
Results
In this section, we focus on the plasma observations of April 19th, 2016, a day that shows signatures of a partial ring distribution of solar wind protons. To set this into the context of typical solar wind behaviour during this time period, we also show a reference case from April 23rd, 2016.
April 19th, 2016
The heliocentric distance on the 19th of April 2016 was 2.8 AU. The distance of Rosetta to the comet nucleus was almost constant throughout the day, averaging around 31 km.
The level of cometary activity was around 5 × 10^25 s^-1 (derived from COPS data assuming isotropic outgassing) in the morning and increased slightly in the afternoon.
Overview
Figure 2 shows Rosetta ion observations, plasma density, and magnetic field data.
The top three panels show the energy-time spectrograms of ions as measured by ICA, split into protons, alpha particles, and heavy ions. In the beginning of the day, protons (panel a) are observed with energies between 300 eV/q and 2 keV/q. Two types of structures appear during this time. Around 08:00 (all times are UT), protons continuously populate this entire energy range, resulting in one broad energy band. At 10:00, on the other hand, two separate energy bands can be identified. The differential fluxes of the two energy bands are usually different, and one of the bands even disappears at times (e.g. at 07:00). The transitions between one single energy band and two separate ones happen suddenly, within a few scans. At around 13:00, there is a transition to a narrower energy band, and even this band sometimes disappears completely; this is a field-of-view effect and will be discussed in the next section. Contrary to the ICA proton measurements, the alpha particles (panel b) were observed in only one energy band, centred around 2.3 keV/q, throughout the interval. In the afternoon, the signal sometimes disappears due to the same field-of-view effects mentioned above. The heavy ions (panel c) can be split into two parts: the newly ionised low-energy ions (energies below 40 eV/q) are present the entire day but show increased fluxes in the afternoon, while at higher energies we see ions that have been accelerated by the solar wind electric field. These pickup ions are observed most of the time, but the differential flux and maximum energy of this ion population drop in the afternoon, especially around 16:00.
Panel d shows the IES ion observations. As IES is not mass-resolving, all ion species are present. The overall behaviour of the protons (signal band at 1 keV/q) is similar to the ICA observations, with a broader energy distribution in the morning compared to the afternoon. However, the signal in the morning does not split up into two energy bands at any point. In the afternoon no discontinuities are observed. At energies below 200 eV/q, signatures of cometary pickup ions can also be seen throughout the entire day.
The magnetic field (panel e; magnitude and components in CSEQ coordinates) has an average strength of 20.9 nT between 01:00 and 13:00, with little variation in amplitude and a dominating y-component. Only the z-component shows changes of up to ±10 nT, including sign changes, which have little impact on the magnitude. After 13:00 the fluctuations increase for all components.
The plasma density, as measured by LAP (panel f), is around 70 cm⁻³ in the morning but increases to an average value of 120 cm⁻³ in the afternoon. This increase is also reflected in the ICA measurements of low energy cometary ions (panel c), which dominate the plasma at this time.
The proton density derived from ICA measurements (panel g) varies greatly throughout the entire day, but some features can be observed: the highest measured values, around 1 cm⁻³, occur in the beginning of the day, and the density decreases in the afternoon (see dashed line at 1 cm⁻³). The periods in the morning where the density drops correspond to the appearance of two energy bands in the energy spectrum. Density estimates from ICA often have large uncertainties, but our focus here is on the variations in the proton density rather than on absolute numbers.
Angular Plots
In this section we use the method described in section 3 to visualise the angle-energy dispersion of protons and alpha particles, and their relation to the magnetic field. To identify and compensate for possible field-of-view effects we use both ICA and IES data for the protons. All angular plots cover single scans, so they show the data at the highest time resolution available for this day. The time resolutions of ICA and IES differ, and we show the IES scan with the starting time closest to the starting time of the ICA scan.

To make it easy to combine the two datasets, the IES data are rotated into the ICA coordinate system. A comparison of the upper and lower panels of figure 3 makes the complementary fields-of-view of the two instruments obvious.
Figure 3 shows a representative scan, taken around 02:54. At this time we see very broad energy bands in both the ICA and IES ion spectra (see figure 2, panels a and d).
The upper panel of figure 3 shows the median energy and differential flux of ICA protons. In the lower panel, IES ion data between 400 eV and 2 keV are displayed in the same manner. Both panels also show the anti-sunward and anti-cometward flow directions (yellow disc and grey star). Ions flowing from the Sun or the comet would be seen at the marked locations. The blue cross marker indicates the direction of the magnetic field, averaged over the entire scan. The underlying ellipse gives an estimate of the variability of the magnetic field direction during this scan.
We note that the ICA dataset shows a large angular spread of the proton distribution along a continuous line at negative elevation angles. The median energy is highest (1.2 keV) for the pixels closest to the anti-sunward direction and decreases down to 500 eV for the most deflected protons. The differential flux is similar for most pixels and only falls off for the most deflected protons. The broad spectrum seen in figure 2a reflects this energy dispersion. IES data have higher noise levels, but in the pixels with the highest fluxes the same features as in the ICA data can be identified.
The observed distributions resemble partial rings, so we combine ICA and IES measurements and apply the ring fitting method described in section 3.2 in order to characterise the shape of the proton distribution. The resulting fitted ring for this scan is overlaid in both panels and uses the same energy scale as the data. We conclude that the shape of the ring and the energy dispersion match the data very well. The estimated direction of the parallel component of the bulk velocity (u_bulk,∥) is displayed with a green cross and deviates only about 30° from the magnetic field direction. The method to find the extent of the ring is described in section 3.2.1. The white dots on top of the fitted ring indicate the estimated start and end of the partial ring. We note the slight underestimation of the partial ring extent, an effect of the method used.
In both panels there is a signal deflected in the direction opposite to the rest of the distribution (positive elevation angles). The fluxes are lower and the angular spread is smaller, but this signal appears in many scans at a similar position and energy range, and it is hence considered to be a real signal.
The magnetic field does not drastically fluctuate between 01:00 and 13:00, but it still sometimes exhibits changes on the timescale of individual scans. Figure 4 shows such a case. During three consecutive scans the magnetic field magnitude is almost constant while the average direction changes by 32°. The change in the elevation angle from 25° to 8° is observable in figure 4. During these three scans we also see a change in the angular distribution of the protons. In the first scan the ICA measurements (upper left panel) show a continuous partial ring close to the lower edge of the field of view. The IES measurement agrees well with this observation. In the next two scans the entire proton distribution appears shifted downwards in elevation. Due to the higher angular resolution this shift is more obvious in ICA data, but it can also be seen in IES data. As a result, the middle part of the partial ring, with energies around 700 eV, is not observed by ICA because it falls outside the field-of-view. However, the IES data suggest that plasma with these energies is still present. We conclude that the two separate energy bands we observe in figure 2 are a consequence of part of the distribution being outside of the ICA field-of-view.
With the change in B-field towards lower elevations, u_bulk,∥ also decreases in elevation. The angle between the B-field and u_bulk,∥ increases from 27° to 29°, which is a small change compared to the overall change of the magnetic field direction. u_bulk,∥ is consistently observed at higher elevations than the magnetic field direction. The variability of the B-field direction during one scan is approximately 10°, which is much smaller than the difference between u_bulk,∥ and the direction of the B-field. We make two important observations: (1) a change in the measured magnetic field direction coincides with a matching shift of the partial ring distribution; (2) the difference between the magnetic field direction and the estimated u_bulk,∥ can be explained neither by uncertainties in the fitting procedure nor by the variability of the magnetic field during one scan.

So far we have only shown the angular distribution of protons. To get a complete picture of how the solar wind behaves, a comparison of protons (upper panel) and alpha particles (lower panel) for a single scan is given in figure 5. Separate scales for both median energy and differential flux on the dual colormaps are used to account for the different plasma properties of the two species. Compared to the protons, the alpha particles are much less spread in angular space. There is a slight energy-angle dispersion visible in the scan shown in figure 5, but such dispersion is not consistently observed during the day. Analysis of all scans between 01:00 and 13:00 shows that the angular spread of alpha particles never exceeds 5 pixels in elevation, and is rarely broader than 2 sectors in the azimuth direction. The differential flux also falls off significantly for the two pixels at the lowest elevations. Hence, we can exclude the possibility of field-of-view effects cutting away significant parts of the signal.
Due to the low fluxes of alpha particles and the lack of mass separation, we cannot use IES to confirm the observations mentioned above. However, whenever there was a strong signal standing out in the IES data in the energy range between 2 keV and 4 keV, the observations matched the ICA alpha particle data.
Timeseries of Fitted Rings
For a more comprehensive analysis of the partial rings, we applied the fitting procedure to all ICA and IES scans between 00:00 and 13:00, the time period when we observe the partial rings. There are 225 ICA scans available during this time, and the resulting fits were evaluated individually by visual inspection to exclude unsuccessful fits due to high noise in the data. This resulted in 180 good fits, a success rate of 80 %. It is interesting to note that neither the success of the fitting procedure nor the resulting fit parameters are affected by the field-of-view limitations of the instruments.
A timeseries of the fitted parameters is given in figure 6. Panel a shows the fitted ring velocities. The dominating velocity component is the gyration speed. It is relatively constant, with an average of u_⊥ = 362 km s⁻¹. The drift speed is also relatively constant, and averages u_drift = 98 km s⁻¹. The parallel component of the bulk velocity shows more variability, and extends from 0 up to 198 km s⁻¹. The average is u_bulk,∥ = 51.5 km s⁻¹. The estimated ring angle extent (shown in panel b) fluctuates slightly over these 13 hours, ranging from 90° to 150°. Apart from a slightly smaller angle in the beginning of the day, there is no clear trend, and the average ring extent is 111.4°. In panel c we show the angle between the magnetic field and u_bulk,∥. It drops from above 60° early in the morning to 10° around 6:00, and remains low for the next two hours. Between 9:00 and 13:00 the magnetic field direction and u_bulk,∥ deviate significantly, and the average angle is 38°.
Reference Case
As a reference case we choose April 23rd, 2016. Since it is only four days later than our main case, the heliocentric distances are comparable, as is the distance of Rosetta to the nucleus (around 30 km). However, the production rate for the reference case is about four times as high, with an average of 2.1 × 10²⁶ s⁻¹.
Overview
Figure 7 shows the same plasma parameters as figure 2, but for the reference case.
The ICA proton measurements (panel a) show a narrow energy band with a centre energy around 600 eV/q, constant throughout most of the day. Only between 14:15-15:30, and after 19:30, is there an increase in the centre energy of the band, along with a slight broadening and an increase in differential flux. The alpha particles (panel b) appear as a barely visible narrow band with a centre energy of 1.3 keV/q. The differential fluxes are barely above the detection threshold of the instrument. During times where there is no signal available, e.g. at 5:00, the particle fluxes are probably too low to be detected by ICA. The ICA heavy ion spectrum (panel c) is dominated by low energy cometary ions. Pickup ions can be seen between 14:15-15:30, and after 19:30, but the fluxes are much lower compared to the main case. The proton signatures in IES (panel d) are very faint or not available during this day, mostly due to field-of-view effects. There are also no traces of cometary pickup ions visible in the IES data. Magnetic field measurements (panel e) show a calm magnetic field with an average magnitude of 10.5 nT. There is a slight change in direction over the course of the day, as seen in the x- and y-components. The z-component only shows large changes between 14:15-15:30. The LAP estimate of the plasma density (panel f) increases from 100 cm⁻³ in the beginning of the day to above 300 cm⁻³ in the afternoon. As in our main case, the density is dominated by low energy cometary ions. The proton density (panel g) is around 0.1 cm⁻³ most of the time, with the exception of the time between 14:15-15:30, where it has a plateau at a value of 0.5 cm⁻³.
Angular Plots
The angular spread of the protons for the reference case is much smaller than in the partial rings case, and the distribution appears beam-like instead of ring-shaped. The beam is less deflected than what was observed for the partial rings, and the magnetic field configuration differs in both magnitude and direction. There is also no clear angle-energy dispersion visible. A typical example of the flow directions of alphas and protons for the reference case is shown in the supporting information (see figure S1).
The alpha particle distributions are very similar both to the proton distributions in this case and to the alpha particle distribution of the partial rings case, only with a lower flux. In fact, the differential flux is so low that it is just above the detection threshold of the instrument for this energy range, which explains the lack of a continuous alpha signal band in figure 7 (i.e., whenever the fluxes drop just slightly, they are not detected by ICA).
Proton Temperatures
The broad energy band seen in figure 2a, with a spread of 1 keV, gives the impression of a heated proton population. At 1 AU the mean proton temperature is 12.7 eV (Wilson III et al., 2018), and it decreases with T ∼ R^−0.3 (cf. Belcher et al., 1981) to an expected solar wind proton temperature of 9 eV at 2.8 AU. Figure 3 reveals that the width of the spectrum is a result of an energy-angle dispersion rather than heating. In this context, we define heating as an irreversible process resulting in an increased temperature. The proton temperature would correspond to the width of the ring in velocity space, which is hard to determine from the data with the given angular resolution. Instead we assume an isotropic temperature and fit a Maxwellian to the energy distribution observed in each individual pixel that contains a measurable differential flux. We require five non-zero values in the energy distribution to fit, and each scan typically contains 5-15 pixels where a fit can be made. All fits are visually inspected and bad fits are removed. Figure 8 shows the fitted temperature, expressed as the thermal velocity versus the bulk velocity (obtained from the same fit). The thermal velocities correspond to energies in the range 5-20 eV. The colour of each dot is the modified index of agreement, a measure of the goodness of fit (Willmott, 1981). In figure 8 we use the first 30 of the 180 good scans identified in section 4.1.3 to get a representative view of the distribution. We note a clear dependence, and a linear fit is a reasonable representation of the data. The Pearson correlation is 0.65.
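A per-pixel fit of this kind could look like the sketch below. The exact functional form fitted by the authors is not spelled out, so a Gaussian in speed (a drifting Maxwellian along a fixed look direction) is an assumption, as are all names; the goodness of fit uses the modified index of agreement of Willmott (1981).

```python
# Hedged sketch of the per-pixel temperature fit (assumed functional form:
# Gaussian in speed, i.e. a drifting Maxwellian at fixed look direction).
import numpy as np
from scipy.optimize import curve_fit

M_P = 1.6726e-27   # proton mass [kg]
Q_E = 1.6022e-19   # elementary charge [C]

def maxwellian(v, amp, v_bulk, v_th):
    return amp * np.exp(-((v - v_bulk) / v_th) ** 2)

def fit_pixel(energies_eV, flux):
    """Fit one pixel's energy spectrum (numpy arrays); needs >= 5 non-zero fluxes."""
    mask = flux > 0
    if mask.sum() < 5:
        return None
    v = np.sqrt(2 * energies_eV[mask] * Q_E / M_P)   # convert energy to speed
    p0 = [flux[mask].max(), v[np.argmax(flux[mask])], 50e3]
    (amp, v_bulk, v_th), _ = curve_fit(maxwellian, v, flux[mask], p0=p0)
    pred, obs = maxwellian(v, amp, v_bulk, v_th), flux[mask]
    # modified index of agreement (Willmott, 1981) as goodness of fit
    d1 = 1 - np.abs(obs - pred).sum() / (np.abs(pred - obs.mean())
                                         + np.abs(obs - obs.mean())).sum()
    return v_bulk, abs(v_th), d1
```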
For the reference case most of the fitted proton temperatures lie between a few eV and about 15 eV, with no obvious correlation between the thermal and bulk velocities (not shown). We note, though, that the bulk velocity is almost constant, and hence it is difficult to determine any dependence.
Discussion
To put the partial ring observations into a global context of the cometary environment, we compare with model results. Visualising the model results requires a projection into a coordinate system. Most useful for our case is the projection into magnetic coordinates centred at the comet, where the x-axis is in the sunward direction, which corresponds to −v of the undisturbed solar wind. The y-axis is along the solar wind magnetic field direction perpendicular to x. The z-axis completes the right-handed system, and is along the convective electric field (E = −v × B). This separates the comet environment into two hemispheres, referred to as the +E-hemisphere (z > 0) and the −E-hemisphere (z < 0), respectively. The terminator plane at x = 0 is the orbit plane of Rosetta for both days discussed in this paper.
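A minimal sketch of constructing this basis from a sunward unit vector and a magnetic field vector (both assumed to be given in a common inertial frame) is:

```python
# Sketch of the comet-centred magnetic coordinate basis described above
# (our own illustration): x sunward, y along the B component perpendicular
# to x, z along the convective electric field E = -v x B.
import numpy as np

def magnetic_basis(sun_dir, b_field):
    """Return (x, y, z) unit vectors for the magnetic coordinate system."""
    x = sun_dir / np.linalg.norm(sun_dir)
    b_perp = b_field - np.dot(b_field, x) * x   # B component perpendicular to x
    y = b_perp / np.linalg.norm(b_perp)
    z = np.cross(x, y)                          # along E, since v = -|v| x
    return x, y, z
```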
Only a few models focus on the specific case of low cometary activity and resolve the small distance between Rosetta and comet 67P. One such model is presented in Gunell et al. (2018) for a heliocentric distance of 2.4 AU. It predicts the formation of a solar wind proton density enhancement layer draping asymmetrically around the nucleus and continuing into the tail region in the −E-hemisphere. In the terminator plane this density enhancement layer coincides with a local enhancement of the magnetic field strength, as well as a broadening of the proton energy spectra. At the same time the alpha particles appear as almost undisturbed solar wind. The model by Gunell et al. (2018) further shows a +E-hemisphere characterised by the occurrence of cometary pickup ions with energies exceeding 100 eV. Many of the features of the model correspond to our observations: the broadened proton energy spectra with increased density, an increased magnetic field strength, and the occurrence of energetic pickup ions are all present during the observations of the partial rings. However, we have shown that the observed broadening of the energy spectra is mainly due to the energy-angle dispersion of the protons, and not due to an increase in temperature. This makes a model with a more detailed analysis of the flow directions very useful.
The 2D kinetic model from Behar et al. (2018) provides a simplified view of the trajectories of solar wind protons. They assume that the neutral gas density of the comet falls off as 1/r², and that the amplitude of the magnetic field is proportional to 1/r² as well. Because no electric field is included in the model, particles only gyrate and do not change energy. Consequently, changes in the gyroradius are only due to a change in cometocentric distance, and not due to the convective electric field or a change in particle speed. In this semi-analytical model, the solar wind, modelled as containing only a proton population, is deflected around the comet in an asymmetric manner. The results were verified with a hybrid model, and show a density enhancement layer similar to that in Gunell et al. (2018). The region cometward of this layer is depleted of solar wind ions. In the +E-hemisphere the density enhancement is only visible close to the nucleus, and is dominated by highly deflected, almost sunward-streaming ions. Assigning spatial scales to the dimensionless model places the density enhancement at about 12 km in the +E-hemisphere for a heliocentric distance of 3 AU (Behar et al., 2018). For our case at 2.8 AU, this density enhancement region would be found at around 24 km.
We used the particle trajectories of both the kinetic model and the hybrid model shown in Behar et al. (2018) (cf. their figure 7) to create a sketch of possible flow patterns of solar wind protons. Figure 9a shows some suggested realistic solar wind proton trajectories (blue lines), partially based on the hybrid simulation results presented in Behar et al. (2018) for low cometary activity. The theoretical trajectories from the kinetic simulation are shown in grey, and the density enhancement region is visible. Our illustration of more realistic trajectories attempts to include the effects of a convective electric field as well as asymmetries in the outgassing. This results in more cycloidal trajectories compared to the kinetic model, and a more diverse flow pattern. We see that even a slight perturbation from the simplified case creates a highly complex interaction region in the +E-hemisphere. The density enhancement layer observed here is a focal point for ion trajectories coming from different directions, with the largest angular range of the proton flow directions occurring in the +E-hemisphere. Here the different proton trajectories would be observed as a partial ring. The spatial extent of the focal region is small, which requires the spacecraft to be located in a very specific region for these rings to be seen. In figure 9b a local view of the realistic trajectories near the comet and the spacecraft is shown. The solid lines and arrows indicate the flow pattern of ions before intersecting at the observation point. Their trajectories after the observation point are shown by the dashed lines. The flow directions vary from slightly deflected anti-sunward to an almost sunward flow. The change in energy in the comet reference frame is due to the gyration of the solar wind protons around the centre of mass of the bulk plasma reference frame, estimated by the fitted ring parameters u_bulk,∥ and u_drift. Because of the negligible speed of Rosetta relative to the comet nucleus, the comet reference frame is also the spacecraft reference frame. The ions moving in an anti-sunward direction will have the highest energies, while the more deflected ones exhibit lower energies in the comet reference frame. This relation is illustrated using the same energy colourbar as in the dual colourmap plots (see for example figure 3). For the case that a particle performs a nearly full gyration before being observed, the energy is expected to be similar to that of the only slightly deflected solar wind. Such a signal has been consistently observed along with the partial rings, although with a lower flux intensity (see figure 4, at 30° elevation near the anti-sunward flow direction in all three panels).
What information can we obtain from these partial ring observations? The estimated parameters u_bulk,∥ and u_drift describe the average gyration centre of the solar wind protons. In a generalised description of different ion populations, u_drift is the same for the entire plasma population (assuming an E × B drift). The direction of the parallel component u_bulk,∥ provides a proxy for the average magnetic field direction in the entire interaction region of the ions observed as partial rings. A comparison between this proxy and the local magnetic field direction measured by MAG, as seen in the second panel of figure 6, provides information about the differences between the local field and the average global field in the +E-hemisphere upstream of the observation point. At large distances from the nucleus, the direction of the magnetic field is expected to be similar to that of the undisturbed solar wind (Goetz et al., 2017). Only close to the nucleus (< 50 km) does magnetic field draping become important (Koenders et al., 2016). We also estimate the gyration speed u_⊥ of the protons. This gyration speed carries the kinetic energy that is no longer in the bulk plasma drift of the protons. Due to the similar spatial scales of the ion gyroradii (approximately 180 km for protons at the spacecraft) and the comet environment, the gyration motion is still in its initial stage. As the scale size of the interaction grows significantly larger than an ion gyroradius, it is likely that this gyration will evolve into an increased thermal velocity via heating processes (A. J. Coates & Jones, 2009).
In such a comet environment a shock is likely to form.
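As a quick arithmetic check of the gyroradius quoted above, evaluating r_g = m_p u_⊥ / (e B) with the fitted gyration speed (≈ 362 km s⁻¹) and the average field strength (≈ 20.9 nT) indeed gives about 180 km:

```python
# Proton gyroradius r_g = m_p * u_perp / (e * B), using the fitted
# gyration speed and the average measured field strength from this day.
M_P, Q_E = 1.6726e-27, 1.6022e-19   # proton mass [kg], elementary charge [C]
u_perp = 362e3                       # fitted gyration speed [m/s]
B = 20.9e-9                          # average field strength [T]
r_g = M_P * u_perp / (Q_E * B)
print(f"r_g = {r_g / 1e3:.0f} km")   # -> r_g = 181 km, consistent with ~180 km
```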
To verify that Rosetta was in the +E-hemisphere when we observed the partial rings, we used the direction of u_bulk,∥ to define the y-axis of the magnetic field coordinates. From this we determined that the spacecraft was located in the +E-hemisphere (see figure S2 in the supplementary information). Using the local magnetic field measurements for the coordinate transformation instead resulted in a larger spread of the spacecraft positions. This indicates that u_bulk,∥ is indeed a better estimate of the average upstream magnetic field direction than the local magnetic field measurements.
During the reference case, Rosetta was also located in the +E-hemisphere, at a similar radial distance to the comet nucleus as in the partial rings case. However, the outgassing rate of the comet during that day was higher, as seen e.g. in the LAP and COPS densities. This is likely due to a latitudinal effect of the comet activity (Hansen et al., 2016). A higher outgassing rate will lead to a density enhancement layer that is separated further from the nucleus.

Data Availability

The ICA dataset (Nilsson, 2021a) and the derived moment data (Nilsson, 2021b) were used. The additional ion data are the calibrated data from RPC-IES (Trantham, 2019).
Introduction
This supporting information contains an additional angular plot with a dual colourmap for the reference case. It also contains an overview plot of the spacecraft position in magnetic field coordinates.
Figure S1. Reference case -Angular plots
Figure S1 shows the angular distribution of protons and alpha particles as measured by ICA during our reference case (April 23rd, 2016, at 11:32). The lower median energy of the protons could be due to a slower upstream solar wind, or due to a higher electrostatic potential difference from the observation point to the upstream solar wind. As the alpha particles are also observed at much lower energies, the dominating influence seems to be the upstream solar wind conditions (Nilsson et al., 2022). The signal to the left in the upper panel is an instrumental effect (cross-talk) and not a real signal.
Figure S2. Spacecraft position in magnetic field coordinates
To define the magnetic / electric field coordinate system we aligned the x-axis with the sunward direction as an approximation for the negative upstream solar wind flow direction.
For the y-axis, which is usually aligned along the magnetic field component perpendicular to the velocity in this coordinate frame, we used the local magnetic field measured by MAG for both cases (see green markers in figure S2). Additionally, we also used the estimated ring parameter u_bulk,∥ to provide an alternative estimate of the magnetic field direction. The results of using the component of u_bulk,∥ perpendicular to the x-axis are shown with red markers in figure S2. The z-axis completes the right-handed system and is along the convective electric field (E = −v × B). The +E- and −E-hemispheres are found at z > 0 and z < 0, respectively.
On both days the majority of data points are at z > 0, but the spread is significant, especially for the partial rings case when using the local magnetic field measurements.
Using the u_bulk,∥ estimate instead of the MAG measurements significantly reduces the spread, to about half of the angular variation.
Figure 1. (Partial) rings in velocity space. Panel a): illustration of a generic ring in 3D velocity space, with the defining parameters u_bulk,∥, u_drift, and u_⊥ shown. The measured velocity vectors along the ring are indicated with black arrows (u_i), and the extent of the partial ring corresponds to the grey part of the ring. Panel b): velocity vectors measured by ICA and IES in ICA instrument coordinates (at 02:22 on April 19th, 2016). The ring fitted to both datasets is shown in red, and the darker part marks the estimated extent of the partial ring.
Figure 2. Timeseries overview of the 19th of April 2016. Panels a-c) show the ion differential flux per E/q as measured by ICA, mass-separated into protons, alphas, and heavy ions. Panel d) shows the ion differential flux per E/q as measured by IES. The differential flux colourbar is the same for panels a-d). Panel e) shows the magnetic field data as measured by RPC-MAG (in nT). The individual lines show the magnitude of the B-field and its individual components in the CSEQ reference frame. Panel f) shows the plasma density, measured by LAP, and panel g) shows the proton density, derived from ICA (both in cm⁻³). The dashed line in panel g) marks a density of 1 cm⁻³. For the grey areas there is no ICA data available.
Figure 3. Azimuth-Elevation plots of ICA (upper panel) and IES (lower panel) for one individual instrument scan of each instrument. Elevation is shown by the left-hand axis, and azimuth ranges from −180° on the left to 180° on the right side. The partial ring structure, with a decreasing energy along the ring, can be seen in both instruments. The dotted line shows the fitted ring, colour-coded using the same energy scale as the median energy for each pixel. The estimated start and end points of the partial ring are indicated with white dots. More information can be found in section 4.1.2.
Figure 4. Azimuth-Elevation plots of three consecutive ICA and IES scans showing the response of the partial ring distribution to a change in B-field direction. The format is the same as described in figure 3.
Figure 5. Azimuth-Elevation plots of SW protons (upper panel) and alphas (lower panel) as measured by ICA. The alpha particles exhibit no prominent ring features and are in general less deflected than the protons. The format of the upper panel is the same as in figure 3. The colour bars in the lower panel are adjusted to match the different flux and energy range of the alpha particles compared to protons.
Figure 6. Timeseries of fitted ring parameters (April 19th, 2016). Panel a) shows the magnitude of the fitted velocities u_bulk,∥, u_drift, and u_⊥ in km/s. Panel b) shows the estimated extent of the ring angle. Panel c) shows the angle between the locally measured magnetic field direction B and the fitted parallel velocity direction u_bulk,∥. Only successful fits are included in the timeseries. No ICA data is available for times within the grey area.
Figure 7. Timeseries overview of the 23rd of April 2016. The format is the same as described in figure 2.
Figure 8. Fitted proton thermal speed as a function of the bulk speed obtained from the same fit. The goodness of fit (modified index of agreement; Willmott, 1981) is colour-coded, and all fits have been inspected manually. A low modified index of agreement corresponds to cases where the flanks of the distribution do not perfectly match a Maxwellian.
Figure 9. Illustration of the solar wind proton trajectories leading to partial ring distributions at comet 67P for low activity. Panel a) shows a global view. The illustrated realistic trajectories are shown in blue. The theoretical trajectories from the kinetic model (after Behar et al. (2016)) are underlaid in grey. Panel b) shows a local view, with the flow directions of the protons at the spacecraft indicated by the arrows, and the continuation of the trajectories drawn with dotted lines. The change in energy of the observed protons depending on the arrival direction is indicated with a colour bar (same as e.g. figure 3). In both panels the separation into the +E- and −E-hemispheres is indicated.
Figure S1. Azimuth-Elevation plots of a single scan during our reference case (April 23rd, 2016, at 11:32).
"Physics",
"Geology"
] |
Role of Acupuncture in the Treatment of Drug Addiction
This review systematically assessed the clinical evidence for and against acupuncture as a treatment for drug addiction. The existing scientific rationale and possible mechanisms for the effectiveness of acupuncture on drug addiction were also evaluated. We used computerized literature searches in English and Chinese and examined texts written before these computerized databases existed. We also used search terms covering the treatment and neurobiology of drug abuse and dependence. Acupuncture showed evidence for relevant neurobiological mechanisms in the treatment of drug addiction. Although positive findings regarding the use of acupuncture to treat drug dependence have been reported by many clinical studies, the data do not allow us to conclude that acupuncture is an effective treatment for drug addiction, given that many of the studies reviewed here were hampered by small numbers of patients, insufficient reporting of randomization and allocation concealment methods, and limited strength of inference. However, considering the potential of acupuncture demonstrated in the included studies, further rigorous randomized controlled trials with long follow-up are warranted.
Introduction
Acupuncture originated in ancient China and has been used to manage various clinical disorders for thousands of years. Acupuncture needles are inserted into acupuncture points on the body to treat many different disorders, and are traditionally manipulated manually. One of the more recent technical developments was the use of peripheral electrical stimulation applied via acupuncture needles inserted into the acupoints, that is, "electroacupuncture" (EA). Currently, newer methods for stimulating the acupuncture points include applying electric current to skin electrodes over the points, directing a laser light onto the points, or using finger pressure to massage selected points (acupressure). In addition, many new points and entire "microsystems" of points have been described for specific body parts, for example, scalp acupuncture and ear acupuncture (auricular acupuncture). In Western countries, acupuncture began to be known in the middle of the 1970s, and its acceptance has increased rapidly. Many Western patients turn to acupuncture along with conventional medical therapy to make sure they are utilizing all possible medical options. A survey of acupuncture released by an NIH Consensus Development Panel indicated that although there are inherent problems of design, sample size, and appropriate controls in the acupuncture literature, extensive work has shown that acupuncture is beneficial in treating various pain syndromes, postoperative and chemotherapy-induced nausea and vomiting, some forms of bronchial asthma, headache, migraine, and female infertility. For the past 40 years, a number of studies of acupuncture applied, as a medical technique, to the treatment of heroin, alcohol, nicotine, and cocaine addictions have been reported. In light of an increasing trend in the use of acupuncture, and the utilization of such approaches by patients suffering from drug addiction, we intend to review the existing scientific rationale and clinical data, which indicate that acupuncture may influence the prognosis of drug addicts.
Acupuncture: Theory and mechanisms
It has long been a dream to cure diseases by nonpharmacological measures that activate self-healing mechanisms, without using drugs. Recent efforts along these lines include the use of vagal nerve stimulation, repetitive transcranial magnetic stimulation (rTMS), deep brain stimulation, and acupuncture to stimulate certain brain areas. Evidence presented in this review demonstrates that it is possible to facilitate the release of certain neuropeptides in the central nervous system (CNS) by means of stimulation of peripheral acupuncture points. In contrast to magnetic stimulation, which stimulates the superficial areas of the brain (i.e., the cortex) [1], acupuncture activates various brain structures and/or the spinal cord via specific neural pathways. Any predictions made at this stage should not be overly optimistic. But the clinical efficacy demonstrated using acupuncture to ease postoperative pain [2,3], lower-back pain [4,5], and diabetic neuropathic pain [6], and the successful application of 100 Hz (but not 2 Hz) electroacupuncture for treating muscle spastic pain of spinal origin [7], certainly hold exciting promise for the future. Gaining knowledge of therapeutic mechanisms is essential to validating therapies such as acupuncture that are difficult to test under double-blind, placebo-controlled conditions. If we try to answer the question of how acupuncture works, or what physical changes occur, it is appropriate first to give some theoretical background. Clinical treatment with acupuncture is guided by symptom differentiation and therapeutic methods, by means of needling and moxibustion (burning punks of Artemisia vulgaris) with certain manipulation methods to stimulate the selected acupuncture points for the prevention and treatment of diseases. The theory of meridians and acupuncture points is the basic theory of this therapy. In fact, traditional Chinese medicine is based on the concept of the flow of energy, or Qi, through meridian pathways in the body. Qi is postulated to flow through the body in precisely located pathways or channels called meridians. These meridians are thought to be connected to various body organs as well as to each other. According to the principles of traditional Chinese medicine, illness results from an imbalance of energy flow within these meridians. Acupuncture was developed according to the principle that human bodily functions are controlled by the "meridian" and "Qi" systems. There are 365 designated acupuncture points located along these meridians. Acupuncture stimulates the points located on the "meridians" along which Qi flows, breaking the blockage and subsequently restoring the flow of energy and healthy body functioning [8].
Acupuncture points on the body have both local and systemic influences. Pain, for example, is treated not only locally but distally as well, via acupuncture points further along the meridian, drawing energy away from the pain. Conditions caused by organ dysfunction such as asthma or drug addiction are differentiated according to the specific symptoms present. Acupuncture points are then selected appropriate to both the symptoms reported and the cause of that individual's problems.
Although the direction, angle, and depth of needle insertion, the stimulation technique (such as rolling, raising, and thrusting), and Deqi may each have an effect through different actions, the condition of the patient is the most important factor that influences the effectiveness of acupuncture. Numerous examples reveal that the regulatory effect of acupuncture has the characteristics of holism and bidirectional regulation. In acupuncture theory, bidirectional regulation refers to a balancing effect of acupuncture interventions when the human body is experiencing hyperactivity or hypoactivity due to abnormal intrinsic or external factors. Stimulation of the same acupuncture points with different manipulation techniques or stimulation parameters can regulate different functional activities of the body bidirectionally: it balances the functions of the body when they become hyperactive through an inhibiting effect, and restores the normal functions of the body when they become hypoactive through an exciting effect. For example, when blood pressure is too high, needling Neiguan (PC.6) can reduce it; when blood pressure is too low, needling PC.6 can elevate it. Acupuncture-induced correction of abnormal blood pressure is observed to depend on nervous, endocrine, humoral, and dielectric regulation. Take Zusanli (ST.36) as another example: EA at Zusanli (ST.36) can bidirectionally regulate gastric activity. For gastric hypermotility, EA at ST.36 can inhibit gastric movement; for bradygastria, EA at ST.36 can promote the peristalsis of the stomach. In addition, it is notable that some acupuncture points have special or specific curative effects on certain diseases. For example, Dazhui (GV.14) abates fever and Zhiyin (BL.67) rectifies the position of the fetus [9].
The guidance of the theory of traditional Chinese medicine is traditionally believed to be essential in achieving acupuncture's therapeutic effect, but its metaphysical explanations may be hard to reconcile with modern science. In recent years, a growing number of research publications have given strong evidence that acupuncture can be explained on a physiological and neurobiological rather than a metaphysical basis [10,11]. For example, in traditional Chinese medicine, the vision-related acupuncture point (VA1) (known as urinary bladder channel point BL67) is believed to be an effective acupuncture point that directly treats eye-related disorders. In this traditional view, various acupoints are related directly to corresponding specific organs rather than acting via the central nervous system. Based on the knowledge of Western medicine, it is difficult to believe that acupuncture treats disorders and diseases by direct control of organs or organ-related disorders and diseases. It is known that many disorders are either controlled or affected by the brain, i.e., by specific corresponding brain functional areas. Cho et al. [12] demonstrated that when acupuncture stimulation is performed at VA1 (the vision-related acupuncture point), activation of the occipital lobes is seen by functional magnetic resonance imaging (fMRI). Stimulating the eye directly with light evokes similar activation in the occipital lobes. This may represent an important step toward understanding oriental acupuncture in relationship to brain function. In addition, findings by Bruce Rosen of Harvard Medical School, presented at the American Psychosomatic Society Meeting in Orlando, showed that acupuncture on pain-relief points cut blood flow to key areas of the brain related to pain within seconds. Researchers applied acupuncture needles to acupuncture points on the hand linked to pain relief in traditional Chinese medicine. Blood flow decreased in certain areas of the brain, as detected by fMRI, within seconds of volunteers reporting a sense of heaviness in their hands, a sign that the acupuncture is working. The needling technique is not supposed to hurt if done correctly. When a few subjects reported pain, the fMRI scans showed an increase of blood flow to the same brain areas. This may provide the clearest explanation to date for how ancient acupuncture might relieve pain.
Recently, the neurophysiology of acupuncture has been investigated extensively and reviewed in detail. The principal suggestion is that acupuncture operates largely through neurotransmitters, particularly endorphin-related mechanisms. These studies demonstrate conclusively that acupuncture's effects are related to the release of a variety of neurotransmitters, including natural opiates, and, furthermore, that this effect is naloxone-reversible. Basic research has demonstrated that any noxious stimulus will result in endorphin release through the neurophysiological mechanism described as diffuse noxious inhibitory control (DNIC). DNIC therefore represents a nonspecific physiological mechanism that triggers the natural opiate system in both man and experimental animals. It has been suggested that DNIC plays a relatively minor role in acupuncture analgesia and that other systems, mediated by serotonin and noradrenaline, may be important. The mechanism of acupuncture in internal diseases, such as asthma and irritable bowel, and in the treatment of symptoms such as nausea, is completely unknown. Acupuncturists have hypothesized that the autonomic nervous system plays an important, but as yet ill-defined, part in the underlying mechanisms involved in the treatment of such internal problems.
Effects of acupuncture on drug dependence
Conventional detoxification methods such as methadone and buprenorphine are effective in reducing illicit opioid use, but problems associated with their use, such as social resistance to the idea of "replacing one drug of abuse with another" and difficulties in tapering patients off the medication due to long-lasting withdrawal effects, make the search for alternative therapies important [13].
Acupuncture's utility for treating drug abuse and dependence is best shown in opioid-dependent patients experiencing withdrawal [14,15]. Over the past 40 years, acupuncture and EA have been applied with great success to attenuate behavioral signs of opioid withdrawal in addicts [16][17][18]. The use of acupuncture to treat drug withdrawal symptoms began in 1972. H. L. Wen, a neurosurgeon from Hong Kong, visited China to learn acupuncture anesthesia. Upon returning to his Hong Kong practice, he used electrical stimulation via acupuncture needles to reduce or eliminate the need for anesthetic drugs during surgery. Acupuncture treatment was given over several weeks prior to surgery, as well as during operational procedures. Dr. Wen was unaware that some patients were also heroin, opium, morphine, alcohol, and/or nicotine dependent. These addicted patients later volunteered this information, and reported that they also lost their drug cravings after receiving acupuncture. Wen and his colleagues followed up 40 patients treated for opium and heroin addiction. They confirmed that 39 of the 40 patients were considered improved, in that they had gained basal weight and reported that they did not crave drugs [19][20][21]. In the United States, Smith and coworkers [22][23][24][25] modified Wen's original protocol by eliminating electrical stimulation and by using an abbreviated prescription of five-point auricular acupuncture. This prescription was not designed for withdrawal from any single class of drug or abused substance. Instead, it effectively reduced the cravings, anxiety, and dysphoria of withdrawal in addicted patients withdrawing from a variety of drugs and alcohol. Patients consistently reported dramatic relief during the early weeks of withdrawal, when the incidence of relapse is highest. By 1974, Smith had adopted this five-point auricular protocol as the sole detoxification method used in the outpatient clinic at Lincoln Hospital in the Bronx, NY. Over the past 40 years, this acupuncture protocol has grown in popularity. It is currently used to treat alcohol and other drug withdrawal in more than 800 substance abuse treatment centers across the United States and Europe.
Clinical studies and related research on acupuncture have been undertaken by independent groups. Some randomized trials have compared the effects of auricular acupuncture at points specific for the treatment of substance abuse with acupuncture at sham points [26][27][28]. Washburn et al. [29] conducted the first controlled study of acupuncture for heroin detoxification. One hundred addicted persons were randomly assigned, in a single-blind design, to the standard auricular acupuncture treatment used for addiction or to a "sham" treatment that used points geographically close to the standard points. They observed that subjects assigned to the standard treatment attended the acupuncture clinic on more days and stayed in treatment longer than those assigned to the sham condition. Zhang et al. [26] also found that acupuncture and electrical stimulation were more effective than clonidine in treating withdrawal syndromes such as insomnia, pain, and anxiety following acute withdrawal symptoms. Clinical studies have also demonstrated that this treatment has fewer side effects. In addition, Meade et al. [30] tested the effectiveness of transcutaneous electric acupoint stimulation (TEAS) as an adjunctive treatment for inpatients receiving opioid detoxification with buprenorphine-naloxone at a private psychiatric hospital. They showed that TEAS is an acceptable, inexpensive adjunctive treatment that is feasible to implement on an inpatient unit and may be a beneficial adjunct to pharmacological treatments for opioid detoxification. Acupuncture also appears to be a useful adjunct to methadone maintenance therapy (MMT) in heroin addiction. Recently, one study examined the effectiveness of acupuncture for heroin addicts on methadone maintenance by measuring the daily consumption of methadone and variations in the 36-item Short Form Health Survey (SF-36) and Pittsburgh Sleep Quality Index (PSQI) scores. Acupuncture was associated with a greater improvement in sleep latency at follow-up, and all adverse events were mild in severity [31].
A number of studies have examined the effects of acupuncture on cocaine and alcohol dependence. For example, severe recidivist alcoholic patients treated with acupuncture at points specific for the treatment of substance abuse reported less craving for alcohol and fewer drinking episodes, and required fewer admissions to the county detoxification center than did control patients who received acupuncture at nonspecific points [27]. Lipton et al. [32] also reported that patients receiving acupuncture treatment had significantly lower levels of cocaine metabolites than control subjects. Researchers headed by S. Kelly Avants, from the division of substance abuse in the Department of Psychiatry at Yale University, divided 82 cocaine addicts into three groups. One third received acupuncture at four specific points around the outer ear, another third received "sham" acupuncture at sites on the ear expected to be ineffective, and the remaining third received relaxation therapy consisting of viewing a relaxing video. Treatment sessions were held five times a week for eight weeks.

The subjects' urine was tested three times a week for traces of cocaine. Patients assigned to receive true acupuncture used less cocaine than the two other groups, and a higher percentage of patients in the acupuncture group than in the other two groups were abstinent from cocaine by the last week of the study [6].
The effects of acupuncture on drug addiction have also been verified in animal experiments. It has been well established that acupuncture suppresses the morphine withdrawal syndrome and alcohol-drinking behaviors in rats [33][34][35]. Furthermore, morphine-induced conditioned place preference can be successfully suppressed by 2 or 100 Hz electroacupuncture, a substitute for classic acupuncture [36,37]. A study by Chae et al. [38] found that acupuncture at ST36, but not at other acupuncture points, significantly attenuated the expected increase in nicotine-induced locomotor sensitization upon subsequent nicotine challenge. The behavioral response to nicotine challenge in the repeated nicotine-treated group (control) was significantly more intense. Acupuncture stimulation at ST36 just before the nicotine challenge, as well as during a 3-day withdrawal period, completely blocked the effects of nicotine on locomotor activity during the 60 min testing period. In our laboratory, we also found that acupuncture applied at the BL.23 acupuncture point, a novel acupuncture point, could effectively suppress the withdrawal syndrome [39,40].
However, some large clinical trials have questioned the effectiveness of acupuncture for drug dependence. In these studies, the acupuncture treatment groups failed to show significant differences from the control group in the treatment of drug dependence [41]. One study found that acupuncture offered no significant reduction of nicotine withdrawal symptoms and no long-term improvement over placebo [42]. Bullock et al. performed a single-blind, randomized, placebo-controlled study to evaluate auricular acupuncture in the treatment of cocaine addiction. Their study had 236 residential and 202 day treatment clients. They did not find any significant treatment differences between true and sham acupuncture. They also found no differences among the three dose levels of true acupuncture [43]. The Cocaine Alternative Treatment Study (CATS) [44] was a large-scale, multi-site study. In this study, 620 patients addicted to cocaine were enrolled from six treatment sites; 412 of the patients were "primary" cocaine-dependent, and 208 were opiate-dependent and maintained on methadone. Patients were randomized to three treatment conditions: auricular acupuncture, a needle-insertion control condition, and a relaxation control condition. Treatments were offered five times weekly for 8 weeks. The patients maintained on methadone received standard care as offered in their methadone program. Concurrent drug counseling was also offered to patients in all conditions. The primary outcome measure was cocaine use during treatment and at the 3- and 6-month postrandomization follow-ups, based on urine toxicology screens and retention in treatment. Urine samples showed a significant overall reduction in cocaine use, but no differences by treatment condition. There were also no differences between the conditions in treatment retention (44%-46% for the full 8 weeks). In the last week of treatment, 24%, 31%, and 29% of patients in the auricular acupuncture, needle-insertion control, and relaxation control conditions, respectively, were abstinent from cocaine. This large study does not support the use of acupuncture as a stand-alone treatment for cocaine addiction.
Effects of acupuncture on psychological symptoms associated with drug addiction
Easing psychological symptoms associated with heroin use and heroin relapse is an important goal in the treatment of heroin dependence. Notably, as the course of withdrawal followed its natural history and acute symptoms abated, acupuncture continued to reduce the anxiety and cravings associated with protracted withdrawal. In fact, patients who had completed addiction programs often continued to enjoy the stress reduction induced by occasional "booster" acupuncture treatments. There are many ancient and contemporary papers reporting the successful use of acupuncture for the treatment of patients with depression and anxiety disorders [45][46][47][48][49][50]. Given that the prevalence of depression and anxiety is very high in cocaine and other drug addicts, and that depression and anxiety after prolonged abstinence become the main factors contributing to drug relapse and craving, it is very meaningful to pay close attention to the effects of acupuncture on depression treatment among addicts. In addition, acupuncture has been used to improve psychological status and lessen fatigue [51]. Chang et al. conducted a three-arm randomized controlled trial (RCT) on residents of a homeless veteran rehabilitation program. Sixty-seven enrolled participants were randomly assigned to acupuncture, the relaxation response, or usual care. They found that craving and anxiety levels decreased significantly following one session of acupuncture [52]. In another small randomized controlled trial, Allen et al. [53] compared symptoms of depression in an acupuncture group, a placebo group, and a waitlist control group. The acupuncture group showed greater improvements in depressive scores than the placebo group and the waitlist control group. Roschke et al. [54] studied the effects of adding acupuncture to antidepressant treatment and found that acupuncture in combination with antidepressant treatment improved the course of depression compared with pharmaceutical treatment alone. In a clinical trial using TEAS for the suppression of opiate craving in humans, a total of 117 heroin addicts who had completed the process of detoxification more than 1 month earlier were recruited [55]. They were randomly and evenly assigned to four groups. Three groups received TEAS treatment at different frequencies (2, 100, or 2/100 Hz). Self-adhesive skin electrodes were placed on four acupoints: Hegu and Laogong (on the palmar side of the Hegu point) in the left (or right) hand to complete a circuit, and Neiguan and Waiguan in the opposite arm to complete a circuit. The control group was treated in the same way except that the intensity was minimal (15 Hz, threshold stimulation for 3 min, then switched to 1 mA thereafter) to serve as a mock TEAS control. A visual analog scale (VAS) was used to assess the degree of craving. There was a very slow decline of the VAS in the mock TEAS control group over a period of 1 month. A dramatic decline in the degree of craving was observed in the groups receiving 2 and 2/100 Hz electric stimulation, but not in the group receiving 100 Hz stimulation. These results observed in humans were in line with the findings obtained in the rat: low-frequency TEAS is more effective than high-frequency TEAS in suppressing morphine-induced CPP [56].
However, some studies [6,30,57,58] did not show favorable effects of acupuncture on the psychological symptoms associated with opioid addiction (anxiety, depression, and craving). For example, Black et al. [59] conducted a randomized controlled study to test the effect of auricular acupuncture in the treatment of anxiety associated with withdrawal from psychoactive drugs. They found that auricular acupuncture was not more effective than the sham or treatment-setting control in reducing anxiety. We reviewed the clinical studies that have investigated the clinical effectiveness of acupuncture, focusing on psychological symptoms associated with opioid addiction. Clinical studies published in Chinese language journals were assessed carefully and included in our systematic review. We found that eight studies [26,29,41,44,60-62,64] included heroin/opioid craving, seven studies [27,28,32,60-63] included anxiety, and two studies included depression [60,65]. All four of the studies [44,66-68] published in English language journals did not show favorable effects of acupuncture on psychological symptoms associated with opioid addiction (anxiety, depression, and craving). Many studies published in Chinese language journals supported the use of acupuncture for controlling psychological symptoms associated with opioid addiction: craving [26,41,63,69], anxiety [29,32,60,62,63,70], and depression [60,68].
Treatment retention and abstinence are more important goals in the treatment of drug dependence. The effectiveness of treating psychological symptoms associated with drug addiction should therefore be assessed with longer-term follow-up data; to determine whether initial improvements persist for a reasonable period of time, participant observation should last for at least 3 months. However, most of the studies we reviewed did not provide follow-up data, and the duration of the acupuncture interventions in these studies was shorter than 1 month. It also remains unclear to what extent the therapeutic effects of acupuncture depend on the duration and frequency of treatment; arguably, longer treatment periods are required for acupuncture to have any chance of showing clinical effects. These variable factors should be taken into account when assessing the effects of acupuncture. Future studies should therefore have sufficiently large samples and extended treatment and follow-up periods.
Possible mechanisms for the effectiveness of acupuncture on drug addiction
It is reasonable to suggest that an opioidergic mechanism is at least partially involved in mediating the anti-withdrawal effects of acupuncture. Han and his colleagues at Peking University, China, have surveyed the analgesic effect of EA in detail. They found that the analgesia induced by 100 Hz EA resulted from accelerated release of dynorphin from the spinal cord of rats [11,71,72]. In accord with this was the finding that the analgesic effect of 100 Hz EA observed in morphine-dependent rats could be blocked only by a high dose of naloxone [73]. Dynorphin, in turn, has been shown to be the endogenous ligand of the κ-opioid receptor. Indeed, the withdrawal syndrome observed in rats dependent on morphine can be suppressed by high-frequency electroacupuncture, which accelerates the release of dynorphin in the spinal cord and brain [33,70,74]. Morphine-induced conditioned place preference (CPP), an experimental model simulating the craving of heroin addicts, can be effectively suppressed by low-frequency electroacupuncture; this effect can be blocked by a small dose of naloxone, indicating the involvement of endogenous opioid peptides [36,69]. Meanwhile, the clinical study by Clement-Jones et al. showed that EA was associated with a rise in cerebrospinal fluid met-enkephalin levels in all addicts studied [67]. Recently, Wang et al. [75] found that preprodynorphin (PPD) mRNA levels were downregulated in the spinal cord, PAG, and hypothalamus 60 hours after the last morphine injection, an effect that could be reversed by multiple sessions, but not a single session, of EA. Accompanying the decrease in PPD mRNA levels, p-CREB was upregulated in the three CNS regions, and this upregulation was abolished by 100 Hz EA treatment. These findings suggest that downregulation of p-CREB and acceleration of dynorphin synthesis in the spinal cord, PAG, and hypothalamus may underlie the cumulative effect of multiple 100 Hz EA treatments in opioid detoxification. The mesolimbic dopamine system originates in the ventral tegmental area (VTA) and projects to regions including the nucleus accumbens and prefrontal cortex, which are believed to play a pivotal role in the development of opiate addiction [20]. Opiate abuse-induced changes in brain dopamine levels are associated with feelings of well-being and pleasure, providing positive reinforcement for continued opiate abuse [39,76]. Conversely, withdrawal from chronic opiate administration reduces dopamine outflow in the nucleus accumbens [40,77]. Furthermore, in the treatment of drug craving and relapse to drug use, the core symptoms of addiction, a non-endorphin-mediated mechanism is probably involved. Lu et al. [78] used extracellular recording to examine alterations in the firing rate of dopaminergic neurons following chronic morphine exposure and applied 100 Hz electroacupuncture to reverse the reduced firing rate of these neurons. They found that the electrophysiological response of VTA DA neurons to morphine was markedly reduced in chronic morphine-treated rats compared with saline-treated controls, and that a substantial recovery of this reactivity was observed in rats that received 100 Hz EA for 10 days. Evidence also indicates that acupuncture acts on the nucleus accumbens to inhibit the elevation in dopamine [79,80]. Yoon et al. demonstrated acupuncture-mediated inhibition of ethanol-induced dopamine release in the rat nucleus accumbens through the GABA_B receptor [80].
Chae et al. showed that acupuncture treatment at ST.36 attenuated the expected increase in nicotine-induced locomotor activity by reducing postsynaptic neuronal activity in the nucleus accumbens and striatum [38].
ΔFosB and FosB are members of the Fos family of transcription factors implicated in neural plasticity in drug addiction. Li et al. [81] found that the intake of and preference for ethanol in rats were sharply reduced under a 100 Hz, but not a 2 Hz, electroacupuncture regimen. The reduction was maintained for at least 72 hours after the termination of electroacupuncture treatment. Conversely, 100 Hz electroacupuncture did not alter the intake of and preference for the natural rewarding agent sucrose. Additionally, FosB/ΔFosB levels in the prefrontal cortex, the striatal region, and the posterior region of the ventral tegmental area were increased following excessive ethanol consumption, but were reduced after 6 days of 100 Hz electroacupuncture. Interestingly, EA can inhibit CB1 receptor upregulation in the prefrontal cortex, striatum, hippocampus, amygdala, and ventral tegmental area in ethanol-withdrawn mice [82]. Furthermore, extracellular signal-regulated kinase (ERK) plays a role in the neuronal changes induced by repeated drug exposure, and EA can reverse ethanol-induced locomotor sensitization and subsequent ERK expression in mice [83]. These results suggest that acupuncture could play an important role in suppressing the potentiating effects of ethanol and other drugs.
Our recent study [41] showed that acupuncture attenuated elevated c-fos expression in the central nucleus of the amygdala (CeA) during morphine withdrawal in rats. Some studies emphasize that the motivational components of opiate withdrawal appear to be centrally mediated by limbic structures such as the nucleus accumbens and amygdala [2-4]. Therefore, elevated c-fos expression in the CeA might be associated with the motivational components of opiate withdrawal, and our observation that acupuncture suppressed this elevation indicates that acupuncture might have some therapeutic effect on the negative motivational state of opiate withdrawal. Of course, further studies must be performed to clarify this issue. In addition, the CeA and the basolateral amygdala have been extensively and differentially implicated in associative learning and memory processes, attributing affective salience to environmental stimuli paired with drug effects [5]. One theory of the neural mechanisms of drug abuse holds that the normal functions of various learning and memory circuits become subverted, leading to compulsive drug-seeking behaviors [84,85]. In this model, drugs of abuse initiate plasticity mechanisms in different learning and memory systems that come to control the behavior of the individual over other preexisting memories. Experiences with addictive drugs are encoded and stored like other experiences, except that drugs of abuse mimic only a subset of the actions of natural reinforcers in the brain. Acupuncture can affect learning and memory ability [1,7,86,87]. Further work is needed to determine whether acupuncture can re-encode experience with addictive drugs by acting on learning and memory systems, and thereby modify addictive behaviors. The amygdala acquires information that promotes approach and interaction with drug-associated stimuli; we also need to know what role the amygdala plays when acupuncture stimulation affects drug-associated learning and memory.
Discussion
In terms of lives and productivity, drug addiction remains one of the most serious threats to public health. Addiction can be defined as the loss of control over drug use, or the compulsive seeking and taking of a drug regardless of the consequences. Available treatments for addiction remain inadequately effective for most individuals, and incorporating acupuncture into existing therapies offers a promising approach. Acupuncture has been widely recognized as a valuable, readily available, and safe means of health care: it is effective, inexpensive, and requires only simple equipment. In this review, we identify and summarize the evidence for the possible clinical effectiveness of acupuncture in drug addiction, including withdrawal symptoms, drug craving, depression, and anxiety, and we discuss the theory and possible mechanisms underlying its effectiveness. Some animal and clinical studies have provided supporting evidence for promising effects of acupuncture. Unfortunately, the data do not allow us to conclude that acupuncture is an effective treatment for drug addiction. The evidence for its effectiveness has been inconclusive and difficult to interpret [63], and some clinical studies were unable to detect statistically significant differences in treatment efficacy between their acupuncture treatment and control groups [66-68]. In addition, there are few randomized controlled clinical trials of acupuncture treatment for drug addiction, and the methodology of several clinical trials of acupuncture treatment for drug dependence can be criticized for poor quality: small numbers of patients, no control subjects, lack of randomized assignment, lack of details regarding specific point locations for needle insertion, and no specification of the degree of blinding among research subjects.
In fact, several variable factors need to be taken into account when assessing the effects of acupuncture on drug addiction. (1) The study protocol may influence the assessment of the effectiveness of acupuncture. Methods and research designs have been matters of debate among acupuncture clinicians and researchers [88]. From a methodological perspective, randomized controlled trials are considered the gold standard for identifying differences in treatment efficacy [89]. However, unlike the evaluation of a new drug, randomized controlled trials of acupuncture are extremely difficult to conduct, particularly if they must be blind in design and acupuncture must be compared with a placebo [90]. The efficacy of acupuncture is difficult to study empirically because of the fundamental divergence between the two schools of thought. The gold standard in Western science is the randomized, double-blind, controlled trial, utilizing one specific protocol for each condition. Randomized controlled trials can be used to answer questions about most clinical problems, but this approach is not always practical or cost-effective, and such trials are sometimes open to error; for instance, patient preference may affect the results, as may certain cultural environments. In addition, in some Asian countries such as China, where acupuncture is widely used, most patients know a great deal about acupuncture, including the special sensation that should be felt after insertion or during manipulation of the needle. Although various "sham" or "placebo" acupuncture procedures have been designed, they are not easy to perform in these countries; moreover, acupuncturists consider these procedures unethical because they are already convinced that acupuncture is effective. In fact, most placebo-controlled clinical trials have been undertaken in countries where there is skepticism about acupuncture, as well as considerable interest.
(2) Another difficulty in evaluating acupuncture practice is that the therapeutic effect depends greatly on the proficiency of the acupuncturists, whose ability and skill in selecting and locating the acupuncture points and in manipulating the needles differ. Needling techniques of inserting, retaining, stimulating, and withdrawing are difficult to standardize. This may partly explain the disparities or inconsistencies in the results reported by different authors, even when their studies were carried out on equally sound methodological bases. (3) In a traditional Chinese medical system such as acupuncture, where each individual is treated according to specific conditions and symptoms, it may be invalid to use the same protocol for every condition; individualized protocols are critical to the success of acupuncture treatment. For example, acupuncture stimulation typically elicits a composite of sensations termed deqi, manifesting as soreness, numbness, heaviness, and distention [91]. A body of clinical and experimental evidence indicates that the presence of the deqi sensation is a prerequisite for, and often an indicator of, a clinical acupuncture effect. Traditionally, patients are asked to remain aware of this sensation during acupuncture treatment, and deqi may be an important variable in studies of the efficacy and mechanism of action of acupuncture. Our previous study showed that the deqi sensations of heroin addicts were significantly stronger than those of healthy subjects during acupuncture stimulation, indicating that heroin addicts are "good" responders to acupuncture stimulation [92]. (4) Acupuncture was developed as a branch of traditional Chinese medicine on the basis of oriental philosophy, which takes a holistic approach to regulating the balance of the human body. (Several different schools of acupuncture exist, each with its own principles.) These principles may vary with the type of acupuncture being investigated. The inconsistency in treatment protocols between studies, or the use of combined therapies, makes it impossible to draw a strong causal relationship between a therapy and its treatment effect, thus making replication of studies difficult. To this end, traditional knowledge and experience of acupuncture should be duly represented in the investigation team when research is proposed, prepared, and conducted; a good clinical study on acupuncture requires the understanding and integration of both traditional and modern medical knowledge. (5) Most of the clinical research on acupuncture in the United States has focused on auricular acupuncture, which is simply the insertion of acupuncture needles into prespecified locations in the ear, whereas studies from China used body acupuncture to treat opiate addiction. These findings are intriguing, considering that acupuncture on body and auricular points has exhibited different efficacies. According to our clinical experience and the theory of traditional Chinese medicine, body acupuncture may deserve more attention. Some acupuncture points represent discrete locations on the body where manual or electrical stimulation can exhibit therapeutic effects on cocaine and other drug addictions [26,28,60,61,93]. Table 1 summarizes the main acupoints/sites selected in the reviewed studies. In China, body acupuncture, rather than ear acupuncture, has commonly been used for the treatment of drug addiction [26,28,60].
The acupuncture points most frequently selected are Zusanli (ST.36), Sanyinjiao (SP.6), Neiguan (PC.6), Shenmen (HT.7), Laogong (PC.8), Waiguan (TE.5), and Hegu (LI.4), located on the four limbs. In our recent work, we showed for the first time that acupuncture applied at the BL.23 acupuncture point, located on the back and commonly used for analgesia and sedation in our clinic, could effectively suppress withdrawal syndrome [40,41]. Clinically, BL.23 could provide a new choice of effective acupuncture point for the successful treatment of drug addiction. Further studies on the synergistic combination of BL.23 with other effective acupuncture points, such as Zusanli (ST.36) and Sanyinjiao (SP.6), could help acupuncturists make a balanced and appropriate choice of point combinations in the treatment of addicts. In summary, acupuncture offers some advantages over existing pharmacological interventions: it is safer, has fewer side effects, and is less expensive. Since deteriorating health often accompanies long-term use of addictive drugs, pharmaceutical interventions with harsh side effects can be detrimental to the general health of long-term drug users; in contrast, acupuncture can enhance immune function and increase metabolism in organs necessary to fight infections and various acute and chronic illnesses. Although the definitive role of acupuncture in the treatment of drug addiction has yet to be established, the basic research and clinical data reviewed here justify further clinical trials to systematically examine the efficacy of acupuncture in treating conditions related to drug addiction such as withdrawal symptoms, drug craving, anxiety, and depression. The next important step in acupuncture research is to gain a better understanding of the neurochemical mechanisms of acupuncture so that its therapeutic effects can be further improved. Scientifically rigorous clinical research is also needed to examine the effectiveness of acupuncture treatment in drug addicts. As we noted in this review, it has proved difficult to apply the basic principles and methodology of modern science, which ensure the reliability of research findings, to clinical studies on acupuncture. However, researchers should be encouraged to ensure the highest possible standards of study design and reporting in future research in order to improve the evidence base in this field.
"Medicine",
"Psychology",
"Biology"
] |
Octave-wide supercontinuum generation of light-carrying orbital angular momentum
Nonlinear frequency generation of light carrying orbital angular momentum (OAM), which facilitates realization of on-demand, frequency-diverse optical vortices, would have utility in fields such as super-resolution microscopy, space-division multiplexing and quantum hyper-entanglement. In bulk media, OAM beams primarily differ in spatial phase, so the nonlinear overlap integral for self-phase-matched χ(3) processes remains the same across the 4-fold degenerate subspace of beams (formed by different combinations of spin and orbital angular momentum) carrying the same OAM magnitude. This indistinguishable nature of nonlinear coupling implies that supercontinuum generation, which substantially relies on self/cross-phase modulation, and Raman soliton shifting of ultrashort pulses typically result in multimode outputs that do not conserve OAM. Here, using specially designed optical fibers that support OAM modes whose group velocity can be tailored, we demonstrate Raman solitons in OAM modes as well as the first supercontinuum spanning more than an octave (630 nm to 1430 nm), with the entire spectrum in the same polarization as well as OAM state. This is fundamentally possible because spin-orbit interactions in suitably designed fibers lead to large effective-index and group-velocity splitting of modes, which helps tailor nonlinear mode selectivity such that all nonlinearly generated frequencies reside in modes with high spatial mode purity. © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
This special case notwithstanding, a generalized methodology to nonlinearly frequency-convert, or obtain spectrally diverse, OAM beams with high spatial coherence and mode purity has remained elusive, though it would be of great utility to a variety of applications that require on-demand OAM beams of different colors, for which the only broadband solutions require independent generation of white light that is then transformed with a wideband mode converter [18,19].
The electric field E(r, φ, z) of an OAM mode in optical fibers is given by

$$\mathbf{E}(r, \varphi, z) = F(r)\, e^{\pm i L \varphi}\, \hat{\sigma}^{\pm}\, e^{i \beta_{\mathrm{SOA/SOAA}}\, z}, \qquad (1)$$

where F(r) represents the radial distribution of the electric field (which is substantially similar for modes of the same |L|), L is the topological charge associated with an OAM of Lℏ per photon, φ is the azimuthal angle, σ± denotes the (circular) polarization states associated with a spin angular momentum (SAM) of ±ℏ per photon, and β is the propagation constant, related to the effective index n_eff of the mode by β = 2π n_eff / λ (λ is the free-space wavelength). Note the subscripts of β, representing two degenerate spin-orbit aligned (SOA) states, in which the signs of L and σ are the same, and two degenerate spin-orbit anti-aligned (SOAA) states, in which these two quantities are opposite in sign. In bulk media, for a given magnitude |L| of OAM, β_SOA = β_SOAA, and modes with the four combinations of SAM and OAM are four-fold degenerate (in β or n_eff, and hence also in group velocity, group velocity dispersion (GVD) and higher-order dispersion terms). Third-order nonlinear coupling between different modes is governed by the field overlap integral

$$\eta_{jk} \propto \int \left| E_j \right|^2 \left| E_k \right|^2 \, r\, dr\, d\varphi, \qquad (2)$$

where E_j and E_k denote the normalized fields associated with two different modes. From Eq. (2), it is immediately apparent that nonlinear interactions have similar strengths when coupling modes of the same or opposite sign of L, since their radial field profiles are nearly identical. This implies that nonlinear scattering (due to Raman or self/cross-phase modulation) from a "pump" mode in L would occur, with equal probability, to a mode with the same L as well as to a mode with the opposite topological charge −L. In addition, as the pulses propagate, their envelopes continue to overlap temporally because of the aforementioned four-fold degeneracy, enabling the nonlinear interaction to build up. This explains why nonlinearly generated supercontinua of OAM beams in bulk media have, thus far, resulted in multimode outputs.
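As a quick numerical illustration of this degeneracy, the sketch below evaluates the intensity overlap of Eq. (2) for scattering from a pump of charge +8 into modes of charge +8 and −8. The ring-shaped radial profile and all numerical parameters are illustrative assumptions, not the eigenmodes of the fibers used here.

import numpy as np

# Toy check of the degeneracy implied by Eq. (2): the intensity overlap of a
# pump with a mode of charge +L equals that with charge -L when both share
# the same radial profile F(r).
r = np.linspace(0.01, 10.0, 2000)
phi = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
R, PHI = np.meshgrid(r, phi, indexing="ij")
dA = R * (r[1] - r[0]) * (phi[1] - phi[0])  # polar area element r dr dphi

def mode(L, r0=3.0, w=0.8):
    E = np.exp(-((R - r0) / w) ** 2) * np.exp(1j * L * PHI)
    return E / np.sqrt(np.sum(np.abs(E) ** 2 * dA))  # unit-power normalization

def overlap(Ej, Ek):
    # intensity overlap of Eq. (2), governing self/cross-phase modulation
    return np.sum(np.abs(Ej) ** 2 * np.abs(Ek) ** 2 * dA)

pump = mode(+8)
print(overlap(pump, mode(+8)), overlap(pump, mode(-8)))  # identical values

Because Eq. (2) involves only mode intensities, the azimuthal phase exp(±iLφ) drops out and the two overlaps are identical; this is precisely the bulk-media degeneracy that the fiber's group-velocity splitting is designed to defeat.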
Nonlinear evolution of OAM beams in fibers
The situation is dramatically different in an optical fiber. Owing to the confinement potential of the waveguide, SOA and SOAA modes have different propagation constants, a result of spin-orbit interactions in the presence of dielectric anisotropies [20]. We have previously shown that this spin-orbit effect can be exacerbated by fiber design, and that lifting this degeneracy (i.e., making |β_SOA − β_SOAA| large) avoids linear mode mixing during propagation through fiber lengths as long as 13 km [21]. In addition, angular momentum conservation rules dictate that even the degenerate orthogonally polarized modes do not mix in the linear regime [22]. Thus, in a suitably designed optical fiber, the four-fold degeneracy of OAM modes depicted in Eq. (1) reduces to two 2-fold-degenerate subspaces, and coupling even within the doubly degenerate modes is inhibited. The nonlinear overlap integral (Eq. (2)), however, remains the same, and linear mode stability does not guarantee nonlinear mode selectivity. Here, we show that the degeneracy-lifting criteria that enabled stable linear behavior of OAM modes in fibers also facilitate controllable nonlinear interactions for ultrafast pulses of OAM beams in fibers. Figure 1(a) illustrates this effect for a spin-orbit aligned pump (+L, σ+). [The remainder of this passage and the Fig. 1 caption are garbled in the extracted text; the recoverable fragments indicate that group-velocity walk-off suppresses temporal overlap, and hence nonlinear conversion, between the pump and other modes, so that nonlinearly generated frequencies remain in the launched state.]
Experiments
[The opening of this passage is garbled in extraction; it describes projecting the fiber output (Figs. 2 and 4) through a wave-plate and polarizer into the launched and the orthogonal polarization states, respectively.] Spatial integration of the camera intensity pattern across the two images reveals that the power ratio between the launched and the orthogonal polarization bins remains above 6 dB (i.e., more than 75% of the power remains in the launched polarization state) across the spectrum [Fig. 5(c)]. This confirms the polarization- (hence SAM-) preserving behavior of the nonlinear process. Within the launched polarization bin, the relative modal content in SOA and SOAA is found by utilizing the fact that the L = 14 OAM state (converted from L = 8 SOA) diffracts more than the L = 2 state (converted from L = 8 SOAA) in free space, as shown in Fig. 5(b); hence, the relative power in the desired mode with respect to other modes is found by spatial integration of the four spatially separated regions (Region 1 corresponding to L = 14, Region 3 corresponding to L = 2, and Regions 2 and 4 comprising power in all other parasitic modes). Using this method, we find that the mode purity is better than 13 dB (>95%) across the spectral bandwidth [Fig. 5(d)]. This confirms that the dominant OAM content at all the nonlinearly generated frequencies is the same as that of the launched pump state. We additionally confirm that both OAM and polarization are preserved across the generated supercontinuum when the (L = +7, σ+) state is used as the pump, indicating that the phenomenon does not depend on the OAM charge of the pump. The measurement technique used to discern mode purity rests on two assumptions, both of which we show to be valid: (a) the intensity profiles of modes of the same (first) radial order, the primary mode orders used for the pump and probed in this experiment, need to be similar across different orders of L. This is substantially true for the air-core fibers used in our experiments: in contrast to free space or bulk media, the intensity profiles of modes with different L depend only weakly on L, because the high index contrast of the confining waveguide plays the dominant role. (b) Power in other radial mode orders is assumed to be negligible. The validity of this assumption arises from the fact that intensity line cuts of the near-field profiles of all OAM modes exiting our fiber [see Fig. 5(e)] match very well with simulated intensity profiles for our fiber; we find that an intensity overlap integral between the two yields 98% coincidence. Moreover, theoretically constructed intensity profiles assuming incoherent addition of other radial orders show that obtaining such a high overlap would have required the other radial mode orders to carry at most 0.5% of the power, which is indeed negligible.
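The purity bookkeeping above reduces to ratios of spatially integrated region powers. A minimal sketch follows; the numbers are placeholders standing in for the measured camera integrals, not data from the experiment.

import numpy as np

def db_ratio(p_desired, p_other):
    # power ratio in decibels between the desired and parasitic bins
    return 10.0 * np.log10(p_desired / p_other)

# Placeholder region powers standing in for spatially integrated camera counts:
# Region 1 (L = 14, desired) vs Regions 2 + 4 (parasitic), per Fig. 5(b).
p_region1, p_regions24 = 0.95, 0.045
print(f"mode purity: {db_ratio(p_region1, p_regions24):.1f} dB")  # ~13 dB (>95%)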
Discussion, summary and conclusions
These experiments reveal an interesting and highly useful attribute of nonlinear optics with OAM fiber modes: all nonlinear products arising from ultrafast-pulse nonlinear effects conserve both polarization and OAM. Recall that this is fundamentally due to the effect of spin-orbit interactions in the air-core fiber, which results in group-velocity walk-off between spin-orbit aligned and anti-aligned states. Hence, self-phase-matched nonlinear effects, such as Raman scattering or supercontinuum generation, with OAM beams can achieve high spatial coherence only in media that offer optical confinement with cylindrical symmetry, such as optical fibers or whispering-gallery modes in ring resonators, but not in bulk media, where non-exclusive nonlinear coupling occurs. [The remainder of this section is garbled in extraction, interleaved with a figure caption describing mode conversion with a q-plate, a bandpass filter, and camera images at different wavelengths in the launched and orthogonal polarization bins, e.g., the converted L = 8 mode at 650 nm. The recoverable conclusions are that the key enabler is a suitably designed OAM-supporting fiber whose spin-orbit interactions and tailored dispersive behavior enable spatially selective, coherent ultrashort-pulse Raman shifting and greater-than-octave supercontinuum generation in a single OAM state, with a mode effective area an order of magnitude larger than that of fibers commonly used for supercontinuum generation, and that such spectrally diverse OAM supercontinua are attractive for applications such as super-resolution nanoscopy and wavelength-agnostic nonlinear optics.]
"Physics"
] |
Model Discrimination in Gravitational Wave spectra from Dark Phase Transitions
In anticipation of upcoming gravitational wave experiments, we provide a comprehensive overview of the spectra predicted by phase transitions triggered by states from a large variety of dark sector models. Such spectra are functions of the quantum numbers and (self-) couplings of the scalar that triggers the dark phase transition. We classify dark sectors that give rise to a first order phase transition and perform a numerical scan over the thermal parameter space. We then characterize scenarios in which a measurement of a new source of gravitational waves could allow us to discriminate between models with differing particle content.
Introduction
The detection of gravitational waves (GW) [1] established a new and independent probe of New Physics. It has already been suggested that data from resolvable events such as binary mergers could help constrain interacting dark matter [2,3] or exotic compact objects [4-6]. The observation also implies that we may anticipate the detection of a stochastic GW background at both current and future detectors. This may be a rare probe of the Cosmic Dark Ages and the first observational window onto cosmic phase transitions (PTs). Such cosmic phase transitions leave behind a characteristic broken power-law gravitational wave spectrum, whose relic shape depends on the strength of the transition, the speed of the transition, the bubble wall velocity and the temperature of the transition.
In terms of the LISA inverse problem, much attention has focused either on arguing for a new scale of physics [10,25] or on relic backgrounds from certain well-motivated extensions of the Standard Model, assuming the reheating temperature is sufficiently high [8,9,11-24,31,32]. Little work to date has addressed the question of model discrimination [18]. In this work we endeavour to see how much model discrimination is in principle possible from the frequency spectrum of a future stochastic gravitational wave signal. In particular, we consider renormalizable and non-renormalizable effective field theories of interacting hidden sectors, in which a gauge symmetry is spontaneously broken. We also consider the effect of fermions that couple to the scalar.
Simulations of gravitational wave backgrounds from cosmic phase transitions indicate that there are three spectral contributions: the collision spectrum, the direct effect of bubbles of true vacuum colliding; the sound wave spectrum, the result of the fluid dynamics after such collisions; and the turbulence spectrum, which is usually subdominant. It has recently been realized that the sound wave contribution dominates in most relevant scenarios. In particular, this is true in all cases that do not display "runaway" behaviour, and such runaway is blocked by any gauge bosons acquiring a mass in the transition [33].
All spectra are controlled by four thermal parameters: the velocity of the bubble wall, v_w; the ratio of the free-energy-density difference between the true and false vacuum to the total energy density, ξ; the speed of the phase transition, β/H; and the nucleation temperature, T_N. In the special case in which two peaks are visible, the four thermal parameters can in principle be reconstructed.
In this paper we focus on the thermal parameters T_N, β/H and ξ. The first determines the scale of the phase transition, and the latter two are most powerful at model discrimination.
To study the thermal parameters in a general context, we observe that first order phase transitions are realized in (effective) double-well potentials, arising from the interplay of terms with alternating signs. As such, we will study multiple models within two limiting scenarios, in which all coefficients are positive at the time of the transition (see the sketch of the two potential forms after this list). Most phase transitions can be mapped onto these effective scenarios; in particular, the EWPT in a Higgs-plus-singlet model is an example of (1.2), upon integrating out the heavy singlet (up to dimension-6 operators). The thermal parameters depend strongly on the nature of the thermal corrections, which are functions of the bosonic and fermionic degrees of freedom coupling to the scalar; the bosonic degrees of freedom are given by the gauge structure of the theory. Here we will consider models within the following scenarios, corresponding to the limiting cases (1.1) and (1.2): 1. A dark Higgs, with SU(N) breaking into SU(N−1). In this case the barrier between the true and false vacuum during the transition is caused by dark gauge bosons, which provide an effective cubic term.
2. A dark Higgs, with SU(N) breaking into SU(N−1), in the presence of significant non-renormalizable operators. In this case the barrier between the true and false vacuum is caused by the quartic dark Higgs coupling being negative, with the vacuum stabilized by the positive Wilson coefficient of the sextic interaction.
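The displayed forms of the two limiting potentials (1.1) and (1.2) were lost in extraction; based on the descriptions above and in section 2 (a cubic term providing the barrier in the first case, a negative quartic stabilized by a sextic term in the second), a plausible schematic reconstruction is

$$V_{(1.1)}(h_D) = \frac{\mu^2}{2}\,h_D^2 - \delta\,h_D^3 + \frac{\lambda}{4}\,h_D^4, \qquad V_{(1.2)}(h_D) = \frac{\mu^2}{2}\,h_D^2 - \frac{\lambda}{4}\,h_D^4 + \frac{c_6}{\Lambda^2}\,h_D^6,$$

with all coefficients (after thermal corrections, where relevant) positive at the time of the transition. The symbols μ², δ, λ and c₆ are our notation for this sketch, not necessarily that of the original equations.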
Such scenarios may arise for example in the context of Composite Higgs models of cosmology [34,35], where the Dark Higgs would be represented by a pseudo-Goldstone boson state (a generalization of a QCD pion) whose interesting potential is due to explicit breaking of the global symmetry via SU (N ) gauge and Yukawa couplings.
In each case we consider gauge groups of different ranks, as well as models with and without a thermal mass produced by dark fermions. In all cases ξ is independent of the scale of the potential and β/H has a weak logarithmic dependence on it, while both thermal parameters are controlled by the ratio of the vev to the scale of the potential, x ≡ v/Λ. The renormalizable potentials are therefore 2(3)-parameter problems for each model, and the non-renormalizable potentials are 3(4)-parameter problems, without (with) the addition of a dark fermion.
We find that non-renormalizable operators dramatically improve the visibility of gravitational wave spectra, whereas adding dark fermions N_f or increasing the rank N of the group provides a more modest boost, which becomes reasonably large in the limit of large N_f or N. The boosts to visibility in each case are non-degenerate. In the renormalizable case (1.1), we find that both the effect of a larger gauge group (SU(N) → SU(N+1)) and the effect of increasing the number of fermions (with significant thermal mass) are essentially to shift the thermal parameter space and increase the detection prospects. Of course, there is a degeneracy of predictions for specific models. It has been suggested that anisotropy measurements could break this degeneracy, for example by cross-correlation with CMB data [36].
The structure of this paper is as follows. In section 2 we summarize the models we are attempting to discriminate, and in section 3 we define the thermal parameters. In section 4 we review the spectra of gravitational waves from a cosmic phase transition, and in section 5 we present our results. In section 6 we relate our results to studies of dark matter, before concluding with a discussion and an outlook on future work in the final section.
Scenarios for a dark first order Phase Transition
A first order phase transition may occur for a potential with three competing terms, with alternating signs, such that it has a double well separated by a barrier. Moreover, the vacuum energy corresponding to these minima will be temperature-dependent, such that the ground state changes as the Universe cools. The first order phase transition may then happen if the potential barrier is present at the critical temperature T c , when the minima are degenerate.
We will consider two limiting cases of such potentials. In the renormalizable case, the potential barrier is generated effectively at finite temperature, but does not exist at zero temperature. As we will see, the zero-temperature masses and self-couplings, the quantum numbers of the scalar, and the couplings to fermions crucially determine the thermal parameters of the phase transition. For all the models we are considering, the part of the Lagrangian relevant to phase transitions can be written schematically as

$$\mathcal{L} \supset |D_\mu H_D|^2 - V(H_D) - y\, h_D\, \bar{\chi}\chi,$$

where the covariant derivative encodes the couplings to the dark gauge bosons and the last term the Yukawa couplings to dark fermions χ (the displayed equation was lost in extraction; this schematic form follows from the field content described below). We will consider potentials V of the form (1.1) and (1.2).
SU(N)/SU(N−1) models with renormalizable operators
The first case has a double well generated from the quadratic, cubic, and quartic interactions at finite temperature. We parametrize the potential such that the overall scale (Λ) and the zero-temperature vacuum expectation value (v) are inputs; this implies corresponding redefinitions of the zero-temperature parameters in the potential (1.1) [the displayed redefinitions were lost in extraction]. As we will see below, some thermal parameters are functions only of the ratio of the zero-temperature vev to the scale of the potential (v/Λ). Using this parametrization, the finite-temperature potential takes the schematic one-loop form

$$V(h_D, T) \simeq \frac{1}{2}\left(-\mu^2 + c\,T^2\right) h_D^2 - E\,T\,h_D^3 + \frac{\lambda}{4}\,h_D^4, \qquad (2.5)$$

[reconstructed; the displayed equation was garbled in extraction] where the thermal coefficients c and E collect the contributions of the N_G gauge bosons coupling to the scalar sector with coupling constant g, the N_GB Goldstone degrees of freedom, and the N_f fermions with Yukawa coupling y. For simplicity, we consider degenerate Yukawa couplings, as the gravitational waves produced by (y, N_f) and ({y_i}, N_f) are related by $y^2 N_f = \sum_i y_i^2$.¹ In writing (2.5) we have applied the high-temperature expansion of the thermal functions,

$$J_B(x) \approx -\frac{\pi^4}{45} + \frac{\pi^2}{12}\,x - \frac{\pi}{6}\,x^{3/2} + \dots, \qquad J_F(x) \approx \frac{7\pi^4}{360} - \frac{\pi^2}{24}\,x + \dots, \qquad x = m^2/T^2. \qquad (2.6)$$

All field-dependent masses that enter the effective potential are provided in the appendix.²
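To make the interplay of the terms in (2.5) concrete, the following minimal Python sketch evaluates a potential of this form and estimates the critical temperature at which the two minima are degenerate. The thermal coefficients c and E below are textbook one-loop high-temperature values assumed for illustration; they are not the paper's exact coefficients, and all numerical inputs are arbitrary.

import numpy as np

def V(h, T, g=1.0, lam=0.1, v=246.0, N_G=3, N_f=0, y=1.0):
    """Schematic finite-T potential of the form (2.5); coefficients are assumed."""
    mu2 = lam * v**2                              # zero-T mass term fixed by the vev
    c = (3 * N_G * g**2 + 4 * N_f * y**2) / 24.0  # thermal mass coefficient (assumed)
    E = N_G * g**3 / (16.0 * np.pi)               # thermal cubic from gauge bosons (assumed)
    return 0.5 * (-mu2 + c * T**2) * h**2 - E * T * h**3 + 0.25 * lam * h**4

# Estimate T_c as the highest temperature at which the broken-phase minimum
# is still at least as deep as the origin.
h = np.linspace(1.0, 500.0, 2000)
Ts = np.linspace(10.0, 500.0, 491)
broken = np.array([V(h, T).min() < 0.0 for T in Ts])
print(f"T_c ~ {Ts[np.argmin(broken)]:.0f} GeV")  # first T at which the origin wins

Raising g deepens the thermal cubic and strengthens the transition, which is the mechanism scenario 1 relies on.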
SU(N)/SU(N−1) models with non-renormalizable operators
The second limiting case has the double well resulting from the interplay between the quadratic, quartic, and sextic terms. We again choose a parametrization of the potential such that the scale of the potential Λ and the zero-temperature vacuum expectation value v are inputs. This leaves one free parameter, α, which parametrizes the difference in vacuum energy of the two minima at zero temperature. In the high-temperature expansion (2.6), the potential becomes [equation lost in extraction]; at zero temperature it has minima at h_D = 0 and h_D = v, its overall scale is Λ, and the (dimensionless) non-renormalizable coupling is α. That is, we have made the corresponding redefinitions of the parameters in Eq. (1.2) [also lost in extraction]. At zero temperature, a value of α = 1/2 corresponds to degenerate minima, and the upper limit α = 2/3 corresponds to the value for which there is no zero-temperature barrier between the vacua (as the zero-temperature mass term changes sign); see Fig. 1. Of course, finite-temperature corrections may allow for a higher value of α, as positive corrections to the mass term may reintroduce the barrier. Note that we have once again assumed degenerate Yukawas, with little loss of generality, as explained in the previous section.

¹ Up to a small change in the number of relativistic degrees of freedom g*. Since the gravitational wave spectra have a very weak dependence on g*, this simplification comes at little cost to generality.

² Note that the use of perturbation theory introduces some theoretical uncertainty, as perturbativity at finite temperature breaks down above the critical temperature [37,38]; this breakdown can be delayed somewhat by the inclusion of "daisy terms" [39], although in reality a robust treatment requires a lattice simulation. In spite of this theoretical uncertainty, we expect our results to be indicative of the overall thermal parameter space, including its overall scope and dependence on the model. Finally, note that the most important points in our scan are those where a lot of supercooling occurs and T_C is significantly higher than T_N, meaning that these are the points where perturbation theory is most valid.
For operators up to dimension-6, models for the electroweak phase transition (EWPT) can be captured effectively by a special case of the above [the defining equations were lost in extraction]. Here Λ_6 is the scale associated with the dimension-6 operators, which arise from integrating out BSM physics such as a singlet scalar. We will also consider the EWPT with non-renormalizable operators for the sake of comparison later. Finally, note that if one rewrites Eq. (2.7) in terms of implicitly defined temperature-dependent parameters, one can follow the process in [31,40] and fit the action to a fitting function for the range α(T) ∈ [0.51, 0.65].

In this section we give examples of hidden-sector models which can be mapped onto our general framework given above. Of course, we are not completely general: we do not consider, for example, the case in which multiple scalars acquire a vev at the same time (such as a multi-dark-Higgs-doublet model) or more complicated gauge group structures such as SU(N)×SU(N′) in which both gauge couplings are large. However, scenarios which can be mapped onto our framework are ubiquitous, including Pati-Salam symmetry breaking [41] (a phase transition more likely to occur at a scale visible to aLIGO than to LISA), colour-breaking intermediate phase transitions [42,43], atomic dark matter [44], asymmetric dark matter [45] and compositeness [46], to give a non-exhaustive list. We give more details of three of these examples and how they map onto the various models we consider below.
Generalized baryon number
As was suggested in [45], the dark-sector relic abundance and the baryon asymmetry in the SM can have a common origin in models with generative symmetry breaking. In such models, there is a generative gauge group G, for example SU(2)_G, which is broken spontaneously through a first-order phase transition in the early universe. The asymmetry generated in this phase transition is communicated to the dark and visible sectors through a mixed Yukawa term. The generative scalar has a tree-level zero-temperature potential [equation lost in extraction] and quartic mixing terms with the SM Higgs, the B−L-breaking scalar σ, and the dark scalar χ. For small mixing, such as is the case in various supersymmetric models, the mass contributions are small. For non-supersymmetric models, the mixing can be significant, and contributes to the thermalization and decay properties of the various sectors. The mass hierarchies are small, such that the scalar ϕ can have a mass at the electroweak scale; in this case there are significant cosmological and astrophysical constraints, as discussed in [45]. The first-order PT can be induced when one includes an effective dimension-6 operator, which can arise at the one-loop level from the mixed quartic interactions [21], from which it is seen that this is an example of a model within the scenario given by (2.7).
Atomic dark matter
A further possibility is that the dark sector contains a confining group, as well as fermions charged under an unbroken U(1). Dark atoms can then be formed [44]. The strongest constraint on atomic dark matter comes from the self-scattering bound [47,48] [equation lost in extraction], where m_χ is the mass of the heavier particle, which forms the nucleus of the dark atoms. This mass can be heavier than a TeV [49], in which case the constraint on the gauge coupling is very modest (α_D ∼ 0.1, implying g ∼ O(1)). A simple example is an SU(4) gauge group, which breaks into SU(3)×U(1), allowing for the formation of nuclei during dark BBN [50].
Composite Dark Matter models
A final example is a dark matter candidate that is the lightest bound state of a confining gauge group SU(N), as discussed in [32]. The spontaneous symmetry breaking of an approximate global symmetry, which is only partially gauged, gives rise to pseudo-Goldstone bosons. These light states are sensitive to an effective scalar potential at the 1-loop level, which in turn initiates a further breaking. A particularly interesting possibility has the SM Higgs and the dark matter candidate both as pseudo-Goldstone bosons of the same symmetry breaking [46]. Various symmetry-breaking cosets have been studied in the literature, with scalar potentials of the form (2.5) or (2.7); the couplings in such scenarios correspond to 1-loop integrals in the UV theory. The GW spectra for benchmark thermal parameters of the breaking of SU(3) and SU(4) dark gauge symmetries were previously considered in [32], where it was argued that scalar DM bound states and dark quarks (carrying EW quantum numbers) are most relevant for detection at LISA.
Thermal parameters
The dynamics of the phase transition are controlled by a bounce solution φ_c(r, T), a spherically symmetric classical solution to the Euclidean equations of motion [40,51,52],

$$\frac{d^2\phi}{dr^2} + \frac{2}{r}\,\frac{d\phi}{dr} = \frac{\partial V(\phi, T)}{\partial \phi}. \qquad (4.1)$$

We compute the bounce solutions with the potentials of the previous section; the thermal parameters of the phase transition can then be computed from the bounce. First, the nucleation temperature T_N of bubbles of the new vacuum is conventionally defined as the temperature for which a volume fraction e⁻¹ is in the true vacuum state. This corresponds approximately to the condition p(t_N) ∼ H⁴(t_N), where p(t) is the nucleation probability per unit time per unit volume and t_N is the nucleation time. The nucleation probability can be calculated from the bounce solution as

$$p(T) \simeq T^4 \left(\frac{S_E}{2\pi T}\right)^{3/2} e^{-S_E/T},$$

where S_E is the Euclidean action evaluated on the bounce. We assume a radiation-dominated universe to relate the nucleation temperature and time. The speed of the phase transition is controlled by the parameter β, which can also be related to the bounce action,

$$\frac{\beta}{H} = T_N \left.\frac{d}{dT}\!\left(\frac{S_E}{T}\right)\right|_{T_N}.$$

Last, the latent heat parameter is given by

$$\xi = \frac{1}{\rho_N}\,\Delta\!\left(V - T\,\frac{\partial V}{\partial T}\right),$$

where Δ indicates that the quantity should be evaluated on both sides of the bubble wall, and where $\rho_N = \pi^2 g_* T_N^4 / 30$ is the equilibrium energy density at T_N. [The displayed equations of this paragraph were garbled in extraction and are reconstructed here in their standard forms.]
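For readers who want to experiment, the following is a minimal overshoot/undershoot shooting solver for the O(3)-symmetric bounce equation (4.1). The toy quartic potential and all numerical settings are illustrative assumptions rather than the paper's models (the paper uses a finite-difference algorithm and an analytic fit; a production study might instead use a dedicated package such as CosmoTransitions).

import numpy as np
from scipy.integrate import solve_ivp

def dV(phi):
    # toy quartic: false vacuum at phi = 0, barrier at 0.4, true vacuum at 1
    return phi * (phi - 0.4) * (phi - 1.0)

def classify(phi0, r_max=200.0):
    """Release the field at rest at phi(0) = phi0 and integrate outward in r."""
    def rhs(r, y):
        phi, dphi = y
        return [dphi, dV(phi) - 2.0 * dphi / r]
    hit_zero = lambda r, y: y[0]       # overshoot: field crosses phi = 0
    hit_zero.terminal, hit_zero.direction = True, -1
    turn_back = lambda r, y: y[1]      # undershoot: field turns around (dphi > 0)
    turn_back.terminal, turn_back.direction = True, 1
    sol = solve_ivp(rhs, (1e-6, r_max), [phi0, 0.0],
                    events=[hit_zero, turn_back], rtol=1e-10, atol=1e-12)
    return "overshoot" if sol.t_events[0].size else "undershoot"

# Bisect the release point between the barrier top and the true vacuum.
lo, hi = 0.45, 1.0 - 1e-9
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if classify(mid) == "overshoot":
        hi = mid   # too much energy: move release toward the barrier
    else:
        lo = mid   # too little energy: move release toward the true vacuum
print(f"bounce release value phi(0) ~ {0.5 * (lo + hi):.6f}")

The same bisection, wrapped around the model potentials of section 2, yields φ_c(r, T), from which S_E and hence T_N, β/H and ξ follow.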
Gravitational wave spectrum and the LISA inverse problem
The gravitational wave profiles can be related to the thermal parameters. We adopt the parametrization introduced in [53], but our analysis can be adapted when future models become available. In principle, there are three contributions to the power spectrum,

$$\Omega_{GW} = \Omega_{col} + \Omega_{sw} + \Omega_{turb}, \qquad (4.6)$$

where the first term corresponds to the spectrum from bubble collisions, the second to the spectrum of sound waves in the fluid after collisions, and the third to a turbulence term. As realized last year [33], in any model in which gauge bosons gain a mass in the transition, the bubble wall velocity approaches a finite limit. Therefore, the sound wave contribution [28] is typically dominant in all of the cases we consider in this work. Its power spectrum can be expressed as [53]

$$h^2\Omega_{sw}(f) = 8.5\times10^{-6} \left(\frac{100}{g_*}\right)^{1/3} \Gamma^2\, \bar{U}_f^4 \left(\frac{H}{\beta}\right) v_w\, S_{sw}(f), \qquad (4.7)$$

where Γ ∼ 4/3 is the adiabatic index and $\bar{U}_f^2 \sim (3/4)\,\kappa_f\, \xi$ is the rms fluid velocity. For v_w → 1, the efficiency parameter is well approximated by [54]

$$\kappa_f \simeq \frac{\xi}{0.73 + 0.083\sqrt{\xi} + \xi}, \qquad (4.8)$$

while for v_w ≈ 0.5 we use [54]

$$\kappa_f \simeq \frac{\xi^{2/5}}{0.017 + (0.997 + \xi)^{2/5}}, \qquad (4.9)$$

and the spectral shape is given by

$$S_{sw}(f) = \left(\frac{f}{f_{sw}}\right)^3 \left(\frac{7}{4 + 3\,(f/f_{sw})^2}\right)^{7/2}, \qquad (4.10)$$

with the peak frequency, redshifted to today,

$$f_{sw} = 1.9\times10^{-5}\ \mathrm{Hz}\ \frac{1}{v_w}\left(\frac{\beta}{H}\right)\left(\frac{T_N}{100\ \mathrm{GeV}}\right)\left(\frac{g_*}{100}\right)^{1/6}. \qquad (4.11)$$

[Equations (4.6)-(4.11) were garbled in extraction and have been reconstructed in the standard forms of [53,54].]

[Figure 2: (Schematically) the LISA inverse problem. The subscript x refers to the dominant peak of the GW spectrum (collision, sound wave, or turbulence); as described in the text, for most models the sound wave contribution is dominant. The thermal parameters of the PT can be calculated by solving the bounce EOM (4.1) and then related to the GW spectra using (4.7) and (4.11). This paper finds general relations between the GW spectra and the Lagrangian.]

From this we notice that the amplitude of the signal is a function of the parameters β/H, the wall velocity v_w, and the latent heat ξ, whereas the position of the peak depends on β/H and T_N. We will use this insight in the next section to compare the predictions of the different models (2.5) and (2.7). This effort is summarized by the LISA inverse problem in Fig. 2. We should mention some previous work towards solving the LISA inverse problem. The link between gravitational wave detection of collision and turbulence peaks and the thermal parameters has previously been summarized in ref. [25], which highlighted visible regions in the thermal parameter space. On the link between the Lagrangian and the thermal parameters, some thorough work has been done in the case of the EWPT with extended scalar sectors [23,55-57]. The aim of this paper is to complement these previous works by studying the general case of a (single) scalar, with couplings to different numbers of fermions and gauge bosons, as well as to other scalars separated in mass.
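A small numerical sketch of Eqs. (4.7)-(4.11) follows, evaluating the sound-wave peak amplitude and frequency for a given set of thermal parameters; the example inputs at the bottom are arbitrary illustrative values, not results of our scan.

import numpy as np

def kappa_f(xi, vw):
    """Efficiency factor of [54] in the two wall-velocity regimes used here."""
    if vw > 0.9:                                           # v_w -> 1 limit, Eq. (4.8)
        return xi / (0.73 + 0.083 * np.sqrt(xi) + xi)
    return xi ** (2 / 5) / (0.017 + (0.997 + xi) ** (2 / 5))  # v_w ~ 0.5, Eq. (4.9)

def omega_sw_peak(xi, beta_over_H, vw, gstar=100.0):
    """Peak of h^2 Omega_sw from Eq. (4.7), with adiabatic index Gamma ~ 4/3."""
    Gamma = 4.0 / 3.0
    Uf2 = 0.75 * kappa_f(xi, vw) * xi                      # rms fluid velocity squared
    return 8.5e-6 * (100.0 / gstar) ** (1 / 3) * Gamma**2 * Uf2**2 * vw / beta_over_H

def f_sw_peak(TN_GeV, beta_over_H, vw, gstar=100.0):
    """Redshifted peak frequency in Hz, Eq. (4.11)."""
    return 1.9e-5 * (beta_over_H / vw) * (TN_GeV / 100.0) * (gstar / 100.0) ** (1 / 6)

# Example point: a fairly strong weak-scale transition.
print(omega_sw_peak(xi=0.1, beta_over_H=100.0, vw=1.0))    # ~1e-11
print(f_sw_peak(TN_GeV=200.0, beta_over_H=100.0, vw=1.0))  # ~4e-3 Hz

For these illustrative inputs the peak sits near a few mHz with h²Ω ~ 10⁻¹¹, in the band relevant for LISA.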
Spectra from models
We compute the thermal parameters for scenarios (2.5) and (2.7) for different dark sectors. We are specifically interested in light scalar sectors, with masses around the EW scale. For comparison, we also study the SMEFT case, in which the electroweak phase transition is catalyzed by a non-renormalizable H⁶ effective operator. The SMEFT case is then well approximated by a dark SU(2) with three dark fermions.
We find bounce solutions using two techniques to ensure accuracy: a numerical finite-difference algorithm, in which we discretize the radial direction r, and the analytic technique described in section 2. The thermal parameters are then found by substituting the bounce solution into the Euclidean action S_E, as described above.
In both the renormalizable and non-renormalizable models, the thermal parameter set (ξ, H/β) governs the peak amplitude. We find that these results are essentially independent of the scale of the potential Λ: specifically, ξ is independent of it, whereas β/H has a weak logarithmic dependence. The nucleation temperature, by contrast, scales linearly with Λ. In the case where we have only renormalizable operators (2.5), we scan over (g, v/Λ), with scan ranges g ∈ (0.1, 1) and v/Λ ∈ (0.5, 4). In the non-renormalizable case (2.7), we fix g = 0.5 or 1 and scan over (α, v/Λ), where we fix Λ = 200 GeV; the scan ranges are v/Λ ∈ (0.5, 4) and α ∈ (0.55, 1.5). We assume that the fermions are massless before the PT. The parameter that enters the scan is then N_f × y_χ; for convenience, we have set y_χ = 1 in the figures.
We summarize the results for the peak amplitude and peak frequency in Figs. 3 and 4, respectively, where, in the spirit of reference [25], we include visibility curves for LISA and plot the (ξ, β/H) and (β/H, T_n) planes. We check explicitly that the high-temperature expansion is valid for the results of our scan by ensuring that 2m_i² < T², with i = h, GB, for the gauge boson and Higgs masses at the critical vev and temperature. The effect of excluding points for which this check fails is to mildly trim the very tips of the peaks of the thermal parameter space in Fig. 3. The fact that the trimming occurs for low dark Higgs mass can be understood in direct analogy with early studies of the EWPT (before the Higgs mass was known): in this model one finds that, for fixed vev, the strength of the phase transition grows inversely with the Higgs mass. In the limit of small Higgs mass, the gauge boson masses (which scale with v(T_n)) become large, invalidating the high-temperature expansion (which requires m_G/T < 1).
The different shape of the results for the potential (2.5) with fermions can be understood from the fact that the fermions contribute only to the mass term; the potential barrier is then no longer just a function of the gauge coupling, which we scan over, and the zero-temperature mass. The reader will also notice that the results for the different potentials (2.5) and (2.7) have different zero-temperature mass ranges, which can be understood by considering the contribution of the dimension-6 term to the latter.

[Figure 3: Thermal parameters from the PT described by Eqs. (2.5) and (2.7), where we have chosen v_w = 0.5 in the left plot and v_w = 1 in the right plot (with the corresponding efficiencies from [54]), as motivated using the conditions in [58]. The upper thicker contour corresponds to the LISA 1-year peak sensitivity [59]; the lower thicker dashed contour corresponds to LISA for a power-law spectrum (integrated over frequency), taken from [60]. The width of the contours is found by varying the zero-temperature potential parameters. Left: unless otherwise indicated, the number of Yukawa couplings is taken to be zero; if present, the Yukawa couplings are set to y_χ = 1. Right: unless otherwise indicated, g = 1. The light blue dashed line corresponds to the predictions from the EWPT.]
From the results for the non-renormalizable operators, it would naively seem that gauge bosons and fermions change the zero-temperature mass of the scalar. The more accurate statement is that the presence of fermions and the rank of the gauge group determine which zero-temperature masses lead to a strong first order PT that is not disallowed by supercooling. Furthermore, for the case g = 0.5 rather than g = 1, the high-temperature expansion remains valid down to lower dark Higgs masses, before it is rendered invalid by large gauge boson masses.
In the right panels, we compare our result to the predictions from the EWPT up to dimension-6 operators (2.11), with the dashed blue line. We find that the results in Fig. 3 overlap, demonstrating that these results are insensitive to the scale Λ (but sensitive to the ratio v/Λ). As expected, the predictions for the peak frequency (Fig. 4) do not overlap, as T N scales with Λ.
Some qualitative features can lead to model discrimination, which we list below:

1. The thermal parameter space available for SU(N) is essentially the same as that of SU(2), apart from a shift in log ξ [the displayed shift formula was lost in extraction] in which the coefficient A(y_χ × N_f) depends on y_χ × N_f, being around 2.4 for y_χ × N_f ∼ 0 and decreasing to about 1.8 for y_χ × N_f = 10. Note that, in general, increasing the rank of the gauge group improves visibility, although one has diminishing returns for large N, as we show in Fig. 3.

[Figure 4: Thermal parameters from the PT described by Eqs. (2.5) and (2.7), respectively. The dashed contours in the plots correspond to the sound wave peak f_sw (4.11), where we have chosen the wall velocities as in Fig. 3. The thicker dashed contour corresponds to the LISA frequency peak [60]. Note that the EWPT results do not overlap with our scans, since the nucleation temperature T_N is sensitive to the scale Λ.]
2. Adding fermions slightly changes the available thermal parameter space. Comparing N_f × y_χ > 0 with N_f × y_χ = 0, we notice a shift and a slight change in shape. For 1 < y_χ × N_f < 10, we find that the thermal parameter space merely shifts [the displayed shift formula was lost in extraction], with C(2) ∼ −0.35 for SU(2). That is, ξ is shifted in the direction of greater visibility, whereas β/H is shifted in a direction of weaker visibility. Since the amplitude is more sensitive to ξ, the net effect is that adding fermions increases the visibility of the transition, as we show in both Fig. 5 and Fig. 3. The increase in β/H is due to T_c − T_n decreasing in magnitude as one adds strongly coupled fermions. For ξ there is a competition between two effects: the reduction in T_c − T_n, which tends to reduce ξ, and an increase in dV/dT, which increases ξ; it is the latter that wins.

3. The presence of non-renormalizable operators boosts H/β by orders of magnitude compared with what is possible in the renormalizable case. This is a striking signal, suggesting that a large H/β indicates the presence of more than one new scale of physics. In this case, the effect of adding extra fermions is to shift and slightly rotate the thermal parameter space (ξ, H/β), this time in the Δ log ξ direction, although the relationship is less clean than in the case of renormalizable operators. In contrast, the effect of increasing the rank of the group is to both shift and somewhat contract the parameter space. The shift in both cases is in a direction of increased visibility.
Relic abundance example
The scenarios discussed in the previous sections constitute hidden sectors, which may explain the present relic abundance of Dark Matter (DM). As an example, we discuss the contribution to the DM relic abundance from the coupling of a single Dirac fermion to the scalar responsible for the PT. We will also assume the region m_{h_D} < m_χ, which corresponds to the majority of the scenarios covered in the last section. The fermionic DM need not have tree-level couplings to the SM beyond its Yukawa interactions with the dark Higgs, and may thermalize at a dark temperature which in principle differs from that of the SM, T_D ≠ T_SM. But provided that there was thermal equilibrium between the SM and the hidden sector at some scale (above the weak scale), one can assume that at freeze-out of the χ particles T_D ∼ T_SM. This scenario can explain the observed DM relic abundance [61], which is mostly determined by the internal dynamics of the hidden sector.
In particular, the annihilation χ̄χ → h_D h_D sets the relic abundance of the χ particles.
To avoid over-closure, the h_D scalar is expected to have a decay channel to the SM, such as via Yukawa couplings to the SM fermions induced by a mixing θ with the Higgs, of magnitude g_f = (m_f/v) sin θ, where y_χ² sin²θ ≳ 2 × 10^{-13} [62]. This coupling is small enough that the SM fermions are not expected to play a significant role in the h_D phase transition.
Under these assumptions, the dominant annihilation cross section is p-wave, and an approximate expression for the relic abundance is then given by [63]

Ω_DM h² ≈ 2.1 × 10^8 GeV^{-1} x_F / [√g_* M_Pl (a + 3b/x_F)], with a = 0,

where the fermion mass is m_χ = y_χ v/√2, x_F = m_χ/T_F ≈ 20 and b = (3/128π) y_χ⁴/m_χ². To illustrate the possible interplay between DM observations and the discovery of a new source of gravitational waves, we explore the region of correct relic abundance in the model (1.1) with Λ = 200 GeV. The results are shown in Fig. 6.
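For orientation, the freeze-out estimate above can be evaluated numerically. This is only a rough sketch: the 2.1 × 10^8 GeV^{-1} prefactor is quoted from the text, while the p-wave thermal average ⟨σv⟩ = 3b/x_F (the a = 0 limit of the standard a + bv² expansion) and the value g_* = 100 are assumptions about conventions:

```python
import numpy as np

M_PL = 1.22e19    # Planck mass [GeV]
G_STAR = 100.0    # relativistic d.o.f. at freeze-out (assumed)

def omega_dm_h2(y_chi, v, x_f=20.0):
    """Freeze-out relic density for p-wave annihilation chi chi -> h_D h_D.

    Sketch of the estimate in the text; the thermal-average and g_*
    conventions are assumptions, not taken from the paper.
    """
    m_chi = y_chi * v / np.sqrt(2.0)                   # m_chi = y_chi v / sqrt(2)
    b = (3.0 / (128.0 * np.pi)) * y_chi**4 / m_chi**2  # p-wave coefficient [GeV^-2]
    sigma_v = 3.0 * b / x_f                            # p-wave thermal average
    return 2.1e8 * x_f / (np.sqrt(G_STAR) * M_PL * sigma_v)

# Example point with Lambda = 200 GeV (as in Fig. 6) and y_chi = 1:
print(omega_dm_h2(y_chi=1.0, v=200.0))
```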
These results are based on a toy model for DM, and many other scenarios could be considered. In particular, one could explore non-thermal production of DM and its relation to the scalar potential responsible for the PT. An alternative scenario has the heavy gauge bosons of the broken symmetry as the dominant component of the dark matter relic abundance. Such a scenario was considered in [64] for the symmetry-breaking pattern SU(3)/SU(2), and is sensitive to additional cosmological constraints from structure formation.
Discussion
In this work we have considered the relic gravitational wave spectra from phase transitions in a hidden sector. These spectra can be related to the thermal parameters of the transition, which can be computed from first principles: β/H, the speed of the transition; Υ, the latent heat; and T_N, the nucleation temperature. We have distinguished between two limiting cases, with potentials (1.1) and (1.2), which effectively capture the main classes of models. Furthermore, we have studied the effect of varying the quantum numbers of the scalar, the gauge coupling, and the number of coupled fermions. The results of these studies are summarized in Figs. 3 and 4, and some general conclusions are derived in section 5. We find that although there is some degeneracy in the predictions, a level of model discrimination is possible. This is because increasing the number of strongly coupled fermions, the rank of the group, or the number of scales involved all increase the visibility of a gravitational wave signal; moreover, the changes in thermal parameters due to each of these model changes are qualitatively different. In section 6, we comment on the relic abundance of hidden sectors that could be constrained through their GW spectra.
A few caveats to our work are in order. First, the renormalizable potential (2.5) does not have a zero-temperature potential barrier, as could be the case for a singlet scalar with a cubic self-interaction. Phase transitions resulting from such a potential are qualitatively different, and the thermal corrections may restore the vacuum to a unique field value in such a way that no first-order phase transition occurs. If a first-order phase transition does occur, it may exhibit runaway behaviour, such that the GW spectrum from bubble shell collisions becomes relevant. This would lead to a different spectral shape, which in principle may be distinguishable in future experiments for T_N around the weak scale. A detailed analysis of such a scenario is beyond the scope of the present work.
Second, in the present work we have employed a high temperature expansion, Eq. (2.6), which has a limited range of validity. Phase transitions not captured by this approximation may also give observable spectra; this is most noticeable in the results from the renormalizable potential (2.5). In future work, it will be interesting to explore the models using the full thermal functions. Another possible extension is the inclusion of higher dimensional potential corrections to the two limiting potential forms considered here. An analysis with the inclusion of such operators will be presented in a future paper.
Finally, we have not calculated the wall velocities v_w in the phase transitions, instead making conservative assumptions to calculate the spectra. Calculating the bubble wall velocity for a general model with general parameters is a highly non-trivial task, which we leave to future work. However, we can briefly comment on how measuring the bubble wall velocity can lead to further model discrimination. The wall velocity can be estimated in the limit that the departure of each plasma species from its equilibrium distribution is slowly varying near the bubble wall [65][66][67]. In this case the bubble wall profile solves an equation of motion, with a friction term η on the right-hand side, subject to the boundary conditions h_D(−∞) = 0 and h_D(∞) = v(T_n), that is, the value of the non-trivial minimum at the nucleation temperature. Only a particular value of the combined friction term will satisfy the boundary conditions, and since η is determined by particle physics, the problem reduces to choosing an appropriate value for v_w γ (where γ is the Lorentz factor). Here η can be written as a matrix product G^T Γ^{-1} F, where G and F are vectors whose components scale as g² or y_χ², and the matrix of coefficients scales as g² y_χ², g⁴ or y_χ⁴. Therefore the bubble wall velocity can give more information on the size of both the gauge coupling and the fermion couplings, if present.
Future work may also include further analysis of the internal hidden sector dynamics, including a thorough calculation of the thermal histories and relic abundances of hidden sector degrees of freedom. In this work we have chosen to focus on a decoupled hidden sector, but it is in principle straightforward to extend the results presented here to sectors with significant portal couplings.
The next order in the expansion is given by a logarithm, which is cancelled by the zero-temperature one-loop Coleman-Weinberg potential. Note that we have also ignored the constant term. We find numerically that the high-temperature expansion is valid almost exactly for m² < 2T². Our values of n_i are

n_H = 1, n_G = 2N − 1, n_GB = 3(2N − 1), n_f = 2 N N_f, (7.7)

where N_f is the number of fermions and N is the rank of the group. Note that we follow the standard practice of ignoring the second term, ∼ m³T, in the high-temperature expansion for the Goldstones and the Higgs, such that the only cubic self-interaction comes from the gauge bosons.
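A minimal sketch of the degree-of-freedom counting in Eq. (7.7), together with the resulting gauge-boson cubic coefficient. The n_i values come from the text; the gauge-boson mass relation m_GB = gφ/2 used below is an assumption about the model's normalization:

```python
import numpy as np

def thermal_dof(N, N_f):
    """Degrees of freedom entering the high-T expansion, Eq. (7.7)."""
    return {
        "higgs": 1,                       # n_H
        "goldstones": 2 * N - 1,          # n_G
        "gauge_bosons": 3 * (2 * N - 1),  # n_GB: 3 polarizations per broken generator
        "fermions": 2 * N * N_f,          # n_f
    }

def cubic_coefficient(N, g):
    """Coefficient E of the -E*T*phi^3 barrier term, sourced only by the
    gauge bosons (the Goldstone/Higgs m^3 T terms are dropped, as in the
    text). Assumes m_GB = g*phi/2, giving E = n_GB * (g/2)^3 / (12*pi)."""
    n_gb = thermal_dof(N, 0)["gauge_bosons"]
    return n_gb * (g / 2.0) ** 3 / (12.0 * np.pi)

print(thermal_dof(N=2, N_f=1))     # SU(2) with one coupled fermion
print(cubic_coefficient(2, 0.8))   # barrier coefficient for g = 0.8
```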
Non-renormalizable potential
For the nonrenormalizable potential (2.7) we proceed as before, but here we assume the cubic corrections due to gauge bosons are subdominant compared to the zero-temperature terms with alternating signs. This corresponds to taking only the first term in the high-temperature expansion for every species.
"Physics"
] |
A fine-tuned YOLOv5 deep learning approach for real-time house number detection
Detection of small objects in natural scene images is a complicated problem due to the blur and depth found in the images. Detecting house numbers from natural scene images in real time is a computer vision problem. On the other hand, convolutional neural network (CNN) based deep learning methods have been widely used for object detection in recent years. In this study, firstly, a classical CNN-based approach is used to detect house numbers, with their locations, from natural images in real time. Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7, among the commonly used CNN models, were applied. However, satisfactory results could not be obtained due to the small size and variable depth of the door plate objects. A new approach using the fine-tuning technique is proposed to improve the performance of CNN-based deep learning models. Experimental evaluations were made on real data from Kayseri province. The classic Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 methods yield f1 scores of 0.763, 0.677, 0.880, 0.943 and 0.842, respectively. The proposed fine-tuned Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 approaches achieved f1 scores of 0.845, 0.775, 0.932, 0.972 and 0.889, respectively. Thanks to the proposed fine-tuned approach, the f1 scores of all models increased. Regarding the run time of the methods, classic Faster R-CNN detects in 0.603 seconds per image, while fine-tuned Faster R-CNN detects in 0.633 seconds. Classic MobileNet detects in 0.046 seconds, while fine-tuned MobileNet detects in 0.048 seconds. Classic YOLOv4 and fine-tuned YOLOv4 detect in 0.235 and 0.240 seconds, respectively. Classic and fine-tuned YOLOv5 detect in 0.015 seconds, and classic and fine-tuned YOLOv7 detect objects in 0.009 seconds. While the YOLOv7 model was the fastest-running model, with an average running time of 0.009 seconds, the proposed fine-tuned YOLOv5 approach achieved the highest performance, with an f1 score of 0.972.
INTRODUCTION
The quality of geographic information systems (GIS) developed to store, analyze and display spatial data depends on the accuracy of the data they contain (Cooperative & Collins, 1988; Tasyurek, 2022). The quality and readability of the image data sets used in creating an address map are very important (Ulutaş Karakol, Ataman & Cömert, 2021). Detecting house numbers from natural scene images containing spatial location information (Visin et al., 2015) and processing them with their locations accelerates the development of the address infrastructure (Öztürkçü & Leyla, 2020). A natural scene image is the raw form of a momentary image of nature or the environment. The most common source used to obtain house numbers from images is Google Street View, which consists of coordinate-tagged 360° panoramic images (Vandeviver, 2014). Detecting and reading door numbers from street views (Asif et al., 2021) is a computer vision problem (Zuo et al., 2019; Kulikajevas, Maskeliunas & Damaševičius, 2021) that falls under the category of natural scene text recognition (Fischler & Firschein, 2014). Character recognition in natural scene images is a complicated problem due to the variability of light, background clutter, severe blur, inconsistent resolution and many other factors. In addition to these difficulties, the characters and numbers in street view photographs deteriorate under the effect of natural events.
In recent years, deep learning methods have been widely used in image classification, object tracking, pose estimation, text detection and recognition, visual saliency detection, action recognition and scene tagging (Alzubaidi et al., 2021; Bashir et al., 2021; Pal & Pradhan, 2023; Atasever et al., 2022). Deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks are the methods most frequently used in deep learning (Garcia-Garcia et al., 2018). Among these methods, convolutional neural networks (CNN) have been found to show high performance in image classification (Khan et al., 2020; Dönmez, 2022). The CNN model takes its name from the linear mathematical operation between matrices called convolution (O'Shea & Nash, 2015; Maass & Storey, 2021; Terzi & Azginoglu, 2021). The CNN model consists of a multi-layer structure including convolutional layers, non-linear layers, pooling layers and fully connected layers (Albawi, Mohammed & Al-Zawi, 2017).
Identifying characters and numbers from natural images is one of the classification problems in computer vision. In the literature, studies on detecting house numbers from street images with CNN models show very high performance in image classification (Goodfellow et al., 2013;Visin et al., 2015).
In this study, classic CNN models such as Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 were applied in a CNN-based system designed to detect house numbers from images obtained in real-time with spatial location. However, sufficiently successful results could not be obtained, especially due to the small and variable depths of the house number objects in the image.
Training on more data is one solution to improve the performance of CNN-based deep learning models, but collecting large amounts of data imposes a time and financial burden. On the other hand, fine-tuning methods have been widely used in recent years to improve the performance of deep learning models (Amisse, Jijón-Palma & Centeno, 2021). Fine-tuning increases a model's success by making adjustments to deep learning models (Subramanian, Shanmugavadivel & Nandhini, 2022). One of the commonly used fine-tuning methods in the literature is to remove the last layer of the model, the softmax layer, and replace it with a task-specific classifier layer. Another fine-tuning method is to change the values of the parameters, also called hyperparameters, which affect the performance of the models (Öztürk, Taşyürek & Türkdamar, 2023). Freezing the layer weights of a previously trained model is also a common fine-tuning practice. In this study, a new fine-tuning technique is proposed to improve the performance of deep learning-based models. The proposed technique includes updating the softmax layer, multi-scale training (Rath, 2022) and performing the training process with a low learning rate (Yu, 2016). The main contributions of the proposed approach within the scope of this study are presented below.
Contributions
• A new CNN-based approach is proposed for house number detection with the location in real-time.
• The proposed approach has been tested on real natural scene images taken from Kayseri Metropolitan Municipality.
• In the proposed approach, the performances of Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models, which are widely used as CNN models, are examined.
• A fair evaluation was made by comparing Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models designed in different structures on a single platform (PyTorch).
• A new fine-tuning technique is proposed to improve the performance of classical CNN-based deep learning models in house number detection.
• The proposed fine-tuned YOLOv5 approach can detect house numbers from natural scene images with a high f1 score of 0.972 in an average of 0.015 s.
Scope and outline
• Hyperparameter optimization to improve accuracy performance in house door number detection is out of the scope of this study.
The rest of this article is organized as follows: Section 2 presents the related work. Section 3 introduces the basic concepts of the CNN models used. Section 4 presents the proposed approach. Section 5 presents the experimental evaluations. Section 6 presents conclusions and future work.
RELATED WORKS
The CNN method, one of the deep learning methods, has been widely used in recent years in different fields such as computer networks (Gu et al., 2018), image detection (Chauhan, Ghanshala & Joshi, 2018) and disease classification (Lu, Tan & Jiang, 2021). Image classification with a CNN can be done by creating a custom CNN structure or by using CNN models with a fixed structure. As examples of custom CNN models, Wei et al. (2018) proposed a new technique using the CNN model to effectively and robustly detect multifaceted text in natural scene images. He et al. (2016) presented a system for scene text detection by proposing the Text-CNN model, which focuses on extracting text-related regions and features from image components. Jia et al. (2018) proposed a CNN-based approach to detect handwritten texts in images of whiteboards and handwritten notes. Garg et al. (2019) reported high performance on the MNIST dataset with an efficient CNN model built from multiple convolution, ReLU and pooling layers. Athira et al. (2022) suggested using a custom CNN model for character classification in container identity detection and recognition.
The LeNet-5 model, developed by LeCun et al. (1999) for handwriting and machine-printed character recognition in the 1990s, is considered the first successful application of convolutional networks. LeNet-5, a 7-level convolutional network, was developed to recognize handwritten numbers in 32x32 pixel grayscale input images. When higher-resolution images are analyzed with the LeNet-5 method, the depth of the convolutional network is insufficient (Paul & Singh, 2015). AlexNet (Krizhevsky, Sutskever & Hinton, 2012), trained on ImageNet and introduced in 2012, produced more successful results than all previous CNN models. CNN models have been continuously developed to achieve higher accuracy and faster results (Alom et al., 2019): ZFNet (Fu et al., 2018) was developed in 2013, GoogLeNet (Sam et al., 2019) and VGGNet (Simonyan & Zisserman, 2014) in 2014, and ResNet (Gao et al., 2021) in 2015.
The developed CNN models are successful in feature extraction and classification in single-object image analysis, but not sufficiently successful in multi-object image analysis. For this reason, Girshick et al. (2014) proposed the R-CNN method to overcome the multi-object problem. R-CNN divides the image into approximately 2,000 regions and searches within each region with a CNN. The computational cost of the R-CNN method is high in terms of time. Girshick (2015) developed the Fast R-CNN method, which works faster, to eliminate the problem of R-CNN running slowly. Julca-Aguilar & Hirata (2018) suggested using the Faster R-CNN algorithm as a general method for detecting symbols in handwritten graphics. Nagaoka et al. (2017) developed a model for text detection based on Faster R-CNN that can be trained in an end-to-end coherent manner. R-CNN algorithms use regions to localize the object within the image. The CNN-based YOLO (You Only Look Once) method, which looks at the image once as a whole rather than examining region proposals one by one, was developed by Redmon et al. (2016). The YOLO method has produced more successful results than many object detection methods used in real-time object tracking. For example, Li et al. (2018) used the YOLO model to detect steel strip surface defects in real time. Rahman, Ami & Ullah (2020) suggested using the YOLO model for an automatic wrong-way vehicle detection system based on road safety camera images. Pei & Zhu (2020) developed a YOLO model for real-time text detection and recognition.
Taşyürek & Öztürk (2022) proposed a two-stage deep learning model using only the YOLOv4 model to detect house numbers from natural scene images. However, in the approach, real-time object detection was not performed, and the location data of the objects on the earth was not captured.
In addition, YOLO models have been constantly improved. YOLOv5 was developed by Jocher et al. (2020). Kim et al. (2022) examined the object detection and classification performance of the YOLOv4 and V5 models on the Maritime Dataset and showed that the YOLOv5 model has superior object detection performance compared to the YOLOv4 model. On the other hand, Taşyürek (2023) proposed a new approach called ODRP, which uses map-based transformation and deep learning models to detect street signs with their real locations on Earth from EXIF-format data. In the proposed ODRP approach, the YOLOv5 model outperformed the YOLOv6 model in object detection.
In recent years, the fine-tuning technique has been widely used to increase the classification and segmentation performance of CNN-based deep learning methods (Pham, 2021; Xu et al., 2021). For example, Kaya & Gürsoy (2023) proposed a transfer learning-based deep learning approach with fine-tuning mechanisms to classify COVID-19 from chest X-ray images. They used the MobileNet V2 version as the CNN model, and the proposed model achieved an average accuracy of 97.61% with fine-tuning. Akshatha et al. (2022) examined the performance of Faster R-CNN and SSD models fine-tuned for human detection from aerial thermal images. After fine-tuning, the mAP metric of the Faster R-CNN model increased by 10%, while the mAP metric of the SSD model increased by 3.5%. Salman et al. (2022) proposed a fine-tuned YOLO model for an automated prostate cancer grading and diagnosis system. Thanks to the fine-tuning technique they suggested, the proposed method achieved 97% detection and classification success.
In this study, firstly, classic Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models were applied for a CNN-based system that detects house numbers with spatial locations from natural images in real-time. However, satisfactory results could not be obtained due to the small size and variable depth of the house plate object in the raw images. A new approach using the fine-tuning technique is proposed to improve the object detection performance of the CNN-based system.
BASIC CONCEPTS
Deep learning has become a prevalent subset of machine learning because of its high classification performance across many data types (Raschka & Mirjalili, 2017). One of the most impactful deep learning methods for image classification is the convolutional neural network (CNN). A CNN is a deep learning algorithm generally used in image processing that takes images as input (Wang et al., 2017; Nasir, Khan & Varlamis, 2021). This algorithm, which captures and classifies visual features with different operations, has been widely used in recent years (Barzekar & Yu, 2022). The CNN-based Faster R-CNN, MobileNet and YOLO models used in this study are presented below.
R-CNN
The R-CNN architecture detects the classes of objects in images and their bounding boxes. In the R-CNN model, features that are candidates to be objects are determined by selective search. In selective search, which works hierarchically from small to large, small regions are determined first; then two similar regions are merged and a new, larger region emerges. This process continues recursively: in each iteration larger regions occur, and the objects in the image are clustered. At the end of this region nomination process, approximately 2,000 region proposals emerge, and each is individually fed into a CNN network. Using the features obtained from the CNN networks, the object class is determined with SVM models and the bounding boxes with regression models. The R-CNN model has the following disadvantages:

• For each image, about 2,000 region proposals need to be classified; therefore, it takes a lot of time to train the network.

• It also requires a lot of disk space to store the feature maps of the region proposals.
The backbone of R-CNN models can be changed: AlexNet, VGG-16 or ResNet-50 can be selected as the backbone. The default backbone of the R-CNN model developed in PyTorch is ResNet-50 (Rath, 2021). The ResNet-50 model consists of 50 layers: one max-pooling layer, one average-pooling layer and 48 convolutional layers.
The R-CNN architecture (Girshick et al., 2014) was developed because objects in images containing multiple objects cannot be easily detected with a plain CNN. Ross Girshick developed the Fast R-CNN method, which works faster, to eliminate the slowness of R-CNN (Girshick, 2015). The Fast R-CNN model takes the whole image and the region proposals as input in a feed-forward CNN architecture. The Fast R-CNN model also combines the ConvNet, RoI pooling and classification layers of the R-CNN model into a single structure. This eliminates the need to store a feature map and saves disk space. It also uses a softmax layer instead of the SVM method for region-proposal classification, which has proven faster and produces better accuracy than the SVM method.
On the other hand, Faster R-CNN was introduced by Ren et al. (2015). The bottleneck of the Fast R-CNN model is the selective search method inherited from the R-CNN architecture. In the Faster R-CNN model, a region proposal network (RPN) is used instead of the selective search method. In this model, the image is first passed through a backbone network, which creates a convolutional feature map. This feature map is forwarded to the RPN, which returns object proposals along with their objectness scores. Then the RoI pooling layer resizes the proposed regions to a fixed size and feeds them to the fully connected layers for classification. Regarding computational cost, Faster R-CNN is faster than R-CNN and Fast R-CNN (Ren et al., 2015). In addition, the Faster R-CNN model achieves a better mean average precision than the R-CNN and Fast R-CNN models. This study used the Faster R-CNN model, which is more successful than the R-CNN and Fast R-CNN methods.
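As a point of reference, the PyTorch implementation referred to above can be loaded in a few lines. The sketch below uses the standard torchvision API; it is not necessarily the exact code of the cited implementation:

```python
import torch
import torchvision

# Faster R-CNN with its default ResNet-50 FPN backbone, pretrained on COCO.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# Inference expects a list of 3xHxW float tensors with values in [0, 1].
image = torch.rand(3, 600, 800)
with torch.no_grad():
    prediction = model([image])[0]

# Each prediction dict holds 'boxes', 'labels' and 'scores'.
print(prediction["boxes"].shape, prediction["scores"][:5])
```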
MobileNet
MobileNet is a CNN-based deep learning model designed for mobile and embedded computer vision applications. The MobileNet (V1) was introduced by Howard & Zhu (2017). MobileNet is a simple and efficient deep learning model (Michele, Colin & Santika, 2019). It is widely used in real-time applications due to its low computational cost (Verma & Srivastava, 2022;Edel & Kapustin, 2022).
The basis of MobileNetV1 is depthwise separable convolutions, used to create lightweight deep neural networks. In this design, the depthwise convolution applies a single filter to each input channel, and the pointwise convolution then uses a 1 × 1 convolution to combine the outputs of the depthwise convolution. By contrast, a standard convolution both filters the inputs and combines them into a new set of outputs in a single step. MobileNet has 28 layers and takes an image of dimensions 224 × 224 × 3 as input. The MobileNet model has continued to be developed with new features. In 2018, MobileNet V2 was introduced by Sandler et al. (2018); it was developed to overcome the bottlenecks in the intermediate inputs and outputs of the V1 model, and thanks to these improvements it achieved faster training and better accuracy than V1. The following version, MobileNet V3, is widely used in the image-analysis capabilities of many popular mobile applications.
In this study, the MobileNet V3 version was used because it stands out with its low computation cost in real-time systems.
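The text does not spell out which detection head is paired with the MobileNet V3 backbone; as one plausible setup, torchvision ships an SSDLite head on a MobileNetV3-Large backbone, shown here purely for illustration:

```python
import torch
import torchvision

# SSDLite detector with a MobileNetV3-Large backbone (one of the
# low-cost MobileNet-based detectors available in torchvision).
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)
model.eval()

image = torch.rand(3, 320, 320)
with torch.no_grad():
    out = model([image])[0]
print(out["labels"][:5], out["scores"][:5])
```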
YOLO
The YOLO approach takes its name from the phrase "You Only Look Once" (Redmon et al., 2016). The YOLO approach can predict at a glance what the objects in the image are and where they are (Sarkar & Gunturi, 2022). The YOLO method usually achieves high accuracy and also works in real time, which is why it has been frequently preferred in recent years (Du, 2018). The algorithm "looks only once" at the image in the sense that it requires only one forward propagation pass through the neural network to make the prediction. After non-maximum suppression (which ensures that the object detection algorithm detects each object only once), it outputs the recognized objects along with their bounding boxes. With YOLO, a single CNN simultaneously predicts multiple bounding boxes and the class probabilities for those boxes. YOLO can work on full images and directly optimize detection performance.
The YOLO algorithm performs these operations using a CNN model. The architecture of the YOLO model consists of 24 convolutional layers followed by two fully connected layers (Redmon et al., 2016). The architecture uses a 7×7 (S×S) grid structure, takes 448×448×3 images as input and produces an output of size 7×7×30.
The YOLO approach has been continually developed. In the first version, the YOLOv1 architecture developed by Redmon et al. (2016), the output layer is a fully connected layer, so the trained model only supports the same input resolution at test time as the training images. To eliminate the shortcomings of the YOLOv1 version and build on its success, the more accurate, faster and more powerful YOLOv2 architecture, which can recognize 9,000 object categories, was introduced by Redmon & Farhadi (2017). Developed by Redmon & Farhadi (2018) in 2018, the YOLOv3 model is more complex than its predecessor; its architecture allows changing the size of the model's structure, trading speed against accuracy. In 2020, the YOLOv4 version was introduced by Bochkovskiy, Wang & Liao (2020) as an object recognition method with optimal speed and accuracy. A practical and powerful object detection model is proposed in the YOLOv4 release; it aims to find the best balance between input network resolution, the number of convolutional layers, the number of parameters, and the number of layer outputs (filters).
On the other hand, Jocher developed the YOLOv5 model in 2020 (Jocher et al., 2020). Unlike the V4 model, the YOLOv5 model runs in PyTorch. Studies (Jiang et al., 2022; Fang et al., 2021) have shown that the YOLOv5 model produces more accurate predictions at a lower computational cost than the V4 model. While previous versions of YOLO were written in the C programming language, YOLOv5 was written in Python; thus, installing and integrating YOLOv5 into IoT devices has become more accessible. YOLOv5 is only 27 MB, while YOLOv4 using Darknet is 244 MB. Compared to YOLOv4's Darknet community, YOLOv5's PyTorch community is larger, indicating that more contributions will be made and that there is greater potential for future growth. It is challenging to compare the performance of the YOLOv4 and YOLOv5 methods accurately, since they use two different languages and frameworks; but over time, under the same conditions, the YOLOv5 method has proven itself by showing higher performance than the YOLOv4 method and receiving more support from the computer vision community.
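A minimal sketch of how the PyTorch-native YOLOv5 release is typically loaded and run via torch.hub; the image path below is a hypothetical placeholder:

```python
import torch

# YOLOv5 is distributed as a PyTorch repository and can be pulled via torch.hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Inference accepts file paths, URLs, PIL images or numpy arrays.
results = model("street_scene.jpg")   # hypothetical image path
results.print()                       # summary: detections per class, speed
boxes = results.xyxy[0]               # tensor: x1, y1, x2, y2, confidence, class
```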
In addition, a new version of the YOLO model, the YOLOv7, was released in 2022 (Wang, Bochkovskiy & Liao, 2022). YOLOv7 uses anchor boxes to detect a broader range of object shapes and sizes than previous versions. YOLOv7 also has a higher resolution than previous versions. While other models process images at 416 × 416 resolution by default, the YOLOv7 model processes images at 608 × 608 by default. Thanks to this default image size, the YOLOv7 model detects smaller objects and gives it higher accuracy overall (Kundu, 2023).
In this study, the performances of the YOLOv4, V5 and V7 models were examined.
PROPOSED CNN BASED DEEP LEARNING APPROACH FOR HOUSE NUMBER DETECTION WITH SPATIAL LOCATION IN REAL-TIME
The quality of geographic information systems developed to store, analyze and display spatial data depends on the accuracy of the data they contain. Since address data has been created using natural scene images in recent years, the legibility of the house number characters in the images is very important (Taşyürek & Öztürk, 2022). In addition, detecting house numbers from natural images containing location information and processing them with their locations accelerates the development of the address infrastructure. An example natural scene image containing a number plate is shown in Fig. 1. The plate with the blue background in Fig. 1 is the door number plate, which reads "5A". Address plates are produced in a standard colour and format. The real-time images of Kayseri used within the scope of this study also contain the location information of the point where the photo was taken. When the house number in these images is detected, the location of the house number is automatically determined: the location of the point where the photo was taken is accepted as the location of the house number. Determining the door number, the essential component of the address infrastructure, and positioning it correctly on the map is essential for vital services such as education, hospitals and pharmacies. However, when door numbers are determined from natural images with classical (manual) methods, errors occur due to eye strain or keyboard typing mistakes. In this study, a new CNN-based approach is proposed to overcome these problems and to detect house numbers with their locations in real time. The flowchart of the proposed system is presented in Fig. 2.
As seen in Fig. 2, the model must first be trained, as in all CNN-based object detection systems. In order to increase the performance of the proposed system, the transfer learning technique was used within the scope of this study. The transfer learning method is frequently used during the training process of CNN-based models (Zhuang et al., 2020). Transfer learning can be expressed as transferring previously trained, high-performance weights to the new model to be trained (Weiss, Khoshgoftaar & Wang, 2016). In this way, models that show higher success and learn faster with less training data are obtained using prior knowledge. In the system presented in Fig. 2, a picture containing house numbers with a spatial location is the input for door number detection. After the picture is given to the system, the door number in the picture is estimated with the CNN-based deep learning method. If the confidence score of the estimated door number is above the threshold value, the system reads the estimated door number, the location information in the picture and the other attributes, and saves this information to the database. In the sample plate detection presented in Fig. 2, '5' was estimated with a confidence score of 0.86 and 'A' with a confidence score of 0.83. If the confidence score of the estimated door number is below the threshold value (0.5 was selected for this study), the system asks the user to enter the door number, reads the other attribute information from the picture, and saves the data to the database.
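A minimal sketch of this flow, assuming a trained detector and a database layer: `model`, `db.save` and the returned detection format are hypothetical stand-ins, while the EXIF GPS block (tag 34853) is read with Pillow's standard API:

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

CONF_THRESHOLD = 0.5  # threshold chosen in this study

def read_gps(path):
    """Return the decoded GPSInfo EXIF block (tag 34853), if present."""
    exif = Image.open(path)._getexif() or {}
    gps = exif.get(34853, {})
    return {GPSTAGS.get(k, k): v for k, v in gps.items()}

def process_image(path, model, db):
    location = read_gps(path)  # photo location stands in for the plate location
    for char, score in model(path):  # hypothetical (label, confidence) pairs
        if score >= CONF_THRESHOLD:
            db.save(char=char, score=score, location=location)
        else:
            # Low-confidence case: the user types the character manually.
            char = input(f"Low confidence ({score:.2f}); enter character: ")
            db.save(char=char, score=None, location=location)
```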
Within the scope of this study, firstly, the Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models, which are widely used CNN-based deep learning models, were applied in the proposed system. The computational costs of the YOLO-based models were low, as expected for real-time systems. However, none of the models could detect the house numbers and characters well enough, due to the depth and resolution variations found in natural images. In order to overcome these problems and improve the object detection performance of the CNN-based models, the fine-tuning technique, which has been widely used in recent years, was applied. Fine-tuning means increasing the model's success by adjusting deep learning models. There are many types of fine-tuning; the common and easy-to-use ones are changing the last layer, reducing the learning rate and multi-resolution training (Yu, 2016; Rath, 2022). In this study, these three operations were applied. As the first fine-tuning operation, the softmax layer of the previously trained network (transferred with the transfer learning technique) was truncated, and a new softmax layer with 14 classes was added instead. As the second operation, the learning rate of the models was reduced, and the models were trained with a learning rate of 0.001. As the final operation, the models were trained at multiple resolutions: images are automatically resized by ±50% during training with the --multi-scale parameter in the YOLOv4, V5 and V7 models. However, this feature is not available in the Faster R-CNN and MobileNet models; for these, images were resized before the fine-tuned training. The results of the classical and fine-tuned Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models in the proposed approach are presented in the following section.
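A sketch of the first two fine-tuning steps for the torchvision Faster R-CNN, assuming the standard torchvision recipe (14 character classes plus the mandatory background class); for the YOLO models, the third step is a training flag rather than code:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 14 + 1  # 14 character classes + background

# Step 1: start from pretrained weights (transfer learning) and replace
# the final classification/box-regression head with one for our classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Step 2: train with the reduced learning rate of 0.001.
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.001, momentum=0.9)

# Step 3 (YOLOv4/v5/v7 only): multi-scale training, i.e. automatic +-50%
# image resizing, is enabled with the '--multi-scale' flag of the
# training script rather than in code.
```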
EXPERIMENTAL EVALUATIONS
In this section, the experimental performances of Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 methods are compared for both classical and proposed fine-tuned learning. In the experimental evaluations, the answers to the following questions were examined.
• What are the door number detection performances of approaches using classical CNN models?
• What are the door number detection performances of approaches using fine-tuned CNN models?
• What are the run-times of the approaches?
Data sets
In this study, natural scene images containing the house numbers with the location were used. 2,664 images were used as training data, and 626 images were used as validation data.
To examine the performance of the methods, real images containing 3,627 door numbers and location information from the Sarioglan-Ciftlik district of Kayseri province were used. Detailed information about the images used for testing is presented in Table 1. The images listed in Table 1 also include the locations of the door numbers; in other words, while the image shows a house number, its attributes record the location at which the image was taken. The location data in the attribute information is positioned on the map, as shown in Fig. 3, using the open-source Leaflet library and the OpenStreetMap base map.
The spatial distribution of the dataset is shown in the map image presented in Fig. 3. Since settlement is denser in the town centre, the blue dots showing the house-number locations are concentrated there.
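The study positions the points with the Leaflet JavaScript library on an OpenStreetMap base; an equivalent sketch in Python can use folium, a wrapper around Leaflet. The coordinates below are made-up placeholders, not points from the dataset:

```python
import folium

# OpenStreetMap is folium's default base layer, matching the study's setup.
m = folium.Map(location=[38.7, 35.5], zoom_start=13)  # rough Kayseri area (assumed)

# 'points' stands in for the (lat, lon) pairs read from the image attributes.
points = [(38.701, 35.502), (38.703, 35.507)]  # hypothetical samples
for lat, lon in points:
    folium.CircleMarker([lat, lon], radius=3, color="blue").add_to(m)

m.save("door_numbers.html")
```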
Model settings and performance metrics
The YOLOv5 (Jocher et al., 2020) and YOLOv7 (Wang, Bochkovskiy & Liao, 2022) models were developed using the PyTorch library. The Faster R-CNN (Rath, 2021), MobileNet (Wang, 2019) and YOLOv4 (Yiu, 2021) versions developed with the PyTorch architecture were used to compare the methods under equal conditions. All methods were trained by setting the epoch value to 300. Experimental studies were run using Python 3.9 on a computer with an Intel Core i7-9700 3.0 GHz CPU, 32 GB RAM and a 12 GB NVIDIA GPU. The loss value produced by deep learning models is used to examine the success of the training (Chung et al., 2020). A loss value that decreases during training and approaches zero indicates successful training. The training graphs of the classical and fine-tuned Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models are presented in Figs. 4A and 4B, respectively. As seen in Fig. 4, the loss value of the fine-tuned models decreases over a longer period, since the fine-tuning process reduces the learning rate. In addition, the multi-resolution training increased the training times of the models. The labelling (annotation) process was done with the LabelImg (Talin, 2018) program. The door numbers were analyzed as 14 classes: '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '/', 'A', 'B' and 'C'. The YOLOv4, V5 and V7 models use .txt files as labelling files, while Faster R-CNN and MobileNet use .xml files. A single labelling pass was made, and the same labels were used for all models by exporting both the .txt and .xml formats.
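Since the same annotations were exported in both formats, the relationship between them is a simple coordinate conversion. The sketch below uses the standard YOLO '.txt' line format and the VOC pixel-box convention; the example values are made up:

```python
def yolo_to_voc(line, img_w, img_h):
    """Convert one YOLO label line 'cls cx cy w h' (normalized, centre-based)
    to VOC-style absolute pixel corners (xmin, ymin, xmax, ymax)."""
    cls, cx, cy, w, h = line.split()
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    xmin = int((cx - w / 2) * img_w)
    ymin = int((cy - h / 2) * img_h)
    xmax = int((cx + w / 2) * img_w)
    ymax = int((cy + h / 2) * img_h)
    return int(cls), xmin, ymin, xmax, ymax

# Example: class 6 (the digit '6'), centred box, on a 1280x720 image.
print(yolo_to_voc("6 0.5 0.5 0.1 0.2", 1280, 720))  # (6, 576, 288, 704, 432)
```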
Performance metrics are used to examine the performance of deep learning models (Bacchi et al., 2020; Teplitzky, McRoberts & Ghanbari, 2020). These metrics are accuracy, precision, recall and F1 score. In order to calculate these values, the true positive (TP), true negative (TN), false positive (FP) and false negative (FN) counts must first be determined. If there is an object and it is detected, this counts as a TP; the number of door numbers that the proposed approach detects correctly is the TP value. If there is no object and no detection, the case is counted as a TN. If the model makes a detection even though there is no object, it is counted as an FP. Objects that are present in the image but cannot be detected by the deep learning model are counted as FN.
Accuracy shows how successful the model is across all classes in general and is calculated with Eq. (1): Accuracy = (TP + TN) / (TP + TN + FP + FN). (1)
Precision represents the ratio of the number of correctly classified positive samples to the total number of samples classified as positive and is calculated with Eq. (2): Precision = TP / (TP + FP). (2)
Recall measures the model's ability to detect positive samples and is calculated with Eq. (3): Recall = TP / (TP + FN). (3)
The F1 score is one of the most widely used metrics, as it combines precision and recall into a single value. It is calculated with Eq. (4): F1 = 2 × (Precision × Recall) / (Precision + Recall). (4)
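The four metrics follow directly from the raw counts; for instance, the snippet below reproduces the classic YOLOv5 f1 score of 0.943 from the TP/FP/FN counts reported in the next experiment:

```python
def detection_metrics(tp, tn, fp, fn):
    """Eqs. (1)-(4) from raw detection counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Classic YOLOv5 counts from Table 2: TP=19,302, TN=0, FP=924, FN=1,420.
acc, p, r, f1 = detection_metrics(19302, 0, 924, 1420)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.954 0.931 0.943
```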
Experiments
In this section, experimental comparisons of the approach designed with CNN-based deep learning models for real-time house number detection are presented. First, the door number detection performances of the classic Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models were investigated. Then, the door number detection performances of the fine-tuned Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models were investigated. Finally, the run times of the proposed approaches are presented.
Door number detection performances of classical CNN models
Within the scope of this experiment, the performance of the classic Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models in detecting house numbers in natural scene images was compared. Test operations were carried out on 3,627 images, containing 20,722 characters (numbers) in total. In order to better examine the performance of the CNN models, all benchmark metrics obtained are presented in Table 2. When the metrics presented in Table 2 are examined, the classical Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 approaches were able to detect 14,849, 12,532, 17,875, 19,302 and 17,078 characters as TP, respectively. The TN value was 0 in all models because there was no image without a door number in the dataset. Regarding the models' FP values, Faster R-CNN has 3,339, MobileNet has 3,752, YOLOv4 has 2,020, YOLOv5 has 924, and YOLOv7 has 2,780 FPs. On the other hand, the Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models have FN values of 5,873, 8,190, 2,847, 1,420 and 3,644, respectively. While the TP, TN, FP and FN counts are used to calculate the accuracy, precision, recall and F1 metrics in Table 2, sample detections are examined in Fig. 5. Fig. 5B shows the original (unannotated) version of the image, because the MobileNet model could not detect any digits or characters in it; due to such situations, the performance metrics of the MobileNet model were lower. As seen in Fig. 5A, the Faster R-CNN model detected the number '6' with a confidence score of 0.73 but failed to detect the character 'A'. The detection result of the YOLOv4 model is presented in Fig. 5C: the YOLOv4 model could not detect the 'A' character but detected the '6' with a confidence score of 0.66. As seen in Fig. 5D, the YOLOv5 model could not detect the 'A' character but detected the '6' with a confidence score of 0.86. The result of the YOLOv7 model is presented in Fig. 5E; like the other models, it could not detect the 'A' character, but it did detect the '6'. When Fig. 5 is examined, the model that detects the '6' with the highest confidence score is YOLOv5. Because it detects with such high confidence scores, the metric values of the YOLOv5 model are higher than those of the others. However, none of the classical CNN models could detect the 'A' character. In this study, the fine-tuning technique is proposed to detect otherwise undetectable characters, such as the 'A' character, and to detect door numbers with higher performance rates. The results of the proposed fine-tuning technique are presented in the following experiment.
Door number detection performances of fine-tuned CNN models
In this experiment, the performance of the fine-tuned Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models in detecting house numbers in natural scene images was compared. As shown in the previous section, classical CNN-based models could not detect house numbers in images with variable depths. The fine-tuning technique has been proposed to overcome these problems and to detect door numbers with higher performance rates. The success of the proposed method was examined on 3,627 real images. All benchmark metrics showing the performance of the proposed fine-tuned CNN models are presented in Table 3. When the metrics presented in Table 3 are examined, the highest increase in TP after fine-tuning is observed in the MobileNet model, since the classical MobileNet model has a very low TP compared to the other models. The lowest increase in TP values was observed in the fine-tuned YOLOv5 model, because the classic YOLOv5 model is already successful. On the other hand, the TN value of all fine-tuned models was 0. When the fine-tuned CNN models are analyzed according to FP values, the fine-tuned Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models have FP values of 2,682, 2,968, 1,401, 485 and 2,224, respectively. If the model finds a number or character where there is none, it is counted as an FP; a low FP value, or a decrease in this value compared to the classical model, indicates the success of the recommended fine-tuning technique. Thanks to the proposed fine-tuning technique, these models reduced their FP values by 657, 784, 619, 439 and 556, respectively. In addition, these models decreased their FN values by 2,273, 2,445, 1,443, 735 and 1,267, respectively. When the fine-tuned models are ranked by their F1 scores, the order of performance is the same as for the classical CNN models. Fine-tuned YOLOv5 has the highest f1 score, with 0.972; the fine-tuned MobileNet model has the lowest, with 0.775. The fine-tuned Faster R-CNN, YOLOv4 and YOLOv7 models achieved f1 scores of 0.845, 0.932 and 0.889, respectively. Thanks to the proposed fine-tuning technique, all CNN models increased their f1 score performance. In order to better analyze the performance of the proposed fine-tuned CNN models, the methods' house number detections on the same image used for the classical CNN models were examined. The detection results of the fine-tuned Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 are presented in Figs. 6A, 6B, 6C, 6D and 6E, respectively.
When Fig. 6 is examined, all models except the fine-tuned MobileNet model detected the number '6' and the character 'A' correctly (TP); the fine-tuned MobileNet model caught only the '6'. While the classical MobileNet model could not find any object in the same image, the fine-tuned MobileNet model could detect the number '6' thanks to the suggested fine-tuning technique. The fine-tuned Faster R-CNN, YOLOv4, YOLOv5 and YOLOv7 models detected the 'A' character, which they could not detect in their classical state, thanks to the fine-tuning technique. In the input image, the depth of the door plate is high; in other words, the characters on the door sign are small. Due to the variable depth, classical CNN-based models cannot detect the house number successfully enough. In the proposed fine-tuned technique, the models are trained at multiple resolutions by resizing the images by ±50%. Thanks to this multi-resolution training, fine-tuned models detect house numbers in natural scene images with varying depths more successfully than classic CNN models. In addition, as with the classical CNN models, the fine-tuned YOLOv5 model detects house numbers with the highest confidence scores. Due to such successful detections, the performance of the fine-tuned YOLOv5 model is superior to that of the other models.
Run time of the approaches
In real-time object detection, the computational cost is as important as the estimation performance of the methods. For this reason, the object detection times of the classic and fine-tuned Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models were investigated. The PyTorch versions of the Faster R-CNN (Rath, 2021), MobileNet (Wang, 2019), YOLOv4 (Yiu, 2021), YOLOv5 (Jocher et al., 2020) and YOLOv7 (Wang, Bochkovskiy & Liao, 2022) models were used to evaluate the models under equal conditions. The models were run on the 3,627 images in the dataset, and their total running times in seconds are presented in Fig. 7. As seen in Fig. 7, the total run times of the classic Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 are 2,187, 167, 852, 54 and 33 s, respectively. The total run times of the fine-tuned versions of these models are 2,296, 174, 869, 55 and 33 s, respectively. The fine-tuned Faster R-CNN model has the highest computational cost, with 2,296 s; the run time of the classic Faster R-CNN model is also higher than those of the MobileNet and YOLO models. On the other hand, the classic and fine-tuned YOLOv7 models have the lowest run times. The classical CNN models detect the house numbers in an image in approximately 0.603, 0.046, 0.235, 0.015 and 0.009 s, respectively, and the fine-tuned CNN models in about 0.633, 0.048, 0.240, 0.015 and 0.009 s, respectively. As a result of the fine-tuning process, the computational cost of the Faster R-CNN model increased by only 0.030 s in object detection; the proposed fine-tuning technique added only 0.002 s to the MobileNet model, and the extra computational cost for the YOLOv4 model is 0.005 s. The fine-tuning technique did not affect the average running time of the YOLOv5 and YOLOv7 models. In real-time door number detection, the YOLOv7 method works at least 66 times faster than the Faster R-CNN method, 5 times faster than the MobileNet model, 26 times faster than YOLOv4, and at least 1.5 times faster than YOLOv5. The YOLOv5 model operates approximately 40 times faster than the Faster R-CNN model, about 3 times faster than the MobileNet model, and about 15 times faster than the YOLOv4 model.
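A minimal timing sketch of the per-image comparison; `model` and `images` are placeholders for any of the five detectors and the 3,627-image test set, and the call convention follows the torchvision-style list-of-tensors interface:

```python
import time
import torch

def mean_inference_time(model, images, warmup=5):
    """Average per-image detection time in seconds."""
    model.eval()
    with torch.no_grad():
        for img in images[:warmup]:   # warm-up passes (GPU initialization)
            model([img])
        start = time.perf_counter()
        for img in images:
            model([img])
        elapsed = time.perf_counter() - start
    return elapsed / len(images)
```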
CONCLUSION
In this study, a CNN-based approach is proposed to detect house numbers with location information from natural images obtained in real time. The performance of the proposed system has been tested on real images of Kayseri province. In the proposed method, the classical Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 models, which are widely used CNN models, were applied. However, since the depths vary in natural scene images, sufficiently successful results could not be obtained. In other words, the distance to the door plate in the image varies, and in cases where the door plate is deep, the characters on the plate become challenging to read. The fine-tuning technique has been proposed to achieve higher performance on images with variable depths. The suggested fine-tuned Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 methods obtained f1 scores of 0.845, 0.775, 0.932, 0.972 and 0.889, respectively. Thanks to the fine-tuning technique, the f1 scores of these methods increased by 0.082, 0.098, 0.052, 0.029 and 0.047, respectively, compared to the classical methods. Among the proposed approaches, the fine-tuned YOLOv5 achieved the highest performance, with an f1 score of 0.972. On the other hand, regarding the run times of the proposed fine-tuned methods, the fine-tuned Faster R-CNN, MobileNet, YOLOv4, YOLOv5 and YOLOv7 detect objects in about 0.633, 0.048, 0.240, 0.015 and 0.009 s, respectively. The YOLOv7 model detects door numbers the fastest, with an average running time of 0.009 s.
In future studies, it is planned to perform hyperparameter optimization of CNN-based deep learning models with artificial intelligence optimization algorithms.
"Computer Science"
] |
Simultaneous Adsorption of Heavy Metals from Roadway Stormwater Runoff Using Different Filter Media in Column Studies
Stormwater runoff from roadways often contains a variety of contaminants, such as heavy metals, which can adversely impact receiving waters. The filter media in stormwater filtration/infiltration systems play a significant role in the simultaneous removal of multiple pollutants. In this study, the capacity of five filter media, namely natural quartz sand (QS), sandy soil (SS) and three mineral-based technical filter media (TF-I, TF-II and TF-III), to adsorb heavy metals (Cu, Pb and Zn) frequently detected in stormwater, as well as their remobilization due to de-icing salt (NaCl), was evaluated in column experiments. The column breakthrough data were used to predict the lifespan of the filter media. A column experiment operated under a high hydraulic load showed that all technical filters and the sandy soil achieved >97%, 94% and >80% load removals of Pb, Cu and Zn, respectively, while natural quartz sand (QS) showed very poor performance. Furthermore, treatment of synthetic stormwater by the soil and technical filter media met the requirements of the Austrian regulation regarding maximum effluent concentrations and minimum removal efficiencies for groundwater protection. The results showed that the application of NaCl had only a minor impact on the remobilization of heavy metals from the soil and technical filter media, while the largest release of metals was observed from the QS column. Breakthrough analysis indicated that the load removal efficiencies at column exhaustion (SS, TF-I, TF-II and TF-III) were >95% for Cu and Pb and 80-97% for Zn. Based on the adsorption capacities, filtration systems could be sized to 0.4 to 1% (TF-I, TF-II and TF-III) and 3.5% (SS) of their impervious catchment area, and the predicted lifespan of each filter medium was at least 35, 36, 41 and 29 years for SS, TF-I, TF-II and TF-III, respectively. The findings of this study demonstrate that soil-based and technical filter media are effective in removing heavy metals and can be utilized in full-scale stormwater filtration systems.
Introduction
Stormwater runoff from vehicle-trafficked areas and roofs contains a heterogeneous mixture of pollutants, including solids, heavy metals, organic pollutants such as polycyclic aromatic hydrocarbons (PAHs) and mineral oil hydrocarbons (MOH), nutrients and compounds of de-icing salts, which can cause significant hydrological and ecological impacts on receiving waters [1][2][3]. Heavy metals such as cadmium (Cd), chromium (Cr), copper (Cu), lead (Pb), nickel (Ni) and zinc (Zn) are the most frequently reported pollutants in roadway and parking lot runoffs, mainly emitted from vehicles and traffic-related activities [3][4][5][6]. Heavy metals are mobile in natural water ecosystems, non-degradable and potentially toxic, as they can accumulate in the environment, causing both short-term and long-term adverse effects [7,8]. Furthermore, various studies have noted that roadway runoff is likely to induce mutagenic/genotoxic effects due to the combined effects of heavy metals and PAHs [9,10]. Consequently, treatment of stormwater has become increasingly important to mitigate its negative ecological effects.
A diverse range of soil-based stormwater control measures, such as filter strips and swales, infiltration systems, storage facilities (e.g., detention basins, retention ponds and wetlands), filtration systems (storm filters) and porous pavement, have been widely used to reduce the adverse hydrological and ecological impacts on receiving waters [11,12]. However, some of these treatment technologies are not effective for the removal of dissolved pollutants, are spatially too limited, or suffer from early clogging [2,13]. Stormwater infiltration/filtration systems that utilize granular adsorptive filter media enabling high infiltration rates, and which can be retrofitted in small compact systems, are receiving increasing interest due to their ability to remove both dissolved and particulate pollutants [13][14][15]. The removal of pollutants is achieved via a number of processes, including sedimentation, filtration, sorption, ion exchange, surface complexation and transformation [5,12,16,17].
Studies under both laboratory [2,5,6,13,18,19] and field conditions [14,16,20] have investigated the ability of adsorptive filter media mixtures and soils to retain pollutants from percolating stormwater. For example, Thomas et al. [18] tested the performance of a mixed filter medium composed of crushed aggregate and three active ingredients (perlite, dolomite and gypsum) in a column experiment using synthetic stormwater and reported over 90% removal efficiencies for copper and zinc. The authors found that the media mix has an estimated lifespan of 14 to 22 years for copper and zinc loading. Bioretention systems with media mixes (sand, soil and mulch) achieved over 96% removal efficiency for oil/grease, suspended solids and Pb [19]. In a large-scale laboratory filter system, Reddy et al. [13] evaluated the efficiency of a mixed medium consisting of calcite, zeolite, sand and iron filings and observed that over 90% of the heavy metals (Cd, Cr, Cu, Pb and Zn) and 75-88% of the nutrients were removed from synthetic stormwater. Soil-based filters are efficient for the removal of solids, Cu, Ni, Pb, Zn and PAHs [5,21]. Unfortunately, the reported pollutant removal efficiencies, equilibrium/effluent concentrations and sorption capacities were highly variable among the studies and may not be comparable to field conditions. These variabilities could be related to many factors, including single-solute versus multi-solute solutions, influent concentration, pH, flow rate, flow direction (i.e., upflow vs. downflow mode) and filter bed height [21][22][23][24]. Column sorption experiments have mainly been conducted with metal concentrations much higher than the levels in real roadway runoff [6,25], and it is also important to consider the simultaneous removal of co-existing metals [23]. A candidate filter medium should be able to bind and adsorb multiple metals of significantly varying concentrations. In this context, the results of both laboratory [13,21,24,26] and field experiments [14,16] have demonstrated that soil-based and mixed-media decentralised stormwater infiltration/filtration systems are effective and affordable. Metals adsorbed to the filter media might not be permanently immobilized: de-icing road salts in winter periods may interfere with the operation of stormwater treatment facilities, for example through the release of chemicals [27,28]. In a column study, Norrström [28] demonstrated that a large part of the Pb, Cd and Zn in highway roadside soils is vulnerable to leaching when exposed to a high NaCl concentration (5.84 g/L). From field studies, Bauske and Goetz [29] also found a strong effect of NaCl solution on Cd and Zn. Additionally, studies have been conducted to examine the remobilization of heavy metals adsorbed onto filter materials used for stormwater treatment [6,25]. In a laboratory column experiment, Huber et al. [6] showed that pure NaCl (10 g/L) had a minor effect on the remobilization of heavy metals. Recently, NaCl solution has been used to investigate the remobilization of previously adsorbed heavy metals, which is a crucial test criterion for the certification of filter media in Austria [30] and filtration systems in Germany [31].
Increased mobility of heavy metals coincident with road salt application has been observed in roadside soils and in filter media used in stormwater filtration systems, arising via various mechanisms, including competition of salt-derived cations with positively charged heavy metal species for sorption sites on the solid phase (ion exchange), lowered pH, formation of chlorocomplexes and possible colloid dispersion [27][28][29]32]. Not all heavy metals respond to NaCl application in the same way: the mechanisms mentioned above exert their effect with different intensities depending on the heavy metal type, the total amount of heavy metal present, the ionic strength, the hydration radius and the number of electrolytes present in the system [32,33]. Filter media characteristics such as pH, organic matter/clay content, the amount and type of available charge sites and the mineralogical composition are also important factors to consider when investigating metal mobility. The cations of de-icing salt are important driving forces for the mobility of heavy metals as a result of competition for adsorption sites, such that adsorbed heavy metals can be displaced from the exchange sites into solution by Na ions [28,32]. In the cation exchange process, the selectivity of heavy metal displacement is determined by the concentration of ions, their valence, their degree of hydration and their hydration radius [34]. The order of adsorption of heavy metals, Ni > Cu > Co > Cd, coincides reasonably well with the reverse order of hydrated radii, Cd (4.26 Å) > Co (4.23 Å) > Cu (4.19 Å) > Ni (4.04 Å) [35]. Thus, de-icing salt is expected to have a minimal effect on the mobilization of heavy metals with smaller hydration radii and higher intrinsic binding constants. Another trigger is the increase in ionic strength caused by NaCl, which promotes the release of sorbed Cd, Cu, Pb and Zn [33].
Results of both laboratory [6,27,28] and field experiments [29] have demonstrated that de-icing road salt indeed has the potential to mobilize heavy metals previously adsorbed by soil and by individual (single) filter media. We hypothesised that application of de-icing road salt (NaCl) can mobilize major fractions of the heavy metals previously adsorbed by mixed mineral-based filter media. Consequently, investigating the simultaneous removal of multiple heavy metals, as well as the effect of de-icing salt on the mobilization of adsorbed heavy metals, under experimental conditions similar to real roadway runoff was deemed necessary.
The objectives of this study were: firstly, to determine the influence of the hydraulic loading rate on the simultaneous removal of Cu, Pb and Zn from synthetic stormwater using five different filter media in column sorption experiments; secondly, to investigate the impact of de-icing salt (NaCl) on the remobilization of adsorbed heavy metals; and finally, to assess the long-term performance of each filter medium using column breakthrough curves. To mimic heavy metal adsorption under natural environmental conditions, the column sorption studies were conducted under conditions close to those of real stormwater quality and treatment systems. The column study results were used to predict filter media lifespan based on effluent quality and removal efficiencies.
Chemicals and Analytical Instruments
All chemicals used were of analytical reagent grade (Merck KGaA, Darmstadt, Germany). Synthetic stormwater solutions containing Cu, Pb and Zn were prepared from analytical grade 1000 mg/L stock solutions (Titrisol®, Merck, Darmstadt, Germany) of CuCl2, Pb(NO3)2 and ZnCl2, respectively, mixed with de-ionised water to obtain the desired concentrations. The initial pH of the test solutions was adjusted to the desired value using dilute solutions of 0.1 M NaOH and 65% HNO3. Samples were preserved with 1% volume of suprapure 65% HNO3.
Filter Media
The performance of commercially available natural quartz sand (QS) without pre-treatment, sandy soil (SS) and three mineral-based technical filter media (TF-I, TF-II and TF-III) in removing heavy metals (Cu, Pb and Zn) from synthetic stormwater runoff was investigated through column tests. The sandy soil was excavated from a newly constructed highway runoff infiltration basin, and the coarse gravel fraction (diameter over 2 mm) was removed manually. Numerous adsorbents of different nature exist and can be utilized in mixed-media filter systems. According to ÖNORM B 2506-3 (2016), mineral-based mixtures of adsorptive materials are defined as technical filter media, here denoted "TF". Studies have shown that a combination of several filter media (for example zeolites, vermiculite, activated carbon, dolomite, sand and soil) is necessary to achieve effective simultaneous removal of multiple contaminants [2,5,13,15]. The technical filter media (TF-I, TF-II and TF-III) investigated in this study are combinations of various sorbents such as zeolite, vermiculite, dolomite, activated carbon, coconut fibre, expanded clay and soil media. All tested filter media were investigated without any physico-chemical treatment or modification. Physical characteristics and composition of the filter media are summarised in Table 1.
Experimental Design
The column experiments were carried out using two different column sizes with inner diameters of 32 and 100 mm, respectively. The aim of the 100 mm column experiment was to study the efficiency of metal removal under high hydraulic loading rates. Subsequently, the effect of de-icing road salt on the mobilisation of already retained metals was studied by flushing each filter column with sodium chloride (NaCl) solution. In the second set, continuous adsorption experiments were conducted using the 32 mm columns to investigate the long-term capacity of the filter media to remove metals and to predict their effective lifespan.
High Hydraulic Loading Conditions
High hydraulic loading may reduce stormwater retention times and could therefore reduce treatment efficiencies. The column test was designed to simulate the treatment efficiency of five different filter media at their maximum infiltration rates (saturated hydraulic conductivity, Ksat). The experiments were conducted in 800 mm high plexiglass columns with an internal diameter of 100 mm (cross-sectional area of 78.5 cm²) and an outlet diameter of 30 mm to allow the free flow of water by gravity. The filter media were packed to a depth of 300 mm, providing a filter bed volume (BV) of 2.36 L. A drainage layer of 250 mm gravel (4/8 mm) and a textile nylon mesh were placed at the bottom of the columns to prevent particle washout. To maintain uniform feed solution distribution and flow rate, 50 mm of gravel (4/8 mm) was placed on top of the filter media. The feed solution percolated through the filter columns in downflow mode (from top to bottom) using a precise peristaltic pump (Watson-Marlow 520U, Falmouth, UK) dynamically adjusted to a flow rate that maintained a ponding depth of 50 mm, simulating peak inflow. The flow rate was 2.1, 0.225, 0.980, 0.820 and 0.770 L/min for QS, SS, TF-I, TF-II and TF-III, respectively. For all technical filter media (TF-I, TF-II and TF-III) and QS the flow rate remained almost constant throughout the experimental period, but for the column packed with sandy soil the flow rate decreased slightly over time (from 0.225 L/min to 0.180 L/min).
The experiments were conducted in five successive runs simulating different stormwater sources and the impact of de-icing salt on metal mobility (Table 2). To assess the heavy metal removal efficiency, 84 L of synthetic stormwater was percolated per column per experimental run (Run 1-Run 4), so that each column received a total stormwater volume of 336 L. After passing this volume of water, the filter columns were allowed to drain for at least 24 h. Finally, to investigate the impact of de-icing salt on the mobilization of retained metals, each filter column was flushed with 42 L of de-ionised water containing 5 g/L of NaCl (Run 5). The NaCl concentration was based on common concentrations found in urban highway runoff in Austria [36] and on the Austrian Standard Method [30]. The influent pH levels (Table 2) were selected as the optimum condition; a higher pH would risk precipitation within the storage tank. Influent water samples were taken at the beginning of every experimental run, while effluent samples were collected after every flow-through of 28 L from each column (i.e., 3 effluent samples per experimental run per column) and analysed for dissolved concentrations of Cu, Pb and Zn. For the experiments with NaCl solution, one influent sample at the start of the experiment and several effluent samples at designated time intervals were collected in 100 mL glass bottles and preserved with 1% volume of 65% HNO3. In addition, a mixed sample was collected from the total effluent volume of each column. Remobilized metal mass was determined from the effluent concentrations and the effluent volume.
Column Breakthrough Experiments
Breakthrough curves of Cu, Pb and Zn for the five filter media were studied in small-scale plexiglass columns with an inner diameter of 32 mm and a length of 300 mm. The filter media were packed to a depth of 200 mm (yielding a bed volume of 160 mL) and used for the continuous flow test. A 20 mm layer of glass beads was placed at the bottom and top of the packed filter column to support the filter media and to ensure uniform flow distribution. The ratio of the inner diameter to the mean particle diameter (d50) was at least 10:1, so that wall effects can be considered negligible [38]. After packing, each column was slowly flushed with approximately 20 bed volumes (BV) of de-ionized water in upflow mode in order to saturate the filter media and remove air bubbles entrapped in the sorbent pores, thereby maintaining identical experimental conditions.
The column breakthrough experiment was devoted to urban highway runoff, where the target heavy metal concentrations were set to 50 µg/L Pb, 100 µg/L Cu and 400 µg/L Zn at an influent pH of 5.8 ± 0.20, based on stormwater quality reviews [1,37]. The influent solution was prepared in an aquarium tank and pumped in upflow mode (from bottom to top) using a high precision peristaltic pump (Ismatec IDEX, Laboratoriumstechnik GmbH, Wertheim, Germany). The flow rates were 50% of the flow determined at the saturated hydraulic conductivity (Ksat) of each filter medium; thus, the flow rate was different for each filter medium. Firstly, the effect of flow mode on heavy metal removal was examined by conducting column experiments in upflow and downflow mode operated in parallel using TF-II, while keeping all other experimental conditions constant. Finally, the sorption capacity of all five filter media was evaluated in upflow mode and their lifespan was predicted using the maximum sorption capacity at filter media exhaustion. The experiments were performed from Monday to Friday; during weekends, the filter columns were closed and kept saturated without flow in order to maintain similar experimental boundary conditions. The volume of solution kept in the closed filter column over the weekend was insignificant (<<1%) compared to the total flow-through volume. Effluent samples were collected in 100 mL glass bottles from the exit of the column at different intervals, preserved with 1% volume of 65% HNO3 and analysed for dissolved concentrations of Cu, Pb and Zn.
Operation Criteria
In Austria, purified wastewater should fulfil the criteria of the Groundwater Quality Ordinance (QZV) of 9 µg/L Pb and 1800 µg/L Cu [39] and the criteria of ÖNORM B 2506-3 [30]. Therefore, operation of a filter column was terminated (i.e., filter media exhaustion) when the Pb concentration in the effluent exceeded 9 µg/L, the Cu removal rate fell below 80%, the Zn removal rate fell below 50%, or a combination of these criteria occurred.
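As a concrete illustration, this termination logic can be written as a simple check. The following Python sketch is ours, not part of the study's methodology; the function name and argument layout are hypothetical, while the thresholds are the criteria stated above.

```python
# Illustrative sketch of the operation criteria described above.
PB_LIMIT_UG_L = 9.0        # QZV groundwater criterion for Pb [39]
CU_MIN_REMOVAL = 80.0      # required Cu removal efficiency (%)
ZN_MIN_REMOVAL = 50.0      # required Zn removal efficiency (%)

def is_exhausted(pb_effluent_ug_l, cu_removal_pct, zn_removal_pct):
    """Return True if any of the column termination criteria is violated."""
    return (pb_effluent_ug_l > PB_LIMIT_UG_L
            or cu_removal_pct < CU_MIN_REMOVAL
            or zn_removal_pct < ZN_MIN_REMOVAL)

# Example: an effluent sample with 10 ug/L Pb terminates the run.
print(is_exhausted(10.0, 95.0, 90.0))  # True
```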
The expected lifespan (years) of each filter medium was determined by dividing the cumulative adsorbed mass of a heavy metal at the filter media exhaustion point (Section 2.3.2) by the annual load of that heavy metal entering the treatment system. The annual heavy metal loads entering the treatment systems were calculated under the following assumptions: a filter area of 8.04 cm², a filter media depth of 300 mm, annual precipitation of 700 mm, dissolved runoff concentrations of 25 µg/L (Pb), 50 µg/L (Cu) and 200 µg/L (Zn), corresponding to 50% of the total concentrations, and a given size of the stormwater treatment system relative to its impervious catchment area. The size of the stormwater infiltration system relative to its impervious catchment area was estimated from the cumulative heavy metal load retained in the filter column.
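This lifespan estimate can be sketched as follows. This is a minimal illustrative calculation assuming the parameters stated above (8.04 cm² filter area, 700 mm annual precipitation, the dissolved runoff concentrations listed); the function name and the example numbers are hypothetical, not values from the study.

```python
# Hypothetical sketch: annual dissolved-metal load reaching the filter,
# computed from the catchment runoff volume, divided into the mass
# adsorbed at exhaustion (q_s).
FILTER_AREA_M2 = 8.04e-4        # 8.04 cm2 column cross-section
PRECIP_M_PER_YR = 0.7           # 700 mm annual precipitation
RUNOFF_UG_L = {"Pb": 25.0, "Cu": 50.0, "Zn": 200.0}  # dissolved (50% of total)

def lifespan_years(adsorbed_mg, metal, filter_to_catchment_ratio):
    """Years until exhaustion for a filter sized at the given fraction
    of its impervious catchment area."""
    catchment_m2 = FILTER_AREA_M2 / filter_to_catchment_ratio
    runoff_l_per_yr = PRECIP_M_PER_YR * catchment_m2 * 1000.0  # m3 -> L
    annual_load_mg = RUNOFF_UG_L[metal] * runoff_l_per_yr / 1000.0  # ug -> mg
    return adsorbed_mg / annual_load_mg

# e.g., a filter sized at 0.4% of its catchment (as estimated for TF-II),
# with a hypothetical 100 mg of Cu adsorbed at exhaustion:
print(round(lifespan_years(100.0, "Cu", 0.004), 1))
```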
Analytical Procedures
All samples were filtered through a 0.45 µm pore size non-sterile Phenex-RC 26 mm syringe filter (Phenomenex LTD, Aschaffenburg, Germany) for the analysis of dissolved metal concentrations and were preserved with 1% volume of 65% HNO3 until analysis. Cu, Pb and Zn concentrations were measured using inductively coupled plasma mass spectrometry (ICP-MS) according to DIN EN ISO 17294-2. The detection limits were 1.0, 0.5 and 3.0 µg/L for Cu, Pb and Zn, respectively. For simplicity, effluent concentrations below the detection limit were set equal to the detection limit, recognising that this conservative assumption might underestimate the Cu, Pb and Zn removals by only 1.0%. The pH was measured immediately after sample collection using a glass electrode (WTW pH 197i, Weilheim, Germany) according to DIN EN ISO 10523-C5.
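The detection-limit substitution rule described above amounts to a simple clamp; the helper below is a hypothetical sketch, using the detection limits stated in the text.

```python
# Sketch of the substitution rule: effluent values below the detection
# limit are set equal to the detection limit (a conservative choice).
DETECTION_LIMIT_UG_L = {"Cu": 1.0, "Pb": 0.5, "Zn": 3.0}

def censor(value_ug_l, metal):
    """Return the concentration after detection-limit substitution."""
    return max(value_ug_l, DETECTION_LIMIT_UG_L[metal])

print(censor(0.2, "Pb"), censor(12.0, "Pb"))  # 0.5 12.0
```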
Data Analysis
Metal removal efficiency for a sample taken at time t over the course of the experiment was calculated as follows:

η = (Cit − Cet)/Cit × 100 (1)

where Cit and Cet are the influent and effluent concentrations (µg/L) at time t, respectively, and η is the metal removal efficiency (%) of the sample taken at time t.
The influent metal load applied to each column until media exhaustion or termination of the experiment (mg) was calculated as follows:

Influent load = (Ci × Vi)/1000 (2)

where Ci is the influent concentration (µg/L) and Vi is the influent volume passed through the filter column (L). The mass of metal adsorbed until filter media exhaustion or termination of the experiment, qs (mg), was calculated using Equation (3):

qs = (Ci × Vi − Ce × Ve)/1000 (3)

where Ci and Ce are the influent and effluent concentrations (µg/L) and Vi and Ve the influent and effluent volumes (L).
The heavy metal adsorption capacity qe at column exhaustion, or at the end of the four successive dosings of synthetic stormwater (Run 1-4) simulating different runoff sources, per unit dry weight of filter media packed in the column (mg/g), was calculated using Equation (4):

qe = qs/M (4)

where M (g) is the total dry weight of filter media packed in the column.
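For clarity, Equations (1)-(4) as reconstructed above can be expressed as short Python functions. This is a sketch of our reconstruction, not code from the study; the function names and example numbers are our own.

```python
# Minimal sketch of Equations (1)-(4); concentrations C in ug/L,
# volumes V in L, masses in mg, media dry weight M in g.

def removal_efficiency(c_in_ug_l, c_out_ug_l):
    """Eq. (1): metal removal efficiency (%) for a sample at time t."""
    return (c_in_ug_l - c_out_ug_l) / c_in_ug_l * 100.0

def influent_load_mg(c_in_ug_l, v_in_l):
    """Eq. (2): influent metal load applied to the column (mg)."""
    return c_in_ug_l * v_in_l / 1000.0

def adsorbed_mass_mg(c_in_ug_l, v_in_l, c_out_ug_l, v_out_l):
    """Eq. (3): mass of metal retained in the filter, q_s (mg)."""
    return (c_in_ug_l * v_in_l - c_out_ug_l * v_out_l) / 1000.0

def adsorption_capacity_mg_g(q_s_mg, media_dry_weight_g):
    """Eq. (4): capacity per unit dry weight of packed media (mg/g)."""
    return q_s_mg / media_dry_weight_g

# Example: 100 ug/L in, 5 ug/L out, over 336 L treated.
q_s = adsorbed_mass_mg(100.0, 336.0, 5.0, 336.0)
print(removal_efficiency(100.0, 5.0), q_s)  # 95.0 31.92
```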
Effluent pH Variations during the Experiments
Although the pH of the feed multi-metal solution was adjusted to 5.8 ± 0.2 during all column experimental runs, the effluent pH was higher than the influent for all tested filter media (Figure 1). Effluent pH exhibited a general decreasing trend over the course of the experimental period for all columns, decreasing from 8.6 to 7.1 for TF-I, 9.1 to 7.9 for TF-II, 9.1 to 7.8 for TF-III, 9.2 to 7.9 for SS and 6.7 to 5.6 for QS. At the end of the experiments the effluent pH of the columns packed with soil-based and mineral-based technical filter media remained consistently high, ranging from 7.1 to 8.0, whereas the effluent pH of the QS column dropped from 6.7 to 5.6. The increase in pH suggests that the soil and technical filter media have good pH buffering capacity. The higher effluent pH observed in the technical filter media columns was mainly due to the calcite (dolomite) additive included as an additional amendment in the mixed media. The dissolution of the carbonate phase, impurities present in the filter media, adsorption of hydrogen ions from the solution as well as cation exchange cause a rapid increase of pH at the solid-water interface [4,13,17,40]. According to ÖNORM B 2506-3 [30], in column experiments with 100 mm inner diameter (2.36 L bed volume) a filter medium should achieve a minimum effluent pH of 6.0 ± 0.1 while flushed with an influent pH of 3.0 ± 0.1, either at a flow rate that produces a 5 cm ponding level for at least half an hour (for filter media with Ksat over 2.5 × 10⁻³ m/s) or for a minimum flow-through of 42 L (when Ksat is below 2.5 × 10⁻³ m/s). In this regard, the investigated technical filter media and sandy soil are suitable for utilization in stormwater filtration/infiltration systems, but QS failed to meet the minimum requirements.
Effect of High Hydraulic Load
The effect of hydraulic load on the adsorption of the selected heavy metals was investigated with synthetic stormwater solutions representing first-flush highway and roof runoff (Table 2). As shown in Figure 2, effluent heavy metal concentration trends indicate that the soil and technical filter media maintained high removal performance for all experimental runs (Run 1-Run 4). For the soil and technical filter media, effluent Pb and Cu concentrations were below the maximum allowed (9 µg/L for Pb and 1800 µg/L for Cu) and the required minimum removal efficiencies (80% for Cu and 50% for Zn) were achieved throughout the experimental period. As can be seen in Figure 2, the influent heavy metal concentration had only a minor influence on the treatment performance of the sandy soil and technical filter media. The QS filter column managed to remove all three heavy metals effectively from the synthetic highway runoff (Run 1 and Run 2); however, in the experiments with synthetic roof runoff (Run 3 and Run 4) the required effluent concentration of Cu and the required removal efficiencies of Cu and Zn were not met. It should be mentioned that the natural quartz sand (QS) turned out to contain some iron impurities, which could potentially serve as adsorption sites for heavy metal ions through surface complexation on iron oxyhydroxides. Metal ions that form outer-sphere complexes are readily exchangeable and are expected to be more easily displaced from the adsorbent surface [41]. For the experiment with zinc roof runoff (Run 4), effluent concentrations of Cu were significantly higher than the inlet concentration (150 µg/L) and exceeded the required level of 1800 µg/L [39]. This phenomenon was due to the displacement of weakly adsorbed Cu from previous dosings (Run 1-Run 3) in favour of the increased Zn influent concentration. The displacement of Cu may also be related to the relatively low influent pH, the strength of complexation and the adsorption order. The extent of simultaneous adsorption of heavy metals is influenced by the adsorbate concentration and the presence of competing metal ions [23]. This competitive adsorption also showed that the adsorption of Cu decreased significantly when a high concentration of Zn was added to the influent. The effect of competing heavy metal ions on adsorption efficiency was most pronounced in the QS filter column. For example, in the experiment with copper roof runoff (Run 3), the effluent Cu concentration reached 70% of the inlet concentration while the effluent Zn concentration reached 100% of its inlet concentration (500 µg/L). Accordingly, Cu outcompetes Zn in occupying the available sorption sites of QS. This is in agreement with the findings of Atanassova [42] that, in a multi-component system, an increase in the Cu concentration reduces the uptake of other heavy metals such as Ni, Cd and Zn.
In general, subsequent dosings of the columns with synthetic runoff showed that the soil and technical filter media were able to remove the heavy metals, significantly reducing the concentrations of Cu, Pb and Zn (Figure 2). The extent of heavy metal removal depends on the initial heavy metal concentration and on the filter media type and composition [4,24]. The performance of each filter medium in reducing the heavy metal levels was assessed based on the influent and effluent concentrations. All filters removed more than 98% of Pb. The mean removal efficiency of Cu was 89.6%, 97.4%, 98.5% and 90.5% through the filter columns packed with SS, TF-I, TF-II and TF-III, respectively. The mean removal efficiency of Zn was 93.4%, 96.6%, 98.7% and 89.2% through the filter columns packed with SS, TF-I, TF-II and TF-III, respectively. The results indicated that the mean removal efficiencies of Cu and Zn by the sandy soil and technical filter media are not statistically different. Nevertheless, the composition of the studied technical filter media appears to have played an important role in treatment efficiency. The mineral compositions of TF-I and TF-II were similar, except for the 3% dolomite in the case of TF-II. The technical filter medium with dolomite (TF-II) provided the best treatment performance, indicating that the carbonate content enhanced the removal of the studied metals. The overall cumulative metal removal efficiency of each filter medium was determined using the total influent and effluent loads (Run 1-Run 4). The calculated cumulative removal efficiencies and the corresponding adsorption capacities are presented in Figure 3; the influent load added to each column was 51.8 mg, 334.6 mg and 542.9 mg for Pb, Cu and Zn, respectively. Load removal efficiencies through the soil-based and technical filter media (SS, TF-I, TF-II and TF-III) were >95% for Cu and Pb and more than 87% for Zn. These results demonstrate that all filter media were effective for the simultaneous removal of heavy metals, except for QS, which had significantly lower removal efficiencies for Cu and Zn. It is important to note that the results presented in Figure 3 refer only to the amounts adsorbed following four successive dosings of synthetic stormwater (Run 1-Run 4) simulating different runoff sources and are not the maximum adsorption capacities. Adsorption capacity was in the order of Pb < Cu < Zn, which coincides well with the order of the influent loads (influent concentrations). The increase in adsorption capacity with increasing heavy metal influent load is due to the increase in the driving force for mass transfer as well as an increase in electrostatic interactions (physical adsorption relative to covalent interactions) [35]. Despite its high removal efficiency, the adsorption capacity for Pb was lowest, which is attributed to its very low influent load compared to Cu and Zn. Similar to our findings, Hatt et al. [2,5] showed that a wide range of media compositions (i.e., combinations of sand, sandy loam, vermiculite, perlite, compost, mulch and charcoal) achieved more than 90% removal of Cu, Pb and Zn from synthetic stormwater. The results indicated that the natural quartz sand (QS) has the lowest sorption capacity compared to the soil-based and technical filter media, which is attributed to its low surface area and few sorption sites [4,6,24].
Remobilization of Heavy Metals
Table 3 shows the mean and range of heavy metal concentrations measured in the effluent after each column was flushed with 5 g/L NaCl solution. Heavy metals were remobilized with different intensities depending on the metal and the filter media type. Regardless of the filter media type, the concentrations of mobilized heavy metals were in the order Zn > Cu > Pb, which coincides well with the order of the adsorbed masses. Effluent metal concentrations measured after the passage of one bed volume were highest but decreased successively with continued flushing. This suggests that precipitated or weakly adsorbed fractions of the retained metals were remobilized easily during the initial passage of the NaCl solution. As displayed in Table 3, the de-icing salt (NaCl) solution had similar effects on the soil-based and technical filter media for all three metals. Except for the effluents from QS, remobilization of metals was low and complied with the requirements of ÖNORM B 2506-3 [30]. This relatively low release indicates that adsorption was stable and that salts would not have a major influence on the remobilization of previously retained metals. However, the heavy metals retained in the QS filter column were released in the highest amounts, indicating a major effect of NaCl, so that this filter medium is not feasible for utilization in stormwater filtration systems. The main metal removal mechanism of QS is outer-sphere complexation, i.e., non-specific electrostatic adsorption to negatively charged functional group sites on the sand particle surfaces. The mass and load fractions of each heavy metal remobilized from the filter columns, compared to the total mass previously retained by each filter medium, are presented in Table 4. As shown in Table 4, the effect of NaCl application was most pronounced for QS. The results showed that an extensive mobilization of heavy metals from the QS column (5.4% of Cu, 6.8% of Pb and 22% of Zn of the total retained) occurred in response to NaCl application. Conversely, only a small fraction (<2.0%) of the retained heavy metals was mobilized from the soil and technical filter media. This implies that chemisorption was the principal metal removal mechanism and that salts would not have a major effect on metal mobilization. Our study concluded that the mobilization of Cu, Pb and Zn from the technical filter media (TF-I, TF-II and TF-III) and sandy soil (SS) in response to NaCl application, though not alarming, is most likely due to the combined effect of cation exchange and complexation with chloride. Similar results were reported from column studies using alternative filter media other than soil for the treatment of highway runoff [6,25]. Monrabal-Martinez et al.
[25] observed a small release of Cd, Cu, Pb and Zn (<3%) by NaCl from filter columns (pine bark, olivine and charcoal) preloaded with about 50 mg of each metal. Conversely, other studies with soils containing 17-50% clay reported an extensive remobilisation of heavy metals as a result of exposure to high concentrations of NaCl [27,28,32]. This could be attributed to the fact that NaCl promotes the dissolution of organic matter and/or clay, which favours the mobilization of heavy metals. The mechanisms of metal mobilization were association with coagulated or sorbed organic matter in combination with colloid dispersion, chloride complexation and ion exchange. Norrström [28] evaluated the impact of de-icing salt on the remobilization of Cd, Cu, Pb and Zn from soils collected from two highway runoff infiltration trenches (1.5-2.7 mg/kg Cd, 155-194 mg/kg Cu, 171-324 mg/kg Pb and 607-781 mg/kg Zn) and reported that 37-45% of Cd, 0.1-0.2% of Pb and 4.7-5.0% of Zn were leached by NaCl. Remobilization of heavy metals is a function of several mechanisms, including cation exchange, colloid dispersion, chloride complex formation, metal characteristics and the total concentration of metals in the media [27,28,32]. Overall, the results of the present study indicate that the heavy metals (Cu, Pb and Zn) are strongly attached to the soil and technical filter media.
Effect of Flow Mode on Heavy Metal Removal
The breakthrough curves for Cu, Pb and Zn removal in the two flow modes are shown in Figure 4. Heavy metal removal efficiencies in the upflow mode were generally higher than in the downflow mode. As shown in Figure 4, both the shape and the gradient of the breakthrough curves differed with flow direction. The breakthrough point for Cu and Zn, set at Ce/Ci = 10%, was reached at almost 2300 BV in the downflow mode and 7600 BV in the upflow mode. At 20% breakthrough of Cu, 9700 BV of synthetic stormwater had been treated by TF-II operated in the upflow mode, and the requirements of the Austrian regulation regarding the Pb maximum effluent concentration of 9 µg/L and the Zn minimum removal efficiency of 50% were still met. Accordingly, exhaustion (lifespan) of the filter medium (TF-II) was limited by Cu removal. The corresponding adsorption capacities of TF-II at the 20% breakthrough point of Cu were 573.8 mg/kg, 1182 mg/kg and 4669 mg/kg for Pb, Cu and Zn, respectively. In contrast, in the downflow mode 20% breakthrough of Cu was reached at nearly 7100 BV, at which point the filter medium was considered exhausted; the sorption capacity at the exhaustion point was 447 mg/kg, 771 mg/kg and 2771 mg/kg for Pb, Cu and Zn, respectively. Similar to our findings, shorter breakthrough times and lower adsorption capacities of metal ions in downflow mode as compared to upflow mode have been reported previously [22,43]; for example, Athanasiadis [22] reported a lower adsorption capacity of clinoptilolite in downflow operation. The observed performance difference between the downflow and the upflow mode is explained by differences in liquid holdup and by liquid maldistribution [43]. The upflow mode allows saturation of all vacant metal binding sites, which leads to a sorption process closer to equilibrium. These differences are attributed to the liquid holdup: in the upflow mode the liquid holdup is 100%, while in the downflow mode it is only a function of the volumetric flow rate. Furthermore, feeding the multi-metal solution in the upflow mode ensures saturated flow conditions and a uniform hydraulic distribution of the sorbate. Accordingly, under the same experimental conditions the upflow mode resulted in a more effective use of the filter media. The results of the present study demonstrated that the upflow mode was more efficient in maintaining saturated flow-through conditions, leading to a higher sorption capacity. Therefore, to predict the lifespan of filter media based on sorption capacity, column experiments operated in upflow mode are more appropriate.
Breakthrough Curves
The breakthrough curves of Cu, Pb and Zn in the column experiments are presented in Figure 5. Subsequent dosing of the columns with synthetic roadway runoff showed that treatment by the technical filter media (TF-I, TF-II and TF-III) and sandy soil (SS) filters effectively removed Cu, Pb and Zn simultaneously to effluent levels below the analytical detection limit (i.e., Ce/Ci < 0.01). After breakthrough (i.e., Ce/Ci = 0.1), metal effluent concentrations from all filter media started to increase over time as a function of treated bed volumes. The patterns of the metal breakthrough curves (Figure 5) were similar for all filter columns, and the steepness of the breakthrough curves decreased in the order Pb > Cu > Zn for all filter media. The volume of stormwater treated by the five filter media differed depending on the sorption capacity and the total flow-through volume at exhaustion. It should be noted that TF-II treated more stormwater before breakthrough than the other filter media types. Zn breakthrough (Ce/Ci = 0.1) occurred first in the QS filter medium, followed by SS, TF-I, TF-III and TF-II, respectively (Figure 5). The number of bed volumes to breakthrough was 55, 680, 1700, 2700 and 7600 for QS, SS, TF-I, TF-III and TF-II, respectively. Zn breakthrough generally occurred faster than that of Cu and Pb, which demonstrates that the influent metal concentration has a significant effect on the breakthrough curve. This is in good agreement with earlier studies showing that Zn is relatively mobile compared to Pb and Cu [5]. As indicated in Figure 5, full breakthrough (i.e., Ce/Ci = 1) of Pb, Cu and Zn was not observed in any filter within the three-month experimental running time, except for the QS filter column.
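Reading a breakthrough point off a measured curve amounts to locating the first crossing of Ce/Ci = 0.1 over the treated bed volumes. The sketch below is a hypothetical helper with made-up data, not part of the study's data analysis; linear interpolation between the bracketing samples is one plausible choice.

```python
# Hypothetical helper: first treated bed volume at which Ce/Ci
# reaches the breakthrough threshold, by linear interpolation.
def bed_volumes_at_breakthrough(bv, ce_over_ci, threshold=0.1):
    """Return the interpolated BV of the first threshold crossing,
    or None if breakthrough was not reached within the experiment."""
    for i in range(len(bv) - 1):
        r0, r1 = ce_over_ci[i], ce_over_ci[i + 1]
        if r0 < threshold <= r1:
            # interpolate between the bracketing samples
            return bv[i] + (threshold - r0) / (r1 - r0) * (bv[i + 1] - bv[i])
    return None

bv = [0, 2000, 4000, 6000, 8000]          # illustrative data only
ratio = [0.0, 0.02, 0.05, 0.08, 0.12]
print(bed_volumes_at_breakthrough(bv, ratio))  # 7000.0
```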
Experimental results for the QS filter column showed that breakthrough of Cu, Pb and Zn began at 50 BV and was nearly complete (Ce/Ci ≈ 1.0) after a total flow-through of 1000 BV. The QS filter column performed worst: metal removal was fair in the early dosing stages, but the effluent Pb concentration soon exceeded the groundwater quality criterion of 9 µg/L [39]. Similar to our findings, Genc-Fuhrman et al. [4] reported that, among 11 sorbents, sand had a low efficiency for the removal of heavy metals, which is attributed to its low specific surface area and cation exchange capacity. Removal of heavy metals increased with increasing pH through processes such as surface complexation between dissolved species and oxide and hydroxide groups [4]. The decrease in metal removal was in line with the pH drift curve (Figure 1). Measured effluent pH levels showed that QS has a very limited pH buffering capacity; consequently, its heavy metal removal performance was very poor compared to the other filter media (i.e., SS, TF-I, TF-II and TF-III).
The filter columns packed with soil and technical filter media showed maximum breakthrough values (Ce/Ci) of around 0.3 to 0.5 for Pb and Cu and 0.21 to 0.91 for Zn. The column data suggest that TF-II has a high affinity for all tested heavy metals, and the magnitude of the sorption (up to 7100 BV) remained constant with removal rates of over 93%. The initial metal ion concentration has a significant effect on the breakthrough time and filter media exhaustion. The breakthrough curves determined in this study were slower and less steep, so the number of BV passed at breakthrough reported here is much higher than in other studies. It is difficult to compare these results directly with those of other investigators because their influent concentrations are often higher than the levels used in this study [6,15,23].
It is possible that the adsorption capacity of the filter media is exhausted before clogging occurs, resulting in high effluent concentrations exceeding discharge water quality limits and low removal efficiencies of heavy metals [5]. The effluent concentration of Pb exceeded the maximum allowable 9 µg/L for groundwater protection [39] after the passage of 300, 1060, 3360 and 3600 bed volumes for QS, SS, TF-I and TF-III, respectively. Comparable bed volumes were treated before reaching 20% and 50% breakthrough of Cu and Zn, respectively. In contrast, for the filter column packed with TF-II, 20% breakthrough (80% removal) of Cu was reached after the passage of 9700 BV, while the effluent Pb concentration and the Zn removal efficiency fulfilled the requirements throughout the entire experimental duration. Therefore, the lifespan of TF-II was limited by Cu removal, and after the treatment of 9800 BV this filter medium was considered exhausted.
The cumulative heavy metal loads applied to each column, the mass retained in the filter column, the cumulative load removal efficiencies and the adsorption capacity at the filter media exhaustion point are summarized in Table 5. Due to the differences in volumetric flow rate and exhaustion point, the total flow-through volume and the total influent heavy metal loads were different for each filter. The influent load of individual contaminants applied to QS and SS was significantly lower than the loads applied to the technical filter media. Nevertheless, the load removal efficiencies of the filter columns were comparable. Over 90% of the Cu and Pb dosed into the columns was retained in the filter media, while Zn removal ranged from 62.6% (QS) to 93% (TF-II). The adsorption capacity (mg/kg) of each filter medium at the column exhaustion point varied significantly between the individual heavy metals. The breakthrough values show that the adsorption capacity decreases in the order Zn > Cu > Pb. This variability is possibly related to differences in influent concentrations, adsorption affinity (selectivity sequence) and the weight of the filter media. Overall, the adsorption capacity of the filter media was found to be in the order TF-II > TF-I, TF-III > SS > QS. The adsorbent mixture components of TF-I and TF-II (Table 1) were similar, except for the 3% dolomite addition in TF-II. Comparison of the adsorption capacities from the breakthrough curves showed that the adsorption of Cu, Pb and Zn onto the technical filter media was enhanced in the presence of dolomite. The results of this study thus support the theory that the presence of dolomite increases the pH of the solution above the solubility point, causing metals to precipitate as metal oxides and probably metal carbonates [13]. The lowest adsorption capacities, observed in the filter column packed with natural quartz sand (QS), could be due to its low affinity and non-reactive characteristics, in agreement with previous studies using sand for metal removal [4,26].
Filter Media Lifespan
The lifespan depends on the required removal efficiencies and effluent water quality requirements. The size of the stormwater treatment system relative to its impervious catchment area allows designers to predict the lifespan of a filter medium with respect to the adsorptive removal of heavy metals. Based on the cumulative heavy metal loading (Table 5), the investigated filter media could be sized at 4% (SS), 1% (TF-I and TF-III) and 0.4% (TF-II) of their impervious catchment size. In order to meet the required removal efficiencies of 80% for Cu and 50% for Zn, the predicted lifespans of the filter media were at least 35, 36, 41 and 29 years for SS, TF-I, TF-II and TF-III, respectively. The lifespans determined in the present study are relatively high compared to other studies [18,25]. For example, a mixed medium composed of perlite, dolomite and gypsum showed an estimated lifespan of 14 to 22 years for Cu and Zn [18]. The variability of the estimated lifespans may be attributed to the filter media composition, the influent concentration, the filter bed depth and the size of the treatment system relative to its impervious catchment area.
In practice, the lifespan of stormwater infiltration/filtration systems is usually highly dependent on mitigating sediment input to the system. Solids in stormwater may settle out at the surface of the filtration system, forming a cake layer, or be removed in the pores of the filter bed via filtration; both processes play a vital role in reducing the hydraulic performance of the filtration system through physical clogging. Clogging of filter media is recognised as the main limiting factor for the operational lifespan of stormwater infiltration/filtration systems [2,20]. A previous study of stormwater filtration systems constructed with filter media similar to TF-I showed a significant decrease in the infiltration capacity of the systems after five to seven years of operation due to the formation of a clogging layer at the surface of the filters, while the lifespan regarding heavy metal removal was 30 years [20]. The authors suggested that the hydraulic performance of the system could be recovered by scraping off the surface layer of accumulated particles and by replacing or back-flushing the geotextile on a periodic basis, approximately every 7 years. Further research should seek to understand the clogging of filter media receiving particles and contaminants under conditions that mimic real operation.
Conclusions
In the present study, the simultaneous adsorption of heavy metals (Cu, Pb and Zn) from synthetic stormwater runoff using quartz sand (QS), sandy soil (SS) and three mineral-based technical filter media (TF-I, TF-II and TF-III) was investigated. The column study results were also used to evaluate the effect of de-icing salt (NaCl) on the mobility of the retained metals, to estimate the size of the treatment system relative to its impervious catchment area and to predict the infiltration/filtration system lifespan. The results demonstrate that soil-based and mineral-based technical filter media are potentially efficient for the removal of heavy metals under high hydraulic loading conditions. Nearly all effluent concentrations measured during the infiltration of synthetic highway and roof runoff fulfilled the requirements of the Austrian regulations (9 µg/L Pb and 1800 µg/L Cu). Additionally, the required removal efficiencies for Cu (80%) and Zn (50%) were met during the whole experimental run. However, the Cu effluent from the QS column exceeded the required level of 1800 µg/L, and the required removal efficiencies for Cu and Zn were not met. Application of de-icing salt (NaCl) had only a minor effect on the remobilization of the adsorbed heavy metals from the sandy soil and all technical filter media columns, and all effluent concentrations fulfilled the Austrian regulations. However, results from the natural quartz sand (QS) column showed that approximately 6.8%, 5.2% and 22% of the retained Pb, Cu and Zn, respectively, were leached in response to NaCl application, and the effluent concentrations of Pb and Cu exceeded the maximum allowable concentrations.
The results of the long-term treatment performance tests (breakthrough curves) demonstrated that mineral-based technical filter media are able to treat large volumes of stormwater in small filtration systems relative to their impervious catchment area (0.4 to 1.0%), so that they are potentially suitable for utilization in compact stormwater treatment, particularly in urban landscapes where space is very limited. Breakthrough of Cu, Pb and Zn is not expected to occur during the operating life of such a system.
Figure 1. pH drift curves for column experiments conducted at an influent pH value of 5.8 ± 0.2.
Figure 2. Effluent concentrations of heavy metals for different synthetic stormwater dosings (Run 1-Run 4, see Table 2) under high hydraulic load experimental conditions as a function of treated volume.
Figure 3. Overall removal efficiency (left) and adsorption capacity (right) of heavy metals for each filter media type following four successive runs (Run 1-Run 4) simulating different stormwater sources.
Figure 4. Comparison of breakthrough curves of Cu, Pb and Zn in downflow and upflow mode as a function of bed volume at a volumetric flow rate of 50 mL/min (50% of the maximum saturated flow rate) and a filter bed volume of 160 mL. Note that to facilitate readability, the Y-axis scale differs between panels.
Figure 5. Breakthrough curves of the column experiments for metal mixtures. The lines are not fitted functions; they simply connect points to facilitate visualization.
Table 1. Composition and physicochemical properties of the filter media used in the study.
Table 2. Influent concentrations of heavy metals (µg/L) and NaCl (g/L) in the different experimental runs, and influent pH.
Table 3. Remobilization of previously adsorbed metals during road de-icing salt application (42 L of 5 g/L NaCl solution). Mean effluent metal concentrations are indicated in bold; italic values in brackets are ranges.
Table 4. Heavy metal adsorption and remobilization/desorption using 42 L of 5 g/L NaCl solution.
Table 5. Removal efficiencies and sorption capacity of each filter column at filter media exhaustion for Cu, Pb and Zn.
"Environmental Science",
"Chemistry"
] |
A Quantitative and Dynamic Model for Plant Stem Cell Regulation
Plants maintain pools of totipotent stem cells throughout their entire life. These stem cells are embedded within specialized tissues called meristems, which form the growing points of the organism. The shoot apical meristem of the reference plant Arabidopsis thaliana is subdivided into several distinct domains, which execute diverse biological functions, such as tissue organization, cell proliferation and differentiation. The number of cells required for growth and organ formation changes over the course of a plant's life, while the structure of the meristem remains remarkably constant. Thus, regulatory systems must be in place that allow for an adaptation of cell proliferation within the shoot apical meristem while maintaining the organization at the tissue level. To advance our understanding of this dynamic tissue behavior, we measured domain sizes as well as cell division rates of the shoot apical meristem under various environmental conditions that cause adaptations in meristem size. Based on our results we developed a mathematical model that explains the observed changes by a cell-pool-size-dependent regulation of cell proliferation and differentiation, and that correctly predicts CLV3 and WUS over-expression phenotypes. While the model shows stem cell homeostasis under constant growth conditions, it predicts a variation in stem cell number under changing conditions. Consistent with our experimental data, this behavior is correlated with variations in cell proliferation. Therefore, we investigate different signaling mechanisms that could stabilize stem cell number despite variations in cell proliferation. Our results shed light on the dynamic constraints of stem cell pool maintenance in the shoot apical meristem of Arabidopsis under different environmental conditions and developmental states.
Introduction
The stem cell (SC) niche in the shoot apical meristem (SAM) of Arabidopsis is composed of three functionally distinct zones [1][2][3]. The central zone (CZ), comprising the center of the upper three cell layers, is home to the stem cells (SCs), which divide slowly. Cells that are displaced laterally into the peripheral zone (PZ) remain undifferentiated, but divide more rapidly, before they are incorporated into organ primordia, which are located at the flanks of the SAM. Cells of the organizing center (OC), located below the CZ, divide very slowly and are the source of signals that specify SC identity in the CZ and thus set up a functional meristem (see Figure 1A). Despite the fact that the demand for cells varies strongly, the structure of the SAM is remarkably constant over a wide range of environmental conditions and developmental stages. In principle, this variation of cell supply could be achieved by two alternative mechanisms. The size of the SAM could be adapted, thus indirectly leading to a larger cell output rate proportional to the increase in meristem size. Alternatively, the size of the meristem could remain the same while only the cell output rate increases. This latter mechanism requires a shift in the balance of cell proliferation and cell differentiation in the SAM. It is currently unclear which of these alternative mechanisms operates in the SAM.
It was suggested that non-cell autonomous signaling between the stem cell pool and the OC is responsible for the homeostasis of SC number [4][5][6]. Fundamental to this mechanism is the negative feedback regulation between the homeodomain transcription factor WUSCHEL (WUS) and the short secreted peptide CLAVATA3 (CLV3). WUS is expressed in the OC and is essential for the maintenance of SC fate and expression of CLV3 [5]. CLV3 in turn is secreted by SCs and acts as a non-cell autonomous signal to repress WUS expression in the OC via a complex signaling pathway [4,5,7]. Additionally, recent experiments demonstrated the re-specification of peripheral cells into SCs, opening another route for the regulation of the stem cell pool size [8]. This study also suggested that cell re-specification is regulated by the OC, as it is accompanied by an expansion of the WUS expression domain. However, how these mechanisms could modulate the overall cell output rate of the SAM under varying conditions is unclear.
Previous modeling approaches of the shoot apex have mainly focused on the question of pattern formation by means of auxin signaling [9][10][11][12]. Furthermore, Jönsson et al. have used a reaction diffusion model in order to explain the re-formation of the WUS expression domain in the SAM after laser ablation of the CZ [13]. How the domain and thus cell pool structure of the SAM is regulated by changes in cell behavior, such as differentiation and proliferation has not been addressed by mathematical modeling so far.
In this study we address the question of SAM regulation quantitatively by combining experimentation and mathematical modeling using data derived from three experimental conditions. We determined the sizes of the SC domain, the OC and the PZ and measured cell proliferation rates in these domains. Our data revealed that the size of the SC pool as well as the size of the OC is correlated with the cell proliferation rate and is not invariant across environmental conditions. We used this information to develop a mathematical model of the CZ, which can explain variations in cell pool sizes by a balance of cell proliferation and differentiation rates. The model allows us to estimate the unobserved cell differentiation rates of the different cell pools and sheds light on the contribution of SC proliferation to the overall cell production of the SAM. We show that a model based on the well-established negative feedback between SC and OC domains is sufficient to explain CLV3 and WUS over-expression phenotypes. However, the model does not allow SC homeostasis under variable cell proliferation rates. By examining two possible feedback mechanisms, which both act to buffer the size of the SC pool despite large changes in cell proliferation rates, we identify functional constraints between an adaptation of the SAM to external cues and SC homeostasis.
Experimental analysis of cell behavior in the shoot apical meristem
To unravel the basic principles underlying the robustness of SAM function by quantitative measurements, we captured SAM domain sizes as well as cell proliferation rates over a wide range of SAM states. To this end we grew Arabidopsis plants in three different growth conditions to perturb SAM function and sampled at different developmental stages. We analyzed vegetative meristems of plants grown for 26 days in short days (SD, eight hours of light) at 23 °C, meristems during the transition to flowering of plants grown in long days (LD, 23 hours of light) at 16 °C, and inflorescence meristems of 26 day old flowering plants from an LD, 23 °C condition. To quantify the effects of these perturbations, we measured overall SAM size, the size of the functional subdomains, as well as the mitotic index of cells on histological sections of multiple individuals grown under the described conditions. We used in situ hybridization of CLV3 and WUS to visualize the SC pool and the OC, respectively. We also monitored the proliferation zone by in situ hybridization of SHOOT MERISTEMLESS (STM), while we used HISTONE H4 RNA expression as a marker to assess the mitotic index of the CZ and the PZ (see Figure 2).
The expression domains of the marker genes were quantified by automated image analysis of individual SAM sections (see Methods for details). The expression domains of CLV3 and WUS could be identified unambiguously and thus the area of their expression could be quantified precisely. In contrast, the expression of STM was not restricted to the SAM and extended into the vasculature in all three conditions investigated (see Figure 2). Therefore, the STM expression domain in the SAM could not be analyzed in two dimensions, but rather was quantified by its width measured along the surface of the SAM (see Figure 1). To measure the mitotic index, the relative expression area of HISTONE H4 mRNA in the CZ and the PZ was determined (see Figure 3, Table 1 and Methods).
While the overall structure of the SAM remained largely unchanged under all conditions, our quantitative analysis showed that meristem size, as measured by the surface distance between opposing primordia, varied greatly (see Figure 3 and Table 1). Transition apices on average were twice the size of vegetative meristems, and the increase in primordia distance was correlated with a doubling in the width of the STM expression domain (see Figure 3). This expansion of the proliferating cell pool could be viewed as an adaptation to a higher demand for cells during floral transition. Consistent with the enlarged meristem, we also found a two-fold increase in the size of the OC (see Table 1). Surprisingly, the size of the SC domain did not change significantly (Wilcoxon rank sum test for equal medians), pointing to a dynamic and independent regulation of SAM domain sizes. This observation was further supported by the data obtained from inflorescence meristems. Here we found that while meristem size was intermediate between vegetative and transition apices, the STM and WUS domains were practically identical to those in transition apices. Remarkably, we found an almost complementary behavior of the SC domain: while the change in size of the CLV3 signal was minor and not significant between vegetative and transition apices, despite the dramatic increase in meristem size, the SC domain was reduced in inflorescence apices, which show an intermediate size.
This reduction was, however, also not statistically significant (p = 0.052). Since the inflorescence meristem is the most mature stage of the apex, this reduction might indicate a gradual loss of stem cells over time. Taken together, our results highlight four important properties of the SAM, the first being that (i) meristem size is highly variable across conditions.

[Figure 3 caption fragment: C) Mitotic index (MI) of the CZ and the PZ. Mean and standard deviation of the data are given in Table 1.]

To extend our analysis beyond meristem organization, we measured the mitotic index of cells as a proxy for cell behavior in the three domains of the SAM. To this end, we quantified cells expressing the S-phase marker HISTONE H4 by means of in situ hybridization. As expected, we detected a significantly higher mitotic index for cells of the PZ when compared to CZ cells in vegetative and transition meristems, consistent with the function of the PZ as a cell amplification zone. We also observed on average a two-fold difference in mitotic index between the CZ and PZ of inflorescence meristems. However, this difference was not statistically significant, due to the variability of our data. Our results are consistent with previous studies of the cell division pattern in inflorescence meristems, which report a significantly lower number of cell divisions in the CZ compared with the PZ of the SAM [14,15]. However, recent studies based on real time lineage analysis of cell division patterns in the L1 have revealed a wide range of cell cycle length distributions in the inflorescence meristem of an individual plant, which might explain the variability of our data [16].
Since the size of the meristem varied over all conditions analyzed, we asked whether cell proliferation rates are also different between the conditions. We found that the mitotic index of the CZ and PZ changed significantly between vegetative and transition apices (CZ: p ≤ 0.05; PZ: p ≤ 0.01; Wilcoxon rank sum test for equal medians). This change was correlated with a significant increase of OC size (p ≤ 0.001), the width of the STM expression domain (p ≤ 0.05) and the overall apex size (p ≤ 0.001). Compared to transition apices, inflorescence meristems showed a significantly reduced cell proliferation for the PZ (p ≤ 0.01), which was accompanied by a significant decrease in the surface distance between newly emerging primordia (p ≤ 0.001).
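For readers who want to reproduce this style of comparison, a minimal sketch of the rank-sum test is shown below; the mitotic-index arrays are hypothetical placeholders, not the values measured in this study.

```python
# Minimal sketch of the two-sample comparison used above: a Wilcoxon
# rank-sum test for equal medians. The two arrays are hypothetical
# mitotic-index measurements, NOT the data from this study.
import numpy as np
from scipy.stats import ranksums

mi_cz_vegetative = np.array([0.04, 0.05, 0.03, 0.06, 0.04])  # hypothetical
mi_cz_transition = np.array([0.07, 0.09, 0.08, 0.06, 0.10])  # hypothetical

stat, p_value = ranksums(mi_cz_vegetative, mi_cz_transition)
print(f"rank-sum statistic = {stat:.3f}, p = {p_value:.4f}")
# A p-value <= 0.05 would indicate a significant change in the CZ
# mitotic index between the two growth conditions.
```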
Taken together, our experimental results provided evidence for a dynamic regulation of meristem domains, which is correlated with a modulation of cell behavior. Over the growth conditions examined, meristem size, as well as the dimensions of the STM domain and the OC, was correlated with the mitotic index of cells. In contrast, the size of the stem cell pool was less correlated with meristem size, despite the fact that cells of the CZ also showed variation in proliferation activity. Since current models based on the CLV-WUS feedback hypothesis [13] address meristem maintenance in fixed developmental conditions and therefore cannot account for such behavior, we developed a quantitative model to uncover the underlying logic of plant stem cell control.
A quantitative model for the dynamic behavior of meristem cells
To elucidate the underlying principles of meristem and domain size regulation, we developed a quantitative model to describe cell behavior in the shoot apical meristem. Our model is based on the assumption that cell proliferation, cell differentiation and re-specification are the basic size-determining mechanisms in the SAM. In this context, we defined loss of SC identity as differentiation. Two well-established interactions between the SC domain and the OC justify our basic assumptions: (i) CLV3 expression in the SCs is induced by WUS, which is expressed in the OC [5]. Since we used both genes as cell pool markers, we accounted for this positive interaction by requiring that SC formation is induced by the OC. Live-imaging experiments revealed that this induction can occur via a fast re-specification of peripheral cell identity to SC identity [8]. We used a linear relationship for the convenience of parameter estimation, which is also a good approximation in the case of low WUS levels. However, at high levels of WUS the re-specification rate probably saturates. (ii) Non-cell autonomous CLV3 signaling negatively acts on the expression of WUS [4,5,7,8]. We accounted for this observation within our model by a negative effect of the SCs on the size of the OC. Thus, SCs increase the differentiation rate of OC cells into non-meristematic cells. Finally, we assumed that the cell proliferation rate is proportional to cell pool sizes. Since we only had an indirect measure for the size of the proliferation zone, we could not use these data for model parameter estimation. Therefore, we only modeled the SC pool and the OC. As our data did not allow us to reliably distinguish the proliferation rates of the OC and the SC pool, we used an average proliferation rate for cells in the center of the SAM.
The assumptions listed above were incorporated into the following model for the SC pool size ($S$) and the size of the OC ($O$):

$$\dot{S} = a_1 S + r_1 O - (\lambda_1 + \lambda_2)\,S \qquad (1)$$

$$\dot{O} = a_1 O + \lambda_1 S - \lambda_3\,S\,O \qquad (2)$$

Figure 1B shows a graphical representation of the model. The cell proliferation rate constant is $a_1$. The re-specification rate of proliferating cells into SCs depends linearly on the size of the OC with a rate constant $r_1$, accounting for interaction (i). For simplicity, it was assumed that this interaction does not depend on the size of the proliferating cell pool $P$, which is omitted from the model. The SC differentiation rates are all proportional to $S$, with constants $\lambda_1$ for the SC-to-OC differentiation and $\lambda_2$ for the SC-to-PZ differentiation. The differentiation rate of the OC depends on $S$, reflecting interaction (ii), and has a rate constant $\lambda_3$.
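As a concrete illustration of Equations (1-2), the following minimal sketch integrates the two-pool model numerically; the parameter values are illustrative placeholders rather than the fitted values in Tables 2-3, so this is a toy run, not a reproduction of the study's fits.

```python
# Toy integration of the two-pool SC/OC model (Equations 1-2).
# Parameter values are illustrative assumptions, not fitted values.
import numpy as np
from scipy.integrate import solve_ivp

a1, r1 = 0.01, 0.02            # proliferation and re-specification rates (1/h), assumed
l1, l2, l3 = 0.01, 0.02, 0.05  # differentiation rate constants, assumed

def rhs(t, y):
    S, O = y
    dS = a1 * S + r1 * O - (l1 + l2) * S   # Eq. (1)
    dO = a1 * O + l1 * S - l3 * S * O      # Eq. (2)
    return [dS, dO]

sol = solve_ivp(rhs, t_span=(0, 5000), y0=[0.5, 0.5])
S_end, O_end = sol.y[:, -1]
print(f"relative pool sizes at t_end: S = {S_end:.3f}, O = {O_end:.3f}")

# Closed-form non-trivial steady state (Equations 3-4) for comparison:
S_star = (a1 * (l1 + l2 - a1) + r1 * l1) / (l3 * (l1 + l2 - a1))
O_star = (l1 + l2 - a1) * S_star / r1
print(f"analytic steady state: S* = {S_star:.3f}, O* = {O_star:.3f}")
```

With these placeholder values the trajectory relaxes onto the analytic steady state (S*, O*) = (0.4, 0.4), illustrating the stability claimed below.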
For the purpose of our study we focused our analysis of Equations (1-2) on the steady-state sizes of the SC and OC pools. Equations (1-2) have a trivial steady state at $(S^*, O^*) = (0, 0)$, which is unstable, and a stable, non-trivial steady state given by:

$$S^* = \frac{a_1(\lambda_1 + \lambda_2 - a_1) + r_1 \lambda_1}{\lambda_3(\lambda_1 + \lambda_2 - a_1)} \qquad (3)$$

$$O^* = \frac{\lambda_1 + \lambda_2 - a_1}{r_1}\, S^* \qquad (4)$$

For biological relevance the steady state must be positive, which requires $\lambda_1 + \lambda_2 > a_1$. In order to yield a predictive model we determined the values of all five model parameters from data. The cell proliferation rate $a_1$ was calculated using the mitotic index measurements. Note that the value of $a_1$ depends on the growth condition, as if it were controlled by external factors that change under each condition, e.g., nutrient availability or plant hormone levels. All other parameters are assumed to be independent of growth conditions. We use these minimal assumptions, since a specific functional connection between the differentiation and re-specification rates and growth conditions is presently unknown. The re-specification rate $r_1$ was estimated from the experimental data given in [8].
With these values at hand, the steady-state Equations (3-4) were used to estimate the differentiation parameters $\lambda_1$, $\lambda_2$ and $\lambda_3$ from the data given in Table 1. The estimated parameter values are listed in Table 2 and Table 3. A detailed description of the parameter estimation can be found in the Materials and Methods. Since the cell pool size data showed high variability in each condition, the parameter values could only be determined with limited confidence; the confidence intervals are also given in Table 3.
Our data showed that the size of the OC and the SC domains varied with the proliferation rate of cells in the CZ. However, the extent of this variation was quite different for the two cell pools. Using the optimized model parameters we compared the experimental results with the predicted response of our model to changes in the cell proliferation rate $a_1$. In the model, the steady-state level of the SC pool $S^*$ varied by more than a factor of two between the vegetative and the transition meristem using the respective values for the cell proliferation rate (see Figure 4A). In contrast, the steady-state level of the OC changed only 1.5-fold. More generally, it can be shown that for any positive parameter values of the model Equations (1-2) the steady state $S^*$ is more sensitive to changes in $a_1$ than the steady state of the OC. This can be shown by comparing the relative sensitivities of the steady states $S^*$ and $O^*$ to a change in $a_1$.
As mentioned above, a positive steady state of $S^*$ and $O^*$ requires $\lambda_1 + \lambda_2 > a_1$, and therefore $\partial_{a_1}(S^*)/S^* > 0$ holds. Since Equation (4) gives $O^* = (\lambda_1 + \lambda_2 - a_1)S^*/r_1$, it follows that

$$\frac{\partial_{a_1} O^*}{O^*} = \frac{\partial_{a_1} S^*}{S^*} - \frac{1}{\lambda_1 + \lambda_2 - a_1} < \frac{\partial_{a_1} S^*}{S^*}.$$

Thus, the change in size of the SC pool due to changes in the cell proliferation rate is larger than the respective change in size of the OC. The increased sensitivity of the SC pool is a result of the OC-controlled cell re-specification at the periphery of the SC pool. Our analysis highlights an important prediction of our model: variations in cell proliferation rates, as observed in different developmental and environmental conditions, lead to changes in SC number. This prediction is supported by our experimental data, as transition meristems on average show the largest SC domain compared to vegetative and floral meristems. However, these changes were small compared to those observed for other domains, suggesting that additional mechanisms buffer SC pool size. Notably, this SC variation does not rule out SC homeostasis under constant growth conditions; in fact, the model predicts a stable SC number if the cell proliferation rate does not change.
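The inequality above can also be checked numerically; the sketch below differentiates the closed-form steady states (3-4) with a finite difference, again using illustrative parameter values rather than the fitted ones.

```python
# Numerical check that the relative sensitivity of S* to a1 exceeds
# that of O* (Equations 3-4). Parameter values are assumed.
r1, l1, l2, l3 = 0.02, 0.01, 0.02, 0.05

def steady_state(a1):
    S = (a1 * (l1 + l2 - a1) + r1 * l1) / (l3 * (l1 + l2 - a1))
    O = (l1 + l2 - a1) * S / r1
    return S, O

a1, h = 0.01, 1e-6
(S0, O0), (S1, O1) = steady_state(a1), steady_state(a1 + h)
rel_sens_S = (S1 - S0) / (h * S0)
rel_sens_O = (O1 - O0) / (h * O0)
print(rel_sens_S > rel_sens_O)  # True for any admissible parameter values
```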
The role of feedback on stem cell homeostasis in different growth conditions

SC behavior has mainly been discussed in the context of constant developmental and environmental conditions, where the simple negative feedback between the SC and OC domains is able to produce SC homeostasis [1][2][3]. However, our modeling revealed that this feedback alone is unable to buffer SC pool size under changing growth conditions, a behavior which we had observed experimentally. Thus, additional regulatory mechanisms are necessary to achieve a stabilization of SC number. Alternatively, the large spread of our data might have obscured a more pronounced change in SC pool size. Since we could not distinguish these possibilities experimentally, we investigated them by mathematical modeling, asking which additional feedback mechanisms could give rise to SC homeostasis under changing growth conditions. An obvious mechanism to balance the SC pool size is the adaptation of the SC differentiation rate in response to a change in the cell proliferation rate. A similar mechanism was suggested to operate in the SC niche of the colonic crypt [17]. Here, the non-linear regulation of SC differentiation also leads to a robust control of the cell pool size. Following this idea, we introduced two alternative mechanisms that lead to an adaptation of the SC differentiation rate in the SAM. In both cases, the adaptation is based on a secreted differentiation signal $X$ that is either produced by the SC pool (i) or by the OC (ii) and is degraded linearly everywhere in the SAM. For simplicity, we assumed that the mobility and decay of $X$ are fast compared with the dynamics of the cell pools. Under these conditions the global concentration of $X$ is proportional to the size of the cell pool it originates from. Thus, the first model includes a differentiation signal $X$ produced by the SC pool. Using the above-mentioned approximation $X \sim S$ leads to a quadratic SC differentiation term.
The modified model reads

$$\dot{S} = a_1 S + r_1 O - (\lambda_1 + \lambda_2)\,S^2 \qquad (5)$$

$$\dot{O} = a_1 O + \lambda_1 S^2 - \lambda_3\,S\,O \qquad (6)$$

Solving for the non-trivial stable steady state gave

$$S^* = \frac{f + \sqrt{f^2 - 4\lambda_3(\lambda_1 + \lambda_2)\,a_1^2}}{2\lambda_3(\lambda_1 + \lambda_2)}, \qquad (7)$$

where $f = a_1(\lambda_1 + \lambda_2 + \lambda_3) + r_1\lambda_1$, and $O^*$ (Equation 8) follows from setting Equation (6) to zero. Biological relevance requires $f^2 > 4\lambda_3(\lambda_1 + \lambda_2)a_1^2$ and $\lambda_3 S^* > a_1$. In the alternative model the differentiation signal $X$ is produced by the OC, i.e., $X \sim O$, which is expressed by making the differentiation rates proportional to $O$ (Equations 9-10). Note that in order to maintain the SC pool size it is necessary that not only the SC differentiation rate but also the differentiation rate of the OC is regulated by this differentiation signal. The new model also has a trivial steady state $(S^*, O^*) = (0, 0)$, which is unstable, and two alternative non-trivial steady states, of which the stable one can be written in terms of two parameter combinations $A$ and $B$. Here, biological relevance requires $A > 0$ and $A^2 > 4 r_1 a_1 B$. The differentiation rates of each model were estimated as described above and are given in Table 3 (median and 95% confidence intervals for the differentiation rates of the three alternative models).
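A quick numerical sanity check of the SC-based feedback steady state is sketched below; the quadratic formula and its admissibility conditions follow the reconstruction above, with placeholder parameter values.

```python
# Steady state of the SC-based feedback model (quadratic SC
# differentiation, Equations 5-6). Parameters are illustrative.
import math

a1, r1 = 0.01, 0.02
l1, l2, l3 = 0.01, 0.02, 0.05

f = a1 * (l1 + l2 + l3) + r1 * l1
disc = f**2 - 4 * l3 * (l1 + l2) * a1**2
assert disc > 0, "no biologically relevant steady state"

S_star = (f + math.sqrt(disc)) / (2 * l3 * (l1 + l2))   # Eq. (7)
assert l3 * S_star > a1                                  # second condition
O_star = l1 * S_star**2 / (l3 * S_star - a1)             # from dO/dt = 0
print(f"S* = {S_star:.3f}, O* = {O_star:.3f}")
```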
The results for the SC-based feedback mechanism are shown in the phase plane diagram of Figure 4B. A five-fold increase in the cell proliferation rate increases the size of the OC three-fold and that of the SC pool two-fold. Thus, mechanism (i) reduces the sensitivity of the SC pool size to changes in the cell proliferation rate compared with the basic model in Figure 4A. The reduction in sensitivity is even stronger for the alternative OC-based feedback mechanism, as can be seen in Figure 4C. Since the size of the OC increases in response to elevated cell proliferation, the differentiation signal, and thus the SC differentiation rate regulated by the OC, increases accordingly, leading to a stabilization of the SC pool size. Thus, mechanism (ii) allows for an almost perfect SC homeostasis in the various SAM states. However, the reduced sensitivity to variations in the cell proliferation rate is accompanied by a high sensitivity towards changes in other model parameters; e.g., a 10% increase in $\lambda_3$ or $r_1$ leads to a loss of a stable steady state. Therefore, an OC-based feedback mechanism is much more fragile compared with the SC-based feedback mechanism. Taken together, both suggested mechanisms lead to a reduction in the sensitivity of the SC pool size to changes in the cell proliferation rate. If the differentiation signal originates from the OC, an almost perfect SC homeostasis under different environmental and developmental conditions can be achieved. This robustness, however, is accompanied by fragility with respect to other model parameters.
Regulation of cell output generated by the stem cells of the shoot apical meristem
The control of overall cell production per unit time, or cell output rate, is the major task of the shoot apical meristem in serving its function of supplying the growing plant with an appropriate amount of building material. To address the question of how much the SC pool contributes to the varying amount of cells produced in the SAM, we asked how the size of the SC pool in the different conditions is correlated with the differentiation rate into PZ cells, using our three fitted models. Figure 5 shows the cell output rate of the CZ as a function of the SC pool size. Each pair of values was calculated by varying $a_1$ continuously between 0.001 and 0.03. Notably, all three models predict a higher SC output rate in transition meristems when compared to vegetative meristems. Thus, an increase in SC proliferation contributes to meeting a higher demand for cells during floral transition. While the increase in the cell output rate is almost the same for all three models, the change in SC pool size is not. Whereas for the basic model Equations (1-2) the SC pool size scales linearly with cell output rate, both models including an additional feedback mechanism show a reduced change in SC pool size. For the OC-based feedback, the SC pool size is almost independent of the output rate.
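The output-rate curves of Figure 5 can be generated by sweeping $a_1$, as in the sketch below for the basic model; the output rate is taken here as the SC-to-PZ flux $\lambda_2 S^*$ (one reading of the definition in the Figure 5 caption), and the parameter values are again placeholders.

```python
# Sweep a1 between 0.001 and 0.03 (as in Figure 5) and record the
# SC pool size and the CZ cell output rate (SC-to-PZ flux l2 * S*)
# for the basic model. Parameter values are assumed.
import numpy as np

r1, l1, l2, l3 = 0.02, 0.01, 0.02, 0.05

a1_values = np.linspace(0.001, 0.03, 50)
a1_values = a1_values[a1_values < 0.95 * (l1 + l2)]  # stay clear of l1+l2 > a1 boundary

S_star = (a1_values * (l1 + l2 - a1_values) + r1 * l1) / (
    l3 * (l1 + l2 - a1_values))
output_rate = l2 * S_star  # SCs differentiating into PZ cells per unit time

for a, S, out in zip(a1_values[::10], S_star[::10], output_rate[::10]):
    print(f"a1 = {a:.4f}  S* = {S:.3f}  output = {out:.5f}")
```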
Prediction of CLV3 and WUS over-expression phenotypes
Since all fitted models can explain the experimental data with reasonable accuracy (see χ² values, caption of Table 3), we asked whether they could also correctly predict the results of experimental modulations in CLV3 and WUS expression [4,5,18]. Since our modeling approach did not allow us to exactly replicate the experimental setup of published experiments, such as those from Müller et al. or Schoof and colleagues [5,18], we tested two alternatives. First, we increased SC number as a means to simulate an ectopic expression of WUS, while as a second approach we increased the differentiation rate of OC cells, to account for increased CLV3 signaling emanating from the same number of stem cells, as described by Müller et al. [18]. Elevated CLV3 levels are known to repress endogenous WUS expression, despite the fact that variations in CLV3 signaling are compensated over a wide range [18]. Thus, we expected that an artificial enlargement of the SC domain, as in plants ectopically expressing WUS [5], would reduce OC size. Conversely, an increase in the negative SC-to-OC signaling should lead to a reduction in both OC and SC size. Using these phenotypes as a test case, we varied SC production and OC differentiation rates in all three of our models in order to simulate the respective over-expression experiments. Figure 6 shows a comparison of the phenotypes predicted by our three models. In order to avoid statements based on unknown and therefore arbitrary over-expression rates, only the functional relations between the sizes of the SC pool and the OC are shown. Interestingly, only the basic model without additional feedback is in agreement with the experimentally observed phenotypes. Here, the SC domain size increases in response to ectopic WUS expression, while the OC size, which is a proxy for the endogenous activity of the WUS promoter, is decreased, but not shut down completely (see Figure 6, black dashed curve). Elevated CLV3 levels lead to a reduction of the SC and OC size, as expected from experiments (see Figure 6, black solid curve). In contrast, the SC-based feedback model correctly predicts only the response to increased CLV3 levels (Figure 6, solid red line), but fails to do so in the case of ectopic WUS expression. In this scenario the SC-feedback model predicts an increase in OC size, which is in disagreement with experimental observations (see Figure 6, red dashed curve). The OC-based feedback model is very sensitive to experimental perturbations and allows only very limited ectopic WUS over-expression as well as elevation of CLV3 levels. The response in the latter scenario is also mispredicted (see Figure 6, blue solid curve).
Thus, both feedback mechanisms invoked to buffer the variations in SC number under changing growth conditions interfere with the ability of the model to explain modulations of the system at the genetic level.
Discussion
The complexity of known and unknown regulatory interactions in the SAM precludes an intuitive understanding of plant stem cell control [19]. It is reasonable to expect that the quantitative regulation of cell number is dependent on a feedback system that can sense the number of SCs and adjust SC proliferation and differentiation rates accordingly. Therefore, we expect that these rates ultimately depend on the number of cells in the SAM. Adopting this view allowed us to circumvent the problems of extracting quantitative information from the numerous known genetic interactions. Instead, we chose a cell pool size dependent description of SC regulation. One advantage of this approach is that we were able to directly address the question of how the output rate of meristem cells without stem cell properties (referred to as differentiated cells) is regulated, which is an important property of meristem function. It is noteworthy that despite this abstraction our model is still mechanistic, in the sense that it can be used to predict the effect of genetic or environmental perturbations that change cell proliferation or differentiation rates or otherwise change the size of a specific cell pool in the SAM.

[Figure 5 caption: Dependence of the CZ cell output rate on the SC pool size. The output rate is defined by the fraction of SCs that differentiate into PZ cells per unit time. Note that while the basic model Equations (1-2) shows a linear increase in SC pool size with output rate, both models including a feedback on the SC differentiation rate exhibit a reduced (Equations 5-6) or almost absent (Equations 9-10) change in the SC pool size while delivering the same increase in cell output rate. Circles: vegetative meristem from short days, 23 °C. Diamonds: transition meristem from long days, 16 °C. Squares: inflorescence meristem from long days, 23 °C. doi:10.1371/journal.pone.0003553.g005]
Based on this idea we have quantified different cell pool sizes under various environmental and developmental conditions, which cause an adaptation of the SAM organization. We were able to observe a systematic adaptation of cell pool sizes and cell proliferation rates of the CZ and PZ of the SAM in different conditions. While the variations in meristem and subdomain size as well as cell proliferation rates were striking, the correlation between these responses was non-trivial. Thus, we have employed mathematical modeling to deduce rules of meristem behavior from our experimental data. We have formulated a model based on the known domain structure of the SAM and determined the unknown cell differentiation rates by fitting the model to our new experimental data. This formed the basis for a systematic study of the influence of cell proliferation on the cell pool sizes of the SAM. An important simplification of our modeling approach is the implicit treatment of the spatial structure of the SAM by using cell pools that are connected via differentiation rates. This simplification allows us to arrive at a coarse-grained but nevertheless quantitative picture of SC regulation since all model parameters were identifiable from our data. While a cell-based model would allow answering specific questions, e.g., about the regulation of cell differentiation at the pool boundaries, it would also require a much finer spatial and temporal resolution of the data to identify all its parameters. With the advance of live imaging techniques [20,21] it will become possible to study cell pool dynamics in the SAM with much greater detail and thus allow quantitative modeling of cell behavior with high spatial and temporal resolution.
The most important observation made from our experimental data is that the meristem is a highly plastic tissue, which undergoes substantial changes in domain organization and cell behavior in response to environmental and developmental cues. In the context of this plasticity, the low variation in SC number under the growth conditions tested is remarkable. While our dataset is too limited to draw final conclusions, it suggests that under changing growth conditions, SC number is well buffered but not in perfect homeostasis, which is compatible with a homeostatic SC behavior under constant conditions. Since a simple feedback model is unable to account for this observed stability of SC number, we have included additional feedback systems into our model. A thorough analysis of these three models shed new light onto the dynamical constraints of SC regulation in the SAM: none of the models was able to correctly predict CLV3 and WUS overexpression phenotypes and SC homeostasis under changing growth conditions at the same time. One explanation for this could be that, due to data limitation, some of the underlying assumptions derived from experimentation might be incorrect. Alternatively, there could be unknown regulatory connections between the feedback systems, which are able to modulate the responses. However, the adaptation to changes in the environment involves the fully functional regulatory system, while interference at the genetic level, such as in over-expression experiments, might disable some parts of the regulatory network. Thus, we believe that the results obtained from modeling meristem behavior under various growth conditions are more relevant than those aimed at explaining over-expression phenotypes.
As a central assumption of our study we treated the cell proliferation rate as an externally controlled quantity that is adapted during the different environmental and developmental conditions. This allowed us to derive a quantitative model of the central meristem zone, which is able to predict the effect of experimental perturbations. However, additional internal feedback mechanisms might operate in the SAM to control SC proliferation and differentiation. For example, it was shown that ectopic co-expression of WUS and STM not only induces ectopic SCs, but also leads to organ formation, i.e., differentiated tissue from SCs [22][23][24]. Consistently, WUS is a direct activator of the floral patterning gene AGAMOUS [25], demonstrating its involvement in both proliferation and differentiation. Plant hormones strongly contribute to the regulation of this balance, and in the context of the root meristem the phytohormone cytokinin was shown to play an important role in cell differentiation [26]. Conversely, cytokinin is an essential signal for cell proliferation in the SAM [27][28][29]. A direct link between stem cell control and cytokinin signaling came from the finding that WUS directly represses the expression of ARABIDOPSIS RESPONSE REGULATOR 7 (ARR7), a negative element of cytokinin signal transduction [30]. Interestingly, ARR7 has a negative effect on WUS expression, providing another layer of feedback regulation. The intricate spatial regulation of cell proliferation and differentiation within the meristem almost certainly involves modulation of the cell cycle machinery in the various SAM domains. It has recently been shown that CYCLIN DEPENDENT KINASES of the B2 class (CDKB2;1 and CDKB2;2) are not only essential for proper cell cycle progression, but also for the correct spatial organization of the SAM [31]. Interestingly, the expression of these genes is dependent on WUS and STM function and their activity is at least partially mediated by plant hormones, such as auxin and cytokinin. Thus, the adaptation of cell proliferation leading to different cell pool sizes in different environmental and developmental conditions could be the result of a complex and highly branched regulatory network. Our study is a first attempt to uncover the basic regulatory principles of this network by a combined approach of quantitative data collection and modeling.
Plant Material and Growth Conditions
Plants of Columbia (Col-0) background were grown under three different light and temperature conditions in order to elicit variations in SAM size. All plants were harvested after 26 days. The three growth conditions were: short day (SD), 23 °C = vegetative SAM; long day (LD), 16 °C = SAM during floral transition; LD, 23 °C = inflorescence SAM.
In situ hybridization
In situ hybridizations were performed using a standard protocol [30]. The goal was the precise quantification of the marker expression area, not the absolute or relative expression level. Therefore, in order to achieve high optical resolution of the stained tissue and avoid spreading of the NBT-BCIP dye, the staining reaction was stopped as soon as single cells gave a clear signal. The maximum staining time was one day.
Image Acquisition and Image Analysis
Images were taken with a Zeiss AxioCam HR camera mounted on a Zeiss Axioplan 2 microscope at a resolution of 0.54 µm² per pixel. All images were acquired with the Zeiss AxioVision Image software and saved in TIFF format. Subsequent image analysis was performed on the intensities of the red channel, which gave the sharpest staining signal of the three color channels represented in the TIFF images. The expression areas of CLV3, WUS and HISTONE H4 were determined by thresholding relative to the intensity of the unstained tissue of the same image. This image-specific thresholding allows corrections for sample- and image-specific properties, such as background intensity and illumination. The threshold was determined as the mean of the unstained tissue intensity minus four standard deviations. Thresholded pixels which did not correspond to cell-shaped areas with a diameter ≥ 3 µm were removed. The mitotic index was determined as the ratio of the thresholded area to the total area of a square selection. For each developmental condition the total selection area for the CZ was adapted to the mean size of the CLV3 expression area under this condition. The selection for the PZ was directly adjacent to either side of the CZ and had the same size. The mitotic index of the PZ was averaged over both sides. The distance between the two inner primordia was measured along the outer epidermal layer of the SAM. All image analysis was performed with the Imaging Toolbox of the MATLAB software from MathWorks, Inc. All MATLAB scripts are available from the authors upon request.
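The thresholding procedure described here is straightforward to express in code; the sketch below is a minimal re-implementation in Python (the paper used MATLAB's Imaging Toolbox), with the pixel size and structure sizes taken from the text and the input arrays assumed to be prepared by the user.

```python
# Minimal sketch of the mitotic-index image analysis described above,
# re-expressed with numpy/scikit-image (the original used MATLAB).
import numpy as np
from skimage.morphology import remove_small_objects

PIXEL_AREA_UM2 = 0.54   # image resolution from the text (um^2 per pixel)
MIN_DIAMETER_UM = 3.0   # minimum diameter of a cell-shaped region

def mitotic_index(red, unstained_mask, selection):
    """Fraction of a square selection covered by HISTONE H4 signal.

    red: 2-D float array, red-channel intensities of one section.
    unstained_mask: boolean mask marking unstained tissue in `red`.
    selection: pair of slices delimiting the square CZ or PZ window.
    """
    # Image-specific threshold: mean of unstained tissue minus 4 SD.
    # Stained (NBT-BCIP) cells are darker, so signal = below threshold.
    ref = red[unstained_mask]
    stained = red < (ref.mean() - 4 * ref.std())

    # Drop thresholded specks smaller than a cell-shaped region
    # (diameter >= 3 um at 0.54 um^2 per pixel, roughly 13 pixels).
    min_area_px = int(np.pi * (MIN_DIAMETER_UM / 2) ** 2 / PIXEL_AREA_UM2)
    stained = remove_small_objects(stained, min_size=min_area_px)

    rows, cols = selection  # e.g. (slice(10, 40), slice(50, 80))
    window = stained[rows, cols]
    return window.sum() / window.size
```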
Parameter Estimation and Mathematical Modeling
Numerical analysis of the model was performed with the MATLAB software from MathWorks, Inc. For parameter estimation, the measured areas $A$ for CLV3 and WUS were transformed to volumes $V$ assuming a spherical symmetry of the cell pools:

$$V = \frac{4}{3\sqrt{\pi}}\,A^{3/2}.$$

This data transformation is appropriate in order to reflect the three-dimensional structure of the cell pools in the SAM. However, we note that the main conclusions of the modeling do not depend on this data transformation. Subsequently, all volumes were scaled to relative quantities by taking the mean CLV3 expression volume of the inflorescence meristem as a reference volume $V_{ref}$. The transformed data relate to the dynamical variables of our model as $S_i = V_i^{CLV3}/V_{ref}$ and $O_i = V_i^{WUS}/V_{ref}$, where $i$ is the index of the sampled conditions. The work of Reddy et al. revealed that cell re-specification precedes cell proliferation and is probably controlled by the OC [8]. This justifies the simple model $\dot{S} = r_1 O$, used to calculate the re-specification rate $r_1$ via the approximation

$$r_1 \approx \frac{S(\Delta t) - S(0)}{\Delta t\, O},$$

evaluated from $\bar{A}_{clv3}$ and $\bar{A}_{wus}$, the mean CLV3 and WUS expression areas of the inflorescence meristem, respectively. The factor $c$ is the fold increase in CLV3 expression area after a time $\Delta t = 24$ h and is on the order of two [8]. The standard error of $r_1$ was calculated by error propagation (see Table 2). The mitotic index of a given tissue corresponds to the probability of observing a proliferating (= stained) cell within the tissue and is given by the ratio of the expression length of the HISTONE H4 marker ($L_{H4}$) to the total cell cycle length ($L_{cc}$). The average cell proliferation rate $a$ of a given cell pool can then be calculated as:
$$a = \frac{1}{L_{cc}} = \frac{MI}{L_{H4}}.$$

We assume $L_{H4} = 10$ h as the average HISTONE H4 marker expression length [32]. The calculated values for the CZ and PZ are given in Table 2. The remaining differentiation rates $\lambda = (\lambda_1, \lambda_2, \lambda_3)$ were estimated by least-squares fitting of the steady-state equations of our models to the means of the CLV3 and WUS expression areas in all three developmental stages of the SAM, minimizing a least-squares functional of the form

$$\chi^2(\lambda) = \sum_i \left[\left(S^*(\lambda; a_{1,i}) - S_i\right)^2 + \left(O^*(\lambda; a_{1,i}) - O_i\right)^2\right].$$

Confidence intervals for the differentiation rates were determined by a bootstrap procedure. One thousand bootstrap samples were generated from the complete data, and the mean and standard deviation of each sample were used for parameter estimation. The resulting parameter distributions were used to calculate the median and 95% confidence intervals of the three differentiation rates (see Table 3).
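Translated into code, this estimation pipeline could be sketched as follows; all data arrays and the $r_1$ value are placeholders (not the measured values), and the χ² is the generic form written above rather than the paper's exact MATLAB implementation.

```python
# Illustrative sketch of the parameter-estimation pipeline described
# above. All data arrays are placeholder values, NOT the measurements.
import numpy as np
from scipy.optimize import minimize

L_H4 = 10.0                                 # h, HISTONE H4 expression length [32]
mi_center = np.array([0.05, 0.08, 0.06])    # mitotic index per condition (assumed)
a1 = mi_center / L_H4                       # a = MI / L_H4 (1/h)

def to_volume(area):
    """Area-to-volume transform assuming spherical cell pools."""
    return (4.0 / (3.0 * np.sqrt(np.pi))) * area ** 1.5

A_clv3 = np.array([900.0, 1000.0, 850.0])   # mean CLV3 areas (assumed)
A_wus = np.array([1100.0, 2000.0, 1900.0])  # mean WUS areas (assumed)
V_ref = to_volume(A_clv3[2])                # inflorescence CLV3 volume as reference
S_data, O_data = to_volume(A_clv3) / V_ref, to_volume(A_wus) / V_ref

r1 = 0.02                                   # re-specification rate, assumed here

def steady_state(lmbda, a):
    l1, l2, l3 = lmbda
    S = (a * (l1 + l2 - a) + r1 * l1) / (l3 * (l1 + l2 - a))
    return S, (l1 + l2 - a) * S / r1

def chi2(lmbda):
    l1, l2, l3 = lmbda
    if min(lmbda) <= 0 or (l1 + l2) <= a1.max():
        return 1e9                          # penalize inadmissible parameters
    S, O = steady_state(lmbda, a1)
    return np.sum((S - S_data) ** 2 + (O - O_data) ** 2)

fit = minimize(chi2, x0=[0.01, 0.02, 0.05], method="Nelder-Mead")
print("lambda_1, lambda_2, lambda_3 =", fit.x)
# Confidence intervals: refit chi2 on ~1000 bootstrap resamples of the
# per-section measurements (omitted here for brevity).
```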
Simulation of elevated CLV3 levels and ectopic WUS overexpression
In order to simulate the effect of elevated CLV3 levels, the differentiation parameter $\lambda_3$ was increased from its basal level. Thereby, the SC pool size can be used as a proxy for the activity of the endogenous CLV3 promoter. Ectopic WUS over-expression was simulated by adding a constant production rate to the dynamical equation for the SC pool size. This enables visualization of the activity of the endogenous WUS promoter by monitoring OC levels. The new steady states were computed by numerical integration.
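The two perturbation protocols translate directly into the model equations: elevated CLV3 as an increase of $\lambda_3$, ectopic WUS as a constant source term in the $\dot{S}$ equation. A minimal sketch with assumed baseline parameters:

```python
# Sketch of the two in-silico perturbations described above.
# Baseline parameter values are illustrative assumptions.
from scipy.integrate import solve_ivp

a1, r1, l1, l2, l3 = 0.01, 0.02, 0.01, 0.02, 0.05

def make_rhs(l3_eff, wus_source):
    def rhs(t, y):
        S, O = y
        dS = a1 * S + r1 * O - (l1 + l2) * S + wus_source  # ectopic WUS term
        dO = a1 * O + l1 * S - l3_eff * S * O              # l3_eff models CLV3 level
        return [dS, dO]
    return rhs

def new_steady_state(l3_eff=l3, wus_source=0.0):
    sol = solve_ivp(make_rhs(l3_eff, wus_source), (0, 20000), [0.4, 0.4])
    return sol.y[:, -1]

print("baseline      :", new_steady_state())
print("elevated CLV3 :", new_steady_state(l3_eff=2 * l3))      # both pools shrink
print("ectopic WUS   :", new_steady_state(wus_source=0.002))   # S up, O down
```

With these placeholders the doubled $\lambda_3$ shrinks both pools, while the constant WUS source enlarges the SC pool and reduces, but does not abolish, the OC, matching the qualitative behavior of the basic model described in the Results.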
"Biology"
] |
Colorimetric Humidity Sensor Using Inverse Opal Photonic Gel in Hydrophilic Ionic Liquid
We demonstrate a fast-response colorimetric humidity sensor using a crosslinked poly(2-hydroxyethyl methacrylate) (PHEMA) in the form of an inverse opal photonic gel (IOPG) soaked in 1-butyl-3-methylimidazolium tetrafluoroborate ([BMIM+][BF4−]), a non-volatile hydrophilic room-temperature ionic liquid (IL). Evaporative colloidal assembly enabled the fabrication of a highly crystalline opal template, and subsequent photopolymerization of PHEMA followed by solvent etching and final soaking in the IL produced a humidity-responsive IOPG showing a highly reflective structural color by Bragg diffraction. Three IOPG sensors with different crosslinking densities were fabricated on a single chip, where a lightly crosslinked IOPG exhibited a color-change response over the entire visible spectrum for humidity changes from 0 to 80% RH. As the water content in the IL increased, the thermodynamic interactions between PHEMA and [BMIM+][BF4−] became more favorable, giving a red-shifted structural color owing to longitudinal swelling of the IOPG. The highly porous IO structure enabled fast humidity-sensing kinetics, with response times of ~1 min for both swelling and deswelling. Temperature-dependent swelling of PHEMA in [BMIM+][BF4−] revealed that the current system follows upper critical solution temperature (UCST) behavior, with a diffraction wavelength change as small as 1% for temperature changes from 10 °C to 30 °C.
Introduction
There has been growing interest in the utilization of the opal templating method for the fabrication of various photonic crystal (PC) devices, such as stimuli-responsive colorimetric sensors and reflective full-color displays [1,2]. As a nature-mimicking process, opal templating provides manifold advantages compared to other fabrication methods, such as a simple and low-cost process, color tunability, and responsiveness to various external stimuli [3,4]. The origin of color in a PC stems from its periodic structure with a lattice spacing d, which reflects incident light of a specific wavelength λ at an angle of diffraction θ, as represented by Equation (1); the reflected color is thus often called the structural color.
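Equation (1) itself is not reproduced in this excerpt; assuming the common Bragg-Snell form for opal films, λ = 2d(n_eff² − sin²θ)^(1/2), a small helper for estimating the reflected wavelength might look like the sketch below (treat the exact form of the equation as an assumption).

```python
# Estimate of the first-order diffraction wavelength for a 1D-PC
# (opal film with {111} planes parallel to the substrate), assuming
# the common Bragg-Snell form: lambda = 2*d*sqrt(n_eff^2 - sin^2(theta)).
# Whether this matches the paper's Equation (1) exactly is an assumption.
import math

def bragg_wavelength(d_nm: float, n_eff: float, theta_deg: float = 0.0) -> float:
    """First-order reflected wavelength (nm) for lattice spacing d_nm."""
    s = math.sin(math.radians(theta_deg))
    return 2.0 * d_nm * math.sqrt(n_eff**2 - s**2)

# Example: 220 nm PS spheres give d_111 ~ 0.816 * 220 nm ~ 180 nm;
# with n_eff ~ 1.42 (close to the IL index) this lands in the visible.
print(f"{bragg_wavelength(180, 1.42):.0f} nm")  # ~511 nm, green
```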
If d or the effective index of refraction (n_eff) can be varied by a certain stimulus, the structural color will change as well, and such a PC will act as a stimuli-responsive colorimetric sensor. Light diffraction can occur either longitudinally or laterally. Longitudinal light diffraction is utilized in one-dimensional (1D) PCs (e.g., multi-layered structures, opal films with the {111} facet parallel to the substrate), to which the Bragg equation can be applied, while lateral diffraction occurs in two-dimensional (2D) PCs such as diffraction gratings or colloidal monolayers. To be utilized as a colorimetric sensor, likewise, d should change longitudinally in a 1D-PC, while a lateral variation of d should occur in a 2D-PC. Comprehensive studies have been conducted on the fabrication of stimuli-responsive sensors using 1D- or 2D-PC structures via opal templating techniques. From the mechanistic point of view, PC sensors can be either field-responsive (e.g., electric field [5][6][7][8], magnetic field [9], pressure [10], temperature [11][12][13]) or mass-responsive, and the latter is often called a chemical sensor. A variety of analytes can be investigated with chemical sensors, such as ions in the aqueous phase as well as gaseous species. For the fabrication of PC sensors, volume-changeable hydrogels are generally utilized [14][15][16]. In a variety of PC chemical sensors, the driving force of hydrogel swelling is related to the development of an osmotic pressure at the interface between the hydrogel and the bulk solution, which is induced by the distinct distribution of ionic charges between them. On the other hand, there have also been studies on PC chemical sensors based on thermodynamically driven swelling/deswelling of a hydrogel in the aqueous phase upon inclusion of gaseous analytes (e.g., water vapor, ammonia, CO2) [17][18][19][20]. T. Kanai et al. reported the swelling of a gel-immobilized colloidal PC in a hydrophilic ionic liquid (IL) [21]. Smith et al. reported an opal-templated 2D PC gas sensor soaked in a 1,3-diallylimidazolium bis(trifluoromethanesulfonyl)imide IL, which showed color-change responses to water or ammonia vapor [20]. Tian et al. reported that opal-templated copolymers of styrene, methyl methacrylate, and acrylamide can be used as a PC humidity sensor [18]. In the aforementioned studies on humidity sensors, however, none utilized a porous structure, which can provide rapid response kinetics. Recently, Barry et al. reported a humidity sensor having a porous inverse opal structure of polyacrylamide, which showed a rapid response time (~20 s) for humidity sensing, while the sensitivity was relatively poor [17]. In general, fast response kinetics for humidity sensing can be achieved by incorporating an intermediate liquid phase, like an IL, for water sorption [18]. In this study, we investigate a crosslinked inverse opal photonic gel (IOPG) soaked in 1-butyl-3-methylimidazolium tetrafluoroborate ([BMIM+][BF4−]), a non-volatile hydrophilic room-temperature ionic liquid, to demonstrate a fast-response and high-sensitivity humidity sensor.
Chloroform and acetonitrile (ACN) were purchased from Duksan Chemical (Seoul, Korea) and Daejung Inc. (Seoul, Korea), respectively. Deionized (DI) water was produced with a water purifying system (Human Technology). Methods: Polystyrene (PS) µ-spheres with a narrow size distribution were synthesized by emulsion polymerization [22]. Potassium persulfate and SDS were fully dissolved in N2-purged DI water heated to 70 °C, and emulsion polymerization was carried out upon addition of styrene which had been filtered through activated alumina for inhibitor removal. After polymerization for 4 h, the µ-sphere dispersion was filtered through pre-cleaned cotton fibers and dialyzed using a semi-permeable cellulose membrane tube (MWCO 12,000-14,000, MFPI) to remove the remaining impurities. The purified aqueous dispersion of µ-spheres was about 15 wt %, and the average sphere diameter was characterized to be 220 nm. For fabrication of the IOPG humidity sensor, a recently developed technique called 'directed enhanced evaporation of water for colloidal assembly' (DEECA) was used, as schematically shown in Figure 1 [23]. The template was composed of three parts: a top slide, a spacer, and a bottom slide. To make a top slide, a standard-size glass slide (2 × 6 cm²) with three drilled holes (diameter = 1 mm) was treated with trichlorooctadecylsilane (Sigma) in i-octane to render it hydrophobic. As a spacer, a 30 µm-thick Surlyn® (Dupont, Wilmington, DE, USA) film was cut to form three individual channels when hot-pressed between a top and a bottom slide. Upon completion of assembly, the bottom of each channel was wide open, while the drilled hole was placed at the top of each channel. An aliquot of the aqueous suspension of 15 wt % PS µ-spheres was infiltrated from the bottom of each channel, and the top holes were sealed with Teflon®-tape (Dupont). After 4 h of colloidal assembly induced by water evaporation, the DEECA cell was further air-dried for 24 h and thermally annealed at 80 °C for 3 h. A precursor mixture of 2.5 g of 2-hydroxyethyl methacrylate (HEMA) (96%, Junsei), 0.025 g of ethylene glycol dimethacrylate (EGDM) (98%, Sigma-Aldrich), 0.075 g of Irgacure-651® (Ciba Specialty Chemicals), and 0.625 g of DI water was prepared, and a small aliquot of the precursor mixture was infiltrated within the interstices of the colloidal assembly in the DEECA cell, which was photopolymerized by exposure to a UV lamp (Spectroline, MODEL5B-100P/F) for 1 h. After removing the top slide from the DEECA cell, the PS µ-spheres were etched away by immersing the cell in a chloroform bath for one day, and the resulting IOPG was subsequently rinsed with chloroform and ACN. The IOPG in ACN was transferred to [BMIM+][BF4−] IL and finally vacuum-dried at 40 °C overnight in order to completely remove the ACN, leaving the dried IL behind. An as-prepared IOPG in IL was placed in a custom-made desiccator (LK Lab Korea) in which the relative humidity had been kept very low (<1.0% RH) by using CaSO4 drierite. The relative humidity was monitored using a digital humidimeter (Traceable®, Control Company, Webster, NY, USA). The humidity was increased by placing wet cotton balls in the desiccator. The color changes of the IOPG were characterized by a digital camera or a custom-made reflectance measurement system equipped with a reflective microscope (L2003A, Bimeince, Seoul, Korea) and a UV-vis spectrometer (AvaSpec®, Avantes, Apeldoorn, The Netherlands).
The refractive index of the IL with varying water content was measured using a digital refractometer (RX-5000a, Atago, Tokyo, Japan).
Results and Discussion
The sensor material used in this study is a crosslinked PHEMA, which is well known for its hydrophilicity, biocompatibility, and rubber elasticity [24]. Opal-templated photopolymerization of PHEMA via the DEECA process provided a 20 µm-thick IOPG film, as shown in Figure 2.
It was reported that polyvinylalcohol (PVA), a typical water-soluble polymer, is compatible with the hydrophilic IL [BMIM+][BF4−], while the thermodynamic interaction between PVA and the IL is controlled by the inclusion of water in the IL [25,26]. We found that PHEMA is also compatible with [BMIM+][BF4−], and an IOPG film of PHEMA in [BMIM+][BF4−] was found to exhibit a humidity-dependent color-changing response. At low humidity, IL molecules are equilibrated with the PHEMA IOPG. Upon an increase in humidity, water dissolves in the IL, since BF4−, a kosmotropic ion, strongly interacts with water molecules by hydrogen bonding [26]. Following this, an osmotic pressure develops within the IOPG, and there is mutual diffusion of water and IL molecules in and out of the IOPG, where the lighter water molecules diffuse into the IOPG faster than the IL, bringing about rapid longitudinal swelling of the IOPG and consequently red-shifting the diffraction wavelength in Equation (1). In the meantime, the hydrophobic BMIM+ behaves as a chaotropic ion which disrupts the hydrogen bonding between water and PHEMA to exclude water molecules from the IOPG during the drying process, so that shrinking of the IOPG and a blue-shift of the structural color occur. In some circumstances, BMIM+ is also reported to act as a kosmotropic ion due to hydrophobic hydration [26].
In addition to the swelling and deswelling of the IOPG that induce the color change of the humidity sensor, the n_eff of the IOPG and the liquid medium can also affect the structural color, as Equation (1) shows. The refractive index of the pure IL was measured to be 1.4230, and that of DI water was 1.3330. Even though n_eff,D showed a decreasing tendency with increased humidity, due to the inclusion of water with its lower index of refraction, the values remained close to that of the pure IL regardless of relative humidity, implying that the apparent water content in the IL is kept low. Using Equation (2), the water content (f_water) was calculated for the measured n_eff,D values (Table 1, and Supplementary Material, Figure S1). Evaporation of [BMIM+][BF4−] can also be ignored due to its extremely low vapor pressure. In the fabrication of the humidity sensors, precursor mixtures with three different crosslinker contents (1%, 2.5%, and 5%) with respect to HEMA were respectively prepared, and IOPGs with the different crosslinking densities were fabricated on a single glass substrate, as shown in Figure 3A.
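Equation (2) itself is not reproduced in this excerpt; a volume-weighted linear mixing rule between the indices of the pure IL and water is one plausible form, and the sketch below inverts such a rule to recover f_water from a measured index (the linearity is an assumption).

```python
# Estimate the water content in the IL from a measured refractive
# index, assuming a linear (volume-weighted) mixing rule:
#   n_measured = f_water * n_water + (1 - f_water) * n_IL
# Whether this matches the paper's Equation (2) is an assumption.
N_IL = 1.4230     # measured index of pure [BMIM+][BF4-] (from the text)
N_WATER = 1.3330  # measured index of DI water (from the text)

def water_fraction(n_measured: float) -> float:
    return (N_IL - n_measured) / (N_IL - N_WATER)

# Example: a drop measured at n = 1.4140 corresponds to ~10 vol% water.
print(f"f_water = {water_fraction(1.4140):.2f}")
```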
All three IOPGs were fully soaked with [BMIM+][BF4−] and left open to air. The IOPG/IL humidity sensors were subjected to controlled humidity to reveal the color changes shown in Figure 3B. The temperature for each humidity condition was maintained at 25 ± 1 °C in order to exclude a possible temperature-driven swelling of the IOPG, which will be discussed later in this paper. Figure 3B clearly demonstrates that an increase in the crosslinking density of the IOPGs resulted in a red-shifted structural color response. For instance, the structural color of the 1% crosslinker-containing IOPG was blue at a low humidity condition, while that with 5% crosslinker exhibited a green color. The fewer the crosslinks, the wider the color spectra obtained for the IOPG/IL sensor. By fabricating several batches of the humidity sensors and observing their responses individually, the reproducibility of sensor production was confirmed to be good (Supplementary Material, Figure S2). In order to investigate the sensor responses in a more quantitative manner, reflectance spectra for the respective IOPG/IL sensors were obtained. In Figure 4A-C, the spectra at three different humidity values are shown, where Figure 4A shows the data from the sensor containing 1% crosslinker, and (B) and (C) are from the 2.5% and 5% crosslinker-containing sensors, respectively.
It is obvious from Figure 4A that a less-crosslinked IOPG sensor shows better-resolved peaks at the given humidity (1%, 55%, and 80% RH) just like the color responses demonstrated in Figure 3B. Such tendencies imply that a more-crosslinked IOPG is stronger in mechanical strength, and less stretchable. It is a general trend yet worthwhile to note that in all three sensors, the reflectance peaks get weaker as humidity increases, since a more-swollen IOPG exhibits a smaller Δn between IOPG and liquid medium which is the origin of light diffraction. The peak wavelengths (λmax) in the respective spectra were plotted with respect to the humidity values as shown in Figure 4D, in which the red-shifts are evident for IOPGs with higher crosslinker contents. The repeated variations of low and high humidities have revealed that there is no hysteresis of λmax, and the reproducibility was as good as shown by error bars in Figure 4D. For individual measurements of humidity-dependent color changing responses, each humidity condition was maintained for at least 20 min prior to the reflectance measurement, so that the distribution of water molecules reached the equilibrium within the IOPG/IL. To confirm whether 20 min is enough duration for the equilibrium, a humiditydependent swelling/deswelling kinetics of IOPG/IL was investigated. In Figure 5, the plots of relative wavelength ratios of time-dependent λmax with respect to the initial λmax (λ0 at t = 0) are shown upon instantaneous humidity changes. Figure 5A shows the kinetics plots for swelling, in which differently crosslinked IOPGs which had been stored at a low humidity (1% RH) were suddenly exposed to air IOPGs with crosslinker contents of 1%, 2.5%, and 5%. Upon variations of relative humidity from 1% to 80% RH, a less-crosslinked (1% EGDM content) IOPG shows entire visible color ranges.
In order to investigate the sensor responses in more quantitative manner, the reflectance spectra for the respective IOPG/IL sensors were obtained. In Figure 4A-C, the spectra at three different humidity values are shown where Figure 4A shows the data from a sensor containing 1% crosslinker, and (b) and (c) are from 2.5% and 5% crosslinker-containing sensors respectively.
It is obvious from Figure 4A that a less-crosslinked IOPG sensor shows better-resolved peaks at the given humidity (1%, 55%, and 80% RH) just like the color responses demonstrated in Figure 3B. Such tendencies imply that a more-crosslinked IOPG is stronger in mechanical strength, and less stretchable. It is a general trend yet worthwhile to note that in all three sensors, the reflectance peaks get weaker as humidity increases, since a more-swollen IOPG exhibits a smaller ∆n between IOPG and liquid medium which is the origin of light diffraction. The peak wavelengths (λ max ) in the respective spectra were plotted with respect to the humidity values as shown in Figure 4D, in which the red-shifts are evident for IOPGs with higher crosslinker contents. The repeated variations of low and high humidities have revealed that there is no hysteresis of λ max , and the reproducibility was as good as shown by error bars in Figure 4D. For individual measurements of humidity-dependent color changing responses, each humidity condition was maintained for at least 20 min prior to the reflectance measurement, so that the distribution of water molecules reached the equilibrium within the IOPG/IL. To confirm whether 20 min is enough duration for the equilibrium, a humidity-dependent swelling/deswelling kinetics of IOPG/IL was investigated. In Figure 5, the plots of relative wavelength ratios of time-dependent λ max with respect to the initial λ max (λ 0 at t = 0) are shown upon instantaneous humidity changes. Figure 5A shows the kinetics plots for swelling, in which differently crosslinked IOPGs which had been stored at a low humidity (1% RH) were suddenly exposed to air with 40% RH. In Figure 5B, the same IOPGs at a high humidity (80% RH) were taken out to an environment with 40% RH. with 40% RH. In Figure 5B, the same IOPGs at a high humidity (80% RH) were taken out to an environment with 40% RH. During entire experiments, the temperature was maintained at 25 ± 1 °C. The respective data were fitted to the single exponential functions, as the fitted curves and time constants (τ) are shown in Figure 5. A lightly-crosslinked IOPG (with 1% crosslinker) showed the fastest τ of 0.6 min and 1.1 min respectively for swelling and deswelling processes due to larger free volumes within the IOPG, while a highly-crosslinked (5%) IOPG showed 2-3 times longer τ of 2.8 min and 2.2 min, respectively (please refer to a movie clip showing a deswelling of the same IOPGs used in Figure 5). However, all of the time constants were less than 3 min which are much faster than the equilibration time of 20 min for the humidity sensing experiments. A rapid humidity sensing shown in Figure 5 could be with 40% RH. In Figure 5B, the same IOPGs at a high humidity (80% RH) were taken out to an environment with 40% RH. During entire experiments, the temperature was maintained at 25 ± 1 °C. The respective data were fitted to the single exponential functions, as the fitted curves and time constants (τ) are shown in Figure 5. A lightly-crosslinked IOPG (with 1% crosslinker) showed the fastest τ of 0.6 min and 1.1 min respectively for swelling and deswelling processes due to larger free volumes within the IOPG, while a highly-crosslinked (5%) IOPG showed 2-3 times longer τ of 2.8 min and 2.2 min, respectively (please refer to a movie clip showing a deswelling of the same IOPGs used in Figure 5). 
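A minimal sketch of the single-exponential fitting used to extract τ from such kinetics data (the data arrays below are hypothetical placeholders, not the measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, tau, ratio_inf):
    """Single-exponential approach of lambda_max(t)/lambda_0 to its plateau."""
    return 1.0 + (ratio_inf - 1.0) * (1.0 - np.exp(-t / tau))

# Hypothetical example data: time in minutes, wavelength ratio lambda_max/lambda_0.
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 15.0])
ratio = np.array([1.000, 1.023, 1.035, 1.044, 1.049, 1.050, 1.050])

popt, _ = curve_fit(relaxation, t, ratio, p0=(1.0, 1.05))
tau, ratio_inf = popt
print(f"tau = {tau:.2f} min, plateau ratio = {ratio_inf:.3f}")
```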
Individual spectra for Figure 6 with varying humidity are shown in Supplementary Material, Figure S3. The inappreciable color change of the PHEMA IOPG in [BMIM+][PF6−] implies that the water solubility in that IL is negligible, and thus no significant energetic change occurs between the IL and the IOPG to induce a gel swelling. Due to the existence of fluorine atoms, the PF6− anion makes the imidazolium salt much more similar to organic solvents and less soluble in water as well [28].

Although the experiments were carried out under thermostated conditions, the temperature dependence of the IOPG in [BMIM+][BF4−] under a fixed humidity was examined. From a thermodynamic point of view, a temperature increase will induce swelling of the PHEMA IOPG, assuming that HEMA and IL/water exhibit upper critical solution temperature (UCST) behavior. As shown in Figure 7, a temperature increase resulted in a red-shift of λmax, as expected. However, the degree of wavelength change (λmax/λmax @ 5 °C), which is an indication of the longitudinal IOPG swelling, was less than 1% at temperatures below 30 °C. Therefore, the reliability of the humidity sensor can be guaranteed within the temperature range of 10 °C to 30 °C.
Conclusions
In summary, we fabricated crosslinked IOPGs via DEECA followed by photopolymerization of HEMA and crosslinker, and subsequently soaked them in [BMIM+][BF4−], a non-volatile hydrophilic IL at room temperature, to demonstrate a colorimetric humidity sensor. IOPGs with different crosslinker contents were exposed to humidity varying from 0% to 80% RH and exhibited structural color changes over the entire visible range, especially for the lightly crosslinked IOPG. Fast color change responses with exponential time constants of 0.6–2.8 min were obtained owing to the highly porous IO structure of the sensor. In a temperature-dependent color change test, a temperature increase resulted in swelling of the IOPG, implying that the thermodynamic interaction between the PHEMA IOPG and [BMIM+][BF4−] presumably follows UCST behavior. However, the dimensional change was less than 1% for temperature variations from 10 °C to 30 °C. | 6,603.8 | 2018-04-27T00:00:00.000 | [
"Materials Science"
] |
Well posedness of a nonlinear mixed problem for a parabolic equation with integral condition
The aim of this work is to prove the well-posedness of certain linear and nonlinear mixed problems with integral conditions. First, an a priori estimate is established for the associated linear problem, and the density of the range of the operator generated by the considered problem is proved using a functional analysis method. Subsequently, by applying an iterative process based on the results obtained for the linear problem, the existence and uniqueness of the weak solution of the nonlinear problem are established.
Introduction and statement of the problem
Some problems related to physical and technical issues can be effectively described in terms of nonlocal problems with integral conditions for partial differential equations. These nonlocal conditions arise mainly when the values on the boundary cannot be measured directly, while their average values are known. The problem for a parabolic equation with an integral condition is stated as follows. Let us consider the rectangular domain Q = ]0, 1[ × ]0, T[; the problem is then to find a solution σ(x, t) of the nonclassical boundary value problem

$$\frac{\partial \sigma}{\partial t} - \frac{\partial}{\partial x}\left(a(x,t)\,\frac{\partial \sigma}{\partial x}\right) = g\left(x, t, \sigma, \frac{\partial \sigma}{\partial x}\right), \quad (x, t) \in Q, \tag{1.1}$$

with the initial condition

$$\ell\sigma = \sigma(x, 0) = \varphi(x), \quad x \in [0, 1], \tag{1.2}$$

the Dirichlet boundary condition

$$\sigma(0, t) = 0, \quad t \in [0, T], \tag{1.3}$$

and the nonlocal condition

$$\int_0^{\alpha} \sigma(x, t)\,dx + \int_{\beta}^{1} \sigma(x, t)\,dx = 0, \quad 0 \le \alpha \le \beta < 1,\ \forall t \in [0, T]. \tag{1.4}$$

In addition, we assume that the function a(x, t) and its derivatives satisfy the conditions (1.5), where the functions g(x, t, σ, ∂σ/∂x) and ϕ(x) are given, and we assume that the corresponding matching conditions are satisfied. We also assume that there exists a positive constant d such that g(x, t, σ, ∂σ/∂x) satisfies a Lipschitz-type bound with constant d in its last two arguments for all (x, t) ∈ Q. This type of problem is found in various physical problems such as heat conduction [1][2][3][4], plasma physics [5], thermoelasticity [6], electrochemistry [7], chemical diffusion [8] and underground water flow [9][10][11]. Several research papers, such as [1-4, 7, 12-18], have studied and solved the parabolic equation by combining the integral condition with a Dirichlet condition or a Neumann condition, or with purely integral conditions, using various methods. For hyperbolic equations, the uniqueness and existence of the solution have been studied in [13,[19][20][21][22]], and for mixed-type equations in [23][24][25][26][27]. Elliptic equations were considered in [28,29] and [30].
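To make the role of the nonlocal condition (1.4) concrete, the following is a minimal explicit finite-difference sketch for an illustrative special case (a(x, t) ≡ 1, a simple given source in place of g, and α = 0.25, β = 0.75 are all assumptions made here for illustration); the value at x = 1 is recovered at each time step from the trapezoidal discretization of (1.4) rather than from a boundary condition.

```python
import numpy as np

# Illustrative explicit scheme for sigma_t = sigma_xx + f(x, t) on Q = (0,1) x (0,T),
# with sigma(x, 0) = phi(x), sigma(0, t) = 0, and the nonlocal condition (1.4):
#   int_0^alpha sigma dx + int_beta^1 sigma dx = 0.
N, T, alpha, beta = 80, 0.1, 0.25, 0.75
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
dt = 0.4 * h**2                               # explicit stability needs dt <= h^2 / 2
ia, ib = round(alpha / h), round(beta / h)    # grid indices of alpha and beta

sigma = np.sin(np.pi * x)                     # illustrative initial datum phi(x)
f = lambda x, t: np.exp(-t) * x               # illustrative source term

t = 0.0
while t < T:
    new = sigma.copy()
    new[1:-1] = sigma[1:-1] + dt * (
        (sigma[2:] - 2.0 * sigma[1:-1] + sigma[:-2]) / h**2 + f(x[1:-1], t)
    )
    new[0] = 0.0                              # Dirichlet condition (1.3)
    # Enforce (1.4): solve for the value at x = 1, whose trapezoidal weight is h/2.
    I1 = np.trapz(new[: ia + 1], x[: ia + 1])
    I2_partial = np.trapz(np.r_[new[ib:-1], 0.0], x[ib:])
    new[-1] = -(I1 + I2_partial) / (h / 2.0)
    sigma, t = new, t + dt

print("residual of (1.4):",
      np.trapz(sigma[: ia + 1], x[: ia + 1]) + np.trapz(sigma[ib:], x[ib:]))
```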
The linear problem associated to the problem stated in (1.1)–(1.4) has been studied in [18] for α = β = 0 and in [16] for β = 1, while [31] solved the case α + β = 1. It is worth mentioning that in [32] the author studied the same case with ∂/∂x(a ∂σ/∂x) replaced by the Bessel operator. In the present paper, the motivation is to study and solve the stated problem without imposing any conditions on the constants α and β in the interval [0, 1]. In addition, the nonlinear problem for the parabolic equation with an integral condition defined on two parts of the boundary is solved.
First, an a priori estimate is established for the associated linear problem, and the density of the range of the operator generated by the considered problem is proved using a functional analysis method. Subsequently, by applying an iterative process based on the results obtained for the linear problem, the existence and uniqueness of the weak solution of the nonlinear problem are established.
The rest of the paper is organized as follows. In Sect. 2, the associated linear problem is stated. Section 3 deals with the proof of the uniqueness of the solution using an a priori estimate. Section 4 gives the solvability of the considered linear problem. Finally, in Sect. 5, on the basis of the obtained results in Sects. 3 and 4, and on the use of an iterative process, we prove the existence and uniqueness of the solution of the nonlinear problem.
Statement of the associated linear problem
In this section we introduce the linear problem and the function spaces needed to investigate the mixed nonlocal problem. The operator L acts from E into F, where E is the Banach space of functions u ∈ L²(Q) with finite norm ‖·‖_E. We then show that the operator L admits a closure $\overline{L}$, and later, in Sect. 3, we establish an energy inequality of the type stated in Theorem 3.1, namely estimate (2.4). Since the points of the graph of $\overline{L}$ are limits of sequences of points of the graph of L, the a priori estimate (2.4) can be extended to strong solutions by passing to the limit; that is, we have inequality (2.5). From this inequality, we deduce the uniqueness of a strong solution, if it exists, and that the range of the operator $\overline{L}$ coincides with the closure of the range of L.
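For orientation, an energy inequality of this type, with a constant K > 0 independent of u, can be written schematically as follows (this schematic form is our reading of Theorem 3.1, with $\|\cdot\|_E$ and $\|\cdot\|_F$ the norms of the spaces E and F):

$$\|u\|_E \le K\,\|Lu\|_F, \qquad u \in D(L), \tag{2.4}$$

and the same bound extended to the closure, $\|u\|_E \le K\,\|\overline{L}u\|_F$ for $u \in D(\overline{L})$, is estimate (2.5).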
By virtue of the uniqueness of the limit in D′(Q), we conclude that f = 0. According to (2.7), we also conclude that the operator L admits a closure. The following a priori estimate gives the uniqueness of the solution of the posed linear problem.
An energy inequality and its application
In this section, the uniqueness of the solution will be proved using an energy inequality method.
Theorem 3.1 There exists a positive constant K such that, for each function u ∈ D(L), we have the a priori estimate (2.4), where λ, k and δ are positive scalar parameters. Substituting Mu by its expression in the first term on the right-hand side of (3.3), we obtain (3.4). Integrating by parts the second term on the right-hand side of the last equality of (3.4) with respect to x, using the fact that $\frac{\partial u}{\partial t} = \frac{1}{k}\,x e^{\delta(x-1)}\,\frac{\partial g}{\partial x}$, and then integrating by parts with respect to x again, we obtain (3.5); using this equality, the last term of the previous equality becomes (3.6). Similarly, integrating by parts the last term of (3.4) with respect to x, we obtain (3.7). From (3.7) and (3.6), equality (3.4) becomes (3.8). Similarly, substituting Mu by its expression in the last term on the right-hand side of (3.3), integrating by parts with respect to x, and using the Dirichlet condition (1.3) and the integral condition (1.4), we obtain (3.9). Integrating by parts the first two terms of (3.9) with respect to t and using condition (1.2), and combining the resulting equalities with (3.8) and (3.9), (3.3) becomes (3.10). Applying the Young inequality to the last four terms on the left-hand side of (3.10), we choose ε₁ = 8, ε₂ = 2, ε₃ = δ/k, ε₄ = 2 and c > 0; combining the resulting inequalities with (3.10), we get (3.11). Substituting Mu by its expression in the first term on the right-hand side of (3.12), we obtain (3.13), and each term on the right-hand side of (3.13) can be controlled in turn. The combination of the previous inequalities with (3.12) yields (3.14). This last inequality implies the following corollaries.
then from Theorem 3.1 we deduce that $\|u\|_E \le 0$, which implies that $u_1 = u_2$.
Corollary 3.2 The range $R(\overline{L})$ of $\overline{L}$ is closed in F, and $R(\overline{L}) = \overline{R(L)}$.
Proof First, we prove that $R(\overline{L})$ is closed. Let $T \in \overline{R(\overline{L})}$; then there exists a sequence $(U_n)$ in $D(\overline{L})$ such that $\overline{L}U_n \to T$ in F. From estimate (2.5), we deduce that the convergence of $\overline{L}U_n$ in F implies the convergence of $U_n$ in E, say $U_n \xrightarrow[n\to\infty]{} U$ in E. Since $\overline{L}$ is closed, it follows that $U \in D(\overline{L})$ and $\overline{L}U = T$, so that $T \in R(\overline{L})$.
Solvability of the linear problem
In order to prove the solvability of problem (2.1)–(1.4), it is sufficient to show that R(L) is dense in F. The proof is based on the following lemma.
then w vanishes almost everywhere in Q.
Proof Equality (4.1) can be written as (4.2). We introduce the smoothing operators $J_\varepsilon^{-1} = \left(I - \varepsilon\,\frac{\partial}{\partial t}\right)^{-1}$ and $(J_\varepsilon^{-1})^* = \left(I + \varepsilon\,\frac{\partial}{\partial t}\right)^{-1}$ from L²(0, T) into the space H¹(0, T) with respect to t; these operators provide the solutions of the corresponding auxiliary problems, and they have the following properties: if $g \in D(L)$, then $J_\varepsilon^{-1}g \in D(L)$ and $\lim_{\varepsilon \to 0}\|J_\varepsilon^{-1}g - g\|_{L^2(0,T)} = 0$. We substitute the function u in (4.2) by the smoothed function $u_\varepsilon$ and use this relation. The operator A(t) has a continuous inverse in L²(0, 1), where the functions C₁(t) satisfy the stated expression and the function K(x) is given piecewise, equal to x − 1 on (β, 1).
Then we have

$$\int_0^{\alpha} A^{-1}(t)u\,dx + \int_{\beta}^{1} A^{-1}(t)u\,dx = 0;$$

hence, the function $J_\varepsilon^{-1}u = u_\varepsilon$ can be represented in the corresponding form. Consequently, equality (4.3) becomes

$$\int_Q u\,\frac{\partial \rho^*}{\partial t}\,dx\,dt = \int_Q A(t)u\,h\,dx\,dt. \tag{4.4}$$

The left-hand side of (4.4) is a continuous linear functional of u; hence the function h has derivatives $\frac{\partial h}{\partial x}, \frac{\partial^2 h}{\partial x^2} \in L^2(Q)$, and the corresponding conditions are satisfied. For ε sufficiently small, the function ρ(x) can then be expressed accordingly.
Taking u ∈ D(L) in (4.6) yields the corresponding identity. Since the two terms in the previous equality vanish independently, and since the range of the trace operator is everywhere dense in the Hilbert space with the associated norm, the conclusion follows.
Study of the nonlinear problem
This section is devoted to the proof of the existence and uniqueness of the solution of problem (1.1)–(1.4).
If the solution of problem (1.1)–(1.4) exists, it can be expressed in the form θ = w + U, where U is a solution of the homogeneous problem and w is a solution of problem (5.5)–(5.8). We shall prove that problem (5.5)–(5.8) has a weak solution by using an approximation process and passing to the limit. Assume that v, w ∈ C¹(Q) and that conditions (5.9)–(5.10) are satisfied. Taking the scalar product in L²(Q) of Eq. (5.5) with the integrodifferential operator and taking the real part, we obtain (5.11). Substituting the expression of Nv into the first integral on the right-hand side of (5.11), integrating by parts with respect to t, and using condition (5.10), we get (5.12). Substituting the expression of Nv into the second integral on the right-hand side of (5.11), integrating by parts with respect to x, and using condition (5.10), we get (5.13). Insertion of (5.12) and (5.13) into (5.11) yields (5.14), where the remaining term is obtained by integrating by parts the right-hand side of (5.11) with respect to x. Definition 5.1 By a weak solution of problem (5.5)–(5.8) we mean a function $w \in L^2(0, T; V^{1,0}(0, 1))$ satisfying the identity (5.14) and the integral condition (5.8). | 2,597 | 2021-08-06T00:00:00.000 | [
"Mathematics"
] |
Zebra or quagga mussel dominance depends on trade-offs between growth and defense—Field support from Onondaga Lake, NY
Two invasive mussels (zebra mussel, Dreissena polymorpha and quagga mussel D. rostriformis bugensis) have restructured the benthic habitat of many water bodies in both Europe and North America. Quagga mussels dominate in most lakes where they co-occur even though zebra mussels typically invade lakes first. A reversal to zebra mussel over time has rarely been observed. Laboratory experiments have shown that quagga mussels grow faster than zebra mussels when predator kairomones are present and this faster growth is associated with lower investment in anti-predator response in quagga mussels than zebra mussels. This led to the hypothesis that the dominance of quagga mussels is due to faster growth that is not offset by higher vulnerability to predators when predation rates are low, as may be expected in newly colonized lakes. It follows that in lakes with high predation pressure, the anti-predatory investments of zebra mussels should be more advantageous and zebra mussels should be the more abundant of the two species. In Onondaga Lake, NY, a meso-eutrophic lake with annual mussel surveys from 2005 to 2018, quagga mussels increased from less than 6% of the combined mussel biomass in 2007 to 82% in 2009 (from 3 to 69% by number), rates typical of this displacement process elsewhere, but then declined again to 11–20% of the mussel biomass in 2016–2018. Average total mussel biomass also declined from 344–524 g shell-on dry weight (SODW)/m2 in 2009–2011 to 34–73 g SODW/m2 in 2016–2018, mainly due to fewer quagga mussels. This decline in total mussel biomass and a return to zebra mussel as the most abundant species occurred as the round goby (Neogobius melanostomus) increased in abundance. Both the increase to dominance of quagga mussels and the subsequent decline following the increase in this molluscivorous fish are consistent with the differences in the trade-off between investment in growth and investment in defenses of the two species. We predict that similar changes in dreissenid mussel populations will occur in other lakes following round goby invasions, at least on the habitats colonized by both species.
Introduction
Dreissenid mussels, both the zebra mussel (Dreissena polymorpha) and the quagga mussel (Dreissena rostriformis bugensis), are invasive ecosystem engineers with large effects on aquatic ecosystems through filtering and alteration of the benthic habitat (reviews in [1][2][3][4]). Both species arrived in North America, in Lake Erie, in the mid-1980s; zebra mussels were confirmed present in 1986 and quagga mussels in 1989 [5][6][7]. Zebra mussels then spread rapidly and by 1993 were common across the Laurentian Great Lakes and in many inland lakes [8]. Quagga mussels spread more slowly, but had reached Lake Ontario in 1990, the Mississippi and Ohio Rivers in 1995, lakes Michigan and Huron in 1997, and the Hudson River in 2005 [8,9]. In addition, it takes longer for quagga mussels to reach maximum abundance after the initial colonization of a lake (an average of 12.2 years for quagga mussels versus 2.5 years for zebra mussels [10]). Even so, quagga mussels do end up as the dominant of the two species in most lakes [11][12][13][14][15] and can increase from low densities to the dominant species in two to three years [15,16]. The displacement of zebra mussels by quagga mussels may increase the effects of these ecosystem engineers if lake-wide dreissenid mussel biomass increases after the quagga mussel becomes the dominant species [17,18].
There are several physiological and behavioral differences between the two species that may explain the dominance of quagga mussels [10]. Compared to zebra mussels, quagga mussels have a lower metabolic rate, are more resistant to starvation, can grow and reproduce at lower temperatures, and can colonize soft substrata [19][20][21][22]. Quagga mussels can therefore build up dense populations on deep, cold bottoms that zebra mussels cannot colonize. This also allows quagga mussels to produce a larger number of veligers, giving them an advantage over zebra mussels in the lottery for settling space [10,23]. Further, quagga mussels grow better than zebra mussels at low food concentrations [19], thereby having a competitive advantage when dreissenids decrease phytoplankton abundance [7,24,25]. In addition, quagga mussels may have higher filtering rates, but investigations of filtering rates that directly compared the two species are inconclusive, with reports of higher filtering rates by quagga mussels [26], higher filtering rates by zebra mussels [19] and no differences [27,28].
Selective predation cannot be the direct cause of the displacement of zebra mussels by quagga mussels, as quagga mussels are more vulnerable to predation because of their thinner shells, weaker aggregation behavior, lower propensity to seek refuges, and lower attachment strength [29][30][31][32][33][34][35]. However, these anti-predation adaptations have a cost. In a series of papers, Naddafi and Rudstam [31][32][33] explored the differences in anti-predatory investments by the two mussel species, and the consequences of these differences for mussel growth. They compared mussels of both species with and without exposure to predator kairomones. With predator cues present, zebra mussels invested more in shell growth and byssal thread production and lowered their filtering rates, resulting in lower growth rates than quagga mussels, which had a more limited response to the predators. These morphological and behavioral responses to predators resulted in lower vulnerability to predation for zebra mussels compared to quagga mussels, and both round goby and rusty crayfish (Orconectes rusticus) preferred quagga mussels over zebra mussels. Greater investments in anti-predator behavior and morphology by zebra mussels than by quagga mussels have been observed repeatedly in laboratory experiments elsewhere [29,30,34,35].
Although greater investment in anti-predatory adaptations may be an advantage in high-predation environments, the additional cost of these investments can be a disadvantage when predation mortality is low. Low predation rates may be expected in newly invaded environments where the predators are not adapted to feeding on mussels, or have not yet discovered this new food resource (the enemy release hypothesis of invasion success [36]). Therefore, Naddafi and Rudstam [32] hypothesized that quagga mussels dominate in many systems because the quagga mussel has a more optimal trade-off between resource allocation to growth and to defense than zebra mussels when predation pressure is low, resulting in faster quagga mussel growth rates. This hypothesis (hereafter the trade-off hypothesis) would help explain why quagga mussels dominate also in productive lakes, where food limitation is less important and where the deep, cold-water bottoms are often anoxic. In such lakes, the faster growth rate of quagga mussels in low-food environments and at cold temperatures should be less important. If the trade-off hypothesis is important, quagga mussels should dominate in lakes with low predation pressure, and zebra mussels should dominate in lakes with high predation pressure, such as is expected after the arrival of the mussel specialist round goby (Neogobius melanostomus), an invasive fish species native to the Ponto-Caspian region that is spreading through North America and Europe [37].
The trade-off hypothesis could be tested against field data from a productive lake that includes both years with high and years with low densities of mussel predators. Herein, we analyze such a data series: a 14-year data set (2005–2018) from Onondaga Lake, New York, USA. This data set consists of annual surveys conducted during the years when quagga mussels increased in abundance and during the eight years after the arrival of the round goby in 2010. In addition, the Onondaga Lake data include information on other aspects of the ecosystem (phytoplankton, zooplankton, fish, nutrients) that can be used to evaluate alternative explanations for changes in mussel abundance [38]. Based on our trade-off hypothesis, we expect that quagga mussels would grow faster than zebra mussels in most years and that quagga mussels should increase to dominance, as commonly observed elsewhere [10,15,16,39]. We also expect that quagga mussels should decline more than zebra mussels after round gobies increase in abundance, resulting in a return of zebra mussels as the most abundant of the two dreissenid species when gobies are abundant.
Study area
Onondaga Lake, New York (43˚5'20" N, 76˚12'30"W) is an 11.7 km 2 meso-eutrophic lake with a mean depth of 10.9 m and a maximum depth of 20 m. For more than a century the lake has been the recipient of domestic and industrial wastewater from the Syracuse metropolitan area [40]. However, water quality in the lake has improved substantially during the past 25 years as a result of closures of several industries and improvements to the Syracuse Metropolitan Wastewater Treatment Plant (Metro) [41]. Several limnological parameters, including temperature, dissolved oxygen (DO), phosphorus, chlorophyll-a, and water clarity, as well as phytoplankton, zooplankton, and fish were monitored in this lake as part of an Ambient Monitoring Program run by Onondaga County Department of Water Environment Protection (OCD-WEP) [38].
Although water quality improved over time, there was little additional change in the limnological parameters after 2007 [38]. Temperature and DO were measured bi-weekly at the surface and at 3, 6, 9, 12, 15 and 18 m depth. Between 2000 and 2018, maximum epilimnetic summer temperature ranged from 24.5 to 28.2 °C (Fig 1A), which is within the tolerance range of both mussel species [22]. Anoxic conditions in bottom waters started between the end of June and mid-July and continued to the fall overturn. In all years since 2000, water at 3 m depth remained oxygenated (DO > 4 mg/L) throughout the year, whereas DO at 6 m declined to less than 1 mg/L in some years (2002, 2003, 2005, 2006, 2007, 2017, 2018; Fig 1A). Annual average values for epilimnetic total phosphorus (TP) declined dramatically from 2000 to 2006, then remained in the range of 20–30 μg/L from 2007 to 2018 (Fig 1B). The time trends in chlorophyll-a concentrations were very similar to those of TP and remained between 6 and 10 μg/L from 2007 to 2018 (Fig 1B). Average annual Secchi disk transparency varied between 1.6 and 3.7 m, with no significant time trends. These trophic level indicators classify the lake as meso-eutrophic [42]. Annual average phytoplankton biovolume ranged from 0.5 to 2.0 cm³/m³, with diatoms the largest group followed by cryptophytes, chlorophytes and chrysophytes (Fig 1C). Zooplankton (Fig 1D) consisted of the common copepods and cladocerans of the region and was dominated by cyclopoid copepods and bosminids in years with abundant alewife (Alosa pseudoharengus), and by daphniids and calanoid copepods in years with few alewife [43]. Change-point analyses [44] of these limnological time series are considered in the Discussion.

Both mussel species were reported from the outlet of Onondaga Lake in 1991, 3 years after they were documented as present in Lake Erie [45]. However, quagga mussels represented less than 1% of the mussels inspected in 1991, and although Mills et al. confirmed their presence in the spring of 1992, they could not find quagga mussels again in the fall of 1992 [6]. Both species of dreissenids remained rare in Onondaga Lake proper up to and including 1997, when reported densities were < 1 m⁻² [46].
Methods
Mussels were sampled each year from 2005 to 2018 at depths of 0–4.5 m at 12 sites around the lake (Fig 2) using ponar grabs (area 0.027 m²), collected by OCDWEP staff between October 8 and October 25. Ponar grabs were effective in Onondaga Lake because the substrate at all sites was essentially the same (calcium carbonate-enriched sand, silt, and organic material). At each site, one sample was collected from each of three depths, 0–1.5 m, 1.5–3 m, and 3–4.5 m, resulting in 12 clusters (= sites) of 3 grabs. This design was chosen to maximize variability within each site, as recommended in sampling design using cluster sampling [47]. Sampling prior to 2005 in Onondaga Lake [46,48] and from nearby Oneida Lake [12] confirmed that bottom depth is an important gradient for mussel density; thus sampling across the depth gradient within each site is preferable to a random selection of samples within each site [47]. The depths sampled were expanded to include a ponar grab at 4.5–6 m in 2011–2018, at 6–7.5 m in 2011–2018, and at 7.5–9 m in 2014–2018, in response to improving oxygen conditions in the lake. We present time trends from 2005 to 2018 for water depths of 0–4.5 m (depths sampled in all years), and time trends from 2011 to 2018 for water depths of 0–6 m (depths sampled since 2011). Samples were sieved in the field and processed in the laboratory. Up to 100–150 mussels that were alive at collection were measured in each sample to the nearest 0.1 mm (maximum shell length). When subsampled (samples with > 100 mussels), the weight of a random subsample of ~100 mussels and the weight of the total sample were measured to expand the numbers counted in the subsample to the whole sample. Total wet weight of the sample was measured to the nearest 1 g. Shell-on dry weight (SODW) was calculated from the length of each measured mussel using species-specific regressions from nearby Oneida Lake [12]:

Quagga mussels: log_e(SODW) = 2.766 × log_e(SL) − 9.472 (1)

Zebra mussels: log_e(SODW) = 2.864 × log_e(SL) − 9.622 (2)

where SODW is in g and SL is the maximum shell length in mm. These calculated values were highly correlated with measured wet biomass in Onondaga Lake, with no significant effect of mussel species, bottom depth, or year. Calculated SODW was 36.8% of measured shell-on wet weight (SODW (g) = 0.368 (SE 0.001) × wet weight (g), R² = 0.99, N = 1322, P < 0.0001). In nearby Oneida Lake, SODW was 35.3% of wet weight for zebra mussels and 33.9% of wet weight for quagga mussels, with both SODW and shell-on wet weight measured on individual mussels [32]. We chose to analyze the calculated SODW values because small samples were not always weighed. Round goby abundance was indexed with a beach seine at 15 sites in August and September each year. Each haul with the 15 m long, 1.2 m high beach seine covered an area of 116 m². In all years, at least 2 surveys of these 15 sites were conducted, and the numbers caught were expressed as catch per seine haul. Round goby was also assessed with electrofishing at 12 transects in September. Electrofishing transects followed the shoreline in water depths of 1 to 2 m. Due to the large number of gobies encountered when electrofishing, only a portion of the observed gobies were captured, and the number encountered along a transect but not captured was estimated by the operators.
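As a worked example of the length-to-biomass conversion in Equations (1) and (2) above:

```python
import numpy as np

# Species-specific shell length (SL, mm) to shell-on dry weight (SODW, g)
# regressions from Oneida Lake, as given in Equations (1) and (2).
COEF = {"quagga": (2.766, -9.472), "zebra": (2.864, -9.622)}

def sodw(species: str, sl_mm: np.ndarray) -> np.ndarray:
    """log_e(SODW) = slope * log_e(SL) + intercept, solved for SODW."""
    slope, intercept = COEF[species]
    return np.exp(slope * np.log(sl_mm) + intercept)

lengths = np.array([5.0, 10.0, 20.0])      # example shell lengths in mm
print("quagga SODW (g):", sodw("quagga", lengths).round(4))
print("zebra  SODW (g):", sodw("zebra", lengths).round(4))
```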
Electrofishing effort was standardized by power-on time (at an output voltage of 340 V) and expressed as the number of fish encountered per unit of power-on time (as per the New York State Department of Environmental Conservation Fisheries Sampling Manual [49]). Time series from different fishing gear cannot be combined without standardization because catchability can differ greatly between gears [50]. Therefore, we standardized each catch per unit effort (CPUE) data series by dividing by the average annual CPUE in 2011–2018 for each gear, thereby making the CPUE relative to the 2011–2018 average in each gear. This is a common method for comparing catches in different fishing gear [51]. We used the average of this normalized CPUE in seines and electrofishing as our index of goby abundance.
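A minimal sketch of this gear standardization (the CPUE values below are hypothetical placeholders, not the survey data):

```python
import numpy as np

# Hypothetical annual CPUE series (2011-2018) for the two gear types.
years = np.arange(2011, 2019)
seine_cpue = np.array([2.0, 5.0, 9.0, 8.0, 7.5, 8.5, 9.0, 8.0])
electro_cpue = np.array([30.0, 80.0, 150.0, 140.0, 120.0, 130.0, 140.0, 125.0])

# Normalize each series by its own 2011-2018 mean, then average across gears.
seine_rel = seine_cpue / seine_cpue.mean()
electro_rel = electro_cpue / electro_cpue.mean()
goby_index = (seine_rel + electro_rel) / 2.0

for yr, idx in zip(years, goby_index):
    print(yr, round(idx, 2))
```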
To investigate the effect of the arrival of round goby on mussels, we tested for declines from 2011 to 2018 in the density and biomass at water depths of 0–6 m of (1) zebra mussel alone, (2) quagga mussel alone, and (3) both species combined. We averaged density and biomass from the four ponar samples (0–6 m depths) to obtain an average per site. Standard errors in the figures were calculated using un-transformed values. Benthic animals are often aggregated, making transformations of density values necessary [52]. Here we used fourth-root transformations for density and biomass, which Strayer et al. [15] found appropriate for the dreissenid data series they analyzed, including the Onondaga Lake data up to 2015. Shell length and the proportion of quagga mussels were not transformed, and standard errors were based on site values. We then tested for a time trend in the fourth-root transformed density and biomass data, and for time trends in the proportion of quagga mussels, using a mixed-model ANOVA with site as a random effect and year as a continuous fixed effect. Using site as a random effect accounts for consistent differences among sites. To test for differences in mussel length, we used a paired t-test comparing mean and median lengths paired by year. For this test, mean lengths were first calculated from the measured mussels at each site, and then we calculated the average and standard errors of these site-specific mean lengths for all sites with more than 10 mussels measured (many sites had 100s of mussels measured). Median lengths were obtained from all measured mussels in a given year by species. Statistical analyses were done with JMP® Pro 12.1 [53].
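A sketch of the trend test described above, assuming a data frame with one row per site and year (the density values below are fabricated for illustration); the fourth-root transform and the site random effect follow the description in the text:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricate an illustrative frame: 12 sites x 8 years of 0-6 m mean densities.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "year": np.repeat(np.arange(2011, 2019), 12),
    "site": np.tile(np.arange(1, 13), 8),
})
df["density"] = rng.gamma(shape=2.0, scale=3000 / (df["year"] - 2009), size=len(df))

df["density_4rt"] = df["density"] ** 0.25   # fourth-root transform

# Mixed-model ANOVA: year as a continuous fixed effect, site as a random effect.
model = smf.mixedlm("density_4rt ~ year", data=df, groups=df["site"]).fit()
print(model.summary())
```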
All sampling was done by OCDWEP under collecting permits and guidelines obtained from the New York State Department of Environmental Conservation.
Results

There were differences in density and biomass with depth, with higher proportions of quagga mussels in deeper samples. Therefore, the addition of samples at 4.5–6 m from 2011 to 2018 increased the proportion of quagga mussels compared to the values for 0–4.5 m depth shown in Fig 3. For example, in 2011–2012, the proportion of quagga mussels by biomass was 78% at depths of 0–4.5 m and 97% at 4.5–6 m. But even in samples collected at 4.5–6 m, the proportion of quagga mussels declined to 13% in 2017 and 54% in 2018. Mussels deeper than 6 m contributed on average 12% of the total biomass when such depths were sampled (range 2–30%, 2013–2018). No mussels were caught in the 9–10.5 m samples in 2015, the only year such deeper bottoms were sampled.
Mean length of measured quagga mussels (range among years 6.3–10.4 mm) was greater than the mean length of zebra mussels (range 5.3–7.9 mm; Fig 5) in all years. This difference was highly significant using a paired t-test with data points paired by year (P < 0.0001, df = 11). Median lengths of all mussels from 0–6 m depth measured in a given year gave the same result (median length range among years 5.3–14.7 mm for quagga mussels and 4.7–8.0 mm for zebra mussels; paired t-test, P = 0.0009, df = 11). Zebra mussels larger than 12 mm were uncommon in all years (2–19% of measured zebra mussels), whereas quagga mussels larger than 12 mm were more common (8–75% of measured quagga mussels). Mussels larger than 25 mm were rarely observed (17 quagga and 13 zebra mussels out of 34,534 individuals measured in 2005–2018). In most years, the length distributions were unimodal.
Round goby were first detected in Onondaga Lake in 2010. Goby densities increased from 2011 to 2013 in both beach seine surveys and electrofishing surveys, and gobies stayed abundant through 2018. Seine surveys may be the better index, since all gobies caught were counted.

Fig 4. Yellow circles are the proportion of quagga mussels (%); bars are ± 1 SE based on sites. Quagga mussel proportions for 2000 are from Spada et al. [46] and for 2002 are from an OCDWEP report [48]. Blue triangles represent the goby index calculated from beach seine and electrofishing surveys (see Methods).

Other fish species known to feed on mussels did not increase (Fig 6), although pumpkinseed declined significantly from a peak CPUE in 2009 to 2018 (P < 0.003). A decline in this predator is not consistent with a significant predatory effect of pumpkinseed on the mussels that also declined during this time period.
The effects of the increase in round goby on mussel density and biomass were tested using the years 2011 to 2018, the years in which each site was sampled with four ponar grabs collected between 0 and 6 m. Average density and biomass (SODW in parentheses) of both species at 0–6 m declined from 13,000/m² (580 g/m²) in 2011 to 2,800/m² (72 g/m²) by 2018 (Fig 7). The declines in density (year effect P = 0.0064) and biomass (year effect P = 0.0012) were both highly significant. Most of that decrease was due to a highly significant decrease in quagga mussels, as this species declined from 4,900/m² (490 g SODW/m²) in 2011 to 510/m² (20 g/m²) in 2018 (P < 0.0001 for both density and biomass). Zebra mussels did not decline significantly during this time period (average density 4,123/m², year effect P = 0.133; average biomass 50 g/m², year effect P = 0.60). The proportion of quagga mussels also declined significantly, both by biomass (P < 0.0001, Fig 7) and by density (P < 0.0001).
Discussion
The development of the dreissenid populations in Onondaga Lake up to 2011 was consistent with observations elsewhere [15]. This included the timing of peak abundance of both species [10,12], the rate of the displacement of zebra mussels by quagga mussels [12,16], and the higher growth rate of quagga mussels compared to zebra mussels [10,11,15,54]. However, a return to zebra mussel as the most abundant of the two species, as observed between 2011 and 2018, has rarely been documented.
Both the initial increase of quagga mussels and the subsequent decline after the arrival of round goby in 2010 are consistent with the trade-off hypothesis suggested by Naddafi and Rudstam in 2014 [33]. They found that quagga mussels grew better than zebra mussels in the presence of predator cues as zebra mussels then invested more in anti-predator defenses. If this higher investment in anti-predator defense does not result in higher survival, as may be the case in newly invaded systems without mussel specialist predators, quagga mussels should dominate. This is a common observation in many newly invaded lakes and reservoirs including Onondaga Lake [15]. The trade-off hypothesis is also consistent with the larger size of quagga mussels in all years when they co-occurred in Onondaga Lake. More interesting, perhaps, is that the trade-off hypothesis also predicts a return to zebra mussels as the most abundant of the two dreissenids if predation rates on mussels increase and investment in antipredator defenses therefore becomes more advantageous. This was observed in Onondaga Lake. After 2011, quagga mussels declined whereas zebra mussels did not, resulting in a return to zebra mussels as the most abundant mussel species from 2016 onwards. This decline occurred as the round goby, a known mussel specialist, became abundant.
The timing and magnitude of peak abundance of both species in Onondaga Lake were comparable to observations elsewhere. Peak density of zebra mussels typically occurs earlier after colonization (2.5 years on average) than peak density of quagga mussels (12.2 years after colonization [10]). Zebra mussels were reported from the outlet of Onondaga Lake in 1991 [45]. However, the abundance of mussels remained low in the lake (< 1 m⁻²) until 1999, when veliger counts increased and large numbers of 4–6 mm zebra mussels were found on trap nets [46]. Spada et al. [46] reported densities reaching 1,200 to 22,200 m⁻² by the year 2000, with most mussels between 5 and 15 mm shell length. They attributed this increase to improvements to the Metro sewage treatment plant after 1998, in particular to the reduction of ammonia, as freshwater mollusks are sensitive to ammonia [55]. If water quality suppressed mussels before 1998, zebra mussels would have reached high densities 2 years after the lake became conducive to dreissenids, similar to the time lag between arrival and peak abundance observed elsewhere [9]. Quagga mussels were reported in very low numbers from the outlet of Onondaga Lake in 1991 [45] and in the spring of 1992, but were not found in the fall of 1992 [6,46]. Stewart [56] documented an eastward progression of quagga mussels along the Erie Canal from 1998 to 2009. At the outlet to Onondaga Lake, quagga mussels were not found in 1998, 1999, 2000 or 2002, but dominated in 2009 [56]. Similarly, no quagga mussels were reported from the 2000 survey in the lake proper [46], but a few quagga mussels were found in a 2002 survey [48]. After 2007, quagga mussels increased rapidly, and the species went from a minor component of the dreissenid population in 2007 to having a higher biomass than zebra mussels in 2009, 2 years later. This rate of increase of quagga mussels is similar to the rate of increase observed in European lakes (26% per year [16]) and in nearby Oneida Lake [12]. Peak quagga mussel abundance in 2009 came 11 years after 1998, when quagga mussels presumably could also have increased in the lake had they been present, or 7 years after 2002, when they were first reported from the lake proper. This is within the range of observations elsewhere for the time to peak abundance of quagga mussels in lakes initially dominated by zebra mussels (6–19 years [9]). Peak densities of dreissenids in Onondaga Lake (> 10,000/m² at 0–6 m) were also comparable to observations elsewhere [15,57,58]. Lake-wide densities would be lower because the 70% of the lake bottom that is below 6 m depth can be anoxic during the summer and had few dreissenid mussels when those depths were sampled. Quagga mussels were larger than zebra mussels in all years with data on both species. Comparisons of growth rates of the two species under similar conditions are relatively rare; most studies report higher growth of quagga mussels [12,15,19,59], but see [60].
Several hypotheses have been proposed for the mechanisms behind the initial displacement of zebra mussels by quagga mussels, including the quagga mussel's ability to grow and reproduce at cold temperatures and at lower food concentrations [9,10,22]. Because quagga mussels did become dominant in Onondaga Lake, a lake with relatively high levels of edible algae and without habitable cold-water bottoms due to summer anoxia, cold water and low food concentrations are not necessary for quagga mussels to dominate. However, the trade-off hypothesis predicts a dominance of quagga mussels also in productive lakes, like Onondaga Lake, if predation rates are low. Quagga mussels were larger than zebra mussels in all years, also consistent with the effect of predator cues decreasing zebra mussel growth more than quagga mussel growth. We note that other hypotheses, such as the lower metabolic rate and higher growth efficiency of quagga mussels, may also be important (reviewed by Karatayev et al. [10]), but these mechanisms may be the result of lower investment in anti-predatory defenses and are not in conflict with the trade-off hypothesis.
Predation rates on mussels should increase with the invasion of round goby, a dreissenid specialist [32,61]. Round goby can consume more mussels per unit time than crayfish and native molluscivorous fish, such as pumpkinseed sunfish [62]. Round goby arrived in Onondaga Lake in 2010, increased in abundance up to 2013, and remained abundant through 2018. Total dreissenid mussel abundance did decline from 2011 to 2018, primarily because of declines in quagga mussels. The result was a return to zebra mussel as the most abundant of the two species by 2016, with the largest decline in quagga mussels after 2013, when round goby became abundant. Zebra mussel continued as the more abundant species through the end of our study in 2018, consistent with continued high round goby densities in the lake.
We did consider other possible explanations for both the initial displacement of zebra mussels by quagga mussels and the subsequent decline of quagga mussels and return of zebra mussel as the most abundant species. Change-point analysis of the limnological time series indicates that significant changes occurred in the period 2002 to 2007, with less change after 2007, the period of the largest changes in the two mussel populations. Measurements of temperature and dissolved oxygen were within the expected tolerance of both dreissenid mussels, with the exception of low oxygen concentrations at 6 m in some years. Low oxygen at 6 m in 2017 and 2018 could have contributed to fewer deep quagga mussels in those years [72], but oxygen was sufficient at 3 m in all years, and quagga mussels decreased from ~50% in 2013–2015 to 5–27% in 2016–2018 also at 1.5–3 m depths. Predators other than round goby could also be important, but fish species known to feed on mussels either did not change in abundance or declined along with the decline in mussels. Both crayfish and diving ducks are known predators of mussels [63], and diving ducks do congregate on the lake during the spring and fall migrations. Although we cannot rule out a surge in ducks or crayfish from 2011 to 2018, crayfish at least also prefer quagga mussels over zebra mussels [33] and, if they did increase, would contribute similarly to round goby to the return of zebra mussels. There are of course other possibilities, such as an increase in diseases and parasites [73,74], that we did not evaluate. However, we consider the most likely cause of the decline in quagga mussels and total dreissenids to be the arrival and subsequent increase of round goby. Note that zebra mussels did not decline significantly with the increase in round goby, and zebra mussels therefore returned to being the most abundant of the two dreissenids.
Although there are many examples of the increase in dominance of quagga mussels, there is only limited evidence for a reversal to zebra mussels as the most abundant of the two species. None of the 42 longer-term (>10 year) data series on adult dreissenid mussels from Europe and North America analyzed by Strayer et al. [15] showed a differential decline of quagga mussels, and there was no general decline in the combined dreissenid mussels with time since invasion. But only four of these 42 data sets included more than 10 years of annual data on adult mussels from systems with both quagga and zebra mussels (Oneida Lake [12], Hudson River [75], Lake Balaton [54], and Onondaga Lake-this study). Interestingly, in the Hudson River, quagga mussels have remained subdominant for decades, perhaps due to higher predation rates in the river [75]. There are also studies that were not included in the Strayer et al. data set that suggest a link between predator abundance and mussel species dominance. Zhulidov et al. [76,77] did observe a shift from quagga mussel dominance to zebra mussel dominance in the lower Don River system, and speculated that selective predation on quagga mussels by roach (Rutilus rutilus) adapting to mussel feeding could explain the return of zebra mussel dominance. In addition, twelve years of annual data from lakes Erie, Ontario, Michigan and Huron have recently been published [11,78] and the data from western Lake Erie where round goby is abundant (but not from the deeper lakes Ontario, Michigan and Huron) show coexistence of the two dreissenid species. The two species also continue to coexist in the shallow water of Oneida Lake [12,79].
A decline in the density of an invasive species following an initial peak in abundance may be expected as the invaded community adapts to the presence of the new species [80]. This may also be the case for dreissenid mussels [10], although the evidence for such a decline is stronger for density than for biomass, and the decline is not always observed [15]. Even so, when declines occur, they have a cause. The Onondaga Lake data support increased predation as an important mechanism contributing to such declines. Further, the trade-off hypothesis predicts both the dominance of quagga mussels over zebra mussels in productive systems and the disproportionate decline in quagga mussels following an increase in predation rates, such as expected after the round goby invasion in Onondaga Lake. If the trade-off hypothesis is correct, we predict that zebra mussels will continue to be the more abundant of the two species in Onondaga Lake as long as round goby remain abundant. Whether or not zebra mussels will increase as quagga mussels decline likely depends on the relative importance of increased predation mortality and the increased population growth associated with decreased competition from declining quagga mussels. In Onondaga Lake during the years studied here, zebra mussels did not change significantly with the invasion of round gobies; elsewhere the results may be different. However, we expect that quagga mussels will continue to have a competitive advantage in low-predation environments and on the cold, oxygenated bottoms of deep lakes. Thus, the relative abundance of the two species should vary among lakes, with deep oligotrophic lakes dominated by quagga mussels, shallow lakes with high predation pressure dominated by zebra mussels, and coexistence of both species in intermediate habitats. | 7,522.2 | 2020-06-29T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
PRIMO: An Interactive Homology Modeling Pipeline
The development of automated servers to predict the three-dimensional structure of proteins has seen much progress over the years. These servers make calculations simpler, but largely exclude users from the process. In this study, we present the PRotein Interactive MOdeling (PRIMO) pipeline for homology modeling of protein monomers. The pipeline eases the multi-step modeling process, and reduces the workload required by the user, while still allowing engagement from the user during every step. Default parameters are given for each step, which can either be modified or supplemented with additional external input. PRIMO has been designed for users of varying levels of experience with homology modeling. The pipeline incorporates a user-friendly interface that makes it easy to alter parameters used during modeling. During each stage of the modeling process, the site provides suggestions for novice users to improve the quality of their models. PRIMO provides functionality that allows users to also model ligands and ions in complex with their protein targets. Herein, we assess the accuracy of the fully automated capabilities of the server, including a comparative analysis of the available alignment programs, as well as of the refinement levels used during modeling. The tests presented here demonstrate the reliability of the PRIMO server when producing a large number of protein models. While PRIMO does focus on user involvement in the homology modeling process, the results indicate that in the presence of suitable templates, good quality models can be produced even without user intervention. This gives an idea of the base level accuracy of PRIMO, which users can improve upon by adjusting parameters in their modeling runs. The accuracy of PRIMO’s automated scripts is being continuously evaluated by the CAMEO (Continuous Automated Model EvaluatiOn) project. The PRIMO site is free for non-commercial use and can be accessed at https://primo.rubi.ru.ac.za/.
Introduction
Studying the three-dimensional (3D) structure of a protein is crucial to gaining insights into its function, which is one of the driving principles behind structural biology [1] and structural bioinformatics [2]. Experimental techniques, such as X-ray crystallography and NMR, provide high-resolution structures. [...] While such an approach is well suited to challenging targets, it is not ideal for more standard modeling jobs, as it can be time consuming, often limiting users to a single modeling run at a time, spanning a number of days.
Considering all the above points, we have developed the PRotein Interactive MOdeling (PRIMO) pipeline to provide a user-inclusive online modeling resource. It incorporates a user-friendly interface that has been designed to guide users through each stage of the homology modeling process. Keeping novice users in mind, the interface is simple and easy to learn, while allowing more experienced users to alter parameters and exercise control over their modeling jobs. Multiple options are provided for both template identification and template-target sequence alignment. Additionally, PRIMO allows users to alter parameters specific to MODELLER, such as the refinement level and the number of models produced, as well as to model specific ligands and ions found within template PDB files.
PRIMO is being developed as part of H3ABioNet [21] for use by the H3Africa Consortium [22]. Research groups around Africa, as part of the Consortium, have been sequencing a large number of human genomes linked to various diseases and identifying novel disease-associated SNPs. PRIMO can be used to analyze disease-related proteins and relevant non-synonymous SNPs. In this way, PRIMO can help advance progress towards the Consortium's scientific goals. However, the usage of PRIMO goes beyond the Consortium's targets, as it is designed to model proteins from any organism.
Here we describe the features of the PRIMO web interface and assess the backend scripts of PRIMO to demonstrate the accuracy of the pipeline when choosing fully automated options for modeling protein targets of interest.
Methods
The backend functionality of PRIMO was written in Python and is presented as three separate tools in a local version of the Job Management System (JMS) [23]. The PRIMO pipeline currently provides options that use HHsuite [24], protein BLAST [25], Clustal Omega [26], MAFFT [27], MUSCLE [28], T-Coffee [29], MODELLER [13] and PROCHECK [30]. The PRIMO web interface is written as a single-page web application, managed using the Django web framework. Communication between the PRIMO web interface and the PRIMO tools is managed through AJAX calls via the JMS API. The diagram presented in Fig 1 illustrates the process by which jobs are submitted from PRIMO to the cluster via JMS. When a user submits a modeling job from the PRIMO interface, their input parameters are sent to the PRIMO server. PRIMO then compiles these parameters into a request to be sent to JMS. Authentication details for JMS are also added to the request at this point. Once the request has been compiled, it is sent to JMS, which submits the job to the cluster and returns the job ID to the PRIMO web server. PRIMO then simply returns a message to the interface that the job was submitted successfully, while JMS monitors the job on the cluster. When the job finishes running on the cluster, JMS notifies PRIMO that the results are available. PRIMO then collects the results and returns them to the interface, where the user can interact with them.
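To make this flow concrete, the following is a minimal sketch of the submission and polling logic as a hypothetical REST client. The endpoint paths, field names and token scheme are illustrative assumptions rather than the actual JMS API, and polling stands in for the notification mechanism described above.

import time
import requests

JMS_URL = "https://jms.example.org/api"            # hypothetical JMS API base URL
JMS_AUTH = {"Authorization": "Token <jms-token>"}  # credentials added server-side

def submit_modeling_job(params: dict) -> str:
    """Compile user parameters into a JMS request and return the job ID."""
    resp = requests.post(f"{JMS_URL}/jobs", json=params, headers=JMS_AUTH)
    resp.raise_for_status()
    return resp.json()["job_id"]

def wait_for_results(job_id: str, poll_seconds: int = 30) -> dict:
    """Wait until the cluster job completes, then fetch the results."""
    while True:
        status = requests.get(f"{JMS_URL}/jobs/{job_id}", headers=JMS_AUTH).json()
        if status["state"] in ("completed", "failed"):
            break
        time.sleep(poll_seconds)
    return requests.get(f"{JMS_URL}/jobs/{job_id}/results", headers=JMS_AUTH).json()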
PRIMO modeling algorithm
The PRIMO modeling algorithm is displayed in Fig 2. The minimum input required for the server is the sequence of a protein (target protein) to be modeled. Thereafter, the process is divided into three steps: 1) template identification and selection, 2) target-template sequence alignment and 3) modeling and model evaluation. Each step follows on to the next and allows for user inspection and input between these stages.
Template identification. This step involves helping the user find suitable templates for modeling. PRIMO allows users to select either HHsearch from HHsuite or protein BLAST to search for templates. BLAST is set as the default search option, as it runs substantially faster than HHsearch and identifies closely related templates if any are present in the PDB. A local version of BLAST is used to query the target sequence against a National Center for Biotechnology Information (NCBI) database of PDB files downloaded from ftp://ftp.ncbi.nlm.nih.gov/blast/db. Output from BLAST is parsed to extract information about each template, including the PDB ID and chain, template-target sequence identity, query coverage and the alignment produced when running BLAST. Alternatively, HHsearch can be run if distant homologs need to be identified. This option incorporates various programs from the HHsuite package. HHblits is run to search the target sequence against the HHsuite uniprot20 database. Secondary structure is added to the A3M alignment, an alignment format generated by HHblits and used by HHsearch, before converting it to a hidden Markov model using HHmake. HHsearch is then used to search against the HHsuite pdb70 database to identify templates. The resulting hhr file is parsed to extract the same information that would be obtained if BLAST were run.
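A hedged sketch of the BLAST arm of this step is shown below, assuming a local BLAST+ installation and a PDB-derived database. The database name ("pdbaa") and the "entry_chain" sequence-ID format are assumptions; the tabular output fields (-outfmt 6) are standard BLAST+ options.

import subprocess

def find_templates_blast(query_fasta: str, db: str = "pdbaa"):
    """Query a local PDB-derived BLAST database and collect template info."""
    out = subprocess.run(
        ["blastp", "-query", query_fasta, "-db", db,
         "-outfmt", "6 sseqid pident qcovs evalue"],
        capture_output=True, text=True, check=True,
    ).stdout
    templates = []
    for line in out.splitlines():
        sseqid, pident, qcovs, evalue = line.split("\t")
        # PDB sequence IDs are typically of the form "1ABC_A" (entry_chain).
        pdb_id, _, chain = sseqid.partition("_")
        templates.append({
            "pdb_id": pdb_id, "chain": chain,
            "identity": float(pident), "coverage": float(qcovs),
            "evalue": float(evalue),
        })
    return sorted(templates, key=lambda t: t["evalue"])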
Target-template sequence alignment. For each template selected, the PDB file is parsed to extract its sequence. Both missing residues and non-standard amino acids are replaced with an "X" character, so that this information may be included in the alignment. PRIMO performs the alignment using MAFFT, MUSCLE, Clustal Omega or T-Coffee. The template-target alignment provided by protein BLAST or HHsearch may also be used if one of these was run for template identification.
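A minimal sketch of this extraction using Biopython is given below. Biopython's "pdb-atom" parser builds a sequence from the coordinate records and inserts "X" where residues are unresolved; the additional masking of non-standard letters is an assumption about how PRIMO handles them, and the function name is hypothetical.

from Bio import SeqIO

STANDARD = set("ACDEFGHIKLMNPQRSTVWY")

def template_sequence(pdb_path: str, chain_id: str) -> str:
    """Extract a chain's sequence, with 'X' for missing/non-standard residues."""
    for record in SeqIO.parse(pdb_path, "pdb-atom"):
        # Record IDs from the pdb-atom parser look like "1ABC:A" (entry:chain).
        if record.id.endswith(f":{chain_id}"):
            return "".join(c if c in STANDARD else "X" for c in str(record.seq))
    raise ValueError(f"chain {chain_id} not found in {pdb_path}")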
Modeling and model evaluation. The final step in the modeling process involves using the target-template sequence alignment and the template PDB file(s) to generate a PIR file and a modeling script. The alignment undergoes some preprocessing before being converted to PIR format. Primarily, this involves replacing the missing-residue characters with gap characters ("-") and modified residues with period (".") characters, since this is how MODELLER recognizes modified residues. The sequences also undergo trimming at each end to ensure that the parts of the target sequence being modeled have a corresponding template section at each end. Finally, each template sequence is checked against the sequence extracted from its PDB file to ensure that it is correct. The starting and ending residues in each template, which are required by MODELLER, are also determined and added to the PIR file. The PIR file is required by MODELLER to link the template-target alignment to the specific segments of each template PDB file used in modeling. Once the PIR file has been created, the modeling script is prepared and then run using MODELLER. After modeling has completed, the models are evaluated by MODELLER's normalized DOPE function (DOPE Z-score) [31], as well as PROCHECK.
[Fig 2 caption: The steps involved in modeling using PRIMO can be seen as an interactive process in which the user can supply and edit input as they see fit, while PRIMO chains these steps together to model protein targets of interest. doi:10.1371/journal.pone.0166698.g002]
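As a concrete illustration of the PIR step, the sketch below writes a minimal two-entry alignment file. It is a simplified sketch rather than PRIMO's actual script: the function and argument names and the example identifiers are hypothetical, but the field layout follows the standard MODELLER PIR convention (a "structureX" entry giving the template's PDB code, chain and start/end residues, and a "sequence" entry for the target, each terminated by "*").

def write_pir(path, target_id, target_seq, templ_id, templ_chain,
              templ_seq, first_res, last_res):
    """Write a minimal two-entry PIR alignment file for MODELLER."""
    with open(path, "w") as fh:
        # Template ("structureX") entry: the code, first/last residues and
        # chain tell MODELLER which segment of the PDB file to use.
        fh.write(f">P1;{templ_id}\n")
        fh.write(f"structureX:{templ_id}:{first_res}:{templ_chain}:"
                 f"{last_res}:{templ_chain}::::\n")
        fh.write(templ_seq + "*\n")
        # Target ("sequence") entry: aligned sequence only, no coordinates.
        fh.write(f">P1;{target_id}\n")
        fh.write(f"sequence:{target_id}::::::::\n")
        fh.write(target_seq + "*\n")

# Hypothetical usage with aligned sequences ("-" marks gaps):
# write_pir("aln.pir", "target", "MKVL-TAE", "5e84", "A", "MKVLATAE", 1, 180)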
If ligands are specified for modeling, an additional set of steps is taken to prepare the PIR file before modeling can begin. In this context, "ligands" include any HETATM record found in the template PDB, excluding non-standard amino acids; for example substrates, ions, inhibitors and solvent molecules. All ligands specified are identified within their respective template PDB file. The position of the ligand that occurs last in the coordinate section is noted and becomes the ending residue for that template in the PIR file. All residues and ligands that occur up to this position are then appended to the template entry in the original PIR file. In the target sequence, gap characters are added to match the length of the template. Gaps are then replaced with period characters to match the positions of ligand molecules of interest that occur within the template, since MODELLER recognizes these characters as ligands as well. In addition to PIR file modifications, additional parameters are given to the modeling script to instruct MODELLER to read in ligands or solvent molecules where applicable.
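The following sketch illustrates the gist of this modification under simplified assumptions (ligands appended at the end of the template entry; function and variable names are hypothetical). In the modeling script itself, MODELLER must additionally be told to read HETATM records, e.g. by setting env.io.hetatm = True (and env.io.water = True for solvent), matching the additional parameters mentioned above.

def add_ligands(templ_seq: str, target_seq: str, keep_ligand: list):
    """Append BLK ('.') residues for template ligands to a PIR alignment.

    Template ligands are represented by '.' characters after the protein
    sequence; a matching '.' in the target entry tells MODELLER to copy
    that ligand into the model, while a gap ('-') leaves it out.
    """
    templ_seq += "." * len(keep_ligand)          # one '.' per template ligand
    pad = ["." if keep else "-" for keep in keep_ligand]
    return templ_seq, target_seq + "".join(pad)  # pad target to same length

# e.g. keep the first of two template ligands:
# t, q = add_ligands("MKVL", "MKVI", [True, False])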
PV-MSA: a JavaScript wrapper combining the functionality of PV and MSA
PV [32] is a widely used JavaScript plugin for 3D protein visualization. Similarly, BioJS MSA (http://msa.biojs.net/) is a JavaScript component used to visualize multiple sequence alignments. Although these tools are useful in their own right, the need to view a structure in conjunction with its sequence often arises in bioinformatics. In addition, these tools can be difficult to use, as their application programming interfaces (APIs) are fairly unintuitive. To cater for this, we have developed PV-MSA, a wrapper that combines the functionality of PV and MSA in a single JavaScript plugin. PV-MSA also provides a simplified API that makes a fair amount of the functionality of both PV and MSA available. For functionality that has not been included yet, PV-MSA provides direct access to the underlying PV and MSA objects.
Over and above simply wrapping the two plugins, PV-MSA links the selection functionality. For example, a user can select a residue in the protein structure and it will automatically be highlighted in the alignment. The alignment is automatically scrolled to the selected position. Similarly, if a residue is selected in the alignment, its location is highlighted on the corresponding structure. PV-MSA also allows structures to be superposed. In such cases, selecting a residue in one structure will also highlight the aligned residue in the superposed structure. This selection is based on the alignment, and as such, gaps and missing residues are taken into account.
Multiple structures and their sequences can be loaded into the plugin at once and structures and sequences can be hidden and shown independently. PV-MSA also allows users to visualize and select ligands and ions in a structure. Selecting a ligand displays a label over the ligand with the ligand name. Functionality has also been included to resize both the PV and MSA plugins together and independently as the user needs. The PV-MSA plugin can be downloaded from https://github.com/davidbrownza/PV-MSA.
Testing of PRIMO scripts
In order to evaluate the performance and reliability of the PRIMO modeling scripts, tests were run that involved modeling target proteins from the PDB with known structures. The process followed to test the PRIMO scripts is shown in Fig 3A. Target structures were fetched at random from the PDB and templates for modeling were identified using PRIMO's template identification protocol (see above). The template-target sequence identity values were recorded for each set, and target-template combinations were binned according to these values. This process was repeated until there were 1250 different structures in each bin. The bin ranges used in this manuscript include only the lower bound shown (i.e. a template with 30% sequence identity was included in the 30-40% bin, not the 20-30% bin).
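The binning rule can be captured in a few lines; this is a hedged sketch of the rule as stated (half-open bins with an inclusive lower bound), not PRIMO's actual code.

def identity_bin(identity: float, width: int = 10,
                 low: int = 20, high: int = 90):
    """Return the (lower, upper) bin for a template-target identity value.

    Bins are half-open intervals [lower, lower + width), so a pair at
    exactly 30% identity falls in the 30-40% bin, not the 20-30% bin.
    """
    if not (low <= identity < high):
        return None  # outside the 20-90% range used in the benchmark
    lower = int(identity // width) * width
    return (lower, lower + width)

assert identity_bin(30.0) == (30, 40)  # lower bound is inclusive
assert identity_bin(29.9) == (20, 30)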
For each entry in each bin, the targets were aligned to the templates using the four sequence alignment programs provided by PRIMO, as well as the HHsearch alignment calculated during template identification.
For MAFFT, a script was written to mimic the MAFFT-homologs alignment option. This first runs a local version of protein BLAST [25] on both the template and the target to retrieve 50 sequences closely related to each, before combining these two sets of sequences and aligning them using MAFFT. Similarly, for the T-Coffee alignment, a script was written to mimic the functionality of Expresso [33]. Expresso makes use of 3D-Coffee [34], which incorporates structural information when running T-Coffee. While Expresso runs BLAST to identify homologous PDB structures as input for 3D-Coffee, our mimic script runs 3D-Coffee using the alternative templates identified during template identification (excluding the target PDB). These modifications were made because each of these programs requires calls to external webservers, which slow down substantially and eventually crash when thousands of modeling jobs are run.
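A minimal sketch of the MAFFT-homologs mimic is shown below, assuming a local BLAST+ installation and a protein database. The database name and the use of gap-stripped HSP subject sequences as homologs are simplifying assumptions; the blastp and mafft invocations use standard flags.

import subprocess

def collect_homologs(query_fa: str, db: str, n_hits: int = 50) -> list:
    """Return up to n_hits homologous sequences as FASTA records."""
    out = subprocess.run(
        ["blastp", "-query", query_fa, "-db", db,
         "-max_target_seqs", str(n_hits), "-outfmt", "6 sseqid sseq"],
        capture_output=True, text=True, check=True).stdout
    records = []
    for line in out.splitlines():
        sseqid, sseq = line.split("\t")
        records.append(f">{sseqid}\n{sseq.replace('-', '')}")
    return records

def mafft_homologs(target_fa: str, template_fa: str, db: str = "nr") -> str:
    """Pool target, template and their homologs, then align with MAFFT."""
    with open("pooled.fasta", "w") as fh:
        for fa in (target_fa, template_fa):
            fh.write(open(fa).read().rstrip() + "\n")
            fh.write("\n".join(collect_homologs(fa, db)) + "\n")
    aln = subprocess.run(["mafft", "--auto", "pooled.fasta"],
                         capture_output=True, text=True, check=True).stdout
    return aln  # FASTA-format multiple sequence alignment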
For each alignment produced, modeling jobs were run using MODELLER, producing 10 models per run with very slow refinement. Due to the sequence trimming in the PIR preparation step, not all models from the same target-template set were the same size when modeled using different alignment options. To normalize the models, the PDB files in each modeling set were trimmed to the longest common segment of all models in that set.
Models also went through a series of filtering steps (Fig 3B). After performing target-template alignments using the different alignment programs, some models fell outside their designated sequence identity bin (see S1 Fig). This is because sequence identity is calculated from the alignment between target and template, so realigning with different programs produced different values. To make the modeling sets comparable, only sets where the template-target alignments from all five alignment programs fell in the same bin were included. The target coverage was also calculated for each modeling set; here, sets were only included if at least 80% of the target sequence was modeled for all five alignment options. The final filtering step involved calculating the RMSD between each template and target PDB file using BioPython, divided into their respective bins. Outliers were identified and removed from each of these sets. This was done to account for target-template combinations with large conformational differences, where RMSD could not be used to assess modeling accuracy.
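The text does not specify the outlier criterion; the sketch below uses the conventional 1.5x interquartile-range rule as one plausible choice, applied per identity bin.

import numpy as np

def drop_rmsd_outliers(rmsds: np.ndarray) -> np.ndarray:
    """Return a boolean mask keeping non-outlier RMSD values (1.5x IQR rule)."""
    q1, q3 = np.percentile(rmsds, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return (rmsds >= lo) & (rmsds <= hi)

rmsds = np.array([0.8, 1.1, 1.3, 1.0, 22.4])  # toy bin; 22.4 A is an outlier
print(rmsds[drop_rmsd_outliers(rmsds)])       # -> [0.8 1.1 1.3 1. ]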
After filtering, the models were evaluated. DOPE Z-score calculations were performed on each model produced, to select the top model from each set. The top model and the target PDB structure were then compared by calculating RMSD, Global distance test-high accuracy (GDT-HA) score, template modeling (TM) score [35] and Local distance difference test (lDDT) score [36]. Both GDT-HA and TM score values were calculated using TMscore software downloaded from the Zhang Lab (http://zhanglab.ccmb.med.umich.edu/TM-score/). Software to calculate lDDT score was downloaded from http://swissmodel.expasy.org/lddt.
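A sketch of how such scores can be collected from the TMscore binary is shown below. The regular expressions reflect TMscore's usual output labels ("TM-score" and "GDT-HA-score") but should be checked against the installed version; the function name is hypothetical.

import re
import subprocess

def tm_and_gdtha(model_pdb: str, native_pdb: str):
    """Score a model against its native structure with the TMscore binary."""
    out = subprocess.run(["TMscore", model_pdb, native_pdb],
                         capture_output=True, text=True, check=True).stdout
    tm = float(re.search(r"TM-score\s*=\s*([\d.]+)", out).group(1))
    gdt_ha = float(re.search(r"GDT-HA-score\s*=\s*([\d.]+)", out).group(1))
    return tm, gdt_ha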
A PDB remodel set was also produced and evaluated for each target. Each of the targets was modeled using its own PDB structure as a template, representing ideal modeling conditions and giving an indication of the error produced by MODELLER itself.
Testing model refinement options
In addition to testing the different alignment options provided by PRIMO, tests were performed to evaluate the different refinement options provided when modeling using MODELLER. These tests used the MAFFT modeling set, with the same PIR files as calculated for the MAFFT alignments, and were performed only on the final set of models remaining after the filtering steps were carried out. The only parameter altered was the refinement level option in the modeling script. The additional refinement levels tested were none and fast, which were compared with the very slow option used in the alignment studies. Models were evaluated by DOPE Z-score and RMSD, as in the other tests.
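For reference, the sketch below shows how the refinement level is switched in a standard MODELLER automodel run, mirroring these tests (10 models per run, evaluated by normalized DOPE). The alignment file and code names ("aln.pir", "5e84", "target") are placeholders; automodel, refine and assess.normalized_dope are standard MODELLER API.

from modeller import environ
from modeller.automodel import automodel, assess, refine

env = environ()
a = automodel(env, alnfile="aln.pir", knowns="5e84", sequence="target",
              assess_methods=assess.normalized_dope)  # DOPE Z-score
a.starting_model, a.ending_model = 1, 10   # 10 models per run
a.md_level = refine.very_slow              # alternatives: None, refine.fast
a.make()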
Modeling case studies
In order to demonstrate the performance of PRIMO compared to other modeling options, two case studies were performed. These involved the modeling of heat shock protein 70-x from Plasmodium falciparum (PfHsp70-x; accession: PF3D7_0831700) as a monomer, and X-linked tyrosine kinase from Homo sapiens (HsTXK; accession: AAA74557.1), modeled with ligands. Online modeling servers tested included SWISS-MODEL [15], Phyre2 [16], ModWeb [14], HHpred [19] and I-TASSER [20]. SWISS-MODEL was run allowing the server to select the best templates and build models without user intervention. Phyre2 was run using its intensive modeling mode. ModWeb was run using the very slow fold assignment option, but otherwise with default parameters. I-TASSER was run using its default parameters. HHpred was allowed to automatically select the top template and perform its alignment, but the alignment was manually trimmed in the PIR file at the N- and C-termini. For PfHsp70-x, four modeling sets were chosen from PRIMO: 1) using the templates 5e84, 4jne and 5pfn, aligned using MAFFT with no further intervention; 2) the same template combination used in (1), except aligned using 3D-Coffee with no further intervention; 3) the same parameters used in (2), except with small manual edits to the alignment; 4) using the templates 5e84, 5pfn and 3d2f with MAFFT as the alignment program; here, only the final 80 residues of template 3d2f were used to model the C-terminal alpha-helical region of the protein, producing a longer and more complete model. For HsTXK, two modeling sets were chosen for PRIMO: 1) using only template 4ot5, which comprises the catalytic domain of the kinase and was in complex with an inhibitor, 4-tert-Butyl-N-[…]. Both sets were aligned using 3D-Coffee with only minor manual edits to the alignment. All models were assessed using ProSA [37], Verify3D [38,39], PROCHECK [30], the QMEAN server [40] and DOPE Z-score [31].
Independent assessment of PRIMO by CAMEO
As an additional validation step, PRIMO has been registered to participate in the CAMEO (Continuous Automated Model EvaluatiOn) project [4]. CAMEO provides modeling servers with the sequences of PDB structures that have yet to be released; the servers must predict these structures and return them to CAMEO for independent evaluation. PRIMO has been registered with four different modeling options, which use the different combinations of BLAST and HHsearch for template identification, and Clustal Omega and 3D-Coffee for sequence alignment (registered as PRIMO_BLAST_CL, PRIMO_BLAST_3D, PRIMO_HHS_CL and PRIMO_HHS_3D, respectively). Results of evaluation by CAMEO are displayed at http://cameo3d.org/.
PRIMO web interface
The PRIMO website acts as a frontend to link users to the modeling scripts integrated into the JMS (Fig 4). The initial job overview page allows users to specify input and options for all three stages. PRIMO encourages a more 'hands on' approach to modeling, so users can go step-by-step through the process.
Input page. This page provides an overview of the modeling job. Users can simply enter a target sequence and begin the modeling process. PRIMO utilizes MODELLER [13], so users must also supply a MODELLER key. If no other input is provided, PRIMO will run using the default parameters set for each modeling stage. Alternatively, the page allows users to adjust the parameters for template identification, sequence alignment and modeling. For template identification, users can choose to search for templates using HHsearch [19] or protein BLAST [25], or specify templates themselves. They may also select one of five sequence alignment options, which currently include MAFFT [27], MUSCLE [28], T-Coffee [29] and Clustal Omega [26], as well as the alignment created by either HHsearch or BLAST. Modeling parameters can also be specified before the modeling job is started. Thereafter, the PRIMO interface guides users through each step in the homology modeling process. Input for each stage is processed and submitted to our local cluster, utilizing the JMS [23].
Template identification. If automatic template identification is run, the templates identified are displayed, including information about sequence identity and query coverage. Templates can be selected through simple check boxes to be included in the target-template alignment stage. Users can also click on the ID of any template, which links directly to its entry in the PDB, allowing users to further assess the quality of each template. The templates can be individually selected and displayed to assess their conformations for multiple-template modeling. The alignment produced by HHsearch or BLAST (whichever was run) is also displayed in order for the user to assess the suitability of each template as well as to inspect query coverage. The interface also provides options to allow the modeling of ligands found within any of the templates. A drop-down list appears for each template returned, detailing the ligands that can be included in the modeling run.
Target-template alignment. Sequences are extracted from the templates and aligned to the target sequence using the alignment option selected. The alignment is displayed in an integrated alignment viewer and can be inspected and edited manually by the user before moving on to the modeling stage. The alignment editor validates changes that the user makes, in order to prevent edits that would cause the modeling stage to fail. In template sequences, gaps can be added anywhere, but the sequences can only be trimmed from the outsides; if the edited sequence cannot be found within the original sequence (excluding gaps), it is invalid. The target sequence can be edited in just about any way, so long as valid amino-acid characters and gaps ('-') are used. These rules are sketched below.
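The two validation rules can be captured concisely; the sketch below is a hypothetical re-implementation, not the editor's actual code.

VALID = set("ACDEFGHIKLMNPQRSTVWYX-")

def template_edit_ok(original: str, edited: str) -> bool:
    """Template rule: gaps anywhere, but trimming only from the outsides,
    i.e. the edited sequence (gaps removed) must remain a contiguous
    substring of the original sequence (gaps removed)."""
    return edited.replace("-", "") in original.replace("-", "")

def target_edit_ok(edited: str) -> bool:
    """Target rule: any string of valid amino-acid characters and gaps."""
    return all(c in VALID for c in edited.upper())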
Model building and evaluation. The sequence alignment is used to prepare a PIR file, which is used by MODELLER. Modeling is performed using the parameters specified on the input page, and the models are assessed by DOPE Z-score calculations. The top models are listed and can be visualized using the integrated PV-MSA PDB viewer (Fig 5). Additionally, each model contains a drop-down link to the evaluation page. This displays plots produced by PROCHECK, including a Ramachandran plot, as well as nine other plots that describe the stereochemical quality of the model. The page also provides links to various other evaluation servers, such as QMEAN [40] and Verify3D [38,39].
Job history. Jobs are linked to user accounts, which require only an instant sign-in. Users can navigate to previously run jobs, as well as to different stages in their current jobs, alter parameters and rerun jobs. Email notifications can also be turned on to notify users when a job is complete or requires attention.
Submitting jobs to the cluster via JMS
PRIMO makes use of a unique system to submit jobs to the underlying cluster (Fig 1). JMS [23] has been developed as a web-based workflow management system and cluster front-end for high performance computing (HPC). It is able to store custom tools and scripts and manage their execution on an HPC cluster. JMS is used for submitting jobs because it abstracts away the complexity of managing jobs on the cluster, which drastically reduced the time taken to develop the PRIMO web server. PRIMO was originally developed as a series of command-line scripts, which we were able to upload directly to JMS via the JMS web interface. After that, building the PRIMO website simply involved creating a custom interface that interacted with the JMS web API. Submitting and managing jobs on the cluster is handled entirely by JMS, while the PRIMO web server merely has to wait for a notification from JMS that a job has completed.
Accuracy of the PRIMO backend scripts
While the PRIMO website was designed to promote user involvement during each step in the homology modeling process, the backend scripts are capable of performing fully automated modeling. Here we present the accuracy of PRIMO when no user intervention occurs during the modeling process.
To assess the tools and algorithms incorporated into PRIMO, an evaluation study was performed by modeling proteins with known structure from the PDB, using templates ranging from 20% to 90% sequence identity, as well as using five different alignment approaches. After modeling and filtering as explained in the Methods section, the final set included 5 869 modeled targets, comprising 293 450 models, to be evaluated.
Due to the scale of the models produced, evaluations were performed using MODELLER's DOPE Z-score, the results of which are shown in Fig 6. When evaluating models by DOPE Z-score, the desired values are -1.0 or below, as such models are considered native-like [41]. When testing the PRIMO scripts, models from the 40-50% bin and above were on average below this cutoff. This is expected, as structures that share at least 40% sequence identity generally have similar structures [6]. The bins below 40% sequence identity displayed lower quality results, and alignments based on programs that use structural information, such as HHsearch and 3D-Coffee, outperformed the other alignment options, especially in the 20-30% sequence identity bin. This was also an expected result, since the addition of structural information is known to improve alignment accuracy in cases of low sequence identity [42].
The template and target PDB structures themselves were included in the DOPE Z-score evaluations, in order to gauge the quality of these structures. Similarly, each target was remodeled using itself as a template to represent modeling under ideal conditions (100% target-template sequence identity). These remodeled targets never matched the quality scores of either the template or target PDB files. The models in the 80-90% sequence identity bin were the only set that on average matched the quality of the remodeled targets.
The reason for modeling protein targets from the PDB was to be able to evaluate the models produced by comparing them to known structures. This was done by calculating the RMSD between the top models and their target structures (Fig 7A). One limitation of this approach was that, in some cases, the target and template PDB structures were captured in different conformations: some target-template pairs had RMSD values greater than 20 Å, even at high sequence identity. To account for this, RMSD outliers were removed from each bin before models were evaluated.
In all instances, a similar trend was observed to that shown in the DOPE Z-score assessments. This was not entirely surprising since low DOPE Z-scores (below -0.5) have been previously shown to correspond to lower RMSD values [10]. At lower sequence identity ranges, results were relatively poor and programs such as 3D-Coffee and HHsearch that used structural information performed better than the other alignment programs. From the 50-60% range and above, models had measured RMSD values within 2.0 Å of the target PDBs.
An alternative RMSD measure was also considered by calculating the RMSD value between the template PDB and target PDB, and then subtracting this from the values shown in Fig 7A. This was done as a secondary means of addressing the problem of conformational changes between template and target PDBs. The resulting values (S2 Fig) also make it easier to see the RMSD differences between the different alignment options above the 40-50% sequence identity bin. It was interesting to observe that in the higher sequence identity bins, the alignments produced using 3D-Coffee had an average RMSD value greater than those produced by programs that did not take structural information into account, especially in the 70-80% and 80-90% sequence identity bins. To account for the limitations of calculating RMSD scores, three additional scores were calculated to compare the top models to their respective protein targets. These included two global scores, the TM score (Fig 7B) and GDT-HA score (Fig 7C), as well as a measure of local accuracy, the lDDT score (Fig 7D). TM score provides an indication of accuracy at the protein fold level and is considered a better estimate of model quality than RMSD [35]. GDT scores, such as the GDT-HA score, are less sensitive than RMSD to deviations that occur in small portions of a model [36]. The TM score results were promising, with values above 0.8 in modeling sets above 30% template-target sequence identity (Fig 7B). GDT-HA was the strictest measure used, but from the 30-40% bin and upwards scores were above 50 (Fig 7C). As a local quality predictor, the lDDT score is far less affected by conformational changes than global scores [36]. Our results showed more favorable lDDT scores (Fig 7D) than GDT-HA scores.
PRIMO has also been registered to participate in the CAMEO project [4], which allows for independent assessment of the server. Results from this assessment may be viewed at http://cameo3d.org/. Four different modeling options were registered to demonstrate the results of using different template identification and alignment approaches, without adding too much additional strain to the PRIMO server. The scores for models produced by PRIMO are comparable to other published servers, such as Phyre2 [16], and are better than the CAMEO baseline, NaiveBlast. Even though PRIMO was not designed to be used as a fully automated modeling tool, the results from CAMEO will provide valuable feedback for future developments to the server.
Model refinement results
An additional set of tests was run to quantify the effect of using MODELLER's different refinement options. The very slow refinement option was selected for the benchmark tests. These results were then supplemented with results obtained with the refinement level set to none and fast (Fig 8). When comparing DOPE Z-scores (Fig 8A), the greatest improvement is seen between no refinement and fast refinement. There is a further improvement when using very slow refinement over fast refinement; however, this difference is far less pronounced. Even more interesting were the RMSD results (Fig 8B): the advantages of using refinement during modeling are not as clear as with the DOPE Z-score calculations, particularly above 50% sequence identity. Overall, the benchmark results observed are promising, especially since the PRIMO site was designed with user intervention in mind. By altering parameters, such as using more than one template, manually editing the alignment and increasing the number of models produced, users could easily improve on the results reported here by interacting with the PRIMO pipeline.
Case studies
To demonstrate the potential ways to use PRIMO, we designed two simple case studies which involved modeling PfHsp70-x and HsTXK proteins.
Modeling PfHsp70-x. PfHsp70-x is by no means a challenging target to model and can be considered typical of the proteins users might model with PRIMO. Templates are available with good sequence coverage and sequence identity, making this protein ideal for homology modeling. One interesting property of PfHsp70-x is that, as an Hsp70, it takes on different structural conformations in its different functional states. The PDB contains several structures capturing the different conformations of Hsp70, so homologs of this protein from other organisms can be modeled in these different conformations. This showcases one of the important features of PRIMO, namely the template viewer, which allows users to select and view the conformations of different templates, in a manner similar to SWISS-MODEL. This is important because the top models in this case study were produced using multiple-template modeling, which should not be done with template structures in different conformations.
The full set of evaluations is summarized in S1(A) Table. As seen in the automated tests, at high sequence identity there is no clear accuracy gain when using structural alignment programs such as 3D-Coffee compared to using MAFFT. Verify3D and DOPE Z-score results indicated that the MAFFT alignment produced slightly better models than those produced using the unaltered 3D-Coffee alignment. This demonstrates the value of testing different modeling approaches, which the PRIMO interface is designed to facilitate.
As part of this case study, we used other online servers to model PfHsp70-x. Our comparison was against the automated features of these servers, but it should be noted that only SWISS-MODEL and HHpred provided a template selection option. Of these, only the SWISS-MODEL interface gave a clear indication of template conformation, which is an important consideration when modeling Hsp70s. As an alignment editing option, HHpred provided a text editor displaying the PIR file to be used by MODELLER. This was a nice feature, as it gives an indication of the PIR file format in addition to allowing users to edit the alignment. It does, however, require the user to trim the sequences manually before submitting the job for modeling, which only becomes apparent after the model is returned. The other servers assessed were fully automated, providing few or no options beyond the initial input screen. When considering the model evaluation results in S1(A) Table, none of the servers produced poor quality models, which was to be expected, since PfHsp70-x is not a challenging protein to model. What was promising, though, was that the models produced by PRIMO were scored more favorably than those produced by the other servers.
Modeling HsTXK. This was a more challenging target to model, as reflected in the evaluation scores (S1(B) Table), but it does highlight some interesting features of the different modeling servers. With the exception of the SWISS-MODEL server, all modeling sites returned monomers. SWISS-MODEL returned one of the models as a dimer, as this is the predicted biological assembly based on template 4xi2, which it used for modeling. In terms of ligand modeling, both SWISS-MODEL and I-TASSER identified ligands in their respective templates, but only I-TASSER included these in the models produced. Neither server, though, provided options to specify which ligands should be included when modeling. With the PRIMO pipeline, specific inhibitor molecules were selected from each template to be modeled with the protein.
These two case studies were not meant to be a comprehensive assessment of PRIMO compared to other modeling servers, but it was encouraging to see that, for these targets at least, PRIMO performed well against the other servers assessed for most of the evaluation tools used (S1 Table).
Conclusions
As a modeling tool, PRIMO aims to provide a middle ground between the lack of control caused by full automation and the difficulty and tedious nature of writing scripts and using modeling programs through the command line. The site can identify templates using both HHsearch and BLAST, perform sequence alignments with one of five different alignment options, and perform homology modeling using MODELLER. PRIMO incorporates a job history system which allows quick and easy navigation among the different steps of a specific modeling job, as well as navigation between different jobs. With this, users can perform several modeling jobs in parallel, while also being able to go back and alter modeling parameters to achieve the best results.
The PRIMO pipeline allows users of varying levels of experience to perform homology modeling interactively and reliably. While this 'hands on' approach to modeling is largely encouraged, we still aim to ensure that the automated modeling features of this pipeline are as accurate as possible. The accuracy tests reported here demonstrate that these automated algorithms perform with promising accuracy down to 40% sequence identity, which is where comparative modeling is known to reach its limits [6]. The accuracy of PRIMO's automated modeling capabilities is continuously being assessed by CAMEO.
As a web interface, PRIMO is platform independent and requires no personal computing power. The site currently provides a means for modeling protein monomers using one or more templates and provides functionality to allow protein-ligand complex modeling. Unlike other servers, PRIMO allows users to select specific ligands and ions to be included in the modeling process.
Since PRIMO works via communication with the JMS, extending the features and functionality of this pipeline can be achieved by simply adding new tools to the JMS. In future we will add more options for template identification, sequence alignment and model evaluation where possible. For now, PRIMO provides a quick and easy way to perform homology modeling, while allowing users to make alterations and improvements to their modeling jobs.
In summary, PRIMO prides itself on providing flexibility during the modeling process, giving users the ability to exercise a certain degree of control over their modeling jobs. It allows users to edit parameters and rerun jobs, while the navigation bar and job history features allow users to attempt multiple modeling approaches in tandem to optimize their results. The site incorporates a user-friendly design that is simple to use, yet robust. It is intuitive, with all options easy to find and test out, which adds to the educational value of the site, as users gain hands-on experience with homology modeling. Users can adjust parameters and see the effect on the models produced. Apart from the model evaluation options provided by the site, PRIMO links to various other evaluation servers, which inexperienced users should find helpful. In addition, tips and tricks are provided on the loading screen to give novice users suggestions as to how they may improve their modeling runs.
Future development will focus on providing more features, such as modeling of protein complexes, including biological assemblies specified within template PDB files. PRIMO encourages user involvement in the homology modeling process, and as such we shall also aim to provide additional options for each of the stages.
The PRIMO webserver may be accessed freely for academic use at https://primo.rubi.ru.ac.za.
Supporting Information
S1 Fig. Sequence identity box plots. The box plots show the measured target-template sequence identity for all modeling sets, divided into their sequence identity bins and the alignment program used, as measured from the PIR file used for modeling. These are shown for all models produced before the filtering step (Fig 3B).