text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Dynamics of Linker Residues Modulate the Nucleic Acid Binding Properties of the HIV-1 Nucleocapsid Protein Zinc Fingers
The HIV-1 nucleocapsid protein (NC) is a small basic protein containing two zinc fingers (ZF) separated by a short linker. It is involved in several steps of the replication cycle and acts as a nucleic acid chaperone protein in facilitating nucleic acid strand transfers occurring during reverse transcription. Recent analysis of three-dimensional structures of NC-nucleic acids complexes established a new property: the unpaired guanines targeted by NC are more often inserted in the C-terminal zinc finger (ZF2) than in the N-terminal zinc finger (ZF1). Although previous NMR dynamic studies were performed with NC, the dynamic behavior of the linker residues connecting the two ZF domains remains unclear. This prompted us to investigate the dynamic behavior of the linker residues. Here, we collected 15N NMR relaxation data and used for the first time data at several fields to probe the protein dynamics. The analysis at two fields allows us to detect a slow motion occurring between the two domains around a hinge located in the linker at the G35 position. However, the amplitude of motion appears limited in our conditions. In addition, we showed that the neighboring linker residues R29, A30, P31, R32, K33 displayed restricted motion and numerous contacts with residues of ZF1. Our results are fully consistent with a model in which the ZF1-linker contacts prevent the ZF1 domain to interact with unpaired guanines, whereas the ZF2 domain is more accessible and competent to interact with unpaired guanines. In contrast, ZF1 with its large hydrophobic plateau is able to destabilize the double-stranded regions adjacent to the guanines bound by ZF2. The linker residues and the internal dynamics of NC regulate therefore the different functions of the two zinc fingers that are required for an optimal chaperone activity.
Introduction
The human immunodeficiency virus type 1 (HIV-1) nucleocapsid protein (NC) is a small nucleic acid binding protein that possesses a N-terminal basic domain and two zinc fingers connected by a short linker ( Figure 1A). The NC domain, under its various forms (Gag, NCp15, NCp9, NCp7) plays numerous roles during the replication cycle of the virus [1][2][3]. Among these forms, NCp7 (named NC in this report), via its nucleic acid chaperone activity [1,4,5], is thought to facilitate the strand transfer processes occurring during reverse transcription [6][7][8][9]. Through its chaperone activity, NC rearranges the nucleic acids into the most thermodynamically stable conformations. This activity is mainly related to the ability of the protein: i) to destabilize secondary structures of nucleic acids and ii) to promote annealing/aggregation of nucleic acids. Additionally, the fast kinetics of the binding/unbinding of NC to nucleic acids [10,11] as well as the freezing of the local mobility of the contacted bases [12,13] were reported as further key properties of NC chaperone activity. Interestingly, zinc fingers are thought to play a major part in nucleic acid destabilization, fast binding, and dynamic restriction, while the basic N-terminal part is mainly responsible for the nucleic acid aggregation activity [1][2][3]11,[14][15][16][17][18][19].
NC exhibits a clear preference for single-stranded regions [3,20]. Moreover, NC exhibits high affinity for oligonucleotides containing unpaired guanines, such as TG, UG, TGG, GXG in internal or apical loops or in single-stranded domains [1,12,[21][22][23][24][25]. The two zinc fingers of NC are involved in these preferences and most of the structural reasons for these preferences have been inferred from the 3D structures of NC complexes with short oligonucleotides [23,24,26].
Several studies showed that the two zinc fingers (ZF1 and ZF2) are not equivalent [21,[27][28][29][30]. Indeed, the NC mutant in which the N-terminal zinc finger (ZF1) has been duplicated (ZF1:ZF1 mutant) is more replication competent than mutants with zinc finger-swap (ZF2:ZF1 mutant) or with duplicated C-terminal finger (ZF2:ZF2 mutant) [29]. Using the same mutants, ZF1 was shown to be more critical than ZF2 for the nucleic acid chaperone activity of NC [28,29,31,32]. Indeed, facilitation by NC of strand transfers as well as annealing of highly structured nucleic acids substrates is observed only when ZF1 is at its proper place, the ZF2:ZF1 and ZF2:ZF2 mutants being unable to perform these reactions [28,33].
For each zinc finger, one aromatic amino acid, namely F16 in ZF1 and W37 in ZF2, has been shown to be involved in stabilizing contacts with nucleic acids, through stacking interactions with guanines and in some cases thymines [23,24,26,[34][35][36]. Moreover, mutational studies of these residues have shown their essential role in nucleic acid binding [12] and chaperone activity [32,37,38].
Atomic details of NC binding modes to nucleic acids were determined from the structure of six complexes that were analyzed at high resolution using NMR methods [23,24,26,[34][35][36]. Strikingly, in all the complexes one guanine is inserted inside the hydrophobic pocket of ZF2 and stacks extensively with W37. F16 in ZF1 stacks either strongly with guanines (in the case where two guanines are available in the nucleic acid sequence) or stacks partly with a thymine or cytosine residue that remains outside of the hydrophobic pocket of the ZF [25,26,35,36]. Thus, examination of these complexes suggests a strong stabilizing interaction with W37. Moreover, several nucleic acid binding studies with W37A and F16A mutants showed that W37 replacement is much more deleterious than F16 replacement for NC binding [12,32]. Similarly, deletion of ZF2 is largely more critical than deletion of ZF1 for the binding of the NC domain of Gag to its nucleic acid targets [39]. However, in contrast with these data, the isolated ZF1 was reported to bind DNA and RNA sequences more avidly than the isolated ZF2 [22,40,41], while the ZF2:ZF1 and ZF1-ZF1 mutants were found to bind with higher affinity to nucleic acid substrates than the wt and ZF2-ZF2 mutant [42]. This latter result suggests that, either in its isolated form or at the C-terminal position of NC, ZF1 is associated with a higher affinity for nucleic acids than ZF2. It is therefore intriguing that ZF1 appears more important than ZF2 for NC chaperone activity and the viral replication, but not for nucleic acid binding, at least in the context of the wt protein.
This intriguing observation urges us to reconsider the division and coordination of activities between the two ZFs, to get a better understanding of the molecular mechanisms involved in the biological functions of this key HIV-1 protein. Internal dynamics of nucleic acid binding proteins are essential for nucleic acid recognition [43][44][45][46][47] and efficient coordination of their various activities. Since these dynamics can be investigated through NMR measurements of 15 N or 13 C relaxation times [48][49][50][51][52][53], we reinvestigated the internal dynamics of NC, by 15 N nuclear relaxation measurements using two fields (950 and 500 MHz). Detailed analysis of the dynamic parameters of the linker residues shows that the two ZFs move relative to each other around a molecular hinge, localized at a crucial position in the linker. In contrast, several other residues in the linker appear quite rigid and in close contact with ZF1. These specific dynamic properties are likely important for controlling the activities of the two ZFs. Interestingly, reexamination of various 3D structures of NC:nucleic acid complexes allowed us to provide a structural basis for the role of ZF1 in the destabilization of secondary structures. Taken together, our data lead to propose a model of coordinated activity between the two ZFs with a critical role for the linker.
Expression and purification of recombinant HIV-1 protein
The recombinant 55-residue NC protein (HIV-1 strain NL4-3) was overexpressed in a ( 15 N 13 C)-labeled medium and purified as previously described [19].
Sample preparation
NMR buffer (25 mM deuterated sodium acetate pH 6.5, 25 mM NaCl, 0.1 mM ZnCl 2 , 0.1 mM 2-mercaptoethanol) was deoxygenated by sparging with argon for 15 minutes. For NMR experiments, 50 mL 2 H 2 O were added to 0.5 mL of sample into a 5 mm NMR tube. Typical samples contained 1 mM protein. [54,55]. These experiments were carried out on a Bruker Avance 950 MHz (TGIR-RMN-THC FR3050 CNRS, Gif sur Yvette, France) equipped with cryoprobe triple-resonance with z-axis field gradient. 15 N NMR relaxation measurements. NMR experiments were carried out on a Bruker Avance 950 MHz and on a Bruker Avance 500 MHz with triple-resonance probe with z-axis field gradient. All measurements were carried out at 283 K. Standard pulse sequences were used for 15 N relaxation measurements [56][57][58]. In the following, we indicated the parameters used for the experiments recorded at 950 MHz. For experiments at 500 MHz, similar sets of adapted parameters were used. The 15 N-1 H correlation experiments were recorded with 18286256 points and spectral widths of 12 and 34 ppm in 1 H and 15 N dimensions, respectively. All the experiments were recorded with a repetition delay of 4 s between successive scans. The T1 data were collected using 20, 50 (repeat), 80, 100, 150, 200, 250, 300, 400 (repeat), 500, 700, 800 and 3 000 ms for the recovery delay. CPMG pulse trains were used with a 0.9 ms delay between successive 15 N 180u pulses and a 1 H 180u pulse was applied at the center of the CPMG train to remove cross-correlation between 15 N CSA and 1 H-15 N dipolar interactions [57]. The T2-CPMG data were collected with delays of 16, 32, 64, 80, 128 (repeat), 160, 208 (repeat), 256, 320 and 400 ms. In all experiments, the points corresponding to different relaxation delays were acquired in an interleaved manner.
Exclusively for experiments recorded at 500 MHz the CPMG experiments were repeated with the 15 N carrier positioned at 103, 116, 120, 129 ppm in order to take into account the errors arising from large resonances [45,59,60].
The NMR relaxation data were processed with cosine squared apodization and zero filled once in 1 H dimension and twice in 15 N dimension using NMR Pipe. The data were analyzed using SPARKY to determine the relaxation rates and the associated errors.
Quantitative Analysis of relaxation data
Calculation of the overall rotational diffusion tensor is possible since, in the absence of internal motions, the intrinsic 15 N relaxation rates depend on the orientations of the N-H bond vectors relative to the axis of the diffusion tensor [52,61]. The determination requires the T1/T2 ratios, these latter are, to a good approximation independent of internal motions and of the magnitude of the chemical shift anisotropy [61]. However, it is necessary to exclude residues exhibiting motions with timescales greater than several hundred of picoseconds. These latter residues are identified by the fact that they exhibit lower than average 15 N-{1H} nOes [62]. Similarly, residues involved in conformational exchange in the slow regime could also affect the T1/T2 ratios and have been therefore excluded following guidelines presented in previous works [61]. The programs R2_R1 diffusion and Quadric diffusion (AG Palmer) were used to determine the parameters related to the diffusion tensor, including the correlation time. The improvement associated with the use of a more complex model of diffusion (anisotropic, six parameters) relative to simple models such as isotropic (1 parameter) and axially symmetric (four parameters) models was not statistically significant and data related to this model are therefore not presented.
In a second step, the results were evaluated using the ''Model-Free'' approach [63] using ModelFree [64]. A brief recall of the formalism of the ''Model-Free'' approach needs to be presented here. 15 N T 1 , T 2 and 15 N-{ 1 H} nOe values are related to the spectral density J(v) that are determined by the reorientational dynamics of the N-H bond vector [65]. Spectral density terms are the Fourier transforms of the autocorrelation functions of the molecular motions in the time domain.
If the various internal motions are not correlated with the isotropic overall molecular rotation (and this could be questionable as the relative interdomain motion could be, in particular cases, correlated to the overall motion), the total correlation function C(t) can be expressed as C 1 (t) is related to the internal motions and C 0 (t) to the overall molecular rotation. In the original Model-Free formalism the internal motions are characterized by the time constant t e and the generalized order parameter S 2 and no assumptions are made on the nature of the motions; the autocorrelation function is given by and the corresponding Model-Free spectral density is where 1 t 0~1 t e z 1 t c and t c is the overall rotational correlation time and t e is the correlation time of the fast internal motions. This case is described in the ModelFree approach by two models: (i) the model 1 for which the time scale is extremely fast and tends towards zero (only one parameter S 2 is determined by the program to describe the internal motion) and (ii) the model 2 in which the time scale of the internal motion is fast but does not tend towards zero (two parameters are used to describe the internal motion t e and S 2 ). When internal motions of significant amplitude occur both on fast and slow time scale, a procedure called the ''extended Model-Free'' approach could be used [66]. In this approach the corresponding autocorrelation function is given by: with S 2 f and t f being related to the fast internal motion and S 2 s and t s to the slow internal motion. The corresponding spectral density function is described by With the ModelFree program [64] in the case where the t f tends towards zero (very fast motion) and therefore the third term in (5) drops, the preceding ''extended Model-Free'' is described with the model 5 to extract t s , S 2 f S 2 s . Although the preceding description is correct for isotropic motion, in the case of more complex motions such as anisotropic motions, more complete information should be found elsewhere [49,62,67] The estimation of the overall motion parameters has been determined for the whole protein using R2_R1 diffusion or ModelFree as described above. We performed calculations using a global correlation time and a diffusion tensor, as it is classically done [64]. The data obtained at 950 MHz and 500 MHz were fitted separately at each field or simultaneously. As previously described, the relaxation rates of each residue were fitted to five models of increasing complexity [45,64,68]. The spectral densities J(v) were calculated using as parameters, a uniform value d CSA of 2172 ppm for chemical shift anisotropy (CSA) and a value of 1.02 Å for the N-H bond length [69]. In our treatment the standard Model-Free analysis considers only the dipolar 15 N-1 H and the 15 N CSA relaxations pathways. The 15 N-13 C' and 15 N-13 C a that present measurable contribution are not taken into account because they are predicted to be within the experimental error and anyway less than the influence of all others protons (except the 1 H N of the same peptide plane) that are themselves usually not taken into account in relaxation studies [70,71]. Figure S1 in File S1 and in Figure 1, respectively. Experiments at 950 MHz allowed us to obtain a better resolution and to perform more accurate measurements on several residues. Note however that some overlaps remain: namely R52/K34/V13 1 H-15 N cross-peaks cannot be individually resolved. The N-and C-terminal parts are highly flexible as shown by the heteronuclear nOe values less than 0.65 at 950 MHz and less than 0.5 at 500 MHz ( Figure 1B-c and Figure S1 in File S1). In comparison, the ZFs appear highly folded with a mean value of 0.66 (500 MHz) and 0.8 (950 MHz) for heteronuclear nOe. These nOe values are in agreement with the T2 values that are weak in ZFs while the terminal parts display high values, indicating highly and poorly folded structures, respectively ( Figure 1B-b). Concerning the linker residues (29)(30)(31)(32)(33)(34)(35), most of these residues exhibit T1, T2 and nOe values close to those of the two ZFs, however G35 residue differs with significantly lowest nOE value and highest T2 values (relatively to the other residues of zinc finger parts). This residue differs from the other residues of the linker by its nOe and T2 values at the two fields, which are intermediate between those of the ZFs and the terminal flexible parts.
N backbone measurements at two fields
Analysis of the T1, T2 and nOe values show that while T2 and nOe values are close for the two ZFs (at the two fields), a significant difference is observed at 950 MHz in the T1 values of the two ZFs (average values of 812 ms for ZF1 and 718 ms for ZF2). The difference is 12% at 950 MHz and only 6% at 500 MHz. This is also observed in the T1/T2 ratios ( Figure 2) at 500 MHz and 950 MHz (this study) and 600 MHz (in another study [52]). The T1/T2 ratio is particularly interesting because it provides a good estimation of the rate at which each N-H vector reorients with global tumbling [56,61,72]. These differences strongly suggest a difference in the rotational correlation times and/or in the diffusion tensors of the two ZFs [73]. The difference between the two ZFs were hardly detectable at 500 MHz but, due to variations in the spectral densities at 950 MHz, the small differences in the two ZF parameters and namely in their correlation times are fully observable at this higher field.
To further explore the subtle differences between the values for the ZFs and the different residues of the linker, a more quantitative analysis of the data is needed. We performed several analyses, namely an analysis of the diffusion tensors and of rotational correlation times [52] and a model free analysis to determine the order parameters [64].
Quantitative analysis of the data: determination of the rotational diffusion tensors First, we used the R2R1 diffusion program to calculate the diffusion tensor from the 15 N R2/R1 experimental data at the two fields. The data were analyzed by considering the two ZFs, either separately (ZF1, ZF2) or together (ZF1+ZF2). As already described, the T1/T2 ratio provides a measure to determine the rate at which the NH bond reorients as a result of global tumbling [61]. The procedure needs however to exclude the residues with lower heteronuclear nOes and those that could be affected by chemical exchange [61,68]. Additionally, we exclude the linker (29)(30)(31)(32)(33)(34)(35) residues in order to consider only data from ZFs. The results from fitting procedures are presented in Table 1. The data obtained at two different fields converge to show that: (i) a better fit to experimental data is obtained when data from the two ZFs are considered separately; (ii) the ZF1 correlation time is significantly larger than the ZF2 one; (iii) the data from ZF1 fit better with a axially symmetric anisotropic diffusion tensor while the ZF2 one appears isotropic (the x 2 appears lower with the axially symmetric model, the F statistic associated to the use of a more complex fully anisotropic model is not significant).
In a second step, we included the data of the linker residues to the set of data of the two ZFs measured at the two fields. We first added the data concerning the 30-33 residues (A30, R32, K33) either to the data of ZF1 or to those of ZF2 (Table 2). In a third step, we further added the data concerning the G35 residue. This two-step strategy was motivated by the very peculiar behavior of the G35 residue, which exhibited low heteronuclear nOe and high T2 values at the two fields. The data in Table 2 show again a good agreement between the values obtained at the two fields. Moreover, we observe that: (i) addition of the 30-34 residues to ZF1 slightly worsened the fit at 500 MHz (comparing the x 2 values in second and fourth lines of the first part (500 MHz) of Table 2) but improved the fit at 950 MHz (comparing x 2 values in the second and fourth lines of the second part (950 MHz) of Table 2); (ii) addition of the same residues to ZF2 entailed poor fitting at the two fields and (iii) further addition of G35 considerably deteriorated the fit of both ZFs. This suggests that the dynamics of the 30-34 residues are similar to the dynamics of ZF1, while a clear discontinuity exists with the G35 residue that exhibits very peculiar dynamic properties relative to the other linker residues.
Fitting of the 15 N relaxation data with Model-Free
From the above data a very important point emerges. Indeed, while the respective behaviors of the two N-and C-terminal domains are similar at the two fields, their correlation time t c strikingly differ at 500 and 950 MHz. Moreover, if we also examine the data obtained in the same conditions but at 600 MHz in a preceding work [52] the values are intermediate. Therefore the overall rotational correlation time appears to be the clearly field-dependent showing decreasing values as the field increases. This behavior is typical of proteins exhibiting slow internal motion [62,74]. Therefore in the following we use the ModelFree program to analyze the effect of the inclusion of a slow motion on the apparent correlation time and on the data fitting.
A brief recall of the formalism used in the ''model-free'' approach [64,75] has been presented in the Materials and Methods section. The inclusion of a slow motion with the Model-Free approach uses the ''extended model free approach'' that is made possible through the model 5 described in Materials and methods [64]. In this model the generalized order parameter of the slow and fast motion (S 2 f S 2 s ) and the correlation time of the slow motion are determined in addition to the overall correlation time t c . The calculations have been made for several models (models 1, 2 and 5) in order to evaluate the effect of the inclusion of a slow motion in model 5. The results are presented in Table 3. The calculations have been made for the residues that are clearly The different parameter and model of motions are described in the text. The results have been obtained using Powell optimizations in Model-Free, the S 2 and t s values are the average values of the different residues contained in N (ZF1) and C (ZF2) domains. The selected residues are those that contain at both fields neither signs of conformational exchange neither large amplitude, long time scale internal motions (characterized by low heteronuclear nOe). In (a) the relevant parameter for model 1 is S 2 and those relevant for model 5 (that contained both a fast and slow component) is S 2 s . The S 2 f value is related to the fast motion described by the model 5, recall that with ModelFree, we cannot extract the value of the correlation time of the fast motion as it is supposed to be extremely fast. See Table 1 for information on the presented parameters. E/N is the residual x 2 error per residue. doi:10.1371/journal.pone.0102150.t003 exempt of chemical exchange and of large-amplitude motions on a time scale longer than a few hundred picoseconds (which can be identified on the basis of a lower nOe value) using the criteria described previously [61]. Note that selected residues are not exactly the same as those used in Table 1 because here the selection criteria for the different residues must be matched for the data at the two fields. Using this procedure 18 residues have been selected.
We used the first four lines of Table 3 to check that the correlations times and the D II D \ parameter found with ModelFree are the same than with the preceding approach (R2_R1 diffusion program and data presented in Table 1). We clearly see that in these four cases, the residual error per residue (E/N) is quite low indicating a good fitting of the data at each field. However, when the data at the three fields (including the data of Lee et al. (1998) [52]) are fitted simultaneously with model 1 the resulting error is very high (see Table 3 the E/N (residual error per residue) values: 7.2 for ZF1 and 6.2 for ZF2). Including the ''extended Model-Free'' approach with the model 5, the residual error per residue (E/N) drops dramatically (E/N = 1.2, last line of Table 3) underlining the validity of the hypothesis of a slow motion occurring in the nanosecond time scale. The calculations with model 5 (in which the internal motions are described by both slow and fast motion) have also been made in different conditions, namely by changing the maximum value that the correlation time of the slow internal motion can take (Table S1 in File S1). We show that the maximum value is close to the range of 2-3 ns. We also observe the change of the apparent overall correlation time t c with the variation of the correlation time of the slow motion (t s ) as previously described [62,74]. Interestingly the S 2 f values are close to 0.85, a reasonable and typical value for fast librational motion in proteins. The S 2 s values are close to 0.64 indicating the strong involvement of the slow motion in the global motion features of the protein. When these values are considered for each domain separately, a striking difference is found: 0.66 for ZF1 and 0.60 for ZF2 indicating that the C-domain could be more flexible that the N-domain. All the values related to the slow motion are close to those obtained in a precedent work on a similar motion occurring in calmodulin [62].
The existence of this slow motion complicates seriously the determination of the internal motional parameters for each residue as the Model 5 is the most complex model that can be extracted with the ModelFree program and thus cannot take into account additional exchange processes or motions occurring at intermediate time scale (several hundreds of picosecond) [74]. These two last categories of motions are very common in internal dynamics of proteins. A complete extraction of all the parameters describing the various motions in our system is thus clearly too complex and beyond the scope of the present study. Interestingly, a partial description of the protein dynamics can be provided by performing a classical Model-Free approach on the data obtained at each field (950 and 500 MHz). This approach is qualitative since it does not take not into account the slow internal motion identified above. However, it can reasonably describe the exchange and intermediate scale motions of the individual residues. Note that if data at only one field (950 or 500 MHz) was obtained, this approach would have appeared perfectly sufficient.
Thus, the T1, T2 15 N relaxation times and the 1 H-15 N heteronuclear nOe at only one field (either 500 or 950 MHz) were used to extract the internal motion parameters for each residue using the Model-Free approach. In this approach, the relaxation data for each residue were fitted with five internal motions of increasing complexity as described previously [45,64] and the model giving the best fit was assigned to each residue. Most of the residues could be fitted with Model 2, 4, 5 while only a few residues could be fitted with Model 1 and 3 (Materials and Methods). The residues of the N-and C-terminal flexible regions could be only fitted with Model 5 while most of the ZF residues were fitted with Model 2 or 4. The order parameters extracted from the relaxation data at the two fields are presented in Figure 3 (950 MHz) and in Figure S2 in File S1 (500 MHz). The results for the two fields are qualitatively close to each other but fewer points are obtained at 500 MHz due to overlap or impossibility to fit the data with the simple models available in ModelFree. The existence of regions of different flexibility emerges well from the data and is globally in agreement with precedent works [52,53], where the order parameters were not explicitly extracted for all residues. The residues of the N-and C-terminal domains exhibit low values for the order parameter from 0.1 to 0.5, while the two ZFs appear highly structured with values from 0.67 to 0.81 at 500 MHz and 0.77 to 1.0 at 950 MHz. The two ZFs appear equally structured, with order parameter values that are not significantly different. For the linker residues, the order parameter values for A30, R32 and K33 are similar to those of the ZF residues while a significantly smaller value is obtained for G35 (0.44 at 500 MHZ, 0.58 at 950 MHz). This last value is intermediate between the values observed for the ZF residues and those of the flexible C-and Nterminal domains, revealing a significant flexibility of this residue. It is important to note the discontinuity between the G35 value and the values of the other residues in the linker. This clearly suggests that G35 is a hinge allowing the two ZFs to move in respect to each other. Note that we have described thoroughly this internal motion in the preceding paragraphs and that we have found it to occur in the nanosecond timescale (2-3 ns).
Additional Characterization of the protein conformation
We analyzed in detail the 2D NOESY spectra recorded at 950 MHz focusing our investigations on the nOes interactions between residues belonging to the two ZFs fingers and the linker domain. Since interknuckle nOes have been reported in several studies [26,52,76], we checked for their existence in our spectra. We observed most of the nOes already described (A25b-W37g2; A25b-W37f2, A25b-W37f3; W37e3-F16b; W37b-F16d; etc…), but could not find nOes between the aromatic protons of F16 and the aromatic protons of W37. Concerning the linker residues, a very large number of nOes (more than 25) were observed between several protons of N17 (and mainly the two amino protons resonating at 6.81 and 8.52 ppm in our conditions) and the linker residues. Some of these nOes are rather strong and involve interactions with C28, R29, A30, P31, R32, K33 residues, revealing therefore an extended network of contacts. While some of these nOes have been observed previously [76], such a large network of contacts involving linker residues has never been underlined. However, it was not necessary to build new structures taking into account these nOes as the existing structures (1mfs and 1esk) are in agreement with most of these nOes. The others nOes contacts involving the W37d1, e1 and f2 protons and the R32, K33 residues have been already described.
Globally, our data suggest the insertion of the N17 residue inside the turn of the linker induced by the P31 residue (Figure S3 in File S1). Note also that this insertion appears to induce a very large chemical shift difference between the two N17 amide protons (1.71 ppm). This difference is much larger than that usually observed for Asn amide protons and for others Asn residues of NC. It indicates that the N17 amide protons experiment shielding or local field effects. Hydrogen bonding could explain theses effects. The nOe contacts between the F16 and W37 residues of the two different zinc fingers indicate that the amplitude of the motion between the two domains is limited in our conditions (10uC); in line with the limited size of the hinge (G35 residue) that constraints the amplitude motion.
Examination of the three-dimensional structures of NC-nucleic acids complexes
We investigated carefully the four three-dimensional structures of NC bound to stem-loops that are available in the PDB (Figure 4). In all of these complexes, NC is bound to the unpaired region. We examined more particularly the position of the two ZFs relative to the stem base pairs adjacent to the loop. In all the complexes, ZF2 (with W37 in violet and stick mode) is positioned farther from the stem than ZF1. Furthermore, one guanine of the loop is inserted in ZF2 in all the complexes, while another guanine is inserted in ZF1 only when two unpaired guanines are present in the loop. This strongly suggests that ZF2 possesses a stronger affinity for guanines than ZF1 as already mentioned in the introduction section.
ZF1 exhibits an extended hydrophobic platform formed by the residues V13, F16, I/T24, A25 (shown as yellow spheres in Figure 4). In Figure 4, we highlighted the positioning of this hydrophobic platform relative to the closest residues of the stem. In SL2 and SL3, one guanine (located in the apical loop and indicated by light grey in Figure 4a and c) is inserted in the ZF1 hydrophobic platform that is localized far from the stem. In contrast, in the complexes with (-) PBS and mini-cTAR (Figure 4b and d) in which ZF1 does not interact with a guanine, ZF1 contacts one base and one sugar of the stem. These observations suggest that the guanine binding to ZF1 mobilizes the residues of the hydrophobic platform and therefore hampers its binding to the stem and thus, prevents the stem destabilization. In contrast, in the complexes in which ZF1 does not interact with guanine, the hydrophobic platform could contact the stem and destabilize it ( Figure 5).
Dynamic properties of the linker residues
In this study, our data underline the difference of dynamic behavior between the different domains of NC and notably between the two ZFs, as shown by the quantitative analysis of relaxation data (Table 1) as well as by the raw data and particularly the T1 and T1/T2 values at 950 MHz ( Figures 1B and 2). Our data suggest the existence of a relative motion of the two domains (N-and C-terminal) around the G35 and perhaps K34 residues of the linker, acting as a hinge. The observed changes in the apparent overall correlation time with the magnetic field are typical of such a slow motion around a defined hinge [62,74] and can be well modeled using the ''extended Model-Free'' approach.
After the identification of this slow motion, we use the classical Model-Free approach at one field (either 950 or 500 MHz but not simultaneously) to describe the dynamic status of each residue in the protein. From the order parameter obtained for the linker residues, we could conclude that residues A30, R32, K33 are much less flexible than G35 and share common properties with residues of ZF1, suggesting the existence of a persistence length extending from the structured ZF1 to most of the linker residues [73,77,78]. Both this persistence length and the constraining nature of L-proline at position 31 confer rigidity to the linker [76]. In contrast, the seventh residue of the linker, G35 exhibits a significantly lower order parameter indicating that it is highly flexible. Consistent with this notion is the fact that the two Ha protons of G35 present identical chemical shifts, as it could be expected for mobile residues and as it is the case for residue G4 located in the unstructured part of NC. In contrast residues G19, G22, G40 and G43 located in the structured ZFs display all nonequivalent chemical shifts for their two Ha protons. Therefore, our data suggest that the two ZFs move relatively to each other about this pivotal position. Nevertheless, we cannot completely exclude that the K34 neighbor residue, for which the relaxation data could not be measured due to resonance overlapping is also dynamic (see also [53]). However, due to the very small size of the hinge region (one or two residues), the magnitude of the relative motions of the ZFs is probably rather limited. This could explain why a spatial proximity between the two ZFs has been repeatedly found, notably under the form of nOes observed between residues belonging to the two ZFs [52,76] and FRET between the aromatic residues of the two ZFs [79]. It has been also suggested that the spatial proximity between the aromatic residues could be highly dynamic [52] and that increasing the temperature increases the flexibility of the linker, probably by the fact that K34, K33 and R32 residues became also flexible and therefore increasing the amplitude of the conformational changes and resulting in an ''opening'' of the protein [53]. Note that slight changes in the linker composition such as the replacement of L-Pro by D-Pro are sufficient to prevent the spatial proximity of the ZFs [76].
Besides these dynamic aspects, we consider with attention the extended network of nOes involving particularly the side chain N17 of ZF1 and several protons of the C28, R29, A30, P31, R32 and K33 residues. These nOes suggest an insertion of a hairpin loop of ZF1 (with N17 located at its extremity) inside the turn of the linker (See Figure S3 in File S1). This folding is found in the two deposited structures (1mfs and 1esk) and is expected to modulate the properties of ZF1 relative to ZF2 explaining the nonsymmetrical role of the linker in respect to the two ZFs as well as the differences in the motions of the two ZFs. Our results support the critical role of the linker residues on the NC architecture and activities [76,80].
Implication for the nucleic acid binding properties and roles of the two zinc fingers
The analysis of the dynamical behavior performed above revealed the existence of a relative motion of the two zinc fingers around a hinge constituted by the G35 residue at 10uC, and the non-symmetrical contacts of the linker residues with the two ZFs (See Figure S3 in File S1). We suggest that these contacts affect the nucleic acid binding ability of ZF1 relative to ZF2. Indeed in the last years a sufficient number of NC-nucleic acids structures became available, so that it is now possible to draw some general rules. A clear indication from these binding studies is that ZF2 exhibits a higher avidity than ZF1 for unpaired guanines, since in all studied complexes in which only one guanine is present in the nucleic acid binding site, this guanine is inserted in ZF2 [26,35,36]. Moreover, the higher affinity of ZF2 for guanine as compared to ZF1 was also confirmed in biophysical studies [12,32]. Following the ''initial'' binding of a first guanine residue by ZF2 [3], a reorganization of the linker residues involved in nucleic acid binding (R32, K33, K34) may occur, leading to the insertion of a second guanine inside the highly hydrophobic pocket of ZF1 (in the case where two guanines are present in the bound sequence). However, the sequential nature of such events remains to be demonstrated. This conclusion is further substantiated by the significantly smaller order parameter observed for the C-terminal as compared to the N-terminal ZF (0.60 vs. 0.66) suggesting that the former is more flexible and prone to adapt to unpaired guanines.
Examination of the three-dimensional structure of the published complexes of NC with various DNA and RNA stem-loops provides interesting information on their topologies that complement the dynamic data. In the complexes of NC with SL2 and SL3, in which one guanine is inserted in each ZF, no contact occurs between the hydrophobic residues of ZF1 (V13, F16, A25, T/I24) and the stem (Figure 4a and c). In contrast, with the PBS complex, and to a lesser degree with the mini-cTAR complex, a guanine is inserted only in ZF2 and the hydrophobic platform of ZF1 contacts the stem (especially V13 and T24), suggesting that it can destabilize it. Thus, the binding of a guanine base into ZF1 in the complexes with SL2 and SL3 likely mobilizes the hydrophobic platform of ZF1, so that it is no more available for destabilizing the stem. This observation is in agreement with previous data showing that the SL2 and SL3 sequences, involved in the selective packaging of HIV-1, genome are not significantly destabilized by NC [24,81]. In contrast, it is necessary that NC destabilize the PBS and cTAR hairpins to favor the PBS(2)-PBS(+) and TAR-cTAR annealing processes that are required for the strand transfer events during reverse transcription [35,[82][83][84] (Figure 5).
In this model, each ZF is specialized in one function. While ZF2 is required for the binding of accessible and flexible guanines, ZF1 is needed either for the binding of a second guanine (if a GXG sequence is present) or alternatively for the destabilization of base pairs through its large hydrophobic platform (when only one guanine residue is present in the bound sequence by example TG in PBS). Due to this specialization, ZF swapping or duplication mutations such as the ZF2-ZF1 or ZF2-ZF2 mutants (associated with poor chaperone activity) could not achieve similar activity as ZF1-ZF2 or ZF1-ZF1 constructs (associated with strong chaperone activity) [28,32]. Indeed, while in the two last mutants, ZF1 necessary for destabilization is at the right place, this is not the case for the two others mutants (ZF2-ZF1 and ZF2-ZF2), where the ZF at the N-terminal position is ZF2. It is likely that the hydrophobic platform of ZF2 is too small to destabilize base pairs in double stranded nucleic acids.
Conclusions
In conclusion, 15 N relaxation studies of free NC show that the two zinc fingers move relatively to each other around a hinge of small size (G35 and perhaps also K34). The dynamical data further indicate that the motion is in the nanosecond range. Except the G35 flexible residue, the linker residues appear rather rigid, and some of them interact with the ZF1 residues. These new results can be used to explain the dissymmetric binding properties of the two zinc fingers. We suggest that the linker-ZF1 contacts modulate the nucleic acid binding properties of the ZF1 relative to the more mobile ZF2. Therefore, ZF2 could be involved in the ''initial'' binding to exposed guanines, while ZF1 could either bind another available guanine or be involved in the destabilization of the double-stranded part of the nucleic acid substrate depending on the number of guanines in the target sequence.
Supporting Information
File S1 This file contains Figures S1 -S3 and Table S1. Figure S1. Experimental 15N NMR relaxation data (Longitudinal (T1), transverse (T2) relaxation times and heteronuclear nOe) for backbone atoms obtained at 500 MHz for NC at 10uC. Figure S2. Order parameters determined from Model-Free analysis of 15N relaxation data obtained at 500 MHz for NC at 10uC. Figure S3. Structure of NC (pdb 1esk) showing the positioning of the N17 side chain (in red) relatively to the linker, the different residues R29 to K33 are shown with their side chain in blue, K34 in green, G35 in yellow. The side chains of the others residues are not shown, the zinc atoms are shown as spheres and the cysteines and histidines coordinating zinc atoms are shown in orange. Table S1. Results for the fits of the 15 N NMR relaxation data obtained at 950, 600 ( [52]) and 500 MHz using various models of motions with the model-free formalism described in Materials and Methods. (DOC) | 9,940 | 2014-07-16T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Dual Water Choices: The Assessment of the Influential Factors on Water Sources Choices Using Unsupervised Machine Learning Market Basket Analysis
An unsupervised machine learning model of association rule known as market basket analysis is proposed in this study to analyze the influence of various socio-economic factors on the choice of the water source. Data of 51 socio-economic factors collected from 295 individuals living in 65 households in Ambo city in the Oromia region of Ethiopians were used for this purpose. The results revealed (i) 64% of the family preferred multiple water sources (i.e., public tap and river water), (ii) the water was collected females in 92% of the households, and (iii) majority of people preferred bathing and laundering in the river (support = 32% and confidence = 87%). Direct utilization of river water is not a preferable choice for the user since it may lead to severe health issues and cause water pollution from bathing and laundering. Education and monthly income have a significant impact on the choices of water sources. Local management authorities can improve sanitation and public health management using the results obtained in the study. The paper only gives a glimpse of the important factors that should be considered for improving the way of life for the underdeveloped areas of the world using advanced machine learning techniques.
I. INTRODUCTION
Unsustainable human activities have caused water pollution and degradation of safe water sources across the globe [1], [2]. Decreasing freshwater resources and increasing water demand to meet the growing population and economic demand have caused water scarcity even in many water-rich regions. About 2.4 billion of the global population is under water stress, which would increase to 9.6 billion in 2050 [3], [4]. The scarcity will be more in developing countries due to inefficiency in water resources management.
Despite water scarcity, the availability of low-cost water purification technologies and changes in government policies to fulfil the Sustainable Development Goal of ensuring clean The associate editor coordinating the review of this manuscript and approving it for publication was Xujie Li . water and sanitation for all have made the availability of safe water to more population in recent years [5]. A study by [6], estimated access to improved water sources to 16% more population in 2015 compared to 1990. However, improved provision of safe water alone cannot ensure access to clean water. There are many social and economic factors, such as education and behavior, that influence the choice of water sources and the use of safe water. For example, people with better economic ability have more access to freshwater. Education makes people more aware of safe water and enhances their willingness to get access to safe water. Reference [7] showed that the choice of water source is significantly affected by place of residence, geopolitical zone, education, wealth index, ethnicity, access to electricity and gender. Therefore, it is very important to consider these factors in designing and implementation of water services programs. Quantification of the relative influence of different social and economic factors on safe water use is also important for sustainable water resources management, ensuring social equity is water access, protection of public health, and improvement of the quality of life.
Association of safe water use with different socioeconomic factors like health, education, gender, distance to the water source, size of family and so on has been established by many researchers [8], [9]. The previous studies have used multinomial logistic regression [10], generalized linear models [11], conditional logit model [12] and univariate analysis [13]. Table 1 presents a brief survey of the previous research studies done to identify the factors affecting household water access in developing countries using various techniques. All the reported literature studies were conducted using classical statistical models.
The statistical models provide a numerical measurement of association by testing the hypothesis of an association between two variables [14]. Such statistical approaches are applicable when the two variables of interest are known, or a hypothesis is already defined. However, such a condition is not valid when prior knowledge of the relationship among the variables is not known. Besides, such methods are limited to numerical and ordinal data. Therefore, a reliable method is needed to unwrap the correlation in a dataset with a wide range of data types [15], [16].
This study aims to use an unsupervised machine learning model which uses association rule, also known as market basket analysis. It is an efficient analytical methodology to analyze and optimize the choice and behavior of the customer [17]. The main advantage of unsupervised machine learning is the potential to solve complex problems using the capacity of intelligent mimicking [18], [20]. It can observe frequently occurring pattern, correlation and association from the dataset with the help of the three thresholds support, confidence and lift [21]- [23].
The goal of the current research is to use machine learning method for establishing the relationship between the daily water requirement for households, available water supply facilities, choices of water supply and how these choices are affecting the consumer regarding several elements, such as education, job, daily time invested, household income, responsible member of the house to do the chores, awareness, the willingness of maintaining the sanitation and hygiene. The paper desires to highlight the condition of developing countries that lack awareness and affordability of proper water supply. The methodology used for the assessment of the influence of socio-economic factors on water use can be replicated for a reliable analysis of socio-economic interactions with natural resources.
II. FACTOR INFLUENCING WATER SOURCE SELECTION A. SOCIO-ECONOMIC CHARACTERISTIC
Water sources and their utilization are holistically affected by socio-economic factors. The choice of water source depends on the distance from the source, way of access, data availability, ethnicity group of the area, status of the family, education-water access relationship and so on [24], [25]. Besides inequalities in water access, water policies for predominantly poor and economically disadvantaged rural settlements are greatly affected by socio-economic variables in developing countries. Hence, it can be considered the key indicator of water sources utilization [26], [27].
B. PRICING
Water consumption and water price are related to each other. The increase in water price significantly affects water consumption, specifically in low-income households where water bill surges affect the monthly budget [28]. The developing and underdeveloped countries commonly consist of high population density, and raised tariffs might undesirably affect the financial health of households [29]. In specific cases, the pricing can change as per the seasonal variation to minimize the consumption and enhance the water accessibility as per WHO guidelines [30].
C. COLLECTION TIME
Research studies have found that the choice of water source is likely influenced by household characteristics and distance to the water source [38], [39]. As the women spent most of the time in water collection and change in distance, the water sources affect more to the female and children [40]. The collection time is also affected by the water activity since water for drinking and cooking need better quality, even though the travel time increases irrespective of whether the user chooses the source. In addition to that, the water infrastructure and water policies significantly change the household behavior towards the efforts given to water collection, which may include proximity, pricing, quality, accessibility to the source and geographical structure of the area [12].
D. MULTIPLE SOURCES
Choice of multiple water sources is affected by the distance from the water source, quality of the water depending on activities, such as bathing, washing, cooking and conflict among the people. Reference [41] the study reports that the factors influencing the multiple uses of water sources are water services, water supply scheme, technology-system design, water quality-quantity, collection distance and time. Another study reported by [42] showed that users preferred river and public tap depending on the quality, improved access to the water source and efforts on water collection (time taken, the volume of water transported each time, frequency of filling).
E. IMPACT OF COMMUNITY WELLBEING
Water availability and community health are essential parameters to measure community wellbeing. Reference [43] reported in the study that small-scale community water supply significantly affects hygiene behaviors and daily life. The factors considered were water source from the river to groundwater, collection time, household water consumption and income.
F. HEALTH
Water impact on health is a well-known fact but as per the studies yet neglected. Poor water supply or unprotected water source can cause acute infectious diarrhea, Trachoma, Ascariasis, Hook work infection, Dracunculiasis, Schistosomiasis and other diseases from heavy metal exposure. Without access to piped water, the household is 4.8% more prone to infant mortality from diarrhea and other waterrelated diseases [44]- [46]. It was also reported that an awareness program alongside water supply improvement is also necessary to improve health, sanitation and decrease infant mortality [47]- [49].
G. EDUCATION
There is no well-established relation between education and water yet; however, most research supported the relation between water and education [50]. A good impact on the educational campaign in line with water savings and conservation has been noticed in water-scarce countries [51]. Consequently, a study in China has reported that children (especially girls) get to attain more school days in rural areas if they get treated water in their households [52]. Educated parents showed more interest in the easy access of water and hygienic maintenance for their children's health [16], [53].
H. INCOME
Income supports more increasing water demand than the hygienic way of using it; however, the child involvement for water fetching remains the same [16], [54]. The research reported less or no relation between the duration of water access and income [55]. Another study depicts that income is affected by the education level of the head of the household, assets and the preference to the home lifestyle, including water-related decisions. The income structure significantly affects the willingness to pay the water bill, inter-sectoral water transfer, and user-specific water demand and consumption response [56].
III. CASE STUDY AND DATA EXPLANATION A. STUDY AREA
Ethiopia experienced a severe water shortage due to inadequate rainfall in the 2017 rainy season, which led to catastrophic agricultural and socio-economic losses [57]. Recently, 10.5 million Ethiopians need humanitarian assistance due to clean water shortage, as per the report published in August 2017 by the Government of Ethiopia [58].
Ambo is a town in the Oromia region and is 120 km from the capital, Addis Ababa. Ambo is one of the important geographical locations where water insecurity is at its peak, affecting the social and healthy life among 180 Woreda of West Shoa Zone, Oromia Regional State of Ethiopia [59]. Ambo woreda has six kebeles (the smallest administrative unit of Ethiopia similar to a ward) such as Ambo01, Ambo02, Ambo03, Senkale Foris, Kisose odo and Awaro. Kerchelle masa of Ambo 01 has insufficient water supply due to their human behavior, literacy rate, and insufficient infrastructure of the water supply system. The selected study area is well known for Afan-Oromo people in terms of their mother tongue. Ethiopia is running a severe water shortage due to inadequate rainfall during the 2017 rainy season, leading to insufficient crop production and catastrophic socio-economic losses [57]. The study area in Ambo is displayed in Figure 1.
The study area comes under the economically waterstressed zone and only rely on the surface water sources, especially river water. Besides, the groundwater has high fluoride, which makes it difficult for direct use [60]. The study considered the groundwater source, but none of the users reported any use. Therefore, it was not discussed in the result.
B. DATA COLLECTION
A structured questionnaire was developed considering the local language, was used by the interviewees. The questionnaire was translated into Oromo then back-translated to English. The household was selected randomly within a 1 km distance from the river water source. The sample size consists of 295 individuals living in 65 households. The selected area comes under the rural area, Kerchelle masa village situated near Taltelle river of Ambo region. Ethiopia is one of such under developed country which is continuously need humanitarian assistance due to low economic condition [57]. Moreover, severe shortage of water due to inadequate rainfall in rainy season has led to catastrophic agricultural and socioeconomic losses which added more stress to the present condition [58]. The study is a village which is far from the city due to which awareness and education is very less. This is one of the major reason that gathering data was one of the most difficult issue in this study. The selected area was one of the nearest village but issue focused in the study is the same for many such villages. Thus, this study aims to highlight the problems related to the water supply system of such villages all over Ethiopia and tried get attention required to improve the situation. This study found that with this number of data can be used for such case study and highlight the water supply problems and it's the influencing factors. The study also found that other research studies efficiently achieved their goal with minimum number of sample size such as n = 100 [36] and n = 40 [31].
The dependent variable used is the water sources, and independent variables for the study area were chosen based on the public water-related awareness and significant factors influencing wellbeing. Among which to was a major focus that factors such as education, job and awareness factors a lot when choosing the water source. The aim of considering many parameters was to evaluate and point out the issues which can elevate the life condition of the local people. Even though water was our primary concern but many key factors influences it and if at least one of the key point can be improved then major change can be observed. The independent variables were household member (number of male adults, number of female adults, number of male child and number of female child), education level (Uneducated, below 10 th standard, below 12 th standard and graduate), number of member employed, monthly income in Ethiopian Birr (ETB), daily water requirement of the household in litre, water collector information (sex and age), water collected each trip in litre, number of times water collected, total time taken for water collection in minutes, water collection method (manual and transportation), water bill paid per month in ETB, water sources selected for different household activity (bathing, washing and cooking), water collection time (Morning and evening), seasonal water quality variation (summer and rainy), seasonal water collection efficiency (difficult and easy), water interruption in days, household water treatment, cleaning frequency of water storage container (daily, never, weekly), hygiene information (use of soap before handling water, during bath, after defecation), defecation in river, toilet at home, heath information (diarrhea, common cold and other diseases).
IV. APPLIED MACHINE LEARNING AND STATISTICAL APPROACHES A. ASSOCIATION RULE/ZZZZZ/WWWWW/MARKET BASKET ANALYSIS
Association rule is the rule-based machine learning methodology where highly confident associations among multiple variables are calculated. This rule is a centered technique which has exhibited higher accuracy [61]. This tool has been used in different fields of science and engineering but have been applied in a limited number of researches with respect to analysis of water uses [62]- [64]. The best association is selected based on the various statistical analysis outcome such as the high number of counts, higher value of confidence, lift and support. Apriori algorithm is suited to the character variable and leads to better performance [65].
In a given transaction, where each transaction is a set of items, the association rule reveals that the transaction in the dataset containing X also contains Y expressed as equation (1). Support metric parameter is applied to measure the occurrence of the transactions of the item set and classifies the best rule for auxiliary analysis. Thus, support is the fraction of the total number of transactions for sets of items and can be expressed as equation (2). Another parameter to measure the occurrence of consequent and antecedents is known as confidence. The presence of the probability of occurrence of the sets of items and expressed as equation (3). The confidence value is sometimes high even though the association between the items is most likely unrelated. To overcome such a situation, Lift is introduced as the third metric parameter, which controls the frequency of consequent while measuring the conditional occurrence probability of {Y} given {X} and can be expressed as equation (4) [66]. Support and confidence reflect the usefulness and certainty of the identified rules. R version 3.6.1. Packages ''arules'' and ''arulesViz'' were used for association estimation and visualization, respectively.
where, X and Y are sets of items
1) BASIC CONCEPT
The process occurs in two steps (i) finding frequent items and (ii) generation of strong association rule based on support and confidence. Let consider J = (i1, i2 . . . .i n ) to be a set of items (itemset). Let D, the task-relevant data, be a set of database transactions where each transaction T is a set of items such as T ? J . Each transaction is associated with an identifier, known as Transaction ID (TID). Let consider a set of items, where a transaction ∪ ∩ _( ? ) → |T is a set to contain X if and only if X _( ? )T . An association rule is an inference of the form X ⇒ Y , where X ⊂ J , Y ⊃ J , and X ∩B = φ. The rule X ⇒ Y holds in the transaction set D with the support. That contains X ∪ Y (that is X and Y ) is taken as the probability, P(X ∪Y ) the rule X ⇒ Y has confidence C in the transaction set D, if C is the percentage of the transactions in set D which contains X that also contains Y . This is taken as the conditional probability as P(Y|X) that is, Support = (X ⇒ Y ) = P(X ∪ Y ) and Confidence == (X ⇒ Y ) = P(Y|X) [67]. Figure 2 presents the general flowchart of the Association rule process. Pearson's correlation analysis is widely used in multiple sectors of science due to its accuracy in terms of the relationship between the variables [68], [69]. It measures a linear dependence between two variables (x and y) where, x and y are the variables; m x and m y are the mean of x and y, respectively.
B. MODELING DEVELOPMENT
To transform the dataset into the transaction database so that it can be subjected to the selected analytical method. The data frame was divided according to information from the questioner prepared for the study. The data frame consists of 51 variables and 65 observations. Out of 51 variables, 28 variables were assessed using association rule, 6 variables using Pearson's correlation analysis and others were analyzed in general in which interviewer's no. and interviewer's general information was irrelevant for the analysis where the 3 rd variable was the distance from the river which found to be average of 385m (min-max: 300-950m). The remaining variables were interpreted using general data analysis like the mean and min-max analysis. The selected 28 variables were converted in logical type or binary type (i.e., TRUE or FALSE) data set as part of data preprocessing. The main transaction database (i.e., 28 selected variables and 65 observations) was divided further to better understand their relationship with each other and handle the numerous amounts of rules generated by the program. The application was separately established considering two different water sources, including Public tap and (ii) River. The water sources were coupled with four scenarios considering the dual water sources, i.e., public water source and river water sources, along with other variables presented in Figures 3 and 4. Total 65 observations are plotted for two water source choices, public tap water in Figure 3 and river water in Figure 4. frequency Daily, (vi) Cleaning frequency: Never, (vii) clean hand before handling water, (viii) use of soap.
Scenario D: Seven variables: (i) Water Source: Public tap/River, (ii) bathing in river, (iii) Use soap during bathing, (iv) defecation at river, (v) after defecation use of soap, (vi) toilet at home, (vii) common cold.
The implementation part consists of the first step of finding an association rule is searching for frequent item sets. Apriori algorithm available in the R studio ''arules'' package was used to find frequent item sets [70]. The transaction database (a) was specified with minimum support of 20%, minimum confidence of 70% (b) with minimum support of 90%, minimum confidence of 95% (c) specified with minimum support of 40%, minimum confidence of 70% (d) specified with minimum support of 50%, minimum confidence of 70%. Different minimum support and confidence were used for the association rule formation to decrease the number of rules and computational time. The application was able to find the association rules for (a, c and d) around 20-50 seconds since the rules were less. In dataset (b), there were more than 1000 rules that took several minutes, which was later reduced using high minimum support and confidence. Further, for better management and interpretation ''Head'' function of R studio was used with specified 50 top rules sorted as per the descending order of the ''Lift''. The results produced was presented in table 2 where the LHS (left-hand side) is called antecedent, and the RHS (right-hand side) are called consequent along with specified minimum support and minimum confidence, lift and count. The principle between the item set is ''IF'' the antecedent, item {X} is there ''THEN'' the consequent, item {Y} will be there, and this is supported by the three thresholds minimum support and minimum confidence and lift. The results produced from 8 simulations were quite high; thus, the most relevant relations were sorted out. Table 2 only presents 44 of the most pertinent association of the variables.
V. RESULT AND DISCUSSION
The results show that washing, bathing, and choice of public water sources are highly correlated (support = 36%, confidence = 75%). Villagers preferred river water for bathing and washing (support = 32%, confidence = 87%). The survey shows that single users opted 100% for on public tap, and 64% used both water sources.
Female water collectors with an education level less than the 10th standard used the river for bathing (support = 23%, confidence = 75%), which depicts that public tap is used less for bathing; however public tap is preferred for washing (support = 49%, confidence = 100%). The uneducated and educated family opted for the female collector with support 20% and confidence 100%, whereas a family with basic education below 10th showed more aptitude towards the same (support = 55%, confidence = 100%, count = 36). The study finds that collection's responsibility falls on females rather than males, where 92% of families have similar choices. Since most households collect water in the evening and morning (support = 90%, confidence = 100%), and thus, collectors have to devote more time.
The water quality during the rainy season is good, but collection becomes difficult, and interruption and evening collection is also affected (support = 55%, confidence = 87%). In this case, to manage the lack of water, a river source is chosen. The study also reveals that tap water quality remains good (support = 95%, confidence = 100%, count = 62%) but highly related to the regular interruption of water supply (support = 96%, confidence 96%). The average water supply interruption is 5.8 days with a minimum of 2 days and a maximum of 14 days long. The reason for frequent interruption can be due to (i) limited water supply (ii) rainy season affects the water treatment plant of the study area by increasing the sediment load, which leads to the temporary shutdown (iii) municipal corporation poor maintenance leads to frequent leakage, broken pipes and blockages.
In hygiene and sanitation issues, the results show that cleaning the container weekly is the most frequent choice (support = 58%, confidence = 100%), whereas most cleaned their hands before handling water (support = 93%, confidence = 100%). However, a few of them never cleaned the container, which also used river water (support = 23%, confidence = 100%, count = 15). Furthermore, only 27% of the family cleaned hands after defecation, yet 72 % used soap while bathing. It was also noted that for bathing, 73% and for washing 49% choose river water source whereas 100% of users choose tap water for cooking. 27 % of doesn't have a toilet at home and 27% defecates in the river. These preferences of different water source choices can be due to following reasons (i) Users prefer good quality water for consumption (ii) To reduce the water bill (iii) To reduce the time and amount of water collection from the river or public tap, and (iv) frequent cases of interruptions. The users also never do any kind of home treatment, whatever the sources may be.
The 5 integer variables were analyzed by correlation analysis. Figure 5 shows the scattered plot of the variables using Pearson's correlation. If the number of family members is less, then the daily requirement is less with less time spent on water collection and the bill paid per month is also decreased. The water bill increases with family income. The high positive correlation shows increased water requirement per day and increased collection of time. Water bill month is highly correlated to daily water requirement but negatively related to time spent on water collection. The correlation coefficient values of five variables are presented in Figure 5.
By the simple factor analysis, the age of the water collector has been divided into three: (i) 11-22: 45%, (ii) 22-33:40% and (iii) 33-44: 14%. The average time spent was found to be 44 minutes per day, in which 90% of users collect water two times a day, and all the users collect water manually. The survey also estimates that average income is 2327 ETB of the selected family in which the factor analysis reveals that (i) 56% user's monthly income is less than 1000 ETB (ii) 18% between 1000-2000 (iii) 9% between 2000-4000 (iv) 3% between 4000-6000 (v) 1.5% between 6000-8000 (v) 14% between 8000-10000. This shows that 74% of the users are from low-income backgrounds, which can explain that most have chosen dual water sources and paying water bills is a financial burden since the average water bill payment is 62 ETB per month. Considering the socio-economic structure of the study area, income is bound to affect the water choices. The education data discloses that 20% of the study population is uneducated, 55% of education level is below 10th standard, 3% below 12th standard, and only 21% are graduates. Moreover, the majority of the low-income family also belong to education level is below 10th standard, the as per the health issues very common diseases are diarrhea (75%), influenza (98%) and other seasonal water-related diseases like malaria, cholera (20%).
VI. CONCLUSION
The study aims to establish a relationship between the choice of water sources and the factors influencing the choice. The application of unsupervised machine learning techniques reveals that water is a key need of life, and safe choices of water sources are very important. The market basket analysis was able to associate the user's everyday life with the need for water. All the study users don't have a direct water supply, thus depending on the public and river water supply and the coinciding relationship between income, education, water awareness, water security, and health. Ethiopia has the majority of the young populace, and in the study area, 47% were adults, and 53% were children. However, the adult population of the study mostly consisted of youngsters below 25. Furthermore, females are responsible for the water collection, spending 44 minutes per day on travelling and spending 63 ETB per month. Considering the socio-economic background, the water expenditure is quite high considering 56% of family monthly earning is less than 1000 ETB. It is not surprising that many have chosen dual water sources to cope with expenditure and the monthly interruption, which may vary from 2-14 days. 73% of people preferred bathing with river water, and 49% preferred washing. 27% present of the users defecate in the river, which creates serious sanitation and hygiene issues. This also reveals that river water gets polluted due to public use. The time spends on water collection also affects education and working hours. Education is the path for overall development, including the awareness of the right choices since none of the users was familiar with neither treated water nor knowledge of treatment techniques and their benefits. In addition to that, 23% of users don't use soap after defecation. This exposes the lack of awareness about WASH (water, sanitation and hygiene). This can be the reason for the health issues commonly reported by the respondents were diarrhea (75%), influenza (98%) and other seasonal water-related diseases like malaria, cholera (20%).
The study also concluded that there is no bore well or tube well in their area, which means that the government or local population does not exploit groundwater. The lack of water supply to each household shows a lack of water economic structures in the study area and government intervention [71]. It is safe to say that a reliable household water supply can be very helpful to increase the everyday quality of life of the villagers. However, the study could not get to that detailed information about the pollutant in the river, point sources of pollution, detailed study of the water-related diseases in the area and number of children missing school. In the future, in-depth study and analysis for a better understanding of the impact of water on human wellbeing should be done.
The study recommends the following recommendations (i) Piped water service should be extended to the households as it is the most reliable water source (ii) water-related water structure needed to be improved and expanded (iii) water reservoirs and tanks should be installed to overcome the regular interruption issue (iv) in case of contamination during water distribution, household water treatment and safe storage should be encouraged (v) water awareness should be more efficiently increase with the population to ensure water security and health. Furthermore, feature engineering and hyper-parameter tuning would be applying to minimize the noise as well as to uplift the estimation accuracy, as reported in several studies [20], [72]- [74].
FIRAOL FITUMA received the bachelor's degree from the Department of Civil Engineering, Institute of Technology, Ambo University, Ethiopia. He equipped with soft skilled while working several water management project to the local community in collaboration with international and national non-governmental organization. His hardworking and dedication sharpen the pencil of the management guidelines. His several recommendation has been peer-reviewed in the authority meetings and included some of them.
TRAN MINH TUNG received the master's degree in civil engineering from the Ho Chi Minh City University of Technology, Vietnam, and the Ph.D. degree in civil engineering from the University of Wollongong, Australia. He is currently a Lecturer and the Dean of the Faculty of Civil Engineering, Ton Duc Thang University, Vietnam. His majors include sustainability construction, structure engineering, environmental engineering, and climate. In addition, he has an excellent expertise in machine learning and advanced data analytics. He has published over 30 articles in international journals. He was awarded the Peter Schmidt Memorial Award for the best performance in postgraduate research, in 2013. In addition, he won the scientific projects at Ton Duc Thang University. VOLUME 9, 2021 | 7,341.6 | 2021-01-01T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
Design of a Beam Separator for FLASH Electron Therapy Facilities
A promising modality of radiation therapy is FLASH: a technique in which the full radiation dose is delivered in just 1/10th of a second. This modality has been proven successful in destroying cancerous cells while sparing healthy tissues. The aim is to develop an accelerator to treat large-volume and deep-seated tumors using high-energy electron beams in the FLASH modality. Specifically, we are designing a steady-state magnet that guides three distinct energy beams into three separate beamlines and ensures dose conformality within FLASH timescales. This paper presents a design method that incorporates beam optics and magnetic parameters into a numerical optimization process. The method is applied to the design of a magnetic spectrometer with a varying pole profile. The magnet performance is compared to pure dipole and combined dipole-quadrupole designs.
Design of a Beam Separator for FLASH Electron Therapy Facilities
Vera Korchevnyuk , Jérémie Bauche , Mikko Karppinen , Andrea Latina , Stephan Russenschuck , Mike Seidel , and Davide Tommasini Abstract-A promising modality of radiation therapy is FLASH: a technique in which the full radiation dose is delivered in just 1/10th of a second.This modality has been proven successful in destroying cancerous cells while sparing healthy tissues.The aim is to develop an accelerator to treat large-volume and deepseated tumors using high-energy electron beams in the FLASH modality.Specifically, we are designing a steady-state magnet that guides three distinct energy beams into three separate beamlines and ensures dose conformality within FLASH timescales.This paper presents a design method that incorporates beam optics and magnetic parameters into a numerical optimization process.The method is applied to the design of a magnetic spectrometer with a varying pole profile.The magnet performance is compared to pure dipole and combined dipole-quadrupole designs.
I. INTRODUCTION
I N THE context of radiation therapy, an emerging and promis- ing topic is the so-called FLASH therapy.It consists of delivering the entire radiation treatment in a few hundred milliseconds, whereas in conventional radiation therapy, the dose is delivered in minutes.This fast delivery of the dose has shown to successfully minimize damage to the surrounding healthy tissues, yet effectively destroying the tumor [1], [2].A project to build the first clinical facility for treating deep-seated and large-volume tumors with FLASH high-energy electron beams has been approved [3], [4], [5].A key element of this facility is the beam separator.The purpose of the beam separator is to guide pulsed beams coming from a linear accelerator into distinct beamlines.The beams follow different trajectories and meet again at the patient, from different directions, within FLASH timescales.Several conventional solutions for a beam separator were studied and proven insufficient for a compact facility.A varying pole profile along the longitudinal direction is the most suitable candidate geometry for the magnet.In the following sections, we present a design method based on particle tracking and expose the design case of the beam separator for the FLASH electron therapy facility.
II. THE DESIGN METHOD
The design of the beam separator, based on the tracking of particle beams, includes the following steps: r Defining the design objectives for the magnet.r Segmenting the magnet and retrieving the magnetic field multipoles that produce the desired solution for the output beams.
r Using these multipoles to describe the shape of the ideal iron pole profiles.
r Checking the consequences of truncating and revolving the iron pole.
r Simulating the magnet in 3D, extracting the field map, and evaluating the performance of the beam separator.
A. Setting Design Goals
We begin by establishing the design objectives for the magnet.These include separating the input beams along the magnet, adjusting their size, focus, or a combination of these.The goal of the present device is to maximize the separation between the three output beams while keeping their size and divergence as small as possible.Increasing the separation angle of the beam separator promotes compactness by reducing the requirement for extended drifts following the device, allowing for more immediate placement of additional optic elements.An additional goal of the design is to bend the middle-energy beam by 90 degrees.
B. Retrieving Optimal Magnet Parameters
The next step is determining the most suitable magnet parameters aligning with the desired objectives.For that, we treat the magnet as an ideal element that transforms the input beams into output beams.The element is divided into sectors and each of them is characterized by a set of magnetic multipoles.The size and strength of each of the magnetic multipoles are the design variables of the optimization problem.The design variables are where P is the number of sectors in the defined setup and Q is the number of magnetic multipoles used to describe the field map 1 .Fig. 1 shows the described setup.
The variables in X are used to generate a magnetic-field model through which the particle beams are tracked.These beams are defined by their transverse and longitudinal coordinates x, y, z, transverse and longitudinal momenta p x , p y , p z , mass m, and charge q.From these data, meaningful properties can be computed, such as horizontal and vertical size b x (s) and b y (s), horizontal and vertical divergence d x (s) and d y (s), energy spread of the beam δ, and separation angle between beams a.These properties are useful to evaluate the quality of the proposed solution and therefore steer the optimization.
We used the tracking code RF-Track [6] to simulate the beam trajectories and properties in the field model.This code propagates the particles by integrating the equations of motion using symplectic algorithms.It enables particle tracking in field maps, providing different interpolation methods, including linear, cubic, and divergence-free.As the code is fully integrated into programmable scientific codes such as Octave or Python, iterative optimization of the field directly based on the beam performance from tracking is possible.
To evaluate the quality of a solution, the weighted objective function (2) 1 The magnetic multipoles M n are related to the magnetic harmonics B n by is employed.In (2), q i represent the weights of the different objectives, a corresponds to the separation angle between outer beams at the exit of the separator magnet, and b x , b y , d x and d y correspond to the average values for the horizontal size, vertical size, horizontal divergence and vertical divergence of the output beams, respectively.As a starting point, the number of sectors and multipoles in the field model is P = 2 and Q = 2, however these numbers can be increased to improve the obtained solution.To guarantee the bending of the middle-energy beam by 90 degrees, B is kept constant along the reference trajectory, shown as a dashed line in Fig. 1.By making the radius of the central trajectory the same as the bending radius ρ and the sum of all angles α equal to 90 degrees, the beam is transformed as intended.
After running the optimization routine, the optimal magnet parameters, as defined in (1), are The simplex optimization algorithm is described in [7].
C. Computing the Ideal Pole Profile
Once the optimal multipoles per sector are obtained, we move on to designing the cross-section of each sector.
Expressing the magnetic scalar potential as the ideal pole profile corresponds to the equipotential lines.
To obtain the equipotential lines, it is enough to set φ m to a constant [8], [9].
In the case of a combined dipole-quadrupole magnet with multipole order n = {1, 2} and perfect up-down symmetry, there are only B 1 and B 2 .As described in [10], this results in the family of curves where K is a constant proportional to the magnetomotive force of the element (measured in ampere-turns).These curves are hyperbolas, with asymptotes at x = − M 1 M 2 and y = 0. Given that the present design only has two sectors, N = 2, the two equations for the pole profile -one per sector -are both in m.Fig. 2 shows the shape of the ideal pole profiles for the two sectors.
D. Truncating and Revolving the Pole Profile
The pole profile equation defined with the scalar potential method requires infinitely wide poles and assumes a straight magnet.As this is not the case for the required beam separator, Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.the consequences of truncating and revolving the resulting pole profile must be evaluated.
The size of the gap between the poles remains within defined limits, 2y min and 2y max .For a dipole-quadrupole combined function to remain within the defined aperture sizes, the pole has to be truncated at x a and x b related by To keep the central beam at x = 0 at the entrance of the separator, it follows that x a = −x b , and therefore To facilitate the manufacturing and measurement processes, the gap size is kept between 10 and 400 mm.Using (9), the truncation for each sector is Taking the smallest value of the two and allowing a small margin, the pole of the magnet extends from x = −225 mm to x = 225 mm.Using these geometrical parameters a 2D magnetic field simulation is carried out, using the coupling method of boundary and finite elements implemented in the CERN field computation program ROXIE [11].The ampere-turns, NI, depend on the B 1 and the size of the gap, y c , between the poles in the center of the magnet, i.e., at x = 0.
where μ 0 is the magnetic permeability.In the present design, NI = 8992.3 A. Fig. 3 shows the consequence of truncating the ideal pole profile: it shows B y along the x-axis at y = 0 for both the ideal and truncated pole profiles (both without and Fig. 3. Magnetic field along the x-axis of a truncated and an ideal pole profile.
The pole was truncated at x = ±225 mm.
TABLE I MULTIPOLES GENERATED WITHOUT AND WITH AXISYMMETRY
with axisymmetry).From x = −200 mm to x = 200 mm, the difference between the magnetic field generated by the ideal pole and the truncated pole is below 0.05 T.
The pole profile obtained from (4) describes the pole of a straight magnet.However, a sector-bend, as shown in Fig. 1, with a radius of curvature ρ = 0.7 m is required.The revolution will affect the magnetic length of the various beams.Moreover, the difference in the surface of the pole and return leg introduces unwanted higher-order harmonics.This is studied by computing an axisymmetric field model in ROXIE.This can easily be accomplished because the boundary-element method does not require the meshing of the air domain and the far-field boundary conditions.
For sector 2, the ideal pole profile is described by (7).Fig. 4 shows the B in the iron with and without axisymmetry and Table I gives the multipoles generated in both cases.Even at this relatively small bending radius, the effect of the revolution on the air-gap flux density remains below 2 % and for the higher-order harmonics below 5 %.
E. Building the 3D Magnetic Model
To understand the effects that the transitions between the sectors, the fringe fields, and the pole-face rotation have on the beams, a 3D magnetic model was built, using Dassault-OPERA3D [12].Figure 5 shows the resulting 3D model of the beam separator (CAD and FE model).The transition area between the two sectors is morphed and the exit of the magnet is trimmed to bring the pole face rotation to zero.
F. Evaluating the Performance of the Design
Finally, the field map generated by the 3D magnetic model is exported and used as input in the particle tracking software.The input beam is a homogeneous beam of 1000 particles, with horizontal radius of 1 mm, vertical radius of 1.5 mm, horizontal divergence of 1.5 mrad, vertical divergence of 1 mrad, and energy spread of 0.5 %.The result of the tracking is shown in Fig. 6 and the output beam properties are given in Table II.
From the exit angles, the separation angle is a = 52.5 degrees.For comparison, the separation angle provided by the pure dipole solution is a ≈ 20 degrees.For the 1-sector combined function dipole-quadrupole solution, one can achieve a = 57.38 degrees, however, the horizontal size and divergence of the output beams is approximately 40 % higher than for the new 2-sector solution.
III. DISCUSSION
The current design fulfills the project's requirements and yields an improved solution compared to the pure-dipole or 1-sector dipole-quadrupole solutions.Choosing a 2-sector design as opposed to more was a compromise between the complexity of the design and the effect on the beam.There is also a compromise in terms of separation angle and the size and divergence of the output beams: increasing one and decreasing the other are two counteracting effects.For the manufacturing of the device, we do not expect major challenges.The yoke can be machined from a block of solid iron, and the copper coils can be produced by conventional winding.The design of the field transducer for the magnetic measurement of this curved magnet shall be investigated.Some options are: probing the aperture Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.
with a Hall sensor to reconstruct the field map and using a curved coil magnetometer for the integrated bending strength.Ideally, beam-based measurements would be performed.
IV. CONCLUSION
The proposed method for designing the beam separator offers a compact solution well-suited for implementation in a FLASH medical facility.Other projects with demanding beam requirements can benefit from this beam-based magnet design.The device is designed for steady-state operation, enabling high repetition rate beams.Its complexity resides in its design rather than its operation.Compared to pure dipole and 1-sector combined dipole-quadrupole solutions, the new 2-sector design offers a better balance between the separation angle and the horizontal beam size and divergence.
Fig. 1 .
Fig. 1.Illustration of the field map.The domain is divided into sectors of angle α and characterized by a set of magnetic multipoles.Outside the sectors, B = 0.The input beam enters the device along the z-axis.
Fig. 2 .
Fig. 2. Illustration of the iron pole profiles for different multipoles and K values.
Fig. 5 .
Fig. 5. 3D model of the beam separator with two combined-function sectors.
Fig. 6 .
Fig. 6.Particle tracking of three beams through the magnetic field generated by the 3D model of the 2-sector separator magnet.
TABLE II OUTPUT
BEAM PARAMETERS FOR THREE POSSIBLE SOLUTIONS FOR THE BEAM SEPARATOR: A PURE DIPOLE, A 1-SECTOR COMBINED FUNCTION (DIPOLE+QUADRUPOLE) AND A 2-SECTOR COMBINED FUNCTION (DIPOLE+QUADRUPOLE).P, α, B X , B Y , D X , Y ARE THE MOMENTUM, EXIT ANGLE, HALF HORIZONTAL BEAM SIZE, HALF VERTICAL BEAM SIZE, HALF HORIZONTAL BEAM DIVERGENCE AND HALF VERTICAL BEAM DIVERGENCE.THESE PARAMETERS WERE GENERATED BY TRACKING THE INPUT BEAM THROUGH MAGNETIC FIELD MAPS | 3,434.2 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Medicine",
"Physics"
] |
Chlorogenic Acid as a Positive Regulator in LPS-PG-Induced Inflammation via TLR4/MyD88-Mediated NF-κB and PI3K/MAPK Signaling Cascades in Human Gingival Fibroblasts
Gingival inflammation is one of the main causes that can be related to various periodontal diseases. Human gingival fibroblast (HGF) is the major constituent in periodontal connective tissue and secretes various inflammatory mediators, such as nitric oxide (NO) and prostaglandin E2 (PGE2), upon lipopolysaccharide stimulation. This study is aimed at investigating the anti-inflammatory mechanism of chlorogenic acid (CGA) on Porphyromonas gingivalis LPS- (LPS-PG-) stimulated HGF-1 cells. The concentration of NO and PGE2, as well as their responsible enzymes, inducible NO synthase (iNOS), and cyclooxygenase-2 (COX-2), was analyzed by Griess reaction, ELISA, and western blot analysis. LPS-PG sharply elevated the production and protein expression of inflammatory mediators, which were significantly attenuated by CGA treatment in a dose-dependent manner. CGA treatment also suppressed activation of Toll-like receptor 4 (TLR4)/myeloid differentiation primary response gene 88 (MyD88) and nuclear factor- (NF-) κB in LPS-PG-stimulated HGF-1 cells. Furthermore, LPS-PG-induced phosphorylation of extracellular regulated kinase (ERK) and Akt was abolished by CGA treatment, while c-Jun N-terminal kinase (JNK) and p38 did not have any effect. Consequently, these results suggest that CGA ameliorates LPS-PG-induced inflammatory responses by attenuating TLR4/MyD88-mediated NF-κB, phosphoinositide-3-kinase (PI3K)/Akt, and MAPK signaling pathways in HGF-1 cells.
Introduction
Inflammation refers to the physiological response against numerous types of damage, such as heat, chemical injury, and infection by microorganisms [1]. Among numerous lines of inflammatory disorders, periodontal disease is one of the major public health problems in the world which is characterized by chronic inflammation of periodontium [2]. The periodontal disease indicates a set of infectious inflammation issued by periodontopathic bacteria that is possible to devastate the tooth-supporting tissues, which can be classified as a gingival disease and periodontitis [3,4]. Gingival disease means an inflammatory condition in the gingival tissues caused by the accumulation of dental plaque. Periodontitis is a severe gum disease accompanied by plaque-induced inflammation which can give damage the periodontal ligament and alveolar bone [1]. Among various pathogens that contribute to the progress of periodontitis, Porphyromonas gingivalis, an anaerobic Gram-negative rod-shaped bacterium, is considered as one of the main causes in the progression of periodontal inflammation [5]. This oral bacterium attacks the host's immune system in a variety of bioactive materials, including cytoplasmic membranes, peptidoglycans, lipopolysaccharides (LPS), and fimbriae [6]. LPS from P. gingivalis (LPS-PG) has been regarded as a critical pathogenic component during the onset and development of periodontal disease, in which bacterial LPS can play a critical accelerator for the production of inflammatory cytokines and bone resorption [6,7].
Human gingival fibroblast (HGF) is one of the main cell types located in periodontal tissue and can overproduce various inflammatory mediators, such as nitric oxide (NO), prostaglandin E 2 (PGE 2 ), and interleukins (ILs) when Tolllike receptor 4 (TLR4) is stimulated upon LPS-PG exposure [8]. The TLR4 is activated by LPS exposure and transduces its signal to myeloid differentiation primary response gene 88 (MyD88) to downstream signaling molecules for inflammation in HGF [9]. Thus, elevated inflammatory responses by LPS-PG can promote the severity of periodontal disease, and downregulation of LPS-initiated TLR4/MyD88-mediated inflammatory mediators could be a promising strategy for periodontitis [10].
Chlorogenic acid (CGA) is a well-known phenolic acid compound that is abundantly found in burdock, artichoke, eucommia, coffee beans, and tea [11]. It has been reported that CGA exerts various pharmacological activities such as anti-inflammatory, antioxidative, antibacterial, hepatoprotective, neuroprotective, and lipid modulatory effects [12]. In the field of dental pharmacology, CGA exhibited antimicrobial activity through the inhibited proliferation and protease activity of P. gingivalis [13]. Furthermore, CGA inhibited osteoclastic bone resorption by a downregulated receptor activator of nuclear factor-(NF-) κB ligand-(RANKL-) induced osteoclast differentiation and LPSinduced bone loss [14]. Aqueous extract from the leaves of Rhododendron ferrugineum, containing 1.6% CGA, attenuated both the production of inflammatory cytokines induced by P. gingivalis in epithelial buccal KB cells and adhesion to KB cells [15]. Despite there being many trials to analyze the role of CGA in periodontitis, the exact anti-inflammatory mechanisms in HGF have not been understood yet. Therefore, the present study is aimed at investigating the antiinflammatory mechanisms of CGA in LPS-PG-stimulated HGF-1 cells. 2.4. Griess Reaction for NOS Activity Determination. HGF-1 cells were seeded in a 6-well plate (2 × 10 5 cells/well) and preincubated with various concentrations of CGA for 2 h. Then, 1 μg/mL of LPS-PG was added and incubated for 12 h, the optimal time for the induction of inflammation in HGF-1 cells (Supplementary Figure 1), for NOS induction. For NOS activity measurement in cell lysates, HGF-1 cells were lysed by three times of freeze-thaw cycle in 0.1 mL of 40 mM Tris buffer (pH 8.0) containing 5 μg/mL of pepstatin A, 1 μg/mL of chymostatin, 5 μg/mL of aprotinin, and 100 μM phenylmethylsulfonyl fluoride. The protein concentration was determined by the Bradford assay. NOS enzyme activity was measured as previously described [16]. Briefly, 20 μg protein was incubated in 20 mM Tris-HCl (pH 7.9) containing 4 μM FAD, 4 μM tetrahydrobiopterin, 3 mM DTT, and 2 mM each of L-arginine and NADPH. The reaction was performed in triplicate for 3 h at 37°C on a 96-well plate. Residual NADPH was oxidized enzymatically, and the Griess reaction was performed.
Materials and Methods
2.5. PGE 2 Concentration. The concentration of PGE 2 in the supernatant was determined using an ELISA kit (Cayman Chemical, Ann Arbor, MI, USA) followed by the manufacturer's instructions.
2.6. Western Blot Analysis. Cells (2 × 10 6 cells/dish) in 100 mm plates were preincubated with and without indicated concentrations of each sample for 2 h and then incubated with LPS-PG (1 μg/mL) for 18 h. Cells were washed twice with PBS and scraped into 0.4 mL of protein extraction solution (M-PER, Thermo Fisher Scientific, Waltham, MA, USA) for 10 min at room temperature. The lysis buffer containing the disrupted cells was centrifuged at 13,000 × g for 10 min. Protein samples (25 μg) from each lysate were separated on a 10% SDS polyacrylamide gel and electrotransferred to a PVDF membrane (Bio-Rad Laboratories). Primary antibodies were then incubated at 4°C overnight with a 1 : 1000 dilution after the membrane blocking for 1 h at room temperature with 5% nonfat dry milk in TBST solution. Then, the membrane was incubated with a 1 : 1000 dilution of HRP-conjugated anti-rabbit IgG (Cell Signaling Technology, Danvers, MA, USA) for 2 h at room temperature. The blots were developed with ECL developing solution (Santa Cruz Biotechnology, Santa Cruz, CA, USA), and data were quantified using the Gel Doc EQ System (Bio-Rad Laboratories). The primary antibodies were as follows: antiinducible NO synthase (iNOS, 1 : 1000), anticyclooxygenase-2 (COX-2, 1 : 1000), anti-phospho-p65 3 Mediators of Inflammation anti-inflammatory activity of CGA in LPS-PG-stimulated HGF-1 cells, the Griess reaction and ELISA were applied to determine the concentration of NO and PGE 2 in the supernatant. As shown in Figures 1(a) and 1(b), LPS-PG treatment potently induced acute inflammation, reflected by exaggerated NO and PGE 2 production, was dose-dependently attenuated by CGA treatment without any cytotoxicity (Figure 1(d)) in HGF-1 cells. In addition, western blot analysis was applied to evaluate the protein expression levels of iNOS and COX-2, which was also significantly inhibited by CGA treatment in a dose-dependent manner (Figure 1(c)).
3.2.
Effect of CGA on the Expression of TLR4/MyD88 and NF-κB in LPS-PG-Induced HGF-1 Cells. TLR4 initially can recognize LPS and transduce the signal to MyD88 that is possible to activate NF-κB signaling pathway [9]. NF-κB plays a critical role in the production of inflammatory mediators in gingival fibroblasts [17]. To analyze the antiinflammatory mechanism of CGA in LPS-PG-stimulated HGF-1 cells, TLR4/MyD88-mediated NF-κB signaling pathway was estimated by western blot analysis. As shown in Figure 2, LPS-PG treatment significantly increased the expression of TLR4/MyD88 and p65 phosphorylation, which was dose-dependently attenuated by CGA treatment in HGF-1 cells.
3.3.
Effect of CGA on LPS-PG Stimulated the Activation of PI3K/Akt and MAPK in HGF-1 Cells. Western blot analysis was used to analyze the phosphorylated status of phosphoinositide 3-kinase (PI3K)/Akt as upstream signaling molecules for NF-κB and mitogen-activated protein kinase (MAPK) downstream of MyD88, which can regulate inflammatory responses in HGF-1 cells. This study tried to examine the inhibitory effect of CGA related to the PI3K/Akt, and MAPK signaling cascades were analyzed in LPS-PGstimulated HGF-1 cells. As shown in Figure 3, CGA significantly inhibited phosphorylation of Akt and ERK in a dosedependent manner, while other signaling molecules were not influenced by CGA treatment. Furthermore, the selective inhibitor of each signaling molecule was applied to analyze the role of NF-κB, PI3K, and ERK molecules in inflammatory cascades stimulated by LPS-PG. Indicated concentrations of MG-132, LY294002, and U0126 and selective inhibitors of NF-κB, PI3K, and ERK, respectively, significantly inhibited iNOS and COX-2 expression in LPS-PGstimulated HGF-1 cells [18][19][20] (Figure 4). These results suggest that CGA significantly inhibited LPS-PG-induced inflammatory response through the regulation of TLR4/ MyD88-mediated PI3K/Akt/NF-κB activation and ERK phosphorylation in LPS-PG-stimulated HGF-1 cells.
Discussion
The inflammatory response in periodontal tissue is a complex defense mechanism that can be triggered by periodontopathic bacteria such as Aggregatibacter actinomycetemcomitans,
Mediators of Inflammation
Prevotella intermedia, and P. gingivalis [1]. Among them, P. gingivalis, a Gram-negative anaerobe, is one of the main pathogens that colonize dental plaque in the human oral cavity and acts as a major cause of chronic periodontitis [21]. Prolonged periodontitis can destroy the alveolar bone and its supporting tissues that lead to gum retrogression, bone weakness, and eventual tooth loss in adults [22]. The pathogenic properties of P. gingivalis are initiated from the various virulence factors such as lipopolysaccharide, fimbria, and gingipain [21]. LPS is a component of the outer membrane of Gram-negative bacteria and can stimulate HGF in periodontal tissue. Among various types of cells in the periodontium, HGF is a major cell type consisting of human gingival connective tissue and plays an important role in the development of periodontal inflammation through the exaggerated expression of iNOS and COX-2, the enzymes responsible for NO and PGE 2 , in response to exposure to LPS [2,23,24]. NO is produced by the deamination of L-arginine by NOS and consisted of 3 distinct isoforms including neuronal NOS (nNOS or NOS1), inducible NOS (iNOS or NOS2), and endothelial NOS (eNOS or NOS3). Among them, iNOS is produced by various inflammatory stimuli, such as bacterial LPS exposure, TNF-α, IL-6, and IL-8 release, while eNOS and nNOS maintain normal physiological reactions [25]. An appropriate amount of NO in periodontal tissue may play a role for the nonspecific natural defense mechanisms in the oral cavity but excessively generated NO could destroy local tissue in periodontitis lesions and exacerbate pathogenesis of the periodontal inflammatory disease [26]. Cyclooxygenase catalyzes the conversion of arachidonic acid to prostaglandins and is composed of two distinct enzymes, COX-1 and COX-2. COX-1 plays a role in maintaining cellular homeostasis while COX-2 is potently induced by inflammatory and other physiological stimuli [27]. Especially, exaggerated PGE 2 production and its responsible enzyme, COX-2, overexpression in periodontal tissue
Mediators of Inflammation
were recognized as critical hallmarks of exacerbated periodontal inflammation [28,29]. This study employed LPS-PG to induce inflammatory responses in HGF-1 cells, which was reflected by the accelerated NO and PGE 2 productions as well as the increased expression of their corresponding enzymes, iNOS, and COX-2. By the way, CGA treatment dosedependently attenuated exaggerated production and protein expression of both inflammatory mediators in LPS-PGstimulated HGF-1 cells as shown in Figure 1. This means that CGA has the activity to attenuate LPS-PG-induced inflammatory mediators, which have the potential to progress periodontitis, in HGF-1 cells.
The immune defense system against pathogens is initiated from their perception by highly conserved PRRs, including TLRs [30]. TLRs are a growing family that activates innate immunity and inflammatory responses upon the interaction with numerous pathogen-associated molecular patterns including bacterial LPS, viral RNA, and flagellin [31]. HGF expresses TLR2, 4, and 5 for a critical role in immune response, principally faces and interacts with pathogenic invasion at an early stage of periodontitis [27,32]. As a ligand for TLR4, LPS can bind to the extracellular domain of TLR4 and form intracellular adaptor molecules, including the adaptor protein containing MyD88 and Toll-interleukin 1 receptor domain (TIRAP) (31,33,34). Accelerated production of MyD88 can lead to the activation of NF-κB, PI3K/Akt, and MAPKs and the production of inflammatory mediators [33][34][35]. NF-κB, the inflammatory transcription factor, is involved in the regulation of inflammation, cell proliferation, the immune system, and differentiation. This transcription factor exists ubiquitously in the cytoplasm in an inactive form and can be induced by bacterial infection, inflammatory cytokines, UV irradiation, and oxidative stress [36]. NF-κB consists of p65 and p50 subunits that are anchored by the inhibitor protein, IκBα [37]. In response to stimuli, this transcription factor can be converted into the activated form through the phosphorylation of the NF-κB subunit, p65. The activated form of p65, phospho-p65, can translocate to the nucleus and bind to the promoter region for transcription of various inflammation-related genes [37,38]. As the upstream signaling molecule of NF-κB, PI3K/Akt and MAPK signaling pathways are critical regulators of the production of LPS-induced inflammatory mediators that can play a role in the progression of periodontitis [17]. This study attempted to investigate the antiinflammatory mechanisms of CGA in human periodontitis. The activated status of TLR4/MyD88, PI3K/Akt, and MAPKs was analyzed to make clear the regulation of upstream signaling molecules related to NF-κB modulation in LPS-PG-stimulated HGF-1 cells. CGA treatment attenuated LPS-PG-induced TLR4/MyD88 expression in a dosedependent manner, which means that the antiinflammatory effect of CGA in HGF-1 cells is associated with the TLR4/MyD88 signaling pathway (Figure 2(a)). Phosphorylated p65, a subunit of NF-κB, was also attenuated by CGA treatment, which was in accordance with the result of TLR4/MyD88 expression (Figure 2(b)). The phosphorylated status of Akt, ERK, JNK, and p38 was estimated by western blot analysis and shown in Figure 3. Treatment with CGA inhibited ERK phosphorylation but did not give any effect on PI3K/Akt, JNK, and p38 activations. Furthermore, specific inhibitors against NF-κB, PI3K, and ERK were applied to confirm the inhibitory mechanism of CGA in LPS-PG-induced inflammatory responses in HGF-1 cells (Figure 4). These results indicate that CGA significantly ameliorates LPS-PG-stimulated inflammatory mediators through the regulation of TLR4/MyD88-mediated PI3K/ Akt/NF-κB and MAPK signaling pathways in HGF-1 cells. In a further study, the investigation of the exact antiinflammatory mechanisms of CGA has to be evaluated in periodontitis animal models.
Data Availability
Data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors confirm that they have no conflict of interest. | 3,393.6 | 2022-04-09T00:00:00.000 | [
"Medicine",
"Biology"
] |
Spectral properties of renormalization for area-preserving maps
Area-preserving maps have been observed to undergo a universal period-doubling cascade, analogous to the famous Feigenbaum-Coullet-Tresser period doubling cascade in one-dimensional dynamics. A renormalization approach has been used by Eckmann, Koch and Wittwer in a computer-assisted proof of existence of a conservative renormalization fixed point. Furthermore, it has been shown by Gaidashev, Johnson and Martens that infinitely renormalizable maps in a neighborhood of this fixed point admit invariant Cantor sets with vanishing Lyapunov exponents on which dynamics for any two maps is smoothly conjugate. This rigidity is a consequence of an interplay between the decay of geometry and the convergence rate of renormalization towards the fixed point. In this paper we prove a result which is crucial for a demonstration of rigidity: that an upper bound on this convergence rate of renormalizations of infinitely renormalizable maps is sufficiently small.
Introduction
Following the pioneering discovery of the Feigenbaum-Coullet-Tresser period doubling universality in unimodal maps (Feigenbaum 1978), (Feigenbaum 1979), (Tresser and Coullet 1978), universality -independence of the quantifiers of the geometry of orbits and bifurcation cascades in families of maps of the choice of a particular family -has been demonstrated to be a rather generic phenomenon in dynamics.
Universality problems are typically approached via renormalization. In a renormalization setting one introduces a renormalization operator on a functional space, Date: 2014-12-01. and demonstrates that this operator has a hyperbolic fixed point. This approach has been very successful in one-dimensional dynamics, and has led to explanation of universality in unimodal maps (Epstein 1989), (Lyubich 1999), (Martens 1999), critical circle maps (de Faria 1992, de Faria 1999, Yampolsky 2002, Yampolsky 2003 and holomorphic maps with a Siegel disk (McMullen 1998, Yampolsky 2007, Gaidashev and Yampolsky 2007. There is, however, at present no complete understanding of universality in conservative systems, other than in the case of the universality for systems "near integrability" (Abad et al 2000, Abad et al 1998, Koch 2002, Koch 2004, Koch 2008, Gaidashev 2005, Kocić 2005, Khanin et al 2007. Period-doubling renormalization for two-dimensional maps has been extensively studied in (Collet et al 1980, de Carvalho et al 2005. Specifically, the authors of ( de Carvalho et al 2005) have considered strongly dissipative Hénon-like maps of the form where f (x) is a unimodal map (subject to some regularity conditions), and is small. Whenever the one-dimensional map f is renormalizable, one can define a renormalization of F , following (de Carvalho et al 2005), as where U is an appropriate neighborhood of the critical value v = (f (0), 0), and H is an explicit non-linear change of coordinates. (de Carvalho et al 2005) demonstrates that the degenerate map F * (x, y) = (f * (x), x), where f * is the Feigenbaum-Collet-Tresser fixed point of one-dimensional renormalization, is a hyperbolic fixed point of R dCLM . Furthermore, according to (de Carvalho et al 2005), for any infinitelyrenormalizable map of the form (1), there exists a hierarchical family of "pieces" {B n σ }, organized by inclusion in a dyadic tree, such that the set C F = n σ B n σ is an attracting Cantor set on which F acts as an adding machine. Compared to the Feigenbaum-Collet-Tresser one-dimensional renormalization, the new striking feature of the two dimensional renormalization for highly dissipative maps (1), is that the restriction of the dynamics to this Cantor set is not rigid. Indeed, if the average Jacobians of F and G are different, for example, b F < b G , then the conjugacy F | C F ≈ h G| C G is not smooth, rather it is at best a Hölder continuous function with a definite upper bound on the Hölder exponent: The theory has been also generalized to other combinatorial types in (Hazard 2011), and also to three dimensional dissipative Hénon-like maps in (Nam 2011).
Finally, the authors of (de Carvalho et al 2005) show that the geometry of these Cantor sets is rather particular: the Cantor sets have universal bounded geometry in "most" places, however there are places in the Cantor set were the geometry is unbounded. Rigidity and universality as we know from one-dimensional dynamics has a probabilistic nature for strongly dissipative Hénon like maps. See for a discussion of probabilistic universality and probabilistic rigidity.
It turns out that the period-doubling renormalization for area-preserving maps is very different from the dissipative case.
A universal period-doubling cascade in families of area-preserving maps was observed by several authors in the early 80's (Derrida and Pomeau 1980, Helleman 1980, Benettin et al 1980, Bountis 1981, Collet et al 1981, Eckmann et al 1982. The existence of a hyperbolic fixed point for the period-doubling renormalization operator is an F -dependent linear change of coordinates, has been proved with computer-assistance in (Eckmann et al 1984).
We have proved in (Gaidashev and Johnson 2009b) that infinitely renormalizable maps in a neighborhood of the fixed point of (Eckmann et al 1984) admit a "stable" Cantor set, that is the set on which the Lyapunov exponents are zero. We have also shown in the same publication that the conjugacy of stable dynamics is at least bi-Lipschitz on a submanifold of locally infinitely renormalizable maps of a finite codimension. Furthermore, (Gaidashev et al 2013) improves this conclusion in the following way.
Rigidity for Area-preserving Maps. The period doubling Cantor sets of areapreserving maps in the universality class of the Eckmann-Koch-Wittwer renormalization fixed point are smoothly conjugate.
A crucial ingredient of the proof in (Gaidashev et al 2013) is a new tight bound on the spectral radius of the renormalization operator. The goal of the present paper is to prove this new bound.
We demonstrate that the spectral radius of the action of DR EKW , evaluated at the Eckmann-Koch-Wittwer fixed point F EKW , restricted to the tangent space T F EKW W of the stable manifold W of the infinitely renormalizable maps, is equal exactly to the absolute value of the " horizontal" scaling parameter Furthermore, we show that the single eigenvalue λ F EKW in the spectrum of DR EKW [F EKW ] corresponds to an eigenvector, generated by a very specific coordinate change. To eliminate this irrelevant eigenvalue from the renormalization spectrum, we introduce an F -dependent nonlinear coordinate change S F into the period-doubling renormalization scheme the spectral radius of the restriction of the spectrum of DR c [F * ] to its stable subspace T F * W at the fixed point F * of R c , and obtain the following spectral bound, which is of crucial importance to our proof of rigidity.
Acknowledgment
This work was started during a visit by the authors to the Institut Mittag-Lefler (Djursholm, Sweden) as part of the research program on "Dynamics and PDEs". The hospitality of the institute is gratefully acknowledged. The second author was funded by a postdoctoral fellowship from the Institut Mittag-Lefler, he is currently funded by a postdoctoral fellowship from Vetenskapsrådet (the Swedish Research Council).
Renormalization for area-preserving reversible twist maps
An "area-preserving map" will mean an exact symplectic diffeomorphism of a subset of R 2 onto its image.
Recall, that an area-preserving map that satisfies the twist condition everywhere in its domain of definition can be uniquely specified by a generating function S: Furthermore, we will assume that F is reversible, that is For such maps it follows from (2) that It is this "little" s that will be referred to below as "the generating function". If the equation −s(y, x) = u has a unique differentiable solution y = y(x, u), then the derivative of such a map F is given by the following formula: The period-doubling phenomenon can be illustrated with the area-preserving Hénon family (cf. (Bountis 1981)) : Maps H a have a fixed point ((−1 + √ 1 + a)/a, (−1 + √ 1 + a)/a) which is stable (elliptic) for −1 < a < 3. When a 1 = 3 this fixed point becomes hyperbolic: the eigenvalues of the linearization of the map at the fixed point bifurcate through −1 and become real. At the same time a stable orbit of period two is "born" with H a (x ± , x ∓ ) = (x ∓ , x ± ), x ± = (1 ± √ a − 3)/a. This orbit, in turn, becomes hyperbolic at a 2 = 4, giving birth to a period 4 stable orbit. Generally, there exists a sequence of parameter values a k , at which the orbit of period 2 k−1 turns unstable, while at the same time a stable orbit of period 2 k is born. The parameter values a k accumulate on some a ∞ . The crucial observation is that the accumulation rate is universal for a large class of families, not necessarily Hénon. Furthermore, the 2 k periodic orbits scale asymptotically with two scaling parameters To explain how orbits scale with λ and µ we will follow (Bountis 1981). Consider an interval (a k , a k+1 ) of parameter values in a "typical" family F a . For any value α ∈ (a k , a k+1 ) the map F α possesses a stable periodic orbit of period 2 k . We fix some α k within the interval (a k , a k+1 ) in some consistent way; for instance, by requiring that DF 2 k α k at a point in the stable 2 k -periodic orbit is conjugate, via a diffeomorphism H k , to a rotation with some fixed rotation number r. Let p k be some unstable periodic point in the 2 k−1 -periodic orbit, and let p k be the further of the two stable 2 k -periodic points that bifurcated from p k . Denote with d k = |p k − p k |, the distance between p k and p k . The new elliptic point p k is surrounded by (infinitesimal) invariant ellipses; let c k be the distance between p k and p k in t he direction of the minor semi-axis of an invariant ellipse surrounding p k , see Figure 1. Then, where ρ k is the ratio of the smaller and larger eigenvalues of DH k (p k ). This universality can be explained rigorously if one shows that the renormalization operator has a fixed point, and the derivative of this operator is hyperbolic at this fixed point.
It has been argued in (Collet et al 1981) that Λ F is a diagonal linear transformation. Furthermore, such Λ F has been used in (Eckmann et al 1982) and (Eckmann et al 1984) in a computer assisted proof of existence of a reversible renormalization fixed point F EKW and hyperbolicity of the operator R EKW .
We will now derive an equation for the generating function of the renormalized .
If the solution of (10) is unique, then z(x, y) = z(y, x), and it follows from (9) that the generating function of the renormalized F is given by (11)s(x, y) = µ −1 s(z(x, y), λy).
As we have already mentioned, the following has been proved with the help of a computer in (Eckmann et al 1982) and (Eckmann et al 1984): Theorem 1. There exist a polynomial s 0.5 ∈ A 0.5 s (ρ) and a ball B (s 0.5 ) ⊂ A 0.5 s (ρ), = 6.0 × 10 −7 , ρ = 1.6, such that the operator R EKW is well-defined and analytic on B (s 0.5 ).
Furthermore, its derivative DR EKW | B (s0.5) is a compact linear operator, and has exactly two eigenvalues δ 1 = 8.721..., and δ 2 = 1 λ * of modulus larger than 1, while Finally, there is an s EKW ∈ B (s 0.5 ) such that The scalings λ * and µ * corresponding to the fixed point s EKW satisfy Remark 1.3. The bound (16) is not sharp. In fact, a bound on the largest eigenvalue of DR EKW (s EKW ), restricted to the tangent space of the stable manifold, is expected to be quite smaller.
The size of the neighborhood in A β s (ρ) where the operator R EKW is well-defined, analytic and compact has been improved in (Gaidashev 2010). Here, we will cite a somewhat different version of the result of (Gaidashev 2010) which suits the present discussion (in particular, in the Theorem below some parameter, like ρ in A β s (ρ), are different from those used in (Gaidashev 2010)). We would like to emphasize that all parameters and bounds used and reported in the Theorem below, and, indeed, throughout the paper, are numbers representable on the computer.
Theorem 2.
There exists a polynomial s 0 ∈ A(ρ), ρ = 1.75, such that the following holds. i) The operator R EKW is well-defined and analytic in B R (s 0 ) ⊂ A(ρ) with ii) For all s ∈ B R (s 0 ) with real Taylor coefficients, the scalings λ = λ[s] and µ = µ[s] satisfy Definition 1.4. The set of reversible twist maps F of the form (4) with s ∈ B (s) ⊂ A β s (ρ) will be referred to as F β,ρ (s): . We will also use the notation We will finish our introduction into period-doubling for area-preserving maps with a summary of properties of the fixed point map. In (Gaidashev and Johnson 2009a) we have described the domain of analyticity of maps in some neighborhood of the fixed point. Additional properties of the domain are studied in (Johnson 2011). Before we state the results of (Gaidashev and Johnson 2009a), we will fix a notation for spaces of functions analytic on a subset of C 2 .
Definition 1.5. Denote O 2 (D) the Banach space of maps F : D → C 2 , analytic on an open simply connected set D ⊂ C 2 , continuous on ∂D, equipped with a finite max supremum norm · D : The Banach space of functions y : A → C, analytic on an open simply connected set A ⊂ C 2 , continuous on ∂A, equipped with a finite supremum norm · A will be denoted O 1 (A): If D is a bidisk D ρ ⊂ C 2 for some ρ, then we use the notation The next Theorem describes the analyticity domains for maps in a neighborhood of the Eckmann-Koch-Wittwer fixed point map, and those for functions in a neighborhood of the Eckmann-Koch-Wittwer fixed point generating function. The Theorem has been proved in two different versions: one for the space A 0.5 s (1.6) (the functional space in the original paper (Eckmann et al 1984)), the other for the space A s (1.75) -the space in which we will obtain a bound on the renormalization spectral radius in the stable manifold in this paper. To state the Theorem in a compact form, we introduce the following notation: ρ 0.5 = 1.6, ρ 0 = 1.75, 0.5 = 6.0 × 10 −7 , 0 = 5.79833984375 × 10 −4 , while s 0.5 (as in Theorem 1) and s 0 will denoted the approximate renormalization fixed points in spaces A 0.5 s (1.6) and A s (1.75), respectively. Theorem 3. There exists a polynomial s β such that the following holds for all ii) There exist simply connected open setsD =D(β, β , ρ β ) ⊂ D, such thatD ∩ R 2 is a non-empty simply connected open set, and such that for every (x, u) ∈D and Remark 1.6. It is not too hard to see that the subsets F β,ρ β β (s β ), β = 0 or 0.5, are analytic Banach submanifolds of the spaces O 2 (D(β, β , ρ β ). Indeed, the map where y[s](x, u) is the solution of the equation (20), and h[s](x, u) = (x, y[s](x, u)), is analytic as a map from B β (s β ) to O 2 (D(β, β , ρ β ) according to Theorem 3, and has an analytic inverse where g[F ](x, y) = (x, U (x, y)), and U is as in Theorem 3.
We are now ready to give a definition of the Eckmann-Koch-Wittwer renormalization operator for maps of the subset of a plane. Notice, that the condition P EKW [s](λ, 0) = 0 from Definition 1.1 is equivalent to F (F (λ, −s(z(λ, 0), λ))) = (0, 0), or, using the reversibility λ = π x F (F (0, 0)). On the other hand, Definition 1.7. We will refer to the composition F • F as the prerenormalization of F , whenever this composition is defined: Remark 1.8. Suppose that for some choice of β, β and ρ β , the operator R EKW and the map I, described in Remark 1.6, are well-defined on some B β (s β ) ⊂ A β s (ρ β ). Also, suppose that the inverse of I exists on I(B β (s β )). Then,
Statement of main results
Consider the coordinate transformation for t ∈ C, |t| < 4/(ρ + |β|) (recall Definition 1.2). We will now introduce two renormalization operators, one -on the generating functions, and one -on the maps, which incorporates the coordinate change S t as an additional coordinate transformation.
with G is as in (14), and where λ and µ solve the following equations:
SPECTRAL PROPERTIES OF RENORMALIZATION FOR AREA-PRESERVING MAPS 11
Definition 2.2. Given c ∈ R, set, formally, We are now ready to state our main theorem. Below, and through the paper, s (i,j) stands for the (i, j)-th component of a Taylor series expansion of an analytic function of two variables.
Main Theorem. (Existence and Spectral properties) There exists a polynomial iii) The linear operator DR c0 [s * ] has two eigenvalues outside of the unit circle: iv) The complement of these two eigenvalues in the spectrum is compactly contained in the unit disk: The Main Theorem implies that there exist codimension 2 local stable manifolds W Rc 0 (s * ) ⊂ A s (1.75), such that the contraction rate in W Rc 0 (s * ) is bounded from above by ν: i) The set of reversible twist maps of the form (4) such that s ∈ W Rc 0 (s * ) ⊂ A s (1.75) will be denoted W , and referred to as infinitely renormalizable maps.
Naturally, these sets are invariant under renormalization if is sufficiently small.
Notice, that, among other things, this Theorem restates the result about existence of the Eckmann-Koch-Wittwer fixed point and renormalization hyperbolicity of Theorem 1 in a setting of a different functional space. We do not prove that the fixed point s * , after an small adjustment corresponding to the coordinate change S t , coincides with s EKW from Theorem 1, although the computer bounds on these two fixed points differ by a tiny amount on any bi-disk contained in the intersection of their domains.
The fact that the operator R c0 as in (26) contains an additional coordinate change does not cause a problem: conceptually, period-doubling renormalization of a map is its second iterate conjugated by a coordinate change, which does not have to be necessarily linear.
Coordinate changes and renormalization eigenvalues
Let D andD be as in the Theorem 3. Consider the action of the operator , with λ * and µ * being the fixed scaling parameters corresponding to the Collet-Eckmann-Koch as in Theorem 1.
According to Theorem 1 this operator is analytic and compact on the subset F 0.5,1.6 (s 0.5 ), = 6.0 × 10 −7 , of O 2 (D), and has a fixed point F EKW . In this paper, we will prove the existence of a fixed point s * of the operator R EKW in a Banach space different from that in Theorem 1. Therefore, we will state most of our results concerning the spectra of renormalization operators for general spaces A β s (ρ) and sets F β,ρ β β (s * ), under the hypotheses of existence of a fixed point s * , and analyticity and compactness of the operators in some neighborhood of the fixed point. Later, a specific choice of parameters β, ρ and will be made, and the hypotheses -verified.
Let S = id + σ be a coordinate transformation of the domain D of maps F , satisfying DS • F = DS. In particular, these transformations preserve the subset of area-preserving maps. Notice, that Suppose that the operator R * has a fixed point F * in some neighborhood B ⊂ O 2 (D), on which R * is analytic and compact. Consider the action DR * [F ]h F,σ of the derivative of this operator.
and clearly, h F * ,σ is an eigenvector, if τ = κσ, of eigenvalue κ. In particular, is an eigenvalue of multiplicity (at least) 2 with eigenvectors h F * ,σ generated by respectively. Next, suppose S σ t , S σ 0 = Id, is a transformation of coordinates generated by a function σ as in (29)-(30), associated with an eigenvalue κ of DR * [F * ]. In addition to the operator (27), consider where the parameter t σ [F ] is chosen as E(κ) being the Riesz spectral projection associated with κ: (γ -a Jordan contour that enclose only κ in the spectrum of DR * [F * ]). We will now compare the spectra of the operators R * and R σ . The result below should be interpreted as follows: if h F * ,σ is an eigenvector of DR * [F * ] generated by a coordinate change id + σ, and associated with some eigenvalue κ, then this eigenvalue is eliminated from the spectrum of DR σ [F * ], if its multiplicity is 1. Moreover, if the multiplicity of κ is 1, then Proof. Since DR σ [F * ] and DR * [F * ] are both compact operators acting on an infinite-dimensional space, their spectra contain {0}.
Suppose h is a eigenvector of DR * [F * ] corresponding to some eigenvalue δ, then (we have used the fact that F * satisfies the fixed point equation), where More specifically, Vice verse, suppose h is an eigenvector of DR σ [F * ] corresponding to an eigenvalue δ = κ, then, and by (33) and a similar computation as above, for a ∈ R, Lemma 3.2. Suppose that there are β, , ρ, λ * , µ * and a function s * ∈ A β s (ρ) such that the operator R EKW is analytic and compact as maps from F β,ρ (s * ) to O 2 (D), and where F * is generated by s * .
The 6-th line reduces to after we use the midpoint equation differentiated with respect to x: To summarize, Finally, we use the fact that The Lemma below, whose elementary proof we will omit, shows that λ * is also in the spectrum of DR * [F * ]: Lemma 4.2. Suppose that β, and ρ are such that s * ∈ A β s (ρ) is a fixed point of R * , and the operator R * is analytic and compact as a map from B (s * ) to A β s (ρ). Also, suppose that the map I, described in Remark 1.6, is well-defined and analytic on B (s * ), and that it has an analytic inverse I −1 on I(B (s * )). Then, At the same time, it is straightforward to see that the spectra of and DR EKW [s EKW ] are identical.
Lemma 4.3. Suppose that β, and ρ are such that s * ∈ A β s (ρ), and the operator R EKW is analytic and compact as a map from B (s * ) to A β s (ρ). Also, suppose that the map I, described in Remark 1.6, is well-defined and analytic on B (s * ), and that it has an analytic inverse I −1 on I(B (s * )). Then, The convergence rate in the stable manifold of the renormalization operator plays a crucial role in demonstrating rigidity. It turns out that the eigenvalue λ * is the largest eigenvalues in the stable subspace of DR EKW [F * ], or equivalently DR EKW [s * ]. However, it's value |λ * | ≈ 0.2488 is not small enough to ensure rigidity. At the same time, the eigenspace of the eigenvalue λ * is, in the terminology of the renormalization theory, irrelevant to dynamics (the associated eigenvector is generated by a coordinate transformation). We, therefore, would like to eliminate this eigenvalue via an appropriate coordinate change, as described above.
However, first we would like to identify the eigenvector corresponding to the eigenvalue λ * for the operator R EKW . This vector turns out to be different from ψ s * .
Lemma 4.4. Suppose that β, and ρ are such that the operator R EKW has a fixed point s * ∈ A β s (ρ), and R EKW is analytic and compact as a map from B (s * ) to A β s (ρ). Also, suppose that the map I, described in Remark 1.6, is well-defined and analytic on B (s * ), and that it has an analytic inverse I −1 on I(B (s * )).
Then, the number λ * is an eigenvalue of DR EKW [s * ], and the eigenspace of λ * contains the eigenvector Proof. Notice, thatψ is of the form where ψ x (x, y) = s * 1 (x, y)x + s * 2 (x, y)y is the eigenvector of DR * [s * ] corresponding to the rescaling of the variables x and y, while is the eigenvector corresponding to the rescaling of s. ψ x (x, y) and ψ u (x, y) correspond to the eigenvectors h F * ,σ 1 0,0 and h F * ,σ 2 0,0 , respectively, of DR 0 [F * ]. Recall, that h F * ,σ 1 0,0 and h F * ,σ 2 0,0 are eigenvectors of DR 0 [F * ], with eigenvalue 1, and eigenvectors of DR EKW [F * ] with eigenvalue 0.
By Lemma 4.1 ψ s * is an eigenvector of DR * , the corresponding eigenvector of DR * is h F * ,σ 1 1,0 −2σ 2 1,0 . Thus, ψ s * +ψ corresponds to the vector To finish the proof, it suffices to prove that The result follows if where 0 = s(x, Z(x, y)) + s(y, Z(x, y)), ψ EKW s * is as in (39), G as in (14), and E is the Riesz projection for the operator DR EKW [s * ].
We will quote a version of a lemma from (Gaidashev 2010) which we will require to demonstrate analyticity and compactness of the operator R. The proof of the Lemma is computer-assisted. Notice, the parameters that enter the Lemma are different from those used in (Gaidashev 2010). As before, the reported numbers are representable on a computer. and s 0 is as in Theorem 2, the prerenormalization P EKW [s] is well-defined and analytic function on the set D r ≡ D r (0) = {(x, y) ∈ C 2 : |x| < r, |y| < r}, r = 0.51853174082497335, with Z r ≤ 1.63160151494042404.
We will now demonstrate analyticity and compactness of the modified renormalization operator in a functional space, different from that used in (Eckmann et al 1984), specifically, in the space A s (1.75). It is in this space that we will eventually compute a bound on the spectral radius of the action of the modified renormalization operator on infinitely renormalizable maps.
Proposition 4.7. There exists a polynomial s 0 ⊂ B R (s 0 ) ⊂ A s (1.75), where R and s 0 are as in Lemma 4.6, such that the operator R is well-defined, analytic and compact as a map from B 0 (s 0 ), 0 = 5.79833984375 × 10 −4 , to A s (1.75), if B 0 (s 0 ) ⊂ B R (s 0 ) contains the fixed point s * .
Proof. The polynomial s 0 has been computed as a high order numerical approximation of a fixed point s * of R.
First, we get a bound on t for all s ∈ B δ (s 0 ): We estimate the right hand side rigorously on the computer and obtain (44) |t| ≤ 2.1095979213715 × 10 −6 .
The condition of the hypothesis that s * ∈ B δ (s 0 ) is specifically required to be able to compute this estimate.
Notice that according to Definition 4.5 and Theorem 2, the maps s → t and, hence, s → ξ t are analytic on a larger neighborhood B R (s 0 ) of analyticity of R EKW . According to Theorem 2 and Lemma 4.6, the prerenormalization P EKW is also analytic as a map from B R (s 0 ) to A s (r), r = 0.516235055482147608. We verify that for all s ∈ B δ (s 0 ) and t as in (44) the following holds: where λ − = −0.27569580078125 is the lower bound from Theorem 2. Furthermore, with t as in (44). Therefore, the map s → P[s] is analytic on B δ (s 0 ). Since the inclusion of sets (45) is compact, R[s] has an analytic extension to a neighborhood of D 1.75 , R[s] ∈ A s (ρ ), ρ > 1.75. Compactness of the map s → R[s] now follows from the fact that the inclusions of spaces A s (ρ ) ⊂ A s (ρ) is compact.
Recall, that according to Lemma 4.2, λ * is an eigenvalue of DR * [F * ] of multiplicity at least 1. According to Lemma 3.2, λ * is in the spectrum of DR EKW [F * ], and according to Lemma 4.3, λ * ∈ DR EKW [s * ]. Proof. First, notice the difference between the definition of λ in (1.1) s(G(λ, 0)) = 0, and in Definition (4.5) s(G(λ + tλ 2 , 0)) = 0 (we will use the notation λ EKW below to emphasize the difference). This implies that if D s λ EKW [s]ψ is an action of the derivative of λ EKW [s] on a vector ψ, then Similarly, where Similarly to Lemma (3.1), we get that if ψ is an eigenvector of DR EKW [s * ] associated with the eigenvalue δ = λ * , then ψ = ψ EKW s * , and Finally, assume that λ * / ∈ spec(DR[s * ]), but that there exists an eigenvector and, by (46), This contradiction finishes the proof. So far we were not able to make any claims about the multiplicity of the eigenvalue λ * in the spectrum of DR EKW [s * ]. However, we will demonstrate in Section 5 that it is indeed equal to 1.
Spectral properties of R. Proof of Main Theorem
We will now describe our computer-assisted proof of Main Theorem. To implement the operator DR[s * ] on the computer, we would have to implement the Riesz projection as well. Unfortunately, this is not easy, therefore, we do it only approximately, using the operator R c introduced in the Definition 2.1. Specifically, the component (0, 3) of the composition s • G will be consistently normalized to be where s 0 is our polynomial approximation for the fixed point.
The operator R c differs from R (cf.4.5) only in the "amount" by which the eigendirection ψ EKW s * is "eliminated". In particular, as the next proposition demonstrates, R c is still analytic and compact in the same neighborhood of s 0 . Furthermore, the operators R c are compact in B R (s 0 ) ⊂ A(ρ), with R c [s] ∈ A(ρ ), ρ = 1.0699996948242188ρ.
Proof. The proof is almost identical to that of Proposition 4.7, with a different (but still sufficiently small) bound on |t c [s]|.
The following Lemma shows that the spectra of the operators R and R c are close to each other. Proof. According to Propositions 4.7 and 5.1, under the hypothesis of the Lemma, R and R c * are analytic and compact as operators from B δ (s 0 ) to A s (1.75).
Recall, that ψ EKW s * is an eigenvector of DR EKW [s * ] corresponding to the eigenvalue λ * .
We consider the action of DR c * [s * ] on a vector ψ. Similarly to (46), and we see that the equation has a unique solution a if For such κ, the vector is an eigenvector of DR c * [s * ] associated with the eigenvalue κ.
The eigenvalues κ as in (48) satisfy |κ| > 0.00124359130859375 We will now describe a rigorous computer upper bound on the spectrum of the operator DR c [s * ].
Proof of part ii) of Main Theorem.
Step 1). Recall the Definition 1.2 of the Banach subspace A s (ρ) of A(ρ). We will now choose a new bases {ψ i,j } in A s (ρ). Given s ∈ A s (ρ) we write its Taylor expansion in the form s(x, y) = where ψ i,j ∈ A s (ρ): and the index set I of these basis vectors is defined as Denoteà s (ρ) the set of all sequences Equipped with the l 1 -norm A s (ρ) is a Banach space, which is isomorphic to A s (ρ). Clearly, the isomorphism J : A s (ρ) →à s (ρ) is an isometry: We divide the set I in three disjoint parts: with N = 22, M = 60. We will denote the cardinality of the first set as D(N ), the cardinality of I 1 ∪ I 2 as D(M ).
We assign a single index to vectors ψ i,j , (i, j) ∈ I 1 ∪ I 2 , as follows: This correspondence (i, j) → k is one-to-one, we will, therefore, also use the notation (i(k), j(k)).
For any s ∈ A s (ρ), we define the following projections on the subspaces of the where s 0 is some good numerical approximation of the fixed point. Denote for brevity L s c ≡ DR c [s]. We can now write a matrix representation of the finitedimensional linear operator Step 2). We compute the unit eigenvectors e k of the matrix D numerically, and form a D(N ) × D(N ) matrix A whose columns are the approximate eigenvectors e k . We would now like to find a rigorous bound B on the inverse B of A. Step 3 For any s ∈Â s (ρ), we define the following projections on the basis vectors.
We proceed to quantify this claim.
We will use the Contraction Mapping Principle in the following form. Define the following linear operator on A s (ρ) where Kh ≡δ 1 P 1 h +δ 2 P 2 h, andδ 1 andδ 2 are defined via P 1 L s0 c0 e 1 =δ 1 e 1 , P 2 L s0 c0 e 2 =δ 2 e 2 . Consider the operator We can now see that the hypothesis of the Contraction Mapping Principle is indeed verified: Step 5). Notice, that in general, ] is a small number which we have estimated to be (54) |t c0 [s * c0 ]| < 7.89560771750566329 × 10 −12 . Consider the map F * c0 generated by s * c0 . Recall that by Theorem 3, there exists a simply connected open set D such that F * c0 ∈ O 2 (D). The fixed point equation for the map F * c0 is as follows: | 8,414 | 2014-12-16T00:00:00.000 | [
"Mathematics",
"Physics"
] |
How to Design an ISA
Over the past decade I've been involved in several projects that have designed either ISA (instruction set architecture) extensions or clean-slate ISAs for various kinds of processors (you'll even find my name in the acknowledgments for the RISC-V spec, right back to the first public version). When I started, I had very little idea about what makes a good ISA, and, as far as I can tell, this isn't formally taught anywhere. With the rise of RISC-V as an open base for custom instruction sets, however, the barrier to entry has become much lower and the number of people trying to design some or all of an instruction set has grown immeasurably.
general-purpose ISA.An ISA needs to be efficient for compilers to translate a set of source languages into.It must also be efficient to implement in the kinds of microarchitecture that hardware will adopt.
Designing an ISA for all possible source languages is hard.For example, consider C, CUDA (Compute Unified Device Architecture), and Erlang.Each has a very different abstract machine.C has large amounts of mutable state and a bolted-on concurrency model that relies on shared everything, locking, and, typically, fairly small numbers of threads.Erlang has a shared-nothing concurrency model and scales to very large numbers of processes.CUDA has a complex sharing model that is tightly coupled to its parallelism model.
You can compile any of these languages to any Turingcomplete target (by definition), but that may not be efficient.If it were easy to compile C code to GPUs (and take advantage of the parallelism), then CUDA wouldn't need to exist.Any family of languages has a set of implicit assumptions that drive decisions about the most efficient targets.
Algol-family languages, including C, typically have good locality of reference (both spatial and temporal), but somewhat random access patterns.They have a single stack, and a large proportion of memory accesses will be to the current stack frame.They allocate memory in objects that are typically fairly small, and most are not shared between threads.Object-oriented languages typically do more indirect branches and more pointer chasing.Arrayprocessing languages and shading languages typically do a lot of memory accesses with predictable access patterns.If you don't articulate the properties of the source languages you're optimizing for, then you are almost certainly baking in some implicit assumptions that may or may not actually hold.
Similarly, looking down toward the microarchitecture, a good ISA for a small embedded microcontroller may be a terrible ISA for a large superscalar out-of-order processor or a massively parallel accelerator.There are good reasons why 32-bit Arm failed to compete with Intel for performance, and why x86 has failed to displace Arm in low-power markets.The things that you want to optimize for at different sizes are different.
Designing an ISA that scales to both very large and very small cores is hard.Arm's decision to separate its 32-and 64-bit ISAs meant that it could assume a baseline of register renaming and speculative execution in its 64-bit A profile and in-order execution in its 32-bit M profile, and tune both, assuming a subset of possible implementations.RISC-V aims to scale from tiny microcontrollers up to massive server processors.It's an open research question whether this is possible (certainly no prior architecture has succeeded).
BUSINESS IS NOT A SEPARABLE CONCERN
One kind of generality does matter: Is the ISA a stable contract?This is more a business question than a technical one.A stable ISA can enter a feedback cycle where people buy it because they have software that runs on it and people write software to run on it because they have it.Motorola benefited from this with its 68000 line for a long time, Intel with its x86 line for even longer.
This comes with a cost: In every future product you will This difference impacts the degree to which you can overfit your ISA to the microarchitecture.Both x86 and 32-bit Arm were heavily influenced by what was feasible to build at the time they were created.If you're designing a GPU or workload-specific accelerator, however, then the ISA may change radically between releases.Early AMD GPUs were VLIW (very long instruction word) architectures; modern ones are not but can still run shaders written for the older designs.
A stable ISA also impacts how experimental you can be.
If you add an instruction that might not be useful (or might be difficult to implement in future microarchitectures) to x86 or AArch64, then you will find that some popular bit of code uses it in some critical place and you will be stuck with it.If you do the same in a GPU or AI accelerator, then you can quietly remove it in the next generation.
ARCHITECTURE MATTERS
A belief that has gained some popularity in recent years is that the ISA doesn't matter.This belief is largely the result of an oversimplification of an observation that is obviously true: Microarchitecture makes more of a difference than architecture in performance.A simple in-order pipeline may execute around 0.7 instructions per cycle.A complex out-oforder pipeline may execute five or more per cycle (per core), giving almost an order of magnitude difference between two implementations of the same ISA.In contrast, in most of the projects that I've worked on, I've seen the difference between a mediocre ISA and a good one giving no more than a 20 percent performance difference on comparable microarchitectures.Two parts of this comparison are worth pointing out.The first is that designing a good ISA is a lot cheaper than designing a good microarchitecture.These days, if you go to a CPU vendor and say, "I have a new technique that will produce a 20 percent performance improvement," they will probably not believe you.That kind of overall speedup doesn't come from a single technique; it comes from applying a load of different bits of very careful design.Leaving that on the table is incredibly wasteful.
The second key point is contained in the caveat at the end: "… on comparable microarchitectures."The ISA constrains the design space of possible implementations.It's possible to add things to the architecture that either enable or prevent specific microarchitectural optimizations.For example, consider an arbitrary-length vector extension that operates with the source and destination operands in memory.If the user writes a + b * c (where all three operands are large vectors), then a pipelined implementation is going to want to load from all three locations, perform the add, perform the multiply, and then store the result.If you have to take an interrupt in the middle and you're only halfway down, what do you do?You might just say, "Well, add and multiply are idempotent, so we can restart and everything is fine," but that introduces additional constraints.In particular, the hardware must ensure that the destination does not alias any of the source values.If these values overlap, simply restarting is difficult.You can expose registers that report the progress through the add, but that prevents the pipelined operation because you can't report that you're partway through the add and the multiply.If you're building a GPU, then this is less important because typically, you aren't handling interrupts within kernels (and if you are, then waiting a few hundred cycles to flush all in-flight state is fine).
The same problem applies to microcode.You must be able to take an interrupt immediately before or after a microcoded instruction.A simple microcode engine pauses the pipeline, issues a set of instructions expanded from the microcoded instruction, and then resumes.On a simple pipeline, this is fine (aside from the impact on interrupt latency) and may give you better code density.On a more complex pipeline, this prevents speculative execution across microcode and will come with a big performance penalty.If you want microcode and good performance from a high-end core, you need to use much more complicated techniques for implementing the microcode engine.This then applies pressure back on the ISA: If you have invested a lot of silicon in the microcode engine, then it makes sense to add new microcoded instructions.
WHAT DO SMALL CORES WANT?If you're designing an ISA for a simple single-issue inorder core, you have a clear set of constraints.In-order cores don't worry much about data dependencies; each instruction runs with the results of the previous one available.Only the larger ones do register renaming, so using lots of temporaries is fine.
They typically do care about decoder complexity.The original RISC architectures had simple decoders because CISC (complex instruction set computer) decoders took a large fraction of total area.An in-order core may consist of a few tens of thousands of gates, whereas a complex decoder can easily double the size (and, therefore, cost and power consumption).Simple decoding is important on this scale.
Small code is also important.A small microcontroller core may be as small as 10KB of SRAM (static random access memory).A small decrease in encoding efficiency can dwarf everything when considering the total area cost: If you need 20 percent more SRAM for your code, then that can be equivalent to doubling the core area.Unfortunately, this constraint almost directly contradicts the previous one.This is why Thumb-2 and RISC-V focused on a variable length encoding that is simple to decode: They save code size without significantly increasing decoder complexity.This is a complex tradeoff that is made even more complicated when considering multiple languages.For example, Arm briefly supported Jazelle DBX (direct bytecode execution) on some of its mobile cores.This involved decoding Java bytecode directly, with Java VM (virtual machine) state mapped into specific registers.A Java add instruction, implemented in a software interpreter, requires at least one load to read the instruction, a conditional branch to find the right handler, and then another to perform the add.With Jazelle, the load happens via instruction fetch, and the add would add the two registers that represented the top of the Java stack.This was far more efficient than an interpreter but did not perform as well as a JIT (just-in-time) compiler, which could do a bit more analysis between Java bytecodes.
Jazelle DBX is an interesting case study because it made sense only in the context of a specific set of source languages and microarchitectures.It provided no benefits for languages that didn't run in a Java VM.By the time devices had more than about 4MB of RAM, Jazelle was outperformed by a JIT.Within that envelope, however, it was a good design choice.
Jazelle DBX should serve as a reminder that optimizations for one size of core can be incredibly bad choices for other cores.
WHAT DO BIG CORES WANT?
As cores get bigger, other factors start to dominate.We've seen the end of Dennard scaling but not of Moore's law.Each generation still gets more transistors for a fixed price, but if you try to power them all, then your chip catches fire (the so-called "dark silicon" problem).This is part of the reason that on-SoC (system-on-a-chip) accelerators have become popular in recent years.If you can add hardware that makes a particular workload faster but is powered off entirely at other times, then that can be a big win for power consumption.Components that need to be powered all of the time are the most likely to become performance-limiting factors.
On a lot of high-end cores, the register rename logic is often the single biggest consumer of power.Register rename is what enables speculative and out-of-order execution.Rename registers are similar to the SSA (static single assignment) form that compilers use.When an instruction is dispatched, a new rename register is allocated to hold the result.When another instruction wants to consume that result, it is dispatched to use this rename register.Architectural registers are just names for mapping to SSA registers.
A rename register consumes space from the point at which an instruction that defines it enters speculative execution until another instruction that writes to the same rename register exits speculation (i.e., definitely happens).If a temporary value is live at the end of a basic block, then it continues to consume a rename register.The branch at the end of the basic block will start speculatively issuing instructions somewhere else, but until that branch is no longer speculative and a following instruction has written to the register, the core may need to roll back everything up to that branch and restore that value.The ISA can have a big impact on the likelihood of encountering this kind of problem.
Complex addressing modes often end up being useful on big cores.AArch64 and x86-64 both benefit from them, and the T-Head extensions add them to RISC-V.If you're doing address calculation in a loop (for example, iterating over an array), then folding this into the load-store pipeline provides two key benefits: First, there is no need to allocate a rename register for the intermediate value; second, this computed value is never accidentally live across loop iterations.The power consumption of an extra add is less than that of allocating a new rename register.
Note that this is less the case for very complex addressing modes, such as the pre-and post-increment addressing modes on Arm, which update the base and thus still require a rename register.These modes still win to a degree because it's cheaper (particularly for preincrement) to forward the result to the next stage in a load-store pipeline than to send it via the rename logic.
One microarchitect building a high-end RISC-V core gave a particularly insightful critique of the RISC-V C extension, observing that it optimizes for the smallest encoding of instructions rather than for the smallest number of instructions.This is the right thing to do for small embedded cores, but large cores have a lot of fixed overheads associated with each executed instruction.Executing fewer instructions to do the same work is usually a win.This is why SIMD (single instruction, multiple data) instructions have been so popular: The fixed overheads are amortized over a larger amount of work.
Even if you don't make the ALUs (arithmetic logic units) the full width of the registers and take two cycles to push each half through the execution pipeline, you still save a lot of the bookkeeping overhead.SIMD instructions are a good use of longer encodings in a variable-length instruction set: For four instructions' worth of work, a 48-bit encoding is probably still a big savings in code size, leaving the denser encodings available for more frequent operations.Complex instruction scheduling causes additional pain.Even moderately large in-order cores suffer from branch misprediction penalties.The original Berkeley RISC project analyzed the output of C compilers and found that, on average, there was one branch per seven instructions.This has proven to be a surprisingly durable heuristic for C/C++ code.
With a seven-stage dual-issue pipeline, you might have 14 instructions in flight at a time.If you incorrectly predict a branch, half of these will be the wrong ones and will need to be rolled back, making your real throughput only half of your theoretical throughput.Modern high-end cores typically have around 200 in-flight instructions-that's over 28 basic blocks, so a 95 percent branch predictor accuracy rate gives less than a 24 percent probability of correctly predicting every branch being executed.Big cores really like anything that can reduce the cost of misprediction penalties.
The 32-bit Arm ISA allowed any instruction to be predicated (conditionally executed depending on the value in a condition-code register).This was great for small to medium in-order cores because they could avoid branches, but the complexity of making everything predicated was high for big cores.The encoding space consumed by predication was large.For AArch64, Arm considered eliminating predicated execution entirely, but conditional move and a few other conditional instructions provided such a large performance win that Arm kept them.
YOU DON'T WIN POINTS FOR PURITY Bjarne Stroustrup said, "There are only two kinds of languages: the ones people complain about and the ones nobody uses."This holds for instruction sets (the lowestlevel programming languages most people will encounter) just as much as for higher-level ones.Good instruction sets are always compromises.
For example, consider the jump-and-link instructions in RISC-V.These let you specify an arbitrary register as a link register.RISC-V has 32 registers, so specifying one requires a full five-bit operand in a 32-bit instruction.Almost one percent of the total 32-bit encoding space is consumed by the RISC-V jump-and-link instruction.RISC-V is, as far as I am aware, unique in this decision.
Arm, MIPS, and PowerPC all have a designated link register that their branch-and-link instructions use.Thus, they require one bit to differentiate between jump-and-link and plain jump.RISC-V chooses to avoid baking the ABI into the ISA but, as a result, requires 16 times as much encoding space for this instruction.
This decision is even worse because the ABI leaks into the microarchitecture but not the architecture.RISC-V does not have a dedicated return instruction, but implementations will typically (and the ISA specification notes that this is a good idea) treat a jump-register instruction with the ABI-defined link register as a return.This means that using any link register other than the one defined in the ABI will likely result in branch mispredictions.The result is dealing with all of the downsides of baking the ABI into the ISA, but enjoying none of the benefits.
This kind of reasoning applies even more strongly to the stack pointer.AArch64 and x86 both have special instructions for operating on the stack.In most code from C-like languages, the stack pointer is modified only in function prologs and epilogs, but there are many loads and stores relative to it.This has the potential for optimization in the encoding, which can lead to further optimization in the microarchitecture.For example, modern x86 chips accumulate the stack-pointer displacement for push and pop instructions, emitting them as offsets to the rename register that contains the stack pointer (so they're independent and can be issued in parallel) and then doing a single update to the stack pointer at the end.This kind of optimization is possible even if the stack pointer is just an ABI convention, but this again is a convention that's shared by the ABI and the microarchitecture, so why not take advantage of it to improve encoding efficiency in the ISA?
Finally, big cores really care about parallel decoding.Apple's M2, for example, benefits hugely from the fixedwidth ISA because it can fetch a block of instructions and start decoding them in parallel.The x86 instruction set, at the opposite extreme, needs more of a parser than a decoder.Each instruction is between one and 15 bytes, which may include a number of prefixes.High-end x86 chips cache decoded instructions (particularly in hot loops), but this consumes power and area that could be used for execution.This isn't necessarily a bad idea.As with small cores and instruction density, a variable-length instruction encoding may permit a smaller instruction cache, and that savings may offset the cost of the complex decoder.
Although RISC-V uses variable-length encoding, it's very cheap to determine the length.This makes it possible to build an extra pipeline stage that reads a block of words and forwards a set of instructions to the real decoder.This is nowhere near as complex as decoding x86.
SOME SOURCE LANGUAGES ARE NOT REALLY SOURCE LANGUAGES
A new ISA often takes a long time to gain widespread adoption.The simplest way of bootstrapping a software ecosystem is to be a good emulation target.Efficient emulation of x86 was an explicit design goal of both AArch64 and PowerPC for precisely this reason (although AArch64 had the advantage of a couple of decades more research in binary translation to draw on in its design).Apple's Rosetta 2 manages to translate most x86-64 instructions into one or two AArch64 ones.
A few of its features make AArch64 (and, especially, Apple's slight variation on it) amenable to fast and lightweight x86-64 emulation.The first is having more registers, which allows all x86-64 state to be stored in registers.Second, Apple has an opt-in TSO (total store ordering) model, which makes the memory model the same as x86.(RISC-V has this as an option as well, although I'm not aware of an extension that allows dynamically switching between the relaxed memory model and TSO, as Apple's hardware permits.)Without this mode, you either need variants of all of your loads and stores that can provide the relevant barriers or you need to insert explicit fences around all of them.The former consumes a huge amount of encoding space (loads and stores make up the largest single consumer of encoding space on AArch64) the latter, many more instructions.
After TSO, flags are the second-most annoying feature of x86 from the perspective of an emulator.Lots of x86 instructions set flags.Virtual PC for Mac (x86 on PowerPC) puts a lot of effort into dynamically avoiding setting flags if nothing has consumed them (e.g., if two flag-setting instructions were back to back).
QEMU does something similar, preserving the source operands and the opcode of operations that set flags and computing the flags only when something checks the flags' value.AArch64 has a similar set of flags to x86, so flag-setting instructions can be translated into one or two instructions.Arm didn't get this right (from an emulation perspective) in the first version of the ISA.Both Microsoft and Apple (two companies that ship operating systems that run on Arm and need to run a lot of legacy x86 code) provided feedback, and ARMv8.4-CondM and ARMv8.5-CondMadded extra modes and instructions for setting these flags differently.Apple goes further with an extension that sets the two flags present in x86 but not Arm in some unused bits of the flags register, where they can be extracted and moved into other flag bits when needed.
RISC-V made the decision not to have condition codes.These have always been a feature that microarchitects hate-for a few reasons.In the worst case (and, somehow, the worst case is always x86), instructions can set some flags.In the case of x86, this is particularly painful because the carry flag and the interrupts-disabled flag are in the same word (which led to some very entertaining operatingsystem bugs, because the ABI states that the flags register is not preserved across calls, so calling a function to disable interrupts in the kernel was followed by the compiler helpfully reenabling them to restore the flags).
Anything that updates part of a register is painful because it means allocating a new rename register and then doing the masked update from the old value.Even without that, condition codes mean that a lot of instructions update more than one register.
Arm, even in AArch32 days, made this a lot less painful by having variants of instructions that set flags and not setting them for most operations.RISC-V decided to avoid this and instead folds comparisons into branches and has instructions that set a register to a value (typically one or zero) that can then be used with a compare-and-branch instruction such as branch if [not] equal (which can be used with register zero to mean branch of [not] zero).
Emulating x86-64 quickly on RISC-V is likely to be much harder because of this choice.
Avoiding flags also has some interesting effects on encoding density.Conditional branch on zero is incredibly common in C/C++ code for checking that parameters are not null.On x86-64, this is done as a testq (threebyte) instruction, followed by a je (jump if the test set the condition flags for equality), which is a two-byte instruction.This incurs all of the annoyances of allocating a new rename register for the flags mentioned previously, including the fact that the flags register remains live until the next flag-setting instruction exits speculation.
The decision to avoid condition codes also makes adding other predicated operations much harder.The Arm conditional select and increment instruction looks strange at first glance, but using it provides more than a 10 percent speedup on some compression benchmarks.This is a moderately large instruction in AArch64: three registers and a four-bit field indicating the condition to test.This means that it consumes 19 bits in operand space.An equivalent RISC-V instruction would either need an additional source register and variants for the comparisons to perform or take a single-source operand but need a comparison instruction to set that register to zero or non-zero first.
ALWAYS MEASURE
In 2015, I supervised an undergraduate student extending an in-order RISC-V core with a conditional move and extending the LLVM back end to take advantage of it.His conclusion was that, for simple in-order pipelines, the conditional move instruction had a 20 percent performance increase on a number of benchmarks, no performance reduction on any of them, and a tiny area overhead.Or, examining the results in the opposite direction, achieving the same performance without a conditional move required around four times as much branch predictor state.This result, I am told, reflected the analysis that Arm conducted (although didn't publish) on larger and wider pipelines when designing AArch64.This is, apparently, one of the results that every experienced CPU designer knows but no one bothers to write down.
AArch64 removed almost all of the predication but kept a few instructions that had a disproportionately high benefit relative to the microarchitectural complexity.The RISC-V decision to omit conditional move was based largely on a paper by the authors of the Alpha, who regretted adding conditional move because it required an extra read port on their register file.This is because a conditional move must write back either the argument or the original value.
The interesting part of this argument is that it applies to an incredibly narrow set of microarchitectures.Anything that's small enough to not do forwarding doesn't need to read the old value; it just doesn't write back a value.Anything that's doing register renaming can fold the conditional move into the register rename logic and get it almost for free.The Alpha happened to be in the narrow gap between the two.
It's very easy to gain intuition about what makes an ISA fast or slow based on implementations for a particular scale.These can rapidly go wrong (or start out wrong if you are working on a completely different scale or different problem space).New techniques, such as the way that NVIDIA Project Denver and Apple M-series chips can forward outputs from one instruction to another in the same bundle, can have a significant impact on performance and change the impact of different ISA decisions.Does your ISA encourage compilers to generate code that the new technique can accelerate?
If you come back to this article in five to ten years, remember that technology advances.Any suggestions that I've made here may have been rendered untrue by newer techniques.If you have a good idea, measure it on simulations of different microarchitectures and see whether it makes a difference.
be stuck with any design decision you made in the current generation.When it started testing simulations of early Pentium prototypes, Intel discovered that a lot of game designers had found that they could shave one instruction off a hot loop by relying on a bug in the flag-setting behavior of Intel's 486 microprocessor.This bug had to be made part of the architecture: If the Pentium didn't run popular 486 games, customers would blame Intel, not the game authors.If you buy an NVIDIA GPU, you do not get a document explaining the instruction set.It, and many other parts of the architecture, are secret.If you want to write code for it and don't want to use NVIDIA's toolchain, you are expected to generate PTX, which is a somewhat portable intermediate language that the NVIDIA drivers can consume.This means that NVIDIA can completely change the instruction set between GPU revisions without breaking your code.In contrast, an x86 CPU is expected to run the original PC DOS (assuming it has BIOS emulation in the firmware) and every OS and every piece of user-space software released for PC platforms since 1978. | 6,399.4 | 2023-12-31T00:00:00.000 | [
"Computer Science"
] |
Vibration Fault Diagnosis in Wind Turbines based on Automated Feature Learning
A growing number of wind turbines are equipped with vibration measurement systems to enable a close monitoring and early detection of developing fault conditions. The vibration measurements are analyzed to continuously assess the component health and prevent failures that can result in downtimes. This study focuses on gearbox monitoring but is applicable also to other subsystems. The current state-of-the-art gearbox fault diagnosis algorithms rely on statistical or machine learning methods based on fault signatures that have been defined by human analysts. This has multiple disadvantages. Defining the fault signatures by human analysts is a time-intensive process that requires highly detailed knowledge of the gearbox composition. This effort needs to be repeated for every new turbine, so it does not scale well with the increasing number of monitored turbines, especially in fast growing portfolios. Moreover, fault signatures defined by human analysts can result in biased and imprecise decision boundaries that lead to imprecise and uncertain fault diagnosis decisions. We present a novel accurate fault diagnosis method for vibration-monitored wind turbine components that overcomes these disadvantages. Our approach combines autonomous data-driven learning of fault signatures and health state classification based on convolutional neural networks and isolation forests. We demonstrate its performance with vibration measurements from two wind turbine gearboxes. Unlike the state-of-the-art methods, our approach does not require gearbox-type specific diagnosis expertise and is not restricted to predefined frequencies or spectral ranges but can monitor the full spectrum at once.
Introduction
The globally installed wind power capacity is constantly growing thanks to international efforts to limit the global mean temperature rise by replacing fossil fuels [1].A major fraction of the levelized cost of wind energy consists of the operation and maintenance costs of the wind farms [2].Continuous health monitoring of wind turbine components forms an important part of the work of wind farm operators as it helps to limit the extent of unforeseen maintenance costs.To reduce the operation and maintenance costs of their wind farms, many operators and asset managers are applying remote condition monitoring techniques to detect incipient faults before they can result in major damage.
Gearboxes are among the most critical and costly components to replace in a wind turbine in terms of the equipment, replacement work and downtime costs per failure [3][4][5][6].Therefore, a growing number of wind turbine gearboxes is being equipped with vibration measurement systems to enable a close monitoring and early detection of developing fault conditions in the gearbox components [7][8][9].The vibration monitoring signals require analysis and interpretation to prevent failures.Numerous approaches have been proposed to assess the vibration signals from wind turbine gearboxes in the time and frequency domains.Examples include the time-domain monitoring of waveform features such as root mean square deviations, peak-to-peak amplitudes, and kurtosis.In the frequency domain, methods such as spectral line analysis, envelope and sideband analysis have been proposed [8][9][10][11].Thus, the state-of-the-art vibration diagnostics methods applied for wind turbine (WT) gearboxes in practice rely on the extraction of hand-crafted features from the gearbox vibration signals.The features need to be defined by a human analyst before they can be extracted.Only after they have been defined and extracted from the vibration measurements, the features may be used to infer information about potential faults in gearbox components based on statistical methods or machine learning models [12][13].Typical handcrafted features that are in use for WT gearbox fault diagnostics are the position and amplitude of spectral lines corresponding to characteristic frequencies of gearbox components, such as gear mesh frequencies, and other characteristic metrics such as the root mean square deviation and kurtosis of parts of the vibration time series [8][9][10][11].However, these state-of-the-art vibration diagnostics methods have multiple disadvantages.First, they require a laborintensive upfront conception and handcrafting of feature definitions, which constitutes a significant time and workforce effort.Second, many state-of-the-art approaches and feature definitions need a highly detailed knowledge of the gearbox type, manufacturer, composition and dimensions, its bearing and gear types, gear teeth numbers, and so on.This information needs to be gathered for every single gearbox before the start of the monitoring.As a result, the state-of-the-art fault diagnostics and feature extraction approaches can generally not be transferred straightforwardly to new turbine types added to an operator's wind power portfolio.For every new turbine entering the portfolio, detailed turbine composition information needs to be collected from the manufacturer and the vibration features need to be reviewed, adapted and extracted.This constitutes a large resource-intensive initial effort that many wind farm operators and asset managers are hesistant to make.Third, after a feature definition and extraction method has been implemented, thousands of characteristic spectral values per turbine gearbox need to be stored and monitored, which requires costly storage resources and computing time in the remote monitoring centers of the turbine operators and asset managers.Fourth, the state-of-the-art approaches do not analyse the full vibration spectrum but focus on monitoring only isolated aspects thereof, such as a set of characteristic frequencies, or they focus on global metrics of the vibration time series or spectrum.Unlike the proposed approach, they do not support an automated simultaneous vibration monitoring of the full spectral range.Lastly, features defined by human analysts can lead to imprecise decision boundaries and less accurate fault diagnostics predictions than features that have been learnt by the machine learning algorithms themselves [14].The state-of-the-art feature definition and extraction methods may result in lower diagnostics accuracy, more false alarms and false negatives, especially in ambiguous boundary cases that require additional inspection and decision making by remote monitoring staff, than fault diagnostics methods which learn and extact the optimal features themselves.A reliable feature definition and extraction is essential to the fault diagnostics process.For an illustrative example of how the chosen upfront feature definitions can affect the fault detection quality, we refer to the study presented by [11].
The research gap addressed by this study is the development of a fault diagnostics method for vibration-monitored wind turbine gearboxes that 1) learns and extracts an optimal set of discriminative features in an automated manner, not requiring any feature engineering, 2) analyses the full vibration spectrum, rather than focusing only on isolated predefined aspects thereof, and 3) that is even applicable if only few fault observations are available.
Consequently, the objective of this paper is to introduce and demonstrate a novel fault diagnostics approach for vibration-monitored wind turbine gearboxes that can overcome the discussed disadvantages of the state-of-the-art methods.In particular, the novel approach is expected to learn optimal discriminative features in an automated manner and classify the gearboxes' health conditions based on these features without requiring any human feature definition and extraction.It is also expected to analyse the full vibration spectrum and be applicable even in situations where sufficient model training data for fault-type classification are unavailable.
This paper is organized as follows.Section 2 introduces the proposed fault diagnostics approach.Section 3 describes the method applied and data employed in a gearbox failure case study, whereas section 4 discusses the analysis and results.Our conclusions are presented in section 5.
Fault Diagnosis Method
The proposed fault diagnosis method comprises two stages.The first stage performs unsupervised anomaly detection on the features learnt and extracted from spectrograms of each monitored gearbox component (Figure 1).This stage one accounts for the fact that many WT operators have access to only few or even no sensor measurements from actual gearbox fault incidents as these are relatively rare events and they can arise from a range of different causes.While methods that require labelled observations of gearbox faults (as in proposed stage two below) may be less beneficial to such operators, anomaly detection methods based on measurements taken in normal healthy operation state will still be available and highly useful to them even in absence of labelled fault observations.
The second stage of the presented approach employs a multi-label classification method to diagnose specific gearbox fault types based on past fault observations (Figure 2).This stage mainly benefits operators who have access to measurements from observed gearbox faults that enable training a corresponding fault type classification model.Therefore, the proposed stage two is performed only if sufficient fault observations are available to the operator's remote monitoring staff in charge of implementing the proposed approach.
The fault diagnoses are made based on features extracted from vibration spectrograms.To this end, vibration measurements are taken continuously from numerous accelerometer-monitored gearbox components and are accessed through the turbine's condition monitoring system (CMS).The accelerometer measurement time series are subjected to short-time Fourier transforms (STFT) to monitor the temporal evolution of the vibration spectra in the time-frequency domain.The resulting spectrograms from all accelerometer-monitored gearbox components serve as inputs to feature extraction neural networks composed of convolutional and pooling layers as described below.Unlike the state-ofthe-art fault diagnostics methods, the proposed approach does not require gearbox-type specific information.Therefore, it can be introduced to even large WT portfolios without the upfront efforts and investments required for existing methods.
The operators will be informed both in stages one and two in case a significant deviation from healthy component spectrograms has been diagnosed.Importantly, both stages of the proposed fault diagnostics approach rely on automated feature learning and extraction that is performed by an algorithm rather than a human analyst.This is achieved by the application of convolutional neural networks (CNNs) [14][15], as shown in Figures 1 and 2. We refer to [16] for a technical introduction to CNNs.CNNs were selected for the proposed health state classification approach because, unlike other models, they are capable of learning and extracting features from image data without human assistance.
CNNs are computational models that are capable of extracting the relevant features without any human assistance.They accomplish this by learning optimal convolutional filters based on historical training data, in this case vibration measurements and fault observations.Thanks to this property, CNNs have enabled major performance improvements in fields such as speech recognition and object detection in recent years [14][15].CNNs are artificial neural networks that consist of convolutional and pooling layers trained to perform feature learning and extraction based on past observations.These layers are subsequently linked to fully connected layers to perform desired classification or regression tasks based on the previously extracted features.During model training, the CNN weights optimization algorithm effectuates automated feature learning and extraction to construct a low dimensional representation of the input spectra which is subsequently fed to the anomaly detection model (stage one, Figure 1) or the fault type classification network (stage two, Figure 2).In the first stage of the presented fault diagnostics approach, the feature extraction part is succeeded by an isolation forest model (Figure 1) for detecting anomalous spectrograms in an automated manner.The extracted features serve as input to the isolation forest algorithm [17] that is adapted to distinguish anomalous from normal spectrograms with regard to the component health state based on historical accelerometer measurements.The isolation forest algorithm identifies potential anomalies by how quickly the input spectrograms can be isolated from the rest of the spectrograms using a decision-tree-based approach.A health-state classifier is trained using examples from only one class, namely observations from healthy gearbox components only.This is a highly relevant scenario in practice because fault observations of WT gearbox components are often lacking: Wind farm operators usually have a large amount of CMS sensor observations from different parts of the drive train from multiple months or years of operation.Typically, the vast majority of these measurements from the CMS system are taken under normal operating conditions from healthy components.Fault conditions and damages occur relatively rarely in commercial turbines that are operated and maintained in accordance with the manufacturer's recommendations.Therefore, there is a relative lack of such fault observations.This lack strongly restricts the training and application of machine learning models for fault type classification because those models require a significant amount of training data.Therefore, machine learning models trained only on observations from healthy gearbox components tend to be more widely applicable in practical applications and are highly relevant when comprehensive fault observations are lacking.
To train the health-state classifier using only observations from healthy gearbox components, we compared two anomaly detection approaches: the isolation forest algorithm introduced above and one-class support vector machines [18].The isolation forest approach is known for its fast computational training time [17].In the case study presented below, it outperformed the one-class SVM by more than a factor of 30 in terms of required training time but provided no advantages in terms of prediction performance.Therefore, our discussion of the case study below focuses on the developed isolation forest model with its more attractive training times and accordingly larger practical relevance.
In stage two of the proposed fault diagnostics method, our goal is to train a health-state classifier for diagnosing specific fault conditions in gearbox components using both accelerometer measurements from healthy components and from damaged ones.The features extracted by the convolutional and pooling layers serve as input to train multilabel fault type classifiers, as illustrated in Figure 2.
Case study
The proposed fault diagnosis approach is demonstrated and tested based on vibration measurements from multiple gearbox components taken while the gearbox was operating on a test rig.The proposed approach is demonstrated and its performance tested using accelerometer measurements from the gearboxes of two 750 kW wind turbines operated on a WT test rig at the National Renewable Energy Laboratory (NREL).The data were collected by NREL for its Gearbox Condition Monitoring Round Robin study [23].The accelerometer measurements were taken over a ten-minute period from two identical gearboxes of the 750kW wind turbines with one of the gearboxes in a healthy unimpaired state and the other gearbox suffering from multiple damaged components after an oil loss event which had caused moderate damage to the gearbox gears and bearings.Each of the two gearboxes has a transmission ratio of 1:81.491 and comprises a low-speed planetary stage and two parallel stages.Figure 3 shows a schematic of the gearbox.We refer to [23][24] for a detailed description and visualization of the test stand, gearbox, monitoring system and measurement setup.
To demonstrate the fault diagnostics approach and automated feature learning and extraction, the following components were selected for this case study in an arbitrary manner from among the set of components monitored in the NREL study [23]: Accelerometer 1 was attached to the ring gear (component 1) at a bottom-facing location in both the healthy and the damaged gearbox to measure radial accelerations.
Discussion
To create the spectrograms that will be input to the CNN for feature extraction, the accelerometer measurements were split into four segments per second and a separate short-time Fourier transform (STFT) was computed for each segment.A short-time Fourier transform enables the frequency analysis of a signal as it changes over time [25][26].
The length of four segments per second was selected by investigating the tradeoff between time and frequency resolution so as to maintain a high frequency resolution and sufficient temporal resolution, as shown in Figure 4.The frequency resolution should be sufficient for resolving typical spectral differences arising between healthy and damaged states of the monitored components.The STFTs were computed with an overlap of 0.2 seconds for adjacent segments.However, the length of the overlap had no significant effect on the performance of the subsequent anomaly detection and classification models.Given the 40 kHz sampling rate, vibration frequencies of up to the Nyquist frequency of 20 kHz can be resolved.For the present fault diagnostics case study, we focused the analysis on vibration frequencies of up to 1 kHz.Prior to the model training, we sampled segments from the resulting spectrograms with replacement in order to augment the training dataset.We sampled segments of one second in length to ensure that short vibration measurement periods (of only one second) will be sufficient as input to the fault diagnostics model when it is used for inference in a condition monitoring software or CMS system.One-second intervals were found to be sufficiently representative of this amplitude variability when examining the STFT amplitude variability over time for all frequencies up to 20 kHz, as shown in Figure 5.To test the sensitivity of the presented fault diagnostics approach with regard to the temporal length of the sampled spectrogram segments, we performed our analysis for spectrograms with time lengths ranging up to 6 seconds, finding that this choice did not significantly affect the results.After sampling the one-second segments from the vibration spectra of healthy and faulty components, the resulting dataset was randomly shuffled and partitioned into training, validation and test sets.The training set in this case study contained 80000 instances.The classification method was validated using a validation set of vibration measurements from healthy and damaged components based on 10000 instances.The test set also contained 10000 instances.
Figure 6 shows subsets of the spectrograms derived from the vibration measurements of the three accelerometers at the healthy and the damaged gearbox components.Two healthy and two damaged instances are shown for each of the three components for illustration.As can be seen in the figure, the spectral differences between vibration measurements from the healthy and the damaged components have been sufficiently resolved by the Fourier transforms to enable discriminative feature learning.Stage 1: Isolation forests for detecting anomalous vibration spectra.The spectrograms prepared as outlined above served as input to convolutional and pooling layers of a CNN that learned and extracted discriminative features in an automated manner.For the feature learning and extraction, we defined a network architecture consisting of one convolutional layer with 16 convolutional filters of 3-by-3 pixels followed by a max pooling layer with a 2-by-2 pixels window size and batch normalization [27].This architecture is of low complexity and also enabled a high classification accuracy in the multilabel classification of stage two of the presented fault diagnostics approach.
The extracted features were input to an isolation forest algorithm for one-class classification [17] which identifies spectral anomalies based on how hard it is to isolate a particular spectrum from the rest of the spectra in the training set.A forest containing 100 isolation trees was trained on the extracted features.The model parameters are summarized in Table 1.The training dataset contained features from the spectrograms of only healthy components.We specified the fraction of anomalous data instances estimated to be present in the training data to less than one over the training set size.The anomaly score computed by the model for each training, validation and test set instance corresponds to the number of splits averaged across the isolation tree forest that is needed to isolate a data point (Figure 1).Thus, the anomaly score is the average path length from root to leaf node in an isolation tree.As shown in Figure 7, the spectra from the healthy and damaged components are clearly separable using the computed anomaly scores, so the isolation forest model is well suited for identifying components with anomalous health states even for high dimensional feature spaces as in the present case study.
The health state discrimination is performed in an unsupervised manner to make it applicable to WT operators whose remote condition monitoring team does not have sufficient amounts of fault observations.Since we are actually in possession of such observations in this case study, we employed a test dataset that had not been used in the model training (Figure 7) and estimated performance metrics based on the test dataset.We found both recall and precision to be 100% on the test dataset for the proposed isolation forest approach and model architecture.Recall is a performance metrics that designates the fraction of true positives over all actual positives, in this case the fraction of all correctly identified instances of a given fault type over all actual occurrences of that fault type.Precision is an alternative performance metrics that denotes the fraction of true positives over all instances there were identified as positive.In other words, the precision states what fraction of the identified observations of a given fault type were correctly identified as observations of that fault type.
Model Model parameter Isolation forest
Number of isolation trees equals 100; Threshold for outliers fraction equals 0.0001
Convolutional neural network
One convolutional layer with four 3-by-3 filters followed by a 2-by-2 max pooling layer and batch normalization, followed by a dense layer of four fully connected nodes and a 3-node output layer, 10% drop out rate We repeated the analysis with the same feature learning and extraction architecture employed in stage two below, which naturally resulted in the same extracted feature set as used in the multi-label classification step.Specifically, this architecture comprised one convolutional layer with only four convolutional 3-by-3 filters followed by a 2-by-2 max pooling layer and batch normalization.As before, the extracted features were then input to the isolation forest algorithm for detecting anomalous spectra.The change in feature extraction architecture had no significant effect on the spectra's anomaly scores (Figure 67) and also resulted in 100% recall and precision.
In addition, one-class support vector machines (SVMs) [18] were investigated as a further approach for the vibration-based anomaly detection in this study.However, the SVM algorithm required more than 30 times more model training time than the isolation forest on the same training set and processor, an AMD EPYC 7B12 2.25 GHz processing unit, though no improvement of detection performance was observed. .We tested the model using additional spectra from healthy component states (middle panel) and from damaged component states (right panel).In the latter case, the anomaly scores were negative, indicating anomalies as expected.Healthy and damaged components are clearly separable with this approach.
Stage 2: Multi-label classification for fault-type diagnostics.As in stage one, the spectrograms were subjected to convolutional and pooling layers to enable feature learning and extraction based on the training dataset.Subsequently, a fault-type classification was performed with the extracted features using fully-connected neural network layers.Jointly, the convolutional, pooling and fully connected layers established the convolutional neural network for the multi-label fault type classification.Once trained, the CNN predicts the probabilities of all three fault types considered in this case study based on a given spectrogram to be diagnosed.
To arrive at the final CNN structure (Table 1), we started from a more complex CNN architecture and successively reduced the number of convolutional and pooling layers, filters and fully connected layers while maintaining maximal validation set accuracy.We selected the least complex CNN architecture that could achieve the highest possible classification accuracy on the validation set.In this case study, the resulting CNN architecture comprises one convolutional layer with four 3-by-3 filters followed by a 2-by-2 max pooling layer and batch normalization.This first part ensures the learning and extraction of features based on which the subsequent classification can be performed.Two fully connected layers were added to the network and a 10% dropout rate applied to avoid overfitting the training data [28].The first fully connected layer comprised four nodes and the output layer consisted of three nodes with a sigmoid activation function for the output layer.The model predicts three binary labels, one of which for each of the monitored gearbox components, and indicates whether or not a fault has been detected in the respective component.The output layer with the three neurons and the sigmoid activation functions is providing probabilities for a given spectrum to belong to a particular fault-type class.The model parameters were determined iteratively with the Adaptive Moment Estimation (Adam) optimization algorithm [29] by optimizing a binary crossentropy loss function.In doing so, multiple binary classification decisions can be optimized at once.The model training was performed for 20 epochs and batch size equal to 32.Different batch sizes did not affect the classification accuracy.Logarithmic transformations of the input spectra had no effect either on the classification accuracy.This architecture enabled a 100% classification accuracy for all three component fault type classes on the validation and test sets.We arrived at this architecture through a grid search by starting from a more complex CNN with the number of nodes equal to powers of two, and then reducing the network complexity while maintaining the high accuracy of 100% on the validation set, as described above.The test set classification accuracy of 100% was achieved both with and without the logarithmic transformation of the spectrogram segments inputs to the convolutional and pooling layers (Figure 6).The models trained as part of the hyperparameter optimization converged to a loss function minimum within 20 epochs without overfitting.
With regard to the limitations and the future research needs arising from the present study, we point out that, first, all acceleration measurements in the present study were taken under constant speed and load conditions on a test stand.The introduced fault detection approach should be field tested under variable speed and load conditions in future work.In practice, the wind speed driving the turbine is fluctuating which results in a variable load and shaft speed and may cause frequency smearing in the spectral representation [9].However, this condition can be overcome by synchronizing the measurements with the wind turbine's rotational speed, for instance by sampling under identical wind and load conditions.Second, for applications in operating wind turbines, the performance of the fault type classification model should also be investigated for a significantly larger number of monitored components and fault types and for damage processes that are evolving over time.This investigation will require more comprehensive field or laboratory measurement datasets.Third, attention also needs to be paid to avoiding possible data imbalance issues when training a fault diagnostics model.While this does not affect stage one of the proposed fault diagnosis approach, it may be relevant in the application of the methods introduced for stage two.Data imbalance refers to situations where there are disproportionate numbers of observations in the output classes.For instance, there may be a large number of observations for one class, say fault type 1, but only few observations for another class.In the presented case study, all fault types were represented with similar numbers of observations.This may not always be the case in practice.Typically, more vibration measurements will be available from healthy components because fault situations are less common than WT gearboxes operating in normal health state.Vibration measurements from damaged components or components in which a damage starts to develop are typically in the underrepresented class.One method to address data imbalance is by over-or undersampling to arrive at an augmented and more balanced training dataset.This may be achieved, for example, by random resampling with replacement (statistical bootstrapping) from the available fault observations, so that all monitored WT components are equally represented, both in healthy and damage states, in the training, validation and test datasets.This approach relies on the assumption that the data used for the bootstrapping are sufficiently representative of the underlying data-generating process.A more comprehensive discussion of methods for addressing data imbalance is not in the scope of this work and we refer to the work of other authors, e.g.[30].
Conclusions
An increasing number of wind turbines are equipped with vibration-measurement systems to enable a close monitoring and early detection of developing fault conditions in the gearbox.Gearboxes are among the most critical and costly components to replace in wind turbines.The current state-of-the-art gearbox fault diagnostics algorithms rely on upfront definitions of fault signatures by human analysts.The state-of-the-art diagnostics methods have in common that, for each of them, a human analyst has investigated and designed a particular feature (fault signature) to be extracted from the vibration measurements.Each feature has been defined so as to capture a particular aspect that starts to build up in the time-or frequency domain signals when an incipient fault starts to evolve and intensify in an originally healthy component.For instance, local surface damage on a gear tooth is typically diagnosed based on changes in the residual signal obtained after the gear mesh frequencies and harmonics have been removed.These feature-engineering approaches have multiple disadvantages as discussed above.They require time-intensive handcrafting of fault diagnostics features and detailed knowledge of the monitored component.Therefore, they lack scalability with the increasing number of monitored turbines, with different component types and configurations present, each of which with its own characteristic frequencies.Fault signatures defined by human analysts can result in biased and imprecise decision boundaries in the fault diagnostics process.
Wee presented a novel accurate fault diagnostics framework for wind turbine gearboxes that overcomes these disadvantages and can be easily incorporated into condition monitoring software or CMS systems for autonomous fault diagnostics decision support.It is based on high-frequency vibration measurements from multiple accelerometers and monitored components.The proposed two-stage framework combines autonomous data-driven learning of fault signatures and health state classification based on convolutional neural networks and isolation forests.In stage one of the presented approach, an isolation forest algorithm detects anomalous component health states based on the features that have been automatically learnt and extracted from the gearbox component spectrograms.This is particularly suitable for operators and monitoring centers that do not have access to sufficient amounts of accelerometer measurements from gearbox fault events.On the other hand, the availability of such observations is required in stage two which involves a multi-label classification by fault types based on spectrogram features extracted from past fault observations.
We have demonstrated and tested the proposed fault diagnostics framework by application to gearbox vibration measurements from two wind turbine drivetrains.The case study performed to this end used accelerometer measurements from a test rig measurement campaign for three different fault types and achieved high fault diagnostics accuracy.
Unlike the state-of-the-art approaches [8][9][10][11], the presented method enables automated feature learning and extraction without a human analyst.As demonstrated, given suitable training data, accurate fault diagnosis is possible without any human feature engineering and without the need for storing thousands of spectral characteristics and threshold values to be predefined by monitoring center staff for every turbine.Moreover, the presented fault diagnostics approach does not require detailed knowledge of the gearbox type, manufacturer, composition, gear dimensions, teeth numbers, characteristic bearing frequencies, and so on.Therefore, it can be applied to arbitrary gearbox types, composition and manufacturers.In summary, the proposed framework is advantageous over the state-of-the-art approaches, like monitoring of spectral lines and other characteristic metrics, in that the fault diagnosis features are learnt by the algorithm, in that no gearbox-type-specific diagnostics expertise and no corresponding human feature engineering are required.Moreover, it is not restricted to predefined frequencies or spectral ranges but monitors the full vibration frequency spectrum of interest.
Funding: The support of the present work through a grant from the Swiss innovation agency Innosuisse is gratefully acknowledged.
Figure 1 .
Figure 1.Stage one of the proposed method.Anomaly detection based on features extracted by convolutional and pooling layers from spectrograms of the monitored components in their healthy state.An isolation forest model is trained on the extracted features.The trained model is subsequently employed for detecting anomalous spectra, indicated as red nodes in the isolation trees shown on the right-hand side of the chart.
Figure 2 .
Figure 2. Stage two of the proposed approach.Again, the feature learning and extraction is performed autonomously by convolutional and pooling layers.The proposed fault diagnostics model for the health state classification in wind turbine gearboxes is a fully connected multi-label neural network, as shown on the right-hand side of the chart.Multi-label classification [19-21] is especially beneficial when accelerometer measurements taken during evolving and evolved past gearbox damage are available for the gearbox types of interest.A multi-label classification model is estimated for monitoring multiple gearbox components simultaneously based on high-frequency acceleration measurements from accelerometers attached in close proximity of the monitored components.The multi-label classification enables a joint classification of multiple damage types, wherein each data instance is simultaneously assigned multiple labels.Each label indicates the membership status in one of multiple classes in a binary manner.More formally, a multi-label classification algorithm estimates a map : ℝ → ℝ based on a training set &' , , . . ., , , , , . . ., , ,, = , . . ., 1 of size wherein the coordinates of any ∈ ℝ can take binary values only, ∈ {, } ∀ = , . . ., .
Accelerometer 2 was attached to the low-speed shaft bearing (component 2) also to measure radial accelerations, whereas accelerometer 3 monitored the radial accelerations of the high-speed shaft downwind bearing (component 3).All three components -the ring gear, the low-speed shaft bearing and the high-speed shaft bearing -exhibited different degrees of damage in the damaged gearbox such as scuffing.In total, readings from six sensors are considered.The accelerometer measurements at the undamaged gearbox and at the damaged gearbox were each taken at 40 kHz sampling frequency under constant load and speed of 22.09 rpm low-speed shaft and 1800 rpm high-speed shaft speed for a duration of 10 minutes.
Figure 4 .
Figure 4. Tradeoff between frequency and temporal resolution depending on the number of shorttime Fourier transforms performed per minute (#SFFT) for creating the spectrograms.The longer the input time series to the Fourier transform, the higher the frequency resolution achieved.
Figure 5 .
Figure 5. One-second intervals were found to be sufficiently representative of the STFT amplitude variability when examining this variability over a range of intervals, including 1, 5, 10 and 60 seconds as shown in the subpanels, for all frequencies up to 20 kHz.For illustration, the amplitude variability is only shown for frequencies in the range of 100 to 200 Hz.
Figure 6 .
Figure 6.Subsets of the spectrograms derived from the vibration measurements of the three accelerometers at the healthy and the damaged gearbox components.Two healthy and two damaged instances are shown for each of the three components.For instance, "Healthy 1" labels two different 1-sec spectrogram segments of component 1 in healthy state.Each spectrogram segment runs over 1 second and displays logarithmic STFT amplitudes for frequencies up to 400 Hz.
Figure 67 .
Figure 67.Anomaly detection based on anomaly scores computed with the isolation forest algorithm.The training set contains only spectra from healthy component states, corresponding to absence of anomalies as indicated by a positive anomaly score (left panel).We tested the model using additional spectra from healthy component states (middle panel) and from damaged component states (right panel).In the latter case, the anomaly scores were negative, indicating anomalies as expected.Healthy and damaged components are clearly separable with this approach.
Table 1 .
Parameters of the trained isolation forest model (stage one) and of the convolutional neural network (stage two). | 7,870.8 | 2022-01-31T00:00:00.000 | [
"Computer Science"
] |
Diagnosis and Location of Open-Circuit Fault in Modular Multilevel Converters Based on High-order Harmonic Analysis
: Open-Circuit faults of Submodules (SMs) are the most common fault type of modular multilevel converter (MMC). Thus, in order to improve the reliability of MMC, it is very important to detect and locate faulty MMC SMs. In this paper, a new fault diagnosis and location method based on high-order harmonic analysis of the bridge arm voltage is proposed, and the characteristics of SM open-circuit faults are analyzed. In the proposed method, faults are detected by comparing the amplitude of the bridge arm voltage at the switching frequency with a variable threshold, and the phase angle of the bridge arm voltage at the switching frequency is used to locate the faulty SM. The proposed method can detect faulty SM with just one voltage sensor per arm bridge. Moreover, this method can effectively avoid misjudgment of a fault detection signal in the normal transient state. Finally, a MMC model was built using a MATLAB/Simulink environment and simulations were conducted. The experimental results show that the proposed method can not only diagnose a SM fault quickly, but can also locate the faulty SM accurately. open circuit
INTRODUCTION
The modular multilevel converter (MMC) was first proposed as a new type of voltage source converter topology by the German scholar R. Marquardt in 2001 [1]. Because of its modular structure and design, easy expansion, high output waveform quality, low operating loss, and common DC bus, it is commonly connected to medium and high voltage direct current and new energy. It is increasingly and widely used in applications such as high voltage electric drives [2][3], high voltage direct current (HVDC) transmissions [4][5].
An MMC comprises a large number of SMs. For example, each bridge arm of the "Trans Bay Cable" project contains 216 SMs [6], and each SM contains two power switching devices, each of which is a potential point of failure. SM faults are one of the common MMC fault types. SM faults lead to deviations between the output voltage and the expected value of a bridge arm, increased interphase circulating current, and increased AC and DC side harmonics, which affect the safe and reliable operation of the whole system. After a SM fails, a protection strategy should be adopted immediately. Specific protection strategies should include the following aspects: fast fault detection and location of the faulty SM, fast bypassing of the faulty SM, input of a redundant module, and return of the system to its fault-tolerant operation state [7][8]. Detecting and locating faulty SMs is the premise of faulttolerant control. Therefore, it is very important to study how to quickly detect faults and accurately locate faulty SMs to ensure safe and stable operation of the system [9][10][11][12].
In response to the above problems, much research has been conducted. Reference [13] used a Kalman filter algorithm to compare the difference between measured and estimated values to detect faults, and applied the SM capacitor voltage to locate faulty SMs after failure. The authors of [14] proposed a fault diagnosis method based on the sliding mode observer. Their proposed method effectively diagnoses any open fault of the Insulated Gate Bipolar Transistor (IGBT) and avoids the interference caused by sampling error and system fluctuation. Reference [15] constructed a state observer that can identify multiple SM faults based on the deviation between the measured and calculated values of state variables. In [16], a fault diagnosis and location method based on a mixed kernel support tensor machine was proposed, in which the characteristic data of ac current and internal circulation current are extracted in either normal operation or open-circuit fault. Reference [17] proposed two fault monitoring methods based on a clustering algorithm and on the calculation method of bridge arm equivalent capacitance, which can accurately diagnose and locate faults. Reference [18] identifies faults by determining whether the difference between the predicted and measured values of the bridge arm current exceeds a given threshold. The fault location method is based on the slope of the capacitor voltage of the SM. A series of complete SM fault detection, fault-tolerant control, fault location, and fault reconstruction ideas were proposed in [19], which can achieve fault traversal and improve the reliability of system operation.
This paper proposes a new SM fault diagnosis and localization method based on high-order harmonic analysis of the bridge arm voltage in MMCs. In the proposed method, the amplitude and phase angle of high-frequency harmonic components are used to detect and locate faults. Compared with the methods cited above, the proposed method can detect faults more quickly, significantly reduces the required number of sensors, has no complicated calculations, and is low-cost.
The remainder of this paper is organized as follows. Section II introduces the topology and operating principle of the MMC. Section III analyzes the open-circuit fault characteristics, modulation algorithm, and high-frequency harmonic distribution of MMC SMs, and proposes fault detection based on high-frequency harmonic analysis of the bridge arm voltage in the MMC. Section IV verifies the effectiveness of the proposed method via system simulation results. Finally, Section V presents concluding remarks.
MMC TOPOLOGY AND OPERATING PRINCIPLE 2.1 MMC Topology
The three-phase MMC topology is shown in Fig. 1. The MMC consists of three phases and six bridge arms. The upper and lower arms are combined into one phase unit.
Each bridge arm contains one bridge arm reactance and the same number of series SMs. Fig. 1a, u a , u b , u c are the three-phase AC side voltage of the converter. i pj and i nj are upper and lower arm currents, respectively. u jp and u jn are the upper and lower arm voltages, respectively, where j = a, b, c. I dc is the DC side current, and L is the bridge arm reactance value. Each bridge arm has N SMs connected in series. The SM structure is shown in Fig. 1b. Each SM has two insulated gate bipolar transistors (T1 and T2), an anti-parallel diode, and a floating capacitor in parallel. u sm is the output voltage of the AC terminal of the SM during steady-state operation. U dc is the DC voltage of the SM. Each bridge arm has an identical bridge arm reactance L in series. The main function of the bridge arm reactance is to suppress the internal circulation between the bridge arms and reduce the current rise rate when the converter fails.
MMC Operating Principle
The MMC has N SMs per bridge arm and can output up to N + 1 levels. In steady-state operation, the total number of conduction SMs of each phase unit must satisfy Eq. (1). By controlling the number of SMs of the input state in the upper and lower arms, the MMC outputs a multilevel waveform:
MMC Modulation
The modulation strategy of MMC is the key link of the valve control stage. The purpose of the modulation link is to control the ON and OFF states of the converter switching device according to the reference voltage waveform, so that the MMC output AC voltage approaches the reference voltage waveform. At present, the modulation strategies of MMC proposed in the literature mainly include nearest level modulation (NLM), carrier level shifted pulse width modulation (LS-PWM), and carrier phase-shifted modulation (CPS-PWM). This paper focuses on the carrier phase-shift modulation strategy because it can effectively reduce harmonics at lower switching frequencies. The reference voltages of the SMs are the same, which is beneficial to the balance of the capacitor voltage. Most importantly, it is applied in the SM fault detection and localization algorithm presented in Section III.
N sets of triangular carriers with frequency f c are required for MMCs with N cascaded SMs, and the phase angles are sequentially shifted by ∆Ø = 2π/N. Assuming that the reference voltage of the bridge arm is arm u * and the reference voltage for module i is sm_i u * , the reference voltage of each SM should be equal to the bridge reference voltage, which is The N SMs reference voltages are compared with the N sets of carriers to generate N sets of PWM pulses to respectively control the upper IGBTs of the N SMs, and the lower IGBTs of the N SMs are controlled by adding a certain dead time. The CPS-PWM principle is shown in Fig. 2. The switching frequency of each SM is f s = f c = 1/T. Fig. 2 shows the eight phase-shifted carriers and a modulated wave waveform. Fig. 3 shows the bridge voltage waveform corresponding to Fig. 2, which is the sum of two-level PWM voltages output by eight SMs.
Eq. (2) is a constraint condition to ensure that the output characteristics of the bridge arm remain unchanged. As long as Eq. (2) is satisfied, the reference voltage of each SM in the bridge arm can be adjusted in a small range without changing the output voltage of the whole bridge arm, and the capacitance voltage can be balanced.
FAULT DIAGNOSIS AND LOCATION METHOD OF MMC
The semiconductor devices in the SMs of MMC are relatively fragile, and the number of IGBTs in practical projects is relatively large. Therefore, an IGBT in a SM may be damaged because of overvoltage, overcurrent, or other reasons. SM faults can be divided into two types: short-circuit faults and open-circuit faults. Short-circuit faults have mature solutions, whereas open-circuit faults for a power device are not obvious, are not easily discovered, and their impact is greater. Therefore, this paper focuses primarily on open-circuit faults of SMs.
Open-circuit Fault Analysis of SMs
In studying the AC side output voltage characteristics of SMs, the output voltage of a SM can be expressed as follows: where S i is the switch function of the SM. The output voltage of the SM in normal and faulty conditions is shown in Tab.
2.
When an open-circuit fault occurs in T1, if T1 is in the OFF state, the open-circuit fault has no effect on the OFF state. When an open-circuit fault occurs in T1 and T1 is in the ON state, at i sm > 0, the bridge arm current path is as shown in Fig. 4a. The capacitor can be charged normally, and the output voltage of the SM is u sm . At i sm < 0, the bridge arm current is as shown in Fig. 4b. In this case, the output voltage of the SM is zero and the capacitor cannot discharge normally.
When an open-circuit fault occurs in T2 and T2 is in the ON state, the open-circuit fault has no effect on the OFF state. When T2 is in the ON state, at i sm < 0, the bridge arm current path is as shown in Fig. 4b. The output voltage is the same as normal in this case. At i sm > 0, the bridge arm current is as shown in Fig. 4a Whether an open-circuit fault occurs in T1 or T2, it will eventually lead to increased capacitance voltage in a faulty SM.
High Harmonic Analysis of CPS-PWM
In Sinusoidal pulse width modulation(SPWM)mode, the output PWM voltage waveform is determined by the modulation wave frequency and the carrier frequency. Each SM is equivalent to a two-level converter. The output two-level PWM voltage waveform of each SM is analyzed by the double Fourier transform method [20]. Using twolevel natural sampling, the harmonic distribution expression of the output voltage of the SM can be obtained as follows: where i is the SM number, and i = 1, 2, …, N, m is a multiple of carrier frequency and n is a multiple of fundamental frequency. θ 1 is the initial phase of the modulated wave, θ c1 is the initial phase of the carrier wave.
In Eq. (4), the harmonic amplitude coefficient of the output voltage of SM can be written as: where M is the modulation ratio, J n (x) is the n-order Bessel function, m is a multiple of carrier frequency and n is a multiple of fundamental frequency. From the expression of harmonic distribution of u sm , it can be seen that the harmonic of the two-level PWM voltage is mainly distributed in the fundamental frequency band, carrier frequency band, and carrier sideband, which can be written in the form m c ± n 1 .
Where k = 1, 2, …, ∞ and the total voltage of the bridge arm is equal to the sum of the output two-level PWM voltage of N SMs on the bridge arm. The expression of the bridge arm voltage is as follows: In Eq. (7), the amplitude coefficient of the bridge arm voltage harmonic can be written as follows: (ps) 4 1 π π sin ( ) π 2 2 mn n C mN n J mN M mN It can be seen from Eqs. (4) to (7) that when the CPS-PWM method is adopted, the carrier frequency band and its sideband harmonic components whose carrier multiple is a non-N integer times the output voltage by each SM on the same bridge arm are all cancelled by the CPS-PWM technology in the process of waveform superposition.
From the expression of the bridge arm voltage harmonic distribution, the CPS-PWM strategy is equivalent to raising the original two-level PWM carrier frequency c to N c . That is, the equivalent carrier frequency of the final multi-level PWM voltage is c = N c1 . The harmonics are also distributed in the corresponding carrier frequency band and its sideband with this equivalent carrier frequency.
Detection Principle of Open-circuit Fault for SM
Reference [21] proposed a fault detection method for FC converter based on harmonic frequency analysis. In this method, faults are detected by analyzing the amplitude of the output phase voltage of the converter at the switching frequency. In MMC, the amplitude of the switching frequency component should be zero in theory due to the carrier phase shift. According to the carrier phase-shifted harmonic distribution expression in Section III-B, the output voltage of each SM at fs can be expressed by a phasor si V , where the amplitude is si V and the initial phase is Compared with the normal state, the switching frequency f s and its integral multiple harmonic component are no longer zero after the fault. s V will be greater than zero, and the phasor is . This phase is used to locate the faulty SM. Therefore, by detecting the amplitude and phase of the bridge arm voltage at switching frequency, it is possible to detect and locate any faulty SM on the bridge arm.
Threshold
The amplitude of the resulting switching frequency component s V is approximately equal to zero in the steady state. A small component of the switching frequency is always present in the steady state owing to dead times and ripples. In addition, system transient conditions can also cause a significant increase in the switching frequency amplitude. As shown in Fig. 6 to accurately distinguish between normal transient conditions and SM fault conditions, it is important to set a threshold for fault detection.
According to the analysis in Section III-B, the harmonic frequency amplitude coefficient of usm is C mn . When m = 1 and n = 0, it is the switching frequency amplitude coefficient C 1,0 from which Eq. (16) follows: In steady state, si V is mainly affected by modulation ratio M. Considering the influence of dead time and system ripple on the switching frequency component, the threshold needs to be added with a small DC offset to overcome its effect. The final threshold is a variable that varies with the modulation ratio M: is the amplitude of output voltage of the SM at f s in transient state.
Implementation
The implementation scheme of fault detection and location is as follows. According to the harmonic distribution characteristics of CPS-PWM modulation strategy and the time domain sampling theorem, in order to analyze the frequency component at the equivalent switching frequency Nfs, the sampling frequency f m is greater than or equal to twice the equivalent switching frequency, and the DFT algorithm is used to extract the amplitude and phase of the bridge arm voltage at the switching frequency f s : After the bridge arm voltage is collected, the sideband component of f s in the bridge arm voltage is filtered by a bandpass filter with a center frequency of f s . Finally, the amplitude of the output voltage at switching frequency calculated by the DFT algorithm is compared with the fault detection threshold value. If s th , the SM is determined to be faulty. The phase angle extracted by DFT is used to locate the faulty SM. Once the faulty SM is located, the by-pass switch of the SM is closed immediately, and the system enters the fault-tolerant control state. The flowchart of the fault diagnosis strategy implementation scheme is shown in Fig. 7.
SIMULATION EXPERIMENT
A 9-level MMC simulation model was built in a MATLAB/Simulink environment. The simulation parameters are shown in Tab. 3. Fig. 8 shows the spectrum distribution of the output voltage waveform of the SM. It can be seen from Fig. 8 that the higher harmonics of the output voltage waveform of the SM are mainly concentrated in the odd times carrier frequency band and its sideband harmonics, as well as the even times carrier sideband, while the most serious harmonics caused by the modulation strategy are at the carrier frequency fc.
Spectrum Analysis
The spectrum distribution of the bridge arm voltage waveform is shown in Fig. 9. It is obvious that the bridge arm voltage waveform no longer contains carrier f c and its sideband harmonics.
The spectrum distribution of the bridge arm voltage after the fault of the SM is shown in Fig. 10. Compared with Fig. 9, the switching frequency f s and its integral harmonic component are no longer zero after the fault.
Simulation Results
Fig . 11 shows the voltage and current waveform of the bridge arm. When the modulation ratio changes from M = 0.68 to M = 0.98, the voltage of the bridge arm increases. In Fig. 11c, at t = 0.3 s, the modulation ratio has a step change. At this time, s V has a small spike change at the switching frequency. According to Eq. (13), the threshold also adaptively changes stepwise with the modulation ratio, avoiding the misjudgment of the fault detection signal. As shown in Fig. 12, an open-circuit fault occurs on SM2 at t = 0.4 s. in Fig. 12a, the voltage waveform of the bridge arm is distorted and a level is lost. The corresponding load phase current waveform is shown in Fig. 12b. In Fig. 12c, according to the analysis in Section III-C, the output voltage of the fault SM increases at the switching frequency; thus, as s V is no longer zero, it is compared with the threshold value, and the fault detection flag signal quickly detects the occurrence of the fault after a 2.5 ms delay.
In Fig. 13a, the output voltage of the AC side of the SM increases after the fault occurs, which is consistent with the analysis in Section III-A. The waveform after bandpass filtering at the center frequency of fs is shown in Fig. 13b. Fig. 14 is a phasor diagram of switching frequency amplitude and phase angle of the bridge arm extracted with DFT algorithm after faults on SM1 and SM2. It can be clearly seen from the phasor diagram that after the faults occurrence on SM1 to SM8, the phase angles of s V correspond to −45° and −90°, respectively, and after the other SMs fails, the phase angles are −135°, 180°, 135°, 90°, 45° and 0°, respectively. For MMC converters with more SMs per arm, the same method can be used to locate the faulty SMs accurately.
CONCLUSION
In this paper, a fault detection method based on highorder harmonic analysis of the MMC bridge arm voltage was proposed. In the proposed method, the amplitude and phase angle of the bridge arm voltage at switching frequency are obtained by collecting the voltage of the MMC bridge arm, and performing filtering and DFT transformation. Then, by comparing the amplitude of output bridge arm voltage at switching frequency with a threshold value varying with the modulation ratio, the normal transient state and the fault state are effectively distinguished.
Subsequently, according to the characteristics of harmonic distribution of the CPS-PWM method, the phase angle corresponding to the switching frequency component is calculated to locate the faulty SM. Simulation results show that the fault diagnosis method proposed in this paper cannot only detect faults quickly, but can also locate faults accurately, because it takes highfrequency harmonics as the detection object, and can monitor the change of amplitude in a short time (2.5 ms). For the entire MMC converter, only six voltage sensors are needed, which reduces cost and detection complexity, and is easy to implement. This method lays the foundation for the next step of fault-tolerant control. Future research will focus on the diagnosis of specific switching devices on SMs in MMC. | 4,937.8 | 2020-06-14T00:00:00.000 | [
"Engineering"
] |
Green synthesis of FeO nanoparticles from co ff ee and its application for antibacterial, antifungal, and anti-oxidation activity
: This study presents a sustainable method for producing iron oxide nanoparticles (FeO NPs) using aqueous extracts from co ff ee seeds. Characterization through X-ray di ff raction (XRD), scanning electron microscopy, and transmission electron microscopy (TEM) revealed non-spherical NPs ranging from 30 to 50 nm. The XRD analysis con fi rmed that the face-centred cubic structure and the Debye – Scherrer ’ s crystalline size support the FeO particle size con fi rmed from TEM. The synthesized NPs demonstrated signi fi cant antimicrobial activity against Escherichia coli and Staphylococcus aureus , as well as antifungal activity against Aspergillus niger . Additionally, they exhibited potent antioxidant properties, e ff ectively inhibiting DPPH, α -amylase, and α -glucosidase compared to acarbose and co ff ee extract. The fi ndings suggest that these FeO NPs hold promise as antimicrobial, antioxidant, antifungal, and potentially antidiabetic agents.
Introduction
Nanotechnology is the ground-breaking technique that comprises managing molecules at the level of nanoscale.The field has become exciting for modern technology as it makes particles of different sizes, chemical properties, textures, dimensions, and shapes.All these products have different properties and applications.The properties of nanoparticles (NPs) differ from their bulk counterparts as a result of their size.Materials at the nanoscale have been extremely progressive in terms of knowledge and applications [1][2][3].NPs fall into three main categories based on their composition: metallic, ceramic, and polymeric.Metallic NPs find applications in diverse fields like textiles, food, agriculture, health, and cosmetics.Their small size grants them a high surface area-to-volume ratio, significantly influencing their physical and chemical properties [4].This alteration enhances their potential utility across various applications, owing to the improvements in their properties [5].
Iron oxide (FeO) NPs are the simplest and smallest particles of iron and they display high surface area and reactivity.These particles are not toxic in nature and display exceptional stability in terms of dimensions.FeO NPs have great electrical and thermal stabilities and have a good magnetic effect [6].On oxidation of FeO by exposure to air and water, free ions of Fe are produced.FeO NPs can be used for various applications such as drug delivery, separation, dye adsorption, photocatalysis, imaging, etc. [7,8].NPs of FeO have been recognized to play a major role as conducting materials.Due to these unique and attractive properties, a lot of research has been carried out on fabricating FeO NPs.Recently, methods such as sol-gel, chemical precipitation, flow injection, ultrasonic, electro-chemical, and hydrothermal have been developed for the synthesis of FeO NPs [9].The structure and morphology of FeO is very important for predicting the properties of FeO NPs in terms of applications [10].Hence, the designing of NPs with different structures is significant.A lot of research has been dedicated to the synthesis of FeO NPs with different morphologies, structures, and forms such as nano-sheets, nano-rods, and nano-particles.However, these methods are expensive, energy-intensive, toxic, and need extreme conditions for operations.Therefore, biological methods for the synthesis of FeO NPs have been recognized to be fast, stable, environmentally viable, efficient, and cost-effective [11][12][13][14].
Synthesis of NPs in a biological way includes the use of plants and microorganisms such as bacteria, viruses, and fungi as reducing agents.However, due to the ease of handling plants, they have been receiving additional research attention.Green synthesis by use of plant-based sources involves the use of different parts of plants such as roots, stems, leaves, flowers, fruits, and seeds [14][15][16].The NPs synthesized from plants are more stable in comparison to the NPs synthesized from microorganisms.Plants have several organic reducing agents that occur naturally which it simpler to produce NPs [17].The relationship between plants and nanotechnology is referred to as green nanotechnology.There is a symbiotic relationship between plant science and nanotechnology as phytochemicals from plants are utilized for synthesizing NPs [18].
Coffee is ranked as the second product after petroleum that is being traded in the world.Coffee possesses various bioactive constituents such as phenols, flavonoids, steroids, alkaloids, saponins, and polysaccharides.These bioactive elements are termed phytochemicals of plants.Phytochemicals are present in coffee seeds (CSs) and they can be extracted and used as reducing agents as well as capping agents for the synthesis of FeO NPs.They play a critical role in converting the ions of iron to atoms of iron by depicting the building blocks of FeO NPs.Some of the most common micro-organisms that are pathogenic to humans are Escherichia coli (E.coli), Staphylococcus aureus (S. aureus), and Aspergillus niger (A.niger).A few strains of S. aureus are capable of resisting antibiotics like penicillin, vancomycin, methicillin, erythromycin, and tetracycline [19].In this view, the NPs of FeO have shown anti-microbial potential in combating pathogens.Also, FeO NPs were reported to have good activity against several pathogenic micro-organisms such as fungi and bacteria due to their ability to produce reactive oxygen species [20].
In this study, we have reported the synthesis of FeO NPs from the aqueous extract of CSs for the first time and tested their efficiency in inhibiting the growth of microorganisms such as E. coli, S. aureus, and fungi A. niger.The particles were characterized to analyse the structures, morphology, and various other properties to understand their applications in diverse sectors.The biological activity of FeO NPs such as anti-oxidant, anti-microbial, and antifungal activities were carried out.
Materials
High purity ferrous sulphate heptahydrate (FeSO 4 •7H 2 O) was obtained from Rankem private limited, Mumbai, India, while CSs were obtained from standard dealers.Nutrient agar medium (NAM), potato dextrose agar (PDA), and acarbose were procured from Himedia.Triple distilled water was used throughout the reaction.
Preparation of CS extract
The commercially available coffee powder (10 g) was added to 100 mL of distilled water and boiled for 15 min at 70°C.Then, the extract was filtered by Whatman filter paper No. 42 and stored at 4-5°C for further investigation.Further, filter extract was used in the synthesis of FeO NPs.
Preparation of green FeO NPs
Green synthesis of FeO NPs was carried out with the coffee extract (100 mL) by heating to 40-60°C with continuous stirring using a magnetic stirrer.When the temperature of the extract reached 50°C, 150 mM of FeSO 4 •7H 2 O and 1 M NaOH solution were added and left for about 2 h till a brownish-black precipitate appeared [21].Now this solution was cooled at room temperature and centrifuged at 4,000 rpm for 10 min with the help of centrifuge tubes.After centrifugation, it was washed three times with distilled water and once with ethanol.After washing, the pellets were dried at 100°C.Afterward, the collected particles were transferred to a ceramic crucible cup and heated in a furnace at 500°C for 2 h, and ground into powder with a mortar and pestle.The resultant brown powder is stored in an airtight container for characterization.
Microorganisms and culture conditions
Microbial cultures were prepared on potato dextrose agar plates and stored at 4°C, while the stock was grown in the dark at 25°C in PDA for 7 days.A growth medium was prepared for use by mixing 80 g of glucose and fresh potato (500 mL) in 3.5 L of distilled water.Fresh potatoes were prepared by dicing 1 kg of potatoes and boiling them in 2 L of distilled water for 30 min.The medium was thus dispensed into 80 beakers with a capacity of 350 mL (50 mL per cup) and autoclaved at 121°C for 30 min.Inoculate vials with fresh microbial samples grown in PDA medium in Petri dishes for 7 days at 28°C.After 10 days of incubation under normal conditions (25°C), the culture media is filtered through filter paper to separate the filtrate and mycelium.The vaccine was shaken with ethyl acetate at 250 rpm for 20 min at room temperature.The extract is filtered and concentrated under vacuum at 40°C with a field evaporator to give a brown product (2 g).
Preparation of antibacterial assay
The method used to analyse antibacterial activity is a Well diffusion test on NAM [22].This medium is poured aseptically into a Petri dish and left for 1 h to solidify.After that, fresh overnight cultures of E. coli and S. aureus (100 µg•mL −1 ) were exposed to nutrient-rich agar using a vacuum and left in the plate for 15-20 min to absorb all bacteria.The wells were prepared by gel puncture (7-8 mm) under sterile conditions.The FeO NPs sample was placed in the well at different concentrations: 50, 100, and 150 µg•mL −1 .Plates were placed at room temperature for 30 min to allow the extract to disperse, then incubated at 37°C for 24 h to allow microbial growth.Antibody-containing materials inhibit the growth of bacteria after incubation by revealing a clear zone of inhibition (ZOI) around the well.
DPPH radical-scavenging activity
The radical scavenging activity of FeO NPs was measured according to their hydrogen donating ability or radical scavenging ability using stable radical DPPH.A solution of DPPH in ethanol (0.1 mM) was prepared and 1.0 mL of this solution was added to 2.0 mL of FeO NP at different concentrations (20-100 µg•mL −1 ).Thirty minutes later, absorbance was measured at 517 nm.Ascorbic acid was used as a positive control.The low absorbance of the reaction mixture indicates greater free radical scavenging activity [23,24].Free radical scavenging activity is expressed as the percentage inhibition of free radicals by the sample and the formula for the same is where A ini refers to the absorbance of the reference/con- trol sample (without FeO NPs).A obs is the absorbance after the addition of FeO NPs
Alpha-amylase inhibition test
Inhibition testing was done using the DNSA method [25].
The first incubation of the mixture was done at 37°C for 20 min.
After incubation, add 250 µL of 1% starch solution to the above buffer and incubate at 37°C for 15 min.Add 1 mL of dinitrosalicylic acid reagent to quench the reaction and then incubate in a boiling water bath for 10 min.Cool the tube and measure the absorbance at 540 nm.The reference sample contains all other reagents and enzymes except the test sample.Alpha-amylase inhibitory activity is expressed as percentage of inhibition.
Alpha-amylase inhibitory activity was calculated according to the following equation: where A i540 is the absorbance without FeO NPs.A e540 is the absorbance with FeO NPs.
Alpha-glucosidase inhibitory activity
Alpha-glucosidase inhibition was determined according to the standard method [26].The analytical mix contains 150 mL of 0.1 M sodium phosphate buffer (containing 6 mM NaCl, pH 6.9) at a concentration of 20-410 mg mPL, 0. where A i405 is absorbance without FeO NPs.A e405 is absorbance with FeO NPs Green synthesis of FeO NPs from coffee and its application 3
Instrumentation
UV-Vis absorption spectra were obtained by Shimadzu 1900i at a wavelength of 200-800 nm.X-ray diffraction (XRD) spectra were recorded on a Bruker AXS D8 Advanced using Cuα radiation and Si(Li) position sensitive detector with a wavelength of 5,406 Å was used.Anton Paar TTK 450 accessory was added at 170°C-450°C.Features were obtained using scanning electron microscopy (SEM) JEOL Model JSM-6390LV.Resolution: 0.23 nm, lattice: 0.14 nm, 14 nm, 2,000× 1,500,000× magnification.Size and shape of the NPs were investigated using transmission electron microscopy (TEM) JEM 2100 plus, JEOL, Japan equipment.
3 Results and discussion
UV analysis
UV-Vis analysis was performed to confirm FeO synthesis by absorption spectroscopy and understand the optical nature.
The UV-Vis absorption spectrum of FeO NPs is depicted in Figure 1 and the absorption spectra shows an absorption band at 293 nm which corresponds to the biomolecules.The strong and intense band at 293 nm represents the abundance of the biomolecules on the surface of the FeO NPs.The band at 293 nm is noticed to be broad and the broadening of the peak is just due to the presence of FeO NPs which extends beyond 500 nm [27].
Structure and composition of FeO NPs
The XRD pattern of FeO NPs synthesized with coffee extract is presented in Figure 2. The examination revealed diffraction peaks at 32°, 35°, 38°, 55°, and 65°, these peaks indicated the formation of FeO NPs and is in good agreement with the literature (JCPDS 86-2316) [28].The sharp and intense peaks revealed the NPs which were obtained from CSs having crystalline nature and Face centred cubic structure [29].The XRD pattern confirms the formation of FeO NPs.FeO possesses a nonstoichiometric FexO configuration with an x value ranging from 0.83 to 0.96, alongside ordered Fe vacancies.This arrangement exhibits low chemical stability and may degrade into α-Fe.But in this study, the synthesized FeO is found to be stable and not prone to oxidation as suggested in the literature and our observations are in agreement with other similar works [30].Debye-Scherrer formula is one of the most widely used formulas to estimate the crystallite size of NPs [31,32].In this study, the Debye-Scherrer formula was used to estimate the average size of crystals and the average size of crystals were seen to be 36 nm.The interlayer spacing (d) is found to be 0.2732 nm and dislocation density (δ) is calculated to be 0.02739 × 10 −14 lines•m −2 .The strain (ε) is found to be 4.26 × 10 −3 and the peak broadening is due to the addition of the lattice strain [33].
Morphology
The NPs extracted from CSs were analysed by SEM to study the morphology of NPs.The results of SEM and EDAX analysis are presented in Figure 3(a-c).It can be observed that the NPs synthesized had agglomerates and were non-uniform in nature and appearance.The sizes of the particles were 20-50 nm approximately.The agglomerates were due to the buildup building blocks due to activities of reducing and capping agents of the coffee extract due to the magnetic activity [34][35][36].
The analysis of the elemental configuration was carried out by EDAX analysis.The results are presented in Figure 3b.It can be clearly seen that the peaks of Fe were observed at 6-7 keV, also the peaks at 0.5 and 0.7 keV showed the presence of C and O, respectively.These results were similar to the work reported by Sadasivam et al. [37].The existence of carbon is due to the carbon available in the plant extract.The elemental analysis of the FeO NP is seen in Figure 3c.The distribution of Fe, C, and O and their amounts in percentage can be seen in Figure 3c.
The TEM analysis of FeO NPs was performed and the images are presented in Figure 4.It is noticed that the FeO NPs are at the nanoscale level and found to be less than 50 nm in scale as observed from the SEM and XRD analysis.The shapes of the NPs are observed to be non-spherical with irregularities and this might be due to the presence of various biomolecules acting as capping agents.However, the NPs are found to be agglomerated as seen in Figure 4 and this might be due to the interaction of biomolecules of coffee extract acting as building blocks of the FeO NPs.
HPLC analysis of CSs
Reverse phase HPLC was performed for the coffee extract to understand the number of biomolecules present in the Green synthesis of FeO NPs from coffee and its application 5 extract which can help in determining the molecules responsible for reduction and capping of the FeO NPs.The HPLC chromatogram is presented in Figure 5 and it can be seen that there are six peaks and two peaks are major suggesting that these two molecules are present majorly in the extract.Each peak at different retention times represents a type of molecule.These observations conclude that the coffee extract contains various biomolecules and these biomolecules can cap and stabilize the FeO NPs formations.
Antimicrobial activity of FeO NPs
The antibacterial activity of the FeO NPs synthesized using aqueous extract of coffee seeds was evaluated against bacteria E. coli, and S. aureus and it was observed that FeO NPs exhibited a good antimicrobial activity compared to the coffee aqueous extract.The antimicrobial activity is due to the interaction of the NPs onto the cell wall of the bacterial strains.However, the ZOI of standard Streptomycin and Vancomycin were found to be high compared to the FeO NPs suggesting that the FeO NPs are moderate and good microbial agents.
Antifungal activity of FeO NPs
The antifungal activities of the FeO NPs synthesized by green synthesis with different concentrations against fungus A. niger are presented in Figure 6.It can be clearly seen from
Antioxidant activity
Antioxidants have been recognized in their work against oxidative damage and have been associated with a reduced risk of chronic disease.Figure 7 shows the DPPH radical scavenging activity of FeO NPs at concentrations of 20-100 µg•mL −1 compared to standard (acarbose) and coffee bean extract.IC50 values of FeO NPs were higher compared to acarbose acid and coffee bean extract.The results showed that the free radical scavenging of FeO NPs slightly increased with the dosage.This result is consistent with the DPPH activity of FeO NPs reported in the literature [38][39][40].
Inhibition of α-amylase and α-glucosidase by FeO NPs
Carbohydrate-digesting enzymes such as pancreatic α-amylase and intestinal α-glucosidase are responsible for breaking down oligosaccharides and disaccharides into monosaccharides suitable for absorption.Inhibiting two digestive enzymes is particularly useful in the treatment of non-insulin diabetes, as it slows the release of sugar from the blood.As shown in Figures 8 and 9, the results showed that α-amylase and α-glucosidase were significantly affected in a concentration-dependent manner after incubation with different FeO NP concentrations.As the concentration of FeO NPs increased, the level of enzyme activity decreased significantly.It can be seen from Figures 8 and 9 that the IC 50 values for amylase and α-glucosidase of FeO NPs were similar to those obtained in previous reports.According to many in vivo studies, inhibition of α-amylase and α-glucosidase is considered one of the most effective treatments for diabetes.
Conclusion
As NPs exhibit many attractive properties and functions in many applications, the study of NP synthesis method has recently become a major area of interest in science and engineering.Biosynthesis of FeO NPs using green sources is an effective method due to its simplicity, environmental protection, and cost.In this study, FeO NPs were successfully produced by bioreducing ferric chloride solution using CS aqueous extract.This is evidenced by UV-Vis spectroscopic analysis, which shows a broad absorption peak at 293 nm.The XRD, SEM, and TEM investigations propounds that the size of the FeO NPs are between 20 and 50 nm in range with non-spherical shape.The synthesized FeO NPs also exhibited potent antibacterial activity against pathogenic bacteria whose MICs inhibited the growth of Escherichia coli and Staphylococcus aureus.The antioxidant activity of the synthesized FeO was analysed and it was seen that the FeO NPs had excellent inhibiting activity against DPPH, α-amylase, and α-glucosidase in comparison with acarbose and coffee extract.The results conclude that the FeO NPs synthesized via green synthesis using aqueous extract of CSs found to have versatile biological significance and further investigations is required to incorporate the FeO NPs in the pharmaceutical formulations.
Figure 2 :
Figure 2: XRD analysis of FeO NPs synthesized from aqueous extract of CSs.
Figure 4 :
Figure 4: TEM images of FeO NPs prepared with coffee aqueous extract.
Figure 6
Figure 6 that the FeO NPs displayed good antifungal activities against A. niger owing to its size and deposition on the fungus.
Figure 6 :Figure 7 :
Figure 6: Antifungal activity of A. niger in the presence of FeO NPs prepared from CS extract.
1 unit of α-glucosidase, and FeO NPs.Pre-incubate the mix at 37°C for 10 min. | 4,482.8 | 2024-01-01T00:00:00.000 | [
"Environmental Science",
"Chemistry",
"Materials Science",
"Medicine"
] |
Multifunctional light beam control device by stimuli-responsive liquid crystal micro-grating structures
There is an increasing need to control light phase with tailored precision via simple means in both fundamental science and industry. One of the best candidates to achieve this goal are electro-optical materials. In this work, a novel technique to modulate the spatial phase profile of a propagating light beam by means of liquid crystals (LC), electro-optically addressed by indium-tin oxide (ITO) grating microstructures, is proposed and experimentally demonstrated. A planar LC cell is assembled between two perpendicularly placed ITO gratings based on microstructured electrodes. By properly selecting only four voltage sources, we modulate the LC-induced phase profile such that non-diffractive Bessel beams, laser stretching, beam steering, and 2D tunable diffraction gratings are generated. In such a way, the proposed LC-tunable component performs as an all-in-one device with unprecedented characteristics and multiple functionalities. The operation voltages are very low and the aperture is large. Moreover, the device operates with a very simple voltage control scheme and it is lightweight and compact. Apart from the demonstrated functionalities, the proposed technique could open further venues of research in optical phase spatial modulation formats based on electro-optical materials.
Tunable optical components for the dynamic control of light propagation have attracted increasing interest in recent years. One of the most versatile materials employed in optical/photonic tunable devices are liquid crystals (LC), owing to their high intrinsic anisotropy and their strong electro-optic response to the application of voltage control signals 1 . In certain cases, such as free-space optical phase spatial modulators, the low weight, tunable focus, low power consumption and broad range of achievable applications renders LC unique in comparison to other technologies. Nowadays, there is increased research effort to engineer novel structures capable of generating highly performing LC-tunable components with advanced functionalities, among which large-area lenses 2-4 , multi-focal 5 , high fill-factor 6 microlenses, tunable zooming 7 , beam steering 8 , diffraction gratings 9 , aberration correctors 10,11 , tunable optics for astronomical observations 12 , 3D vision applications 13,14 , optical filters 15 , optical switches 16 , micro-axicon arrays 17 , axicons 18,19 , and optical vortices [20][21][22][23] .
In practice, the envisaged applications of LC-tunable phase modulators are at least as numerous as their classic static counterparts, albeit with the advantages previously mentioned. For instance, tunable spherical lenses are very much in demand for applications such as virtual-and augmented-reality displays 24 . Moreover, they can find direct use in cameras, telescopes and optical zooming devices 25,26 . Axicons are a special kind of optical components with a cone-shaped phase profile, which generates a field distribution proportional to the zero-order Bessel function J 0 (non-diffracting Bessel beam). They can be used in large telescopes 27 , laser machining 28,29 , for medical applications 30,31 , or as optical tweezers 32,33 . Powell lenses resemble a round prism with a curved roofline and they shape a laser beam so that it stretches into a uniform line segment. Such functionality is exploitable, e.g., in machine vision applications for the automobile industry and bio-medicine. Beam steerers redirect a laser beam, a key functionality in applications such as free-space optical communications, where non-mechanically controlled devices are greatly preferred in order to avoid failure of moving parts. In that respect, liquid crystals have been demonstrated as an excellent candidate for devices operating even in satellite conditions 34 Fig. 1a, which is based on commercial low-resistivity ITO on glass. Thanks to the micrometric width and orders of magnitude larger length, the line resistance is high, in the range of k -M . Depending on the electrode shape, a voltage distribution profile is generated from one terminal to the other. In the case of linear voltage distribution, the transmission electrode is selected simply as the rectangular stripe shown in Fig. 1a. Due to the low resistivity of the transmission electrode, the voltage distribution is overall governed by Ohm's law and the resulting electric potential drops in a linear fashion between the two terminals. If the desired profile is parabolic, the resistance can be designed to decrease linearly when approaching the center of the device. A way to achieve this is by increasing the width of the transmission electrode towards the center, as depicted in Fig. 1b.
Once the targeted profile is established, the voltage is distributed over the entire active area by evenly arranged perpendicular electrode stubs, as in the configuration of Fig. 1c. Due to fabrication constrains, the electrode width and the gap between adjacent electrodes was selected equal to 10 µm in this study. In order to obtain increased degrees of freedom in terms of spatial modulation of the optical phase profile, two substrates with perpendicular electrode arrangements are used. As a result, four independent voltage signals are available to modulate the optical phase, as shown in Fig. 1d, by tuning their amplitude and phase.
The fabrication of the device is that of a standard LC planar cell with the addition of a photolithographic step. The micro-electrode structure is patterned on a ITO-on-glass substrate, as in Fig. 2a. The commercial ITO electrode employed in this work has a nominal thickness of 26 nm and a sheet resistance of 100 /sq. The average transmittance of the ITO-on-glass substrate is 89% in the visible spectrum. The photolithographic process is the most critical step since the electrode pattern contains lines with different overall size scales. Once the two substrates are patterned, an alignment layer of light-sensitive chemical photoresist is deposited, cured and antiparallel rubbed in order to obtain homogeneous LC alignment. Then, the two substrates are perpendicularly www.nature.com/scientificreports/ arranged, as shown in Fig. 2b and dielectric spacers mixed with optical glue are deposited around the active area. The glue is UV-cured to seal the LC cell, which has a resulting thickness of h = 87 µm. Finally, the cavity is filled at room temperature with the nematic material 6CHBT, characterized by phase transition temperatures: Cr 13 • C N 42.8 • C Iso 41 , density: ρ = 1.01 g/cm 3 (at T = 20 • C) 42 , optical extraordinary and ordinary refractive indices: n e = 1.68 and n o = 1.52 ( n = 0.16) 42 , low-frequency dielectric permittivities: ε ⊥ = 5 and ε � = 12 ( �ε = 7 , at 1 kHz) 42 , viscosity: γ = 21 mPa s at 20 • C 42 , and elastic constants K 11 = 6.71 pN, K 22 = 2.93 pN, and K 33 = 7.38 pN 41 . The device active area has a diameter of 1 cm, hence the aspect ratio length over width of the electrode is 1000 and the electrode resistance is 100 k . Four contacts allow the application of driving voltages at the low AC frequency of 1 kHz, two on the upper substrate and two on the bottom one, with independently controlled amplitude and phase. In what follows all the amplitudes of the voltage signals refer to the AC root mean square (RMS) value V RMS . operating principle. In order to demonstrate the operating principle, we simulated the structure by using the finite-element method (FEM) implemented in the commercial tool COMSOL Multiphysics. For the rectangular voltage transmission electrodes ( Fig. 1), the width of the electrode is W 1 = 10 µm. In the second investigated scenario of Fig. 1b the parameter values are W 1 = 10 µm and W 2 = 60 µm. In both cases the ITO resistivity is R sq = 100 /sq. A cut line along the upper transmission electrode is considered so as to draw the resulting voltage profile. In the general case, the applied voltages have variable amplitudes and fixed phase shifts: Thanks to this configuration, the voltage distribution on the transmission electrode of the upper substrate goes from V 3 = A 3 to V 4 = −A 4 (due to the 180 • phase difference between the signals applied at the two electrodes), crossing at the middle by zero. In the bottom electrode, the voltage goes from V 1 = A 1 to V 2 = −A 2 for the same reason, again crossing at the middle by zero. Finally, the relative phase shift of 90 • between the upper and bottom electrodes, results on different complex voltages at each side of the active area (avoiding cancellation when the amplitudes are equal). Figure 3 shows the voltage profile (absolute value) along the transmission electrode for the two cases of constant and increasing resistance in the case A 3 = A 4 . As expected, the constant (increasing) resistance produces linear (quasi-parabolic) voltage profiles, as shown in Fig. 3a,b, respectively. Other profiles could be possible by modifying accordingly the shape of the electrodes. In all cases, the voltage profile along the transmission electrode is then distributed homogeneously by means of the stub electrodes over the entire surface of the patterned substrate.
When no voltage is applied at the LC device, light polarized along the LC alignment direction experiences the extraordinary LC index n e . Under an applied voltage above the Fréedericksz switching threshold of approximately 1 V, the torque exerted on the positive-�ε LC reorients the average molecular orientation, which is described by the nematic director, parallel to the electric field. In the extreme case of very high voltage, the LC aligns perpendicular to the substrate and the effective refractive index for light propagating through the device tends asymptotically to the ordinary LC index n o . In the variable voltage profile cases here investigated, the LC nematic director (or equivalently the optical axis of the LC anisotropy) is estimated by using a standard Frank-Oseen model 43 . Figure 3c,d show the phase shift that correspond to the voltage profiles calculated in Fig. 3a,b, by adding an offset of 1 V to compensate for the switching threshold and avoid zones with zero modulation, as previously demonstrated in 6 . The phase shift was calculated for the wavelength = 632.8 nm. When the electrodes of the bottom substrate are grounded ( A 1 = A 2 = 0 ), the phase shifts are invariant perpendicular to the axis of the top transmission electrode, thus leading to prism-like profiles in the volume of the device. In the case of constant resistance, the resulting phase profile of Fig. 3c is conical, which is the target profile for operation as a Powell lens. In that of increasing resistance, the phase profile is parabolic, as observed in Fig. 3d, and the device is expected to function as a cylindrical lens. In the voltage biasing scheme A 1 = A 2 = A 3 = A 4 , the profiles acquire axial 3D symmetry and correspond to axicons (or logarithmic axicons for high enough voltage, such as the 2 V case 19 ) for the constant resistance and quasi-parabolic lenses for the increasing resistance electrode. In the latter case, aberrations could be controlled by optimizing the shape of the voltage transmission electrode in order to obtain perfect lenses, although such engineering is beyond the scope of this study. In principle, the gap between adjacent electrodes has to be short in order to avoid discontinuities in the phase profile. This effect is directly related to the device thickness. For high aspect ratios of thickness/gap this effect is vanishing, however for aspect rations close to or less than one the effect can be dominant. In previous numerical studies, we have revealed that the maximum relative deviation occurs at the mid-point of the inter-electrode gap. This value increases exponentially as the aspect ratio of the cell thickness over the gap becomes less than unity 40 . In order to provide an estimate of the effect in the device here investigated, we have calculated the LC profile for the structure shown in Fig. 4a. The thickness of the LC cell is h = 87 µm, the pitch of periodic cell p = 20 µm and the inter-electrode gap w = 10 µm, as previously discussed. A control voltage V 0 is applied on the top electrode, whereas the bottom one is grounded, which corresponds to the configuration A 3 = −A 4 = V 0 and A 1 = A 2 = 0 . Periodic boundary conditions are placed at the x-y and z-y lateral planes. The LC cell is backed by glass. The LC at the LC/glass interfaces are aligned along the z-axis with a pretilt angle of 1 • . Figure 4b shows the tilt angle profile calculated for V 0 = 10 V at the x-y and z-y mid-planes of the LC unit cell. The LC is fully switched in the bigger part of the cell volume, except for the regions in the vicinity of the LC/glass interfaces where they are anchored by the alignment conditions. It is clearly observed that the interelectrode gap induces some degree of inhomogeneity across the gap. To quantify this effect, we calculate the average index along the cell (y-axis) as where n(x, y, z) is the local LC refractive index sensed by z-polarized light, which is given by Figure 4c plots the profiles �n av (x, z) = max{n av (x, z)} − n av (x, z) for two values of applied voltage V 0 = 2 and 10 V. The slight asymmetry observed is due to the pretilt angle that gives the LC nematic director a preferential alignment toward the +z axis. The maximum refractive index modulation is in the order of 5 × 10 −4 and 5 × 10 −3 , respectively. For V 0 = 2 V, this translates to a maximum phase modulation of ∼ 0.14π for z-polarized light propagating along the y-axis, i.e. through the LC cell. This value is but a small fraction of the total phase variation profiles investigated in Fig. 3 and does not significantly affect the performance of the graded-phase components presented in this work. In the case of V 0 = 10 V, the phase modulation is not negligible, particularly when the device operates in the regime of Fig. 4a, namely when a 2D periodic phase modulation profile is formed since the same voltage is applied across the whole surface of the device. In that case, the inter-electrode www.nature.com/scientificreports/ gap for high voltages generates the conditions for a diffraction grating, which will be presented in " 2D tunable diffraction grating".
experimental setup
The first experimental characterization setup is a typical optical setup used in LC experiments, shown in Fig. 5, which is based on the birefringent properties of the LC. When a homogeneous LC cell is placed between crossed polarizers (the sample at 0 • , which coincides with the LC alignment direction, one polarizer at +45 • and other one at −45 • ), the light that passes through areas where the phase shift is multiple of 2π ( π ) it is absorbed by (passes through) the second polarizer, thus producing minimum (maximum) transmittance. The intensity profile of such maxima/minima creates a pattern of interference fringes, which is then post-processed to recover the www.nature.com/scientificreports/ equivalent voltage-dependent spatial phase modulation through the LC device. The implementation consists of a 632.8 nm laser and an × 20 beam expander, which expands the laser beam to a diameter larger than 1 cm so as to capture the interference pattern over the entire active area of the device. The voltage-controlled LC device under test is placed between crossed polarizers. Finally, the focal plane for the transmitted beam is resized by a biconvex lens and captured by a Hamamatsu CCD camera. The second setup is employed to measure the effect of the LC device on the shaping and deflection of the impinging laser beam. For this, a linearly polarized 632.8 nm laser whose polarization is parallel to the LC alignment is used. The laser beam passes through the LC device and its shape and position is captured and a reference test target.
The fabricated device is based on the voltage transmission electrode of Fig. 1a, namely the electrode width W 1 and the gap between the stub electrodes are both equal to 10 µm. As commented before, the obtained optical components are axicons, Powell lenses, beam steerers and 2D tunable diffraction gratings. These devices are demonstrated through both interference patterns and laser intensity measurements and the results are presented in the following.
Results
Axicons. As commented before, the axicon phase profiles are formed when the following voltages are applied to the four terminals of the device: • . An offset signal of 1 V at the low frequency harmonic of 1 Hz was introduced to all four terminals in order to avoid crossing by zero at the central area of the device 6 . The conical shape of the resulting phase profiles is controlled by adjusting the amplitude A, as shown in the interference patterns of Fig. 6.
The corresponding profiles are extracted by processing the interference patterns of Fig. 6. The dark regions correspond to a phase shift multiple of 2π . Due to the axial symmetry the position of these minima can be calculated at any diagonal line of the interference pattern. Here, we used as a reference the horizontal line parallel to the bottom border of the device as in Fig. 6 and the calculated results are shown in Fig. 7. As predicted in the numerical study of "Structure and operating principle", for low voltages the profile corresponds to an axicon, whereas for voltages higher than 1.8 V to that of a logarithmic axicon. This effect is mainly attributed to the quadratic line shape of the effective birefringence close to the threshold and saturation voltages. powell lenses. Powell lenses are used to stretch a laser beam spot along a line segment. The necessary phase profiles to achieve such functionality are conical, yet uniform along one axis, equivalent to a triangular prism. This is obtained by grounding one of the two substrates and letting the two terminals of the opposite substrate www.nature.com/scientificreports/ vary as in the case of the axicon profiles. Since the electrode microstructures of the two substrates are perpendicularly arranged, the device offers the option to rotate the resulting stretched beamline by 90 • . This is demonstrated in the interference patterns of Fig. 8, which are measured for the following terminal amplitude combinations: (Fig. 8a) and (Fig. 8b). The same 1 V offset signal of the previous experiment was used. The laser stretching effect is evident in Fig. 8c, which shows the far-field pattern for the case A 3 = A 4 = 4 V, A 1 = A 2 = 0 , measured with the second setup configuration described in "Results". In this case, the amplitude of 4 V saturates the sides of the active area and closes the central part. As a consequence, the line segment is smaller than the diameter of the active area. This implies that by controlling the applied voltages, both the length and the orientation of the stretched beam line segment can be dynamically adjusted thanks to the versatile driving scheme of the LC device.
Beam steering. In the investigated device, beam steering can be obtained with simpler voltage control without the need for phase shifts among the voltage signals at the four terminals. To achieve steering towards either the vertical or horizontal direction, a single voltage (on top of the 1 V offset) has to be applied at one terminal, whereas the rest have to be grounded. Such configuration generates a voltage gradient along one of the two main axis of the device. Steering towards other directions is also possible by properly selecting the four control voltages.
As an example, Fig. 9a,b shows the interference patterns for horizontal and vertical deviations, which were measured by applying the combinations respectively. No phase shift was applied between the voltage signals. Figure 9c is the phase profile for steering at an intermediate angle, which is achieved for V 1 = 1.5 V, V 2 = 0 V, V 3 = 1 V, V 4 = 2 V. As it can be observed, the resulting phase has a gradient profile. The absolute value of the steering angle can be controlled by adjusting the steepness of the gradient profile, which depends on the amplitudes of the applied voltages. The sign of the angle www.nature.com/scientificreports/ can be switched by inverting the applied voltages. The maximum obtainable phase shift is obtained in the high voltage limit when the LC is fully switched and it is given by which for the investigated device yields �φ max = 44π rad. The steering angle for a total phase shift �φ accumulated over the length x = 1 cm of the voltage transmission electrode is calculated as Therefore, the maximum steering range of the device is from −0.08 • to +0.08 • . Figure 9d-g show the deflected beam spot measured in the far-field for four indicative cases, corresponding to beam steering towards the negative horizontal, positive horizontal, positive vertical, and negative vertical directions, respectively. In this case, the device was found to allow for tuning the steering angle from −0.036 to +0.036 • . This value corresponds approximately to a phase variation �φ = 20π , which is observed in the interference patterns of Fig. 9a,b. 2D tunable diffraction grating. By using a voltage configuration in which only one voltage is applied to one substrate ( V 3 = V 4 = V 0 ), whereas the other is grounded ( V 1 = V 2 = 0 ), a 2D diffraction grating can be obtained when the applied voltage is higher than V 0 = 10 V. As it was demonstrated theoretically on section "Effect of the gap between perpendicular electrodes", when the voltage is higher than 10 V, the phase modulation between electrodes is considerable. A first proof of this can be observed in Fig. 10, which shows the interference patterns for V 0 = 2 and 10 V. In the first case shown in Fig. 10a, the phase shift variation is continuous, whereas an applied voltage V 0 = 10 V produces an periodically arrayed phase modulation (Fig. 10b), which consequently produces a diffraction effect. The amount and efficiency of the resulting diffraction orders depends on the value of the applied voltage.
. www.nature.com/scientificreports/ When the voltage is higher than 10 V, the 2D phase modulation depth suffices to produce the diffraction effect. Higher voltage values increase such phase modulation, thus introducing higher propagating diffraction orders. Figure 11 shows the far-field diffraction patterns for applied voltages of V 0 = 10 , 15, and 20 V. It is clearly observed that for higher voltages more diffraction orders are excited. Therefore, the proposed LC device apart from the beam steering performance discussed in the previous subsection, it can also function as a tunable beam splitter by virtue of the optical diffraction effect.
As far as the switching speed of the device is concerned, in all cases the switch-off time was measured in the order of 3 s, while the switch-on time depended, as expected, on the magnitude of the applied voltage, and it was a few times smaller for the voltage ranges investigated, i.e., up to 2 V. These values are fully in line with the theoretical switching times in a planar LC cell given by 44 which are equal in this case to τ off = 2.3 s and τ on = 0.766 s, for V = 2 V.
conclusions
In this work, a novel technique to control the light phase by stimuli-responsive liquid crystals in combination with ITO grating microstructures, is proposed and experimental demonstrated. The novelty of the proposal resides on two orthogonal gratings based on microstructured transmission electrode with perpendicular stubs and phase shifted voltage control signals applied at its four terminals. The light propagation phase shift between two gratings is manipulated to obtain numerous optical functions. By using only four voltage sources, axicons, Powell lenses, beam steerers and 2D tunable diffraction gratings were experimentally demonstrated. By using other electrode shapes, more functionalities can be envisaged, such as low-aberration or cylindrical lenses. Thus, the proposed electrode scheme implements an all-in-one optical device with unprecedented characteristics, with low operation voltages, large aperture is large, and very simple voltage driving scheme. The proposed technique could open new venues of research in optical phase modulation based on electro-optical materials. | 5,679.8 | 2020-08-14T00:00:00.000 | [
"Physics"
] |
Artificial intelligence centric scientific research on COVID-19: an analysis based on scientometrics data
With the spread of the deadly coronavirus disease throughout the geographies of the globe, expertise from every field has been sought to fight the impact of the virus. The use of Artificial Intelligence (AI), especially, has been the center of attention due to its capability to produce trustworthy results in a reasonable time. As a result, AI centric based research on coronavirus (or COVID-19) has been receiving growing attention from different domains ranging from medicine, virology, and psychiatry etc. We present this comprehensive study that closely monitors the impact of the pandemic on global research activities related exclusively to AI. In this article, we produce highly informative insights pertaining to publications, such as the best articles, research areas, most productive and influential journals, authors, and institutions. Studies are made on top 50 most cited articles to identify the most influential AI subcategories. We also study the outcome of research from different geographic areas while identifying the research collaborations that have had an impact. This study also compares the outcome of research from the different countries around the globe and produces insights on the same.
Introduction
The omnipresence of Artificial Intelligence (AI) within the last decade is magnificent considering its applicability in almost all the real-world contexts. AI is powered by data science and exploratory analysis (DS/DEA), machine learning (ML) and deep learning (DL) approaches, which have shown significant improvements in several domains such as cyber security [47], cancer treatment [10], clean energy [54], financial sector [29], global education [45], etc. Healthcare and medical diagnosis are the other crucial areas where AI has shown potential in analyzing big data sets. Whether it is analytics on patient information to provide accurate predictive analysis or discovering new drugs to cure novel diseases in a timely manner, AI has come a very long way. To visualize the overall picture, Fig. 1 sums up various sectors in the medical domain, where AI has brought revolution and has increased the efficiency of the outcomes. These classifications are just representative to provide the perspective of the effectiveness of AI in these diverse areas.
Such effectiveness of AI can be seen in assisting scientists in tackling the recent pandemic caused by the coronavirus also termed as COVID-19. As of August 15, 2021, a total of 201 million [86] cases of the novel coronavirus disease have been registered globally since the first case of the same was reported from China in December 2019 [82]. This figure of affected cases worldwide is a clear indication of how deadly the novel COVID-19 is. Due to the widespread impact of the virus, research activity related to COVID-19 has been of central importance to many researchers worldwide, in the hope of discovering a new drug to cure the disease, to obtain trends of active or affected cases, or to predict the occurrence of the disease in individuals. Since there was no immediate cure to this virus, the number of fatalities increased gradually worldwide. It was hence necessary to target this devastating pervasion with state-of-the-art AI tools that could help fight the battle with COVID-19. The computational and precise predictions from AI approaches are quite evident, as can be seen from the results in various real-world applications [32,33,62,71,74,36].
Keeping an eye on this boost in AI centric based research on COVID-19, it is crucial to devise a systematic methodology to identify the sources of the most impactful research. With this paper, we aim to construct a bridge between the publications and interested researchers Fig. 1 Applicative areas of AI in healthcare and medical diagnosis who would like to identify the most impactful research content as well as related information. Furthermore, a benchmark of current scientific publications would also be created through such a study, leading to more interest from researchers in various fields belonging to diverse institutions and geographies converging on this area.
Scientometric (also refereed as Bibliometric) analysis provides a systemic review format for studying research publications within a scientific domain. This includes analyses of the citation structure, best papers based on citation count, research areas, highly productive and influential journals, impactful authors, co-citation amongst authors, institutional participation, geographic contribution, bibliographic coupling amongst authors, countries, institutions, as well as that of journals etc. Such analyses have been performed on multiple domains till date, such as a journal specific study of publication and citations structure [57,61,95]. In addition to these, there are studies which provided the results of a scientometric analysis performed to gain insight into research areas such as the study of aggregation operators [11], industry 4.0 revolution [58], brain MRI research [18], multimedia big data [43], etc. Similarly in the medical domain, the scientific outputs were tracked within the topics of the Ebola outbreak [93], global malaria vaccine [35], and Dengue research [96]. A study akin to these was also performed in [3], pertaining to the Zika virus outbreak.
Some researchers in the past studied the research landscape related to COVID-19. In [67], Sahoo and Pandey evaluated the research performance of the overall pandemic due to COVID-19, based on scientometric indicators, where the literature data was obtained from Scopus. Colavizza et al. [25] also recently produced a scientometric overview of the CORD-19 dataset, which is regularly updated with medical related articles from Medline, PubMed, the WHO database, arXiv, bioRxiv and medRxiv. Authors hence presents a detailed literature study based on medical literature and the related dataset. Another such analysis was performed in [37] where in addition to a scientometric study, safety related research dimensions were also presented through a scoping review, which identified the different types of safety issues that attracted research. Similarly, scientometric analyses were also performed in [2,52], focusing on research related to the coronaviruses from 1900 to 2020 and the overall statistics of the global research output respectively. Cunningham et al. [28] produced results from a scientometric analysis with the main aim of studying collaborations in the time of COVID-19. Apart from these, some researchers also considered the impact and hence-forth, the interest of COVID-19 research on specific areas such as endocrinology [7] and ophthalmology [44].
The literature also finds some traces of bibliometric/scientometric studies targeted on AI or related domains. Islam et al. [40] studies the initial bibliometric analysis of COVID-19 on AI. They included only 729 papers with only quantitative analysis of the data gathered from Web of Science (WoS). In a similar study, Wu et al. [89], studied about 1903 articles with the emphasis on visual knowledge map analysis. Abumalloh et al. [1] presented the systematic literature review of the computational approaches of medical image processing specific to the COVID-19. The study by Chicaiza et al. [20] covered the bibliometric analysis of DL related works only for the year 2020. Rodríguez-Rodríguez et al. [66] performed the scientometric analysis of the AI, ML, big data, and Internet of Things (IoT) approaches with applications on the pandemic. In this paper, we have obtained an in-depth knowledge about the current structure of publications related to AI centric research on COVID-19. A bibliometric analysis is presented of the scientific publication data available on the WoS database [24]. WoS is accredited with high-quality publications from high ranked journals and international conferences. The factors such as citation counts, number of publications, and their document types are considered. Furthermore, data related to the highly cited research on AI for COVID-19, along with research areas, journals, institutions, authors, countries etc. are mined in great depth to obtain useful insights. Performing such analysis leads to some interesting results about the research focused on applications of AI in COVID-19.
Collectively, the main purpose of devising this study is to empirically approach a view of how AI backed research related to COVID-19 has been produced to become impactful. Such a study can be helpful in multiple ways. Major contributions of this article have been compiled below: 1. The research landscape related to AI approaches for COVID-19 is extensively studied. 2. This study provides a quantitative and qualitative analysis of the AI centric scientific research on COVID-19. 3. Different parameters of the recent publications are extracted and studied to explore the landscape of this research area. These are: country-wise analysis, institution-wise analysis, source analysis, productive and influential authors, influential AI subcategories etc. 4. A bibliographic coupling of various parameters and keyword analysis is presented for deeper understanding of the targeted research area. 5. This work provides a platform for future research that can directly build upon the existing literature by referring to the current research trend on AI for COVID-19. 6. The research outcome of the produced scientometric analysis explicitly depicts the overall interest of researchers in the topic of utilizing AI in COVID-19 research. 7. Specifically, Table 3 can be of utmost importance for any new researcher trying to foray into the research of COVID-19, by leveraging the information of the most recent as well as trending work within the area.
Rest of the paper is organized as follows: Section 2 provides a background to the COVID-19 outbreak, while giving a bird's eye view of the related AI centric research. In Section 3, we discuss the technology utilized to perform the analysis and visualize the data effectively. In the subsequent section, the methodology is laid out followed by the analysis and knowledge mining. The paper is concluded in Section 5.
Background
Followed by multiple major attacks from viruses such as the severe acute respiratory syndrome coronavirus (SARS-COV), the H1N1 influenza, and the Middle East respiratory syndrome coronavirus (MERS-COV) [15], the world was grappled by the COVID-19 with its first case discovered in late 2019. Growing from this, the virus spread rapidly through geographies. This spread was officially recognized by WHO first [82] on January 21, 2020, followed by declaring it a global health emergency [83]. Finally, through its 40th Situation Report, WHO declared the outbreak as a global pandemic on March 11, 2020 [84,85]. Fast forward to August 2021, confirmed cases of COVID-19 infected millions across various regions of the globe. This increased motivated researchers from various disciplines to come together to study and develop models that would help in curbing the spread of the deadly virus. This motivation was found to be especially true for the AI research community, due to the evident success of AI within diverse fields. While there were 945 papers published in 2020 with citation count of 3,859, the year 2021 had already seen the publication of 1,491 papers with 11,748 citations. With this, it is evident that interest in the AI centric research on COVID-19 will gradually increase as time progresses, and it is likely to grow over time.
Notably, many research works on this topic are already scheduled to be published in the future months.
The study and analysis produced in this paper stands out from all the previous scientometric analyses mentioned in the previous section due to the fact that this work is based on the study of AI-centric research on COVID-19. Apart from this, the scientometric review performed here is extensive in the domain considered, spanning multiple concepts such as trending literature, overall citations, research areas, journals, authors, institutions, countries, keywords and topics, funding agencies, etc. Additionally, an in depth review of the top research articles is performed manually to study the actual use of AI within these works. These articles are then bifurcated into various sub-categories and fields of AI, revealing the interest of researchers, as well as readers in particular contributions of AI. Such a comprehensive study is not visible in any of the existing scientometric analyses presented above.
Technology utilized
To conduct the investigation on publications related to COVID-19, many technical tools were utilized. We briefly discuss them here.
1. Web of Science (WoS): Maintained by Clarivate Analytics, WoS [24] is a website which hosts multiple databases pertaining to diverse academic disciplines. These databases provide comprehensive data about the citation structure as well as other meta-data about the query presented to the database. In this work, WoS has been queried upon to obtain relevant publication information about COVID-19. 2. VOSviewer: It is a software tool [31] utilized for the development of bibliometric networks including multiple entities. These entities can range from being institutions, authors, countries, keywords, journals etc. A comprehensive structure amongst these entities is hence viewed, made possible by VOSviewer. However, there are cases in which the graphs generated by VOSviewer do not have some nodes labeled. Since the overall knowledge gained through such structures is quite good, we ignored the aforementioned drawback and utilized this tool for bibliometric visualizations. 3. Python: The rest of the figures in this paper have been drawn with the help of Python 3.6.
The bar plots, line plots and frequency histograms have been plotted utilizing the Pandas, Matplotlib, and Numpy modules, as a part of the exploratory data analysis performed.
The details of the scientific publications on the applications of AI for COVID-19 were obtained from the WoS database with the query: TS=("covid-19") AND (TS=("machine learning") OR TS=("artificial intelligence") OR ALL=("data science") OR ALL=("data mining")). All the major keywords which lie in the framework of AI were included. We 4 Analysis, results, and discussions
Citation analysis
Out of the obtained 2,434 scientific publications, each was assigned into slabs depending upon the number of citations. A total of eight such slabs of citations were created: "greater than equal to 0", "greater than equal to 10", "greater than equal to 25", "greater than equal to 50", "greater than equal to 100", "greater than equal to 200", "greater than equal to 400", and "greater than equal to 500". On distributing the publications based on their citation counts to these slabs, we see that only one publication obtained more than or equal to 500 citations, which accounts for only 0.04% of the total publications. Furthermore, only one publication so far has garnered less than 500 but more than or equal to 400 citations. The highest percentage share of publications, i.e., 85.54% publications lie in the slab of publications with greater than or equal to 0 citations. This implies that there are 2082 publications out of the total group, which either have citations more than or equal to 1 and less than 10 or remain uncited so far. This implies that due to the recent nature of AI centric publications related to COVID-19, most of the research has not yet garnered citations. This may also be due to the high rate of publications, which makes it difficult for new researchers to cite. The citation structure of all the publications is depicted in Table 1.
Document types as per WoS database
The research publications can be broadly classified into various types such as articles, corrections, data papers, early access articles, editorial materials, letters, news items, articles in proceedings, and reviews. Based on these categories, the obtained records of publications were classified, as shown in Table 2. It is observed that the highest number of publications (2,021) were classified as articles, followed by 257 and 210 publications classified as reviews and early access, respectively. Apart from these, editorial material, letters, meeting abstract, correction, and data papers consist of 95, 23, 23, 13, and 11 publications, respectively. Two publications each were classified as book chapters, news items, and proceedings papers. Corresponding to this analysis, approximately 83% of the total publications lie in the category of articles, while 210 still lie in the early access. This once again suggests that the research interest on COVID-19, focused on AI, is increasing as time escalates. Hence, more publications have been submitted in recent days.
Analysis of top papers on COVID-19
In this section, we analyze the top papers published on COVID-19 with focus on AI. Table 3 depicts the top 50 papers ranked based on total citation (TC) count. The table also depicts the title, authors, journal published in (Source Title), publication month, influential AI subgroups etc. associated with each publication. It can be seen that the top ranked paper, titled "Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal" [90] was published in British Medical Journal in April, 2020. This paper gave an overview of the ML prediction models on the early study of COVID-19 which could help in assisting clinical decisions. Recall from Section 4-A that this is the only paper to have garnered more than 500 citations, accounting for 0.04% of the total citations obtained by all the publications considered together. The article titled "The Impact of COVID-19 Epidemic Declaration on Psychological Consequences: A Study on Active Weibo Users" [49] was published in International Journal of Environmental Research and Public Health (IJERPH) in March 2020 and ranks second in terms of citations in the top 50 papers list globally with 405 citations. It is noticeable that the top ten papers on the basis of TC have been published within the time period March 2020 -August 2020. Another analysis of TC for top 50 papers reveals interesting findings. In Fig. 2, a frequency histogram of publications according to TC is depicted. To obtain this figure, the entire range of TC was divided into 10 bins. Each bar against the bin depicts the number of publications falling in that bin (having obtained corresponding number of citations). Analyzing this frequency histogram demonstrates that most publications in the top 50 list have garnered citations between 60-100. The number of publications decrease as number of citations increase in the bins. Figure 3 is another frequency distribution histogram depicting the relationship of the number of authors with the impact of the publication. In this figure, the X-axis denotes the bins of number of authors, while the bars depict the number of papers in that bin. We decided upon the same 10 bins for this analysis. The highest number of publications in the top 50 list have 2 to 9 authors. The number of publications decreases as the number of authors within each publication increases. It is very interesting to note that there is one paper in the top 50 papers list where the number of authors is 40, which is indicative of an interdisciplinary research work. * Percentages in the last column of the table add up to more than 100. This is because one publication may be classified into more than one publication type
Technical analysis of top articles based on AI centric COVID-19 research
Due to the significant applicative capabilities of AI, it is crucial to dive deep into its impact on the research of COVID-19. In this section, specifics of different fields and types of AI are discussed, with respect to the top-50 research articles provided in Table 3. To begin with the topmost highly cited research article, 'Prediction models for diagnosis and prognosis of COVID-19: Systematic review and critical appraisal', [90] studied published articles and preprints of prediction models of COVID-19, meant for diagnosis, prognosis as well as detection of people in general population with increased risk of infection. In this article, AI, specifically through a text analysis tool, was utilized to prioritize research materials relevant to the study. In the next article from this list, [49], posts from Weibo, a Chinese micro-blogging website were considered to study the psychological impacts of COVID-19 on the former's users. This work used the Online Ecological Recognition (EOR), based on the broad category of ML predictive models. The article titled 'Modified SEIR and AI prediction of the epidemics trend of COVID-19 in China under public health interventions', [92] ranked third highest in terms of citations, developed a simple neural network structure based on long-short term memory (LSTM) which predicted peak of infection in mainland China. Automatic detection of COVID-19 through binary and multi-class classifications was performed in [60]. The article [12], with the fifth-highest citations, employed AI as a tool for primary filtration of gene data through doublet cell detection. Subsequently, a classification deep neural net called the COVNet was proposed in [48], to identify COVID-19 and community-acquired pneumonia infections from no infections using volumetric chest CT scans. The article 'Proteomic and Metabolomic Characterization of COVID-19 Patient Sera' [69] presented an ML-based random forest classifier to identify patients with severe COVID-19 infections, following their protein and metabolite characteristics. With interpretability as a highlighting concern for AI models, authors of [91] developed a discriminative ML model for the classification of the most important biomarkers of patient mortality, concerning the COVID-19 disease. The XGBoost model was considered for this task. In [56], the authors studied CT scans for presented patients using convolutional neural networks to learn specific characteristics. Since these scans presented normal radiological findings in early stages of infection multilayer perceptron (MLP) classifiers were put to use for classification of patients using more clinical information. The article with tenth highest number of citations [77], reviewed research articles to identify most significant seven AI applications pertaining to detection of COVID-19.
In the subsequent data, multiple review articles [4,14,16,23,41,55] studied either the publication landscape, or the impact on COVID-19 responses due to AI and/or other digital technologies such as blockchain, IOT etc. Furthermore, a total of four [6,59,72,75] articles within the top 10-25 articles with highest citations, dealt with the detection of COVID-19 in patients through segmentation and classification on chest X-Ray images. It is interesting to note that most of such models are deep learning models, hence indicating the prowess of this AI trajectory in the detection of COVID-19. To top these off, four [17,38,39,42] articles in the top 10-25 presented data analysis and modelling results and frameworks, few of which dealt with big data analysis, susceptible-exposed-infected-recovered (SEIR) models and mobility networks. Apart from these, only one LSTM based forecasting model for COVID-19 [21] managed to rank 25th in garnering citations.
From the above analysis, it is easy to observe that research on classification models for detecting the presence and/or absence of COVID-19 in patients is quite prominent and also garners the most interest. Amongst these, DL models reign due to their highly accurate results, given a good amount of data. A review of the impact of AI-centric research on COVID-19 is a second favorite amongst researchers to study and present the effect that COVID-19 has had on the world in different ways. Data analysis and various methods of modelling the same have also grasped the interest of researchers to imply essential results through the study of data obtained from multitudes of resources and in diverse modalities.
Research areas of publications
In Table 4, we list the top 50 research areas of publications obtained from WoS. The top 5 research areas include "Computer Science", "Engineering", "Science Technology Other Topics", "Medical Informatics", and "Health Care Sciences Services" with 547, 334, 230, 210, and 201 number of publications in each research area, respectively. It can be seen clearly that only the top 5 research areas have more than 200 publication records, collectively accounting for more than 62% share of total publications (TP). The highest % share, i.e. 22.47% of TP is in the research area "Computer Science", which is approximately 11% more than the % share of the "Engineering" research area. While one might think that research related to COVID-19 focused even on AI is being published only in medical related journals, this is not the case in fact. The list of top 50 research areas indicates that AI centric research on COVID-19 has drawn researchers from diverse fields, viz. science and technology, engineering, chemistry, social science, psychology etc. The minimum number of publications in top 50 is atleast 10.
Analysis of journals
The destination journal of a publication is one of the most important factors while analyzing any publication data. Journals with high TP are the most productive journals, whereas the ones with the highest TC are the most influential journals. Tables 5 and 6, respectively, list the top 20 most productive and most influential journals with AI centric publications related COVID-19.
The "Journal of Medical Internet Research" is the most productive journal with 90 publications, followed by "IEEE Access" with 84 publications. Other journals among the top 5 ranks are "International Journal of Environmental Research and Public Health", "Scientific Reports", and "Computers Materials & Continua" possessing 53, 52 and 44 publications, respectively. The journals in Table 5 consists of a mix of domains from science, medical and engineering. Qualitatively, though the journals may appear belonging to several domains, the scope of each of these journals targets the applicability of AI and ML approaches.
The two journals with highest citations per paper (CPP) in the list are "Computers in Biology and Medicine" (CIBM) and "Chaos Solitons & Fractals" (CSF) with CPP of 30.95 and 22.40. Both journals publish work pertaining to computational biology in common, and thus, they support the publications on COVID-19, which addressed AI, ML, and data analytics. Despite publishing only 19 and 25 papers, these received decent citations of 588 and 560, respectively. Furthermore, the highly cited papers from CIBM targeted the use of DL methods for the intrinsic analysis on COVID-19. These papers were: "Automated detection of COVID-19 cases using deep neural networks with X-ray images", "Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks", and "COVID-19 detection using deep learning models to exploit Social Mimic Optimization and structured chest X-ray images using fuzzy color and stacking approaches" etc. The highly cited papers from CSF targeted time series forecasting of COVID-19 and some thorough review on the Applications of machine learning and artificial intelligence for COVID-19.
In Table 6 the top 20 highly influential journals are listed, which garnered the most attention from other publications. The top two journals are "International Journal of Environmental Research and Public Health" (TC = 765) and "IEEE Access" (TC = 647). CIBM and CSF, both journals with the highest CPP above, rank third and fourth in the list of most influential journals. The top seven journals in this list account for more than 300 citations which, apart from the top four, includes "Radiology", "Journal of Medical Internet Research", and "PLOS One" with TC of 451, 429, and 309, respectively.
The highest CPP of 52.80 is received by "IEEE Transactions on Medical Imaging" which targets the application of AI and ML in the medical image processing. It is followed by "Radiology" and CIBM with CPP of 41 and 30.95, respectively. It is observed that there are many open access journals in both the lists as it was the need of the hour to publish articles as soon as possible and within the reach of every academician and scientist. It also points out the extensive amount of funding invested for such research. The funding details are discussed at length in Section 4.12. Figure 4 depicts the co-citation mapping between the journals that have published AI centric papers related to COVID-19. Since there were 30,911 sources, we considered only those journals which have received at least 100 citations. This resulted in a total of 116 journals. Co-citation analysis for a journal considers the total number of citations and compute how many times a journal paper has been cited in another journal. From Fig. 5, it is observed that "IEEE Access", "Radiology", "Lancet", "PLOS One", and "Nature" are the largest nodes indicating that they have been cited more by the other journals for COVID-19 research related to AI. The clusters distinguished by different colors indicate more citations between them. For example, "Arxiv preprint" has articles which have cited more papers from IEEE access, hence they are in same cluster with strong link strength.
Analysis of authors
Akin to journals, the top authors of publications related to COVID-19 focused on AI are categorized as the top 20 most productive authors in Table 7. The difference in the number of publications among these top 20 authors is not much significant. Al-Turjman tops this list with a TP of 9, followed by Duong and Hassanien with a TP of 8 and 7, respectively. Then there are 8 authors with 6 publications each. Rest of the authors in the list have 5 papers. Interestingly, Acharya has the highest CPP of 92.40 with only 5 papers. On the other hand, Ittamalla with 6 papers has the lowest CPP of 0.50. This is because Acharya's publication has targeted AI and medical informatics which were the centre of attraction among researchers during pandemic. Table 8 shows the list of top 20 most influential authors, i.e., authors ranked based on their TC. First rank is obtained by Xue with a TC of 466 with only 6 papers. Other than Xue, there are three other authors with more than 400 citations. These authors are Acharya (TC = 462), Zhu (TC = 456), and Li (TC = 424). Then there are 10 authors with more than 100 citations. All the authors in this list have published atleast 3 papers. Another very interesting observation that can be made here is that, if we compare Tables 7 and 8, we find that only 8 authors from the most productive list find their place in the most influential authors. Also, the 3 topmost influential authors also fall in the list of top 20 most productive authors, which indicates that their quality work is well received by the research community.
The co-citation amongst authors describes the citing relationship between two authors i.e. the number of times an author is cited by the others. The connections resulting from a cocitation versus authors mapping is presented in Fig. 5. For better visualization, the minimum number of citations of each author has been kept at 50 which among 57,770, resulted in only 125 authors. The node corresponding to the WHO is the largest indicating that the articles published by them are cited the most by all the authors, which is very relevant and justified. Authors in the same color cluster indicate the strong co-citation among them. Li and Wang gathered much attention with respect to the red color clusters. Similarly, Huang is another author from green cluster who has been cited most.
Analysis of institutions
Institutions play a key role in the promotion of scientific research. In the case of AI centric research on COVID-19, the top 20 most productive institutions are given in Table 9. This also makes University of Oxford as the most influential institution with the highest TC of 654, while Huazhong University of Science and Technology has the second highest TC amounting to 650, closely trailed by Medical University of Vienna with TC of 649. This can be viewed from Table 10, where the top 20 most influential institutions for AI centric research on COVID-19 are given. There are total of 12 institutions with more than 500 citations which is a great achievement considering almost 1 year of publication. The highest CPP of 129.80 is attained by Medical University of Vienna, which is remarkable to achieve through just 5 publication within few months. Even the lowest citation count in this list is 366, obtained by Icahn School of Medicine at Mount Sinai, emphasizing that there is a huge number of works that are being published in AI centric research on COVID-19.
Analysis of countries
The first case of COVID-19 was reported in China in December, 2019. Since then, strict isolation measures were put into practice to prevent the spread of this disease. However, reports on people being infected by COVID-19 were soon registered in various countries such as Thailand, Korea, Japan etc. [80]. Worldometer [87] listed 222 countries/regions as being For ease of analysis, these countries were divided into two lists of top 20 most productive and most influential countries, as shown in Tables 11 and 12, respectively. Referring to both these tables, it can be observed that USA, China, United Kingdom, and India retain the top 4 ranks. The order in which these countries are ranked based on their TP is USA, China, India, and United Kingdom with a TP count of 721, 326, 344 and 236, respectively. Notably, second ranked China has less than around 50% of total publications than USA. In terms of TC, USA, USA, England, and India obtain counts of 4750, 4130, 2787 and 1803, respectively. The highest CPP of 47.75% is obtained by Belgium with TC of 955 in just 20 publications. Furthermore, we also observe that Asia as a continent has the most number of productive and influential countries in the top 20 list. Specifically, six countries out of the 20 most productive countries lie in Asia. The second spot is taken up by USA.
Bibliographic coupling amongst various data
Bibliographic coupling is a measure that indicates the possibility of similarity between works done by two entities [53]. It is intuitively defined as the number of times more than one entity cites one common entity. These entities may be institutions, countries, authors etc. We have considered all such different entities and obtained results for each of them. network. The size of the bubbles indicates those authors, publications by whom are the most cited in their cluster. For example, publications by Kuhl have been cited highly when compared to other authors in the green cluster. Similarly, Mahmud (blue cluster), Magazzino (voilet cluster), Ittamalla (yellow cluster) and Al-turjman (red cluster) are the main players in their respective clusters. Each cluster is based on of high frequency citations amongst the authors within that cluster. Furthermore, the curves linking the nodes in the clusters have different width. Higher width of the curves demonstrates more matches in the references of the publications by the two authors.
Bibliographic coupling amongst countries
The bibliographic coupling network amongst the countries depicts the similarities in references between publications by two countries, as shown in Fig. 7. It can be clearly observed that the main players in this network are USA and China (green cluster), Canada and Italy (yellow cluster), India and Saudi Arabia (red cluster), Germany (blue cluster), Australia (violet cluster) etc. Curves connecting nodes corresponding to Switzerland and Spain, USA and Canada, India and Turkey etc. possess the most width indicating the similarity in the references of their works.
Bibliographic coupling amongst institutions
A dense network of bibliographic relationships between institutions worldwide is shown in Fig. 8. Huazhong University of Science and Technology (blue cluster), Harvard Medical School (red cluster), and King Saud University, University of Toronto, and King Abdulaziz University (green cluster) can be identified as the main institutions in each cluster within the network that receive high citations within their respective clusters. The high density of connections indicates that the citation network amongst institutions is extremely diverse.
Bibliographic coupling amongst journals
Various clusters can be observed from the bibliographic coupling network between journals as shown in Fig. 9. Amongst these the "Journal of Medical Internet Research" and "PLOS One" (red cluster), "IEEE Access" (green cluster), "Diagnostics" (blue cluster), "Journal of Intelligent & Fuzzy Systems" (yellow cluster) etc. are the main entities in their respective clusters of the entire network. Journals such as "IEEE Access", "Journal of Medical Internet Research", It is worth noticing that the journals that do not appear in any clusters indicates that these journals, being important journals within their respective clusters, do not share any common references with any other journal in the network. Figure 10 shows the network of co-occurrence of different keywords within a publication, used by various authors. It can be seen that "COVID-19" is the highest occurring keyword in all the publications overall. Width of the curves connecting the nodes in the network demonstrate the frequency of these keywords occurring together within publications. For example, "COVID-19", "big data", and "artificial intelligence" have occurred more frequently together when compared to "COVID-19" and "statistics" or "molecular docking" occurring together. Other important keywords in this network are: "machine learning" (not visible in the figure, due to the software's limitations. It is under the COVID-19 node), "neural network", "feature extraction", "classification", "computed tomography" etc.
Funding agencies
In addition to the aforementioned results, the agencies that have funded most publications worldwide were also identified during our analysis. This study presented the geographical locations where agencies were generously funding to deal with the pandemic using the AI centric approaches. The top 50 funding agencies worldwide were identified and ranked based on number of publication records funded by them. This is shown in
Discussion and conclusion
While writing this paper, the entire world is still under the grip of the deadly COVID-19, strying to recover from its devastating impacts during the last two years. To prevent further spread and to gather more insight into the behavior and effect of this deadly virus, researchers from all over the globe, worked desperately to come up with possible vaccines. Biomedical and computational sciences, along with AI and ML have together made this happen in a decent amount of time. This has resulted in a huge volume of scientific publication data, analyzing which, useful insights can be obtained. In this paper for the first time, we have performed an extensive analysis (both qualitative and quantitative) of the AI centric publication related to COVID-19. This analysis has revealed multiple key findings. These are given below: A. Detailed key findings Within the last two years, the research on COVID-19 with focus on AI has shown a significant growth. Overall, the papers have gathered 15,607 citations in just 2,434 papers. The papers published in April 2020 have obtained the highest citations till the time this paper is being written, with the count of 542. Notably, there are 85.54% publications which have either no or less than 10 citations count. This is naturally due to the continuous flow of work in the literature. It is anticipated that the citation counts of the quality work will increase with time.
Apart from this, the maximum number of the papers that have been published are articles, among which many are open access. This is acceptable in the sense that the review process pertaining to these is quicker, and such research work is available to the research community for free. Analyzing the top 50 papers based on citation reveals that most of these have obtained citations between 60 and 100. This reveals the premature status of the AI centric research on COVID-19. Interestingly, there are two papers in the top 50 list which have been published in January and August of 2021 and have gathered significant citations of 124 and 70, respectively. It was also revealed that classification using DL models garnered the highest interest amongst both researchers and readers.
Amongst the research areas, 'Computer Science' and 'Engineering' remain at the top, having contributed the most to AI centric research to COVID-19. Similarly, the Journal of Medical Internet Research, IEEE Access, and the International Journal of Environmental Research and Public Health are among the top 3 productive journals for research on AI for COVID-19. On the other hand, the International Journal of Environmental Research and Public Health ranks highest in the most influential journal list, while IEEE Access stands at second spot.
Considering authors, Al-Turjman published the most number of papers while, Xue obtained the highest citations for research related to AI for COVID-19. Additionally, Harvard Medical School, Huazhong University of Science and Technology, and Chinese Academy of Sciences are the top institutions publishing on COVID-19 focused on AI. However, the University of Oxford stands at the top of the most influential institutions list.
Country-wise analysis of AI centric research on COVID-19 reveals the top contributors to be USA, China, India, and England. However, Asia is the top continent in terms of both citations and publications. It was also revealed that "COVID-19" is the highest used keyword in all the publications considered. It was coupled the most with keywords "artificial intelligence" and "machine learning". Also, the United States Department of Health Human Services is the topmost funding agency, which sponsored the research for the highest number of publications.
B. General observations
The current publication data available on AI centric research on COVID-19 is quite significant and growing gradually. Now, the world is in the state of rehabilitation from the havoc of the pandemic. Many AI and ML approaches have been implemented to the COVID data for the data analytics and predictions. Our investigation has revealed some very useful insights related to the AI centric publications on COVID-19. Although most of the research on this topic is recent, publications, in general, have been cited significantly as compared with other matured fields. These numbers will increase over time because AI-centric research on COVID-19 and related studies are still growing in numbers.
It is interesting to note how the USA and India, being two of the worst hit countries due to COVID-19 in the pandemic, has produced the highest numbers of publications, when considered country-wise. Even though in terms of continents, Asia tops the list on production of papers while also obtaining the highest number of citations. This is an indicator of the fact that multiple countries within Asia are devoted to conducting AI centric research on COVID-19.
As a future work, we shall include various other indexing platforms such as Scopus, Google scholar and DBLP. Furthermore, we shall extend the work on several factors including institution based bibliometric analysis, country-based analyses, etc. In doing so, we will also target to develop a dynamic platform where we can regularly update the statistics related to the AI centric scientific research on COVID-19.
Funding Open Access funding provided by University of Jyväskylä (JYU).
Data availability Data available on request from the authors.
Code availability Code available on request from the authors.
Conflicts of interest/Competing interests There is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . | 9,846.2 | 2023-03-02T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
The new competitive mechanism of hydrogen bonding interactions and transition process for the hydroxyphenyl imidazo [1, 2-a] pyridine in mixed liquid solution
The new competitive mechanism of intermolecular and intramolecular hydrogen bond can be proposed with an improved mixed model. Upon the photoinduced process, the twisting intramolecular charge transfer (TICT) structure of the hydroxyphenyl imidazo [1, 2-a] pyridine (HPIP) can be obtained. TICT character prompts the fluorescent inactivation via non-radiative decay process. For exploring the photochemical and photophysical properties, the electronic spectra and the infrared (IR) vibrational spectra of titled compounds have been detailedly investigated. In addition, the frontier molecular orbitals (MOs) analysis visually reveals that the unbalanced electron population can give rise to the torsion of molecular structure. To further give an attractive insight into the non-radiative decay process, the potential energy curves have been depicted on the ground state (S0), the first excited state (S1) and the triple excited state (T1). Minimum energy crossing point (MECP) has been found in the S1 and T1 state. On the MECP, the intersystem crossing (ISC) might be dominant channel. The density functional theory (DFT) and the time-dependent density functional theory (TDDFT) methods have been throughout employed in the S0 state, T1 state and S1 state, respectively. The theoretical results are consistent with experiment in mixed and PCM model.
depends on its bond length, the bond angle, the local dielectric constant, the electronegativity of the donor and acceptor groups, temperature, and pressure, etc. 27,28 . The hydrogen bonding interaction and corresponding dynamical behaviour have been interpreted via a variety of photophysical and photochemical phenomena 16,[29][30][31][32][33] . For example, the ESIPT reaction, the intramolecular charge transfer (ICT) and the TICT, etc. 19,28 .
The tautomer structures can be obtained by the ESIPT reaction 28,[34][35][36] . The ultrafast hydrogen bond strengthening can offer the driving forces for the ESIPT process. Upon the photo-induced process, the electron population has an obvious change for the molecule, which converts from the π character to the π* character. The electron density of proton donor group and acceptor group will reduce and increase, respectively 29 . Therefore, the proton transfer process will be facilitated in the S 1 state 11,37 . However, the ESIPT process can be understood as a four-level circular loop model and it is a great important process in the photochemistry, biochemistry and so on refs 38, 39. As discussed by Toshiki Mutai et al. isomer of HPIP originated from ESIPT process had shown an extremely weak fluorescence in the polar solvent tetrahydrofuran (THF). They have drawn a conclusion that the fluorescence yield can be widely enhanced by changing the surroundings from the polar liquid state to the solid state 40 . Actually, the non-radiative decay process plays a prominent role in the biological systems 41,42 . The long lifetime deactivation process of HPIP will occur via the TICT character in the polar solvent, and then the ISC between the S 1 state and the T 1 state might be dominant channel for the non-radiative decay. On the MECP, the molecular structures have almost identical energy in the S 1 and T 1 state 41 . Subsequently, the radiationless decay will jump from the T 1 state to the S 0 state. However, the torsion of HPIP isomer can be prohibited in the solid state surroundings, the nearly co-planar form will emit a drastic fluorescence 40 . In this study, we will carefully investigate the fluorescence quenching mechanism of deactivation process. As Zhao and Liu et al. discussed 33,43 .
In the present work, we utilize a mixed solvent model 44 to investigate the reaction where a THF molecule bounds to imino group of HPIP, the other solvent can be substituted by polarizable continuum model (PCM). The complex can be established by hydrogen bond interaction. As Fileti et al. have discussed that for alcohol-water complexs the stability of complex constitution depends on not only different donor and acceptor molecules, but binding energy of hydrogen bond also is influenced by spatial configurations of complex 45 . Especially, Fileti et al. 46 and Malaspina et al. 47 have accurately provided with the most stable complex of pyridine and water molecule via combining Monte Carlo computer simulation and first-principles quantum mechanical calculations method in an aqueous environment 48 . Therefore, in this study HPIP complexs have been certified as most stable structures when considering different spatial configurations of complex. The TICT character can be clearly revealed when we consider the interaction between THF molecule and imine group of HPIP molecule. As Wang et al. discussed multiple proton transfer via an intermolecular hydrogen-bonded water wire can clearly exhibited the effect of hydrogen bonding dynamics for 3-hydroxypyridine 29 . Otherwise it cannot be primely explained in the conventional PCM model.
Results and Discussion
In this study, we primarily investigate the photophysics and photochemistry properties of HPIP. The HPIP in the THF solvent phase has a dual emitting in the PCM solvation, but the fluorescence of keto form isomer will be totally quenched via the TICT character in the mixed liquid model. However, we are interested in the fluorescence quenching process, and we have speculated that the TICT structure might go through an ISC process from S 1 state to T 1 state on the MECP 41, 49-51 . Then, the nonradiative decay will be dominant in the T 1 → S 0 state as shown in Fig. 1. Chu et al. has introduced the competition mechanism between the intermolecular hydrogen bonding interaction and the ESIPT process 52 . However, in this study a new competitive mechanism of intramolecular and intermolecular hydrogen bond will be investigated in the torsional process.
Geometric structures and spectra analysis. Four stable structures have been found in the S 0 state, the HPIP (a), the keto form of HPIP (k-HPIP) (b), the trans-keto form HPIP (trans-k-HPIP) (c) and the open-ended intramolecular hydrogen bond HPIP (o-HPIP) (d) have been shown in the Fig. 2. It should be noted that the k-HPIP has not occurred torsion. However, Four stable structures in the S 1 state, the cis-HPIP (a), in PCM the k-HPIP (ik-HPIP) (b), the nearly vertical isomer form of HPIP (v-HPIP) (c) and the trans-k-HPIP (d) have been shown in the Fig. 3. The k-HPIP form cannot exist in the S 1 state. After finishing the ESIPT process of HPIP molecule the isomer form has been directly optimized into v-HPIP form. The ik-HPIP and the v-HPIP forms present the rotations to ~33° and ~80° between benzothiazole and phenyl group, respectively. For illustrating the reliability of our computation, the absorption and emission spectra have been calculated in the mixed liquid model. The absorption peak values of the cis-HPIP are located in 331 nm. The emission peak values of the cis-HPIP are located in 377 nm. In addition, the fluorescence peak values of the ik-HPIP are located in 602 nm. The above those peak values have coincided with the computational results of Toshiki Mutai et al., which have been shown in Fig. 4. Herein, we have proved the availability of the computing method. The fluorescence of the tautomer form in the S 1 state exists a Stokes' shift beyond 200 nm, which indicates that the ik-HPIP form has a drastic change compared with cis-HPIP form. However, the stable structures have not been investigated exclusively in the mixed liquid model. In the Fig. 5, we have calculated the absorption and fluorescence spectra of the cis-HPIP (a) and trans-k-HPIP (b). As shown in Fig. 5(a), the absorption peak values of the cis-HPIP are located in 328 nm. The emission peak values of the cis-HPIP are located in 374 nm. The emission peak value of v-HPIP form is nearly nonluminous located in the about 847 nm. As shown in Fig. 5(b), the absorption peak is about 497 nm and the fluorescence peak is about 595 nm for the trans-k-HPIP. Although the trans-k-HPIP and k-HPIP cannot be directly gained in the S 0 state, the two structures can be received in the radiationless decay way from the S 1 state to the S 0 state. The reaction mechanism has been exhibited in the Fig. 1. Subsequently, for above structures we have dissected carefully the changes of crucial bond parameters in the S 0 and S 1 state. As shown in the Tables 1-4, the intriguing bond parameters have been revealed. In Table 1 bond angle δ(O 1 -H 1 -N 2 ) increases from 145.4° to 149.4°. There is a definite conclusion that intramolecular hydrogen bond O 1 -H 1 ···N 2 of the cis-HPIP can be strengthened in the S 1 state. The dihedral angle δ(N 2 -C 3 C 8 -C 9 ) is 4.6° in the S 0 state, but it reduces to 1.0° in the S 1 state. The structure of cis-HPIP is near planar in the S 1 state. The bond parameters of ik-HPIP form has been shown in the Table 2, the bond lengths O 1 ···H 1 , H 1 -N 2 and δ(O 1 -H 1 -N 2 ) of ik-HPIP change from 2.150 Å, 1.013 Å and 118.5° to 1.644 Å, 1.055 Å, and 137.9° in the S 1 → S 0 state, which indicates that the intramolecular hydrogen bonds O 1 ···H 1 -N 2 of ik-HPIP form is weaker in the S 1 state than that in the S 0 state. In addition, we can find that the dihedral angle δ(N 2 -C 3 C 8 -C 9 ) of ik-HPIP form decreases from 33° to 0.01° in the S 1 → S 0 state from this Table. In the Fig. 2(d), the o-HPIP molecular structure has been optimized with mixed liquid model. We find that the o-HPIP form has the torsion of about 52.4° in the The bond parameter (bond length (Å), length angle (°) and dihedral angle (°)) of crucial moiety for the cis-HPIP in the S 0 and S 1 state.
Parameters of bonds
The bond parameter (bond length (Å), length angle (°) and dihedral angle (°)) of crucial moiety for the ik-HPIP in the S 0 and S 1 state.
Parameters of bonds Electronic state trans-k-HPIP
able 3. The bond parameter (bond length (Å), length angle (°) and dihedral angle (°)) of crucial moiety for the trans-k-HPIP in the S 0 and S 1 state. S 0 state, since the intramolecular hydrogen bond has been destroyed via reversing the bond O 1 -H 1 . Therefore, we can conclude that the intramolecular hydrogen bond O 1 -H 1 ···N 2 can preclude the torsion of HPIP structure. In the Table 3, the bond parameters of trans-k-HPIP form have been shown. The dihedral angle δ (N 2 -C 3 C 8 -C 9 ) of trans-k-HPIP compared with that of cis-HPIP has twisted about 180°. In addition, a strong intermolecular hydrogen bond N 2 -H 1 ···O 2 has formed. In the S 0 state, the bond lengths O 1 -H 1 , H 1 -N 2 and O 2 ···H 1 are 4.954 Å, 1.030 Å and 1.811 Å, respectively. The corresponding bond lengths are 4.945 Å, 1.025 Å and 1.853 Å in the S 1 state. Finally, the bond parameters of ik-HPIP and v-HPIP form have been shown in the Table 4, it should be noted that the bond length O 1 -H 1 increases from 2.150 Å to 3.389 Å with torsion of the molecule system, which indicates the torsional behavior has given rise to weakening of the intramolecular hydrogen bond N 2 -H 1 ···O 1 . In addition, the intermolecular hydrogen bond N 2 -H 1 ···O 2 has been found in the v-HPIP form, and the bond length O 2 -H 1 is 1.831 Å. The intermolecular hydrogen bonding interaction can compete with the intramolecular hydrogen bonding interaction of the ik-HPIP form in the mixed liquid model. Yan et al. have concluded that the slightly weaker hydrogen bond allows the competition with other type of interaction 53 . Non-coplanar ik-HPIP form results in the intramolecular hydrogen bond is extremely weak. So when the THF molecule approaches to the imino group (=N-H) of ik-HPIP, the intermolecular hydrogen bond will be strengthen and the intramolecular hydrogen bond will be further weaken, the torsion of HPIP molecule can be facilitated as shown in the Fig. 6. We can propose a viewpoint that the intermolecular hydrogen N 2 -H 1 ···O 2 can give rise to the weakening of intramolecular hydrogen bond N 2 -H 1 ···O 1 . Furthermore, we have found that a nonatomic ring structure has been generated in the v-HPIP form linked by two intermolecular hydrogen bonds, which are the strong hydrogen bond N 2 -H 1 ···O 2 and the weak hydrogen bond C 14 -H 12 ···O 1 . The reduced density gradient (RDG) isosurfaces have been shown in the Fig. 7. Herein, the intensity of hydrogen bond N 2 -H 1 ···O 2 can be visibly compared with that of hydrogen bond C 14 -H 12 ···O 1 .
For guaranteeing these structures are the true most stable, the corresponding IR vibrational frequency has been calculated. Meanwhile, the anharmonic effects have been considered in stretching frequencies and ΔZPE correction by means of multiplying correction factors 0.991 and 0.977 that Truhlar et al. have fitted 54 . The IR vibrational spectra of hydrogen bond have been shown in the Fig. 8. In the Fig. 8(a), the vibrational frequency of O 1 -H 1 is about 3200 cm −1 (the anharmonic frequency is 3171.2 cm −1 ) in the S 0 state and is about 2729 cm −1 (2704.4 cm −1 ) in the S 1 state for cis-HPIP form. The 471 cm −1 (466.8 cm −1 ) red shift indicates that the hydrogen bond O 1 -H 1 ···N 2 is stronger in the S 1 state. In the Fig. 8(b), the vibrational frequency of the N 2 -H 1 is tremendously blue-shift 680 cm −1 (673.8 cm −1 ) from 2926 cm −1 (2899.7 cm −1 ) to 3606 cm −1 (3573.5 cm −1 ) in the S 0 → S 1 state for the ik-HPIP form, which has indicated that the hydrogen bond N 2 -H 1 ···O 1 is stronger in the S 0 state. The IR vibrational spectrum of the trans-k-HPIP has been revealed in the Fig. 8(c). The blue-shift 68 cm −1 (67.4 cm −1 ) from 3302 cm −1 (3272.3 cm −1 ) in the S 0 state to 3370 cm −1 (3339.7 cm −1 ) in the S 1 state has indicated that the intermolecular hydrogen bond N 2 -H 1 ···O 2 is stronger in the S 0 state. Upon anharmonic effects the energy of ΔZPE correction for cis-HPIP between the S 0 and S 1 state is 0.16 eV within the error range. The ESP and the frontier molecular orbitals (MOs) analysis. We guess the rotation is connected with relative displacement of the electron clouds and the nucleus in molecule under the photo-induced. The centre of gravity of positive and negative charges is tipped, which results in the drastic change of dipole moment. In order to prove our conjectures, the ESP values of HPIP have been calculated by the Multiwfn program 55 and the ESP surface has been shown in the Fig. 9. The corresponding maxima and minima have been exhibited on the figure, we can clearly found that the negative and positive electrostatic potential exist a drastic polarization distribution on the surface. Moreover, in the Fig. 10, the distribution ratio of the different ESP regions has been quantificationally calculated. The ESP values ranged from −20 Kcal/mol to 20 Kcal/mol widely distribute on the surface area. Thereinto, the negative ESP values mainly origin from the π-electron cloud of aromatic nucleus, the aromatic nucleus of C-H hydrogens mainly contribute to the positive areas. In the Fig. 11, the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) have been revealed. Upon the photo-induced process, the electron distribution mainly changes from the π character on the HOMO to the π* character on the LUMO. Herein, we only analyze the HOMO and LUMO, because transient excitation mainly stems from the contribution of HOMO → LUMO transition. The transition contribution of ππ* character and corresponding oscillator strength have been listed in the Table 5. In this table, Oscillator strength (f) of ik-HPIP and v-HPIP is 0.1334 and 0.0026, respectively. Their fluorescence compared with the fluorescence of cis-HPIP has been quenched. In the Fig. 11(a), electron redistribution in the PCM is nearly identical to that in the mixed solvation. For further studying the relationship between torsion of molecule structure and electron redistribution, we have obtained non-coplanar HPIP (np-HPIP) form by factitiously twisting the dihedral angle δ(N 2 -C 3 C 8 -C 9 ) in the theoretical calculation, in which the np-HPIP molecule cannot be obtained actually in the S 0 state. The MOs graphs of np-HPIP form have been shown in the Fig. 11(b). Its electron redistribution of the HOMO → LUMO compared with that of cis-HPIP still is not significant change. Therefore, we draw a conclusion that factitious torsion of molecule structure cannot result in the drastic electron redistribution on the np-HPIP. However, the HOMO and LUMO of the ik-HPIP, v-HPIP and trans-k-HPIP have been depicted in the Fig. 11(c,d) and (e). The electron redistribution of these isomeric forms is great different from that of cis-HPIP. It should be noted that electron population of the isomeric forms is unbalanced for HOMO → LUMO. These isomeric forms are zwitterions that the imino group carries positive electric charges and the ketonic oxygen atom carries negative electric charges. ICT character of these zwitterions can lead to the increasing of the dipole moments in the S 1 state, so the corresponding electron population will be unbalanced in the molecule. Moreover, the unbalanced electron population can further result in the torsion of isomeric forms. It is worth noting that TICT character of the v-HPIP is shown in the Fig. 11(d), the electron population almost entirely transfers from the phenyl group to the benzothiazole group in the HOMO → LUMO.
The analysis of potential energy curves and MECP. For further studying the non-radiative decay process, comprehending the optical properties and pathways of electronic transition, the potential energy curves of the HPIP in the S 0 and S 1 state have been carefully investigated. The energy of structure is a function of bond length O 1 -H 1 and dihedral angle δ(N 2 -C 3 C 8 -C 9 ), respectively. The potential energy curves will be drawn with the bond length and dihedral angle increased by the fixed step sizes, respectively. The reactive potential barriers and stable structures have been obtained in the potential curves. Firstly, we study the potential energy curves in the S 0 state. As shown in the Fig. 12, the two potential energy curves are the reactive pathways of (a) intramolecular proton transfer and (b) twisting dihedral angle, respectively. In the Figure the k-HPIP form is a stable structure and it has been used as the original structure in dihedral angle scan. Proton transfer cannot spontaneously occur in the S 0 state, because the process needs to cross a potential barrier 7.81 Kcal/mol in the Fig. 12(a). Similarly, the dihedral angle torsion cannot also spontaneously occur, and the process needs to cross an energy barrier 9.92 Kcal/ mol in the Fig. 12(b). However, the energy instantly reduces about 3.17 Kcal/mol in the torsion process from the Fig. 12(b), because the intermolecular hydrogen bond N 2 -H 1 ···O 2 takes shape in torsion process. Moreover, when the dihedral angle turns to about 180°, the trans-k-HPIP will be a stable structure. Therefore, cis-HPIP, k-HPIP Table 5. The corresponding oscillator strength (f), contribution index (CI) and orbital transition for the different molecular structures in the S 0 → S 1 state. and trans-k-HPIP forms can exist in the S 0 state, but the energy barriers of the proton transfer and structural torsion process are so high that the two processes cannot spontaneously occur in the S 0 state. Secondly, we study the potential energy curves in the S 1 state. In the mixed liquid model the keto form will directly be optimized into the v-HPIP form, so the v-HPIP form has been used as the initial structure in dihedral angle scan. In the Fig. 13 the potential energy curve of twisting dihedral angle has indicated that the v-HPIP form located in about 80° is stable structure, and the stable tran-k-HPIP form located in about 180°. Moreover, the torsion from the v-HPIP to the trans-k-HPIP must get over an energy barrier 3.71 Kcal/mol. The reversed torsion process needs to get over a small energy barrier 0.93 Kcal/mol, which indicates that the trans-k-HPIP can spontaneously revert to v-HPIP form in the S 1 state. The corresponding potential barrier is the critical point of cis-and trans-isomer.
To sum up, we have fully explained the reactive pathways for the HPIP complexes via analyzing the corresponding potential energy curves in the S 0 and S 1 state. However, the non-radiative transition process is still ambiguous, so we have further calculated the potential energy curves of the T 1 state. Because the oscillator strength of v-HPIP we have calculated is 0.0026, fluorescence of the v-HPIP has been totally quenched. We speculate that the v-HPIP structure might undergo a non-radiative transition from the S 1 state to the S 0 state. Therefore, herein searching the MECP has become mainly work for us. As shown in the Fig. 14, the potential energy curves of the torsional dihedral angle in the T 1 state and S 1 state have been exhibited. It should be noted that the MECP locates in about 90° in the figure. On this point, the ISC process might be dominant channel in the S 1 → T 1 process. In the potential energy curves of the T 1 state, the MECP structure is extremely unstable, so this structure will fast slide to the k-HPIP form located in about 0° or will get over a negligible energy barrier 0.33 Kcal/mol and then fast slide to the trans-k-HPIP form located in about 180° along with the potential energy curve of T 1 state. The k-HPIP and trans-k-HPIP existed in the S 0 state could be obtained from the radiationless decay process. As Harvey et al. reported that the reliability of hybrid method has been testified about searching the MECP 56 . Therefore, for prove the reliability of MECP, the sobMECP suite 57 has been used in this study. It should be noted that the point sobMECP has sought out is about 0.34 Kcal/mol higher than MECP we have confirmed, and their geometries are nearly consistent.
In addition, the functions of the potential energy curve versus the dihedral angle δ(N 2 -C 3 C 8 -C 9 ) of o-HPIP and np-HPIP forms have been depicted on the Fig. 15. The negligible energy barrier of o-HPIP is 0.52 Kcal/mol and the energy barrier of reversed twisting is 4.43 Kcal/mol in the Fig. 15(a), it is clearly indicated that twisting of o-HPIP can spontaneously occur in the S 0 state, the energy of o-HPIP form is gradually reduced with the torsional process and the stable structure located in about 180°. On the contrary, in the Fig. 15(b) the energy barrier of np-HPIP is 8.91 Kcal/mol and the energy barrier of reversed twisting is 1.65 Kcal/mol. The stable np-HPIP form locates in about 143.4° and the energy of np-HPIP form is gradually increased with the torsional process. Therefore, the intramolecular hydrogen bond can preclude the torsion of HPIP structure.
Conclusions
For illustrating the reliability of our computation, we have compared the electronic spectra with that of Toshiki Mutai et al. Both of the computed results are tremendous coincidental. The hydrogen bonding strengthening mechanism has been proved via analyzing the bond parameters of hydrogen bond in the S 0 and S 1 state. In addition, the analysis of IR vibrational spectra can further illustrate the above strengthening mechanism. In this study, competitive mechanism that intermolecular hydrogen bond N 2 -H 1 ···O 2 can weaken interaction of intramolecular hydrogen bond N 2 -H 1 ···O 1 has been primely proved. MOs analysis has indicated that the unbalanced electron population of the keto forms can give rise to the torsion of structures. For further studying the model of non-radiative decay process, comprehending the optical properties and pathways of electronic transition, the potential energy curves have been studied. Herein, we draw a conclusion that the k-HPIP and trans-k-HPIP cannot spontaneously occur in the S 0 state resulted from calculating the reactive energy barriers. These two structures may be obtained by the non-radiative decay process from S 1 → T 1 → S 0 state. The MECP has further been sought out via sobMECP program 57 . Moreover, we have demonstrated that the intramolecular hydrogen bond N 2 -H 1 ···O 1 Figure 14. Constructed the potential energy curves of the HPIP: red line: the energies of different structures versus the dihedral angles δ(N 2 -C 3 C 8 -C 9 ) in the S 1 state; blue line: the energies of different structures versus the dihedral angles δ(N 2 -C 3 C 8 -C 9 ) in the T 1 state. The numerical value in the graph stands for the energy barrier of the reaction. The MECP and the corresponding structure have been shown. can preclude the torsion of HPIP form via analyzing the o-HPIP form and np-HPIP form and corresponding potential energy curves.
Computational details. For providing with available information of geometrical configurations, such as the minimum energy, potential energy surface, electron spectra and infrared vibrational spectra, etc. 44 . The density functional theory (DFT) and the time-dependent density functional theory (TDDFT) have been throughout employed in the S 0 state and the S 1 state, respectively [58][59][60] . Both of them have been performed at Gaussian 09 program 61 . In this study, we only select to use Becke's three-parameter hybrid exchange function with the Lee-Yang-Parr gradient-corrected correlation functional (B3LYP) [62][63][64] , the Pople's 6-31 G (d) and 6-31 + G (d) triple-ξ quality basis sets are used in this level computation 65 . Herein, the functional B3LYP had been extensively applied in the past few decades, which indicates that B3LYP method is greatly reliable for calculating theoretically in the S 1 state 15,16,30,31,[66][67][68][69][70][71][72][73][74][75][76][77][78][79][80] . We have investigated the solvent effect based on the integral equation formalism polarizable continuum model (IEF-PCM) [81][82][83] . The molecular electrostatic potential (ESP) has been calculated for predicting the nucleophilic and electrophilic sites of target molecule. The ESP surface has been visually portrayed via the Visual Molecular Dynamics (VMD) software 84 . and we have also applied the RDG function to investigate the weak interaction types in the Multiwfn program 55 . The visual software Chemcraft has been applied to visualize these figures, such as the molecular structures, the MOs and the RDG 85 . The MECPs were sought out via sobMECP, which is a modified version of Harvey's MECP program 56 by Tian Lu. The sobMECP is a wrapper of Harvey's MECP program to simplify the operation of the MECP program 55,86 .
For performing fully optimized in our molecular system, the hybrid functional method has been carried out in computed point energies and geometries and corresponding gradients. The effective gradients f and g have been defined as: where terms E 1 and E 2 are energies on the two potential energy curves, respectively. The ∂ ∂ E q / n are corresponding partial derivatives of relative nuclear coordinates q.
The energy difference E 1 − E 2 can reduce gradually in the vector f direction and E 1 can reduce in vector g direction, which the two vectors f and g are orthogonal. The MECP can be sought out by optimizing with effective force f + g of molecular system. E 1 − E 2 of two states presented differ spin status can be reduced with descending total energy.
As hybrid method will be used to find MECP, the spin-orbit coupling will not be taken consideration. In two 2*2 Hessian matrices H 1 and H 2 electronic coupling matrix elements H 12 = H 21 become zero. Therefore, when the diagonal matrix elements of two matrices are equivalent, the MECP will be a real minimum. However, MECP is not a stable point in 3N-6 dimensions when the hybrid method is taken a consideration. The energy of MECP will be corrected via second-order Taylor expansion for two states: where ∆q is the displacement along touching hyperline, which is perpendicular to different gradient x 1 , the H eff is diagonal matrix elements of effective Hessian. For nonlinear molecular system, the MECP can be testified as indeed minimum point in 3N-7 dimensions. So this program can approximatively research dynamic measures in nonadiabatic surfaces 56 . | 6,750.8 | 2017-05-08T00:00:00.000 | [
"Chemistry"
] |
Modified FGP approach and MATLAB program for solving multi-level linear fractional programming problems
In this paper, we present modified fuzzy goal programming (FGP) approach and generalized MATLAB program for solving multi-level linear fractional programming problems (ML-LFPPs) based on with some major modifications in earlier FGP algorithms. In proposed modi-fied FGP approach, solution preferences by the decision makers at each level are not considered and fuzzy goal for the decision vectors is defined using individual best solutions. The proposed modified algorithm as well as MATLAB program simplifies the earlier algorithm on ML-LFPP by eliminating solution preferences by the decision makers at each level, thereby avoiding difficulties associate with multi-level programming problems and decision deadlock situation. The proposed modified technique is simple, efficient and requires less computational efforts in comparison of earlier FGP techniques. Also, the proposed coding of generalized MATLAB program based on this modified approach for solving ML-LFPPs is the unique programming tool toward dealing with such complex mathematical problems with MATLAB. This software based program is useful and user can directly obtain compromise optimal solution of ML-LFPPs with it. The aim of this paper is to present modified FGP technique and generalized MATLAB program to obtain compromise optimal solution of ML-LFP problems in simple and efficient manner. A comparative analysis is also carried out with numerical example in order to show effi-ciency of proposed modified approach and to demonstrate functionality of MATLAB program.
Introduction
Multi-level programming problem (MLPP) concerns with decentralized programming problems with multiple decision makers (DMs) in multi-level or hierarchical organizations, where decisions have interacted with each other. Multi-level organization or hierarchical organization has the following common characteristics: Interactive decisionmaking units exist within a predominantly hierarchical structure; the execution of decisions is sequential from higher level to lower level; each decision-making unit independently controls a set of decision variables and is interested in maximizing its own objective but is affected by the reaction of lower level DMs. So the decision deadlock arises frequently in the decision-making situations of multi-level organizations.
Numerous methods were suggested by researchers in literature (Anandilingam 1988(Anandilingam , 1991Lai 1996;Pramanik and Roy 2007;Shih et al. 1983;Shih and Lee 2000;Sinha 2003a, b) on MLPPs and also on multi-criteria decisionmaking problems (MCDM) and multi-objective programming problems with their applications like Zoraghi et al. (2013) presented a fuzzy multi-criteria decision making (MCDM) model by integrating both objective and subjective weights for evaluating service quality in hotel industries. Sadjadi et al. (2005) proposed a multi-objective linear fractional inventory model using fuzzy programming. Fattahi et al. (2006) proposed a Pareto approach to solve multi-objective job shop scheduling. Aryanezhad et al. (2011) considered the portfolio selection where fuzziness and randomness appear simultaneously in optimization process. Tohidi and Razavyan (2013) presented necessary and sufficient conditions to have unbounded feasible region and infinite optimal values for objective functions of multi-objective integer linear programming problems. Khalili-Damghani and Taghavifard (2011) proposed a multi-dimensional knapsack model for project capital budgeting problem in uncertain situation through fuzzy sets. Makul et al. (2008) presented the use of multiple objective linear programming approach for generating the common set of weights under the DEA framework. Each method appears to have some advantages as well as disadvantages. So, the issue of choosing a proper method in a given context is still a subject of active research. In context of such hierarchical problems, Fuzzy goal programming (FGP) approach seems to be more appropriate than other methodologies. The FGP introduced by Mohamed (1997) was extended to solve multiobjective linear fractional programming problems in , bi-level programming problems in Moitra and Pal (2002), bi-level quadratic programming problems in , and also extended to solve multi-level programming problems (MLPPs) with single objective function in each level in Pramanik and Roy (2007). In recent years, Aghdaghi and Jolai (2008) presented a goal programming approach and heuristic algorithm to solve vehicle routing problem with backhauls. Babeai et al. (2009) investigated the optimum portfolio for an investor using lexicographic goal programming approach. Ghosh and Roy (2013) formulated weighted goal programming as goal programming with logarithmic deviational variables. Lachhwani and Poonia (2012) proposed FGP approach for multi-level linear fractional programming problem. Lachhwani (2013) presented an alternate algorithm to solve multi-level multi-objective linear programming problems (ML-MOLPPs) which is simpler and requires less computational efforts than that of suggested by Baky (2010). Baky (2010) suggested two new techniques with FGP approach based on solution preferences by the decision maker at each level to solve new type of multi-level multi-objective linear programming (ML-MOLP) problems through the fuzzy goal programming (FGP) approach. Abo- Sinha and Baky (2007) presented interactive balance space approach for solving multi-level multi-objective programming problems. Baky (2009) proposed FGP algorithm for solving decentralized bi-level multi-objective programming (DBL-MOP) problems with a single decision maker at the upper level and multiple decision makers at the lower level. The main disadvantage of the FGP algorithms is that the possibility of rejecting the solution again and again by the upper level DMs and re-evaluation of the problem repeatedly, by redefining the tolerance values on decision variables, needed to reach the satisfactory decision frequently arises. To overcome such computational difficulties, we modified FGP approach for ML-LFPP in which solution preferences by decision maker at each level and sequential order of decision-making process in finding satisfactory solutions are not taken into account of proposed technique and we straightforwardly obtain compromise optimal solution of the problem with higher degree of membership function values. In this paper, we proposed modified FGP approach for multi-level linear fractional programming problem (ML-LFPP) in which solution preferences by decision maker at each level and sequential order of decisionmaking process in finding satisfactory solutions are not taken into account of proposed technique. Using modified technique, we straightforwardly obtain compromise optimal solution of the problem with higher degree of membership function values. This modified approach simplifies the solution procedure and reduces the computational efforts with it. Here, we also present coding of generalized MATLAB program based on proposed modified approach for solving ML-LFPPs which is the unique toward dealing with such complex mathematical problems with MAT-LAB. This software based program is useful and user can directly obtain compromise optimal solution of ML-LFPPs. The aim of this paper is to present modified FGP algorithm and generalized MATLAB program which is simple, efficient and requires less computational efforts for solving multi-level linear fractional programming problems (ML-LFPPs).
The paper is organized in following sections: MLPPs and related literature reviews are presented in introduction section. Formulation of ML-LFPP and related notations are discussed in next Sect. 2. Characterization of membership functions, solution approach based on FGP and formulation of FGP models are discussed in next section. Proposed MATLAB program and its functionality are discussed in Sect. 4. Numerical example on modified FGP technique and its comparison with solution technique suggested by Lachhwani and Poonia (2012) are discussed in numerical example Sect. 5. Concluding remarks are given in the last section. Coding of main function and recurresive simplex function are presented in appendices.
Formulation of ML-LFPPs
We consider a T-level maximization type multi-level linear fractional programming problem (ML-LFPP). Mathematically it can be defined as: Here one DM is located on each level. Decision vector X t ; t ¼ 1; 2; . . .; T is control of tth level DM having N t number of decision variables. Here, it is assumed that the denominator of objective functions is positive at each level for all the values of decision variables in the constraint region.
Modified FGP methodology for ML-LFPP
The proposed modified FGP procedure is based on finding the compromise optimal solution as described by Lachhwani (2013) for multi-level multi-objective linear programming (ML-MOLP) problems. Here, we need to express the definitions related to efficient solution and compromise optimal solution in context of MLPP as: Definition 2 For a problem (1), a compromise optimal solution is an efficient solution selected by the decision maker (DM) as being the best solution where the selection is based on the DM's explicit or implicit criteria. Zeleny (1982) as well as most authors describes the act of finding a compromise optimal solution to problem as ''……an effort or emulate the ideal solution as closely as possible''.
Our FGP model for determining compromise optimal (efficient) solution is based on the finding of the totality or subset of efficient solutions with the DM, then choosing one best solution on some explicit or implicit algorithm.
FGP formulation for ML-LFPP
To formulate the modified FGP models of ML-LFPP, the objective numerator f tN ðXÞ þ a t ; 8t ¼ 1; 2; . . .; T, objective denominator f tD ðXÞ þ a t ; 8t ¼ 1; 2; . . .; T at each level and the decision vector X t ; ðt ¼ 1; 2; ::; T À 1Þ would be transformed into fuzzy goals by assigning an aspiration level to each of them. Then, they are to be characterized by the associated membership functions by defining tolerance limits for the achievement of the aspired levels of the corresponding fuzzy goals. Here, decision vector X t of up to (T-1) levels is transformed into fuzzy goals in order to avoid decision deadlock situations.
Characterization of membership functions
To build membership functions, fuzzy goals and their aspiration levels should be determined first. Using the individual best solution without considering inference of decision variables on lower levels, we find the maximum and minimum values of all the numerator and denominator objective functions at each level solution and construct payoff matrices as: : : : : X T N T ðX T Þ 2 6 6 6 6 6 6 4 3 7 7 7 7 7 7 5 ð2Þ X t N t X 1 N 1 ðX 1 Þ X 2 N 2 ðX 2 Þ : : : : : : : : X T D T ðX T Þ 2 6 6 6 6 6 6 4 and X t D t X 1 D 1 ðX 1 Þ X 2 D 2 ðX 2 Þ : : : : The maximum values of each row N t ðX t Þ and D t ðX t Þ 8t ¼ 1; 2; . . .; T give upper and lower tolerance limit or aspired level of achievement for the membership function of tth level numerator and denominator objective function respectively. Similarly, the minimum values of each row N t ðX t Þ and D t ðX t Þ 8t ¼ 1; 2; . . .; T give lower and upper tolerance limit or lowest acceptable level of achievement for the membership function of tth level numerator and denominator objective function respectively. Hence, linear membership functions for the defined fuzzy goals (as shown in Fig. 1a, b respectively) are: Now, the linear membership functions for the decision vector X T (t ¼ 1; 2; . . .; T À 1Þ (as shown in Fig. 2) are formulated in modified form as: where X t and X t are taken as the values of the corresponding decision vectors at each level which yield the highest and lowest values of the numerator part of objective functions ((N t ðXÞ and N t ðXÞ 8t ¼ 1; 2; . . .; T À 1Þ at each level respectively defined as: Here, it is important to note that for simplicity of proposed technique and in order to avoid decision deadlock situation in the whole solution methodology, the solution preferences by the decision maker in terms of values of decision vector at each level with respect to the values of decision vector at lower levels are not considered. This results that large amount of computational tasks is reduced into limited simple calculation in modified FGP model formulation. Also, linear membership functions are considered because these are more suitable than nonlinear ones in context of complex ML-LFPPs and it further reduces computational difficulties in modified method.
FGP solution approach
In GP approach, decision policy for minimizing the regrets of the DMs for all the levels is taken into consideration. Then each DM should try to maximize his or her membership function by making them as close as possible to unity by minimizing its negative deviational variables. Therefore, in effect, we are simultaneously optimizing all the objective functions. So, for the defined membership functions in (6), (7) and (8), the flexible membership goals having the aspired level unity can be represented as: where d NÀ t ; d DÀ t ; d Nþ t d Dþ t ð ! 0Þ (8t ¼ 1; 2; . . .; T) and d À t ; d þ t ð ! 0Þ 8t ¼ 1; 2; . . .; T À 1 represent the under and over deviational variables respectively from the aspired levels. I is the column vector having all components equal to 1 and its dimension depends on X t . Thus ML-LFP problem (1) changes into: Subject to; . . .; m and X 1 ! 0; X 2 ! 0; . . .; X T ! 0: In this FGP approach, only the sum of under deviational variables is required to be minimized to achieve the aspired level. It may be noted that when a membership goal is fully achieved, negative deviational variable becomes zero and when its achievement is zero, negative deviational variable becomes unity in the solution. Now if the most widely used and simplest version of GP (i.e. minsum GP) is introduced to formulate the model of the problem under consideration, the FGP model formulation becomes: . . .; m and X 1 ! 0; X 2 ! 0; . . .; X T ! 0: where k (in FGP model II) represents the fuzzy achievement function consisting of the weighted under deviational variables and the numerical weights _ w t ; € w t [ 0; ð8t ¼ 1; . . .; TÞ represent the relative importance of achieving the aspired level of the respective fuzzy goals subject to the constraints in the decision-making situation. To assess the relative importance of the fuzzy goals properly, the weighted scheme suggested by Mohamed (1997) can be used to assign the values to _ w t ; € w t [ 0; ð8t ¼ 1; . . .; TÞ. In the present formulation _ w t ; € w t [ 0 can be determined as:
MATLAB Program for ML-LFPPs based on modified FGP approach
Here, we discuss the coding and functionality of generalized MATLAB program for finding the compromise optimal solution of any ML-LFPPs based on proposed modified FGP approach. Using this program, the user needs to input data related to the problem and then user can directly obtain compromise optimal solution of ML-LFPP in single iteration with this program. To run this program, the two files are imported (used for main function and simplex function respectively and as shown in appendices) in the current MATLAB folder as: 1 opt_pro.m 2 simplex_function1.m Then go to the MATLAB command prompt and type opt_pro to execute the program. The main coding of this program is partitioned into following two parts as: (a) Main function (MATLAB coding of main function is shown in appendices) (b) Simplex function (MATLAB coding of simplex function is shown in appendices) The functionality of MATLAB program to obtain optimized values of decision variables and corresponding objective functions can be described in following stepwise algorithm as: Step 1: In first step, main function takes input values from the user and converts them into suitable matrices. Then these matrices are passed to the simplex function as its input arguments in single matrix containing constraints as well as objective functions.
Step 2: For each level, the simplex function is called two times to compute minimized and maximized values of numerator and denominator objective functions.
Step 3: In this step, firstly the simplex function separates the constraints matrix and objective function matrix and then it computes the optimized solution based on the simplex method after some iteration. Then it provides these optimized solutions to the main function as its output argument in a single matrix containing values of decision variables and values of corresponding objective functions.
Step 4: So, in this way, repeating the step 2 and step 3, we get the maximum and minimum values for each objective function.
Step 5: Using these optimized values, the main function takes the decision variables up to (T-1) levels.
Step 6: Using these values of decision variables and values of numerator and denominator objectives, it constructs the type I, type II and type III constraints as defined in (6), (7) and (8).
Step 7: It recognizes all these constraints along with the initial constraints to construct a single constraint matrix.
Step 8: Then it constructs an objective function to minimize the sum of all the D variables (negative deviational variables) which are formulated during type I, II and III constraints.
Step 9: Now it again passes a matrix containing both the constraints as well as objective functions to the simplex function as its input argument.
Step 10: Now the simplex function decodes the input matrix to find the constraint matrix and objective function.
Step 11: Using the above constraints matrix and objective function, it computes the optimal solution using the usual simplex method. This optimized solution is then passed to the main function as its output argument.
Step 12: Now the main function, using these optimized values of main decision variables, computes the corresponding values of objective functions and displays them as output values.
Numerical Example
In this section, we illustrate the same numerical example considered in Lachhwani and Poonia (2012) in order to show efficiency of modified method over earlier technique as well as to demonstrate proposed MATLAB program on numerical example.
Following the procedure, FGP model I and II can be described as: Solving this programming problem, the compromise optimal solution obtained is: k ¼ 1:8596; x 1 ¼ 2:3333;x 2 ¼ 0;x 3 ¼ 0; Note that the satisfactory solutions of the same problem using FGP technique proposed by Lachhwani and Poonia (2012) are: ðx 1 ; x 2 ; x 3 ; x 4 Þ ¼ ð0:4471; 1:169105; 0; 1:2764Þ with ðZ 1 ; Z 2 ; Z 3 Þ ¼ ð3:42738; 1:642437; 0:7515643Þ (For proposed method I) and ðx 1 ; x 2 ; x 3 ; x 4 Þ ¼ Table 1 Comparison of values between modified FGP approach and FGP technique by Lachhwani and Poonia (2012) Lachhwani and Poonia (2012). It is clear that both the approaches are close to one another but the modified methodology is efficient and requires less computations than earlier technique in terms of considering the solution preferences by the decision maker at each level.
Again, If we compare main advantages of proposed modified FGP methodology on different parameters as shown in table (Table 2) considering theoretical aspects of techniques and numerical example, it shows that the proposed modified technique has advantages of simplicity, efficiency, construction of MATLAB program, without decision deadlock situations, less computational efforts etc. than the technique suggested by Lachhwani and Poonia (2012) on each of these parameters. Now, if we use the proposed MATLAB program on this numerical example and input the total no. of variables, total no. of constraints, numerator/denominator objective matrices, decision variables in matrix format for each stage etc. (as shown in Fig. 5). Then we get the compromise optimal solution ðk; x 1 ; x 2 ; x 3 ; x 4 Þ ¼ ð1:8596; 2:3333; 0; 0; 0:3333Þ which is the same as illustrated above with our proposed methodology. This validates our proposed MATLAB program.
Conclusions
This paper presents an improved FGP technique (in terms of achieving higher values of membership functions, simplicity, computational efforts etc.) as well as generalized MATLAB program to obtain compromise optimal solution of ML-LFPPs. The proposed technique is simple, efficient and requires less computational works than that of earlier techniques. Also the proposed MATLAB program is unique and latest for solving these complex mathematical problems. This software based program is useful and user can directly obtain compromise optimal solution of ML-LFPPs with it. However, the main demerit of this MATLAB program is that construction of its coding is difficult and complex which also depends on the complexity of the problem.
Certainly there are many points for future research in the areas of MLPPs, based on modified FGP approach and should be studied. Some of these areas are: (1) The proposed technique can be extended to more complex hierarchical programming problems like multi-level quadratic fractional programming problems (ML-QFPPs), multi-level multi-objective programming problems (ML-MOPPs) etc. and related computer programs can also be constructed in MATLAB or other programming platforms.
(2) Further modifications can be carried out in recent techniques for ML-LFPPs in order to improve efficiency of solution algorithm.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. | 4,888.8 | 2015-03-01T00:00:00.000 | [
"Computer Science"
] |
Automatic segmentation with detection of local segmentation failures in cardiac MRI
Segmentation of cardiac anatomical structures in cardiac magnetic resonance images (CMRI) is a prerequisite for automatic diagnosis and prognosis of cardiovascular diseases. To increase robustness and performance of segmentation methods this study combines automatic segmentation and assessment of segmentation uncertainty in CMRI to detect image regions containing local segmentation failures. Three existing state-of-the-art convolutional neural networks (CNN) were trained to automatically segment cardiac anatomical structures and obtain two measures of predictive uncertainty: entropy and a measure derived by MC-dropout. Thereafter, using the uncertainties another CNN was trained to detect local segmentation failures that potentially need correction by an expert. Finally, manual correction of the detected regions was simulated in the complete set of scans of 100 patients and manually performed in a random subset of scans of 50 patients. Using publicly available CMR scans from the MICCAI 2017 ACDC challenge, the impact of CNN architecture and loss function for segmentation, and the uncertainty measure was investigated. Performance was evaluated using the Dice coefficient, 3D Hausdorff distance and clinical metrics between manual and (corrected) automatic segmentation. The experiments reveal that combining automatic segmentation with manual correction of detected segmentation failures results in improved segmentation and to 10-fold reduction of expert time compared to manual expert segmentation.
Segmentation of cardiac anatomical structures in cardiac magnetic resonance images (CMRI) is a prerequisite for automatic diagnosis and prognosis of cardiovascular diseases. To increase robustness and performance of segmentation methods this study combines automatic segmentation and assessment of segmentation uncertainty in CMRI to detect image regions containing local segmentation failures. Three existing state-of-the-art convolutional neural networks (CNN) were trained to automatically segment cardiac anatomical structures and obtain two measures of predictive uncertainty: entropy and a measure derived by MC-dropout. Thereafter, using the uncertainties another CNN was trained to detect local segmentation failures that potentially need correction by an expert. Finally, manual correction of the detected regions was simulated in the complete set of scans of 100 patients and manually performed in a random subset of scans of 50 patients. Using publicly available CMR scans from the MICCAI 2017 ACDC challenge, the impact of CNN architecture and loss function for segmentation, and the uncertainty measure was investigated. Performance was evaluated using the Dice coefficient, 3D Hausdorff distance and clinical metrics between manual and (corrected) automatic segmentation. The experiments reveal that combining automatic segmentation with manual correction of detected segmentation failures results in improved segmentation and to 10-fold reduction of expert time compared to manual expert segmentation.
To perform diagnosis and prognosis of cardiovascular disease (CVD) medical experts depend on the reliable quantification of cardiac function 1 . Cardiac magnetic resonance imaging (CMRI) is currently considered the reference standard for quantification of ventricular volumes, mass and function 2 . Short-axis CMR imaging, covering the entire left and right ventricle (LV resp. RV) is routinely used to determine quantitative parameters of both ventricle's function. This requires manual or semi-automatic segmentation of corresponding cardiac tissue structures for end-diastole (ED) and end-systole (ES).
Existing semi-automated or automated segmentation methods for CMRIs regularly require (substantial) manual intervention caused by lack of robustness. Manual or semi-automatic segmentation across a complete cardiac cycle, comprising 20 to 40 phases per patient, enables computation of parameters quantifying cardiac motion with potential diagnostic implications but due to the required workload, this is practically infeasible. Consequently, segmentation is often performed at end-diastole and end-systole precluding comprehensive analysis over complete cardiac cycle.
Recently 3,4 , deep learning segmentation methods have shown to outperform traditional approaches such as those exploiting level set, graph-cuts, deformable models, cardiac atlases and statistical models 5,6 . However, recent comparison of a number of automatic methods showed that even the best performing methods generated anatomically implausible segmentations in more than 80% of the CMRIs 7 . Such errors do not occur when experts perform segmentation. To achieve acceptance in clinical practice these shortcomings of the automatic approaches need to be alleviated by further development. This can be achieved by generating more accurate segmentation result or by development of approaches that automatically detect segmentation failures.
In manual and automatic segmentation of short-axis CMRI, largest segmentation inaccuracies are typically located in the most basal and apical slices due to low tissue contrast ratios 8 . To increase segmentation performance, several methods have been proposed [9][10][11][12] . Tan et al. 9 used a convolutional neural network (CNN) to regress anatomical landmarks from long-axis views (orthogonal to short-axis). They exploited the landmarks to determine most basal and apical slices in short-axis views and thereby constraining the automatic segmentation of CMRIs. This resulted in increased robustness and performance. Other approaches leverage spatial 10 or temporal 11,12 information to increase segmentation consistency and performance in particular in the difficult basal and apical slices.
An alternative approach to preventing implausible segmentation results is by incorporating knowledge about the highly constrained shape of the heart. Oktay et al. 13 developed an anatomically constrained neural network (NN) that infers shape constraints using an auto-encoder during segmentation training. Duan et al. 14 developed a deep learning segmentation approach for CMRIs that used atlas propagation to explicitly impose a shape refinement. This was especially beneficial in the presence of image acquisition artifacts. Recently, Painchaud et al. 15 developed a post-processing approach to detect and transform anatomically implausible cardiac segmentations into valid ones by defining cardiac anatomical metrics. Applying their approach to various state-of-the-art segmentation methods the authors showed that the proposed method provides strong anatomical guarantees without hampering segmentation accuracy.
A different research trend focuses on detecting segmentation failures, i.e. on automated quality control for image segmentation. These methods can be divided in those that predict segmentation quality using image at hand or corresponding automatic segmentation result, and those that assess and exploit predictive uncertainties to detect segmentation failure.
Recently, two methods were proposed to detect segmentation failures in large-scale cardiac MR imaging studies to remove these from subsequent analysis 16,17 . Robinson et al. 17 using the approach of Reverse Classification Accuracy (RCA) 18 predicted CMRI segmentation metrics to detect failed segmentations. They achieved good agreement between predicted metrics and visual quality control scores. Alba et al. 16 used statistical, pattern and fractal descriptors in a random forest classifier to directly detect segmentation contour failures without intermediate regression of segmentation accuracy metrics.
Methods for automatic quality control were also developed for other applications in medical image analysis. Frounchi et al. 19 extracted features from the segmentation results of the left ventricle in CT scans. Using the obtained features the authors trained a classifier that is able to discriminate between consistent and inconsistent segmentations. To distinguish between acceptable and non-acceptable segmentations Kohlberger et al. 20 proposed to directly predict multi-organ segmentation accuracy in CT scans using a set of features extracted from the image and corresponding segmentation.
A number of methods aggregate voxel-wise uncertainties into an overall score to identify insufficiently accurate segmentations. For example, Nair et al. 21 computed an overall score for target segmentation structure from voxel-wise predictive uncertainties. The method was tested for detection of Multiple Sclerosis in brain MRI. The authors showed that rejecting segmentations with high uncertainty scores led to increased detection accuracy indicating that correct segmentations contain lower uncertainties than incorrect ones. Similarly, to assess segmentation quality of brain MRIs Jungo et al. 22 aggregated voxel-wise uncertainties into a score per target structure and showed that the computed uncertainty score enabled identification of erroneous segmentations.
Unlike approaches evaluating segmentation directly, several methods use predictive uncertainties to predict segmentation metrics and thereby evaluate segmentation performance 23,24 . For example, Roy et al. 23 aggregated voxel-wise uncertainties into four scores per segmented structure in brain MRI. The authors showed that computed scores can be used to predict the Intersection over Union and hence, to determine segmentation accuracy. Similar idea was presented by DeVries et al. 24 that predicted segmentation accuracy per patient using an auxiliary neural network that leverages the dermoscopic image, automatic segmentation result and obtained uncertainties. The researchers showed that a predicted segmentation accuracy is useful for quality control.
We build on our preliminary work where automatic segmentation of CMR images using a dilated CNN was combined with assessment of two measures of segmentation uncertainties 25 . For the first measure the multiclass entropy per voxel (entropy maps) was computed using the output distribution. For the second measure Bayesian uncertainty maps were acquired using Monte Carlo dropout (MC-dropout) 26 . In 25 we showed that the obtained uncertainties almost entirely cover the regions of incorrect segmentation i.e. that uncertainties are calibrated. In the current work we extend our preliminary research in two ways. First, we assess impact of CNN architecture on the segmentation performance and calibration of uncertainty maps by evaluating three existing state-of-the-art CNNs. Second, we employ an auxiliary CNN (detection network) that processes a cardiac MRI and corresponding spatial uncertainty map (Entropy or Bayesian) to automatically detect segmentation failures. We differentiate errors that may be within the range of inter-observer variability and hence do not necessarily require correction (tolerated errors) from the errors that an expert would not make and hence require correction (segmentation failures). Given that overlap measures do not capture fine details of the segmentation results and preclude us to differentiate two types of segmentation errors, in this work, we define segmentation failure using a metric of boundary distance. In 25 we found that degree of calibration of uncertainty maps is dependent on the loss function used to train the CNN. Nevertheless, in the current work we show that uncalibrated uncertainty maps are useful to detect local segmentation failures. In contrast to previous methods that detect segmentation failure per-patient or per-structure 23,24 , we propose to detect segmentation failures per image region. We expect that inspection and correction of segmentation failures using image regions rather than individual voxels or images would simplify correction process. To show the potential of our approach and demonstrate that combining automatic segmentation with manual correction of the detected segmentation failures per region results in higher segmentation performance we performed two additional experiments. In the first experiment, correction of detected segmentation failures was simulated in the complete data set. In the second experiment, correction www.nature.com/scientificreports/ was performed by an expert in a subset of images. Using publicly available set of CMR scans from MICCAI 2017 ACDC challenge 7 , the performance was evaluated before and after simulating the correction of detected segmentation failures as well as after manual expert correction.
Data
In this study data from the MICCAI 2017 Automated Cardiac Diagnosis Challenge (ACDC) 7 was used. The dataset consists of cardiac cine MR images (CMRIs) from 100 patients uniformly distributed over normal cardiac function and four disease groups: dilated cardiomyopathy, hypertrophic cardiomyopathy, heart failure with infarction, and right ventricular abnormality. Detailed acquisition protocol is described by Bernard et al. 7 . Briefly, short-axis CMRIs were acquired with two MRI scanners of different magnetic strengths (1.5 and 3.0 T). Images were made during breath hold using a conventional steady-state free precession (SSFP) sequence. CMRIs have an in-plane resolution ranging from 1.37 to 1.68 mm (average reconstruction matrix 243 × 217 voxels) with slice spacing varying from 5 to 10 mm. Per patient 28 to 40 volumes are provided covering partially or completely one cardiac cycle. Each volume consists of on average ten slices covering the heart. Expert manual reference segmentations are provided for the LV cavity, RV endocardium and LV myocardium (LVM) for all CMRI slices at ED and ES time frames. To correct for intensity differences among scans, voxel intensities of each volume were scaled to the [0.0, 1.0] range using the minimum and maximum of the volume. Furthermore, to correct for differences in-plane voxel sizes, image slices were resampled to 1.4 × 1.4 mm 2 .
Methods
To investigate uncertainty of the segmentation, anatomical structures in CMR images are segmented using a CNN. To investigate whether the approach generalizes to different segmentation networks, three state-of-the-art CNNs were evaluated. For each segmentation model two measures of predictive uncertainty were obtained per voxel. Thereafter, to detect and correct local segmentation failures an auxiliary CNN (detection network) that analyzes a cardiac MRI was used. Finally, this leads to the uncertainty map allowing detection of image regions that contain segmentation failures. Fig. 1 www.nature.com/scientificreports/ Bayesian dilated CNN (DN). The Bayesian DN architecture comprises a sequence of ten convolutional layers. Layers 1 to 8 serve as feature extraction layers with small convolution kernels of size 3 × 3 voxels. No padding is applied after convolutions. The number of kernels increases from 32 in the first eight layers, to 128 in the final two fully connected classification layers, implemented as 1 × 1 convolutions. The dilation level is successively increased between layers 2 and 7 from 2 to 32 which results in a receptive field for each voxel of 131×131 voxels, or 18.3×18.3 cm 2 . All trainable layers except the final layer use rectified linear activation functions (ReLU). To enhance generalization performance, the model uses batch normalization in layers 2 to 9. In order to convert the original DN 27 into a Bayesian DN dropout is added as the last operation in all but the final layer and 10 percent of a layer's hidden units are randomly switched off.
Bayesian dilated residual network (DRN). The Bayesian DRN is based on the original DRN from Yu et al. 28 for image segmentation. More specifically, the DRN-D-22 28 is used which consists of a feature extraction module with output stride eight followed by a classifier implemented as fully convolutional layer with 1 × 1 convolutions. Output of the classifier is upsampled to full resolution using bilinear interpolation. The convolutional feature extraction module comprises eight levels where the number of kernels increases from 16 in the first level, to 512 in the two final levels. The first convolutional layer in level 1 uses 16 kernels of size 7 × 7 voxels and zero-padding of size 3. The remaining trainable layers use small 3 × 3 voxel kernels and zero-padding of size 1. Level 2 to 4 use a strided convolution of size 2. To further increase the receptive field convolutional layers in level 5, 6 and 7 use a dilation factor of 2, 4 and 2, respectively. Furthermore, levels 3 to 6 consist of two residual blocks. All convolutional layers of the feature extraction module are followed by batch normalization, ReLU function and dropout. Adding dropout and switching off 10 percent of a layer's hidden units converts the original DRN 28 into a Bayesian DRN.
Bayesian U-net (U-net). The standard architecture of the U-net 29 is used. The network is fully convolutional and consists of a contracting, bottleneck and expanding path. The contracting and expanding path each consist of four blocks i.e. resolution levels which are connected by skip connections. The first block of the contracting path contains two convolutional layers using a kernel size of 3 × 3 voxels and zero-padding of size 1. Downsampling of the input is accomplished by employing a max pooling operation in block 2 to 4 of the contracting path and the bottleneck using a convolutional kernel of size 2 × 2 voxels and stride 2. Upsampling is performed by a transposed convolutional layer in block 1 to 4 of the expanding path using the same kernel size and stride as the max pooling layers. Each downsampling and upsampling layer is followed by two convolutional layers using 3 × 3 voxel kernels with zero-padding size 1. The final convolutional layer of the network acts as a classifier and uses 1 × 1 convolutions to reduce the number of output channels to the number of segmentation classes. The number of kernels increases from 64 in the first block of the contracting path to 1024 in the bottleneck. In contrast, the number of kernels in the expanding path successively decreases from 1024 to 64. In deviation to the standard U-net instance normalization is added to all convolutional layers in the contracting path and ReLU non-linearities are replaced by LeakyReLU functions because this was found to slightly improve segmentation performance. In addition, to convert the deterministic model into a Bayesian neural network dropout is added as the last operation in each block of the contracting and expanding path and 10 percent of a layer's hidden units are randomly switched off.
Assessment of predictive uncertainties. To detect failures in segmentation masks generated by CNNs in testing, spatial uncertainty maps of the obtained segmentations are generated. For each voxel in the image two measures of uncertainty are calculated. First, a computationally cheap and straightforward measure of uncertainty is the entropy of softmax probabilities over the four tissue classes which are generated by the segmentation networks. Using these, normalized entropy maps E ∈ [0, 1] H×W (e-map) are computed where H and W denote the height and width of the original CMRI, respectively. Second, by applying MC-dropout in testing, softmax probabilities with a number of samples T per voxel are obtained. As an overall measure of uncertainty the mean standard deviation of softmax probabilities per voxel over all tissue classes C is computed where B(I) (x,y) ∈ [0, 1] denotes the normalized value of the Bayesian uncertainty map (b-map) at position (x, y) in 2D slice I, C is equal to the number of classes, T is the number of samples and p t (I) (x,y,c) denotes the softmax probability at position (x, y) in image I for class c. The predictive mean per class μ (x,y,c) of the samples is computed as follows: In addition, the predictive mean per class is used to determine the tissue class per voxel.
Calibration of uncertainty maps. Ideally, incorrectly segmented voxels as defined by the reference labels should be covered by higher uncertainties than correctly segmented voxels. In such a case the spatial uncertainty maps are perfectly calibrated. Risk-coverage curves introduced by Geifman et al. 30 visualize whether incorrectly www.nature.com/scientificreports/ segmented voxels are covered by higher uncertainties than those that are correctly segmented. Risk-coverage curves convey the effect of avoiding segmentation of voxels above a specific uncertainty value on the reduction of segmentation errors (i.e. risk reduction) while at the same time quantifying the voxels that were omitted from the classification task (i.e. coverage).
To generate risk-coverage curves first, each patient volume is cropped based on a minimal enclosing parallelepiped bounding box that is placed around the reference segmentations to reduce the number of background voxels. Note that this is only performed to simplify the analysis of the risk-coverage curves. Second, voxels of the cropped patient volume are ranked based on their uncertainty value in descending order. Third, to obtain uncertainty threshold values per patient volume the ranked voxels are partitioned into 100 percentiles based on their uncertainty value. Finally, per patient volume each uncertainty threshold is evaluated by computing a coverage and a risk measure. Coverage is the percentage of voxels in a patient volume at ED or ES that is automatically segmented. Voxels in a patient volume above the threshold are discarded from automatic segmentation and would be referred to an expert. The number of incorrectly segmented voxels per patient volume is used as a measure of risk. Using bilinear interpolation risk measures are computed per patient volume between [0, 100] percent.
Detection of segmentation failures.
To detect segmentation failures uncertainty maps are used but direct application of uncertainties is infeasible because many correctly segmented voxels, such as those close to anatomical structure boundaries, have high uncertainty. Hence, an additional patch-based CNN (detection network) is used that takes a cardiac MR image together with the corresponding spatial uncertainty map as input. For each patch of 8 × 8 voxels the network generates a probability indicating whether it contains segmentation failure. In the following, the terms patch and region are used interchangeably.
The detection network is a shallow Residual Network (S-ResNet) 31 consisting of a feature extraction module with output stride eight followed by a classifier indicating the presence of segmentation failure. The first level of the feature extraction module consists of two convolutional layers. The first layer uses 16 kernels of 7 × 7 voxels and zero-padding of size 3 and second layer 32 kernels of 3 × 3 voxels and zero-padding of 1 voxel. Level 2 to 4 each consist of one residual block that contains two convolutional layers with 3 × 3 voxels kernels with zero-padding of size 1. The first convolutional layer of each residual block uses a strided convolution of 2 voxels to downsample the input. All convolutional layers of the feature extraction module are followed by batch normalization and ReLU function. The number of kernels in the feature extraction module increases from 16 in level 1 to 128 in level 4. The network is a 2D patch-level classifier and requires that the size of the two input slices is a multiple of the patch-size. The final classifier consists of three fully convolutional layers, implemented as 1 × 1 convolutions, with 128 feature maps in the first two layers. The final layer has two channels followed by a softmax function which indicates whether the patch contains segmentation failure. Furthermore, to regularize the model dropout layers ( p = 0.5 ) were added between the residual blocks and the fully convolutional layers of the classifier.
Evaluation
Automatic segmentation performance, as well as performance after simulating the correction of detected segmentation failures and after manual expert correction was evaluated. For this, the 3D Dice-coefficient (DC) and 3D Hausdorff distance (HD) between manual and (corrected) automatic segmentation were computed. Furthermore, the following clinical metrics were computed for manual and (corrected) automatic segmentation: left ventricle end-diastolic volume (EDV); left ventricle ejection fraction (EF); right ventricle EDV; right ventricle ejection fraction; and left ventricle myocardial mass. Following Bernard et al. 7 for each of the clinical metrics three performance indices were computed using the measurements based on manual and (corrected) automatic segmentation: Pearson correlation coefficient; mean difference (bias and standard deviation); and mean absolute error (MAE).
To evaluate detection performance of the automatic method precision-recall curves of identification of slices that require correction were computed. A slice is considered positive in case it consists of at least one image region with a segmentation failure. To achieve accurate segmentation in clinic, identification of slices that contain segmentation failures might ease manual correction of automatic segmentations in daily practice. To further evaluate detection performance detection rate of segmentation failures was assessed on a voxel level. More specific, sensitivity against the number of false positive regions was evaluated because manual correction is presumed to be performed at this level.
Finally, after simulation and manual correction of the automatically detected segmentation failures, segmentation was re-evaluated and significance of the difference between the DCs, HDs and clinical metrics was tested with a Mann-Whitney U test.
Experiments
To use stratified four-fold cross-validation the dataset was split into training (75%) and test (25%) set. The splitting was done on a patient level, so there was no overlap in patient data between training and test sets. Furthermore, patients were randomly chosen from each of the five patient groups w.r.t. disease. Each patient has one volume for ED and ES time points, respectively.
Training segmentation networks. DRN and U-net were trained with a patch size of 128 × 128 voxels which is a multiple of their output stride of the contracting path. In the training of the dilated CNN (DN) images with 151 × 151 voxel samples were used. Zero-padding to 281 × 281 was performed to accommodate the 131 × 131 voxel receptive field that is induced by the dilation factors. Training samples were randomly chosen from training set and augmented by 90 degree rotations of the images. All models were initially trained with three loss www.nature.com/scientificreports/ functions: soft-Dice 33 (SD); cross-entropy (CE); and Brier loss 34 . However, for the evaluation of the combined segmentation and detection approach for each model architecture the two best performing loss functions were chosen: soft-Dice for all models; cross-entropy for DRN and U-net and Brier loss for DN. For completeness, we provide the equations for all three used loss functions.
where N denotes the number of voxels in an image, R c is the binary reference image for class c and A c is the probability map for class c.
where N denotes the number of voxels in an image and p denotes the probability for a specific voxel x i with corresponding reference label y i for class c. Choosing Brier loss to train the DN model instead of CE was motivated by our preliminary work which showed that segmentation performance of DN model was best when trained with Brier loss 25 .
All models were trained for 100,000 iterations. DRN and U-net were trained with a learning rate of 0.001 which decayed with a factor of 0.1 after every 25,000 steps. Training DN used the snapshot ensemble technique 35 , where after every 10,000 iterations the learning rate was reset to its original value of 0.02.
All three segmentation networks were trained using mini-batch stochastic gradient descent using a batch size of 16. Network parameters were optimized using the Adam optimizer 36 . Furthermore, models were regularized with weight decay to increase generalization performance.
Training detection network.
To train the detection model a subset of the errors performed by the segmentation model is used. Segmentation errors that presumably are within the range of inter-observer variability and therefore do not inevitably require correction (tolerated errors) are excluded from the set of errors that need to be detected and corrected (segmentation failures). To distinguish between tolerated errors and the set of segmentation failures S I the Euclidean distance of an incorrectly segmented voxel to the boundary of the reference target structure is used. For each anatomical structure a 2D distance transform map is computed that provides for each voxel the distance to the anatomical structure boundary. To differentiate between tolerated errors and the set of segmentation failures S I an acceptable tolerance threshold is applied. A more rigorous threshold is used for errors located inside compared to outside of the anatomical structure because automatic segmentation methods have a tendency to undersegment cardiac structures in CMRI. Hence, in all experiments the acceptable tolerance threshold was set to three voxels (equivalent to on average 4.65 mm ) and two voxels (equivalent to on average 3.1 mm ) for segmentation errors located outside and inside the target structure. Furthermore, a segmentation error only belongs to S I if it is part of a 2D 4-connected cluster of minimum size 10 voxels. This value was found in preliminary experiments by evaluating values {1, 5, 10, 15, 20} . However, for apical slices all segmentation errors are included in S I regardless of fulfilling the minimum size requirement because in these slices anatomical structures are relatively small and manual segmentation is prone to large inter-observer variability 7 . Finally, segmentation errors located in slices above the base or below the apex are always included in the set of segmentation failures.
Using the set S I a binary label t j is assigned to each patch P The detection network is trained by minimizing a weighted binary cross-entropy loss: where w pos represents a scalar weight, t j denotes the binary reference label and p j is the softmax probability indicating whether a particular image region P (I) j contains at least one segmentation failure. The average percentage of regions in a patient volume containing segmentation failures ranges from 1.5 to 3 percent depending on the segmentation architecture and loss function used to train the segmentation model. To train a detection network w pos was set to the ratio between the average percentage of negative samples divided by the average percentage of positive samples.
Each fold was trained using spatial uncertainty maps and automatic segmentation masks generated while training the segmentation networks. Hence, there was no overlap in patient data between training and test set across segmentation and detection tasks. In total 12 detection models were trained and evaluated resulting from the different combination of 3 model architectures (DRN, DN and U-net), 2 loss functions (DRN and U-net with CE and soft-Dice, DN with Brier and soft-Dice) and 2 uncertainty maps (e-maps, b-maps).
The patches used to train the network were selected randomly ( 2 3 ), or were forced ( 1 3 ) to contain at least one segmentation failure by randomly selecting a scan containing segmentation failure, followed by random sampling of a patch containing at least one segmentation failure. During training the patch size was fixed to 80 × t ic log p(y i = c|x i ), where t ic = 1 if y i = c, and 0 otherwise.
where t ic = 1 if y i = c, and 0 otherwise. www.nature.com/scientificreports/ 80 voxels. To reduce the number of background voxels during testing, inputs were cropped based on a minimal enclosing, rectangular bounding box that was placed around the automatic segmentation mask. Inputs always had a minimum size of 80 × 80 voxels or were forced to a multiple of the output grid spacing of eight voxels in both direction required by the patch-based detection network. The patches of size 8 × 8 voxels did not overlap. In cases where the automatic segmentation mask only contains background voxels (scans above the base or below apex of the heart) input scans were center-cropped to a size of 80 × 80 voxels. Models were trained for 20,000 iterations using mini-batch stochastic gradient descent with batch-size 32 and Adam as optimizer 36 . Learning rate was set to 0.0001 and decayed with a factor of 0.1 after 10.000 steps. Furthermore, dropout percentage was set to 0.5 and weight decay was applied to increase generalization performance. Segmentation using correction of the detected segmentation failures. To investigate whether correction of detected segmentation failures increases segmentation performance two scenarios were performed. In the first scenario manual correction of the detected failures by an expert was simulated for all images at ED and ES time points of the ACDC dataset. For this purpose, in image regions that were detected to contain segmentation failure predicted labels were replaced with reference labels. In the second scenario manual correction of the detected failures was performed by an expert in a random subset of 50 patients of the ACDC dataset. The expert was shown CMRI slices for ED and ES time points together with corresponding automatic segmentation masks for the RV, LV and LV myocardium. Image regions detected to contain segmentation failures were indicated in slices and the expert was only allowed to change the automatic segmentations in these indicated regions. Annotation was performed following the protocol described in 7 . Furthermore, expert was able to navigate through all CMRI slices of the corresponding ED and ES volumes.
Results
In this section we first present results for the segmentation-only task followed by description of the combined segmentation and detection results.
Segmentation-only approach. Table 1 lists quantitative results for segmentation-only and combined segmentation and detection approach in terms of Dice coefficient and Hausdorff distance. These results show that DRN and U-net achieve similar Dice coefficients and outperformed the DN network for all anatomical structures at end-systole. Differences in the achieved Hausdorff distances among the methods are present for all anatomical structures and for both time points. The DRN model achieved the highest and the DN network the lowest Hausdorff distance. Table 2 lists results of the evaluation in terms of clinical metrics. These results reveal noticeable differences between models for ejection fraction (EF) of left and right ventricle, respectively. We can observe that U-net trained with the soft-Dice and the Dilated Network (DN) trained with Brier or soft-Dice loss achieved considerable lower accuracy for LV and RV ejection fraction compared to DRN. Overall, the DRN model achieved highest performance for all clinical metrics.
Effect of model architecture on segmentation. Although quantitative differences between models are small, qualitative evaluation discloses that automatic segmentations differ substantially between the models. Figure 2 shows that especially in regions where the models perform poorly (apical and basal slices) the DN model more often produced anatomically implausible segmentations compared to the DRN and U-net. This seems to be correlated with the performance differences in Hausdorff distance.
Effect of loss function on segmentation.
The results indicate that the choice of loss function only slightly affects the segmentation performance. DRN and U-net perform marginally better when trained with soft-Dice compared to cross-entropy whereas DN performs better when trained with Brier loss than with soft-Dice. For DN this is most pronounced for the RV at ES.
A considerable effect of the loss function on the accuracy of the LV and RV ejection fraction can be observed for the U-net model. On both metrics U-net achieved the lowest accuracy of all models when trained with the soft-Dice loss.
Effect of MC dropout on segmentation. The results show that enabling MC-dropout during testing seems to result in slightly improved HD while it does not affect DC. Fig. 3a shows average voxel detection rate as a function of false positively detected regions. This was done for each combination of model architecture and loss function exploiting e- (Fig. 3a, left) or b-maps (Fig. 3a, right). These results show that detection performance of segmentation failures depends on segmentation model architecture, loss function and uncertainty map.
Detection of segmentation failures. Detection of segmentation failures on voxel level. To evaluate detection performance of segmentation failures on voxel level
The influence of (segmentation) model architecture and loss function on detection performance is slightly stronger when e-maps were used as input for the detection task compared to b-maps. Detection rates are consistently lower when segmentation failures originate from segmentation models trained with soft-Dice loss compared to models trained with CE or Brier loss. Overall, detection rates are higher when b-maps were exploited for the detection task compared to e-maps. www.nature.com/scientificreports/ Detection of slices with segmentation failures. To evaluate detection performance w.r.t. slices containing segmentation failures precision-recall curves for each combination of model architecture and loss function using e-maps (Fig. 3b, left) or b-maps (Fig. 3b, right) are shown. The results show that detection performance of slices containing segmentation failures is slightly better for all models when using e-maps. Furthermore, the detection network achieves highest performance using uncertainty maps obtained from the DN model and the lowest when exploiting e-or b-maps obtained from the DRN model. Table 3 shows the average precision of detected slices with segmentation failures per patient, as well as the average percentage of slices that do contain segmentation failures (reference for detection task). The results illustrate that these measures are positively correlated i.e. that precision of detected slices in a patient volume is higher if the volume contains more slices that need correction. On average the DN model generates cardiac segmentations that contain more slices with at least one segmentation failure compared to U-net (ranks second) and DRN (ranks third). A higher number of detected slices containing segmentation failures implies an increased workload for manual correction.
Calibration of uncertainty maps. Figure 4 shows risk-coverage curves for each combination of model architectures, uncertainty maps and loss functions ( Fig. 4 left: CE or Brier loss, Fig. 4 right: soft-Dice). The results show an effect of the loss function on slope and convergence of the curves. Segmentation errors of models trained with the soft-Dice loss are less frequently covered by higher uncertainties than models trained with CE or Brier loss (steeper slope and lower minimum are better). This difference is more pronounced for e-maps. Models trained with the CE or Brier loss only slightly differ concerning convergence and their slopes are approximately identical. In contrast, the curves of the models trained with the soft-Dice differ regarding their slope and achieved minimum. Comparing e-and b-map of the DN-SD and U-net-SD models the results reveal that the curve for b-map has a steeper slope and achieves a lower minimum compared to the e-map. For the DRN-SD model these differences are less striking. In general for a specific combination of model and loss function the risk-coverage curves using b-maps achieve a lower minimum compared to e-maps.
Correction of automatically identified segmentation failures. Simulated correction. The results
listed in Tables 1 and 2 show that the proposed method consisting of segmentation followed by simulated manual correction of detected segmentation failures delivers accurate segmentation for all tissues over ED and ES points. Correction of detected segmentation failures improved the performance in terms of DC, HD and clinical metrics for all combinations of model architectures, loss functions and uncertainty measures. Focusing on the DC after correction of detected segmentation failures the results reveal that performance differences between evaluated models decreased compared to the segmentation-only task. This effect is less pronounced for HD where the DRN network clearly achieved superior results in the segmentation-only and combined approach. The DN performs the least of all models but achieves the highest absolute DC performance improvements in the combined approach for RV at ES. Overall, the results in Table 1 disclose that improvements attained by the combined approach are almost all statistically significant ( p ≤ 0.05 ) at ES and frequently at ED (96% resp. 83% of the cases). Moreover, improvements are in 99% of the cases statistically significant for HD compared to 81% of the cases for DC. Table 2 are inline with these findings. We observe that segmentation followed by simulated manual correction of detected segmentation failures resulted in considerably higher accuracy for LV and RV ejection fraction. Achieved improvements for clinical metrics are only statistically significant ( p ≤ 0.05 ) in one case for RV ejection fraction.
Table 1. Segmentation performance of different combination of model architectures, loss functions and evaluation modes (without or with MC dropout enabled during testing) in terms of Dice coefficient (top) and
Hausdorff distance (bottom) (mean ± standard deviation). Each combination comprises a block of two rows. A row in which column Uncertainty map for detection indicates e-or b-map shows results for the combined segmentation and detection approach. Numbers accentuated in bold are ranked first in the segmentation only task whereas numbers accentuated in italics are ranked first in the combined segmentation & detection task. The last row states the performance of the winning model in the ACDC challenge (on 100 patient images) 32 . Number with asterisk indicates statistical significant at 5% level w.r.t. the segmentation-only approach. (2) simulated manual correction of automatic segmentations using spatial uncertainty maps. ρ denotes the Pearson correlation coefficient, bias denotes the mean difference between the two measurements (mean ± standard deviation) and MAE denotes the mean absolute error between the two measurements. Each combination comprises a block of two rows. A row in which column Uncertainty map for detection indicates e-or b-map shows results for the combined segmentation and detection approach. Numbers accentuated in bold are ranked first in the segmentation only task. Numbers in italics indicate statistical significant at 5% level w.r.t. the segmentation-only approach for the specific clinical metric. www.nature.com/scientificreports/ In general, the effect of correction of detected segmentation failures is more pronounced in cases where the segmentation-only approach achieved relatively low accuracy (e.g. DN-SD for RV at ES). Furthermore, performance gains are largest for RV and LV at ES and for ejection fraction of both ventricles.
Method
The best overall performance is achieved by the DRN model trained with cross-entropy loss while exploiting entropy maps in the detection task. Moreover, the proposed two step approach attained slightly better results using Bayesian maps compared to entropy maps.
Manual correction. Table 4 lists results for the combined automatic segmentation and detection approach followed by manual correction of detected segmentation failures by an expert. The results show that this correction led to improved segmentation performance in terms of DC, HD and clinical metrics. Improvements in terms of HD are in 50 percent of the cases statistically significant ( p ≤ 0.05 ) and most pronounced for RV and LV at end-systole.
Qualitative examples of the proposed approach are visualized in Figs. 5 and 6 for simulated correction and manual correction of the automatically detected segmentation failures, respectively. For the illustrated cases (simulated) manual correction of detected segmentation failures leads to increased segmentation performance. On average manual correction of automatic segmentations took less than 2 min for ED and ES volumes of one patient compared to 20 min that is typically needed by an expert for the same task.
Ablation study
To demonstrate the effect of different hyper-parameters in the method, a number of experiments were performed. In the following text these are detailed. Table 5. We observe that segmentation performance started to converge using seven samples only. Performance improvements using an increased number of MC samples were largest for the Dilated Network. Overall, using more than ten samples did not increase segmentation performance. Hence, in the presented work T was set to 10.
Impact of number of Monte
Effect of patch-size on detection performance. The combined segmentation and detection approach detects segmentation failures on region level. To investigate the effect of patch-size on detection performance three different patch-sizes were evaluated: 4 × 4, 8 × 8, and 16 × 16 voxels. The results are shown in Fig. 7. We www.nature.com/scientificreports/ can observe in Fig. 7a that larger patch-sizes result in a lower number of false positive regions. The result is potentially caused by the decreasing number of regions in an image when using larger patch-sizes compared to smaller patch-sizes. Furthermore, Fig. 7b reveals that slice detection performance is only slightly influenced by patch-size. To ease manual inspection and correction by an expert, it is desirable to keep region-size i.e. patchsize small. Therefore, in the experiments a patch-size of 8 × 8 voxels was used.
Impact of tolerance threshold on number of segmentation failures.
To investigate the impact of the tolerance threshold separating segmentation failures and tolerable segmentation errors, we calculated the www.nature.com/scientificreports/ ratio of the number of segmentation failures and all errors i.e. the sum of tolerable errors and segmentation failures. Fig. 8 shows the results. We observe that at least half of the segmentation failures are located within a tolerance threshold i.e. distance of two to three voxels of the target structure boundary as defined by the reference annotation. Furthermore, the mean percentage of failures per volume is considerably lower for the Dilated Residual Network (DRN) and highest for the Dilated Network. This result is inline with our earlier finding (see Table 3) that average percentage of slices that do contain segmentation failures is lowest for the DRN model.
Discussion
We have described a method that combines automatic segmentation and assessment of uncertainty in cardiac MRI with detection of image regions containing segmentation failures. The results show that combining automatic segmentation with manual correction of detected segmentation failures results in higher segmentation performance. In contrast to previous methods that detected segmentation failures per patient or per structure, we showed that it is feasible to detect segmentation failures per image region. In most of the experimental settings, simulated manual correction of detected segmentation failures for LV, RV and LVM at ED and ES led to statistically significant improvements. These results represent the upper bound on the maximum achievable performance for the manual expert correction task. Furthermore, results show that manual expert correction of detected segmentation failures led to consistently improved segmentations. However, these results are not on par with the simulated expert correction scenario. This is not surprising because inter-observer variability is high for the presented task and annotation protocols may differ between clinical environments. Moreover, qualitative results of the manual expert correction reveal that manual correction of the detected segmentation failures can prevent anatomically implausible segmentations (see Fig. 6). Therefore, the presented approach can www.nature.com/scientificreports/ potentially simplify and accelerate correction process and has the capacity to increase the trustworthiness of existing automatic segmentation methods in daily clinical practice. The proposed combined segmentation and detection approach was evaluated using three state-of-the-art deep learning segmentation architectures. The results suggest that our approach is generic and applicable to different model architectures. Nevertheless, we observe noticeable differences between the different combination of model architectures, loss functions and uncertainty measures. In the segmentation-only task the DRN clearly outperforms the other two models in the evaluation of the boundary of the segmented structure. Moreover, qualitative analysis of the automatic segmentation masks suggests that DRN generates less often anatomically implausible and fragmented segmentations than the other models. We assume that clinical experts would prefer such segmentations although they are not always perfect. Furthermore, even though DRN and U-net achieve similar performance in regard to DC we assume that less fragmented segmentation masks would increase trustworthiness of the methods.
In agreement with our preliminary work we found that uncertainty maps obtained from a segmentation model trained with soft-Dice loss have a lower degree of uncertainty calibration compared to when trained with one of the other two loss functions (cross-entropy and Brier) 25 . Nevertheless, the results of the combined segmentation and detection approach showed that a lower degree of uncertainty calibration only slightly deteriorated the detection performance of segmentation failures for the larger segmentation models (DRN and U-net) when exploiting uncertainty information from e-maps. Hendrycks and Gimpel 37 showed that softmax probabilities generated by deep learning networks have poor direct correspondence to confidence. However, in agreement with Geifman et al. 30 we presume that probabilities and hence corresponding entropies obtained from softmax function are ranked consistently i.e. entropy can potentially be used as a relative uncertainty measure in deep learning. In addition, we detect segmentation failures per image region and therefore, our approach does not require perfectly calibrated uncertainty maps. Furthermore, results of the combined segmentation and detection approach revealed that detection performance of segmentation failures using b-maps is almost independent of the loss function used to train the segmentation model. In line with Jungo et al. 38 we assume that by enabling MC-dropout in testing and computing the mean softmax probabilities per class leads to better calibrated probabilities and b-maps. This assumption is in agreement with Srivastava et al. 39 where a CNN with dropout used at testing is interpreted as an ensemble of models.
Quantitative evaluation in terms of Dice coefficient and Hausdorff distance reveals that proposed combined segmentation and detection approach leads to significant performance increase. However, the results also demonstrate that the correction of the detected failures allowed by the combined approach does not lead to statistically significant improvement in clinical metrics. This is not surprising because state-of-the-art automatic segmentation methods are not expected to lead to large volumetric errors 7 and standard clinical measures are not sensitive to small segmentation errors. Nevertheless, errors of the current state-of-the-art automatic segmentation methods may lead to anatomically implausible segmentations 7 that may cause distrust in clinical application. Besides increase in trustworthiness of current state-of-the-art segmentation methods for cardiac MRIs, improved segmentations are a prerequisite for advanced functional analysis of the heart e.g. motion analysis 40 and very detailed morphology analysis such as myocardial trabeculae in adults 41 .
For the ACDC dataset used in this manuscript, Bernard et al. 7 reported inter-observer variability ranging from 4 to 14.1 mm (equivalent to on average 2.6 to 9 voxels). To define the set of segmentation failures, we employed a strict tolerance threshold on distance metric to distinguish between tolerated segmentation errors www.nature.com/scientificreports/ and segmentation failures (see Ablation study). Stricter tolerance threshold was used because the thresholding is performed in 2D, while evaluation of segmentation is done in 3D. Large slice thickness in cardiac MR could lead to a discrepancy between the two. As a consequence of this strict threshold results listed in Table 3 show that almost all patient volumes contain at least one slice with a segmentation failure. This might render the approach less feasible in clinical practice. Increasing the threshold decreases the number of segmentation failures and slices containing segmentation failures (see Fig. 8) but also lowers the upper bound on the maximum achievable performance. Therefore, to show the potential of our proposed approach we chose to apply a strict tolerance threshold. Nevertheless, we realize that although manual correction of detected segmentation failures leads to increased segmentation accuracy the performance of precision-recall is limited (see Fig. 3) and hence, should be a focus of future work. The presented patch-based detection approach combined with (simulated) manual correction can in principle lead to stitching artefacts in the resulting segmentation masks. A voxel-based detection approach could potentially solve this. However, voxel-based detection methods are more challenging to train due to the very small number of voxels in an image belonging to the set of segmentation failures.
Evaluation of the proposed approach for 12 possible combinations of segmentation models (three), loss functions (two) and uncertainty maps (two) resulted in an extensive number of experiments. Nevertheless, future work could extend evaluation to other segmentation models, loss functions or combination of losses. Furthermore, our approach could be evaluated using additional uncertainty estimation techniques e.g. by means of ensembling of networks 42 or variational dropout 43 . In addition, previous work by Kendall and Gal 44 , Tanno et al. 45 has shown that the quality of uncertainty estimates can be improved if model (epistemic) and data (aleatoric) uncertainty are assessed simultaneously with separate measures. The current study focused on the assessment of model uncertainty by means of MC-dropout and entropy which is a combination of epistemic and aleatoric uncertainty. Hence, future work could investigate whether additional estimation of aleatoric uncertainty improves the detection of segmentation failures.
Furthermore, to develop an end-to-end approach future work could incorporate the detection of segmentation failures into the segmentation network. Besides, adding the automatic segmentations to the input of the detection network could increase the detection performance.
Finally, the proposed approach is not specific to cardiac MRI segmentation. Although data and task specific training would be needed the approach could potentially be applied to other image modalities and segmentation tasks.
Conclusion
A method combining automatic segmentation and assessment of segmentation uncertainty in cardiac MR with detection of image regions containing local segmentation failures has been presented. The combined approach, together with simulated and manual correction of detected segmentation failures, increases performance compared to segmentation-only. The proposed method has the potential to increase trustworthiness of current state-of-the-art segmentation methods for cardiac MRIs. | 11,709.6 | 2020-11-13T00:00:00.000 | [
"Medicine",
"Computer Science",
"Engineering"
] |
A Pathfinder in High-Pressure Bioscience: In Memoriam of Gaston Hui Bon Hoa
On 26 July 2020, our colleague and friend Dr [...].
became the main object for Gaston's studies for over 20 years. Gaston's research on the effects of hydrostatic pressure on substrate binding, equilibria of heme iron ligation, and the protein-protein interactions of cytochromes P450 [4][5][6][7][8][9][10][11][12][13] has become a part of the golden fund of cytochrome P450 science and has served as a prototype for pressure perturbation studies in other laboratories. Another essential part of Gaston's heritage is a series of studies on cytochrome c oxidase and its interactions with cytochrome c in collaboration with Jack Kornblatt [14][15][16]. During the most recent years, Gaston applied pressure perturbation to the studies of neuroglobin and other globins [17][18][19] and viroid RNA [20,21]. Regardless of the subject of Gaston's research, his experimental approaches have always been distinguishable by their inventive design and innovative research strategies. The list of Gaston's scientific publications includes over 150 experimental and review papers in reputable scientific journals. He attended various scientific congresses as a participant and keynote speaker ( Figure 2). The predominant part of Gaston′s scientific heritage is devoted to the effects of hydrostatic pressure on proteins and their use in the studies of protein conformational dynamics and function. His first publication in this field, "High-pressure spectrometry at sub-zero temperatures," appeared in 1982 [3]. In this paper, Gaston and his co-authors describe the high-pressure optical cell of their construction and report the results of their pioneering study on the effects of pressure on the spin equilibrium of cytochrome P450. Since then, the idea of using pressure to displace protein conformational equilibria and perturb protein-solvent interactions became the keynote of Gaston′s research. Likewise, cytochrome P450 became the main object for Gaston′s studies for over 20 years. Gaston′s research on the effects of hydrostatic pressure on substrate binding, equilibria of heme iron ligation, and the protein-protein interactions of cytochromes P450 [4][5][6][7][8][9][10][11][12][13] has become a part of the golden fund of cytochrome P450 science and has served as a prototype for pressure perturbation studies in other laboratories. Another essential part of Gaston′s heritage is a series of studies on cytochrome c oxidase and its interactions with cytochrome c in collaboration with Jack Kornblatt [14][15][16]. During the most recent years, Gaston applied pressure perturbation to the studies of neuroglobin and other globins [17][18][19] and viroid RNA [20,21]. Regardless of the subject of Gaston′s research, his experimental approaches have always been distinguishable by their inventive design and innovative research strategies. The list of Gaston′s scientific publications includes over 150 experimental and review papers in reputable scientific journals. He attended various scientific congresses as a participant and keynote speaker ( Figure 2).
Besides being a prominent scientist, Gaston was distinguished by his remarkable engineering skills. Since the experimental equipment that he needed for his innovative studies had not yet been (and remains not to be) commercially available, he designed and built Besides being a prominent scientist, Gaston was distinguished by his remarkable engineering skills. Since the experimental equipment that he needed for his innovative studies had not yet been (and remains not to be) commercially available, he designed and built research instruments by himself. The list of unique equipment built by Gaston in his laboratory in Paris includes thermostatically controlled high-pressure optical cells withstanding pressures up to 6 kbar, pressure jump and high-pressure stop-flow devices, and photoacoustic spectrophotometer.
In the European Union Biotechnology Program projects, Gaston collaborated with many laboratories in Germany, France, and the United Kingdom. He also coordinated two Below we give the floor to some of Gaston′s collaborators and friends to allow them to share their memories about this remarkable scientist.
Gregory A. Petsko
He was one of the nicest human beings I ever knew. He was also a brilliant experimentalist, with some of the best hands I have ever seen and a keen instinct for doing just the right experiment.
When you are a young scientist, it is impossible to overstate the importance of your intellectual growth and self-esteem that come about when an older scientist takes an interest in you, teaches you, and encourages your own ideas. During my time in Prof. Pierre Douzou′s lab as an EMBO fellow in 1973, I benefited from just such interest not only from Pierre but also from Gaston. I had been a protein crystallography graduate student, so much of what I know about studying proteins in solution, including enzyme kinetics, I learned at the bench from Gaston. He probably never had a more willing-and more Below we give the floor to some of Gaston's collaborators and friends to allow them to share their memories about this remarkable scientist.
Gregory A. Petsko
He was one of the nicest human beings I ever knew. He was also a brilliant experimentalist, with some of the best hands I have ever seen and a keen instinct for doing just the right experiment.
When you are a young scientist, it is impossible to overstate the importance of your intellectual growth and self-esteem that come about when an older scientist takes an interest in you, teaches you, and encourages your own ideas. During my time in Prof. Pierre Douzou's lab as an EMBO fellow in 1973, I benefited from just such interest not only from Pierre but also from Gaston. I had been a protein crystallography graduate student, so much of what I know about studying proteins in solution, including enzyme kinetics, I learned at the bench from Gaston. He probably never had a more willing-and more inept-pupil. However, a more patient, generous, and able teacher one could not have asked for.
We were trying to measure enzyme reactions in aqueous-organic media at subzero temperatures so that we could trap kinetically significant intermediates and characterize them spectroscopically [22]. My dream, which he shared, was that, ultimately, I could use these same techniques, pioneered by him and Pierre, to accumulate such intermediates in enzyme crystals so that their structures could be determined at atomic resolution. That dream was fully realized twenty-seven years later, when Steve Sligar, Dagmar Ringe, Ilme Schlichting, and I successfully determined the three-dimensional structures of every kinetically significant intermediate in the very enzyme Gaston had taught me about those many years before, cytochrome P450.
The past never ceases to call to us, and if we heed that call, it can summon up bad memories as well as good ones. I have nothing but the best memories of my time in Paris, my time before striking out on my own as a scientist, and my time with Gaston Hui Bon Hoa. My life is so much better for having known him.
Stephen G. Sligar In science, it is the interpersonal connections that define opportunities and career paths. When I was a graduate student in physics at the University of Illinois, wandering into the Biochemistry Department led me to meet I. C. Gunsalus or Gunny. A giant in microbiology and biochemistry, he was a close friend with Marianne Grunberg-Manago and Pierre Douzou at the Institute de Biologie Physico Chimique (IBPC). Thus, when I was a tenured professor, I sought Gunny's advice as to where to spend a sabbatical year-and his suggestion was Paris, IPBC, and Pierre's laboratory.
Through some magic of these greats, I received a Fullbright Fellowship in 1989 to support the move of our family to Paris. After arriving, I learned my de jour mentor was a most energetic Gaston Hui Bon Hoa. What a great time scientifically and a most memorable personal life experience. Never, to this day, have I met anyone with such boundless energy and enthusiasm for doing experiments. Gaston would always prefer to do another experiment than write anything up for publication. His filing cabinets were full of so much data that, at any point, if one wanted a manuscript, you just went to the cabinet, pulled out a folder, and started writing. The IBPC was such a motivating place that we returned for several summers after the sabbatical. We formed partnerships with Jack and Judy Kornblatt, Dimitri Davydov, Christiane Jung, and others. This family spawned many discoveries. One memorable idea was hatched at a group wine event on the top floor of the IBPC. We were discussing water under pressure and then the osmotic forces that can act to desolvate. One problem facing molecular biologists was the "star" activity of restriction endonucleases. A list of solutions to avoid to prevent the loss of 6-base pair recognition was at the back of every catalog. After a lot of wine, it occurred to us that these were all osmolytes. Several pioneering papers ensued, showing that osmotic pressure indeed removed bridging hydrogen bonding water between protein and the outer base pairs that could be reversed by applying hydrostatic pressure! Gaston was unique and a source of motivation and companionship that is missed by all. He continued doing experiments after IBPC, working with Mike Marden, and thinking of new ways to advance biophysics through careful measurement. A real pioneer! Jack A. Kornblatt How do I remember our love affair with IBPC, Pierre Douzou, and most especially, with Gaston Hui Bon Hoa? It was in June 1982 when I and Judy, my wife, arrived in Paris from Cap d Ail after a month at a French-language school. It was cold and raining heavily. We were wet and tired when we eventually came to our apartment and went back down to the bar on the rez de chaussee, where we tried to calm down with a large brandy each. A very inauspicious beginning to a wonderful year! I went on up the street to IBPC and was introduced to Gaston Hui Bon Hoa, with whom I would work that year and during the summers in subsequent years. I had come to do low-temperature studies on the cytochrome c oxidase. However, Pierre explained that Gaston had just finished building a high-pressure optical cell (la bombe) that could be interfaced with a spectrophotometer or fluorometer. I put the oxidase into Gaston's "la bombe" and was hooked. The world's most exciting data poured out. The influence of high hydrostatic pressure on the oxidase was phenomenal. It allowed the catching and trapping of the intermediates with very large volume changes and helped us point out the energy transduction mechanism. All throughout this, Gaston guided my hands.
Later, when Steve Sligar came into the lab, he brought the beautiful structure of cytochrome P450cam. After staring at it for days, it finally occurred to me that osmolytes worked by reducing the activity of water and that it might be one of the reasons why they aid camphor in gaining access to a water-filled pocket. This realization launched me to finally complete the description of the energy transduction and catalytic cycles of the oxidase. The papers wrote themselves.
Very recently (2008), the team at Cornell developed high-pressure SAXS. Were Gaston with us today, he would be incredibly excited. "I have to try it out on P450cam", he would have said. He was a man with boundless enthusiasm and energy, and he happily shared it.
At the beginning of this overlong appreciation, I used the term love affair. It was just that. Additionally, by the way, 20 years later, I finally got to do the low-temperature work that I had initially planned! [11]. The high-pressure approach used by Gaston encouraged me to keep working together. During another seven-month stay in Gaston's laboratory in 1993, we worked together with B. Canny and J.C. Chervin from the Université Pierre et Marie Curie Paris, Laboratoire de Physique des Milieux Condensés, to design a high-pressure cell with sapphire anvils, which we later used in Berlin for FTIR spectroscopic investigations on the carbon monoxide complex of cytochromes P450 [23]. Gaston visited my laboratory in Berlin several times as a participant in the EU project BIO2-942060. Looking back, I have to say that without Gaston, I would not have been able to establish high-pressure technology on my own.
In addition to being an inspiring scientist, Gaston was a good friend and, as I realized, a great family man. I fondly remember a trip with his family in 1991 to the Pont de Normandie near Le Havre and Honfleur, a beautiful little French port town in Normandy near the Seine estuary in the English Channel. In addition, I had the pleasure of attending his 80th birthday party in Paris seven years ago. I will always remember Gaston.
Dmitri R. Davydov I first met Gaston Hui Bon Hoa at the international conference on Cytochromes P450 in Moscow in 1991. At that moment, the focus of my studies was (and remains to be) on the catalytic mechanisms and the protein-protein interactions of cytochromes P450. I knew very little about the effects of hydrostatic pressure on proteins and their use in biophysical studies. Still, I knew Gaston's name from his publications on the impact of pressure on P450cam [7,24]. My attention was drawn to Gaston's presentation at the meeting. It spurred my interest in the use of pressure perturbation. I got captivated by his enthusiasm and ideas on how pressure perturbation may be used in P450 research. Soon, I joined his research group in the laboratory of Pierre Douzou in IBPC in Paris as an INSERM fellow ("poste verte"). It was a wonderful time. I enjoyed the creative atmosphere in the lab and admired Gaston's engineering ingenuity. Besides researching pressure-induced transitions in P450 2B4 (which I brought from my lab in Moscow) [5,25], I also participated in Gaston's engineering efforts. I designed data acquisition and analysis software for high-pressure spectroscopy, which became a core of my SpectraLab software that I still use in my research. At the end of my fellowship, we were successful in acquiring an INSERM collaborative grant followed by an INTAS multilateral research grant. This funding allowed us to continue our collaboration [9,26,27]. From 1993 to 1999, I enjoyed visiting Gaston's lab several times a year. After my move to the US, our collaboration continued [6], and in 2002, Gaston visited me in the University of Texas Medical Branch (UTMB, Galveston, TX) to install his high-pressure equipment, which I still use in my research. Later, we met several times in San Diego, Paris, and other places. Owing to my collaboration with Gaston, pressure perturbation became an integral part of my research strategy, and the effects of pressure on proteins became another focus of my scientific interests. I enjoyed our collaboration and friendship a lot. Besides being a talented experimentalist and prominent scientist, he was also a wonderful person and a good friend.
Collaboration with Gaston became momentous and life-defining for many of his colleagues and friends. Gaston's ingenuity and enthusiasm swept them along with him and promoted the development of high-pressure bioscience all over the world. So, let this Special Issue become our tribute to this impassioned pathfinder, a talented scientist, and great friend. | 3,463.6 | 2021-08-01T00:00:00.000 | [
"Physics"
] |
Pre-grasp Manipulation Planning to Secure Space for Power Grasping
An object can be gripped firmly through power grasping, in which the gripper fingers and palm are wrapped around the object. However, it is difficult to power-grasp an object if it is placed on a support surface and the grasping point is near the support surface. Because there is no gap between the object and the support surface, the gripper fingers and the support surface will collide when the gripper attempts to power-grasp the object. To address this, we propose a pre-grasp manipulation planning method that uses two robot arms, whereby space can be secured for power grasping by rotating the object while being supported against the support surface. The objects considered in this study are appropriately shaped for a power grasp, but power grasping cannot be performed directly because the desired power-grasping location is close to the support surface on which the object is placed. First, to power-grasp the object, candidate rotation axes on the object and in contact with the support surface are derived based on a mesh model of the object. Then, for each such axis, the object pose that allows power grasping is obtained. Finally, according to the obtained object pose, the paths for rotating and power-grasping the object are planned. We evaluate the proposed approach through simulations and experiments using two UR5e robot arms with a 2F-85 gripper.
surrounding environment. Here, the term "appropriate shape" 18 implies that the size or shape of the object at the location 19 where power grasping is to be performed is such that the 20 whole or part of the object can be placed between the gripper 21 fingers (i.e., object size is smaller than the maximum distance 22 between the gripper fingers). Assuming a free-floating object, 23 power grasping can be performed on any part of the object 24 as long as its shape is suitable. However, if the object is 25 placed on a horizontal and flat support surface, such as a 26 table, a power grasp may not be possible because there is no 27 space between the object and the support surface, and thus 28 the gripper fingers cannot be wrapped around the object. If 29 an object is voluminous, such as a cuboid, it may be rotated 30 and placed stably so that another side of the object is in 31 contact with the support surface. Thereby, it is possible to 32 secure space for power grasping at a desired point of the 33 object. However, if an object is not voluminous, that is, one 34 of its dimensions is significantly smaller than the others, 35 it is difficult to create space for power grasping using this stably.
43
In this paper, we propose pre-grasp manipulation planning 44 so that space for power grasping may be secured ( Figure 45 1). Pre-grasp manipulation is the process of determining 46 preliminary manipulations for reorienting an object to an appropriate pose (position and orientation) so that a given 48 task can be performed [5]. In the present study, it refers to parts of the surrounding environment, such as the edges 65 of the support surface or the walls, can be used for 66 manipulation planning, except for the support surface 67 itself.
68
(c4) The object is not voluminous. This implies that one di-69 mension of the object is assumed very small compared 70 with the others. As shown in Figure 2, an object may 71 be placed on the support surface in different poses, but 72 it is not stable because it is not voluminous.
73
(c5) Because the gripping force of the precision grasp using 74 only one gripper is not sufficient to lift the target object, 75 grasping the object in an arbitrary position may cause 76 slippage.
77
Because of (c3) and (c4), it is not possible to power-78 grasp the object directly if a grasp planning that does not 79 allow collision with the surrounding environment is used. 80 Therefore, it is necessary to determine an object pose that 81 can secure space for two grippers to perform power grasping. 82 In this situation, because, initially, the target object can 83 only be precision-grasped, and condition (c5) is assumed, it 84 should be rotated while being supported against the surface 85 so that its pose can be changed without slipping. To this end, 86 we use a mesh model of the object to determine rotation-87 axis candidates based on the position of all vertices in the 88 model. For each candidate axis, a target rotation angle is 89 determined to secure space for two grippers to power-grasp 90 the object. Subsequently, planning is performed to rotate the 91 object using one robot arm, power-grasp it by the other arm, 92 and finally regrasp it in a power-grasp manner by the robot 93 arm used for the rotation. All grasping points for rotating 94 with the upper part of the object, and finally gripping the 127 object [6]. A thin nail made of smooth plastic and rubber was 128 attached to each finger to increase the grasping success rate.
129
In [7], a gripper was proposed that could grasp a flat object 130 by attaching a crawler to one fingertip. If the force applied to 131 the object while the gripper finger is closed exceeds a certain 132 threshold, the crawler starts to move, and the object rotates 133 in the direction of the gripper palm. Through this in-hand 134 manipulation, the gripper can power-grasp flat objects. 135 Grippers have also been proposed for power-grasping gen-136 eral objects. In [8], a three-fingered gripper was presented, on 137 one finger of which a small two-fingered gripper is mounted.
138
It is thus possible to power-grasp an object by using a gripper 139 selected according to the size of the target object, thereby 140 separating the problem of maintaining force closure and ma-141 nipulating the object. In another study, a plate was attached 142 to each finger of an existing gripper for power grasping [9]. Effective power grasping can also be achieved by grasp 148 planning. In [10], the power-grasping strategy was selected 149 depending on the size of the object. Specifically, for small 150 target objects, parallel fingers (index, middle, ring, and little) 151 of the robot hand were used, but the thumb and palm were 152 not used. For larger objects, the thumb and palm of the robot 153 hand were also used. Another grasp planning method was 154 based on hand-object geometric fitting [11]. Considering the 155 surface of an object, target contacts suitable for the finger or 156 palm of the robot hand were located for grasp planning. In 157 the case of precision grasping, as many target contacts were 158 located as the number of fingers. To power-grasp an object, 159 both the finger and the palm of the robot hand were associated 160 with the target contacts. However, the actual number of 161 contacts was larger than that of the target contacts because 162 the proximal links of the finger were encouraged to touch the 163 object for more stable gripping. In another study, gripping 164 was performed by reshaping the fitting weights to points on 165 the finger and palm of the gripper [12]. To precision-grasp 166 an object, more weight was applied to the fingertip area 167 than elsewhere. In the case of a power grasp, the object was 168 gripped by assigning more weight to the center of the finger 169 and palm.
170
However, the gripper is not always possible to approach 171 the grasp pose obtained by the aforementioned methods. 172 For example, if an object is placed on a flat surface, or if 173 there are obstacles, a collision may occur when the gripper 174 moves to a grasp pose that would be valid if the surrounding 175 environment were not considered. However, if pre-grasp ma-176 nipulation is performed to relocate an object to an appropriate 177 pose before final grasping, the gripper can grasp the object by 178 attaining the obtained grasp pose. In [5], preparatory object 179 rotation was presented, where a handled object was rotated 180 about an axis perpendicular to the support surface so that the 181 handle was in front of the robot before the object was lifted. 182 This was motivated by the fact that a person usually rotates 183 a handled object so that the person could hold the handle 184 and lift the object conveniently. The authors in [13] also pre-185 rotated an object about an axis vertical to the support surface, 186 but the target rotation angle was determined according to the 187 manipulator payload calculated based on the load limit of 188 each joint.
189
Another approach is to slide an object to render grasp-190 ing possible. In [14] and [15], objects were classified into 191 categories according to appearance, and each category had 192 a pre-shape data structure comprising a robot hand, a set 193 of starting poses, etc. When a target object was input, after 194 the category in which the object belonged was determined, 195 the robot hand slid and moved the object to the final region 196 so that it could be grasped using the corresponding pre-197 shape. In [16], a combination of pre-grasp manipulation and 198 transport task planning was proposed. The trajectory was 199 optimized using the functional gradient optimization method. 200 In addition, to increase the planning success rate, objects 201 were released after pre-grasp manipulation, and were grasped 202 again to perform the transport task. In [17], the configurations 203 of the start and goal regions were sampled to determine 204 the appropriate motion for grasping a thin object by pairing 205 each configuration. In this strategy, the object was grasped 206 after it was slid to the edge of the support surface. In [18], 207 not only the edge of the support surface but also various 208 parts of the surrounding environment (such as surface and 209 walls) were used to grip an object. Specifically, a graph was 210 constructed using environmental constraint exploitation and 211 valid transitions, and manipulation planning was performed 212 VOLUME 4, 2016 by regarding gripping motions as goal nodes. 213 Certain studies have been concerned with securing space 214 for grasping by rotating an object around a axis parallel to 215 the support surface. A method for rotating a cuboid object 216 using hybrid force-velocity control was proposed in [19]. 217 Grasp planning for a cuboid object in the corner of a box, 218 is difficult. However, using this method, the cuboid could be 219 rotated so that space for grasping was secured. Two robot 220 arms were used in [20] to grasp an object. One arm pushed 221 and lifted the object using a half-sphere end-effector to secure The regrasp method can be used to grasp an object in 227 a desired pose. In [21], a pin perpendicular to the support 228 surface was installed to increase the success rate of pick- Another strategy proposed in [22] was to reorient an object to 235 an appropriate pose for performing the next task by decom- the object could be grasped in this pose. In [24], pre-grasp 245 planning was performed using multimodal motion planning.
246
Using the regrasp graph, it was automatically determined 247 whether to use only one robot arm or both arms to perform 248 a given task, including pre-grasp manipulation. It was also 249 determined whether to include handover motion if two robot 250 arms were used.
251
The main difference between the present study and the 252 aforementioned studies is the target object, which is too 253 heavy or too large to stably lift using only one gripper (c5).
254
Therefore, handover or in-hand manipulation is not an appro-255 priate method to secure space for grasping. To address this, 256 when an object is rotated to secure gripping space, such as 257 space for power grasping in this study, the support surface is
271
Our task is to power-grasp an object in a situation where 272 power grasping cannot be performed directly. The proposed 273 method requires two robot arms: a supportive and a master 274 robot arm. The role of the latter is to power-grasp an object 275 after the former rotates the object. Then, after the supportive 276 arm regrasps it in a power-grasp manner, the task is com-277 pleted.
278
For each object, a support-grasping pose and two power-279 grasping poses are heuristically predefined (Figure 1 (a)). The 280 support grasping pose is associated with rotating the object, 281 and the power grasping pose for each robot arm is the target 282 pose for power grasping the object.
283
The pipeline of the proposed method is shown in Figure 284 3. First, in the simulation, the mesh model of the target 285 object is loaded in the initial pose. To find cases in which 286 the target object and any flat surface can be in an edge-face 287 or two vertex-face contacts, the convex hull of the object 288 is derived. Among the edges of the convex hull, those 289 that are not short and are close to the support surface are 290 designated as rotation-axis candidates that may be used to 291 sustain the object while being rotated. For each candidate, 292 the goal angle is determined for securing space so that power 293 grasping may be performed using two grippers. When the 294 target object is rotated by the goal angle around the candidate 295 rotation axis, inverse kinematics solutions for reaching the 296 support-grasping pose and power-grasping poses are calcu-297 lated. Based on these solutions, integrated paths for rotating 298 and finally power-grasping the object are planned. After the 299 integrated paths are obtained, one of them is executed by 300 the robot arms. In the next section, we provide a detailed 301 explanation of the proposed method. 303 Under condition (c5), the target object should be rotated 304 while being sustained on the support surface so that space 305 may secured for power grasping by the grippers. For this 306 reason, the rotation axis must be on the object surface and in 307 contact with the support surface while the object is rotated. 308 Algorithm 1 is proposed to determine candidate rotation axes 309 satisfying this requirement.
311
Various types of contact can occur between two objects [25]. 312 In order that an object be rotated around an axis while being 313 supported by the support surface, an edge-face contact must 314 occur. In this study, the edge in this contact is the rotation axis 315 that lies on the object surface. In addition, the support surface 316 corresponds to the face of the contact. Rotation is possible 317 even when an object is supported by a single vertex-face 318 contact. However, the axis of rotation may be distorted be-319 cause the part supporting the object is small. Therefore, in 320 this study, rotation is assumed to occur only in the case of 321 if d e < ∆d s and l e > ∆l then 8:
324
To determine the rotation axis that induces an edge-face or 325 two vertex-face contacts, the convex hull C obj of the object 326 O obj is calculated, and the edges E obj of C obj are obtained 327 using the ConvexHull(·) and Edges(·) functions, re-328 spectively (Figure 3 (b)). Among these edges, one that is not 329 short and is close to the support surface is designated as a 330 rotation axis candidate (Figure 3 (c)). To find the candidate 331 rotation axes, in the simulation, we use the position of the 332 endpoints for each edge e obj ∈ E obj and that of the support 333 surface parallel to the xy-plane in the world frame. These 334 points can be derived from the mesh model of the object and 335 the predefined height of the support surface in the simulation. 336 Based on these positions, the DistEdgeSupport(·) and 337 EdgeLength(·) functions return the maximum distance 338 d e between the endpoints of the edge e obj and the support 339 surface, and the length l e of the edge e obj , respectively. If d e 340 is within a certain value ∆d s , and l e is larger than a thresh-341 old ∆l, the corresponding edge e obj becomes a candidate 342 rotation axis. Because a very short edge e obj is similar to a 343 vertex-face contact, the threshold ∆l is used to exclude it. In 344 addition, the value ∆d s is used to find edges that can make 345 contact with the support surface. In fact, the rotation-axis 346 candidates obtained in the simulation should not be in contact 347 with the support surface, but only close to it. If the rotation-348 axis candidates are in contact, then the object already collides 349 with the support surface because the candidates are on the 350 object surface. Therefore, the maximum rotation θ max cannot 351 be determined by checking the collisions between the object 352 and the support surface. To resolve this, in the simulation, the 353 object is slightly separated from the support surface ( Figure 354 4 (a)). The detailed derivation of the maximum rotation θ max 355 VOLUME 4, 2016 is described in Section IV-B.
357
To determine the maximum rotation θ max for each rotation-358 axis candidate λ and rotation direction r dir , the object is ro-359 tated repeatedly by ∆φ until a collision occurs (Figure 4 (b)).
360
The rotation direction r dir for each rotation-axis candidate 361 may be clockwise (r cw ) or counterclockwise (r ccw ). Based 362 on λ, r dir , and ∆φ, the TFRotObj(·) function returns 363 the transformation matrix T ∆φ for rotating the object by 364 ∆φ with respect to the rotation axis candidate λ and in 365 the direction r dir . If the object collides with the support 366 surface before it is rotated to the limit angle φ obj , that is, 367 the CheckCollision(·) function returns true and θ max 368 is less than φ obj , the current rotation angle returned from 369 the RotObj(·) function is the maximum rotation θ max ; 370 otherwise, the limit angle φ obj is the maximum rotation θ max .
V. OBTAINING OBJECT POSES FOR POWER GRASPING 389
In the previous section, the rotation-axis candidates and their 390 suitability scores were obtained. Herein, we describe the 391 determination of rotation angles and object poses so that 392 space may obtained for power grasping (Algorithm 2). When 393 one robot arm attempts to rotate the target object, the robot 394 precision-grasps the object in the support-grasping pose G s . 395 Then, two robot arms power-grasp the object in the power-396 grasping poses G p1 , G p2 . As noted earlier, G s , G p1 , and G p2 397 are heuristically predefined with respect to the object; G p1 398 corresponds to robot1, G p2 to robot2, and G s to both robot1 399 and robot2 (Figure 1 (a)). has rotated ( Figure 5 (b)). That is, this pose represents the 428 case in which space is secured so that two grippers can 429 power-grasp the object in G p1 and G p2 , when the object is 430 rotated around λ.
432
Although the previously obtained object pose secures space 433 for the grippers to power-grasp the object, it does not ensure 434 that the gripper attached to each robot arm can approach 435 G p1 or G p2 . To determine whether this is feasible, the 436 IKSolPow(·) function yields inverse kinematics solutions 437 C p IK for the two robot arms reaching G p1 and G p2 when the 438 object is in P obj . Specifically, the solutions induce G p1 and 439 G p2 to coincide with the power-point frame that is defined 440 with respect to each gripper base (Figure 5 (a) and (c)). The 441 gripper has a mechanism that enables power-grasping the 442 object when the power-grasping pose G pi (i = 1, 2) coincides 443 with the power-point frame (c1).
444
Because of (c2), to rotate the object from the initial pose to 445 P obj , the gripper should precision-grap the object. Therefore, 446 it is necessary to determine whether the gripper can reach the 447 support-grasping pose G s when the object is in P obj . First, 448 one of the two robot arms is selected to rotate the object. If 449 d g1 > d g2 , then robot1 and robot2 become the master robot 450 arm R mas and the supportive robot arm R sup , respectively; 451 otherwise, R mas and R sup are assigned reversely. Subse-452 quently, inverse kinematics solutions C s IK are calculated for 453 the supportive robot arm R sup reaching G s when the rotated 454 object is located in P obj . That is, we obtain the solutions 455 whereby G s coincides with the precision point frame defined 456 with respect to the gripper base of the supportive robot 457 arm R sup . The IKSolSup(·) function performs the above 458 process and returns C s IK , R mas , and R sup . If both C p IK and 459 C s IK are valid, the corresponding T ∆φ , S rot , P obj , C p IK , C s IK , 460 R mas , and R sup are coupled and added to the set Ω so that 461 they can be used in the next algorithm.
463
In the previous section, P obj was obtained for each rotation 464 axis candidate, and inverse kinematics solutions were derived 465 so that the two robot arms could reach the power-grasping 466 points. Based on the set Ω, herein, the integrated paths for 467 pre-grasp manipulation are planned (Algorithm 3). When the 468 robot arm attempts to reach the gripping location, namely, 469 G p1 , G p2 , or G s , it initially approaches a position slightly 470 away from the gripping location, and then the robot reaches 471 the gripping location. Thereby, collisions between the gripper 472 and the object, which could occur if the gripper moved di-473 rectly to the gripping location, are avoided. If valid integrated 474 paths are obtained, pre-grasp manipulation can be performed 475 using real robots.
5:
if µ rot = ∅ then end foreach 9: end foreach 10: foreach (µ rot , S rot , P obj , C p IK , R mas , R sup ) ∈ M sup do 11: µ rot,first ← FirstPath(µ sup ) 12: µ rot,last ← LastPath(µ sup ) 13: if µ sup1 = ∅ and µ mas1 = ∅ and µ sup2 = ∅ then 18: end if 20: end foreach 21: end foreach path for rotating the object from P obj to P init . In fact, as the 483 supportive robot arm R sup should rotate the object from P init 484 to P obj , the order of the points in the obtained path should be 485 reversed. In Figures 5 (b) and (c), the points of the path µ rot 486 corresponding to P obj and P init are µ rot,last and µ rot,first , 487 respectively.
488
To determine the path µ rot , inverse kinematics solutions 489 corresponding to all the support-grasping poses shown in 490 Figure 5 (c) should be obtained. These poses are specified 491 whenever the object is rotated by ∆φ. To check whether the 492 obtained path µ rot is valid, it is necessary to observe the 493 change for each joint of the supportive robot arm R sup with 494 respect to adjacent points on the path µ rot . If the change in 495 all joint values is sufficiently small, µ rot is valid. An example 496 of path µ rot is shown in Figure 6 (a).
498
If the path µ rot is valid, the first and last configurations of 499 µ rot should be used so that a path can be obtained, whereby 500 the master robot arm R mas can power-grasp the object, and 501 the supportive robot arm R sup can regrasp the object in a 502 power-grasp manner. First, the FirstPath(·) function is used 503 to return µ rot,first , which is the first configuration of the 504 path µ rot . That is, the supportive robot arm R sup precision-505 grasps the object in P init . Using µ rot,first , the ReachPath1(·) 506 function returns the path µ sup1 planned to connect the initial 507 configuration and µ rot,first of R sup (Figure 6 (b)). Next, to 508 power-grasp the object using the master robot arm R mas , 509 the ReachPath2(·) function with c p IK ∈ C p IK is used to 510 obtain the path µ mas1 of R mas that connects the initial 511 configuration and c p IK of R mas (Figure 6 (c)). Then, the path 512 µ sup2 of the supportive robot arm R sup is obtained from the 513 ReachPath3(·) function using c p IK and µ rot,last (Figure 6 (d)); 514 µ rot,last returned from the LastPath(·) function is the last 515 configuration of the path µ rot , that is, the supportive robot 516 arm R sup precision-grasps the object located in P obj .
517
Based on the valid µ sup1 , µ rot , µ mas1 , and µ sup2 , the 518 integrated path for pre-grasp manipulation can be obtained, 519 and finally, power-grasping can be performed by securing 520 space for the grippers (Figure 6 (e)). The integrated path power-grasp these objects firmly, we used thick gripper tips 574 so that sufficient force may be applied against the palm to 575 hold the object even if the volume is small (Figure 8).
577
The proposed method secures space so that two grippers may 578 power-grasp an object in such a way that it is rotated while 579 touching the support surface, which sustains the weight of the 580 object. The rotation axis was not distorted in the simulation; 581 however, in experiments with real robot arms, there are cases 582 in which the rotation axis is distorted as the object is rotated. 583 That is, the contact between the support surface and part 584 of the rotation axis is broken while the object is rotated. 585 This is because the mesh model of the object is manually 586 constructed, and thus the shape of the real object is not 587 accurately represented. In addition, the real object must be 588 placed in the same pose as in the simulation so that it can 589 be rotated using the path derived in the simulation. However, 590 because the actual object is placed manually, an error in the 591 pose may occur. In addition, when the real robot arm power-592 grasps the object, it may move arbitrarily, and thus distortion 593 of the rotation axis can also occur. These factors cause 594 inconsistencies between the rotation axis in the simulation 595 and that in the real world, so that the derived path may not be 596 appropriate for manipulating the object as planned.
597
To address this, the gap between the simulation and the 598 real world should be reduced. To this end, A vision system 599 and the non-rigid registration and pose refinement methods 600 in [23] can be used. In addition, the in-hand pose estimation 601 used in [23] can be applied when the object moves arbitrarily 602 while the robot arm power-grasps the object. Based on this 603 estimation, the actual pose of the object can be calculated 604 such that the modified pose for power grasping can be deter-605 mined. Therefore, the success rate of power-grasping can be 606 improved. 608 We proposed pre-grasp manipulation planning involving two 609 robot arms to secure space for power grasping. One of the 610 conditions is that the gripping force of the precision grasp 611 using only one gripper is insufficient to lift the object stably. 612 Therefore, to obtain space for power grasping, the object was 613 rotated while being supported against the support surface. 614 To this end, rotation-axis candidates that allow the object 615 to rotate while touching the support surface were derived 616 from a mesh model of the object, and the rotation required 617 for the gripper to perform power grasping was calculated for 618 each rotation axis. The robot arm for rotating the object was 619 determined based on the distance between the rotation axis 620 and the power-grasping point of each arm. After the object 621 pose allowing power grasp by two grippers was obtained, 622 the path for rotating the object in this pose using one robot 623 arm was determined. Finally, the path for power-grasping 624 the object using two robot arms was derived. The proposed 625 method was verified through simulations and experiments 626 with two UR5e robot arms attached to a 2F-85 gripper. pre-grasp manipulation planning. Currently, the object is only 633 rotated for power grasping, but the proposed method can be 634 applied to any task in which the robot arm cannot grip an 635 object at a given location.
636
In the future, we intend to automatically, rather than manu-637 ally, determine power-grasping and support-grasping points. 638 Thereby, it is expected that pre-grasp manipulation planning 639 may be more effective, and the proposed method can be 640 easily applied to other objects. This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. ERIC DEMEESTER (Member, IEEE) graduated in 1999 as Mechanical Engineer, option mechatronics, at KU Leuven. His PhD research targeted user-adapted plan recognition and shared control for wheelchair driver assistance. Since 2018, he is associate professor at KU Leuven. He is one of the coordinators of the ACRO research group, which focuses on automation, computer vision and robotics in various sectors such as manufacturing, logistics, care, pharma, nuclear, and agriculture. His research interests include probabilistic state estimation, computer vision, collision-free path planning and decision-making under uncertainty, and increasingly also machine learning. | 7,112 | 2021-01-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Systemic inoculation of Escherichia coli causes emergency myelopoiesis in zebrafish larval caudal hematopoietic tissue
Emergency granulopoiesis occurs in response to severe microbial infection. However, whether and how other blood components, particularly monocytes/macrophages and their progenitors, including hematopoietic stem/progenitor cells (HSPCs), participate in the process and the underlying molecular mechanisms remain unknown. In this study, we challenged zebrafish larvae via direct injection of Escherichia coli into the bloodstream, which resulted in systemic inoculation with this microbe. The reaction of hematopoietic cells, including HSPCs, in the caudal hematopoietic tissue was carefully analysed. Both macrophages and neutrophils clearly expanded following the challenge. Thus, emergency myelopoiesis, including monopoiesis and granulopoiesis, occurred following systemic bacterial infection. The HSPC reaction was dependent on the bacterial burden, manifesting as a slight increase under low burden, but an obvious reduction following the administration of an excessive volume of bacteria. Pu.1 was important for the effective elimination of the microbes to prevent excessive HSPC apoptosis in response to stress. Moreover, Pu.1 played different roles in steady and emergency monopoiesis. Although Pu.1 was essential for normal macrophage development, it played suppressive roles in emergency monopoiesis. Overall, our study established a systemic bacterial infection model that led to emergency myelopoiesis, thereby improving our understanding of the function of Pu.1 in this scenario.
Zebrafish (Danio rerio) provides an ideal system for studying bacterial infection-stimulated immune response, particularly the interaction of phagocytes with microbes, owing to its optical transparency and the exclusive involvement of myeloid phagocytes in early embryos [10][11][12][13] . Using this model, the behaviours of phagocytes, including their response, mobility, movement, interaction, and engulfment, have been reported [10][11][12][13] . Recently, a study investigated emergency granulopoiesis through local infection of larvae in the hindbrain 10 . However, to our knowledge, myelopoiesis in systemically infected larvae and the role of HSPCs in this phenomenon have not been elucidated.
HSPCs emerge in zebrafish from approximately 36 hours post-fertilization (hpf) via endothelial-to-hematopoietic transition 14,15 . From 2 days post-fertilization (dpf), these HSPCs migrate to the caudal hematopoietic tissue (CHT), an organ transiently supporting hematopoiesis and the functional homolog of the fetal liver and placenta [16][17][18][19] . CHT refers to the lumen between the caudal artery (CA) and the definitive caudal vein (dCV). Structurally, it is a complex vascular network composed of a fibroblastic reticular-cell matrix, loose mesenchyme, and expanded blood progenitors [16][17][18][19] . After a transient stay, these HSPCs move to the kidney, which is the functional equipment of the bone marrow 17,19,20 , from 3-4 dpf on, and eventually, give rise to all blood components. Some of the HSPCs migrate to the thymus to generate T lymphoid cells 21 . Functional T lymphoid cells are detected after 6 dpf, and the adaptive immunity functions until the juvenile period 22 .
Taking advantage of the well-addressed process of hematopoiesis 20 , bacterial inoculation of early embryos provides an effective assay for investigating infection-induced emergency hematopoiesis, in addition to elucidating the behaviour of myeloid phagocytes. Consequently, an improved understanding of the cellular and molecular mechanisms responsible for emergency hematopoiesis can be achieved. In this study, we established a systemic infection model by injecting the non-pathogenic bacteria, Escherichia coli, into the bloodstream of zebrafish embryos. CHT [16][17][18][19] was examined to elucidate the influence of the bacteria on various phagocytes and their progenitors, including HSPCs.
Both macrophages and neutrophils in the CHT are involved in bacterial phagocytosis upon systemic infection.
To investigate the hematopoietic reaction following systemic inoculation of larvae with microbes, 2 dpf larvae were challenged by intravenous injection of Dsred-labelled E. coli 13,23 . A site close to the ear, where vessels are enriched (Fig. 1A), was chosen as the injection site to facilitate rapid inoculation of the microbes into the blood circulation. When the microbes were administered, tremendous amounts of circulating bacteria (D) or Tg(lyz:eGFP) (E) CHT from 0.5 hpi to 6 hpi. The white stars in (D) denote mpeg1-GFP + macrophages that engulfed large amounts of Dsred + E. coli. The white arrowheads in (E) indicate the initial aggregation of Dsred + E. coli on the surface of lyz-GFP + neutrophils, which were quickly phagocytosed. The red foci in (D,E) represent the phagocytosed bacteria. Scale bars, 20 μ m. See also Video S1 and S2.
were identified by intensive red fluorescent signals moving through the vessels (see supplementary Video S1 and S2). Gradually, the circulating microbes disappeared; instead, large clusters of Dsred + bacterial foci were detected 13,23 (Fig. 1B; see supplementary Video S1 and S2). In parallel with the disappearance of moving bacteria in the circulation, the number of large foci increased to approximately 16 in the CHT 16,19,24 at 1 day post-injection (dpi). However, this number decreased thereafter, and the bacteria almost completely disappeared by 6 dpi (Fig. 1B). The alteration of foci reflected bacterial phagocytosis and digestion by the myeloid phagocytes, which are the only immunocytes that function at these stages 13,23 . To monitor the behaviour of both myeloid phagocytes, Tg(mpeg1:eGFP) 25 and Tg(lyz:eGFP) nz117 26 , which specifically label macrophages and neutrophils, respectively, were exploited. The mpeg1-GFP + macrophages reacted immediately to interact with and engulf the microbes 13,23 (Fig. 1D, white stars). At approximately 30 minutes post-injection, the macrophages had engulfed numerous microbes, resulting in the formation of large red foci in the CHT ( Fig. 1D; see supplementary Video S1; white stars). Over time, the number of active macrophages increased markedly, and approximately 72% of them were observed to actively engulf microbes at 6 hours post-injection (hpi) (Fig. 1C), which was consistent with the numerous large E. coli foci observed in the CHT (Fig. 1B). Gradually, macrophages with a huge microbe burden underwent cell death, manifested by weakened and even lost GFP signals, and these scarified macrophages were quickly engulfed by their surrounding macrophages (see supplementary Video S1, white arrowheads). The lyz-GFP + neutrophils also phagocytosed bacteria following treatment with Tg(lyz:eGFP) nz117 26 . Interestingly, the phagocytic behaviour of neutrophils was distinct from that of macrophages. Neutrophils first aggregated the bacteria on their surface, resulting in their encircling by more extensive Dsred + signals ( Fig. 1E; see supplementary Video S2, white arrowheads), followed by engulfment of the bacteria 13 (see supplementary Video S2, white arrowheads). However, the reaction sensitivity and phagocytic ability of neutrophils were less efficient than those of macrophages 27 . Only approximately 37% of the total lyz-GFP + neutrophils in the CHT performed phagocytosis, which was half the rate determined for macrophages at similar time points (Fig. 1C). Therefore, both macrophages and neutrophils were involved in phagocytosis when encountering bacteria in the CHT.
Inoculation of microbes into the bloodstream leads to emergency granulopoiesis. Emergency granulopoiesis has been detected in larvae infected through the hindbrain 10 . Whether a similar phenomenon was recapitulated in the CHT during systemic infection was investigated. We first examined several neutrophil markers, including cebp1, lyz, mpx, and Sudan Black (SB) 28 , at 2 dpi. All examined neutrophil markers displayed a drastic increase in the treated larvae compared with the controls ( Fig. 2A; see supplemental Figure S1A-C). Moreover, the degree of neutrophil expansion depended on the bacterial burden. More significant expansion of SB + cells was observed with 5-10 × 10 3 colony-forming units (cfu) than with 5-10 × 10 2 cfu E. coli (see supplemental Figure S1D). However, when the E. coli volume reached 5-10 × 10 4 cfu, the larvae showed remarkable mortality (see supplemental Figure S1E), and approximately half of the surviving larvae presented obvious morphological abnormalities, exemplified by pericardium oedema (see supplemental Figure S1F). A larger volume of microbes caused a stronger neutrophil reaction, but an excessive burden led to fish abnormality and lethality. Therefore, 5-10 × 10 3 cfu E. coli was chosen as the dosage in all further experiments because it caused a remarkable immune response with few morphological defects and low lethality, thus permitting continuous investigation of the challenge-induced emergency hematopoietic reaction.
Next, the number of SB + neutrophils was calculated. A transient reduction of SB + neutrophils was detected at 6 hpi (Fig. 2B), suggesting their early exhaustion. To confirm this phenomenon, the challenged Tg(lyz:eGFP) nz117 26 were examined carefully. The lyz-GFP + population showed a similar reduction as that of SB + neutrophils (Fig. 2D). Further investigation revealed obvious phagocytosis of the Dsred + microbes by the lyz-GFP + neutrophils (Fig. 2C, white arrows) and a corresponding increase in the portion of lyz-GFP + /terminal deoxynucleotidyl transferase dUTP nick-end labelling (TUNEL) + cells in the microbe-treated larvae when compared with their controls (Fig. 2E). These results suggested that the early exhaustion of neutrophils was caused by their increased apoptosis while fighting the bacteria. Subsequently, the SB + neutrophil population began to expand beginning at 1 dpi, reaching their maximal level at 2-4 dpi, and then returning to physiological baseline numbers from 6 dpi on (Fig. 2B). The expansion of neutrophils indicated that emergency granulopoiesis occurred in response to systemic infection. CHT is a transient organ for definitive hematopoiesis [16][17][18][19] , and therefore, the origin of expanded neutrophils in this region was explored. Time-lapse images were acquired from 1 dpi for the treated Tg(lyz:eGFP) nz117 26 , and the data revealed the generation of nascent lyz-GFP + neutrophils in the niche adjacent to the CA ( Fig. 2F; see supplementary Video S4), where more immature progenitors resided 24 . The lyz-GFP + neutrophils showed weak signals initially that gradually increased in strength, indicating differentiation of the neutrophils after challenge. Meanwhile, the lyz-GFP + neutrophils divided more frequently in E. coli-treated larvae. An average of 8.00 ± 0.41 divisions were observed in four imaged larvae, which is approximately four-times higher than that in the control groups (only 2.33 ± 0.33 divisions were observed in three control larvae during the imaging time window) ( Fig. 2F; see supplementary Video S3 and S4). As a result, the lyz-GFP + neutrophil numbers increased. The in situ expansion of neutrophils in the CHT predicted a definitive hematopoietic origin 10 . To test our hypothesis, we utilised a runx1 w84x mutant, in which definitive hematopoiesis is abolished 24 . The results revealed no detectable expansion of SB + neutrophils in challenged runx1 w84x larvae compared with the control (Fig. 2G). This finding suggested that the expanded neutrophils were largely generated from runx1-regulated definitive hematopoiesis 24 , in agreement with a previous study 10 . The enhanced output of neutrophils supported the activation and expansion of their progenitors. This hypothesis was verified by the significantly enhanced output of both pu.1 + and cebpα + myeloid progenitors at 2 dpi ( Fig. 2H; see supplemental Figure S2A). Further calculations demonstrated that the increase in myeloid progenitors was initiated as early as 1 dpi (Fig. 2I), a time point prior to the tremendous output of neutrophils (Fig. 2B). However, the myeloid progenitors did not display a reduction at 6 hpi ( Fig. 2I), suggesting limited exhaustion of myeloid progenitors in the initial fight against the bacteria. The expansion of myeloid progenitors predicted their higher proliferation upon challenge. To support Scientific RepoRts | 6:36853 | DOI: 10.1038/srep36853 this hypothesis, Tg(coro1a:eGFP) 29 , which marks both myeloid phagocytes and their progenitors, was evaluated by anti-phospho-histone H3 (pH3) antibody staining 30 . A remarkable increase in pH3 + /coro1a-GFP + cells was detected in the microbe-treated larvae, when compared with the control, at 1 dpi (see supplemental Figure S2 B,C). Together, these results suggested that the myeloid progenitors were activated, and emergency granulopoiesis occurred in the larvae following systemic infection.
Distinct reactions of the HSPC compartment in response to different bacterial burdens.
Because no specific HSC-labelling method is available in zebrafish, we mainly utilised Tg(runx1:en-GFP), in which runx1-GFP largely marks the HSPC compartment 31 , to dissect their reactions in our assay. To our surprise, runx1-GFP + cells did not show an obvious increase in the CHT after challenge by 5-10 × 10 3 cfu E. coli were administered. When the volume was 5-10 × 10 2 cfu, the runx1-GFP + HSPCs showed a notable expansion (Fig. 3A,C). Surprisingly, when the burden increased, the number of HSPCs decreased, and 5-10 × 10 4 cfu E. coli led to a significant reduction in runx1-GFP + cells in surviving larvae with a normal appearance at 2 dpi ( Fig. 3A,C). Similar alterations were also observed in cmyb + cells (Fig. 3D,F). Together, these results indicated that the HSPC reaction was dependent on the bacterial burden. Next, the possible mechanisms underlying the HSPC reactions were examined. Several inflammatory cytokines have been suggested to be critical for the activation of HSPC proliferation 5,35-37 ; however, their overproduction causes HSPC apoptosis 5,6,38,39 . Therefore, the expression levels of various cytokines were measured in response to different bacterial burdens by quantitative real-time polymerase chain reaction (qPCR). The results revealed an obvious increase in the expansion of these factors, including tnfα, ifng1-2, and il1b (see supplemental Figure S3D), following a more severe challenge at 2 dpi. This result was consistent with the reaction of HSPCs, further supporting that an optimal level of inflammatory factors is essential for their homoeostasis and that overdose might lead to their exhaustion 5,6,37-39 .
Emergency monopoiesis occurs during systemic infection. Similar emergency granulopoiesis,
but with a different HSPC reaction, was observed in our assay compared with the results of a previous study 10 . Whether this difference was caused by the distinct infection methods or variations in the microbes used was explored further. When the microbes, whether they were Salmonella typhimurium or Dsred-labelled E. coli 13,23 , were injected into the 2 dpf larval hindbrain or the blood circulation, emergency granulopoiesis occurred, as evidenced by clear expansion of SB + neutrophils (see supplemental Figure S4A). Thus, emergency granulopoiesis occurred following either brain or systemic infection of the larvae. When the mpeg1 + macrophages were examined, no expansion of mpeg1 + macrophages was observed in the CHT when the larvae were challenged via the hindbrain by both microbes (see supplemental Figure S4B) 10 , indicating that brain infection did not lead to emergency monopoiesis in the CHT. However, intravenous injection of microbes caused a tremendous expansion of mpeg1 + macrophages in the CHT (see supplemental Figure S4B). Thus, emergency monopoiesis, in addition to granulopoiesis, likely occurred when the microbes were systemically inoculated. This conclusion was further confirmed by the drastic expansion of another macrophage marker, mfap4, at 2 dpi (Fig. 4A,B). The reasons for the different reactions of the macrophages in response to different infection methods were investigated further. Although an obvious restriction of the bacteria was detected in the injured brain, hindbrain administration led to limited circulating microbes in the CHT, in contrast to the drastic increase in circulating microbes after intravenous injection (see supplemental Figure S4C,D). Therefore, the tremendous number of bacteria in the circulation and the intensive involvement of macrophages in bacterial phagocytosis (Fig. 1D) probably provided cues that led to emergency monopoiesis. To carefully dissect the process of emergency monopoiesis, mfap4 + and mpeg1-GFP + cells were quantified. Similar to the fluctuation of neutrophils, mfap4 + and mpeg1-GFP + macrophages 25 displayed an initial exhaustion (Fig. 4B,D), which was in agreement to their intensive involvement in phagocytosis and resultant increased apoptosis (Fig. 4C,E). Subsequently, a significant expansion of mfap4 + macrophages followed (Fig. 4B). However, their recovery to baseline was faster than that of SB + neutrophils. At 4-6 dpi, the macrophage numbers had already declined to the levels of the control (Fig. 4B), approximately 2 days earlier than the neutrophils (Fig. 2B). Other lineage markers presented no obvious alterations (see supplemental Figure S5). Overall, systemic infection of microbes caused both emergency monopoiesis and granulopoiesis, which are collectively referred to as emergency myelopoiesis.
Emergency monopoiesis is achieved through the expansion of primitive myeloid cells. The definitive hematopoietic origin of the emergency granulopoiesis suggested a similar origin for emergency monopoiesis. To verify this hypothesis, time-lapse imaging was performed in the infected Tg(mpeg1:eGFP) larval CHT. Expansion of mpeg1-GFP low macrophages was detected, and their numbers increased upon infection ( Fig. 4F; see supplementary Video S5). These mpeg1-GFP low macrophages should have been nascent. However, they highly expressed GFP signals following engulfment of bacteria ( Fig. 4F; see supplementary Video S5). The mpeg1-GFP low macrophages suggested the definitive hematopoietic origin of emergency monopoiesis. However, when the mfap4 + macrophages were examined in the runx1 w84x 24 , they presented a surprisingly remarkable expansion following challenge, and the numbers of expanded macrophages were similar to those in the siblings (Fig. 4G,H). This macrophage phenomenon is the converse of that observed for neutrophils in similar mutant larvae, suggesting that the emergency monopoiesis was largely independent of runx1-mediated hematopoiesis and that these cells were probably generated from primitive myeloid cells. Thus, emergency granulopoiesis and monopoiesis at this stage had different origins. The immune response and hematopoiesis-related factors are transcriptionally influenced after challenge. The molecular mechanisms underlying emergency myelopoiesis were further explored. To this end, deep-sequence analysis was performed using samples collected at successive time points after treatment. Three typical representative time points-6 hpi, 1 dpi, and 4 dpi-were chosen on the basis of both the alteration of myeloid phagocytes and the level of bacterial clearance. The results indicated that large amounts of factors changed during the different stages after challenge (Fig. 5A,B; Table S1 and S2). Altered factors functioning in bacterial defense and hematopoiesis were examined further. The heat-map results revealed that dozens of bacterial defense-related and hematopoiesis-related genes were transcriptionally modified throughout the process ( Fig. 5C and Table S2). These genes could be classified mainly into two types. The first type showed a typical increase at 6 dpi but a quick reduction thereafter. Nos2b and Duox-two important factors that are closely involved in the formation of H 2 O 2 and NO-were the representative examples (Fig. 5C), suggesting essential roles for small molecules as initial emergency signals 10,34,40 . Another type manifested an initial reduction followed by an obvious increase at later stages. This group of factors accounted for a large portion of the total members, and was typified by pro-inflammatory cytokines such as il1b and mmp9, as well as most hematopoiesis-related genes (Fig. 5C). qPCR was performed to validate the deep-sequence analysis data. The results revealed similar alterations in the expression of key factors, including tlr5a, mpx, il1b, mm9, irf8, csf3r, and pu.1, to that in the deep-sequence analysis results (Fig. 5D). However, the HSPC marker cmyb exhibited a slight upregulation (< 2-fold) (Fig. 5D), which was consistent with its behaviour in the deep-sequence analysis, and this finding further supported the results suggesting limited alterations of HSPC numbers in our assay. myelopoiesis was explored because it presented significant increases in expression levels after infection (Fig. 5C,D). To this end, Pu.1 was functionally disrupted using either pu.1 G242D/G242D hypomorphic alleles or morpholinos (MOs) knockdown 41,42 . In agreement with previous studies 43, 44 , the large phagocytic foci that appeared in wild type (WT) embryos (see supplementary Video S6) were detected in small numbers, and the clearance of E. coli was much slower in challenged Pu.1-deficient embryos ( Fig. 6A; see supplementary Video S7). Furthermore, these infected embryos showed the highest mortality (Fig. 6I). Together, these results suggested the presence of defective phagocytosis and immune responses in the absence of normal Pu.1 function 43,44 . Next, emergency myelopoiesis in challenged pu.1-deficient larvae was examined. Because deficiency in Pu.1 activity resulted in obvious defects in macrophage development but an expansion of the neutrophil population in larvae during the steady state 41 , we predicted that Pu.1 was probably dispensable for emergency granulopoiesis but played critical roles in emergency monopoiesis. To test this hypothesis, emergency granulopoiesis was first examined. The results indicated that, even without the normal function of Pu.1, emergency granulopoiesis took place, as evidenced by the fact that the SB + and lyz + neutrophils and pu.1 + and cebpα + myeloid progenitors in pu.1-deficient larvae expanded noticeably to the level of their counterparts in infected WT larvae at 2 dpi (Fig. 6B-E; see supplemental Figure S6A,B). The baseline of neutrophils and their progenitor population in these larvae was higher than the baseline in WT 41 (Fig. 6C,E), indicating that the expansion potential of neutrophils after infection was smaller than that in WT (Fig. 6H). However, emergency granulopoiesis still occurred, although the intensity was not as strong as that in WT. Thus, Pu.1 was largely dispensable for emergency granulopoiesis. However, when the macrophage marker mfap4 was checked, the mfap4 + cells that were markedly reduced in PBS-treated pu.1 G242D/G242D larvae presented a surprisingly drastic expansion after infection (Fig. 6F,G). Their number was even higher than that in the infected WT larvae at 2 dpi (Fig. 6G). Consistently, the expansion potential of mfap4 + cells was much higher than that in WT (Fig. 6H). Other macrophage markers, including csf1ra and mpeg1, displayed similar expansion in the E. coli-challenged pu.1 G242D/G242D larvae (see supplemental Figure S6C,D), further confirming that the macrophage lineage dramatically expanded in the emergency condition when Pu.1 was defective. Therefore, Pu.1 functioned differently during the demanding situation of monopoiesis. Although Pu.1 is essential for normal macrophage formation, this finding revealed its suppressive roles during emergency monopoiesis.
Protective roles of Pu.1 in HSPC survival following challenge. A deficiency in the efficient clearance of bacteria and the resulting greater severity of bacteraemia-like syndrome in Pu.1-deficient embryos probably increased the exposure of HSPCs to microbes in the CHT. We were interested in the reaction of the HSPCs in this scenario. To this end, Tg(runx1:en-GFP) larvae were treated with control and pu.1 MOs. The response of runx1-GFP + cells following bacterial challenge was investigated. Approximately 4% of the runx1-GFP + cells were observed to engulf bacteria in the control group. However, this population clearly expanded in the pu.1 morphants (Fig. 7A,B), suggesting that HSPCs could directly interact with microbes in the CHT and that during more severe infection, more HSPCs were involved. Consequently, more runx1-GFP + cells underwent apoptosis, as evidenced by the increased percentage of TUNEL + /runx1-GFP + apoptotic cells in the challenged pu.1 morphants, and the level was higher than that detected in either the infected control embryos or the PBS-treated pu.1 morphants (Fig. 7C,D). Therefore, Pu.1 was critical for HSPC survival following challenge. To confirm this conclusion, another HSPC marker, cmyb, was evaluated. In agreement with the findings in runx1-GFP + HSPCs, cmyb-GFP + cells underwent similar excessive apoptosis in infected pu.1 morphants (see supplemental Figure S6E,F). However, the cmyb-GFP + cells underwent similar proliferation in both the control and pu.1 morphants treated with either PBS or E. coli (see supplemental Figure S6G,H) when examined using anti-pH3 antibody staining 30 , suggesting a dispensable role for Pu.1 in infection-induced HSPC proliferation. To accurately present the data, the cmyb + cells were quantified. The cmyb + cell number was markedly lower in the treated pu.1 morphants than in the control group at 2 dpi (Fig. 7E,F), which was consistent with their increased apoptosis (see supplemental Figure S6E,F). A similar phenomenon was recapitulated in pu.1 G242D/G242D embryos, although the reduction of cmyb + cells was less drastic in pu.1 G242D/G242D than in pu.1 morphants (Fig. 7F,G). This result is likely a consequence of the partial disruption of Pu.1 activity in pu.1 G242D/G242D compared with the more severe disruption resulting from a high dose of pu.1 MOs 41 . Thus, Pu.1 was essential for the efficient clearance of microbes, which in turn prevented over-exposure of HSPCs to microbes. This procedure is quite important for HSPC homeostasis after E. coli challenge.
The reduction of HSPCs but expansion of myeloid progenitors in the challenged Pu.1-deficient larvae appeared to be contradictory, which led us to suspect that E. coli affects HSPCs and myeloid progenitors in distinct manners. To verify our hypothesis, co-staining of cmyb-GFP and pu.1 was performed in Tg(cmyb:eGFP). In the control group treated with PBS, the majority of the cmyb-GFP + cells expressed pu.1 signals, resulting in a small ratio (approximately 23%) of pu.1 + -only myeloid progenitors (see supplemental Figure S7). However, upon challenge, the pu.1 + -only cell population increased dramatically to approximately 41% (see supplemental Figure S7), suggesting that the pu.1 + -only myeloid progenitors themselves underwent notable expansion upon challenge. Concordantly, in E. coli-challenged pu.1 morphants, pu.1 + -only myeloid progenitors showed drastic expansion, but cmyb-GFP + HSPCs showed a clear reduction when compared with their control counterparts (see supplemental Figure S7). Consequently, the percentage of pu.1 + -only myeloid progenitors increased to approximately 52%, which was much higher than that of the other groups (see supplemental Figure S7). Therefore, the HSPCs and myeloid progenitors separately responded to the microbes, and Pu.1 deficiency led to a reduction in HSPCs but showed a limited influence on the expansion of the myeloid progenitors.
Discussion
Taking advantage of the optical transparency of zebrafish larvae, an emergency myelopoiesis model was established through direct injection of Dsred + E. coli 13,23 into the circulatory system. Although intravenous injection of microbes has been employed by several groups [10][11][12][13] , infection-induced myelopoiesis has rarely been the experimental focus. Recently, Kathryn E. Crosier's group dissected the role of the Cebpβ -Nos2a pathway in demand-adapted emergency granulopoiesis by injecting GFP + Salmonella into the brains of larvae. In that study, emergency granulopoiesis was achieved by sacrificing lymphopoiesis, and HSPCs clearly increased under this condition; by contrast, macrophages showed no notable increase in the trunk region 10 . This work facilitated the initiation of research investigating infection-induced hematopoiesis using larval zebrafish 45 . However, the reaction of hematopoietic cells to systemic infection, particularly when HSPCs directly encountered microbes in CHT, has not been addressed. In our study, direct inoculation of microbes into the zebrafish bloodstream led to the expansion of both macrophages and neutrophils. Thus, emergency monopoiesis, in addition to emergency granulopoiesis, occurred with the use of this method, which could serve as a good supplementary assay to study emergency myelopoiesis 10 .
In contrast to locally infected larvae 10 , direct injection of bacteria into the circulation led to the development of a bacteraemia-like syndrome, which caused immediate and significant participation of macrophages and neutrophils in phagocytosis and digestion of microbes 13,23,27 . The intensive involvement of myeloid phagocytes led to their increased apoptosis and quick exhaustion, which was probably the cue for their subsequent expansion 46 . The expanded macrophages and neutrophils in the challenged larvae were probably of different origins, as suggested by the data obtained for runx1 w84x 24 . Almost no neutrophils were found in runx1-deficient larvae, regardless of whether they were challenged by microbes, thus supporting a runx1-dependent definitive hematopoietic origin of granulopoiesis under both physiological and stressed conditions. The macrophages slightly expanded in the steady state 41 , and emergency monopoiesis occurred normally in the runx1 mutant. This result suggested that the macrophages at this stage were largely generated from primitive hematopoiesis, which occurred independently of runx1. A recent study has demonstrated that the microglia, a subtype of macrophages in the brain, mainly originates from primitive myelopoiesis throughout the larval period 47 . In another study, a mutant fish line with compromised definitive hematopoiesis showed a limited influence on macrophages at later larval stages 48 . Thus, it is possible that larval macrophages have a largely primitive origin.
The HSPCs directly interacted with the microbes in the CHT in our assay. And their response was dependent on the bacterial burden. A relatively lower volume of E. coli led to a moderate expansion of HSPCs, which is consistent with previous reports 10,33,34 . However, excessive stress caused by a large microbial burden resulted in a drastic exhaustion of HSPCs. The distinct reactions of the HSPCs to different bacterial burdens were probably related to the severity of the direct exposure of HSPCs to the microbes and the overproduction of pro-inflammatory cytokines. Because a higher dose of E. coli would overcome the clearance by phagocytes and lead to the production of excessive levels of pro-inflammatory cytokines, it can be inferred that the longer and stronger influence of microbes on HSPCs in the CHT and the overproduction of pro-inflammatory cytokines probably facilitated their apoptosis. This hypothesis was further supported by the reduction of HSPCs in the infected Pu.1-deficient embryos. The functional defects of the macrophages in Pu.1-deficient embryos resulted in the slower clearance of E. coli 43,44 . Consequently, the interaction between HSPCs and pathogens was prolonged. Concordantly, increased apoptosis of HSPCs occurred, overcoming the cell proliferation and leading to a reduction of cell numbers. The increased apoptosis was correlated with excessive production of inflammatory factors, including IFNγ and TNFα (see supplemental Figure S6E). Appropriate levels of IFNγ and TNFα are essential for the activation of HSPC proliferation 5,35-37 . However, their overproduction causes rapid HSPC apoptosis 5,6,38,39 . Thus, the drastically altered levels of IFNγ and TNFα were probably responsible for the increased apoptosis of HSPCs.
Pu.1 is indispensable in the commitment of myeloid cells 41,49,50 and in leukaemogenesis 51,52 . However, its function in infection-induced emergency myelopoiesis had not been addressed. Taking advantage of pu.1 G242D/G242D and morpholino-mediated functional disruption, the roles of Pu.1 in infection-induced emergency myelopoiesis were carefully dissected. Surprisingly, compared with the insensitive expansion of neutrophils, macrophages with significant physiological shortcomings in the presence of defective Pu.1 presented drastic expansion after infection, and their numbers quickly exceeded the values determined in infected WT embryos. Pu.1 seemed to be an inhibitory regulator for the infection-induced expansion of macrophages. Thus, it played distinct roles in physiological and emergency monopoiesis. A previous study has demonstrated that Cebpβ plays different roles in physiological and emergency granulopoiesis because its deficiency leads to ineffective emergency granulopoiesis, although it is dispensable for the normal development of neutrophils 2,10,53 . The data obtained for cebpβ and pu.1 suggested that the regulatory networks underlying emergency myeloid cell development differed from that utilised in the steady state. Therefore, elucidation of the mechanisms responsible for emergency myelopoiesis is an interesting topic for further investigation. Fish lines. AB, pu.1 G242D41 , runx1 w84x 24 , Tg(runx1:en-GFP) 31 , Tg(cmyb:eGFP) 32 , Tg(coro1a:eGFP) 29 , Tg(mpeg1: eGFP) and Tg(lyz:eGFP) nz11726 strains were used and maintained under standard conditions. Tg(mpeg1:eGFP) lines. 4.1-kb DNA sequence upstream of the mpeg1 translation start site amplified with the primers 5′ -ACATGCATATCTTGCAGTATA-3′ /5′ -GATCGCCAGATGGGTGTTTT-3′ was used as a promoter to drive eGFP expression in the pTol2 vector. The pTol2-mpeg1-eGFP construct was injected into the wild-type fish embryos at one-cell stage. The embryos with an appropriate GFP expression were selected and raised to adults. The founder lines were identified based on their eGFP expression pattern. Phagocytosis assays and time-lapse live imaging. The Dsred-labeled E. coli 23 were cultured as previously described 43 . The cultured E. coli were collected in filter-sterilized PBS prior to the injection. To quantify the burdens, the volume of E. coli for injection was added to 1 ml LB and then plated at 1:10 and 1:100 dilutions on LB agar supplemented with 50 mg/ml kanamycin. Colonies were counted in plates incubated at 37 °C overnight to quantify the actual infection doses. The E. coli volume of each concentration was then microinjected into the circulation of each anesthetized embryo. The injected embryos were anesthetized, mounted in 1% agarose, and subsequently imaged under an LSM700 confocal microscope (Carl Zeiss) (X20 objective). Images were captured every 5 min, extracted, and converted into a movie using ZEN2012 software. Movie Maker was used to create the movies.
Double fluorescence immunohistochemistry staining and terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL). Double fluorescence immunohistochemistry staining of larvae was performed 24 . For eGFP and pH3 double staining, Tg(cmyb:eGFP) embryos treated with control or pu.1 morpholinos 41 were fixed in 4% paraformaldehyde at the desired stages. The fixed embryos were incubated with primary rabbit anti-phospho-histone H3 (1:250, 4 °C, overnight) (pH3; Santa Cruz Biotechnology, sc-8656-R) and goat anti-GFP (1:400, 4 °C, overnight) (Abcam, ab6658) antibodies according to the manufacturer's protocol and subsequently stained with Alexa Fluor 647 anti-rabbit and Alexa Fluor 488 anti-goat secondary antibodies (Invitrogen). For the TUNEL assays, the in situ cell death detection kit, TMR Red (Roche 12156792910), was applied. The staining process was performed as indicated in the protocol. All fluorescence images were obtained using an LSM700 confocal microscope (Carl Zeiss).
Differentially expressed gene (DEG) analysis.
To explore the molecules involved after infection, E. colitreated embryos were selected at 6 hpi, 24 hpi and 4 dpi. Their total RNA was extracted for deep sequencing by the Biomarker Company, Beijing. The differentially expressed genes (DEGs) between any 2 samples were identified based on the following two criteria. 1) The expression value (FPKM) of the DEG must be larger than 1 in both samples, which indicates that the gene is active in these samples and that the detected expression values are not caused by background noise (for example, read mismatching or multi-hit alignment). 2) The variation in gene expression between the two conditions should be larger than 2-fold. Based on these two criteria, we identified 1441 DEGs in at least one comparative case (see Supplementary Table S1 and S2). Gene set enrichment analyses were performed for the functional annotation of the DEGs. Functional annotation tools in DAVID Bioinformatics Resources 54 were used to conducted these analyses. | 7,711.2 | 2016-11-11T00:00:00.000 | [
"Biology",
"Medicine"
] |
A Novel Low-Complexity Low-Latency Power Efficient Collision Detection Algorithm for Wireless Sensor Networks
Collision detection mechanisms in Wireless Sensor Networks (WSNs) have largely been revolving around direct demodulation and decoding of received packets and deciding on a collision based on some form of a frame error detection mechanism, such as a CRC check. The obvious drawback of full detection of a received packet is the need to expend a significant amount of energy and processing complexity in order to fully decode a packet, only to discover the packet is illegible due to a collision. In this paper, we propose a suite of novel, yet simple and power-efficient algorithms to detect a collision without the need for full-decoding of the received packet. Our novel algorithms aim at detecting collision through fast examination of the signal statistics of a short snippet of the received packet via a relatively small number of computations over a small number of received IQ samples. Hence, the proposed algorithms operate directly at the output of the receiver's analog-to-digital converter and eliminate the need to pass the signal through the entire. In addition, we present a complexity and power-saving comparison between our novel algorithms and conventional full-decoding (for select coding schemes) to demonstrate the significant power and complexity saving advantage of our algorithms.
Introduction
Most research activities in WSNs focus on maximizing the network lifetime and minimizing the power con-sumption since they are powered by using finite energy sources (e.g., batteries).In this regard, some efforts deal with the routing schemes in order to rout a packet through most energy efficient links from a source node to a destination ( [1]- [3]), while other researches extensively explore MAC schemes ([4]- [6]) which efficiently reduce packets collisions.However, MAC layer schemes intrinsically cannot eliminate all kind of collisions, because of hidden nodes problems, as well as collisions when multiple nodes sense the medium free at the same time followed by transmitting their packets.Hence the collision may occur at the receiver where it is difficult to distinguish between desirable and interferers signals.
Authors in [7], investigate the effect of interference signals on decoding power.They suggest adapting the decoder power based on the communication range.That is the decoder power needed to be increased, while transmitter power needs to be decreased for short rang communication systems.Authors in [8], design LDPC decoder architecture for low power WSNs.They suggest different LDPC codes and analyze the energy saving for encoded communication system.Their analysis shows how the decoder power levels affect the Bit Error Rate (BER).Author in [9], investigate the trade-off between the transmission power and decoding power in WSNs by employing convolutional codes with a specific ECC complexity in order to extend the network lifetime.In [10], the authors studied the relationship between the number of received bits and the decoder power consumption using LDPC codes in WSNs.Their analysis shows a large improvement in the network lifetime up to four times with LDPC codes which is more efficient than the convolutional and block codes.A power management technique at the receiver side in WSNs has been presented in [11].Authors used rateless codes to minimize the power consumption, and their analytical results showed up to 80% of energy that is saved in comparison with IEEE 802.15.4 physical layer standard.Some efforts (e.g.[12]) focused on the actual design of LDPC decoder where early stopping methods are proposed in order to reduce the number of unnecessary iterations when decoding received packets.Such method is efficient in low SNR but it has a limitation when SNR is high.
Error correction schemes in wireless communication systems increase the reliability between a transmitter and receiver by reducing the probability of error.Reducing the probability of error can be achieved by increasing the transmit power or using a complex decoder that consume too much power to decode every received packet correctly.However, in limited power recourses networks such as WSNs, such increase in the transmit power as well as decoding power are not efficient since it contradicts with the design objective of WSNs which aims at energy-efficient solutions.Hence, in WSNs a fundamental trade-off exists between the transmitter and receiver power that should be considered to enhance the network lifetime.
One of the main sources of overhead power consumption in WSNs is collision detection.When multiple sensors transmit at the same time, their transmitted packets collide at the central node (the receiver) [13].Authors in [14] use out of band control channel to indicate the transmission status (i.e.active state) for sensors which have packets ready to be transmitted.Sensors sense the control channel to detect collision.However, such technique is not accurate to detect collisions that may occur at the receiver.In addition, current collision detection mechanisms have largely been revolving around direct demodulation and decoding of received packets and deciding on a collision based on some form of a frame error detection mechanism, such as a CRC check [15].The obvious drawback of full decoding of a received packet is the need to expend a significant amount of energy and processing complexity in order to fully-decode a packet, only to discover the packet is invalid and corrupted due to collision.So, decoding of corrupted packets becomes useless and provides the main cause of unnecessary power consumption.
In this paper we pose the following questions: Can we propose a power-efficient technique to detect packets' collisions at the receiver side of WSNs without the need for full-decoding of received packets?Further, can we eliminate the need to pass corrupted packets through the entire demodulator/decoder? From the perspective of achieving an efficient collision detection scheme at the receiver-side of WSNs we propose a suite of novel, yet simple and power-efficient algorithms to detect a collision without the need for full-decoding of the received packet.Our novel approach aims at detecting collision through fast examination of the signal statistics of a short snippet of the received packet via a relatively small number of computations over a small number of received IQ samples.Hence, operating directly at the output of the receiver's analog-to-digital-converter (ADC) and eliminating the need to pass the signal through the entire demodulator/decoder line-up.Figure 1 illustrates where we apply our proposed scheme.In addition, we present a complexity and power-saving comparison between our novel Statistical Discrimination (SD) algorithms and conventional Full-Decoding (FD) algorithm (i.e.Soft Output Viterbi Algorithm) to demonstrate the significant power and complexity saving advantage of our scheme.Accordingly, our novel SD scheme has the following advantages: • The SD scheme not only reduces processing complexity and hence power consumption, but it also reduces the latency incurred to detect a collision since it operates on only a small number of samples-that may be chosen to be in the beginning of a received packet-instead of having to buffer and process the entire packet as is the case with a Full-Decoding (FD) algorithms.• The SD scheme does not require any special pilot or training patterns.It operates directly on the (random) data, i.e., the received packet as is.• With a relatively short measurement period, the SD scheme can achieve low False-Alarm and Miss probabilities.It achieves a reliable collision-detection mechanism at the receiver-side of WSNs in order to minimize the receiption power consumption.• The SD scheme can be turned over various design parameters in order to allow a system designer multiple degrees of freedom for design trade-offs and optimization.The remainder of this paper is organized as follows.Section 2 describes our system.In section 3, we explain the proposed algorithms and show how to select a system threshold level.In section 4, we evaluate the power saving based on our proposed algorithms.In addition, we compare the computational complexity of our algorithms against commonly used decoding techniques (e.g., Soft Output Viterbi Algorithm, or SOVA).In section 5, we provide analysis and numerical empirical characterization to provide some quantitative theoretical framework and shed some light on the behavior of the various system factors and parameters involved in our proposed algorithms.In section 6, we present performance results, and finally in section 7 we provide the conclusion for this paper.As seen in Figure 2, there are N wireless sensors that communicate to the central node, where at any point in time, multiple packets may accidentally arrive simultaneously and cause a collision.Without loss of generality, we shall assume for the sake of argument that one sensor is denoted a "desirable" sensor, while the rest of the colliding sensors become "interferers".We assume the maximum number of sensors i.e., N = 30.This number can be tuned as required is order to meet designers' requirements.
System Description
A commonly accepted model for packet arrivals, i.e., a packet is available at a sensor and ready to be transmitted, is the well-known Bernoulli-trial-based arrival model, where at any point in time, the probability that a sensor has a packet ready to transmit is α.
Upon the receipt of a packet, the central node processes and evaluates the received packet and makes a decision on whether the packet is a collision-free (good) or has suffered a collision (bad).In this paper, we propose a suite of fast collision detection algorithms where the central node evaluates the statistics of the received signal's IQ samples at the output of the receiver's ADC directly using a simple SD scheme (as will be explained in more detail in the following sections), saving the need to expend power and time on the complex modem line-up processing (e.g., demodulation and decoding).If the packet passes the SD test, it is deemed collision-free and undergoes all the necessary modem processing to demodulate and decode the data.Otherwise, the packet is deemed to have suffered a collision, which in turn triggers the central node to issue a NACK message per the mechanism and rules mandated by the specific multiple-access scheme employed in the network.
It is noted that the actual design details and choice of the multiple access mechanism, e.g., slotted or un-slotted Aloha, are beyond the scope of this paper and irrelevant to the specifics of the technique proposed herein.
Algorithm Description
As mentioned earlier, our proposed algorithms are based upon evaluating the statistics of the received signal at the receiver ADC output via the use of a simple statistical discrimination metrics calculation that are performed on a relatively small portion of the received IQ packet samples.The resulting metrics values are then compared with a pre-specified threshold level to determine if the statistics of the samples of received signals reflect an acceptable signal-to-interference-plus-noise ratio (SINR) from the decoding mechanism perspective.If so, the packed is deemed collision-free and qualifies for further decoding.Otherwise, the packet is deemed to have suffered a collision with other interferer(s) and is rejected without expending any further processing/decoding energy.A repeat request may then be issued so the transmitting sensors to re-try depending on the MAC scheme.In other words, the idea is to use a fast and simple calculation to determine if the received signal strength (RSS) is indeed due to a single transmitting sensor that is strong enough to achieve an acceptable SINR at the central node's receiver, or the RSS is rather due to the superposition of the powers of multiple colliding packets, hence the associated SINR is less than acceptable to the decoding mechanism.
Let's define the k th received signal (complex-valued) IQ sample at the access node as: Is a complex-valued quantity that represents the k th IQ sample component contributed by the desired sensor, and 2) Moment based metric1 : 3) Signal Dynamic-Range Maximum-Minimum based metric: The computed statistical discrimination metric is then compared with a pre-specified threshold value that is set based on a desired signal-to-interference-plus-noise ratio (SINR) cut-off assumption2 , _ cut off
SINR
. That is (and as will be described in more detail later in this paper), a system designer pre-evaluates the appropriate threshold value that corresponds to the desired _ cut off
SINR
. If the SD metric value is higher than the threshold value, then the SD metric value reflects a SINR that is less than
SINR
and the packet is deemed not usable, and vice-versa.Accordingly, a "False-Alarm" event occurs if the received SINR is higher than _ cut off SINR but the SD algorithm erroneously deems the received SINR to be less than _ cut off
SINR
. On the other hand, if the SD algorithm deems the SINR to be higher than
SINR
while it is actually less than
SINR
, a "Miss" event is encountered.Miss and False-Alarm probabilities directly impact the overall system performance as will be discussed in the following sections.Therefore, it is desired to minimize such probabilities as much as possible.
Threshold Selection
The decision threshold is chosen based on evaluating the False-Alarm and Miss probabilities and choosing the threshold values that satisfy the designer's requirements of such quantities.For example, we generate, say, a 100,000 Monte-Carlo simulated snapshots of interfering sensors (e.g., 1 -30 sensors with random received powers to simulate various path loss amounts) where for each snapshot we compute the statistical discrimination value for the received SINR and compare it with various threshold levels, determine if there is a corresponding False-Alarm or Miss event and record the counts of such events.At the end of the simulations the False-Alarm and Miss probabilities are computed and plotted versus the range of evaluated threshold values, which in-turn, enables the designer to determine a satisfactory set point for the threshold.
Power Saving Analysis
To analyze the power saving of our proposed SD system we introduce the following computational complexity metrics: ( ) In above formulas, S is the number of computational operations incurred in our proposed scheme, while F is the number of computational operations incurred in a full-decoding scheme, miss P and FA P are the probabilities of Miss and False-Alarm events respectively.Hence, B F represents the computational complexity for the case where the central node (the receiver) makes a wrong decision to fully-decode the received packet (i.e., declared as a collision-free packets) while the packet should has been rejected (i.e., due to collision).On the other hand, G F is the computational complexity for the case where the central node makes a correct decision to fully decode received packet.
In addition, and for the comparison purposes, we introduce the following formulae in order to compare the computational complexity saving achieved by our proposed SD approach (i.e., SD T ) over the FD approach (i.e., FD T ): In above formulae, collision P and _ no collosion P are the probabilities of collision and no-collision events respectively.collision P and _ no collosion
P
have been obtained via Monte-Carlo simulation as follows: A random number of interfering sensors (maximum of 30 sensors) is generated per a simulation snapshot, where each sensor is assumed to have a randomly received power level at the access node (to reflect a random path loss/location effect).The generation of the interfering sensors is based on a Bernoulli trial model where it is assume that the probability of a packet available for transmission at a sensor (hence the existence/generation of the sensor for the snapshot at hand) is equal to α .If the total SINR is found to be worse than the cut-off limit, a collision is assumed and vice-versa.For our numerical example in this section we used 0.3 . Also, we typically generate more than 100,000 snapshots in order to achieve a reliable estimate of the collision probabilities.For the aforementioned choices of α and
Comparing with Full-Decoding Algorithms
In order to assess the computational complexity of our SD scheme, we first quantize our metrics calculation in order to define fixed-point and bit-manipulation requirement of such calculations.We also assume a look-up table (LUT) approach for the logarithm calculation.Note that the number of times the algorithm needs to access the LUT equals the number of IQ samples involved in the metric calculation.Thus, our algorithm only needs to perform addition operations as many times as the number of samples.Hence, if the number of bits per LUT word/entry is equal to M at the output of the LUT, our algorithm needs as many M-bit addition operations as the number of IQ samples involved in the metric calculation.
As a case-study, we compare the complexity of our SD scheme with the complexity of a FD algorithm assuming a Soft Output Viterbi Algorithm (SOVA).SOVA has been an attractive choice for WSNs [16].Authors in [17] measure the computational complexity of SOVA (per information bit of the decoded codeword) based on the size of the encoder memory.It has been shown in [17] that for a memory length of λ , the total computational com- plexity per information bit can be estimated as: In contrast, our SD system does not incur such complexity related to the size of the encoder memory.In addition, our SD system avoids other complexities required by a full decoding such as time and frequency synchronization, Doppler shift correction, fading and channel estimation, etc., since our SD scheme operates directly at the IQ samples at the output of the ADC "as is".Finally, the FD approach requires buffering and processing of the entire packet/codeword while our SD scheme needs only to operate on a short portion of the received packet.Now let's compute the computational complexity for our SD approach using the logarithmic (entropy) metric.
Let's assume that the IQ ADCs each is D bits.Also, let's assume a ( ) ⋅ operation is done through a LUT ap- proach to save multiplication operations.In addition, let's also assume that the square-root, ⋅ , is also done through a LUT approach.Hence, each of the 2
I and 2
Q operations consume of the order of D bit-comparison operations to address the ( ) ⋅ LUT.Then, if the output of the LUT is G bits, it follows that we need about G bit additions for an 2 2 I Q + operation.Let's assume that the ⋅ LUT has G bits for input addressing and K output bits.Then, we need about G+1 bit-comparison operations to address the ⋅ LUT.Let's assume a ( ) log ⋅ is also done through a K-bit-input/L-bit-output LUT.Hence, a ( ) parison operations to address the ( ) log ⋅ LUT.Finally, for simplicity, let's assume that a bit comparison opera- tion costs as much as a bit addition operation 3 .Accordingly, the total number of operations needed to compute the ( ) log ⋅ for one IQ sample is: If we assume the IQ over-sampling rate (OSR) to be Z (i.e., we have Z samples per information symbol), then we need about Z L × bit additions to add the ( ) ⋅ values for every information symbol.Hence, for one information symbol, we need a total of: Now if we assume an M-ary modulation (i.e., ( ) log M information bits are mapped to one symbol), then the computational complexity per information bit can be computed as: For example, in order to show the complexity saving of our SD algorithm, let's assume a QPSK modulation scheme (M = 4).Also, let's assume Z = 2 (2 samples per symbol), and D = G = K = L = 10 bits, which represents a good bit resolution.Also, let's assume a memory size of 6 λ = for the SOVA decoder.Using the formulae (8), it follows the SOVA FD algorithm costs 271 operations per an information bit while our Entropy (Logarithmic) SD algorithm based on formula (11) costs only 61 operations per an information bit, which represents a 77% saving on the computational complexity.
In addition, in a no-collision event, the SD algorithm check would represent a processing overhead.Nonetheless, our SD scheme still provides a significant complexity saving over the FD scheme as demonstrated by the following example.Table 1 in Appendix A shows the probability of Miss and False-Alarm to be 0.0762 and 0.0684, respectively for QPSK and a 50 bits measurement period 4 .Now, based on formulae (4) and ( 5 For the comparison purposes between our SD algorithm and SOVA FD algorithm, formulae ( 6) and ( 7) are used to find the computational complexity when no-collision is detected: Note that the above complexity saving calculation, in fact, represents a lower bound on the saving since the above calculation did not take into account the modem line-up operational complexity in order to demodulate and receive the bits in their final binary format properly (i.e., synchronization, channels estimation, etc.).
The performance of our algorithms can be tuned as desired by a system designer.Appendix A provides performance comparisons for various examples where the system designer may choose to reduce the measurement period (e.g., to 25 or 50 bits) at the expense of increasing the Miss and False-Alarm probabilities, or may increase the throughput by using a longer estimation period in order to improve the accuracy of the statistical discriminator performance and reduce the Miss and False-Alarm probabilities (i.e.our system throughput ( ) δ is defined as ( ) Where FA P denotes the False-Alarm probability).
Empirical Characterization
In this section, we attempt at empirically characterizing the statistics of various key quantities considered and encountered in this work, in an attempt to shed some light onto the behavior of such quantities and pave the way for some analytic mathematical tractability.
Statistics of the IQ Signal Envelope
In order to obtain reliable statistics, we have simulated different scenarios that reflect reasonably realistic assumptions 5 .For example, in our simulations, we assume that packets are generated at the various sensors using a Bernoulli trial model.That is, the probability of a packet available for transmission at a sensor is equal to α.We also generate random number of sensors per a network snapshot that are placed at random locations and distances from the central node in order to reflect various/random path loss situations 6 .The individual received sensor and noise components at the access node, as well as the total received signal (the superposition of the sensor received signals plus AWGN) are always normalized properly to reflect the correct SINR assumption.
In general, the parameters covered in this investigation include: • Number of sensors 7 As seen from the above simulation setting list, the simulations are always run assuming a fixed SINR value, in order to enforce a collision, or a no-collision event for the entire simulation session.Accordingly, the statistical analysis and characterization in this section are evaluated conditional on a collision or no-collision event in order to isolate the statistical characteristics of the metrics from the collision statistics, which can be dependent upon the MAC mechanism and other system parameters such as the specifics of the path loss distributions encountered by the sensors, which would affect the level of the received SINR …etc.
In general, we have found that the Normal (Gaussian) distribution has the closest fit to the actual (simulated) PDF of received signal envelope when SINR ≥ 0 dB.For SINR < 0 dB, however, the Rayleigh distribution seems to be a better fit.We qualify the fitting accuracy of a distribution using the least-mean-square error (LMSE) criterion.Accordingly, the Normal and Rayleigh distributions have exhibited the minimum LMSE in comparison with other distributions as seen in Figure 3 and Figure 4 (such as 5 th degree polynomial fit, the Weibull For example, in Figure 3, the normal distribution with mean ( ) resulted in a LMSE = 0.0024 and exhibited the closest fit to the actual (simulated) PDF of the received signal envelope.The choice of parameters for this example has been as follows: • Maximum number of sensors is 30 (i.e., the number of simultaneous sensors existing in the network per a simulation snapshot is between 2 and 30 sensors).).• Modulation scheme is 8PSK.
• Measurement period is equal to 50 information bits.
In Figure 4, the Rayleigh distribution achieved a LMSE = 0.0032 and exhibited the closest fit to the PDF of received signal envelop.Again, the choice of parameters in this figure is assumed as follows: • Maximum number of sensors is 30.
• Measurement period is equal to 50 information bits.
Figure 5 and Figure 6 show similar examples for a 3 rd moment SD metric.Figure 5 shows how the Normal distribution continues to have the closest fit and achieves a LMSE = 0.0036, while in Figure 6 the Rayleigh distribution has the best fit with LMSE = 0.0041.The choice of parameters follows: • Maximum number of sensors is 30.• Measurement period is 200 information bits.• 3 rd moment metric.
Figure 7 and Figure 8 show corresponding examples for the MAX-MIN based metric.Again, the Normal and Rayleigh distributions have best fits with LMSE = 0.0039 and LMSE = 0.0220, respectively.Our choice of parameters follows: • Maximum number of sensors is 30.• Measurement period is 1000 information bits.
MAX-MIN based metric.
Statistics of the SD Metrics
In general, the ensemble (overall) average (mean) of the first moment of the IQ envelope of the received signal, as well as the second moment (i.e., the power of received signal) is a function of the received SINR.In the following, we plot the ensemble averages of the first and second moments (in Figure 9 and Logarithmic metric.
In addition, we have found that the normal distribution has the best fit to the simulated PDFs for the Logarithmic, Moment and MAX-MIN based metrics 9 .The corresponding normal curve fittings are shown in Figures 11-13 for logarithmic, moment and MAX-MIN based metrics respectively.
Based on the normal PDF fit [19], one can calculate the False-Alarm and Miss probabilities as follows: If we assume a pre-defined threshold level ( ) γ , then it can be shown that: It should be noted that direction of the metric threshold-crossing versus SINR, i.e., whether the metric being greater than or less than the threshold is an indicative of SINR being greater than or less than the cut-off SINR (i.e., a collision or not event) is easily seen by inspecting the numerical behavior of the metric, which has been strictly consistent.Also, it should be noted that as indicated by Equations ( 12) and ( 13) above, the means (and variances) of the curve-fitting Gaussian PDFs used in approximating the False-Alarm probability versus the Miss probability are of generally different values that are functions of the operating SINR10 since these PDFs are computed under disjoint conditions (i.e., SINR greater than or less than the cut-off), as demonstrated, for example, , SINR = 6.5 dB ( ) • Measurement period is 50 information bits.
For Figure 17: • Maximum number of sensors is 30.
For Figure 18: • Maximum number of sensors is 30.
• Measurement period is 500 information bits.
For • Measurement period is 500 information bits.
MAX-MIN based metric.
Performance Evaluation
In this section we provide numerical performance evaluation of our proposed statistical discrimination algorithms for various system design scenarios and parameter choices.We also consider three modulation schemes, namely, QPSK, 8 PSK and 16 PSK.As pointed out in previous sections, without loss of generality and for the sake of a case study, we assume that a typical error correcting decoding scheme can successfully decode a packet with a satisfactory bit-error rate (BER) as long as the received signal-to-interference-plus-noise ratio (SINR) is higher than 5 dB (i.e., 5 dB ), since a 5 dB SINR seems a reasonable assumption based on typical coding requirements in wireless systems [18].Although the majority of the numerical results presented in this section are focused on the example of We also evaluate the sensitivity of our proposed discriminators to the SINR deviation from the 5 dB cut-off point.That is, since the thresholds designed for the discriminators are pre-set based on studying (e.g., simulating) the statistics of the IQ signal envelope assuming "cut-off" SINR of 5 dB, it is important to investigate if the algorithm would still work reliably if the signal's SINR is offset by a dB ±∆ (e.g. when cut off SINR − is 5 dB).In addition, we evaluate various measurement periods (number of information bits and number of samples per symbol, i.e., over-sampling rate), as well as various levels of quantization of the SD metric computation to evaluate the performance of our algorithms in fixed-point implementation.
We typically generate 100,000 simulation snapshots where each snapshot generates a random number of interferers up to 30 sensors with random power assignments.Figure 22 shows a flowchart for our simulation setup and procedure.
Figures 23-25 show the Miss (red points) and False-Alarm (green points) probabilities versus the choice of the metric comparison threshold level (i.e., above which we decide the packet is valid (collision-free) and viceversa) for the entropy (logarithmic) metric, the 3 rd moment metric, the MAX-MIN based metric, and for QPSK, 8 PSK and 16 PSK modulation schemes, respectively (The choice of system parameters is defined in the caption of the corresponding figure).As shown in the figures, the intersection point of the red and green curves, can be a reasonable point to choose the threshold level in order to have a reasonable (or balanced) consideration of the Miss and False-Alarm probabilities, but certainly a designer can refer to Appendix A to choose an arbitrarily different point for a different criterion of choice.For example, Figure 26 shows how the throughput of our proposed metrics may improve to 99.00% if a system designer sets the threshold at 15.2 or higher since this threshold results in a low False-Alarm probability of 0.01.Also, more results for logarithmic, moment, MAX-MIN based metrics are available in Appendix A.
Conclusion
In this paper we propose a novel simple power-efficient low-latency collision detection scheme for WSNs and analyze its performance.We propose three simple statistical discrimination metrics which are applied directly at the receiver's IQ ADC output to determine if the received signal represents a valid collision-free packet.Hence, saving a significant amount of processing power and collision detection processing time delay compares with conventional full-decoding mechanisms, which also requires going through the entire complex receiver and modem processing.We also analyze and demonstrate the amount of power saving achieved by our scheme compared with the conventional full-decoding scheme and provide a mathematical empirical characterization of the statistics of various quantities encountered in our scheme.As demonstrated by the numerical results and performance analysis, our novel scheme offers much lower computational complexity and shorter measurement period compared with a full-decoding scheme, and minimal impact on throughput, which can also be arbitrarily minimized per the system designer's choice of parameter setting and trade-offs.
Figure 1 .
Figure 1.Block diagram for a receiver's line-up.
Figure 2
Figure 2 depicts an example of a WSN where a number of intermediate sensors are deployed arbitrarily to perform certain functionalities including sensing and/or collecting data and then communicating such information to a central sensor node (a receiver).The central node may process and relay the aggregate information to a backbone network.As seen in Figure2, there are N wireless sensors that communicate to the central node, where at any point in time, multiple packets may accidentally arrive simultaneously and cause a collision.Without loss of generality, we shall assume for the sake of argument that one sensor is denoted a "desirable" sensor, while the rest of the
Figure 2 .
Figure 2. Wireless Sensor Network (WSN) with one desirable sensor, multiple interferer sensors and a central sensor (a receiver).
Hence, the complexity savings (in number of operations per information bit) becomes:
Figure 3 .
Figure 3.A curve-fitting comparison of various statistical distributions overlaid on the actual PDF for the IQ signal envelope as obtained from Monte-Carlo simulations: Logarithmic metric, SINR = 4 dB.
Figure 4 .
Figure 4.A curve-fitting comparison of various statistical distributions overlaid on the actual PDF for the IQ signal envelope as obtained from Monte-Carlo simulation: Logarithmic metric, SINR = −6 dB.
distribution and the Log-normal distribution).
Probability of a packet available for transmission at a sensor is 0.3 (i.e., theBernoullitrial model probability is 0.3 α =
Figure 5 .Figure 6 .
Figure 5.A curve-fitting comparison of various statistical distributions overlaid on the actual PDF for the IQ signal envelope as obtained from Monte-Carlo simulations: 3 rd moment metric, SINR = 3 dB.
Modulation scheme is QPSK.
Modulation scheme is 16 PSK.
Figure 10
respectively) of the IQ envelope quantity viruses the corresponding first and second moment values that correspond to the best fitting distribution (i.e., Normal and Rayleigh PDFs as pointed out above).The parameters in Figure9 and
Figure 7 .
Figure 7.A curve-fitting comparison of various statistical distributions overlaid on the actual PDF for the IQ signal envelope as obtained from Monte-Carlo simulations: MAX-MIN based metric, SINR = 3 dB.
Figure 8 .
Figure 8.A curve-fitting comparison of various statistical distributions overlaid on the actual PDF for the IQ signal envelope as obtained from Monte-Carlo simulations: MAX-MIN based metric, SINR = −5 dB.
Figure 9 .
Figure 9.The mean μ for the received signal envelope for the simulation data samples & curve fitting distribution vs. SINR.
Figure 10 .
Figure 10.The second moment for the received signal power for the simulation data samples & curve fitting distribution vs. SINR.
Figure 10
Figure 10 are assumed as follows:• Maximum number of sensors is 30.
Figure 11 .
Figure 11.The PDF (simulation versus fitted) of the metric value, when treated as a random variable (over snapshots): Logarithmic metric.
Figure 12 .
Figure12.The PDF (simulation versus fitted) of the metric value, when treated as a random variable (over snapshots): 3 rd moment metric.
Figure 13 .in Figure 9 and Figure 10 .
Figure 13.The PDF (simulation versus fitted) of the metric value, when treated as a random variable (over snapshots): MAX-MIN based metric. in Figure 9 and Figure 10.Clearly, FA P and Miss P are not complimentary (i.e., do not necessarily add up to unity).Figures 14-19 compare the simulated versus the empirically derived mathematical results for the False-Alarm and the Miss probabilities, for the Logarithmic, Moment and MAX-MIN based metrics.Our choice of parameters in these figures is as follows: For Figure 14: • Maximum number of sensors is 30.• 5 dB cut off SINR − =
Figure 14 .
Figure 14.Comparison of False-Alarm probabilities for simulation and mathematical calculations: Logarithmic metric.
Figure 15 .
Figure 15.Comparison of Miss probabilities for simulation and mathematical calculations: Logarithmic metric.
Figure 16 .
Figure 16.Comparison of False-Alarm probabilities for simulation and mathematical calculations: 3 rd moment metric.
Figure 17 .
Figure 17.Comparison of Miss probabilities for simulation and mathematical calculations: 3 rd moment metric.
Figure 18 .
Figure 18.Comparison of False-Alarm probabilities for simulation and mathematical calculations: MAX-MIN based metric.
Figure 19 .
Figure 19.Comparison of Miss probabilities for simulation and mathematical calculations: MAX-MIN based metric.
Figure 19 :
• Maximum number of sensors is 30.Modulation scheme is 16 PSK.
to demonstrate the ability of our technique to work reliably with various SINR requirements.
the SINR = 6.5 dB for calculating False-Alarm probabilities, and the SINR = 3.5 dB for calculating Miss probabilities
Figure 22 .
Figure 22.Flowchart for the simulation setup.
Table 7 .
QPSK-Maximum to minimum based metric. | 8,057 | 2015-06-29T00:00:00.000 | [
"Computer Science"
] |
Computerized assessment of background parenchymal enhancement on breast dynamic contrast-enhanced-MRI including electronic lesion removal
Abstract. Purpose Current clinical assessment qualitatively describes background parenchymal enhancement (BPE) as minimal, mild, moderate, or marked based on the visually perceived volume and intensity of enhancement in normal fibroglandular breast tissue in dynamic contrast-enhanced (DCE)-MRI. Tumor enhancement may be included within the visual assessment of BPE, thus inflating BPE estimation due to angiogenesis within the tumor. Using a dataset of 426 MRIs, we developed an automated method to segment breasts, electronically remove lesions, and calculate scores to estimate BPE levels. Approach A U-Net was trained for breast segmentation from DCE-MRI maximum intensity projection (MIP) images. Fuzzy c-means clustering was used to segment lesions; the lesion volume was removed prior to creating projections. U-Net outputs were applied to create projection images of both, affected, and unaffected breasts before and after lesion removal. BPE scores were calculated from various projection images, including MIPs or average intensity projections of first- or second postcontrast subtraction MRIs, to evaluate the effect of varying image parameters on automatic BPE assessment. Receiver operating characteristic analysis was performed to determine the predictive value of computed scores in BPE level classification tasks relative to radiologist ratings. Results Statistically significant trends were found between radiologist BPE ratings and calculated BPE scores for all breast regions (Kendall correlation, p<0.001). Scores from all breast regions performed significantly better than guessing (p<0.025 from the z-test). Results failed to show a statistically significant difference in performance with and without lesion removal. BPE scores of the affected breast in the second postcontrast subtraction MIP after lesion removal performed statistically greater than random guessing across various viewing projections and DCE time points. Conclusions Results demonstrate the potential for automatic BPE scoring to serve as a quantitative value for objective BPE level classification from breast DCE-MR without the influence of lesion enhancement.
Introduction
2][3][4][5] BPE is qualitatively defined according to the Breast Imaging Reporting & Data System (BI-RADS ® ) as minimal, mild, moderate, or marked BPE based on the visually perceived volume and intensity of enhancement in normal breast fibroglandular tissue (FGT) after contrast injection for dynamic contrast-enhanced (DCE) MRI. 2,6The distribution of the enhancement through the breast over the course of a dynamic contrast series often occurs initially at the periphery of the FGT due to the pattern of blood inflow from the internal and lateral thoracic arteries, which then feeds into the retroareolar region, which is the last to enhance. 2,7Normal FGT tends to exhibit a slow early and persistent delayed uptake of contrast, although in some cases of moderate or marked BPE, there is a rapid early contrast uptake. 7Radiologists typically use visual assessment to rate BPE during the early phase images of the DCE series, around 1 to 2 min postcontrast.In many cases, tumor volumes can cause an overestimation of BPE by radiologists; the increased intensity of the tumor enhancement due to angiogenesis can inflate the visual assessment of BPE.Also in cases with marked BPE, it can become difficult to differentiate between tumor and normal FGT, thus reducing the sensitivity in breast cancer screening. 8These effects have contributed to the intraobserver variability in clinical BPE assessment that have been reported, thus necessitating an objective method for quantifying BPE. 4 A number of groups have developed quantitative measures for BPE, but a general consensus of the most useful value has yet to be reached.0][11] Recently, one study that was based on a semiautomated segmentation algorithm achieved strong performance in distinguishing women who did and did not develop breast cancer using a quantitative BPE value. 11dditionally, another study found that the complexity of the BPE assessment caused only weak correlations between the investigators' quantitative values and the associated clinical ratings. 12hese studies demonstrate that further investigation is needed to develop a fully automated, objective method for quantifying BPE.
We have developed an automated machine learning method to segment breasts and electronically remove the influence of lesion presence on a computer BPE score. 13Our method was designed to mimic radiologist assessment of BPE from maximum intensity projections (MIPs), and it offers a robust estimation of BPE levels from breast DCE-MR projection images.We investigated the performance of computer BPE scores calculated from the second postcontrast subtraction MIPs of both breasts, the affected breast, and the contralateral, unaffected breast images created before and after the electronic removal of lesions.Additionally, we investigated the effect of various image parameters on the performance of computer BPE scores calculated from original and rescaled versions of maximum-or average-intensity projections (AIPs) of first-or second postcontrast subtraction DCE-MRI volumes.
Dataset
A dataset of 426 conventional breast DCE-MR exams (from 399 patients aged 23 to 89 years) was retrospectively collected at the University of Chicago over a span of 12 years (from 2005 to 2017) under HIPAA-compliant Institutional Review Board-approved protocols (Table 1).Routine bilateral breast MRIs were acquired using a Philips Achieva scanner with either 1.5 or 3 T magnet strength.The breast DCE-MRI protocol included a fat-saturated 3D T1 weighted spoiled gradient-echo sequence that was used to acquire pre-and postcontrast images with a temporal resolution of 60 to 75 s (TE = 2.2 to 2.8 ms, TR = 4.5 to 7.5 ms, flip angle = 10 deg to 20 deg, in-plane resolution = 0.5 to 1.0 mm, FOV = 28.0 to 44.1 cm, matrix ¼ 320 − 552 × 256 − 525, slice thickness = 1 to 3.5 mm, interslice gap = 0.8 to 2.5 mm).Radiologist BPE ratings were acquired from a prior clinical review.A subset of 76 exams from 73 patients (6 minimal, 18 mild, 26 moderate, 11 marked, and 15 unknown BPE) was set aside to use in developing the breast segmentation methods.A subset of the remaining exams, 350 exams (99 minimal, 159 mild, 78 moderate, and 14 marked BPE) from 326 patients, each with only one diagnosed lesion, was used for independent testing of the proposed machine learning algorithm for BPE.For each exam, the breast containing the diagnosed lesion is termed the "affected" breast, and the contralateral breast is termed the "unaffected" breast.
Breast Segmentation
The 2D U-Net convolutional neural network 14 is capable of producing accurate segmentations when it is trained on a relatively small number of images; 15 thus a training set of 76 exams was selected to contain a variety of lesion sizes and BPE levels represented in the full dataset.For the subset of 76 exams, an expert radiologist (7 years of experience in breast imaging) provided manual delineations of breast margins on the MIP of the second postcontrast subtraction image volume.The radiologist-delineated breast margins were used as the reference standard for training a U-Net for whole breast segmentation from second postcontrast subtraction MIPs, and visual assessment was used to qualitatively review the segmentation performance for the training set.The base U-Net model 14 was trained using the Adam optimizer and a binary cross-entropy loss function; training was allowed to run for up to 200 epochs.The U-Net produced pixel probability map outputs with values ranging from 0 to 1, and a threshold of 0.25 was applied to convert the predicted U-Net outputs to binary segmentation images.To produce the breast region masks for use in our method, a subsequent postprocessing step was conducted to identify the largest object from the mask as the region containing both breasts.The region containing both breasts was vertically split at its center point to generate masks defining only the affected breast region and only the unaffected breast region.These breast masks were applied to the full postcontrast subtraction projection images to retain only the pixels belonging to both breasts, the affected breast, or the unaffected breast (Fig. 1).Without a radiologist reference available for the test cases, visual assessment was used to ensure that the binary mask sufficiently contained the entire breast region with minimal pixels from the chest wall.
Electronic Lesion Removal
A well-established, in-house, automated 3D fuzzy c-means (FCM) clustering approach was used to segment the lesions from the DCE-MR volumes. 16The lesion sizes, approximated by the square root of the lesion area at the center lesion slice, ranged between 2 and 65 mm.To electronically remove the lesions, the lesion area defined by the FCM segmentation was replaced with a value equivalent to the average intensity of the pixels bordering the lesion segmentation on the second postcontrast subtraction image slice.This process was repeated on each slice that passed through the lesion before projecting the maximum pixel values from all available volume slices to produce a new MIP that excluded the influence of the lesion.The breast masks generated from the U-Net outputs were used to retain only the pixels belonging to both breasts, the affected breast, and the unaffected breast on the second postcontrast subtraction MIP with the lesion removed (Fig. 2).For comparison across input image parameters, this method was also conducted using first postcontrast subtraction images and using average-intensity projections to produce images of the affected breast without the influence of the lesion.All exams from a given patient were in either the training set or test set.Exams without a clinical BPE rating in the radiologist report were categorized as "unknown."
Computer BPE Score
For each of the defined breast regions (both, affected, and unaffected), the computer BPE scores were automatically calculated from the second postcontrast subtraction MIPs.Within each MIP, the pixel values were rescaled so that the original pixel values ranging from 0 to 255 were scaled to a range of 0 to 1.To reflect the qualitative definitions of BPE assigned by radiologists based on the amount and intensity of the enhancement in FGT, the average pixel intensity of the pixels contained within each breast served as the computer BPE score (Fig. 3).
Evaluation of BPE Scores
To determine the strength and direction of the correlation of the computer BPE scores with radiologist BPE ratings, Kendall's tau-b was used in rank correlation with a t-test to determine the statistical significance of the correlation. 17To assess how lesion removal changes the computer BPE scores, the ratio of the computer BPE score calculated after lesion removal to the computer BPE score calculated before lesion removal was examined according to the lesion size for the second postcontrast subtraction MIP of each affected breast.Receiver operating characteristic (ROC) analysis was performed using the proper binormal model. 18Clinical radiologist BPE ratings were the only truth available for BPE assessment, so the performance of the computer BPE scores was compared with random guessing.To determine the predictive value of the computerextracted BPE scores, ROC analysis was performed using computer BPE scores for binary classification of minimal versus marked BPE; it was also evaluated for binary classification of low (mild, minimal) versus high (marked, moderate) BPE (Fig. 4).The statistical significance of the area under the ROC curve (AUC) relative to random guessing was determined using the z-test with Bonferroni corrections for multiple comparisons. 19ank correlation and ROC analysis were also used to understand the effect of different image parameters on the calculated BPE.The minimal versus marked BPE and low versus high BPE tasks were thus evaluated for computer BPE scores calculated from the affected breast in each of the image types (shown in Fig. 5).
Results
On the independent set of 350 second postcontrast subtraction MIPs, a statistically significant positive correlation was found between the computer BPE scores and the radiologist BPE ratings for all breast regions, before and after the lesion removal (p < 0.001, t-test) (Fig. 6).
The ratio of the scores calculated after versus before lesion removal, sorted by size and BPE level, are shown with example cases of affected breasts (Fig. 7).As would be expected, the computer BPE scores were reduced after the lesion removal; this was more pronounced for larger lesions and cases with low BPE levels.More specifically, among the cases with lesions larger than 10 mm, the average computer BPE score was reduced by 3.83% for minimal BPE ratings and 1.48% for marked BPE ratings.The AUCs in the task of classifying minimal versus marked BPE and in the task of classifying low versus high BPE according to radiologist ratings were calculated for each of the breast regions in the second postcontrast subtraction MIPs (Table 2).All classification tasks performed statistically significantly better than guessing (z-test).For all breast regions, the computer BPE scores yielded greater AUC results for minimal versus marked BPE than for low versus high BPE levels, which was expected because it is easier to distinguish between the two extreme BPE levels than the intermediate ones.The computer BPE scores from the affected breast, both before and after lesion removal, yielded greater AUC results than the computer BPE scores from the unaffected breast for both classification tasks; thus the computer BPE scores calculated from that region were used in subsequent evaluations.
The results of the comparisons between computer BPE scores calculated from varying image types are shown in Table 3 and Fig. 8 (affected breast scores only).Statistically significant correlations were found between the radiologist BPE ratings and the computer extracted BPE scores from the rescaled images, except for the first postcontrast subtraction AIP.Computer BPE scores Fig. 6 Positive correlation between all computer BPE scores (second postcontrast subtraction MIP) and the radiologist BPE ratings were statistically significant (p < 0.001).BPE scores from unaffected breasts are not shown because there is no change in score after lesion removal.performed statistically significantly greater than random guessing in minimal versus marked BPE level classification, except for the first postcontrast subtraction AIP.Computer BPE scores performed statistically significantly greater than random guessing in low versus high BPE level classification, except for the mean of original MIPs and original first postcontrast subtraction AIP.For all image types, the computer BPE scores yielded greater AUC results for minimal versus marked BPE than for low versus high BPE levels.In both BPE level classification tasks, computer BPE scores from rescaled images yielded greater AUC results than from original images.ROC curves showed that computer BPEs from second postcontrast-projections yielded greater AUC results than first postcontrast-projections and computer BPE scores from MIPs yielded greater AUC results than computer BPE scores from AIPs.Compared with the other image types, the computer BPE scores of the rescaled second postcontrast MIP statistically significantly outperformed other rescaled image types for minimal versus marked BPE classification (p < 0.05, z-test).
Discussion
In current clinical settings, radiologist BPE ratings are subjectively assigned based on the relative volume and intensity of enhancement in normal fibroglandular breast tissue after contrast injection for DCE-MRI.This study presented an automated computer algorithm for the assessment of BPE and investigated the effect of using various breast DCE-MR image types.The results of this work demonstrate the promising performance of an automatic BPE scoring method, which yields computer BPE scores in classifying marked versus minimal BPE across various image viewing projections and DCE time points.Our method of computing BPE scores from breast DCE-MR MIP images was not influenced by the contrast enhancement within lesions, which currently causes intraobserver variability in clinical BPE level assessment, because the algorithm includes an electronic removal of the lesion.The automatically calculated computer BPE scores from all breast regions had a statistically significant correlation with the radiologist BPE ratings, with the exception of one image type; thus the computer BPE scores had a positive correlation with increasing BPE.The ratio of the computer BPE scores calculated after lesion removal to before lesion removal demonstrate the importance of electronic lesion removal to avoid inflation of BPE estimations, especially in cases containing large lesions and low BPE levels.Although the computer BPE scores from the second postcontrast subtraction MIPs of the affected and unaffected breasts appeared similar in boxplots, computer BPE scores of the affected breast yielded greater AUC results than those of the unaffected breast in the prediction of radiologist BPE ratings.Based on the computer BPE scores from all breast regions, the classification of minimal versus marked BPE yielded greater AUC results than the classification of low versus high BPE, which was expected because it is easier to distinguish between the two extreme BPE levels than the intermediate ones.
Although we observed that the AUC in the task of BPE level classification increased from before to after lesion removal, we failed to show that it was a statistically significant increase.The electronic removal of the lesion from the affected breast increased AUC results in the predictions for the minimal versus marked task, but not for the low versus high task; this may be due to the complexity of the BPE levels considered in each task.Given that the removal of lesions had the greatest impact on reducing the computer BPE score for minimal BPE cases, the lesion removal would improve the classification of minimal versus marked BPE.In the low versus high task, however, the large prevalence of mild and moderate BPE cases contributes to the difficulty of the task due to the similarity between intermediate BPE levels that exists even after lesion removal.Additionally, the AUC results for computer BPE scores calculated from various image projections and postcontrast subtraction time points demonstrated the flexibility of the algorithm in BPE level classification tasks.Comparisons between the original and rescaled versions of the maximum-and average-intensity projections (MIP and AIP) created from the first or second postcontrast subtraction images of the affected breast demonstrated that the computer BPE scores calculated from the rescaled, second postcontrast subtraction MIP yielded the greatest overall AUC results.Therefore, of the scores evaluated in this study, the best computer-generated representations of the relative intensity and volume of enhancement qualitatively assessed by radiologists were the computer BPE scores of the rescaled, second postcontrast subtraction MIP.Future investigations should be done to address the limitations of our study to improve the performance of computer BPE scores.For instance, although our method includes threedimensional lesion segmentation, our BPE scoring method is limited to two-dimensional MIPs.Also the performance of the breast segmentation was limited to a qualitative visual assessment; thus there is potential to improve the breast segmentation process.In the future, including a quantitative analysis of the breast segmentation would facilitate an assessment of the variability in computer BPE scores based on the precision of the masks that define breast regions.Additionally, in this work, the computer BPE scores were calculated from MIPs that often contained major vasculature, which contain bright pixels that may inflate the computer estimation of BPE (a current limitation).Future investigations should aim to remove the influence of the vasculature's enhancement, as we have already considered for lesions, to produce a more accurate representation of the FGT enhancement.The only truth that we had available to assess the performance of our computer BPE scores for BPE level classification tasks were the radiologist BPE ratings assigned during initial clinical review; thus our ROC analyses were limited to comparisons against random guessing performance.Further investigation of variability in the reference standards used for algorithm development may improve the overall performance of our method in BPE classification tasks.In addition, future investigations should determine the significance of the influence of lesion enhancement on radiologist BPE ratings.Allowing radiologists to reassess images after electronic lesion removal would provide the opportunity to perform more comprehensive analyses of the computer BPE scores as well.
Ongoing investigations of our machine learning method for BPE scoring are being performed using an independent dataset of high-risk screening patients to evaluate the role of computer BPE scores in breast cancer risk assessment.Similar to the approach of many artificial intelligence methods that use tumor features as prognostic markers, other image-based biomarkers, such as BPE, may be factored into clinical risk assessment models.Ultimately, we believe that computer BPE scores have the potential to improve the predictive value of breast cancer risk assessment models in the future.
Disclosures
M.L.G is a stockholder in R2 technology/Hologic and QView, receives royalties from multiple medical imaging companies via licenses through the Polsky Center for Entrepreneurship and Innovation at the University of Chicago, and was a cofounder in Quantitative Insights.K.D. receives royalties from Hologic.L.D. declares no competing interest.It is the University of Chicago Conflict of Interest Policy that investigators disclose publicly actual or potential significant financial interest that would reasonably appear to be directly and significantly affected by the research activities.The corresponding authors had full access to all data in the study and had final responsibility for the decision to submit for publication.
Fig. 2
Fig.2Flowchart of the method for electronic lesion removal, image projection, and breast segmentation from a postcontrast subtraction breast DCE-MRI.Lesion and breast segmentations were performed using FCM clustering and U-Net CNN, respectively.The breast mask was vertically split at the center to select the affected or unaffected breast regions from the projection image excluding the lesion.Computer BPE scores were calculated in a separate rescaled MIP after implementation of our digital electronic lesion removal algorithm.
Fig. 1
Fig. 1 2D U-Net was trained for computerized breast segmentation on MIPs of the second postcontrast subtraction DCE-MRIs.A binary threshold was applied to the predicted U-Net output to generate breast region masks, and the individual breast regions were created by a vertical split at the center of the breast region containing both breasts.
Fig. 4
Fig.4Clinical radiologist BPE ratings were used as the reference standard for ROC analysis.ROC analysis was performed to determine the predictive value of computer BPE scores for binary classification of minimal versus marked BPE and of low (minimal, mild) versus high (moderate, marked) BPE.
Fig. 3
Fig. 3 Computer BPE scores were calculated from the affected breast (green box), unaffected breast (red box), and both breasts (blue box), before (left) and after (right) lesion (red arrow) removal.
Fig. 5
Fig.5Example images of an affected breast from a case classified as marked BPE by a radiologist.The computer BPE scores were calculated from the affected breast region in the firstor second-postcontrast subtraction maximum-or average-intensity projection (MIP or AIP) images after electronic lesion removal (bottom row).
Fig. 7
Fig. 7 Ratio of the computer BPE score (second postcontrast subtraction MIP) calculated after lesion removal to the score calculated before lesion removal for the affected breast is shown versus the lesion size (n ¼ 350).Results demonstrate the importance of lesion removal to avoid inflation of computer BPE estimations, especially in cases containing large lesions and low BPE levels.
Fig. 8
Fig. 8 ROC curves for the binary classification tasks of marked BPE (n ¼ 14) versus minimal BPE (n ¼ 99) (a), (c) and high (marked or moderate) BPE (n ¼ 92) versus low (mild or minimal) BPE (n ¼ 258) (b), (d) using the mean pixel intensity of (a), (b) the original and (c), (d) rescaled image of the affected breast.Asterisks indicate classification performance statistically significantly greater than random guessing.Raw, uncorrected p values are reported from the z-test; statistical significance for performance greater than random guessing was assessed after a Bonferroni correction for 13 comparisons.1pcs, 2pcs: first-and second-postcontrast subtractions; MIP, AIP: maximumand average-intensity projection.
Table 1
Distribution of radiologist BPE ratings contained within the dataset of 426 DCE-MR exams from 399 patients.
Table 2
Effect of breast region used for computer BPE score.AUC results from ROC analysis for the task of BPE level classification using computer BPE scores calculated from the rescaled second postcontrast subtraction MIP.High BPE includes marked or moderate BPE, and low BPE includes mild or minimal BPE.Raw, uncorrected p-vales from the z-test are reported in this table; however, statistical significance of the AUCs was assessed using the Bonferroni correction for 13 comparisons.Asterisks indicate performance statistically significantly greater than random guessing.The bolded selection was used in subsequent analyses.
Table 3
Effect of breast image parameters used for the computer BPE score.¼ 0.299) 0.69 ± 0.06 (p < 0.01)* 0.58 ± 0.03 (p ¼ 0.016) Results from Kendall's rank correlation and ROC analysis for computer BPE scores calculated from the affected breast region.High BPE includes marked or moderate BPE, and low BPE includes mild or minimal BPE.Raw, uncorrected p-vales from the t-test or z-test are reported in this table; however, statistical significance of the AUCs was assessed using the Bonferroni correction for thirteen comparisons.Asterisks on tau-b values indicate a statistically significant correlation between the computer BPE scores and radiologist BPE rating.Asterisks on AUC values are statistically significantly greater than random guessing. | 5,573.4 | 2024-05-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Direct visualization of orbital electron occupancy
Orbital is one of the primary physical parameters that determine materials’ properties. Currently, experimentally revealing the electron occupancies of orbitals under the control of external field remains demonstrated the visualization of the real-space orbital occupancy by choosing LiCoO 2 as a prototype. Through multipole modelling of the accurately measured structure factors, we found the opposite changes of Co t 2g and e g orbital occupancies under different electrochemical states which can be well-correlated with the CoO 6 octahedra distortion. This robust method provides a feasible route to quantify the real-space orbital occupancy on small-sized particles, and opens up a new avenue for exploring the orbital origin of physical properties for functional materials.
Materials' properties are usually tuned by physical parameters including lattice, charge, orbital and spin 1,2 . The lattice composed of the periodic arrangement of atoms is the structural basis to understand the physical properties of materials. The electrons have three attributes including charge, spin, and orbital, which give rise to a wide range of functional properties of materials such as electrical, magnetic, optical, and thermal properties along with the underlying crystal lattice 3,4 .
Understanding the origin of these distinctive functionalities in materials therefore critically relies on our ability to accurately measure both electron's degrees of freedom and lattice for revealing their complex interactions. Over the years, the remarkable progress in the microscopy, spectroscopy and diffraction has enabled to probe the lattice, charge and spin with high precision and sensitivity, which has made significant contributions to reveal fundamental physical mechanisms for the relevant functional properties that have made up the cornerstone of current electric devices (such as field-effect transistors, magnetic random-access memory and piezoelectric transducers) [5][6][7][8][9] . By contrast, much less advancement has been made on the detection and characterization of electron orbital on a specific atom in the materials. Although it is well known that orbital degree of freedom plays an important role in many novel physical phenomena (e.g., high temperature superconductivity, colossal magnetoresistance, metal-insulator transition, and topological states of matter as well as in (electro)chemistry [10][11][12] . Revealing the underlying real-space orbital physics will not only deepen our understanding on the emergent functionalities of materials and (electro)chemical processes, but also provide a new knob to tune the properties of functional materials.
Theoretically, density functional theory (DFT) has been developed to reveal the ground-state electron density and successfully explains the origin of the unique functionalities and predicts new properties for various materials 13 . Although great advance has been made on quantifying bonding interactions and electronic structures, it is limited by the computer processing power which can only calculate several unit cells at ground-state and may miss some chemical interactions under the external field in practice 14,15 . Over the past decades, improvements of methodology have made it possible to measure the charge density of materials. Although the orbital sensitivity of scanning tunneling microscopy (STM) with a well-controlled calibration of tip-sample distance has been demonstrated 16,17 , it can only visualize orbital relying on the freshly cleaved and atomically sharp surfaces under the ultra-high vacuum environment 18 . Thus, there are serious limitations in applications. Because the electron orbital represents the shape of the electron cloud in a solid, the most intuitive way to resolve the orbital configuration is to accurately measure the electron density distribution around the bonding atoms, which are coded in the structure factors through Fourier transformation. Compared with other methods which can be employed for structure-factor measurement, single-crystal X-ray diffraction (SCXRD) is the primary approach. Unfortunately, it suffers from defects, extinction effect, and absorption that degrade the measurement accuracy for low-order structure factors which carry most information of the valence electron density, especially for functional materials containing heavy elements 19 . Instead, electron diffraction is extinction-free and more sensitive at low scattering angles compared with XRD, which makes it more suitable for low-order reflections 20,21 . Besides, quantitative convergent-beam electron diffraction (QCBED) with nanometer-sized electron beam can be used to obtain structure factors from perfect crystal regions so that the dynamic diffraction theory for perfect crystal can be applied. Based on the manybeam dynamic diffraction theory, the intensity distribution of CBED patterns can be obtained through solving the Schrödinger equation under the periodic potential field in crystals, which is compared to experimental patterns to get low-order structure factors [22][23][24][25] . In addition, low temperature (100 K), short-wavelength (0.41343 Å) and high resolution (sinθ/λmax ≥ 1.2 Å -1 ) synchrotron powder X-ray diffraction (SPXRD) with negligible absorption effect can accurately measure the high-order structure factors and Debye-Waller factors of small-sized particles which are not accessible by SCXRD. Moreover, the acquisition of electron diffraction and SPXRD at the same temperature ensures that the thermal vibrations are the same, making the experimental results more accurate and precise. However, to the best of our knowledge, the combination of QCBED and SPXRD has not been realized. What is more, the quantitative topological analysis of the refined electron density based on the results from QCBED has also not been performed. Combing the strengths of QCBED and SPXRD, it becomes possible to accurately measure the shape of the valence electron clouds around bonding atoms and quantify the orbital occupancy to uncover the origin of properties for functional materials beyond the scope of the crystal lattice and charge under the control of external fields.
Here, we use LiCoO2 as a model material and apply the method described above to successfully resolve a mystery in its electrochemical properties from the orbital point-of-view. As is well-known, LiCoO2 is the first cathode material used for commercial LIBs and still dominates in the portable electronics market because of many unique advantages including high electron conductivity, high volumetric energy density as well as excellent cycle life 26 . However, only limited voltage can be applied to extract no more than 60% Li to maintain a reasonable cycle-reversibility. Much work focused on the charge compensation mechanism of LiCoO2 during electrochemical charging and found that oxygen is involved in the redox reaction in highly charged LiCoO2 27 . In addition, there have been many effective strategies to improve the cycle stability of LiCoO2 at high voltage [28][29][30] . can be accurately extracted from SPXRD through Rietveld refinement. In order to reduce systematic errors, low temperature (100K) was used for all data collection to minimize the thermal diffuse scattering and anharmonicity that contributes to the background and high-order Bragg reflections, and finally to enhance the signal-to noise ratio of diffraction data 32 . The low-order structure factors were accurately measured by QCBED at 100K to ensure that the X-ray and electron measurements were done at almost the same temperature. In the meantime, low-temperature measurements can reduce the beam damage during CBED acquisition. The initial structure parameters obtained by Rietveld refinement from SPXRD are employed for Bloch wave calculation, during which the thickness, beam direction and structure factors are treated as refinable parameters. The refinement was made by comparing the experimental intensity profile across CBED systematic rows with the calculated intensity using a goodness of fit criterion 20 . It is worth noting that these experimental structure factors are model independent 33 . Fig. 2 displays the five low-order structure factors measurements including (003), (01 ̅ 1), (006), (012), and (01 ̅ 4) for LiCoO2, where good agreement between the experimental intensity and the calculated one is reached. All refined low-order structure factors at different SOC are listed in Table S1-S4.
Atom-centered multipole expansion 19 , which is based on the spherical harmonic functions, has been proven to be successful to describe the real space non-spherical electron density, in which the electron density of each atoms is described as: where and are the core and valence electron densities, respectively. and ± are the population parameters of valence electron density and spherical harmonic density ± , respectively. and ′ are valence-shell contraction-expansion parameters. is the radial function. This method implicitly assigns each density fragment to the centered nucleus. Therefore, the shape of the observed electron density can be flexibly fitted by a sum of non-spherical pseudoatomic densities. These consist of a spherical-atom (or ion) electron density obtained from multi- the multipole modeling electron density 31 . The electron density topological analysis on BCPs illustrated in Table 1, indicates that the Co-O interaction is the closed-shell interaction 36 shows that the valence electrons occupy the t2g orbital rather than the eg orbital of the Co atom in the CoO6 octahedra, which agrees with the 3d-orbital populations of Co as seen in Table 2. However, with decreasing Li content, the accumulation and depletion of the valence electron density in terms of the t2g and eg orbital, respectively, becomes more and more inapparent as seen from Fig. 3B to Since the 3d-orbital electrons can be described in terms of atomic orbitals 39 : = ( ) ± , ( ) is the radial function, ± is the spherical harmonic function and is the population parameter of atomic orbital. This expression equals to the valence part of Eq.1.
Thus, the relationship between d-orbital occupancies and multipole population parameters is casted
in the form of a 15 × 15 matrix, and reduce to smaller size under different site symmetries 39 . In LiCoO2, the site symmetry of Co is 3 ̅ with a 4 × 4 matrix which converts the multipole populations to the eg and t2g orbital occupancies 40 , as shown in Table 2 What's more, the orbital rehybridization is related to the distortion of CoO6 octahedra ( Table S5). Especially when x ≥ 0.6, the total valence electrons of Co increase and that of O decrease, in line with the dramatic changes of lattice and electronic structure of the CoO6 framework ( Table 2 and Table S5). These variations aggravate the structure degradation and lead to the irreversible capacity fading of LiCoO2 at high voltage. The authors declare no competing interests.
Methods
Sample preparation. Pristine polycrystalline LiCoO2 powder was bought from Alfa with a purity of 99.5%.
The delithiated samples were prepared using the electrochemical method. The high-loading electrodes (LiCoO2 loading mass ~100 mg) were charged/discharged to different states of charge in a Swagelok cell, with Li metal as the counter electrode (1M LiPF6 in ethylene, dimethyl carbonate).
Subsequently, the charged Swagelok cells were disassembled, and the obtained powder samples were washed three times with dimethyl carbonate before drying. The cell assembling/disassembling and powder washing/drying were carried out in an argon-filled glove-box.
The TEM samples were fabricated using focus ion beam (FIB) milling, FEI Helios 600i. To prevent surface damage from ion milling, a 40 nm thick carbon layer was deposited using thermal evaporation. On the top of the particle, a regular Pt protection layer was deposited with standard settings using an electron-beam at 5 kV, 86 pA for 300 nm thickness following by an ion-beam at To reduce the surface damage and the thickness of amorphous, low-energy focused Ar ion milling was conducted using Fischione 1040 NanoMill system. Through multipole refinement and the quantitative topological analysis of the electron distribution, we can measure the occupancies of 3d orbital. Table S5 for details). , ∇ and ∇ 2 denote the electron density as well as its gradient and Laplacian, respectively. The Hess eigenvectors is defined by the diagonalization of the symmetric matrix of the nine second derivatives of . Furthermore, 1 and 3 are the Hess eigenvalues perpendicular and parallel to the bond path at the critical point, respectively. | 2,662.8 | 2021-05-25T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Possible Molecular States of $D^{(*)}D^{(*)}$ and $B^{(*)}B^{(*)}$ within the Bethe-Salpeter framework
Recently the LHCb collaboration reported a new exotic state $T^+_{cc}$ which possesses $cc\bar u\bar d$ flavor structure. Since its mass is very close to the threshold of $D^0D^{*+}$ (or $D^{*0}D^{+}$) and its width is very narrow, it is inclined to conjecture that $T^+_{cc}$ is a molecular state of $D^0D^{*+}$ (or $D^{*0}D^{+}$). In this paper we study the possible molecular structures of $D^{(*)}D^{(*)}$ and $B^{(*)}B^{(*)}$ within the Bethe-Salpeter (B-S) framework. We employ one boson exchange model to stand the interaction kernels in the B-S equations. With reasonable input parameters we find the isospin eigenstate $\frac{1}{\sqrt{2}}(D^0D^{*+}-D^{*0}D^{+})$ ($J^P=1^+$) constitutes a solution, which supports the ansatz of $T^+_{cc}$ being a molecular state of $D^0D^{*+}$ (or $D^{*0}D^{+}$). With the same parameters we also find that the isospin-1 state $\frac{1}{\sqrt{2}}(D^{*0}D^{*+}+D^{*0}D^{*+})$ ($J^P=0^+$) can exist. Moreover, we also study the systems of $B^{(*)}B^{(*)}$ and their counterparts exist as possible molecular states. Consistency of theoretical computations based on such states with the data of the future experiments may consolidate the molecular structure of the exotic state $T^+_{cc}$.
It is difficult to attribute so many exotic states into a basket determined by the traditional hadronic picture where a meson contains a quark and anti-quark pair and a baryon is composed of three valence quarks. Instead, it is suggested that they should be multi-quark states which are predicted by the SU(3) quark model [10]. During these years it turns out to be a hot topic to discuss the structures of exotic states. Those new states are often proposed to be molecular states, compact tetraquarks, a mixing of both structures or dynamical effect [11,12].
The mass of T + cc is very close to the mass threshold of D 0 D * + or (D + D * 0 ) so naturally many authors suggested that T + cc could be a loose D 0 D * + (D + D * 0 ) bound state [13][14][15][16][17][18]. Some authors also consider it as a tetraquark [19,20]. Generally a compact tetraquark has a wide decay width whereas a molecular state has a relatively narrow one. Viewing the width of T + cc we also tend to accept T + cc as a D 0 D * + (D + D * 0 ) bound state. In this paper we study the possible bound state of D 0 D + , D 0 D * + and D * 0 D * + systems within the Bethe-Salpeter (B-S) framework where the relativistic corrections are automatically included.
In this work we will study the systems composed of two charmed or bottomed mesons and the scenario has not been explored in the B-S framework yet. In our early papers [21][22][23] we deduced the B-S equations for the systems containing one vector and one pseudoscalar, two pseudoscalars and two vectors respectively. Following the approach in [21][22][23] we investigate the systems with two charmed or bottomed constituents such as T + cc . Since the interaction kernels are not the same as given in [21][22][23] and the objects under investigation are new it needs to re-study the whole scenario.
If the interaction between two constituents is attractive and large enough a bound state could be formed. In this work we employ the one-boson-exchange model to calculate the interaction kernels where the effective vertices are taken from the heavy meson chiral perturbation theory [24][25][26][27][28][29] . The exchanged particles are some light mesons such as π, ρ and ω. We ignore the contribution from η exchange because its mass is larger than π and there exists an additional suppression factor 1 √ 3 at the effective vertex (See Appendix A). In Ref. [24] the authors indicated that σ exchange makes a secondary contribution, thus we also do not include it. With the effective interactions we derive the kernel and establish the corresponding B-S equation. The B-S equation is solved in momentum space so the kernel we obtain by calculating the corresponding Feynman diagrams can be used directly rather than converting it into a potential form in coordinate space.
With all the input parameters, these B-S equations are solved numerically. In some cases there no solution which satisfies the equation exists as long as the parameters are set within a reasonable range, it implies the proposed bound state should not emerge in the nature. On the contrary, a solution of the B-S equation with reasonable parameters implies that the corresponding bound state is formed. In that case, the obtained B-S wave function can be used to calculate the decay rate of the bound state.
After this introduction we deduce the B-S equations and the corresponding kernels for the two meson systems with different quantum numbers. Then in section III we present our numerical results of the binding energies along with explicitly displaying all input parameters. Section IV is devoted to a brief summary.
II. THE BETHE-SALPETER FORMALISM
Initially, people employed the B-S equation to explore the bound states of two fermions. Later this approach was extended to study the bound states made of one fermion and one boson [30][31][32]. In Refs. [33][34][35][36][37] the B-S equation was used to study the spectra of the mesonmeson molecular states and then deal with their decays. The method was extended to explore some other systems in our early papers [21][22][23]38].
In Ref. [34,35] the B-S equation for a bound state made of two pseudoscalars was deduced. Later we deduced the B-S equations for a system composed of one pseudoscalar and one vector or two vectors which are one particle and one antiparticle [21][22][23].
In this work we are only concerned with the ground states where the orbital angular momentum between two constituent mesons is zero (i.e. l = 0). For a system whose constituents are two pseudoscalars or one pseudoscalar and one vector, its J P is 0 + or 1 + . For the molecular states which consist of two vector mesons their J P may be 0 + , 1 + and 2 + .
Obviously, these systems composed of two charmed (or bottomed) hadrons (off-shell ) should belong to the same representations of isospin. In this case, the total wave function for the combined systems of D 0 and D + (D * 0 and D * + ) must be symmetric under group O(3) × SU I (2) × SU S (2), where SU I (2) and SU S (2) are isospin and spin groups respectively. For the D 0 D + system its total spin is 0 so its isospin should be 1. Instead, for the D * 0 D * + system its isospin is 0 as J P = 1 + , whereas it is 1 as J P = 0 + or J P = 2 + . For the D * + D 0 or D * 0 D + systems two isospin states are possible: 1 The B-S wave function for the bound state |S of two pseudoscalar mesons can be defined as following: where φ 1 (x 1 ) and φ 2 (x 2 ) are the field operators of two mesons, respectively, the relative coordinate x and the center of mass coordinate X are where η i = m i /(m 1 + m 2 ) and m i (i = 1, 2) is the mass of the i-th constituent meson. After some manipulations we obtain the B-S equation in the momentum space where ∆ i is the propagator of the i-th meson and ∆ 1 = i The relative momenta and the total momentum of the bound state in the equations are defined as where P denotes the total momentum of the bound state. Since only l = 0 is considered and the total spin wavefunction is symmetric the system of an isospin-scalar is forbidden and the isospin-1 state is reduced to D 0 D + . The exchanged mesons between the two pseudoscalars are vector mesons, obviously we only need to keep the lightest vector mesons ρ and ω [34,35] , the Feynman diagrams corresponding these effective interactions are depicted in Fig. 1. With the Feynman diagrams depicted in Fig. 1 and the effective interactions shown in appendix A we obtain the interaction kernel where q = p 1 − p ′ 1 . For exchanging ρ the expression K S0 (p, p ′ , m ρ ) includes the contributions from figures Fig. 1 (a) and (b) but for exchanging ω it only includes the contribution from figure Fig. 1 (a). C S0 = 1 2 for ρ and ω. Since the constituent meson is not a point particle, a form factor at each interaction vertex among hadrons must be introduced to reflect the finite-size effects of these hadrons. The form factor is assumed to be in the following form: where Λ is a cutoff parameter. Solving the Eq. (3) is rather difficult. In general one needs to use the so-called instantaneous approximation:p ′ 0 = p 0 = 0 for K 0 (p, p ′ ) by which the B-S equation can be reduced to where E i ≡ p 2 + m 2 i , E = P 0 , and the equal-time wave function is defined as ψ S (p) = dp 0 χ S (p) . For exchange of a light vector between the mesons, the kernel is where the expressions of K S0 (p, p ′ , m V ) can be found in Appendix B.
B. The B-S equation of 1 + which is composed of a pseudoscalar and a vector
The B-S wave function for the bound state |V composed of one pseudoscalar and one vector mesons is defined as following: where ǫ is the polarization vector of the bound state, χ V is the B-S wave function, φ 1 (x 1 ) and φ µ 2 (x 2 ) are respectively the field operators of the two mesons. The equation for the B-S wave function is Here − g µα ) are the propagators of pseudoscalar and vector mesons. We multiply an ǫ * µ on both sides, sum over the polarizations and then deduce a new equation FIG. 2: a bound state composed of a pseudoscalar and a vector by exchanging π. With the Feynman diagrams depicted in Fig. 2 and Fig. 3 we eventually obtain where The contributions from Fig. 2 are included in K V 3αβ (p, p ′ , m π ) and those from Fig. 3 (a) and ( and setting p 0 = q 0 = 0 we derive the BS equation which is similar to Eqs. (7) but possesses a different kernel, where where the expressions of The quantum number J P of the bound state composed of two vectors can be 0 + , 1 + and 2 + . The corresponding B-S wave function |S ′ is defined as following: The equation for the B-S wave function is derived as where ∆ jµλ = i With the Feynman diagrams depicted in Fig. 4 and the effective interaction we obtain The contributions from vector-exchanges are included in K αα ′ µµ ′ 01 (p, p ′ , m V ) and K αα ′ µµ ′ 02 (p, p ′ , m V ) and those for exchanging pseudoscalars are included in K αα ′ µµ ′ 03 (p, p ′ , m P ) and K αα ′ µµ ′ 04 (p, p ′ , m P ). When the bound state is an isospin-scalar C 01 = − 3 2 and C 02 = 3 2 for ρ, C 01 = 1 2 and C 02 = − 1 2 for ω and C 03 = − 3 2 and C 04 = 3 2 for π. When the bound state is an isospin-vector C 01 = 1 2 and C 02 = 1 2 for ρ, C 01 = 1 2 and C 02 = 1 2 for ω and C 03 = 1 2 and C 04 = 1 2 for π.
−g µλ ) we derive the B-S equation which is similar to Eq. (7) but possesses a different kernel.
D. The B-S equation of 1 + state which is composed of two vectors
The B-S wave function of 1 + state |V ′ composed of two axial-vectors is defined as where ε is the polarization vector of 1 + state. The corresponding B-S equation is where K αα ′ µµ ′ 1 (p, p ′ ) is the same as K αα ′ µµ ′ 0 (p, p ′ ) in Eq. (17).
The expressions of K 11 (p, p ′ , m V ), K 12 (p, p ′ , m V ), K 13 (p, p ′ , m P ) and K 14 (p, p ′ , m P ) can be found in Appendix B.
E. The B-S equation of 2 + state |T ′ which is composed of two wectors
The B-S wave-function of 2 + state composed of two axial-vectors is written as where ε is the polarization vector of the 2 + state. The B-S equation can be expressed as where K αα ′ µµ ′ 2 (p, p ′ ) is the same as K αα ′ µµ ′ 0 (p, p ′ ) in Eq. (17). where The expressions of K 21 (p, p ′ , m V ), K 22 (p, p ′ , m V ), K 23 (p, p ′ , m P ) and K 24 (p, p ′ , m P ) can be found in Appendix B. (7), (13), (18), (22) and (26). Since we are interested in the ground state of a bound state the function ψ J (p) (J represents S, V, 0, 1 or 2) only depends on the norm of the three-momentum and we may first integrate over the azimuthal angle of the functions in (7), (13), (18), (22)
Now let us solve the B-S equations
to obtain a potential form U J (|p|, |p ′ |) , then the B-S equation turns into a one-dimension integral equation When the potential U J (p, p ′ ) is attractive and strong enough the corresponding B-S equation has a solution(s) and we can obtain the mass of the possible bound state. Generally the standard way of solving an integral equation is to discretize and perform algebraic operations. Concretely, we let |p| and |p ′ | take n ( n is sufficiently large) order discrete values Q 1 , Q 2 ,...Q n and the gap between two adjacent values be ∆Q, then the integral equation is transformed into n coupled algebraic equations. ψ J (Q 1 ), ψ J (Q 2 ), ...ψ J (Q n ) ( the subscript J denotes S, V , 0, 1 or 2) constitute a column matrix and the coefficients would stand as an n × n matrix M, thus these algebraic equations can be regarded as a matrix equation with a unique eigenvalue 1. If one can obtain a value of E which satisfies the equation with reasonable input parameters and E is not far from E 1 + E 2 the corresponding eigenvector should exist as a bound state.
In our calculation the values of the parameters g DDV , g DD * P , g DD * V , g D * D * V and g ′ D * D * V are presented in Appendix A. Supposing T + cc is a D 0 D * + bound state, by fitting its mass we fix Λ = 1.134 GeV. In Ref. [39,40] the authors suggested a relation: Λ = m + αΛ QCD where m is the mass of the exchanged meson, α is a number of O(1) and Λ QCD = 220 MeV i.e. Λ ∼ 1GeV for exchanging ρ or ω. The value of Λ we obtained locates within the range. The masses of the concerned constituent mesons m D , m D * , m B and m B * are directly taken from the databook [41]. [42] the authors also obtained the same results with similar approach. For the D 0 D * + (I = 1) or D 0 D + (I = 1) system, employing a larger Λ and coupling constant we can obtain a solution. It may imply the effective interaction between the two constituents is weak. For the D * 0 D * + system we can obtain an eigenvalue 18.51 MeV, the corresponding eigenstate is a bound state of J = 0 and I = 1. In table I there are many places symbolized by " * " or " − " which means such bound states cannot exist due to the symmetry restriction or the B-S equation has no solution. However in Ref. [43] D * D * system with J = 1 and isospin I = 0 was suggested to exist, which contradicts to our result. The reason is that the authors of Ref. [43] did not symmetrize and antisymmetrize the flavor wave functions of D * D * for I = 0 and I = 1 states [44,45]. Instead, we redo the calculation as the total symmetry of the wave-function including flavor, spin parts and orbital angular momentum is taken into account. For the D * 0 D * + system the spin wavefunction is symmetrized and/or antisymmetrized so that the flavor wave functions need to be correspondingly symmetrized and antisymmetrized when l = 0. For I = 1 and I = 0 states of D * 0 D * + the symmetric and antisymmetric flavor wave-functions were considered in Ref. [46]. Considering the flavor SU(3) symmetry and heavy quark effective symmetry we generalize those relations as g BBV = g DDV , g BB * P = g DD * P , g BB * V = g DD * V , g B * B * V = g D * D * V and g ′ B * B * V = g ′ D * D * V which should be a not-bad approximation.
We use the same parameter Λ fixed for the DD * systems to solve the B-S equation for the B ( * ) B ( * ) systems. We find taht two states which are the counterparts of D ( * ) D ( * ) can exist. The binding energy of each state shown in table II is apparently larger than that of the corresponding state of D ( * ) D ( * ) since the mass of B ( * ) meson is larger than that of D ( * ) meson.
IV. A BRIEF SUMMARY
In this work we study whether two charmed (or bottomed) mesons can form a hadronic molecule. We employ the B-S framework to search for possible bound states of D ( * ) D ( * ) [47] and B ( * ) B ( * ) . In Ref. [21,22,34,35,38] the B-S wave functions for the systems of one vector and one pseudoscalar, two pseudoscalar mesons and two vectors were studied. It is noted that all those works are dealing with bound states made of one particle and oneantiparticle, no matter they are pseudoscalar or vector bosons. In comparison, this work is concerning particle-particle bound states(charmed D ( * ) D ( * ) or bottomed B ( * ) B ( * ) ). Since the two constituents are accounted as identical, symmetrization of the total wavefunctions is necessarily required. In this work we deduce the interaction kernels for these systems and solve these B-S equations.
In order to obtain the interaction kernels for B-S equations we use the heavy meson chiral perturbation theory to calculate the corresponding Feynman diagrams where π, ρ or ω are exchanged. All coupling constants are taken from relevant references. For making predictions we use the binding energy of T + cc to fix the parameter Λ under the hypothesis that T + cc is a bound state of D 0 D * + with I = 0 and J = 1. With the same parameters we confirm that D * 0 D * + with I = 1 and J = 0 should exist. For the D * 0 D * + system with I = 1 and J = 2, a larger Λ or large coupling constants are needed to form bound states.
Considering the flavor SU(3) symmetry and heavy quark spin symmetry we employ the same parameters to calculate possible bound states of B ( * ) B ( * ) . Two states which are the counterparts of D ( * ) D ( * ) can exist. The binding energy of each state is apparently larger than that of the corresponding state of D ( * ) D ( * ) since B ( * ) meson is heavier than D ( * ) meson.
Since the parameters are fixed from data which span a relatively large range we cannot expect all the numerical results to be very accurate. The goal of this work is to study whether two charmed (or bottomed) mesons can form a molecular state. Our results, even if not accurate, have obvious qualitative significance. Definitely, further theoretical and experimental works are badly needed for gaining better understanding of these exotic hadrons.
The effective interactions can be found in [24][25][26] where a and b represent the flavors of light quarks. In Refs. [24] M and V are 3 × 3 hermitian and traceless matrixs respectively. | 5,054 | 2021-12-28T00:00:00.000 | [
"Physics"
] |
A compact Scheimpflug lidar imaging instrument for industrial diagnostics of flames
Scheimpflug lidar is a compact alternative to traditional lidar setups. With Scheimpflug lidar it is possible to make continuous range-resolved measurements. In this study we investigate the feasibility of a Scheimpflug lidar instrument for remote sensing in pool flames, which are characterized by strong particle scattering, large temperature gradients, and substantial fluctuations in particle distribution due to turbulence. An extinction coefficient can be extracted using the information about the transmitted laser power and the spatial extent of the flame. The transmitted laser power is manifested by the intensity of the ‘echo’ from a hard-target termination of the beam located behind the flame, while the information of the spatial extent of the flame along the laser beam is provided by the range-resolved scattering signal. Measurements were performed in heptane and diesel flames, respectively.
Introduction
Light detection and ranging (lidar), is an optical remote sensing technique in which backscattered light from a laser beam is detected with range resolution. The well-defined spatial, temporal, and spectral properties of laser light allow a lidar system * Author to whom any correspondence should be addressed.
Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
to provide information about a probe volume extending over distances in the order of kilometers. The technique is well established in atmospheric science, where it is applied to monitor and characterize temperature, wind speed, and atomic/ molecular and particulate species [1][2][3]. In addition to atmospheric science, other application fields include topography, forestry, and ecological research [4][5][6]. The principles of lidar and examples of applications are described in [7,8].
Although lidar technology has changed since its advent in the 1970s, the design has remained mostly constant; a high-intensity laser pulse of short duration (typically 10 ns) transects a remote region of interest, and the induced light scattering is collected by a telescope and temporally resolved on a detector. Range-resolved information is hence readily obtained by converting the time-resolved signal into a rangeresolved signal using the speed of light in the medium. Ultimately, the range resolution, ∆R = cτ L /2, is limited by the laser pulse duration L. Nanosecond laser pulses thus provide a maximum range resolution of approximately 1 m, which is adequate in many applications, but often insufficient for industrial combustion studies, such as monitoring processes in power plants, boilers, or fires. The range resolution can be improved by using shorter laser pulses as long as the detector provides a bandwidth that matches the laser pulse duration. A range resolution of 5 mm has been demonstrated using a picosecond laser and a streak camera [9]. The ps-lidar system was demonstrated for thermometry in a full-scale room fire experiment [10]. Despite the potential for ps-lidar, it requires a heavy and bulky system and a mode-locked picosecond laser, which also necessitates quite stable conditions in terms of the temperature and humidity of the surroundings. On-site industrial applications are therefore very challenging.
In 2015, a new lidar technique was proposed and demonstrated [11]. This technique, called Scheimpflug-lidar, does not utilize the time-of-flight concept, but rather achieves range resolution by imaging the backscattered light onto a tilted linear sensor. In a sense, this approach is reminiscent of the very first attempts at active remote sensing based on triangulation with continuous-wave (cw) searchlights. Scheimpflug-lidar requires a setup aligned so that the laser beam, the optical axis of the collection optics, and the orientation of the linear sensor satisfy the Scheimpflug condition. Correctly aligned, the focal plane of the sensor is located along the laser beam and thus the scattered laser light can be sharply imaged with spatial resolution along the laser beam propagation direction. Since the technique does not require a highpower short-pulse laser, the setup is compact and relatively inexpensive due to the availability of cw diode lasers. Since the first demonstration of the technique for atmospheric sensing of oxygen concentration based on differential absorption [11], it has been applied, primarily, in ecological research [6]. Furthermore, down-sized setups suitable for combustion diagnostics have been demonstrated [3,12].
In the present work a prototype for a compact and portable Scheimpflug-lidar instrument has been designed and its feasibility for practical applications in full-scale industrial environments has been tested. The lidar device contains both light source (a 40 mW cw diode laser) and collection optics in a single unit mounted on a tripod, and instrumental settings and data acquisition are controlled by a laptop PC. The measurements were performed in pool fires (turbulent diffusion flames) created by burning evaporating liquid fuels contained in a 60 cm diameter pool. The sooty pool fires were located about 3 m from the lidar device. Flames based on two different fuels, namely heptane and diesel, were investigated. The study demonstrates that the total flame extinction can be determined quantitatively by analyzing the intensity of the lidar signal reflected off a beam termination screen located about 4 m behind the flame. The simultaneously recorded range-resolved scattering signal from the flame allows estimation of the flame size along the laser beam direction, and dividing by the measured total extinction yields an extinction coefficient. The results show that the lidar device is feasible for remote sensing in larger combustion environments, which is promising for applications in full-scale industrial environments, where conditions are harsh and optical access is limited. Figure 1 shows a photograph of the lidar instrument mounted on a tripod. The device has the dimensions 50 × 40 × 20 cm 3 , and weighs about 13 kg. With the tripod set up and extended, the device stands at 1 m. Due to its smaller size and compactibility, the lidar instrument can fit into the trunk of a regular passenger car, thus allowing easy transportation. Preparation of the system prior to a field experiment takes about 30 min to an hour for the first initialization. To initialize the setup, first, theoretical calculations are made to determine the location of where the lidar device should be placed for the observed area. Once the lidar device is placed in its position, a rough alignment is done by observing the back-reflected light on the diode array. Afterwards, the signal can be observed on the computer and the angle of the laser can be changed to optimize the signal and place the back reflection of a known distance at a specific pixel. Additional changes to the lens and sensor focusing are done to further optimize the signal. Once the setup has been aligned, further adjustments can be made within minutes to optimize the signal. Figure 2 shows a schematic illustration of the optical arrangement for Scheimpflug lidar. As can be seen, the detector, i.e. a line-array CMOS sensor (Hamamatsu, S11639-01, 2048 pixels), is tilted relative to the collection lens (2 ′′ diameter). The intersection of the image plane, in which the linearray detector is located, the lens plane, and the object plane, in which the laser beam is propagating, is called the Scheimpflug intersection. In addition, the position of the object plane is constrained by the Hinge rule, which dictates that the front focal plane of the lens, the object plane, and the image plane displaced to the center of the lens, must coincide.
Methods and materials
With the system aligned according to the mentioned principles, the 450 nm diffraction-limited cw laser beam, provided by a diode laser (Thorlabs PL450B-450 nm, 80 mW maximum output power, 100:1 linear polarization), a section of the beam, corresponding to the start of the field of view (red star) to a point corresponding to the end of the field of view (green circle), is sharply imaged onto the line-array sensor.
The angle of viewing constrains the resolution and results in a nonlinear range resolution i.e. a pixel receiving light from a far location accommodates light from a longer sample volume than a pixel receiving light from a nearby location. A consequence of the resolution varying nonlinearly with range is that it counteracts the inevitable 1/R 2 dependence of the light collection efficiency, meaning that it is possible to obtain an instrument response that is essentially constant across the entire range, thus maximizing the dynamic range of the instrument [12]. In order to suppress light at other wavelengths than 450 nm, an interference filter, having center transmission wavelength at 450 nm and 10 nm full width at half maximum (FWHM), is positioned in front of the collection lens. Data acquisition and instrument settings are controlled by a laptop PC. The lidar instrument not only provides high range resolution, but also high temporal resolution, as the line-array detector has a maximum effective sampling rate of 2000 lines s −1 and 16-bit dynamic range, allowing for studies of the dynamics of large probe volumes. The high sampling rate of the line-array detector also offers online background subtraction by modulating the laser beam on and off in synchronization with the detector exposures [4].
In the present experiments, the laser beam is directed towards a termination screen located about 7 m from the lidar instrument. Between the instrument and the termination screen a pool flame is positioned, at a distance of about 3 m from the lidar instrument. The flame is produced by igniting the liquid fuel inside a 0.6 m diameter pool (see the photo in figure 3). Figure 4 shows sets of lidar data recorded in a heptane flame (panel a) and a diesel flame (panel b), respectively, with a laser power of 40 mW. A background, corresponding to the signal recorded with the laser beam switched off, was subtracted according to the on-line subtraction procedure described above. The heptane flame data (panel a) was recorded with a sampling rate of 1000 lines s −1 , whereas the diesel flame data was recorded with a rate of 2000 lines s −1 . In the data matrices, time is on the x-axis, range (distance from the lidar device) is on the y-axis, while the color scale reflects lidar signal intensity. Each column is a lidar curve recorded at a particular time. Note that the x-scale and the color bars are different in the two panels. A bright band located at a distance of ∼3 m is clearly observable in panel a. This band corresponds to The other bright structure, at a distance of ∼7 m corresponds to laser light reflected off of the termination screen. The intensity and the position of the termination signal varies over time. The variation in intensity is due to the varying scattering of the turbulent flame, i.e. the extinction of the laser light varies in time. This relation between extinction observed through the strength of the termination 'echo' and the light scattering caused by the flame is particularly evident at the time 0.22 s in panel b, where the flame is abruptly extinguished, thus resulting in an abrupt increase in the termination signal intensity. The reason for the spatial variation of the termination signal over time observed in both panels is most likely beam steering from flame turbulence.
Results and discussion
Examples of single lidar curves, extracted from the data displayed in figure 4, are shown in figure 5. The scattering from the flame is evident as a distinct peak located at about 3 m away for the heptane flame and about 2.5 m for the diesel flame.
The reflection from the termination screen corresponds to the intense peak at about 7 m distance. It is possible to estimate the spatial extent of the flame along the laser beam direction by fitting a Gaussian function to the scattering signal from the flame, as shown in figures 5(b) and (d). This method is limited by the shape of the flame and an example of when this method no longer applies can be seen in figure 5(e), where the application of a Gaussian fit to the flame gives the result in figure 5(f). Regardless of the poor fit, the lidar curve still gives spatial information and merely requires a more robust and complicated method to extract more precise spatial information from a non-Gaussian shape. The S/N ratio is measured by taking the ratio between the noise (defined as the rms value) located outside of the signals and the peak height of the Gaussian as shown in figure 5 Taking the FWHM as the flame size, figure 6 shows the measured flame size for the heptane flame (panel a) and the The total flame extinction can be extracted from the intensity of the peak due to the reflection from the termination screen. The reference intensity, I 0 , (corresponding to zero extinction) is the termination peak intensity recorded without a flame. Hence, with a termination peak intensity recorded with the flame present, I f , the total extinction is given by Next, the flame-size results, shown in figure 6, are combined with the measured total extinction, shown in figure 7, to extract an extinction coefficient, which is given by where L flame is the flame size as defined above. The evaluated extinction coefficients for the two flames are plotted in figure 8. The mean extinction coefficient is 33 m −1 (standard deviation 18 m −1 ) for the heptane flame and 96 m −1 (standard deviation 72 m −1 ) for the diesel flame. The extinction coefficient for the diesel flame is about three times higher than for the heptane flame. This result is not surprising since a diffusion flame burning on diesel gives rise to higher soot volume fraction than a flame burning on heptane [13]. The standard deviations of the extinction coefficients suggest that the combustion of diesel is more turbulent than for heptane.
Conclusions
A compact and portable lidar instrument, based on the Scheimpflug principle, has been constructed and its feasibility for practical diagnostics has been investigated through remote sensing experiments on pool fires burning heptane and diesel.
Overall, the experiments demonstrate that the instrument withstands the prevailing challenging conditions, i.e. variations in temperature and humidity. By combining the information of the laser light transmission through the flame, achieved via the intensity of the laser light reflected off a termination screen, and the spatial extent of the flame, obtained through the range resolved flame scattering, extinction coefficients for the flames can be estimated. These results indicate that the Scheimpflug lidar device, requiring only one optical port and providing range-resolved remote sensing, could be a valuable diagnostic tool in harsh intractable environments such as those prevailing in real industrial applications.
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors. | 3,430 | 2023-03-08T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Feasibility study: spot-scanning proton arc therapy (SPArc) for left-sided whole breast radiotherapy
Background This study investigated the feasibility and potential clinical benefit of utilizing a new proton treatment technique: Spot-scanning proton arc (SPArc) therapy for left-sided whole breast radiotherapy (WBRT) to further reduce radiation dose to healthy tissue and mitigate the probability of normal tissue complications compared to conventional intensity modulated proton therapy (IMPT). Methods Eight patients diagnosed with left-sided breast cancer and treated with breast-preserving surgery followed by whole breast irradiation without regional nodal irradiation were included in this retrospective planning. Two proton treatment plans were generated for each patient: vertical intensity-modulated proton therapy used for clinical treatment (vIMPT, gantry angle 10°–30°) and SPArc for comparison purpose. Both SPArc and vIMPT plans were optimized using the robust optimization of ± 3.5% range and 5 mm setup uncertainties. Root-mean-square deviation dose (RMSD) volume histograms were used for plan robustness evaluation. All dosimetric results were evaluated based on dose-volume histograms (DVH), and the interplay effect was evaluated based on the accumulation of single-fraction 4D dynamic dose on CT50. The treatment beam delivery time was simulated based on a gantry rotation with energy-layer-switching-time (ELST) from 0.2 to 5 s. Results The average D1 to the heart and LAD were reduced to 53.63 cGy and 82.25 cGy compared with vIMPT 110.38 cGy (p = 0.001) and 170.38 cGy (p = 0.001), respectively. The average V5Gy and V20Gy of ipsilateral lung was reduced to 16.77% and 3.07% compared to vIMPT 25.56% (p = 0.001) and 4.68% (p = 0.003). Skin3mm mean and maximum dose were reduced to 3999.38 cGy and 4395.63 cGy compared to vIMPT 4104.25 cGy (p = 0.039) and 4411.63 cGy (p = 0.043), respectively. A significant relative risk reduction (RNTCP = NTCPSPArc/NTCPvIMPT) for organs at risk (OARs) was obtained with SPArc ranging from 0.61 to 0.86 depending on the clinical endpoint. The RMSD volume histogram (RVH) analysis shows SPArc provided better plan robustness in OARs sparing, including the heart, LAD, ipsilateral lung, and skin. The average estimated treatment beam delivery times were comparable to vIMPT plans when the ELST is about 0.5 s. Conclusion SPArc technique can further reduce dose delivered to OARs and the probability of normal tissue complications in patients treated for left-sided WBRT.
Introduction
Breast cancer is one of the most common cancers among women globally [1]. Breast-conserving surgery with adjuvant whole breast irradiation has become an increasingly popular treatment option for early-stage breast cancer [2][3][4][5][6]. Currently, conventional photon treatment methods such as tangential intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) have offered increased feasibility for normal tissue sparing in left-sided breast irradiation [7][8][9]. However, long-term follow-up data after adjuvant radiotherapy have shown increased risks of ischemic heart disease, presumably due to incidental irradiation of the heart. Left-sided WBRT involves closer proximity between the heart and radiation field and is associated with an increased rate of fatal cardiovascular events compared with women who received right-sided irradiation [5,6,10,11]. Part of the anterior heart and left anterior descending artery (LAD) may receive significant dose during irradiation of the left-sided breast, and this may contribute to myocardial or coronary artery disease [12]. Darby et al. showed linear correlation between increasing mean heart dose and the incidence of ischemic heart disease among breast cancer patients [13]. Additionally, similar studies have shown that breast cancer patients are at a higher risk of long-term cardiac morbidity after radiation therapy treatment, which is directly related to the volume of the irradiated heart [5,6]. Therefore, the optimization of WBRT has given increasing emphasis on reducing the cardiac dose.
Compared to photon radiotherapy, proton beam therapy may provide a dosimetric advantage when treating left-side breast cancer due to the sharp distal dose falloff of the proton beam. Utilization of intensity modulated proton therapy (IMPT) for breast cancer treatment has increased over the last several years [14][15][16]. In IMPT, the positions and number of beam spots are optimized simultaneously to obtain the desired dose distribution, and robust optimization has been used to deal with uncertainties such as setup uncertainty, range uncertainty, and breathing motion uncertainty [17][18][19][20][21][22]. However, due to the low delivery efficiency with the current proton system, IMPT plans in breast cancer are still limited to a few beam angles. In addition, a large volume of the target may exceed the maximum field size. As a result, some IMPT plans may require a second isocenter for field matching [23], which further prolongs treatment time. These obstacles restrict the ability to further exploit the benefits of IMPT, and motivates us to explore better planning techniques to overcome the current limitations in terms of plan quality and clinical workflow efficiency. Spot-scanning proton arc therapy (SPArc) is an emerging technique that is able to deliver the proton beam through a dynamic rotational gantry [24]. Preliminary results demonstrated the potential clinical benefits for various disease sites, including prostate, head and neck, lung, and brain cancers [25][26][27][28]. This study is the first to exploit the feasibility and potential benefits of utilizing SPArc in the treatment of left-sided breast cancer patients compared to the conventional IMPT technique.
Retrospective patient data selection and treatment planning
Eight patients treated with whole breast irradiation without regional nodal irradiation from our institution using IMPT were included in this study. All patients underwent 4D-CT simulation using a spiral CT scanner (Philips Brilliance Big Bore, Philips Healthcare System, Cleveland, OH), and an average CT image was reconstructed based on a pixel-by-pixel averaging of the 4D-CT scan. The CT datasets were then transferred to RayStation version 9A (RaySearch Laboratories AB, Stockholm, Sweden) for planning. Clinical target volume (CTV) was defined as the volume irradiated based on the Radiation Therapy Oncology Group (RTOG) guidelines [29]. The internal target volume (ITV) was generated on the average CT scan, which was the union of the CTVs from all individual respiratory phase CT scans. Two separate treatment plans were created for each case: vertical IMPT (vIMPT, 10°-30°) and SPArc (partial-arc, 320°-150°) plans. Three of the patients with large tumors required two-isocenter IMPT plan due to the field size limitation (20 cm × 24 cm maximum field size). SPArc plans used a single isocenter with a partial arc. Both planning strategies used ITV plus robust optimization to take into account setup (± 5 mm) and range (± 3.5%) uncertainties (total 21 scenarios). The plan optimized using the Monte Carlo (MC) algorithm with a sampling history of 50,00 ions/spot, and a final dose computed using the MC algorithm with 1.0% statistical uncertainty and a dose grid of 3 mm. Proton beam model is based on the IBA ProteusONE energy range from 70 to 227 MeV, with spot size 1-sigma in air measurement ranging from 3.3 mm @227 MeV to 7.9 mm @70 MeV. The beam computation settings such as energy layer spacing and spot spacing were set by default in Ray-Station using automatic with scale 1 where Bragg peaks overlap at 80% of the max dose. Organs at risk (OARs) include heart, LAD, ipsilateral lung, contralateral breast and skin3mm. The skin3mm was defined as a 3 mm deep layer starting from the external body contour and following the extension of the ITV, and the ITV excludes the skin structure. The prescribed dose for all patients was 4256 cGy in 16 fractions [30,31]. Plans aimed to achieve 100% of the prescribed dose in 98% of the ITV. SPArc and vIMPT plans were optimized in Raystation TPS in similar objectives and constraints for OARs. The objective and constrain functions were specified individually for each patient to obtain the best achievable treatment plan until there is no significant improvement.
Nominal dosimetric plan quality evaluation and plan robustness analysis
Target coverage and doses to OAR's were all evaluated and compared based on the DVH between SPArc and vIMPT. Also, the plan dose homogeneity was evaluated by homogeneity index (HI), which was defined as D 5 / D 95 (where D5 and D95 are the minimum dose in 5% and 95% of the target volume). The ideal value of HI is 1. ITV coverage was evaluated by the conformality index (CI), which was defined as CI = (TVDp/TV)*(TVDp/VDp), where the TV is target volume, and TVDp and VDp are the target volume covered by the prescribed dose, and the volume enclosed by the prescription isodose line, respectively [32]. The plan robustness was defined by the ability of a proton plan to retain its objectives under the influence of uncertainties. In the present study, all plans were evaluated using the worst case scenario perturbed dose with setup uncertainties of ± 5 mm for x, y, z directions, and ± 3.5% range uncertainties. Besides, the rootmean-square deviation doses (RMSD) for each voxel of all the 21 scenarios were calculated. The RMSD volume histograms (RVH) and the area under the RVH curve (AUC), which introduced by Liu et al. were computed for relative comparison of IMPT and SPArc plan robustness [33]. The smaller the AUC value, the more robust the plan was for the specific structure(s).
Evaluation of motion interplay effect
The interplay effect was evaluated by the single-fraction 4D dynamic dose calculation without considering rescanning for different starting respiratory phases [34]. The 4D dynamic evaluation method distributes the spots over the different breathing cycle phases based on the delivery time and sequence. Then, the dose on each breathing phase were computed. Displacement vector fileds (DVFs) was generated via deformable image registation on the corresponding respiration phase to the reference 4D-CT phase (e.g.50% at this study). By utilizing the corresponding DVFs, the dose in each phase was mapped to the reference phase. The accumulation of the dose from different phases to the reference phase is called 4D dynamic dose [27]. It is assumed that the energy-layer-switching-time (ELST) of 1 s and a regular respiratory breathing cycle of 4.5 s in the study. The 4D dynamic dose calculation used a method by relating the time sequence of each spot delivery to the corresponding 4D-CT phase from the patient breathing cycle. Then it accumulated each spot dose via the deformable image registration on the corresponding respiration phase to the reference 4D-CT phase (CT50) associated with the corresponding DVF for evaluation.
Treatment beam delivery time calculation and statistics analysis
The treatment delivery efficiency of SPArc and vIMPT plans were evaluated based on assumptions of a gantry with 1 rotation per minute gantry speed, 2 ms spot switching time, and ELST from 0.2 to 5 s [24]. Statistical analysis was performed with non-parametric Wilcoxon signed rank test using SPSS 21.0 software (International Business Machines, Armonk, New York). The p value < 0.05 was considered statistically significant.
Evaluation of Potential clinical benefit for OARs based on the NCTP model
Potential clinical benefits of each OAR such as heart, LAD, left lung, and skin were estimated using the normal tissue complication probability (NTCP) model from the literature (Table 1). Briefly, Lyman-Kutcher-Burman (LKB) and Poisson LQ models were employed [35][36][37][38][39]. To compare risk values between SPArc and vIMPT plans, we defined the ratio of NTCP (R NTCP ), as R NTCP = NTCP SPArc /NTCP vIMPT . Figure 1 shows an example (patient #5) of radiation treatment plans and DVHs for SPArc and vIMPT. With a similar target coverage (Table 2), the SPArc technique achieved significantly higher dose homogeneity compared with the vIMPT technique (p = 0.005). Specifically, SPArc plans showed a significant reduction in heart dose (D1) of 51.42% compared to vIMPT (53.63 cGy vs 110.38 cGy, p = 0.001), as well as a substantial decrease in the maximum dose to LAD of 51.72% (82.25 cGy vs 170.38 cGy, p = 0.001). Compared to vIMPT, the volume of left lung received 500 (cGy) and 2000 (cGy) was reduced by 34.40% (16.77% vs 25.56%, p = 0.001) and 34.51% (3.07% vs 4.68%, p = 0.003) via SPArc. The skin3mm structure mean and maximum dose was reduced to 3999.38 cGy and 4395.63 cGy compared to vIMPT plans (4104.25 cGy (p = 0.039) and 4411.63 cGy (p = 0.043) respectively. However, the study found that the mean dose of the contralateral breast was increased to 18.5 cGy in the SPArc plans compared to the vIMPT plans (12.13 cGy, p = 0.011).
Plan robustness evaluation in the presence of the setup and range uncertainties
All the AUC values of target volumes and OARs from eight cases were evaluated. With a comparable target coverage, some dosimetric impacts of OARs were mitigated in the presence of setup and range errors via SPArc compared to vIMPT, such as heart (4.00 in vIMPT plan versus 2.25 in SPArc plan, p = 0.009), left-lung (168.25 in vIMPT versus 122.63 in SPArc, p = 0.001) and LAD (21.25 in vIMPT versus 9.88 in SPArc, p = 0.01). There is no statistical difference in contralateral-breast and skin3mm's dosimetric robustness. Figure 2 illustrates RVHs from case number 5. Table 1 OARs, corresponding clinical endpoints, and NTCP models used in the present work
Evaluation of dosimetric impact from the interplay effect
The study found that SPArc could improve the ability to mitigate the interplay effect in both target and OARs ( Figure 3 shows a representative example of the 4D dynamic dose calculation of SPArc versus vIMPT plans. Table 3 lists the estimated beam delivery time per fraction for both SPArc and vIMPT plans for various ELST. When the proton system's ELST is 5 s, the average estimated delivery time ratios between SPArc and vIMPT plans was 1.40 (1059 s vs. 758 s), which means it would take significantly longer to deliver a SPArc plan (p < 0.001). The difference became smaller as the ELST is faster. When the ELST was less than 0.5 s, the (Fig. 4). However, the estimated treatment time did not take into account the additional time to perform iso-shift and re-imaging. For the 2-isoenter vIMPT plan, additional couch movement for the next iso and IGRT verification procedures may be needed to ensure the treatment accuracy. For SPArc, only a single iso is needed, which would save significant additional treatment time as well as simplify the clinical treatment workflow.
Potential clinical benefit for heart
The results show that there was a potential clinical benefit based on NTCP model calculation of using SPArc over vIMPT (Table 4). More specifically, heart, LAD, left-lung, and skin complications showed an overall reduction in the toxicity risk prediction for SPArc plans compared with the vIMPT plan, with R NTCP ranging from 0.61 to 0.86, depending on the clinical endpoint (Fig. 5).
Discussion
This is a first and comprehensive dosimetric planning study to explore the feasibility and potential dosimetric and clinical benefits in the management of patients with left-sided breast cancer receiving whole breast irradiation. This study also analyzed plan robustness in the presence of setup and range errors in addition to the breathing-induced interplay effect. Our results indicate that the SPArc technique with additional degree of freedom in optimization and delivery could not only improve dosimetric quality but also improve plan robustness compared to conventional vIMPT. Recently, there is a trend to use more fields in the breast cancer treatment which might be able to improve the treatment plan quality as well. To provide a more comprehensive comparison among these planning strategies, additional data were included in the Additional file 1 including the comparison with 3F-IMPT and 5F-IMPT.The result showed that as more beam angles were used in IMPT, the more robust the plan quality is. However, as a tradeoff, multi-field IMPT takes longer to deliver. In addition to the plan quality improvement, one of the driven motivation of SPArc is to shorten the treatment delivery time and simplify the clinical workflow. The results from this study agree with previous findings that SPArc could shorten the total treatment delivery time based on the modern proton therapy machines where the average of ELST is less than 0.5 s [25][26][27][28]40]. In the presence of the large target size, which requires multiisoenter field matching, SPArc technique could utilize a single-isoenter to simplify the clinical treatment workflow. This is due to the current en face beam angle selection. A 2-iso setup was needed where the target exceeds the lateral maximum field size. e.g. for IBA ProteusONE, the lateral max field size is 20 cm. Any target which was larger than 20 cm laterally from Beam-Eye-View, requires additional iso. By taking advantage of the arc trajectory, SPArc can deliver the proton spot to the boundary of the lateral region through a tangent beam direction. Thus, SPArc effectively increased the lateral target coverage by using the single iso. Such principle also applies for multifield IMPT e.g. 3F-IMPT and 5F-IMPT where singleiso setup was needed. However, please be aware of that SPArc or multi-field IMPT will not solve the problem where the target exceed the max field size in superiorinferior direction. In these scenarios, multi-iso setups for SPArc are still needed. For example, three out of eight cases included in this study required a second isocenter. As a result, therapists need to apply an isocenter shift, image validation, and second treatment field in the vIMPT treatment. A review of treatment logs of these three cases found that it took 5.11 ± 0.05 min on average to perform these additional procedures for the 2 nd isocenter shift. These additional couch isocenter shift and image acquisition times prolong the overall treatment time and also increase the chance of intrafraction motion [41][42][43]. Thus, SPArc has the potential to provide a more efficient clinical treatment workflow through one arc trajectory and further reduce the uncertainties from the intrafraction motion.
Cardiac toxicity remains a leading treatment related cause of morbidity and mortality among long-term breast cancer survivors after radiotherapy, especially in the patient population with left-sided breast cancer [44]. Previous studies have found several heart dosimetric metrics related to acute or late cardiotoxicity, although there are still debates in which dosimetric metric and substructures are more related to the acute or late cardiotoxicity [45][46][47][48].
Darby et al. found that the rate of the incidence of ischemic heart disease increased linearly with the mean heart dose by 7.4% per Gy [13]. In addition, the RAD-COMP (Radiotherapy Comparative Effectiveness) trial has also pointed out that the mean heart dose as a critical indicator for cardiotoxicity [45,49]. The mean heart dose of the delivered vIMPT plans in our study was 6.38 cGy, which is higher than SPArc 4.5 cGy (p = 0.04). Moreover, there is increasing evidence that the dose of heart substructures needs to be considered. Some studies have focused on the LAD as important parts of the heart associated with radiation-induced heart disease [11,50]. Conventional proton beam therapy (IMPT or Passivescattering) could reduce the dose of the heart and LAD in left-side breast cancer patients compared to the photon radiotherapy technique in the high cardiac doses sparing [10,15,51]. This study found that the new proton treatment technique, SPArc, could further reduce the D1 of heart and LAD which might mitigate the probability of heart acute and late toxicities. We recognize that the relevance of photon NTCP models to proton therapy has not been established and further proton study would be needed to correlate the proton dose with the cardiotoxicity. The study also found that the contralateral breast mean doses were slightly higher in SPArc planning group compared with vIMPT. It is important to consider and choose the optimal treatment technology for an individual patient considering the possible clinical benefits as well as the limitation of using SPArc technique. Another critical OAR that could benefit from SPArc is the healthy lung tissue. Reducing the radiation dose to the lung can result in reducing the risk of radiation pneumonitis in patients. Our feasibility study finds that the technology of SPArc can substantially improve not only the heart and LAD sparing but also the lung sparing in comparison with vIMPT. Previous studies have confirmed that proton therapy can significantly reduce the V500(cGy) and V2000(cGy) of the ipsilateral lung by nearly 50% compared to traditional 3DCRT and IMRT [10,52,53]. This study found that SPArc plans further reduced all dose-volume parameters while providing a reduced or similarly high-dose radiation volume with IMPT in left-sided WBRT.
The study showed a very interesting result where SPArc has better capability of mitigating the motion interplay effect over IMPT, even though SPArc deliver spots through some tangent arc trajectories which are supposed to be more sensitive to the motion and it has a similar treatment delivery time compared to the single field IMPT. Although the exact rationale behind this phenomenon of interplay effect mitigation is not well understood yet, a similar finding was also reported in the lung mobile target treatment in comparison between SPArc and IMPT [27]. There might have one hypothesis to explain the phenomena. When the number of beam angles increases, it could effectively reduce the dosimetric impact from the proton range uncertainties. For example, when the tumor moves in and out the beamline due to the breathing induce motion, there might have 50% of dose overshooting or undershooting the target from each beam angles using a two-field IMPT plan. On the other hand, SPArc, as an advanced IMPT technique consists of hundreds of beam angles. As a result, overshooting or undershooting the target might only contribute a few percentages of total dose difference in each beam angle. Such advantage may help SPArc effectively mitigate the dosimetric impact from the interplay effect. Because the breathing-induced motion is not significant (< 2 mm, Additional file 1: Table S4.) in most of the breast cancer patient population, it is limitation of this motion evaluation study. To prove this new hypothesis of interplay mitigation effect in a relationship to the degree of freedom or beam angles, a more quantitative study would be needed.
Besides, spot characteristics also play an important role in the interplay effect evaluation [54]. In addition, the spot spacing parameter for planning optimization determine the number of the spots where a higher value increases the inter-spot distance and less spot would be used in a plan. Thus, the plan might be more sensitive to the motion uncertainties [55,56]. Similarly, the energy layer spacing parameter determine the number of energy layers [57]. These planning optimization parameters may also play a critical role in the interplay effect. We would recommend different institutions to evaluate the interplay effect based on their own proton beam model and planning optimization parameters in order to offer an optimal treatment plan with an efficiency delivery and robust plan quality [58].
Conclusions
SPArc can achieve superior OARs sparing and robust plan quality in left-sided WBRT compared to the traditional IMPT. With ELST less than 0.5 s in current modern proton systems, the total beam delivery time per fraction of SPArc would be faster than IMPT which would be desirable for future clinical implementation.
Additional file 1:
A comprehensive comparison of SPArc, vIMPT, 3F-IMPT, 5F-IMPT in terms of the plan quality, robustness evaluation and treatment delivery efficiency. Table s1. Target volume and OARs dosimetric parameters among vIMPT, 3F-IMPT, 5F-IMPT and SPArc. Table s2. Absolute difference of target volume and OARs dosimetric parameters to SPArc. Table s3. Absolute difference of target volume and OARs dosimetric parameters to vIMPT. Figure s1. Total average treatment beam delivery time. Table s4. The movement was calculated based on the mass centre difference between the CTVs in the 4DCT phases in 3D, superior inferior (SI), left-right (LR) and anterior-posterior (AP). | 5,423.6 | 2020-08-28T00:00:00.000 | [
"Medicine",
"Physics"
] |
RETRACTED ARTICLE: Human induced pluripotent stem cell-derived platelets loaded with lapatinib effectively target HER2+ breast cancer metastasis to the brain
Prognosis of patients with HER2+ breast-to-brain-metastasis (BBM) is dismal even after current standard-of-care treatments, including surgical resection, whole-brain radiation, and systemic chemotherapy. Radiation and systemic chemotherapies can also induce cytotoxicity, leading to significant side effects. Studies indicate that donor-derived platelets can serve as immune-compatible drug carriers that interact with and deliver drugs to cancer cells with fewer side effects, making them a promising therapeutic option with enhanced antitumor activity. Moreover, human induced pluripotent stem cells (hiPSCs) provide a potentially renewable source of clinical-grade transfusable platelets that can be drug-loaded to complement the supply of donor-derived platelets. Here, we describe methods for ex vivo generation of megakaryocytes (MKs) and functional platelets from hiPSCs (hiPSC-platelets) in a scalable fashion. We then loaded hiPSC-platelets with lapatinib and infused them into BBM tumor-bearing NOD/SCID mouse models. Such treatment significantly increased intracellular lapatinib accumulation in BBMs in vivo, potentially via tumor cell-induced activation/aggregation. Lapatinib-loaded hiPSC-platelets exhibited normal morphology and function and released lapatinib pH-dependently. Importantly, lapatinib delivery to BBM cells via hiPSC-platelets inhibited tumor growth and prolonged survival of tumor-bearing mice. Overall, use of lapatinib-loaded hiPSC-platelets effectively reduced adverse effects of free lapatinib and enhanced its therapeutic efficacy, suggesting that they represent a novel means to deliver chemotherapeutic drugs as treatment for BBM.
Platelets are anucleate, cellular fragments derived from megakaryocyte membranes predominantly present in bone marrow [34][35][36][37] .Novel drug delivery systems have been designed to mimic various properties possessed by platelets, such as their ability to adhere to and deliver toxic drugs to tumor cells 38,39 .Most of these types of delivery systems utilize platelet membranes and require a complex production process.Platelet membrane-cloaked nanoparticles have also been considered but are still less biocompatible as they induce an immune response in vivo [40][41][42][43] .Unlike nanoparticle-based drug delivery systems, platelets are naturally cleared from the body by reticuloendothelial cells in liver and spleen [44][45][46][47][48][49] .Platelets harboring encapsulated agents have demonstrated systemic clearance similar to donor-derived platelets 28,39 .Previous studies report that platelets are activated by tumor cells and adhere to tumor cells, a phenomenon referred to as tumor cell-induced platelet aggregation (TCIPA), which can also increase metastasis 50 .Activated platelets release contents from their granules, and platelets loaded with drugs (such as doxorubicin 51 ) release those drugs in the proximity of tumor site 50 .However, cancer patients undergoing chemotherapy can be thrombocytopenic, and cancer patients who receive multiple allogenic drug-loaded platelet transfusions could develop refractoriness to platelets due to HLA alloreactivity and subsequently require transfusions with HLA-matched donor platelets loaded with drugs [52][53][54] .Finding alternative sources of non-immunogenic, high-quality autologous platelets could reduce these risks.
Since human induced pluripotent stem cells (hiPSCs) are a potentially replenishable source of transfusable drug-loaded HLA negative (Universal) or autologous or HLA matched platelets, our primary objectives were to develop a clinically adaptable protocol to generate hiPSC-derived megakaryocytes and platelets in vitro and then determine whether those hiPSC-platelets could serve as drug carriers to target HER2+ BBMs in vitro and in mouse models.In vivo, lapatinib-loaded hiPSC-platelets targeted BBMs and achieved a longer retention time than synthetic drug delivery systems or free drugs.Encapsulated lapatinib appeared to escape immunosurveillance and delivered drug in the vicinity of tumor cells via TCIPA, apparently circumventing the blood-brain barrier (BBB), which can interfere with treatment of brain metastases 55 .We conclude that drug-loaded hiPSC-platelets exhibit enhanced therapeutic efficacy at potentially reduced drug dosages, limiting damage to normal tissues, and could serve as an efficient drug carrier to treat HER2+ BBM tumors in brain.
Culturing of Day 0 immature megakaryocytes (Supplementary Fig. S2A (Gating Strategy)) in STEMspan Megakaryocyte Expansion Supplement (STEMCELL Technologies) and STEMspan ACF medium, along with addition of barasertib starting on Day 2, increased the ploidy of immature CD41a + CD42b + megakaryocytes (Supplementary Fig. S2A (Gating Strategy)) by Day 4 (Supplementary Fig. S1A (Scheme) and D), which was used as a surrogate marker for megakaryocyte maturation (" ≥ 8n ploidy" served as a marker of maturation).Addition of barasertib has been previously shown to promote generation of polyploid megakaryocytes (Supplementary ◂
R E T R A C T E D A R T I C L E
Fig. S1D and S2B) and increased terminal differentiation to matured megakaryocytes to platelets 64 .Similarly, megakaryocytes cultured in ultra-low adherent culture plates along with barasertib produced ~ 30 Calcein Blue AM + Annexin-V -CD41a + CD42b + platelets per megakaryocyte cell by day 7 (Fig. 1C,D) (Fig. 1C, Supplementary Fig. S2C), indicating that our protocol can successfully differentiate hiPSC cells through the stages of immature diploid megakaryocytes (Supplementary Fig. S2A), to maturation (Supplementary Fig. S2B) and to terminal differentiation of platelets (Supplementary Fig. S2C).
Large, highly polyploid (Supplementary Fig. S1D) megakaryocytes (Supplementary Fig. S1E) contained demarcation membrane systems and organelles, including mitochondria, granules, and multiple nuclei, and were capable of generating distinct pro-platelets (Supplementary Fig. S1G) by culture days 5-7 (Fig. 1D).Platelets can be damaged by extracellular metalloproteinases, which shorten their time in circulation and promote loss of surface CD42b expression relative to CD41 (αIIb) 65,66 .To determine if pro-platelets were differentiated and intact, we examined CD41a and CD42b expression in living (Calcein Blue AM + ) and non-apoptotic (Annexin V − ) hiPSC platelets purified by BSA gradient segregation (Fig. 1C,D and Supplementary Fig. S2C) and compared expression to infused donor platelets (Supplementary Fig. S2C-E).We observed no or minimal loss of surface CD42b on iPS-derived relative to donor platelets.Approximately 80% of CD41 + hiPSC-derived platelets were CD42b + .Another ~ 10% of CD41 + hiPSC-derived platelets were smaller in size and granularity relative to the larger proportion of CD41 + CD42b + hiPSC-derived platelets.We conclude that this 10% was non-functional and did not count them as platelets (Supplementary Fig. S2E).Using platelets from human blood donors to establish size gating (Fig. 1C and Supplementary Fig. S2C), we observed that the ~ 80% of hiPSC-platelets generated above expressed CD41a and CD42b comparably to donor platelets.We then examined the kinetics of platelet production from maturing megakaryocytes by flow cytometry-based counting of CD41 + CD42b + platelet-sized particles and found that platelet levels began to increase on day 5 and peaked at day 7 (Fig. 1D).
Transmission electron micrography (TEM) analysis demonstrated that hiPSC-platelets were ultrastructurally identical to human donor platelets (Fig. 1E): hiPSC-platelets were anucleate and possessed organelles seen in adult human blood donor platelets, including mitochondria, OCSs (open canalicular systems), and granules.We then performed light transmission aggregometry (LTA), a method used clinically to evaluate platelet function in vitro [67][68][69] , on hiPSC-platelets generated in vitro from hiPSCs and showed that 3 × 10 7 resting hiPSC-platelets resuspended in human plasma reached 50-60% aggregation at all time points analyzed after exposure to 20 µM ADP and TRAP (Fig. 1F).Donor platelets exhibited stronger aggregation and reached 85-80% aggregation under the same conditions.To assess platelet activation, we performed activation assays using a FITC-conjugated PAC1 antibody, which binds to the activated conformation of αIIbβ3 integrin, and a PE-conjugated P-Selectin monoclonal antibody, which binds to P-selectin (P-Sel) on the surface of granules in activated platelets.In response to ADP and TRAP treatment, hiPSC-platelets showed stronger PAC1 binding than unstimulated controls (Fig. 1G,H and Supplementary Fig. S3A).PAC1 and P-Sel activation profiles of hiPSC-platelets were similar to those of donor platelets, although some hiPSC-platelets showed pre-activation (Fig. 1G,H), as has been reported by others 66 .
Functional comparison of lapatinib-loaded and non-loaded hiPSC-derived platelets.
We next confirmed that our hiPSC-platelets could encapsulate drugs by assessing uptake of the fluorescent probe coumarin-6 (C6, 0.1%) by fluorescence imaging (Fig. 2A).We then demonstrated that doxorubicin could be encapsulated into hiPSC-derived platelets using the same protocol, suggesting that lapatinib could be similarly encapsulated (Supplementary Fig. S4A).We and others 51,[70][71][72] previously demonstrated that donor platelets can take up drug via passive transfer (Supplementary Fig. S4A).We calculated the encapsulation efficiency (a number that describes the extent of drug encapsulation) to be 93% in hiPSC-platelets, while drug loading capability (which indicates the amount of drug loaded per unit weight of hiPSC-platelets) to be 49% (Fig. 2B), the latter suggesting that the volume ratio of hiPSC-platelets to lapatinib was ~ 1:2.Interestingly, pH modulated lapatinib release in vitro: release was slower at pH 7.4 but more rapid at pH 6.5.Accordingly, at pH 6.5, ~ 80% of loaded lapatinib was released into the buffer within 40 h, as compared to ~ 40% release at pH 7.4 within 40 h (Fig. 2C).
Additional TEM analysis indicated that both lapatinib-loaded and non-loaded platelets showed organelles such as mitochondria, granules and glycogen granules, and that lapatinib-loaded platelets exhibited normal morphology (Fig. 2D).Lapatinib-loaded hiPSC-platelets and donor platelets showed comparable activation capacity in vitro based on surface expression of PAC1 and P-Sel (Fig. 2F and Supplementary Fig. S3A), as well as a similar ability to aggregate in vitro after ADP and TRAP exposure (Fig. 2E).These analyses confirm that hiPSC-platelets maintain integrity and biological function after lapatinib loading and retain properties of bona fide human platelets.
R E T R A C T E D A R T I C L E
V levels, while exposure to non-loaded controls had little or no effect Annexin-V positivity (Fig. 3C,D).RT-qPCR analysis of FACS-sorted CD326 + BBM cells revealed significant upregulation of the pro-apoptotic genes BAD, BAK, BAX, and p53 in BBM cells exposed to lapatinib-loaded hiPSC-platelets relative to those exposed to non-loaded control hiPSC-platelets (Fig. 3E), indicative of apoptosis.To determine whether BBM cells contained lapatinib delivered by hiPSC-platelets, we first loaded hiPSC-platelets with saline, the fluorescent probe C6, or a 1:1 mixture of lapatinib and C6 and analyzed C6 accumulation in BBM cells by flow cytometry (Fig. 3F and Supplementary Fig. S5A).We then isolated C6-positive BBM cells by FACS and analyzed lapatinib concentrations in these cells via LC-MS/MS (Supplementary Fig. S5B) as well as viability using CCK8 assays (Fig. 3G).Lapatinib levels were significantly elevated in BBM cells exposed to lapatinib-loaded hiPSC-or donor-platelets compared to BBM cells exposed to C6 only-loaded hiPSC-or donor-platelets or with non-loaded platelets (Controls) (Fig. 3F,G and Supplementary Fig. S5A and S5B).We also observed significantly reduced viability of BBM cells exposed to a 1:1 mixture of lapatinib and C6, as compared to non-loaded or C6-loaded platelets (Fig. 3G).
Analysis of therapeutic efficacy of lapatinib-loaded hiPSC-platelets in vivo.We next assessed potential anti-tumor effects of lapatinib-loaded hiPSC-platelets in vivo using female NOD/SCID mice bearing xenograft BBM tumors, which we developed 41,74 (Fig. 4A).To do so, we injected BBM cells (100 K) expressing ZsGreen1 Renilla luciferase intracranially in the subcortical region of mice via the cisterna magna on day 0 and allowed tumor development to occur over 7 days.Once tumors were detectable by bioluminescence imaging (BLI) lapatinib-loaded hiPSC or donor platelets, free lapatinib, or non-loaded control hiPSC and donor platelets were intravenously administered via the tail vein every 3 days.Tumor progression was then monitored by BLI every 2 days up to day 30 (Fig. 4A,B,E).That analysis revealed significantly reduced BLI counts in mice infused with lapatinib-loaded hiPSC-or donor-platelets relative to cohorts that received non-loaded hiPSC-or donor-platelets or free lapatinib (Fig. 4A,B).We monitored mice for up to 60 days and observed that BLI intensity decreased to undetectable levels in mice treated with lapatinib-loaded hiPSC-platelets (Fig. 4A) relative to vehicle-or non-loaded platelet-treated controls.Moreover, overall survival of mice bearing BBM tumors was significantly higher in animals treated with lapatinib-loaded hiPSC-or donor-platelets relative to vehicle-or non-loaded platelet-treated controls (Fig. 4C,E and Supplementary Fig. S6B and C).Pre-treatment of BBM1 cells with lapatinib-loaded hiPSC-platelets also reduced tumor seeding capacity of BBM1 cells by 1,000-fold relative pre-treatment with non-loaded hiPSC-platelets (Fig. 4D).As expected, tumor size (as determined using HEstained horizontal brain sections and AMIRA-based area quantification) also decreased significantly in tumorbearing mice infused with lapatinib-loaded hiPSC-or donor-platelets relative to free lapatinib or untreated controls (Fig. 4E and Supplementary Fig. S7C).Also, after 24 days of treatment with free lapatinib, mice had a final average body weight of ~ 18.9 g, indicative of significant weight loss, compared to animals treated with non-loaded or lapatinib-loaded hiPSC-platelets (Fig. 5A), suggesting that lapatinib toxicity to normal tissue is reduced by encapsulation in hiPSC-or donor-platelets.Interestingly, histopathological analysis using hematoxylin and eosin (H&E) staining of heart, liver, spleen, lung, and kidney did not reveal significant abnormalities in any of the treatment groups (Supplementary Fig. S6A).
We also measured plasma lapatinib concentrations in mice bearing BBM derived tumors at various timepoints after treatment (Fig. 5B).Whereas free lapatinib was rapidly cleared with a short half-life (t 1/2 = 2.1 ± 0.6 h), lapatinib released from lapatinib-loaded hiPSC-or donor platelets remained at a higher level with a longer half-life (t 1/2 = 31.3± 0.9 h; Fig. 5B).Analysis of tissue distribution showed that, compared to levels of lapatinib released from lapatinib-loaded hiPSC-or donor-platelets, concentrations of free lapatinib were significantly higher in heart, kidney and liver tissues but relatively lower in tumor tissue (Fig. 5C).However, we observed an increase in lapatinib concentration in lungs of tumor-bearing mice treated with lapatinib-loaded hiPSC-and donor-platelets relative to free lapatinib.Furthermore, lapatinib levels were higher in kidney and heart in free lapatinib-administered animals relative to animals treated with lapatinib-loaded hiPSC-and donor-derived platelets, suggesting decreased toxicity (Fig. 5C).To confirm minimal adverse effects in animals treated with lapatinib-loaded donor-or hiPSC-platelets, we performed histopathological analyses of all major organs including lung, kidney, and heart from BBM tumor-bearing mice treated with both.Relative to non-loaded platelet controls, we observed no significant changes in morphology in any group analyzed (Supplementary Fig. S6A).To confirm that elevated lapatinib levels did not alter heart function we also quantified levels of cardiac troponin I (cTnI) in sera of variously treated tumor-bearing mice and found that levels were lower in all test groups than those seen in positive control mice exposed to isoproterenol, which induces higher troponin I levels, strongly suggesting that heart function is normal in mice treated with lapatinib-loaded or non-loaded donor or hiPSCplatelets (Supplementary Fig. S7A).Circulation or clearance kinetics of 5 × 10 8 intravenously infused non-loaded or lapatinib-loaded human iPSC-and donor-platelets was comparable in macrophage-depleted NOD/SCID mice, with a time to reach maximal accumulation (Tmax) of 1 h (Supplementary Fig. S7B and S8A-B).HiPSC-platelets, like human blood platelets, circulated for at least 24-48 h, indicating that lapatinib-loaded hiPSC-platelets possess a lower rate of clearance from blood circulation, which may allow them to accumulate and release lapatinib in the vicinity of the tumor tissues, potentially sparing normal tissues.
Discussion
Breast cancer patients can develop BBMs years or even decades after their initial diagnosis, indicative of a long latency period, despite the presence of circulating tumor cells 12,75,76 .Metastases are responsible for 90% of all cancer deaths 3,4,7,8 .Patients diagnosed with brain metastases have a dismal probability of 1-year survival 1 .Furthermore, the advancement and improved efficacy of treatments for primary breast cancer have led to an increased propensity to develop metastatic breast tumors 6,10 .Although all breast cancer subtypes can metastasize to the brain, patients with HER2+ primary breast tumors have a higher risk of developing brain metastasis 9 .~ 40% www.nature.com/scientificreports/ of patients with HER2+ primary breast cancer develops brain metastasis 1,6 .Major treatment modalities for HER2+ BBMs have been radiation therapy, surgical resection of BBM tumors, and systemic chemotherapy treatment with various drugs [77][78][79][80][81][82] .However, the benefits of radiation therapy are limited to short-term palliation of distressing symptoms and this modality cause neurocognitive dysfunction 83 .Additionally, a major limitation in developing effective chemotherapeutic treatments for HER2+ BBMs is the poor activity of anti-neoplastic agents against HER2+ BBMs and poor penetration of these agents into brain tissue 3,7,8 .Previous studies reported modest activity of lapatinib against BBM tumors in the brain 22,[84][85][86] .Therefore, development of drug delivery systems that can augment BBB clearance to reach the tumors residing in the brain, reduce side effects and simultaneously improve the efficacy, therapeutic index, and biocompatibility of promising drugs treatments is needed [27][28][29][30]33,42,87 . Althoughvarious drug delivery systems have been developed, poor biocompatibility has limited their application in clinical settings.Some studies have demonstrated the utility of natural donor-derived blood platelets as drug carriers 51 .For example, a recent study demonstrated the use of doxorubicin-loaded blood-platelets as a delivery system to treat lymphoma and achieved a longer retention time, relative to free drug retention times.However, allogenic drug-loaded platelet transfusions run the risk of alloimmunization to HLA 52,53,61 .Patients who receive multiple platelet transfusions, such as those with various types of cancer, often develop platelet refractoriness due to HLA alloreactivity and subsequently require additional transfusions with HLA-matched donor platelets [88][89][90][91] .Therefore, finding alternative sources of non-immunogenic, high-quality platelets might decrease risks associated with allogeneic drug-loaded platelet transfusions.
R E T R A C T E D A R T I C L E
Therefore, our main objectives were to develop a clinically adaptable protocol to generate hiPSC-derived megakaryocytes that can efficiently mature and terminally differentiate into highly functional platelets and to determine whether hiPSC-platelets possess properties of natural platelets and could serve as an alternative to allogeneic donor-derived platelets as drug carriers for targeting and treating BBMs.Here, we report a serum and feeder-free protocol to differentiate human hiPSCs into megakaryocytes, which in turn mature and terminally differentiate to generate functional platelets.Using this method, we generated hiPSC-platelets on a large scale, allowing us to perform flow cytometry-based activation assays and LTA-based aggregation assays, in vitro.The latter demonstrated that donor-derived platelets do aggregate more readily than hiPSC-derived platelets, potentially due to the method used to enrich them.Specifically, we performed differential centrifugation to remove the naïve megakaryocytes from platelet medium followed by BSA density gradient-based centrifugation to generate purer platelet populations.These methods efficiently remove large megakaryocytes but cannot remove small-sized debris materials (e.g., CD42b -or CD41a -or Annexin-V + particles) that does not aggregate and can lead to higher background.This purification step does not occur in the preparation of fresh donor platelets.Furthermore, hiPSC derived platelets are fetal in nature showed better aggregation profiles compared to platelets derived from CD34+ umbilical cord blood cells, which show no or minimal aggregation based on LTA representing significant improvement 61 .Subsequently, we used these hiPSC-platelets to encapsulate lapatinib and found that they can be loaded with an encapsulation efficiency of ~ 90% without any alteration in platelet morphology and functionality (Fig. 2).Using Boyden chambers, we observed that lapatinib-loaded hiPSC-and donor-derived platelets significantly reduced the viability of BBM1 and BBM2 cells compared to free lapatinib (Fig. 3B).We attribute this increased cytotoxicity to the rapid migration of the lapatinib-loaded platelets from the top chamber to the bottom chamber, where the BBM1 and BBM2 tumor cells were cultured in a monolayer; however, this remains to be validated through time-lapse imaging based analyses of binding kinetics of variously stained platelets binding to the tumor cells.
We also showed that the in vitro release of lapatinib from lapatinib-loaded hiPSC-and donor-derived platelets was pH-sensitive.This is critical, as the tumor microenvironment is acidic compared to normal tissues, reflecting the hypoxic conditions induced by rapid tumor cell proliferation 92,93 .However, it remains unclear how lapatinib release from hiPSC-and donor-derived platelets is accelerated under acidic conditions.The lower pH could induce the generation of platelet-derived extracellular vesicles containing lapatinib, as has been reported for cancer cell-derived vesicles 94 .Loaded platelets can also respond to thrombin-mediated activation and generate extracellular vesicles, which could augment lapatinib release.Such vesicles, which are ~ 200 nm in diameter, could be important secondary drug delivery systems for lapatinib with greater potential to infiltrate BBM tumors and fuse with cancer cells.In addition, an acidic environment can promote the generation of mixed platelet-leukocyte aggregates and increased chemotaxis of neutrophils mediated by platelets in a P-Sel-dependent manner 95 ; and P-Sel, in turn, is exposed on the platelet surface under acidic conditions.Finally, alterations in platelet structure have been reported under acidic conditions 96 , which may increase OCS "permeability" 97 .Overall, these findings suggest that drug-loaded hiPSC-and donor-derived platelets are more likely to release their contents in or around the acidic environment established by BBM tumor cells rather than the normal physiological environment, reducing potential toxicity to surrounding tissues.
After treatment with free lapatinib and lapatinib-loaded hiPSC-platelets, the growth inhibition and apoptosis of BBM cells were comparable in vitro.However, the effective treatment of brain metastases is generally hindered by the BBB 55 .Thus, our more clinically significant finding was that hiPSC-platelets can act as an efficient drug carrier for the treatment of HER2+ BBMs tumors residing in the brain.We found that, compared to the infusion of free lapatinib, the infusion of lapatinib-loaded hiPSC-platelets significantly decreased tumor progression and tumor size and more potently reduced the tumor-seeding capability of BBM cells.Plasma analyses also indicated that lapatinib concentrations remained higher in mice treated with lapatinib-loaded hiPSC-platelets, with a much longer half-life than the free drug.These results suggest that lapatinib-loaded hiPSC-platelets could be more effective against BBM than free lapatinib.Furthermore, following infusion of lapatinib-loaded hiPSC-platelets, lapatinib levels were significantly higher in tumors relative to the major organs evaluated, which was not the case in mice infused with free lapatinib.Consistent with these findings, our histological assessment of various tissues in treated mice revealed no overt signs of injury; however, we did observe body weight loss in tumor-bearing mice treated with free lapatinib, which is an early indicator of toxicity.Finally, cTnI levels were low and comparable
R E T R A C T E D A R T I C L E
Breast-to-brain metastasis (BBM) cell cultures.BBM tissue specimens (HER2+) were collected and propagated as previously described 73 .Briefly, the cell lines COH-BBM1 (BBM1) and COH-BBM2 (BBM2) were established by validation of phenotypic markers (HER2, pan-cytokeratin) and exclusion of brain cells (astrocytes, microglia, and endothelial cells), and cultured in DMEM/F12 media (Thermo Fisher Scientific) supplemented with 10% fetal bovine serum (FBS; Sigma Aldrich), 1% Glutamax (Life Technologies), and 1% antibiotic-antimycotic (Life Technologies), at 37 °C and 5% CO 2 .Cell line authentication was performed by short tandem repeat profiling by IDEXX BioAnalytics, and the cell lines were determined to be Mycoplasma-negative by PCR (Agilent Mycosenser Mycoplasma Assay Kit) as recently as 1 month before the final experiments.
In vivo treatment of BBM xenograft tumors with free lapatinib and lapatinib-loaded hiPSC-platelets.All NOD/SCID mice were maintained under veterinary supervision and housed under standard living conditions, with a 12-h light/dark cycle and access to food and water ad libitium.Investigators were blinded to treatment group for the analysis of mice.The experiments were not randomized.No statistical methods were used to predetermine sample size.No strategy was used to eliminate or identify confounders.Female NOD/SCID mice were randomized to treatment and control groups of n = 7, giving 80% power to detect a treatment effect size of 65% compared to a baseline response of 5% at a significance level of 0.05.Throughout the course of our experiments, no animals were excluded from the study.Macrophages were depleted in all NOD/SCID mice by intravenous injection of liposome-encapsulated clodronate, as shown previously 61,99 .
To evaluate tumor growth and survival in vivo, BBM1 cells (100 K in 20 μL PBS) were transduced with mCherry and firefly luciferase (mCherry:LUC, Addgene) [100][101][102] and injected into the brains of 6-week-old NOD/ SCID female mice (7 mice/group, 24 total; 2 mm right and 1 mm anterior to the bregma suture).Only NOD/ SCID female mice were used for these studies because BBM tumors occur predominantly in females.Tumor progression was monitored every 48 h for 40 days using BLI on a Xenogen Imaging System (Xenogen Corp).At the conclusion of the experiments, mice were euthanized, and their brains and other tissues collected and fixed in formalin (Thermo Fisher Scientific) for downstream lapatinib concentration analyses.
The tumor-bearing female NOD/SCID mice were randomly divided into six groups to receive treatment with: (1) normal saline; (2) non-loaded hiPSC-platelets; (3) free lapatinib; (4) lapatinib-loaded hiPSC-platelets; (5) non-loaded donor-derived platelets; (6) lapatinib-loaded donor-derived platelets.Each group had n = 7 female NOD/SCID mice, for a total of 42 animals.The dose of lapatinib for each injection was 6 mg/kg and based on the effective drug loading rate of 48.5%, the dose of lapatinib-loaded hiPSC-platelets was 12 mg/kg.Treatments were intravenously administered via the tail vein, and the volumes of tumors were measured every 3 days.
Statistical analyses.
Data are presented as mean ± standard deviation (SD), using data generated from n = 3 biological replicates with n = 2 technical replicates present for each biological replicate.The statistical significance of differences between groups was determined (unless otherwise noted) using one-or two-way analysis of variance (ANOVA) with Bonferroni correction for multiple comparisons (GraphPad Prism 8. | 5,660.2 | 2021-10-15T00:00:00.000 | [
"Biology",
"Medicine"
] |
Steady state analysis of Boolean molecular network models via model reduction and computational algebra
Background A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. Results This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. Conclusions The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models even those of moderate connectivity. The problem for large Boolean networks with high average connectivity remains an open problem.
Several methods have been proposed in the literature for dealing with this problem, including exact as well as heuristic methods. We provide a brief review of the different types here. For this purpose, we represent a Boolean network as follows. Let K = {0, 1}, and assume that the network has n nodes x 1 , . . . , x n . Each node x i has associated to it a Boolean function f i : K n −→ K. Thus, we can represent the Boolean network as a function f = ( f 1 , . . . , f n ) : K n −→ K n .
One can represent the variable dependencies through the dependency graph of the network, whose nodes are the variables x 1 , . . . , x n . There is an edge x i → x j if x i appears in the function f j , that is, the state of x j depends on the state of x i . The problem of finding steady states is then formulated as finding all states x ∈ K n such that f (x) = x.
One approach to the problem is model reduction. Some existing reduction methods use a "steady-state approximation" [25][26][27][28] to reduce the number of variables. Intuitively, if a function depends on a variable, e.g., f i = f i (x j , x k , x l ), then we can remove variable x j from the network by replacing f i (x j , x k , x l ) with the new function f i (f j (x 1 , . . . , x n ), x k , x l ). By repeating this process, one obtains a reduced network that in practice is much smaller than the original network. The stopping criteria for reduction methods is that variables can be removed only if the steady state information is preserved. The steady states of the reduced network are in algorithmic one-to-one correspondence with the steady states of the original network. More precisely, the reduction algorithm decomposes a large system into a smaller system and a set of equations in triangular form, so that when the steady states of the reduced system are found, the steady states of the original systems can be found simply by backwards substitution. That is, the existence of the one-to-one correspondence is not just theoretical.
Another method uses the fact that one can represent a Boolean function as a polynomial function in the variables x 1 , . . . , x n , with coefficients in the finite number system K = {0, 1} (with integer addition and multiplication modulo 2). The problem of finding the steady states of a Boolean network in n variables, as above, can then be reformulated as the problem of finding the solutions to a system of polynomial equations [29][30][31]. Then, the roots of the system of polynomial equations is encoded by the set {p 1 , . . . , p n }. Using tools from computational algebra it is possible to find another set that has the same roots (a Gröbner basis), such that it is possible to do a generalized version of Gaussian elimination. These computations can be done using several different software packages developed for this purpose.
A graph-theoretic method, Minimal Feedback Vertex Sets, consists of finding a set of vertices in the dependency graph of the network that "generate" all steady states. More precisely, one finds a set S ⊂ {1, . . . , n} such that knowing the coordinates x i , for all i ∈ S, of a steady state completely and algorithmically determines the other coordinates of the steady state. It turns out that so-called feedback vertex sets have this property. In practice, by finding a minimal feedback vertex set, one reduces the problem from checking 2 n states to the problem of checking 2 |S| states, where |S| is typically much smaller than n [23]. A feedback vertex set can be found by removing vertices from the graph until the graph has no directed cycles. A minimal feedback vertex set can be found by finding the smallest number of vertices that we need to remove from the graph so that it does not have directed cycles.
SAT methods have also been used for the purpose of finding steady states of Boolean networks, which are used to determine whether a Boolean expression in several variables has a variable assignment that makes the expression true; see [32][33][34][35]. In essence, the system of Boolean equations, f i = x i , is rewritten as a single equation G(x) = 1, and then the problem of finding the steady states becomes the problem of finding when the equation G(x) = 1 is satisfied. For example, Melkman, Tamura, and Akutsu [33,35] used SAT algorithms to find steady states of AND/OR Boolean networks, i.e., Boolean networks in which the f i contain only the AND and OR operators, with a time complexity of O(1.587 n ) (where n is the number of nodes). Dubrova and Teslenko [34] also developed a SATbased algorithm to find all attractors of a Boolean network with very good performance characteristics. The methodology was tested on Boolean networks with sizes ranging from 12 to 52. It was also tested using random networks with up to 7000 nodes and average in-degree less than 2. For a fixed in-degree of 2 the maximum size networks tested have 2000 nodes.
Integer programming-based method have also been used to find the steady states of Boolean networks, Tamura, Hayashida, and Akutsu [36]. In essence, the system of Boolean equations is rewritten as a set of inequalities Ax ≤ b, x ≥ 0 and the goal is to maximize a linear function of the form c T x.
Strategic Sampling, (Zhang, Hayashida, Akutsu, Ching, and Ng, [37]) is a recursive search approach to identify all steady states of a random Boolean network with maximum in-degree 2, with an average time complexity of O(1.19 n ) (where n is the number of nodes). The idea is that the equations are solved recursively: First one considers the solutions of the equation f 1 = x 1 . Since the f i 's depend on few variables in practice, one only has to keep http://www.biomedcentral.com/1471-2105/15/221 track of the variables that appear in f 1 . Then, one finds the solutions of f 2 = x 2 that are compatible with the solutions previously found. The process continues until one finds solutions of all equations. In the worst case, however, algorithm complexity can be O(n2 n ) [31].
Finally, the problem of finding attractors has also been studied by using Binary Decision Diagrams (BDD) [38][39][40][41]. The idea is to represent the Boolean functions as a directed graph that efficiently encodes the functions by allowing fast evaluation. Then, by combining the BDD representation of all the Boolean functions, the problem of finding steady states becomes a search problem in the larger BDD. Many of these methods were tested on some biologically relevant networks with fewer than 100 nodes.
In this paper, we present a new method for computing steady states of a Boolean network, combining a graph theoretic reduction/transformation method with an approach using computational algebra. We show that the method performs favorably on some types of networks in comparison with other methods on a collection of benchmark networks, consisting of both published models and random networks with certain properties, namely Kauffman networks and networks whose in-degree distribution satisfies a power law.
Methods
The method we propose for steady state analysis is a combination of network reduction/transformation and computational algebra (see Figure 1). The reduction technique we use is based on results in [42,43]. In [42] it was shown that any Boolean network can be "transformed" into an AND-NOT network, namely a network whose Boolean functions are all of the form y 1 ∧ y 2 ∧ . . ., where y i ∈ {x i , ¬x i }. The AND-NOT network has the property that its steady states are in one-to-one correspondence with the steady states of the original network. Furthermore, the one-to-one correspondence between steady states is algorithmic. In [43], the authors proposed a method to reduce an AND-NOT network to another, smaller AND-NOT network in polynomial time, in such a way that the steady states of the original and the reduced network are in one-to-one correspondence, in a constructive way. This reduction algorithm looks for motifs (e.g. feed-forward loops) in the wiring diagram and removes nodes in such motifs; the reduction stops when there are no more motifs to be reduced (attempting to do further reductions would destroy the 1-1 correspondence of steady states). Once the reduced network is constructed, one can compute its steady states by converting the Boolean functions into polynomial functions and then solving a system of polynomial equations, as explained above. The computational algebra technique is based on [29,30]. The idea is that by computing a Gröbner basis (a special set of polynomials with the same roots as the original equations), it is possible to find the roots of the system of polynomial equations using a generalized version of Gaussian elimination.
The correspondence between Boolean and polynomial functions is accomplished via the "dictionary" x ∧ y ↔ x · y, x ∨ y ↔ x + y + xy, ¬x ↔ x + 1. The correspondence is unique if we limit the degree with which each variable appears in the polynomial function to 1, since any function K n −→ K can be represented uniquely as a polynomial function that is square-free, that is, in which every variable appears with exponent 1.
The algorithm is summarized in the following pseudocode and a more detailed description follows. The source code can be found at github.com/PlantSimLab/ ADAM.
Algorithm 1 Steady state computation.
Input: Compute p =polynomial representation of h 6. Solve the equations h i = x i , using computational algebra (i.e. compute the Gröbner basis to perform a generalized version of Gaussian elimination) and let L = {s 1 , . . . , s r } be the set of solutions 7. Use backtracking to compute the steady states of g: L = {s 1 , . . . , s r } 8. Project each s j to its first n coordinates to obtain s j The input of our algorithm is an n−dimensional Boolean network f = (f 1 , . . . , f n ). In Step 1, we use the formulas from [42] to compute an AND-NOT network g = (g 1 , . . . , g m ), with m ≥ n, which has the same number of steady states as f . The idea is to introduce variables to rewrite the Boolean operations using only the operators AND and NOT; for example, can be written as f 1 = ¬x 2 ∧ ¬x 5 , where f 5 = ¬x 3 ∧ ¬x 4 . Furthermore, the steady states of f are given by projecting the steady states of g to their first n coordinates. In Step 2, we simply consider the wiring diagram of g, which is a signed directed graph that encodes which variable depends on which others and whether the interactions are activating or inhibiting. In Step 3, we use the algorithm from [43] to reduce the wiring diagram of g to another signed directed graph, W . Then, in Step 4, we construct the AND-NOT network that has W as its wiring diagram, h = (h 1 , . . . , h l ); the steady states of g can be computed from the steady states of h by backtracking [43]. In Step http://www.biomedcentral.com/1471-2105/15/221 5, we compute the polynomial representation of h. This is done by replacing ¬x i with 1 + x i , and x i ∧ x j with x i x j , as explained earlier. In Step 6 we solve the system of polynomial equations h i = x i , i = 1, . . . , l; this is done using tools from computational algebra as done in [29,30]. The solutions of the system, L = {s 1 , . . . , s r }, will also be solutions of h. In Step 7, we use backtracking to compute the steady states of g, L = {s 1 , . . . , s r }. And finally, in Step 8, we project each s j to its first n coordinates and obtain the steady states of f (See Additional file 1 for an example and Additional files 2 and 3 for the code version used for this publication).
Results and discussion
We first tested the software implementation of our algorithm on 1,000,000 Boolean networks with 50 nodes each, for which we also computed all steady states by a custommade algorithm based on minimal feedback vertex sets. For each graph we found the minimal number of vertices that had to be removed so that the graph had no directed cycles; call this set S. Then, for each element in {0, 1} |S| , the values of the other variables are completely determined. This gave us 2 |S| candidates for steady states which we then checked by exhaustive search. In all cases our algorithm computed correctly all steady states. We are therefore confident that our implementation is errorfree. This extends to the relevant functionalities of other software packages we used for intermediate computations (Macaulay2 [44], Boost Library [45], BoolStuff Library [46]).
Then we used over 100,000 Boolean networks to benchmark our method against others. The methods we used for comparison were those with published benchmarks or those for which the code was readily available. As we will see later, for Kauffman networks with K = 2, the timing of our method grows linearly with the number of nodes; thus, it was not necessary to include in our benchmarks methods that were reported to grow exponentially for such networks (e.g. [34,37]). We selected three methods with good computational efficiency for K = 2: Zanudo and Albert [26]; Devloo, Hansen, and Labbé [32]; and Tamura, Hayashida, and Akutsu [36]. For the most recent algorithm, Zañudo and Albert [26] use a method that identifies motifs (subsets of nodes) that stabilize in one or a small number of states. The steady states from these motifs are used to reduce the network to find the attractors. It is important to mention that this method can find not only the steady states of Boolean networks, but also information about all the attractors of the network, which our method is not currently designed to do.
We used random biologically meaningful Boolean networks [47][48][49] and published networks [13][14][15][16][17][18][19][20][21][22] (the Boolean representation of these models was obtained from The Cell Collective [50]). The results for Zañudo and Albert and our algorithm were generated by us and the other results are reported from published benchmarks [32,36]. The computations for our algorithm and that of [26] were done on a 3.4 GHz Linux machine. The computations for Tamura's and Devloo's algorithms were done on a Linux system with 3GHz and a Sun SPARC Ultra 10 machine, respectively, as reported in [32,36]. Considering that the different computers described above have processors with similar speed and that the computations were done in a single processor, the use of results from different machines will not affect the main conclusions of our comparison. Moreover, some methods did not have reported results for certain network sizes; in that case, we computed an approximate timing using interpolation/extrapolation of the reported values; we linear and exponential fits for the timings that grew linearly and exponentially, respectively.
First, we compare the performance of different methods on Kauffman networks with connectivity K = 2 and K = 3. For our and the Zañudo algorithm, each reported number is the average or standard deviation of 1000 Boolean networks. In Table 1 we report the timings for Kauffman networks with K = 2. We can see that the algorithm in [36] performs best, followed by our algorithm. Note http://www.biomedcentral.com/1471-2105/15/221 that all timings grow linearly with the number of nodes. As mentioned in [36], the good results with Tamura's algorithm may be due to the fact that the authors optimized the computations for Boolean functions that have 2 inputs. The results for Kauffman networks with K = 3 in Table 2, however, show that our method performs better by an order of magnitude. These results show that, while our algorithm is not optimized for very low in-degree networks, it is more scalable for networks with higher connectivity. Not all molecular networks have properties similar to Kauffman networks, but can exhibit power law properties for their degree distribution. Thus, we supplemented the results from Tables 1 and 2 with benchmark networks whose connectivity follows a power law distribution [51]. We considered power-law networks with average connectivity k = 2 and k = 3. That is, the average number of edges is the same, but the connectivity distribution is more biologically realistic. There were no bechmarks for these types of networks for Tamura's and Devloo's algorithms, so we only report Zañudo's and our algorithm. In Table 3, we see that our algorithm can handle networks with k = 2 with up to 1000 nodes in under 7 seconds on average. It is important to mention that these timings differ considerably from the timings for K = 2 (Table 1). Table 4 shows the results for networks with connectivity k = 3. Not surprisingly, increasing the average connectivity has a dramatic effect on the size of networks that can be studied; for example, the network sizes that can be dealt with in under 7 seconds decreases from 1000 to about 140 when we increase k from 2 to 3. Further increasing the average connectivity will have a much more dramatic effect.
Finally, our results on published networks are shown in Table 5, sorted by average connectivity. Since all models have external parameters corresponding to environmental conditions (i.e. we have one BN for each parameter set), we sampled the parameter space and computed the average timing of each algorithm. The numbers we report are the averages of 10000 simulations for each model. As expected, for all networks with small average connectivity (less than 3) our algorithm performed very well and finished in less than half a second, consistent with the timings from Tables 3 and 4. Four models have average connectivity greater than 3 and our algorithm performed very well on three of them. However, for the largest network (225 nodes and k = 5.16), there were parameter sets (51% of the sampled parameters) which could not be analyzed. DF=did not finish in a day. *=49% of simulations reported, 51% of simulations were stopped because they did not finish in a day or had a large memory consumption.
The computational complexity of our algorithm depends on the type of networks used as well as the connectivity. The algorithm seems to run in polynomial time for Kauffman networks with K = 2 (Table 1), but slower for power-law networks with the same connectivity (Table 3). For other types of networks the complexity is much harder to infer, but Table 2 suggests that the complexity is exponential. Also, the complexity of the mathematical tools we use is not well understood in the context of Boolean models. For example, the algebraic step of our algorithm can be doubly exponential, but it has been shown to work much faster in practice and, as our work shows, it runs much faster for sparse Boolean models.
Conclusions
The capability to analyze the attractors of discrete dynamic models of biological networks is a key technology in any systems biology toolkit that incorporates this popular type of model. This capability needs to include steady state analysis as well as the determination of periodic points of larger periods. And it needs to apply to models that allow an arbitrary (finite) number of states for its variables, such as logical models. In this paper, we have focused on Boolean networks as the model type most commonly used currently. And we have focused only on steady state analysis, at the exclusion of periodic limit cycles. As is the case in many situations, algorithms available for this purpose, some of which we used here for comparison, perform well on some types of models and not so well on others. For instance, for Kaufmann networks with connectivity 2, the method in [36] outperforms all other methods, including ours. The method in [26] is generally slower than our method in computing steady states, but has the added capability that it also finds limit cycles of larger lengths, which our method is not currently equipped to do.
We have used three types of networks for benchmarking: Kauffman networks, power law networks, and published networks. Kauffman networks are commonly used for this purpose, but they don't capture all properties of molecular networks, which include a power law distribution of node connectivities. Our analysis of published networks shows that some of them have high average connectivity, not generally considered in theoretical studies. These pose serious challenges to computational methods, as we demonstrate. As more large published networks become available, they will represent the most important suite of benchmark models to be used, in our opinion.
We believe that this study also holds another important lesson. Our method is a combination of two methods, neither one of which performs particularly well when applied on its own (see Additional file 1). In combination, however, they are quite powerful: model reduction plus polynomial algebra. This might point towards a general strategy for other algorithms of this type. Nonetheless, as our calculations show, the challenge of finding steady states is far from solved in general, even for existing published models. Thus, much work remains to be done. | 5,595.8 | 2014-06-26T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Structural-dynamical transition in the Wahnstr\"om mixture
In trajectory space, dynamical heterogeneities in glass-forming liquids correspond to the emergence of a dynamical phase transition between an active phase poor in local structure and an inactive phase which is rich in local structure. We support this scenario with the study of a model additive mixture of Lennard-Jones particles, quantifying how the choice of the relevant structural and dynamical observable affects the transition in trajectory space. We find that the low mobility, structure-rich phase is dominated by icosahedral order. Applying a nonequilibrium rheological protocol, we connect local order to the emergence of mechanical rigidity.
Introduction
Supercooled liquids show emergent dynamical and structural heterogeneities when cooled towards the glass transition [1,2,3,4]. The relation between slow dynamics and some form of short-range (local) order, however, is still poorly understood. On the one hand, the efficient filling of space with atoms of different sizes requires a certain degree of topological order [5] and the dynamic slowdown can rigorously be linked to emerging static lengthscales [6]; on the other hand, computer simulations have shown that the correlation between local structural features and slow dynamics is strongly model dependent [7,8]. In experiments, colloidal [9,10,11,12] and metallic glasses [13,14,15] provide evidence for emerging local order as well as, on the contrary, support for purely dynamical scenarios where local structure has limited influence on the dynamics [16,17]. Historically, the study of local structure with complex higher order metrics has played a decisive role in understanding amorphous systems and packings since the times of Bernal and Finney [18,19,20,21] and has contributed to a geometric and thermodynamic interpretation of the emerging frustration in glasses [22,23]. However, alternative approaches which disregard structural features and focus on dynamical [24] or vibrational/elastic aspects [25] of the problem or relaxation have been proposed, in striking contrast with established thermodynamic theories of the glass transition [26,27]. It is therefore important to understand what drives strong or weak coupling between structure and dynamics in different supercooled liquids.
A major difficulty encountered in the investigation of the role of structural changes in dynamic arrest is the fact that particle-resolved studies (and in particular conventional computer simulations) can only access a limited dynamic range of slow relaxation. Typically, this encompasses 4 to 5 orders of magnitude in time, meaning that such studies mainly capture the onset of the mechanisms that characterise the deeply supercooled and glassy regimes (when the relaxation times are 10 to 20 orders of magnitude larger with respect to the liquid regime) [28]. Therefore, alternative sampling routes to explore the deeply supercooled regime from a structural and/or dynamical point of view have been developed in recent years, including pinning fields [29,30,31], particle-swap Monte-Carlo on particular models [32,33] or biased dynamical ensembles [24,34,35,36].
A potential route to study dynamical and structural heterogeneities in glassformers is provided by efficient sampling methods in trajectory space, where novel dynamical phase transitions have been uncovered and connected to the dynamical slowdown observed in supercooled liquids [24]. The study of trajectory space in glassy systems has been originally promoted in the context of the dynamical facilitation theory of slow dynamics [37,38,34]. Within this framework, on-lattice idealised models [37,39,40,41] as well as more realistic models of structural glasses [34,35,42,43,44] have been shown to undergo a first-order dynamical phase transition in trajectory space between an active phase with high mobility (fast relaxation) and an inactive phase with low mobility (slow dynamics).
However, this purely dynamical picture has been more recently complemented by a structural aspect: active/inactive phases correspond to trajectories particularly poor/rich in local structure [45,46] and can be seen as representative of the low temperature state of the supercooled liquid [36]. Dynamical transitions are therefore understood to correspond to a structural-dynamical transitions, where the slowdown of the dynamics becomes intimately related to the growth of short-range-order domains.
Still, much of the evidence for structural-dynamical phase transitions in atomistic models of glassformers up to now is restricted to only two model systems (the Kob-Andersen mixture [34,42,45,35,36], a popular Lennard-Jones mixture with weak structural dynamical correlations [47], and the moderately polydisperse hard spheres [46]). In order to understand how system-dependent this picture is, it is important to extend the scope of these studies to other model systems.
In the present numerical work, we consider the case of a popular atomistic glassformer originally introduced by Göran Wahnström as a simple model for supercooled liquids [48]. It consists in a binary mixture of Lennard-Jones particles whose parametrization has been found to provide a good model of fragile glasses, with a particularly strong coupling between its slow dynamics and the emergence of local geometrical motifs [47,49,50]. These are typically icosahedra, a very common arrangement in simple models of glass-forming liquids composed of spherically symmetric particles.
The article is structured as follows: in Section 2 we present the model studied and the importance sampling technique employed for trajectory sampling; in Section 3 we introduce the relevant observables and the phase transitions in trajectory space that can be probed through the dynamical s-ensemble and the structural-dynamical µ-ensemble; in Section 4 we show that it is possible to connect the structural-dynamical transition to the emergence of rigidity in the glass, as the icosahedra-rich phase presents distinctive rheological properties; finally, we conclude the article with an overview of the results and their implications.
Model and sampling technique 2.1 The Wahnström binary mixture
We study the Wahnström binary mixture of Lennard-Jones particles. The model is a 50:50 mixture of large (A) and small (B) particles with parameters σ A /σ B = 1.2, m A /m B = 2, A / B = 1 and cutoff r cut = 2.5σ at number density ρ = 1.296. Lengths, temperature and times are reported in units of σ A , A /k B , and (m A σ 2 A / A ) 1/2 , respectively. The mixing rule for the interaction is additive, i.e. it follows the Lorentz-Berthelot rules This atomistic supercooled liquid has been extensively studied since its original design [48]. The model reproduces to a good degree the relaxation behaviour of socalled fragile glasses, as its structural relaxation time τ α (as measured from the decay of the intermediate scattering function [49]) undergoes a non-Arrhenius (superexponential) increase when the system is cooled below the crossover or onset temperature T onset = 1.0 [51,49]. Furthermore, as the temperature is decreased, the disordered structure of the liquid changes with the formation of five-fold symmetric domains and in particular of local particle motifs with icosahedral coordination [49,51,52] which contribute to the emergence of strong frustration [53]. Equilibration of the liquid in conventional simulations around and below the so-called mode-coupling temperature T M C = 0.56 is computationally expensive, making the low temperature, activated regime (crucial for testing theoretical predictions [2]) unreachable. Divergence of the relaxation times, if modelled by the super-Arrhenius Vogel-Fulcher-Tamman law ln τ α /τ ∞ = DT 0 /(T − T 0 ), is predicted at temperature T 0 ≈ 0.46. For reference, we report in Fig. 1(a) the temperature dependence of both the structural and dynamical properties of the model.
Beyond local structural order, the model has been shown to crystallise, under suitable conditions, into a MgZn 2 Laves phase formed by icosahedral motifs and socalled Frank-Kasper bonds [54] but in the supercooled liquid regime the contribution of such a large unit cell to the increased degree of local order has been shown to be limited [49].
Replica exchange in trajectory space
As in previous work [35,36,45,46], in order to sample large fluctuations of the time-integrated observables we employ an importance sampling technique that extends equilibrium replica exchange methods to ensembles of trajectories.
We sample space and time extensive observables O x on systems of N = 512 evolving for a finite observation time t obs . A generic time-integrated observable O x is defined as a double sum over the number of particles and a discretization of time into L intervals for a total of K = L+1 points: where f t,i is a specific microscopic observable (e.g. a single particle indicator function). The goal of the importance sampling technique is to efficiently measure the probability distribution P (O x ; T s ) for a given value of the thermostat temperature T s . In particular, we are interested in the large deviations from the typical value of the probability distribution. In order to calculate such rare fluctuations in trajectory space, new trajectories are generated through shifting and shooting moves (inspired by Transition Path Sampling [55]). Hence, the algorithm performs a random walk in trajectory space with acceptance probability determined by a Metropolis rule which ensures detailed balance and where are the values of a biasing pseudopotential which is a function of the extensive observable O x computed over old and new trajectories. We choose Ψ to have a parabolic form where O j 0 is the reference (typical) value associated to replica j. Depending on the observable, we take a number of distinct replicas varying from 8 to 16, with equally spaced values of O j and values for the harmonic constant ω that ensure good mixing of neighboring replicas. Mixing is also enhanced by 2500 swap attempts among all (not-necessarily neighboring) replicas.
The Monte-Carlo algorithm in trajectory space simulation starts with an equilibrated trajectory assigned to all replicas at temperature T s . A new trajectory is then generated via Transition Path Sampling moves (1/4 shifting, 3/4 shooting [55]) independently for every replica, accepted or rejected according to Eq. 3. Swap attempts between different replicas are then performed, completing the cycle. During the sampling, we employ a velocity-Verlet integrator with timestep dt = 0.005 to resolve the equation of motion and the Andersen thermostat to keep the temperature constant.
We perform several tens of thousands of cycles and collect statistics and block averages from three to eight non-overlapping blocks of data whose size ranges between 1.2 · 10 4 and 3 · 10 4 trajectories, which deliver estimates for the averages and standard errors. Crucially, depending on the sampling temperature T s , correlations may be very long-lived and the number of Monte-Carlo cycles spent during the equilibration in trajectory space can be very large (∼ 6 · 10 4 ) Monte-Carlo sweeps), as shown in Fig. 2. We then discard trajectories produced during equilibration and collect data from the converged, late steps of the Monte-Carlo.
From the collected ensemble of trajectories, we calculate distributions and expectation values using the Multistate Bennett Acceptance ratio (MBAR) method extended to ensembles of trajectories [57]. This technique allows us to obtain the unbiased probability distribution P (O x ; T s ) and expectation values for any quantity A as where y is the conjugated field to the observable O x and indicate averages according to the unbiased distribution P (O x ; T s ). Notice that the denominator in Eq. 5 corresponds to the moment generating function of the probability distribution and is a generalization of the partition sum to trajectory spaces. In this work, we focus on two particular ensembles: ) γ ] −1 fit for τα and n respectively, with γ = 6.6, T 1/2 = 0.47 from Ref. [56]. Vertical lines also indicate the location of relevant temperatures in the Wahnström model: the onset of slow dynamics Tonset, the mode-coupling transition temperature TMC and the VFT temperature T0. (b) Threedimensional rendering of the four local motifs considered in this work: the icosahedron, the defective icosahedron (10B) and two nine-particle motifs unrelated to five-fold symmetry.
For the icosahedron and 10B, a pentagonal ring of particles is highlighted in gold. and the observable O x is the time-integrated mobility of the particles; and the µ-ensemble, where y = µ and the relevant observable is a time-integral over the number of particles in a particular local motif (here the icosahedron).
In the presence of transitions in trajectory space, we expect to measure probability distributions for the timeintegrated observables that are not Gaussian and display long, eventually exponential tails. For variables that follow a Gaussian probability distribution, the kurtosis the ratio between the fourth central moment and the squared second moment) has value κ 4 = 3. Therefore, the excess kurtosis κ exc = κ 4 −3 is often employed as a benchmark for the deviations from a Gaussian distribution. So-called leptokurtic (fat-tailed) distributions correspond to positive κ exc while platykurtic (thin-tailed) distributions correspond to negative κ exc . Table 1. Expectation values for the mean, the variance and the excess kurtosis κexc for several observables as measured via trajectory sampling for different values of the trajectory length at temperature Ts = 0.67. 3 Dynamical and structural phase transitions
Observables
We analyse the emergence of phase transitions in trajectory space by monitoring a variety of observables. We perform importance sampling in trajectory space according to time-integrated observables that are either dynamical (such as the mobility excitations) or structural (a selection of geometrically different structural motifs, see Fig. 1(b)). Furthermore, in order to relate the trajectory-space picture back to the thermodynamic picture, we also monitor the inherent state energy of the selected configurations, whose statistics in the trajectory ensemble has been proven to closely reproduce the equilibrium properties.
Structures are detected employing the Topological Cluster Classification algorithm and we refer to Reference [58] for a more detailed discussion of the geometries considered here.
In particular, for the time-integrated quantities we have: number of excitations: To quantify the number of mobile particles, we compute the observable where is the single particle displacement, Θ is the Heaviside function and a is a scale for cage motion, here set to a = 0.3σ large . number of particles in icosahedral motifs: Given the important role of icosahedral order in the Wahnström mixture, we track this specific local motif along the trajectories. Additionally, we perform importance sampling according to the number of icosahedra. The corresponding time-integrated extensive structural-dynamical observable is then where h ico i is an indicator function, which takes value 1 if a particle is found in an icosahedral environment or 0 if it is not. With a certain abuse of language, we will interchangeably refer to the population of icosahedra or the population of particles in icosahedral motifs when considering the intensive quantity O ico /N K.
number of particles in 9A motifs: We compute O 9A performing the summation as in Eq.7, but with a different indicator function h 9A i . In this case, we consider the 9A structure of the Topological Cluster Classification, which is composed of six particles combined to form three four-folded rings, surrounded by three further spindle particles on each quadrangular facet (forming a tricapped trigonal prism). According to previous studies [49], we do not expect this motif to be a good predictor of structural-dynamical heterogeneity for the Wahnström mixture. However, in the case of other simple liquids dominated by five-fold symmetric local order, such as moderately polydisperse hardsphere, 9A motifs have been shown to be complementary to local icosahedral order, becoming less frequent when the packing fraction (and the population of icosahedra) increase [59].
number of particles in BCC motifs: As a further test, we compute the time-integrated observable O BCC considering a nine particle structure that (weakly) correlates with body centered cubic local order and anticorrelates strongly with icosahedral and five-fold symmetric order. number of particles in five-fold symmetric motifs: Finally, to track five-fold symmetric local order that is not fully icosahedral, we consider the defective icosahedron structure 10B, composed of three interlaced pentagonal rings. This structure is characteristic of hardsphere mixtures, and has been shown both in simulations and experiments to drive a clear structuraldynamical phase transition [46].
We also measure a static observable, i.e. not timeintegrated. This is the inherent state energy (ISE) of configurations located at the centre of each trajectory, chosen in order to avoid finite-time effects on the statistics [60]. Inherent state energies are obtained minimising the potential energy of the system for a maximum of 1000 iterations of the FIRE algorithm [61].
s-ensemble
First, we consider the response of the system to a dynamical bias. This means that we collect trajectories according the observable O exc , i.e. the time-integrated number of mobility excitations. We employ the large deviation formalism and notation, and we define s as the dynamical conjugate field related to the excitations, so that positive/negative values of s correspond to atypically small/large densities of mobility excitations, hence the name of s-ensemble [37]. As we sample the mobility large deviations, we track all the other dynamical and static order parameters.
In Fig. 3 and in Table 1 we summarise our findings for a particular thermostat temperature T s = 0.67T onset ≈ 1.2T MC , where T MC and T onset indicate the transition temperature predicted by the power-law fit to the relaxation of mode-coupling theory and the onset of the two-step relaxation dynamics respectively. In Fig. 3 we compare the (scaled) logarithm of the probability distributions (i.e. the rate function) of the considered observables O x for increasing values of the trajectory length t obs = 1.9, 3.8, 5.7, 9.5τ α (K = 20, 40, 60, 100). At the considered temperature, we expect to observe deviations from Gaussian fluctuations in the tails (i.e. large deviations) of the probability distributions. With this comparative analysis, we want to stress that the choice of the observable is non-trivial and different observables present characteristic features.
First we notice that the population of excitations (which is the reaction coordinate along which we perform importance sampling) shows mostly Gaussian fluctuations around the mean value O x /KN = 0.096(2) for all the sampled trajectory lengths. However, the variance computed at different trajectory lengths appears to slowly converge to smaller values, with the tails of the probability distributions gradually narrowing. This indicates that very short trajectories of length t obs = 1.9, K = 20 are affected by finite size effects that enhance the observation of large fluctuations.
Higher order moments converge even more slowly but point to the emergence of non-Gaussian features. For example, the excess kurtosis is negative for short trajectories and becomes mildly positive for the longest trajectories K = 100. This underlines that even longer trajectories are needed to obtain more marked signatures of a dynamical phase transition in terms of population of excitations at the relatively high temperature considered here, with an non-negligible increase of the computational cost. Notice that it is only in the long time limit that a large deviation principle holds and rate functions converge [62], and therefore it is only in this limit that a formal phase transition in trajectory space is expected. measured on the same trajectories produced in the sensemble? In the following, we analyse them one by one.
For the time-integrated population of particles in icosahedral motifs, we observe that average values do not depend on K; however, higher order moments show a dependence on the trajectory length. The values of the excess kurtosis κ exc show a marked increase in non-Gaussian features of the trajectory probability distribution, as confirmed by direct inspection of the probability distribution. The excess kurtosis is positive (i.e. fat-tails) and goes approximately from 0.48 to 2.0 when the trajectory length increases from K=20 to K=100. For a comparison, notice that for a common leptokurtic distribution of positive random variables such as the Rayleigh distribution, the excess kurtosis is κ exc = −(6π 2 − 24π + 16)/(4 − π) 2 hence κ exc ≈ 0.24 showing that the distribution for the icosahedra is even more leptokurtic. Compared to the response of the mobility excitations, the time-integrated population of icosahedra provide a much stronger signature for a dynamical phase transition. In particular we observe that populations of icosahedra of order 0.2 are only two orders of magnitude less likely than the converged typical value O ico /N K = 0.09, with a strong exponential tail in the probability distribution. Non-Gaussian fluctuations are therefore stronger when tracking the time-integrated population of icosahedra than in the case of excitations.
These results are consistent with previous literature [49,53,47] where the role of icosahedral motifs as locally favoured structures (LFS) of the Wahnström mixture as been discussed and their strong correlation with dynamical heterogeneities measured. They also confirm the scenario originally suggested for another popular glassformer (the Kob-Andersen mixture), whereby trajectories sampled according to time-integrals of the LFS delivered stronger signatures for a dynamical transition that mobility excitations [35,36].
An icosahedral motif is detected in the TCC via the combination of seven five-fold symmetric rings [58], and the statistics of the number of icosahedra appears to strongly indicate the presence of non-Gaussian fluctuations related to a structural-dynamical phase transition in the system. How does such transition change if we take into account a less restrictive observable that still identifies five-fold symmetry? To answer this question, we consider the so-called defective icosahedron structure 10B (see Sec.3.1 above). We first notice that the average population of particles in 10B per trajectory is much larger than the population of icosahedra (0.59 vs 0.089) and the variance again slowly converges with increasing t obs . However, The excess kurtosis is much smaller in absolute values, changing sign from negative towards positive values (leptokurtic distributions) as the trajectory length is increased. This matches the dynamical notion of locally favoured structures: icosahedra are not only the minimum energy structure for the Wahnström interaction, they also are the individual motif (among the several options of the Topological Cluster Classification) that displays the longest persistence time [49]. The indicators for a structural-dynamical transition in terms of 10B motifs are much weaker than in the case of icosahedral order. Yet, they confirm that the inactive (low population of excitations) regime is dominated by long-lived five-fold symmetric motifs.
Is it possible to detect signs of the transition in other structural observables? We consider the two exemplary cases of the 9A and BCC9 structures. These motifs both correspond to arrangements of 9 particles with different symmetries which are not minimum energy clusters of the potential. The average populations of the two motifs are very different (∼ 0.07 for 9A and 0.74 for BCC9). The 9A probability distribution is well approximated by a Gaussian for all the trajectory lengths considered here, and the corresponding excess kurtosis are (in absolute value) the smallest among all the considered structures. The BCC9 motif, conversely, presents relatively large but negative excess kurtosis, indicating that the tails of the distributions decay more rapidly than in the case of a Gaussian distribution.
For a given trajectory length, we consider the sensemble averages as a function of the field s to highlight correlations and anticorrelations between the observables. With trajectories of length K = 60, we show in Fig. 4 that for negative s 0 we sample trajectories characterised by large densities of excitations (active phase) while for s 0 we have trajectories with low densities of excitations (inactive phase). These correspond respectively to trajectories that are poor and rich in icosahedra. The anticorrelation between mobility and five-fold symmetry is reflected also in the negative correlation between mobility and 10B structures. On the other hand, mobility positively correlates with the remaining motifs (9A and BCC9).
Finally, we consider how the active/inactive transition is translated in terms of the energy landscape of the system. To do so we also track the inherent state energy (ISE) of the central configuration of every single trajectory and plot the corresponding probability distribution. This (as expected) does not show dependence in the trajectory length and it is well reproduced by a Gaussian fit, see Fig. 3. Normal fluctuations are confirmed by the analysis of the respective excess kurtosis, which are by far the smallest measured throughout our analysis (as small as κ exc = −0.003). In Fig. 4, we do observe a transition to trajectories whose central configurations display typically much more negative energies with respect to the equilibrium typical value at s = 0. This is consistent with the finding that in a different binary mixture (Kob-Andersen) low mobility is a good predictor of low inherent state energies [60].
µ-ensemble
The direct route to access structural-dynamical phase transitions is to sample trajectories according to a relevant time-integrated structural observable. From the previous discussion, and in particular from the magnitude of the non-Gaussian fluctuations as measured by the excess kurtosis, it is evident that icosahedral motifs are well suited to this purpose.
Therefore we perform additional trajectory sampling according to the time-integrated number of icosahedral motifs. As in the case of the s-ensemble, we sample trajectories following the replica exchange scheme, with quadratic pseudo-potentials for the replicas with suitable spring constant ω.
In the new ensemble of trajectories, the conjugate field related to the number of particles in icosahedral motifs is termed µ. Consistently with previous works in the literature [35,36], averages of any arbitrary quantity in the µ-ensemble are defined as In the previous section, we have shown that in the s-ensemble an emergent active/inactive transition is mirrored by a rapid increase of the population of particles in icosahedral motifs. In the µ-ensemble we sample such structural transition directly. In Fig. 5(a,b) we plot the µdependence of the average mobility O exc µ /N K and the average population of icosahedra O ico µ /N K for several thermostat temperatures T s , from T s = 0.72 to T s = 0.65. At different temperatures, we perform simulations of different trajectory lengths t obs . Since the relevant timescale for the dynamics is the structural relaxation time τ α , we plot the first moments as a function of the nondimensional scaled conjugate field µτ obs := µt obs /τ α . Just below the onset temperature we observe signs of a phase transition at large µτ obs between trajectories poor in icosahedra with high mobility and trajectories rich in icosahedra with low mobility. As we reduce the thermostat temperature, the transition moves to values closer to µ = 0. Through a spline fit and the estimate of the maximum in the derivative, we obtain the value µ * τ obs at which the transition takes place. The very small values of µ * τ obs at relatively high temperatures compared to T 0 obtained from the Vogel-Fulcher-Tammann fit or the mode-coupling T M C temperatures suggest that trajectories with an exceptionally high population of icosahedra should be highly likely, and signatures of bi-modality in the probability distribution of the time-integrated observables should become accessible even to conventional simulations as the temperature is reduced.
In Fig. 5 (c,d) we plot such probability distributions both for µ = 0 and the critical value µ = µτ * obs , shifting and rescaling the abscissa axis by the mean and the standard deviation. We observe that, as temperature decreases, the structure-rich tail of the probability distribu- tions raises of several orders of magnitude. Signs of bimodality are weak at low temperatures, due to the relatively short observation time t obs , but clearer at higher temperatures. Moreover, if we evaluate the probability distributions at coexistence µ = µ * , Fig.5(d), a peak at high population of structures emerges more clearly. The knowledge of µ * τ obs (T s ) allows us to draw an approximate structural-dynamical phase diagram, Fig. 6, identifying the locus of points where the transition from icosahedra-poor to icosahedra-rich trajectories occurs. For the considered temperatures, we observe that most of the data points lie on a straight line. An extrapolation of the line to µτ obs = 0 would imply that at temperature T = 0.64 coexistence between the two structuraldynamical phases would be observable at µ = 0, i.e. in conventional simulations with no need for importance sampling. Previous numerical studies of the model [63,49] managed to equilibrate the supercooled liquid down to temperature T = 0.58, with no signature of a transition while decreasing the temperature, but with a rapid increase of the population of icosahedra. This excludes the possibility of a transition at µ = 0 for at least T > 0.58. As discussed in [36], several alternative scenarios can be obtained with different extrapolations at low temperatures, including ones where the transition asymptotically reaches µ = 0 only in the T → 0 limit [64]. Here we notice that as we reduce the temperature, the critical field µ * τ obs is reduced by progressively smaller amounts for the successive temperatures. Lower temperature sampling is partly hindered by the long convergence times of the Monte-Carlo in trajectory space, see Fig. 2.
In the icosahedra-rich regime, approximately 50% of the particles can be found in a local icosahedral environment. However, a complex unit cell formed by several icosahedra and Frank-Kasper bonds has been shown to drive the system towards crystallisation [54]. We check this possibility by monitoring the concentration of Frank-Kasper bonds, here defined as pairs of large A particles surrounded by six common B particles. In Fig. 7 we plot the average fraction of particles involved in Frank-Kasper bonds for the increasing reference concentration of icosahedra O j ico in the replica-exchange scheme at an exemplary temperature T s = 0.67. We observe a rapid increase in the number of Frank-Kasper bonds as we consider replicas with very high concentrations of icosahedra. This is consistent with the overall behaviour of the Wahnström supercooled liquid at low temperatures, where Frank-Kasper bonds are very common [54]. However, in order to form a crystalline phase, four-fold Frank-Kasper bonds between the large particle species are necessary. If we focus on the fraction of A particles in four-fold bonds, this increases very mildly across all of the replicas, and stays below 5% in the highest bias replica, excluding crystal formation in the icosahedra-rich phase.
In conclusion, both the s and the µ ensemble calculations provide evidence for an inactive and icosahedra-rich dynamical phase that becomes progressively more likely to be observed for T < T onset . We now study the icosahedral phase more in detail to understand its relation with the emergence of rigidity in the glass. many orders of magnitude. Such a phenomenological transition is accompanied by the emergence of solid behaviour: the glass behaves like a solid, in the sense that it can be probed through rheological measurements, proving a finite elastic response and shear modulus.
Rheological response of the inactive/icosahedra-rich phase
We have shown that as the temperature is decreased, the Wahnström mixture explores more and more frequently trajectories that are exceptionally rich in structure. Moreover, the icosahedra-rich trajectories not only are characterised by low mobility (inactive trajectories) but they also tend to have configurations with low inherent state energies. Is it possible to connect these structural and dynamical changes to the emergence of solidity, i.e. to the rheological response of the system?
We test this idea realising an ensemble of configurations extracted from the trajectories produced in the µ ensemble at the thermostat temperature T s = 0.65. From every umbrella i of the replica-exchange algorithm we extract a population of configurations that are representative of the fluctuations, in trajectory space, around a specific value of the population of icosahedra In particular, we produce a discrete group of 8 sets with 75 initial configurations each at the following typical population of icosahedra n 0 To understand the purely mechanical response of the different sets of configurations we study the linear shearing of the system in the Athermal Quasi-static limit (AQS) [65,66]. Under this protocol, the system is slowly deformed in a chosen direction at a fixed shear rateγ = 0.005 for a small time interval ∆t = 0.005; subsequently, the FIRE energy minimisation algorithm [61] is employed to lead the particles to the closest inherent state. The two steps are repeated until the system reaches a maximum total strain of γ = 0.5 In Fig. 8 and Fig. 9 we plot the response of the system in terms of shear stress σ xy and fraction of particles in icosahedral domains for different typical values of the initial population of icosahedra n 0 i . The first striking result is that the yield stress σ yield = max γ σ xy (γ) strongly depends on n 0 i , and it approximately doubles as the typical population of icosahedra quadruples. The yield strain γ yield (the value of strain at which the maximum stress is reached) is not sensitive to the different starting conditions and is located at approximately γ yield ≈ 0.12 for the chosen strain rate. At the same time, we notice that the shear protocol induces a sudden increase of the population of icosahedra at very early times (very small strains) and a progressive decay of the population which accelerates as the yield strain is reached. The overall, instantaneous increase of the population of icosahedra can be understood as a consequence of the minimisation procedure, which destroys thermal fluctuations present in the initial configurations and promotes the formation of local minimum energy motifs, such as the icosahedron. This implies that the overall population of icosahedra n ico (γ) can be distinguished into two families: the first refers to the subset of particles that are located in icosahedral domains in the original starting configurations produced in the µ ensemble, and it is identified by the boolean vector η µ of length N ; the second refers to all the remaining particles in icosahedral domains, resulting from the AQS protocol, identified by the vector ημ.
As the system is sheared, the number of icosahedra changes very mildly for strains below the yield strain, and only later declines, supporting the idea that the popula-b a Fig. 10. Auto-correlation of the probability for a particle to be in icosahedral domain for the ηµ and the ημ populations (see main text for definition) for different values of the typical initial population of icosahedra.The ημ family is not defined at γ = 0 so, we take the smallest γ as the reference state. The vertical dashed line corresponds to the yield strain.
tion of icosahedra is related to the rigid, elastic response of the system. Having defined two subpopulations of icosahedra, we now quantify their respective differences in the mechanical response.
To do so, we compute separate auto-correlation functions η x (γ)η x (0) for the η µ and the ημ populations, Fig. 10(a,b). We notice that only for large initial populations of icosahedra the autocorrelation functions start close to unity. This shows that the reorganisation induced by the AQS protocol not only forms new icosahedral motifs, but it also initially destroys a fraction of them. The two families of autocorrelation functions show distinctively different behaviours: the icosahedra present in the initial µ-ensemble, Fig. 10(a), show a long plateau that terminates only when the yield strain is attained; the icosahedra generated via AQS, Fig. 10(b), continuously decorrelate at earlier times (smaller strains).
A further confirmation of the different responses between the η µ and the ημ icosahedra is provided by the distribution of the potential energy of the individual particles constituting the two families. In Fig. 11 we plot the overall energy distributions, for the population, collected all along the shearing protocol. We clearly observe that not all icosahedral motifs are energetically the same: particles located in icosahedral motifs purely emerging from energy minimisation have energies that are typically higher than particles identified in icosahedral motifs in the original µ ensemble configurations. The energy gap between the two families widens as we consider initial configurations with larger concentrations of icosahedra: at very high initial concentrations (55%), the η µ subpopulation has energies that are 8% lower than the ημ subpopulation.
Conclusions
Through numerical simulations, we have discussed a third example of structural-dynamical phase transition in a model of atomistic glassformer, after the previously considered cases of the Kob-Andersen mixture [34,42,45,35,36] and the moderately polydisperse hard-spheres [46].
A quantitative analysis of the probability distributions of time-integrated observables demonstrates that wellchosen time-integrated structural motifs can be used to perform efficient importance sampling. In particular, it makes possible to explore structure-rich trajectories (representative of colder temperature states [36]) that are otherwise hard to reach. At the same time, we find confirmation for a sharp (first-order) transition in trajectory space, that becomes measurable below the onset temperature, between a structure-rich and a structure poor dynamical phase, the former becoming more and more likely as the temperature is reduced, similarly to what has been previously observed in the Kob-Andersen mixture [36]. Within the range of the explored temperatures, it is impossible to assess what the low temperature fate of the transition may be: the reduction of the critical conjugate field value µ * suggests that, as the temperature decreases, the structure-rich phase would prevail. However, it is unclear whether the previously reported crystallisation into complex Laves phases [54] would interfere with the emergence of the icosahedra-rich phase. In our simulations, we monitored the evolution Frank-Kasper bonds (an essential element of the complex crystalline phase) and do not find a significative increase in the icosahedra-rich phase compared to the icosahedra-poor phase.
In Reference [36], the study of the alternative Kob-Andersen mixture at low temperature indicated possible scenarios for the temperature dependence of the structural-dynamical transition. In the present case of the Wahnström mixture, we observe that at 0.72T onset the structure-rich phase is highly metastable, while a relatively modest decrease of the temperature to 0.67T onset makes the exploration of the structure-rich basin 5 to 7 times more likely, see Fig. 5 (d). The temperature dependence of the critical value µ * τ obs shows a decrease towards µ = 0 which becomes less pronounced as the temperature is decreased. This is accompanied by strong correlations between successive steps in the trajectory-space Monte-Carlo that slow down equilibration and make lower temperature sampling particularly challenging. The present data support the narrowing of the free-energy gap (in trajectory space) between the structure-rich and structurepoor states when the temperature is decreased, and do not exclude the possibility that the transition terminates at a lower critical point at finite temperature, as in kinetically constrained models with additional softness [67].
In order to better understand the importance of the structure-rich phase, configurations obtained through trajectory sampling have been probed through an outof-equilibrium rheological protocol, effectively regarding these configurations as samples of an amorphous material at T = 0. Consistently with previous studies of the Wahnström mixture based on conventional simulations [68], we find that icosahedra play a major role in the emergence of rigidity: icosahedra-rich configurations display much larger yield stresses than icosahedra-poor ones. However, we nuance this statement, as we are able to split the overall family of icosahedral motifs according to the preparation protocol: well-thermalised configurations from trajectory sampling have icosahedral regions that are more robust to shear and with lower energies than the icosahedral domains obtained via energy minimisation. This highlights that the requirement of sampling long-lived structural motifs (implicit in trajectory sampling) allows us to explore metabasins that are not just richer in structure, but more stable as well.
CPR acknowledges the Royal Society for funding. FT and CPR acknowledge the European Research Council (ERC consolidator grant NANOPRS, project number 617266). This work was carried out using the computational facilities of the Advanced Computing Research Centre, University of Bristol. FT contributed to the generation and analysis of the data and the writing of the manuscript. FT, TS and CPR contributed to the editing of the manuscript. | 9,170 | 2018-04-01T00:00:00.000 | [
"Physics"
] |
Development of the technical vision algorithm
. The purpose of the work is creation of a set of technical means (switches, sensors) for recognition of transport infrastructure facilities, including development of algorithms for autonomous operation of technical facilities under changing environmental factors. In this work, we used methods for determining the volume of a three-dimensional facilities from the data of photo and video recording of the surrounding situation. The algorithm of technical vision was obtained, which is implemented as a program on a mobile device for recognition by means of stereometry of transport infrastructure facilities and their defects and storage of transport infrastructure defects. The novelty of the research is building of decision algorithms based on devices and sensors that recognize changing road conditions, namely, defects in coverage. The data obtained can be used in the planning of road repairs, in the analysis of traffic accidents by road police, road users for processing complaints, etc.
Introduction
The key component in implementation of a fully autonomous robot capable of performing tasks of collecting road data is the technical vision system [1][2].Research in this area is performed by Google (USA), in Russia, the research is performed by Kamaz concern and others.The rapid growth in the number of automotive motor vehicles (AMV) in Russia and the world is many times ahead of the pace of road construction.As a result, the road network operates in stressed conditions.The condition of highways, the quality of the road bed surface, the visibility, the width of the roadway, the arrangement of the relevant signs have a significant impact on road safety and define the concept of "road conditions" in their totality.In investigation of road traffic accidents, in most cases it is believed that their main causes are the negligence or mistakes of the driver -the person (incorrect assessment of road conditions), as well as the AMV fault.According to experts, the real impact of road conditions on occurrence of road accidents is from 60 to 80% of cases [3].
The trajectory and speed of the automobile depends on the "road conditions".If a person drives the car, then he or she makes a decision about the driving regimens based on their experience and psychological and physiological state, if the robot is driving, it is orientated on the basis of a set of sensors that allow navigation in space [4].In the investigation of road accidents involving robots, it is believed that their main reasons are the lack of data on the engineering infrastructure due to the limited action of sensors and Corresponding author<EMAIL_ADDRESS>
Research methods
Due to affordability and prevalence of mobile devices (smartphones, unmanned aerial vehicles (drones), etc.), it is proposed to use mobile measurements on a series of successive video shots.The most accessible method can be to determine the amount of damage using markers from video recording [6].To determine the volume of a three-dimensional object from the photo and video data, it is permissible to apply the method of reconstructing a threedimensional coordinate vector from two perspective projections forming a stereo pair [11] or a simpler method using orthographic projections.This allows acting without accurate information about the transformation, relying on additional information about the object, which is provided with the presence of markers.From the graphical marks, the elements of the geometric transformations matrix are formed and the three-dimensional coordinates of the object points are determined.Then, the task of determining the volume of the object from its video image is reduced to transformation of the image into a digital volumetric model.
The main stages of the solution are as follows: identification of stable features of the video series; determination of reference frames; localization and definition of typical points of the video frame; solution of the inverse problem of photogrammetry; obtaining the three-dimensional mathematical model of the object.Further use of this model for expert evaluation and the formation of design estimates.With the designation of homogeneous geometric coordinates, the transformation of the linear perspective (Fig. 1) is represented as a 4x4 matrix: where x,y,z are point coordinates in the three-dimensional space, x ' ,y',z' are homogeneous geometrical coordinates of the point, h is the scale factor, Т ' is the four-dimensional matrix, .In the course of photographing, the results are projected onto a two-dimensional plane, in this case, on a 0 z 0x1 plane by means of the projection transformation
T T T T T T T T T T T T T T T T T
The composition of these two linear transformations gives the following
T T T T T T T T T T T T T T T
As a result, let us put down the transformation in the following form Where * x and * y are the coordinates of perspective projection on the picture plane of the photo image 0 z .After excluding scale factor h , we obtain two scalar equations:
0, T T x x T T y y T T y z T T y
Under the assumption of the known , , , T x y z , these equations can be used for direct simulation of the photographing process.If * * , , , , x y x y z , are known, then equations ( 6) and ( 7) are two equations with 12 unknown elements ij T .By applying these equations to 6 n noncoplanar points in the object space and to their images on the perspective projection, we obtain a homogeneous system of 2n equations with 12 unknown values.
To solve the resulting system, let us transfer the terms containing normalizing coefficient 44 T to the right-hand side and set value 44 1 T .Thus, to find the solution of we obtain the overdetermined system of equations, the matrix of which cannot be inverted, since it is not square.As it is known from the theory of the method of least squares, the best averaged solution can be calculated by multiplying both sides of the matrix equation by the transposed matrix of the system.Then we obtain a system of 11 linear equations with respect to 11 unknown values with a symmetric square matrix.This equation can be solved with the square root method.Thus, the known coordinates are used to determine the transformation generating this perspective projection, for example, a photo.
Finally, the last approach [12] assumes that * * , , T x y is known.In this case, two equations are obtained from three unknown spatial coordinates , , x y z .This is an underdetermined system of equations, so it is impossible to solve it.However, if two perspective projections are known, say, two photographs obtained from different angles, then equations (6 and 7) can be written for both projections.Then we get the following: , , ,
T T x x T T x y T T x z x T T T y x T T y y T T y z y T T T x x T T x y T T x z x T T T y x T T
where upper indices 1 and 2 denote the first and the second perspective projection.These equations represent four equations from three unknown spatial coordinates , , x y z .Thus, an overdetermined system of equations is again obtained, and one can apply the methods of least squares and the square root to find the solution.
As a result, the elements of the geometric transformations matrix are formed from the graphical marks and the three-dimensional coordinates of the object points are determined.Thus, the problem is reduced to solving a system of linear equations from which the elements of the transformation matrix are determined.It is enough for the road foreman to make two photos, and send them for further processing.
Isolation of characteristic points of the image and determination of the contour of damage can be performed by one of the known methods of Sobel, Laplace, Kani [13][14].Availability of local inhomogeneities on defects of natural origin (potholes, rills, holes) makes it expedient to use blob detectors based on the Laplace method [15], for correct identification of interpolation points of a three-dimensional object.With that, availability of a rectangular image table makes it possible to effectively apply the modern theory of shearlets to solve the problems posed [16].
Shearlets were first identified in 2006 as a structure that allows efficient working with multidimensional data.These structures are widely used to suppress noise on images in order to improve visual perception or increase clarity.Within the scope of the task set, shearlets can serve as a pre-processing tool for stereo pair images to facilitate automatic detection of local features of the damage object.
Results
The practical application of the claimed algorithm is quite wide for recording and determination of the actual dimensions of potholes in the road surface.Let us apply the method to a real stereo pair (Fig. 2).https://doi.org/10.1051/matecconf/201821604003Polytransport Systems-2018 Fig. 2. Stereo pair: road cone against the background of damage to the road surface.
The coordinates of the vertices of the 3D object were measured with a ruler.The coordinates of the corresponding points on the images were recorded in a graphic editor using the mouse.Attempt to restore the 3D coordinates of the vertices of the object gave a good match with the original values.Moreover, it was possible to correct the measurements on the photo and the typing errors when taking samples during the calculations.Given the coordinates of the road surface point on the left and right images, one can estimate the accuracy of the presented technical vision algorithm.In our case, the distance from the tip of the cone to the asphalt coating calculated by the Pythagorean theorem was 31.975cm, which is 0.078% different from the 32 cm value on the technical data sheet.
Given the features of the image of roadway defects (lack of clear boundaries, presence of foreign objects, insignificance of certain defects), it should be possible to allow "manual" editing.The road foreman must be provided with an interface that allows highlighting typical points by moving the cursor.The algorithm for creating such interface on a smartphone running Android OS is described in [17] (Fig. 3).The algorithm of the program on a smartphone for a road foreman can be as follows: the left and right image of the stereopair is alternately dropped onto the screen of the smartphone, the image of the wire model of the 3D object is superimposed on the image, the characteristic points of the object flash on the model, and the user moves the cursor to select the corresponding point on the photo, after processing both images of the stereo pair in the smartphone memory, linear perspective transformation matrices are formed, then the command is given to the user to select an arbitrary point on each stereo pair image from the corresponding test mode menu (if it is a typical point of the object), or set the mode of verification of the Pythagorean Theorem (if it is a point on the road base), or go to continue working; then a voice command is given to the user to select the next point on each stereo pair image, the distance between points is calculated, then the user is asked to select the third and so on points in each stereo pair picture.https://doi.org/10.1051/matecconf/201821604003Polytransport Systems-2018 Fig. 3. Image of the program running on the smartphone screen.
As the array of typical image points is replenished, it becomes possible to construct a three-dimensional mathematical model for defects of the road surface.Then, based on the obtained model, the area and the volume of the geometric figure characterizing this particular damage are calculated using the triangulation technology.
Conclusion
The result obtained corresponds to the stated goal of the study.An algorithm for detecting road surface defects based on the photogrammetry method was obtained as a result of the work.The algorithm is implemented in the program for mobile devices, which can be used by road services, traffic police, emergency surveyors in the cases where it is required to establish the size of defects and their location.Further directions of the study may include the analysis of the ways of automating the process of searching for graphic markers and reference points on photo-images, including damaged road surfaces, evaluating ways of managing autonomous mobile robots, analyzing the constantly changing traffic situation and its effect on the percentage of erroneously calculated transport infrastructure defects, as well as further optimization of the proposed stereo matching algorithm.
The research was performed with the financial support of the Russian Foundation for Basic Research and Tomsk Region Administration (project code 16-41-700400 р_а). | 2,940 | 2018-01-01T00:00:00.000 | [
"Computer Science"
] |
Properties of surface engineered metallic parts prepared by additive manufacturing
The potential of the Additive Manufacturing technologies is impeded by the surface finish obtained on the as-manufactured material. Therefore, the influence of various surface treatments, commonly applied to space hardware, on the mechanical properties of three selected metallic alloys (SS316L, AlSi10Mg, Ti6Al4V) prepared by using Selective Laser Melting (SLM) and Electron Beam Melting (EBM) additive manufacturing processes have been investigated. Within this study, SLM using EOS M400 and EOS M280 equipment and in addition EBM using an ARCAM Q20 machine have been applied for sample manufacturing. A half-automated shot-peening process followed by a chemical and/or electrochemical polishing or Hirtisation ® process has been applied in order to obtain lower surface roughness compared to their as-received states. Special emphasize has been taken on their tensile, fatigue, and fracture toughness properties. In addition, their stress corrosion cracking (SCC) behaviour including microstructural analysis using HR-SEM have been investigated.
Introduction
Additive manufacturing technologies allow manufacture of shapes that are not possible with conventional machining. This design freedom is very attractive for space hardware, as equipment performance could be significantly improved with increased shape complexity. Furthermore, conventional machining results in a high buy-to-fly ratio; i.e. more than 70% of the material is transformed to swarf, which has to be recycled. Using additive manufacturing, the amount of material to be recycled is substantially decreased, limited to the light structure that may be needed to support the part while it is built. The potential of the Additive Manufacturing technologies is however impeded by the surface finish obtained on the asmanufactured 3D objects. Especially inner surfaces of additive manufactured complex shaped parts became difficult or all but impossible to reach by conventional machining processes. Recently, surface finishing processes have been developed enabling not only the surface finish of complex outer but also inner structures of AM parts, e.g. Abrasive Flow Machining by Micro Technica ® Technologies GmbH, ALMBrite TM by South West Metal Finishing, MMP Technology ® by BinC Industries SA, 3DSurFin ® by Airbus Defence and Space GmbH or the Hirtisation process by Hirtenberger Surface Engineering GmbH (HES). Within this study an extensive survey of surface finishing processes has been performed taking various relevant factors for space hardware (surface roughness, cleanliness, costs, process stability, accessability, etc.) into account. An assessment lead to the selection of surface finishing processes and combination thereof for three selected metallic alloys (SS316L, AlSi10Mg, Ti6Al4V) prepared by Selective Laser Melting (SLM) and in case of Ti6Al4V also by Electron Beam Melting (EBM). Here we will report on static and dynamic mechanical properties of above mentioned alloys and manufacturing processes after application of the selected surface finishing processes. In addition, the AM alloys have been tested according to their resistance against stress corrosion cracking (SCC).
Materials and manufacturing
The raw materials have been supplied for SS316L and AlSi10Mg by Electro Optical Systems GmbH. The Ti6Al4V powder has been acquired from TLS Technik GmbH & Co. Spezialpulver KG. The SS316L and Ti6Al4V samples have been manufactured by SLM using an EOS M280 machine. In case of Ti6Al4V, in addition an Arcam A2X for EBM has been applied. The AlSi10Mg samples have been manufactured using an EOS M400 machine. The AlSi10Mg and Ti6Al4V as-build samples have been heat treated directly after manufacturing using a Carbolite thermal vacuum oven.
Surface finishing methods
The following Tab. 1 will give an overview of the different surface finishing scenarios applied to the materials/manufacturing combinations. All samples are shot peened in a first step. In a second and/or third step chemical and/or electrochemical polishing has been applied. In case of AlSi10Mg a patented surface finishing process by Hirtenberger Surface Engineering has been applied.
Characterization methods
Static Tensile tests have been performed according to ASTM E8-04 using an universal testing machine type Shimadzu AGC-10/TC with a maximum load of 100 kN for each of the material/manufacturing combinations in the three principal directions in as received and in surface finished conditions. Five parallel samples have been used. Flat samples in x and 45° direction (relative to the building surface) and round samples have been used in z-direction. Out of the tests the Young´s Modulus, 0.2% proof strength Rp0.2, ultimate tensile strength Rm (UTS) and strain to failure A% have been determined.
Fatigue tests have been performed in a servo-hydraulic testing machine for each of the material / manufacturing combinations in the weakest direction out of the static tensile tests in as received and in optimized surface finished conditions. Four different load levels on 3 parallel samples have bene used. The proposed sample shape (Kt = 2.3) allows testing at any R value (R = 0.1 in this study) and allows the assessment of the effectiveness of the surface finishing procedure also at inner surfaces, such as holes. Out of the tests, a reduced Woehler curve for each material in as received and optimized surface finished conditions can be determined. All fractured samples have been subjected to an investigation of the fracture surface by high resolution scanning electron microscopy HRSEM/ZEISS -Gemini Supra VP equipped with EDAX -EDS/EBSD -Detectors in order to determine the starting point of final fracture and possible presence of micro-pores. Fracture toughness tests have been performed in a universal testing machine type Shimadzu AGC-10/TC with online CTOD (crack tip open displacement) measurement for each of the material / manufacturing combinations in the strongest and weakest direction out of the static tensile tests in optimized surface finished conditions. 5 parallel samples have been used. Precracking of the samples to achieve the required total crack length has been done by fatigue loading with online CTOD measurement in the servo-hydraulic testing machine. Based on a first assessment of the required dimensions for valid K1C evaluationbased on available material data for conventional materials and standard ASTM E399 no of the investigated alloys fulfil the requirements. For AISI 316L it is not possible to manufacture the required dimensions of more than 1000 mm. Therefore, in all cases where the requirements for direct assessment of K1C according to ASTM E399 are not fulfilled, fracture toughness values based on J-Integral evaluation (ASTM 1820) are used. All fractured samples have been subjected to an investigation of the fracture surface by scanning electron microscopy in order to determine the starting point of final fracture and possible presence of micro-pores. The aim of the stress corrosion testing according to ECSS-Q-70-37C is to assess the susceptibility to stress corrosion cracking (SCC) of the material / manufacturing combinations in as received condition and with its selected best surface treatment procedures. Surface roughness measurements have been performed for as received and surface finished samples according to ISO 4287-1997 on the side and top surface. The average and standard deviation of the Ra-(average mean of the profile height) and Rz-value (average maximum height of the profile) were calculated from these measurements.
Tensile test results
The tensile behaviour and tensile properties -Young´s Modulus, Yield strengths, tensile strength and strain to failurehas been determined for the 4 different materials / AM manufacturing method combinations before and after surface treatment according to ASTM E8-04 for vertical, horizontal and 45° samples. The tensile properties of AlSi10Mg manufactured by SLM are shown in Figure 1. The Yield strength (Rp0.2) shows an anisotropy behaviour regarding build-up orientation of samples where the weakest direction was the vertical (185 MPa) and the strongest the horizontal one with the value of 201 MPa. On the other hand the tensile strength and the strain to failure shows the opposite behaviour where the strongest direction was the vertical (355 MPa and 4.3%) and weakest direction the horizontal one (319 MPa and 3.5%). It has to be mentioned that the sample shapes for the vertical (round) differ from that of the horizontal (flat) and 45° (flat) ones, which could have an influence on the anisotropy of the tensile strengthusually the vertical direction shows the lowest strength. The anisotropy behaviour was not changed by the surface treatment, but values for Yield strength and tensile strength were slightly improved. No statistically relevant changes in the strain to failure after surface treatment have been observed.
Figure 1: Summary of tensile parameters for AlSi10Mg
The tensile properties of AISI 316L manufactured by SLM are shown in Figure 2. The Yield strength and tensile strength show an anisotropy behaviour regarding the building strategy. The lowest values of UTS and Rp0.2 (543MPa and 428 MPa) were observed for vertical direction while the highest for samples built in the horizontal direction (614 MPa and 506 MPa). As expected, the strain to failure shows the opposite behaviour with the highest ductility in the vertical and the lowest ductility in the horizontal direction. The anisotropy behaviour was not changed by the surface treatment. Again the surface treatment improved the Yield strength about 10.7% in vertical and 3.3% in horizontal direction as well as tensile strength about 1.5% in vertical and 3.7% in horizontal direction. No statistically relevant changes in the strain to failure after surface treatment have been observed.
Figure 2: Summary of tensile parameters for AISI 316L
The tensile properties of Ti6Al4V manufactured by SLM are shown in Figure 3. The results of UTS and Yield strength revealed very low anisotropy of measured specimens thus being independent on the manufacturing strategy. The highest strengths values were observed for samples built-up in 45° direction (984 MPa). The strain to failure shows the highest ductility for the vertical and lowest ductility for the horizontal direction. The anisotropy behaviour was not changed by the surface treatment. After surface treatment the values for Yield strength and tensile strength were slightly increased. No statistically relevant changes in the strain to failure after surface treatment have been observed.
Figure 3: Summary of tensile parameters for Ti6Al4V / SLM
The tensile properties of Ti6Al4V manufactured by EBM are shown in Figure 4. In comparison to the SLM process, the UTS, Rp0.2 and the strain to failure decreased with higher anisotropy behaviour. The highest UTS of 877 MPa was obtained for the vertical direction and the lowest UTS for horizontally aligned samples (792 MPa).
Figure 4: Summary of tensile parameters for Ti6Al4V / EBM
In general, the strengths values and strain to failure was rather low compared to SLM Ti6Al4V samples. The anisotropy behaviour was not changed by the surface treatment. After surface treatment the values for Yield strength and UTS were slightly increased. No statistically relevant changes in the strain to failure after surface treatment has been observed except for the horizontal direction were the strain to failure was nearly doubled. The following Figure 5 shows the fractured areas of horizontal and vertical tensile specimens. It can be seen that areas of voids with un-molten powder particles are present in the fractured surface. These defects were found in nearly all EBM specimens (also in the fatigue samples). In addition, the cross section of the flat specimens were not rectangular (see upper right side of Figure 5) leading to an overestimation of the measured cross section and subsequent underestimation of the strength values of the flat specimens in horizontal and 45° direction. The root cause for these internal defects is not clear yet.
Fatigue results
In order to assess the influence of the surface roughness on the load bearing cross section, all strength data of the fatigue tests were corrected using the average Rz values measured before. The following sketch (see Figure 6) shows the used method for the correction of the It can be seen, that the surface roughness correction led to a slight increase of the strength values for all materials, but in all cases the surface finishing procedure lead to clear increase in the fatigue strength values well above the corrected values. The following Table 2 shows the fatigue live strength (N > 10 7 load cycles for all materials in as received and surface finished conditions). In Table 3 the surface roughness and the corresponding improvements for each material/process after the individual surface finishing scenarios is shown.
Fracture toughness results
All fracture mechanic tests have been performed in surface finished conditions. However, the influence of the surface finishing on the fracture mechanic properties is considered very limited.
Susceptibility to Stress Corrosion Cracking
The SCC behaviour has been determined for the 4 different material / AM manufacturing method combinations before and after surface treatment. The tests were carried out in accordance to the standard ECSS-Q-70-37C. The results are shown in Table 4. All 4 materials passed the test according to the standard. | 2,900.8 | 2021-02-04T00:00:00.000 | [
"Materials Science"
] |
RNA sequencing to characterize transcriptional changes of sexual maturation and mating in the female oriental fruit fly Bactrocera dorsalis
Background Female reproductive potential plays a significant role in the survival and stability of species, and sexual maturation and mating processes are crucial. However, our knowledge of the reproductive genes involved in sexual maturation and mating has been largely limited to model organisms. The oriental fruit fly Bactrocera dorsalis is a highly invasive agricultural pest, known to cause major economic losses; thus, it is of great value to understand the transcriptional changes involved in sexual maturation and mating processes as well as the related genes. Here, we used a high-throughput sequencing method to identify multiple genes potentially involved in sexual maturation and mating in female B. dorsalis. Results We sequenced 39,999 unique genes with an average length of 883 bp. In total, 3264 differentially expressed genes (DEGs) were detected between mature virgin and immature Bactrocera dorsalis libraries, whereas only 83 DEGs were identified between flies that had mated or were mature virgins. These DEGs were functionally annotated using the GO and KEGG pathway annotation tools. Results showed that the main GO terms associated with the DEGs from the mature virgin vs. immature groups were primarily assigned to the metabolic and developmental processes, which we focused on, whereas those from the mated vs. mature virgin group largely belonged to the response to stimulus and immune system processes. Additionally, we identified multiple DEGs during sexual maturation that are involved in reproduction, and expression pattern analysis revealed that the majority DEGs detected were highly enriched in those linked to the ovaries or fat bodies. Several mating responsive genes differentially expressed after mating were also identified, and all antimicrobial peptides detected were highly enriched in fat body and significantly up-regulated approximately 2- to 10-fold at 24 h after mating. Conclusion This study supplied female reproductive genes involved in sexual maturation and the post-mating response in B. dorsalis, based on RNA-seq. Our data will facilitate molecular research related to reproduction and provide abundant target genes for effective control of this agricultural pest. Electronic supplementary material The online version of this article (doi:10.1186/s12864-016-2532-6) contains supplementary material, which is available to authorized users.
Background
Tephritid fruit flies, especially those belonging to the genus Bactrocera, are globally important due to their destructive impact on agriculture. Their high reproductive ability is an important factor that leads to enhanced survival rates and potential to multiply. Reproductive biology has been well studied in tephritid species, including Bactrocera tryoni (Froggatt), Bactrocera cucurbitae (Coquillett) and Ceratitis capitata (Wiedemann), but most reports have mainly focused on the processes at the physiological and behavioral levels [1][2][3]. Bactrocera dorsalis (Hendel) (the oriental fruit fly), which is one among the important Bactrocera species, is a highly invasive agricultural pest that is currently distributed across most Asian countries and in a number of Pacific Islands. Adults can lay eggs on various types of host plants, where the hatched larvae subsequently feed and cause crop loss [4]. Some factors that regulate the fecundity and mating behavior of B. dorsalis females have been previously studied, such as methyl eugenol-fed males and multiple mates, etc. [5]. Although the B. dorsalis transcriptome from different developmental stages of the whole body as well as specific tissue in males is available on NCBI, large-scale molecular analysis of reproduction in B. dorsalis females remains limited [6][7][8][9][10]. As an invasive agricultural pest with a wide host range and high fecundity, it is essential to investigate the reproductive biology of B. dorsalis females at the molecular level in order to formulate simple and effective strategies for agricultural pest control.
Recently, transcriptional changes in several genes that may be involved in female sexual maturity and mating were identified in C. capitata based on microarray data [11]; however, the cDNA library of C. capitata is limited to the head of female adult. Additionally, it is worth noting that the reproductive strategy of B. dorsalis is different from that of C. capitata. For instance, medfly reach sexual maturity rapidly within 2-3 days after emergence, whereas it takes approximately 2 weeks for the oriental fruit fly to complete the transformation after emergence to maturity [12,13]. Therefore, our study is planned because both species have different mating strategy and that the methodology followed in previous study provided only limited resources.
Female reproductive potential significantly contributed to the multiply and stability of species, which mainly depends on the process of sexual maturation and mating [14]. In most holometabolous insects, adults emerge from the pupal stage as sexually immature individuals. After they ingest a food source, they become sexually mature within several days dependent on the species [14]. During the development of the oocyte, the ovary becomes mature and then when females reach sexual maturity they began to mate and oviposit [15]. Until now, studies on sexual maturity have mainly concentrated on the model species whose genomes have been sequenced except C. capitata, including Drosophila melanogaster, Apis mellifera, Aedes aegypti and Blattella germanica [16][17][18][19][20]. However, these reports on model species mainly focused on the protein vitellogenin and information on other proteins that play roles in reproduction is lacking. While genes involved in female adult reproductive maturation, including EcR, USP and InR, have been identified in Tribolium castaneum using microarrays, they are mainly the early genes in the signal transduction pathway, and little is known about the other genes, especially the late or effect genes [21].
Successful mating behavior and the following egglaying is essential for sexual reproduction in insect, especially for its multiply. Female reproductive process has been revealed to be greatly affected by male reproductive gland secretion during copulation. Reports in C. capitata and Bactrocera tryoni showed the effect of mating on remating propensity, refractory period and fecundity of females [22][23][24]. In D. melanogaster, it was reported that mating regulated ovulation and spawning as well as other processes in mated females, such as, inducing innate immunity, acceleration of reproductive maturation and egg production [25][26][27][28]. The sex peptide receptor, which mediates the post-mating switch in females, has been reported in D. melanogaster, Helicoverpa armigera and B. dorsalis [4,[29][30][31]. However, the other factors involved in this process are still largely unknown. Genome-wide research into the post-mating response in females has been performed in model species such as D. melanogaster, the honeybee queen Apis mellifera, and C. capitata, which have revealed that the post-mating response differs among species as well as the time following mating [11,25,28,[32][33][34]. Regardless, there is no such information available for B. dorsalis, and the transcriptional changes and genes involved in this process is also still unknown.
In the present study, the transcriptome of the whole body of the female B. dorsalis were sequenced at three different physiological stages (immature, mature virgins and mated individuals) using the Illumina HiSeq 2500 system, and assembled based on our previously established transcriptome of B. dorsalis [6]. Furthermore, analysis of the DEGs between mature virgins and immature individuals was performed in order to identify the transcripts involved in the process of sexual maturation. An additional comparison between mated individuals and mature virgins was also carried out to identify the genes that respond to mating. Tissue specific expression profiles of the differentially expressed genes from these two comparisons were analyzed by qRT-PCR. Finally, the expression levels of the genes involved in the post-mating response at different time points were also obtained.
Insect culture
B. dorsalis were cultured in our lab under a 12 h light/ dark cycle at 27°C. Adult flies were reared on artificial diets composed of 25 % yeast extract and 75 % sugar [35].
Sample preparation
Newly emerged adults were collected and sexed within 12 h after emergence, and both sexes were subsequently maintained in separate cages. As male and female oriental fruit flies require 14 days to reach sexual maturity under standard rearing conditions, 1-to 2-day-old (1-2 days) individuals were considered immature while 14-day-old (14 days) individuals were considered sexually mature. Twenty-four hours after emergence, the immature female flies were chilled briefly and RNA was extracted from the whole bodies of 10 individuals using TRIzol regent (Invitrogen, Carlsbad, CA, USA). Similarly, mature virgin samples were collected 14 days after emergence and total RNA was isolated from the whole bodies of 10 mature virgin females.
To obtain mated flies, approximately 100 fourteenday-old virgin flies of each sex were placed into a 30 cm 3 cage shortly before dark as B. dorsalis mating occurs during dusk [36]. When copulation was observed, mating pairs were isolated and placed in small vials about 14 cm in width and 23 cm in height with 11.5 cm diameter. Notably, only pairs that mated for at least 90 min were used in further experiments in order to avoid false mating with little or no sperm transfer. Copulation terminated naturally and the mated pairs were placed in separate cages according to sex. RNA was extracted from the whole bodies of 10 mated females the following day (24 h after mating) with the same procedure as was used for the virgin flies. RNA samples were treated to remove any contaminating DNA with the DNA-free kit (Ambion, Foster City, CA, USA).
Sequencing
Total RNA from each sample was isolated using TRIzol reagent (Invitrogen) following the manufacturer's instructions. The concentration and integrity of total RNA were determined using the 2100 Bioanalyzer (Agilent, Santa Clara, CA, USA). Total RNA was then incubated with 10 U DNase I (Takara, Shiga, Japan) at 37°C for 30 min, and purified using the Dynabeads® Oligo (dT)25 (Thermo Fisher, Waltham, MA, USA) following the manufacturer's instructions. To ensure the accuracy of the results, we conducted two independent replicates for each sample, specifically immature (1, 2), mature virgin (1, 2) and mated (1,2) individuals. For each case, the following validation described below was performed.
For cDNA library construction 100 ng purified mRNA was used using NEBNext® UltraTM RNA Library Prep Kit for Illumina sequencing (New England Biolabs, Ipswich, MA, USA). First-strand cDNA was synthesized using Pro-toScript II Reverse Transcriptase. After incubation, Second Strand Synthesis Enzyme Mix was added for the synthesis of the second strand of the cDNA. Doublestranded cDNA was then purified using AMPure XP Beads (Beckman Coulter, Brea, CA, USA), followed by an end repair step using End Prep Enzyme Mix. After adaptor ligation and further purification, the cDNA library was obtained. To verify the quality of the library, it was checked using three different methods, including Qubit (Thermo Fisher) quantitative analysis, 2 % agarose gel electrophoresis, and high-sensitivity DNA chip determination.
The cDNA libraries were then used for cluster generation with the TruSeq PE Cluster Kit (Illumina, San Diego, CA, USA) and sequenced on an Illumina HiSeq ™ 2500 instrument with paired-end sequencing.
Sequence assembly and annotation
Raw data were first filtered by removing adaptor sequences and low quality reads using FASTX-Toolkit (http://hannonlab.cshl.edu/fastx_toolkit/), and then assembled into contigs using Trinity (v2013-02-25) with N bases as the default assembly parameters. Finally, redundant sequences were removed while those remaining constituted the transcriptome used in downstream analyses. Furthermore, depth and coverage of sequencing was evaluated. The assembly data have been submitted to NCBI as a TSA under the accession number GEEA00000000.
All contigs were annotated with GetORF of the EM-BOSS package [37]. The ORF of each predicted protein was used in a BLASTp search against the Swiss-Prot and the NCBI nr databases, with the e-value threshold set to 10 −5 . Gene Ontology (GO) annotations were performed with GoPipe [38]. Predicted proteins were first used for a BLASTp search against the Swiss-Prot and TrEMBL databases with an E-value cut-off of 10 −5 , and then the results were analyzed by GoPipe based with the gene2go software. The KOG (Eukaryotic Orthologous Groups) and KEGG pathway annotations were also analyzed using the Cluster of Orthologous Groups database and the Kyoto Encyclopedia of Genes and Genomes database, respectively [39].
Differential expression analysis
For the identification of genes that were possibly involved in reproduction, those that were differentially expressed at the various physiological stages of adults were analyzed. Two comparisons of the expression level of paired sets of the three samples were performed between mature virgin vs. immature and mated vs. mature virgin individuals. The number of reads for contigs from each sample was first converted to Reads per Kilobase per Million (RPKM), and then the MARS model (MAplot-based method with Random Sampling model) in the DEGseq package was used to calculate the expression abundance of each contig [40]. A false discovery rate (FDR) < 0.001 was considered to indicate significant expression abundance. Additionally, differentially expressed genes were further used for GO and KEGG analysis to identify the pathways that the DEGs were predicted to be involved in. Relativity analysis was performed to evaluate the stability and reliability of the replicates. GOEAST was used for GO term enrichment analysis [41]. The relative significant level of enrichment of the differently expressed gene to all genes was calculated with hypergeometric statistical test method, then the GO term whose FDR < 0.001 was considered to be significant. For pathway enrichment analysis, a similar method was used. The KEGG pathway was considered as a unit, and its relative significant level of enrichment of the differently expressed pathway to whole genome was calculated.
Quantitative real-time PCR (qRT-PCR) verification
Selected DEG data was validated by qRT-PCR using the SYBR Premix Ex Taq kit (Takara) according to the manufacturer's instructions with a real-time thermal cycler (Bio-Rad, Hercules, CA). TRIzol reagent (Invitrogen) was used to extract total RNA from B. dorsalis at three different physiological stages, including the immature (newly emergent within 24 h), mature virgin (14-day-old before mating), and mated (14-day-old post-mating). At least 10 insects were collected for each sample. The first strand cDNA was obtained from 2 μg of total RNA using M-MLV Reverse Transcriptase (Takara) with the primer oligo-anchor R (5′-GACCACGCGTATCGATGTCGAC T 16 (A/C/G)-3′). The primers used for qRT-PCR detection are listed in Additional file 1: Table S1. The relative gene expression data were analyzed using the 2 -ΔΔCt method as described by Zheng et al. [42]. The results were analyzed using a one-way analysis of variance (ANOVA) statistical test. All quantitative PCR experiments were repeated in three biological replicates.
Tissue specific expression pattern analysis of DEGs involved in reproduction
Total RNA was extracted from different tissues of the mature female adults, including the head, thorax, ovary, fat body and midgut. The tissue samples were dissected from 30 individuals. After the cDNA was obtained, qRT-PCR was performed according to the methods described above. A total of eleven genes differentially expressed during maturation and six AMPs whose transcript abundance changed after mating were used in this study.
Change in transcriptional abundance of matingresponsive genes
To investigate the expression level of mating-responsive genes before and after mating, newly emergent individuals (1 day after emergence) of each sex were separated and the mated pairs were obtained as described above. RNA was extracted from the head or fat body of mated females 1, 12, 24 h after mating. At least 30 individuals were used for each sample and qRT-PCR experiments were repeated in three biological replicates.
Results and discussion
De novo sequence assembly The cDNA samples from different stages were sequenced, and these raw data (Additional file 2: Table S2) were assembled into 39,999 unique genes (Additional file 3: Table S3). The mean contig size was 883 bp with lengths ranging from 201 to 27,791 bp (Additional file 3: Table S3 and Additional file 4: Figure S1). The contig size distribution revealed that more than half of the contigs (23,645; 59.11 %) were between 200 and 500 bp in length, whereas 34.88 % (13,951) were between 500 and 3000 bp in length (Additional file 4: Figure S1).
Gene ontology and clusters of orthologous group classification
Gene ontology (GO) analysis was performed for functional categorization of unigenes (Additional file 5: Figure S2). Within the biological processes, the main GO terms were grouped into cellular (3362; 19 %) and metabolic (2696; 15 %; Additional file 5: Figure S2A). As for the molecular function category, binding (3663; 50 %) constituted the largest group, followed by catalytic activity (2096; 29 %; Additional file 5: Figure S2B). In the previously sequenced B. dorsalis transcriptome obtained from different developmental stages, the metabolic process group (35 %) was much larger than the cellular process group (16 %) [6]. The primary reason for this difference might be that the metabolic activities of insects during larval development is greater than in the adult stage. Many complicated physiological processes occurs during molting and metamorphosis, such as a molting cascade similar to the larval molt, histolysis of larval tissues, remodeling and formation of adult tissues, etc. [43,44].
Assignments of clusters of orthologous groups (COG) were performed to further evaluate the completeness of the transcriptome and the effectiveness of our annotation process. The annotated sequences were grouped into 25 major functional classes (Additional file 6: Figure S3). Among them, the majority of the clusters were "Signal transduction mechanisms" (795; 13.01 %), "General function prediction only" (742; 12.14 %), whereas "Cell motility" (17; 0.28 %) and "Nuclear structure" (28; 0.46 %) represented the smallest groups (Additional file 6: Figure S3). Our transcriptome will aid further research focused on signal transduction in tephritid files.
Transcriptional changes during female maturation
To identify transcripts that may play a role in female maturation, differentially expressed gene sequences between the mature and immature libraries were identified with a total of 3264 DEGs (FDR < 0.001, Fig. 1 & Additional file 7: File S1). Among them, 2567 transcripts were more abundant in the mature females, while 697 were more abundant in immature females. Our results are different from those revealed by the adult transcriptome of C. capitata obtained with a microarray, in which only 811 transcripts displayed significant changes during female maturation. Furthermore, in the transcriptome of C. capitata, the transcripts with greater abundance were found in the immature females (462) instead of the mature individuals (349) [11]. These differences may be attributed to the use of different sequencing methods and the different sexual maturation time utilized by these two species. For example, B. dorsalis is sexually mature after 14 days, while C. capitata can mate after only 4 days [12,13]. Therefore, more genes are possibly implicated in the sexual maturation in B. dorsalis.
Analysis of enriched biological process GO terms with significant differences (p < 0.05) during female maturation was performed. We focused on the physiological processes that appear to play essential roles in maturation, including reproductive, signaling, developmental, metabolic, immune processes as well as response to stimulus (Fig. 2a). A table of the identified genes is provided in the supplementary data (Additional file 8: Table S4). Results revealed that differentially expressed genes in the metabolic and developmental processes are a much larger group than in other processes, which is similar to the results in C. capitata that indicated the most enriched terms in mature females were categorized in the metabolic process group [11]. These data indicate that a variety of metabolic activities and developmental affairs usually occur during oogenesis and ovary maturation. Additionally, enrichment analysis of the KEGG pathways for the differentially expressed genes during maturation was also performed. Results showed that the number of differentially expressed genes in the metabolism pathways are the largest groups, including Carbohydrate metabolism, Energy metabolism, Nucleotide metabolism, Lipid metabolism, Metabolism of cofactors and vitamins, Amino acid metabolism, etc. (Additional file 9: File S2). This is similar with the result of GO enrichment analysis.
Transcriptional changes of mating-responsive genes in females
To identify transcripts that may have been involved in female mating, differentially expressed genes between the mated and mature virgin females were identified. Out of 83 transcripts that displayed significant transcriptional changes between virgin mature and mated females 24 h post-mating, 65 (78 %) were more abundant in mated females, while only 18 (22 %) were more abundant in virgin mature females (FDR < 0.001, Fig. 1 & Additional file 7: File S1). This is similar with the result found in C. capitata and D. melanogaster [11,27,28]. Only 32 and 28 transcripts were altered in abundance 24 h after mating in C. capitata and D. melanogaster, respectively. However, in a study of Drosophila using DNA microarray analysis, 539 transcripts are differentially expressed in the lower reproductive tracts [25]. One reason for the variable changes in transcription could be that like previous studies by Lawniczak and Begun [28] in Drosophila, our study focused on broader changes of gene expression in the whole body, while a study by Mack et al. [25] in Drosophila concentrated on the lower reproductive tracts, where significant changes take place following mating. Furthermore, the time point for sample collection after mating in both our study and that of the medfly was 24 h [11], that in the Lawniczak and Begun [28] analysis was 1-3 h, whereas the Mack et al. [25] examined several time points (0, 3, 6 and 24 h postmating). It was demonstrated in previous studies that post-mating transcriptional changes are highly variable at different time points, thus possibly explaining transcriptional differences among these studies [25]. Statistically significant differences (p < 0.05) among enriched biological process GO terms during mating included those involved in the response to stimulus, as well as immune system, developmental, cellular, biological regulation and metabolic processes. Of these transcripts, those that are related to the response to stimulus and immune system process are the largest group (Fig. 2b). A table of these genes is provided in the supplementary data (Additional file 10: Table S5). Similar results were found in A. mellifera where the genes involved in stress response were significantly regulated following mating [45]. Additionally, an immune response was stimulated by mating in Drosophila, and it might it arise from sexually antagonistic interaction between the sexes [46,47]. Furthermore, enrichment analysis of KEGG pathways for the mating responsive genes was also performed (Additional file 9: File S2). Our results showed that the metabolism pathways were the main groups, including carbohydrate metabolism, transport and catabolism, glycan biosynthesis and metabolism, lipid metabolism, metabolism of cofactors and vitamins. The different results between KEGG enrichment and GO enrichment might be caused by the different database used in the analysis.
Genes involved in oogenesis and ovary maturation
From DEG analysis of the samples from immature (newly eclosed) vs. mature virgin (sexually mature before mating) individuals, we identified seven genes that reportedly participate in oogenesis and ovary maturation in insects (Additional file 8: Table S4). Interestingly, two genes that were previously found to play important roles in Wnt signaling pathways and an additional two involved in sex determination were also identified in this DEG library [48][49][50][51]. The subsequent qRT-PCR validated this result (Fig. 3). All of these genes were significantly highly expressed at the sexually mature and post-mating stage, which is consistent with our transcriptome data, indicating that they are involved in physiological processes of mature females as is the case in other insects [52][53][54] (Fig. 3). Additionally, we analyzed expression of these genes in several tissues to elucidate their roles and found that most are highly enriched in the ovary, including hu-li tai shao (hts), oskar, mago nashi, yolkless (yl), disheveled, axin, sry and transformer-2 (Fig. 4). The expression level of oskar in the ovary is approximately 50 fold higher than that in the fat body and 300 fold in the midgut. Notably, three vitellogenin genes can be detected in the head, thorax, ovary, fat body, while they are more abundantly expressed in fat body (Fig. 4). These results are explained below by analyzing their roles determined for other insects.
hts was reported to modulate actin polymerization in neurons, sharing a conserved function with mammalian adducing in actin-capping activity [55]. One isoform of hts, Ovhts-RC, was shown to be distributed along a cortical anterior-posterior gradient during Drosophila lateoogenesis to modulate spatially restricted actin filament growth at the oocyte cortex while alterations in Ovhts-RC led to actin overgrowth in the oocyte [52]. A previous report indicated that oskar participated in regulating pole plasm assembly, and a long form of oskar played an important role in yolk endocytosis and F-actin projection at the posterior pole [56]. Recently, it was found that oskar functions upstream of D-EndoB in the polarized endocytic activity of the yolk uptake process in Drosophila oocytes [57]. It was also found that mago nashi interacts with Tsunagi/Y14 and other proteins, forming a multiprotein complex to play an essential role in localization of oskar mRNA within the oocyte in Drosophila [58]. Recent analyses demonstrated that mago nashi forms a complex with Tsunagi/Y14 and Ranshi to influence oocyte differentiation within the posterior pole of the presumptive oocyte in Drosophila [53]. Vitellogenin, also known as yolk protein, has been well studied in insects, especially in Drosophila, Aedes aegypti, Anopheles gambiae and C. capitata [54,[59][60][61][62]. In these species, it is expressed in the fat body, secreted into the hemolymph and subsequently sequestered by oocytes to facilitate the Fig. 2 GO classification of the DEGs to demonstrate changes in transcript abundance with maturation and mating. GOEAST was used for GO term enrichment analysis, and the relative significant level of enrichment of the differently expressed gene to all genes was calculated with hypergeometric statistical test method, then the GO term whose FDR < 0.001 was considered to be significant enriched. The number of enriched transcripts for GO annotated sequences was revealed in the various samples of different biological categories. a Mature virgin females compared to immature females, and b mated females compared to mature virgin females transport of carbohydrates, lipids and other nutrients to the ovaries [63]. In the transcriptome we obtained, three vitellogenin genes were identified, including vitellogenin 1, vitellogenin 2 and vitellogenin 2 precursor, all of which are up-regulated in sexually mature females, indicating that they are essential for Bactrocera sexual maturation. The gene yolkless, commonly known as the vitellogenin receptor, belongs to the apolipoproteins receptor (LDLR) superfamily [64]. Drosophila yolkless is expressed very early during the development of the oocyte at both transcriptional and translational level, long before vitellogenesis begins. Previous studies found that yolkless mutants failed Fig. 3 qRT-PCR confirmation of the differentially expressed genes between mature virgin females and the immature individuals. Total RNAs from B. dorsalis at three different physiological stages were isolated, including the immature (newly emergent within 24 h), virgin (14-day-old before mating) and mated (14-day-old post-mating). Relative transcript levels are calculated by real-time PCR using the 16 s rRNA gene as the standard control. Different letters represent significant differences (p < 0.05, Duncan's test) among samples. Three independent biological replicates were performed for each sample to incorporate yolk proteins during oogenesis [65]. Our findings are validated in the qRT-PCR results, revealing that these genes were enriched in the ovary or fat body. Furthermore, we found disheveled and axin (isoform A) in our transcriptome, both of which are core components of Wnt signaling pathways (Additional file 8: Table S4). They are also required for cell movement and FAK regulation during ovarian morphogenesis, which may explain why both genes are abundantly expressed in mature B. dorsalis females [48] (Fig. 3).
Notably, two genes involved in sex determination were also identified, including sex-determining region y protein (sry) and transformer-2 (tra-2). It has been reported that sry plays a role in mammalian sex determination, reduced or delayed sry expression impairs testis development. This indicates a potential role of sry dysregulation in human intersex disorders [49]. Additionally, tra-2 was also reported to be an essential switch for the sex determination cascade in dipteran insect species [50,51]. Knockdown of tra-2 leads to the production of male-only progeny in B. dorsalis [66]. In the present study, we identified these two genes and found they were up-regulated in sexually mature females compared to their newly eclosed counterparts, which is the first Fig. 4 Differentially expressed genes between mature virgin females and immature individuals in various tissues. qRT-PCR was used to analyze mRNA levels of differentially expressed genes, including those found in the head, thorax, ovary, fat body and midgut. Relative transcript levels are calculated by real-time PCR using the 16 s rRNA gene as the standard control. Various letters indicate significant differences in the expression level (p < 0.05, Duncan's test). Three biological replicates were performed evidence that sex determination related genes may also function during sex maturation.
Genes involved in the post-mating response of females
Upon mating, significant expression changes occurred in genes involved in the immune or stress response (Fig. 2b). The immune response stimulated by mating might arise from sexually antagonistic interaction between the sexes [27,28,46,47]. Several up-regulated genes from the DEG analysis showed significant increases in transcript abundance in females, including two PRRs (peptidoglycan-recognition protein SB1-like and peptidoglycan-recognition protein LB-like) and six antimicrobial peptides (AMPs) (defensin, diptericin, phormicin-like, sapecin, cecropin-1 and attacin-C-like) (Additional file 10: Table S5). To investigate the expression pattern of these genes, the expression levels of AMPs in different tissues were analyzed by qRT-PCR. The results showed that all were highly enriched in the fat body, which is usually an important organ involved in the immune response in insects (Fig. 5). The expression pattern of these genes indicates that they may participate in the immune response after mating in B. dorsalis.
To further validate, the changes in expression at different time points after mating were analyzed in both the fat body and head. All genes except attacin-C-like and phormicin-like were significantly up-regulated 24 h after mating in the fat body approximately 2-10 fold (Fig. 6a). This is partly supported by the previous studies in D. melanogaster, which showed that the immune response is activated in mated females by mating and associated sex peptides transfer [28,46,47]. Upon mating, potent signal molecules from the males enter the females and the behavior and physiology of the mated females shifted from reproductive receptivity to fecundity. The immune response was induced in the females and it might arise from sexually antagonistic interaction between the sexes [45,47]. However, this is different from the results of the previous report on C. capitata, which revealed no significant changes of transcript abundance were detected in immune-related genes after mating, except for a large reduction in defensin in the female abdomen [11]. This data suggests that post mating changes differ among species [33]. Additionally, the expression level of most AMPs did not significantly change during the time course in the female head 24 h post-mating, except for sapecin and cecropin-1, both of which significantly increased in transcript abundance, by 2.3 and 3.0 fold, respectively (Fig. 6b). It is clear that the fat body is the main tissue where the immune response takes place and not the head. 5 Expression pattern of the immune-related genes between mated females and the mature virgin individuals in different tissues, including the head, thorax, ovary, fat body and midgut. Relative transcript levels of six antimicrobial peptide are calculated by real-time PCR using the 16 s rRNA gene as the standard control. Different letters indicate significant differences (p < 0.05, Duncan's test) among samples and three independent biological replicates were performed for each sample Fig. 6 The effect of mating on the expression level of immune-related genes in the fat body (a) and head (b) of B. dorsalis females. RNA was extracted from the head or fat body of mature vigin females as well as mated females at 1, 12, 24 h after mating. Relative transcript levels of six antimicrobial peptide are calculated by real-time PCR using the 16 s rRNA gene as the standard control. Various letters represent significant differences of the expression level of genes (p < 0.05, Duncan's test). Three biological replicates were performed | 7,337.4 | 2016-03-05T00:00:00.000 | [
"Agricultural and Food Sciences",
"Biology",
"Environmental Science"
] |
CRITICAL SUCCESS FACTORS FOR ADOPTION OF CLOUD COMPUTING IN PUBLIC UNIVERSITIES IN KENYA
: Cloud computing technology is a distributed computing approach whereby users access shared resources under various service models through the internet. It allows individual access to information technology resources through the internet upon demand. Cloud computing is an essentially growing terminology in the IT world and has become increasingly present in the life of institutions of higher learning. Institutions of higher learning consider cloud computing and construction of digital content platforms as a way of enhancing resource utilization and improvement of service delivery. The sudden and frenzied rush for cloud computing by universities has been aggravated by exponential growth in data traffic and the need for innovative learning such as e-learning and virtual classrooms amid COVID 19 pandemic (Kenya Education Network-KENET, 2021). Perhaps it is from that realization and the need to adhere to COVID 19 protocols that most public universities in Kenya have adopted cloud computing. In this study, we sought to find out the critical success factors for adoption of cloud computing in public universities in Kenya. The study set out three (3) objectives and consequently three (3) null hypotheses to guide it. Quantitative research design was adopted for this study. Similarly, International Business Machines-IBM (2011) model for cloud adoption offered theoretical guidance. At a confidence interval of 95%, an online sample size calculator was used to arrive at three hundred and sixty two (362) respondents out of six thousand two hundred (6200) target population. Proportionate stratified random sampling technique and an online list randomizer were used to select respondents in the selected universities to participate in the study. Multiple regression was used to test the hypotheses in this study based on empirical data obtained by a survey questionnaire of thirty nine (39) questions from the two (2) public universities. Multiple (β=.353), and User Preparedness (β=.47 5) on adoption of cloud computing at p<.05. Regression results gave a coefficient of determination R 2 =.908 which means 90.8% of the variation in adoption of cloud computing can be explained by Management Support, Technical Support, and User Preparedness combined. Based on the coefficient of determination (R 2 ), the three null hypotheses (H0 1, H0 2, & H0 3 ) were rejected at p<0.05. Regression analysis showed that Management Support, Technical Support and User Preparedness are critical success factors in cloud adoption in public universities in Kenya. This study provides new and relevant insights to literature on cloud adoption in higher education service in Kenya. Shapiro -Wilk test showed that the responses were normally distributed at p>.05 with a coefficient of .078. Multiple regression showed that the responses had tolerable multicollinearity with a Variance Inflation Factor loading of 2.439 which satisfied O’brien (2007) benchmar k of multicollinearity. The Kaiser-Meyer-Olkin (KMO) test gave a value of .740, while the Bartlett’s test was significant at (Approx χ 2 =2738.819, p=.000) which confirmed the appropriateness of the factor analysis for management support.
Introduction
Cloud computing is one of the technologies that has been adopted by many organizations. With the competitive world, where technology is used to gain competitive advantage, many organizations have adapted themselves to the use of technology in order to remain competitive (Margianti & Mutiara, 2019). Most organizations, especially institutions of higher learning in the world consider cloud technology as a critical intervention in their strife to improve their capacities in terms of delivery of service delivery.
Cloud computing revolution is fast sweeping its way through most institutions of higher learning globally (Borse & Gokhale, 2019). This could be the reason why many institutions of higher learning in the world over have either embraced cloud technology or are on the road map to its adoption (Alzahrani, 2015). Although originally conceived for the banking sector, cloud technology has increasingly spread to other sectors of the economy, especially the education sector.
In many countries, the application of cloud technology flourishes as an efficient and scientific tool for reducing operational costs, improving service delivery, and streamlining academic activities in public universities (Borse & Gokhale, 2019).
Statement of the Problem
In Kenya, the Vision 2030 blue print considers Information Communication and Technology (ICT) as part of Science Technology and Innovation (STI) which is one of the key pillars towards attainment of Millennium Development Goals (MDGs). Cloud computing (CC) is one of the innovative technologies that offers ingenious services to Information Communication and Technology (ICT) in institutions of higher learning in Kenya. Perhaps it is because of the importance and copious advantages of cloud computing that public universities in Kenya have increasingly adopted cloud computing (Waga, Makori & Raba, 2014). Amidst reduction of offline classes, the technology provides students and staff seamless access to teaching and learning resources with greater flexibility through personalized virtual collaboration, communication and sharing of resources (Kurelovic, Rako & Tomljanovic, 2013). Despite the recent and unprecedented adoption of cloud computing (CC) in institutions of higher learning, few studies have been carried out in this area (Mwavali, 2021) and therefore this study sought to analyse the critical factors which should be considered in order to successfully implement cloud technology in public universities Kenya.
Theoretical and Model Underpinnings
This study was based on International Business Machines-IBM (2011) model. The model comprised comprising of: (1) organizational environments; including top management support, re-engineering business process, effective project management, change management strategy, and institution-wide commitment; (2) people factors; including education and training, employee's attitude, project team and user involvement both at system requirements definition and cloud computing implementation; (3) technical problems; including suitability of software and hardware, IT maturity and data accuracy; (4) cloud computing vendor commitment; including vendor support, vendor tools and vendor and client partnerships; and (5) cultural impact; including organizational cultures and computer culture.
The model was chosen because it offers possibility of grouping complex issues into manageable constructs in adoption of new technologies.
Conceptual Framework
Owing to IBM (2011) model and literature on cloud computing a conceptual framework was generated. This study was conceptualized as presented in Figure 1.
Literature Review
Cloud computing is a technology where data and different applications are stored on storage networks and servers which are located in a remote place and accessed by the users via the Internet. This technology allows storage and access of data and programs in a seamless way over the internet instead of computer hard drives (Desai, Patel & Patel, 2016). It is an on-demand, self-service global platforms of network technology which allows users to retrieve computing resources at any time at every place (Ghanem, 2019). On the other hand, institutions of higher learning need unchanged and consistent creativity and innovation so as to maintain cost effectiveness, efficiency and also for the provision of high quality services to staff and students. In this era of technology and elearning, students, staff and other stakeholders in institutions of higher learning, need to access various web-based information and resources with minimum down time. Moreover, cloud computing also solves issues to do with data management and assessment or analysis in teaching and research which are the core functions of public universities in Kenya (Waga, Makori & Raba, 2014).
Considerations in Adoption of Cloud Computing
Many institutions of higher learning, especially universities have been in the rush to roll out cloud-based applications in the recent past. Initially, public universities had reservations about its adoption largely because of fear of the unknown and that the idea was still new (Onyango, 2016). However, the scenario has changed in the recent past with cloud computing being one of the fastest growing sectors of digital economy and is applicable to various activities of everyday life including education (Kurelovic, Rako & Tomljanovic, 2013). The concerted efforts to utilize Cloud Computing (CC) in support of remote teaching and learning in public universities in Kenya during the COVID-19 pandemic have increasingly emerged (Kenya Education Network-KENET, 2021). The extraordinary disruption caused by the pandemic in Public Universities called for KENET to offer support and guidance on adoption of innovative ways so as to ensure continuity of learning in those institutions. Perhaps, the realization that cloud technology has copious advantages, secure and user friendly, has led to its full acceptance and vast implementation in institutions of higher learning. Consequently, KENET was mandated to transform research and learning using cloud-based educational platforms, resources and services that are deliberately designed to help universities navigate smoothly towards remote teaching and learning. The KENET data centres offer cloud computing using its own community infrastructure as an educational service that is used to host learning management systems, institution repository system, integrated library system and student information centres. The platforms allow universities to manage their services in a secure environment as well as install applications and operating systems of their choice (KENET, 2021).
Top Management Support
Information adoption literature considers top management support as crucial and positively related to successful implementation of cloud computing in most organisations (Gangwer, 2015). The support is in terms of corporate values, culture, allocation of resources, conducive infrastructural platforms, and support during the change process (Ramdani & Kawelek, 2007;Borgman et al. 2013;Makena, 2013;Gangwer, 2015;Yigitbasioglu, 2015). For cloud computing to be a success in most organisations in Kenya, those in leadership positions must be ready to champion and support it (Ogwel et al., 2020)
Technical Support
Technical support is one of the factors which facilitate successful implementation and adoption of cloud computing in most institutions (Cegielski et al., 2012;Tweel, 2012;Makena, 2013;Ogwel et al., 2020). Technical support in cloud computing resonates around the cloud service providers and is the main driving factor in ensuring that cloud information is available at all times or whenever needed (Gangwar, 2015). The web services that support institutions of higher learning in Kenya include CRM and ERP which are available in software as a service (KENET, 2021).
User Preparedness
Cloud computing as a complex system requires that the end users are adequately prepared through appropriate training and education before it is rolled out (Gangwar, 2015). User preparedness help in reducing ambiguity, anxiety and stress among users while improving the users' perception on ease of use and usefulness (Munguti & Opiyo, 2018). Consequently, there have been various ICT trainings given to end users so as to help them navigate and interact with e-resources through online platforms with ease (Omwansa, Waema & Omwenga, 2014).
The literature above demonstrates that top management support, technical support and user preparedness are key parameters which should be considered in adoption of cloud technology. However, it would be interesting to find out whether the same factors are significant in cloud adoption in institutions of higher learning in developing countries. It was against this background that this study sought to assess the critical success factors in cloud adoption in public universities in Kenya.
The study set out the following hypotheses: H01: Management Support has no Statistically Significant Effect on Cloud Computing Adoption in Public Universities in Kenya.
H02: Technical Support has no Statistically Significant Effect on Cloud Computing Adoption in Public Universities in Kenya.
H03: User Preparedness has no Statistically Significant Effect on Cloud Computing Adoption in Public Universities in Kenya.
Sample
This study was carried out in two (2) public universities in Kenya. The universities had approximately six thousand two hundred (6,200) members of staff. An online sample size calculator was used to determine the sample size that would precisely reflect the target population. At a confidence level of 95%, a sample of three hundred and sixty two (362) was arrived at. Stratified simple random sampling technique, and more specifically an online list randomizer was used to proportionately select respondents in the two public universities from the following Strata; Senate Members (15) This study used the following two instruments; questionnaires and document analysis. The main instrument was the questionnaire which was administered on three hundred and sixty two (362) members of staff in order to obtain information on management support, technical support, user preparedness, and cloud adoption. Two hundred and forty two (242) questionnaires were duly filled and returned. The questionnaires comprising of thirty nine (39) questions were closed ended and in the Likert type of scale ranging from: 'Strongly Disagree' (SD), Disagree (D), Neutral (N), Agree (A), Strongly Agree (SA). Document analysis was used to find out the number of staff in the universities.
Independent Variables 5.2.1 Management Support
This study operationalized management support in terms of consideration of cloud computing in budget allocation, having cloud computing as part of its vision, championing cloud computing initiatives, adequate staffing of IT department, availability of cloud computing infrastructure, articulation advantages of cloud computing, reengineering of procedures to accommodate cloud computing, provision of sufficient information on cloud computing, commitment towards cloud computing initiatives, and development of proper road map towards cloud computing. Cronbach's Alpha test revealed that the responses were internally consistent with a coefficient of .790 which satisfied Sekeran (2000) benchmark of α =0.7. Shapiro-Wilk test showed that the responses were normally distributed at p>.05 with a coefficient of .078. Multiple regression showed that the responses had tolerable multicollinearity with a Variance Inflation Factor loading of 2.439 which satisfied O'brien (2007) benchmark of multicollinearity. The Kaiser-Meyer-Olkin (KMO) test gave a value of .740, while the Bartlett's test was significant at (Approx χ 2 =2738.819, p=.000) which confirmed the appropriateness of the factor analysis for management support.
Technical Support
Technical support was operationalized in terms of requisite IT infrastructure in place, competent staff to handle cloud computing, sensitization of potential users on cloud computing, adequate information system support for cloud computing, well developed computing platforms, efficient IT system, university enterprise system suited for cloud computing, users having good computer knowledge, modern IT environment, thorough testing of cloud before 'go-live' stage, and vendors offering technical support and follow up. Cronbach's Alpha test revealed that the responses were internally consistent with a coefficient of .781 which satisfied Sekeran (2000) benchmark of α =0.7. Shapiro-Wilk test showed that the responses were normally distributed at p>.05 with a coefficient of .087. Multiple regression showed that the responses had tolerable multicollinearity with a Variance Inflation Factor loading of 3.125 which satisfied O'brien (2007) benchmark of multicollinearity. The Kaiser-Meyer-Olkin (KMO) test gave a value of .637, while the Bartlett's test was significant at (Approx χ 2 =1918.838, p=.000) which confirmed the appropriateness of the factor analysis for technological support.
User Preparedness
User preparedness was operationalized in terms of user being conversant with the cloud system, trained on cloud computing, adequate cloud competencies, positive attitude towards cloud computing, readiness to upgrade to new technologies, upbeat about cloud computing, and familiarity with cloud computing. Cronbach's Alpha test revealed that the responses were internally consistent with a coefficient of .708 which satisfied Sekeran (2000) benchmark of α =0.7. Shapiro-Wilk test showed that the responses were normally distributed at p>.05 with a coefficient of .789. Multiple regression showed that the responses had tolerable multicollinearity with Variance Inflation Factor loading 1.988 which satisfied O'brien (2007) benchmark of multicollinearity. The Kaiser-Meyer-Olkin (KMO) test gave a value of .572, while the Bartlett's test was significant at (Approx χ 2 =850.613, p=.000) which confirmed the appropriateness of the factor analysis for user preparedness.
Dependent Variable-Cloud Adoption
In this study, cloud adoption was measured in terms of strength of cloud computing, increase in efficiency and accountability via cloud computing, periodic evaluation and monitoring of cloud computing, cloud computing aligned to a department, assisted in definition of roles, detailed installation plans, adequate system testing, requisite skills by system users, and proper planning and coordination of cloud computing. Cronbach's Alpha test revealed that the responses were internally consistent with a coefficient of .780 which satisfied Sekeran (2000) benchmark of α =0.7. Shapiro-Wilk test showed that the responses were normally distributed at p>.05 with a coefficient of .778. The Kaiser-Meyer-Olkin (KMO) test gave a value of .649, while the Bartlett's test was significant at (Approx χ 2 =1332.009, p=.000) which confirmed the appropriateness of the factor analysis for cloud adoption.
Model and Analysis
In order to establish the critical success factors in cloud adoption in public universities in Kenya, the aggregate mean scores of the independent variables; management support (MS), technical support (TS), and user preparedness (UP) were regressed on the aggregate mean scores of the dependent variable adoption of cloud computing (ACC): ACC= β0+ β1MS + β2TS + β3UP + ε.
Limitations of the Study
The major limitation was on external validity or generalizability of the findings of the study to other public universities. Given the sample size of 362 respondents and 242 valid questionnaires, the study was faced with a limitation as to whether the findings of the study were generalizable to other universities on the basis of this study alone. Whereas the study attempted to capture all variables used to measure adoption of cloud computing (ACC), management support (MS), technical support (TS), and user preparedness (UP) in public universities, the researcher reckons that to achieve a complete and accurate reflection of those constructs, is hardly fully achievable.
Correlation Analysis
Correlation analysis was done to determine relationships between the study variables. Pearson product moment correlation coefficient test showed that the there was strong positive relationship between the study variables and therefore it was safe to run regression analysis as shown in Table 1.
Relationship between ACC and MS, TS, UP
The aggregate mean scores from data on adoption of cloud computing (independent variable) were regressed on the aggregate mean scores from data on management support, technical support, and user preparedness (dependent variables). Hypotheses set were tested using multiple regression method and the model set constructed. (2021) ANOVA results based on the F-test (F= 816.528, p=000) showed that the multiple regression model was robust enough to explain the adoption of cloud computing using management support, technical support, and user preparedness as the critical success factors in Table 2.
The model also showed that R 2 (coefficient of determination) is 0.908, which means that approximately 90.8% of the variation in adoption of cloud computing can be explained by management support, technical support, and user preparedness combined as shown on Table 3. Because management support, technical support, and user preparedness combined significantly predicted variations in adoption of cloud computing, F (3,238) = 816.528, p < 0.05, R 2 =0.908, and the respective beta coefficients (β= .257, β=.353, β= .475, p<0.05), the Hypothesis 1 (H01), Hypothesis 2 (H02) and Hypothesis 3 (H03) were therefore safely rejected.
On the basis of the results on Table 4, the following model was constructed to explain the effect of individual factors on cloud computing in public universities in Kenya: ACC = -.047 + .257MS + .353TS + .475UP.
Effect of Management Support on Adoption of Cloud Computing
The result of the study indicated that management support had a positive and significant effect on cloud computing (β= .257, p<.05). The results suggest that as the level of management support increases, so the level of cloud computing adoption. Top management support has a crucial role in initiating, implementing and adopting cloud computing. It is the organizational leadership that typically supports initiatives, provides resources, participate in decision making and set organizational strategies and direction for cloud adoption (Ogwel et al., 2020). Cloud computing adoption must therefore receive approval from management besides them demonstrating strong commitment for the successful implementation of cloud computing in any organization.
According to Gangwar (2015), successful adoption of cloud computing require strong leadership, commitment, and participation from top management. The roles of management in adoption of cloud computing include establishing reasonable goals for adoption of cloud computing, exhibiting strong commitment for its implementation and communication the corporate cloud computing strategy to all the employees.
The results link well with literature and empirical findings by (Ramdani & Kawalek, 2007;Borgman et al., 2013;Makena 2013;Yigitbasioglu, 2015;Ogwel et al., 2020) on the effect of top management support on adoption of cloud computing in organisations. Drawing from these findings, top management give the cloud computing the necessary impetus for its successful adoption in any organization.
Effect of Technical Support on Adoption of Cloud Computing
Hypothesis 2 (H02) predicted that technical support does not significantly affect cloud computing adoption. The results indicated that technical support has a significant positive effect on cloud computing adoption (β=.353, p=<0.05). The results suggest that as the level of technical support increases, so does the level of cloud computing adoption. This in effect means having suitable software and right hardware is a fundamental step in adoption of cloud computing in an organization. The findings support literature by (Cegielski et al., 2012;Tweel, 2012;Hudson, 2013;Makena, 2013;Ogwel et al., 2020) on the significant role of technical support in adoption of cloud computing in organisations.
Effect of User Preparedness on Adoption of Cloud Computing
Hypothesis 3 (H03) stated that user preparedness does not have significant effect cloud computing adoption. The results found that there exists a positive significant effect of user preparedness on cloud computing adoption (β= .475, p<0.05). The finding suggests that higher user preparedness leads to higher cloud computing adoption in organisations. These results support the literature and findings by Munguti & Opiyo (2018) on the need to appropriately prepare users' perceptions and competencies for successful adoption of cloud computing in organisations.
Conclusion
This study successfully extended knowledge by studying and testing whether management support, technical support, and user preparedness affected adoption of cloud computing. Many innovative organisations have found that cloud computing adoption provides them with the flexibility to take control of their innovation processes while streamlining other business processes and controlling cost. This study has demonstrated that management support, technical support, and user preparedness go a long way in ensuring that the adoption of cloud computing is a success. Providers of cloud system services require suitable platforms for developing and installing their applications. They are facing the challenge of integrating technology-oriented system with business processes and this can be overcome by considering management support, technical support, and user preparedness as the remedy for the challenges. Based on the findings of this study and analysis of relevant studies, critical success factors for adopting cloud computing in public universities in Kenya have been put forward.
In conclusion, the findings of this study have important implications for academic, management practice and human resources. As scholarly inquiries into the notion of critical success factors for the adoption of cloud computing has remained conceptual to date, this research is one of the attempts to test the concepts in an empirical setting. Institutions of higher learning may endorse the validity of incorporating management support, technical support, and user preparedness in order to enhance the successful implementation of cloud computing.
Recommendations
Several practical implications arise from the findings of this study. The findings have demonstrated the importance management support, technical support, and user preparedness in the adoption of cloud computing. Organizations that want to capitalize on the innovative technologies like cloud computing must ensure that managers explicitly give the project the requisite support. Also, the organization must give the necessary technological support for ease of adoption to cloud computing. Furthermore, the users of the cloud computing technology must be prepared for ease of uptake of this innovation. Therefore, efforts to increase cloud computing adoption can be | 5,366.2 | 2021-09-17T00:00:00.000 | [
"Computer Science"
] |
Studies on mcl-Polyhydroxyalkanoates Using Different Carbon Sources for New Biomedical Materials "2279
Polyhydroxyalkanoates (PHAs) are microbial homoand copolymers of [R]-β-hydroxyalkanoic acids, produced by a wide variety of bacteria as an intracellular carbon and energy reserve. To obtain mcl-PHAs of microbial origin, we used a Pseudomonas spp. strain (from the National Institute for Chemical-Pharmaceutical Research and Development (ICCF) culture collection of micro-organisms), by varying the carbon sources and the precursors. In this work, assays were performed with fermentation media seeded with inoculum cultures of strain Pseudomonas putida in a proportion of 10%. The influence on strain development and mcl-PHA production of carbon sources consisting in C6, C7, C8 and C9 fatty acids (as polymers precursors) was analyzed. Due to their properties, similar to conventional plastics and their biodegradability, PHAs are suitable for many applications and for biomedical materials useful in surgical sutures, tissue engineering and drugs carriers, leading us to deepen the study of obtaining micro/nanofibers by the electrospinning method.
Introduction
Polyhydroxyalkanoates (PHA) are microbial homo-and copolymers of [R]-βhydroxyalkanoic acids, are produced by a wide variety of bacteria as an intracellular carbon and energy reserve [1,2]. The factors that affect the growth of the microorganism and implicitly the PHAs production, depend very much on the composition of the medium, and are as follows: the concentration and the type of the carbon source, the amount of nitrogen and phosphorus source. Other factors, also important, are pH, temperature, oxygen concentration, and the system of cultivation and they can influence the conversion of the substrate and the content of PHA in the cells [2][3][4]. Depending on the number of carbon atoms, contained by the monomers units, PHAs isolated can be classified as follows: (i) short chain length (scl) PHAs-3 to 5 carbon atoms/monomer, (ii) medium chain length (mcl) PHAs-6-14 carbon atoms/monomer, and scl-co-mcl with repeat-unit monomers containing 3-14 carbon atoms [2]. Many studies confirmed that mcl-PHA type is much more flexible and resistant than scl-PHAs [5,6].
Due to the fact that they have properties similar to plastics obtained from petroleum, but especially due to the fact that they are biodegradable, PHA can be an alternative to synthetic polymers [7]. These are promising materials due to their useful characteristics: thermoplastic and elastomeric properties, biodegradability, biocompatibility and nontoxicity. Consequently, they are good candidates for various applications in industry (replacements for petroleum-derived plastics, packaging industry, laminate papers and 2 of 5 cardboards), fine chemical industry (starting materials for the synthesis of antibiotics and other fine chemicals) or medicine (scaffolds for bone tissue engineering, drug delivery system) [8][9][10][11].
In this paper, we studied the optimal concentration of fatty acids to obtain new biomaterials used in medical domain.
Materials and Methods
The ingredients and the reagents used in experiments were purchased from Sigma-Aldrich (St. Louis, MO, USA), Merck (Kenilworth, NJ, USA), and Larodan (Solna, Sweden).
For PHA production Pseudomonas putida ICCF 391 was used. The stock culture was grown at 29 ± 1 • C and periodically transferred on fresh M44 (cDSMZ424) agarized medium. During the research, the stock cultures were kept at 5 • C in the refrigerator. Preinoculum medium (M44) contained (g/L): yeast extract 10, peptone 10, glycerol 50, agar 20. The cell culture from the pre-inoculum medium was taken in 2 mL of distilled water and passed into the inoculum medium (100 mL), whose composition was (g/L): glucose 10, corn extract 15, KH 2 PO 4 10, NaCl 10, MgSO 4 0.5. The inoculum culture, developed 24 h at 30 • C on shaker (220 rpm), was used in a proportion of 10% for the inoculation of the fermentation medium. The medium (250 mL/flask) used to produce PHAs contained (g/L): NaNH 4 HPO 4 ·4H 2 O 3.5, K 2 HPO 4 7.5, KH 2 PO 4 3.7, and was periodically supplemented (at 0 and 24 h) with fatty acids investigated (C6, C7, C8, C9), in different combinations, whose amount varied in the range 15.23-16.7 g/L. The experiments were performed to make a comparison between the degree of conversion of fatty acids and the composition of the polymers obtained, when a single precursor (C8, C9) or combinations of precursors (C8-C9, C6-C8, C7-C9) were added to the bioprocess medium. At the time of inoculation, the bioprocess medium was supplemented with two solutions containing trace elements (1.0 mL/L from each solution) prepared and sterilized separately [3].
After dissolving the ingredients in distilled water and adjusting the pH to 6.8-7.00, the media were sterilized for 30 min at 115 • C.
The bioprocesses were conducted for 48 h, at 30 • C, and 220 rpm, and the optical density (OD) of the culture was measured periodically (at λ = 550 nm, 1:25 dilution) with a spectrophotometer (UV-VIS, Jasco V-Able 630). After centrifugation and processing of the bioprocess media, the amount of dry biomass and the amount of polyhydroxyalkanoates obtained were determined.
The biomass (obtained after centrifugation of the medium) was treated with methanol and then dryied under vacuum. The mcl-PHAs were extracted from biomass by acetone Soxhlet extraction method (biomass: acetone ratio was 1:20). The next step consisted in the concentration of the extract obtained and the precipitation of the mcl-PHA with cooled (in refrigerator) methanol (1:10 concentrated extract: methanol). The precipitated polymer has been dissolved in chloroform, the chloroform was evaporated, the polymer was left to dry and after that, weighed [2].
In order to determine the monomer composition of the obtained polymers, acid methanolysis of these polymers was performed, which resulted in obtaining a mixture of methyl esters, further identified chromatographically, based on methyl esters standards C6-C11, using an HP 5 column (5% phenyl-methyl-polysiloxane) [2,12]. After chromatographic identification of the monomers, their purity degree was calculated to determine the degree of conversion of the substrate consisting of fatty acids provided as a carbon source.
Results and Discussion
The amount of precursors, the manner of supplementation and their type, as well as the results obtained at the end of the bioprocesses, performed in order to obtain PHA, are presented in Table 1. Correlating the data from the experiments performed, we noticed that using a mixture of C8 and C9, a lower amount of dry biomass (g/L) was obtained, compared to the fermentation in which C8 was used as single precursor. When we used a mixture of C8-C6, we noticed that the amount of biomass was higher in the cases that C8 was the first fatty acid added in the bioprocess medium. In fact this combination of precursors (C8-C6), with C8 as first source of carbon added, was the best of all (3.636 g DCW/L).
In the Table 2 are presented the results obtained for polymer composition and purity expressed as g/100 g of analyzed product as determined by GC-FID. After processing the biomass and obtaining PHA, it can be seen that when higher amounts of biomass were achieved (P19, P5, P21), the results were reflected in the amount of polymer obtained (expressed as a percentage, relative to the amount of biomass). The amount of biopolymer contained in biomass was in the range of 36-56%. In bioprocesses in which mixed additions C8-C9 were made, the percentage of C9 hydroxy acids was higher (45.72 and 59.59 respectively) than that of C8 hydroxy acids (33.79 and 14.9, respectively); in C7-C9 additions, C7 or C9 hydroxy acids prevailed depending on the precursor initially added; and in the C6-C8 combinations C8 hydroxyacids prevailed, according to the results obtained following gas chromatographic analyzes ( Table 2). The analytical results revealed that the highest values obtained for the component hydroxy acids were: 66.18 for C7, between 79.46-88% for C8, and from 45.72 to 79.32% for C9. The highest degree of conversion was achieved by octanoic acid (79.46-88%).
The results obtained for the amount of biomass and, after its processing, for the amount of biopolymer, reveal that the best influence on PHA biosynthesis was achieved by octanoic acid, alone or in combination with hexanoic acid (added to the bioprocess medium after 24 h) as can be seen in Figure 1. hydroxy acids was higher (45.72 and 59.59 respectively) than that of C8 hydroxy acids (33.79 and 14.9, respectively); in C7-C9 additions, C7 or C9 hydroxy acids prevailed depending on the precursor initially added; and in the C6-C8 combinations C8 hydroxyacids prevailed, according to the results obtained following gas chromatographic analyzes (table 2). The analytical results revealed that the highest values obtained for the component hydroxy acids were: 66.18 for C7, between 79.46-88% for C8, and from 45.72 to 79.32% for C9. The highest degree of conversion was achieved by octanoic acid (79.46-88%).
The results obtained for the amount of biomass and, after its processing, for the amount of biopolymer, reveal that the best influence on PHA biosynthesis was achieved by octanoic acid, alone or in combination with hexanoic acid (added to the bioprocess medium after 24 h) as can be seen in Figure 1.
Conclusions
The highest degree of conversion of fatty acids into biopolymer was achieved for octanoic acid, results also revealed in the amount of biomass obtained.
Nonanoic acid is probably more difficult to metabolize by the microorganism than octanoic acid because, for the same amount added to the bioprocess medium, the biomass resulting from the media containing nonanoic acid was less. This fact is confirmed if we evaluate, comparatively, the supplementation made in the batches with both fatty acids: the amount of PHA obtained was higher when the initial supplementation was made with octanoic acid. The fact that the microorganism (P. putida)has a higher affinity for octanoic acid can also be seen if we compare batches 19 (P19) and 21 (P21): the initial supplementation with octanoic acid was beneficial both in terms of the amount of biomass obtained (3.636 g/L), as well as its content in PHA (56.29%).
The results also revealed the performance of the microorganism to produce mcl-PHA by converting the monomers tested as precursors to the maximum limit of 16.70 g/L. Thus, the polymers have contained 66.18% -C7, from 79.46% to 88% -C8, from 45.72% to 79.32% -C9.
Conclusions
The highest degree of conversion of fatty acids into biopolymer was achieved for octanoic acid, results also revealed in the amount of biomass obtained.
Nonanoic acid is probably more difficult to metabolize by the microorganism than octanoic acid because, for the same amount added to the bioprocess medium, the biomass resulting from the media containing nonanoic acid was less. This fact is confirmed if we evaluate, comparatively, the supplementation made in the batches with both fatty acids: the amount of PHA obtained was higher when the initial supplementation was made with octanoic acid. The fact that the microorganism (P. putida) has a higher affinity for octanoic acid can also be seen if we compare batches 19 (P19) and 21 (P21): the initial supplementation with octanoic acid was beneficial both in terms of the amount of biomass obtained (3.636 g/L), as well as its content in PHA (56.29%).
The results also revealed the performance of the microorganism to produce mcl-PHA by converting the monomers tested as precursors to the maximum limit of 16.70 g/L. Thus, the polymers have contained 66.18%-C7, from 79.46% to 88%-C8, from 45.72% to 79.32%-C9.
Following the obtained results, an in-depth study of these biopolymers can be continued for their use as a material in the electrospinning method to obtain fibres and scaffolds for tissue engineering applications.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable. | 2,644.6 | 2020-11-14T00:00:00.000 | [
"Biology",
"Engineering"
] |
The prohormone convertases PC1 and PC2 mediate distinct endoproteolytic cleavages in a strict temporal order during proopiomelanocortin biosynthetic processing.
Two subtilisin-like endoproteases called PC1 and PC2 are distributed in a tissue-specific manner in the pituitary and in the brain. AtT-20 cells and corticotropes of the anterior pituitary express primarily PC1 and perform a limited number of cleavages of the proopiomelanocortin (POMC) precursor during biosynthesis. Melanotropes of the intermediate pituitary express both PC1 and PC2 and perform a more extensive set of cleavages during the biosynthetic processing of POMC. To investigate the role of PC2 in the biosynthetic processing of POMC, AtT-20 mouse corticotropes were stably transfected with a full length PC2 cDNA. The AtT-20 cells expressing PC2 acquired the ability to perform all the additional cleavages seen in the intermediate pituitary, but did not acquire the ability to alpha-N-acetylate the product peptides. The kinetics of the earliest steps in biosynthetic processing were unaltered by the expression of PC2, and the changes due to PC2 expression were seen only in the middle and late steps in biosynthetic processing. Thus, both the identity of the final product peptides and the kinetics of the processing steps in the AtT-20 cells expressing PC2 fit the patterns expected for melanotropes of the intermediate pituitary.
POMC processing is cell type-specific in the pituitary, and the distribution of PC1 and PC2 mRNAs might explain the differences in processing: corticotropes express primarily PC1 mRNA, while melanotropes express high levels of PC2 mRNA along with PC1 mRNA. AtT-20 cells are a mouse corticotrope cell line which expresses almost exclusively PC1 and has been extensively documented as a model for anterior pituitary corticotropes (2,21).
As has been found for most prohormones, all of the cleavages of the POMC precursor occur at pairs of basic amino acids (see Fig. 1) (2,(22)(23)(24)(25)(26). The initial steps of POMC processing (steps 1, 2, and 3 in Fig. 1) occur with similar kinetics in both lobes of the pituitary. Cleavage of OLPH to make P-endorphin (step 4) occurs in both lobes of the pituitary, but melanotropes of the intermediate pituitary carry out much more extensive cleavage at this site. Two of the endoproteolytic cleavages that take place very late in the biosynthetic pathway occur only in intermediate pituitary melanotropes and in the brain and do not occur in anterior pituitary corticotropes: step 6 forms Lyso-y3MSH and step 7 forms 0endorphin . Another biosynthetic step which is restricted to melanotropes and to neurons in the nucleus of the solitary tract is the a-N-acetylation of ACTH and @-endorphin (2,24).
Previous studies using the vaccinia virus system to express PC1 and PC2 have shown that both candidate endoproteases can mediate a number of cleavages of the POMC precursor, but did not fully explain the differences in cleavages seen between corticotropes and melanotropes in vivo (27)(28)(29). In particular, since the cleavages to produce the melanotropespecific products, Lyso-rsMSH and P-endorphin , were undetectable in cells expressing both PC1 and PC2, there remained the important question of whether additional prohormone convertases needed to be identified to understand POMC processing in the pituitary and brain.
In order to avoid any potential problems that might arise from the disruption of cellular functions associated with vaccinia infection (30) and to create cell lines which could be used to investigate PC1 and PC2 function in depth, we wished to create stable AtT-20 lines with altered levels of PC1 and PC2 expression. AtT-20 cells have a high level of PC1 mRNA and protein and extremely low levels of PC2 mRNA (7,13,14,21). We used AtT-20 cells whose PC1 levels had been reduced by expression of antisense RNA to PC1 to demonstrate the crucial role of PC1 in the early steps in POMC processing (13). Our initial work, which utilized an expression vector driven by the mouse metallothionein-1 promoter, did not succeed a t getting adequate PC2 expression in AtT-20 cells to decide whether PC2 had a role in peptide processing. We have found that an expression vector using the human cytomegalovirus promoter (31) produces levels of expression of various rat peptidylglycine a-amidating monooxygenase (EC 1.14.17.3) constructs in AtT-20 cells much higher than those achieved using the metallothionein-based vector (32,33). Therefore, we used this expression vector to make stable AtT-20 lines expressing elevated levels of rPC2, so that we could investigate possible changes in POMC peptide processing in a cell line whose ability to make secretory granules and store products for later secretion had not been compromised.
In this work, biosynthetic labeling demonstrates that the earliest cleavages of POMC seen in corticotropes (see Fig. 1) are unaltered by overexpression of PC2, while later cleavages normally only found in melanotropes are induced by expression of PC2. Identification of the product peptides showed that all of the endoproteolytic cleavages unique to melanotropes are accurately produced by AtT-20 cells expressing PC2, while a-N-acetylation is not induced by expression of PC2. Even the kinetic patterns of the cleavages seen in melanotropes are matched upon expression of PC2 in AtT-20 cells, with some cleavages happening rapidly and others taking many hours to occur. Finally, studies of secretion showed that the new peptides produced by AtT-20 cells expressing PC2 are stored in a compartment from which secretion can be stimulated by secretagogues.
MATERIALS AND METHODS
Establishment of Stable Cell Lines Expressing Rat PC2"Starting with the pBluescript plasmid encoding the full sized rat PC2 (13), the cDNA was cut 3' to the stop codon at nucleotide 2058 with BstXI, the ends of the DNA were made blunt by treatment with S1 nuclease, and then XbaI was used to obtain the full insert. The 2-kb fragment was inserted into an expression vector which utilizes the cytomegalovirus promoter (pCIS.SCXXNH, kindly provided by Dr. Cornelia Gorman, Genentech) (31) prepared by digestion with XbaI and HpaI, creating the expression plasmid pCIS.sPC2; the orientation was verified by restriction mapping. The plasmid was cotransfected along with the pMt.neo-1 plasmid into AtT-20 cells using the lipofection method, enabling drug selection with G418 (32). Drug-resistant cell lines were screened by Northern blot analyses using a probe for PC2 mRNA (13). Poly(A)+ RNA was prepared using the PolyATract kit (Promega) following the manufacturer's instructions.
Peptides were released from the antisera by boiling into SDS-urea-0-mercaptoethanol; for peptide analyses, guanidine-HCl-P-mercaptoethanol was used (35). SDS samples were analyzed using borateacetate-buffered tube gels, followed by slicing, eluting, and liquid scintillation counting (41); alternatively, 35S-labeled samples were analyzed on slab gels (16.4% acrylamide and 0.6% N,N'-methylenebisacrylamide) using the low molecular weight rainbow standards (Amersham). Slab gels were rinsed for 30 min in 30% isopropanol, 10% acetic acid and impregnated for 30 min with the Amplify fluorographic solution (Amersham), followed by drying and fluorography. Retention of small peptides in the gel during these procedures was verified by rehydration of the dried gel, liquid scintillation counting, and comparing these counts to samples subjected directly to elution and scintillation counting.
POMC peptides released from the antibody with guanidine-HC1 were subjected to gel filtration on Sephadex G-75 in 10% formic acid, 20 pg/ml bovine serum albumin (35). In order to establish the identity of the a-melanotropin-sized peptides immunoprecipitated with the NHz-terminal ACTH antiserum, the peptide pool was subjected to reversed-phase high-pressure liquid chromatography (RP-HPLC) on a Cls-pBondapak column in trifluoroacetic acid along with synthetic peptides as internal markers (36), followed by liquid scintillation counting; additional samples were digested with chymotrypsin for 5 h and analyzed by RP-HPLC with marker peptides (36). Similarly, peptides reacting with the @-endorphin antiserum and approximately the size of 8-endorphin were subjected to cation exchange HPLC in the presence of camel @-endorphin marker peptides (the camel and mouse 8-endorphin sequences are identicaP(23)); the resulting radioactive peaks were desalted using a Sep-Pak, and then subjected to chymotryptic digestion (35) and analysis by RP-HPLC in trifluoroacetic acid with added internal markers. Peptides reacting with the y3MSH antiserum and approximately the size of y3MSH were subjected to radioactive amino acid microsequencing by Dr. Henry Keutmann (Massachusetts General Hospital). Joining peptide was analyzed by TSK gel filtration and DEAE-ion exchange chromatography (38).
AtT-20 Cell Lines Expressing High Levels of PC2 mRNA-
The goal of establishing stable AtT-20 cell lines expressing high levels of PC2 mRNA was to determine whether the cleavages in POMC that are unique to melanotropes and do not occur in corticotropes ( Fig. 1) could be introduced by overexpression of PC2 in this corticotrope cell line. Using the pCIS.sPC2 construct, two independent cell lines expressing high levels of PC2 mRNA were established by Northern analysis; both exhibited similar alterations in POMC processing, storage, and secretion and have been stable for over 5 months. Additional drug-resistant lines from the same transfections, which did not express significant amounts of PC2 mRNA, did not show changes in POMC metabolism. When poly(A)+ RNA was prepared in order to allow analysis of larger amounts of mRNA than tested previously (13), a faint PC2 mRNA signal was detected in wild-type AtT-20 cells; the best pCIS.PC2 cell line expressed 50-100 times as much PC2 mRNA as the wild-type cells, and several times more PC2 mRNA than the intermediate pituitary. The expression of endogenous PC1 mRNA was unaltered by the expression of transfected PC2 mRNA (data not shown).
Changes in the POMC Products in Cells Expressing PC2- Identical aliquots (1.5 X lo6 cpm) of trichloroacetic acid-precipitable material were used for each extract of each cell line. Immunoprecipitation was performed using the N-ACTH antiserum (top), (3-endorphin antiserum (middle), and yMSH antiserum (bottom). Immunoprecipitates were analyzed by SDS-polyacrylamide gel electrophoresis using the borate-acetate system. Similar changes due to the expression of PC2 were seen in seven biosynthetic labeling experiments of this type. decreased the amount of DLPH and produced a major increase in the ratio of the smaller peptide, P-endorphin, to the larger peptide, BLPH (Fig. 2, middle). In addition, expression of PC2 led to a new peak the size of glycosylated y3MSH in the cell extracts after immunoprecipitation with yMSH antiserum (Fig. 2, bottom). The production of joining peptide was analyzed following incubation of the cells for varying periods of time in medium containing [3H]Trp and was unchanged by the expression of PC2; over 95% of the joining peptide produced by wild-type or PC2 cells was a-amidated, indicating that the a-amidation of peptides was unaltered by the expression of PC2 (not shown).
Expression of PC2 also led to changes in the newly synthesized POMC peptides seen in the culture medium. Fig. 3 demonstrates that the basal rate of secretion of aMSH-sized peptides, ACTH, and glycosylated ACTH, and (?-endorphinsized peptides by PC2 cells was consistently more rapid than secretion by wild-type AtT-20 cells. Only the rate of secretion of POMC was not altered by the expression of PC2. Similarly increased rates of secretion were seen with two independent lines expressing PC2. One of the important characteristics of neuroendocrine cells is their ability to store peptides for extended times and then release the peptides when stimulated. The higher rate of basal secretion in PC2 cells than in wild-type cells raised the possibility that it would not be possible to stimulate secretion above the high basal rate. To investigate this question, wildtype and PC2 cells were incubated with [35S]Met for 30 min and then chased in nonradioactive medium for four consecutive 1-h periods. During the third period, cells were stimulated 30 min and then chased in nonradioactive medium for four successive 1-h collections. Medium from the second hour of basal secretion was analyzed with the N-ACTH antiserum (top) and the (3-endorphin antiserum (bottom); an amount of medium corresponding to 2.0 X lo6 cpm of trichloroacetic acid-precipitable material in the pulse cell extract was analyzed for each cell line. Similar results were seen in six experiments comparing basal secretion by wild-type and PC2 cells.
using 10 nM phorbol ester (as in Ref. 32). Samples of medium were immunoprecipitated with the N-ACTH, (?-endorphin, and yMSH antisera, and analyzed by SDS-polyacrylamide gel electrophoresis (Fig. 4). Medium from the wild-type cells showed the expected pattern with all antisera; the secretion of labeled POMC and AB1 decreased (since the cells were being chased) and primarily the two forms of ACTH were secreted in response to the secretagogue (Fig. 4, top).
The secretion of POMC and AB1 from the PC2 cells showed a similar pattern through the experiment (Fig. 4, bottom).
However, unlike the wild-type cells, PC2 cells did not show a major stimulation of secretion of the two forms of ACTH , and instead showed a major stimulation of the secretion of aMSH-sized material (Fig. 4, bottom). Results obtained with the (?-endorphin antiserum showed that PC2 cells secreted primarily (?-endorphin-sized peptides in response to the secretagogue, while the wild-type cells secreted primarily PLPH (data not shown). Similarly, results obtained with the r M S H antiserum showed that only the PC2 cells released a peak the size of glycosylated r3MSH in response to secretagogue (not shown).
Identification of the POMC Products Whose Production Was Increased by Expression of PC2"To identify the peptide product comigrating with ACTH(1-13)NHz and aMSH in cells expressing PC2, the ACTH( l-13)NHz-sized material was isolated from cells incubated in medium containing [3H]Tyr. When an aliquot of the intact peptide was analyzed by RP-HPLC with internal synthetic peptide markers (36), the product comigrated with ACTH(l-13)NH2 rather than with aMSH [a-N-acetyl-ACTH(1-13)NHz] (Fig. 5, top). In order to confirm the lack of a-N-acetylation, chymotryptic peptides were analyzed. After digestion of [3H]-Tyr-labeled ACTH(1-13)NHz-sized material with chymotrypsin, the radiolabeled chymotryptic peptide comigrated with Ser-Tyr, the product expected from ACTH(1-13)NHz, not with a-N-acetyl-Ser- Fig. 3; during the third 1-h collection from the medium, 10 nM phorbol myristate acetate was included to stimulate secretion. The medium was analyzed as in Fig. 3, using the N -ACTH antiserum with samples of wild-type (top) and PC2 (bottom) medium. Basal secretion data from the second hour of chase are the same as in Fig. 2, and the stimulated (stirn) samples are from the third hour. Similar results were seen with the slab gel technique in two other experiments examining stimulated secretion and in all three experiments using the @endorphin and yMSH antisera. Fig. 2, and the aMSH-sized material was prepared by immunoprecipitation and gel filtration. The intact labeled material was analyzed by RP-HPLC with 20 pg each of synthetic aMSH and ACTH(1-13)NH2 added; the marker peptides were detected by absorbance at 220 nm. The chymotryptic digests of the same material were analyzed with 50 pg each of synthetic Ser-Tyr and a-N-acetyl-Ser-Tyr (Ac-Ser-Tyr) added; marker peptides were detected by absorbance at 280 nm. The analyses of intact ACTH(1-13)NHz were confirmed with an additional [3H]Tyr-labeled sample and with a [35S]Met-labeled Sample.
To investigate the forms of &endorphin produced in cells expressing PC2, the 0-endorphin-sized material produced by cells incubated with [3H]Tyr was isolated and then analyzed by cation exchange HPLC with synthetic marker peptides included (23). As expected from our previous work (35), wildtype cells were found to produce only material comigrating with intact, unmodified 0-endorphin(1-31) (Fig. 6, top). The @-endorphin-sized material produced by PC2 cells was fractionated into peaks comigrating with @-endorphin(1-31) and 6-endorphin ; no peak of radioactivity comigrated with acetylated @-endorphin or with acetylated 0-endorphinfl-31). The two 13H]Tyr-labeled peaks from the PC2 cells in Fig. 6 (top) were collected and digested with chymotrypsin. Both peaks of 0-endorphin-sized material yielded a [3H]Tyr-labeled peptide that comigrated with Tyr-Gly-Gly-Phe, the chymotryptic product from nonacetylated forms of @-endorphin, not with a-N-acetyl-Tyr, the chymotryptic product from acetylated forms of @-endorphin (35). Wild-type and PC2 cells were incubated with l3H]Tyr as in Fig. 2, and the p-endorphin-sized material was prepared by immunoprecipitation and gel filtration. The intact labeled material was analyzed by cation exchange chromatography with 15 pg of each of the indicated marker peptides added; similar results were obtained with peptides labeled with [%]Met. The material from the PC2 cells comigrating with p-endorphin(1-27) and p-endorphin(1-31) was dried, desalted with a Sep-Pak, digested with chymotrypsin, and analyzed with 50 pg of the indicated markers added. Marker peptides were detected by absorbance at 280 nm. expression of PC2 leads to several changes in the biosynthesis and secretion of smaller products from POMC, and the steady-state labeling pattern suggests that the effects of PC2 are restricted to a subset of POMC-derived peptides, with no major changes in the steady-state intracellular levels of POMC, ABI, and joining peptide. To investigate the kinetics of biosynthetic processing, wild-type and PC2 cells were labeled with [3H]Tyr for 30 min and then further incubated in nonradioactive medium (chased) for various periods of time; a n example of such an experiment is shown in Fig. 7. At the end of the pulse period and throughout the chase, the amounts of POMC precursor and ACTH biosynthetic intermediate were unaltered by the expression of PC2. Although glycosylated and nonglycosylated ACTH(1-39) accumulated in the wild-type cells, they did not accumulate in the PC2 cells. Instead, the PC2 cells accumulated a peak of ACTH(1-13)NH2. The rate of conversion of @LPH to @-endorphin was also increased substantially by the expression of PC2. As noted above, the expression of PC2 was correlated with increased basal secretion of both ACTH(1-39) and of ACTH(1-13)NHz, leading to higher levels of these peptides in the medium of PC2 cells compared to wild-type cells.
When the total radioactivity in each peak was calculated for each of the pulse-chase experiments and plotted as a function of the time of chase, the data clearly showed that the disappearance of POMC and the transient appearance and then disappearance of AB1 were unaltered by the expression of PC2 (Fig. 8). The disappearance of POMC followed the same time course when determined using the endorphin antiserum; POMC disappearance followed a single exponential curve with a half-life of 30 min, as found previously (41). ACTH-related and 0-endorphin-related product peptides from the POMC precursor (42) was not altered by the expression of PC2; only the identity of the peptides and the rate of secretion were changed by the expression of PC2. Thus the expression of PC2 had no effect on cleavages 1, 2, and 3 (Fig. l), but greatly enhanced cleavages 4 and 5 .
Previous studies demonstrated that melanotropes carry out cleavages 6 (to form Lyso-ysMSH) and 7 (to form 0-endorphin(1-27)) much later in the biosynthetic pathway than the other cleavages (2,22,25,42). In order to determine whether PC2 cells exhibited a similar time course as melanotropes, wild-type and PC2 cells were labeled for 30 min and chased for 2 h in nonradioactive medium as in Fig. 7. Although a significant amount of ACTH(1-13)NH2 was present at this time, no clear peak of -y3MSH or @-endorphin(1-27) was found in the PC2 cells after 2 h of chase (not shown). Therefore, longer labeling and chase times were used; when wild-type and PC2 cells were labeled for 2 h in [3H]Tyr (Fig. 9, top), again no clear peak of y3MSH was seen in the PC2 cells. After 2 h of chase (Le. 4 h from the start of labeling), a peak at the position of -y3MSH was seen (Fig. 9, middle) and this peak continued to increase at longer chase times (Fig. 9, bottom). Similar kinetics were seen for the appearance of @endorphin(1-27) (not shown). Thus the new cleavages due to the expression of PC2 (steps 6 and 7 in Fig. 1) only became apparent after 2-4 h from the start of labeling.
DISCUSSION
In order to investigate the role of the candidate endoprotease PC2 in propeptide biosynthetic processing, AtT-20 mouse pituitary corticotrope cells were stably transfected with a vector encoding rat PC2. Similar studies have defined the specificity of the endoprotease furin (43)(44)(45)(46)(47). Expression of PC2 led to the appearance of two endoproteolytic cleavages observed in intermediate pituitary melanotropes but not in anterior pituitary corticotropes (summarized in Fig. 10): cleavage 7 occurs at the Lys-Lys near the COOH-terminal end of @-endorphin( 1-31) and produces P-endorphin( 1-27); cleavage 6 occurs within the Arg-Lys preceding yMSH to produce Lyso-r3MSH. These new peptides were only produced 2 or more h after the initial synthesis of POMC, as also found in the intermediate pituitary. The expression of PC2 also greatly accelerated the cleavage (step 4) of 8-lipotropin to create &endorphin, and the cleavage (step 5) of ACTH to create ACTH(1-13)NH2; cleavages 4 and 5 were detected within 1 h of the synthesis of POMC. The expression of PC2 did not, however, have any effect on the rate of initial cleavage of intact POMC, on the cleavage at the NH,-terminal of ACTH, or the cleavage at the NH2-terminal of joining peptide (cleavages 1 to 3); occurrence of these three early cleavages was blocked by the expression of antisense RNA to PC1 (13).
The secretion of all of the smaller peptides whose synthesis was dependent on PC2 was stimulated substantially by the addition of secretagogue. In addition, cells expressing PC2 showed a significantly elevated basal rate of' secretion of all the product peptides; the reasons for this altered rate of basal secretion are not known.
It is important to note that cleavages 4 and 5 occur rapidly after cleavages 1, 2, and 3 in the intermediate pituitary, which has high endogenous levels of PC2 mRNA. It may be that the low level of cleavages 4 and 5 in wild-type cells and in some anterior pituitary corticotropes is due to a low level of PC2 expression (48,49); suppression of the low level of endogenous PC2 expression using antisense RNA in cultured cells may be a useful method to address this question, as antisense RNAs to peptidylglycine tu-amidating monooxygenase and PC1 have proven useful in previous work (13,50).
T h e function of' PC1 and PC2 has been studied by transiently infecting cells using the vaccinia virus system (27,28).
Using this system, PC1 and PC2 were shown to perform some of the cleavages normally seen in POMC processing, such as cleavage 2 (Fig. 1). There is agreement from the vaccinia expression data (27,28) and our antisense RNA data (13) that cleavages 1 and 2 can be mediated by PC1. While previous studies agreed that cleavages 2, 3, and 5 could be performed by PC2, they disagreed on whether cleavages 1 and 4 could be performed by PC2 (27,28). Unlike our results, data obtained using the vaccinia virus-infected cells indicated neither PC1 nor PC2 mediated the RK (Arg-Lys) cleavage to produce Lyso-yiMSH (cleavage 6 ) nor the KK (Lys-Lys) cleavage to produce 0-endorphin(1-27) (cleavage 7); cleavages at these sites occur in intermediate pituitary and brain (2) (Fig. 1). One of the major advantages of the vaccinia virus system is that much of the normal cellular protein synthesis is shut down by the vaccinia infection (27)(28)(29)(30), and for many purposes the high level of production of protein achieved with the vaccinia expression system is very useful. However, for studies of cellular function, the suppression of cellular protein syn-thesis raises the possibility that key cellular functions such as the creation of new, fully functional secretory granules will be disrupted by the vaccinia infection.
Another difference between our data and the data from the vaccinia virus-infected cells (7,28) is that there was no a-Nacetylation of ACTH(1-13)NHz in the stably transfected AtT-20 cells, while the vaccinia virus-infected cells produced acetylated ACTH(1-13)NH2. Consistent with the lack of acetylation of ACTH in this work was the concomitant lack of acetylation of p-endorphin; in the pituitary and in the brain, the a-N-acetylation of ACTH(1-13)NH2 and @-endorphin always go together, and current data argue the same enzyme may acetylate both peptides (2,24,51). ACTH(1-13)NHZ is a n excellent substrate for most of the acetyltransferases in cells (51,52), and any access of the cytosolic or nuclear acetyltransferases to ACTH(1-13)NH2 in the infected cells could lead to the production of acetylated ACTH(l-l3)NH*.
Finally, cleavages 6 and 7 only occur in cells expressing high levels of PC2 and then only occur several hours after the initial synthesis of POMC. It is important to note that cleavage 6 produced the same product found in the rat intermediate pituitary (Lyso-rsMSH) (25). The small amount of cleavage of transfected mutant proneuropeptide Y with an Arg-Lys pair at the cleavage site (similar to cleavage site 6 in POMC) also resulted in the COOH-terminal peptide product retaining the NH2-terminal Lys residue (16, 53,54). It is interesting that cleavages 6 and 7 are similarly slow in the intermediate pituitary (25,35); for example, Fig. 6 shows that about 25% of the newly made 0-endorphin has been cleaved at the Lys-Lys bond after 8 h of incubation, which is slightly faster than the time course of P-endorphin COOH-terminal shortening in the intermediate pituitary (35). Interestingly, cleavages 6 and 7 were not detected when PC2 expression was achieved using the vaccinia system; perhaps the low pH and high Ca2+ levels required for optimal PC2 enzyme activity (55, 56) cannot be achieved in the secretory granules of vaccinia virusinfected pituitary cells, and certainly these conditions would not be expected to be met in non-neuroendocrine cells. It is well established that the later cleavages in the biosynthetic pathway occur in secretory granules (57).
These examinations of AtT-20 cells expressing PC2 or antisense RNA to PC1 suggest that the levels of PC1 and PC2 expression do explain the endoproteolytic cleavages characteristic of corticotropes and melanotropes. Identification of the N-acetyltransferase will be required to understand fully the biosynthesis of melanotrope-specific peptides.
Importantly, PC1 and PC2 were found to function sequentially in the biosynthetic pathway. Both PC1 and PC2 appear to be soluble enzymes without a transmembrane domain, raising the interesting question of whether the ordered cleavage reactions reflect the subcellular localization of the enzymes, the structural properties of POMC and the metabolic intermediates, or the changing properties of secretory granules during maturation. | 5,938.8 | 1993-01-25T00:00:00.000 | [
"Biology"
] |
Effect of the calcination process on the magnetotransport properties in co-precipitation derived La 0.67 Ca 0.33 MnO 3 ceramics
: La 0.67 Ca 0.33 MnO 3 (LCMO) attracts considerable attention as a quintessential example for colossal magnetoresistance ( CMR ), metal-insulator transition and related temperature coefficient of resistance ( TCR ) studies. Here, co-precipitation method was utilized to prepare the LCMO ceramics, whose magnetotransport properties as a function of calcination temperature ( T cal ) and calcination time (t cal ) were investigated. The magnetotransport properties of these LCMO ceramics were significantly enhanced compared with LCMO derived by sol-gel methods. The TCR of LCMO increased firstly and then decreased as the T cal increased, whereas the metal-insulator transition temperature (T MIT ) shifted towards to the lower temperature. Magnetoresistance ( MR ) increased as T cal rose and reached 82.4 % at T cal = 800 ℃. The mechanism of such magnetotransport properties with different temperature ranges was discussed. The optimal TCR of 32.3%·K -1 in LCMO was prepared with T cal = 500 ℃ and t cal = 8 h, showing that co-precipitation method would facilitate the potential application of LCMO in infrared detecting and magnetoresistive switching.
Introduction
The past decades have witnessed the intensive study of manganite R1-xBxMnO3 (R 2 is La, Sr, Nd etc., and B is divalent alkaline earth element) due to their rich electronic properties, which arise from interplay among lattice, spin, charge, and orbital [1] .
Among them, La1-xCaxMnO3 is a typical mixed-valence compound obtained by substituting La for Ca from the parent compound, antiferromagnetic (AFM) insulator LaMnO3. LCMO has been paid extensive attention, owing to the colossal magnetoresistive (CMR) effect and the potential applications for magnetic sensors, memory devices, etc. [2][3][4][5]. Furthermore, there exists a paramagnetism-ferromagnetism transition generally accompanied by a metal-insulator transition (MIT) in LCMO [6], which is well accepted as a result of competition between double exchange of Mn 3+ /Mn 4+ and Jahn-Teller effect [7,8]. A sharp MIT with a low resistivity can generate a large TCR, which makes the CMR manganites of great application potential in infrared detecting and bolometer, etc. [9,10], which motivated this study.
TCR is usually related to the grain size, crystallinity and homogeneity of the ceramic powders. These electrical and magnetotransport properties rely drastically on the preparation methods as well as preparation conditions. Many techniques can be used to prepare LCMO powders, such as the solid phase reaction [11], spray-drying [12], sol-gel method [13], co-precipitation method [14]. As for conventional solid phase reaction, a relative high calcination temperature, long calcination time and multiple grinding and sintering processes are necessary to obtain high-quality LCMO ceramics.
A solution of particular concentration and a co-current flow atomization system are required of spray-drying, which increased the preparation complexity and cost. Sol-gel method also has several disadvantages, the raw materials for the sol-gel method are expensive, the operation is complicated, the process time is long and the prepared samples are prone to crack. Notice that, the chemical co-precipitation method is simple in operation and mild in condition, which is utilized in this study to prepare LCMO ceramics.
Previous studies reported the electrical properties and TCR of La0.67Ca0.33MnO3, with 3 maximum TCR of 9.7%·K -1 and corresponding metal-insulator transition temperature (TMIT) of 267.59 K [21,22]. Therefore, it is of importance to further explore different calcination process to enhance MR and TCR in co-precipitation derived LCMO ceramics.
In present work, calcination process effect on the electrical transport and magnetotransport properties of LCMO ceramics. Tcal of 400, 500, 650 and 800 ℃ were used since the synthesis of the LCMO phase started at about 500°C and completed at about 800 °C [21]. The maximum TCR reached to 32.3 %·K -1 at Tcal = 500 ℃ and maximum MR is 82.8 % at Tcal = 800 ℃, which are largely enhanced compared with the sample derived by sol-gel method [23]. In the region of T < TMIT, the ρ-T could be demonstrated by the electron-electron scattering. In the region of T >TMIT, conductive mechanism was explained through the small polaron hopping (SPH) model. In addition, the resistivity (ρ) can be fitted by phenomenological percolation model over the entire temperature scope. Our study provides a new synthesis method for the application of LCMO materials in infrared detectors, contactless magnetoresistive switches, and memories to some extent.
Surface morphologies of LCMO samples
The surface morphologies of the LCMO samples are presented in Fig. 2(a-d). A large number of holes were found at the grains and grain boundaries (GBs) at Tcal = 400 5 ºC. This may be because the decomposition temperature of Ca(OH)2 is in the range of 500-650 °C. The excessively low Tcal cannot completely decompose the Ca(OH)2 in the precursor powder, and CO2 will be generated during the calcination process, causing the pores and defects at the grains and GBs. For Tcal above 500 °C, fewer defects were observed, while the densities of all samples increased. To explore the effects of grain size on different Tcal, the mean grain sizes of all samples were calculated with Nona Measurer 1.2 Software. The average grain sizes in Fig. 2(a-e) were 13.16, 14.89, 15.05, and 25 μm, respectively. This indicates that increased Tcal would lead to the growth of polycrystalline ceramic grains. This is mainly because with the increase of the Tcal, the sample activity will increase, resulting in higher crystallinity. However, when Tcal was 800 °C, the grain size of the powder can reach to 25 μm and the main crystal phase is formed, reducing the sintering activity and affecting the process of secondary recrystallization. In summary, the samples have good sintering activity and few defects at a Tcal of 500 °C.
Electrical transport properties
The temperature dependence of the resistivity (ρ-T) of LCMO was measured in the 150-300 K under 0 and 1 T (Fig. 3(a-d)). The ρ-T curves indicated that all samples underwent a metal to insulator transition [24,25]. In the low Tcal region of 400-650 °C, the TMIT and resistivity of LCMO ceramics showed a slight change. When Tcal was lifted from 650 to 800 °C, the ceramics simultaneously showed a sudden drop in TMIT and a drastic increase in ρ. The reduced grain size will enlarge the surface energy. For a Tcal of 400-650 °C, the grain size of the LCMO powder is small (13.16, 14.89, and 15.05 μm, respectively), and the surface energy is larger than the sample of Tcal of 800 °C. As a consequence, the ceramics with-size grains dense and high density was obtained after sintering. The scattering effect of the GBs on the electrons was weakened inducing a low resistivity. Combined with SEM image analysis, the high Tcal of 800 °C is believed to be detrimental to the TCR. 6 The resistivity for all samples substantially decreased under 1T field, Helman and Abeles 's model [26] can be used to explain. This model simulated that the resistivity should flat out gradually with the increase of the magnetic field and eventually vanish at certain critical field. The ρ-T curves showed that the resistivity decreased and the TMIT shifted towards the higher temperature. it will cause delocalization of charge carriers at 1T field, which may lead to the ordering of magnetic spins and decrease the resistivity.
The TCR of the samples prepared at different Tcal are shown in Fig. 3(e). The main electrical performance comparison of LCMO samples under different Tcal are presented in Table 2. Fig. 3(e) and Table 2 show that the peak TCR increased first and then decreased with the Tcal increased. When Tcal = 500 °C, the TCR reached 32.3%·K -1 , and TMIT shifted toward the low-temperature region, which show the advantages over the provious studies [27,28]. The reason for this phenomenon is that the lower Tcal (400 -500 °C) causes smaller-size precursor particles, produces denser ceramics, which involves fewer GBs and fewer pores, especially the latter; and boosts the transport properties. On the other hand, looser ceramics derived from higher Tcal (650-800 °C) can result into poorer conductivity, which is responsible for the low TCR observed in Fig. 3(e). However, for Tcal of 400 °C, the organics were not completely evaporated, which suppressed the TCR. In summary, optimal TCR can be found in the LCMO with Tcal of 500 °C. at Tcal = 800 °C, which is higher than the MR of LCMO ceramics prepared by the solgel method [29,30].
3.4. Mechanism of the transport properties 7 There are three temperature regimes considered in this study to understand the conduction mechanism of the ρ-T of LCMO ceramics prepared by co-precipitation at different temperatures [31,32].
Low-temperature range
Eq. (1) is a famous empirical equation that has provided a fitting method for ρ-T in the low-temperature region (T < TMIT): where ρ0 is the residual resistivity stemming from the GBs, as well as from the scattering mechanisms, which independent of temperature. The The ρ-T fitted by Eq. (1) are shown in Fig. 4(a) and Table 3. The squared linear correlation coefficients (R 2 ) are listed. The obtained values of R 2 were as high as 99.0% for all samples, which represents the high fitting quality. With application of 1T magnetic field, it can be seen from Table 3 that both the fitting values of ρ0 and ρ2 for all ceramics decreased. Additionally, the GB scattering decreases, which would be resulted in decreases of resistivity. The value of the 2 2 is greater than that of the 4.5 4.5 for all ceramics. Consequently, the scattering of electron-electron plays a major role in the conductivity of samples in the T < TMIT regime.
High-temperature range
The ρ-T of paramagnetic (PM) high-temperature region is fitted by SPH model. The SPH model could be approximated by an adiabatic Eq. (2).
where is the resistivity coefficient, is the polaron activation energy, and is the Boltzmann constant. 8 The ρ-T fitting results are presented in Fig. 4(b) and Table 4. The results revealed that the SPH model fits the ρ-T well. The fitting parameters are shown in Table 4. After 1T magnetic field was applied, the values of all ceramics are decreased, which would due to the addition of an external magnetic field changing the bond angle of Mn-O-Mn. In other words, the effective carrier mass is changed. As a result of this influence, the effective band gap is changed, and the carrier needs higher values of activation energy to cross this gap.
Entire temperature range
The phenomenological percolation model analyzed the ρ over the whole temperature scope, and the combined formula is as follows in Eq. (3).
where is the volume fraction of the FM phase and (1 − ) is the volume fraction of the PM phase. LCMO ceramics has been fitted over the whole temperature scope with Eq.
(3) and the fitting parameters are demonstrated in Fig. 4(c) and Table 5. Tc-mod stands for temperature at maximum resistivity. U ≈ 0 (1 − − ) ⁄ is the energy difference between the FM and PM states. There are two adjustable fitting parameters, namely, Tc-mod and 0 . The fitting parameters Tc-mod and U0/KB are shown in Table 1. Obviously, the value of the Tc-mod is very closed to TMIT. (that is, the temperature is lowered just after the sample is heated to 500 ºC), the resistivity is as high as 1.0412 Ω·cm, which is much higher than other study [33]. This may be that Ca(OH)2 etc. are not completely decomposed, resulting in a large number of pores in the target, uneven composition, and low density, which largely improves resistivity. The resistivity of the samples reached a maximum value of 271.3 K at 8 h.
Effect of tcal
From Fig. 5(b), the TCR first increased and then decreased. It reached a maximum value 9 of 32.3%·K -1 at tcal = 8 h. In general, the Tcal of 500 ºC and the tcal of 8 h are found to be the optimal sintering parameters for the co-precipitation derived LCMO ceramic.
Conclusion
A series of LCMO ceramics were prepared by co-precipitation method. The effect of calcination process on structure and magnetoelectric transport performance of LCMO ceramics was studied, and the conduction mechanism of the ρ-T was analyzed. | 2,835 | 2021-04-03T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Electrostatic Separation of Carbon Dioxide by Ionization in Bifurcation Flow
Carbon dioxide is one of the major green house gases as well as impurities in process gases used for various manufacturing industries. In the present work, our recently developed ionization separator (Ito et al., Ind. & Eng. Chem. Res., 42, 5617-5621, 2003) was applied to the separation of carbon dioxide from inert gases. As a result, it was found that carbon dioxide can be separated mostly in the form of anion although some fraction of carbon dioxide decomposes by the soft X-ray irradiation. The maximum efficiency of electrostatic separation of carbon dioxide was 14% when helium stream contains 2.4 ppm of carbon dioxide at the applied voltage of 600V and the separation efficiency was decreased with increase in the inlet concentration. The dependency of separation efficiency on the applied voltage was qualitatively explained by the separation model that accounted for the electrical migration, the generation and the neutralization of anions and cations formed from carbon dioxide.
Introduction
Reduction of carbon dioxide (CO 2 ) concentration in air and other gases has been of great concern in various fields; e.g., control of indoor air quality by ventilation (Nabinger et al., 1994, Persily, 1997), green house effect (Hansen et al., 1981), contamination control of process gases for semiconductor manufacturing (Briesacher et al., 1991).
Adsorption of CO 2 with molecular-sieves and other adsorbents has been a common method for reducing CO 2 concentration.However, it requires the replacement or regeneration of adsorbents for maintaining the adsorption performance and the high pressure drop across the packed bed is a great issue when considering the energy consumption for gas purification.Metallic getter alloys are also used for gas purification (Briesacher et al., 1991).Although this method removes trace impurities Recently, behaviors of ions, radicals and molecular species have been studied for the purpose of gas cleaning and purification from gas stream by corona discharge (Ohkubo et al., 1994), surface discharge (Oda et al., 1997) and electron beam injection (Hirota et al., 1995), etc. However these methods using high discharge energy are not effective for lowering the concentration of CO 2 .
Ionization is the process where electrically neutral atoms or molecules acquire either positive or negative electrical charge.In aerosol researches, ions are most frequently used to charge aerosol particles for electrical mobility classification (Knutson and Whitby, 1975).Gaseous molecules in the atmosphere are ionized by irradiation with radioactive source, electrical discharge, and combustion, etc. Radioactive-ray irradiation or corona discharge initially generates primary positive ions and free electrons.The free electrons readily attach to electronegative species in air to form negative ions.
Since the primary positive and negative ions are unstable, secondary ionization due to ion-molecule reaction occurs by the collisions between ions and neutral species.The ion-molecule reaction is influenced by the external electric fields.
We have proposed a new ionization separation technique which utilizes selective ionization of contaminant species and electrostatic migration of ions in bifurcating flow.The contaminant species removed by this method are toluene (Ito et al., 2002a, Ito et al., 2003) and ethanol vapor (Ito et al., 2002b).By this method, the contaminant species that have lower ionization potential and higher proton affinity compared to carrier gas becomes positive ions by the irradiation with α-ray or soft X-ray, and are electrostatically separated from the carrier gas.Negatively charged impurities generated by electron attachment can also be separated from carrier gas by applying an electrical field (Tammon et al., 1995).However, most of the molecular species in the atmosphere are ionized into both polarities.For instance, CO 2 can take both forms of cation and anion, such as (CO 2 ) x + , (CO 2 ) x CO + , (CO 2 ) x O + , (CO 2 ) x C + , (CO 2 )O 2 + , (CO 2 ) x -, (CO 2 ) x O -(x=1,2,3,…) according to Alger and Rees (1976).Nevertheless, since the generation rate and concentration of each ion species formed from CO 2 are different (Stamatovic et al., 1985), it might be possible to separate CO 2 in a form of cation or anion by ionization followed by electrostatic separation.
In this paper, our ionization separation technique was applied to CO 2 .We studied the separation performance of CO 2 in various carrier gases and the influence of CO 2 concentration on the separation efficiency, and discussed the separation mechanisms via the ion-molecule reaction kinetics of CO 2 in bipolar ionization field.
Experimental apparatus and procedure
Fig. 1 shows the ionization separator, whose structure is basically the same as used previously (Ito et al., 2002a), except that the improvement was made to avoid contamination from structural materials by exchanging metal (SUS316) and quartz glass for plastics.Flow containing CO 2 is split into two ones, while the flow is being irradiated with soft X-ray from photoionizer (Model L6941, Hamamatsu Photonics, Japan; energy: 3.0-9.5 keV) under an electric field.The soft X-ray emitter has a dose of 4.7 × 10 -6 Gy/s at a distance of 1 m away from the source.The production rate of primary ions in the ionization separator is estimated to be in the order of 10 16 m 3 /s when the generation of an ion pair requires the energy of 34 eV in air (Ito et al., 2003).In the separator, CO 2 molecules are positively and negatively ionized as a result of ion-molecule reactions, and migrate towards either cathode or anode by the applied electric field.The ionization potential, proton affinity and electron affinity of CO 2 compared to other gases are listed in Table 1.
Fig. 2 shows the experimental setup.High purity helium (>99.9999%) and nitrogen (>99.9999%) are used as a carrier gas.The carrier gas is mixed with standard gas of nitrogen containing CO 2 (0.103%) to obtain a given concentration, and introduced into the ionization separator at the flow rate of 1.0 L/min.Inlet flow is equally divided into two outlet ones.The outlet concentrations of CO 2 are measured by sampling the gas by auto-sampler (GL Sciences Inc.: GSS-5000AH) followed by the determination with a gas chromatograph with FID (Shimadzu, GC-17A), converting CO 2 into methane by a methanizer (GL Sciences Inc.: MT221).
Experimental Results
The decomposition of CO 2 may occur by the irradiation with soft X-ray in the ionization-separator. Hatherly and Codling (1995) reported the dissociative ionization of CO 2 by soft X-ray irradiation In order to evaluate the separation performance of ionization-separator for CO 2 taking into account the decomposition during the separation, we defined the relative CO 2 concentration from the separator as C * = C/C V=0 , where C is the outlet concentration with soft X-ray irradiation and voltage application, and C V=0 is the outlet concentration with the irradiation without the application.Fig. 4 shows the change in CO 2 concentration at the grounded outlet of ionization-separator with the voltage application under the irradiation of soft X-ray.Helium flow containing 2.4 ppm CO 2 was introduced into the separator at the flow rate of 1.0 L/min.CO 2 concentration was measured at the outlet of grounded electrode while varying the voltage between +600V and -600V as shown in the figure .As seen in the figure, CO 2 concentration is slightly higher than the inlet one of 2.4 ppm when the ground electrode is anode.It is considerably lower than when it is cathode, and follows the change in applied voltage, indicating that the separation of CO 2 does occur even though the dissociation of CO 2 takes place.Furthermore, we can see from it that the migration of CO 2 in a form of anion is dominant in the separator whereas toluene vapor was separated in the form of cation (Ito et al., 2002a(Ito et al., , 2003)).
Fig. 5 shows the influence of applied voltage on the relative concentration of CO 2 at the inlet CO 2 concentrations of 2.4 and 4.4 ppm.The relative CO 2 concentration has a minimum and maximum at the voltage of 600V and -600V for both inlet concentrations.The voltage corresponds with that at which the maximum separation of toluene was attained in the previous work (Ito et al., 2003).Fig. 6 shows the relative CO 2 concentration at the anode and cathode as a function of inlet CO 2 concentration at the applied voltage of 600 V. Relative CO 2 concentration at the anode increases as the inlet one decreases, while that at the cathode decreases.The dependency of relative concentration on the inlet CO 2 one is similarly reported for toluene in our previous work (Ito et al., 2003).However, the dependency on the inlet concentration is less pronounced compared to toluene.
The influence of carrier gas on relative CO 2 concentration was studied by using helium and nitrogen.Fig. 7 compares the relative CO 2 concentrations helium and nitrogen at the inlet CO 2 one of 2.4 ppm.Relative CO 2 concentration in helium is higher than that in nitrogen because ion-molecule reaction of CO 2 was probably enhanced due to higher mobility of ions in helium than that in nitrogen.
The mobility of ions in helium is about three times the mobility in nitrogen (Bricard and Pradel, 1966).The ion currents in the ionization-separator for both carrier gases were measured with an electrometer, and are shown in Fig. 8 as a function of applied voltage.The ion current in nitrogen steeply increases with the voltage and reaches constant value at a voltage of 600V.On the contrary, the ion current in helium slightly changes with the applied voltage, and is only one-third of the current in nitrogen.Because helium has a smaller proton affinity and a higher ionization potential compared to nitrogen, helium molecules are difficult to become cations compared to nitrogen ones.
The concentration of generated ions, N, was calculated by I = e κ N 2 AL (Liu and Pui, 1974), where e is the elementary unit of charge, κ is the recombination coefficient of ions (1.5×10 -12 m 3 /s in helium, 2.0 × 10 -12 m 3 /s in nitrogen, Glushchenko et al., 1988), AL is the effective volume in separator.The concentrations of ions calculated from the saturation current are 3.13 × 10 13 molecules/m 3 in helium and 4.66 × 10 13 molecules/m 3 in nitrogen.Since 1 ppm of CO 2 concentration corresponds to 2.5 × 10 19 molecules/m 3 , the number of generated ions measured by the electrometer is six orders of magnitude lower than that of CO 2 molecules.The following two explanations are plausible to explain the discrepancy between the ion concentration and CO 2 concentration: 1) the ion concentrations determined from the saturation current are not correct because many ionic species and electrons are involved in the transfer of electrical charge; 2) the CO 2 depletion mechanism is more complicated than just ionization-separation.
Discussion
The separation mechanism of ionization separator was discussed in our previous work by using a simple transport model, which employed one-dimensional convective diffusion equations for positively charged species and neutral species.In this paper, we applied the same separation model while including the transport of anions of CO 2 as well as the transport of cations.It was assumed that the net generation rates of CO 2 cations and anion are proportional to the number of neutral CO 2 molecules and that the depletion rates of CO 2 anions and cations are proportional to the numbers of CO 2 cations and anions without the decomposition of CO 2 .The mass balance of CO 2 molecules and CO 2 ions, as shown in Fig. 9, yields the following one-dimensional convective diffusion equations.
where C is the concentration, u is the gas flow velocity, D is the diffusion coefficient, α is the net depletion rate constant of CO 2 ions, β is the net generation rate constant of CO 2 ions, and Z and E are the electrical mobility of CO 2 ions and the electrical field strength, respectively.ZE is equal to the ion drift velocity, ZE = v.
The boundary conditions for the ionization-separator imposed for solving Eqs. 1, 2 and 3 are where L is the distance between the anode and the cathode.The following constraint for the mass balance is also adopted.The rate constants of α and β and electrical mobility of CO 2 ions are varied in the order of 10 2 to 10 3 referring to the previous work (Iinuma, 1991).We assigned the diffusion coefficient of CO 2 in helium at 293 K to be 6.2 × 10 -5 m 2 /s according to Poling et al. (2000).
The dimensionless concentration of CO 2 is calculated by solving Eqs. 1, 2 and 3 to see how the difference of electrical mobility between CO 2 cation and CO 2 anion influences CO 2 separation.Fig. 10 shows the dimensionless CO 2 concentration at the outlet as a function of applied voltage for various electrical mobilities while keeping the other parameters unchanged.No separation occurs when the mobilities of CO 2 anions and cations are the same (Z c = Z a ).However, when the mobility of cations is different from that of anions, which is conceivable, ionic species generated from CO 2 are transported to the cathode and anode at a different velocity, causing the separation of CO 2 with a maximum at a given applied voltage in all cases.When the electrical mobility of CO 2 cations is larger than that of anions (Z c > Z a ), CO 2 is concentrated at the cathode, whereas CO 2 concentration is higher at anode when the inequality is reversed (Z c < Z a ).Therefore, when the species to be separated are ionized in both polarities, the separation occurs by the difference between electrical mobility of cations and anions originated from the species.The extent of separation is more pronounced when the difference is larger.Furthermore, it is known from Fig. 10 that the decreases in cation's and anion's moblities lead to the increase in separation efficiency together with the shift of peak voltage toward higher one.
The calculated curves of dimensionless concentration are symmetric with respect to the origin point of V=0 and C*=1 for all mobilities although it was observed in Fig. 5 that the experimental curves are asymmetric with respect to the point.The asymmetry in the experimental curve of dimensionless concentration may result from the decomposition of CO 2 ions after separation because the decomposition of neutral CO 2 does not lead to the asymmetry in the concentration curves.
We also investigated the influence of net generation rate and net depletion rate constants when the electrical mobilities of cations and anions are the same.The results are shown in Fig. 11.The calculated curves are symmetric with respect to the origin point for various combinations of α and β.
Therefore, the asymmetry observed in the experimental curve as a function of applied voltage may result from the change in α and β with the applied voltage other than the decomposition of CO 2 .The larger net generation rate constant of CO 2 anions (compare curve i with ii), or the smaller net generation rate of cations (compare curve i with iv) causes the higher dimensionless outlet concentration.We also see from Fig. 12 that the increase in net depletion rate constant of CO 2 anions (compare curve ii with iii) or the decrease in net depletion rate constant of cations (compare curve iv with v) leads to the reduction in dimensionless concentration of CO 2 .
What follows from the calculations is that the separation efficiency of CO 2 is a function of the electrical mobilities of anions and cations and the depletion and generation rate constants of CO 2 ions, and that there exists an optimal voltage to separate CO 2 .Furthermore, the asymmetry in the dimensionless outlet concentration plotted as a function of applied voltage results from the changes in net generation and depletion rate constants with the voltage as well as the decomposition of CO 2 .
The experimental separation efficiency is no better than 15% as shown in Figs. 4 to 7. Therefore, further improvement is required in order to use the separator as a practical tool for CO 2 contamination control.The CO 2 separation efficiency can be improved by increasing CO 2 anion generation rate as well as avoiding the ion depletion.This might be achieved by using more intensive ionization source and improving the structure of separator so as to make the flow in it well-defined bifurcating one without stagnant regions in the separator.
Conclusions
The following findings are obtained through the application of ionization separator to CO 2 separation.i) CO 2 molecules in a helium and nitrogen can be separated mostly in the form of anion with the ionization separator although some fraction of CO 2 is decomposed by the soft X-ray irradiation.
ii) The CO 2 concentration relative to that without applied voltage increases with decreasing the inlet concentration as found in the separation of toluene of our previous works.
iii) Maximum relative concentration of CO 2 is 1.14 in helium carrier gas containing 2.4 ppm CO 2 at an applied voltage of 600V.
iv) The dependency of relative concentration on the applied voltage is qualitatively explained by the separation model that accounts for the transports of CO 2 cations and anions as well as neutral molecules together with the generation and depletion of these species.
Figure 1 .
Figure 1.Schematic of ionization-separator with Soft X-ray generator and wire mesh electrodes.
(Figure 3 .
Figure 3. Decomposition of CO 2 by soft X-ray irradiation in the ionization-separator. (a) Change in the CO 2 concentration with soft X-ray irradiation.(b) Outlet concentration with soft X-ray irradiation vs. outlet concentration without soft X-ray irradiation.
Figure 4 .
Figure 4. Change in the CO 2 concentration with applied voltage of +600V and -600V.
Figure 5 .
Figure 5. Change in the relative concentration of CO 2 as a function of applied voltage.
Figure 6 .
Figure 6.Influence of inlet CO 2 concentration on relative one with appliedvoltage of 600V.
Figure 7 .
Figure 7. Influence of carrier gas on relative CO 2 concentration.
Figure 8 .
Figure 8. Change in the ion current in ionization-separator with applied voltage.
Figure 9 .
Figure 9. Change in the ion current in ionization-separator with applied voltage.
Figure 10 .
Figure 10.Change in the calculated dimensionless concentration of CO 2 as a function of applied voltage at various electrical mobility of CO 2 ion.β a = β c = α a = α c = 500 s -1 .
Figure 11 .
Figure 11.Influence of net generation and depletion rate constant on calculated concentration.Z a = Z c = 1.4 m 2 V -1 s -1 .
Table 1 .
Ionization potential, proton affinity and electron affinity of various gases | 4,349.8 | 2004-06-01T00:00:00.000 | [
"Engineering",
"Chemistry",
"Environmental Science"
] |
Power-Efficient Computing : Experiences from the COSA Project
1CNAF-Italian Institute for Nuclear Physics (INFN), Bologna, Italy 2University of Ferrara and Italian Institute for Nuclear Physics (INFN), Ferrara, Italy 3Italian Institute for Nuclear Physics (INFN), Padova, Italy 4University of Parma and Italian Institute for Nuclear Physics (INFN), Parma, Italy 5Italian Institute for Nuclear Physics (INFN), Pisa, Italy 6Italian Institute for Nuclear Physics (INFN), Roma, Italy
Introduction and Related Works
Energy consumption has increasingly become one of the most relevant issues for scaling up the performances of modern HPC systems and applications, a trend which is expected to continue in the foreseeable future.This implies that costs related to the running of applications are more and more dominated by the electricity bill, and for this reason the adoption of energy-efficient processors is necessary, ranging from many-core processors architectures like Graphics Processing Unit (GPU) to System on Chip (SoC) designed to meet the demands of the mobile and embedded market.SoC hardware platforms typically embed in the same die lowpower multicore processors combined with a GPU and all the circuitry needed for several I/O devices.These processors feature high performance-per-watt ratio aiming at high energyefficiency but at the same time require careful programming and optimization to be also compute-efficient.Moreover, for the case of off-the-shelf SoCs, various limitations may arise: 32-bit architectures, small CPU caches, small RAM sizes, high latency interconnections, unavailability of ECC memory, and so forth.
Investigating and assessing the performance of these systems for scientific workloads are the aim of the Computing On SOC Architecture (COSA) project [1], a 3-year initiative that is funded by the Italian Institute for Nuclear Physics (INFN) and started in January 2015.Seven INFN departments have been involved in the COSA project, namely, CNAF (Bologna), 2 Scientific Programming Pisa, Padova (Padua), ROMA1 (Rome), Ferrara, Parma, and Legnaro National Laboratories (LNL).
Processors based on ARM architecture have recently attracted a strong interest from several research communities as energy-efficient building blocks for HPC clusters [2], microservers, and other computing systems.They are widely adopted in several commercial low-power and batterypowered devices, such as tablets and smartphones, and many SoCs embedding ARM cores are designed to have, as main strength points, low power consumption and high energyefficiency.Several research projects have then investigated towards this direction.Among them, the Mont-Blanc project [3,4], coordinated by the Barcelona Supercomputing Center [5], has deployed several generations of HPC clusters based on ARM processors, developing also the corresponding ecosystem of HPC tools targeted to this architecture.Another project along this direction is the EU-FP7 EUROSERVER [6], coordinated by CEA [7], which aims to design and prototype technology, architecture, and systems software for the next generation of datacenter "microservers," exploiting 64-bit ARM cores.
Many SoCs based on ARM cores embed GPUs in the same die, and for this reason several projects aim also to exploit these GPUs' computing power [8,9].One of such projects, particularly interesting for adopting off-the-shelf boards, is the ICARUS project [10], aiming to build and study the performance of a mobile solar-powered cluster of NVIDIA Jetson boards.
Other research groups are exploring Dynamic Voltage and Frequency Scaling (DVFS) techniques as a way to modulate power consumption of processor and memory, scaling the clock frequency of one or both subsystems according to the execution of memory-or compute-bound application kernels [11].More recently, other projects are focusing also on Near Threshold Voltage (NTV) computing [12], making the processor work at even lower voltages; since this may lead to computation errors, appropriate checks and recomputations have to be added to algorithms in this case.
The COSA project is on the same research line of the above initiatives, aiming to explore the energy-efficiency of several systems, including ARM-based architecture, lowpower SoCs, and also multicore and many-core based processors like Intel Xeon CPUs and NVIDIA Tesla GPUs.In addition to strong hardware agnosticism, the COSA project is characterized by being strongly application-driven, aiming to study the energy consumption behavior of a wide set of benchmarks and software, widely used within the INFN.
For this reason, the COSA project shows affinities also with other initiatives, such as the SpiNNaker (Spiking Neural Network Architecture) project [13] proposed by the Advanced Processor Technologies Research group at the University of Manchester [14], the EU FET-HPC project ExaNeSt [15], the INFN APEnet+ project [16], and the INFN-COKA (Computing on Knights Architecture, [17]) project, of which COSA is a natural extension.
In this work, we explore the performance of energyefficient systems based on multicore GPUs, low-power CPUs, and SoCs.We have ported several scientific workloads and we have investigated the computing and energy performance comparing them with traditional systems mainly based on x86 multicores.We have also evaluated the benefits of manual clock frequency tuning exploiting DVFS (Dynamic Voltage and Frequency Scaling) with respect to the default frequency governors, looking for an optimal tradeoff between energyto-solution and time-to-solution.
This work is organized as follows.In Sections 2 and 3, we describe the clusters built and maintained by the COSA collaboration and the power measurement tools.In Sections 4 and 5, we report on the benchmarking activities and on the real-life applications ported to SoCs and many-core systems.The paper concludes with concluding remarks and a prospect on future work in Section 6.
The COSA Clusters
The COSA project built and currently maintains three computing clusters, located at the departments of CNAF (Bologna), ROMA1 (Rome), and Padova (Padua).A fourth cluster has been installed in Ferrara with the main contribution of the University of Ferrara.The cluster hosted at CNAF is composed of development boards powered by state-of-the-art low-power SoCs with both ARM and Intel architectures.The cluster located in Rome exploits the lastgeneration FPGAs to prototype low-latency network connections between low-power CPUs.Finally, the third and fourth clusters, located in the Padova and Ferrara departments, are based on traditional, high-end CPUs, accelerators, and network connections.These latter clusters are used as a reference for the performance of the scientific applications run on the other machines.Clearly, comparing high-end servers (equipped with multiple sockets, redundant power supplies, fans, disks, and huge RAM amounts) with standalone boards powered by a single SoC is somehow "unfair," but, nevertheless, indications about the limitations and capabilities in terms of energy-efficiency of these low-power systems can be drawn.In the following subsections, we describe these clusters in more detail.
The Low-Power
Cluster Based on SoCs.The CNAF department hosts an unconventional cluster of ARMv7, ARMv8, and x86 low-power SoCs nodes, interconnected through 1 Gbit/s and 10 Gbit/s Ethernet switches.These platforms are used as a testbed for synthetic benchmarks and reallife scientific applications in both single-node fashion and multinode fashion (see Sections 4 and 5).
Ubuntu is installed on all the platforms.A master server is used as a monitoring station and an external network file system hosting all software and datasets is mounted on every cluster node.CPU frequency scaling is used by the Linux operating system to change the CPU frequency for saving power depending on the system load, and the recommended "ondemand" governor is enabled by default.We set governor to "performance" so to avoid dynamic CPU frequency scaling and maximize CPU performance.The GPU frequency has been set to its maximum value (see Table 1).
The ARM cluster is composed of eight NVIDIA Jetson TK1 boards, four NVIDIA Jetson TX1 (64-bit) boards, two ODROID-XU3 boards, a CubieBoard, a SABRE board, and an Arndale Octa board, all interconnected with standard 1 Gbit/s Ethernet.We notice that the Jetson TX1 cluster is connected with 1 Gbit/s Ethernet even though a USB-Ethernet bridge provides the physical connection with the SoC, increasing the latency with respect to standard 1 Gbit/s Ethernet connections.
The 64-bit x86 cluster is composed of four mini-ITX boards powered by the Intel Avoton C2750, four mini-ITX motherboards based on the Intel Xeon D-1540 CPU, and four mini-ITX boards based on the Intel Pentium N3700 processor.The "Avoton," the "Xeon D," and the "TX1" clusters are connected with both 1 Gbit/s and 10 Gbit/s Ethernet connections, and the "N3700" processor only with 1 Gbit/s Ethernet network.The 1 Gbit/s connections are provided by on-board connectors, while the 10 Gbit/s links are obtained with a PCI Host Bus Adapter (HBA).
Table 1 summarizes and compares the relevant features of systems available at CNAF.The Thermal Design Power (TDP) of the SoCs in this cluster, when declared, ranges from 5 W of Intel Pentium N3700 to 45 W of the 8-core Intel Xeon D-1540 Processor.
The High-End
Hardware Clusters.At CNAF department, the traditional reference architecture is a x86 node from an HPC cluster, equipped with two Intel Xeon E5-2620 v2 CPUs, 6 physical cores each, HyperThread enabled (i.e., 24HT cores in the single node), and with a NVIDIA K20 GPU accelerator card with 2880 CUDA cores.The HPC server is rated with a TDP of 160 W for the two CPUs (about 80 W each) and 235 W for the GPU.
In Ferrara, another commodity HPC reference cluster named COKA is available (Computing On Kepler Architecture) [18].The cluster is made of 5 computing nodes, hosting each 2x Intel Xeon E5-2630 v3 CPUs and 8x NVIDA K80 dual-GPU boards, interconnected with Mellanox MT27500 Family [ConnectX-3] InfiniBand HCA (two per node).The TDP of each CPU is 85 W, while for an NVIDIA K80 board it is 300 W, amounting for a total maximum power of 3.2 kW for each computing node or 16 kW for the whole cluster.On this cluster, the power drain can be measured from node's power supplies and read out at ≈1 s granularity thanks to the IPMI (Intelligent Platform Management Interface) protocol.Otherwise, power drain could be monitored also via processor hardware registers, thanks to the PAPI Library [19], for both the Intel CPUs and NVIDIA GPUs, as detailed in Section 3.
The FPGA-Based Cluster.
In the last few years, reconfigurable devices characterized by complex architecture emerged as an effective and powerful alternative to lowpower SoC.Last-generation high-end FPGAs, in addition to huge amount of user-programmable logic, include multiple embedded ARM 64-bit cores (ARM Cortex-A57) running at 1.5 GHz with high-speed interfaces to storage (USB, SATA) and network (10-40 Gb/s Ethernet) standard protocols and tightly coupled to (up) 1 TFLOPS of configurable DSP blocks.It is well known that, beyond the computing performance of the elementary processor, the main issue limiting the scalability of a massive parallel system is the efficiency of the interconnection architecture.
The department of ROMA1 has launched in the past the APEnet+ [16] project aiming to design a low-latency, high-performance 3D Torus network architecture optimized for scientific computing on GPU-accelerated HPC systems.APEnet+ is embedded in FPGA and it implements powerful host interface (PCIe Gen3 x8), a GPU-dedicated low-latency DMA engine, and a number of custom, high-speed serial channels on Torus side.One of the main limiting factors of network performance in the current implementation of APEnet+ architecture (named V5) is the lacking, in target FPGA device (28 nm process FPGA generation), of a highperformance embedded hard-core processor needed to execute highly demanding network supporting computing tasks (e.g., the Virtual-to-Physical address translation as well as DMAs initialization).
In the framework of COSA, the main goals of the ROMA1 department have been the evaluation of the FPGA embedded ARM cores as (1) atomic computing core for a fine-grained parallel HPC system and (2) powerful microprocessor able to sustain the computing tasks required by network support.In this perspective and in full synergy with the activities of EU FET-HPC project ExaNeSt [15], we procured and assembled a small size (6 computing nodes) FPGA-based computing cluster based on the Xilinx Zynq UltraScale+ [20] development kit produced by Trenz Electronic GmbH, equipped with the XCZU9EG-1FFVC900 that integrates a programmable quad-core ARM Cortex-A53 (64-bit) SoC @1.5 GHz.Targeting this platform, we completed the porting of APEnet+ IP V5 and we have deployed the first running release of such, as an FPGA Zynq-based cluster.This cluster sports a standard 1 Gbit/s Ethernet service network and a preliminary working prototype of a low-latency and highperformances Torus network based on multiple 10 Gbit/s point-to-point links.The current development plans foreseen to design the additional features needed (a) to improve the physical link throughput and the switching performance, (b) to optimize network collective operations, (c) to enhance the network IP resiliency capabilities at extreme-size system scale (exaFLOPS), and (d) to support high-radix network topologies (n-D Torus and Dragonfly).The incremental adoption of the new features and optimizations will allow delivering the final optimized FPGA-based cluster at the end of 2017.
Power Monitor Tools
We developed and implemented several fine-grained power monitoring systems that are able to provide power and energy readings for a generic application.
The first of these systems, hereafter papi, is a software wrapper [21] that exploits the PAPI Library [19] to read appropriate hardware registers containing power-related information.This wrapper allows applications to directly start and stop measurements using architecture-specific interfaces, such as the Running Average Power Limit (RAPL) for Intel CPUs and NVIDIA Management Library (NVML) for NVIDIA GPUs.The papi system is particularly useful in highend systems, where processors commonly implement these registers and their usage is well documented.
The second system, hereafter custom, requires dedicated hardware [22] and represents a viable solution whenever appropriate hardware registers are unavailable or show difficulties in their readout.For example, this custom power meter can be used to monitor the power drained by SoC development boards, as shown in Figure 1.The custom setup uses an analog current-to-voltage converter (LTS 25-NP current transducer) and an Arduino UNO board; the latter uses its embedded 10-bit ADC to digitize current readings and store them in memory.The Arduino UNO board is synchronized with the development board hosting the SoC through a simple serial protocol built over a USB connection.The monitor acquires current samples with 1 ms granularity.This setup is able to correlate current measurements with specific application events with an accuracy of a few milliseconds, minimally disrupting the execution of the application to profile.The application, while running, can start and stop the out-of-band measurement, letting the Arduino UNO board store readings in its memory.When needed (e.g., after the end of a function to energy profile), data can be offloaded from the Arduino UNO board to the application itself.
Another power measurement equipment, hereafter referred to as multimeter, consists of a DC power supply, a high-precision Tektronix DMM4050 digital multimeter for DC current measurements connected to National Instruments data logging software, and a high-precision AC power meter.When this monitoring system is used, the AC power of the high-end server node is measured by a Voltech PM300 Power Analyzer upstream of the main server power supply (measuring on the AC cable).Instead, for the SoCs, the DC current was absorbed downstream of the power supply.We believe that such difference does not affect significantly the results, given the close to one cos phi factor of the server power supply.
Synthetic Benchmarks
In order to characterize the single-node performance of the various CPUs available in the COSA CNAF clusters, we performed several types of benchmarks: both synthetic tests developed in house and well-documented test suites publicly available in literature.
Here, results are shown only for the most recent and powerful CPUs of the COSA CNAF clusters, and we stress that any measurement of the total power of the cluster is out of the scope of this work.Also, the synthetic benchmarks showed in this section test only CPUs, not GPUs, and we refer the reader to the following section for real-life workloads executed on the GPUs of the COSA cluster.
From the broad spectrum of tests which are handy in the literature, here we show the results obtained for a test based on the High-Performance Conjugate Gradients (HPCG) Benchmark [23], a renowned new metric for ranking HPC systems, designed to exercise computational, communication, and memory access patterns which are frequently observed in real-life applications, for example, with low compute-todata-access ratios.
In Figures 2(a) and 2(b), results of the HPCG metrics are shown in terms of weighted GFLOP/s ("HPCG score," (a)) and power absorbed during the execution (b), which has been measured using the multimeter system described in Section 3.
As is naturally expected, the absolute performances of a high-end server are much higher.However, the situation reverses itself when considering the power ratio.
This holds true also for MATMUL, a home-made test, written in C and parallelized with OpenMP, which performs single-precision matrix multiplication using SGEMM function from the OpenBLAS library [24].Figures 2(c) and 2(d) show instead the GFLOP/s and absorbed power by MATMUL for a size of the matrices equal to 4096.
We are currently investigating the weird behavior of the Jetson TX1 board when MATMUL is executed in two cores (see Figure 2(d)).
We note that the Jetson TK1 board performs slightly better than the newer Jetson TX1, although 32-bit architecture is a significant limitation and the somewhat higher clock frequency explains its better performances.
In general, Jetson TX1 and N3700 boards, which are already 64-bit platforms, seem to be very promising lowpower architectures.Interestingly, we notice that the performances of the Xeon D-1540 platform resemble, and even overtake, in the case of MATMUL, those of a traditional highend server.The nice performances of the Xeon D-1540 are ascribable to the capacity of the SGEMM function to exploit the FMA instruction, which is missing in the other boards.However, power consumption is higher than the TDP (80 W versus 45 W of TDP), both with and without turbo boost enabled.
We are fully aware that comparing small low-power boards and HPC nodes, usually equipped with high-end GPU boards, several disks, and large RAM, cannot be fully fair; we also know that application spectrum running on HPC nodes is not directly comparable with that running on small development boards.However, we think that this comparison helps to highlight useful information about limits and potentialities of low-power SoCs and processors and gives interesting hints for further investigations in using this class of computing systems for HPC workloads.
The next section illustrates the results obtained with more realistic tests, that is, running on the COSA clusters several applications taken from different realms of science.
Applications
5.1.Lattice Boltzmann.Several fields of computational fluid dynamics are increasingly using Lattice Boltzmann Methods (LBM) to study behavior of flows and to solve numerically the equation of motion of flows in two and three dimensions.These methods can be efficiently implemented on computing systems and are also able to handle complex and irregular geometries as well as multiphase flows.
LBM are discrete in position and momentum spaces and are based on the synthetic dynamics of populations associated with the edges of a discrete 2D or 3D lattice.At each time step, populations of each site are propagated (i.e., copied from the neighboring sites), and then incoming populations collide among each other, mixing their values (i.e., new populations values are computed from old ones and the neighbors ones).See [25] for a deeper introduction.
LBM are labeled as , where represents the dimension, 2D or 3D, and represents the number of populations associated with each lattice site.Here, we consider the stateof-the-art 237 model for simulation of two-dimensional fluids with 37 populations per site.This model correctly reproduces the thermohydrodynamic equations of motion of a fluid in two dimensions and also enforces the equation of state for an ideal gas = [26,27].
From a computational point of view, LBM are easy to implement and a large degree of parallelism can be exploited, making them suitable to run on state-of-theart multicore and many-core processors.steps performed by a Lattice Boltzmann simulation are the computation of the propagate and collide functions: (1) propagate moves populations across lattice sites according to the stencil pattern shown in Figure 3; it collects at each site all populations that will interact at the next phase: collide.Implementation-wise, propagate moves blocks of memory locations allocated at sparse memory addresses, corresponding to populations of neighbor cells.
(2) collide performs all the mathematical steps associated with the computation of the collisional operator and computes the population values at each lattice site at the new time step.Input data for this phase are the populations gathered by the previous propagate phase.
The D2Q37 LBM is more complex than simpler LB models such as D2Q9 or D2Q17 because the propagate function uses a fourth-order scheme of movements, exchanging populations with neighbors up to distance 3 for each site.This translates into severe requirements in terms of memory bandwidth and floating-point throughput.propagate implies accessing 37 neighbor cells to gather all populations making this step mainly memory-bound.collide requires approximately 7600 double-precision floating point operations per lattice site.collide exhibits a significant arithmetic intensity and is the dominating part of the overall computation, taking roughly 90% of the total run-time.
The D2Q37 model has been implemented and extensively optimized on a wide range of parallel machines like BG/Q [28] as well as on a cluster of nodes based on traditional commodity x86 CPUs [29], GPUs [30][31][32], and Xeon-Phi [33,34].It has been extensively used for large-scale production simulations of convective turbulence [35,36].Performance and energy requirements of the propagate and collide functions were estimated on several architectures, measuring, respectively, the time-to-solution ( ) and the energy-to-solution ( ).
Low-Power SoCs.
Using the custom power meter system described in Section 3, and of the propagate and collide functions have been measured while running on the CPU and on the GPU of a Jetson TK1 [22].Two different implementations have been used, a plain C one, using ARMv7 intrinsics (with ARM Cortex-A15 processor), and a CUDA one (GK20A, the GPU component of the Jetson TK1 SoC), respectively, for the CPU and GPU.Since on this board the clock frequencies of both processors (and memory too) could be changed thanks to DVFS (Dynamic Voltage and Frequency Scaling), both and vary according to these frequencies.Several runs have been performed for every possible clock combination showing the frequency optimization space available on this SoC for both processors [22].
The plain C code has been also ported to the ARMv8 architecture and run also on the Cortex-A53 hosted on a HiSilicon Kirin 6220.
Several tradeoff points between and can be identified for each tested processor.Choosing a single metric, for example, the EDP (Energy Delay Product), we can select an optimal tradeoff point and perform a comparison according to it, as shown, for example, in Table 2 for the collide function.
High-End System.
Modern high-end architectures also allow tuning processor clocks through DVFS.We have then changed the clock frequency on computing elements, both CPUs and GPUs, of the COKA cluster and read power values related to processors with papi as described in Section 3. We used different code versions, optimized, respectively, for Intel Xeon E5-2630 v3 CPU and for K80 GPUs.For the latter, we ran only on one GPU, out of the two hosted in a NVIDIA Tesla K80 board.In this case, in order to run for a significant amount of time we had to use a larger lattice of 1024 × 8192 points.Selecting best EDP values, we compare the two architectures as shown in Table 3.
We also evaluated the energy-saving potential of DVFS tuning for the full high-end HPC node, running a full Lattice Boltzmann simulation code on 16 GPUs.In this case, we measured the power drain at the wall socket, performing the readout from the power supplies through IPMI (Intelligent Platform Management Interface).Results are reported in Figure 4.As we see, changing the frequency from default to 732 MHz has little to no impact on execution time but allows saving ≈7% of the total energy of the computing node.The reason for this is associated with the computational needs of kernel functions executed by the application [37].In fact, as a general rule, the throughput of the processor should balance that of the memory, and increasing the processor frequency does not bring any benefit if the application performance is limited by memory accesses.More in general, it is possible to decrease the processor frequency without impacting performances (up to a threshold below of which the application becomes compute-bound) for any code spending significant portions of its execution time in memory-bound operations or waiting for communications and synchronizations.
LHCb Software.
LHCb [38] is one of the four main experiments collecting data at the LHC (Large Hadron Collider), the world's largest particle accelerator located at CERN laboratory in Geneva.Its purpose is to investigate b-hadron decays with high statistics and precision, aiming mainly at the study of observables and rare decays violating CP (C, charge conjugation, and P, parity symmetries).The combination of C-and P-symmetries is found to be violated in several decay processes and among the others in the bquark hadron decays.Studying such decays will give precise measurements of CP violating processes observables, which furthermore may depend on new physics effects.
The computing model of LHCb requires the reconstruction and analysis of simulated and real data.The software stack consists of a group of packages for the generation of simulated events and their reconstruction and analysis.Testing the full software stack on SoC architecture can provide valuable hints for the evolution of the computing model towards the third period of LHC data taking, due to start in 2021.Simulating data is usually a CPU-bound task, given the need to generate the events produced by the colliding beams, the interaction with the detector of all the particles, and the response of the readout electronics.In general, this task is not IO-bound, whereas the data reconstruction, either of real or simulated data, requires the access of data which are geographically distributed in several storage areas.
In order to determine the performances of the various available processors, we decided to set up a simulation task using the LHCb application Gauss [39] and a reconstruction task using the software package Brunel [40], which takes the raw detector data and builds physics objects starting from kinematic quantities like tracks, vertexes, and particle identification quantities.The CPU and IO requirements of two applications are in general different.With the evaluation of the CPU performances and power consumption being the purpose of these tests, we decided for the reconstruction task to download the input file in advance.Furthermore, since the LHCb software stack is compiled for x86 architectures, we chose to run the tests on the following CPUs: Intel Pentium N3700, Intel Avoton C2750, Intel Xeon D-1540, and Dual Intel Xeon E5-2683 v3.
In Figure 5, the average timing per event of the four CPUs is reported.The histograms in (a) refer to the Gauss tests, while the ones in (b) refer to the Brunel tests.Simulation tasks usually take a small input file and produce a much bigger one (∼1 GB), whereas reconstruction tasks elaborate the information of an input file with an event-based data structure.So the latter application reads the file at each iteration, hence requiring frequent access to the memory.As is clear from Figure 5, the execution times for the simulation task are strikingly different, reflecting the low profile of the Intel Pentium N3700 with respect to the other architectures.In particular, the CNAF HPC node is around 20 times faster than N3700 for the Gauss simulation task (8 s versus 168 s) and around 8 times faster (4 s versus 31 s) than N3700 for the reconstruction task (Brunel).The discrepancy in execution time for the reconstruction task is mitigated by the memory access.Thus, for the application that requires memory access, a low TDP solution could be profitably considered.
Then, Figure 6 shows the effects of taking into account power consumption in order to compare architectures.The metric shown in the latter figure includes both the execution time and the absorbed power (since energy is the product of the two), which has been measured using the multimeter system described in Section 3.
Taking the energy per event as a reference metric, the N3700 SoC is around 4.5 times more efficient than the CNAF HPC node for the simulation (Gauss) task (2.1 J versus 9.4 J) and around 13.5 times more efficient than the CNAF HPC node for the reconstruction (Brunel) application (0.002 J versus 0.027 J), confirming and actually reinforcing our statement above; that is, low-power CPUs such as the N3700 are the best performing in case of not-negligible IO applications.
Of course, it should be noted that many of these CPUs would be required in order to provide the throughput needed by LHCb, and we are planning to design a compact system consisting of low-power CPUs and providing a throughput comparable to a usual high-end server.
Neural Networks.
Spiking neural networks play a dual role, depending on the scale of the simulated models: they contribute to a scientific grand-challenge, that is, the understanding of brain activity and of its computational mechanisms, and, by inclusion in embedded systems, they enhance the ability of applications like autonomous navigation, surveillance, and robotics.Therefore, fast and powerefficient execution of spiking neural network models assumes a driving role, at the cross-road between embedded and highperformance computing, shaping the evolution of the architecture of specialized and general-purpose multicore/manycore systems.See, for example, the TrueNorth [41] low-power specialized hardware architecture for embedded applications and [42] about the power consumption of the SpiNNaker specialized hardware architecture, based on the combination of embedded multicores and a dedicated networking infrastructure.About the strategy based on more standard HPC platforms and general-purpose simulators, see, for example, [43,44].
Indeed, the quantitative codesign of the EURETILE many-tile execution platform and its many-process programming environment [45] Currently, the WaveScalES experiment in the Human Brain Project uses this engine to simulate the activity of cortical slow waves on high-resolution models of cortical areas, including several tens of billions of synapses and the ExaNeSt [15] project includes DPSNN in the set of benchmarks that drive the development of future interconnects for platforms including millions of embedded ARM cores.
In [47], a first comparison of the power, energy, and speed of execution on ARM cores versus Intel Xeon quadcores is reported.There, DPSNN has been run on NVIDIA Jetson TK1 boards (which include a quad-core ARM Cortex-A15 @2.3 GHz, 28 nm CMOS technology) and on clusters mounting quad-core Intel Xeon CPU (E5-620 @2.4 GHz, 32 nm CMOS).Here we extend the measures to the newgeneration NVIDIA Jetson TX1 SoC based on ARMv8 architecture.Jetson TX1 includes four ARM Cortex-A57 cores plus four ARM Cortex-A53 cores in big.LITTLE configuration.We have measured its performances in executing the DPSNN code along with those of a coeval mainstream Intel processor architecture using a hardware/software configuration suitable to extrapolate a direct comparison of time-to-solution and energy-to-solution at the level of the single core.The energy consumption has been measured using the multimeter system described in Section 3.
We have used a Jetson TX1 board and a Supermicro SuperServer 7048GR-TR with two hexa core Intel E5-2620 v3 @ 2.40 GHz as hardware platforms, and we have run four MPI processes on both, simulating 3 s of the dynamics of a network made of 10 4 Leaky Integrate and Fire with Calcium Adaptation (LIFCA) neurons connected via 18 × 10 6 synapses.Results are shown in Figures 7 and 8 and can be summarized as follows: although the x86 core architecture is about five times faster than the ARM Cortex-A57 core in executing the simulation, the energy it consumes to do it is about three times higher than the energy consumed by the ARM Cortex-A57 core.
Computed
Tomography.Among the various scientific applications explored within the COSA project, X-Ray Computed Tomography turned out to be a particularly well suited case for the use of low-power SoCs, as described in [48].
X-ray Computed Tomography can be gainfully applied to the field of Cultural Heritage in order to reconstruct the internal structure of art objects in a noninvasive way for both scientific investigations and restoration purposes.This is typically both time-consuming and power-consuming; also, in most situations, executing the reconstruction software directly where and when the X-ray measurements are acquired is infeasible.Hence, the possibility of running the reconstruction algorithm on a mobile, possibly batterypowered, device is particularly appealing.For our study, we have considered the C and MPI Filtered Backprojection algorithm for Computed Tomography reconstruction [49] developed by the X-ray Imaging Group of the Physics and Astronomy Department at the University of Bologna [50].The Filtered Backprojection algorithm is heavily used in 3D Tomography reconstruction, and performances can be easily measured in terms of 2D slices (of a 3D volume) reconstructed per time unit and per energy unit.
We have exploited the Graphics Processing Unit (GPU) of the Jetson TK1 SoC, available in the COSA cluster located at CNAF, and maximized the simultaneous use of CPU and GPU by combining a multithreaded OpenMP version and a GPU-CUDA version of the reconstruction algorithm, which have been executed in parallel on the SoC.We ran the OpenMP and CUDA implementation in two different SSH sessions at the same time on the same platform.We have used three OpenMP threads for the CPU implementation, running on three of the four available cores, and one thread for the CUDA implementation, running on the fourth core.
Figure 9 compares the numbers of slices per time unit (a) and per energy unit (b) reconstructed using the CPU (OpenMP version) and the GPU (CUDA version) on a Jetson TK1 SoC and on the reference Xeon server equipped with a NVIDIA K20 GPU for a characteristic image from the considered dataset (we refer the reader to [48] for more details).Power consumption has been measured using the multimeter system described in Section 3.
In [48], we showed that only three Jetson K1 boards equipped with Giga Ethernet interconnections allow reconstructing as many 2D slices (of a 3D volume) per unit time as a traditional high-performance computing node, using one order of magnitude less energy.
It is important to note that the reconstructed images have been always compared in terms of pixel-by-pixel standard deviations with the image reconstructed using the original, serial code to check the correctness of the reconstruction process.Therefore, our results seem to be very promising in view of the construction of an energy-efficient computing system of a mobile tomographic apparatus.
Einstein Toolkit.
The scientific problem considered in this application is a high-resolution simulation of inspiral and merger phase of binary neutron star systems, which are among the most likely sources of gravitational waves targeted for detection by LIGO/Virgo [51][52][53].The numerical setup of the test is based on the Einstein Toolkit (ET) Consortium code [54], which performs the time evolution of matter coupled to Einstein's equations (General Relativistic Hydrodynamics), and it is the same as that used in production setting and described in detail in [55].
The numerical complexity of the code reflects the need to compile the whole Einstein Toolkit [56], which is an open set of over 100 components (Cactus thorns) for computational relativity and consists of 1000 source files written in C, C++, F77, and F90 that implement OpenMP parallelism and MPI distribution of execution and partition of workloads and memory allocations.In this setting, the following are included: (i) Cactus framework for parallel high-performance computing (ii) Mesh refinement with Carpet (iii) Matter evolution with GRHydro [57] (iv) Metric evolution using McLachlan BSSN evolution of the matrix (v) Initial data computed using the LORENE CODE.
Simulations based on this framework are routinely executed on all the major HPC systems and the basic performance test reported here refers to the "Galileo" Tier-1 HPC system.Galileo is located at the CINECA HPC center in Bologna [58] and is partially financed by INFN.On this system, we performed scaling reference test (see Figure 10) using different resolutions, ranging from 0.75 to 0.09375 (corresponding to 1100 m to 138 m).In order to perform a simulation of 30 ms of physical time, using resolution = 0.25 (the finest grid), a production run on Galileo requires a week on 256 cores and allocates 108 GB of physical RAM on the system.The code has been successfully compiled and run on several COSA boards.The ET "Hilbert" (ET 2015 05) stable release has been compiled on all the four considered COSA platforms and the binary codes have been executed using parallel distribution with the MPI and OpenMP programming paradigms.This is very good news, as it shows that the software environment available on the considered architectures is enabled to run actual HPC applications.
The second objective has been the evaluation of the performance that the SoCs are able to deliver on this real simulation case.Due to the memory constraint of the COSA platforms, we had to impose additional symmetry and a very coarse resolution ( = 0.75) in order to limit memory configuration below 2 GB.We evolved the system on a configuration with cubic multi-Grid mesh with five levels of refinement and we performed 800 time evolution steps.
In Figure 11, the performances of the tested systems are shown on a single node in order to avoid possible effects of the communication network in place.All the available cores have been allocated with only MPI processes or only OpenMP threads or mixed processes and threads.This gives an assessment of the CPU speed and intranode performance, which are consistent with respect to the relationship between the peak performances of the different cores.The internode communication has been stressed by scaling from 1 to the maximum number of available nodes (see Figure 12).This procedure shows the possibility to scale and distribute the workloads between different nodes, but since the communication in the COSA CNAF cluster is based on Ethernet technology instead of high-speed interconnection technology (like InfiniBand on Galileo), the plots show poor scalability and confirm that high-speed communication between nodes should be adopted in order to use such system for real HPC applications.5.6.Bioinformatics.We have also tried to run on the COSA CNAF cluster a few significant applications from the Bioinformatics realm, as described in [59].In the tests described below, power consumption has been measured using the multimeter system described in Section 3.
First, we have considered GROMACS (GROningen MAchine for Chemical Simulations, [60]), free, open-source software for molecular dynamics simulations used worldwide to simulate the Newtonian equations of motion for systems with hundreds to millions of particles, for example, proteins or polymers.GROMACS is written in ANSI C, and both a parallel version using standard MPI and a GPU accelerated version using CUDA are available.
The porting of GROMACS version 5.1 to Jetson TK1 platform has been easy, and simulations of a real-life use case (approximately 55000 atoms) have been run using 32 cores, that is, 8 Jetson TK1 boards connected through 1 Gbit/s Ethernet and CPU + GPU on a single node.In the CPU-only multicore mode, the SoC was proven to be ten times slower than the reference high-end server equipped with two Intel Xeon E5-2620 v2 CPUs.However, it is slightly better in terms of power ratio.Instead, for the CPU + GPU run on just one node, the Jetson TK1 is only 5.5 times slower but 6.6 times more power-efficient.
Furthermore, we ported a space-aware system biology stochastic simulator to the newer Jetson TX1 boards.The considered simulator couples Dynamical Probabilistic membrane systems [61] with a modified version of the -leaping stochastic simulation method [62,63], which takes into account size and volumes of objects in crowded systems and allows modeling biochemical systems executing thousands of simulations in parallel.
Again, the porting of the S-DPP algorithm to lowpower Jetson TX1 boards has been straightforward, and computing performances are satisfying (2-3 times slower than the reference high-end Xeon servers), with 10 times lower power consumption.As is the case with all the considered applications, performances become poor when attempting real multinode runs due to the high network latency of the interconnection.
Conclusions
In this paper, we have presented the results coming from the COSA (Computing On SOC Architecture) project, aiming at investigating novel hardware platforms and software techniques that can be exploited to realize energy-efficient computing systems for scientific applications.The hardware platforms taken into consideration are those with high ratio of flop per watt.We have considered standard multicore and many-core devices such as CPUs and GPUs and System on Chip (SoC) developed for the mobile and embedded realms.These sectors and the HPC world have historically been very isolated, but we are now experiencing very important convergence between the two markets.This is true for what concerns constraints (i.e., energy-efficiency) and needs (i.e., computing power).The COSA project is investigating the possibility to exploit this convergence to increase performance/power ratio in computing systems running scientific applications.Porting real-life applications from various scientific fields to low-power Systems on Chip architectures derived from the embedded and mobile markets is one of the main objectives of the project.Hardware platforms based on both Intel and ARM architectures have been considered and benchmarked with synthetic tests and real-life applications belonging to various scientific fields, ranging from High Energy Physics to Biomedicine, Theoretical Physics, Computer Tomography, and Neural Network simulations.
The results of our work show that it is now actually possible, and in some cases even very easy, to compile and run complex scientific workloads on low-power, off-theshelf devices not conceived by design for high-performance scientific applications.
For certain applications, the computing performances are satisfying and even comparable to those obtained on traditional high-end servers, with much lower power consumption, in particular if the GPU available on SoCs is exploited.
Comparing high-end HPC servers with development boards hosting low-power Systems on Chip taken from the mobile and embedded world is clearly not fair if only energy aspects are taken into account.In fact, there are workloads that simply cannot be run on off-the-shelf hardware and embedded platforms, but, nevertheless, the work presented in this paper showed that, for certain applications, even parallel applications, the use of low-power architectures represents a feasible choice in terms of tradeoff between execution times and power drain.However, the high-end platforms are still capable of better absolute computing performance.Among the limitations that low-power SoCs-based platforms still show, we can list the following: small maximum RAM size, high latency of the network connections, few and small PCI slots, and missing support for ECC memory.For all these reasons, off-the-shelf systems can hardly be used for extreme scale applications and highly demanding HPC computations; anyhow, our experience shows that systems based on lowpower SoCs can be a viable solution to reduce power consumption if a proper integration is carried on.Indeed, the exploitation of this kind of hardware in a production environment requires by definition many nodes to be integrated, and the costs of such integration should be carefully analyzed and taken into account when making comparisons with standard architectures systems.The analysis of this integration cost is out of the scope of this paper and is demanded to a future work.
In general, performances become poor when attempting real multinode runs on platforms without native Ethernet connections.Some of these platforms, in fact, implement the network connections through slow USB-Ethernet bridges.However, almost all the SoCs under evaluation provide PCI lanes that, if supported by motherboards with proper connectors, can be exploited to install low-latency cards allowing for network performances comparable to those obtained in standard high-end servers.However, to further improve the network performance of SoCs-based system, the project is also investigating the adoption of custom toroidal interconnections prototyped with state-of-the-art FPGAs.
As for high-end platforms, whenever the scientific workloads succeed in exploiting the large number of graphics cores available in the SoCs, it is possible to obtain good speed gains and increase in computing power per watt.This is true also for the considered applications and in particular for those that we have been able to run on the NVIDIA Jetson platforms (both TX1 and TK1) exploiting the CUDA environment.
Another research line that the project is investigating to reduce energy consumption of HPC application runs does not focus on hardware but on software.Modern operating systems, in fact, allow controlling the performance of various hardware components (i.e., tuning the RAM bus and CPU and GPU clocks) with libraries calls and APIs.Our results show that, if properly instrumented to reduce the GPU frequency when executing certain functions, the considered application can save ≈7% of the total consumed energy without impacting performances.
In the future, the COSA project will continue to analyze real-life applications performances on novel architectures, from both low-power and high-end ecosystems, including, for example, the recently released Intel Knights Landing many-core systems.In particular, for the low-power platforms, the performances of clusters with low-latency interconnections will be explored.
Figure 1 :
Figure 1: The custom power meter attached to a Jetson TK1.
Figure 2 :
Figure 2: Running two kinds of benchmarks on the various CPUs of the COSA CNAF cluster.(a) and (b) show the results of the HPCG benchmark: HPCG score (weighted GFLOP/s) on (a) and ratio between HPCG score and absorbed power on (b), for increasing number of CPU cores.(c) and (d) show GFLOP/s and power ratio for the home-made MATMUL benchmark.
Figure 3 :
Figure 3: The 37-element stencil used for the computation of propagate function in the D2Q37 LB code.
Figure 5 :
Figure 5: CPU time per event for the x86 architectures tested.(a) shows the results for the Gauss application, while (b) shows the Brunel application.
Figure 6 :
Figure 6: Energy consumption per event for the x86 architectures tested.(a) shows the results for the Gauss application, while (b) shows the Brunel application.
Figure 9 :
Figure 9: Slices per second (a) and per Joule (b) of OpenMP and CUDA versions of the Filtered Backprojection algorithm executed on traditional high-end Xeon architecture (red) and on Jetson TK1 board (blue).The rightmost bar shows the combined GPU + CPU solution on Jetson TK1; that is, only 3 CPU threads are used for the OpenMP version.Otherwise, the OpenMP version uses all the available threads, that is, 24 threads on Xeon and 4 threads on Jetson TK1.
Table 1 :
Hardware specifications of the boards in the COSA cluster at INFN-CNAF.Top: ARM; bottom: Intel.
Table 2 :
Best EDP values, with corresponding energy-to-solution and time-to-solution, running the collide kernel in single precision.Lattice: 128 × 1024.
Table 3 :
Best EDP values, with corresponding energy-to-solution and time-to-solution, running the collide kernel in double precision.Lattice: 1024 × 8192. | 10,598 | 2017-09-25T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
In silico ribozyme evolution in a metabolically coupled RNA population
Background The RNA World hypothesis offers a plausible bridge from no-life to life on prebiotic Earth, by assuming that RNA, the only known molecule type capable of playing genetic and catalytic roles at the same time, could have been the first evolvable entity on the evolutionary path to the first living cell. We have developed the Metabolically Coupled Replicator System (MCRS), a spatially explicit simulation modelling approach to prebiotic RNA-World evolution on mineral surfaces, in which we incorporate the most important experimental facts and theoretical considerations to comply with recent knowledge on RNA and prebiotic evolution. In this paper the MCRS model framework has been extended in order to investigate the dynamical and evolutionary consequences of adding an important physico-chemical detail, namely explicit replicator structure – nucleotide sequence and 2D folding calculated from thermodynamical criteria – and their possible mutational changes, to the assumptions of a previously less detailed toy model. Results For each mutable nucleotide sequence the corresponding 2D folded structure with minimum free energy is calculated, which in turn is used to determine the fitness components (degradation rate, replicability and metabolic enzyme activity) of the replicator. We show that the community of such replicators providing the monomer supply for their own replication by evolving metabolic enzyme activities features an improved propensity for stable coexistence and structural adaptation. These evolutionary advantages are due to the emergent uniformity of metabolic replicator fitnesses imposed on the community by local group selection and attained through replicator trait convergence, i.e., the tendency of replicator lengths, ribozyme activities and population sizes to become similar between the coevolving replicator species that are otherwise both structurally and functionally different. Conclusions In the most general terms it is the surprisingly high extra viability of the metabolic replicator system that the present model adds to the MCRS concept of the origin of life. Surface-bound, metabolically coupled RNA replicators tend to evolve different, enzymatically active sites within thermodynamically stable secondary structures, and the system as a whole evolves towards the robust coexistence of a complete set of such ribozymes driving the metabolism producing monomers for their own replication. Reviewers This article was reviewed by Gáspár Jékely, Anthony Poole and Armen Mulkidjanian
Background
The RNA-World scenario of the origin of life is based on the hypothesis that the first evolvable ancestors of all recent life on Earth must have been RNA-like macromolecules [1,2]. Even though we cannot (and probably will never be able to) positively prove this on the basis of fossil evidence, strong support to the prebiotic existence of an RNA-World comes from modern molecular biology: recent organisms still carry reliable clues suggesting that RNA had played a central role both in the metabolism and in the genetics of very early forms of life [3][4][5]. The fact that RNA enzymes occur in key positions of the molecular machinery of all living cells including the ribosomethe enzyme complex responsible for translation, of which RNA constitutes the functional core, is a convincing indirect evidence for an early RNAdominated world of living organisms [6,7]. It took subsequent eons of evolution for the genetic role to be taken over by DNA and the metabolic role by protein enzymes [8] to yield the DNA-RNA-protein-World of life as we know it today.
The theoretical reason for the overwhelming majority of scientists studying the origin of life to accept the view that RNA (or some RNA-like replicator macromolecule, [9]) is the best candidate for the role of booting up life is the dual nature of RNA: it carries genetic information in its nucleotide sequence, and RNA molecules of different sequences are capable of catalysing a seemingly infinite variety of different chemical reactions [6,[10][11][12]. Thus, prebiotic RNA enzymes (ribozymes) might have been genes and enzymes at the same time, so that two of the three essential functions of living systems (i.e., inheritance, metabolism and membrane, [13]) might have been embodied in the same chemical entity, comprising an infrabiological system [14] that is viable and evolvable in itself.
The Metabolically Coupled Replicator System (MCRS) is a family of computer models aimed at demonstrating that the RNA-World scenario is both ecologically and evolutionarily feasible under reasonable physical and chemical assumptions [15][16][17][18][19]. This means thatgiven the physico-chemical properties of RNA macromolecules and their precursorsa sufficiently diverse community of RNA populations can be maintained and evolved, in which the different RNA species cooperate to produce monomers for their own replication, and possibly also to supply other "common goods" for the replicator community ( Figure 1A).
Even this early stage of prebiotic natural history must have been the result of an evolutionary process, because the first replicators had to be assembled without the help of other replicators, from chemical building blocks produced abiotically. Their lines of progeny must have evolved specific enzymatic activities through the process of retroevolution [20,21], driven by the self-inflicted selective (ecological) pressure due to the depletion of building blocks (monomers) in their environment. MCRS models address the evolution of prebiotic metabolism at this very early phase, assuming that a few terminal reactions of monomer production could be catalysed by some strains of replicators more effectively than they were supplied by the original abiotic source. The consequent fitness advantage of the cooperating replicators had set the retroevolutionary machinery in motion, to catalyse an expanding network of metabolic (or other beneficial) reactions.
The MCRS framework was originally developed as a toy model [15] in which all the physico-chemical attributes of the replicator molecules and their precursors (metabolites and monomers) were implicit in the updating rules of a stochastic cellular automaton. During the last 15 years we have refined the toy model approach in many ways [16,17], but the physico-chemically implicit nature of the MCRS framework has not been changed so far. The present study is the first one to consider explicit primary (nucleotide sequence) and secondary (2D folding) structures of the RNA molecules, thus taking the energetic andto some extentsteric properties into account in both the genetic and the metabolic functions of the replicators. Another important novelty of this approach is that we consider mutations on the genetic (nucleotide sequence) level instead of the phenotypic mutations used in previous MCRS models [16,18]. The phenotypic effects of mutations are determined through consequential changes in the secondary structure of the mutant RNA molecule, which in turn determines its replicability, decay rate and enzymatic efficiency. The genotype-phenotype mapping is mediated by the minimum folding energy of sequences. We show that adding this realistic detail makes the MCRS approach even more robust than the toy model versions: the replicator system stays persistent and evolves to stationary states that are remarkably insensitive to changes in almost any model parameter.
Model framework
The model is implemented as a stochastic cellular automaton (SCA) on a rectangular grid of size G × G. The grid represents the mineral surface on which chemical reactions (replication and monomer production) occur. Each cell represents a discrete site on this surface that is either empty or binds one replicator molecule. Periodic boundary condition (toroidal surface) is used to avoid edge-effects. The model is initiated with a random population of replicators, occupying 80% of the sites at t = 0. The update process is random and asynchronous: the state of each site is updated at each time step in a random order. The number of update steps within one generation (from time step t to t + 1) is equal to the number of sites in the lattice. The lattice we used in all simulations consists of 300 × 300 sites (G = 300), thus one generation comprises 90.000 updates.
Catalytic activities and metabolism
We assume that replicator molecules are RNA sequences with their secondary structures explicitly considered. Secondary structures and minimal free energies are calculated by the ViennaRNA Package 2.1.5 [22]. Secondary structure corresponds to enzymatic activity; energy determines the stability of folded RNA molecules. For sake of simplicity we assume that a sequence can be either in a folded state corresponding to the lowest free energy E (if there is an unique structure with optimal energy) or in a completely unfolded state with zero Each replicator is folded into 2D structures (dashed arrows) which may acquire catalytically active (coloured) regions including "active" and "helper" (boldface) bases. A parasitic sequence not contributing to monomer production (no arrow to M) but consuming monomers, is shown in white (no enzymatically active region). B: Colours represent catalitically active regions within a sequence; α is the catalytic efficiency of the given region (detailed explanation in text).
energy. The probability that a sequence is in the folded state is determined by Boltzmann statistics as follows: where factor c scales the probability to yield p fold ≈ 1 in case of optimal energy. Numerical experiments suggest that the optimal energy in the 15-75 nt range of sequence length has a lower bound at E min = −25 kcal/mol. According to the probability distribution above a sequence with zero free energy is folded with probability 0.5, while a sequence of low energy is folded into (a stable) secondary structure with 0.5 < p fold < 1.0. (At this stage we omit the possibility of folding into suboptimal structures.) We assume that there are three well-defined secondary structures (active site configurations) corresponding to three different enzymatic activities (α 1 , α 2 and α 3 ) which are all required for metabolism to work. The structureactivity mapping follows the following simple rules ( Figure 1B): 1. An internal loop (or bulge loop) of length five and a base A in its middle (3 rd ) position corresponds to the unit activity (α 1 = 1) in the first enzymatic process. If there is a "helping base" U adjacent to this catalytic base A, then the activity is boosted by 10% (α 1 = 1.1). If the loop is only four bases long but contains the catalytic base A in the 2 nd or 3 rd position, then the enzymatic activity, due to steric reasons, is only 0.1 (α 1 = 0.1). In any other case there is no catalytic activity (α 1 = 0). 2. A hairpin loop of length 9 and a base G in the middle (5 th ) position provides activity 1 (α 2 = 1) for the second enzyme-catalysed process. Base A next to the catalytic base G increases the activity by 10% (α 2 = 1.1). If the length of the loop is 8 (but contains the catalytic base G in position 4 or 5), then the activity is 0.1 (α 2 = 0.1). In any other case there is no catalytic activity (α 2 = 0). 3. A hairpin loop of length 13 and a base U in the middle (7 th ) position yields an activity of 1 (α 3 = 1) in the third enzyme-catalysed process. Base C neighbouring catalytic base U increases the activity by 10% (α 3 = 1.1). If the length of the loop is 12 (but contains U in positions 6 or 7) the activity is 0.1 (α 3 = 0.1). In any other case there is no catalytic activity (α 3 = 0).
An adequately long sequence can accommodate more than one active catalytic region (promiscuous enzymes). We assume that, due to steric reasons, a promiscuous enzyme cannot catalyse more than one reaction at the same time, thus we assume a sub-additive effect on different activities: where m is the number of different active sites of a given molecule, and σ > 1 provides the sub-additive effect.
With this factor we take into account the effect that for a promiscuous enzyme the necessary conformational changes for catalysis ("induced fit") need more time than for mono-active enzymes because of the more rigid secondary structure. Relatively short sequences cannot act as promiscuous enzymes due to energetic constraints, however.
Degradation, replication and mutation
In each time step, a replicator can replicate or decay. The probability of decay depends on the minimal free energy of the secondary structure (which is associated with its stability) according to the following equation: where E min is the optimal (minimal) free energy of the replicator in the relevant sequence length range. In the range of L = 10…70, E min = −25 kcal/mol. The prefactor 0.9 is the maximum degradation rate corresponding to E ≈ 0. The minimum degradation rate is 0.1 (preventing unlimited longevity of sequences with optimal energy), which is ensured by the multiplier factor 0.8. For a replication event to occur the focal replicator s must be complemented by all three different enzymatically active molecules in its metabolic neighbourhood (MET(h,s), the set of h sites concentric on the site of the focal replicator s). The metabolic activity (M s ) around replicator s is the geometric mean of the different metabolic enzyme activities within its metabolic neighbourhood [15]: where a j,i denotes the j th type of activity (j = 1, 2, 3) of replicator i within the metabolic neighbourhood of size h around the focal replicator s (i ∈ MET(h, s)), see Figure 2 for an illustration of the metabolic neighbourhood. Due to the geometric mean any of the three types missing from the metabolic neighbourhood implies a total metabolic activity of zero (M s = 0), and thus no chance for the focal replicator to be copied. Besides the metabolic activity M s , the replication probability of a sequence s also depends on the minimal free energy of its secondary structure. Since replication is possible only in the unfolded state for any sequence, its replicability is proportional to the factor (1 − p fold ).
The length of the sequence also affects replicability through the time required for completing replication. In our simple model context the time consumption of a replication event is affected by two factors in a linear manner: the first one b 1 is invariant (representing the length-independent initiation/termination steps of the replication process); the other one, b 2 , is proportional to the length L of the sequence (this is the replication process itself ). The parameter b 2 represents the time needed for the addition of a single nucleotide to the sequence being replicated. To sum up these two factors, the replicability (R) of a given replicator s is the following: where g is a scaling factor and l (l > 0) ensures that even the best ribozymes (with p fold close to 1) can replicate to some extent. The claim C s of replicator s belonging to the replication neighbourhood of an empty site to replicate into that empty site is the product of the metabolic activity (M s ) and replicability (R s ) of the replicator s: The actual probability of template s to replicate into an empty site depends on its own claim and that of all the other replicators present in the replication neighbourhood of the empty site: where l runs through the (von Neumann-type) replication neighbourhood of the empty site (see Figure 2) and C e is the claim of the empty site to remain empty. The probability that an empty site remains empty is P empty ¼ 1− X l P l . We set C e to be an ε fraction of the theoretical maximum of the claims. To compute this maximum we assume that all metabolic neighbourhoods are optimal. Specifically, a focal replicator s has eight neighbours in its h = 3 × 3 metabolic neighbourhood and a single empty site into which it can place its copy. Thus the maximal metabolic activity in case of three different reactions requires 3, 3 and 2 replicators with optimal catalytic activity (α ∧ ¼ 1:1) each. We assume also that the active replicators are all monoactive, i.e., we disregard promiscuity. With this simplification we implicitly assume that all sequences are folded at the optimal energy: p fold = 1. According to these postulates the theoretical maximum With this simplification the claim of the focal replicator is: With the default parameters (g = 10, b 1 = 1, b 2 = 0.05) and at L = 35 (in accordance with our numerical experience) the maximum claim is C s ∧ ¼ 10:48. As the claim of an empty site to remain empty is ε fraction of the maximum claim, C e ∧ ¼ 1:048. Mutation takes place during a successful replication step with a per base probability: p sub , p ins , p del of substitution, insertion and deletion, respectively. These three processes act independently.
Diffusion
Empirical studies have proven that RNA precursors (intermediate metabolites and monomers) and RNA macromolecules (replicators) are capable of reversible binding to charged mineral surfaces, which implies twodimensional diffusion of limited speed for small and large molecules [23,24].
Diffusion of replicators
The speed of surface diffusion depends on the strength of interaction between the surface and the moving material. For example, the different diastereomers of sugars attach to the surface with different numbers of hydroxyl groups and consequently they move on it with different velocities, resulting in enantiomer selection [25]. For RNA molecules such simple relationships hold only if the sequence is short; longer RNA sequences always fold into complex three-dimensional structures which determine their speed of diffusion on the surface. Three-dimensional RNA structures cannot be reliably calculated yet, unfortunately, therefore we assume that different RNA molecules share the same diffusibility on average. With this simplifying assumption the movement of replicators on the surface can be implemented by the Toffoli-Margolus diffusion algorithm [26] which is scaled by a single parameter: the strength of diffusion (D). We used an asynchronous version of the original algorithm which was designed for synchronously updated simulations. The procedure is as follows: in each update step a randomly chosen 2 × 2 block is rotated by 90°clockwise or counter-clockwise with probability 0.5. The diffusion of replicators is scaled by parameter D, specifying the average number of such diffusion updates relative to one replication/decay update. Higher values of D correspond to higher diffusion rates. Note that D = 0 does not imply exactly zero diffusion, because a small non-zero rate of mixing comes from each new replicator being placed to a site adjacent to its template. Additionally, D = 1 means that each replicator moves on the grid surface four times on average during a generation time (from t to t + 1). The speed of replicator diffusion can be gradually decreased using D < 1 values, which means that the Toffoli-Margolus algorithm does not operate after every replication/decay update step. For example D = ½ means that the diffusion of replicators acts at every second update step on average.
Diffusion of metabolites
The diffusion of RNA monomers and the metabolites along the chemical pathways producing them is implicitly considered in the size of the metabolic neighbourhood (h): larger h means faster metabolite diffusion, i.e., a longer distance that the metabolite (or monomer) molecule can cover on the mineral surface before being used in a reaction, degraded, or desorbed from the surface and lost to the third dimension. To simplify the treatment of metabolite diffusion, degradation and desorption we assume that all the metabolically vital enzyme functions have to be present within the metabolic neighbourhood of the replicator to be copied in order to supply monomers for its replication ( Figure 1, [15]).
Results and discussion
We have performed a large batch of simulations at and around the default values of the model parameters ( Table 1). The actual parameter values are arbitrary, since no measurements are available to use them here, but they were chosen so as to reflect our best knowledge (or guess) regarding the physical-chemical properties of RNAs and their monomers. To our surprise the results proved to be very robust against changes of reasonable magnitude in almost all parameters anyway; only extreme values produced substantially different outcomes, and those were far out of the physico-chemically reasonable range. The simulation study had two general focuses: the ecology (i.e., the coexistence criteria) and the evolution (i.e., the long-term equilibria of replicator sequence distribution and enzymatic functionality) of the metabolic replicator community. Obviously, evolutionary questions can be asked only in a system that is ecologically feasible, i.e., all the three metabolically cooperating, enzymatically active replicator types must be coexistent to be capable of evolution. We will show that the coexistence criteria of the present, physically more detailed model are in very good agreement with our toy model built earlier on similar, but largely implicit assumptions [15,18,19]. The ecologically relevant parameters of both the toy model and this one are the mobility of the replicators (D) and that of the small molecules (metabolites and monomers) represented by the size of metabolic neighbourhoods (h). Evolutionary changes in replicator structure and in metabolic functions thereof depend mainly on a single parameter related to the replication process: the per base elongation time ("length penalty"; b 2 ). After having experimented with simulations varying all other parameters we concluded that the system is spectacularly insensitive to them, so we kept them constant at their default values, and systematically checked the effects of variation in D, h and b 2 .
The mineral surface (the lattice of 300 × 300 sites) was seeded in all simulations with a random population of replicators at an initial frequency of 80%, leaving 20% of the sites empty. The length of the initial replicators was also random with L evenly distributed on the [25,…,60] range. Simulations lasted for 10 6 generations, after which the frequency of each replicator type (F 1 , F 2 , F 3 , F 12 , F 13 , F 23 , F 123 and F p ), the average of each enzymatic activity A 1 , A 2 and A 3 and the average length of each replicator type (L 1 , L 2 , L 3 , L 12 , L 13 , L 23 , L 123 and L p ) were recorded (index numbers correspond to different combinations of the three possible enzymatic activities as given in Figure 1; p stands for parasitic sequences). The ecological and the evolutionary aspects of the simulations are treated in separate sections below.
Coexistence
In the random initial replicator population the proportion of enzymatically active replicators was low (slightly above 20%) with mono-active enzymes represented at the highest frequency, suggesting that enzymatically active secondary structures as defined by the rules of the model (Figure 3) are not very rare per se in a random sequence pool. Obviously, the emergence of two or more "active centres" within the same random sequence (i.e., the random occurrence of promiscuous enzymes) is much less likely but still occurs in 1-2% of the initial RNA pool. In most of the simulations the frequencies of mono-active replicators increase to their equilibria, while promiscuous enzymes almost completely disappear from the system after a temporary surge upwards ( Figure 3A). This behaviour was qualitatively the same for all realistic parameter sets including the default. The frequency of functionless replicators (parasites) is also high in all simulations, and in good accordance with the original spatial MCRS model's prediction [15]: cooperative replicators coexist with parasitic ones, and the latter cannot ruin the system. Figure 3B shows that the frequency of promiscuous enzymes depends on b 2 , the length penalty (elongation time) parameter: the smaller the penalty the longer the average replicator, which implies that it is easier to fit more than one active loops into its sequence. At the extremely low b 2 = 0.001 the frequency of biactive enzymes increases to about 10%, and even replicators with all three types of active loop remain in the system at a low abundance.
Metabolic neighbourhood size (h)
Extremely small metabolic neighbourhoods are unlikely (or unable) to accommodate all the necessary enzyme activities and thus result in system collapse, because the number of catalytic activities is larger or just slightly smaller than the number of neighbourhood sites. This effect does not show up in our simulations, because system size (i.e., the number of different enzymatic activities for metabolism to work) is 3, whereas the smallest metabolic neighbourhood size used in the simulations is h = 3 × 3 = 9. Figure 4 shows the effect of increasing metabolic neighbourhood size on stationary replicator frequencies and the evolved averages of enzyme activity and replicator length after 10 6 generations. The frequencies of enzymatically active replicators ( Figure 4A) decrease while parasitic replicators become more common with increasing metabolic neighbourhood size, suggesting that faster metabolite diffusion benefits parasitism. Indeed, if monomers are still available relatively far from where they were produced (i.e., far from the sites of enzymatically active replicators) then parasites can make use of their higher replicability R due to loose folding (small p foldcheck Eq. 5), and thereby take over in most localities. Increasing metabolic neighbourhood size represents a shift towards the meanfield version of the MCRS model which is known to go extinct for lack of advantage of rarity even without parasites [15,19]. Figures 4B and C show that the evolved enzymatic activities and the evolved lengths of the cooperating (metabolic) replicators do not depend on neighbourhood size. Note that at larger h values (5 × 5 and 7 × 7) parasites tend to be somewhat shorter than cooperating ribozymes, but the difference is not very large, even though length reduction is an efficient way to increase fitness through better replicability (Eq. 5). Decreasing length is obviously constrained in metabolic replicators, because they have to maintain their catalytically active structure, but this does not apply to parasites which are catalytically inactive. The fact that the parasites do not decrease their length to considerably shorter than that of the cooperators suggests that most of the surviving parasites might be the single-(or at most a few-) step mutant offspring of active metabolic replicators, with very small Hamming distances from the "masters". Parasites constitute the non-functional mutant cloud ("quasispecies", [27]) of metabolically cooperating replicators, and the genetic proximity of parasites to cooperators is maintained through mutation-selection balance. The shorter (faster replicable) mutants of the parasitic mutants are heavily selected against due to the disadvantage of aggressive parasites: wherever they become abundant they exclude their metabolically cooperating "hosts" locally and they perish for lack of replication resources (monomers). This mutation-selection balance represents high-turnover source-sink dynamics with respect to the replicator quasispecies.
Replicator diffusion (D) Figure 5 shows that the replicator mobility parameter D acts in an all-or-none manner: zero diffusion kills the system, but almost any positive speed of diffusion is sufficient to maintain coexistence, and further increase in D does not change the results even in the quantitative sense. We have increased the diffusion parameter as D = 2 n , n∈{−2,-1,0,1,2}, and found that none of the output variablesthe stationary frequencies F i (Figure 5A), the average enzyme activities A i ( Figure 5B) and the average length L ( Figure 5C)of the different replicator types had changed substantially within the range of D studied. With the overwhelming dominance of mono-active metabolic ribozymes this result is not very surprising: zero diffusion does not mix the different replicator types to the extent necessary for most metabolic neighbourhood to be complete, but even a very low threshold level of mixing may be sufficient for that, and it ensures long-term coexistence. Once it is above the threshold, increasing replicator mobility further does not change much in terms of metabolic efficiency and replicator coexistence. In fact even unrealistically high levels of replicator mixinge.g., completely randomized spatial patternsdo not alter this result. Fast replicator diffusion does not help the parasite either, because it does not imply a shift of the model towards the mean-fieldit is only the fast diffusion of metabolites (i.e., larger metabolic neighbourhood) that fosters parasite invasion, which is evident on Figure 4A. (Note that a metabolic neighbourhood size equal to lattice size is the limit at which the system comes at the mean-field.) Of course assuming fast replicator diffusion and slow metabolite diffusion at the same time would be very far from being physico-chemically realistic, but fortunately it is not necessary: very low replicator mobility is sufficient for the coexistence of the MCRS model. The case would be even more striking if promiscuous (bi-or tri-active) metabolic replicators were more abundant in the system, because then even less mixing would be sufficient for the average metabolic neighbourhood to be complete. Why is it then that enzyme promiscuity cannot be evolved or maintained even at extremely low replicator mobilities? For a system size of 3 (i.e., with metabolism requiring 3 different enzyme activities to be present within the same metabolic neighbourhood) there are two options to maintain complete local metabolisms with promiscuous replicators: 1) different bi-active replicators should be present at sufficient frequencies in a system still mixing somewhat to complement each other with their enzyme functions, or 2) tri-active replicators should prevail, in which case no mixing is needed at all. Both cases are obviously less advantageous than mono-active replicators in a slightly mixed system: 1) At any small D the mono-active replicators are at an advantage over biactive ones, because promiscuous enzymes are less efficient in both catalytic functions than mono-actives (cf. Eq. 2 with α i > 1), and bi-actives still need some mixing to avoid forming homogeneous patches on the surface and thus going extinct. Two of the three possible bi-active types ({1,2} and {1,3}) show up and then vanish during the first 3 x 10 5 generations ( Figure 3A), along with a temporary increase and subsequent decrease in average replicator lengths L (data not shown). This suggests that the initial difficulty of assembling three different mono-active replicators in the same metabolic neighbourhood is circumvented by evolving bi-active ones that are later excluded by their own, more efficient mono-active mutants. Note that {2,3} type bi-active replicators do not show up in large numbers because they need to be longer to be stable, and they are expendable because the two shorter bi-active types can deliver all the three necessary enzyme functions. 2) Tri-active replicators need to be much longer than mono-actives to fit all the three active loops into their secondary structure. This makes them much less probable to replicate than shorter replicators in the same replication neighbourhood, so that rarely emerging tri-active mutants are very likely fast eliminated from the system by their competing neighbours.
Replicator sequence evolution Folding energy
The Gibbs free energy (E) of a (folded) sequence defines the probability of the sequence being in the folded state (p fold , Eq. 1.). Replicability, enzymatic activity and replicator degradation rate are responsible for the dynamics of the system. These are the replicator traits that evolve, and all three of them depend on p fold , albeit each one in a different way. Replicability and degradation rate (i.e., the direct components of replicator fitness affecting the rate of replicator birth and death, respectively, independently of their contribution to metabolism) are traded off: increasing p fold decreases replicability (Eq. 5), which is a negative fitness effect, but it also decreases the rate of replicator degradation (Eq. 3), which is positive. The indirect fitness component is enzymatic activity, which increases with p fold (Eq. 2) and improves the probability of replication through its positive effect on metabolism (Eq. 4). It is the combined fitness effect of these three fitness components at particular p fold values (i.e., at certain folding energies) that is optimized during the evolution of the replicator community. Figure 6 illustrates how the mean of the folding energy distribution of the replicator community proceeds towards the low end of its possible range. The free energy histogram of the initial (random) population has a Cauchy-like profile ( Figure 6A) for all replicator types. Note that enzymatic activity 3 (green on the histograms) requires the presence of the longest sequence-constrained loop in its secondary structure, thus it has the highest average folding energy of all replicator types; otherwise the profile of the distribution is the same as those of the other replicators. During the simulations the initial energy distribution flattens out at first ( Figure 6B) and then the energy profile shifts towards the empirical energy minimum (E min = −25 kcal/mol, Figure 6C) where, finally, it accumulates ( Figure 6D) so that in the stationary replicator community most replicators belong to the lowest (optimal) energy class. Since a loop responsible for an enzyme activity can be embedded in sequences of very different folding energies, once a loop is in place the rest of the sequence evolves towards more durable (tightly folded) structures. Of course, to reach the energy minimum is not the direct target of the evolutionary processthe real target is the fitness maximum which, in this case, happens to coincide with the energy minimum (i.e., with tight folding). Compact folding is beneficial for fitness through its direct decreasing effect on replicator degradation andfor metabolic replicatorsits indirect metabolic effect mediated by increased enzymatic activity. These advantages over-compensate its adverse effect on replicability.
The nearly identical energy distributions of functionless (parasitic) and metabolically active replicators at the stationary state of the system may be surprising at first, but the explanation is simple. Mono-active replicators easily mutate to parasitic ones, and parasites are in principle free to mutate further towards increasing their direct fitness (by higher replicability and/or lower degradation). However, fast replicating parasites are strongly selected against locally, because they demolish metabolism in their own vicinities and starve to death. That is, parasites are quickly eliminated by local selection, and permanently reintroduced by mutation from metabolically active replicators. Thus the parasites we see in the system are the close sequence relatives of functional metabolic enzymes, representing their parasitic quasispecies [27] created by mutation and effectively selected against on a peaky fitness landscape. Sequence similarity implies energetic similarity on averagethis is why the energy profile of parasites remains close to that of metabolic replicators.
Length penalty
To understand the effect of the length penalty parameter (b 2 ) we have to see that the system maintains a delicate evolutionary balance between high replicability (which implies short sequences, Eq. 5) and efficient enzymatic activity (which requires long enough sequences) Eq. 2). The replicability criterion and the enzymatic efficiency criterion act on the length of the sequence (L) antagonistically, resulting in a trade-off relation between these evolving traits. On the one hand, the shorter the sequence the faster its replication. On the other hand, longer sequences may acquire more complex folded structures which in turn may account for a more efficient enzymatic activity. Besides the length of the sequence, enzymatic efficiency depends on the energy of the folded structure, too, but folding energy may change with sequence changes almost independently of L. The actual length of the evolved replicators (around 30 nt, which is, amazingly, almost independent of model parameters) is the optimum compromise between all the length-dependent fitness criteria.
Recall that the length penalty parameter represents the time needed to add a single nucleotide to a new replicator being produced during the template copy process. Low length penalty means that the elongation of a long sequence is punished less (in terms of the replicability component of fitness) and consequently the length of replicators evolves to the upper limit allowed in the model (70 nt, Figure 5C). The increase of stationary replicator length at extremely low length penalty is driven by the fitness advantage (the decrease of degradation rate and the increase of enzyme activities) due to the more compact structure that longer sequences can acquire. Longer replicators can accommodate multiple active centres more easilythis is why promiscuous ribozymes can reach much higher proportions at the extremely small, biologically unrealistic length penalty b 2 = 0.001 (cf. Figure 3). This effect also shows up on Figure 7B, where the relatively small enzyme activity of mono-active replicators within the 0 < b 2 < 0.05 range is due to the fact that some of the activity is in promiscuous sequences which are not included in the data. The frequency of promiscuous enzymes is negligible above this range.
Even a very small increase of b 2 (i.e., a small decrease in the speed of replication) is sufficient to cut back the stationary length of the replicators to 30-35 nt, and it essentially does not change with increasing length penalty any further ( Figure 7C). This suggests that 30-35 nt is about the minimum size of a replicator capable of maintaining a single enzyme activity in a sufficiently stable structure. Since the parasitic replicators present in the system are the products of a single (or at most a few) point mutations of mono-active metabolic replicators, they preserve the length of their masters, as explained in the previous section.
The most conspicuous effect of variation in length penalty is on the ecology of the metabolic replicator community, which is summarized in Figure 7A. The triangles on the graph represent the proportions of surviving systems out of 100 independent simulations, each initialized with different random number seeds, but identical otherwise. The coloured data points show the actual stationary frequencies of the different replicator types after 10 6 generations, provided that the system survived. Within the b 2~[ 0-0.25] range the replicator community reaches the stationary state in all simulations, with the mean frequency of replicators (averaged over the 100 replicate simulations) falling between 0.2 and 0.35. These stationary frequencies do not change at higher length penalties either, but from b 2 = 0.25 upwards the metabolic replicator community may collapse, and it does so with increasing probability. That is, the system either survives and approaches its intrinsic equilibrium or it dies out; the probability of extinction increases with the length penalty parameter. b 2 = 0.25 may be the break-even point for the average initial fitness of the system: below 0.25 the combined effect of the three fitness components (replicability, degradation rate and enzyme activity) results in positive initial population growth potential on average, which leaves ample possibility for replicators to evolve towards even higher fitnessas a result the replicator community always coexists and evolves under b 2 = 0.25. Increasing the length penalty decreases the average replicability of the replicators (cf. Eq. 5), implying that their average reproductive potential decreases (Eqs. 6 and 7). At about b 2 = 0.25 the average growth potential of the initial (random) replicator community turns to negative, so that at length penalty above 0.25 the average initial system starts to decrease. The shrinking replicator community cannot escape going extinct unless a lucky combination of the fitness components creates a local "seed" of metabolic replicators with positive growth potential, which may then invade the whole lattice. The booting up of surviving systems starts with the temporary proliferation of biactive ({1,2} and {1,3}) replicator types capable of performing all the three necessary catalytic functions together ( Figure 4A), which in turn requires a temporary increase in replicator length L (data not shown). The chance for this to happen decreases with increasing b 2 , because high length penalty implies a fast track to extinction, with little chance for the promiscuous "seed" to assemble. The evolved community consists of all the three metabolic replicators accommodating at least one enzymatic activity each, so their average stationary length cannot go below the empirical threshold L~35 ( Figure 7C).
Conclusion
Perhaps the most striking feature of the model is its remarkable robustness against changes in almost any of its parameters. The two exceptions are metabolic neighbourhood size (h) and the cost of replicator elongation (length penalty; b 2 ), but even those act only on the ecology of the replicator community, leaving the evolved traits of the replicators (length, folding energy, enzyme activity) mostly unchanged.
Coexistence
Metabolic neighbourhood size defines the distance of the system from the mean-field case in which all the replicators have the same monomer supply and thus the one of highest replicability inevitably excludes all the other ones and the metabolic system collapses. Small h gives an advantage to metabolically active replicators, and the advantage is highest for the least abundant metabolic replicator type. This advantage of rarity keeps the replicator community coexistent, but increasing h benefits parasitic replicators which become abundant in the stationary replicator community at larger metabolic neighbourhoods ( Figure 4A). The conclusion from all this is that for an efficient metabolic replicator system to be maintained a small metabolic neighbourhood size is necessary, which entails three different physicalchemical properties of small metabolites and monomers: slow surface diffusion and/or fast degradation and/or fast desorption from the surface. The direct criterion of coexistence is that metabolites and monomers do not drift far from the enzyme producing them before they are a) used in a metabolic reaction (metabolites) or in replication (monomers), b) degraded, or c) desorbed from the surface. Even though increasing h decreases the stationary frequencies of metabolic replicators and increases that of parasites, it does not affect the evolved traits of metabolic replicators: they remain essentially the same across the range of h studied ( Figure 4B and C). The length penalty parameter (b 2 ) affects system survival in a probabilistic manner: the higher the length penalty the less chance for the metabolic replicator community to survive ( Figure 7A), but the evolved features of surviving systems are also the same across the b 2 range.
Robustness
Another conspicuous feature of the data on Figures 3, 5, 6 and 7 is that the evolved enzyme activities, folding energies and lengths of the different replicator types are not only invariant across the relevant parameter ranges of the model, but they also seem to converge: at any specific value of D, h or b 2 the data points representing the different replicator types are very close to each other. This suggests that the evolving traits (length, folding energy, enzyme activity) of the replicators tend to become similar during the evolution of the communities, and this tendency to trait convergence is essentially independent of the actual system parameters. The underlying principle might be that of group selection [28]. The most efficient local communities are those in which all the three metabolically active replicators are present in a uniform frequency distribution, because these produce the most monomers to support their own reproduction (cf. Eq. 4). Maintaining the optimal even distribution of metabolic replicators in the long runand thus the highest indirect fitness for the communityis possible only if the fitnesses of the replicators are as similar as possible, hence the group selection pressure towards uniform replicator traits. Note that local group selection is also the ultimate mechanism that keeps the system coexistent in the first placeits effect through evolved fitness evenness is an extra component of ecological robustness in the Metabolically Coupled Replicator System.
Enzyme efficiency and specificity
We have addressed two general aspects of ribozyme evolution with this model: those of efficiency and of specificity. Adaptation in efficiency means improvement in the activity of the enzyme in the reaction it is supposed to catalyse. Enzyme activity is dependent on the substrate affinity and the conversion efficiency of the active centre. These are traits which can evolve somewhat independently of each other, the first being dependent on the size and shape of the binding pocket [29], the second on the positions of chemically active groups inside the pocket. Of course the simultaneous evolution of substrate affinity and conversion efficiency is well possible [30], as implicitly assumed in the different enzymatic activities attributed to different active site sequence patterns ( Figure 2) in the model. The fact that mean ribozyme activity jumps close to its theoretical maximum from the low activity start in a very short time and it remains high for any parameter setting shows that the indirect (metabolically mediated) replicator fitness component is under very strong positive selection.
It is probable (and, for a successful launch of the evolution of catalysis, also sufficient) that early ribozymes were not very efficient, but they might have possessed more than a single catalytic activity (enzyme promiscuity). Catalytic promiscuity is known to exist both in protein and in RNA enzymes, with different mechanisms in the two cases. For protein enzymes, the binding pocket may either attach different substrates (substrate promiscuity), and/or it can catalyse more than a single reaction on the same substrate (catalytic promiscuity). In RNA enzymes the mechanism of promiscuity is different, because it occurs through different foldings of the same sequence, so that the same ribozyme can operate as two different enzymes in time-sharing mode [31][32][33].
The evolutionary advantages of temporary ribozyme promiscuity are obvious, if different catalytic functions need to be present at the same location for a vital function of an evolvable system to work, even if the cost of catalytic promiscuity is paid in reduced enzyme activity, as is the case for ribozymes in general. Keeping different, functionally complementary ribozymes together can be realized through closing them in vesicles [34] or by constraining their mobility on mineral surfaces [15,35,36]. In either case, communities of promiscuous ribozymes may increase against the odds of extinction due to local demographic stochasticity when catalytically active replicators are scarcei.e., in the initial state dominated by random, mostly catalytically inactive sequences. However, for promiscuity to persist in spite of the higher efficiency of specialized catalysts, replicator mobility needs to be extremely low [18] which is a condition rather unlikely to occur both in vesicles and on mineral surfaces. For metabolisms of even a few more than two reactions the promiscuity of the enzymes gives only a marginal advantage, because bi-active ribozymes still need to be complemented by other ones, which requires replicator mobility anyway. Based on our model's predictions it seems likely that metabolically more efficient mono-specific ribozymes had soon taken over the catalytic functions of promiscuous ones after a short transient upshot of the latter. The evolutionary process of ribozyme specialization might have proceeded through the effects of the replicators' fitness components (degradation rate, enzymatic activity, replicability) which are all mediated by the folding probability p fold and, ultimately, by the folding energy E of the replicator sequences.
In previous MCRS models [15,17,19] we have implemented replicators as abstract chemical entities capable of self-reproduction and enzyme activity. Those studies were aimed at verifying the MCRS concept as a feasible mechanism for prebiotic replicator coexistence. The present model differs from the previous ones in that the abstract replicators are replaced by RNA sequences, the energetic features and the corresponding 2D folding structures of which are explicitly considered in defining their enzymatic activities, decay rates and replicabilities. The aim of this study was to demonstrate that the MCRS concept does work with RNA sequences featuring realistic physicochemical properties (2D structure and free energy) at least as well as it does with abstract replicators. Our results suggest that the MCRS principle is as robust in this model as it was in the previous ones, leading to dynamics qualitatively similar to those of the toy models: the system of metabolic replicators stays coexistent indefinitely, and it resists destructive parasitism by fast replicating, non-enzymatic RNA sequences. An additional, attractive feature of the new model is its remarkable insensitivity to changing its crucial parameters across wide ranges, the likely reason of which might be the feedbacks and trade-offs realized through the folding energetics of the sequencesi.e., more realistic replicator traits that the toy models did not consider. The consequent improved structural stability of the explicit replicator system lends further support to the fundamental logic of the MCRS concept.
We have also addressed the question of possible evolutionary scenarios for building up MCRS in previous work [18] using abstract replicators. The same problem is, of course, pertinent with explicit RNA replicators as well. The occurrence of new metabolic enzyme activities and the increase of existent ones in mutant replicators is a possibility in the present model as well as in the toy versions, the only substantial difference between the two approaches being the effect of diffusion on the evolution and the dispersion of promiscuous replicators admitting multiple enzyme activities: it was enhanced by increasing diffusibility in the toy model, whereas it remained in essence the same in the explicit version. It would be interesting to see results with larger metabolic neighbourhood (h = 7×7) and higher replicator diffusion. One would expect that parasitic sequences can diverge more since they will not as easily demolish metabolism in their own vicinities, due to larger diffusivity.
Reviewers' report
The results of the simulations with h = 7×7 and D = 4 ( Figure 3) show that increasing metabolic neighbourhood size, even if it decreases replicator frequency, does not affect the stable persistence of the system. Its stationary characteristics (the equilibrium distributions of replicator frequencies, enzymatic activities and sequence chain lengths) have also been largely insensitive to the actual value of the dispersal parameter D (Figure 4). This is in good accordance with our previous results [19] obtained with toy model simulations scanning a wide range of the (D, h, r) parameter space. The earlier study revealed a positive effect of increasing dispersal (D) which prevents the aggregation of conspecific replicators, and a negative effect of increasing h, though approximating the meanfield situation that is known to go extinct. The deleterious effect of larger h can be compensated by increasing D to some extent, but the compensatory effect is limited: at too large metabolic neighbourhood sizes the system collapses anyway (cf. [19]: Figures 2 and 4). The parameter setting suggested by the Reviewer (large h, large D) is, unfortunately, practically not feasible in the explicit model, because even on a high-capacity grid computer it would take months to run a single simulation. The huge difference between the CPU time demands of the toy models and the present one is due to the "handling time" of sequence folding. However, the results of the toy models are likely to carry over to the explicit case in this respect, too, since in all other respects we experienced qualitative matches.
Importantly, the authors should provide their code as an Additional file or deposit it to a public repository for others to reproduce or extend the model calculations.
As the code is the result of a long process of development, and it will be further developed for later studies, we do not find it convenient to publish it at this stage. However, we are willing to send the code to the reviewer or to anyone for further studies or for reproducing our results, on an individual basis.
Minor comments: pg [19][20] The discussion about the dynamics of parasites is repetitive in this section. I suggest to delete or shorten this part: "Mono-active replicators easily mutate to parasitic ones … their own vicinities and starve to death".
We have shortened this part.
Typos: page 11: "we assume that different RNS molecules" change to RNA Corrected. page 17: "i.e., larger metabolic neighbourhood) that fosters parasite invasion, which is evident on Figure 4A".change to Figure 3A Corrected.
Reviewer 2: Anthony Poole
This is a very elegant study which has been explained in very accessible language. I found the results very insightful, and need say little other than that, for those interested in the RNA world, this is a paper well worth reading.
We thank the Reviewer for their positive judgment of the study.
For my money, the most exciting result is that this model suggests that catalytic promiscuity in early ribozymes may have been extremely short-lived. This bears thinking about, particularly given the view, popular in protein science circles that early enzymes were promiscuous (both in their substrate specificity and enzymatic reactions). It is also noteworthy that group selection appears as a feature of the model. This is broadly consistent with the cooperative networks that Lehman and colleagues observed for fragmented ribozymes (Nature 491:72-77). I would be interested to see a brief discussion of that work and how it relates to the authors' findings.
Lehman and co-workers showed that mixtures of RNA fragments that self-assemble into self-replicating ribozymes can form catalytic cycles and more complex networks. We agree that some aspects of our model show some similarity to some of Lehman's, even though the two models (Lehman's and ours) are essentially different in their basic assumptions. Ours assumes cooperation in the evolutionary sense, so that a complete metabolic neighbourhood (a cooperating "team" of potentially competing replicators) is capable of replicating a focal sequence which thus will have two identical copies locally (apart from mutations). The model of Lehman, on the other hand, assumes collective autocatalysis: in which the complex networks arise due to the fact that the members of the set catalyze each other's formation, rather than replication. Yet, it is true that both models are prone to being parasitized and ultimately exterminated by parasitic replicators in a mean-field framework, both requiring some form of spatial structure as a potential defense: "Longer term evolutionary optimization would have required spatial heterogenity or compartmentalization to provide lasting immunity against parasitic species or short autocatalytic cycles." (Vaidya et al. Nature 491:77 (2012)).
Just a minor quibble about this statement in the Background: "recent organisms still carry reliable clues suggesting that RNA had played a central role both in the metabolism and in the genetics of very early forms of life [3,4]". The papers cited here are both excellent, but address the more chemical aspects of the origin of RNA itself and of RNA catalysis. By contrast, the recent paper by Hoeppner et al. (PLoS Comp Biol 8: e1002752. http:// dx.doi.org/10.1371/journal.pcbi.1002752) used a comparative genomic approach to look at whether there are 'clues' of the RNA world in modern organisms, so is perhaps more appropriate, given the sentence.
Thanks for drawing our attention to the paperwe have cited it in the corresponding part of the text.
Minor comments/typos (can be deleted from the review once addressed): Page 7, the use of the term "catching" -perhaps "binding" is more appropriate here. Enzymologists routinely talk about substrate binding and product release.
We rephrased the sentence in a more clear way.
Reviewer 3: Armen Mulkidjanian
Könnyű and co-workers have attempted to fill the gap between purely mathematical, "toy" modeling of very early evolution and the physico-chemical realm within which such evolution may have proceeded. Specifically, the model of Könnyű and co-workers explicitly accounts for the primary and secondary structure of replicators. Hopefully, the authors would continue their efforts to model the physics and chemistry of the early evolution. Therefore, the comments below contain certain recommendations which could be realized either upon revising of the given manuscript or in the future work of the authors.
Major comment: 1) My major concern is the plausibility of the metabolic part of the model. The authors assumed that "the different RNA species cooperate to produce monomers for their own replication, and possibly also to supply other "common goods" for the replicator community". Thereby, only three catalytic activities were assumed to be sufficient to perform all these functions in the model of the authors. Obviously, the assumption of only three catalytic activities is an oversimplification, which is quite understandable in the given context. In a wider context, however, the source of monomers and "common goods" for the first replicators is one of the open questions in the origin of life research. Apparently much more than three catalytic activities should have been simultaneously needed to produce different monomers and, in addition, the "common goods". Furthermore, there is no evidence of ribozymes capable of synthesizing nucleotides from scratch; the chemistry of the RNA catalysis, as revealed so far, is not very encouraging in this respect. A possible solution for this conundrum is that monomers and other common goods (e.g. amino acids and sugars) could be initially produced in abiotic reactions [1][2][3][4][5]. Only later, step by step, the first replicating entities may have learned to synthesize nucleotides. Syntheses of nucleobases and amino acids were recently demonstrated in the solutions that contained formamide or urea and in the presence of UV-light, see [6,7] for reviews. Environments with high levels of amides/urea could be envisioned on the primordial Earth [6,8,9]. The hypothesis of abiotic origin of monomers got a major boost after Sutherland, Powner and their co-workers succeeded in synthesizing nucleotides from scratch in geologically plausible, one-pot settings [10,11]. In addition, these authors have shown that natural nucleotides were particularly UV-stable, so that their relative fraction selectively increased under UV illumination [10], in support of earlier theoretical predictions [12]. In such a case, the very first replicators would require only catalytic abilities of assembling abiotically formed monomers -a task that should have been feasible for ribozymes. Hence, scenarios of "abiotic syntheses" match the simplified approach of Könnyű and co-workers in that they imply only few catalytic activities. In a framework of such scenarios, however, an essential source of monomers for replication would be the decay of other replicators. Furthermore, the abilities to accelerate this decay by cleaving neighboring sequences (which would correspond to a reversion of the assembly reaction) as well as to use the fragments for building the own "body" would be extremely advantageous. A scenario of "abiotic syntheses", of course, would require a separate model that would differ from the model in the given manuscript. Still, in the view of anticipated importance of replicator decay/ cleavage, an analysis of the present model in relation to the decay processes, namely a consideration of the model outcome as a function of decay parameters (currently absent from the manuscript) might be of use for readers.
We absolutely agree with the Reviewer in that even a very simple realistic metabolism requires a lot more different types of catalysts than postulated in our model. However, we are also sure that the only way such a metabolically competent ribozyme set could have evolved is through the retroevolutionary mechanism explained in some detail elsewhere [20][21], which must have started from abiotiocally produced monomers at its very beginning, exactly as the Reviewer suggests. We have inserted a paragraph into the Background section pointing this out explicitly and citing the relevant literature.
Minor comments:
2) The manuscript would benefit from a graphical presentation of the focal cell with its neighborhood. Without such a figure, the sentence "For a replication event to occur the focal replicator s must be complemented by all three different enzymatically active molecules in its metabolic neighborhood (MET(h,s), the set of h sites concentric on the site of the focal replicator s)" is not quite clear.
We added a figure that explains metabolic and replicator neighbourhood configurations.
3) It is not clear how the model accounts for "the time consumption of "releasing" the product and "catching" the next substrate for catalysis". The whole section on sub-additive effects is rather incomprehensible and showed be re-written in a more clear way.
We rephreased it in a more clear way.
4) The statement that "replication is possible only in the unfolded state" should be defined as one of assumptions of the model. Generally, it is possible to imagine that unfolding could proceed concurrently with replication. That is how replication occurs in our cells.
Indeed, we cannot exclude the possibility of the simultaneous unfolding and replication of RNA molecules on the basis of first principles, but note that our postulate of the temporal separation of the two processes is in fact a worst-case assumption: the chance of catalytically active, low-energy folds to be replicated is small. Therefore if this assumption has any effect on the results, then it is negative, but we actually think that changing it may not alter the results in the qualitative sense. | 13,851.6 | 2015-05-27T00:00:00.000 | [
"Biology",
"Computer Science",
"Engineering",
"Environmental Science"
] |
Optimization of Process Parameters Based on Machine Learning and Determination of Relative Importance of Process Variables on Jute Ply Yarn Breaking Extension
The breaking extension of jute ply yarn is comparatively low and ordinarily varies from 1.5 – 3.5% depending upon the process parameters like number of plies, single yarn twist factor and ply to single yarn twist ratio. To achieve various levels of jute ply yarn breaking extension within the above range, optimization of the three process variables was done in this study using a machine learning-based decision tree. For such optimization, twenty-seven different types of jute ply yarns were produced using three levels of singles' twist factor, number of ply and ply-to-single twist ratio. A total of 216 observed breaking extension values of these yarns were used for regression-based machine learning wherein 67% of test data were used to train the mode and the remaining 33% of test data were used for validation. An 8-node decision tree thus achieved from the model was used for the optimization process. A boxplot vs. terminal node graph was also used for classified optimization of ply breaking extension for various levels. The study conducted in this work reveals that the breaking extension of jute ply yarn varies directly with the number of plies, where 4-ply jute yarn produces maximum breaking extension and 2-ply produces the minimum. It was also observed that the decision tree was useful for the judicial selection of process parameters to achieve various levels of jute ply yarn breaking extension, wherein, it was found that the critical values for ply to single yarn twist ratio and single yarn twist factor were 0.80 and 26. The study also shows that apart from the individual influence of the variables on the breaking extension of ply yarns, interactions between variables also influence the breaking extension of jute ply yarn.
INTRODUCTION
Jute is a lingo-cellulosic fibre which possesses high tensile strength, low extensibility, high modulus and high surface friction [1][2].Apart from its traditional use in the field of flexible packaging for centuries, jute is now being used in composites and other industrial applications due to its favourable properties like high specific modulus and low breaking extension [3].It is a well-known fact that single yarns made of staple fibres are hairy, less resistant to abrasion, more uneven, and, most importantly, have lower specific strength and elongation.Despite these shortcomings, single yarns are still used for the majority of textile products are made from single yarns.However, plied or folded yarns are used in some cases https://doi.org/10.31881/TLR.2023.206 to obtain unique qualities in yarn and/or fabric that cannot be obtained through any other means.
Research indicates that plying several single yarns enhances important yarn characteristics such as strength, elongation, evenness, hairiness, and abrasion resistance [4][5].Elongation at break is one of the prime quality attributes of any spun yarn.Optimum elongation at break is important for the processability of the yarn in the downstream processing like weaving and knitting as well as the properties of the end products made with the yarn [6].Researchers also observed that controlled extension is one of the key parameters for the satisfactory performance of sewing thread [7].
Significant research findings are available where researchers have studied the extension behaviour of ply yarns made out of various textile fibres.According to Palaniswamy & Mohamed's observation, the degree of twist in a cotton ply is directly correlated with the breaking elongation of the yarn [4].
According to Omerglu's research, the breaking elongation of cotton plied yarn is statistically influenced by both the ply and single yarn twist levels, with the effect of single twist being more pronounced in finer yarns [8].Twist multiplier and twist direction's effects on acrylic-viscose plied yarn breaking elongation were reported by Tarafdar [9].
Although plenty of studies are available for other textile materials, the same for jute has not yet been reported.As jute has distinctive properties like low extension at break, this necessitates an independent study to understand the influence of individual process variables like number of plies, single yarn twist factor, ply to single twist ratio and single yarn breaking extension as well as their interactions on the breaking extension of jute ply yarn.As low extension of jute yarn is sometimes detrimental to its performance in sewing [10], it is also important to optimise the above parameters to achieve the desired level of ply-breaking extension.
The Classification and Regression Tree (CART) is one of the simplest predictive algorithms used in machine learning for such optimization of variables to achieve a desired output.CART is a powerful and popular algorithm due to its interpretability and its ability to capture non-linear relationships between features and the target variable.The algorithm works by recursively partitioning training data into smaller subsets based on threshold values of decisive features.Decision trees with several terminal nodes are created using the CART algorithm, allowing for parameter optimization [11,12].
While machine learning for textile optimization is still relatively new, a few recent researches have documented using it to optimize textile processes and materials [13][14][15][16][17]. Gültekin et al. used the decision tree regression method for the prediction of static tear strength performance from woven fabric physical parameters [13].Thakur et al. used various machine-learning approaches to classify fabric defects like holes, knots, slubs and stains on solid woven fabrics [14].The least-square support vector regression method of machine learning was used by Pervez et al. for optimization exhaustion percentage, fixation rate, total fixation efficiency and colour strength in reactive cotton dyeing process [15].https://doi.org/10.31881/TLR.2023.206 Efforts have been made in this study to optimize the important parameters quantitatively that influence jute plied yarn breaking extension using the CART regression module.The relative importance of those parameters in determining jute plied yarn extension as well as the variable interactions have also been studied in this work.EXPERIMENTAL 2, 3, and 4-ply jute yarns of 930 tex resultant count were prepared from 465 tex, 310 tex, and 233 tex single jute yarns, respectively, using raw jute of TD4 quality [18].Machine parameters were set to achieve Z-twisted single yarns with three levels of twist factor of 24, 26 and 28 in the tpc-tex unit.The singles were then used to prepare S over Z twisted ply yarns using three levels of ply-to-single twist ratio (0.5, 0.7 and 0.9).A total of 27 different types of plied yarns, thus prepared, were used for the investigation, taking into account three levels of ply number, three levels of single yarn twist factor [TF(S)], and three levels of ply to single twist ratio [P:S].The details of machine parameters used in spinning single yarns and twisting of ply yarns are given in Table 1.To achieve the desired level of twist in the single as well as ply yarns, draft change pinions were used accordingly.0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 https://doi.org/10.31881/TLR.2023.206 For the regression model, a total of three variables (predictors) were taken into account: the number of plies, the single yarn twist factor, and the ply-to-single twist ratio.
While the single and ply yarns' twist levels (tpc) were evaluated in accordance with ASTM D1423, the yarns' breaking extension was tested in Instron UTM in accordance with ASTM D2256 and ten observations of breaking extension for each quality were recorded.As jute fibre possesses inherent variability in length and thickness, the properties of jute yarn vary widely for a particular quality [19].
The same variations in the breaking extension of jute ply yarn were also observed in this case, and accordingly, the Symmetric Trimming methodology was used with the removal of the top and bottom 10% of the test data [20,21].
Minitab® 21.4.2 statistical software was used for the CART regression algorithm to get regression decision trees.In the model, 67% of test data was used to train the machine while the remaining 33% of data was used for testing the model.The decision tree that lies within 1 standard error and has a maximum coefficient of determination (R 2 ) was considered an optimal tree for decision-making.3.
Model Validity and Accuracy of Prediction:
The accuracy of the model was analysed from the output coefficient of determination (R 2 ), mean absolute per cent error (MAPE) as well as scatter plots of response fits vs. actual values for both training and test data (Figure 1).From the model summary given in Moreover, as can be seen from the scatter plot in Figure 1, both the test and training data points are relatively close to the calculated line, which denotes an equal amount of actual value and response value.Furthermore, identical scattering patterns between the training and test sets imply that the tree's performance on fresh data is comparable to its performance on training data.From the decision tree, it can be seen that the average value of breaking extension for 2 and 3-ply yarn is 1.90 (node N2) and the same for 4-ply yarn is 2.50 (node N4).Again when node N2 is divided downstream according to many ply, it provides terminal nodes TN2 for 2 ply yarn and TN3 for 3 ply yarn which show average breaking extension values of 1.89 and 2.10 respectively.Thus, when TN2, TN3 and N4 are compared, it is evident that the average breaking extension of 2-ply, 3-ply and 4-ply jute yarns are 1.89%, 2.10% and 2.50% respectively.These indicate that the breaking extension of jute ply yarns increases with the increase in the number of plies.
When ply to single yarn twist ratio is considered as a predictor, it can be seen from TN1, N3, N5 and N7 of the decision tree that P:S of 0.80 is a decisive value; breaking extension of jute ply yarns becomes significantly low below that P:S and significantly high above it.This may be due to the higher obliquity of single yarns in the ply due to the higher ply to single twist ratio which may contribute to higher breaking extension of the ply yarn.The same is also evident from the breaking extension values in the terminal nodes TN4 and TN5.https://doi.org/10.31881/TLR.2023.206 From N6, TN6, TN7 and TN8, which are branched based on the decision rule of single yarn twist factor, it can be seen that a single yarn twist factor of 26 is a decisive figure.Jute ply yarns made out of higher singles' twist factor show high ply breaking extension.Using the decision tree and the boxplot, the following optimizations are made for various ranges of ply-breaking extension - A 2-ply jute yarn always shows a breaking extension of less than 2.0%, irrespective of other process parameters.
When one desires to produce jute ply yarn having breaking extension between 2.0 to 2.25%, the same can be achieved from 3-ply jute yarn having P:S 0.80 -0.90.The same range of breaking extension is also achievable by using 4-ply jute yarn, but in that case, the single yarns should have a twist factor below 26 and ply to single twist ratio should be kept within 0.50 -0.80.
Jute ply yarn with breaking extension in the range of 2.25 -2.50% may be achieved in 4-ply yarn by using singles' having a twist factor more than 26 and twisted using a P:S between 0.50 -0.80.
If 4-ply jute yarn is made out of slightly low twisted singles (TF < 26) but ply to a single twist ratio above 0.80, the resultant ply yarn may show breaking extension in the range of 2.5 -3.0%.
When the higher breaking extension of jute ply yarns (more than 3.0%) is required, the same may be achieved by using high twisted singles (TF > 26) and a high level of P:S (>0.80)
Interaction between the Predictors
To determine the interaction between the variables used for this study, multiple regression analysis with response optimisation was done and given in Fig. 6.It is observed that among the three variables considered in the study, the interaction between single yarn twist factor and No. of ply is significant, as the slope of each of the mean BE(P) lines is different from the other.Similar interaction has also been observed between ply to single twist ratio and No. of ply.However, no interaction between the single yarn twist factor and ply-to-single twist ratio has been observed.
CONCLUSION
From this study, it can be concluded that the classification and regression tree (CART) based machine learning may be used to optimise jute ply yarn process parameters to achieve various levels of desired https://doi.org/10.31881/TLR.2023.206breaking extension of the ply yarn.From the study, it is observed that the breaking extension of jute ply yarn is influenced by the number of plies used, single yarn twist factor and ply-to-single twist ratio, where the relative importance of many plies is the most followed by ply-to-single twist ratio.The study also shows that apart from the individual influence of the above variables, the breaking extension of jute ply yarn is significantly influenced by the interaction between the single yarn twist factor and many plies as well as the interaction between ply to single yarn twist ratio & number of ply.
The following optimised parameters were thus achieved from the study for various levels of jute ply yarn breaking extension: Breaking extension of less than 2.0% was achieved from 2-ply jute yarn, irrespective of other process parameters.
Breaking extension between 2.0 to 2.25% was achieved in 3-ply jute yarn having ply to single twist ratio between 0.80 -0.90.The same range of breaking extension was also observed in the case of -ply jute yarn when single yarns' having twist factor below 26 tpc-tex unit were used and ply to single twist ratio was kept within 0.50 -0.80 Jute ply yarn breaking extension in the range of 2.25 -2.50% was achieved in 4-ply yarn having a twist factor of more than 26 and twisted using a P:S between 0.50 -0.80.
4-ply jute yarn made out of slightly low twisted singles (TF < 26) but ply to single twist ratio above 0.80 showed breaking extension in the range of 2.5 -3.0%.
The maximum level of breaking extension of jute ply yarn was observed when the same was made out of high twisted singles (TF > 26) and a high level of P:S (>0.80).
of 216 test data of plied yarn tenacity considered for the model after the removal of 10% top and bottom outliers from 270 observations, 146 random observations were used for training the CART regression model and the remaining 70 observations were used for testing the model accuracy.The response information received after training and testing the model is given in Table
Figure 2 .Figure 3
Figure 2. Coefficient of determination vs. number for terminal nodes
Figure 3 .
Figure 3. 8 node regression-based decision tree for optimization of jute plied yarn breaking
Figure 4 .
Figure 4. Boxplot of ply yarn breaking extension for each terminal node
Figure 5 Figure 5 .
Figure5obtained from the model shows the relative importance of all three predictors; No. of ply, single yarn twist factor and ply-to-single twist ratio.The figure illustrates that the number of ply is the most significant parameter (top predictor) of the breaking extension of jute plied yarn followed by ply to single twist ratio which contributes 69.2% concerning the top predictor in determining ply yarn breaking extension.On the other hand, the single yarn twist factor has a 26.5% impact on jute ply breaking extension.
Figure 6 .
Figure 6.Variable interactions plot for ply yarn breaking extension
Table 1 .
Process parameters of spinning and twisting
Table 2
below shows the yarn parameters used for the preparation of 27 different types of experimental samples.
Table 2 .
Single and ply yarn parameters
Table 3 .
Model response information
Table 4
and test data have only a 3% and 4% ratio between the fitted error and the actual value across all cases.
model is efficient enough to predict response for new yarn parameters.Also, MAPE values show that https://doi.org/10.31881/TLR.2023.206 the training | 3,670.8 | 2024-03-15T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
In vitro fatigue and fracture testing of temporary materials from different manufacturing processes in implant-supported anterior crowns
The aim of this study was to investigate the in vitro fatigue and fracture force of temporary implant-supported anterior crowns made of different materials with different abutment total occlusal convergence (TOC), with/without a screw channel, and with different types of fabrication. One hundred ninety-two implant-supported crowns were manufactured (4° or 8° TOC; with/without screw channel) form 6 materials (n = 8; 2 × additive, 3 × subtractive, 1 × automix; reference). Crowns were temporarily cemented, screw channels were closed (polytetrafluoroethylene, resin composite), and crowns were stored in water (37 °C; 10 days) before thermal cycling and mechanical loading (TCML). Fracture force was determined. Statistics: Kolmogorov–Smirnov, ANOVA; Bonferroni; Kaplan–Meier; log-rank; α = 0.05. Failure during TCML varied between 0 failures and total failure. Mean survival was between 1.8 × 105 and 4.8 × 105 cycles. The highest impact on survival presented the material (η2 = 0.072, p < .001). Fracture forces varied between 265.7 and 628.6 N. The highest impact on force was found for the material (η2 = 0.084, p < .001). Additively and subtractively manufactured crowns provided similar or higher survival rates and fracture forces compared to automix crowns. The choice of material is decisive for the survival and fracture force. The fabrication is not crucial. A smaller TOC led to higher fracture force. Manually inserted screw channels had negative effects on fatigue testing. The highest stability has been shown for crowns with a low TOC, which are manufactured additively and subtractively. In automix-fabricated crowns, manually inserted screw channels have negative effects.
Introduction
Implant-supported restorations are a state of the art for the treatment of anterior gaps and guarantee good to sufficient survival [1,2]. During osseointegration of dental implants, biocompatible materials with good marginal fit and sufficient flexural strength are needed for temporary restorations to guarantee quick, easy, and inexpensive production. Temporary crowns and fixed partial dentures (FPDs) play an essential role in the long-term therapeutic success of the dental treatment. The main requirements are protection of the prepared teeth or implant-supported abutments from thermal, chemical, or mechanical influences and to guarantee phonetic and mechanical function, as well as esthetic demands [3,4]. They are also indispensable to shape the marginal gingiva for optimal emergence profiles in implant restorations [5].
Temporary materials are typically based on methacrylate (MMA) or dimethacrylate (DMA) resins, containing approximately 30-40 w% inorganic fillers for improved modulus of elasticity and wear resistance [4,6]. Materials with different filler contents are expected to show different clinical performances [7]. Standard materials are automix (cartridge, paste-paste) systems, which require impressions or deepdrawing foils for labside or chairside fabrication [8]. While providing good clinical performance [9,10], heat of reaction and polymerization shrinkage could be a disadvantage of these materials [11][12][13]. In contrast to directly processed 1 3 materials, materials processed via computer-aided design and computer-aided manufacturing (CAD/CAM) usually have improved mechanical properties [14]. In addition, a significantly better fit of the indirectly fabricated temporary crowns can be assumed [13,15] Additive and subtractive CAD/CAM manufacturing is increasingly superseding conventional treatment in modern dentistry [16][17][18]. In case of fracture or loss, new restorations can be quickly rebuild [19,20]. CAD/CAM block materials for subtractive manufacturing are produced under controlled industrial conditions. Additively manufactured FPDs are produced in the dental lab. This leads to reduced residual monomers and a heating of the material can be avoided, which benefits to the surrounding tissue and prepared teeth. Less polymerization shrinkage promises more precise internal and marginal fit [21]. Compared to automix systems, no pulling off from teeth or models is needed, which makes flexibility for these materials obsolete. Less flexibility technically allows higher filler content and consequently improved mechanical properties. However, in case of materials for additive manufacturing, filler content and size are limited due to the need of viscosity of the resin [16,22,23]. In case of subtractive manufacturing, the bur size used in the milling process limits the design of subsequent restorations. Overall, this may favor CAD/CAM materials compared to conventional automix materials against the background of the requirements for temporary materials [24,25].
In case of insufficient removal of residual cement, temporary restorations can lead to gingival and peri-implant inflammation. This risk can be eliminated by cementing the restoration to an abutment outside the patients' mouth and leaving a channel for screw retention. However, stability of the restoration might be affected by the screw channel [26,27].
The clinical situation determines the total occlusal convergence (TOC) of the abutment: to guarantee a minimum material thickness, individual abutments or abutments with a bigger TOC might be indicated.
New materials and manufacturing techniques require extensive research before clinical use. In vitro tests can provide information about their properties, including mechanical stability, and long-term performance. Thermal cycling and mechanical loading might predict clinical failures in laboratory studies [28] and allow to estimate whether the temporary materials are appropriate for clinical application [29]. Clinical evidence for the success of CAD/CAM temporary materials is very limited [9,10,30], but in vitro data about performance and stability of these materials in comparison to conventional approved automix systems may help to estimate the clinical performance of these materials [16][17][18]31]. CAD/CAM and automix temporary materials were found to show comparable good clinical performance [31]. Despite frequent clinical use, there is limited scientific in vivo data on the performance of implant-supported temporary anterior crowns.
The aim of this study was to compare materials from different manufacturing processes for implant-supported temporary anterior crowns, to investigate the effect of different abutment TOC, and to evaluate the influence of a screw channel.
The hypothesis of this in vitro study was that the in vitro performance and fracture force of implant-supported temporary anterior crowns is influenced by the type of material, the type of manufacturing, a screw channel, or the TOC of the abutment.
Temporary anterior crowns (FDI #11) were digitally designed (inLab CAD SW 20.0.2, Dentsply Sirona, USA) with the default settings for resin composite materials. 192 crowns (n = 8 per material and group, esteemed power 85% for 8 specimens, G*Power, Kiel, D) were manufactured representing four groups: • 4° TOC of the abutment with a screw channel • 4° TOC of the abutment without a screw channel • 8° TOC of the abutment with a screw channel • 8° TOC of the abutment without a screw channel Six different materials were used representing two resinbased materials for additive manufacturing and three materials for subtractive manufacturing (two resin composites with different filler contents and one poly-ether-ether-ketone (PEEK) material). The crowns were milled, or 3D printed (details see Table 1). For the milled and printed materials, the insertion of a screw channel was selected in the pre-settings. One automix resin composite was used as a reference. Automix references were made by a direct method using the over-impression technique (Permadyne, 3 M, USA). A screw channel was manually drilled into the palatal sides of the automix crowns (FG shank round end cylinder diamond bur red/fine, diameter: 1.6 mm; Hager & Meisinger GmbH, D). Finishing and polishing was done, using rotary rubber cups (Astropol Polishing Kit, Ivoclar-Vivadent, Schaan, FL). All materials were fabricated according to the manufacturers' instructions.
All abutments were steam cleaned and the inner sides of the crowns were sandblasted (110 μm Al 2 O 3 , 2.0 bar). All crowns were temporarily cemented (Temp Bond, Kerr Dental, USA), and the screw channels were closed with polytetrafluoroethylene tape and temporary resin composite (Protemp 4, 3 M, USA). The crowns were then stored in distilled water (37 °C) for 10 days.
Specimens were subjected to simultaneous thermal cycling and mechanical loading (TCML: TC = 2 × 1200 cycles between 5/55 °C distilled water with duration of 2 min of each cycle, ML = 50 N for 4.8 × 10 5 cycles, f = 2.1 Hz, mouth opening = 2 mm, chewing simulator EGO, Regensburg, D) as standardized antagonists. All crowns were loaded with 6-mm steatite balls (CeramTec, Plochingen, D) in a one point of contact 1.5 mm below the incisal edge on the palatal crown side. Tests are equivalent to approximately 2 years of clinical service time [28,29].
Crowns which failed during TCML were excluded and dropout times (number of cycles) were recorded.
After TCML, all crowns were investigated in detail with a digital microscope (4-10 × magnification, VHX-S550E, Keyence, J). Fracture force of intact restorations was determined by loading the crowns to failure (Z010, Zwick Roell, D, v = 1 mm/min). In analogy to chewing simulation, the force was applied 1.5 mm below the incisal edge with a round end cylinder metal antagonist (d = 6 mm). To prevent force peaks, a tin foil (0.3 mm, Helago, Heinz & Laufer, D) was positioned between crown and antagonist. After fracture testing the crowns were optically examined and categorized. Fracture patterns were palatal fracture, mesial-distal fracture, palatal + mesial-distal fracture, or deformation.
Calculations and statistical analysis were performed with a statistical software program (IBM SPSS Statistics, v. 26.0 for Windows; IMB Corp, USA). Normal distribution of data was assessed with the Kolmogorov-Smirnov test. Means and standard deviations were calculated and analyzed using one-way analysis of variance (ANOVA), two-way ANOVA, and subsequent post hoc analysis (Bonferroni test). Betweensubject effects were investigated. Cumulated survival was calculated with Kaplan-Meier analysis and log-rank tests (Mantel-Cox) test. The level of significance was set to α = 0.05.
Kaplan-Meier analysis and log-rank tests (Mantel Cox) showed significant (p < 0.001, chi 2 : 20.816-48.792) differences between the materials in the individual groups (Fig. 2). C and PP showed early failures between 50,000 and 250,000 load cycles. The other systems revealed failures from approx. 200,000 load cycles with a lower failure rate.
Fracture force
Mean fracture force (Fig. 3, Table 3) varied between 265.7 N (PP) and 628.6 N (MB). Fracture testing revealed statistically significant (p < 0.001; ANOVA) higher force of MB crowns compared to all other materials. No significant difference in mean fracture force occurred between the other materials except for PP with significantly less mean fracture force compared to MG (p < 0.001; ANOVA) and MS (p = 0.037; ANOVA). In five cases (MS, MB, and PT without a screw channel and MS and MB with a screw channel) TOC of the abutment showed significant (p < 0.001; ANOVA) differences in the results with higher fracture force in crowns with 4° abutments. All automix "A" crowns with a screw channel failed during chewing simulation.
Fracture pattern
Fracture pattern in most cases (94%) was characterized by fracture at the palatal edge of the abutment, 6% additionally failed mesial-distal. No differences were found between failure during TCML (93% palatal, 7% additionally failed mesial-distal) or fracture test (91% palatal, 9% additionally failed mesial-distal). All MB crowns were excluded from the fracture pattern comparison due to deformation without fracture. Exemplary images of fracture patterns are provided in Fig. 4.
Failure during TCML and survival
The hypothesis that the in vitro performance of implant-supported temporary anterior crowns is influenced by the type of material, the type of manufacturing, a screw channel, or the TOC of the abutment could be confirmed only for the type of material and the type of fabrication.
Material
The survival time of the crowns depended most clearly on the type of material. TCML resulted in individual survival rates between 135,000 and 480,000 cycles. While the failed specimens exhibited fractures, neither cracks, nor spalling, nor fractures were found in the surviving specimens. The material dependent performance might be mainly attributed to the composition of the temporary materials as found earlier for FPDs [7]. The different production and processing of the resinbased materials allows for different filler levels and conversion rates. These can affect the mechanical properties such as modulus of elasticity or flexural strength and therefore the in vitro performance.
Manufacturing
Clearly shortest survival was found for the automix material, followed by one printed material. All milled systems and one printed group provided highest survival times. Large deviation and distribution of mean survival cycles, as found more frequently with automix and printing systems, indicate an influence of crown manufacture.
Screw channel
The different wall thicknesses and the associated different load of the crowns due to different TOC were not noticeable for chewing forces of 50 N during TCML. As found for FPDs [7,32] or molar crowns [26], independent from material, screw channels did not weaken the crowns. Only the survival time of the automix material was reduced when a screw channel was present. Any of the automix crowns with a screw channel survived all loading cycles. A possible reason might be the manual insertion of the screw channel, which was only performed in this group. The diamond drilling may cause damage and cracks in the crown, which will force a premature failure.
TOC of the abutment
The TOC of the abutment had no influence on survival time of the crowns. Based on the assumption that the complete simulation cycle simulates a clinical survival time of approximately 2 years [28,29], the materials with failures starting at approximately 1.2 × 10 5 mechanical cycles seem suitable for a clinical application up to six months. Accordingly, the milled crowns are suitable for at least 2 years. At masticatory forces of 50 N, it was not possible to distinguish the influence of angle or the presence of a screw channel on survival rates. Further tests might be performed with higher forces or in staircase processes with increasing force limits.
Fracture force
The hypothesis that the fracture force of implant-supported temporary anterior crowns is influenced by the type of material, the type of manufacturing, a screw channel, or the TOC of the abutment could be partly confirmed.
Material
Significant differences were found in the fracture force depending on the material. The highest fracture values by far were determined in the PEEK group. However, this material was also the only one that did not exhibit brittle fracture behavior, which certainly also made the detection and assessment of the failure more difficult. The question arises, as to which deformation the crowns can still be considered clinically acceptable. The achieved fracture forces do not seem to be related to the mechanical properties of the materials. Neither materials with higher strength nor increased modulus of elasticity showed an increased fracture force. Furthermore, no correlation could be found between the fracture values and the filler content. The results are in contrast to recent studies with higher fracture forces measured in anterior provisional restorations on extracted teeth [35]. However, the tooth-or implantsupported crowns may behave differently [7,26]. Different results for implant-and tooth-supported molar crowns, which were also shown in previous studies [26], might be attributed to the altered loading situation of anterior and posterior crowns.
Manufacturing
In general, the milled crowns tended to show higher fracture forces than the printed systems or the automix material. Composition or filler-dependent stability has also been shown in recent studies [7]. However, one printed group with 4° TOC (material PT) and without screw channel yielded quite comparable results to the milled systems. Therefore, the results cannot necessarily be attributed to the different filler composition of printing and milling materials, which varies between 86 and 27 w%. Differences in fracture force might indicate variations in the homogeneity of the additive material, the influence of the material layers during printing or post-curing effects [33,34], and thus an effect of the fabrication. Variations might also occur due to defects like pores or insufficient connection of the individual layers or even reduced quality of insufficient combination of material and process [23]. Subtractively manufactured crowns could have an advantage over crowns manufactured in the other two processes: the discs for subtractive processing are manufactured under controlled industrial production at high temperature and pressure, which may improve homogeneity and mechanical properties [6]. Yet, in contrast to other in vitro tests [36], automix crowns that survived TCML showed no significant difference in fracture force which reflected the principally sufficient material properties of automix materials and, again, shows the manual manufactory as a possible source of error.
Screw channel
The presence of a screw channel did not show any influence on fracture force. Automix material crowns with a handmade screw channel could not be compared, as none survived the simulation. In general, the presence of a screw channel is expected to reduce the stability of crowns [37][38][39]. However, there seem to be differences in the anterior and posterior application and the associated loading situation with bending or compression. Recent studies even showed higher stability in molar crowns with a screw channel [26]. The screw channel might have no influence on the strength of the anterior crowns because it is located below the contact point.
TOC of the abutment
TOC of the abutment had strong influence on fracture force of the crowns, with higher fracture values for crowns on 4° abutments. Contradictory results were found only for one printed system. These results confirm previous studies that fracture forces were strongly dependent on the abutment TOC of molar crowns [27] or anterior temporary FPDs [26]. Yet there is no data that abutments with smaller TOC show higher fracture force. This may be due to the greater wall thicknesses of the 4° crowns examined for the same external dimensions. The higher material thickness promises higher strength, especially in the case of bending and torsion occurring in the anterior region. Since the palatal edge of the abutment is the most common fracture site, rounded palatal edges may be indicated to prevent crack initiation and development.
Aging with TCML can help distinguish potential failure modes of materials by simulating fatigue, but loading to fracture tests may not be able to accurately reflect the failure modes that can be observed in clinical settings. Simulations used in TCML do not consider factors such as varying chewing forces or frequency as well as the individual design of preparations and the crowns, or the complex oral situation of the jaw. The use of implant analogs shifts the fracture risk to the crowns to be examined, as desired. However, it also limits the clinical interpretation of the results. Therefore, these simulation results should be interpreted with caution when predicting clinical failure.
In general, most materials have the potential to withstand maximum bite forces in the incisive region, which are reported to reach up to 158 N [40]. However, patients with implant-supported restorations can show higher or uncontrolled mastication forces due to absence of periodontal receptors [41].
Conclusion
Both additively and subtractively manufactured temporary crowns provided similar or even higher survival rates and fracture forces compared to standard automix implant-supported anterior crowns. PEEK-based systems showed best performance. The choice of material is decisive for the survival time and fracture force. The fabrication is not crucial, but milled crowns tend to perform better. A smaller TOC of the abutment led to higher fracture force. Only manually inserted screw channels in automix-made crowns had negative effects on fatigue testing.
Clinical consequence
The choice of material is crucial for the performance of provisional implant-supported anterior crowns: additively and subtractively manufactured crowns show comparable or better properties than automix systems. A lower abutment TOC guarantees higher fracture forces. Manually inserted screw channels in automix crowns have negative effects and should be avoided. | 4,470.2 | 2023-05-03T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Applications of Artificial Intelligence and Voice Assistant in Healthcare
The modern smart technology such as Artificial Intelligence (AI) is merging with humans’ physical lives and is going to change the way we live, work, and interact. AI in the healthcare sector is gaining attention from researchers, health professionals, and life sciences companies. The new technology advancement has brought various opportunities in electronic health (e-health) that allows healthcare to be accessible regardless of distance using information and communication technologies (ICTs) such as use of blood pressure telemonitoring service and voice assistants. Voice Assistant (VA) as an emerging technology in healthcare helps to reduce expenses, build loyalty, drive revenue, and it is especially beneficial amidst COVID-19 outbreak as healthcare will need to move towards more touch-free technologies post-pandemic. In this paper, we summarize the latest developments of applications of AI and VA in healthcare, and some basic knowledge regarding the techniques, the current state of this technology in healthcare, and possible developments in future, which potentially can transform many aspects of patient care.
Introduction
Introduction of new technologies, and improvement of data technologies such as storage size, computational power, and data transfer speeds are the key factors driving the growth of Artificial Intelligence (AI) in many fields. Voice assistant are AI-based technologies, which are designed to think and act like a human. Apple's Siri, Amazon Alexa, and Google Assistant (Figure 1) are examples of Voice-activated systems, which help consumers to manage their daily tasks for instance searching online, listening to news, controlling other connected smart devices (e.g., light, AC), answering a phone call, paying utility bills, and setting reminders. In recent years, such technologies have seen strong growth (The Star Online, 2020). The report by Marketsandmarkets in 2020 stated the Voice Assistant Application Market is estimated to increase from USD 2.8 billion in 2021 to USD 11.2 billion in 2026, with a Compound Annual Growth Rate (CAGR) of 32.4 percent throughout the forecast period. The voice assistant application market owes its widespread adoption to the advancements in voice-based AI technologies, growth in the number of voice-enabled devices, and increasing focus on customer engagement. Several applications in electronic health (e-health) have been created with the rapid development of new technologies, which enables healthcare to be provided for patients from home via Information and Communication Technologies (ICTs) (Lo Presti et al., 2019).
Apple's Siri Google Assistant Amazon Alexa Figure 1: Sample of Voice Assistants Over the past decades the focus was on the innovation provided by medical products. The present decade is focused on providing medical platforms, real-time, outcome-based care such as smart watch. With the explosion of available medical data, the next decade is moving towards medical solutions by using AI, robotics, and virtual and augmented reality, with the purpose of delivering intelligent solutions (Mehta et al., 2019). Various types of AI-based applications are already being employed by healthcare providers, and life science companies. These applications include disease diagnosis and treatment recommendations, medical document classification, and question answering based on consumer's commands, Voice assistants are becoming important in healthcare as they can help patients get answers to critical questions about various diseases, help patients to learn about symptoms, and also assist in setting up doctor appointments, etc. More importantly this technology help patients to interact with smartphones without using their hands, which is a great help in Covid-19 outbreak and healthcare will need to move towards more touch-free technologies postpandemic (The Star Online, 2020). The significant increase in the number of individuals in need of healthcare services (e.g., growth of golden agers) combined with ongoing developments in healthcare technology is predicted to put upward pressure on health and long-term care spending (Fanta & Pretorius, 2018). VAs are available on smart phones and smart devices, which are used by youngsters as well as older adults due to their utility (Vollmer Dahlke & Ory, 2017). These smart devices are specifically designed to cater to the needs of all age groups, different ethnic groups, and work profiles worldwide (Dogra & Kaushal, 2021;Koon et al., 2020). The aim of this review is to understand the availability of recent advancements in AI-based technologies, provide awareness about the potential of AI in healthcare services, and inspire the researchers in the related field.
AI-based Technologies in Healthcare
Voice assistants use AI technology to communicate with the users in natural language (Terzopoulos & Satratzemi, 2019). AI-based technologies ( Figure 2) such as Machine Learning, Deep Learning, and Natural language processing are of high importance to healthcare, which are defined and described below. Exploring approaches to help machines develop their own sort of common sense has always been an interest of scientists. Such machines not only have high predictive accuracy based on the previous data, but also are intelligent and have the ability to learn. Artificial intelligence (AI), Machine Learning (ML), and Deep Leaning (DL) refer to intelligence demonstrated by machines. Conventional machine learning uses the theory of statistics and employs algorithms to learn from a large dataset, train the system, and make informed decisions based on what it has learned. Deep learning is a subset of machine learning that uses advance machine learning approaches. Deep learning structures algorithms in layers to create an artificial neural network (ANN) for the purpose of learning and making intelligent decisions without any human intervention (Alpaydin, 2008). ML plays a key role in many health innovations. It assists researchers with drug development, drug discovery, drug testing, and drug repurposing. Drug discovery (Reda et al., 2020) aims at uncovering putative drug candidates or gene targets, or causal factors, of a given disease or a given chemical compound. Drug testing (Aziz et al., 2021) helps to evaluate the effectiveness of drug properties and develop in silico prediction models to save time and money on later testing stages, and subsequent in vitro and in vivo experiments. Drug repurposing uses various methods such as identify correlations between drug molecules and gene or protein targets in literature to find a new therapeutic indications (Tari & Patel, 2014). DL has recently been applied to spot malignant tumors or to predict drug effectiveness based on large amounts of healthcare data, and various attributes (Chang et al., 2018).
Personalized medicine or "precision medicine" is another well-known application in healthcare. It uses enormous amount of data, such as medical imaging data or existing medical documentation to predict and analyze diagnostic decisions for each individual patient (Toh & Brody, 2021).
There are many applications such as PathAI 1 , Tempus 2 , Microsoft's Project InnerEye, IBM's Watson AI technology, and Pfizer 3 that uses ML to predict illness and deliver personalized treatments for patients. These applications use computer vision and machine learning with the aim of providing quicker and more accurate diagnoses.
ii. Natural Language Processing Natural language processing (NLP) enables computers to understand text and voice data similar to human. NLP employs several technologies including machine learning, and deep learning models, to understand the semantic and sentiment of user's data (Alpaydin, 2008). NLP derives computer programs in text translation, text summarization, and speech recognition, and is widely used in the form of voice-operated GPS systems, digital assistants, speech-to-text dictation software, and customer service chatbots. The dominant applications of NLP in healthcare involve classification of clinical documentation, conversational chatbots, automated question answering, and disease diagnosis, which are described in the following section.
b. Chatbots
A Chatbot helps consumers by simulating human-like conversations via text messages or voice commands. Chatbot uses NLP technology and supports speech to text and text to speech conversion so that the user can also communicate using voice. Speech and NLP technology is used to process text, transcribe consumers interactions, and respond to consumers' inquiries and questions automatically.
In healthcare chatbots can offer useful medical information based on user needs. Such AIbased systems, eliminate the cost and time in seeking medical help, particularly in rural areas, which consultation with qualified professionals is not easily available (Hsu & Yu, 2022). Moreover, mental health chatbot has shown to be an effective and engaging way to deliver mental health support and decreases depression and anxiety symptoms in students (Fitzpatrick et al., 2017;Wyllie et al., 2022).
c. Question Answering
Question classification and question answering helps to extract information find answers efficiently. It employs NLP algorithms to retrieve documents relevant to a question posed by humans in a natural language, and then processes such documents to automatically generate a paragraph-length answer. For instance, automated question classification can be applied on cancer-related questions that have been enquired on the web. Question classification could also be used to assist clinical support staff in answering questions by suggesting a likely set of answer templates or be used to provide metadata for questions on the web, so that questions posted in social media could be linked to similar questions or to sources on the web that might provide answers. McRoy et al (2016) proposed a classifier to answer community-based questions. The scheme of the classification includes a set of questions such as clinical, non-clinical, and patient-specific questions.
d. Diagnosis
The development of advanced AI-based technologies and the recent research in the field of NLP leads to flourishing new businesses with innovative concepts in healthcare. Today, applications can perform disease diagnosis based on user's symptoms through medical reports or over one-on-one conversation (Badlani et al., 2021). Such applications can transcribe patients' interaction, and effectively extract the wealth of information into a format which can be utilized effectively by physicians and healthcare professionals. IBM's Watson is one of the well-known data analytics processors that employs machine learning and NLP to generate answers to questions. It has been used in precision medicine to help medical staff and provides treatment methods based on huge number of past clinical trials. IBM's Watson is the first medical AI and introduces their first application on cancer diagnosis and treatment in 2013, which received attention of healthcare providers. However, in recent years, Watson has been criticized for its lack of accuracy (Ross & Swetlitz, 2018). Complexity of patient files, and lack of available clinical data based on locality failed the system to provide good recommendations. Despite Watson's unsuccessful attempt, big companies such as Google, and Microsoft are still attracted to develop AI solutions in health care. (2017), 46% of U.S. adults are using voice assistants at home. Due to simplicity of this technology, millions of devices use them in households nowadays. Smart speakers are stand-alone devices that can be connected to smartphones. These portable devices are useful at home or work, and perform actions based on the given commands ( Figure 3). Voice assistants can also be used on smartphones. Smartphones offer built-in VAs such as Samsung Bixby, Google Assistant, and Apple's Siri. Apple has Siri built into on all Apple devices.
Voice Assistant Applications
Recently, apple introduced HomePod mini, a compact smart speaker which has native Apple Music and Apple HomeKit integration. Google designed Google Assistant to give users conversational and two-way interactions and made it available across all android smartphones. Google Assistant also works in Google Home smart speaker that allow users to control the smart devices. Amazon successfully launched Alexa led by Echo devices that provides a lot of functionality and has an associated app for Android and iOS phones. The companies are attempting to make VAs ubiquitous and market them across various thirdparty devices to appeal to different user preferences and contexts (Rubin, 2018).
Figure 3: Types of AI-based Voice Assistants
The working mechanism of VA is simple. Voice assistant is usually unobtrusive and constantly monitor its surroundings for trigger words such "Ok Google" or "Hey Siri". Once the trigger word is said loud enough for the bot to hear, it will begin listening to the user's query. Unlike humans, machines require structure, detail, and process to break down complex nuances of the human language such as context, user intent, slang, accents, etc. Therefore, voice assistants rely on natural language processing (NLP) software to step in and resolve any barriers to understanding. After processing the user's query using voice recognition and NLP, voice assistant retrieves information related to the question by accessing knowledge base where information is stored. Finally, the output is the answer to the user's request using textto-speech technology.
i. Voice Assistant in Healthcare
The role of AI in healthcare is complex and its capabilities are continuously extending (Cheung et al., 2020 for 125,000 avoidable deaths, hospitalizations, and failures in treatment (Health Policy Institute, 2021). Using VAs such as Alexa on smartphones helps patients, particularly elderlies, to create timely reminders for taking their dose on time and prevent missing anu doses. 3) Virtual Care: VAs in healthcare systems recommends doctors or specialists and allow consumers to arrange an appointment with their desired doctor by speaking with voice assistants. 4) E-monitoring: Tech companies, and wellness application development companies are integrating voice-activated tool, which allows consumers to track and monitor their medical conditions via a voice command. For example, users can see trends in blood sugar or monitor their eating habits in applications with VAs. Enterprise Bot's HealthAI 4 is one of the organizations that provides voice assistant technology in healthcare to automate routine tasks. Another application is Dr. AI (Mohr, 2019) launched by HealthTap which uses Amazon Alexa to help patients diagnose their illness through a conversation. Users are able to ask Dr. A.I. to diagnose their symptoms, pose health questions or complaints, and get suggestions for treatment and/or recommendations of nearby doctors. Dr. AI checks that against their health profile in HealthTap and can ask follow-up questions to gather more data. It then provides potential diagnoses and guidance on what to do next. Roche Diabetes Care 5 provides voice assistant focusing on pharmaceuticals and diagnostics. This application works with Alexa or Google Assistant with the purpose of improving people's lives.
The Future of AI in Healthcare
Despite the development of various AI-based applications in several sectors, the use of AI in healthcare is still in its early stage. According to the research, the combination of computer vision and machine learning to analyze medical imaging and other types of data such as text or speech, has shown to be effective in decision making, capturing clinical notes, providing automatic answers to the healthcare related questions, and document classification. Although there are challenges in delivering precision medicine and providing personalized medicine, given the rapid advances in AI, we expect AI to enhance and provide more accurate results. The efficiency and accuracy of AI-based healthcare services would enable clinicians to cope with the growing demand on healthcare. We believe companies will push AI-based healthcare systems toward predictive analysis to predict and determine whether an individual is at risk of any diseases based on the individual's place of living, diet, emotion or mental conditions, or daily activities. Therefore, care providers can suggest preventative measures before the disease get worse. It not only reduces the costs but also improves individuals' health and quality of life. Moreover, the fast advances in voice-based technology coupled with the coronavirus precautions encourages individuals to move toward using touch-free technology and voice assistants in daily life. The adoption of such technologies has already been accelerated due to the pandemic; however, the widespread use of VAs depends on several factors such as consumer engagement, public awareness, and governments policies toward using AI-based technologies. | 3,551.8 | 2022-12-19T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Mitochondrial DNA abnormalities and metabolic syndrome
Metabolic syndrome (MetS) is a complex pathological condition that involves disrupted carbohydrate, protein, and fat metabolism in the human body, and is a major risk factor for several chronic diseases, including diabetes, cardiovascular disease, and cerebrovascular disease. While the exact pathogenesis of metabolic syndrome is not yet fully understood, there is increasing evidence linking mitochondrial dysfunction, which is closely related to the mitochondrial genome and mitochondrial dynamics, to the development of this condition. Recent advancements in genetic sequencing technologies have allowed for more accurate detection of mtDNA mutations and other mitochondrial abnormalities, leading to earlier diagnosis and intervention in patients with metabolic syndrome. Additionally, the identification of specific mechanisms by which reduced mtDNA copy number and gene mutations, as well as abnormalities in mtDNA-encoded proteins and mitochondrial dynamics, contribute to metabolic syndrome may promote the development of novel therapeutic targets and interventions, such as the restoration of mitochondrial function through the targeting of specific mitochondrial defects. Additionally, advancements in genetic sequencing technologies may allow for more accurate detection of mtDNA mutations and other mitochondrial abnormalities, leading to earlier diagnosis and intervention in patients with MetS. Therefore, strategies to promote the restoration of mitochondrial function by addressing these defects may offer new options for treating MetS. This review provides an overview of the research progress and significance of mitochondrial genome and mitochondrial dynamics in MetS.
Introduction
Metabolic syndrome (MetS) is a complex and multifactorial disorder that affects a significant portion of the global population. It is characterized by a cluster of interrelated conditions, including hypertension, obesity, insulin resistance, dyslipidemia, and hyperglycemia, which increase the risk of cardiovascular disease, stroke, and type cells and organs, resulting in a wide range of symptoms (Gorman et al., 2016). Mitochondria have their own DNA, which is distinct from nuclear DNA (nDNA). The different structural and functional properties of mtDNA and nDNA lead to their different applications in science. With its significantly higher mutation rate, mtDNA has been used as a powerful tool to trace the lineage. Methods have been developed to trace the ancestry of many species over hundreds of generations and have become a mainstay of phylogenetic and evolutionary biology (Bernardo et al., 2014). mtDNA is a circular DNA molecule that exists in multiple copies in each mitochondrion. They encode a small number of genes involved in oxidative phosphorylation. mtDNA abnormalities, such as mutations, deletions, copy number loss, and rearrangements, can disrupt the energy production capacity of the mitochondria, leading to the generation of excess reactive oxygen species (ROS), which can cause oxidative damage to DNA, proteins, and lipids. Recent studies have shown that mtDNA copy number loss and mutations in mtDNA-encoded proteins are associated with insulin resistance and glucose intolerance. Moreover, mtDNA abnormalities can lead to increased oxidative stress, impaired mitochondrial function, and altered energy metabolism, all of which are thought to be involved in the pathophysiology of MetS. Additionally, disturbances in mitochondrial dynamics, such as changes in mitochondrial fission and fusion, have been linked to the development of insulin resistance and obesity. Here we will summarize the current state of knowledge on the relationship between mtDNA abnormalities and MetS, with a focus on mtDNA copy number loss, mutations, abnormalities of mtDNAencoded proteins, and mitochondrial dynamics imbalance. The purpose of this review is to provide a comprehensible overview of the role of mtDNA in MetS and to identify potential therapeutic targets for the prevention and treatment of this increasingly prevalent disease (Figure 1).
mtDNA copy number loss in metabolic syndrome
Nuclear gene mutations that lower mtDNA expression or primary mtDNA mutations that directly impairs the function or quantity of the gene products encoded by mtDNA cause mitochondrial disorders (Gorman et al., 2016;Filograna et al., 2021). MetS is a collection of metabolic abnormalities that may be linked to variations in mitochondrial DNA content (Samson and Garber, 2014). In healthy humans, there are usually hundreds to thousands of copies of mtDNA in each cell. The number of mtDNA copies can vary between cell types and tissues. In MetS patients, the number of mtDNA copies has been found to be lower compared to healthy individuals. Specifically, a number of studies have reported that MetS patients have reduced mtDNA copy number in their adipose tissue, skeletal muscle, and blood cells. The degree of reduction can vary, but in general, tend to have a lower mtDNA copy number. In leukocytes, the MetS group has a lower mtDNA copy number than the non-MetS group, with a lower mtDNA copy number linked to low plasma high density lipoprotein (HDL) and high triglycerides (Huang et al., 2011). The number of copies can be utilized to assess mitochondrial dysfunction (Fazzini et al., 2021). The difference in mtDNA copy number before and after disease suggests that mtDNA plays a regulatory function in the development of a variety of chronic diseases, including cardiovascular disease, kidney disease, liver disease, neurological disease, and cancer, all of which have therapeutic potential (Castellani et al., 2020). mtDNA copy number was found to be inversely related to metabolic syndrome and type 2 diabetes risk. Obesity characteristics mediate a large percentage of the overall effect of mtDNA copy number on type 2 diabetes (Fazzini et al., 2021). Human mitochondrial DNA haplogroups are genetic haplogroups defined by mitochondrial DNA differences (Swerdlow et al., 2020). In diverse populations, mtDNA haplogroups are hypothesized to be associated in a variety of metabolic illnesses, including metabolic syndrome, obesity, T2D, and T2D-related comorbidities. The N9a haplogroup is a subgroup of the mtDNA N9 haplogroup, which is commonly found in East Asia and Southeast Asia. The experiment confirmed that mtDNA haplogroups N9a (N9a1 and N9a10a) exhibit lower activity of the respiratory chain complex compared to three non-N9A haplogroups (D4j, G3a2, and Y1). This difference could contribute to changes in mitochondrial function and REDOX status. The mtDNA haplogroup plays a critical role in mitochondrial function and mitochondria-mediated signaling pathways and has been associated with the development of T2D. Insulin resistance, a hallmark of T2D, may result from reduced oxidative phosphorylation, which increases the generation of reactive oxygen species (ROS), which is a key regulator of T2D-related insulin receptor signaling and inflammation. The mitochondrial retrograde signaling pathways of two N9a hybrids and three non-N9A hybrids were observed. The effect of this retrograde signaling change on diabetes in cellular models was further studied (Hua et al., 2019). The experiment revealed the mechanism of N9a in T2D and established the mitochondrial retrograde signaling pathway. The presence of N9a in T2D was confirmed by other evidence: most of the observed changes in biological characteristics were related to metabolic regulation. Another study suggested that N9a haplogroup may not be a protective factor for T2D (Hwang et al., 2011). Hezhi Fang showed that the mitochondrial REDOX signaling pathway, which is usually associated with insulin sensitivity, drove ERK1/ 2 phosphorylation/activation and led to changes and activation of insulin-stimulated glucose uptake in N9a and non-N9A cells by targeting the underlying signaling pathway (Fang et al., 2018). The gene TLR4 on chromosome nine exhibited an increased response to ERK1/2 overactivation in N9a cells but reduced insulin-stimulated glucose uptake (Fang et al., 2018). Steatohepatitis-associated circRNA ATP5B Regulator (SCAR), which is located in mitochondria, inhibits mitochondrial ROS (mROS) output and fibroblast activation . The relationship between mtDNA copy number and mitochondrial function was confirmed in knockdown cell models with reduced mtDNA copy number, which resulted in reduced expression of important complex proteins, changes in cell morphology, and decreased activity of respiratory enzymes. The models showed that mitochondrial function could be restored by bringing the mtDNA copy number back to wild-type levels. When mtDNA copy number levels decreased, they can increase the production of ROS and lead to oxidative stress, which is linked to a decline in mitochondrial function (Castellani et al., 2020). mtDNA levels depend on the stability of the mitochondrial genome through appropriate mitochondrial translation.
Frontiers in Cell and Developmental Biology frontiersin.org Mitochondrial ribosomal proteins (MRPs) are the key components responsible for the translation of mitochondrial gene coding proteins. Abnormal structure and function of MRPs will affect the synthesis of mitochondrial coding proteins, reduce mtDNA copy number and the efficiency of oxidative phosphorylation, and finally lead to metabolic disorders. With the discovery of mtDNA mutations, the genotypes, clinical symptoms and diagnosis and treatment of mitochondrial diseases have been rapidly developed, especially the relationship between mtDNA copy number and clinical diseases has been paid more and more attention. Changes in the copy number of mitochondrial DNA indicate the onset of metabolic syndrome and other diseases, it provides convenient conditions for the research and treatment of diseases. It is important to note that mtDNA copy number alone is not a definitive diagnostic tool for metabolic syndrome, and its use in clinical practice is still limited. However, it can serve as a useful biomarker for assessing mitochondrial dysfunction and metabolic disease risk in certain populations. Further research is needed to better understand the relationship between mtDNA copy number, metabolic dysfunction, and potential therapeutic interventions.
Mitochondrial gene mutations and genetic metabolic abnormalities
Mitochondrial diseases are multisystem diseases with oxidative phosphorylation defects and great clinical, biochemical and genetic heterogeneity. The distribution and relative level of heterogeneity of mtDNA mutations may lead to different rates of aging and disease progression between individual cells (Hahn and Zuryn, 2019). Endocrine dysfunction is often observed in hereditary mitochondrial diseases, reflecting reduced intracellular hormone production or extracellular secretion. Diabetes is the most frequently described endocrine disorder in patients with inherited mitochondrial diseases, but other endocrine manifestations in these patients may include growth hormone deficiency, hypogonadism, adrenal dysfunction, hypoparathyroidism, and thyroid disease. Although mitochondrial endocrine dysfunction often occurs in the context of multisystem diseases, some mitochondrial diseases are characterized by isolated endocrine involvement (Chow et al., 2017). While a single gene mutation can play a role in mediating obesity, it is often the result of the interaction of genetic and environmental factors. To date, nuclear gene mutations related to obesity, such as MC4R, BDNF, FTO, etc., have been identified (McCaffery et al., 2012). However, the association between mtDNA mutations and childhood obesity remains to be elucidated. Several mutations in the mitochondrial tRNA gene have been found to be associated with diabetes and metabolic disorders. It suggests that mtDNA mutation may be related to obesity. For example, tRNA thr mutations C (10003T > C), and tRNA Glu mutations C (14709T > C) and G (14692A > G) are associated with such diseases . The point mutation of 3243rd nucleotides in mitochondrial tRNA-Leu (UUR) can cause maternal diabetes and deafness (Alves et al., 2017). Mitochondrial diabetes is caused by mutations in mitochondrial genes. β Diabetes caused by oxidative phosphorylation is a rare type of diabetes. Mitochondrial tRNALeu (UUR) 3243A > G-spot mutation is the most common cause of mitochondrial diabetes (Capriglia et al., 2021). Kearns-Sayre syndrome, which is caused by a large deletion in the mitochondrial DNA, can also be associated with diabetes (Kamal et al., 2015) (Table 1). mtDNA is different from nuclear DNA and is not protected by histones. As a result, it is more susceptible to the attack of mitochondrial ROS and is prone to mutation, which may be accelerated in diabetes. It has been suggested that increased ROS production (OH-, H 2 O 2 ) may be the core of diabetic pathology during hyperglycemia (Li et al., 2008). The characteristics of mitochondrial diabetes, also known as maternally inherited diabetes and deafness (MIDD), include maternal inheritance of diabetes, nerve deafness, relatively low body weight (BMI), early onset age (between 30 and 40 years), progressive decline in islet cell function leading to insulin-dependent diabetes mellitus, negative urine sugar antibody, other neuromuscular lesions, and progressive sensorineural hearing loss (Tsang et al., 2018). The premature death of MIDD patients due to cardiac causes is a significant problem. It is reported that in some of these cases, the mutation load in myocardium is more abundant than that in blood (Yoshida et al., 1994), indicating that the reduction of cardiac ATP synthesis is the most likely stimulator of cardiomyocyte hypertrophy and failure in these individuals. Other systemic manifestations of MIDD patients: MELAS syndrome (mitochondrial myopathy, encephalopathy with lactic acidosis and recurrent stroke like attack). MELAS should be considered as a clinical diagnosis when young MIDD patients have a stroke. Lactic acidosis is caused by the decrease of blood pH and buffer capacity due to the increase of lactic acid concentration. In MELAS patients, abnormal mitochondria can not metabolize pyruvate, resulting in a large amount of pyruvate to produce lactic acid, which accumulates in blood and body fluids. A characteristic pathological change of MELAS patients is the accumulation of a large number of abnormal mitochondria in the arterioles and capillary walls of brain and muscle. It is reported that 13% of the 199 affected members of 45 families with m.3243A > G mutation have a combination of MIDD and MELAS characteristics, and deafness is the only manifestation in some patients (Gerbitz et al., 1995). The reduced uptake of brain single photon emission computed tomography (SPECT), especially in the bilateral parietooccipital or occipital regions in MIDD (Suzuki et al., 1996), may reflect the low metabolic state of these neurons due to mitochondrial respiratory dysfunction (Murphy et al., 2008). The study shows that the occurrence of diabetic complications depends not only on glycemic control, but also on individual factors that may be related to genetic heterogeneity (Alves et al., 2017) Specifically, mtDNA is almost completely composed of coding regions, which is easy to be damaged due to the lack of histone protection, and lacks an effective self-healing mechanism. Its mutation frequency is significantly higher than that of nuclear genes. The 3243A to G-spot mutation may be one of the genetic factors leading to familial aggregation of diabetes (Li et al., 2008). A Rötig found that almost 3/4 of the mtDNA mutation patients had family history of diabetes, almost all related to the maternal mitochondrial genetic pattern (Rötig et al., 1996). Recent studies have shown that individuals may have more than two point mutations, which could contribute to the pathogenesis of diabetes. However, Rötig's team did not identify any subjects with multiple mutations, possibly due to limitations in the technology used for Frontiers in Cell and Developmental Biology frontiersin.org analysis, which was not based on direct DNA sequencing. While these alternative technologies may be clinically convenient, they may not be the most sensitive for detecting point mutations (Li et al., 2008).
mtDNA-encoded proteins and metabolic syndrome
There are some significantly differences between the structure of mtDNA and nDNA, instead mtDNA is similar to bacterial chromosomes. The structure of mtDNA molecule is a doublestranded closed circular DNA molecule including a heavy outer ring (H) and a light inner ring (L) with no intron sequences except for a small region related to mtDNA replication and transcription. There are 13 proteins encoded by mtDNA associated with metabolic syndrome, they are the core subunit of the oxidative phosphorylation (OXPHOS) complexes I, III, IV, and V. E.g., seven subunits (ND 1-6 and 4L) of complex I, cytochrome b (Cyt b) of complex III, the COX I-III subunits of cytochrome oxidase or complex IV, and the ATPase six and 8 subunits of ATP synthase (Figure 2). Known so far, Metabolic syndrome associated with proteins encoded by mtDNA include: T2DM, CMP, MELAS, hypertension, and hyperlipidemia, etc., (Sharma and Sampath, 2019;Finsterer, 2020).
The increased risk of cardiovascular disease is an important cause of diabetes (Fiorentino et al., 2013). All mitochondrial genomes of the maternal relatives of T2DM subjects aged 28-71 years with an average age of 43 years were screened by PCR-Sanger sequencing. Analysis and research found that both families have ND1 T3394C mutations, as well as multiple sets of mutations in Y2 and M9a in the mitochondrial haplogroup. The m.T3394C mutation on the tyrosine at position 30 of ND1 may lead to the failure of ND1 mRNA metabolism, leading to mitochondrial dysfunction. The m.A14693G mutation in the TΨC-loop of tRNAGlu is critical to the formation and stability of the tRNA structure, its mutation may cause tRNA metabolism disorder, thereby aggravating the mitochondrial dysfunction caused by the ND1 T3394C mutation (You et al., 2022). There is another group of studies on the clinical, genetic and biochemical characteristics of the maternal genetic lineage of T2DM in China. Through PCR and direct sequencing analysis, it was found that there are two potential pathogenic mutations in ND1 and ND2. The level of reactive oxygen species in T2DM patients with both m.T4216C and m.C5178A mutations was significantly increased (p < 0.05). In addition, plasma levels of malondialdehyde and 8-hydroxydeoxyguanosine in
FIGURE 1
Mitochondria play a critical role in the development of metabolic syndrome, and several mechanisms have been proposed to explain their involvement. Firstly, low mtDNA copy number can lead to reduced genomic stability, potentially contributing to the pathogenesis of MetS. Secondly, mitochondria gene mutations can lead to genetic metabolic abnormalities, which is commonly observed in MetS patients. Thirdly, abnormal expression of mtDNA-encoded proteins can impair mitochondrial function and contribute to the development of MetS. Lastly, disruptions in mitochondrial dynamics, which is driven by the interplay between fusion and fission, can contribute to the accumulation of mtDNA mutations and other mitochondrial abnormalities, thereby leading to metabolic diseases. These mechanisms provide potential targets for developing novel therapeutic strategies for MetS.
Frontiers in Cell and Developmental Biology frontiersin.org patients with T2DM were significantly increased, and the level of superoxide dismutase was decreased (p < 0.05). In summary, ND1 T4216C and ND2 C5178A mutations may cause oxidative stress, impair mitochondrial function, and cause the pathogenesis of T2DM (Jiang et al., 2021). Studies have shown that increased production of reactive oxygen species may be the cause of obesity and hyperglycemia. The experiment found that four new mutations were found in the mitochondrial genes of ND1, ATPase 8, ND5 and Cyt b of the subject: W121 (1)G (3667T>G), M42 (2)T (8490T>C), V290 (1)I (13204G>A) and V170 (3)V (15256A>G). Using silicon analysis to analyze mutations in the structure and function of these proteins. The study found that the mutation of ATPase 8 (T8490C) neither changed its secondary structure nor its function. Because the new suspected pathogenic mutation was not present in 105 race-matched controls, ATPase 8 is not a causative factor. However, the synergistic activity of Cyt b, ATPase 8, ND1 and ND5 genes may be a factor of secondary complications in patients with chronic T2D. It is also the cause of hypertension and hyperlipidemia (Elango et al., 2014). Studies induced chronic T2DM (cT2DM) model by intraperitoneal injection of STZ (35 mg/kg) after 6 weeks of a high-fat and highsugar diet. H&E staining was used to observe the morphological damage of rat hippocampus. The expression of inflammatory mediators (COX-2, TNF-α, IL-1β) and oxidative stress indicators (MDA,p22phox) in the brain tissue of cT2DM rats is increased, and studies have evaluated homozygous, heterozygous, dominant and recessive models, COX2 in 3′-UTR may contribute to the etiology of T2DM or adjust its risk. Therefore, increased COX2 may be associated with T2DM (Ozbayer et al., 2018;Xie et al., 2018). The last group of through direct sequencing of PCR fragments of a 14-year-old index male hypertrophic cardiomyopathy (hCMP) patient, it was found that m.14757T> a, m.15236A>G, m.15314G> a resulted in amino acid residues in the Cyt b gene replace, Cyt b is very likely to be related to cardiomyopathy (Zarrouk-Mahjoub et al., 2015). The 3697G>A mutation can cause to isolated severe complex I defects, leading to lactic acidosis, central nervous system dysfunction and other metabolic syndromes (Zhong et al., 2019). Modification of proteins encoded by mtDNA is still a technical bottleneck at present. It is expected that the continuous progress of mitochondrial gene editing technology can provide a new breakthrough for the treatment of metabolic diseases by targeting mtDNA encoding proteins.
Mitochondrial dynamics drived mtDNA change and metabolic diseases
The process of mitochondrial fusion, fission, biogenesis, and mitophagy that determines mitochondrial morphology, quality, and abundance are known as mitochondrial dynamics (Vásquez-Trincado et al., 2016). Fusion and fission processes are essential for the maintenance of important cellular functions such as mitochondrial respiratory activity, mitochondrial DNA (mtDNA) distribution, apoptosis, cell survival or calcium signalling. ATP production is modulated by mitochondrial networks generated by fusion and this pathway is controlled by transmitting the membrane potential from areas of high O2 availability to those with low availability, thus allowing the dissipation of energy (Westermann, 2010). The selective elimination of dysfunctional mitochondria, a quality-control process that guarantees a healthy mitochondrial population, is a dynamic characteristic of mitochondria (Chan, 2020). Thus, many dynamic properties of mitochondria maintain cell function, one of which is to influence the amount and distribution of mtDNA to cause many diseases.
The mutants of mtDNA caused mitochondrial illnesses are usually recessive, and the mutational burden must reach high levels, perhaps 60-90 percent heteroplasmy, before cells exhibit respiratory chain dysfunction. In skeletal muscle, mutant mice with the mitochondrial fusion genes Mfn1 and Mfn2 disrupted show substantial mtDNA depletion that precedes physiological problems such as the mutant mouse display low blood glucose levels under fasting conditions and reduced body temperatures. Furthermore, the mutant muscle's mitochondrial genomes rapidly accumulate point mutations and deletions. In a separate investigation, we discovered that disrupting mitochondrial fusion significantly enhances mitochondrial dysfunction and death in a mouse model with high mtDNA mutation levels. Mitochondrial fusion is likely to be a protective factor in human illnesses linked with mtDNA mutations due to its dual function of maintaining mtDNA integrity and retaining mtDNA function in the face of mutations (Chen et al., 2010). When cells are exposed to a substantial nutrition supply, such as in obesity or type 2 diabetes, this profusion condition is prevalent in instances of enhanced energy efficiency owing to famine or acute stress. Excess nutrition exposure enhances mitochondrial fission and inhibits mitochondrial fusion, both of which are linked to uncoupled
FIGURE 2
This is a diagram of the mitochondrial genome. It encodes for 13 essential proteins involved in respiratory chain electron transfer and oxidative phosphorylation. ND1, ND2, ND3, ND4, ND4L, ND5, ND6 are subunits of complexI. Cyt b is subunit of complexIII. COX1, COX2, COX3 are subunits of complexIV. ATP8 and ATP6 are subunits of complexV. It also encodes for 22 transfer RNAs (tRNAs) and 2 ribosomal RNAs (rRNAs) that are necessary for protein synthesis within the mitochondria.
Frontiers in Cell and Developmental Biology frontiersin.org respiration (Liesa and Shirihai, 2013). Type 2 diabetes has been linked to a decrease in the expression of OXPHOS-CR involved in mitochondrial biogenesis and oxidative phosphorylation activity (Mootha et al., 2003). When obese patients were compared to lean co-twins, it was discovered that they had a global expressional downregulation of mitochondrial oxidative pathways, as well as a concomitant downregulation of mtDNA, mtDNA-dependent translation system, and protein levels of the oxidative phosphorylation machinery. In fact, fatty acid oxidation, ketone body formation and breakdown, and the tricarboxylic acid cycle were shown to be negatively linked with insulin resistance, obesity, and inflammatory cytokines, indicating a downshift (Heinonen et al., 2015). Furthermore, Mitochondrial biogenesis helps to regulate energy balance, and increased ROS produced from electron transport chain under hyperglycemic conditions is thought to exacerbate pathological pathways, leading to diabetic microvascular and macrovascular complications (Brownlee, 2001).Mitochondrial kinetics and functional disorders are closely related to myocardial contractile dysfunction and myocardial injury. Fusion is a process that takes place under the tight control of mitofusion1 (Mfn1), mitofusion2 (Mfn2), and Opa1. After mitochondrial fusion, the possibility of being degraded is reduced. Fission is regulated by a series of signaling molecules such as Dynamin-related protein 1 (Drp1) and mitochondrial fission factor (Mff), a process that is primarily aimed at removing damaged mitochondria. Fission isolates damaged fragments of mitochondria and induces mitophagy. Imbalance of myocardial mitochondrial dynamics and homeostasis is one of the pathophysiological processes underlying the numerous cardiac symptoms of the metabolic syndrome. The mechanisms of mitochondrial dynamic have lately been linked to cardiovascular disease. Such as changes in mitochondrial dynamics play a key role in cardiovascular remodeling. Cardiac hypertrophy, diabetic cardiomyopathy, myocardial infarction, and atherosclerosis all have mitochondrial network fragmentation as a common pathogenic characteristic. Mitochondrial morphological changes eventually result in metabolic failure, mitochondrial DNA damage, and/or cell death (Vásquez-Trincado et al., 2016). Galloway et al. observed in diabetic cardiomyopathy that within 3 weeks of hyperglycemia, no significant changes in mitochondrial morphology occurred, but after 5 weeks of hyperglycemic environment, this was accompanied by increased proteolytic cleavage of Opa1 and impaired fusion processes leading to more mitochondrial fragmentation (Galloway and Yoon, 2015). Cells can also protect cardiomyocytes by inhibiting AMPK-based related signaling pathways, inhibiting mitochondrial division, promoting mitochondrial fusion, enhancing mitochondrial capacity, and reducing ROS damage. Research has revealed that aging-related mtDNA mutations in adult cardiac progenitor cells (CPCs) may alter receptor-mediated mitophagy in the differentiation process, resulting in persistent fission and less functioning fragmented mitochondria (Lampert et al., 2019). An Opa1 heterozygous mouse with abnormal mitochondrial morphology (mitochondria with abnormal cristae and mitochondrial tissue) showed an increase in ROS and a reduction in mitochondrial DNA content, implying compromised mitochondrial function. Mitophagy disruption caused accumulation of enlarged mitochondria in cardiac tubes and dilated cardiomyopathy in a Parkin knockout Drosophila model (Dorn et al., 2015). Mitochondrial dynamics directly influence pancreatic function as well. In ob/ob mice, OPA1 levels decrease in pancreatic islet cells before the onset of diabetes. Silencing Opa1 in pancreatic β cells using a Cre-loxP system yields similar results. β cells lacking OPA1 maintained normal mtDNA copy number, but the levels and activity of the electron transport chain complex IV were dramatically decreased, resulting in poor glucose stimulation of ATP generation and insulin secretion. Thus, changes in many dynamic characteristics of mitochondria can cause many diseases, such as metabolic syndrome, many of which are related to changes in mitochondrial DNA.
6 Conclusion mtDNA abnormalities have emerged as a significant factor in the development of MetS. The mechanisms by which these abnormalities contribute to MetS are complex and include quantitative changes in mtDNA copy number, mutations affecting mtDNA-encoded proteins, and imbalances in mitochondrial dynamics. Evidence suggests that these abnormalities can affect key metabolic pathways, including glucose metabolism, lipid metabolism, and insulin signaling, ultimately leading to the development of metabolic syndrome. Despite the significant progress made in understanding mtDNA abnormalities and their contribution to MetS, several challenges still remain. One significant limitation is the small size of the mitochondrial genome, which can make identifying specific mutations challenging. Additionally, mtDNA mutations can occur at varying levels of heteroplasmy, which can make their effects difficult to predict. Furthermore, there is often significant variability in the presentation and progression of these disorders, which can make it difficult to establish clear cause-andeffect relationships between specific mtDNA mutations and clinical outcomes. The study of mtDNA abnormalities and MetS is of critical importance to clinical care. Identifying mtDNA abnormalities can aid in the diagnosis, prevention, and treatment of this condition. mtDNA copy number has been proposed as a biomarker for metabolic disease, and its assessment can be used to monitor disease progression and the response to therapy. Also, identifying new mtDNA mutations associated with metabolic disease can help improve the accuracy of diagnosis and identify new therapeutic targets. Based on the many different properties of mtDNA and nDNA, the current widely used gene editing technologies have many limitations in the field of mitochondrial genes. Future prospects depend largely on the development and application of mitochondrial gene-specific editing technologies, which has clearly become an urgent technical challenge.
Author contributions
XD and NZ contributed to drafting the article and revising it critically for important intellectual content. XuP, XiP, AT, ZL and SZ contributed to acquisition of data, analysis and interpretation. TF were responsible for language writing embellishment and submission. All authors read and approved the final manuscript.
Frontiers in Cell and Developmental Biology frontiersin.org
Funding
The work was supported by the National Natural Science Foundation of China (FUND#81800763) and the Natural Science Foundation of Liaoning Province of China (FUND#2022-MS-236). | 6,473.6 | 2023-03-10T00:00:00.000 | [
"Medicine",
"Biology"
] |
Revolution , or not Revolution : That is the Question : Investigating the Pedagogies of ‘ Crazy English ’
Crazy English’ (CE) is one of the most popular, radical, yet controversial English training programs in China. There has been a tension between CE advocates and the academics about whether CE can really help Chinese people learn English. However, for more than a decade, CE is clearly more than just a passing phenomenon. It seems that CE has become a ‘subculture’ in China [47]. If CE offers anything new or valuable, we just cannot afford to ignore it. Thus a qualitative study was undertaken to identify the espoused concepts of CE in terms of foreign language learning and to examine its pedagogical practices. The study was framed by the examination of major views of language and schools of thought of learning theories. Results of data analysis show that CE classroom activities are underpinned by a blend of theoretical approaches and practices.
Introduction
With China's increasingly active involvement in the process of economic globalization and international cooperation, English teaching and learning in China has become a nationwide endeavour. All of a sudden, it seems that everyone wants to learn English, especially in big cities. To meet such huge demand for English learning, various private training centres have been set up to provide more educational accesses that were only available in universities. Among these private training centres, the commercial program 'Crazy English' (CE) is one of the most radical and most popular.
'Crazy English' (CE) appeared in early 1990s and was founded by Li Yang in response to "the tragedy of traditional teaching" in China based on his personal learning experience [29]. CE heavily focuses on practicing English orally. Among its radical teaching methods, a core feature is to 'shout': to shout out sentences repeatedly [55]. A typical scene of Li Yang's public lecture, reported by Huang [25], is to get his audience motivated, force their mouths open, thrust their arms into the air and repeat the sentences he uttered in English at the top of their lungs. His way of learning English looked different, or abnormal, or somewhat crazy, his method was thus called 'Crazy English' [42].
CE has attracted attention and aroused controversy at home and abroad [33]. On the one hand, it has been welcomed by the Chinese public and its learners, and is highly successful commercially. On the other hand, it has drawn constant antagonism from the Academy due to its "revolutionary" [28] teaching practices that use highly physical involvement, the extravagant claims it makes for learning outcomes, and an unabashed pursuit of commercial success. Despite this reprobation, CE is still flourishing after more than a decade. It has clearly shown itself to be more than a passing phenomenon, and instead has become "a subculture" in China [47].
Thus, CE can no longer be ignored. One objective for undertaking this study is to determine whether CE has, in fact, found a means to teach English to Chinese people. How revolutionary are the pedagogical principles underpinning its classroom activities? How might we look at CE? A few studies have been conducted [1,2,3,5,47,15,55], but none of them have seriously investigated CE's pedagogical principles over a period of time. It is therefore time to critically investigate CE, its pedagogy herein -to see how revolutionary its principles are.
Data Collection
The selection of data in this study was inspired by its purposive nature, that is, to seek information. Data comprised the Guangzhou CE Centre, a class of 30 at the Beijing CE Centre, and six individual learners from the class.
Sites
The information sought in this study consists of the basic principles and practice of CE, which is located in CE centres. CE has some 44 centres scattered across China, all using the Revolution, or not Revolution: That is the Question: Investigating the Pedagogies of 'Crazy English' same curriculum, pedagogies and textbooks. Only the centres in Guangzhou, Beijing, Shenzhen and Changsha, however, are directly run by the founder, Li Yang. The others operate as a franchised business. It was, therefore, decided to select centres from the latter group, as they were considered more likely to directly reflect the founder's vision and values, and thus fairly represent CE thought and practice.
Among the four directly-run CE centres, Guangzhou and Beijing were chosen as the sites for the study. CE was established in 1994 in Guangzhou, the capital of Guangdong Province, and its headquarters is still there. Beijing is the only large, modern, culturally developed base of CE other than Guangzhou, and the only base outside China's deep south. As well, the Beijing Li Yang Crazy English Centre (Beijing Centre for short) was the earliest branch to be established and so has the most experienced and on-going teaching staff. Beijing Centre offers a wide range of CE classes and they must compete with many other commercial English schools and programs available in the capital. For all these reasons, the Beijing Centre is clearly important and may be considered a solid representative of CE teaching.
The Guangzhou Centre houses the largest collection of documents concerning CE and hence it was here that documentary data were gathered. Interviews were also conducted with the CE Director and a teacher in Guangzhou, while data of class observation, and interviews with Li Yang, another two teachers and six learners, were collected at the Beijing Centre.
Class
Though CE does offer children's programs, the target group in this study is adult learners, because the project aims to explore the results of CE experience in relation to course on offer to adults in universities. On the CE program list, there are intensive week-long camps and one-to-one tailored programs available, but most teaching is carried out in one of three major program types: a six-month 'Professional English Training Program'; a two-month 'Adult Training Program' and an 'Adult Weekend Training Program'. The first type of course was both too long and, with 60-70 students per class, too big for the purposes of this study. The weekend courses were extremely short and intensive, allowing no time for researcher and participants to meet for interviews. The two-month course comprising 2 x 2-hour & 2 x 3-hour classes per week, with an enrolment of 30, however, seemed particularly suitable in all respects: the type of program, the size of class, the extent of time for an investigation of complexity in depth. Thus a two-month, beginning level adult CE training program with one teacher, one tutor and 30 learners at the Beijing Centre was selected to be the case. As discussed above, CE is a complex method of learning English. To investigate such a complex phenomenon, data were gathered in multiple ways, using several instruments, undertaken in two stages. The first comprised document analysis and interviews with senior administrative and teaching staff at Guangzhou, the CE head office; the second, a two-month (10hrs/pw) case study of teacher-student experience throughout a typical adult program in Beijing, using survey, observation, interviews and reflective journals. This range was necessary to capture the complexity of the theories underpinning CE's classroom practices.
The program was taught by one principal teacher (Teresa), supplemented by a tutor, who helped by giving individual support during practice. The tutors varied from time to time and were mostly new staff under training. Each class group was required to have a slogan, chosen by the teacher from one of CE's catchphrases. The one for this group was Don't be shy! Just try!, which was written on one of the classroom walls.
Each class consisted of four phases: warm-up, revision, new lesson and recap, presented in Table 1 (see p.4). Most instruction was conducted in Chinese.
To obtain their bio-data and understandings of CE, a survey was conducted using a questionnaire. Responses were also used to select participants for the follow-up interviews. Twenty eight out of the thirty students agreed to participate in the research, which they started by completing the questionnaire. Learners varied in demographic features, educational qualifications and self-ascribed personal characteristics. Their English levels also varied greatly, from beginner to CET 6 1 Six out of the twenty-eight learners were selected as subjects for close-up study. These six were selected as representative of the group in terms of their varied responses to the survey questions, their demographic features, participation, performance, learning outcomes, and their willingness to co-operate in the study. In a similar trajectory to the whole class, their responses and behaviours showed development throughout the program. The six learners examined closely were called Xu, Ao, Zhang, Xiao, Ji and Feng 2 .
Reliability
A key criterion for the trustworthiness and further the quality of the study is reliability of the data collected. Although Lincoln and Guba [32] do propose two measures to enhance reliability of qualitative research: peer debriefing and inquiry audit, it was not quite feasible for this study, given the bilingual and cross-cultural nature of the appreciative repertoire needed to make any meaningful comment on the data. However, the need for scrutiny of data as they were being gathered and considered was recognized as essential to reliability, and to this end, the researcher made use of a variety of means to make these available to public scrutiny and comment, both in China and Australia throughout the research process. The informal and formal means of doing this include (i) constant talking with other doctoral students at the Melbourne Graduate School of Education, about the methodology, the data and the framing of the study; (ii) formal presentations in the School to peers who included people familiar with China and others; (iii) presenting at conferences in China, to an audience made up of members of the Academy which had criticized CE; and (iv) submitting to an international publication some key findings from the project.
Data analysis
Data involved in this study consisted of CE publicity documents, newspaper articles, survey responses, field notes, transcribed interviews and journals. Analysis comprised of coding the data, both deductively using the general categories derived from the research questions and examination of key constructs, and inductively by identifying themes as they emerged from the data. Miles and Huberman's [37] three-stepped framework was adopted to analyse most data, including documents, field notes and reflective journals, and transcripts of interviews. The author started from sorting, discarding and organizing the data into different datasets and different kinds within each set, to allow for thorough analysis. As well, the author transcribed interviews, reviewed observation notes and reflective journals. After that, the reduced data were displayed in an organized, compressed way so that conclusions were more easily drawn. Data were organized using descriptive codes by attaching "words" and "phrases" to "chunks" of data. In this process, the author identified the emergent themes, which were topics that run through-out the data or recur with regularity. The process of identifying themes was guided by the goal of the study, that is, to identify the basic pedagogical principles of CE. The emergent themes were displayed in tables and figures so that conclusions could be more easily drawn. Responses to the small-scale survey were managed in a simple, but clear way. A profile of the learners was constructed using a matrix. Responses were sorted into ranges, with any interesting but complex ones marked. These provided the basis of selection of the six participants for the close study.
Results and Discussion
Integrating all sources of data, results of analyses revealed that CE's classroom practices are underpinned by a blend of traditional and modern pedagogical principles.
Traditional Practices Found in CE
About 60% of activities in the CE classroom are traditional language teaching and learning exercises, similar Investigating the Pedagogies of 'Crazy English' to practices found in both GTM and ALM. The substantial use of bilingual translation between English and Chinese in CE texts and classroom activities is typically a traditional exercise found for centuries in the GTM. In CE, all texts are presented bilingually. In class, oral translation between English and Chinese at the sentence-level is conducted as one of the major methods and activities, particularly in the learning of short passages. It is also the most common way used to review sentences. Translation is sometimes also used in chanting slogans and recapping. Parts of lessons are also taught in Chinese.
There are, however, some differences between these practices when used in CE and as originally practiced in GTM. Unlike the latter, in CE there are fewer elaborations on grammar, unless requested by the learners, and there is no required memorization of vocabulary lists. Unlike students in the traditional GTM classroom, CE learners play an active role in various practices, though the nature of interaction is still mostly teacher-led. CE places great emphasis on spoken English rather than on accuracy of reading and writing, the usual focus of the GTM which resulted in little attention being paid to phonological features beyond segmental articulation. CE, by contrast, highlights this domain and a great deal of time is spent on it. Thus, in sum, the content of a number of CE practices are similar to GTM, but many have been modified to accommodate a goal of real-life spoken use, and an appreciation of the learner as active agent in the learning process.
CE's key beliefs about language and learning and most of its class activities are strikingly consistent with the principles of the ALM. For example, to achieve oral proficiency, a large amount of time is spent on imitation, repetition and recitation of texts. The teacher guides these activities, modelling and gesticulating like the conductor of an orchestra. Vocabulary is learned in example sentences given by the teacher, or in translating between English and Chinese. Learners are actively involved in these practices. These activities are consistent with the mimicry drills and pattern practices of the audio-lingual methodology.
Nevertheless, there are some dissimilarity in the practices of CE and ALM. Most significant of these is in the text types used in the two approaches. In CE, texts include sentences, dialogues and passages, while in ALM material is always presented in dialogue form. In CE, Chinese is used for both instruction and organization of class activities, along with the occasional use of half Chinese and half English. By contrast, very little use of the mother tongue by teachers is permitted in ALM.
Modern Practices in CE
Modern pedagogical practices are also found in the CE class. Some of these practices embrace CLT approaches, and some involve innovative practices.
CLT practices
In addition to the traditional approaches discussed above, some CLT principles and practices were identified in CE's pedagogy and activities. CE and CLT share the same primary goal: to cultivate the learners' communicative competence. A hallmark of CLT is its learner-centeredness, which is also reflected in CE to some extent. Firstly, for example, most of the learners in the CE program observed had taken it up in order to improve their pronunciation and spoken English, and they were actively involved in the various exercises provided to achieve this. At the end of the program, the learners believed that CE had largely met their needs. Secondly, following their request, the usual texts were supplemented by longer ones. Supporters of CLT suggest that errors are a natural and valuable part of the language learning process [e.g. 16], and, thirdly, CE echoes this principle by encouraging learners to speak out at the cost of losing face and making mistakes, despite also placing emphasis on accuracy.
Unlike traditional approaches, CLT emphasizes fluency and accuracy, and includes work on both segmental and suprasegmental features of language. Seidlhofer [46] proposes that "[in CLT], pronunciation is a means of negotiating meanings in discourse, embedded in specific socio-cultural and interpersonal contexts" (p.12). Likewise, pronunciation instruction and practice in CE include mastery of stress, intonation and rhythm, as well as of individual sounds. CE's emphasis on the role of pronunciation in successful communication reflects the same communicative view. In practice, some oral class activities in CE also reflect characteristics of CLT teaching and learning. For example, many activities are conducted using pair work; dialogues are role-played in front of the class; a variety of game-like activities are used to facilitate learning. Interview data revealed that the learners were enthusiastic about these activities and games and believed that they had made progress from taking part in them.
Contextualization is a basic premise in CLT [11], and thus, the learning of words and dialogues centres on communicative functions. Similarly, Li Yang claims that to learn a language is to learn how to use it in real-life communication. In the CE class, it was observed that many single sentences were learned in context. The student Feng said that she had developed a habit to think when and how to use the sentence whenever she was learning a new one. In the CE textbooks, there is often some additional commentary on the usage of certain words and phrases as part of the lessons, which link book language to real-life use. This realizes Munby's suggestion that learners' communicative awareness should be aroused by asking "[w]ho is communicating with whom, why, where, when, how, at what level, about what, and in what way?" [39].
The extensive data on CE's provision of context and overt drawing of learners' attention to it and its communicative significance contradict the claim made by Woodward [53] that CE only teaches single words and sentences "without giving any context" (p.21). Woodward does note that when learning the expression, The very idea!, a note from a native-speaker teacher is attached to illustrate the usefulness of the phrase: A wonderful way to express indignation. Similarly, Don't talk to me like that! is explained in dialogue form: A: You're fat and ugly and I hate you. B: Don't talk to me like that! (p.24). However, it is true that CE does not contextualize every sentence taught.
The above characteristics of CE match those of CLT, but other key aspects of CLT are missing in CE's pedagogy. Firstly, the introduction of authentic texts into the learning situation is central to CLT [40]. In CE, all texts were written or edited by Li Yang himself, except the limited notes given by the native-speaker teachers and a couple of demonstrators on the CDs. Speakers in short dialogues are marked as A and B, instead of being given real names, although the longer supplementary dialogues used later do include real names. Among other aspects of CE to note, the teacher is a non-native speaker. Thus, compared to a CLT classroom, language use or communication in CE is often still distant from authentic. Secondly, in CE class, little attention is given to the rules of grammar. From its inception [e.g. 16], the proposal that in a communicative classroom, someone competent in English should demonstrate the rules of grammar and their use, has been present. Thirdly, CLT advocates developing activities that integrate listening, speaking, reading, and writing skills, and as used in China, translating skills as well. The focus of CE classroom practices, however, is almost exclusively on speaking and translating, and occasionally listening skills. Fourthly, in the communicative classroom, students ultimately have to use the language in their world outside the classroom even while taking the course, while most CE learners still rest on rehearsed responses.
In the program studied, there were two exceptions to this, two students who effectively applied what they had practiced in class in novel settings while still on course. On his trip to Japan, Zhang fully tested his communicative competence in English. Although he sometimes was not as fluent as he aimed to be, he found he could get his meaning across, and won praise from his senior colleague for managing their travel. Likewise, while Xiao's "blurt-out" about parking could be understood as an automatic response due to drills practiced in class, the incident entailed negotiating meaning appropriately and spontaneously with a foreigner in a real-life context outside the classroom. This was certainly communicative.
Innovative practices
Beyond common modern activities of CLT, there is also one innovative practice used extensively in the CE classroom: the hand gestures to assist articulate. This is very much in line with emerging contemporary approaches to L2 learning in western countries. Although the use of hand gestures in CE as explained is the result of the personal intuition of Li Yang, it does have strong theoretical support in general pedagogy [e.g. [17][18], and in language pedagogy in particular [e.g. 20,48].
The relationship between gesture and learning has been drawing increasing attention. Studies by researchers in psychology, neurology, linguistics and education show that gestures generally function in two ways in learning. First, learners gesture naturally to coordinate their attempts at mastery and to indicate their cognitive state during learning [21][22]36]. Secondly, gestures have been used deliberately in language learning to facilitate mastery [e.g. 41,35].
Just as in CE, these remedial gestures have been used, in particular, to effect change in pronunciation and rhythm. One of the earliest practices involving gestures was the verbo-tonal approach developed by Guberina [20], in which techniques to modulate the frequency of oral language included learners walking, tapping and humming the rhythm of what they will say. Others have included Chan [7], who proposed the use of basic hand gestures in teaching pronunciation -to show syllables, indicate stress, illustrate linking and intonation; Larsen-Freeman [27], who observed that gestures are effective in teaching and correcting pronunciations. Chela Flores [8][9] also found that gestures enable learners develop an awareness of rhythm, and particularly help fossilized learners develop more acceptable rhythm patterns. Orton et al. [41] targeted synchrony of voice and movement as the means to develop L2 rhythm and intonation.
Data gathered in this study show that hand gestures and rhythm training are both used in the CE program. In fact, in CE each sound is assigned a corresponding gesture which shows where in the mouth it is produced, and mirrors the contour of the sound as it is expressed. It is a technique very similar to Guberina's improvised gestures which match the tension, voice contour, and length of the target sound or phrase. Similar to the contemporary Canadian and European approaches to language teaching and learning [48,35], gestures and vocalic expression are used in the CE class to combine utterances. However, CE does not use the gesture-based approach exclusively and does not insert a gesture for every word as Maxwell does, but only on stressed syllables.
The systematic use of gestures to accompany and guide phonological accuracy is one innovative, contemporary practice in CE that has not been appreciated by its critics. It challenges the judgment Gao [15] made that CE is not much different from the outdated audio-lingual method, and also challenges the Academy's overall judgment that CE does not offer anything new, that it is just "old wine in a new bottle" [31].
The nature of language
Pedagogical practice reflects perspectives on the nature of language and language learning. The pedagogical practice of CE manifests aspects of four different views of language: the traditional formal view, the structural, the socio-functional and the intercultural view.
CE's classroom activities include some basics of the GTM approach to language learning, in particular, bilingual Investigating the Pedagogies of 'Crazy English' translation. One of the goals of CE is to enable learners to develop automatic oral translation proficiency: to become an "all-round translating machine" as Li Yang has been called. Like the GTM, CE thus acknowledges the formal system of language and considers mastery of the decontextualized utterances to constitute one part of mastery of the language.
In addition, the heavy use of activities in CE classrooms which appear also in the ALM approach, shows a strong reliance on a structural conceptualization of language. Thus, CE learners are led to repeat deconstructed morphemes, words and phrases, before finally forming them into sentences. The heuristic approach to translating Chinese into English is congruent with the contrastive way of thinking of structural linguistics; and the significance of phonological units is also typical of structuralist belief.
In addition to these, CE's communicative practices reflect the functional view of language as communication. In CE, language is regarded as a tool for communication, an instrument of social interaction. This is a view of language which reflects Halliday's [23][24] social theory of language, in which a text cannot be understood without knowing the situational context in which it is embedded.
The communicative approach understands language learning as an interactive process and incorporates culture as an important part of communicative competence, including the interaction and culture of the classroom. Although Li Yang maintains that English in China is learned just as a tool, he still claims that language embodies culture and thus language learning also involves cultural learning. He proposes that foreign language education in China should provide four orientations, one of which is orientation towards intercultural communication. He even suggests the Academy should make more efforts in cultivating learner's intercultural awareness and competence. Despite these averrals, throughout the 10-week class observation, limited evidence was found relating to developing learners' intercultural competence.
The Nature of Learning
In relation to the nature of learning, the findings showed that there are several different views of learning underpinning CE's classroom activities. The major tenets of CE learning are: (i) Language learning is physical labour rather than mental work; (ii) language learning is skill training, like learning to swim or to play the piano; (iii) teachers are the trainers or coaches and learners are active participants. (iv) learners' motivation is crucial for successful learning; (v) learning is both a painstaking and a joyful experience; (vi) adult learners are different from child learners; (vii) appropriate seating can facilitate and enhance learning; (viii) the 3-ly training technique is an essential for effective language learning. These views of learning rest on a blend of learning theories which will be discussed in detail below.
Li Yang believes that language learning is a matter of physical labour rather than mental work and that language can only be learned through intensive imitation, repetition and recitation. This is consonant with the behaviourist view that learning is a process of habit formation and over-learning. Mimicry, memorization of set phrases, and repetitive drills are the primary teaching techniques, which are justified by the Skinnerian concept of conditioning [43]. Likewise, CE's blurt-out is similar to the behaviourist concept of internalization. As Teresa explained, "[i]f it is repeated enough, it will develop into a pattern of thinking and become part of your own. Then it will respond automatically when needed". This falls exactly into the behaviourist paradigm that with sufficient practice, the language structure will be internalized and come automatically [14].
The practice of error correction and positive feedback in the CE class also echoes the behaviourist tradition. In the CE classroom, whenever a sound was mispronounced, the teacher would correct it again and again until the learner got it right. Praise and encouragement in verbal and nonverbal forms were provided to help learners with the experience of success.
The ultimate aim of CE is to blurt out English. It refers to "the kind of naturalness or sub-consciousness produced in speaking English in lifelike communications, as if without thinking" [30]. Blurt out is the result of having achieved automatic Chinese-English translation. While the idea of blurt-out fits with the behaviourist paradigm, the evidence from contemporary neuroscience suggests that this is achieved through automatic Chinese-English translation is not correct. At the level of competence in question, the second language of a bilingual is able to be invoked directly through perception, without being first processed in the mother tongue. Even in the early stages, using L2 involves effortful processing of information, although it may proceed rapidly and efficiently when speech has been highly practiced [19].
Although Li Yang claims that language learning is physical work, in practice CE activities also reflect learning as skill training, a stance which mirrors cognitive theory to some extent. CE's creative use of hand gestures to facilitate pronunciation is also consistent with Fraser's [13] suggestion of using visual representations as well as verbal explanations in learning. And, indeed, Feng said she repeated and gesticulated in order to compare some sounds, and Zhang stated that he repeated with gestures in order to figure out whether gestures worked on him. Hence even the most evidently physical CE practices involve some thinking, or cognition, on the part of the learners.
However, it should be noted that CE's view of language as skill is only a partial consonant with cognitivist ideas. According to Li Yang, language use is a matter of skill, not knowledge; language learning is physical labour, not mental work, or as he later revised the view, as 90% physical work and 10% mental work; and language learning depends upon how many times one opens one's mouth to practice speaking to develop automaticity. The CE view is quite evidently more behaviourist-based than cognitive-based.
In addition to the above, two features in the CE classroom indicate constructivist thinking. The first one is related to the teacher-learner roles. Rather than being the "transmitter of knowledge", teachers in the CE classroom are "trainers", or "coaches" involved in guiding and facilitating various activities, while learners are highly involved in these activities. The second feature involves the concepts of scaffolding and peer practice which are supported by a range of key philosophers and their social constructivist views [38,52,50]. These authors see the conceptualization of knowledge as a social artefact that is maintained through a community of peers. Data from observation and teacher-learner interviews, in particular with learners Zhang, Xiao and Feng, show that CE values peer-to-peer work, not just teacher-learner interactions. As Vygotsky determined, social interaction plays a fundamental role in the development of cognition. The scaffolding and peer practice that CE provides fit into the constructivist view that the support of learning from peers in a 'community of practice', provides opportunity for the novice practitioner to reflect upon propositional knowledge.
The deep concern about learners' affect and motivation in CE is strongly in agreement with the humanistic theory about the well-being of the learner as Maslow [34] and Rogers [44] propose. Li Yang claims that CE aims at "education rather than simply the teaching of English". By education, he means "cultivating learners into whole persons". He argues that sometimes motivation is more important than knowledge. There are echoes here of Rogers' statement that learning involves cognition and feelings. The results also show that learners benefited from the supportive and relaxed class atmosphere, and their self-confidence increased. This is in harmony with Maslow's belief of establishing a secure environment to help learners build up self-respect and achieve their potential.
Rogers [45] offers some practical micro strategies to cater for learners' affective side. First, an arousal of interest through an intensely involving first few minutes, in which everyone in the group takes part, including the teacher, and is fun is an essential (p.33). Secondly, praising is important as "it makes us feel secure and confident" and thus helps to form positive learning cycle (p.39). Thirdly, enhancing group dynamics among learners and between teacher and learners is important. Fourthly, make sure the room arrangement is appropriate and comfortable. Finally, it is important to carefully design the first session as "it is in this first session that vital first impressions are formed and motivation built on or crushed" (p.92). And a better tactic to make the best use of the first few minutes is for the teacher "to arrive in plenty of time" (p.97), greeting learners warmly, asking their names and introducing themselves. Learning their names is another way to impress them. Most of all, CE's major concern about learners' affect, and the positive and encouraging environment, are aimed at making a CE program a happy learning experience.
Although Li Yang claims that CE is an unorthodox, revolutionary method of learning English, when it comes down to its actual practice, many of its beliefs and exercises rely on the traditional learning style of Chinese culture. In particular, CE relies very heavily on imitation, repetition and recitation. Li approves of the saying from Chinese ancestors that "when a book is read for a hundred times, its meaning will appear automatically". In class, learners are encouraged to practice speaking a sentence or dialogue repeatedly until they can recite it. Such practices are similar to the traditional memorization of texts in antiquity found in the Four Books and Five Classics.
As well, CE's paradoxical belief that learning is both pain and joy resonates well with the Confucian view. Li Yang defines CE as "the extreme diligence" and learning English as "a painful process", but on the other hand, he promotes happy learning, claiming "learning English is interesting" and "I enjoy learning English". For Confucianists also, learning requires painstaking effort which is driven by self-determination, will power, perseverance and patience [4,49]. Based on the view of effortful learning, memorizing is a widely adopted practice, yet at the same time, Confucius was enthusiastic about learning, saying in the opening sentence of his Analects: "[i]sn't it pleasant to learn with a constant perseverance and application?" (I.1). In this respect, Li Yang is a clear follower of the Confucian tradition.
Finally, like the use of hand gestures discussed earlier, another two practices of CE actually suggested a sound knowledge of learning: the seating arrangement and the 3-ly method. Although the CE classroom may look normal to Western eyes, its U-shaped seating arrangement is in contrast to the rows of fixed seats in most academic institutions in China. Observation data show that this seating arrangement makes it easier for learners to move around practicing, and thus facilitates learning, activities not typical of traditional Chinese classrooms, or even contemporary ones.
It has been argued that the appropriate arrangement of seats can facilitate and enhance learning, while traditional fixed seating style restricts the nature of communicative possibilities for each student [16,51]. Likewise, similar voices were heard from practitioners. In his pilot study on the Vygotskian notion of the zone of proximal development, McCafferty [36] places great emphasis on the importance of the 'V' shape seat arrangement in helping to create spaces which lead to the co-construction of learners' identities in transforming themselves into natural performance. As observed, compared to the traditional style in academic institutions, the U-shaped seating arrangement in the CE class made it easier, more natural, and more convenient for learners to work in pairs and move around, gesticulating, repeating and reciting.
The second intuitive understanding of learning concerns the 3-ly method. Based on his own learning experience, Li Yang believes that good English can only be learned through practice and the most effective way to practice is the 3-ly method. Data analyses revealed different results for the 3-ly method. Interviews with learners showed that they all approved of practicing speaking as clearly as possible in Investigating the Pedagogies of 'Crazy English' order to achieve accuracy. Although they also enjoyed practicing speaking as loudly as possible, academics have been antagonistic to such a "hysterical" learning style [54]. They maintain that this practice is unscientific and may do harm to learners. Yin, however, does admit that shouting can enhance memory, although he does not elaborate. Otherwise, sparse literature had been found as to whether or not speaking as loudly as possible is effective in learning a foreign language. The result of data analyses showed a conflict about practicing speaking as quickly as possible. On the one hand, it contradicts Fraser's conclusion that the quicker you say a sentence, the more fluent you will sound is not true [12]. On the other hand, it conforms to Cook's [10] finding that memory span is restricted by the speed of articulation. In particular, CE's training to speak as quickly as possible fits well with Cook's suggestion that training students to speak swiftly and accurately can help enhance memory, as well as their general ability to process language. Learners also reported its effectiveness from their own experience. Li Yang claims that he borrowed the idea from athletes' extra-intensity training. The author finds a resemblance to the practice of 'tongue twisters' used in the training of traditional Chinese 'cross talk' performers to attain fluency.
The study found that CE pedagogical practices for English learning comprise an eclectic set of traditional, modern and innovative methods drawn from China and abroad, which are promoted without reference to theories or scholarship, and used without being subjected to formal testing and evaluation. Their employment stems from Li Yang's largely intuitive sense of what is useful, combined with methods he has found empirically effective in turning his own failure at English into success, and are continued as their value is supported by CE teachers' and learners' own experience with them. There are tensions if not actual contradictions between some of the fundamental positions adopted, especially concerning the nature of language and learning. Nonetheless, staff and students all believe CE leads them to make progress, and there was some experiential and observation evidence to suggest this was true for those studied.
Although CE employs a great many practices which would be quite familiar to those teaching university classes, there are several aspects in CE that would be new to academic classes and that may be worth academic teachers noticing, thinking about, attending to in some way, or even copying. CE learning involves high frequency of practicing language skills used in real-life context. This is an emphasis that today's university students could also benefit from. In achieving this, the use of hand gestures in teaching pronunciation is of particular benefit.
As CE learners spend a lot of time on English, it is hardly surprising if they improve. But at the same time, there is very evident care taken to ensure that the content provided is focused on what they want to learn, and that learning is scaffolded over a lesson and across a course. The practices include individual, pair and group work, teacher-learner and learner-learner interactions, all of which are known to be sound. These areas also may offer something useful for the design of academic classes.
At the same time, it is also evident that the Academy could offer CE some theorizing of its practice to lift them off a basis of pure intuition, and of the intuition of only one man at that.
Implication
The findings of this study suggest that CE's classroom practices are underpinned by a blend of traditional and modern pedagogical principles. Its practices are a combination of old and new, and as a result, its Founder Li Yang's claim that CE is a revolution in learning English is not completely true albeit some innovative and intuitive activities. Such a claim may be more of a concern of publicity.
The findings of this study will evidently be significant to the EFL community in China. First, this significance can be established within the Academy. The project's results will provide accurate, first-hand critically appraised information about CE's practices and outcomes, which is essential for any rational debate within the Academy about the value of CE. Secondly, it is important for EFL teachers and learners. Factors revealed will carry some practical implications for both EFL teachers and learners, enabling them to benefit from some effective and innovative methods that can complement their existing repertoire and thus improve learning. Thirdly, the results can also be significant to education policy makers. It is anticipated that some CE values, concepts and practices will inspire the development and reform of ELT in China.
Conclusion
To sum up, while many activities in the CE class continue traditional practices, others are grounded in modern and contemporary approaches. The emphasis on cultivating the learner's ability in using language in real-life contexts, the focus on both fluency and accuracy of speech, and the attempt to satisfy learner's needs, are congruent with a CLT approach. Significantly, the frequent and systematic use of gesturing is cutting-edge in practical repertoire, and its value is supported by a solid body of recent research. However, it appears that gesture use in CE owes its inclusion not to these studies, but rather to the intuition of its Founder. The activities in a CE class are thus an evidently compatible blend of traditional, modern and contemporary practices used in foreign language teaching and learning. It is evident that CE is not completely revolutionary in English learning as its Founder claims, who coined it likely to attract attention or for publicity's sake. | 9,394 | 2013-01-01T00:00:00.000 | [
"Education",
"Linguistics"
] |
Computer Optimization of Biodegradable Nanoparticles Fabricated by Dispersion Polymerization
Quality by design (QbD) in the pharmaceutical industry involves designing and developing drug formulations and manufacturing processes which ensure predefined drug product specifications. QbD helps to understand how process and formulation variables affect product characteristics and subsequent optimization of these variables vis-à-vis final specifications. Statistical design of experiments (DoE) identifies important parameters in a pharmaceutical dosage form design followed by optimizing the parameters with respect to certain specifications. DoE establishes in mathematical form the relationships between critical process parameters together with critical material attributes and critical quality attributes. We focused on the fabrication of biodegradable nanoparticles by dispersion polymerization. Aided by a statistical software, d-optimal mixture design was used to vary the components (crosslinker, initiator, stabilizer, and macromonomers) to obtain twenty nanoparticle formulations (PLLA-based nanoparticles) and thirty formulations (poly-ɛ-caprolactone-based nanoparticles). Scheffe polynomial models were generated to predict particle size (nm), zeta potential, and yield (%) as functions of the composition of the formulations. Simultaneous optimizations were carried out on the response variables. Solutions were returned from simultaneous optimization of the response variables for component combinations to (1) minimize nanoparticle size; (2) maximize the surface negative zeta potential; and (3) maximize percent yield to make the nanoparticle fabrication an economic proposition.
Introduction
One of the great challenges in drug development and medicine today is finding more effective forms of treatment for a large number of life-threatening but curable diseases, such as cancer. At the moment, there is an imbalance between the knowledge of cancer biology and the success achieved in cancer treatment: efforts in the treatment of cancer have not met with much success [1,2]. To achieve the goal of eliminating death and suffering from cancer, the USA National Cancer Institute has embraced the power of nanotechnology to radically change the procedure for diagnosis, imaging and treating cancer. The integration of the developments in nanotechnology with advances in cancer research can be done using polymeric nanoparticles which are capable of specific delivery of large amounts of single or multiple therapeutic agents as well as imaging agents embedded in the core per targeting biorecognition event compared to simple immunotargeted drugs. Both targeting (spatial/distribution control) and controlled release (temporal control) of therapeutic agents can be achieved [3,4].
The evolution of nanoparticles for biomedical applications has moved from the first generation nanoparticles (mainly suitable for liver targeting) through the second generation (stealth nanoparticles for long blood circulation and passive targeting) to the third generation nanoparticles with molecular recognition [1,5]. The fourth generation has been dubbed theranostics: multifunctional nanoscale devices which allow for a combination of diagnostic agent with a therapeutic agent and even a reporter of therapeutic efficacy in the same nanodevice package [6]. Aside from biocompatibility and biodegradability, the physicochemical properties of nanoparticles (size and surface modification, among others) and their interactions with biological systems are important considerations in their design and fabrication. Polymeric nanoparticles can be prepared mainly by two methods: (i) dispersion of preformed polymers and (ii) polymerization of monomers (i.e., in situ polymerization). In situ polymerization of monomers and crosslinkers offers many advantages including one-pot synthesis of nanoparticles [7]. Our laboratory has focused on surfactant free, free radical dispersion polymerization (in situ polymerization) technique for the fabrication of stealth crosslinked nanoaprticles for biomedical applications [2,[7][8][9][10]. We report here our efforts on the development of stealth biodegradable cross-linked nanoparticles by dispersion polymerization suitable for the delivery of bioactive agents.
The relatively new concept of quality by design (QbD) and process analytical technology (PAT) in pharmaceutical dosage form design and development, already incorporated into automakers' production principles, involves designing and developing drug formulations and manufacturing processes which ensure predefined drug product specifications. It is believed that product and process understanding is a key element of QbD-PAT [7,10,11]. Thus an important part of QbD-PAT is to understand how process and formulation variables affect product characteristics and subsequent optimization of these variables vis-à-vis the final specifications. Statistical design of experiments (DoE) is a well-established method for identifying important parameters in pharmaceutical dosage form design and for optimizing the parameters with respect to certain specifications [7,[10][11][12]. Two major approaches to the design of experiments to be able to examine all of the variables simultaneously are factorial and mixture experimental designs [7,10]. The statistical experimental designs involving mixture methodology is an efficient method for studying products made from components at various levels. We used D-optimal mixture design for experimental design, analysis and optimization. When a formulation is a mixture of various components (proportion of the constituents) as studied in our work and the levels of the components are constrained, D-optimal mixture design is more useful than a factorial design because it accounts for the dependence of response on proportionality of constituents.
Materials and Methods
Two types of nanoparticles were fabricated and characterized as discussed previously: poly-L-lactide-based [10] and poly-ε-caprolactone-based nanoparticles [7].
We used mixture design (D-optimal mixture statistical experimental design) in this work for the response surface method (RSM). The responses (particle size and percent yield (for poly-L-lactidebased nanoparticles) and particles size and surface zeta potential (for poly-ε-caprolactone-based nanoparticles) are functions of the proportions of the formulation variables investigated: macromer, initiator, stabilizer, and crosslinker. Based on preliminary data, constraints were introduced to the proportions of the components to allow the fabrication of smooth spherical particles. In D-optimal mixture design, there are restrictions on component proportions such that a lower and upper limits are specified [7,10,[13][14][15][16]. Aided by statistical software for the design of the experiments and analysis of the data (Design-Expert , Stat-Ease Inc., Minneapolis, MN, USA), and using D-optimal mixture statistical experimental design, we varied the components (critical material attributes (CMAs): crosslinker, initiator, stabilizer and poly-L-lactide-HEMA macromonomer) to obtain twenty nanoparticle formulations for poly L-lactide-based nanoparticles (Table 1) and thirty nanoparticle formulations (Table 2) for poly-ε caprolactone-based nanoparticles. The particle size for poly-L-lactide-based nanoparticles ranges from 261 to 326 nm; while the polydispersity index (PDI) ranges from 0.20 to 0.29. For poly-ε-lcaprolactone based nanoparticles, the particle size obtained ranges from 130 nm to 788 nm; while PDI ranges from 0.133 to 0.605. The particle size distribution is given by the PDI. A PDI value of <0.1 indicates a homogenous monodisperse formulation; while a PDI of >0.3 indicates polydispersity with variations in particle size.
be able to examine all of the variables simultaneously are factorial and mixture experimental designs [7,10]. The statistical experimental designs involving mixture methodology is an efficient method for studying products made from components at various levels. We used D-optimal mixture design for experimental design, analysis and optimization. When a formulation is a mixture of various components (proportion of the constituents) as studied in our work and the levels of the components are constrained, D-optimal mixture design is more useful than a factorial design because it accounts for the dependence of response on proportionality of constituents.
They were characterized for surface morphology (scanning electron microscopy (SEM)), particle size (dynamic light scattering (DLS) using Zetasizer Nano-ZS (Malvern Instruments, Malvern, UK), yield, and surface zeta potential (Zetasizer Nano-ZS). Typical electron micrographs are shown in Figure 1. We used mixture design (D-optimal mixture statistical experimental design) in this work for the response surface method (RSM). The responses (particle size and percent yield (for poly-L-lactide-based nanoparticles) and particles size and surface zeta potential (for poly-ɛ-caprolactone-based nanoparticles) are functions of the proportions of the formulation variables investigated: macromer, initiator, stabilizer, and crosslinker. Based on preliminary data, constraints were introduced to the proportions of the components to allow the fabrication of smooth spherical particles. In D-optimal mixture design, there are restrictions on component proportions such that a lower and upper limits are specified [7,10,[13][14][15][16]. Aided by statistical software for the design of the experiments and analysis of the data (Design-Expert ® , Stat-Ease Inc., Minneapolis, MN, USA), and using D-optimal mixture statistical experimental design, we varied the components (critical material attributes (CMAs): crosslinker, initiator, stabilizer and poly-L-lactide-HEMA macromonomer) to obtain twenty nanoparticle formulations for poly L-lactide-based nanoparticles
Followed by Optimization
The selection of the best models for modeling the response variables (particle size and yield for poly-L-lactide-based nanoparticles) and particles size and surface zeta potential for poly-ε-caprolactone-based nanoparticles is important since the fitted models will be used to predict the variables following simultaneous numerical optimization [17]. With a mixture design, the response determined by any possible component mixtures can be identified by a point in the experimental domain called the design space. When working with three different variables (components), the experimental domain corresponds to an equilateral triangle with the vertices corresponding to the pure components while different points within the design space correspond to a mixture of components [7,10,18]. In this work, four components (macromonomer, initiator system, crosslinking agent and stabilizer) were combined to prepare nanoparticles; however, the proportion of the macromonomer was kept constant in all the experiments thereby yielding a triangular experimental domain. Further, as a result of constraints introduced, the region of interest (design space) that allows for the formation of smooth spherical particles is only a fraction of the possible experimental domain.
Poly-L-Lactide-Based Nanoparticles
Model fitting to the data (Table 1) was carried out and the quadratic model was found significant and was selected. To improve the model, insignificant terms were removed by backward elimination. Analysis of variance (ANOVA) of the selected model and terms (Table 3) reveal that the selected model is significant (p = 0.0020). Further, the model (Scheffe Polynomial) was also selected based on the estimation of several statistical parameters: multiple correlation coefficient (R 2 ), adjusted multiple correlation coefficient (adjusted R 2 ) and the predicted residual sum of squares (PRESS). Also, "lack of fit" was not statistically significant (p = 0.4994) which is desirable. The resulting model (Scheffe polynomial) is shown in Equation (1) Diagnostic plots ( Figure 2) show the validity of the model. The normal probability plot of the residuals (Figure 2A) is the most important diagnostic and it checks for non-normality in the error term. A linear normal probability plot of the residuals, which indicates normality in the error term, was obtained. Figure 2B shows a diagnostic plot that tests the assumption of constant variance. Both plots show no problem with our data. The Scheffe polynomial (Equation (1)) was used to generate the model graph ( (2) where: A = Crosslinker (mmol); B = Initiators (mmol); C = Stabilizer (mmol); D = Macromonomer (mmol) Diagnostic plots (Figure 4) show the validity of the model. The Scheffe polynomial (Equation (2)) was used to generate the model graph (
Poly-ɛ-Caprolactone-Based Nanoparticles
Logarithmic transformation was carried out before model fitting to particle size data ( Table 2). The quadratic model was found significant and was selected. To improve the model, insignificant terms were removed by backward elimination. Analysis of variance (ANOVA) of the selected model and terms (Table 5) reveals that the selected model is significant (p < 0.0001). The linear mixture (component linear terms) and the square of A term (crosslinker term) are significant: p < 0.0001 and p = 0.0005 respectively. Additionally, "lack of fit" is not significant (p = 0.6921). Non-significant lack of fit is good as our desire is for the model to fit. The "Pred R-Squared" of 0.9326 is in reasonable agreement with the "Adj R-Squared" of 0.9455. Adequate Precision measures the signal to noise ratio. A ratio greater than 4 is desirable. The ratio of 27.332 obtained in this work indicates an adequate signal. Consequently, this model can be used to navigate the design space.
Poly-ε-Caprolactone-Based Nanoparticles
Logarithmic transformation was carried out before model fitting to particle size data ( Table 2). The quadratic model was found significant and was selected. To improve the model, insignificant terms were removed by backward elimination. Analysis of variance (ANOVA) of the selected model and terms (Table 5) reveals that the selected model is significant (p < 0.0001). The linear mixture (component linear terms) and the square of A term (crosslinker term) are significant: p < 0.0001 and p = 0.0005 respectively. Additionally, "lack of fit" is not significant (p = 0.6921). Non-significant lack of fit is good as our desire is for the model to fit. The "Pred R-Squared" of 0.9326 is in reasonable agreement with the "Adj R-Squared" of 0.9455. Adequate Precision measures the signal to noise ratio. A ratio greater than 4 is desirable. The ratio of 27.332 obtained in this work indicates an adequate signal. Consequently, this model can be used to navigate the design space. where: A = Crosslinker (mmol); B = Initiators (mmol); C = Stabilizer (mmol); D = Macromonomer (mmol). Diagnostic plots show the validity of the model ( Figure 6). The normal probability plot of the residuals is the most important diagnostic plot; it checks for non-normality in the error term. A linear normal probability plot of the residuals was obtained which indicates normality in the error term and therefore there is no problem with our data. Further, Residuals vs. Predicted tests the assumption of constant variance and it should be a random scatter within the upper and lower boundaries. The Scheffe polynomial (Equation (3)) was used to generate the model graph (Figure 7) which shows the design space and variation in particle size as a function of the mixture composition. A = Crosslinking agent; B = Initiators; C = Stabilizer and D = Macromonomer (Poly-ε-caprolactone based nanoparticles).
Int. J. Environ. Res. Public Health 2015, 12 11
The empirical model (Scheffe polynomial) is shown in Equation (3) where: A = Crosslinker (mmol); B = Initiators (mmol); C = Stabilizer (mmol); D = Macromonomer (mmol). Diagnostic plots show the validity of the model ( Figure 6). The normal probability plot of the residuals is the most important diagnostic plot; it checks for non-normality in the error term. A linear normal probability plot of the residuals was obtained which indicates normality in the error term and therefore there is no problem with our data. Further, Residuals vs. Predicted tests the assumption of constant variance and it should be a random scatter within the upper and lower boundaries. The Scheffe polynomial (Equation (3)) was used to generate the model graph (Figure 7 Following square root transformation, model fitting for zeta potential data was carried out. Quadratic model was found significant and the model was selected. To improve the model, insignificant terms were removed by backward elimination. Analysis of variance (ANOVA) ( Table 6) reveals that the selected model is significant (p = 0.0437). The linear mixture terms (component linear terms) are not significant (p = 0.7487); the quadratic term of A (crosslinker) by C (stabilizer) is significant p = 0.0037. In addition, "lack of fit" is not significant (p = 0.4389). Non-significant lack of fit is good; we want the model to fit. Adequate precision measures the signal to noise ratio. The ratio of 6.96 indicates an adequate signal. This model can be used to navigate the design space. Following square root transformation, model fitting for zeta potential data was carried out. Quadratic model was found significant and the model was selected. To improve the model, insignificant terms were removed by backward elimination. Analysis of variance (ANOVA) ( Table 6) reveals that the selected model is significant (p = 0.0437). The linear mixture terms (component linear terms) are not significant (p = 0.7487); the quadratic term of A (crosslinker) by C (stabilizer) is significant p = 0.0037. In addition, "lack of fit" is not significant (p = 0.4389). Non-significant lack of fit is good; we want the model to fit. Adequate precision measures the signal to noise ratio. The ratio of 6.96 indicates an adequate signal. This model can be used to navigate the design space. where: A = Crosslinker (mmol); B = Initiators (mmol); C = Stabilizer (mmol); D = Macromonomer (mmol). Diagnostic plots show the validity of the model (Figure 8). The Scheffe polynomial (Equation (4)) was used to generate the model graph (Figure 9
Discussion
Scheffe polynomial models were generated to predict particle size (nm) and percent yield for poly-L-lactide-based nanoparticles as functions of the composition of the formulations. The models are shown in Equations (1) and (2). Further, Scheffe polynomial models were generated to predict particle size (nm) and zeta potential (mV) for poly-ɛ-caprolactone-based nanoparticles as functions of the composition of the formulations.
The models are shown in Equations (1) and (2).
Simultaneous Numerical and Graphical Optimizations of Nanoparticle Size and Percent Yield for Poly-L-Lactide-Based Nanoparticles
Following simultaneous numerical optimization of nanoparticle size and percent yield of poly-L-lactide-based nanoparticles using Equations (1) and (2), four solutions were returned. Three of the solutions were used to fabricate nanoparticles to compare the predicted values with the actual laboratory values. The observations from the confirmation experiments are within the confirmation 95% prediction interval (95% PI low and 95% PI high), where PI is point prediction, showing the confirmation of the models. A typical overlay plot is shown in Figure 10. The focus on particle size and yield in this aspect of the work is based on the fact that particle size plays a key role in determining body distribution of nanoparticles after in vivo administration by injection and in facilitating their access to cancer cells (internalization) either by passive or active targeting to tumors [19,20]. Optimization of the nanoparticle fabrication for a high percent yield will make the drug development effort an economic proposition.
Discussion
Scheffe polynomial models were generated to predict particle size (nm) and percent yield for poly-L-lactide-based nanoparticles as functions of the composition of the formulations. The models are shown in Equations (1) and (2). Further, Scheffe polynomial models were generated to predict particle size (nm) and zeta potential (mV) for poly-ε-caprolactone-based nanoparticles as functions of the composition of the formulations.
The models are shown in Equations (1) and (2).
Simultaneous Numerical and Graphical Optimizations of Nanoparticle Size and Percent Yield for Poly-L-Lactide-Based Nanoparticles
Following simultaneous numerical optimization of nanoparticle size and percent yield of poly-L-lactide-based nanoparticles using Equations (1) and (2), four solutions were returned. Three of the solutions were used to fabricate nanoparticles to compare the predicted values with the actual laboratory values. The observations from the confirmation experiments are within the confirmation 95% prediction interval (95% PI low and 95% PI high), where PI is point prediction, showing the confirmation of the models. A typical overlay plot is shown in Figure 10. The focus on particle size and yield in this aspect of the work is based on the fact that particle size plays a key role in determining body distribution of nanoparticles after in vivo administration by injection and in facilitating their access to cancer cells (internalization) either by passive or active targeting to tumors [19,20]. Optimization of the nanoparticle fabrication for a high percent yield will make the drug development effort an economic proposition.
Simultaneous Numerical and Graphical Optimizations of Nanoparticle Size and Zeta Potential for Poly-Ɛ-Caprolactone-Based Nanoparticles
Following simultaneous numerical optimization of nanoparticle size and nanoparticle surface zeta potential using the two models (Equations (3) and (4)), ten solutions were returned. Three of the solutions were used to fabricate nanoparticles to compare the predicted values with the actual laboratory values. The observations from the confirmation experiments are within the confirmation 95% prediction interval (95% PI low and 95% PI high), showing the confirmation of the models ( Figure 11). As indicated earlier, we showed interest in particle size because particle size plays an important role in determining the drug release behavior of drug-loaded nanoparticles and the fate of the nanoparticles after in vivo administration [19,20]. The particles should be small enough to avoid the mechanical spleen or lung filtering effects. Moreover, the cells of the reticuloendothelial system (RES) or mononuclear phagocyte system recognize and rapidly clear nanoparticles from the circulation by phagocytosis and RES uptake has been shown to increase with particle size [21].
Zeta potential data in this work show predominantly negative values. Following injection into the blood stream, nanoparticles with a positive zeta potential pose a threat of causing transient embolism and rapid clearance compared to negatively charged particles [22]. Consequently, we decided to carry out simultaneous numerical optimization on particle size (with emphasis on minimization) and zeta potential (with emphasis on maximization of the negative zeta potential values).The overlay plots (Figures 10 and 11) show the regions meeting the specifications for the optimizations (colored yellow). The yellow regions show the windows of operability where the components can be set to meet the requirements for both responses (particle size and percent yield ( Figure 10) for poly-L-lactide-based nanoparticles and particle size and surface zeta potential ( Figure 11) for poly-ɛ-caprolactone-based nanoparticles).
Simultaneous Numerical and Graphical Optimizations of Nanoparticle Size and Zeta Potential for Poly-ε-Caprolactone-Based Nanoparticles
Following simultaneous numerical optimization of nanoparticle size and nanoparticle surface zeta potential using the two models (Equations (3) and (4)), ten solutions were returned. Three of the solutions were used to fabricate nanoparticles to compare the predicted values with the actual laboratory values. The observations from the confirmation experiments are within the confirmation 95% prediction interval (95% PI low and 95% PI high), showing the confirmation of the models ( Figure 11). As indicated earlier, we showed interest in particle size because particle size plays an important role in determining the drug release behavior of drug-loaded nanoparticles and the fate of the nanoparticles after in vivo administration [19,20]. The particles should be small enough to avoid the mechanical spleen or lung filtering effects. Moreover, the cells of the reticuloendothelial system (RES) or mononuclear phagocyte system recognize and rapidly clear nanoparticles from the circulation by phagocytosis and RES uptake has been shown to increase with particle size [21].
Zeta potential data in this work show predominantly negative values. Following injection into the blood stream, nanoparticles with a positive zeta potential pose a threat of causing transient embolism and rapid clearance compared to negatively charged particles [22]. Consequently, we decided to carry out simultaneous numerical optimization on particle size (with emphasis on minimization) and zeta potential (with emphasis on maximization of the negative zeta potential values).The overlay plots (Figures 10 and 11) show the regions meeting the specifications for the optimizations (colored yellow). The yellow regions show the windows of operability where the components can be set to meet the requirements for both responses (particle size and percent yield ( Figure 10) for poly-L-lactide-based nanoparticles and particle size and surface zeta potential ( Figure 11) for poly-ε-caprolactone-based nanoparticles). Figure 11. Simultaneous graphical optimization (overlay plot) of the design space variation in particle size and zeta potential as functions of the mixture composition. A = crosslinking agent; B = initiators; C = stabilizer and D = macromonomer.
Conclusions
One mission of a drug product development scientist is to develop drug delivery systems that enhance the optimal performance of bioactive agents. Many strategies are used to accomplish this purpose, including measuring the effect of several combinations of formulation and process variables on the properties of nanoparticles. By carefully selecting which combinations of these variables to evaluate, it is possible to optimize nanoparticle properties for specific purposes as embodied in quality by design (QbD) and process analytical technology (PAT) in pharmaceutical dosage form design and development. We have used D-optimal mixture statistical experimental design of experiments and analyze data in two types of nanoparticles (poly-L-lactide-based nanoparticles and poly-ɛ-caprolactone-based nanoparticles) in which the components are in proportions. The negative terms in the empirical model (Equation (1)) corresponding to the amounts of crosslinking agent and stabilizer included in the reaction mixture are the terms to be controlled for particle size minimization. Further, the resulting model (Scheffe polynomial) shown in Equation (2) indicates that terms with positive sign (initiator and stabilizer) will increase the yield if increased. The same reasoning is true in other models: Equations (3) and (4).
Following simultaneous numerical and graphical optimizations of the two models generated (Scheffe polynomials), the optimum formulations were identified. Figure 11. Simultaneous graphical optimization (overlay plot) of the design space variation in particle size and zeta potential as functions of the mixture composition. A = crosslinking agent; B = initiators; C = stabilizer and D = macromonomer.
Conclusions
One mission of a drug product development scientist is to develop drug delivery systems that enhance the optimal performance of bioactive agents. Many strategies are used to accomplish this purpose, including measuring the effect of several combinations of formulation and process variables on the properties of nanoparticles. By carefully selecting which combinations of these variables to evaluate, it is possible to optimize nanoparticle properties for specific purposes as embodied in quality by design (QbD) and process analytical technology (PAT) in pharmaceutical dosage form design and development. We have used D-optimal mixture statistical experimental design of experiments and analyze data in two types of nanoparticles (poly-L-lactide-based nanoparticles and poly-ε-caprolactone-based nanoparticles) in which the components are in proportions. The negative terms in the empirical model (Equation (1)) corresponding to the amounts of crosslinking agent and stabilizer included in the reaction mixture are the terms to be controlled for particle size minimization. Further, the resulting model (Scheffe polynomial) shown in Equation (2) indicates that terms with positive sign (initiator and stabilizer) will increase the yield if increased. The same reasoning is true in other models: Equations (3) and (4).
Following simultaneous numerical and graphical optimizations of the two models generated (Scheffe polynomials), the optimum formulations were identified. | 5,873 | 2015-12-22T00:00:00.000 | [
"Engineering",
"Materials Science",
"Medicine"
] |
Comparing Vision-Capable Models, GPT-4 and Gemini, With GPT-3.5 on Taiwan’s Pulmonologist Exam
Introduction The latest generation of large language models (LLMs) features multimodal capabilities, allowing them to interpret graphics, images, and videos, which are crucial in medical fields. This study investigates the vision capabilities of the next-generation Generative Pre-trained Transformer 4 (GPT-4) and Google’s Gemini. Methods To establish a comparative baseline, we used GPT-3.5, a model limited to text processing, and evaluated the performance of both GPT-4 and Gemini on questions from the Taiwan Specialist Board Exams in Pulmonary and Critical Care Medicine. Our dataset included 1,100 questions from 2012 to 2023, with 100 questions per year. Of these, 1,059 were in pure text and 41 were text with images, with the majority in a non-English language and only six in pure English. Results For each annual exam consisting of 100 questions from 2013 to 2023, GPT-4 achieved scores of 66, 69, 51, 64, 72, 64, 66, 64, 63, 68, and 67, respectively. Gemini scored 45, 48, 45, 45, 46, 59, 54, 41, 53, 45, and 45, while GPT-3.5 scored 39, 33, 35, 36, 32, 33, 43, 28, 32, 33, and 36. Conclusions These results demonstrate that the newer LLMs with vision capabilities significantly outperform the text-only model. When a passing score of 60 was set, GPT-4 passed most exams and approached human performance.
Introduction
Artificial intelligence (AI) has been widely applied in the field of healthcare over the past years with deep learning, neural networks, and image processing.Notable applications include medical image diagnosis and models predicting mortality rates for specific diseases [1,2].The emergence of large language models (LLMs) such as OpenAI's Chat Generative Pre-trained Transformer (ChatGPT; OpenAI, San Francisco, CA, United States), which debuted in 2022, has opened up a new field of applications in healthcare.
Accuracy and minimal error margins are paramount in medical diagnosis, making it important to evaluate LLMs' effectiveness.Some studies have implemented statistical methods such as receiver operating characteristic curves, precision-recall curves, or confusion matrices for assessment.Alternatively, some have assessed LLMs using real medical examination texts.ChatGPT, as the first LLM, was tested with the text of exams for medical staff, primarily focusing on English text.ChatGPT shows a significant improvement in natural language processing (NLP) in 2023, performing at or near the passing threshold for various medical exams without specialized training [3].It achieves scores equivalent to those of a third-year medical student [4].In non-English texts, ChatGPT's performance varies [5].It has shown proficiency in basic science medical knowledge and applied clinical knowledge [6].
The next generation of language models includes multimodal capabilities, retrieval-augmented generation, and enhanced processing of images, audio, and video.These capabilities are crucial for the medical field, which heavily relies on images and sound.GPT-4 and Gemini, as the next-generation LLMs, offer vision features, handling both text and images.In subspecialties like gynecology, thoracic surgery, radiology, and diagnostic imaging, GPT-4 outperformed GPT-3, but its image processing capabilities are less explored [7][8][9].Since Gemini was released in mid-December 2023, there is no related information available yet [10].
For non-English, ChatGPT exhibits lower scores than medical students in the context of simplified Chineselanguage medical exams [11].Chest medicine, gastroenterology, and general medicine are scored relatively well in medical exams in Taiwan.A key limitation is the reliance on non-English text, which may impact performance due to the model's primary training in English [12].In subspecialties like family medicine in Taiwan, the results were not satisfactory [13].
In Taiwan's pulmonary specialist board exam, key areas such as infectious diseases (e.g., pneumonia and tuberculosis), cancer, respiratory disorders, intensive care, sleep medicine, and esophageal diseases are prominent.Building on this study, we focused on non-English texts, specifically chest subspecialties, and utilized next-generation models with vision features.This approach allowed us to incorporate both textual and graphical data into our research.
Materials And Methods
We sourced pulmonary specialist exam questions and answers from 2013 to 2023 from the Taiwan Society of Pulmonary and Critical Care Medicine (TSPCCM) website [14], categorizing them into text-and image-based sections.Two pulmonologists reviewed and subdivided these 1,100 questions into specific topics: infection (bacterial, fungal, and viral origin), tuberculosis (lung and extrapulmonary origin), esophagus topic, thorax anatomy, sleep, pharmacology, lung neoplasms (lung cancer and other origins in thorax), critical care medicine, pathophysiology, mechanical ventilation and oxygen therapy, interstitial lung disease, surgery, pulmonary embolism and vascular disease, asthma, chronic obstructive pulmonary disease (COPD) including bronchiectasis, lung function test, pulmonary vasculitis, autoimmune disease, sarcoidosis and lymphangioleiomyomatosis, pneumothorax and chylothorax, bronchoscopy and image examination, musculoskeletal disease, tracheal disease, pleura disease, diaphragmatic disease, miscellaneous.
We meticulously organized the pulmonary exam questions into a text file, while separately storing the images.Because these one-answer questions have multiple choices to select, the prefix "Please give me only one answer in a single letter form" will be added in the front part of the question content before being fed to LLM.Combine these two parts to form one prompt to input into the model.
For evaluation, we employed GPT-3.5, which lacks image processing capabilities, alongside GPT-4 Vision and Gemini, both equipped with image functionalities.The text components were analyzed using APIs provided by OpenAI and Gemini.In this study, we used a specific model by setting the model name by API.In ChatGPT, "GPT-3.5-turbo"and "GPT-4" were selected.These two model versions were the same on December 20, 2023.In Gemini, "Gemini-pro" in text-only questions and "Gemini-pro-vision" in questions with both text and images were selected, which was the December 25, 2023 version.For the visual elements in GPT-4, we used the web interface to input text and upload images.The Gemini API, on the other hand, facilitated input for both text and images.The flowchart of the study is presented in Figure 1.
TABLE 1: Scores in text-only and text-and-image questions by year
In the entire set of exam questions, categorized according to the number of questions, the analysis is as follows: for lung neoplasms (lung cancer and other origins in the thorax), there are a total of 172 questions.GPT-4 answered 124 correctly, Gemini answered 91 correctly, and GPT-3.5 answered 62 correctly.In the section on infections (bacterial, fungal, and viral origin), there are 120 questions, with GPT-4, Gemini, and GPT-3.5 scoring 77, 65, and 35 correct answers, respectively.For critical care medicine, there are 98 questions, with the scores being 66, 45, and 34, respectively.In mechanical ventilation and oxygen therapy, there are 91 questions, with the scores being 60, 39, and 28, respectively.For tuberculosis (lung and extrapulmonary origin), there are 71 questions, with the scores being 38, 32, and 26, respectively.In the topic related to the esophagus, there are 64 questions, with scores of 41, 35, and 23, respectively.For asthma, there are 63 questions, with the scores being 40, 27, and 26, respectively.For COPD, including bronchiectasis, there are 63 questions with scores of 33, 30, and 20, respectively.The details of these results are listed in In the category of questions involving both text and images, the analysis according to the number of questions is as follows: for mechanical ventilation and oxygen therapy, there are a total of 22 questions, with GPT-4 answering 15 correctly and Gemini answering 9.In the sleep category, there are seven questions, with GPT-4 and Gemini scoring 3 and 2, respectively.For other areas, both GPT-4 and Gemini provided correct answers, with details available in Table 3.
TABLE 3: Scores in categories with both text-only and text-with-image questions
We use the total number of questions as the denominator and the number of correct answers as the numerator.Due to the total number of questions being less than 10 and categorized as miscellaneous, we have excluded six categories: pulmonary vasculitis, pleura disease, trachea disease, musculoskeletal disease, diaphragmatic disease, and miscellaneous.Additionally, we have sorted the categories based on the correct answer ratio.With 0.6 as the threshold, the ratios for each category separately for GPT-4, Gemini, and GPT-3.5 are shown in Figure 3. Different preferences for answering questions were observed in these three models.
FIGURE 3: Answer rates in different categories
The dashed line represents the 60% passing threshold.For categories with more than 60 questions, the answer rates from highest to lowest in GPT-4 are as follows: lung neoplasm, critical care medicine, mechanical ventilation, infection, esophageal disease, asthma, tuberculosis, and COPD and bronchiectasis.In Gemini, the order is esophageal disease, infection, lung neoplasm, COPD and bronchiectasis, critical care medicine, tuberculosis, mechanical ventilation, and asthma.For GPT-3.5, it is asthma, tuberculosis, lung neoplasm, esophageal disease, critical care medicine,
Discussion
AI capable of understanding human language has been a focus of research for many decades.Due to the complexity of human language, significant progress in this field remained elusive until the development of the ChatGPT, built on the GPT-3.5 architecture [15].It has been trained with extensive and massive text data from the internet, making it able to comprehend and respond to human language with remarkable accuracy and efficiency [16].
GPT-4 vision ability represents a significant evolution in language models.Traditionally, such models were constrained to text-based inputs, limiting their application scope.GPT-4 incorporates image processing, thereby enhancing the model's utility and applicability across diverse scenarios that require multimodal understanding [17].Gemini, an advanced AI model proposed by Google DeepMind, was introduced on December 6, 2023.It is designed for multimodality, with text, images, videos, audio, and code processing.It also stands out as the first model to surpass human experts in massive multitask language understanding, a key benchmark for AI knowledge and problem-solving [10].
In this study, we categorized the exam questions by topic and observed that the accuracy rates of the three models varied across different subjects.However, for common thoracic conditions such as neoplasms, infections, critical care medicine, and asthma, the accuracy rates were above average, likely due to the abundance of data available for these conditions.Notably, despite the growing evidence linking sleep to various internal medicine conditions, the accuracy rates for this topic were consistently low across all three models.We speculate that this may be due to the general public's lack of awareness of the importance of sleep medicine, leading to insufficient training data provided by the companies.When all available LLMs have insufficient knowledge on a particular topic, there is a risk of misleading the public.This phenomenon warrants further investigation.
The examination questions are distributed over 11 years, from 2013 to 2023; GPT-3.5, GPT-4, and Gemini show no differences in answer trends over varying years.For categories with more than 60 questions, the answer rates also vary in these three LLMs.This may be due to the differences in the training datasets.A professional medical team preparing relevant data for training could be a direction for future medical LLMs.
Interestingly, the next generation of LLMs, in addition to being multimodal with multimedia input and recognition, also incorporates retrieval-augmented generation (RAG).RAG is an NLP framework that combines search retrieval and generative capabilities [18].Through this architecture, models can search for relevant information fetched from external databases and use this information to generate responses or complete specific NLP tasks, therefore enhancing the accuracy and reliability of generative AI models [19].
To understand whether the above answers were generated by the original model training due to its knowledge base being up to date only until January 2022, we inputted "102-year chest and critical care medicine specialist physician examination questions" in Chinese words into the ChatGPT web user interface.GPT-3.5 responded that it could not answer questions about the exam of year 102 (the year 102 in the Taiwan calendar, which corresponds to the year 2013 in the Gregorian calendar).GPT-4 was able to search for information through the integrated Microsoft Bing search engine.It successfully found the exam questions and answers files (in PDF file type) on the TSPCCM website.It responded to the correct file links but subsequently failed to find them in later searches again, indicating possible inconsistencies in search results.Gemini exhibited what is known as the "language model illusion problem," responding with content unrelated to the query [20].
Although GPT-4 and Gemini both possess basic RAG capabilities, they have not yet demonstrated the ability to search medical websites for relevant exam questions and directly analyze corresponding answers.However, we believe that similar capabilities are highly likely to appear in the next generation of language models, making it challenging to assess whether a language model has comprehensive knowledge capabilities.Medical models require highly accurate data and responses from professionals.Due to integration with search engines, the RAG capabilities might introduce problematic information from the internet, leading to issues with answer accuracy.
Over the last year, language models have evolved remarkably due to advancements made by various companies, now featuring capabilities for both text-and image-based analysis.Language models specialized in medical research outperform general models in the medical domain.For instance, Google's Med-PaLM has already demonstrated this superiority [21].However, the latest version, Med-PaLM 2, shows even more significant progress in the US Medical Licensing Exam, scoring 86.2 as opposed to Med-PaLM's 67.2, nearly reaching the expert level [22].According to our study, there is a possibility that current LLMs may have made notable progress in highly specialized areas and non-English domains.Not only GPT-4, but Gemini, upon its release, had already surpassed GPT-3.5.This could be due to Gemini being the successor model to Med-PaLM.
TABLE 2 : Number of correct answers by category
COPD: chronic obstructive pulmonary disease | 3,033.8 | 0001-01-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Beyond Western and Indigenous Perspectives on Sustainability: Politicizing Sustainability With the Zapatista Rebellious Education
This article discusses the contributions of different worldviews to the debates on what a transformative sustainability education could be. It focuses on mainstream and alternative strands of thought present in the West, as well as Indigenous worldviews, taking the Zapatistas as an example. The Zapatista social movement of Mexico fights for the autonomy of Mayan communities and the preservation of their culture, social organization and relationship to nature, all of which centers on traditional Mayan cosmovision. To secure the survival of the culture and of the movement itself, these ecological values are included in the education system. The Zapatistas remind us that it is possible to rethink the world and the ideologies underlying the discourses on sustainability and suggest that taking a stand and organize politically should be at the center of an education for sustainability that aims at being transformative.
Introduction
The current ecological crisis calls for a reorientation towards more sustainable ways of living together and with the world. It calls for a deep transformation of the ways; we as humans understand and envision our place within the world and the ecosystem, a transformation that implies changes at epistemological, ontological, and relational levels (Williams, 2018).
Internationally, education for sustainable development-often erroneously used as synonymous to education for sustainability-is promoted as the kind of transformative education necessary to cope with the ecological crisis (UNESCO, 2017). There are, however, fundamental differences between the concepts of sustainability and sustainable development, with implications in the realm of education. The two concepts remain ambiguous (Arias-Maldonado, 2013;Bonnett, 2003b). Although some argue that sustainability is a product of the anthropocentric modernist ideology where nature is necessarily exploited and oppressed (Taylor, 2017), others view sustainability as concerned with the long-term relationship between humans and nature (Arias-Maldonado, 2013) or as a frame of mind (Bonnett, 2003b). The UNESCO associates' sustainability with the long-term, abstract, goals of living sustainably, while it defines sustainable development as the concrete means to achieve such goal (UNESCO, 2021).
In this article, the differences between sustainable development and sustainability and their associated values and strategies will be discussed using lines of thoughts present in the West and in some Indigenous communities. These present tensions and dilemmas, but also possibilities: that of fostering greater understanding by moving beyond taken for granted assumptions and practices, and stimulating imaginative and constructive ways of going forward in the debates and practices surrounding education for sustainability (Four Arrows, 2016).
Sustainability, associated to the ideal of a good life, is inevitably linked to moral and value-based judgments. Worldview is intimately related to the relationship with nature and attitudes towards sustainability that people hold, and there seems to be an increasing interest for learning from Indigenous worldviews (Bonnett, 2003a). In fact, "achieving true sustainability will require both a fundamental redefinition and restructuring of development, and a greater sensitivity and respect for local perceptions and knowledge of nature." (Walker, 1998, p. 141). This is taken further by transformative sustainability education, which advocates for a radical transformation of humans' ways of being in the world. It envisions an education that would participate in a shift of consciousness through learning principles and experiences that would foster an embodied sense of entanglement with nature (Williams, 2018). This implies challenging the underlying ideologies that play out at global and local levels and have led us to the ecological crisis we are currently experiencing (Lange, 2018;Williams, 2018). The social movement of the Zapatistas in Mexico are an inspiration in that respect. Their defense of Indigenous Mayan worldview is paired with an explicit anticapitalist position, shedding light on the epistemological, ontological, ethical, and political implications of sustainability.
After presenting the roots and underlying discourses of the concepts of sustainability and sustainable development, the Zapatistas of Southeast Mexico will be introduced together with the importance they give to beliefs and traditional knowledge for environmental and cultural preservation, social organization, and political empowerment. Although this is a theoretical article, the author draws her reflections on both literature and her direct experiences of visiting Zapatista communities. Finally, inspirations from both Western and Indigenous worldviews and their possible contributions to a transformative education for sustainability will be discussed.
Sustainability and the Western mind
Nature and the Anthropocene. In the West, Nature has always had ambivalent meanings. Sometimes associated with the original, authentic way of life, an ideal to get closer too, it is still often associated with the negative connotation of the uncivilized and underdeveloped (Straume, 2016).
Foundational for Western civilization and modernity itself, is the grand narrative of the rational enlightened Man, featuring a Cartesian divide between nature and culture, and justifying Man's superior right to examine, conquer, and exploit the natural world (Straume, 2016;Taylor, 2017). This culminated with the industrial revolution and the heavy exploitation of natural resources made possible by the development of heavy machinery and capitalist modes of production. Today the impacts on the planet caused by human beings are so deep and long lasting, that a new geological era has been coined: the Anthropocene (Crutzen, 2002).
As much as it is the product of a Cartesian dualism and the symptom of a crisis, with the Anthropocene, natural and social phenomena are no longer separate; they are discussed together (Straume, 2016). This can be viewed as a potential redirection of humans' efforts to understand the world.
Sustainability and Sustainable development
The impact of human activity on nature has been the object of worries and criticism for centuries (Du Pisani, 2006). Already in the 17 th century, the holist philosophy of Baruch Spinoza, who has inspired modern philosophers such as the Norwegian Arne Naess, defended the importance of acknowledging humans' interrelatedness with nature and environment as a whole (Hansson, 2012). The term sustainability itself was first used in the early 18 th century by German von Carlowitz regarding forest conservation (Waas et al., 2011).
The long-term ideal of sustainability is commonly conflated to that of development, making sustainable development a rather new, but already mainstream concept, leading international policies and agendas. Over the course of history, progress, with which the idea of development is associated, went from connoting moral and ethical betterment of humankind to the idea of modernity and economic growth. Linking material and economic growth to ideas of the good society helped legitimize the domination and exploitation of nature by humans (Du Pisani, 2006).
The concerns for environmental conservation that became more salient towards the 1970s emerged mainly from the realization that an endangered environment was a threat for human beings' future on the planet and called for a model of development that would mitigate economic growth with conservation of natural resources. This led to the first international conference dedicated to the ecological crisis, held in Stockholm in 1972 (Waas et al., 2011). In 1987, the Brundtland Report underlined the multiple dimensions inherent to sustainable development, with social and environmental dimensions living alongside an economic one, with growth as the leading economic model (WCED, 1987).
As mentioned earlier, distinguishing between sustainable development and sustainability is crucial. Both are linked to value judgments of the desirable relationship between humans and their environment and of the good society (Arias-Maldonado, 2013;Bonnett, 2003b), and are in the West framed by the modernist anthropocentric paradigm. How these terms are defined reflect the central values one wishes to be the foundation for research, policies, and practices. The UNESCO defines sustainable development as composed of four dimensions: a political dimension (referring to the promotion of democracies), a natural dimension (natural conservation), a social dimension (peace, social justice, and human rights), and an economic dimension (the creation of jobs and economic growth) (UNESCO, 2010). According to this framework, the four dimensions are interrelated and equally important. However, whether they have or should have equal weight in the quest for sustainability is questionable.
Criticisms of Sustainable development
The adoption of sustainability-framed as sustainable development-as a global strategy has thus far failed to successfully mitigate climate changes and other environmental disasters (Benson & Craig, 2014;Waas et al., 2011).
A major critique raised towards the mainstream model of sustainability is how it projects a capitalist economic model for development as the only appropriate one. In this weak understanding of sustainability, the laws of a market economy and the neoliberal ideology it builds upon are taken for granted, legitimized, and even institutionalized, implicitly leading discourses, policies, and practices, as if operating outside of this framework was simply impossible (Berryman & Sauvé, 2016). Instead, critics argue that real sustainability-one that puts the environment first-requires challenging the underlying neoliberal values that permeate societies globally and promote an instrumental view of our relationship with nature (Kopnina & Cherniak, 2016). In fact, weak sustainability, attached to the idea of development, takes for granted human beings' right as species to exploit nature to its advantage, thus reproducing the same patterns of domination that engendered the Anthropocene and the ecological crisis (Bonnett, 2003b).
In practice, the four dimensions of sustainable development mentioned earlier are subject to trade-offs where the economic dimension is given priority at the expense of the natural one (Waas et al., 2011). Such separation of social and natural realms is problematic, as it does not propose new and transformative ways of understanding sustainability as interlacing nature with culture. Rather than merely interrelated, these dimensions should be viewed as integrated, placing humans within environment, as will be discussed later (Waas et al., 2011). This is what a "strong sustainability paradigm" entails, nesting humans within an ecological equilibrium and seeking to transform the ways humans and more-than-humans relate to each other (O'Neil, 2018, p. 370). This goes hand in hand with a profound reconsideration of how humans organize socially and envision their place in the world, as well as of the crucial role of education in this endeavor (O'Neil, 2018).
Sustainability is thus a pluralistic concept that builds on different values and entails a variety of understandings and paths to it. As such, some have argued that sustainability should be open to discussion and deliberation. Such democratic spaces would allow to engage politically with the question of our relationship with nature and the good society (Arias-Maldonado, 2013).
Sustainability in the Indigenous worldview
A challenge to a dualistic worldview. Although generalizing may be dangerous, Indigenous beliefs about the relationship between humans and nature would mostly qualify as ecocentrism, which acknowledges the intrinsic value of nature, independently of the use humans may make of it. Instead of separating humans from nature, Indigenous worldviews interlace the two in a reciprocal relationship (Frisancho & Delgado, 2016). Dichotomies such as nature and culture, or mind and body, are irrelevant and even absurd in most Indigenous cosmovisions, as the level of identification with nature is high (Berkes, 1999). The relationship with nature has a moral character, at the striking opposite of anthropocentrism and instrumentalism that view nature as a commodity. Common to many Indigenous peoples are the qualities given to the land and the ecosystem, which foster an environmental ethic and lead social behaviors. A strong sense of place and belonging to the land goes hand in hand with spiritual beliefs that give meaning to interactions, socially and with nature (Berkes, 1999;Williams, 2018).
Indigenous epistemologies and ways of thinking challenge the modern Western views on science and truth stemming from positivistic epistemological traditions. These Western views tend to dismiss the spiritual component of Indigenous knowledges as a threat to rationality, proof of their illegitimacy. As if Western epistemologies were not themselves embedded in a belief system (Berkes, 1999). Besides, as will be further discussed with the case of the Zapatistas of Chiapas, Mexico, the preservation of traditional knowledges and practices is a matter of survival and empowerment for Indigenous peoples, and is thus political (Berkes, 1999).
The Zapatista Uprising
The Zapatista movement made itself known on January first, 1994, the day that the international free trade NAFTA agreement between Mexico, the United States and Canada was enforced. Opening for the privatization of lands which were until then protected by the traditional Ejidal system-allowing community-owned lands-, this agreement was viewed by the already marginalized Indigenous people as an additional threat to their rights, traditional social organization, and very existence (Harvey, 2001;Martinez-Torres, 2001).
With their armed uprising, the Zapatista and their army (the EZLN) demanded for their dignity and rights as Mexicans, Indigenous and peasants to be acknowledged (Barmeyer, 2008;Harvey, 2001). The demands voiced by the EZLN in the aftermath of the uprising were work, land, shelter, food, health, education, independence, freedom, democracy, justice, and peace (EZLN, 2016). Several failed attempts of dialogues led to the San Andres Accords in February 1996. However, the Accords were never legally implemented by the government, and the Zapatistas declared their autonomy unilaterally. They started redistributing land among peasants, creating autonomous municipalities organized in a complex democratic network (Baronnet et al., 2011;Van Der Haar, 2005).
Resisting decades of state education policies, accused of homogenizing, marginalizing and erasing Indigenous peoples, language and cultures, the Zapatistas started building their own autonomous education system, one born "from below," that would preserve Indigenous worldviews and practices, and honor communities' real needs, while offering tangible opportunities and future prospects to its youth (Gómez Lara, 2011;López, 2017).
The Dimensions of Sustainability Reflected in Zapatista Life
Environmental preservation. The struggle for autonomy and self-sufficiency has strengthened ecological concerns linked to the use of land for agriculture, but also as "motherland," placing the environment at the center of Zapatista communities' politics. Prohibiting pesticides, GMO crops, limiting hunting, and making use of traditional agroecological knowledge, are some of the measures taken by these communities. Traditional beliefs about the earth and humans' relationship to nature, such as the myth of creation where humans were made from corn, also deepen protective and respectful ecological behaviors (Gómez Bonillo, 2011).
To face the deterioration of the environment, the exchange of experiences, knowledge and strategies between communities is vital. Older generations play an important role in passing on their knowledge of preservation as they are living witnesses of environmental changes (Gómez Bonillo, 2011). Besides, the Zapatistas link environmental degradation to social, economic, and political strategies of the State and paramilitaries viewed as aiming to weaken the communities' quality of life and destabilize the EZLN. These include the systematic exploitation and social inequality indigenous people have endured, and the burning down of forests on Zapatista territory. Besides, historically, Mexican developmental projects, including ecotourism, have not taken into account the voices and needs of local populations, and are viewed as engendering destruction in the natural environment, fragmentation of the territory and threaten the cohesion of communities. (Gómez Bonillo, 2011).
By redefining their interaction with their habitat, their territory, the Zapatistas are resisting the politics of the government and its model of sustainable development. In that sense, cultivating the land and cultivating agricultural, ecological, and political knowledge go hand in hand (Baronnet et al., 2011).
An anticapitalistic movement. The Zapatistas are an anticapitalistic movement and contest a neoliberal understanding of sustainability (Stahler-Sholk, 2011).
Resistance to the capitalistic economic model is materialized through a social restructuring where the economic capital is replaced by social capital: In this collectivist framework, the social subjects are the foundation of the community's reproduction and of its environment (Baronnet et al., 2011). Labor is generally unpaid, to avoid monetizing the relationship between service providers and the community (Stahler-Sholk, 2011). Starting at the microlevel, the Zapatistas intend to change the structures of the neoliberal model that threaten the community and the environment. The Zapatistas are thus showing the capacity of peasants to organize and provide alternatives to neoliberal organization. Anticapitalism is further embodied through a relationship with the environment where nature is not merchandised, which challenges the neoliberal discourse that measures sustainability in terms of profit and thus frames subsistence economy in rural Mexico as unsustainable (Gómez Bonillo, 2011;Stahler-Sholk, 2011). Besides, the autonomous system is mostly self-financed, thanks to "collectives" of production and cooperatives (Van Der Haar, 2005). This ideal of selfsufficiency implies material sacrifices and challenges, but also fosters a sense of collective identity and pride (Stahler-Sholk, 2011).
Communality as social organization. Dating back to traditional Indigenous modes of living together and central to the Zapatista organization is the notion of community (Stahler-Sholk, 2011). The community has always been the Mayan central space of encounter and interaction between families, with their assemblies and different functions, territory, services etc. (Paoli, 2003). In a Zapatista context, solidary and participatory practices are fostered by the collective sharing, experiences and culture of the land (Stahler-Sholk, 2011). The redistribution of land following the 1994 uprising-from rich landlords to poor peasants-was a way to reorganize socially, building communities whose land would depend on unity and cohesion for their protection (Stahler-Sholk, 2011). Zapatismo explicitly acknowledges diversity as the foundation for an ethics of solidarity (Evans, 2009). ''The first task for any new politics'' is to ''recognize that there are 'differences between us all' and that in light of this, we aspire to a politics of tolerance and inclusion.'' (Subcomandante Marcos in Evans, 2009, p. 92). This acknowledgment of difference and diversity as a strength is represented in their now famous phrase: "A world in which many worlds fit" (Olesen, 2004, p. 262).
Another feature of the Zapatista social organization is its decentralization. Embedded in local realities, municipal councils and local assemblies differ greatly depending on local characteristics and needs (Van Der Haar, 2005). This decentralization applies to the education system, with curricula being developed locally through participatory processes where all members of the community are involved (Baronnet, 2010;Núñez Patiño, 2011). This will be further discussed later.
A democratic organization. Unlike other Latin American social movements of the 20 th century which have heavily relied on the armed struggle, the Zapatistas have favored democratic processes and the efforts of civil society to bring about social change (Máiz, 2010;Olesen, 2004). The Zapatistas' understanding of democracy is a hybrid between political Marxism of the first revolutionaries and Indigenous worldview and decision-making processes (Martinez-Torres, 2001;Olesen, 2004). The current democratic system, associated with the neoliberal ideology, in Mexico and the world at large, is deemed elitist, corrupt, and oppressive to the Indigenous and the poor; hence the urge for a new way of doing politics (Máiz, 2010).
The interrelatedness of the economic and the political is explicit in Zapatista anticapitalistic discourse that views capitalism as fundamentally antidemocratic. Their democratization project takes form through revisiting institutions, including the economic ones, from within and from below. Unlike other revolutionary movements, the Zapatistas do not aim at taking state power (Marchart, 2004). Instead, they are about a reorganization of power giving the population an actual say in the decisions concerning their lives within an alternative system. This materialized with the creation of the Caracoles-administration centers-and decentralization, allowing the decisions on the use of the land and resources to be in the hand of local actors (Gómez Bonillo, 2011;Stahler-Sholk, 2011).
This builds on a common culture of political participation shared by the majority of rural Mayan families in Chiapas, through socialization arenas such as the assembly and other meeting spaces (Baronnet, 2010;Evans, 2009). The Zapatistas claim that "Democracy is that, independently of who is in office, the majority of the people have decision-making power over the matters that affect them (…) Democracy is something that is built from below and by everyone, including those who think differently from us. Democracy is the exercise of power by the people all the time and in all places." (EZLN, 2000).
Beyond the four dimensions: Politicizing sustainability
Thinking in terms of separate dimensions does not do justice to the integrated complexity of sustainability-in its strong sense-nor does it allow grasping the worldview underpinning the Zapatista organization and discourse.
At the striking opposite to a mechanistic and separationist view of the world and humans, this worldview-or cosmovision-is based on a holistic vision of the universe, on unity and interconnection. These principles, essential to ecology, suggest that true sustainability is not about connecting separate dimensions but integrating them into a whole, a mode of living together (Duenkel & Pratt, 2013;Retamal Montecinos, 1998).
Embedded in Mayan worldview, a key value and ideal of life of the Mayan people, and of the Zapatistas, is that of autonomy. Autonomy permeates all levels and arenas of life-the individual, the family, the community, and the wider Indian people (Paoli, 2003). This ideal of life can be summed up by the phrase Lekil Kuxlejal meaning "the good (or "worthy") life." According to Mayan cosmovision this ideal of life is not utopic, but was a reality in remote times, and can be restored, through autonomy, an ethics of intersubjective relationships between all beings-humans and more-thanhumans-sustaining the community. Building autonomy is thus both a right and a responsibility, and is linked to the survival of culture itself (Paoli, 2003).
Autonomy is strategic to conserving local decision-making processes over the environment and its resources, but also to preserve local identities and cultural practices rooted in cosmovision (Gómez Bonillo, 2011). As noted earlier, preserving traditional ways of interacting with environment, is viewed as a mode of resistance to the politics of development led by the government (Baronnet et al., 2011;Gómez Bonillo, 2011). Political action and cosmovision are thus interlaced.
Further, these beliefs inform and regulate aspects of the group's social organization and cohesion. In fact, the Zapatistas, reclaim traditional symbols in order to strengthen peasant and Indigenous identity. They also incorporate new elements to them, thus adapting them to the present realities of the communities and the struggle (Gómez Bonillo, 2011). The Zapatistas view culture as dynamic and adopt a critical position towards traditions. They acknowledge that a lot of what is now known as traditions is in fact a product of the influence of the Spanish conquest. An example of this is the Zapatistas' gender politics, which breaks with traditional gender roles and give women a prominent position in the organization (Stahler-Sholk, 2001). Traditional customs partly regulate the social organization and institutions, but the context and systematization of the fight for the land has contributed to alter these customs and beliefs. Evolving identities and evolving beliefs affect each other reciprocally (Gómez Bonillo, 2011). The Zapatista worldview thus actualizes ancient cosmovision through critical political thought, placing autonomy, self-determination, and reflexivity at the center: "Not all traditions are good. The important thing is, we want to choose what we want to accept from outside and how we want to live." (Zapatista community leader in Stahler-Sholk, 2001, p. 515).
On January 1st, 2021 the EZLN released a communiqué, a "declaration for life," translated in 18 languages and that has gathered hundreds of signatures from collectives and individuals around the globe. This declaration states "That we make the pains of the earth our own, violence against women; persecution and contempt of those who are different in their affective, emotional, and sexual identity; annihilation of childhood; genocide against native peoples; racism; militarism; exploitation; dispossession; the destruction of nature" (EZLN, 2021). This declaration makes explicit the interdependence of all dimensions of life, social, and natural, and the impossibility to address the ecological crisis without simultaneously resisting systems of oppression at a wider and global scale. In other words, this call for a struggle for Life as a whole and in all its complexity, reminds us that sustainability ought to be politicized.
From worldviews to an education for sustainability
The Zapatista Rebellious Education. The Zapatista education aims at strengthening the political engagement of its members, on which the survival of the movement depends. It is precisely in this sense that it is transformative, as it both offers prospects of a better future for indigenous communities, and intends to create an alternative to the hegemonic system (Baronnet, 2010).
To do so, the Zapatista education system is embedded in local participatory practices and community life, promoting a highly integrated social structure and making the education process a collective responsibility involving both students, families, assemblies and authorities.
For the Zapatistas, education encompasses all areas of life, in and outside school (Gómez Lara, 2011). The milpa, the field, is one of the main pedagogical spaces where children from an early age are brought to "work" with their parents, as the cultivation of corn is fundamental to community life (Pinheiro Barbosa, 2015). They also participate in the daily domestic tasks, such as making corn tortillas. The family is thus a fundamental didactic space where children develop skills crucial to community life, as well as for understanding the integration of the community's social fabric (Gómez Lara, 2011;Paoli, 2003). Besides, rather than being a separate space, the school is integrated to and interwoven with the community (Baronnet, 2010;Núñez Patiño, 2011): The practices within schools adapt to the rhythms of the community, its resources, and the knowledge of the elders. They develop in continuity with the informal learning arenas of the family and community (Paoli, 2003). This fosters a sense of belonging in children, to the land, the community and the movement itself.
Opening the pedagogical space to arenas other than the school situates experience, everyday life, at the center of the learning process, the construction of concepts and epistemology, and of an identity as community member (Pinheiro Barbosa, 2015). It also enables lifelong learning processes, through which all members, including adults, engage in a continuous process of reflexivity. In fact, from early childhood, members of the community are encouraged to develop their individual autonomy and selfawareness. These qualities are crucial to the autonomy of the community itself, as they enable its members to participate in decision-making processes (Pinheiro Barbosa, 2015). Assemblies are the meeting point where all members of the community, across generations, define education experiences, and that allow the knowledge of the elders and the experience of families to find their way into classrooms and curricula. The direct participation and influence of the community members on the education system allows developing relevant education practices adapted to local needs, contributes to strengthen a sense of dignity as people, and strengthens the resistance movement. (Gómez Lara, 2011).
Indigenous-Maya-worldview plays an important part in that matter, and is included in the education model of the Zapatistas (Pinheiro Barbosa, 2015). For instance, the person's development and learning process is understood as Ch'ulel-a Mayan concept relating to the formation of conscience and soul, and which is present in all entities on earth (Gómez Lara, 2011). The concept reflects an understanding of social life, encompassing all relationships, in between humans and with nature, transcending material, and immaterial planes (Pinheiro Barbosa, 2015). As it will be further discussed later, the fundamental values an alternative and transformative education for sustainability would build on thus promote communality as a social organization and as a form of living between humans and more-than-humans.
Beyond the dichotomies of a modernist worldview
If the objective of The Brundtland report was to create a bridge between the needs and interests of the countries of the North and the South (Waas et al., 2011), it was on the premise of one particular developmental paradigm and did not succeed in including other worldviews in its conceptualization of sustainability. Indigenous worldviews and relationship to nature are still exoticized, which undermines the potential transformative learning process that a true dialogue between worldviews could foster. Such dialogue would build on an awareness of power relations and inequalities, and seek to share and learn from each other in a reciprocal and horizontal manner. (Frisancho & Delgado, 2016).
The prejudice which depicts Indigenous people as irrational and uncivilized because of the importance they give to the affective in their knowledge formation is still strong (Frisancho & Delgado, 2016). The Mayan concept of Ch'ulel mentioned above contests and transcends the classic dichotomy between the intellect and the heart, the rational and the natural. This stands in stark contrast with the Cartesian dualism foundational to the Western modernist worldview and education system. Similar understandings of the need to look at the mind and nature in unison rather than as opposition can be found in the West too, as in the work of Spinoza, who viewed the world as an integrated whole (Hansson, 2012). These holistic perspectives have become central in the discussions on sustainability. They acknowledge the need to overcome a distantiated relationship to nature and promote experiential embodied knowing developed from direct contact with nature. This kind of knowledge is not value-free, but embedded in an ethics of care, and is thus argued to be fundamental for a radical and meaningful change in the humanmore-than-human relationship (Bonnett, 2003a). Challenging the standard of valuefree, rational knowledge, it brings in experience, the senses and affect, where nature becomes an integrated part of humans' life, and vice versa. This understanding views the relationship as reciprocal, with an acknowledgment of nature as subject and agent (Bonnett, 2003a;Taylor, 2017).
In its weak sense, education for sustainability reflects a modernist worldview. It thus does not break with the very fundaments of the challenges we face (Taylor, 2017). In the West, various initiatives propose alternatives to mainstream education, based on pedagogical principles that acknowledge interactions between children and the natural environment as fundamental. Deep ecology, common world pedagogies and transformative sustainability education, for example, propose that humans engage in a form of collective thinking with more-than-humans, acknowledging that they too exert agency upon us and upon the world (Naess, 1976;O'Neil, 2018;Taylor, 2017). However, the question of the practical feasibility of such alternatives remains. Can we really separate ourselves from the modernist framework with which we are so deeply entangled? If not from our human subjectivities, then how can we act? (Stables & Scott, 2001).
Latin America is the scene of multiple cases of Indigenous resistance to the Western project of modernity. These constitute an alternative modernity, that combine the rejection of the capitalist worldview, and a cosmovision based on the interrelatedness of humans and more-than-human world, as described earlier. The classic dichotomy between modern and Indigenous can thus be challenged. Indeed, the Zapatistas do not fit within the stereotypical image of the naïve Indigenous, victimized, and powerless in the face of modernity. The way they make use of the internet to make themselves heard and network with civil society at a global scale, is an example of their engagement with modernity (Martinez-Torres, 2001). They build on their Indigenous identity to redefine what modernity means to them, a modernity that does not conform with that of the Western capitalist world (Rojas, 2018)-aiming at replacing "the monoculture of modernity" (Esteva et al., 2014, p. 5) with "a world in which many worlds fit" (Olesen, 2004, p. 262).
Resisting the modernist worldview also implies positioning oneself critically towards the modern, hegemonic school system, designed to fit the needs of Nation States in their capitalist and industrial development (Retamal Montecinos, 1998). Multiple educational programs are being launched worldwide with an emphasis on teaching children about sustainability, developing skills, values and attitudes that would help foster a sustainable future and fulfill the global development goals (Siraj-Blatchford & Pramling-Samuelsson, 2016). However, these alone are not enough to really foster a sustainable future. They need to be accompanied by a reconsideration of the structural conditions these pedagogical activities are embedded in. To break with the mechanistic vision of the world, sustainability education will not only have to reconnect humans with nature; it will also have to build on a radically different social structure (Retamal Montecinos, 1998). An adult education that supports teachers in reflexively engaging with the socio-cultural context influencing their practice and role is crucial in that respect, so that they can effectively become the agents of change that a transformative sustainability education calls for (Freire, 2005).
The way forward: the educator as political actor
It is now clear that sustainability is a complex concept and can be used ambiguously. It is also embedded in a worldview that serves a global agenda. Education for sustainability is thus in danger of losing its meaning to serve these global agendas, including the interests of neoliberal international organizations, which have become the framework within which educators must negotiate their work. The dependency of educators on the ground to these agendas weakens and limit the scope of their work with environmental education (Jickling & Wals, 2008;Kopnina, 2012).
The bias posed by operating within a specific worldview that is not reflected upon is a form of indoctrination where what to think becomes more important than how to think. Becoming aware of this bias may make encounters between multiple and disparate voices, essential to a reappropriation of the meaning of education for sustainability, possible (Jickling & Wals, 2008;Kopnina & Cherniak, 2016). However, if the plurality of ethical perspectives is often encouraged, it can become paralyzing if it comes to serve relativistic positions that prioritize pluralism at the expense of proenvironmental attitudes (Kopnina & Cherniak, 2016). Rather, a critical environmental education that advocates for the environment would be oriented "towards both human and more-than human interests, simultaneously, and not with one subordinated to the other" (Kopnina & Cherniak, 2016, p. 836). This could bridge the gap between anthropocentric and eco-centric positions, and, by redefining humans' ways of being in the world, education for sustainability could then become truly transformative.
Furthermore, Indigenous worldviews and modes of living should be explicitly acknowledged and made space for in educational practices. This would contribute to decolonize the discourse on sustainability, and to awaken practitioners and learners in the West to the possibility of thinking and acting "outside the box" of the totalizing epistemologies and ontologies dictated by the hegemonic capitalist model (Williams, 2018). As an Indigenous and peasant movement, the Zapatista struggle for territory, land, and community, is testimony that the notion of place, belonging and community are crucial to a strong sustainability. An education for sustainability should then seek to ignite in both teachers and learners a deep sense of interpersonal connection with place, fostering an engagement, both affective and cognitive, with the material and cultural contexts and realities as well as the communities, constituting the place they are situated in (Lange, 2018). This implies a decolonizing process whereby sustainability education seeks to expand horizons, make space for multiplicity and question one's own assumptions, by listening to and learn from indigenous groups without disconnecting their practices from the cosmovision and cultural context they are embedded in. (Harmin et al., 2017).
Although the Zapatistas are an inspiration for many activist groups around the globe and are engaged in a vast network of solidarity, they do not seek to "Zapatize" the world. Rather, they call for people to engage with their own socio-cultural realities in order to develop modes of resistance that are locally relevant (Olesen, 2004).
In the face of a neoliberalism, that favors technical questions at the expense of critical ones, and "seeks to de-politicise life" (Moss, 2007, p. 8), a transformative education requires the critical engagement of educators-citizens-in matters concerning sustainability (Waas et al., 2011). This means that for education to enact its transformative functions, space must be found-or taken-for educators to reclaim their role as citizens, capable of autonomous thinking, and of acting as democratic agents of change. If the interest for representative democracy is decreasing in the West, engagement in alternative forms of democratic politics such as civil society initiatives and activist organizations is growing. This growing interest could inspire to think of other spaces, such as the school, as political spaces. This implies looking at education institutions as places for the collective reflection and action of citizens, where community, and bridges between what happens within the schools' walls and on the other side of them, are built (Moss, 2007).
Conclusion
This article has aimed at describing, contrasting, and discussing the contributions of different worldviews to the debate on sustainability and more specifically to education for sustainability. On the one side, mainstream Western worldview underpins the discourse on sustainable development exported globally. It frames the relationship between humans and nature as a utilitarian one and has led to the present environmental crisis. On the other side, Indigenous worldviews are presented in their main traits as eco-centric, based on principles of interconnectedness and reciprocity between humans and more-than-humans. They represent a challenge to the Western capitalist ideology, which has become so mainstream that it is taken for granted as the only possible way to understand and relate to the natural world. As Berkes (1999) argues, "Perhaps the most fundamental lesson of traditional ecological knowledge is that worldviews and beliefs do matter" (Berkes, 1999, p. 163). Critical reflection on the discourses and values that underpin practices are thus fundamental to an education for sustainability that seeks to be both transformative and decolonizing. These can become meeting points where learning not only about, but first and foremost with other worldviews can spark action and change. For that purpose, the development of attitudes and values within the educational institution should be paired with an engagement of the adults in charge with the implicit and explicit discourses that their practices are a part of, as well as concrete attempts to organize in political grass-root initiatives that can strengthen the democratic capabilities of local communities. This means that an education for sustainability that is truly transformative should make space for the contestation of hegemonic worldviews when they do not make the fulfillment of values and practices that are consistent with sustainability possible. The Zapatistas of Mexico remind us that educating is a political act that require of educators to engage politically with societal issues, take a stand and challenge what is taken for granted, as citizens who take ecology, democracy and action-Life-to heart.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 8,736 | 2022-03-12T00:00:00.000 | [
"Environmental Science",
"Political Science",
"Education",
"Philosophy",
"Sociology"
] |
Reply on RC2
1) This manuscript is a revised work of a paper we submitted in May 2016 (due to extended delay in getting the required data). While it is true that “VLF/LF waves have been extensively studied for several decades”, we consulted and duly cited the works at our disposal at the time (e.g., see lines 56-73). This work is also built on our previous effort (e.g., Nwankwo et al. 2016) in which we cited many other supporting works. However, your observation is noted, and we will include relevant prior work in the revised version.
Reply on RC2
Victor U. J. Nwankwo et al.
Author comment on "Diagnostic study of geomagnetic storm-induced ionospheric changes over VLF signal propagation paths in mid-latitude D-region" by Victor U. J. Nwankwo et al., Ann. Geophys. Discuss., https://doi.org/10.5194/angeo-2021-42-AC2, 2021 We (authors) thank the reviewer (Referee #2) for accepting and making time to review our manuscript. Your effort and expertise are highly appreciated. We are happy that this effort is appreciated and hope to take advantage of your suggestions to ensure a better version of work.
We have taken note of your observations and depending on the Editor's recommendations, we will address them in a revised version accordingly. However, I will provide preliminary short responses to them in the meantime.
1) This manuscript is a revised work of a paper we submitted in May 2016 (due to extended delay in getting the required data). While it is true that "VLF/LF waves have been extensively studied for several decades", we consulted and duly cited the works at our disposal at the time (e.g., see lines 56-73). This work is also built on our previous effort (e.g., Nwankwo et al. 2016) in which we cited many other supporting works. However, your observation is noted, and we will include relevant prior work in the revised version.
2) Some of the factors on which our characterized metrics are based include i) the diurnal signature and ii) the propagation characteristics of VLF narrowband measurements. We will include the significance accordingly. Some authors reported the overall depression of the diurnal signal with respect to a baseline but these metrics allowed us to study both the storm effects and the local time-variant signal responses.
3) We have already addressed this issue in a separate work in which we combined simultaneously observed VLF variations with TEC data from multiple GNSS/GPS stations (around the transmitter and receiver) to probe geomagnetic storm effects as they propagate down to the lower ionosphere from the magnetosphere. Although there is a revised version of this work, you may look up the idea here: https://www.essoar.org/doi/10.1002/essoar.10504067.1. Appropriate connection between the two will be done in the revised version. 4) There is an important observation/finding associated with the statistical analysis done here. We have statistically analysed the metrics for (i) 1-day (mean value) before, during and after the storms (figure 7) and (ii) 2-day (mean value) before, during and after the storms (figure 9). Interestingly, the percentage dip of the MBSR and MASS increased significantly in the 2-day mean signals before the events (when compared with the 1-day mean value). It will be challenging to summarise the statistics in one/two figures because of the need to show results of the two propagation paths (GQD-A118 and DHO-A118). Also, the plots need be large enough for readers to see and compare. However, we will look into ways of better summarizing the results. 5) We will work on this important suggestion and revert accordingly. However, we speculate that the responses are related to positive storm effect which affects, albeit small, the attenuation of the VLF radio waves (Fagundes et al. 2016).
6) The SRT and SST indirectly relate to ionospheric responses at sunrise and sunset. Our findings show that storms-induced disturbances do not have significant impact on such responses, and since the sunrise and sunset signatures relate to mode conversion in the VLF propagation path this might imply that the D-region density is not a significant contributor to this effect. This and more detailed explanation will be provided in the revised manuscript. 7) This is a very good scientific question! We observed a trend associated with the DHO-A118 region in our TEC analysis (not included here), which may (to a good extent) address this important question. This will be updated in the revised manuscript. We will also check with satellite electron precipitation data as suggested, and perhaps perform Ovation-Prime auroral model runs for the intervals of interest -see https://www.ngdc.noaa.gov/stp/ovation_prime/data/ 8) The name of the transmitters will be mentioned in the caption as suggested.
Thank you very much for your valuable comments. | 1,073.8 | 2021-09-10T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
The whack-a-mole governance challenge for AI-enabled synthetic biology: literature review and emerging frameworks
AI-enabled synthetic biology has tremendous potential but also significantly increases biorisks and brings about a new set of dual use concerns. The picture is complicated given the vast innovations envisioned to emerge by combining emerging technologies, as AI-enabled synthetic biology potentially scales up bioengineering into industrial biomanufacturing. However, the literature review indicates that goals such as maintaining a reasonable scope for innovation, or more ambitiously to foster a huge bioeconomy do not necessarily contrast with biosafety, but need to go hand in hand. This paper presents a literature review of the issues and describes emerging frameworks for policy and practice that transverse the options of command-and-control, stewardship, bottom-up, and laissez-faire governance. How to achieve early warning systems that enable prevention and mitigation of future AI-enabled biohazards from the lab, from deliberate misuse, or from the public realm, will constantly need to evolve, and adaptive, interactive approaches should emerge. Although biorisk is subject to an established governance regime, and scientists generally adhere to biosafety protocols, even experimental, but legitimate use by scientists could lead to unexpected developments. Recent advances in chatbots enabled by generative AI have revived fears that advanced biological insight can more easily get into the hands of malignant individuals or organizations. Given these sets of issues, society needs to rethink how AI-enabled synthetic biology should be governed. The suggested way to visualize the challenge at hand is whack-a-mole governance, although the emerging solutions are perhaps not so different either.
Introduction
Synthetic biology, the multidisciplinary field of biology attempting to understand, modify, redesign, engineer, enhance, or build biological systems with useful purposes (El Karoui, Hoyos-Flight and Fletcher, 2019;Singh et al., 2022;Plante, 2023), has the potential to advance food production, develop new therapies, regulate the environment, generate renewable energy, edit the genome, predict the structure of proteins, and invent effective synthetic biological systems, and more (Yamagata, 2023).It is arguably moving from the lab to the marketplace (Hodgson, Maxon and Alper, 2022;Lin, Bousquette and Loten, 2023).However, the immense promise of synthetic biology has been subject to much hype and it is a paradox that it is still a nascent technology that has not scaled beyond the microscale (Hanson and Lorenzo, 2023).The next major breakthrough might relate to plants (Eslami et al., 2022) or even to mammalian systems (Yan et al., 2023).Despite the small scale, the intermediate term risks are significant, and include contaminating natural resources, aggravation of species with complex gene modifications, threats to species diversity, abuse of biological weapons, laboratory leaks, and man-made mutations, hurting workers, creating antibiotic resistant superbugs, or damaging human, animal, or plant germlines (Hewett et al., 2016;O'Brien and Nelson, 2020;Nelson et al., 2021;Sun et al., 2022).Some even claim synthetic biology produces potential existential risks from lab accidents or engineered pandemics (Ord, 2020), especially in combination with AI (Boyd and Wilson, 2020).AI-enabled synthetic biology, while surely adding to the risk calculations, has tremendous medium term potential to provide a vehicle for scaling synthetic biology so it may finally deliver on its promise (Hillson et al., 2019;A Dixon, C Curach and Pretorius, 2020;Ebrahimkhani and Levin, 2021;Bongard and Levin, 2023).That being said, the prospect of AI-enabled synthetic biology significantly increases biorisks and particularly brings about a new set of dual use concerns (Grinbaum and Adomaitis, 2023).Although biorisk is subject to an established governance regime (Mampuys and Brom, 2018;Wang and Zhang, 2019), and scientists generally adhere to biosafety protocols if they receive the appropriate training and build a culture of responsibility (Perkins et al., 2019), even experimental, but legitimate use by scientists could lead to unexpected developments (O'Brien and Nelson, 2020).Additionally, recent advances in chatbots enabled by generative AI, technology capable of producing convincing real-world content, including text, code, images, music, and video, based on vast amounts of training data (Feuerriegel et al., 2023), accelerates knowledge mining in biology (Xiao et al., 2023) but has revived fears that advanced biological insight can get into the hands of malignant individuals or organizations (Grinbaum and Adomaitis, 2023).It also further blurs the boundary between our understanding of living and non-living matter (Deplazes and Huppenbauer, 2009).The picture is complicated given the vast innovations envisioned to emerge by combining emerging technologies, as synthetic biology scales up bioengineering turning it into industrial biomanufacturing.Given these sets of issues, society needs to rethink how AI-enabled synthetic biology should be governed.
The research question in this paper is: what are the most important emergent best practices on governing the risks and opportunities of AI-enabled synthetic biology?Relatedly, is stewardship or laissez-faire governance the right approach?How can humanity seek to maintain a reasonable scope for synthetic biology innovation, and integration of its potential into manufacturing, agriculture, health, and other sectors?Do we need additional early warning systems that enable prevention and mitigation of future AI-enabled biohazards from the lab, from deliberate misuse, or from the public realm?From these questions, the following hypotheses were derived: [1] there is a nascent literature on the impact of AI-enabled synthetic biology, [2] active stewardship is emerging as a best practice on governing the risks and opportunities of AI-enabled synthetic biology, [3] to achieve proper governance, most, if not all AI-development needs to immediately be considered within the Dual Use Research of Concern (DURC) regime, [4] even with the appropriate checks and balances, with AI-enabled synthetic biology, industrial biomanufacturing can conceivably scale up beyond the microscale within a decade or so.
The paper first describes the methods used for the literature review followed by a presentation of the results.A discussion of these findings ensues, addressing the research question and support for the hypotheses, followed by a brief conclusion and suggestions for further research.
Methods
The purpose of this paper is to conduct a literature review of the issues surrounding AI-enabled synthetic biology and present a set of recommendations for policy and practice.The research goal is to show that a reasonable scope for innovation can be maintained even with instituting early warning systems that enable prevention and mitigation of future AI-enabled biohazards.
Literature review (Sauer and Seuring, 2023), a comprehensive summary of existing research on the topic, was pursued because AI's influx into the synthetic biology field is a very recent development.There is a need to identify gaps in the knowledge that ensue from AI's emerging impact.The goal is to align the AI literature with the synthetic biology risk literature, and develop a theoretical framework for future research.The approach broadly followed Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines (PRISMA) for systematic reviews, using formal, repeatable, transparent procedures with separate steps for identification, screening, eligibility, and inclusion of papers (Moher et al., 2015).That being said, in the end a mix of search terms (clearly identified below), plus backward/forward citation searches, where used, which means attempts at replication might yield slightly different results (see Fig. 1).However, because of the nascent research field, in this case, the benefit of flexibility outweighs the costs.
Figure 1. PRISMA diagram for literature review and citation analysis
A literature review using the search terms "generative AI", "synthetic biology" and "governance" in Google Scholar generated only 97 results, so the search was broadened to "AI", which generated 5,880 results, also capturing important articles before the generative AI discoveries of 2022-2023.Similar searches in Scopus (artificial AND intelligence AND synthetic AND biology AND governance) generated only 9 documents.Using the terms "AI synthetic biology governance" in PubMed generated only 14 results of which only 1 paper (on synthetic yeast research and techno-political trends) was retained.However, removing the term governance gave 4,210 results.The search was limited to 2020-2024 publication dates, and further filtered to only review or systematic reviews to get 153 results which were screened down to 14 relevant articles.
Searching Social Sciences Citation Index (pub dates 2021-2023) searches yielded 257 results for "synthetic biology", which were all reviewed, and 13 abstracts were selected into the sample.Searching Business Source Complete (pub dates 2020-2023) for 'synthetic biology' yielded 183 academic journal papers, 15 of which were relevant and from which 7 were retained after deduplication (this search was finalized last).Other search terms such as 'bioeconomy' and 'governance' performed better in this database.
The final search protocol borrows from Shapira et al. who note that papers that don't explicitly use "synthetic biology" in their title, abstract or key words could still be relevant because it is an interdisciplinary field (Shapira, Kwon and Youtie, 2017).Shapira et al. track the emergence of synthetic biology over the 2000-2015 period, first retrieving benchmark records, extract keywords from there, and then searching.With that insight, having selected 150 articles, read their abstracts, indexed their keywords, and scanned the content of all papers, I then went back, did new searches based on the keyword clusters that seemed promising, and, as a result, found additional papers to include in the sample.
The analysis was also complemented with papers that did discuss the overall impact of generative AI on biology or science, or research, using the search terms: "generative AI" AND "Science" OR "research".Because generative AI is such a recent term, the search was expanded to preprints in the gray literature.The final research protocol included a much wider set of search terms, including a fuller set of keywords such as AI risk, bioethics, bioinformatics, biohacking, biorisk, biosafety, biosecurity, computational biology, DIY biology, Do-It-yourself laboratories, dual risk, dual-use research of concern (DURC), emerging technology, generative AI, industry, large language models (LLMs) , multi-omics, risk mitigation, systems biology, AI-bio capabilities, chatGPT, biomanufacturing, biosurveillance, bioweapons (always used in combination with AI and/or risk).Separate searches for 'synthetic biology' AND legislation OR 'policy' OR 'regulation' were also conducted.Similarly, when few papers were found on the management and industry aspects, specific searches on 'synthetic biology' AND/OR 'startups', 'industry', 'market', and 'economy' were pursued.
The final inclusion criteria involved any type of published scientific research or preprints (article, review, communication, editorial, opinion, etc.) as well as any high quality article (based on subjective review) published by a government agency, think tank or consulting firm.A total of 653 abstracts were considered, but only 204 sources and a subset of 169 peer reviewed papers were included in the final review (see Appendix A-papers in sample), representing 111 different journals (average impact factor: 9.94) from 4 fields.The overwhelming number of papers (114) originated from journals in Science, Engineering & Technology, 36 from interdisciplinary journals and only 9 from Social science and Humanities journals and 9 from Management journals (see Appendix B-journals in sample), as well as 4 preprints and 3 other types of publications (such as chapters in books as well as think tank white papers and memos).Once papers were identified for synthesis and analysis, 6-10 keywords were manually extracted from each article, starting with the ones identified by the authors (if any), and the diversity of journal types was recorded.
No human data was collected for this study.However, ethical considerations, such as how to discuss whether synthetic biology is significantly different from nature, were carefully addressed throughout the study.
The study's findings may not be generalizable to biological research that does not rely on synthetic approaches or that only have limited use of AI technologies.Given that scale-up seems to be a much desired future development that industry and researchers both expect, future research could explore the complex factors influencing the scale-up of industrial biomanufacturing.
Results
There was no significant concentration of papers in any specific journal, instead the topic was covered broadly across journals.However, 23 percent of the journals (25 journals) were published by Elsevier, and 23 percent of the journals (25 journals) were published by Springer Nature, each highly overrepresented in the sample.The world's two top publishers (in number of published journals) each publish nearly 3000 journals (Curcic, 2023).)The showing of the third (Taylor & Francis, 2508 journals total 8 in our sample), and fourth (Wiley, 1,607 journals total, 6 in our sample) was far lower, grouped with the fifth (Oxford Academic, 7 journals in sample), and sixth (MDPI, 5 journals in sample), who only publish about 500 journals total (Curcic, 2023).The country of publication provided another slight surprise compared to data presented by Shapira et al. 's (Shapira, Kwon and Youtie, 2017) findings of a US and UK dominance when tracking the emergence of synthetic biology over the 2000-2015 period.Our data, in contrast, shows the US a bit behind in synthetic biology publishing (the Netherlands 23 percent, UK 21 percent, US 19 percent, Germany 12 percent, and Switzerland 11 percent).One explanation might be that in several cases US professional societies use a European publisher.
At least 8 breakout papers contained especially innovative, useful, or surprising observations for scholars and policy makers alike (Camacho et al., 2018;Trump et al., 2019;Hagendorff, 2021;Eslami et al., 2022;Hanson and Lorenzo, 2023;Holzinger et al., 2023;Sundaram, Ajioka and Molloy, 2023;Yan et al., 2023), each summarized in a sentence: [1] As long as the black box issues of deep learning models are addressed they will transform insights into molecular components and synthetic genetic circuits and reveal the design principles behind so one can iterate rapidly and create complex biomedical applications (Camacho et al., 2018).
[2] An interdisciplinary approach between the physical and social sciences is necessary (and seems to be proceeding), fostering sustainable, risk-informed, and societally beneficial technological advances that are driven by safety-by-design and adaptive governance that properly reflects uncertainty (Trump et al., 2019).
[3] Machine learning for synthetic biology can yield (forbidden) knowledge with dual-use implications that needs to be governed given legitimate misuse concerns that we have seen in other areas such as nuclear energy (Hagendorff, 2021).
[4] If synthetic biology can deploy the Design-Build-Test-Learn (DBTL) cycle, bridging the cultures of bench scientists and computational scientists, and properly quantify uncertainty, it will impact every activity sector in the world (Eslami et al., 2022).
[5] For the field of synthetic biology, considering all the hype, it is high time to deliver, likely by toning down claims to have all the answers and capitalize on the achievable goals, and enlist tool builders in universities, realizing that biofoundries will not be generalized industrial factories near term but will remain fermentation plants for enzymes (Hanson and Lorenzo, 2023).
[7] In the future, AI will be a driving force of biotechnology whether we like it or not (Sundaram, Ajioka and Molloy, 2023).
[8] So far, AI in synthetic biology has been used for foresight, data collection, and analysis but in the future it will be used to design complicated systems (Yan et al., 2023).
The 1297 unique keywords found in these 204 sources were clustered into 81 broad categories, 42 of which seemed particularly important as literature search keywords (see Fig. 2).
The impact of AI-enabled synthetic biology
Among the 169 peer-reviewed papers in the sample, there were 81 papers that explicitly discussed the impact of AI-enabled synthetic biology.The other 88 papers discussed risk but not explicitly from AI. Equally surprising was the exclusion of AI in all the other papers, given that several were review articles or otherwise covered state-of-the art or emerging technologies and tools for synthetic biology.There is no ready explanation for this omission, except to say that perhaps (a) those researchers are not familiar with the potential of (generative) AI for biology, (b) don't think it is as big of a deal as others do, or (3) consider it less relevant for today's concerns in synthetic biology, or (4) consider AI (machine learning) an essential tool but prefer not to elevate it beyond its obvious place as a key research tool.
Another finding is that many papers that I found relevant to the future of synthetic biology's governance, risk, and innovation trajectory, did not in fact use that term.Dozens and dozens of papers included in the sample happily discussed AI and the impact on their field, be it metagenomics of the microbiome (Wani et al., 2022), health and intelligent medicine (Achim and Zhang, 2022), applied microbiology (Xu et al., 2022), designer genes (Hoffmann, 2023), drug discovery (Yu, Wang and Zheng, 2022), oncology (Wu et al., 2022), systems biology (Helmy, Smith and Selvarajoo, 2020) without realizing that synthetic biology is bound to intersect with it at some point soon (or at least explicitly omitting the use of the term).What could the reason be?Is the term upsetting to part of the biology community or establishment?Synbio scholars clearly frame their problems differently from the biology establishment.Perhaps there is a disparity in age, experience, and skills between patchy biological knowledge of bio-IT nerds and lacking IT skills among biologists?AI has been applied to material discovery, finding 700+ new materials so far (Merchant et al., 2023) and it is a question of time before it will be used for scalable materials design using AI-enabled synthetic biology (Tang et al., 2020;Burgos-Morales et al., 2021) although for real world applications we might first need better standardized vocabularies for biocompatibility (Mateu-Sanz et al., 2023).
Only 5 papers (Kather et al., 2022;Grinbaum and Adomaitis, 2023;Morris, 2023;Ray, 2023;Xiao et al., 2023), a popular science article (Tarasava, 2023), and an editorial ('Generating "smarter" biotechnology', 2023) discussed the impact of generative AI on synthetic biology.This is expected to increase dramatically quite soon, given the success of this latest wave of AI technology and the platform aspects of its spread.However, as one paper put it, synthetic biology has a natural synergy with deep learning (Beardall, Stan and Dunlop, 2022).The use cases discussed in various papers include: as a classifying text, generic search engine, generating ideas, helping with access to scientific knowledge, coding, patient care (Clusmann et al., 2023), protein folding, proofreading, sequence analysis, summarizing knowledge, text mining of biomedical data, translation (Clusmann et al., 2023), workflow optimization, foresight of future research directions (Yan et al., 2023); collection of related synthetic biology data, and more (Beardall, Stan and Dunlop, 2022;Clusmann et al., 2023;Tarasava, 2023).The promise of AI-enabled cell-free synbio systems (Lee and Kim, 2023), which use molecular machinery extracted from cells, is particularly significant for automation and scale-up of biosensors among other things.It bears pointing out that the significant advances in cell-free synbio systems enabling the acceleration of biotechnology development, specifically its ability to enable rapid prototyping as well as the ability to conduct predictive modeling, pre-date generative AI by a decade (Moore et al., 2018;Müller, Siemann-Herzberg and Takors, 2020).Even today, he the barriers seem to be limited availability of relevant data either because it does not exist yet, because data is scarce, the data set is small, because it is not publicly available, or because it is not formatted in useful ways (Rosenbush, 2023).Not all of these challenges can be immediately resolved by generative AI.
What matters most to the governance and innovation concern would be those barriers, areas, or workflows where AI could make the biggest impact, not just for the field applying it but for the shared resource that is AI-enabled synbio that would grow the pie.Based on the literature review, I've attempted to suggest which topics fit in that perspective (see Fig 4).For example, progress on interoperability would benefit all, as would AI-enabled lab operations workflows.Each would be a synbio building block.Big ticket items such as protein folding is in a bit of a different category.When AlphaFold achieved near-perfect protein fold predictions, it was the most important moment for AI in science so far, yet left plenty of work for structural biology (Perrakis and Sixma, 2021), including the application of coiled coils as a self-assembly building block in synthetic biology (Woolfson, 2023).Similarly, when the mRNA platform became a successful vehicle for a COVID-19 vaccine that changed the world, this happened in vitro, yet, the production of synthetic mRNA (Hınçer et al., 2023) or miRNA (Matsuyama and Suzuki, 2019) in the cell itself would be an even more significant breakthrough-and getting there might require the use of AI (Naderi Yeganeh et al., 2023).
Figure 4. AI-synbio accelerants and use cases
On the other hand, there is no reason to believe that the top labs will be overtaken, quite the contrary, in fields such as consulting, it seems that generative AI accelerates the work of top teams (Moran, 2023).However, the worrying aspect is that the previous assumption from biorisk work pre-generative AI was that developing pathogens is an activity only possible in highly advanced biolabs.The increasing availability of insight, instructions, as well as wetlabs and foundries on demand (Sandberg and Nelson, 2020), would seem to be a potential issue to watch.
Best practice in AI-synbio governance
In the literature, there is ample evidence of what constitutes biosafety and biosafety governance best practice (Perkins et al., 2019;Wang and Zhang, 2019;Li et al., 2021;Mökander et al., 2022;Sandbrink, 2023b) and the emphasis is on a mix of specific training and, relatedly, developing a safety and responsibility work culture.In previous decades, the few advanced labs that existed were "compliant" biocontainment actors, for which acceptable systems were in place.However, regulating synthetic DNA comes with new challenges, including scalability, and less ability to create genetic firewalls to natural organisms, and it has become easier to circumvent oversight (Hoffmann et al., 2023).The more accessible (Wang and Zhang, 2019)and generally useful synbio potentially is regarded to be (Sun et al., 2022), the less likely it is that prohibition will remain an effective tool.Decades-old bioinformatics resources originally developed to compare gene sequences, such as BLAST, have been re-used, with mixed results, as biosafety tools to identify pathogens (Beal, Clore and Manthey, 2023).Newer tools, such as machine learning-based topic models, enable spotting trends across a wide set of biosafety research publications (Guan et al., 2022).AI-synbio governance (Achim and Zhang, 2022;Mökander et al., 2022;Grinbaum and Adomaitis, 2023;Holland et al., 2024) is expected to be more of the above, but also requires AI skills and perspectives that go far beyond wet lab practices and will require updates to biosafety laws, regulation, governance, standardization (Pei, Garfinkel and Schmidt, 2022).It will change the role of the state (Djeffal, Siewert and Wurster, 2022) as it will no longer be the primary norm setter or enforcer of responsibility.
AI is already contributing to the fragmentation of biology (Hassoun et al., 2022) and will challenge medical expertise among specialists (Patel et al., 2009).Generative AI, and especially other advancements in multi-modal AI, combined with better multi-omics synbio dataset interoperability (Topol, 2019) and standardization, will eventually lead to fundamentally new playing fields.Vigilance is required (Harrer, 2023) both to get us there, predict when we will get there, and decide what to do when we get there.The initial issue surrounds AI-synbio lab safety practices (D'Alessandro, Lloyd and Sharadin, 2023) when the "lab" suddenly is a dispersed concept, and decisions around forbidden knowledge (Hagendorff, 2021), new sets of responsibilities in the research community (Blok and von Schomberg, 2023) and among health practitioners (Achim and Zhang, 2022), avoidance of doom speak (Bray, 2023), handling the reality of malicious actors (Carter et al., 2023), and will represent an enormous challenge for reskilling and upskilling those who want to work with the topic (Xu et al., 2022).
Getting it right will mean balancing brave investments (Hodgson, Maxon and Alper, 2022) with monitoring the effects, including developing an ethics and a taxonomy for working with AI-synbio-human hybrids and intelligence (Nesbeth et al., 2016;Damiano and Stano, 2023), dealing with new synthetic pathogens (O'Brien and Nelson, 2020), saying carefully goodbye to the natural world (Lawrence, 2019;Webster-Wood et al., 2022;Bongard and Levin, 2023) or at least radically enhancing biocontainment (Schmidt and de Lorenzo, 2016;Aparicio, 2021;Vidiella and Solé, 2022;Hoffmann, 2023), as well as developing new approaches to worker safety (Murashov, Howard and Schulte, 2020).This leads into the issue of dual use of concern, which currently is a binary issue even though it is about to become immensely complex, requiring a more nuanced approach (Evans, 2022;Sandbrink, 2023a), given the legitimate concern with deliberate, perhaps even deliberate synthetic pandemics (Sandbrink, 2023b).
Giving a complete regulatory overview of synthetic biology is complex (Beeckman and Rüdelsheim, 2020) and is not the task of this paper, but Table 1 still lists some key standards and guidelines discussed in the sample, and relevant to DURC issues.
Scaling industrial (bio)manufacturing
Synbio is not yet a mature engineering industry with well-understood costs and timelines (Watson, 2023) and investments fluctuate from year to year (SynBioBeta, 2023).A recent Schmidt futures report defines commercial production scale as a fermentation capacity of 100,000 liters or more and states only a few U.S. companies currently have infrastructure at this scale and relatively inaccessible to small-and medium enterprises at the present moment (Hodgson, Maxon and Alper, 2022).Achieving pilot scale is the first hurdle to pass and would cost in excess of $1 billion to build a dozen pilot facilities to fuel the U.S infrastructure alone (Hodgson, Maxon and Alper, 2022).Synbio was not truly part of the industry 4.0 paradigm either (Jan et al., 2023).The keywords to describe the industrial aspect of synthetic biology included: 'bioeconomy', 'bio-capitalism', 'biomanufacturing', 'biotech industry'.Surprisingly few path breaking peer reviewed articles were found on these topics.The six key ingredients for biomanufacturing derived from our sample are: biological insights, AI, bioprocessing, engineering scale-up, governance frameworks, and gigascale investments (see Fig. 5).
Figure 5 Biomanufacturing ingredients
Attempting to pinpoint exactly when a sci-tech paradigm will take off commercially is a fool's errand.Exceptional growth in research communities can be tentatively forecasted from citation analysis (Klavans, Boyack and Murdick, 2020).The emergence of new industries is significantly more complex but the growth in intangible assets (Börner et al., 2018), such as generative AI, applied to an industry (manufacturing) would be a clear indicator.One article in our sample proposed a taxonomy of four innovation types specific to the bioeconomy: Substitute Products, New (bio-based) Processes, New (bio-based) Products, and New Behavior, each carries their own commercialization challenges (Bröring, Laibach and Wustmans, 2020).Deriving insights from other papers, existing or emerging business models in synthetic biology would include automation, contract research, increasing crop yields in agriculture (Bhardwaj, Kishore and Pandey, 2022;Wang, Zang and Zhou, 2022), data driven design (Freemont, 2019), efficiencies, new components, DNA synthesis (Seydel, 2023), infrastructure, licensing, manufacturing molecules for the food industry (Helmy, Smith and Selvarajoo, 2020), modularity, new materials, new platforms, new products, open source tools, services, or substitution, such as a new technology stack (Freemont, 2019).
That being said, despite the relatively low number of papers describing the synbio industry (van Doren et al., 2022), there are signs in the gray literature and in the consulting literature (Candelon et al., 2022) that things are changing within this decade.Arguably, the synbio startup boom in pharma and food industries will be duplicated in health and beauty, medical devices, and electronics, with cost-based competition from syn-bio alternatives in chemicals, textiles, fashion, and water industries, soon to be followed by the mining, electricity, and construction sectors (Candelon et al., 2022).The way it might happen is not necessarily only through flashy, radical innovations, but incrementally because synbio is becoming a useful tool to improve performance, quality, and sustainability of almost all types of manufacturing (Candelon et al., 2022).
Made-to-order synthetic DNA is faster and cheaper than before but is still a massive bottleneck to building scalable biological systems based on synthetic components (Seydel, 2023).The future role of synthetic biology in carbon sequestration into biocommodities could be of major industrial importance provided the bioproduct could be commercialized (Jatain et al., 2021).
Discussion
One of the papers in the sample reports that synbio discourse is framed in six major ways: as science, social progress, risks and control, ethics, economics, and governance (Bauer and Bogner, 2020), which roughly matches the eight clusters identified based on the papers in the present sample: Applications, Bioeconomy, Countries, Governance, Science, Tools, Materials, and Risks.These frames tend to belong to different camps (particularly citizens, corporations, governments, nonprofits, and startups), with separate agendas and concerns, as opposed to characterizing aspects of a discussion that all actors should be having.There are signs this is changing towards more adaptive approaches to address the uncertainty surrounding the effects of novel technologies (Millett et al., 2020;Mourby et al., 2022) in parts of the system, such as in innovation communities such as iGEM (Millett et al., 2020;Kirksey, 2021;Millett and Alexanian, 2021;Vinke, Rais and Millett, 2022), or in entrepreneurial ecosystems (Nylund et al., 2022).However, those are not characteristic of the governance system as a whole.
Terminological and sectoral confusion, growing pains
Given the nascent state of AI-enabled synthetic biology, there is an overload of related and relatable search terms and keywords that proliferate in the scientific community and online, making it difficult to compare, find, and cluster case studies, research, and policy relevant insight.Even after considerable search efforts, we were left with 1297 unique keywords, which were boiled down to 81 broad categories, and further to 42 literature search keywords.The situation will persist, and in all likelihood, it will get worse before it gets better.There are those hoping for a taxonomic renaissance (Bik, 2017) to remedy the problem, including a taxonomy for engineered living materials (Lantada, Korvink and Islam, 2022).An early article attempted to do the same for the field of synthetic biology (Deplazes, 2009), but it might have been too early in the cycle.
Historically, synbio has been seen as a disruptive innovation yielding products and processes which may not be well aligned with existing business models, value chains and governance systems (Banda and Huzair, 2021), but this might now be changing and synbio approaches get integrated into traditional industries and sectors.That is exciting for industrial innovation but challenging for governance, risk and regulation.
Even though commercially available synthetic biology-derived products are already on the market that are, arguably 'changing the world' (Voigt, 2020), the economics of synthetic biology (Henkel and Maurer, 2007), the biomanufacturing industry overall, is in its infancy.McKinsey might be right that it is a $4 trillion gold rush waiting to happen (Cumbers, 2020), or as BCG claims, $30 trillion by the end of the decade (Candelon et al., 2022), across food and ag, consumer products and services, materials and energy production, and human health and performance (Ang, 2022;Clay and September, 2023).However, the conspicuous absence of management and business articles on synbio in our sample might indicate that the business dimension is so embryonic that these visions are not yet a story worthy of sustained business school attention.The umbrella term bioeconomy (Baker, 2017;Bröring, Laibach and Wustmans, 2020;Marvik and Philp, 2020;Banda and Huzair, 2021;Hodgson, Maxon and Alper, 2022;Bröring and Thybussek, 2023;Clay and September, 2023;Rennings, Burgsmüller and Bröring, 2023) is perhaps convenient, but encompasses so much that it is hard to know what it means..
Transdisciplinary barriers to growth
What seems to be missing in the literature is a clear vision for how AI-enabled synthetic biology would be truly different from previous approaches.Most of the papers imply that AI will remain only one of many technologies relevant to progress in the synthetic biology field.No papers paint a picture where there is a straightforward path to massive scale-up, with possible exception of AI for multi-omics.The shift would happen once the synbio field was able to shift from its current systems-centric approaches (requiring slow, cumbersome wet lab experiments and trial-and-error tinkering) to data-centric bioprocessing approaches (not just using AI for data processing) that are themselves digitally scalable (Owczarek, 2021;Scheper et al., 2021), and constitute automated design-build-test systems (Holland et al., 2024), supported by digital twins (Manzano and Whitford, 2023).Having said that, enormous efficiencies could be had through even much simpler process automation and operations improvements in biomanufacturing, for example through no-code methods (Linder and Undheim, 2022).
The barriers to the field of synthetic biology's growth are many, from (1) technical feasibility, including the scientific problems connected with the fusion of three disciplines; synthetic biology, artificial intelligence, and social science (Trump et al., 2019), via (2) various forms of risk to (3) industrial challenges, to (4) institutional challenges, or (5) social dynamics.
On the technical side, we find the challenges surrounding data quality (Patel et al., 2009) the fragmentation of knowledge (Hassoun et al., 2022)interoperability (Mateu-Sanz et al., 2023)or standardization (Endy, 2005;Hanczyc, 2020;Garner, 2021;Pei, Garfinkel and Schmidt, 2022;Mateu-Sanz et al., 2023).For example, even though there is great need, and the desire is there, standardizing complex biological systems is difficult (Garner, 2021).As many of the papers in the sample point out, there are also scientific problems connected with the fusion of two disciplines, synthetic biology and artificial intelligence.A multi-layer technology stack is evolving (Freemont, 2019).There is transdisciplinary training and perspective required (Hammang, 2023).There is also considerable uncertainty produced when three domains (or more), and their methodologies, technologies, and tools, are merging (Trump et al., 2019).
Notably, (1) biological insight is needed to deploy AI correctly, yet cell behavior is unpredictable (Lawson et al., 2021)(2) AI insight is needed to capture what ends up retranslated as biological patterns in the data, but AI insight alone is not sufficient to identify what data might be relevant and (3) social science insight, including business models, sociotechnical issues (Marris and Calvert, 2020), social dynamics, social implications, governance, risk, ethics, and psychological reactions, is needed to assess the feasibility of R&D, product development, and commercialization of the emergent field's output.That's a tall order for individual researchers, teams, companies, and political institutions alike.As the field grows in importance, scale, and impact, it will entail an enormous societal reskilling effort (Hammang, 2023).
Various forms of risk will impact synbio growth, notably AI risk (O'Brien and Nelson, 2020;Grinbaum and Adomaitis, 2023), the potential for a slew of catastrophic risks (DiEuliis et al., 2019) such as new pathogens, or even the specter of existential risks (Boyd and Wilson, 2020) threatening the flourishing or survival of humanity.
Whack-a-mole governance
The most reasonable way to look at it would be: what can generative AI do within the frame of all of these barriers?From this we can wonder whether generative AI-enabled synthetic biology really would be truly different from previous approaches.We could, of course, also wonder how different the situation would be if many of those previously mentioned barriers somehow went away.Interestingly, what AI fanatics would respond is that once those barriers are gone, AIs would themselves produce such approaches that are far superior to what could be conceived by human experts.Alternatively, it is always possible that emerging, superior and multi-modal AI systems would be able to overcome enough barriers to transform the field anyway.
For now, the most prudent governance path seems to be to keep fostering a responsibility mindset in a distributed manner at global scale.Machine learning enabled digital processing is already improving diagnostic accuracy and reducing turnaround time for even complex lab tests (Undru et al., 2022).The smart laboratory, with AI-automation of biosecurity, has arguably moved from concept to reality, enabling self-control process management flows of personnel, materials, water, and air, automated operation, automated risk identification and alarms (Li et al., 2022).However, when technologies merge, uncertainties multiply (O'Brien and Nelson, 2020), cybervulnerabilities and circumvention options increase (O'Brien and Nelson, 2020).If it indeed was the case that AI lifts all boats, it wouldt mean that mediocre labs can more rapidly gain the ambition to modify their facilities and work practices, and start doing work regulated by BSL-3 and BSL-4 designations.But while more lab researchers than before might potentially deploy AI to carry out experiments that should be carried out in a lab with a stricter designation (a higher BSL), this would, in most cases, be against the regulations.Having said that, in India, for example, there are no national reference standards, guidelines, or accreditation agencies for biosafety labs, so those labs that do comply, rely on the international ones (Mourya et al., 2014(Mourya et al., , 2017)).China, also, lacks a comprehensive regulatory system for BSL-2 labs, and lacks trained biosafety staff (Wu, 2019).The numbers game is indeed instructional.The International Laboratory Accreditation Cooperation (ILAC) accredits over 88,000 laboratories (ILAC, 2023).If you consider that there are currently 64 BSL-4 labs, just imagine if all BSL-3 labs wanted to do BSL-4-type work, and were capable of it.There are already some 57 BSL-3+ labs (Kaiser, 2023).There are currently great experiments on regarding the feasibility of rapid response mobile BSL-2 lab deployment to areas with a public health crisis, but those labs carry additional risks from rogue elements (Qasmi et al., 2023).Or, what about if all BSL-2 labs (which include most labs that work with agents associated with human diseases) suddenly started doing BSL-3 or BSL-4 type work?With AI, and without national control regimes, more BSL-2 labs will be tempted to think they can take on more advanced work, too quickly.
As has been pointed out, there is a need to deploy a governance continuum (Hamlyn, 2022).Based on our reworking of the issues based on the literature review (Linkov, Trump, Poinsatte-Jones, et al., 2018), there are six governance levels (global, national, corporate, lab, scientist, citizens) and four governance types (precautionary, stewardship, bottom-up, and laissez-faire) to be considered in an emerging Framework for AI-enabled synbio governance (see Table X).Each of these need constant monitoring and renewal based on assessing threats, hazards, and opportunities.Each governance level might prioritize one approach, but must have aspects of all governance types.Each governance type must be reflected at all governance levels.Today, we only have elements of such a framework implemented, and the skills required to make a comprehensive approach happen are formidable and require an all-of-society effort.
At any level, the process is quite complex.For example, the corporate AI governance at British biopharma AstraZeneca includes compliance documents, a responsible AI playbook and consultancy service, an AI resolution board, and AI audits-emphasizing procedural regularity and transparency-and interlinking with existing procedures, structures, tools, and methods (Mökander et al., 2022).
However, the literature review points to the fact that the true governance challenge is not only about the individual elements doing things right.Rather, proper governance is interactive, and adaptive, and requires working on all levels of governance simultaneously while not ignoring any one level for much time at all.One could describe the process as whack-a-mole governance (see Fig. 6) where there are many actors using small rubber mallets (aka laws, rules, norms, votes) that need to hit each level simultaneously for the button (risk) to stay down.The fact that some types may adversely affect others, for example soft law might delay or undermine regulation or hard law, must be monitored and dealt with through responsible innovation (RI) approaches (Hamlyn, 2022).The entire governance structure (See Table 2) must work in a holistic way.
Conclusion
The research question was: what are the most important emergent best practices on governing the risks and opportunities of AI-enabled synthetic biology.Indeed, some best practices are emerging, but it is still a disjointed picture.The first hypothesis that [1] there is a nascent literature on the impact of AI-enabled synthetic biology only found partial support.In fact the literature is nascent but there is scarce evidence on whether generative AI makes a big difference, or only adds to the emergence, and we had to consult related literature on generative AI in science and research to get closer to an answer.
The second hypothesis found more support, because [2] active stewardship is emerging as a best practice on governing the risks and opportunities of AI-enabled synthetic biology.Having said that, top-down governance, especially the command-and-control flavor, is not sufficient, and the literature points to decentralized governance as a remedy.A whack-a-mole governance model was formulated to describe and possibly also to address these challenges.
Hypothesis three which said that [3] to achieve proper governance, most, if not all AI-development needs to immediately be considered within the Dual Use Research of Concern (DURC) regime has some support.The larger point is that in some ways all research is dual use (Evans, 2022) because research always has many meanings and uses and compliance with the letter of imperfect, imprecise and rapidly outdated laws can only get you so far and also limits research in undesirable ways.Whose security are we trying to protect?Whose security typically is not protected?The DURC regime itself, instigated with the Fink report in 2003, is in serious need of an update in light of generative AI-enabled synthetic biology, and dual use is understood differently internationally (Lev, 2019).The review should begin immediately, but clarity on the threat is not likely to emerge for a few years, as generative AI-ready synbio-datasets and related functionality still needs to mature.
The last hypothesis [4] that even with the appropriate checks and balances, with AI-enabled synthetic biology, industrial biomanufacturing can conceivably scale up beyond the microscale within a decade or so, found some support but the field is still largely dependent on innovations that still have not materialized such as bioprocessing workflow, standardization of multi-omic datasets, and a design-build-test cycle that would be required to enable such scale-up.In fact, delving into the impact of AI-enabled synthetic biology for industrial biomanufacturing is a fruitful direction for future research.
At the end of the day it is safe to assume that AI-enabled synthetic biology is both a catalyst for risk (through creating novel synthetic organisms) and a potential for risk reduction and mitigation (through optimizing or restoring natural organisms and detecting pathogens).Governance of the phenomenon, and any attempts to megascale the bioeconomy in short order (by the US, UK, EU, China, or others) needs to keep both perspectives firmly in mind.
In closing, the premise of the article was that it is possible to identify best practices for governance, innovation, research, or policy on AI-enabled synthetic biology, and that these issues have commonalities and are best together.The subtext was to be more resilient towards risks but still being able to capture the opportunities.The topics do seem related, and relatable, but it is complex both for researchers, entrepreneurs, corporations, and policymakers to do so because of the transdisciplinary efforts required (Lee and George, 2023;Taylor et al., 2023).However, in light of the revolutionary potential of AI-enabled synthetic biology, admittedly not yet fulfilled beyond single-cell microorganisms, one would have to conclude that best practices will change rather rapidly.If so, one implication might be that we chase such best practices in vain and that synthetic biology cannot deliver them (Hanson and Lorenzo, 2023).
Whack-a-mole type games seemingly are about quick reactions.However, it turns out that, according to the inventor of the version of the game with air cylinders, Aaron Fetcher, the best way to get a high score is to gaze in a relaxed way at the center of the playing field with the side moles in your peripheral vision (Brown, Fenske and Neporent, 2011).It is exactly that mix of calm focus with minimum effective effort which is needed for safe and sound AI-enabled synthetic biology scale-up.We are dealing with an environment with many possible distractions.
As soon as one problem is fixed, another one will appear.Terminological and sectoral confusion, and growing pains within the industry, in the scientific establishment, and across the industries that are touched, will persist for some time.The obvious transdisciplinary barriers to growth are not easily or quickly resolved, even with a major reskilling effort.Generative AI might be a gamechanger, but biology will still be complex and surprising even to experts (and certainly surprising to machines).That's why emerging frameworks for AI-enabled synbio governance likely should contain a mix of precautionary (command-and-control, hard law), stewardship (soft law), bottom-up, and laissez-faire (industry-driven) approaches.
Declaration of Generative AI and AI-assisted technologies in the writing process
During the preparation of this work the author did not use AI in the writing process.The author takes full responsibility for the content of the publication.
Conflict of interests
Competing interests: The author(s) declare none.
Funding
The study was partially supported by Open Philanthropy.
Figure 2 .
Figure 2. Keyword categories relevant to Synthetic biology, AI, governance and innovation
Table 1
List of biorisk standards and guidelines
Table 2 .
Framework for AI-enabled synbio governance | 10,105.2 | 2024-02-01T00:00:00.000 | [
"Environmental Science",
"Biology",
"Philosophy",
"Computer Science",
"Political Science"
] |
Reduced expression of brain cannabinoid receptor 1 (Cnr1) is coupled with an increased complementary micro-RNA (miR-26b) in a mouse model of fetal alcohol spectrum disorders
Background Prenatal alcohol exposure is known to result in fetal alcohol spectrum disorders, a continuum of physiological, behavioural, and cognitive phenotypes that include increased risk for anxiety and learning-associated disorders. Prenatal alcohol exposure results in life-long disorders that may manifest in part through the induction of long-term gene expression changes, potentially maintained through epigenetic mechanisms. Findings Here we report a decrease in the expression of Canabinoid receptor 1 (Cnr1) and an increase in the expression of the regulatory microRNA miR-26b in the brains of adult mice exposed to ethanol during neurodevelopment. Furthermore, we show that miR-26b has significant complementarity to the 3’-UTR of the Cnr1 transcript, giving it the potential to bind and reduce the level of Cnr1 expression. Conclusions These findings elucidate a mechanism through which some genes show long-term altered expression following prenatal alcohol exposure, leading to persistent alterations to cognitive function and behavioural phenotypes observed in fetal alcohol spectrum disorders.
Fetal alcohol spectrum disorders (FASD) describe the continuum of phenotypic effects that may result from prenatal alcohol exposure (PAE). PAE is the most common cause of preventable neurodevelopmental disorders in North America [1,2] and is associated with attention deficit, impaired learning and memory, and hyperactivity [3], as well as an increased risk for anxiety and mood disorders [4]. These cognitive and behavioural changes persist throughout the life of an individual following PAE, though the mechanisms involved in maintaining these life-long changes are not well understood. However, it has been suggested that the effects of PAE may involve long-term changes in gene expression [5] that may be maintained through alcohol-induced epigenetic changes. In particular, we have previously reported that the expression of microRNAs (miRNAs) may be globally altered in the adult mouse brain following PAE [6], which supports recent data by other groups [7,8]. More specifically, these changes in miRNA expression may subsequently alter the expression of target genes, with one miRNA having the potential to regulate many different genes [9]. One such gene may be cannabinoid receptor 1 (Cnr1).
We have previously shown that early neonatal ethanol exposure in mice results in reduced Cnr1 gene expression in the adult brain [5]. Cnr1 acts within the endocannabinoid (eCB) system, involved in modulating neurophysiological processes controlling mood, memory, pain sensation, and appetite [10]. Cnr1 is also thought to be involved in the neuropharmacological effects of alcohol [11] through inhibition of glutaminergic and GABAergic interneurons [12]. Variations in this gene or alterations in its expression are also associated with mood disorders, particularly fear and anxiety phenotypes [13].
Here, we use a C57BL/6J mouse model of binge-like exposure during the period of synaptogenesis [5] to assess a potential relationship between Cnr1 and its putative regulatory miRNA, miR-26b. We evaluated the inverse expression patterns of these two transcripts, hypothesizing that the up-regulation of the miRNA following PAE may in part be responsible for the observed reduction in transcript of a target gene in the adult brain. In these experiments, mice were exposed to two acute doses of alcohol (5 g/kg) at neurodevelopmental times representing the human third trimester equivalent. This method has been previously reported and induces a peak blood alcohol level of over 0.3 g/dL for 4 to 5 hours following injection, and is sufficient to induce neuronal apoptosis and result in FASD-related behaviour [5,14,15]. Our results suggest that ethanol exposure during neurodevelopment may exert its long-term effects by altering the expression of regulatory miRNAs, which may then reduce the expression of a number of target genes that may contribute to the spectrum of phenotypes observed in FASD.
Gene expression data previously was generated through microarray analysis (GEO # GSE34539) of RNA isolated from whole brain tissue of 60-day-old male mice exposed to binge-like levels of alcohol during the third trimester equivalent on postnatal days 4 and 7 (see [5] for methods). miRNA expression array data (GEO # GSE34413) was also generated from the same sample (see [6] for methods).
Analysis of these data show a reduction of Cnr1 (fold change = −1.33, P = 6.07 x 10 -5 ) in ethanol-treated brains as compared to the saline controls. Also, the miRNA miR-26b increased in ethanol-treated mice (fold change = 1.284, P = 0.0364) compared to controls.
The potential interaction of the genes and miRNAs identified as differentially expressed by the array studies were analysed using Ingenuity's® Micro-RNA Target Filter. This analysis identified miR-26b as a high-confidence predicted regulator of Cnr1 expression.
The reduction of Cnr1 transcript was confirmed by real time RT-PCR [5], showing a 1.14-fold decrease in expression in ethanol-treated male brains as compared to matched controls (P = 0.004; Figure 1A). Further, we demonstrated a significant increase in the level of miR-26b miRNA in ethanol-treated samples (fold change = 3.71, P = 0.012) compared to matched controls (see [6] for methods) ( Figure 1B). This inverse relationship within the same sample set suggests that the two observations may be biologically related. This potential interaction was further analysed using the TargetScan® Human 6.2 predictor for miRNA targets [16], which shows that the seed region of miR-26b possesses complementarity to the 3'-UTR of the Cnr1 transcript and has a significant potential to bind this region ( Figure 2). The probability of conserved targeting (P CT ) analyses the preferential conservation of binding sites [16]. It has the advantage of identifying targeting interactions that are not only more likely to be effective but also those that are more likely to be consequential for the animal, given the evolutionary conservation. The analysis Figure 1 Analysis of Gene and miRNA expression via qPCR. (A) Change in Cnr1 mRNA levels in male control and alcohol-treated whole brain samples normalized to control. This figure was reproduced with permission from the authors [5]. (B) Change in miR-26b levels in male control and alcohol-treated whole brain samples normalized to control. Data are fold change ± SEM. Control n = 5, alcohol n = 5. *P <0.01, **P <0.05. calculated a P CT score of 0.84, which indicates a significant degree of confidence in the predicted interaction. Next, we evaluated expression of Cnr1 and miR-26b to confirm their relative expression levels.
miR-26b is encoded from an intron of small Cterminal domain phosphatase [17]. Interestingly, it is involved in neuronal differentiation as its transcription results in a negative feedback loop that is absent in neural stem cells [18]. miR-26b has also been shown to regulate the expression of brain-derived neurotrophic factor (BDNF), a gene strongly implicated in neurodevelopment and related disorders (i.e., schizophrenia) [19], including the effects of PAE [5].
This altered expression of miR-26b may have the ability to affect downstream gene expression by binding to the mRNA transcripts of its target genes. We have demonstrated that miR-26b shows complementarity to a region of the 3'-UTR of the Cnr1 transcript (Figure 2), which gives it the potential to regulate the expression of Cnr1. This regulation by miRNAs generally occurs through blocking of translation and/or promoting degradation of the target transcript [9]. The up-regulation of miR-26b correlates with the reduced Cnr1 transcript observed in the adult brain of mice neurodevelopmentally exposed to alcohol. [7] Our results suggest that this regulatory mechanism also occurs in vivo, and that the stable alteration of miRNA as a result of neurodevelopmental teratogenesis may affect longterm gene expression of its target transcript(s) long after exposure.
It is possible that relationships such as these may have the ability to influence the aberrant behavioural phenotypes seen in FASD. The eCB system, for instance, plays a strong role in anxiety-related behaviour [20], which has been shown to increase in adult mice following PAE [21]. Previous studies evaluating Cnr1 knockout mice have demonstrated increased anxiety-like phenotypes [13]. This suggests that the observed reduction in Cnr1 expression demonstrated here may contribute to our observation of anxiety-like behaviour following PAE.
Ultimately, these findings provide a mechanism by which the long-term change in Cnr1 expression is maintained following PAE. They also suggest that the alteration of neurodevelopmentally-important miRNAs can influence the long-term function of biological pathways that influence cognition and behaviour. Epigenetic regulators of gene expression may then be affected by PAE, subsequently exerting pleiotropic effects on numerous gene targets that then contribute to the longterm and variable neurobehavioural effects associated with FASD.
Competing interests
The authors declare no competing financial interests. | 1,954.4 | 2013-08-02T00:00:00.000 | [
"Biology",
"Medicine"
] |
Endothelial cells release microvesicles that harbour multivesicular bodies and secrete exosomes
Abstract Extracellular vesicles (EVs) released by endothelial cells support vascular homeostasis. To better understand endothelial cell EV biogenesis, we examined cultured human umbilical vein endothelial cells (HUVECs) prepared by rapid freezing, freeze‐substitution, and serial thin section electron microscopy (EM). Thin sections of HUVECs revealed clusters of membrane protrusions on the otherwise smooth cell surface. The protrusions contained membrane‐bound organelles, including multivesicular bodies (MVBs), and appeared to be on the verge of pinching off to form microvesicles. Beyond cell peripheries, membrane‐bound vesicles with internal MVBs were observed, and serial sections confirmed that they were not connected to cells. These observations are consistent with the notion that these multi‐compartmented microvesicles (MCMVs) pinch‐off from protrusions. Remarkably, omega figures formed by fusion of vesicles with the MCMV limiting membrane were directly observed, apparently releasing exosomes from the MCMV. In summary, MCMVs are a novel form of EV that bud from membrane protrusions on the HUVEC surface, contain MVBs and release exosomes. These observations suggest that exosomes can be harboured within and released from transiting microvesicles after departure from the parent cell, constituting a new site of exosome biogenesis occurring from endothelial and potentially additional cell types.
by a liquid biopsy (Mitchell et al., 2022).Furthermore, EVs are being developed as vehicles for delivery of therapeutic agents (Sil et al., 2020).Despite these important physiological functions and medical utilities, much remains to be discovered about the biosynthesis of EVs.
It is generally viewed that EVs fall into two categories based on their site of biogenesis.Microvesicles arise at the plasma membrane, by outward budding and pinching off directly from the cell surface.In contrast, exosomes are of endosomal origin, produced when intraluminal vesicles (ILVs) contained in multivesicular bodies (MVBs) are exocytosed upon MVB fusion with the plasma membrane (Mathieu et al., 2019;van Niel, D' Angelo & Raposo, 2018).Microvesicles range in diameter from 50 nm to over 1 µm and include ectosomes, microparticles, shedding vesicles, migrasomes, and oncosomes (if released from cancer cells).Apoptotic bodies are an additional type of microvesicle, formed when a cell undergoing programmed cell death breaks up into 500 nm -5 micron-diameter fragments with limiting membranes derived from the cell's plasma membrane (Battistelli & Falcieri, 2020;Xu et al., 2019).In accord with the idea that exosomes represent ILVs that have been released by exocytosis from MVBs, they exhibit a narrower size range between 40 and 150 nm (Kang et al., 2021).
Endothelial cells form the endothelium, a single cell-thick lining of the blood and lymphatic vessels that controls the exchange of oxygen and nutrients between the vessel contents and underlying tissues (Ricard et al., 2021).In their role on the "front lines" of vessels, exposed to circulating cells and plasma, endothelial cells release a significant proportion of the EVs found in blood (Mathiesen et al., 2021).EVs released by the endothelium contribute to its role in supporting vascular homeostasis, which includes maintenance of the antithrombogenic surface of the vessels (blood fluidity) and vasodilation, inhibition of inflammation, cell survival and angiogenesis (Trisko et al., 2022).
To gain a better understanding of the mechanisms and structural aspects of EV release from endothelial cells under proangiogenic but noninflammatory conditions, we used thin section electron microscopy (EM) to examine Human Umbilical Vein Endothelial Cells (HUVECs) and look for structural features consistent with microvesicle budding from the plasma membrane and exocytic release of exosomes from MVBs.Cells were preserved by ultra-fast freezing, which is optimal for capturing fast events like exocytosis, and processed by a freeze substitution protocol optimized for plasma membrane enhancement (Walther & Ziegler, 2002).
In thin sections, groups of protrusions were observed on the otherwise smooth HUVEC plasma membrane that were often branched and contained vesicular organelles, including MVBs with ILVs.Beyond cell peripheries, vesicles that contained MVBlike vesicles were observed, suggesting that they were microvesicles that had pinched off from the protrusions, diffused, and occasionally adhered to the coverslip.Serial sections through the presumptive microvesicles on the coverslip confirmed that they were not connected to cells by cellular extensions and that ILV-like vesicles were within the MVB-like vesicles.Further examination revealed omega figures, the structural hallmarks of exocytosis (Douglas, 1973), occurring between MVB-like vesicles inside the microvesicles and their limiting membrane.On occasion, such omega figures contained small vesicles that were identical to ILVs.These observations support the notion that microvesicles containing multiple membrane compartments (referred to as multi-compartmented microvesicles, or MCMVs) pinch off from MVB-containing protrusions at specialized sites on the cell surface.MCMVs contain MVBs that apparently can release exosomes after transiting away from the parent cell.
. HUVEC culture on Aclar coverslips for thin section EM
HUVECs were obtained from American Type Culture Collection (ATCC, Manassas, VA, USA; #PSC-100-013).HUVECs between passages 4-6 were used in this study, screened for mycoplasma contamination (ATCC, Kit #30-1012K), and cytotoxicity (Ther-moFisher, Cat.No. L3224) and analyzed at a subconfluent density.Cells were plated on Aclar 33C plastic film coverslips (Electron Microscopy Sciences [EMS], Hatfield, PA, USA).Compared to glass coverslips which adhere strongly to embedding resin used for thin section EM, Aclar 33C plastic separates easily and cleanly from embedding resin without scratching or damaging the surface of the resin, which corresponds to the surface of the coverslip where the cells are adhered (Kingsley & Cole, 1988).To plate HUVECs on Aclar, the plastic sheet of Aclar was cut into about 12 mm-wide rectangular coverslips to fit into wells of a 12-well plate using clean scissors.After cutting into desired size and shape, coverslips were washed at least ten times with distilled water and then at least ten times with 70% ethanol.Coverslips were then placed into wells of a 12-well plate and rinsed at least ten times with cell culture grade sterile water (ThermoFisher Scientific, Waltham, MA, USA), and then once with endothelial cell culture medium (ScienCell, Carlsbad, CA, USA, Cat.No. 1001) which contains 5% fetal bovine serum, 1% endothelial cell growth supplement, and 100 U/ml Penicillin/Streptomycin antibiotic solution.Cells were plated (30,000 cells per well) and maintained for 2-3 days in an incubator at 37 • C and 5% CO 2 .Medium was changed every 48 h.
. Plunge freezing of cultured HUVECs
To preserve cells for EM, cultured HUVECs on Aclar coverslips were plunge frozen by hand in liquid ethane.Ethane was liquified in the "ethane pot" of a Vitrobot Mark IV plunge freezer (ThermoFisher Scientific) following manufacturer's instructions.A vessel of liquid nitrogen was placed next to the ethane pot for transfer of frozen coverslips to storage containers under liquid nitrogen.
To freeze, a coverslip was gently withdrawn from well of 12-well plate with pair of clean, fine-tipped forceps (Dumont #5, EMS).Holding the coverslip vertically, excess culture medium was wicked away from the edges of the coverslip using Whatman #1 filter paper (EMS).Total blotting time did not exceed 5 s, with careful observation to avoid drying.After blotting, the coverslip was inverted over the liquid ethane, and plunged into the liquid ethane by hand as quickly as possible.The coverslip was immediately transferred to liquid nitrogen and stored under liquid nitrogen until processing by freeze substitution.
. Freeze-substitution of cultured HUVECs on Aclar coverslips
Frozen cells were fixed by freeze substitution, a process that dissolves and replaces ice in the tissue by a solvent while at low temperature, and in the presence of fixatives (Feder & Sidman, 1958).The freeze substitution (FS) process used was a slightly modified version of a protocol developed to enhance preservation and staining of lipid membranes (Walther & Ziegler, 2002).FS staining cocktail was prepared in 20 ml glass scintillation vials with plug caps (EMS) and contained: 0.2% osmium tetroxide, 0.1% uranyl acetate (UA) in glass distilled acetone with 5% water.To prepare the FS cocktail, working under a fume hood, 12 mg of UA (EMS) was added per vial, followed by 600 µl of 4% aqueous osmium (EMS) and vortexed for 30 s to dissolve.Then, 11.4 ml of acetone from a freshly opened bottle of glass distilled acetone (EMS) was added to each vial.The vial was tightly capped, and contents mixed well by swirling.Then caps were loosened and placed in a liquid nitrogen bath until the FS cocktail was frozen solid.Liquid nitrogen was poured directly in the vials, and then frozen coverslips bearing cells were transferred into vials while being maintained under liquid nitrogen.One coverslip was processed per vial.Vials were loosely capped and transferred to the chamber of the automatic freeze substitution machine (EM AFSII, Leica Microsystems, Wetzlar, Germany) that contained an approximately 15 mm-deep bath of ethanol to improve thermal conductivity, and was pre-cooled to −112 • C. Within about 10 min in the freeze substitution machine, the liquid nitrogen inside the vials was evaporated and vial caps were tightened.The following FS program was applied: warm from −112 • C to −90 • C, then hold at −90 • C for 8 h, then warm to 20 • C for 36 h.
. Room temperature processing and embedding of freeze-substituted HUVECs
When the FS program was completed, vials were removed from the machine and processed at room temperature.Coverslips were rinsed with acetone three times and then stained with 0.5% tannic acid in acetone for 45 min, protected from light.Tannic acid was removed by four acetone rinses using freshly opened bottle of glass distilled acetone.Cells were infiltrated with a graded series of EmBED812 resin (EMS) in acetone (50%, 75%, 95%, and 100% resin twice.)Each infiltration step lasted for at least 1 h.The coverslips were embedded as follows: a square "backing piece" of Aclar was cut into a square and laid flat into either the lid or base of a 35 mm plastic Petri dish, which serves as a carrier for the embedding assembly.Then, the coverslip to be embedded was removed from the scintillation vial with clean forceps, and excess resin was allowed to drain from the coverslip for a few seconds.Then the coverslip was placed cell-side-up on the Aclar backing piece in the Petri dish.A gelatin capsule, with the cap removed, (EMS, size 0) was filled with freshly prepared 100% resin until it was level with the rim of the open capsule, and any bubbles were allowed to float to the surface and removed.Then the uncapped, resin-containing gelatin capsule was inverted and placed directly on top of the cell-side of the coverslip.Two or three gelatin capsules were placed side-by-side to cover most of the surface of the coverslip, and the entire assembly was put into the 60 • C oven for 48 h to polymerize.
. Serial thin sectioning of embedded HUVECs
After polymerization, the backing piece of Aclar was removed, and the gelatin capsules attached to coverslips were separated from each other with a jewelers saw into individual blocks.Often the gelatin capsule could be sawed into three to four pieces to allow sectioning of multiple areas per block.After trimming away excess resin with a double-edged razor blade, the Aclar was removed by inserting the edge of the razor blade at a corner, between the Aclar and the resin surface, and lifting carefully to flick away the Aclar coverslip, revealing the smooth bottom surface of embedded cells.The block face was then trimmed to an approximately 600 µm wide × 400 µm tall trapezoid shape for sectioning.Sections were cut using an ultramicrotome (EM UC7, Leica Microsystems) to a thickness of 70 nm using a diamond knife (DiATOME, Hatfield, PA, USA).Typically, four to five serial sections were positioned with an eyelash brush and picked up per 2 × 1 mm slot grid that was formvar and carbon coated (EMS).Because the Aclar coverslip, and therefore the resin surface, was not perfectly flat, and inevitably the knife edge was not perfectly aligned with the block face, the knife grazed portions of the block face (which corresponded to the coverslip surface) and missed other portions of the block face at first, with successive sections becoming more complete until the knife fully entered the resin.The partial sections were collected and used to determine the distance from the coverslip of structures imaged in serially collected thin sections.For instance, if a structure was present near the grazed edge of a partial section where the knife first entered the resin, and that area of resin was not yet cut in the previous section, then it could be ascertained as the first 70 nm-thick section in which the structure appeared, corresponding to the surface of the coverslip.Adjacent sections could then be examined, in order, each containing the next 70 nm slice of the structure.Sections on the grids were post-stained with 3% Reynolds lead citrate (EMS) for 5 min.
. Chemical fixation and EM processing of HUVECs
All steps were performed at room temperature, and all reagents were obtained from EMS unless otherwise stated.HUVECs grown on glass or Aclar coverslips were fixed with 2% paraformaldehyde and 2% glutaraldehyde in 0.1 M sodium cacodylate buffer containing 2 mM calcium chloride for at least 60 min.After fixation, cells were rinsed with cacodylate buffer and then postfixed for 60 min with 0.25% osmium and 0.25% potassium ferrocyanide (Fisher Scientific, Pittsburgh, PA, USA) in cacodylate buffer.After rinsing with cacodylate buffer, cells were incubated in 0.5% tannic acid in cacodylate buffer for 30 min, rinsed with acetate buffer at pH 5.2, and then stained with 2% uranyl acetate in acetate buffer for 60 min.Finally, cells were dehydrated, embedded, and sectioned as described above for the freeze substituted HUVECs.
. Transmission EM of serial thin sections and image processing
Thin sections were viewed using a Tecnai T20 transmission electron microscope (ThermoFisher Scientific) operated at 200 KeV.When a structure of interest was discovered that spanned multiple sections, the structure was tracked back in the serial sections to the first section in which it appeared, and then imaged in consecutive sections moving up in the z-axis away from the coverslip.Images were collected with an NanoSprint1200 side-mount CMOS detector (Advanced Microscopy Techniques, Woburn, MA, USA).Adobe Photoshop 2022 (San Jose, CA, USA) was used to adjust images for display by applying a Gaussian blur of 0.5 pixelradius, followed by adjustment of grey levels, brightness, and contrast.Serial section images were aligned using the TrakEM2 plugin of Fiji image analysis software (Schindelin et al., 2012) which is a version of ImageJ2 (Rueden et al., 2017) equipped with additional plugins for image analysis.Fiji is available as an open-source software (https://imagej.net/software/fiji/downloads).
. Quantification of structures in EM thin sections
To quantify the proportion of cells with protrusion sites from individual thin sections, cut from at least two independent blocks of resin prepared from coverslips of three independent cultures were examined.The number of total cells found in these thin sections, and the fraction of cells with and without protrusion sites were counted (omitting partial cells on the edge of sections).Because we could not see the entire membrane, we constructed a simplified model to adjust to the total cell membrane.We estimated the total cell membrane area to be about 1200 µm 2 using a pancake-like model based on a vertical section showing the height of the HUVEC (∼4 µm) and a mean diameter of 24 µm (Chen et al., 2017).Because in the thin section (X-Y plane) the protrusion site base measured 2-8 µm wide (see Supplemental Figure S1) we chose 2 µm as a conservative measure of the extent of a protrusion site in Z.Then, 2 µm worth of serial sections would show the same protrusion site.But only 9% of the thin section cells showed even one protrusion site.The ratio of the 2 µm band of surface (which for a 24 µm cell has ∼150 µm 2 ) to the total cell surface (1200 µm 2 ) gives the mean number of about ¾ protrusion sites per cell.
Similarly, the frequency of MVB-like vesicles inside MCMVs and omega figures observed on the periphery of MCMVs was quantified.Thin sections from three independent cultures were surveyed and images collected of all observed MCMVs in each of these thin sections.The morphological criteria for identifying a vesicle as an MVB was based on its size being greater than 200 nm in size, and lack of dense staining compared to the neighbouring MCMV contents.While this was usually obvious (see Supplemental Figure S8), when uncertainty arose, a region-of-interest was drawn that included many pixels inside and outside of a candidate vesicle.If the histogram of pixel grey values showed a clear separation of two peaks, with the inside lighter than the outside, the criteria was met, and the vesicle was counted as MVB-like.Once the mean fraction of MVBs per MCMV was established in the individual thin section, an adjustment was made based on the average size of an MVB in an MCMV (367 nm for the major axis of 50 randomly selected MVB-like vesicles), the section thickness of 70 nm, the average size of the MCMV (average of major and minor axes of 1.08 micron), and an idealized spherical MCMV.The number of omega figures per MCMV was counted by inspection of the MCMV limiting membrane (see Supplemental Table S2).
. HUVECs display a localized cluster of plasma membrane protrusions
To look for sites of EV biogenesis occurring from HUVECs using thin section EM, cells were cultured on Aclar plastic coverslips and then preserved by ultrafast freezing in liquid ethane.Frozen cells were fixed and heavy metal-stained by freeze substitution (Walther & Ziegler, 2002), and embedded in epoxy resin.The Aclar coverslip was separated from the hardened resin, leaving the cells cleanly transferred from the coverslip to the resin, with the surface of the resin corresponding to the bottom of the cells.Seventy nanometer thick sections were cut in the en face orientation to cells, parallel to where the coverslip had been.As the diamond knife entered the resin, all sections were collected, including partial sections that had only grazed portions of the resin surface.Sections were collected in order so that structures spanning multiple sections could be interpreted, and distance from the surface of the coverslip in the z-axis could be determined.
Cells were prepared at a subconfluent density, allowing isolated peripheries of individual cells where EVs might be emerging to be clearly examined.Thin sections revealed the characteristic elongated HUVEC morphology (Figure 1a).Cells displayed mostly smooth plasma membrane surfaces, however, occasionally a site spanning several micrometres of the cell surface membrane became irregularly contoured (Figure 1a, boxed area shown in 1b).Evaluation at high magnification revealed the area to be composed of a cluster of protrusions emanating from the surface of the HUVEC.
To explore the three-dimensional organization of this 'protrusion site' , images of the area were taken across consecutive serial sections and aligned (Supplemental Movie 1).The first eight serial sections starting from the surface of the coverslip (Figure 1c) show that the site consists of numerous protrusions, many of which were branched (Figure 1b, magenta outline).Thin necks connecting protrusions to the cell are visible in serial section 4, about 280 nm above the coverslip surface.In sections that occur below or above the level at which the protrusions connect to the cell, the protrusions cut in cross-section appeared to be a cloud of vesicles, hovering next to the cell.For example, Figure 1c, section 1 shows the cross-section of tips of protrusions closer to the coverslip than section 4 where they are seen connecting to the cell.At protrusion branch points and where protrusions connected to the cell, diameters often constricted to 65-200 nm wide necks (Figure 1b, yellow bars).Some protrusions spanned only three to four serial sections (280 nm in z-height with respect to the coverslip), while others extended away from the surface of the coverslip for about 10 sections (700 nm in z-height).Additional serial sections through this protrusion site are shown in Supplemental Movie 1.A gallery of images showing protrusion sites on other HUVECs is shown in Supplemental Figure 1.Two more examples of serial sections through protrusion sites on other HUVECs are shown in Supplemental Figures S2 and S3 and corresponding Supplemental Movies 2 and 3.These examples show protrusions that span over 1200 nm (17 sections) in height.Protrusion sites were also observed in HUVECs grown on glass and Aclar coverslips, that were prepared by conventional room temperature chemical fixation EM (Supplemental Figure S4).
To estimate the proportion of HUVECs that had a protrusion site, individual 70 nm-thick sections from three independent cultures prepared by freeze substitution were examined, and the number of cells with and without protrusion sites were counted.Of these cells, 9% had a protrusion site (Supplemental Table S1).A model was used to estimate the mean number of protrusions per pancake-shaped HUVEC (see methods).Based on this, it is estimated that the mean number of protrusion sites per cell is about three-quarters.A gallery showing the range of appearances of protrusion sites in single thin sections is shown in Supplemental Figure S1.
. Organelle-rich protrusions contain MVBs with ILVs, endosomes, ER, and mitochondria
Organelles in the cell cytoplasm could be identified based on their stereotypical EM ultrastructure (orange boxed areas in Figure 1b, enlarged in Figure 1d-h).In the protrusions, membrane-bound organelles of matching structure were observed, including MVBs containing ILVs, endosomes (round, tubular, and clathrin coated), and endoplasmic reticulum (ER).Occasionally, mitochondria, recognizable by their distinct cristae, double membranes and dark electron dense staining were also observed in the protrusions (green boxed areas in Figure 1c, enlarged in Figure 1i-l).
. Vesicles were present on the coverslip surface that were not connected to cells and contained MVBs and other membrane-bound compartments
Based on the protrusion morphology-thin necks connecting protrusions to the cell and at branch points-it was hypothesized that the protrusion site is a specialized site for the assembly and budding of EVs.Because the membrane would be derived from the HUVEC plasma membrane, such an EV would qualify as a microvesicle.Furthermore, should a protrusion containing membrane-bound organelles pinch off and diffuse into the extracellular milieu, it would create an microvesicle containing multiple compartments, i.e., a multi-compartmented microvesicle (MCMV).If so, diffusing MCMVs may occasionally contact and stick to the coverslip, and remain attached during preparation for EM.To explore this possibility, areas beyond cell peripheries on seemingly empty expanses of coverslip were searched in the en face sections that were closest to the coverslip surface (illustrated in Supplemental Figure S5 and corresponding Supplemental Movie 4).
In such areas, presumptive MCMVs were found beyond the periphery of cells.An example shown in Figure 2a was located about 7 µm from the nearest cell.A complete set of 10 serial sections through this MCMV was obtained starting at from the very first section that contained the MCMV within the first 70 nm of the surface of the coverslip, to the 10 th and last section in which the MCMV appeared (Figure 2b and Supplemental Movie 5).Examination of the serial sections confirmed that the MCMV was not connected to a neighbouring cell by a membrane extension, nor were remnants of detached membrane connections observed.
. MCMVs contain MVBs with ILVs and other membrane-bound compartments
Serial sections through MCMVs demonstrated that they contained round and tubular membrane-bound compartments in addition to a dense cytoplasm (Figure 2b).The vesicular compartments varied in diameter from 45 to 650 nm.Lumens of the vesicles below 200 nm in diameter varied from electron lucent to electron dense with a smooth or granular texture (Figure 2c).The larger compartments were most often electron lucent, like the lumens of MVBs (Figure 1b,e,i,j).Often, an MCMV inner vesicle contained another vesicle, that is, a vesicle inside a vesicle, or double vesicle (Figure 2c).The lumens of the double vesicles often differed in texture or electron density from the vesicle in which they were enclosed (see Figure 2c, double vesicles from section 5).On occasion, MCMV inner vesicles contained multiple vesicles.A quadruple vesicle in Figure 2c from section 6 shows an inner vesicle that contains a round 250 nm vesicle and a 60 nm thick tubule, both with granular textured lumens.The 250 nm round vesicle contains four additional vesicles with diameters ranging from 50 to 75 nm with smooth light to dark grey lumens.
One of these vesicles contains a sixth, 45 nm vesicle, also with a smooth grey lumen.In this configuration, there are five layers of membrane between the contents of the sixth vesicle and the extra-MCMV milieu.Also present in the MCMV shown in Figure 2b, is a 650 nm sub-compartment (shaded yellow), along with some ∼300 nm compartments that are similar in roundness and electron lucency to MVBs, but they do not contain any ILVs.The presence of putative empty MVBs inside MCMVs raised the possibility that MVBs might be capable of releasing their ILVs (to become exosomes) from an MCMV after it has transited away from its parent cell.This observation provided circumstantial evidence for ILV secretion (exosome release) from MCMVs.
. MVBs appear to fuse with MCMV membrane and release exosomes
If exosomes are released from MCMVs, MVBs containing ILVs must be observed inside MCMVs, and evidence of membrane fusion between the MVB the MCMV membrane must be detected.Sections of MCMVs were examined (Figure 3), and examples of MVB-like vesicles, indistinguishable from MVBs observed in cell cytoplasm were observed (boxed orange in Figure 3b,f and 3b' ,3f').The MVB in Figure 3b' appears to be undergoing an outward budding event (orange arrow), suggesting active remodelling of MVBs inside MCMVs.Other MVB-like vesicles inside MCMVs were observed (boxed orange in Figure 3a and 3a', and see Supplemental Figure S6).On average, about one MVB-like vesicle per MCMV was detected (Supplemental Table S2).
The most stringent structural test for membrane fusion between organelles and the plasma membrane is the omega-shaped figure, describing the shape of a fusion intermediate in a cross-section thinner than the fusion pore (Douglas, 1973).Thus, if MCMVs secrete exosomes, omega figures of fusing MVBs should occasionally be visible on the MCMV limiting membrane, especially since the cultures were preserved by ultrafast freezing which occurs in milliseconds and can capture dynamic events like exocytosis.
Further examination revealed about 43% of MCMVs had one or more omega-shaped profiles on their limiting membrane (Supplemental Table S2 and turquoise boxes in Figure 3a-d,f and a' ,b' ,c' ,d' ,f').These profiles resembled smaller versions of the exocytic profiles of MVBs fusing with the cell plasmalemma in reticulocytes that also preserved by ultra-fast freezing and processed by freeze substitution and thin section EM (Harding et al., 1983).
If exosomes are released from MCMVs via these omega profiles, ILV-like vesicles might be seen inside or nearby the fusion event.Indeed, some of the omega figures contained ∼50-100 nm vesicles with grey lumens that were indistinguishable from ILVs (magenta boxes in Figure 3d-f and d'-f').Omega figures of fusion can span two sections and may appear empty in one section but seen to contain an ILV in the neighbouring section (Supplemental Figures S6 and S7, and Supplemental Movies 6 and 7).About 65% of the MVB-like vesicles had ILV-like vesicles within them (Supplemental Table S2).A gallery showing examples of MCMVs preserved by freeze substitution are shown in Supplemental Figure S8, and MCMVs preserved by conventional room temperature chemical fixation are shown in Supplemental Figure S4.
. Single-membrane vesicles corresponding to MCMV's internal vesicles are found on coverslip surface
If MCMVs release some of their internal vesicles, including ILVs (exosomes), they too might attach to the coverslip surface, and it might be possible to find single-membrane vesicles that match the size and appearance of MCMV inner vesicles in the first one or two sections cut along the surface of the coverslip.To test the prediction of this hypothesis, such areas were searched at high magnification, and single-membrane vesicles ranging in size from ∼50 to 200 nm were found (Figure 3g).Like the vesicles observed inside MCMVs, the vesicles had grey smooth or granular interiors.The observation of these vesicles was consistent with their release from MCMVs.
DISCUSSION
Here a new class of microvesicle is described, termed MCMV, that buds from cellular protrusions clustered on the plasma membrane of HUVECs (Illustrated in Figure 4).The cellular membrane protrusions contain vesicular cargo that, when compared with cytoplasmic organelles, could be identified as MVBs with ILVs, endosomes (round, tubular and clathrin coated), ER, and mitochondria.Serial sections showed that the protrusions are a few hundred nanometers to 1 micron thick and in some cases extended up in the Z axis relative to the coverslips for more than ∼1200 nm, beyond the scope of the serial sections analyzed.Protrusions were often branched and intermingled.At branch points and connections to the cell, the protrusions often became constricted to thin necks of 65-200 nm (yellow bars in Figure 1b).Exploration of the coverslip surface between cells revealed MCMVs that were immobilized on the coverslip and contained MVB-like vesicles containing ILV-like vesicles, in addition to other vesicle types.The direct observation of omega figures joining membranes of internal vesicles of MCMVs with the peripheral membrane of MCMVs, and the many images of ILV-like vesicles in and immediately adjacent to the fusion pore of omega figures, and associated with the periphery of MCMVs (Figure 2d), together suggest to us that ILVs can be released from MCMVs-a function akin to the exocytosis of ILVs from cells.Since that could only be conceivable if MCMVs contain compartments akin to MVBs of cells, such a hypothesis would be unique to MCMVs.Preservation, by fast-freezing, of omega figures on the MCMV-limiting membrane showed a lack of any membrane coat.Thus, these omega figures are not the result of any coat-mediated endocytosis such as those mediated by clathrin or caveolin.It also seems of low probability, but not impossible, that ILV-sized vesicles are captured and internalized into MCMVs from the relatively vast volume of culture medium.Taken together, these observations suggest a novel pathway by which a subset of exosomes are released from a transiting MCMV after pinching off from a protrusion on the HUVEC surface.
A previous scanning and transmission EM study of HUVECs described what is likely the same site of protrusions as we document and reported that vesicles shed from this specialized site on the plasma membrane contain proteases that promote angiogenesis (Taraboletti et al., 2002).Similarly, a separate scanning EM study showed discrete areas of membrane protrusions on otherwise relatively smooth surface membranes of unstimulated HUVECs (Combes et al., 1999).Indeed, various specialized domains of membrane protrusions have been shown to shed microvesicles on a number of cell types (Rilla, 2021).
The rapid freezing, freeze substitution and serial section EM methodology utilized here (Walther & Ziegler, 2002) is currently being applied to other cell types to determine if MCMVs are released by other cell types, or are unique to HUVECs.A growing number of cryo-EM studies document EVs containing internal vesicles derived from diverse sources such as human plasma, cerebrospinal fluid, urine, and ejaculate, a human mast cell line, and rodent neuronal culture media suggesting that release of MCMVs is a mechanism conserved in other cell types (Arraud et al., 2014;Brisson et al., 2017;Emelyanov et al., 2020;Gamez-Valero et al., 2019;Hoog & Lotvall, 2015;Matthies et al., 2020;Zabeo et al., 2017).Similarly, EVs containing vesicles have been documented in some thin section EM studies of chemically fixed cells (Valcz et al., 2019).
A challenge in the study of EVs has been the isolation of EV subpopulations.Though possessing different sites of origin, microvesicles and exosomes share overlapping size ranges, molecular compositions, and densities, rendering biochemical enrichment and characterization of EV subsets a challenge (Kowal et al., 2016).The findings of this paper suggest that part of the difficulty may arise from EVs that consist of exosomes inside of microvesicles.Their presence could go unrecognized or misinterpreted as apparent overlap in biophysical properties.Moreover, attempts to separate microvesicles from exosomes may prove futile when MCMVs are both.
Other cellular processes produce EVs containing internal vesicle compartments, similar to MCMVs.For example, cells undergoing apoptosis fragment into apoptotic bodies containing micronuclei and other organelles, and range in size from ∼500 nm to several microns in diameter (Xu et al., 2019).However, none of the MCMVs described in the current paper appeared to contain micronuclei.In addition, cultures screened for cytotoxicity showed minimal cell death (estimated at 1 dead cell per 6000 cells surveyed).A second class of EV that contains internal compartments is exophers, large (∼4 µm diameter) membrane-bound organelles which are extruded from touch neurons in Caenorhabditis elegans (Melentijevic et al., 2017) and murine cardiomyocytes (Nicolas-Avila et al., 2020) to expel potentially toxic materials such as dysfunctional mitochondria and protein aggregates.Additionally, in C. elegans, exophers released by muscle cells in pregnant females transfer nourishing yolk proteins to support developing embryos (Turek et al., 2021).Exophers are the largest class of EV (up to 15 µm), and characteristically contain numerous mitochondria, which is not a distinguishing feature of MCMVs (Turek et al., 2021).There can also be a tube connecting the exopher to the cell from which it was extruded, but no connections were observed between MCMVs and cells.
Migrasomes are a third type of EV that contains numerous internal vesicles, and they perhaps share the most similarities with MCMVs, however migrasomes were shown not to contain MVBs (Ma et al., 2015).Migrasomes form by a migration-dependent mechanism at the termini and branch points of retraction fibres emanating from the trailing edge of migrating cells such as NRK cells (Ma et al., 2015) and other cell types and tissues (Di Daniele et al., 2022).However, MCMVs were not attached to retraction fibres, nor did they have 'tails' of broken fibres as seen in electron micrographs of migrasomes (Jiao et al., 2021;Ma et al., 2015;Yu & Yu, 2021).Also, remnants of retraction fibres were not observed on the coverslip in abundance, and the distribution of MCMVs on the coverslip surface appeared random relative to neighbouring cells and were not concentrated on one side of a cell (potentially the trailing edge).Since the protrusions on HUVECs extend away from the coverslip, rather than adhering to the coverslip substrate like retraction fibres, our interpretation is that MCMVs pinch off and diffuse in the culture medium before occasionally landing and becoming immobilized on the coverslip surface.It will be interesting to confirm this interpretation in future studies and determine if the location of the protrusion site on HUVECs correlates with the trajectory of cell movement.
Of note, the HUVECs cultured in vitro used in this study differ in some ways from tissue vascular endothelium, and future studies are needed to explore these findings in more physiological system.The glycocalyx of HUVECs may differ from those in tissues (Chappell et al., 2009), and an ILV could have more trouble traversing the glycocalyx of a tissue cell once released.Also, the vascular endothelium in tissue is not proliferating, whereas HUVECs in culture are exposed to several angiogenic growth factors in the medium and replicate many features of endothelial cells undergoing an angiogenic response.Angiogenic responses are known to alter the production and contents of endothelial-derived EVs (Alfi et al., 2021), and EVs produced by endothelial cells can regulate angiogenic responses (Chen et al., 2018;Lamichhane et al., 2017).The EV production studied here may be relevant to their role in angiogenesis.Furthermore, the HUVECs analyzed were subconfluent to better observe potential sites of EV biogenesis on the cell peripheries-possibly release of MCMVs occurs in response to a wound-like state to influence wound healing.Future studies are needed to determine if MCMVs are released from confluent endothelial cells in in vivo and in vitro.
F
I G U R E HUVECs display a localized site of membrane protrusions that contain MVBs and other membrane-bound organelles.(a) A low-mag transmission EM view of a portion of a HUVEC in a 70 nm-thick section cut in the en face orientation, showing its elongated cell morphology and predominantly smooth plasma membrane (arrows).The number in the upper right corner indicates the number of the section in the series of serial sections shown in (c), N indicates the nucleus.Additional serial sections through this area are shown in Supplemental Movie 1. Asterisks are placed to the right of two wrinkles in the plastic section; open arrowheads point to fuzzy material that is typically observed in freeze-substituted culture medium.(b) Boxed area in (a) from serial section 4 shows the cluster of membrane protrusions.A magenta dashed line outlines a branched protrusion; turquoise dashed line outlines an unbranched protrusion.Yellow lines indicate width in nm across protrusion necks.(c) Eight consecutive serial sections through the protrusion site with the serial section number indicted in the upper right (the greater the number, the higher above the coverslip in the z-axis).(d-h) Enlarged views of cytoplasmic organelles from orange boxed areas in (b) that can be identified based on their ultrastructure include: MVBs containing ILVs, mitochondria (M), endoplasmic reticulum (ER), round endosome (e), clathrin coated vesicle (cc), and tubular endosome (t).(i-l) Enlarged views of individual protrusions from green boxed areas in (c) are shown.Arrows indicate organelles that are structurally indistinguishable from organelles identified in the cytoplasm, shown in (d-h).F I G U R E MCMVs are found on the coverslip surface and are completely independent from neighbouring cells.(a) A low-magnification view of a presumptive MCMV (boxed) located 7 µm beyond the periphery of the nearest cell.Arrows point to fuzzy material that is typically observed in freeze-substituted culture medium.The complete series of serial sections through the MCMV boxed in (a) is shown in (b) and Supplemental Movie 5.The number in the upper right corner indicates the number of the serial section (ascending number indicates moving higher in the z-axis with respect to the surface of the coverslip).Examples of vesicles inside the MCMV are boxed in orange or turquoise and shown in (c).A 650-nm diameter MVB-like vesicle that spans almost the entire series of sections but is devoid of ILVs is shaded yellow in (b).(c) Examples of MCMV inner vesicles that are single vesicles or double vesicles, boxed orange, or a quadruple vesicle, boxed in turquoise.Turquoise arrows indicate four layers of membrane of the quadruple vesicle, inside the MCMV limiting membrane (black arrow), amounting to five membrane layers.Inner vesicles displayed various combinations of dark, light, or granular lumens.(d) Small vesicles associated with the outer periphery of the MCMV are boxed in magenta in (b) and shown enlarged in (d).
F
I G U R E MCMVs contain MVBs that fuse with the MCMV membrane to release exosomes.MCMVs containing organelles that are structurally similar or identical to MVBs are boxed orange in (a), (b), and (f) and shown to the right enlarged in orange boxes, (a'), (b'), and (f').The MVB in (b') appears to undergo an outward budding remodelling step (orange arrow).Black arrows indicate ILVs in (b') and (f').(a-c) MCMVs with one or more omega-figures on their limiting membrane that do not contain ILVs (boxed turquoise and enlarged to the right in turquoise boxes (a'), (b'), and (c').(d-f) Three serial sections through an MCMV (section number indicated in upper right corner) in which omega figures occur in each section.Some omega figures contain ILVs (magenta boxes, enlarged to the right in (d'), (e'), and (f')) and some do not contain ILVs (turquoise boxes, enlarged to the right in (d') and (f')).(g) Single-membrane vesicles located on the coverslip surface that are structurally like the inner vesicles of MCMVs.Scalebar in (a) applies to (a-f).Supplemental Figure 6 and Movie 6 show serial sections of the MCMV shown in (a).Supplemental Figure 7 and Movie 7 shows additional serial sections of the MCMV shown in (d-f).
F
I G U R E Artistic representation based on data sets of a protrusion site on a cultured endothelial cell and omega figures on a MCMV a distance from a cell.(a) Depiction of several protrusions clustered on a cultured endothelial cell surface, and a nearby pinched off MCMV floating in the extracellular milieu.A slice plane in the en face orientation relative to the coverslip is represented by the dashed line.(b) A view at a slightly different angle shows that the protrusions are often branching.(c) A slice through the protrusions and the MCMV shows internal round and tubular vesicular organelles, including MVBs containing ILVs, inside the protrusions and the MCMV.An omega figure on the MCMV limiting membrane indicates exosome secretion from the MCMV. | 9,203.6 | 2022-10-27T00:00:00.000 | [
"Medicine",
"Biology"
] |
Continuous Catalytic Deoxygenation of Waste Free Fatty Acid-Based Feeds to Fuel-Like Hydrocarbons Over a Supported Ni-Cu Catalyst
While commercial hydrodeoxygenation (HDO) processes convert fats, oils, and grease (FOG) to fuel-like hydrocarbons, alternative processes based on decarboxylation/decarbonylation (deCOx) continue to attract interest. In this contribution, the activity of 20% Ni-5% Cu/Al2O3 in the deCOx of waste free fatty acid (FFA)-based feeds—including brown grease (BG) and an FFA feed obtained by steam stripping a biodiesel feedstock—was investigated, along with the structure-activity relationships responsible for Ni promotion by Cu and the structural evolution of catalysts during use and regeneration. In eight-hour experiments, near quantitative conversion of the aforementioned feeds to diesel-like hydrocarbons was achieved. Moreover, yields of diesel-like hydrocarbons in excess of 80% were obtained at all reaction times during a BG upgrading experiment lasting 100 h, after which the catalyst was successfully regenerated in situ and found to display improved performance during a second 100 h cycle. Insights into this improved performance were obtained through characterization of the fresh and spent catalyst, which indicated that metal particle sintering, alloying of Ni with Cu, and particle enrichment with Cu occur during reaction and/or catalyst regeneration.
Introduction
Due to the interest in renewable fuels resulting from environmental and sustainability concerns and the high fuel demand of the transportation sector, biofuel production requires multiple feed sources. Currently, edible feeds are used in considerable amounts to achieve lower greenhouse gas emissions relative to liquid fossil fuels [1,2]. However, these feeds are needed to meet the food demand of a growing population, food prices rising when edible crops are used for fuel production [3]. Inedible oleaginous plants that thrive in arid soil avoid disrupting the food supply [1,4], but their current scale of cultivation is insufficient. Microalgae produce more oil than terrestrial crops, but energy and cost challenges associated with their cultivation remain to be overcome [5,6]. Thus, additional inedible feeds are required to meet fuel demand, fats, oils, and grease (FOG) waste streams showing great promise due to their low cost and widespread availability.
Noteworthy oleaginous waste streams include yellow grease (waste cooking oil) and brown grease (FOG collected from grease traps, i.e., devices engineered to separate insoluble oils from commercial kitchen wastewater streams [7]). Although a significant fraction of yellow grease (YG) is currently At 275 °C the yield of diesel-like hydrocarbons (C10-C20) decreased from 88.8% at 1 h to 71.3% at 8 h on stream, while the selectivity to heavier products (C21-C35) progressively increased over the same time period. The heavy products, listed in Table A2, are mainly long chain esters that are formed when alcohol intermediates undergo esterification with fatty acids on the catalyst surface [21]. These side reaction products are less prominent at high reaction temperatures, as evinced by the lack of any oxygenates (heavy or otherwise) at 375 °C, albeit the small amount of heavier (C21-C35) hydrocarbons formed at this temperature likely stems from the direct decarboxylation of long chain ester intermediates [21]. Alternatively, these esters can undergo hydrogenolysis to form alcohols and aldehydes, while the former can also be dehydrogenated to the latter as these two products have been shown to exist in equilibrium under hydrogen-rich conditions [21]. In turn, aldehydes can undergo decarbonylation to form diesel-like hydrocarbons. At 325 °C the yield of diesel-like hydrocarbons is ≥95% irrespective of time on stream. Notably, near complete deoxygenation occurs at this temperature, with the highest amount of oxygenates (accumulated during the last hour of the experiment) totaling only 0.2% of the liquid products. Heavy products are still formed, but in significantly lower quantities than at 275 °C. Complete deoxygenation occurs at all reaction times sampled when the reaction temperature is increased to 375 °C, the selectivity to diesel-like hydrocarbons being ≥98% irrespective of time on stream. The increase in selectivity to diesel-like hydrocarbons as the reaction temperature increases is most likely due to: 1) the cracking of the long chain hydrocarbons (≥C20) observed at lower reaction temperatures; and 2) direct deoxygenation of the fatty acids without formation of long chain ester intermediates.
Catalytic Deoxygenation of Brown Grease at Different Temperatures
The GC-MS analysis of the lipids extracted from the BG feed is also displayed in Appendix A within Table A1. The composition is similar to the industrial FFA waste stream above, the extracted lipids being mainly fatty acids (97.3%) the most abundant of which are oleic, palmitic, and stearic acids (ca. 64. 5, 20.4, and 8.7%, respectively). In addition, the BG contains 2.66% glycerides (mostly At 275 • C the yield of diesel-like hydrocarbons (C10-C20) decreased from 88.8% at 1 h to 71.3% at 8 h on stream, while the selectivity to heavier products (C21-C35) progressively increased over the same time period. The heavy products, listed in Table A2, are mainly long chain esters that are formed when alcohol intermediates undergo esterification with fatty acids on the catalyst surface [21]. These side reaction products are less prominent at high reaction temperatures, as evinced by the lack of any oxygenates (heavy or otherwise) at 375 • C, albeit the small amount of heavier (C21-C35) hydrocarbons formed at this temperature likely stems from the direct decarboxylation of long chain ester intermediates [21]. Alternatively, these esters can undergo hydrogenolysis to form alcohols and aldehydes, while the former can also be dehydrogenated to the latter as these two products have been shown to exist in equilibrium under hydrogen-rich conditions [21]. In turn, aldehydes can undergo decarbonylation to form diesel-like hydrocarbons. At 325 • C the yield of diesel-like hydrocarbons is ≥95% irrespective of time on stream. Notably, near complete deoxygenation occurs at this temperature, with the highest amount of oxygenates (accumulated during the last hour of the experiment) totaling only 0.2% of the liquid products. Heavy products are still formed, but in significantly lower quantities than at 275 • C. Complete deoxygenation occurs at all reaction times sampled when the reaction temperature is increased to 375 • C, the selectivity to diesel-like hydrocarbons being ≥98% irrespective of time on stream. The increase in selectivity to diesel-like hydrocarbons as the reaction temperature increases is most likely due to: 1) the cracking of the long chain hydrocarbons (≥C20) observed at lower reaction temperatures; and 2) direct deoxygenation of the fatty acids without formation of long chain ester intermediates.
Catalytic Deoxygenation of Brown Grease at Different Temperatures
The GC-MS analysis of the lipids extracted from the BG feed is also displayed in Appendix A within Table A1. The composition is similar to the industrial FFA waste stream above, the extracted lipids being mainly fatty acids (97.3%) the most abundant of which are oleic, palmitic, and stearic acids (ca. 64. 5, 20.4, and 8.7%, respectively). In addition, the BG contains 2.66% glycerides (mostly diolein), signifying that the feed predominately consists of unsaturated lipids. Notably, this composition is comparable to other BG samples described in the literature [14,22,23], which confirms that the simple extraction method employed (see Section 3.3) was effective in affording lipids representative of those contained in BG. The lipid concentration for these experiments was increased from that used in the FFA upgrading runs discussed in Section 2.1 above to 50 wt% lipids in dodecane, while the WHSV was kept at 1 h −1 . This allowed an evaluation of the effect of feed concentration on the deoxygenation of free fatty acid-based feeds. In turn, the most promising reaction temperatures identified through the FFA upgrading runs-namely, 325 and 375 • C-were investigated to assess the effect of temperature on the deoxygenation of BG lipids. The simulated distillation GC-MS analyses of the liquid products obtained during each hour are summarized in Figure 2 and in Appendix A (Tables A5 and A6), while the incondensable gaseous products are provided as Supplementary Material ( Figure S1). diolein), signifying that the feed predominately consists of unsaturated lipids. Notably, this composition is comparable to other BG samples described in the literature [14,22,23], which confirms that the simple extraction method employed (see Section 4.3) was effective in affording lipids representative of those contained in BG. The lipid concentration for these experiments was increased from that used in the FFA upgrading runs discussed in Section 2.1 above to 50 wt% lipids in dodecane, while the WHSV was kept at 1 h -1 . This allowed an evaluation of the effect of feed concentration on the deoxygenation of free fatty acid-based feeds. In turn, the most promising reaction temperatures identified through the FFA upgrading runs-namely, 325 and 375 °C-were investigated to assess the effect of temperature on the deoxygenation of BG lipids. The simulated distillation GC-MS analyses of the liquid products obtained during each hour are summarized in Figure 2 and in Appendix A (Tables A5 and A6), while the incondensable gaseous products are provided as Supplementary Material ( Figure S1). Quantitative conversion of the BG lipids was achieved regardless of the reaction temperature employed. However, the deoxygenation of the BG lipids at 325 °C yields more than double the amount of heavy products that was obtained in the free fatty acid experiments at the same temperature, reaching 11% of the total liquid products at 8 h on stream. Other authors have seen an increase in heavy products as the feed concentration increases and suggested that this stems from an increase in the amount of fatty acids present on the catalyst surface [14]. The rise in fatty acid concentration increases the possibility of esterification reactions with the alcohol intermediates, forming long chain esters that can undergo direct decarboxylation to yield heavy hydrocarbons. However, the esters observed in the FFA upgrading experiments were not detected in the BG upgrading experiments, suggesting that rapid decarboxylation occurs upon formation of the ester intermediates. Remarkably, >94% of the liquid products obtained during the deoxygenation of the BG lipids at 375 °C are diesel-like hydrocarbons at all reaction times sampled. Parenthetically, at 375 °C there were a number of liquid products-from 9.74% at t = 1 h to 7.62% at t = 8 h (see Appendix A Table A6)-deemed to be hydrocarbons by the GC-MS, although their specific identity could not be determined. Nevertheless, according to the simulated distillation GC of these unidentifiable products, they all boil within the diesel range (180-350 °C). The heavy product formation also decreased at the higher reaction temperature, the total amount obtained being <6% of the liquid products irrespective of time on stream. This is consistent with higher temperatures favoring the occurrence of cracking reactions, which is also indicated by the higher amount of gaseous C1-C4 products obtained at 375 °C (see Supplementary Material). Nevertheless, only a relatively small amount of the carbon in the feed becomes C1-C4 gaseous products. Indeed, at 325°C only 3.5 mol% Quantitative conversion of the BG lipids was achieved regardless of the reaction temperature employed. However, the deoxygenation of the BG lipids at 325 • C yields more than double the amount of heavy products that was obtained in the free fatty acid experiments at the same temperature, reaching 11% of the total liquid products at 8 h on stream. Other authors have seen an increase in heavy products as the feed concentration increases and suggested that this stems from an increase in the amount of fatty acids present on the catalyst surface [14]. The rise in fatty acid concentration increases the possibility of esterification reactions with the alcohol intermediates, forming long chain esters that can undergo direct decarboxylation to yield heavy hydrocarbons. However, the esters observed in the FFA upgrading experiments were not detected in the BG upgrading experiments, suggesting that rapid decarboxylation occurs upon formation of the ester intermediates. Remarkably, >94% of the liquid products obtained during the deoxygenation of the BG lipids at 375 • C are diesel-like hydrocarbons at all reaction times sampled. Parenthetically, at 375 • C there were a number of liquid products-from 9.74% at t = 1 h to 7.62% at t = 8 h (see Appendix A Table A6)-deemed to be hydrocarbons by the GC-MS, although their specific identity could not be determined. Nevertheless, according to the simulated distillation GC of these unidentifiable products, they all boil within the diesel range (180-350 • C). The heavy product formation also decreased at the higher reaction temperature, the total amount obtained being <6% of the liquid products irrespective of time on stream. This is consistent with higher temperatures favoring the occurrence of cracking reactions, which is also indicated by the higher amount of gaseous C1-C4 products obtained at 375 • C (see Supplementary Material). Nevertheless, only a relatively small amount of the carbon in the feed becomes C1-C4 gaseous products. Indeed, at 325 • C only 3.5 mol% of the carbon in the BG feed is converted to these gaseous products, this value reaching 6.1 mol% when the reaction temperature is increased to 375 • C.
Catalytic Deoxygenation of Brown Grease at Different H 2 Partial Pressures
The presence of H 2 in the reaction atmosphere has been shown to improve catalyst performance during the deCO x of lipids even if H 2 is not directly involved in the deoxygenation reaction [24][25][26]. Interestingly, some authors have also shown that supported Pd catalysts perform better under lower H 2 partial pressure conditions than under pure H 2 atmospheres [27], and the same effect has been reported for some Ni catalysts [26]. Therefore, it is instructive to investigate the effect of H 2 partial pressure on the deoxygenation of waste fatty acid-based feeds. To this end, the deoxygenation of BG was conducted under a reaction atmosphere of 20% H 2 /Ar at 375 • C, the GC-MS analysis of the liquid products being shown in Figure 3 and in Appendix A (Table A7), while the incondensable gaseous products are also shown in Figure 3. Note that the part of the y-axis not shown in Figures 3 and 5 consists entirely of the reaction atmosphere, i.e., either H 2 or a mixture of H 2 and Ar. of the carbon in the BG feed is converted to these gaseous products, this value reaching 6.1 mol% when the reaction temperature is increased to 375 °C.
Catalytic Deoxygenation of Brown Grease at Different H2 Partial Pressures
The presence of H2 in the reaction atmosphere has been shown to improve catalyst performance during the deCOx of lipids even if H2 is not directly involved in the deoxygenation reaction [24][25][26]. Interestingly, some authors have also shown that supported Pd catalysts perform better under lower H2 partial pressure conditions than under pure H2 atmospheres [27], and the same effect has been reported for some Ni catalysts [26]. Therefore, it is instructive to investigate the effect of H2 partial pressure on the deoxygenation of waste fatty acid-based feeds. To this end, the deoxygenation of BG was conducted under a reaction atmosphere of 20% H2/Ar at 375 °C, the GC-MS analysis of the liquid products being shown in Figure 3 and in Appendix A (Table A7), while the incondensable gaseous products are also shown in Figure 3. Note that the part of the y-axis not shown in Figures 3 and 5 consists entirely of the reaction atmosphere, i.e., either H2 or a mixture of H2 and Ar. Under 20% H2, the 20% Ni-5% Cu/Al2O3 catalyst quantitatively converts the BG feed, no trace of starting material being observed at any reaction time sampled (see Figure 3c). However, the amount of diesel-like hydrocarbons in the reaction products decreases to 86-89% of the total liquid products relative to the 94-98% values observed when the reaction is performed under pure H2 (see Figure 3a). Additionally, the amount of heavy products increases when the reaction is performed under reduced H2 partial pressure, albeit it is worth noting that only a relatively small amount (an average of 14.8%) of the total heavy products collected are long chain esters. The increase in the amount of long chain hydrocarbons suggests that after the formation of esters, the lower H2 partial pressure favors their direct decarboxylation, as opposed to their hydrogenolysis to afford alcohols and aldehydes [21]. Interestingly, while there is only a small decrease in the yield of diesel-like products, the gaseous products show a noticeable decrease in methane formation when the reaction is performed under reduced H2 partial pressure (see Figures 3b,d). Indeed, while the amount of CO or CO2 present in the gaseous products obtained using pure H2 is negligible and methane represents the vast majority of the gaseous products, the gas products contain considerably more CO2 and less methane when the Under 20% H 2 , the 20% Ni-5% Cu/Al 2 O 3 catalyst quantitatively converts the BG feed, no trace of starting material being observed at any reaction time sampled (see Figure 3c). However, the amount of diesel-like hydrocarbons in the reaction products decreases to 86-89% of the total liquid products relative to the 94-98% values observed when the reaction is performed under pure H 2 (see Figure 3a). Additionally, the amount of heavy products increases when the reaction is performed under reduced H 2 partial pressure, albeit it is worth noting that only a relatively small amount (an average of 14.8%) of the total heavy products collected are long chain esters. The increase in the amount of long chain hydrocarbons suggests that after the formation of esters, the lower H 2 partial pressure favors their direct decarboxylation, as opposed to their hydrogenolysis to afford alcohols and aldehydes [21]. Interestingly, while there is only a small decrease in the yield of diesel-like products, the gaseous products show a noticeable decrease in methane formation when the reaction is performed under reduced H 2 partial pressure (see Figure 3b,d). Indeed, while the amount of CO or CO 2 present in the gaseous products obtained using pure H 2 is negligible and methane represents the vast majority of the gaseous products, the gas products contain considerably more CO 2 and less methane when the reaction is run under 20% H 2 . Therefore, under reduced H 2 partial pressures, methanation of CO x is disfavored and does not proceed to completion. Instead, the methane formed at lower H 2 partial pressures is likely formed from the fatty acids alkyl chains via the end-chain cracking mechanism described in a recent contribution [13]. As with other experiments, only a relatively small amount of the carbon in the BG feed becomes C1-C4 gaseous products, this value being 5.8 mol% for the experiment performed under reduced H 2 partial pressure.
Catalyst Deactivation and Online Regeneration
Tellingly, the composition of the gas products is still changing not only at the end of the reactions discussed in Section 2.3, but also at the end of previously reported YG and hemp seed oil upgrading experiments [13], all of which lasted eight hours. Therefore, it appears that eight hours do not constitute sufficient time for the attainment of steady state. Obviously, this also hinders an assessment of whether significant catalyst deactivation occurs when upgrading a realistic feed under these conditions. Therefore, the most promising conditions evaluated in this study, i.e., 375 • C and a pure H 2 atmosphere, were employed in a run in which the deoxygenation reaction time was extended to 100 h on stream. After this (first) 100 h cycle, the catalyst was regenerated in situ by washing, drying, calcining (in air), and re-reducing the catalyst (under flowing H 2 ). Catalyst calcination was performed for 5 h at 450 • C, which is both the top temperature rating of the furnace employed and a temperature at which the majority of the carbonaceous deposits can be eliminated according to the results of thermogravimetric analysis (see Section 2.5 and Figure 7). Results from the analysis of representative liquid samples (recovered at 24 h intervals) are shown in Figure 4 and in Appendix A (Tables A8 and A9), the analysis of the gaseous products being shown in Figure 5. It should be noted that the results shown correspond to products recovered in a single hour and not to products accumulated over a 24 h period, such that the catalyst performance can be assessed at the specified time intervals with a "time resolution" equal to one hour. reaction is run under 20% H2. Therefore, under reduced H2 partial pressures, methanation of COx is disfavored and does not proceed to completion. Instead, the methane formed at lower H2 partial pressures is likely formed from the fatty acids alkyl chains via the end-chain cracking mechanism described in a recent contribution [13]. As with other experiments, only a relatively small amount of the carbon in the BG feed becomes C1-C4 gaseous products, this value being 5.8 mol% for the experiment performed under reduced H2 partial pressure.
Catalyst Deactivation and Online Regeneration
Tellingly, the composition of the gas products is still changing not only at the end of the reactions discussed in Section 2.3, but also at the end of previously reported YG and hemp seed oil upgrading experiments [13], all of which lasted eight hours. Therefore, it appears that eight hours do not constitute sufficient time for the attainment of steady state. Obviously, this also hinders an assessment of whether significant catalyst deactivation occurs when upgrading a realistic feed under these conditions. Therefore, the most promising conditions evaluated in this study, i.e., 375 °C and a pure H2 atmosphere, were employed in a run in which the deoxygenation reaction time was extended to 100 h on stream. After this (first) 100 h cycle, the catalyst was regenerated in situ by washing, drying, calcining (in air), and re-reducing the catalyst (under flowing H2). Catalyst calcination was performed for 5 h at 450 °C, which is both the top temperature rating of the furnace employed and a temperature at which the majority of the carbonaceous deposits can be eliminated according to the results of thermogravimetric analysis (see Section 2.5 and Figure 7). Results from the analysis of representative liquid samples (recovered at 24 h intervals) are shown in Figure 4 and in Appendix A (Tables A8 and A9), the analysis of the gaseous products being shown in Figure 5. It should be noted that the results shown correspond to products recovered in a single hour and not to products accumulated over a 24 h period, such that the catalyst performance can be assessed at the specified time intervals with a "time resolution" equal to one hour. During the first 100 h run, quantitative conversion of the BG lipids is achieved during the initial 48 h. After that, the amount of fatty acids in the products begins to increase, reaching a total of 8.1% of the total liquid products collected at the end of the first 100 h cycle. The yield of diesel-like hydrocarbons is >92% of the liquid products for the first 48 h on stream, this value decreasing to During the first 100 h run, quantitative conversion of the BG lipids is achieved during the initial 48 h. After that, the amount of fatty acids in the products begins to increase, reaching a total of 8.1% of the total liquid products collected at the end of the first 100 h cycle. The yield of diesel-like hydrocarbons is >92% of the liquid products for the first 48 h on stream, this value decreasing to 82.8% after 96 h on stream. Notably, catalyst performance improves after the catalyst is regenerated in situ, the yield of diesel-like hydrocarbons being~90% or greater for 72 h on stream during the second 100 h cycle. Interestingly, improvements in performance following catalyst regeneration have been reported for other Ni-based formulations, which was attributed to an increase in the number of strong basic sites during the treatment of the spent catalyst in hot air [26]. A closer look at the individual components of the liquid products (listed in Tables A8 and A9) provides valuable information vis-à-vis changes in selectivity that take place as the reaction progresses and/or after catalyst regeneration. Indeed, while the vast majority of the feed (93.6%) comprised C18 and C16 fatty acids-73.2 and 20.4% of the feedstock, respectively-the most abundant products are C17 and C15 hydrocarbons, suggesting that the reaction proceeds mainly via deCO x as opposed to HDO. Nevertheless, C18 and C16 hydrocarbons are also produced in significant amounts, which indicates that HDO also occurs to some extent. Additionally, of note is the trend followed by lighter diesel-like hydrocarbons (C10-C14), whose abundance in the product mixtures tends to decrease with time on stream and after regeneration (see Tables A8 and A9). This indicates that cracking reactions progressively become less prevalent and implies that surface sites responsible for cracking deactivate gradually and irreversibly. This is in line with the evolution of the incondensable gas products with time on stream, which also provides valuable insights regarding the changes in selectivity observed as the reaction advances and after catalyst regeneration (see Figure 5). 82.8% after 96 h on stream. Notably, catalyst performance improves after the catalyst is regenerated in situ, the yield of diesel-like hydrocarbons being ~90% or greater for 72 h on stream during the second 100 h cycle. Interestingly, improvements in performance following catalyst regeneration have been reported for other Ni-based formulations, which was attributed to an increase in the number of strong basic sites during the treatment of the spent catalyst in hot air [26]. A closer look at the individual components of the liquid products (listed in Tables A8 and A9) provides valuable information vis-à-vis changes in selectivity that take place as the reaction progresses and/or after catalyst regeneration. Indeed, while the vast majority of the feed (93.6%) comprised C18 and C16 fatty acids-73.2 and 20.4% of the feedstock, respectively-the most abundant products are C17 and C15 hydrocarbons, suggesting that the reaction proceeds mainly via deCOx as opposed to HDO. Nevertheless, C18 and C16 hydrocarbons are also produced in significant amounts, which indicates that HDO also occurs to some extent. Additionally, of note is the trend followed by lighter diesel-like hydrocarbons (C10-C14), whose abundance in the product mixtures tends to decrease with time on stream and after regeneration (see Tables A8 and A9). This indicates that cracking reactions progressively become less prevalent and implies that surface sites responsible for cracking deactivate gradually and irreversibly. This is in line with the evolution of the incondensable gas products with time on stream, which also provides valuable insights regarding the changes in selectivity observed as the reaction advances and after catalyst regeneration (see Figure 5). First, the difference in the amount of light (C1-C4 alkanes) observed before and after regeneration confirms the irreversible deactivation of cracking sites. Indeed, these gases result from the shortening of alkyl chains via terminal carbon loss (-C1) and internal chain cracking (-C2, -C3, and -C4), both of which also lead to the formation of C10-C14 fuel-like hydrocarbons [13]. However, the fact that both light (C10-C14) fuel-like hydrocarbons and methane tend to decrease with time on stream prior to remaining fairly constant-while C2-C4 hydrocarbons follow the opposite trendsuggests that chain shortening occurs mainly through the removal of terminal carbons, which is in line with the conclusions of a recent contribution [13]. Moreover, changes in the amounts of COx and methane-which progressively increase and decrease, respectively ( Figure 5, left)-may also suggest that sites responsible for methanation are gradually and irreversibly poisoned as the reaction First, the difference in the amount of light (C1-C4 alkanes) observed before and after regeneration confirms the irreversible deactivation of cracking sites. Indeed, these gases result from the shortening of alkyl chains via terminal carbon loss (-C1) and internal chain cracking (-C2, -C3, and -C4), both of which also lead to the formation of C10-C14 fuel-like hydrocarbons [13]. However, the fact that both light (C10-C14) fuel-like hydrocarbons and methane tend to decrease with time on stream prior to remaining fairly constant-while C2-C4 hydrocarbons follow the opposite trend-suggests that chain shortening occurs mainly through the removal of terminal carbons, which is in line with the conclusions of a recent contribution [13]. Moreover, changes in the amounts of CO x and methane-which progressively increase and decrease, respectively ( Figure 5, left)-may also suggest that sites responsible for methanation are gradually and irreversibly poisoned as the reaction progresses. This represents a noteworthy result, since hydrogen consumption should correspondingly decrease with time on stream without sacrificing significant deoxygenation activity. Lastly, it is worth noting the induction period (observed at the beginning of both cycles in Figure 5 but also in Figure 3d) in which the concentration of CO x gradually increases with time on stream prior to becoming relatively constant at steady state. Given that methane formation is negligible during the second 100 h cycle (right of Figure 5), the low concentration of CO x during this induction period cannot be assigned to CO x methanation. Instead, this can be attributed to the accumulation of CO x on the catalyst surface, which has to become saturated before CO 2 and CO can break through. Indeed, a previous contribution has shown that CO 2 accumulates on the surface of 20% Ni-5% Cu/Al 2 O 3 as alumina-bound carbonates [13]. Based on the results in Figure 5-and ignoring the effects of potentially confounding reactions (including methanation, water gas shift and the Boudouard reaction)-it seems that deoxygenation proceeds fairly equally via decarboxylation (-CO 2 ) and decarbonylation (-CO) once the system reaches steady state. Although-as with other experiments-only a relatively small amount of the carbon in the feed becomes C1-C4 gaseous products, the values obtained for this experiment are particularly informative, since this is the only experiment in which steady state was attained before and after catalyst regeneration. Tellingly, in the first 100 h cycle, the fraction of carbon in the feed converted to C1-C4 gaseous products starts at~2 mol% before reaching a maximum of~7 mol% at 11 h on stream, an increase that can be ascribed to the initial accumulation of CO 2 on the catalyst surface mentioned above. Subsequently, this value stabilizes between~2.5 and~4.9 mol% for the remainder of the first 100 h cycle. During the second 100 h cycle (after catalyst regeneration), the fraction of carbon in the feed converted to C1-C4 gaseous products starts at~0.4 mol% before stabilizing (at around 24 h on stream and for the remainder of the experiment) between~2.6 and~3.6 mol%. This is consistent with both the initial accumulation of CO 2 on the regenerated catalyst surface and the lower cracking activity observed after catalyst regeneration.
Catalyst Characterization
Although the characterization of the 20% Ni-5% Cu/Al 2 O 3 catalyst used in this contribution has been described previously [18], additional characterization work was performed in this study, including an investigation of room temperature CO adsorption on the fresh and spent catalyst via diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS), the resulting spectra being shown in Figure 6. progresses. This represents a noteworthy result, since hydrogen consumption should correspondingly decrease with time on stream without sacrificing significant deoxygenation activity. Lastly, it is worth noting the induction period (observed at the beginning of both cycles in Figure 5 but also in Figure 3d) in which the concentration of COx gradually increases with time on stream prior to becoming relatively constant at steady state. Given that methane formation is negligible during the second 100 h cycle (right of Figure 5), the low concentration of COx during this induction period cannot be assigned to COx methanation. Instead, this can be attributed to the accumulation of COx on the catalyst surface, which has to become saturated before CO2 and CO can break through. Indeed, a previous contribution has shown that CO2 accumulates on the surface of 20% Ni-5% Cu/Al2O3 as alumina-bound carbonates [13]. Based on the results in Figure 5-and ignoring the effects of potentially confounding reactions (including methanation, water gas shift and the Boudouard reaction)-it seems that deoxygenation proceeds fairly equally via decarboxylation (-CO2) and decarbonylation (-CO) once the system reaches steady state. Although-as with other experimentsonly a relatively small amount of the carbon in the feed becomes C1-C4 gaseous products, the values obtained for this experiment are particularly informative, since this is the only experiment in which steady state was attained before and after catalyst regeneration. Tellingly, in the first 100 h cycle, the fraction of carbon in the feed converted to C1-C4 gaseous products starts at ~2 mol% before reaching a maximum of ~7 mol% at 11 h on stream, an increase that can be ascribed to the initial accumulation of CO2 on the catalyst surface mentioned above. Subsequently, this value stabilizes between ~2.5 and ~4.9 mol% for the remainder of the first 100 h cycle. During the second 100 h cycle (after catalyst regeneration), the fraction of carbon in the feed converted to C1-C4 gaseous products starts at ~0.4 mol% before stabilizing (at around 24 h on stream and for the remainder of the experiment) between ~2.6 and ~3.6 mol%. This is consistent with both the initial accumulation of CO2 on the regenerated catalyst surface and the lower cracking activity observed after catalyst regeneration.
Catalyst Characterization
Although the characterization of the 20% Ni-5% Cu/Al2O3 catalyst used in this contribution has been described previously [18], additional characterization work was performed in this study, including an investigation of room temperature CO adsorption on the fresh and spent catalyst via diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS), the resulting spectra being shown in Figure 6. CO adsorption on the reduced fresh catalyst produced a strong absorbance peak at 2100 cm -1 . Since CO adsorption on the spent catalyst did not afford this peak, the spent catalyst was reduced in situ (i.e., inside the DRIFTS cell) to counter any oxidation that may have occurred when the catalyst Figure 6. DRIFTS spectra of the fresh catalyst after reduction, the spent catalyst, and the spent catalyst after reduction. CO adsorption on the reduced fresh catalyst produced a strong absorbance peak at 2100 cm −1 . Since CO adsorption on the spent catalyst did not afford this peak, the spent catalyst was reduced in situ (i.e., inside the DRIFTS cell) to counter any oxidation that may have occurred when the catalyst was exposed to air while being transferred from the reactor to the DRIFTS cell. The reduced spent catalyst also lacked the CO adsorption peak, which indicates that the surface active sites that readily adsorb CO on the reduced fresh catalysts are irreversibly poisoned-or otherwise eliminated-under the reaction conditions employed. This could explain the reduced methane formation observed during the second 100 h cycle, as less CO adsorbs on the catalyst surface. In fact, the C1 gaseous products observed during the second 100 h cycle are almost entirely CO and CO 2 , which is consistent with decreased methanation. Given than in previous work on 20% Ni-5% Cu/Al 2 O 3 and 4.7% Cu/Al 2 O 3 CO DRIFTS bands at 2121 and 2115 cm −1 were assigned to CO adsorbed on Cu sites [18,28], it is possible that during reaction these sites are made inaccessible by the deposition of carbonaceous material on the surface of metal particles (vide infra and see Figure A1 in Appendix B). However, another possibility is that in the course of the reaction these sites tend to disappear via sintering and/or alloying with Ni (see Figure 8 and Figure A2 in Appendix B). Since previous CO DRIFTS measurements on 20% Ni-5% Cu/Al 2 O 3 have also confirmed an electronic interaction between Cu and Ni [18], it can be argued that these changes would also have an electronic effect on surface Ni, which offers one potential explanation for the absence of a band at 2050 cm −1 attributable to CO adsorbed on Ni sites [29]. Interestingly, other authors have observed improved catalytic performance when the CO binding energy to Ni is lowered, as this causes active sites to be freed up more readily during deCO x thereby resulting in enhanced activity [30]. In fact, the CO produced via decarbonylation is a known catalyst poison-as well as a coke precursor (vide infra)-and, thus, a lower amount and/or a reduced residency time of CO on the catalyst surface should benefit deCO x activity [31].
The conclusions garnered from DRIFTS measurements are reinforced by the results of x-ray photoelectron spectroscopy (XPS) analyses performed on both the fresh and spent catalysts (see Figure A3 in Appendix B). Indeed, while the spent catalyst displays features indicative of deactivation via coking and fouling-i.e., increased intensity of carbon peaks (not shown) and lower intensity of peaks associated with metals-a shift in both Ni and Cu peak maxima consistent with surface reduction to form metallic phases and a Ni-Cu alloy [32,33] is also observed. While the extent of alloying is difficult to determine quantitatively in the absence of density of state measurements, qualitative examination of the peak positions indicate a shift of the Cu 2p 3/2 binding energy tõ 0.2 eV higher than that expected for copper metal, consistent with significant Ni-Cu alloying [34,35]. Integrating the respective Ni and Cu peaks after correcting for background and elemental sensitivity also shows evidence of Cu enrichment on the surface of the spent catalyst, with the surface Ni/Cu atomic ratio decreasing from 5.55 Ni atoms per Cu atom to 4.14 Ni atoms per Cu atom after 200 h on stream [36]. Taken together, the CO-DRIFTS and the XPS data provide compelling evidence that both surface enrichment of copper and alloying of Ni with Cu have occurred, with other authors reporting depth profiling of similar species evincing near total coverage of metal particles by Cu despite XPS integrals consistent with the bulk composition [36].
The thermogravimetric analysis (TGA) results in Figure 7 show that the total mass loss displayed by the spent catalyst is 9.5 and 5.1% after the first and second 100 h cycle, respectively. This indicates that less coke forms during the second 100 h cycle, i.e., after the catalyst regeneration step. Catalytic cracking of the feed is known to exacerbate catalyst deactivation via coking and to be particularly problematic for Ni-based catalysts [37]. As mentioned above, light hydrocarbons such as ethane, propane and butane are observed in greater abundance in the gaseous products evolved during the first 100 h cycle relative to the amount observed during the second 100 h cycle. This is consistent with the notion that the increased coking observed during the first 100 h stems from a higher cracking activity. Thus, sites responsible for cracking reactions appear to become irreversibly deactivated during the first 100 h cycle. Additionally, since adsorbed CO can also contribute to coking via the Boudouard reaction, the lower CO adsorption discussed above could also explain the decrease in carbonaceous deposits formed on the catalyst surface during the second 100 h cycle. Important insights were also gained through the analysis of the fresh and spent (after the second 100 h on stream) catalyst via transmission electron microscopy-energy dispersive X-ray spectroscopy (TEM-EDS). First, TEM results show that while the metal particle size distribution in the fresh catalysts is narrow and centers around 4 nm particles, the spent catalyst has a wider particle size distribution with a considerable number of particles larger than 10 nm and as large as 40 nm (see Figure 8a and Figure B2 in Appendix B). The fact that large particles arise at the expense of smaller particles (see Figure 8a) suggests that sintering occurs via Ostwald ripening. TEM-EDS results show that the composition of metal particles in the fresh catalyst is very consistent and close to the composition of the bulk catalyst (particles of 80% Ni-20% Cu are expected for a 20% Ni-5% Cu/Al2O3 formulation, atomic and weight percent being very close due to the similar atomic weight of Ni and Cu), albeit a significant number of Ni-rich (85-95% Ni) particles is also observed (see Figure 8b). However, the composition of metal particles in the spent catalyst is considerably different, as Cu-rich particles-from slightly enriched (75% Ni-25% Cu) to greatly enriched (35% Ni-65% Cu)-arise at the expense of the Ni-rich particles in the fresh catalyst. In short, the TEM and TEM-EDS results in Figure 8 suggest that in the course of the experiment metal particles tend to both grow and become Cu-rich. The latter is consistent with the results of elemental mapping included in Figure B2 in Appendix B, which also show that Ni and Cu are closely associated irrespective of the catalyst state (fresh or spent). Nevertheless, more detailed studies-which fall outside the scope of this contribution-are needed to more thoroughly ascertain the relationship between particle size and both surface and bulk composition. Important insights were also gained through the analysis of the fresh and spent (after the second 100 h on stream) catalyst via transmission electron microscopy-energy dispersive X-ray spectroscopy (TEM-EDS). First, TEM results show that while the metal particle size distribution in the fresh catalysts is narrow and centers around 4 nm particles, the spent catalyst has a wider particle size distribution with a considerable number of particles larger than 10 nm and as large as 40 nm (see Figures 8a and A2 in Appendix B). The fact that large particles arise at the expense of smaller particles (see Figure 8a) suggests that sintering occurs via Ostwald ripening. TEM-EDS results show that the composition of metal particles in the fresh catalyst is very consistent and close to the composition of the bulk catalyst (particles of 80% Ni-20% Cu are expected for a 20% Ni-5% Cu/Al 2 O 3 formulation, atomic and weight percent being very close due to the similar atomic weight of Ni and Cu), albeit a significant number of Ni-rich (85-95% Ni) particles is also observed (see Figure 8b). However, the composition of metal particles in the spent catalyst is considerably different, as Cu-rich particles-from slightly enriched (75% Ni-25% Cu) to greatly enriched (35% Ni-65% Cu)-arise at the expense of the Ni-rich particles in the fresh catalyst. In short, the TEM and TEM-EDS results in Figure 8 suggest that in the course of the experiment metal particles tend to both grow and become Cu-rich. The latter is consistent with the results of elemental mapping included in Figure A2 in Appendix B, which also show that Ni and Cu are closely associated irrespective of the catalyst state (fresh or spent). Nevertheless, more detailed studies-which fall outside the scope of this contribution-are needed to more thoroughly ascertain the relationship between particle size and both surface and bulk composition. Finally, since it could be argued that the Cu enrichment of the catalyst can be attributed to the reaction between CO and Ni resulting in the formation of volatile Ni carbonyl and the concomitant loss of Ni, the fresh and spent (after the second 100 h cycle) catalyst was analyzed by inductivelycoupled plasma-atomic emission spectroscopy (ICP-AES). Tellingly, while the material loaded into the reactor (a 1:1 mixture of catalyst and SiC diluent) had a Ni and Cu content of 8.87 and 1.98%, respectively, the corresponding values for the material recovered from the reactor (which also included 9.5% of coke deposits) were 7.39 and 1.62% (or 8.16 and 1.79% when corrected for coke). This corresponds to a Ni/Cu ratio of ~4.48 and ~4.56 for the fresh and spent catalysts, respectively, which clearly indicates that the catalyst experiences no loss of Ni. This is consistent with a report on the kinetics of nickel carbonyl formation [38]. Indeed, the authors of this report concluded that the highly exothermic reaction that forms nickel carbonyl reaches a maximum rate at ~75 °C, after which nickel carbonyl formation decreases to reach negligible levels by 150 °C irrespective of the pressures (partial CO, H2, or total pressure) employed. In deoxygenation experiments, the catalyst would only be exposed to CO at the reaction temperature of 375 °C, i.e., well above the range in which nickel carbonyl would form to an appreciable degree per the reference above. These results further confirm both the robustness of the catalyst under reaction conditions and its industrial applicability.
Catalyst Preparation
Catalysts were prepared by excess wetness impregnation using Ni(NO3)2·6H2O (Alfa Aesar, Haverhill, MA, USA) and Cu(NO3)2·3H2O (Sigma Aldrich, St. Louis, MO, USA) as the metal precursors. Beads of γ-Al2O3 (Sasol, Johannesburg, South Africa; surface area of 216 m 2 /g) were used as the support and were crushed to a particle size of <150 µm before the impregnation. The target metal loadings for the catalyst were 20 wt% Ni and 5 wt% Cu. The impregnated catalyst was dried overnight at 60 °C under vacuum prior to calcination for 3 h at 500 °C in static air. The catalyst and SiC catalyst diluent (Kramer Industries, Piscataway Township, NJ, USA) were sieved separately to a particle size between 150 and 300 µm and stored in a vacuum oven at 60 °C until use. Finally, since it could be argued that the Cu enrichment of the catalyst can be attributed to the reaction between CO and Ni resulting in the formation of volatile Ni carbonyl and the concomitant loss of Ni, the fresh and spent (after the second 100 h cycle) catalyst was analyzed by inductively-coupled plasma-atomic emission spectroscopy (ICP-AES). Tellingly, while the material loaded into the reactor (a 1:1 mixture of catalyst and SiC diluent) had a Ni and Cu content of 8.87 and 1.98%, respectively, the corresponding values for the material recovered from the reactor (which also included 9.5% of coke deposits) were 7.39 and 1.62% (or 8.16 and 1.79% when corrected for coke). This corresponds to a Ni/Cu ratio of~4.48 and~4.56 for the fresh and spent catalysts, respectively, which clearly indicates that the catalyst experiences no loss of Ni. This is consistent with a report on the kinetics of nickel carbonyl formation [38]. Indeed, the authors of this report concluded that the highly exothermic reaction that forms nickel carbonyl reaches a maximum rate at~75 • C, after which nickel carbonyl formation decreases to reach negligible levels by 150 • C irrespective of the pressures (partial CO, H 2 , or total pressure) employed. In deoxygenation experiments, the catalyst would only be exposed to CO at the reaction temperature of 375 • C, i.e., well above the range in which nickel carbonyl would form to an appreciable degree per the reference above. These results further confirm both the robustness of the catalyst under reaction conditions and its industrial applicability.
Catalyst Preparation
Catalysts were prepared by excess wetness impregnation using Ni(NO 3 ) 2 ·6H 2 O (Alfa Aesar, Haverhill, MA, USA) and Cu(NO 3 ) 2 ·3H 2 O (Sigma Aldrich, St. Louis, MO, USA) as the metal precursors. Beads of γ-Al 2 O 3 (Sasol, Johannesburg, South Africa; surface area of 216 m 2 /g) were used as the support and were crushed to a particle size of <150 µm before the impregnation. The target metal loadings for the catalyst were 20 wt% Ni and 5 wt% Cu. The impregnated catalyst was dried overnight at 60 • C under vacuum prior to calcination for 3 h at 500 • C in static air. The catalyst and SiC catalyst diluent (Kramer Industries, Piscataway Township, NJ, USA) were sieved separately to a particle size between 150 and 300 µm and stored in a vacuum oven at 60 • C until use.
Catalyst Characterization
A small sample of the catalyst after the first 100 h on stream (in an experiment involving two 100 h cycles and an intermediate catalyst regeneration step) was subjected to TGA under flowing air (50 mL/min) on a TA instruments (New Castle, DE, USA) Discovery Series thermogravimetric analyzer. The temperature was ramped from room temperature to 800 • C at a rate of 10 • C/min. The same TGA procedure was used on the spent catalyst after the second 100 h cycle. DRIFTS was performed on the fresh and the spent catalyst after CO adsorption using a Thermo Scientific (Waltham, MA, USA) Nicolette 6700 FTIR instrument equipped with a Harrick Scientific (Pleasantville, NY, USA) praying mantis DRIFTS cell. The catalysts were dried under flowing Ar (50 mL/min) at 200 • C for 2 h and allowed to cool to 25 • C. The catalyst sample was then subjected to flowing 1% CO/Ar (50 mL/min) at 25 • C for 1 h, followed by Ar purging to remove gas phase CO and weakly absorbed CO before the spectrum was collected. The sample was then heated to 200 • C under flowing Ar (50 ml/min) and complete CO desorption was monitored by a return of the DRIFTS signal to a previously established baseline. The gas flow was then switched to 10% H 2 /N 2 (50 mL/min) as the temperature was increased to 400 • C and held for 1 h. The catalyst was then cooled to 25 • C under flowing Ar (50 mL/min) to remove excess H 2 . The catalyst was again subjected to flowing 1% CO/Ar (50 mL/min) at 25 • C for 1 h, followed by Ar purging to remove gas phase CO and weakly absorbed CO before spectra were collected. XPS measurements were performed on a Thermo Scientific (Waltham, MA, USA) K-Alpha system utilizing Al K-α radiation and a spot size of 100 µm. A minimum of 15 scans were collected for the full range spectra as well as for individual element scans, each spectrum produced being the average of two sample points. A linear background correction was used for sample integration and an elemental effectiveness factor of 1.1 was applied to copper to compensate for signal intensity variation. TEM observations were conducted on the fresh and spent (after 2 nd 100 h on stream) Ni-Cu catalyst. The catalyst powders were loaded on lacey carbon 400 mesh gold (C/Au) grids via a sonication-assisted method. This required a sample of the catalyst powder to be sonicated in 1 mL of ethanol for 20 min before placing one drop of the resulting suspension solution onto a blank C/Au grid that was subsequently allowed to dry in air. Samples thus prepared were then introduced into a Thermo Scientific (Waltham, MA, USA) Talos F200X analytical electron microscope, operated at 200 keV and equipped with four silicon drift detector (SDD)-based EDS systems for quantitative chemical composition analysis and elemental distribution mapping.
Feed Preparation
A waste FFA feedstock was obtained from the steam stripping of triglyceride-based feeds used for biodiesel synthesis. BG from waste water grease traps was supplied by Pincelli and Associates (Chattanooga, TN, USA). The BG was mixed vigorously with 25 wt% chloroform (HPLC grade, supplied by J.T. Baker, Radnor, PA, USA) for 2 h at 30 • C. The resulting solution was filtered with occasional heating to counter the solidification of fats. The water layer from the resulting filtrate was then decanted and the remaining organic mixture was dried using Na 2 SO 4 . The chloroform was removed by rotatory evaporation and the remaining lipids were diluted to the desired concentration with n-dodecane (99+%, Alfa Aesar, Haverhill, MA, USA).
Continuous Fixed-Bed Deoxygenation Experiments
Experiments were operated in continuous mode using a fixed bed stainless steel tubular reactor (1/2 in o.d., Parr, Moline, IL, USA) with a stainless steel porous frit to hold the catalyst bed in place. A mixture of equal parts catalyst and SiC diluent (by weight, 1 g total) was used for all experiments. The catalyst was reduced in situ at 400 • C for 3 h under flowing H 2 (60 mL/min). Temperature was monitored using two K-type thermocouples, one introduced from above and made to contact the catalyst bed and one introduced from below and made to contact the aforementioned porous frit under the catalyst bed. For experiments in which the reaction atmosphere was not pure H 2 , the reactor was purged after the reduction with Ar and pressurized with the reaction gas to 40 bar. The pressure was monitored both upstream and downstream of the catalyst bed by Omega (Norwalk, CT, USA) digital pressure gauges. After the system was pressurized, the catalyst bed was heated to the desired reaction temperature (275-375 • C). The liquid feed solution was introduced to the reactor using a Harvard Apparatus (Holliston, MA, USA) syringe pump equipped with an 8 mL syringe, either at 0.043 mL/min for the FFA feed (25 wt% in C12) or at 0.021 mL/min for BG (50 wt% in C12) to achieve a weight hour space velocity (WHSV) of 1 h −1 . The flow of the reaction gas was held at 60 mL/min for the duration of the experiment. Liquid products were sampled from a liquid-gas separator (kept at 0 • C) placed downstream from the catalyst bed. Incondensable gases were directed to a dry test meter before being collected in Tedlar ® gas sample bags. Gas sample bags were changed every time a liquid sample was taken to ensure that the gas samples analyzed and the liquid samples collected could be correlated. Representative experiments were performed in duplicate to ensure reproducibility. The average standard deviation of diesel-like hydrocarbons and heavy products formed was 2.77% and 1.47% in duplicate FFA and BG upgrading experiments, respectively.
Liquid and Gaseous Product Analysis
The liquid products were analyzed using a combined simulated-distillation-GC and GC-MS method specifically devised to identify and quantify the products obtained in the upgrading of FOG to hydrocarbons. Detailed information about the development and application of this method is available elsewhere [39]. Briefly, the analyses were performed using an Agilent (Santa Clara, CA, USA) 7890B GC system equipped with an Agilent (Santa Clara, CA, USA) 5977A extractor MSD and flame ionization detector (FID). The multimode inlet, which contained a helix liner, was run in a split mode (15:1; split flow, 48 mL/min) with an initial temperature of 100 • C. Helium was used as the carrier gas and a 1 µL injection was employed. Upon injection, the inlet temperature was immediately increased to 380 • C at a rate of 8 • C/min, and the temperature was maintained for the course of the analysis. The oven temperature was increased upon injection from 40 • C to 325 • C at a rate of 4 • C/min, followed by a ramp of 10 • C/min to 400 • C, which was maintained for 12.5 min. The total analysis run time was 91.25 min. An Agilent (Santa Clara, CA, USA) J and W VF-5ht column (30 m × 250 µm × 0.1 µm) rated to 450 • C was used. Column eluents were directed to a Siltek MXT™ connector that split the flow into two streams, one leading to the MSD (J and W Ultimetal Plus Tubing, 11 m × 0.25 mm i.d.) and one leading to the FID (J and W Ultimetal Plus Tubing, 5 m × 0.25 mm i.d.). The MS zone temperatures (MS source at 230 • C and quadrupole at 150 • C) were held constant for the duration of the analysis. A 1.75 min solvent delay was implemented and the MSD scanned from 10 to 700 Da. The FID was set to 390 • C with the following gas flow rates: H 2 at 40 mL/min; air at 400 mL/min; He makeup at 25 mL/min. Quantification was performed using cyclohexanone as an internal standard. Agilent (Santa Clara, CA, USA) MassHunter Acquisition and SimDis Expert 9-purchased from Separation Systems Inc. (Gulf Breeze, FL, USA)-software were respectively used to perform chromatographic programming and to process the GC-FID data acquired. Solvents (i.e., chloroform and dodecane) were quenched and/or subtracted prior to processing the data.
Gaseous samples were analyzed using an Agilent (Santa Clara, CA, USA) 3000 Micro-GC equipped with 5 Å molecular sieve, PoraPLOT U, alumina, and OV-1 columns. The GC was calibrated for all of the gaseous products obtained, including CO x , as well as straight chain C1-C6 alkanes and alkenes.
Conclusions
20% Ni-5% Cu/Al 2 O 3 was found to be an effective catalyst in the deoxygenation of fatty acid waste streams to fuel-like hydrocarbons. Near quantitative conversion of a free fatty acid feed obtained by steam stripping a biodiesel feedstock (25 wt% in dodecane) was achieved at all the reaction temperatures employed, i.e., 275, 325, and 375 • C. Notably, upon increasing the lipid concentration-from the 25 wt% used in experiments involving the feed obtained by steam stripping to the 50 wt% used in experiments involving BG-quantitative conversion of the BG lipids to hydrocarbons was obtained in experiments performed at 325 and 375 • C. While the 20% Ni-5% Cu/Al 2 O 3 catalyst yielded a larger amount of heavy chain hydrocarbons during the upgrading of BG under reduced H 2 partial pressure, CO x methanation (and the associated H 2 consumption) was significantly reduced. Saliently, the 20% Ni-5% Cu/Al 2 O 3 catalyst gave yields of diesel-like hydrocarbons in excess of 80% at all reaction times during a BG upgrading experiment lasting 100 h. Moreover, the catalyst was successfully regenerated in situ and displayed improved deoxygenation and decreased cracking and methanation activity during a second 100 h cycle, which is ascribed to irreversible poisoning of cracking and CO adsorption sites and/or to the decrease in the CO binding energy to Ni sites. Indeed, the lower concentration of CO on the catalyst surface decreases catalyst poisoning from CO-resulting in the superior activity and stability observed by the regenerated catalyst-as well as the likelihood of methane and coke formation. This conclusion is supported by data pertaining to the characterization of the fresh and spent (after the second 100 h on stream) catalyst via TEM-EDS, CO-DRIFTS, and XPS, analytical results showing that metal particle sintering, alloying of Ni with Cu, as well as both surface and bulk particle enrichment with copper all occur in the course of reaction and/or catalyst regeneration. Taken together, these results explain the superior properties of the regenerated catalyst in lipid deoxygenation relative to its fresh counterpart. Overall, these results confirm that 20% Ni-5% Cu/Al 2 O 3 is a robust and effective catalyst for the conversion of waste lipid streams to diesel-range hydrocarbons. Acknowledgments: Brad Davis of ESC Energy is thanked for providing the sample of waste FFAs used in this study. Beth Hamilton of Pincelli and Associates is thanked for providing the sample of brown grease used in this study. Maria Wright is thanked for her assistance with feed preparation.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Appendix B Figure A1. TEM micrographs of the fresh (left) and spent (right) catalyst. Distinct layers of carbonaceous deposits can be seen surrounding a metal particle in the spent catalyst outlined in red. | 13,605.2 | 2019-01-30T00:00:00.000 | [
"Chemistry",
"Engineering",
"Environmental Science"
] |
A backtracking evolutionary algorithm for power systems
. This paper presents a backtracking variable scaling hybrid differential evolution, called backtracking VSHDE, for solving the optimal network reconfiguration problems for power loss reduction in distribution systems. The concepts of the backtracking, variable scaling factor, migrating, accelerated, and boundary control mechanism are embedded in the original differential evolution (DE) to form the backtracking VSHDE. The concepts of the backtracking and boundary control mechanism can increase the population diversity. And, according to the convergence property of the population, the scaling factor is adjusted based on the 1/5 success rule of the evolution strategies (ESs). A larger population size must be used in the evolutionary algorithms (EAs) to maintain the population diversity. To overcome this drawback, two operations, acceleration operation and migrating operation, are embedded into the proposed method. The feeder reconfiguration of distribution systems is modelled as an optimization problem which aims at achieving the minimum loss subject to voltage and current constraints. So, the proper system topology that reduces the power loss according to a load pattern is an important issue. Mathematically, the network reconfiguration system is a nonlinear programming problem with integer variables. One three-feeder network reconfiguration system from the literature is researched by the proposed backtracking VSHDE method and simulated annealing (SA). Numerical results show that the perfrmance of the proposed method outperformed the SA method.
Introduction
Recently, many mathematic algorithms have used to solve various industrial systems. Among mathematical algorithms, the evolutionary algorithms (EAs) have been widespread interest based on advantages including ease to use, robust, and so on. EAs are a class of stochastic search and optimization methods that include genetic algorithms (GAs), evolutionary programming (EP), evolution strategies (ESs), genetic programming (GP), and their variants [1]. These algorithms, based on the principles of natural biological evolution, have received considerable and increasing interest over the past decade. EAs operate on a population of potential solutions, applying the principle of survival of the fittest to produce successively better approximations to a solution. EAs are robust and suitable for effectively obtaining optima, and with a smaller probability than other algorithms of falling into local optima. The methodology of the EAs has been discussed in many literatures [2][3][4][5][6][7][8][9], and EAs are also successfully used in various industrial applications [10][11][12][13][14][15].
Variant methods of EAs were proposed to increase the convergence speed and the capability of global search during development of EAs. Hybrid differential evolution (HDE) [16][17] is a EAs' variant method that is one of the most excellent EAs. HDE is a stochastic search and optimization method. The fittest of an offspring competes one-to-one with that of the corresponding parent, which is different from the other evolutionary algorithms (EAs). This one-to-one competition gives rise to a faster convergence rate. However, this faster convergence also leads to a higher probability of obtaining a local optimum because the diversity of the population descends faster during the solution process. To overcome this drawback, migrating operation and accelerated operation act as a trade-off operator for the diversity of population and convergence property in HDE. Migrating operation maintain the diversity of population, which guarantees a high probability of obtaining the global optimum. And, accelerated operation is used to accelerate convergence. However, a fixed scaling factor is used in HDE. Using a smaller scaling factor, HDE becomes increasingly robust. However, much computational time should be expanded to evaluate the objective function. HDE with a larger scaling factor should results generally fall into local solution or misconvergence. Lin et al. [16] used a random number that its value is between zero and one as a scaling factor. However, a random scaling factor could not guarantee the fast convergence. The selection of mutation operator is also a very important issue in HDE. The proper mutation operator can accelerate to search out the global solution [14]. However, the selection of mutation operator is problem-dependence. In HDE, the proper mutation operator is not easy to select. To overcome the drawback of the fixed and random scaling factor and alleviate the problem of selection of mutation operator in HDE, the concept of variable scaling factor is used in variable scaling hybrid differential evolution (VSHDE) method [18][19]. The rule of updating scaling factor based on the 1/5 success rule of the evolution strategies (ESs) [1,20] is used to adjust the scaling factor. The 1/5 success rule emerged as a conclusion of the process of optimizing convergence rate of two functions (the so-called corridor mode and sphere model [1,[20][21]). To increase the population diversity, the concept of backtracking is used in the VSHDE method (called backtracking VSHDE method).
Power systems across the world have been recently deregulated, which resulted in competition among the power sellers. This resulted in increased complexity of distribution networks. Further the distribution system planner and operator are facing new challenges that put emphasis on optimal planning and operation of the distribution system. Reduction of power loss, improvement of power quality, cost minimization and load balance among the branches are important factors for better operation of the distribution system. These can be accomplished by optimal network reconfiguration [22]. Distribution systems consist of groups of interconnected radial circuits. The configuration may be varied via switching operations to transfer loads among the feeders. Two types of switches are used in primary distribution systems. There are normally closed switches (sectionalizing switches) and normally open switches (tie switches). Those two types of switches are designed for both protection and configuration management. Network reconfiguration is the process of changing the topology of distribution systems by altering the open/closed status of switches. Because there are many candidateswitching combinations in the distribution system, network reconfiguration is a complicated combinatorial, non-differentiable constrained optimization process aimed at finding optimal operation of the distribution system. Optimal feeder reconfiguration has been investigated over decades with heuristic and meta-heuristic techniques for single and multi-objectives in order to find optimal planning and operation of the distribution system [22][23][24][25][26][27][28]. A multi-objective invasive weed optimization algorithm is proposed by Sudha et al. [22] to solve the optimal network reconfiguration. While solving optimal network reconfiguration of the radial distribution system, minimization of active power loss, maximum node voltage deviation, number of switching operations and the load balancing index are considered as the objectives simultaneously. Huang et al. [23] proposed parallel undirected spanning tree-based genetic algorithm (PSTGA) to solve the network reconfiguration in parallel. Liu et al. [24] proposed an improved type of immune algorithm to ensure that the algorithm can quickly converge to the global optimal solution to improve the efficiency of the algorithm solving and solution accuracy. Nguyen et al. [25] proposed a reconfiguration methodology based on a cuckoo search algorithm (CSA) for minimizing active power loss and the maximizing voltage magnitude. 46 Syahputra et al. [26] proposed particle swarm optimization (PSO) algorithm based multi-objective optimization for reconfiguration of radial distribution network with the presence of distributed energy resources (DER). The benefits of DER integration in distribution system are reducing power losses, improving voltage profiles and load factors, eliminating system upgrades, and reducing environmental impacts. Sureshkumar et al. [27] presented the use of differential evolution method to reduce the losses and balancing the loads in the radial distribution network. Iswarya et al. [28] deals with reconfiguration of distribution system to minimize the power loss and considering reliability.
IMETI 2016
In this study, a backtracking VSHDE for solving the network reconfiguration of distribution systems is proposed. Here, 1/5 success rule of evolution strategies (ESs) [20][21] is used to adjust the scaling factor to accelerate searching out the global solution. In addition, the concept of backtracking used in the proposed method to increase the population diversity. One three-feeder distribution system from the literature is solved by the proposed method is sloved respectively by the proposed method and SA. From the computational results, it is observed that the convergence property of the bracktracking VSHDE method is better than that of the SA method.
Backtracking VSHDE algorithm
The backtracking VSHDE method is briefly described in the following.
Step 1. Initialization The initial population is chosen randomly and would attempt to cover the entire parameter space uniformly as (1).
is a random number, In addition, the initial historical population is also generated using equation (1) to maintain the population diversity.
Step 2. Backtracking Operation [29] Increasing the population diversity to increase the probability of searching out the global solution is the aim of the backtracking operation. Backtracking operation has two strategies. First, two random numbers between 0 and 1 are generated. Second, if the first random number is less than the second random number, then current population is replaced with the historical population and re-permuting the every individual's position randomly in the population. Otherwise, if the first random number is greater or equal than the second random number, then current population is only re-permuting the individual's position in the historical population.
Step 3. Mutation operation The essential ingredient in the mutation operation is the difference vector. Each individual pair in a population at the G-th generation defines a difference vector jk D as The mutation process at the G-th generation begins by randomly selecting either two or four population individuals , , , A mutant vector is then generated based on the present individual in the mutation process by Where F is the scaling factor. Furthermore, , , , l k j and m are randomly selected. The concept of variable scaling factor used in the proposed method is to overcome the drawback of the fixed and random scaling factor. The rule of updating a scaling factor based on the 1/5 success rule of the ESs is used to adjust the scaling factor. The 1/5 success rule emerged as a conclusion of the process of optimizing convergence rate of two function(the so-called corridor mode and sphere model) [1,[20][21].The scaling factor should be updated in every q iterations as follow: Where t s p is the frequency of successful mutations measured. The initial value of the scaling factor, [1] are used for adjustment, which should be taken place for every q iterations. When the migrating operation performed or the scaling factor is too small to find the better solution, the scaling factor is reset as follow: Where iter and itermax are the number of current iteration and the maximum iteration, respectively.
Step 4. Crossover operation The perturbed individual of is assigned by the user. Step
Boundary Control Mechanism
When the gene of the child is less than the lower bound, two random numbers between 0 and 1 are generated. If the first random number is less than the second random number, the gene of child set to lower bound, otherwise, the gene of child is chosen randomly in an attempt to cover the entire parameter space uniformly. Similarly, when the gene of the child is greater than the upper bound, two random numbers between 0 and 1 are generated. If the first random number is less than the second random number, the gene of child set to upper bound, otherwise, the gene of child is chosen randomly in an attempt to cover the entire parameter space uniformly.
Step 6. Estimation and selection The evaluation function of a child is one-to-one competed to that of its parent. This competition means that the parent is replaced by its child if the fitness of the child is better than that of its parent. On the other hand, the parent is retained in the next generation if the fitness of the child is worse than that of its parent, i.e.
Where arg min means the argument of the minimum.
Step 7. Migrating operation if necessary In order to effectively enhance the investigation of the search space and reduce the choice pressure of a small population, a migration operation is introduced to regenerate a new diverse population of individuals. The new population is yielded based on the best individual 1 + G b Z . The g-th gene of the i-th individual is as follows: The migrating operation is executed only if a measure fails to match the desired tolerance of population diversity. The measure is defined as follows: Where express the desired tolerance for the population diversity and the gene diversity with respect to the best individual.
Step 8. Accelerated operation if necessary When the best individual in the present generation cannot be improved any longer by the mutation and crossover operations, a decent method is then employed to push the present best individual towards attaining a better point. The accelerated phase is expressed as follows: Z denotes the best individual, as obtained from equation (9). The step size ( ] (13) is determined by the descent property. Initially, α is set to one to obtain the new individual.
Step 9. Repeat step 2 to step 8 until the maximum iteration quantity or the desired fitness is accomplished.
The main calculation procedure of the backtracking VSHDE method is as shown in Fig. 1.
Where Loss T P , is the total real power loss of the system.
The voltage magnitude at each bus must maintain within its limits. The current on each branch has to lie within its capacity rating. These constraints are expressed as follows Fig.2, the following set of recursive equations is used for power flow computation [32][33].
Application of the proposed method
Implementation of the problem begins from the parameter encoding. A tie switch and some sectionalizing switches with the feeders form a loop. A certain switch of each loop is then selected to open to make the loop become radial and the selected switch naturally become a tie switch. The network reconfiguration problem is identical to the problem of selection of an appropriate tie switch for each loop so that the power loss can be minimized. A coding scheme that recognizes the tie switch positions is proposed. The total number of tie switches is kept constant regardless of the change in the system's topology or the tie switches' positions. Fig. 3 shows an individual composing of wherein the tie switch (TS) positions. Different switches from a loop are respectively selected to cut as a tie switch to decide its associated fitness value to determine a feasible solution (radial configuration) with minimum loss. The fitness function to be maximized is defined as follows Where M is the total feeder number of the system and k n is the total section number of feeder k . One three-feeder distribution system from the literature is investigated and the results are used to compare the performance of the proposed backtracking VSHDE method with the SA method. The Fortran SA [34] algorithm solver accessed from http://www-aig.jpl.nasa.gov/public/home/decoste /HTMLS/NN/glopt/glopt.html#sa_codes, is used to solve the network reconfiguration of distribution systems. The SA solver recommended some setting factors for a user. The recommended factors are used for solving network reconfiguration of distribution systems. For comparison, SA package is rewritten by Matlab software.
Example:
The three-feeder distribution system [35] as shown in Fig. 4 Input data of this example system are shown in Table 1. The system consists of 3 feeders, 13 normally closed switches, and 3 normally open switches. The system load is assumed to be constant and MVA S base 100 = . The setting factors used in backtracking VSHDE to solve this example are as follows. The population size, p N , is set to 5. The maximum generation, itermax , is set to 50. The crossover factor, R C , is set to 0.5. Two tolerances, ε and 1 ε , used in the migrating operation are both set to 0.1. The scaling factor is updating in every
Conclusion
Two heuristic methods including backtracking VSHDE and SA for solving the network reconfiguration of distribution systems have been described in this work. The bactracking VSHDE method utilized the 1/5 success rule of the evolution strategies (ESs) to adjust the scaling factor to accelerate searching out the global solution. The variable scaling factor is used to overcome the drawback of fixed and random scaling factor used in HDE. And, the concept of backtracking is used in this paper to increasing the population diversity to increase the possibility of searching out the global solution. The computational results obtained of sloving one three-feeder distribution system from the literature is investigated. From the computational results of solving one three-feeder distribution system from the literature showed the performance of the backtracking VSHDE method is better than those obtained by the simulated annealing (SA). | 3,817.8 | 2017-01-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
An Integrative Analysis of PIK3CA Mutation, PTEN, and INPP4B Expression in Terms of Trastuzumab Efficacy in HER2-Positive Breast Cancer
The phosphoinositide-3-kinase (PI3K) pathway is commonly deregulated in breast cancer through several mechanisms, including PIK3CA mutation and loss of phosphatase and tensin homolog (PTEN) and inositol polyphosphate 4-phosphatase-II (INPP4B). We aimed to evaluate the predictive relevance of these biomarkers to trastuzumab efficacy in HER2-positive disease. We evaluated the effect of trastuzumab in 43 breast cancer patients with HER2-overexpression who received neoadjuvant treatment. PIK3CA mutation was examined by direct sequencing and digital PCR assay, and PIK3CA copy number was assessed by digital PCR assay of pretreatment tissues. PTEN, pAkt, and INPP4B were assessed by immunohistochemistry. Direct sequencing detected mutant DNA in 21% of all patients, but the incidence increased to 49% using digital PCR. The pathological complete response (pCR) rate in patients with PIK3CA mutations was 29% compared with 67% for those without PIK3CA mutations (P = 0.093), when the mutation was defined as positive if the mutant proportion was more than 10% of total genetic content by digital PCR. Low PTEN expression was associated with less pCR compared to high expression (33% versus 72%, P = 0.034). There were no significant associations of PIK3CA copy number, pAKt, or INPP4B with trastuzumab efficacy. In multivariate analysis, activation of the PI3K pathway due to either PIK3CA mutation or low PTEN were related to poorer response to trastuzumab (OR of predictive pCR was 0.11, 95%CI; 0.03–0.48). In conclusion, activating the PI3K pathway is associated with low pCR to trastuzumab-based treatment in HER2-positive breast cancer. Combined analysis of PIK3CA mutation and PTEN expression may serve as critical indicators to identify patients unlikely to respond to trastuzumab.
Introduction
Anti-human epidermal growth factor 2 (HER2) therapy has been approved as a standard practice for patients with HER2-positive breast cancer, leading to an improvement of patient outcome during the past decade [1,2,3]. Despite the considerable efficacy of trastuzumab therapy, some patients with metastatic breast cancer either do not respond to it or have a limited benefit [4,5]. This resistance to trastuzumab is a major issue in clinical practice and the molecular basis of the resistance has not been completely elucidated.
The phosphoinositide-3-kinase (PI3K) pathway, which is a downstream target of most growth factor tyrosine kinase receptors (TKRs) including HER2 and insulin-like growth factor-1 receptor (IGF1R), contributes to cell proliferation, metabolism, autophagy, and cell survival and also confers resistance to trastuzumab [6,7]. Aberrations of this pathway are extensively found in many human cancers in a variety of forms, including mutation or amplification of PIK3CA and loss of phosphatase and tensin homolog (PTEN) and inositol polyphosphate 4-phosphatase-II (INPP4B) [8,9,10].
Activating mutations in the PIK3CA gene, which encodes the p110a catalytic subunit of PI3K, are frequent in breast cancer as are mutations in the p53 gene [11,12]. The frequency of PIK3CA mutations in HER2-positive breast cancer has been reported as 22.7% to 39% [11,12,13]. Approximately 90% of these mutations are localized in 3 major hotspots concentrated in the helical (E542K and E545K) and kinase (H1047R) domains [14]. Loss of function of PTEN, a negative regulator of PI3K signaling, has been reported in 15%-65% of HER2positive breast cancer [7,8,15,16,17,18]. Likewise, the expression of a putative tumor suppressor, INPP4B, is frequently lost in breast cancer, and is reported to be associated with decreased patient survival [10].
To date, a number of in vitro studies have demonstrated the putative mechanism of resistance to trastuzumab therapy in terms of PI3K pathway activation [7,19,20], but clinical confirmations of this association are limited. Chandaelapaty et al. revealed that the incidence of PTEN loss and/or PIK3CA mutation in trastuzumab-refractory tumors was 71% compared with 44% in primary tumors from an untreated cohort [21]. This supports the findings that showed that, during cancer evolution, tumors accumulate molecular or genetic events to overcome exposure to the drug [22]. When investigating the molecular basis of treatment resistance, it is difficult to obtain the tumor tissues consistently after disease progression in a metastastic setting. However, neoadjuvant treatment offers an opportunity to explore the efficacy of therapy and allows us to predict de novo resistance using primary tissues. To our knowledge, studies of the association between PI3K-pathway activation and pathological complete response (pCR) to trastuzumab treatment have been limited.
For the detection of mutations, direct sequencing has been commonly used in clinical research. However, it can only detect mutant sequences constituting more than 20% of the total genetic content [23]. Digital PCR technology with an analytical sensitivity of 0.01-0.1%, represents an attractive approach for the detection of low-abundance mutations [24], and allows accurate quantitative measurement of mutant DNA. A recent study reported that digital PCR could detect additional mutations in primary tumor tissues not found by traditional Sanger sequencing [25], suggesting that previous evaluations of mutations in pretreatment tumor tissues might have underestimated the true frequency.
The objective of this study is to evaluate the predictive relevance of PI3Kpathway related biomarkers to trastuzumab efficacy in HER2-positive disease. In a neoadjuvant setting, we can identify de novo resistance to trastuzumab therapy, which requires alternative treatment such as inhibitors of the PI3K/AKT/mTOR pathway in early stage of the disease. Additionally, we aim to determine accurately the frequency of PIK3CA mutation using the novel technology of digital PCR.
Results
Patient characteristics in this study are summarized in S2 Table. The mean age at diagnosis was 53 years (range 30-75). Forty-four percent had tumors of the luminal-HER2 subtype and 56% HER2-enrich. Most patients (81%) received an anthracycline-and taxan-containing regimen in combination with trastuzumab in the neoadjuvant treatment. Of the 43 patients, 26 (60%) achieved pCR.
Frequency of PIK3CA mutation detected by direct sequencing and digital PCR assays We first investigated PIK3CA mutations in the pretreatment tissues using direct sequencing, which confirmed 7 (16%) mutations of exon 20 and 3 (7%) of exon 9. Subsequently, the tissues were subjected to digital PCR assays, in which the fractional abundance of mutant DNA ranged from 0.1% to 58.2% (Table 1). The corresponding frequencies of PIK3CA mutations were 35% of exon 20 and 28% of exon 9. Fig. 1 shows representative images of the detection of mutant PIK3CA DNA by both techniques. The first illustrated case (Fig. 1A), in which an apparent mutation was detected by direct sequencing, showed more than 50% mutant DNA in a background of wild-type DNA by digital PCR analysis. In contrast, we found some cases in which it was difficult to differentiate mutant DNA from artifact by direct sequencing (Fig. 1B and C). In these cases, we could identify the mutation clearly using digital PCR (25.8% and 20.9%). Mutations detected by both assays are shown in Table 1. In total, direct sequencing detected mutant DNA in 21% of patients, but the incidence increased to 49% using digital PCR. Most mutations detected by direct sequencing were also identified by digital PCR, and the proportion of mutant DNA was more than 10% in all cases except numbers 15 and 41 (Table 1). In 12 (28%) samples, PIK3CA mutation was detected only using the digital PCR, as the mutant proportion was around 1% or less.
Associations of PIK3CA mutation, gene copy number with efficacy of trastuzumab therapy We next evaluated whether PIK3CA mutations influence the efficacy of trastuzumab therapy. As the significance of low-abundance mutation, especially those less than 1%, has been unclear, we set out to determine an appropriate cutoff point for the proportion of mutant DNA. Using a cutoff of 10%, the pCR rate in patients positive for PIK3CA mutations was 29% compared with 67% for those below the cutoff (P50.093, Fig. 2, Table 2). This effect was more evident when the cutoff point was 20%, with none of the mutation-positive patients achieving the pCR (Fig. 2). The location of the mutation in the PIK3CA gene (exon 9 vs. exon 20) made no significant difference in the response to trastuzumab therapy. PIK3CA copy number was also analyzed by digital PCR. The median value for the ratio of PIK3CA to RNaseP was 1.04. In total, 6 cases (14%) showed a ratio.1.5, which was considered as copy number gain. There was no significant difference in pCR rate between the patients with low or normal copy number vs. copy number gain (59% versus 67%, P51.00).
Associations between PTEN, pAkt, INPP4B expression and treatment response
We then examined the expression of PI3K pathway-related proteins by IHC analyses. Forty-one, 40, and 39 tumors were evaluable for PTEN, pAkt, and INPP4B, respectively. Representative staining patterns for each parameter are shown in Fig. 3. For PTEN and INPP4B, 12 (29%), and 7 (18%) cases, respectively, were considered absent or low expression, and for pAkt 30 cases For the digital PCR assay, signals from wild-type-specific probe are detected as blue, and signals from mutant-type are detected as red. A; mutation is detectable by direct sequencing and harbors more than 50% of mutant DNA using digital PCR. B, C; detection of mutation is difficult by direct sequencing alone, harbors around 20% of mutant DNA by digital PCR. (75%) were considered high expression. Cases with reduced PTEN expression were less likely to achieve pCR than those with high expression (33% versus 72%, P50.034, Table 2). We found no significant associations between pAkt, INPP4B and the response.
Correlations between PIK3CA mutation and gene copy number, protein expressions PIK3CA mutation was defined positive if mutant DNA was more than 10% by digital PCR assay for all of the following analyses. Patients with PIK3CA mutations tended to have high levels of pAkt expression but the difference was not significant (S3 Table). Three patients (7%) showed both PIK3CA mutation and low PTEN expression, and 2 (5%) had PIK3CA mutation and low INPP4B. None of the patients had PIK3CA mutation and concurrent gain of gene copy number.
Relationship of each parameter with clinicopathological features
There were no significant correlations of clinicopathological features with PIK3CA mutation, gene copy number status, PTEN, or pAkt expression ( Table 2). Low INPP4B expression was associated with larger tumor size (P50.035) and higher nuclear grade (P50.031), compared to high expression. There were no correlations between, pAKT, PTEN, and INPP4B (data not shown).
Discussion
In recent years, there has been a concerted effort to identify biomarkers of drug response in neoadjuvant treatment. Although a number of studies regarding the PIK3CA gene or PI3K pathway in HER2-positive breast cancer have been reported, they mainly focused on the impact on prognosis or response to anti- HER2 therapy in a metastastic disease [7,13,14,15,21]. In the present study, we sought to evaluate associations between activation of PI3K pathway and efficacy of trastuzumab therapy in neoadjuvant treatment. The rate of pCR was significantly lower among the patients with high activation of PI3K pathway, who had at least one of PIK3CA mutation and/or low PTEN expression, than those without either PIK3CA mutation or low PTEN. Further, we performed mutational analysis by digital PCR to complement the direct sequencing data, leading to more accurate measurement of the frequency of PIK3CA mutation.
Activating mutations and amplification of the PIK3CA gene are commonly found in breast cancer, particularly in ER-positive or HER2-positive disease [11,12]. There are several studies concerning PIK3CA mutation, indicating its importance in determining the survival and treatment efficacy [7,13]. Jensen et al. showed that patients with PIK3CA mutation or increased PI3K pathway activity had poor survival despite adjuvant chemotherapy and trastuzumab [13]. Some similar studies [7,13,26] but not all [14,17] have emphasized the prognostic relevance of PIK3CA mutation. With regard to the tumor response, a few studies have shown the predictive meaning of PIK3CA mutation [27]. A recent study from the German Breast Group showed that the rate of pCR in their neoadjuvant trials was significantly lower among the patients with HER2-positive disease who harbored PIK3CA mutations, than among those who did not (17% versus 37%) [28,29]. Another neoadjuvant trial (NeoALTTO) which compared trastuzumaband lapatinib-containing regimens revealed that low PTEN or activating mutation in PIK3CA conferred resistance to the trastuzumab regimen, whereas low PTEN was predictive for response to lapatinib [27]. Our study provides additional support for these findings, and further suggests that integrated biomarkers of PIK3CA mutation and low PTEN are even stronger predictor of trastuzumab response than either one alone (Table 3), apparently reflecting their interacting roles in the biology of HER2-positive breast cancer. Several ongoing trials targeting the PI3K pathway performed biomarker analysis [30,31,32] and found that patients with PIK3CA mutations or PTEN loss having a poorer response to trastuzumab therapy may derive increased benefit from PI3K pathway-targeted drugs. Hence, it is of critical importance to understand the mechanisms of trastuzumab resistance and identify those patients before initiation of treatment.
In the present study, we compared two methods of mutational analyses: direct sequencing and digital PCR. The latter is an attractive novel technology that has previously been used for various applications, including mutation detection and analysis of copy number variation [33,34]. In our study, the digital PCR assay was an extremely useful complement when direct sequencing was unable to identify a mutation. The frequency of patients with mutant DNA found by digital PCR was more than twice that found using direct sequencing (49% versus 21%, Table 1), and appeared to exceed the previously reported frequency (,39%) [12]. This is presumably due to digital PCR's greater sensitivity in detecting mutant DNA of low abundance, even less than 1%. However, careful consideration is required of the degree to which a small amount of mutation can be implicated in molecular function. In addition, the proportion of mutant DNA detected may depend on the abundance or variation of tumor cells among total genetic content because we used core-biopsy samples for analysis. Due to this issue, we set a cutoff value of the proportion of mutant DNA and found that the pCR rate decreased with elevated proportions of mutant DNA (Fig. 2). To our knowledge, our study is the first to suggest the dose-dependent association of PIK3CA mutation with treatment response, though our sample size was limited and further validation in a large population is needed.
Copy number gain of the PIK3CA appears to be less common than mutation, having been indentified in 1-14% of breast cancers [35,36,37]. A recent study by Gonzalez-Angulo et al. demonstrated that a high copy number of MET or PIK3CA was associated with poorer prognostic features [38]. No study has reported an association between PIK3CA copy number gain and trastuzumab response. Our present study observed no relationship among them, suggesting that a gain of PIK3CA gene copy number is not the main molecular event in activating the PI3K pathway in HER2-positive breast cancer.
We also evaluated the expression of the PI3K-pathway-related proteins, pAkt, PTEN, and INPP4B, by IHC. PTEN and INPP4B are negative regulators of the PI3K pathway and are frequently lost in breast cancer [12]. PTEN is a dual phosphatase that mainly dephosphorylates phosphatidylinositol-3,4,5 trisphosphate (PI(3,4,5)P3) to produce PI (4,5)P2 [39]. INPP4B is also a phosphatase which dephosphorylates the 4-position phosphate from PI (3,4)P2 to form PI(3)P [40]. Loss of PTEN or INPP4B results in prolonged activation of Akt and subsequently in increased cell proliferation, migration, and invasion. Consistent with these molecular findings, reduced PTEN expression was associated with lower response to treastuzumab in our study (Table 2). Further, PTEN appeared to be a more robust indicator of pCR following treatment than PIK3CA mutation, also in agreement with some previous data [17,21]. Nagata et al. reported that PTEN activation directly contributes to the antiproliferative effect of trastuzumab, based on the findings that trastuzumab increased PTEN membrane localization and phosphatase activity, leading to decreased PI3K pathway activation and inhibition of proliferation in an in vitro and in vivo study [8]. For INPP4B expression, we found no the significant impact on the response, although high expression was associated with worse prognostic factors, such as large tumor size and high nuclear grade (Table 2). Recent studies have proved evidence about INPP4B in several human cancers, including prostate cancer [41], melanoma [42], and breast cancer [10,43]. According to the latter reports, reduced INPP4B has been observed predominantly in basal-like breast carcinoma and appears to be associated with unfavorable patient outcome [10,43]. Further evidence for any potential role of INPP4B as a biomarker in breast cancer should be accumulated.
In conclusion, we showed that activation of the PI3K pathway as judged by the presence of oncogenic PIK3CA mutations or low PTEN expression was associated with a lower frequency of pCR in neoadjuvant treatment. Additionally, we found that digital PCR leads to detection of additional mutations not found by conventional sequencing, and a more accurate determination of the prevalence of PIK3CA mutations, thus identifying more patients who are candidates for targeted therapies. In the near future, biomarker analysis to identify women most likely to benefit from the trastuzumab therapy may be conducted routinely in a clinical practice. The present study, thus offers useful evidence that the PI3K-pathwayrelated components PIK3CA, PTEN and INPP4B may be candidate predictive biomarkers for trastuzumab therapy.
Subjects and tissues
A total of 43 patients with HER2-positive breast cancer who received both neoadjuvant treatment and surgery at Kumamoto University Hospital between 2004 and 2012 were selected. All patients underwent pretreatment biopsies using a needle with a more than 14G needle and were diagnosed with invasive breast carcinoma. We evaluated biomarkers in the pretreatment specimens. Neoadjuvant chemotherapy and trastuzumab treatment was assigned to each patient according to their risk on the basis of clinical parameters, and in accordance with the recommendation of the St. Gallen International Expert Consensus on primary therapy of early breast cancer at the time. Only patients who received at least six courses of treatment (commonly up to eight courses) with anthracycline or taxane-containing regimens in combination with trastuzumab were included. The representative regimens of chemotherapy were as follows; FEC (5-fluorouracil 500 mg/m 2 , epirubicin 100 mg/m 2 , and cyclophosphamide 500 mg/m 2 , every 3 weeks) followed by docetaxel (75 mg/m 2 , every 3 weeks) or paclitaxel (80 mg/m 2 , every week) each for 4 cycles, TC (docetaxel 75 mg/m 2 and cyclophosphamide 600 mg/m 2 , every 3 weeks) for 6 cycles.
Ethics Statement
Written informed consent was obtained from all subjects for the collection and research use of breast tumors. Our complete study was approved by the ethics committee of Kumamoto University Graduate School of Medical Sciences.
Evaluation of treatment response
The response of primary breast cancer during treatment was evaluated using clinical diagnostic imaging (ultrasound and magnetic resonance imaging). The achievement of pCR on postoperative specimens was defined as the absence of invasive residuals in breast or nodes. Noninvasive breast residuals were allowed.
Immunohistochemical analysis
Histological sections (4 mm) were deparaffinized and incubated for 10 min in methanol containing 0.3% hydrogen peroxide. They were then immunostained with monoclonal antibodies against ERa (SP1; Ventana Japan, Tokyo, Japan), progesterone receptor (PR) (1E2; Ventana Japan), HER2 (4B5; Roche, Tokyo, Japan), Ki67 (MIB1; Dako Japan, Kyoto, Japan), PTEN (136G6; 1:200, Cell Signaling Technology, Tokyo, Japan), pAkt (Ser473) (D9E; 1:50, Cell Signaling Technology) and INPP4B (EPR3108; 1:100, Abcam). The staining was carried out in a NexES IHC immunostainer (Ventana Medical Systems, Tucson, AZ), in accordance with the manufacturer's instructions. ER and PR were considered positive if more than 1% of nuclei were stained. HER2 expression was also determined by IHC staining based on the Hercep test. We considered a tumor to be HER2-positive if the specimen either scored 3+ by IHC, or showed HER2/ CEP17 ratio with$2.2 in fluorescence in situ hybridization (FISH) according to the 2007 ASCO/CAP guideline [44]. Tumor subtypes were defined according to the expression of ER, PR and HER2. Ki67 was scored as the percentage of nuclearstained cells out of all cancer cells in the hot spot of the tumor regardless of the intensity in a 6400 high-power field (Ki67 labeling index [45]). We counted between 500 and 1,000 tumor cells as recommended by the International Ki67 in Breast Cancer Working Group [46]. For PTEN, pAkt and INPP4B expression, the H-score was calculated by multiplying the percentage of positive cells (0-100) by the staining intensity score (0-3). PTEN expression level was scored semiquantitatively based on the immunoreactive score (IRS) as described by Sakr RA et al. [47]. Low PTEN expression was considered as H-score ,60. Similarly, the Hscore cutoffs for INPP4B and pAkt were set at 60 and 12.5, respectively.
Mutational analysis
Genome DNA from formalin-fixed paraffin-embedded (FFPE) tissue samples which included more than 3 tissue cores was isolated using the AllPrep DNA/RNA Mini kit (Qiagen, Germantown, MD, USA) according to the manufacturer's instructions. The extracted DNA was quantified using a Nanodrop spectrophotometer (Thermo Scientific, Tokyo, Japan).
Direct sequencing analysis
First, all samples were genotyped by direct dideoxynucleotide sequencing. PCR primers used for amplifying segments were designed for analysis of PIK3CA hotspot mutations in exon 9 and exon 20. These sequences are shown in S1 Table. Each PCR was performed on 50 ng of genomic DNA with AmpliTaq Gold 360 Master Mix (Applied Biosystems, CA, USA) according to the manufacturer's protocol. PCR products were purified using a QIAquick PCR Purification Kit (Qiagen) and were sequenced using the BigDye Terminator v3.1 Cycle Sequencing Kit (Life Technologies, Tokyo, Japan) and an ABI 3130 automated capillary sequencer. We used 6 ng of PCR product as a DNA template in sequencing reactions.
Analysis by digital PCR
We used Custom TaqMan SNP Genotyping assays (Applied Biosystems), consisting of a pair of primers and two TaqMan probes, for the detection of 3 common mutations in PIK3CA (E542K, E545K, and H1047). One probe was specific to the wild-type sequence and another was specific to the variants of corresponding mutations. The sequences of the primers and TaqMan MGB probes are described in S1 Table. Digital PCR was performed using the BioMark System with the BioMark qdPCR37K IFC (Fluidigm, Tokyo, Japan) according to the manufacturer's instructions. We prepared 4-mL reaction mixes containing 2 mL of TaqMan Gene expression Master Mix (Applied Biosystems), 0.4 mL of 206GE sample loading reagent, 0.2 mL of 206gene-specific assay, 0.2 mL of DNA-free water, and 1.2 mL of target gDNA (24 ng). PCR runs were as follows; 120 s at 50˚C, a hot start at 95˚C for 10 min, and 40 cycles of 15 s of denaturation at 95˚C and 1 min of annealing and extension at 60˚C. The number of positive amplified signals in each panel was used for quantifying the different DNA sequences and analysis of PCR data was done using the BioMark Digital Array software. This uses a Poisson model to estimate the number of DNA molecules from the count of positive wells. Finally, the proportion of mutant signals in total signals was regarded as a mutant allele frequency.
PIK3CA copy number assay
For the copy number application, we used the BioMark System with the BioMark qdPCR37K IFC (Fluidigm) as described above. RNase P, which is regarded as a reference for gene dosage, was substituted for 0.2 mL of DNA-free water in the 4-mL reaction mixes. The reactions were performed in the panels including both PIK3CA and RNaseP assays. Each probe was obtained for TaqMan copy number assay (4316844 for RNase P, Hs02708380_cn, for PIK3CA, Applied Biosystems). We count the number of positive chambers for each and estimated PIK3CA: RNase P ratio as a relative copy number.
Statistical analysis
The significance of differences in categorized demographic variables was evaluated using Chi-square or Fisher's exact test and the nonparametric Mann-Whitney U test. Logistic regression methods were adopted for univariate and multivariate analyses to assess the associations of clinical and biological parameters with pCR. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated. All statistical analyses were carried out using STATA ver.12 (Stata Corp, College Station, TX). All tests were two-sided and P values ,0.05 were considered statistically significant.
Supporting Information S1 | 5,510.4 | 2014-12-26T00:00:00.000 | [
"Biology"
] |
Experimental Band Structure of Pb(Zr,Ti)O3: Mechanism of Ferroelectric Stabilization
Abstract Pb(Zr,Ti)O3 (PZT) is the most common ferroelectric (FE) material widely used in solid‐state technology. Despite intense studies of PZT over decades, its intrinsic band structure, electron energy depending on 3D momentum k, is still unknown. Here, Pb(Zr0.2Ti0.8)O3 using soft‐X‐ray angle‐resolved photoelectron spectroscopy (ARPES) is explored. The enhanced photoelectron escape depth in this photon energy range allows sharp intrinsic definition of the out‐of‐plane momentum k and thereby of the full 3D band structure. Furthermore, the problem of sample charging due to the inherently insulating nature of PZT is solved by using thin‐film PZT samples, where a thickness‐induced self‐doping results in their heavy doping. For the first time, the soft‐X‐ray ARPES experiments deliver the intrinsic 3D band structure of PZT as well as the FE‐polarization dependent electrostatic potential profile across the PZT film deposited on SrTiO3 and La x SrMn1− x O3 substrates. The negative charges near the surface, required to stabilize the FE state pointing away from the sample (P+), are identified as oxygen vacancies creating localized in‐gap states below the Fermi energy. For the opposite polarization state (P−), the positive charges near the surface are identified as cation vacancies resulting from non‐ideal stoichiometry of the PZT film as deduced from quantitative XPS measurements.
Introduction
Every ferroic order involves the breaking of a symmetry operation: ferromagnetism (FM) breaks time inversion, ferroelasticity (FS) breaks the spatial rotation symmetry, and ferroelectricity (FE) breaks the inversion symmetry. [1] In condensed matter, the macroscopic fingerprint of ferroelectricity is the spontaneous electric polarization P, stable in time and reversible under external electric fields exceeding the coercive field. It is well established that the paraelectric to ferroelectric transition involves a structural transition from high to low symmetry, with consequent displacement and offcentering of the atoms from their symmetric positions. [2][3][4] At microscopic level, such non-centrosymmetric configuration of the atomic positions in the unit cell is consistent with a picture of dipoles aligned along the direction of the internal electric field. Current in-use applications of FE materials range from non-volatile memories, [5] sensors, [6,7] transducers [8,9] to catalysis [10,11] and photovoltaics. [12,13] For solar energy storage and conversion, [14,15] the strong internal field of FEs is an essential ingredient to efficiently separate the electrons from holes. [13,16] A major drawback is their generally wide bandgap which drastically limits the absorption efficiency in the visible range. Strategies to circumvent this shortcoming include heterostructuring, [17] doping [18] and/or defect engineering [19] in order to either decrease the electronic band gap or to introduce localized electronic states in the band-gap, which may account for light absorption at convenient energies in the visible range. Assessing the energy and width of these localized levels or possible hybridization mechanisms between the dopant and the bulk band structure are critical to understanding and tuning the properties of the ferro-functional systems.
The best known FE material is PbZr x Ti 1−x O 3 (PZT), which in its aristotype, or fundamental unit cell, is described by a tetragonal (TG) ABO 3 formula, having Pb in the corners and Ti/Zr in the center of the O 6 octahedra. It is accepted that this is only a general description and lower symmetries such as orthorhombic, rhombohedral (RH) or monoclinic, coexist with the TG phase, [20] with their variable coexistence depending on the Zr amount or temperature. Tuning these parameters is a practical route to increase piezoelectricity [21] or to control the domain wall patterns. [22] At the x = 0.2 value of Zr content, the bulk phase has the TG symmetry with the permanent dipole moment and stable FE polarization generated by displacement along the c axis of the cations with respect to the oxygen atoms. [5,23] In thin films, the accompanying electrostatic effects associated to such dipole orientation is the accumulation of opposed charges at its extremities, defining a depolarizing field (DF) which opposes the internal field and tends to cancel the FE polarization.
For any application based on FEs, it is important to consider the mechanisms (intrinsic and extrinsic) which contribute to the stabilization of a well-defined FE state and to isolate their impact on the electronic properties.
Possible extrinsic compensation mechanisms include screening of the DF by the carriers of a metallic electrode; or, adsorption of polar molecules at the surface of the FE. The resulting consequence is the modulation of the charge density close to the contact region, in the outer material.
Intrinsic mechanisms involve migration across the FE material [24,25] of the already existing positive (holes, ionized donor impurities, or p-type dopants) and negative charges (electrons, ionized acceptor impurities, or n-type dopants), intentionally introduced [26] or resulting from self-doping. [27] Such charge reorganization is accompanied by band bending which manifests either in the outer material for purely extrinsic compensation or within the FE in the case of intrinsic mechanisms. For imperfect screening of the DF by the metallic electrodes, the compensation mechanism is a combination of intrinsic and extrinsic contributions, accompanied by band bending in opposite directions in both the FE and the outer material (metallic electrode or contamination layer). [25] The FE-induced potential profile adds at the material-dependent band alignment (derived from joining systems with different work functions, W f ). This impacts the hopping of charges across the interface. It also explains the preferential stabilization of opposite FE states of a layer when grown on substrates with different W f s and different conducting character such as metallic, insulator, n or p-doped.
The most direct method to access the intrinsic electronic structure of materials encoded in its band structure is angle-resolved photoelectron spectroscopy (ARPES). Tracking the experimental band structure of insulating materials, FE ones in particular, is challenging due to charging effects and broadening of the spectra due to the band-bending in the region close to the surface. With some notable exceptions, [28][29][30] the FEs remain underrepresented in ARPES measurements compared with their FM ferroic analogues. GeTe and GeMnTe were studied extensively, [29,30] however, unlike FE oxides which are insulators, GeTe-derived compounds are semiconductors with only small band-gap and large, p-type conductivity. The only k-resolved study of a FE oxide remains up to now only BaTiO 3 , with its band structure recorded in surface-sensitive measurements, in the heavily n-doped case. [28] Soft X-ray (SX)-ARPES extends the probing region more toward the bulk, away from the surface band-bending potential of ferroelectrics, while increasing the k z resolution, critical for 3D materials such as Pb(Zr,Ti)O 3 and BaTiO 3 . Moreover, SX-ARPES is naturally suited to selectively extract the fingerprints of possi-ble hybridization between the impurity states and the bulk band structure using resonant photoemission, [31][32][33] specifically shining out the contribution of impurity states to the total ARPES signal.
Here, we use SX-ARPES to explore the 3D band structure of PZT with Zr doping x = 0.2, prepared in opposite FE states when grown on substrates with different W f and n/p conduction character, focusing on the spectral signatures of the charge carriers stabilizing its polarization state. By separating the FE-induced effects from the substrate-induced ones, we clarify fundamental aspects related to the electronic structure of oxide interfaces and formulate further directions to enrich the functionality of multiferroic systems.
The structure of the paper is the following: I. The first section explores the k-resolved valence band structure of PZT in two opposed FE states and separates the contribution of the substrate-induced distortions from the FEinduced one. II. The second section identifies the FE polarization-dependent band alignment mechanism of PZT in two opposed FE states and clarifies the mechanism of FE compensation from combined band structure and core levels analysis. III. The third section addresses the effects of X-ray irradiation and identifies the distinct mechanism of charged vacancies creation under external perturbations in order to stabilize and preserve the FE state. IV. The last section is devoted to outlook and conclusions.
k-Resolved Valence Band Structure
Two PZT samples with thickness of 5 nm were grown by pulsed laser deposition on LaSrMnO 3 (LSMO) buffered TiO 2 -terminated (001) SrTiO 3 (STO) substrate and on TiO 2 -terminated (001), Nbdoped SrTiO 3 (Nb:STO). Throughout the paper we will refer to them as DW respectively UP samples. Detailed explanations on the growth protocol are given in the Experimental Section.
Such a small thickness means that: i) the FE state is an out-ofplane, mostly single domain, with P oriented either inward, toward the substrate (P−) or outward, toward the vacuum (P+); and ii) the screening of the DF and the stabilization of the FE order is maintained by a significant concentration of charge carriers existing in the thin PZT layer (charged impurities, cation vacancies [CV]). [27,34] These carriers also protect from the charging effects, allowing direct observation of the k-resolved band structure.
We select the two different substrates due to their larger, respectively smaller work-functions (W f ): LSMO = 4.9-5.1 eV, [35,36] Nb:STO = 4.1-4.2 eV [36] compared to PZT, PZT = 4.5 eV. [34] The W f -induced band lineup at the interface with the substrate define a material-dependent band bending, Δ DW(UP) which drive the migration of negative (positive) charges from PZT at the bottom interface. In combination with the available positive (in LSMO) and negative (in Nb:STO) charges in the substrate, we expected the stabilization of well-defined, opposed out-of-plane P−(P+) FE states of PZT.
XRD measurements performed at the room temperature (Figure S1, Supporting Information) on both samples indicate that PZT is fully strained at the in-plane lattice constant of the STO substrate, and elongated along the c axis. The ratio between the out-of-plane (c) and the in-plane (a) lattice parameters of the pseudo-cubic unit cell, c/a ≈ 1.07-1.08 exceeds the c/a ≈ 1.04-1.05 value of fully relaxed, bulk PZT. [27] Such geometry involves strong cation displacement from the energetically unfavorable centrosymmetric position, [37,38] indirectly supporting the ferroelectric character of the thin films. We identify the distinct outof-plane FE polarization, pointing outward (P+) or inward (P−) in local piezoresponse force microscopy measurements ( Figure S2, Supporting Information) performed on the two samples after the synchrotron measurements. The result identifies PZT in the UP sample with FE state oriented away from the surface (P+) while PZT in DW, features FE polarization oriented inward (P−).
We will now explore the distinct hallmarks of the different crystalline structure of the substrates: cubic Nb:STO (UP sample) and RH-distorted LSMO [39] (DW sample), emphasizing the signature of the different crystalline state and FE polarization in the electronic structure of PZT.
We investigate first the k-space topology of PZT in the UP sample with tetragonal unit cell (u.c.), by navigating along the out-ofplane k z direction. In Figure 1a, the photon energy ℎ is varied between 350 and 520 eV at constant k || . The resulting iso-energy (iso-E) map in the XΓZ plane of the bulk Brillouin zone (BZ), presented in Figure 1a, identifies the valence band maximum (VBM) in the X point at a binding energy (BE) of 2.2 eV relative to the Fermi level.
The (k x − k y ) iso-E map recorded on UP sample in the XΓM plane at the VBM using hv = 520 eV (Figure 1b) derives from the square-like symmetry of the PZT in the ab plane in accordance with the iso-E surface derived from DFT calculations for the tetragonal PZT u.c. (Figure 1c).
The iso-E map recorded in the ZAR plane 0.5 eV below VBM with hv = 465 eV (Figure 1d) also shows the DFT-predicted feature centered in the Z point for the tetragonal u.c. (Figure 1e).
Such topology of the k-space is particular to this heavily strained PZT layer, whereas in fully relaxed, bulk PZT with the c/a ratio of 1.05, the features in the ZRA plane open in the R point, not in the Z point as seen in Figure S3, Supporting Information.
However, for the DW sample, the iso-E map recorded in the ZAR plane 0.5 eV below VB is not consistent with the iso-E calculated in the tetragonal cell as shown in Figure 2a. It shows in addition to the expected signature of the hole-like band in the Z point, four elliptical-shaped features centered in the A points of the k-space. Their appearance is consistent with a RH reconstruction of the PZT unit cell. This is evident from the calculated 3D iso-E surface of PZT in a RH unit cell at the same energy with respect to the VBM as in the experimental (k x ,k y ) map and presented in Figure 2b. The calculated iso-E map shows that the major axis of the pocket around the A point derives from the projection of the WXW direction of the RH cell on the (001) direction of the TG cell. In turn, the minor axis of the A-centered ellipse results from projecting the UXU direction of the RH cell on the (001) plane of the TG-cell BZ.
Indeed, it has been shown recently how across the interface between transition metal oxides, [40] the magnitude and pattern of the octahedral tilts extend the picture of epitaxial-induced strain by the substrate, [41] to additional structural reconstruction propagating from the substrate into the top layer across a thickness of 2-3 nm. Such an effect explains the morphing of the substrate crystalline structure into PZT top layer The mechanism of imprinting the substrate crystal structure into the top epitaxial layer is independent of the ferroelectric state, involving a combination of substrate-induced strain and continuity of the octahedral rotations. [42] Notably, the reconstruction observed for PZT when grown on LSMO keeps up to the room temperature, excluding the scenario of a low-temperature RH reconstruction, characteristic to many perovskite oxides in the bulk phase [43] ( Figure S4, Supporting Information), with the additional temperature-dependent X-ray Figure 2. Signature of RH reconstruction. In-plane (k x ,k y ) iso-E maps of PZT recorded with hv = 465 eV at 0.5 eV below the VBM in the ZAR plane of the TG cell. The red line indicates the calculated isocontour at the corresponding energy assuming TG geometry of the unit cell. Gray contours represent the additional signature of the RH-distorted unit cell for DW (a). Such a signature is absent for the UP sample. The signature of the RH distortion is rendered into the TG unit cell in (b), with the iso-energy surfaces calculated at the same energy, 0.5 eV below VBM, as the experimental maps. absorption measurements further supporting the different crystalline state of PZT grown on the two different substrates ( Figure S5, Supporting Information, and accompanying discussion).
The effect of the RH distortion which manifests as folding of the fundamental pseudocubic unit cell along the k-space diagonal is illustrated for the band structure recorded across the RAR and XMX direction in Figure S6, Supporting Information, and the mechanism described in the accompanying discussion.
The electronic band structure recorded along two high symmetry directions, XΓX and RAR of the tetragonal BZ is given in Figure 3. The bands in the XΓX directions derive from Pb 4s states while those along RAR correspond to O 2p states hybridized with Ti 2p as seen in Figure S7, Supporting Information. PZT band dispersions in UP and DW samples are qualitatively similar, however, rigidly shifted by 1.25 eV one with respect to the other. This indicates an existing offset Λ, of the surface potential which results from the FE-induced band bending. Hence, the electronic states of PZT in the UP sample are displaced toward higher BE compared to PZT in DW. Such variation of the potential inside PZT is the standard indication of the opposite, out of plane FE states [25,[44][45][46][47][48][49] in free-standing ferroelectrics. It occurs in principle when negative/positive charge carriers accumulate close to the surfaces in order to screen the accompanying DF established in the material with opposed P+/P− FE states. [50,51] The sharp band structure obtained in our ARPES measurements suggests that the expected band bending close to the surface is small, not exceeding 200-300 meV, [52][53][54][55] as otherwise it would smear the entire band structure and obscure the observation of clearly dispersing electronic states. This is not surprising since the potential inside the FE may flatten in the proximity of the interface with metallic electrodes or polar adsorbates. Such effect is accompanied by bending of the potential inside the electrode or in the contamination layer in the opposite direction from that of the FE due to negative/positive charge accumulation in the outer material, [25,48,49] at the interface with P+/P− FEs.
Most important, the well-defined electronic dispersions observed in our ARPES experiments performed on samples which are normally strongly insulating indicates the existence of a considerable amount of free carriers [34] which prevents the inherent charging effects in photoemission. Their origin was discussed before and the mechanism identified in the self-doping effect which develops as a route to stabilize the FE state in thin films. [27] However, their explicit signature in the band structure has not been yet identified.
We will now clarify the mechanism of band alignment from the analysis of the core-levels recorded on both UP and DW samples, identifying the contamination layer as a source of imperfect screening for the DF at the PZT surface. Consequently, the remaining uncompensated field, by opposing the FE-induced field, tends to reduce the band bending potential close to the surface. [25]
Polarization-Dependent Band Alignment Mechanism
Assuming that we have only the un-screened, fixed polarization charges at the top and bottom extremities of a FE, the internal field should be described by a linear variation of the potential, rigidly followed by the entire electronic structure, including both the valence states and the deep core levels. In order to overcome this energetically unfavorable situation and compensate for the depolarizing field (DF), thick films will develop domains with different FE polarization and minimize the total energy [16,27] while in thin films, in the absence of compensation charges, the ferroelectricity may be simply suppressed. [56] In order to sustain their FE state, thin films compensate for the DF through intrinsic or extrinsic mechanisms. Intrinsic one involves migration of charge carriers already available in the film, resulting in creation of negative and positive charge sheets at the opposite surfaces of the layer. These charge carriers are created through spontaneous alteration of the ideal stoichiometry during the growth with creation of cation or oxygen vacancies. [27] Extrinsic one involves opposite charge accumulation or depletion with respect to the fixed polarization charges from the FE material, in the outer material: the metallic electrode or inside the contamination layer at the surface with the air in order to screen the DF. Such charge modulation effect induced into the joining material exceeds a pure electrostatic picture as the electric field outside the ferroelectric is rigorously zero, and should not impact the electronic structure of the joining material. Nevertheless, it depends on two factors: i) on the particular band alignment at the metal/FE or FE/contamination layer interfaces which involves both the work function difference between the FE and the metallic contact or surface contamination layer and the FE state. [25,34] The band alignment picture and the resulting potential profile across the FE defines the regions with positive/negative charge accumulation at the extremities of the FE. Under applied bias, it controls how and if the required electrons and/or holes are injected across the interface and how, by diffusing through material, accumulate at the surface/interfaces of the FE [16,25,46,47,57] in order to compensate for the DF. This further translates into material-dependent potential inside the FE as well as in the joining contacts; and ii) on the polar character of the interface where the electric dipoles possibly resulting at the interface may locally account for additional charges required to stabilize the FE state. Importantly, the dipolar field defined by the fixed polarization charges in the FE and the mobile depolarizing charges localized at the surface or interface decays as d −3 , where d is the distance from the surface dipole, confining the induced charge modulation at the first unit cell of the metallic contact. [58] The mechanism of band alignment at the interface of the PZT samples with the substrate and at the surface depending on their FE state is given in the sketch from Figure 4a. It shows the variation of the potential inside the FE film going from the surface toward the interface with the substrate and is deduced by correlating the surface sensitive ARPES data with more bulk sensitive results of the core levels, which extend the probing region down to the interface with the substrate. The dashed line represents the potential profile when opposite charges accumulate into the FE film close to the interfaces for screening the DF. With full lines are qualitatively traced the potentials V(z) inside our FE films, for both UP and DW samples, derived from the experimental data. The relative BE shift of 1.25 eV between the UP and DW samples identified in the ARPES measurements and the sharp band dispersions is compatible with the small band bending close to the surface and with the observed FE-induced band offset over the first ≈ 1-2 nm close to the surface.
Going toward the bottom interface, we extract the information on the band lineup by analyzing the BE difference between the UP and DW samples of the Pb 4f 7/2 core levels, Δ BE (Pb) = Pb 4f 7/2 (UP) -Pb 4f 7/2 (DW) (Figure 3b) and Ti 2p 3/2 , Δ BE (Ti) = Ti 2p 3/2 (UP) -Ti 2p 3/2 (DW) (Figure 3c). The calculated inelastic mean free paths ( ) of Pb 4f and Ti 2p photoelectrons corresponding to our excitation energy of 1100 eV are in the 2.5 nm-3 nm range. [59] This translates in a probing depth l ≈ 3 , enough to penetrate through the thin FE layer and record the Sr 3d signal from the substrate as well. At the bottom interface, the band bending between PZT and the LSMO|Nb:STO substrates in the DW/UP samples and the corresponding V(z) profile follows the expected trend derived from their W f differences. More exactly, with the values for W f : eV, [35,36] we estimate Δ UP = PZT -Nb:STO = 0.4 eV, corresponding to upward band bending at the bottom interface and consequent shift toward lower BEs across the bulk of PZT in the UP sample. For DW sample, Δ DW = PZT -LSMO = -0.3 eV identifies the downward band bending of PZT at the LSMO substrate, accompanied by a shift toward higher BEs of the electronic structure in the FE layer close to the PZT|LSMO interface. In addition, the large Δ BE (Sr) = Sr 3d 5/2 (UP) − Sr 3d 5/2 (DW) = 1.1 eV in the Nb:STO and LSMO substrate (Figure 3b, inset, and Table 1), by exceeding Δ BE (Pb) = 1.0 eV and Δ BE (Ti) = 0.9 eV indicates the opposite variation of V(z) in the two substrates of UP/DW samples. In the UP sample, it decreases and consequently Sr 3d increases to higher BEs, with the opposite increase of V(z) in DW and the shift toward lower BEs of its electronic structure.
Such opposite variation is not expected for the W f -dependent band alignment at metal|FE interfaces where the metal band structure and its V(z) should in principle remain immune to the interface formation. This rather points at a FE-induced alteration of the carrier density into LSMO and Nb:STO substrates as well, where close to the interface with PZT, the substrate supplies positive/negative carriers to screen the fixed interface dipole charges [24,58] and stabilize the UP/DW FE state. Accordingly, V(z) in Figure 4b describes the positive charging into Nb:STO close to the interface to compensate the FE polarization pointing away from the substrate (P+), while a negatively charged LSMO interface stabilizes the FE state pointing toward the substrate (P−). These results are consistent with previous findings when metal/FE interfaces feature positive or negative charging in order to stabilize opposed FE states. [25,34,46,47,57,60] The substrate-induced band bending at the bottom interface appears as an essential condition to stabilization of the FE state, facilitating the accumulation of negative/positive charge sheets at the surface/bottom interface to stabilize the P+/P− FE polarization.
On the other hand, the quantity Δ BE (C) = C 1s (UP) − C 1s (DW significantly deviates from the value of Λ = 1.25 eV, derived from ARPES surface sensitive measurements and expected for the band alignment at the upper interface. Its significantly smaller value Δ BE (C) = 0.3 eV ( Figure S8, Supporting Information) indicates the band bending toward higher BEs in the DW sample and toward lower BEs in UP. Such opposite variations of V(z) clarifies the mechanism for FE state stabilization, [25] with the charged contamination species providing the required fixed dipoles to stabilize the FE state. [44,58,61] However, the fact that the FE-induced band bending is lost at the surface suggests that they may simply be not enough to fully compensate for DF. In the measurements performed at RT, presented in Figure 4d,e for Pb 4f respectively Ti 2p the same trend is identified, featured however by a smaller band offset at the PZT surface, Λ = 0.7 eV deduced from the ARPES measurements ( Figure S9, Supporting Information), Δ BE (Pb) = 0.6 eV and Δ BE (Ti) = 0.3 eV. Since W f should not depend on temperature, this implies that the decrease of Λ and Δ BE s relates to changes in the FE nature of PZT. Indeed, in Figure 4d we identify the development of an additional component at lower BEs in the Pb 4f spectra indicating the clustering of metallic Pb resulted from broken Pb-O bonds, [16,44] accompanied by formation of oxygen vacancies (OVs) in the UP sample. The absence of such a component in the DW sample indicates that the OVs and the accompanying negative charges developing into the UP sample are a requirement to compensate for the DF and stabilize the P+ FE state. They accumulate close to the PZT surface to create a negative charge sheet which screens the fixed, polarization charges. In the DW sample featured by P− FE state on the other hand, the DF generated by the fixed negative polarization charges at the PZT surface and positive at the bottom interface with LSMO are screened by the creation of CVs (Pb and Ti).
Such ferroelectric-dependent composition is confirmed from evaluating the stoichiometry using a monochromatized laboratory X-ray source (Al K = 1486.74 eV from Kratos Analytical Ltd.). Due to its flux, orders of magnitude smaller compared to the synchrotron radiation facilities, the additional beam damage effects are negligible. Based on the data presented in Figure S10, Supporting Information, we derive the O/Pb and O/(Ti+Zr) ratios in UP and DW samples which are collected in Table 2.
While the O/Ti ratio in the UP sample is 3.01, close to the ideal value of 3, in DW it is 4.28, indicating a rather large Ti and Zr deficiency. O/Pb ratio indicates the same trend, with creation of Pb vacancies in the DW sample and OVs in UP sample. Such deviations of the Pb, Ti and Zr ratio from the ideal values should impact the crystallinity of the sample through defect formation. The fact that the ARPES images from Figure 2 are sharp and well-resolved on both samples, suggests that such defects accumulate more toward the bottom electrode, outside the thickness probed in the ARPES measurements. This means that the UP/DW samples stabilize their well-defined FE states already during their growth at the temperature exceeding their Curie temperature T C , [27] with the identified altered stoichiometry appearing as a fundamental requirement for screening of the DF and stabilization of the FE state.
However, we may reasonably assume that the additional generation of OVs, or CVs and their migration at RT, accompanied by alteration of the ideal stoichiometry are detrimental to the FE polarization, a fact seen in the smaller band offset, and FE-induced band bending, Δ BE (Pb) and Δ BE (Ti).
Nevertheless, the fact that the measurements performed at T = 12K and at RT are featured by only quantitative variations of Λ, Δ BE s values is a convincing evidence that the band bending profiles keep similar shape, although with reduced curvature, indicating that the FE state of UP/DW state remains the same P+/P− both at RT and at T = 12 K.
On the other hand one cannot a-priori exclude that the subtle balance between the positive and negative charges generated under the X-ray beam may influence the screening of the DF by building up additional charges which may migrate and accumulate at both faces of the FE layer, eventually altering the FE phase. [16,62] Previous studies have also identified development of OVs accompanied by migration and clustering of metallic Pb [16,62] under irradiation with intense X-ray synchrotron radiation and possible lost out of plane polarization state.
A more detailed view on the dynamics of charge creation under the X-ray beam and of the compensation mechanisms of the FE states, extracted from the resonant photoemission performed at the L absorption edge of Ti is discussed in the following section. The technique offers further insight into the physics of PZT being particularly effective in separating the band-resolved spectral signature of Ti in different valence states. [33,63,64]
Effect of X-Ray Irradiation; Oxygen and Cation Vacancies
Exploring the nature of the defects developing under intense synchrotron X-ray flux is informative in our case for at least two reasons: i) firstly, it emphasizes the mechanism of FE state stabilization ramping up as a reaction to the gradual creation of charged OVs and CVs; ii) secondly, certifies that the measurements correspond unambiguously to well-defined FE state of PZT which do not change (remains stable) under X-ray exposure. Figure 5a presents the angle-integrated resonant photoemission (ResPE) intensity within the VB for UP sample, recorded in saturation conditions at RT readily identifying a strong signal at ≈0.80 eV below E F .
The ResPE spectra near the VBM recorded at the temperature 12 K (Figure 5b) and RT (Figure 5c) are collected using energies resonant with the dipole allowed 2p-3d transitions of Ti 3+/4+ (ℎ = 465 eV), and outside the resonance (ℎ = 467.5 eV) marked with dotted lines in Figure 5a. They show the development of in-gap states (IGS) spectral weight signatures significantly increasing at RT.
Such in-gap states have been observed in SrTiO 3 , [65] BaTiO 3 [66] and at SrTiO 3 -based interfaces [33,63,64,67] and their origin identified in OV states created either under X-ray irradiation or annealing. For our UP samples, the candidates to these states may be the OVs, CVs, metallic Pb or combinations of them. We can exclude metallic Pb generated by the X-ray exposure since the IGS signal should have been accompanied by intensity at E F too while as seen in the ARPES images in Figure 3b,d and in the integrated ResPE data from Figure 5b,c, this is not our case. This also excludes Ti vacancies with Ti accumulating at the surface since there is no signature of Ti 0 resonance in the ResPE spectra. This, Figure 5. Resonant angle-integrated intensity within the VB recorded across the Ti L 2,3 absorption edge for UP sample (a). Overlaid is the X-ray absorption spectrum recorded across the same energy interval. Angle integrated spectra of the VBM in resonance with Ti 3+ and Ti 4+ absorption (yellow and orange lines in a) and outside the resonance (blue line in a) at T = 12 K (b) and at room temperature (c).
together with the on/off enhancement when going through the Ti 3+/4+ resonance identifies OVs at the origin of the IGS.
It is however surprising that Ti 3+ signal is mostly absent in the XPS spectrum (Figures S10 and S11, Supporting Information). This indicates that OVs build up only in the first 1-2 unit cells at the top PZT region and their observation requires both the surface sensitive ARPES coupled with the resonant enhancement of ResPES. [68] Their weaker intensity at T = 12K supports our scenario with OVs generation under the X-ray beam and their migration toward the surface in order to screen the DF and stabilize the P+ FE state in the UP sample. The migration is expected to be frozen at 12 K while at RT they can easily travel toward the surface and stabilize a sheet of negative charges.
This behavior differentiates the dynamics of OVs in PZT from STO and other STO-based heterostructures, where upon increase of temperature the OVs created by X-rays diffuse toward the bulk. [64] In the hypothesis of OVs creation in order to stabilize the P+ FE state, no signature of IGS should be visible in the DW sample, which rather requires positive charge accumulation close to the surface. Indeed, as seen in Figure 6a, which presents the angle-integrated ResPE intensity within the VB for DW sample, recorded in saturation conditions at RT reveals no trace of IGS. Valence band spectra collected with on and off resonant hv values, marked with dashed lines in Figure 6a and corresponding to the Ti 3+ /Ti 4+ absorption are given in Figure 6b for the T = 12K measurements and in Figure 6c for RT. Apart from the resonant enhancement by a factor of 3-4 [33] observed for the states located deeper into the VB, there is no indication of IGS formation, as expected for the P− PZT in the DW sample, when the FE polarization pointing toward the interface requires positive charge sheet toward the surface in order to screen the DF and stabilize the FE state.
These positive charges are detected in the FE-dependent stoichiometry variations which stabilize the P− state as seen in Figures S11 and S12, Supporting Information, and the accompanying discussion.
The non-dispersive character of the IGS associated with OVs can be seen in the ARPES images presented in Figure 7a,b for UP and in Figure 7(c,d) for DW. The k-resolved VB region recorded at RT along the AZA direction of the BZ identifies the nondispersive OV-related band, manifesting the standard flat signature of a defect state. [31,33,69,70] We may compare the formation of such impurity bands in our P+ PZT with the extensively studied case of STO. There, similar to PZT, the OV site may be either occupied by two electrons or may have single occupancy with the additional electron released into an itinerant 2D electron system (2DES). The onset between localization of the doubly occupied OV with filled Ti 3d e g orbitals and itinerancy of t 2g electrons corresponding to singly occupied OV is given by the increase of the electronic correlations. [71,72] Moreover, it has been shown that the energy of the IGS in STO depends on the strength of the electronic correlations, moving it away from E F as the correlations strength increases. [70,73,74] Compared to STO, where the position of the IGS is reported at ≈1.3 eV Figure 6. Resonant angle-integrated intensity within the VB recorded across the Ti L 2,3 absorption edge for DW sample (a). Overlaid is the X-ray absorption spectrum recorded across the same energy interval. Angle integrated spectra of the VBM in resonance with Ti 3+ and Ti 4+ absorption (yellow and orange lines in a) and outside the resonance (blue line in a) at T = 12 K (b) and at room temperature (c).
below E F , [75] in our case, the IGS of PZT at ≈0.85 eV as seen in in Figure 5 and in Figure 7a,b suggests weaker correlation effects in PZT compared to STO.
Indeed, from Figures 5 and 7 it is evident the absence of any itinerant t 2g -derived bands in the form of 2D electron system (2DES) since there is no intensity at the E F . With the presence of IGS only having e g character it is reasonable to assume that PZT is a weakly correlated system, and the electronic correlations are not strong enough to drive the system into a dichotomic regime of localized e g states away from E F and itinerant electrons into t 2g orbitals at E F like in STO [65,69,75] or STO-based interfaces. [33,76] In addition, we expect that the IGS identified in PZT to carry significant magnetic moments, in the range of 0.1-0.3 μ B similar to the STO case. [70,72,74] The observed dynamic effects of gradual negative carriers generation in the form of OVs under the X-ray beam as a requirement to stabilize and maintain the FE state is characteristic to FE PZT, However, we emphasize that this gradually developing mechanism can be extended to other perturbations to which the FE system must adapt, for example, to the applied bias which switches the FE polarization between opposed states, which should be accompanied by dynamic creation of positive or negative charges which migrate through the bulk and redistribute close to the surfaces.
Outlook and Conclusions
Our pioneering SX-ARPES measurements on strained thin films of PZT grown on STO and LSMO substrates establish the intrinsic k-resolved electronic structure of this paradigm FE material in two opposite polarization states. The large photoelectron escape depth achieved in the soft-X-ray energy range has been essential for resolve the 3D band structure as well as reconstruct the electrostatic potential profile across the PZT films and their band offset to the substrates. Moreover, resonant ARPES at the Ti L-edge has elucidated the Ti character of the delocalized and localized defect valence states.
These results have informed, first, about the mobile and fixed charges involved in the stabilization of the FE polarization and in screening of the DF. We have found that the negative charges near the surface, required to stabilize the P+ state of PZT grown on n-doped Nb:STO, are V O s. In the ARPES data, they are identified as non-dispersive Ti-derived localized states falling into the PZT band gap at 0.8 eV below E F . The PZT film accumulates this surface charge by diffusion of V O s toward the surface, as evidenced by the dynamics of their resonance response in the Ti L-edge spectra under X-ray irradiation. In analogy to STO, we conjecture that the V O s in PZT may form ferromagnetic clusters. In turn, the positive charges required to stabilize the P− state of PZT grown on LSMO, are generated by a self-doping mechanism through creation of Pb, Ti and Zr vacancies as derived from our quantitative XPS measurements.
The second aspect of our findings is the interaction of PZT with the substrate, crucial for the stabilization of either P+ or P− polarization state. From the experimental k-resolved band structure, we find that the crystallographic structure of the substrate propagates through the full thickness of our PZT thin films. Specifically, the PZT films inherit the TG structure when grown on Nb:STO, and the RH structure when grown on LSMO as manifested by replica bands in the ARPES spectra. Such a structural shift goes beyond the conventional treatments of this material assuming only its small deviations from the pseudocubic structure.
In a wide perspective of our findings, the observed effect of doping with anion and cation vacancies induced by the FE polarization can be utilized for tuning electronic structure of ferroelectric and multiferroic layers in heterostructures. Such a way of defect engineering may find applications, for example, in energy storage and conversion materials where creating localized electron energy levels is a way to enhance absorption of light in the visible range. Furthermore, the energy levels of V O s, if in ferromagnetic clusters and properly aligned in energy, can be used for spin injection in FE heterostructures. The observed structural variance of FE materials, in turn, can find applications in spintronics. For example, they can be interfaced to non-collinear spin antiferromagnets, where symmetry-controlled propagation of the FE instability across the interface can modify the period of the spin cycloid [77][78][79] or generate topologically non-trivial spin textures. [80] For the RH-distorted polar perovskites, this can be achieved by selecting one of the four FE displacements along the [111] diagonals of the pseudocubic unit cell, while the TG structures display distortions along the c-axis only. Furthermore, the incorporation of atoms with high atomic number (Z) into the FE materials possessing inherent breaking of the inversion symmetry can constitute a route to enhance spin-orbit interaction effects where the spin-split bands carry locked spins and even net spin currents.
Experimental Section
Pulsed Laser Deposition: Single crystalline STO(001) and Nb:STO (001) substrates with a miscut angle of 0.05-0.2°(CrysTec, Berlin) were used to prepare the investigated structures. In order to obtain high quality thin films on single crystalline STO and Nb:STO substrates, preliminary substrate preparations were performed. They consist in the transformation of the optically polished surface into a stepped and terraced surface, which is well ordered at the atomic scale. For this purpose, STO substrates were etched in NH 4 -HF solution for 15 s in order to remove Sr residues and then thermally treated for ≈4 h at elevated 1000°C. In this manner a purely TiO 2 -terminated surface was obtained. All resulting steps are approximately equal in height (single unit cell ≈ 0.4 nm), parallel, and equidistant.
Atomic force microscopy images of etched substrates before the PLD deposition and of the resulting PZT layers are given in Figure S13, Supporting Information.
Ablation of commercial PZT and LSMO targets from Praxair was performed using a KrF laser ( = 248 nm) with a repetition rate of 1 Hz for LSMO and 5 Hz for PZT at a laser fluence of 2 J cm −2 . During LSMO deposition, the substrate was kept at T = 700°C and then decreased with a ramp of 10°C min −1 to 575°C for the PZT layer. On the Nb:STO substrate PZT was grown at T = 575°C. The oxygen pressure during the deposition was 0.2 mbar for PZT and 0.27 mbar for LSMO. After deposition, PZT films were post-annealed in the deposition chamber at 575°C for 1 h in 1 bar O 2 atmosphere. They were then transferred in the N 2 atmosphere from the preparation chamber to the analysis chamber, which was observed to bring only minimal surface contamination, easily overcome by the large probing depth of the soft X-ray range.
X-Ray Diffraction: X-ray diffraction studies (XRD) were performed at room temperature on a Rigaku SmartLab diffractometer in high resolution settings (X-ray mirror and two bounce Ge(220) monochromator, l K 1 = 1.5406 Å). The reciprocal space mapping (RSM) was performed on a Bruker D8 Advance diffractometer (Bruker AXS GmbH Germany) with copper anode X-ray tube in medium resolution parallel beam setting (X-ray mirror and nickel filter; l K 1 = 1.5406 Å, l K 2 = 1.5445 Å, l Kb = 1.3922 Å).
Density Functional Theory: The calculations were performed within the generalized gradient approximation (GGA) using the quantum ESPRESSO plane-wave code, [79] and the exchange and correlations functional in the Perdew-Burke-Ernzerhof (PBE) parametrization. Norm conserving pseudopotentials from PseusoDojo were used. [80] Ti-Zr substitutional doping was treated by means of virtual crystal approximation (VCA) replacing each A-site of the perovskite with a fictitious atom with fractional valence, instead of explicit doping which would require a significant computational burden due to larger supercells.
The kinetic-energy cut-off for the plane waves was set at 60 Ry and for the charge density at 240 Ry. The BZ integration was performed on an automatically generated Monkhorst-Pack 10 × 10 × 10 k-mesh, with Gaussian energy level smearing of 0.02 Ry.
Then, the internal coordinates were relaxed at the experimental c/a = 1.07 ratio, with the in-plane parameters resulting from the cell relaxation. Visualization of the crystalline structures had been performed by using the Vesta software. [83] ARPES Measurements: SX-ARPES experiments were carried out at ADRESS beamline at Swiss Light Source which delivers high photon flux in soft X-ray range, [81] allowing the band structure investigation of 3D materials with the additional benefit of a remarkably sharp momentum resolution along the out-of-plane direction k z and thus full 3D momentum. [82] The relationship between electron momentum (k || ,k ⊥ ), photoelectron kinetic energy, and the photoelectron emission angle are given in Ref. [81]. ARPES measurements had been performed in pressure better than 10 −11 mbar, at the temperature of T = 12 K and at T = 300 K. Fermi level was calibrated using a silver foil in electrical contact with the sample. The combined resolution (thermal broadening in addition to the photon beam and the ARPES analyzer) was of ≈70 meV.
The XAS measurements had been performed in total electron yield mode, by measuring the secondary electrons current photogenerated in the sample, with the angle between the sample and incident X-rays of 23°. X-ray linear dichroism (XLD) signal, which is the footprint of orbital energy and occupation, is calculated as the difference between the normalized signal generated by linear horizontal and linear vertical-polarized radiation on the sample.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 10,491.6 | 2023-01-02T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Lime and Cement Plasters from 20th Century Buildings: Raw Materials and Relations between Mineralogical–Petrographic Characteristics and Chemical–Physical Compatibility with the Limestone Substrate
: This paper deals with the “modern” plaster mortars based on air lime, hydraulic lime, and cement used between the 1950s and 1990s of the last century, taking, as a case study, a historical building of the Cagliari city whose foundations and ground floor are cut into in-situ limestone. Different plaster layers (i.e., arriccio and intonachino , paint), applied on the excavated limestone walls, were collected from cave-room. All samples were analysed by optical and electron (SEM-EDS) microscopy and X-ray diffractometry (XRD) in order to define their microstructures, textures and compositional features. In addition, real and bulk density, water and helium open porosity, water absorption kinetic, and saturation index were measured. By microscopic imaging analyses, the binder/aggregate ratio as vol.% was determined. Results revealed that cement mortars, composed mainly of C-S-H, C-A-H, and C-F-H phases, given their high hydraulicity, low open porosity, and a rigid behaviour, showed a good chemical but not physical–mechanical adherence, as they were often found detached from the substrate and frequently loaded with salt efflorescence. On the contrary, the hydraulic lime-based mortars, characterised by a binder composed of C-S-H and C-A-H phases and calcite derived from the portlandite carbonation, showed a greater affinity with limestone substrate and other plasters. Thus, they are more suitable to be used as a repair mortar, showing a long durability on the time. The thin air lime-based plasters ( intonachino ) showed a good adhesion to the substrate, exerting their coating function better than the harder, cement-based mortars. Lime-based wall paints have a good chemical adhesion and adaptability to the irregular surface of the substrate, due to low thickness of lime paint layers (<1 mm) that confers an elastic
State of Art and Aims of Research
The use of mortars has been well documented since ancient times. Mud and clay were likely the first binders, given not only their wide availability but also the low technology required for their application [1,2]. Lime-based mortars have been used since at least 6000 B.C. as testified by several archaeological sites, among others, in Israel, Syria [3], and Turkey [4]. During the later centuries, air lime mortars were adopted by several civilizations, such as the Egyptians, Minoans, Greeks, Romans, etc. The raw materials, the calcination technologies and the building techniques evolved in different ways from one locality to another, leading each place to obtain its own style and best practices [2]. A
Location of Site in the Historical Context of Cagliari
Cagliari is the capital city of Sardinia and is located in the southern part of the island, which is situated in the middle of the Mediterranean between the Balearic and Tyrrhenian seas. The city is rich in history, of which numerous monuments still remain. These include those from the Neolithic period (domus de janas and some huts of the IV-III millennium BC), to the Phoenician-Punic period following the eighth century BC (e.g., founding ports near the pond of Santa Gilla, creating the Tuvixeddu necropolis, considered the largest Several mortar samples were collected from different points of the patchy-plastered walls. The sample set included plasters of different compositions (air lime-, hydraulic lime-, and cement-based) and with different functions, i.e., arriccio coat (from the traditional Italian plastering) and finishing coats (hereafter intonachino, paint). The following features have been studied and determined: (i) mineralogical and petrographic characteristics of the aggregate and C-S-H and C-A-H phases present in the binder, through optical polarised light microscopy and XRD analysis; (ii) physical (density, porosity, water absorption) and mechanical properties by H 2 O-and He-picnometry; (iii) the relationships between compositional and physical-mechanical characteristics; (iv) the differences in physical-mechanical behaviour between air lime-, hydraulic lime-and cement-based mortars; (v) compatibility of the materials used to restore the plasters on the walls above the rocky substrate and their structural and aesthetic-decorative durability.
Punic necropolis in the Mediterranean), to the Roman period, from 238 BC (with the important amphitheatre and suburban villas such as the so-called Villa di Tigellio), passing to the vandals in the mid-fifth century (in which the Basilica of San Saturnino was built, remodelled in the Romanesque period). The city was then reconquered by Justinian's Eastern Romans in 534 AD and was again in Byzantine hands, until the Giudicale, period when the centre of the city became the village of Santa Igia (contraction of Santa Cecilia). Subsequently, with the arrival of the Pisans (1216-1217) and the destruction of the village of Santa Igia (1258), the centre of Cagliari became the current fortified district of Castello (hence the name of the city Castellum Castri de Kallari, Casteddu in Sardinian dialect), with the adjoining of the port of Bagnaria (later La Pola), connected to the Castle, through the current Marina district. In this period, several monuments were built, such as San Pancrazio (1305) and Elephant (1307, Figure 2a) Towers. Starting from 1323, the Aragonese besieged Cagliari, and built their stronghold on the southernmost hill of Bonaria where they settled a new port, leaving the Castle to the Pisans until 1325 when they completely conquered the city. In the XV-XVII centuries, the important wall fortifications were built substantially close to the Castello district and are still observable today. The Spanish domination lasted until 1708, following the War of the Spanish Succession, with the arrival of the Anglo-Dutch and the various subsequent socio-political vicissitudes. Starting from the nineteenth century, after the unification of Italy, the fortification walls were demolished and the foundations were laid for the great expansion of the last century, with the participation of important architects, including Gaetano Cima and Dionigi Scano, who redesigned the urban centre according to the neo-classical and neo-gothic style, with the construction of In the XV-XVII centuries, the important wall fortifications were built substantially close to the Castello district and are still observable today. The Spanish domination lasted until 1708, following the War of the Spanish Succession, with the arrival of the Anglo-Dutch and the various subsequent socio-political vicissitudes. Starting from the nineteenth century, after the unification of Italy, the fortification walls were demolished and the foundations were laid for the great expansion of the last century, with the participation of important architects, including Gaetano Cima and Dionigi Scano, who redesigned the urban centre according to the neo-classical and neo-gothic style, with the construction of the municipal building in Pietra forte, the characteristic liberty buildings, and numerous other palaces or buildings of historical and cultural interest. The site under study belongs to one of these twentieth century buildings, located in Piazza Yenne, one of the most important historical squares in Cagliari, located at the end of Largo Carlo Felice in the Stampace district. It represents the basement part (cave) of an historic building, excavated entirely within the outcrops of the most important limestone lithologies of the Cagliari area, namely the Pietra forte and the Tramezzario, which are found extensively along the slopes of the Castello hill. The site, called Grotta Marcello, was built in 1943 according to a project approved by the Military Authorities, for the construction of a large room to be used by citizens of Cagliari as air raid shelter during the Second World War. The works were carried out by the widening of a natural cavity, probably attributable to the karst process of the limestone rocks. The basement room includes a semi-circular central body with a vaulted roof (Figure 1b), with an area of about 180 square meters, from the bottom of which two branches branch off, which are opposite to each other and comprising six side niches, realised in later times. The floor area of the two aforementioned branches, including the niches, is 270 square meters. The cave, due to its importance it has had in the recent history of Cagliari and for its position in the building fabric of the historic city centre, and for the considerable interest it still has today, was declared in 2007 to be an "asset of cultural, historical and artistic interest" pursuant to Article 10 (paragraph one) of Legislative Decree No. 42 of 22 January 2004, by the Ministry for Heritage and Cultural Activities.
Geological Setting
Sardinia, together with Corsica, forms a continental microplate consisting of a Palaeozoic basement (Variscan metamorphics and syn-post Variscan granitoids) and widespread volcanic and sedimentary covers, from the Upper Carboniferous to Quaternary periods. The major thickness of unmetamorphosed covers are reached in an N-S trending depression known as Fossa tettonica sarda [57] or Rift of Sardinia [58,59], or the Sardinia Trough [60,61], that extends for 220 km (from the Sassari to Cagliari gulfs), and in the Campidano Plain, a Plio-Pleistocenic graben between the Cagliari and Oristano gulfs ( Figure 3).
Four marine sedimentary cycles, associated with as many volcanic events, have occurred in Sardinia since the late Oligocene to the Pleistocene periods, leading to the deposition of thick volcano-sedimentary covers. The significance, the extent, the tectonic regimes, and the ages of these cycles are still a matter of debate leading, to different interpretations of the geodynamic scenarios and of the sedimentary environments (for instance, compare [58,[60][61][62][63]. The area of Cagliari and its hinterland (southern Sardinia, Italy, Figure 3) is characterised by scattered outcrops of Miocene sedimentary covers, mainly represented by fossil-rich marine deposits, belonging to the second and third Oligo-Miocene cycles.
The Miocene series of the Cagliari area consists, from the bottom to the top, of the following formations: Marne di Gesturi Fm., Argille del Fangario Fm., Arenarie di Pirri Fm., and the mainly carbonatic succession known as Calcari di Cagliari Auct.
The Marne di Gesturi Fm. consists of a sandy to silty marls facies with arenaceous intercalations and a pyroclastic-epiclastic facies, of Upper Burdigalian to Middle-Upper Langhian age, referred to a bathyal environment. This formation is overlaid by the Argille del Fangario Fm. (Middle-Upper Langhian Lower Serravallian), consisting of a sequence of clay deposits of bathyal environment that, towards the top, become progressively more arenaceous, indicating a decrease of the bathymetric depth. The appearance of arenaceous littoral deposits belonging to the Arenarie di Pirri Fm., widely outcropping in the Cagliari The Pietra cantone unit consists of yellowish marly-sandy limestones hosting abundant fossils that indicate a shallow marine depositional environment (60-80 m depth b.s.l.; The Pietra cantone unit consists of yellowish marly-sandy limestones hosting abundant fossils that indicate a shallow marine depositional environment (60-80 m depth b.s.l.; [72]) and a Tortonian-Messinian age. A sharp, erosive surface separates the Pietra cantone from the above lying Tramezzario, comprising whitish biocalcarenites, which are locally marly. The abrupt change of biocenosis indicates the change of paleobathymetry (40 m according to Leone et al. [73]) that tends to decrease toward the top of the succession. The regressive tendency could have led to erosive processes, explaining the local absence of this unit. The top of the Miocene succession is represented by the Pietra forte facies, mainly characterised by biohermal whitish and compact limestones, locally massive, and subordinated biostromal limestones. Sedimentary structures suggest a littoral/infra-littoral environment with paleobathymetry lower than 30 m; fossils, although abundant, do not allow a precise age determination, however, based on its stratigraphic position, Pietra forte is referred to the Messinian age. The foundations and the walls of the ground floor of the studied building were carved into the in-situ limestones belonging to the Tramezzario (Figure 2b).
Use and Decay of Limestone Rocks in Historical Period
Sedimentary rocks (e.g., limestone, sandstone, etc.), particularly carbonate type, have been widely used in the construction of historical buildings in the Sardinian Island, as well as in many Italian or other monuments. This is generally due to their easier availability in the territory and especially to their better workability when compared to silicate igneous or metamorphic rocks [23,74].
The Miocene limestones outcropping in the Cagliari city area are frequently used in civil and historical architecture. The Pietra forte is a compact limestone with high physical-mechanical resistance, and therefore is hard to work (Figure 2a). The Pietra cantone ( Figure 2c) is a marly limestone characterised by low cementing degree, high porosity (28-36 vol% [39]), and, for these reasons, by an easy workability. The Tramezzario is a more compact limestone with intermediate petrophysical behaviour. For these reasons, and given their wide availability in the territory around Cagliari, Tramezzario and Pietra cantone limestones have been widely used in historical buildings ( Figure 2) of all periods, from Nuragic, to Phoenician-Punic, Roman, and medieval [75]. Pietra cantone owes its name to the ashlars (= cantone) being remarkable easy to be cut and squared off [76].
When these limestones are used on monuments in the presence of humidity or circulating aqueous solutions, they frequently undergo decay problems [39]. The chemical-physical decay is due to hygroscopic volume variations of clay minerals and sea salts in the rock, as well as to the dissolution and re-precipitation of calcite that make the limestone easily degradable and subjected to decreases in mechanical strength. When the limestone is used in the structural elements of monuments (e.g., ashlars in the wall, column, jambs), decay can lead to the formation of serious static-structural criticality in the buildings, as a strong retreat of vertical profile of the facade or detachment of material portions from the decorative elements, due to exfoliation and flaking processes (Figure 2d).
To prevent such a decay of carbonate rocks used in the monuments, numerous efforts regarding their water protection and surface consolidation have been necessary since ancient times and solutions can be retrieved thanks to laboratory experimentation. These chemical treatments differ both in the typology of products and in application methods. However, due to the different chemical-physical-petrographic characteristics of these lithologies, microclimatic conditions, and the alteration degree of the artefacts, the conservative techniques must be adapted to each case individually.
Materials and Methods
The survey was carried out according to the following operative phases: (i) architectural reading and analysis of the structural aspects (plan distribution, building systems, wall); (ii) in situ mapping of the macroscopic characteristics of geomaterials and their stratigraphy on the wall, including the decay forms and conservation state; (iii) sampling [77]); (iv) mineralogical-petrographic investigations by optical microscopy, X-ray powder diffraction (XRD) and by SEM-EDS microanalysis; (v) physical and mechanical analyses (porosity open to helium and water, real and bulk density, water absorption kinetic, imbibition coefficient, saturation index).
30 mortar samples (in some cases including the limestone substrate) were taken from eight different points (labelled from SM1 to SM8, Figures 4 and 5) of the building walls. [77]); (iv) mineralogical-petrographic investigations by optical microscopy, X-ray powder diffraction (XRD) and by SEM-EDS microanalysis; (v) physical and mechanical analyses (porosity open to helium and water, real and bulk density, water absorption kinetic, imbibition coefficient, saturation index). 30 mortar samples (in some cases including the limestone substrate) were taken from eight different points (labelled from SM1 to SM8, Figures 4 and 5) of the building walls. The sampling was carried out both at the surface of plaster layers and on the less altered rock substrate. The material was collected from the shallow parts of the masonry, according to the recommendations of the local Superintendence of Cultural Heritage, which imposes strict limits on the quantity of sample to be collected. The volumes collected are, however, representative and adequate for the analytical studies. From each sample, the following were realised: 30 μm thick, polished thin sections for optical and electron microscopy; prismatic-like specimens for determining the The sampling was carried out both at the surface of plaster layers and on the less altered rock substrate. The material was collected from the shallow parts of the masonry, according to the recommendations of the local Superintendence of Cultural Heritage, which imposes strict limits on the quantity of sample to be collected. The volumes collected are, however, representative and adequate for the analytical studies.
From each sample, the following were realised: 30 µm thick, polished thin sections for optical and electron microscopy; prismatic-like specimens for determining the physical and mechanical properties; a small aliquot of finely ground and homogenised powder for determining some physical properties (see below) and the mineral assemblage by XRD.
Physical tests were carried out according to [43] and [78,79]. The specimens were dried at 105 ± 5 • C for 72 h, then the dry solid mass (m D ) was determined by an analytical balance with four decimals.
A helium pycnometer (Ultrapycnometer 1000, Quantachrome Instruments) was used to determine the solid phase volume (V S ) of 5-8 g of powdered rock specimens (fraction less than 0.063 mm), and the real volume (V R = V S + V C , where V C is the volume of pores closed to helium) of cubic specimens (side of 15 mm).
The wet solid mass (m W ) of the samples was determined after water absorption by immersion for ten days. The hydrostatic mass of the wet specimen (m Hy ) was measured by a hydrostatic analytical balance and was then used to calculate the bulk volume (V B ) as follows: where ρ W T X is the water density at a T X temperature. V B is the bulk or apparent volume of the sample, resulting from the sum of the volumes V S + V O + V C (solid phases, open pores and closed pores to helium, respectively). Thus, the volume of pores open to helium can be easily obtained as Total porosity (Φ T ), open porosity to water and helium (Φ O H 2 O; Φ O He, respectively), closed porosity to water and helium (Φ C H 2 O; Φ C He), bulk density (ρ B ), real density (ρ R ), were calculated as: The weight imbibition coefficient (IC W ) and index of saturation (SI) were calculated as: The image analysis was performed by the JMicrovision v1.3.3 software, in order to describe and quantify the binder/aggregate (B/A ratio) of the mortars under study, to detect their porosity, and finally to classify the mortars on the basis of these parameters. In the case study, a "Point Counting" was used, that is a count of the points of an image, differentiating three classes in three different colours: binder (red colour); aggregate (green colour); macro-porosity with average pore radius > 50 µm (yellow colour). By setting the total point count on the image with a number equal to 750 units, the different percentages of the classes set on JMicrovision were determined.
Classification of Samples
Based on macroscopic and microscopic observation, compositional aspects, and technical-constructive function, the samples have been grouped into three categories ( Table 1): (1) Cement mortars (signed as CM) are present unevenly in the cave inner wall, and sometimes are also used for the installation of hydraulic or lighting systems, or to fill wall voids and/or to consolidate fractures and discontinuities (Figure 5b,c,g). They are characterised by a typical greyish to brown-grey cement-based binder (thus with high hydraulic degree) and a silicate aggregate (mainly quartz and feldspars). During the sample collection they appeared quite hard, suggesting heavy mechanical strength. [80] can be classified as HL mortars. Mechanical strength and hydraulic grade appear to be lower than those of cement mortars. All HLM samples look quite similar in grain-size, colour, and B/A but strongly differ in thickness; this parameter has been chosen to distinguish the three categories, and to evaluate if different HLMs were employed to achieve different thicknesses. According to the EN 459-1:2015 standard [80], the PL samples can be classified as air lime plasters. PL samples have been subdivided into three types depending on their macroscopic aspect, on the adhesion to the substrate and on their "stratigraphic" position.
If considering the function of mortars, two categories can be identified: arriccio layers (AR) are the plasters with coarse-grained aggregate (mainly ranging in 1-2 mm) used to fill voids and fractures, to flatten the rock substrate, and to create a rough surface that allows the grip of the finishing plaster. They are commonly 7 to 12 mm thick but can reach 25-30 mm when used as filler for voids. The AR binder layers can be either cement or hydraulic lime and they were found to be applied directly on the rocky substrate or above older plasters. -intonachino layers (INT) are the finishing plaster characterised by finer aggregates (<0.5 mm) and a low thickness (2-4 mm). They usually adhere to the AR layer, although one sampling point was found to lie directly on the rock substrate. All INT layers have a lime-based binder and a little amount of fine aggregates.
In addition, four coats of paint, alternating between the INT plaster layers, were found. The coats' thickness is commonly lower than 0.5 mm and shows a strong adhesion to the underlying plaster. These coats have not been analysed, but a rough observation indicates a lime-based matrix for three of them, whereas the last, fourth one seems to have another composition (with acronym PA).
Hydraulic lime mortar Finishing air lime plaster
Stratigraphy of Plasters and Decay
The complex stratigraphy is the result of the superimposition of several restoration interventions performed with different mortar materials, different aims (fill voids, uniform the surface, limit humidity, aesthetic improvement, etc.), and in different times. Moreover, many of these interventions were just patchy-like repairs that did not involve the whole wall surface and consisted of re-plastering with or without removing the older underlying plasters, which had, in some cases, partly detached due to the decay, or well adhered somewhere else. Thus, the eight samples (SM1 SM8) collected in different points of the cave room ( Figure 5) showed significant differences in the sequence substrate/plaster/paint and different macroscopic forms of chemical-physical decay. The lower one is represented by the rocky substrate, constituted by Tramezzario (TR) and subordinately by a stronger limestone (TR-S), with similar characteristics to the Pietra forte limestone, while the Pietra cantone was not found.
A general scheme of the plasters' sequence, from the limestone substrate to the surface, is summarised as follows and is shown in Figure 6 and in the synoptic scheme of Figure 7. A general scheme of the plasters' sequence, from the limestone substrate to the surface, is summarised as follows and is shown in Figure 6 and in the synoptic scheme of Figure 7. Layer (1): cement mortars (signed as sub-layers CM-AR1 and CM-AR2), locally present in the rocky wall of the cave and generally in direct contact with the substrate; occasionally, a third layer (CM-AR3, not sampled) over the following plaster layers could be observed. CM consists of grey to dark grey cement-based mortars with millimetre-sized aggregates, used as arriccio, often showing saline efflorescence; Layer (2): a very fine-grained air lime plaster, probably based on aerial lime; it was labelled as PL-INT1 (intonachino) and used as a finishing plaster. Where cement mortars are absent, INT1 is absent too, except for one sample where INT1 lays directly on the rock substrate. Over this layer there is a painting coat with lime composition, characterised by beige (PA1) and light blue (PA2) colours; Layer (3): a finishing coat represented by a millimetre-thick intonachino level (PL-INT2), lime-based and almost free of aggregates; Layer (4): a hydraulic lime mortar (arriccio layer, named as HLM-AR1), which represents a plaster coat applied in a more recent restoration and is composed of a hydraulic lime with millimetre-sized, light beige aggregates; Layer (5): the second arriccio layer of hydraulic lime mortar (HLM-AR2), that has undergone further plastering intervention that, such as the previous one; it was based on hydraulic lime binder; however, it can be distinguished by the finer grain-size of its aggregates; the fourth layer of arriccio (HLM-AR3) has been found in only one sampling Layer (1): cement mortars (signed as sub-layers CM-AR1 and CM-AR2), locally present in the rocky wall of the cave and generally in direct contact with the substrate; occasionally, a third layer (CM-AR3, not sampled) over the following plaster layers could be observed. CM consists of grey to dark grey cement-based mortars with millimetre-sized aggregates, used as arriccio, often showing saline efflorescence; Layer (2): a very fine-grained air lime plaster, probably based on aerial lime; it was labelled as PL-INT1 (intonachino) and used as a finishing plaster. Where cement mortars are absent, INT1 is absent too, except for one sample where INT1 lays directly on the rock substrate. Over this layer there is a painting coat with lime composition, characterised by beige (PA1) and light blue (PA2) colours; Layer (3): a finishing coat represented by a millimetre-thick intonachino level (PL-INT2), lime-based and almost free of aggregates; Layer (4): a hydraulic lime mortar (arriccio layer, named as HLM-AR1), which represents a plaster coat applied in a more recent restoration and is composed of a hydraulic lime with millimetre-sized, light beige aggregates; Layer (5): the second arriccio layer of hydraulic lime mortar (HLM-AR2), that has undergone further plastering intervention that, such as the previous one; it was based on hydraulic lime binder; however, it can be distinguished by the finer grain-size of its aggregates; the fourth layer of arriccio (HLM-AR3) has been found in only one sampling point, probably used to fill some void or surface irregularity; Layer (6): a light grey finishing plaster (PL-INT3), 2 mm thick due to the last intervention that lies above HLM-AR2, characterised by few, very fine-grained aggregates and a lime binder.
In addition, a light beige coat-paint (PA3), probably lime based, has been found, with a semi-transparent paint of different composition appearing above.
The paint coats have not been studied but were extremely useful to distinguish between the different plastering interventions, since they act as markers of the different phases, allowing us to recreate a synoptic scheme (Figure 7). The paint coats have not been studied but were extremely useful to distinguish between the different plastering interventions, since they act as markers of the different phases, allowing us to recreate a synoptic scheme (Figure 7). In regards to the decay of materials, the limestone and plasters of the wall cave-room often showed decohesion and the presence of fano-(on the surface) and crypto-efflorescence, due to the constant presence of humidity and/or circulating aqueous saline solutions in the rock. The cyclic mechanisms of hydration/dehydration and solubilisation/crystallisation of salts produce a hygroscopic volume variation in the limestone with consequent exfoliation and flaking. The degradation apparently manifests itself in the same way on all mortar layers, regardless of their composition. However, it was observed that for CM cement-based mortars, by virtue of a different physical-mechanical behaviour characterised by a higher mechanical strength, that when the process of decohesion and spalling has occurred, the detachment of larger flakes was noted. HLM mortars showed less frequent detachment of material which, in any case, is of minor entity (thin and localised flakes). The intonachino plaster layers (PL-INT), having very thin thicknesses, tend to exfoliate and detach from the substrate only where there is constant moisture from the inside of the rock towards the interior of the cave-room. Moreover, HLM and especially PL-INT also showed sulphation processes, with the formation of gypsum. In regards to the decay of materials, the limestone and plasters of the wall cave-room often showed decohesion and the presence of fano-(on the surface) and crypto-efflorescence, due to the constant presence of humidity and/or circulating aqueous saline solutions in the rock. The cyclic mechanisms of hydration/dehydration and solubilisation/crystallisation of salts produce a hygroscopic volume variation in the limestone with consequent exfoliation and flaking. The degradation apparently manifests itself in the same way on all mortar layers, regardless of their composition. However, it was observed that for CM cement-based mortars, by virtue of a different physical-mechanical behaviour characterised by a higher mechanical strength, that when the process of decohesion and spalling has occurred, the detachment of larger flakes was noted. HLM mortars showed less frequent detachment of material which, in any case, is of minor entity (thin and localised flakes). The intonachino plaster layers (PL-INT), having very thin thicknesses, tend to exfoliate and detach from the substrate only where there is constant moisture from the inside of the rock towards the interior of the cave-room. Moreover, HLM and especially PL-INT also showed sulphation processes, with the formation of gypsum.
Petrographic Characteristics
The observation of thin sections under a polarised microscope allowed us to identify the petrographic characteristics of limestone substrate and plasters, defining the kind and size of aggregates and the binder/aggregate ratio in mortars ( Figure 8). However, it was not very effective in recognising the nature of the binders, since they is cryptocrystalline or amorphous and commonly affected by degradation phenomena, such as oxidation of Fe-bearing phases, dissolution/precipitation of secondary phases, or development of brownish-grey stains of undefined origin (probably due to the deposition of impurities by circulating fluids). (e) gypsum crystals growing perpendicularly to some fractures, tending to expand them (SM4 sample, cross polarised light); (f) clusters of radial needle-shaped crystals (thaumasite or ettringite) within a cement mortar (SM6, cross polarised light); (g) crystallites surrounded by opaque phases in a cement mortar (SM6, plane polarised light); (h) appearance of fossil-rich limestone belonging to the Tramezzario lithology (SM4, plane polarised light).
The most evident feature that allows a discrimination between the different mortar mixtures is in the size and number of aggregates. PL-INT layers are characterised by homogeneous and very fine aggregates, having a fairly constant grain-size (0.05-0.1 mm) and mineralogy mainly consisting of rare quartz and minor feldspars (Figure 8a). On the contrary, HLM-AR layers are characterised by 0.2 to 4 mm sized aggregates, mainly consisting of quartz and feldspars, but including also minor amounts of poly-mineral lithoclasts, (belonging to metamorphic and igneous rocks), pyroxenes, amphiboles, micas, and marine fossil skeletons (Figure 8b). This complex assemblage suggests a polygenic origin and different sources of aggregates supply used for arriccio hydraulic lime mortars. Medium-coarse aggregate fraction (0.3-4 mm) of CM-AR layers consists of quartz, Kfeldspar, plagioclase, biotite, pyroxene, titanite, occasional lithoclasts, marine fossils of different origin, and other accessory minerals not identifiable by polarised microscope.
The contact between cement/hydraulic lime mortars and limestone substrate of the cave-room differs depending on the nature of the binder. Hydraulic lime mortars (HLM-AR) adhere quite well on the substrate (Figure 8c) and neither discontinuities nor secondary phases were detected along the contact. Cement-based mortars (CM-AR1, CM-AR2), on the contrary, are commonly detached from the substrate and the contact is marked by discontinuous elongated fractures (Figure 8d). PL-INT layers, although based on a lime binder, did not show a good adhesion with the limestone substrate. Indeed, fractures running along the contact and filled by gypsum growing perpendicular to it (Figure 8e) were found in several microdomains.
The observation under optical polarised microscope revealed the formation of secondary minerals within the cement mortars. Acicular crystals of ettringite and/or thaumasite have been found in fibrous radial aggregates with sizes smaller than 0.5 mm (Figure 8f). In addition, several clusters of micrometre-sized rounded crystallites, surrounded by a matrix of opaque minerals (mainly titanite) were also found ( Figure 8g).
Tramezzario samples of limestone substrate are characterised by a micritic matrix, in which a high amount of bioclastic grains (especially bivalves, foraminifera and algae) could be found (Figure 8h). Sparite crystals are quite rare. Based on the Folk's classification (Folk, 1959), the analysed samples are fossiliferous biomicrite whereas, according to Dunham (1962), they can be regarded as wackestone, locally tending to packstone. Thin section observation allowed also a first estimate of the porosity that ranges between 10 and 15% and appeared as single voids, probably due to dissolution phenomena or, more rarely, as a network of thin channels and fractures, some of which could have been produced during the sampling.
X-ray Diffraction
The results of the XRD analysis of plaster samples collected from the wall surface of rocky cave room are summarised in Figure 9.
crystallinity when compared with the PL-INT and HLM-AR samples. This is testified by the peak's shape (Figure 9), commonly showing lower intensities and higher values of FWHM (full width at half maximum). The stronger intensities of quartz and feldspars indicate the lower B/A ratio in CM samples than in PL-INT and HLM-AR samples.
The four samples of the CM group share similar patterns except for CM-AR2-B and CM-AR1-B, showing abundant gypsum and weak peaks matching to the vaterite (metastable calcite polymorph) reference pattern, respectively. Samples HLM-AR1 and HLM-AR3 consist almost totally of calcite, with traces of quartz recognised just by the most intense peak. HLM-AR2 as well mainly consists of calcite with subordinate quartz whose content is higher than the previous samples; furthermore, K-feldspar, plagioclase, and gypsum were detected by their most intense peaks.
PL-INT samples are quite similar to HLM-AR ones, being mainly composed of calcite and minor quartz, feldspars, and gypsum. However, their patterns show some differences, such as a lack of quartz in PL-INT2, the higher amount of gypsum in PL-INT3, and the different relative abundances of K-feldspar and plagioclase. In addition, PL-INT2 and PL-INT3 show a small peak at about 54.3 • 2θ, not detected in the other samples, which do not clearly match to any phase in the database.
CM-AR samples show an aggregate composed of variable amounts of quartz, Kfeldspar, plagioclase, and biotite, thus comprising a mineral assemblage resembling the other samples. However, several differences are evident, such as the higher amount of quartz and feldspars, the presence of biotite, and the higher noise of the background signal that suggests lower crystallinity for the presence of C-S-H and C-A-H phases in the binder. The binder also consists of calcite (often altered for sulphation in gypsum), having a low crystallinity when compared with the PL-INT and HLM-AR samples. This is testified by the peak's shape (Figure 9 quartz, plagioclase, biotite, K-feldspar with perthite exsolutions, and clasts of mafic rocks (formed by Ca-rich plagioclase, pyroxenes, titanite) were found to be mixed together. The binder consists of 0.3 to 1.5 mm sized plagues consisting of massive calcite (Figure 10b) and rounded calcite crystallites (Figure 10c), with interstitial Si-Al-Ca-rich hydrated phases (C-S-H, C-A-H). This has been confirmed by EDS spectra collected on the binder, suggesting the presence of calcite even if, in each analysed point, little amounts of Si and Al were detected (Figure 10d). The thin layer of fine-grained intonachino PL-INT2 (Table 1) is non-homogeneous in composition, as highlighted by the BSE imaging showing alternating darker (calcite) and lighter (calcite + gypsum) levels (Figure 10e). The contact . SEM images of selected thin sections of cement mortars in SM6 sampling point: (a) anhedral to subhedral grains (type (i)) of an aggregate consisting of quartz, oligoclase, and perthitic Kfeldspars, plus accessory phases as zircon and Fe-Ti, with 91X image magnification; (b) BSE imaging on the cement binder: microcrystalline texture of the C-S-H fibrous aggregates of hydrated Ca-rich calcium silicates (type (ii)) (691X); (c) anhedral subrounded grains (type (iii)) of hydrated Si-rich calcium silicates and mixed phases of C-A-H and C-S-H (type (iv)) (1754X); (d) subrounded aggregates of fibrous crystals (type (v)) (1119X); (e) clusters of subhedral micrometre-sized (10-20 μm) grains of magnesium silicates (type (vi)) with interstitial fibrous phases, with a composition similar to (iv), and rare, anhedral calcite grains (type (vii)) randomly distributed within the binder (873X in full image, 3159X in small frame); (f,g) chemical spectra of (v) and (vi) phase types.
Binder/Aggregate Ratio of Mortars
The analysis of the microscopic images of the sampled plasters allowed us to determine the binder-aggregate ratio (B/A) expressed in vol.% ( Table 2). The count did not include the coarser aggregate (>6 mm) occasionally present in the mortars, and the very fine aggregate (<50 μm), which not well observed in the selected images ( Figure 12) and therefore not included in the black/white binarisation of the microscopic images. The analysis also allowed us to determine the porosity (Table 2), but, for the same reasons as mentioned above, the calculation did not include fine and coarse mesoscopic porosity. Figure 11. SEM images of selected thin sections of cement mortars in SM6 sampling point: (a) anhedral to subhedral grains (type (i)) of an aggregate consisting of quartz, oligoclase, and perthitic K-feldspars, plus accessory phases as zircon and Fe-Ti, with 91X image magnification; (b) BSE imaging on the cement binder: microcrystalline texture of the C-S-H fibrous aggregates of hydrated Ca-rich calcium silicates (type (ii)) (691X); (c) anhedral subrounded grains (type (iii)) of hydrated Si-rich calcium silicates and mixed phases of C-A-H and C-S-H (type (iv)) (1754X); (d) subrounded aggregates of fibrous crystals (type (v)) (1119X); (e) clusters of subhedral micrometre-sized (10-20 µm) grains of magnesium silicates (type (vi)) with interstitial fibrous phases, with a composition similar to (iv), and rare, anhedral calcite grains (type (vii)) randomly distributed within the binder (873X in full image, 3159X in small frame); (f,g) chemical spectra of (v) and (vi) phase types. SM5 thin section comprises a part of limestone substrate (Tramezzario, TR) in direct contact with a coarse-grained hydraulic mortar (HLM-AR1) and a thin layer of fine-grained intonachino (PL-INT2). The rock mainly consists of microcrystalline calcite with rare fossil skeletal fragments and gypsum. The aggregates of the hydraulic mortar are heterogeneous in size (0.1-1 mm) and polygenic ( Figure 10a); indeed, grains of calcite, quartz, plagioclase, biotite, K-feldspar with perthite exsolutions, and clasts of mafic rocks (formed by Ca-rich plagioclase, pyroxenes, titanite) were found to be mixed together.
The binder consists of 0.3 to 1.5 mm sized plagues consisting of massive calcite ( Figure 10b) and rounded calcite crystallites (Figure 10c), with interstitial Si-Al-Ca-rich hydrated phases (C-S-H, C-A-H). This has been confirmed by EDS spectra collected on the binder, suggesting the presence of calcite even if, in each analysed point, little amounts of Si and Al were detected (Figure 10d). The thin layer of fine-grained intonachino PL-INT2 (Table 1) is non-homogeneous in composition, as highlighted by the BSE imaging showing alternating darker (calcite) and lighter (calcite + gypsum) levels (Figure 10e). The contact between HLM-AR1 and PL-INT2 is locally marked by micrometre-sized fractures filled by fibrous gypsum crystals growing perpendicular to the interface (Figure 10f).
The SM2 thin section contains a small part of the strong limestone substrate of Tramezzario (TR-S), in direct contact with the cement mortars CM-AR1 (Table 1). According to the results of the polarised microscopy analysis, the aggregates consist mainly of submillimetreto millimetre-sized grains of quartz and feldspars, with minor biotite and rare calcite. K-feldspar shows perthite exsolutions and is commonly altered to sericite. Plagioclase is oligoclase or rarely andesine with a variable degree of sericite alteration. Accessory phases are epidote, commonly found among the aggregates, and titanium oxide, locally found within quartz grains. Rare poly-mineral grains and fossils were also observed. The binder showed a microcrystalline texture, where the hydrated Ca-rich calcium silicates and aluminates (C-S-H, C-A-H) and rare calcite were observed that appeared as clusters of subrounded grains. The EDS spectra revealed the presence of little and variable amounts of Mg, Si, and Al probably deriving from C-S-H and C-A-H and from impurities of the raw marly-limestone used for cement production.
The SM6 thin section mainly consists of cement mortar (CM-AR1-A, CM-AR1-B, Table 1) and a small fragment of TR limestone substrate. The two parts are separated by a 0.3-0.5 mm wide fracture and a thin calcite layer (0.2-0.4 mm) that rims the substrate. The aggregates are heterogeneous in size (Figure 11a), however, in contrast to SM2 sample, they are not polygenic; they consist of quartz, oligoclase, and perthitic K-feldspars, plus accessory phases such as zircon and Fe-Ti, suggesting a granitoid source. BSE imaging on the cement binder revealed a microcrystalline texture (Figure 11b) and different shades of the grey-scale, indicating the coexistence of different phases, further supported by EDS microanalyses. Although EDS analyses are not accurate enough to determine the stoichiometry of these phases, the relative proportions of the major elements and the semiquantitative chemical data allowed us to distinguish the main constituents of the binder. Among tens of analyses, the following phases were distinguished: (i) anhedral to subhedral grains (commonly micrometre-sized but locally reaching 100 µm; Figure 11a (vi) clusters of subhedral micrometre-sized (10-20 µm; Figure 11e) grains of magnesium silicates (SiO 2~6 0-65 wt.%, MgO~18 wt.%, CaO~6-9 wt.%, Al 2 O 3~5 wt.%; Figure 11f) with interstitial fibrous phases with composition similar to iv); (vii) rare anhedral calcite grains randomly distributed within the binder, derived by the carbonatation of residual Ca(OH) 2 , produced by the hydration reaction of alite (C 3 S) and belite (C 2 S) phases.
Binder/Aggregate Ratio of Mortars
The analysis of the microscopic images of the sampled plasters allowed us to determine the binder-aggregate ratio (B/A) expressed in vol.% ( Table 2). The count did not include the coarser aggregate (>6 mm) occasionally present in the mortars, and the very fine aggregate (<50 µm), which not well observed in the selected images ( Figure 12) and therefore not included in the black/white binarisation of the microscopic images. The analysis also allowed us to determine the porosity (Table 2), but, for the same reasons as mentioned above, the calculation did not include fine and coarse mesoscopic porosity.
Since the CM mortars are curl layers with higher thicknesses and, therefore, represent a different mixing of the parts, where, therefore, a higher amount of aggregate is required, they showed a lower B/A than all the samples, equal to about 3:2, with an average binder value of 58.2%. The curl layers of the HLMs, which also typically have smaller thicknesses (Figure 12
Plaster Samples
The following petrophysical properties of plaster samples were analysed: real and bulk density, porosity open to water and helium, water absorption kinetic, imbibition coefficient, water saturation index. Data are reported in Table 3. The real density (ρ R ), which is controlled by the density of solid phases and by the closed to helium porosity, ranged between the average values of 2.55 ± 0.02 g/cm 3 and 2.59 ± 0.02 g/cm 3 for the cement mortars CM-AR1 and CM-AR2, respectively (Table 3; Figure 13b). The hydraulic lime mortars showed higher real densities, ranging from 2.61 ± 0.02 g/cm 3 in HLM-AR1, to 2.62 g/cm 3 in HLM-AR2, to 2.68 ± 0.003 g/cm 3 in HLM-AR3. The real density of finishing air lime plasters (PL-INT) has lower values than the other mortars, ranging between 2.47-2.57 g/cm 3 . At the end of the 120 h long absorption test for water immersion, the samples were positioned under the line of 100% (Figure 14a), with a mean of saturation index ranging from 39.7 to 55.2% in PL-INT samples, from 63.8% to 80.3% for HLM samples, and from 63.7% to 67.8% in CM mortars (Table 3, Figure 14a). Considering the aggregate mineralogy (mainly quartz and feldspar with density of 2.65 and 2.5-2.8 g/cm 3 , respectively) and the binder/aggregate ratio, which are similar for both CM and HLM, the difference in real density values must lie in differences between the density of the binder.
The lower real density of CM can be ascribed to the presence of C-S-H and C-A-H phases (i.e., hydrated calcium silicates and aluminates), which are typical of cement. Completely cured cements, as the here-studied ones, are usually composed by: (1) calcium silicate hydrate, known as C-S-H (I), with a tobermorite-like crystal structure, and C-S-H (II), with a jennite-like structure [81,82], having densities of 2.23 and 2.33 g/cm 3 , respectively [83]; (2) calcium aluminate hydrate C-A-H as C 3 AH 6 and C 4 AH 13 , with density 2.04 g/cm 3 whose formation implies the presence of portlandite C-H, having a density of 2.26 g/cm 3 [84].
Furthermore, the hydration reactions occurring in cement mortars produce a porous structure, mainly made by non-interconnected micropores [85], in which He is not able to penetrate, possibly leading to underestimations of the real density.
On the contrary, HLM samples mainly consist of calcite (2.71 g/cm 3 ) and subordinate vaterite (2.66 g/cm 3 ), derived from the C-H carbonation; the low amount of C-S-H and C-A-H phases, barely detected by EDS microanalyses, as Ca-rich phases with low Si and Al contents, suggests a feebly hydraulic behaviour of these mortars and excludes a significant contribution of these phases to the measured density values.
The low real density of finishing air lime plasters (PL-INT) is an unexpected feature, since these mortars are characterised by calcite binder and quartz-feldspars aggregates, and thus they should have densities higher than approximately 2.6 g/cm 3 . A possible explanation of parameters that reduce the real density are the presence of gypsum (density = 2.36 g/cm 3 ) and/or the wide presence of very fine intracrystalline porosities that, considering the low number of aggregates, are likely to be closed, leading us to overestimate the samples' volume.
As regards the bulk density, the higher values were found in CM samples (1.88-1.93 g/cm 3 ), whereas the lower ones were detected in HLM (1.57-1.78 g/cm 3 ) and PL (1.57-1.77 g/cm 3 ) samples (Table 3, Figure 13a). The higher bulk density in those samples showing the lower real density is explained by the lower open-to-helium porosity in CM (24-27%) than in HLM (32-40%) and in PL-INT (31-37%). The open-to-water porosity, as expected, is lower than open-to helium one in all mortars, having values of 15-18% in CM, 17-25% in HLM, and 16-17% in PL (Table 3, Figure 13b).
The degree of saturation (SI) ranges between 64-72% in both CM and HLM samples and is significantly lower in PL samples (45-55%), further confirming the statements above regarding the more packed and closed structure of PL.
At the end of the 120 h long absorption test for water immersion, the samples were positioned under the line of 100% (Figure 14a), with a mean of saturation index ranging from 39.7 to 55.2% in PL-INT samples, from 63.8% to 80.3% for HLM samples, and from 63.7% to 67.8% in CM mortars (Table 3, Figure 14a).
The absorption kinetic of water (Table 4) is shown in Figure 14b. Almost all CM and HLM samples reach the 80% of the maximum absorbed water after 24 h, then the absorption continues slowly and constantly. In PL samples, although most of the water absorption occurs within the first 24 h (as well as the other samples), a significant and discontinuous increase was observed after longer and variable times.
This suggests that PL are characterised not only by a low open-to-water porosity but also by a certain tortuosity of the pore network that makes it difficult for water to enter within the plaster layer. It cannot be excluded that wall painting, mainly applied on this render plaster, could have contributed to prevent water entrainment.
Limestone Samples
The samples of Tramezzario limestone substrate have a mean value of real density of 2.71 g/cm 3 , thus being close to that of pure calcite. The sample TR-S of strong limestone (more similar to the limestone called Pietra forte = hard rock) shows a lower real density, with 2.68 g/cm 3 (Table 3, Figure 13b). The bulk density strongly differs between the two Tramezzario lithofacies, with means of 2.02 and 2.57 g/cm 3 , respectively (Figure 13a), due to the different porosity of the two samples that are 25.5 and 4.0%, respectively, for opento-helium porosity, and 15.9 and 5.6%, respectively, for open-to-water porosity ( Table 3). The measured differences are not surprising, since the massive limestone TR-S showed a high hardness and low permeability, whereas Tramezzario limestone (=suitable for partition walls) is a marly to arenaceous limestone, which is relatively soft and workable and that was largely employed as a building stone in the Cagliari area until recent times. At the end of the 120 h long water absorption test, both the samples were strongly undersaturated, as indicated by the average index of saturation, which is 60.9% for first lithofacies and 56.9% for massive limestone (Table 3, Figure 14a). About 90% of the water content measured at the end of the test is absorbed within the first 24h and then a plateau is reached (Figure 14b). Table 4. Data of water absorption kinetic for 120 h.
Limestone Samples
The samples of Tramezzario limestone substrate have a mean value of real density of 2.71 g/cm 3 , thus being close to that of pure calcite. The sample TR-S of strong limestone (more similar to the limestone called Pietra forte = hard rock) shows a lower real density, with 2.68 g/cm 3 (Table 3, Figure 13b). The bulk density strongly differs between the two Tramezzario lithofacies, with means of 2.02 and 2.57 g/cm 3 , respectively (Figure 13a), due to the different porosity of the two samples that are 25.5 and 4.0%, respectively, for opento-helium porosity, and 15.9 and 5.6%, respectively, for open-to-water porosity ( Table 3). The measured differences are not surprising, since the massive limestone TR-S showed a high hardness and low permeability, whereas Tramezzario limestone (=suitable for partition walls) is a marly to arenaceous limestone, which is relatively soft and workable and that was largely employed as a building stone in the Cagliari area until recent times. At the end of the 120 h long water absorption test, both the samples were strongly undersaturated, as indicated by the average index of saturation, which is 60.9% for first lithofacies and 56.9% for massive limestone (Table 3, Figure 14a). About 90% of the water content measured at the end of the test is absorbed within the first 24h and then a plateau is reached (Figure 14b).
Discussion
This research was focused on defining the compositions and physical properties of the plasters applied on the walls of a room carved into limestone rocks.
The aim of this study was to understand which of the various materials used in the last decades is more compatible with the rock substrate and ultimately more suitable for the function it was chosen for. The superimposed strata of plasters, deriving from restoration/preservation/finishing interventions and performed since the second post-war period until now, differ from each other in the type and number of binders and aggregates. The plaster stratigraphy is further complicated by the local lack of some layers due to chemical-physical decay and/or to the partial removal of older, deteriorated layers and/or to patchy-like interventions. The decay affecting the plasters and the wall substrate is mainly related to environmental humidity and fluids percolating within the rock. The dissolution of primary phases and the precipitation of the secondary ones is probably enhanced by the daily on/off cycles of the air conditioning system.
The results highlight the following main compositional and physical features of the three types of plasters.
Cement-based mortars (CM) were prepared using a medium-grained (1-5 mm) silicatic aggregate (Qtz, K-fds, Pl, Bt) with the occasional presence of lithoclasts, constituting approximately 40-45% vol.% of mortars. In a first hypothesis, its supply could have come from the incoherent deposits of medium-coarse sands and gravels, mainly resulting from the alteration (arenisation process) of the Carboniferous-Permian granitoid rocks abundantly outcropping in the south-eastern and south-western sectors of Sardinia (Figure 3). In fact, especially in the first mentioned sector, for decades there have been quarries extracting inert materials (some of which are still active today) to be used in the construction industry for the production of concretes and mortars. However, considering the presence of some accessory minerals of metamorphic or volcanic origin (e.g., epidote, pyroxene, titanite), especially of skeletal remains of marine Ca-carbonate fossils, their provenance from natural sediments of the local beaches near Cagliari city (e.g., Poetto, Figure 3, or Giorgino) is more likely. Indeed, the sands of these beaches have been extensively exploited certainly for more than a century, and perhaps even further back in time, for mortar production in the construction of buildings in Cagliari and its hinterland. CM mortars are characterised by a higher hydraulic binder content (55-60 vol.%) when compared to the other plasters, a lower He open porosity (ranging from 22 to 28 vol.%), and a consequently higher bulk density (1.84-1.99 g/cm 3 ). The real density is lower (2.52-2.60 g/cm 3 ) with respect to the HLM mortars, due to the different aggregate composition, consisting of polygenic and heterometric grains with a most common size ranging from 1 to 3 mm. C-S-H phases, well detected by SEM analysis, are predominant within the binder, although they were been detected by XRD, suggesting their low-crystallinity/amorphous nature.
Hydraulic lime mortars (HLM) were made using a medium-grained aggregate (frequently 0.5-4 mm), with a lower percentage than that of cement mortars (on average 31 vol.%) and a variable mineralogical composition, consisting mainly of silicatic minerals and lithoclasts from magmatic rocks (quartz, K-feldspar, plagioclase, biotite, pyroxene, titanite) and subordinately from sedimentary rocks (e.g., microcrystal-clasts of calcite) and marine fossils. HLM mortars show lower binder contents with an average of 30-35 vol.%, with a greater He open porosity (28-41 vol.%) and lower bulk density (1.57-1.87 g/cm 3 ), with respect to the CM mortars. HLMs are less hydraulic than first thought; indeed, both XRD and SEM-EDS analyses revealed their predominant presence in the binder of calcite with a subordinate amount of Ca-rich Si-Al-poor phases that could have been derived by the calcination of an impure limestone as a raw material. These different compositional and microstructural features of the binder lead to a higher real density (ranging from 2.58 to 2.68 g/cm 3 ) with respect to CM mortars.
PL-INT are binder-rich air lime plasters with a low amount of very fine-grained (50-100 µm) aggregates, ranging in grain-size in 12-14 vol.%, with a mainly carbonatic composition and rare silicatic components (mainly quartz). They have an almost pure lime binder, lacking any hydraulic properties, characterised by a heterogeneous microstructure, highlighted by high variability of He open porosity (ranging from 27 to 37 vol.%) and bulk density (1.53-1.87 g/cm 3 ).
The difference in porosity values of three kinds of plasters can be ascribed to the different compositions, and thus microstructures, of the binder and aggregate, and their proportions [56]. Generally, the total porosity is positively affected by the shape and selec-tion grade of the aggregate, an excess of the mixed water content, and the thickness/volume of the mortar, while it is negatively correlated with the mixing degree before its application. With regard to the open porosity, as in this case, a general negative correlation with the hydraulic degree and a positive correlation with binder/aggregate ratio were observed, i.e., CM mortars are the least porous followed by HLM. The lower open porosity of CM is likely due to the development of C-S-H and C-A-H phases that form aggregates of crystallites growing from the C-S grains and that tend to fill the air voids. This microstructure is characterised by intracrystalline pores developing between the aggregation nuclei of C-S-H phases, which are likely closed to fluid entrainment. Furthermore, in old cement mortars, it is common to observe pores filled by secondary portlandite (C-H) and/or ettringite [86]. The low anomalous values of open-to-water porosity shown by PL-INT (especially the INT3 sample, Table 2) can be explained by the nature and microstructural characteristics of the binder, which consists of alternating calcite and calcite + gypsum levels. Gypsum, deriving from the sulphation process, is fibrous, and filled the pores and microfractures present in the air lime layer, as highlighted by the BSE imaging (Figure 10e). Also, the carbonation of C-H can lead to a reduction of porosity [87].
In this case, it can be supposed that the low number of aggregates contributes to a more packed and less porous structure, at least concerning large pores. This assumption was confirmed by comparing PL with HLM features; despite its higher hydraulic degree, HLM showed significantly higher open-to-water porosity than PL suggesting the influence of aggregates in increasing macropores.
SEM investigations, coupled to petrophysical properties, allowed us to make several considerations. Cement mortars, applied on the rock substrate, are the hardest and strongest mortars among the three identified types. Nevertheless, this kind of material seems to not be a proper choice for limestone plastering for different reasons.
Firstly, CM appeared to be detached from substrate by 0.5 mm-wide fractures filled by calcite, suggesting the incompatibility between the two materials and a fluid movement within the wall that leads to the dissolution and re-precipitation of calcite. It is not clear whether calcite crystallisation pressure was produced the fracture or if it filled a preexistent discontinuity; in any case, the calcite precipitation can be ascribed to the different permeability of the two media, rock and mortar [88].
Secondly, several secondary phases were observed within the cement mortars, indicating the non-equilibrium of cement phases in this environment. Ettringite-thaumasite-like phases that fill the voids are clear indicators of secondary reactions occurring within the cement binder. The presence of zeolite-like phases could be explained in three ways: (a) "primary" component of the mortar used as an additive; (b) secondary phase formed in freshly hydrated cement by the interaction of aluminosilicates and strongly basic pH alkali-rich porewater [89]; (c) secondary phases formed after a long time in the well cured the cement via the reaction between ettringite and/or C-S-H under lower pH (~10) [90]. In this studied case, the first option can be reasonably excluded, since these mortars are too old to contain such a relatively recent additive; moreover, grain size, shape, and distribution of these phases within the mortar are more compatible with a secondary formation. The second option is unlikely as well, since zeolite was not found to be in contact with aluminosilicates and no evidence of this kind of reaction was found. Thus, it is more plausible that zeolite-like compounds are the products of long-lasting reactions at the expense of ettringite or C-S-H phases. The C-S-H-consuming reactions, in addition to the low crystallinity of C-S-H, are the reasons why these phases were not detected by XRD analyses. Fluid phases, water, and/or vapour driving these reactions are provided from air moisture in the humid environment of the carved room and from fluids' percolation within the substrate.
Air lime-based mortars, both aerial and feebly hydraulic ones, seem to be a most appropriate choice for plastering this kind of substrate. Both optical and electron microscopy analyses generally showed a good adhesion of the mortar onto the substrate which, in some cases, made it difficult to distinguish the contact between the two parts. This is obviously linked to the perfect chemical and physical affinity between a natural limestone and a limebased mortar consisting of almost pure calcium carbonate. A further proof of the strong bond between mortar and limestone arises from considering the strikes and vibrations that unavoidably affected the samples during their collection and that produced random fractures not preferentially concentrated along the air lime layer-limestone interface. From the above-mentioned consideration, the suitability of air lime mortars in plastering limestone is clear and, after all, is not a surprising result, since several papers [48][49][50][51][52]56], especially those dealing with mortar repair in historic buildings [91][92][93][94][95], came to the same conclusion in similar contexts.
The intonachino layers of finishing air lime plasters, characterised by essentially lime binder, the lowest content of aggregates and thin thickness, are often detached from the substrate, either wall-rock or arriccio mortars. In the first case, this happens, especially in certain areas, in the wall where there is a constant presence of moisture derived from the rock substrate, associated with the daily temperature variations induced by the on/off cycles of the air-conditioning system in the cave-room. In some cases, the discontinuities between intonachino and substrate could be due to the traumatic sampling but generally are linked to the precipitation of secondary phases [88] along the fractures/discontinuities, as already observed for CM mortars. As seen by the SEM imaging and EDS microanalyses on thin section SM5, the contact between the intonachino and the underlying arriccio is marked by a fracture filled by gypsum crystals growing perpendicularly to the contact, likely favouring the fracture opening, and by alternating calcite-rich and gypsum-rich micrometre-sized levels. Where the SO 4 2− , necessary to the gypsum crystallisation, comes from is unclear, since sulphur was not found either in intonachino nor in the arriccio, but what is evident is the localisation of this mineral underneath the former. It could be argued that the low porosity of intonachino allowed percolating water to be entrapped within the mortars and favoured the precipitation of dissolved ions.
Wall paints (PA) based substantially on air lime are coloured with the use of inorganic oxides, with colours ranging from intense beige, light blue, light green (probably dating back to the 1960s and 1970s, PA1, PA2) to lighter coloured (beige, PA3) or transparent paints, probably applied recently (in the 2000s, PA4). These layers are usually overlaid on the three layers of PL-INT or CM-AR plasters, in order to vary the chromatic appearance of the walls. By virtue of their mainly lime-based composition and their low thickness (<1 mm), which gives them a more "ductile" physical-mechanical behaviour than the other hydraulic mortar layers observed, paint layers adapt very well to the irregular morphology of the wall surface when they are on top of CMs, HLMs, and PL-INT plasters. The low percentage of aggregate and the probable imperfect carbonation of the C-H (Ca(OH) 2 ) binder, due to the overlapping of several paint layers, also contribute to this. Despite this, paint detachment was observed in some areas of the wall, especially where, due to cyclical thermo-hygrometric variations, secondary phases (mainly fano-efflorescence) are present on the substrate on which they are laid.
Conclusions
The study of the compositional and physical characteristics of the plasters from Grotta Marcello room has made it possible to understand the aspects concerning the physical compatibility of the materials used in the coatings of the limestone in particularly hygrometric conditions. Indeed, the internal environment of the cave-room is characterised by high water pressure (either as liquid or vapour phases) from the porous limestone rock to the ambient and a consequent high air relative humidity. The various layers of arriccio mortars, intonachino finishing plasters, and paints are superimposed on one another, succeeding one another in a complex stratigraphy. The stratigraphy, due to the various interventions carried out over about a century, has a non-homogeneous sequence equally repeating in all the internal walls of the cave's chambers, because some layers are missing in certain areas. This aspect has undoubtedly made the understanding and interpretation of the results even more difficult. However, a clearly recognisable basic sequence has been distinguished and this essentially reflects a chronology in the use of the various products that depends on the evolution of production technologies for hydraulic lime and cement binders.
The investigations revealed four main types of plasters stratified in the following order (from the inside out), and their consequent behaviour in relation to each other and to the rock substrate: (1) two layers of cement mortars (CM-AR1, CM-AR2), usually adhered to the limestone substrate, with a cement binder composed mainly of C-S-H, C-A-H, and C-F-H phases, and a subordinate amount of calcite derived from the carbonation of Ca hydroxide, resulting from the hydration of the anhydrous alite and belite phases. Such mortars are generally fat, as they show a greater binder/aggregate (about 3/2) ratio than the standard mix. The aggregate mainly consists of quartz, K-feldspar, plagioclase, biotite, lithoclasts and a subordinate amount of marine Ca-carbonate fossil remains, indicating a probable supply from the sands of local Cagliari beaches. Given their high hydraulic degree and physical properties, characterised by low water open porosity (on average from 15.4 to 18.5%) and a high stiffness in mechanical behaviour, the cement mortars were often found detached from the substrate or leading to the detachment of the overlying layers, although at times demonstrating good adhesion to the limestone from a chemical point of view; moreover, they are frequently loaded with salt efflorescence, especially of intrinsic derivation, not only from the rock water circulating solutions. The laying of these cement layers can be ascribed to the first restoration interventions in the Grotta Marcello room, probably during or immediately after the Second World War, which locally affected the walls of the cave (perhaps to cover the traces of the electrical installations of the internal lighting). (2) mortars based on hydraulic lime (HLM-AR), characterised by a binder substantially composed of C-S-H, C-A-H phases, and, to a greater extent than cement mortars, calcite, derived from the carbonation of portlandite (Ca(OH) 2 ) that is normally present in lime and only to a much lesser extent due to the belite hydration. The aggregate is mainly silicatic and it is similar to those of CM mortars, although with a more variable mineralogy that also includes subordinate amounts of sedimentary rocks. These mortars showed a good adhesion to PL-INT plaster layers, as well as to the limestone substrate, demonstrating excellent adaptability on a physical-mechanical and chemical point of view. Moreover, due to a greater He-gas open porosity (on average from 32 to 38.9%), HLM mortars show a good breathability. Their use is attributable to more recent periods (especially HLM-AR2 and HLM-AR3 layers). This is due to the need to level out some of the gaps created in the internal walls over time as a result of degradation, trying to maintain chromatic characteristics similar to those of the underlying stone as far as possible. In fact, it must be remembered that Grotta Marcello is a state property and under the control of the Superintendency, and for these reasons must respond as much as possible to respect the original locations. (3) finishing plasters (intonachino, PL-INT) consist of air lime-based binder with high incidence (86-88 vol.%) and a very fine aggregate (generally <1 mm) with a mainly carbonatic composition and rare presence of quartz crystals. It is possible to refer their laying to a more or less long-time span (about 60 years). The first layer (PL-INT1, light beige in colour) was almost always laid directly on the rocky substrate of the local limestone (Tramezzario) and locally also on the cement mortars (CM). The PL-INT layers can be traced back to the first treatment of the walls immediately after the Second World War (1950s). The subsequent PL-INT2 and INT3 layers, also based on lime, have similar compositional characteristics to PL-INT1. The intonachino PL-INT2 is often inhomogeneous in composition, and showed alternating calcite and calcite/gypsum levels. The contact between PL-INT2 and HLM-AR/CM-AR is locally marked by micrometre-sized fractures filled by fibrous gypsum crystals, growing perpendicular to the interface. The intonachino layers were laid in the following decades; they were probably used in part to sanitise (for the same reasons mentioned above), and in part to cover up, missing parts of the previous layers.
(4) lime paints (beige and light blue coloured), overlapping the other plasters and consisting of one/two layers, were probably used as "quicklime" to be slaked on site (CaO + H 2 O), to eliminate (given the exothermic reaction that produces the slaking, up to 80-90 • C) the moulds created in the large wet areas due to the persistence of moisture in the rock-walls of the cave-room, by virtue of their medium-high porosity (from 27.6 to 36.6%). Due to the low amount of aggregate and the thin thickness that gives an elastic physical-mechanical behaviour, the paints showed a good adaptability to the irregular surface of the cave-room walls. However, sometimes there are some evident detachments from the wall substrate.
In conclusion, it can be stated that hydraulic lime-based mortars have the strongest affinity with limestone substrate and intonachino layers and thus are more suitable to be used as a repair mortars. The most interesting finding of this study lies in the long durability of this kind of intervention. Indeed, even after such long-term lifespans of decades, hydraulic lime mortars and intonachino finishing air lime plasters used in a complex stratigraphy, characterised by several layers with different compositions, showed a good adhesion on the substrate, exerting their coating function better than the harder cement-based mortars. | 15,740.2 | 2022-02-10T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
A multiscale finite element method for Neumann problems in porous microstructures
In this paper we develop and analyze a Multiscale Finite Element Method (MsFEM) for problems in porous microstructures. By solving local problems throughout the domain we are able to construct a multiscale basis that can be computed in parallel and used on the coarse-grid. Since we are concerned with solving Neumann problems, the spaces of interest are conforming spaces as opposed to recent work for the Dirichlet problem in porous domains that utilizes a non-conforming framework. The periodic perforated homogenization of the problem is presented along with corrector and boundary correction estimates. These periodic estimates are then used to analyze the error in the method with respect to scale and coarse-grid size. An MsFEM error similar to the case of oscillatory coefficients is proven.
A critical technical issue is the estimation of Poincare constants in perforated domains. This issue is also addressed for a few interesting examples. Finally, numerical examples are presented to confirm our error analysis. This is done in the setting of coarse-grids not intersecting and intersecting the microstructure in the setting of isolated perforations.
1. Introduction. The modeling and simulation of porous media has wide ranging applications from subsurface flows to simulation of charge and discharge of lithium-ion batteries. At the pore-scale the simulation is constrained by the complex geometry of the material microstructure. As the small pore scale features must be resolved, however, this makes solving such a problem by direct numerical simulation very costly. In this work, we propose a Multiscale Finite Element Method (MsFEM) based on the now classical works in [10,16]. In this method, we compute many local problems with linear boundary conditions to build a coarse-grid basis. This helps to encode fine-scale information into coarse-grid basis functions. The computation of the local problems can be done in parallel in an offline phase, then computation can be completed on a cheaper coarse-grid in the online phase.
Multiscale methods of this nature have been explored vigorously in recent years. The primary motivation being multiscale phenomenon arising from oscillatory or heterogenous coefficients. Many multiscale frameworks to attack such problems have arisen, examples include the Variational Multiscale methods [18,19] and the Heterogenous Multiscale Method (HMM) [1,8] and references therein, to name a few. In this work, we will develop a method in the MsFEM framework [10,16], and utilize the theory of homogenization to obtain error bounds with respect to coarse-grids and micro-scale parameters for the case of multiscale behavior arising from perforated domains and Neumann boundary conditions.
The problem of considering partial differential equations (PDEs) in domains with perforated and complicated microstructure has a long history. This is especially true in the area of effective media theory and homogenization c.f. [5,7,24,31], where an effective, non-perforated, PDE is derived and auxiliary cell problems are proposed. Several computational approaches to perforated problems have been applied and explored. Using an HMM, the authors in [14] solve for an unknown diffusion coefficient at each coarse-grid node by solving a local auxiliary cell problem. Then, computation of the coarse-grid problem is carried out on a non-perforated domain. Utilizing a non-conforming Crouzeix-Raviart approach, a perforated MsFEM is developed for the Laplace problems in perforated domains with Dirichlet data in [21]. This approach was extended to the case of Stokes flow in complicated microstructures in [30], however, with an orthogonal splitting approach close to Variational Multiscale Methods [18,19]. In the work [4], the authors develop a Local Orthogonal Decomposition (LOD) [13,23] for perforated Neumann Problems. By truncating multiscale corrections to coarse-scale basis functions, an efficient computational scheme was developed and analyzed.
In this work, we develop and analyze a MsFEM method for Neumann problems in perforated domains. This is similar to the equation considered in [4], however, here we shall utilize homogenization techniques to obtain error estimates based on characteristic scale and coarse-grid size. The advantage of the MsFEM approach is the localized support of the basis functions compared to the patch extensions of the method based on the LOD [4,13,23]. However, due to the limitations of homogenization theory to mostly periodic problems, the error estimates obtained in MsFEM are confined to periodic media. Applicability beyond this setting must be verified numerically. Due to technical considerations arising from the Poincaré constants in perforated domains c.f. [4,29], we consider here domains only created by isolated particles so that the Poincaré constant remains uniformly bounded with respect to pore and coarse-grid sizes. We will briefly discuss details of this technicality.
The paper is organized as follows. We first give a problem setup and a brief necessary background on periodic homogenization in perforated domains. With this foundation we are able to state our MsFEM algorithm for perforated Neumann problems. Due to technical considerations of Poincaré constants in perforated and complicated domains, we give a brief overview of their possible impact to error estimates in this setting. With the assumption that these constants are bounded with respect to scale and coarse-grid block size, we are able to derive the standard error expression for MsFEM. The auxiliary results needed for this estimate are given in the appendix for clarity of presentation. Finally, we test our error estimates by implementing the method on two geometries. The first geometry has the perforations contained entirely in the coarse-grid blocks, and the second where the perforations intersect the coarse-grid blocks and the nodal points reside inside the perforations. The second example is critical to demonstrate that the technical restriction of having the perforations inside the coarse-grids is only for simplicity and the method has possible wider applicability.
2. Neumann problem in porous microstructures. In this section, we briefly give the problem statement of Neumann problems in porous microstructures. Due to the multiscale nature of such microstructures and complex geometries, we use the tools of periodic homogenization to derive the effective equation and generate auxiliary cell problems that connect macro-and micro-scale information. From this homogenization framework we develop a MsFEM method where local basis functions are computed in parallel. These basis functions are then used to compute on a coarse-grid as multiscale information is contained in the basis functions. We then present error analysis that closely follows the work of [10,16], and references therein and the standard MsFEM analysis. In this context, we extend this idea to domains with perforations. Finally, we remark on the possible effects of the microstructure on the Poincaré constants in perforated domains [4,29].
We suppose we have a Lipschitz domain Ω ⊂ R n such that it is decomposed into a solid microstructure S ε and an open connected pore space Ω ε with a characteristic pore size of ε. The interior interface of such a microstructure is denoted as Γ ε and the outer global boundary we denote as ∂Ω ε \Γ ε . Given a f ∈ L 2 (Ω ε ), we consider the following Neumann problem in Ω ε . We wish to find a c ε such that Note that we could consider a similar Neumann problem with oscillatory diffusion coefficients in addition to the pore microstructure, however in terms of MsFEM such problems are well studied both numerically and theoretically cf. [9,16,17].
2.1. Periodic homogenization. As noted, solving the equations (1) is complicated by the fact that the geometry has many scales and can be very complex. A useful tool in circumventing this issue is homogenization. In particular we employ the use of two-scale asymptotic expansions to homogenize the system [31]. These techniques are closely related to the design and analysis of MsFEM. A key assumption in many of these methods is the assumption of periodic structure, however, this assumption may be relaxed in the application of MsFEM. We suppose that our medium has additional periodic structure and can be written as where Y is the reference cell, Y * is the open pore space, and Y Γ is the interface, or perforations boundary, in the cell. We briefly recall the periodic homogenization of (1), similar derivations can also be found in [5]. We expand using the two-scale asymptotic expansion as here y = x ε , the periodic variable in the unit cell. Applying the above expansion into (1), and gathering the ε −2 terms in (1a) and ε −1 terms in (1b), we obtain −∆ yy c 0 = 0 in Y * , −∇ y c 0 · n = 0 on Y Γ , hence, c 0 (x, y) = c 0 (x). Taking the next orders in ε we obtain the cell equations with c 1 being y-periodic. We denote the average over a domain by · Y * = 1 |Y | Y * · dy, and require c 1 Y * = 0 to eliminate the arbitrary constant. We may simplify the above cell problem by writing the two-scale function Here, Q(y) ∈ (H 1 # (Y * )) n , where # signifies periodic, satisfies the following cell problems with Q being y-periodic and Q Y * = 0. Here we write I to mean the n × n identity matrix or (I) ij = δ ij . Taking the final order in ε of (1), we arrive at Averaging over the y variable, the second term vanishes on the boundary of the interface. The homogenized problem can then be written as where (D * ) ij = δ ij + (∇ y Q) ij Y * and we write c T = c 0 Ω .
2.2.
A perforated multiscale FEM. In this section, we outline the method of MsFEM for perforated Neumann problems. Due to the Neumann condition, we are able to construct a conforming multiscale finite element method. This is in contrast to the case when the problem has a Dirichlet condition on the perforations and weak conditions can be effectively utilized [3,30]. Moreover, the coarse-grid blocks may intersect the perforations and so the basis functions can have holes on portions of the boundary. Our analysis however, will focus on the case where the holes are entirely contained in each coarse-grid block. This assumption is to avoid complicated technical details in the homogenization and error estimates [6,12]. We will present numerical examples in Section 3 with coarse-grids intersecting the microstructures. We will highlight the areas where this non-intersecting assumption is made and can be weakened. We begin first with some notation. Suppose we have a domain with microstructure Ω ε , not necessarily periodic, and a quasi-uniform coarse-grid T h (of characteristic grid size h) of the domain. In our error analysis, we will suppose that the coarse-grid does not intersect the perforations along the edges. We suppose ε < h, as the case where h < ε corresponds to a full direct numerical simulation and the coarse mesh will resolve the geometry. The standard finite element error of local problems is ignored in the analysis.
We let K ∈ T h be an element in the partition (may also be triangulation) without perforations and denote K = K ∩ Ω ε to be a perforated element. We denote the boundaries of the perforations Γ K and the boundary of the element ∂K. We denote the global set of vertices N h associated with T h . For x i ∈ N h , we build the corresponding multiscale basis function φ i such that for K ∈ T h we have Here we denoted the linear classical nodel shape function associated with the global node x i ∈ N h by φ i L . Note here we have dropped the K index from the multiscale basis.
Remark 2.1. If the coarse-grid does not intersect the perforations, then, ∂K\Γ K = ∂K, and this is the case we will consider for our analysis and proofs. However, the variational form for a local problem with intersections is to find for all ψ ∈ H 1 ( K), ψ = φ i L on ∂K\Γ K . In our algorithm, we suppose that the perforations leave portions of the linear Dirchlet condition intact so that the above right hand data gives a useful basis function.
We denote the space of solutions by
Note that this approximation space is conforming. We will approximate the solutions of (1) by c h ∈ V h ms . The corresponding variational form is given by where the boundary term over Γ ε vanishes due to condition Neumann condition. We take a similar approach as in [16,17]. We denote the Lagrange interpolation operator for the multiscale basis as I h ms : V → V h ms . We let I h ms (c 0 ) be the interpolant of c 0 (the solution of (7)) using the multiscale basis φ i . More specifically, the interpolant is Here we emphasize what domain the interpolant its expressed on as in the analysis it will be important to distinguish between perforated and unperforated quantities. For example, even if x i ∈ S ε , c 0 (x i ), a solution to (7), is well defined as this function is expressed on the unperforated domain Ω and is what we refer to as a coarse-grid quantity.
2.3.
Poincaré constants in perforated domains. Before we start the error analysis of our MsFEM, it is important to note that we do not track the Poincaré constants as we suppose that such a constant is uniformly bounded with respect to the microstructural parameters of scale and separation length. For our case of isolated particles of size and separation length ε, this uniform bound is known to hold c.f. [4,29]. Here we briefly discuss some of the cases that may occur. The analysis of Poincaré constants has a long history and too vast of literature to discuss here. We will follow the method and examples introduced in [29], and the references therein, for the application of weighted Poincaré inequalities. It is known that the shape of the domain and moreover, the microstructure may have an effect on the Poincaré constant. First recall that for K, with diam(K) = h, we have where C P (h, ε) is the Poincaré constant that may depend on the size of the diameter of the domain as well as the separation length of the microstructure and scale. We take the scale of the pore and the scale of the separations to both be of order ε.
A classical illustration of domain dependence of the Poincaré constant is the dumbbell shaped domain. Suppose we take K = [0, h] 2 and remove the two thin pieces Then, we let K = K\S ε . In Figure 1, we see S ε is black, and K as the white background. Using isoperimetric inequality methods to bound C P for the dumbbell domain, [25,26] yields the pessimistic bound where C is a benign constant independent of h or ε. By using methods derived for high-contrast problems and weighted Poincaré inequalities, the authors in [29] are able to obtain a more optimistic bound by a constructive method. For the dumbbell domain it is known that A particularly bad Poincaré constant occurs when the domain is particularly tortuous. Using methods from [29] adapted to the perforated case, the authors in [4] show that for a reticulated interlocking filamented structures, Figure 2, have the bound However, in the geometries we will consider in our analysis and in our numerics we largely consider isolated particles that do not create the ill effects that may be possible in the above domains. This was first demonstrated for high-contrast domains in [29] and in perforated domains in [4]. Given that the particles are of characteristic size and separation length ε in R 2 , along with technical assumptions for the constructive approach it can be shown in [4] that where C is a benign constant independent of h or ε. The above bound is the Poincaré bound we shall suppose throughout the rest of the paper.
2.4.
Error analysis of the MsFEM. We will now show the errors introduced in the method are similar to that of the classic elliptic MsFEM error estimates. Indeed, due to the structure of (1), the homogenization procedure and related analysis are very similar to the elliptic oscillatory case considered in [16,17]. Thus, the proofs here follow closely those works. In this analysis, we suppose that the perforations do not intersect the coarse-grid, i.e. all the microstructure is in the interior of the basis functions (8). This is to avoid technical considerations of periodic unfolding methods, however, we will discuss the possible extensions in this direction. We will ignore the fine-scale discretization error throughout and consider only coarse-grid error h. This is also the case when we investigate the numerics. Finally, we will also suppose the microstructure has periodic structure and, the domain and data are sufficiently smooth for the following estimates to hold. More specifically, we suppose regularity so that c ε ∈ H 1 (Ω ε ), c 0 ∈ C 2,1 (Ω), and Q ∈ (C 1 (Ȳ * )) n . We proceed as in [16], note that the basis functions may be expanded in K as This is simply the two-scale expansion similar to (3), applied to the multiscale basis. We denote the residual in this expansion by R ε and estimate its magnitude in Lemma 2.4. Lemma Again, as with the multiscale basis for x i ∈ N h , we define φ i 0 that satisfies the homogenized equation with φ i L the classical nodal shape function associated to x i , and φ i where Q is given by (6) after rescaling, and As we would like to multiply quantities that are on the periodic cell and the homogenized domain and then rescale to the perforated media, we note here that there is some ambiguity in the notation of Q(y) for y ∈ Y * and the same function rescaled to Ω ε . When there is ambiguity in the notation we shall denote in Ω ε . In the case of linear Lagrange triangular finite elements, for example, the functions φ i 0 are a basis for solving the homogenized equations (7) with approximation order h in H 1 (Ω). Note that linear Lagrange is just a choice of convenience and other MsFEM elements have been developed, for example in the case of non-conforming elements in [11], and recent work in the setting of higher order finite elements in [15]. Using the same notation as in [16], we denote the space V h 0 ⊂ H 1 0 (Ω) to be the space of first order finite elements (order h approximation) with zero global Dirichlet conditions. More specifically, we define V h 0 to be the space spanned by φ i 0 , satisfying (18). Further, we define the Lagrange interpolation operator I h 0 : V → V h 0 , to be the operator that gives the Lagrange interpolation in the V h 0 basis. Note that I h ms (c 0 ), given by (11), satisfies −∆I h ms (c 0 ) = 0 in K, −∇I h ms (c 0 ) · n = 0 on Γ K .
Hence, we may expand, as an ansatz as in (17) I h ms this is simply the two-scale expansion similar to (3), applied to the multiscale interpolant in K. We denote the residual in this expansion by R I,ε and estimate its magnitude in Lemma 2.5. Lemma Here we may expand the first term as where φ i 0 satisfy (18). The above quantity may be expressed on the unperforated domain. The next order corrector is given by Q ε ∇I h 0 (c 0 ) in K. Finally, the local boundary layer correction θ I,ε satisfies −∆θ I,ε = 0 in K, −∇θ I,ε · n = 0 on Γ K , Further we will need the so-called global boundary corrector −∇θ ε · n = 0 on Γ ε (25b) We will need the following technical lemma that we will restate and prove in the Appendix B related to the above global boundary corrector.
Lemma 2.4. Let c ε be a solution of (1), let c 0 be a solution to (7), and Q given by (6). We suppose the global boundary layer correction θ ε satisfies (25). Then, we have the following estimate In addition, we have the following estimate for Lemma 2.5. The interpolants satisfy the following corrector estimate Proof. We may compactly write the differential operator associated to (1) as L ε , and by (21), we have that L ε I h ms (c 0 ) = 0. Hence, we may write the formal two-scale ansatz expansion in K of I h ms (c 0 ) as I h ms (c 0 ) = I h 0 (c 0 ) + ε Q ε ∇I h 0 (c 0 ) − θ I,ε + · · · , we may apply the corrector estimate (26) to I h ms (c 0 ) so that we have This is proved in Appendix B in the general setting. Again, by briefly following the arguments of [16], we let c bl be a standard bilinear approximation of c 0 . We have by standard approximation theory and elliptic regularity of c 0 that and . Thus, using the above inequality and (28), summing over K, using the bound (29) we obtain (27).
We are now in a position to state our main theorem. Again, we note that it is for the most part a translation of the elliptic oscillatory case ( [16,17]) into the language of perforated homogenization and perforated multiscale finite elements. The main differences being the need to emphasize what are perforated fine grid quantities and what can be represented on the unperforated coarse-grid.
We will be assuming sufficient smoothness on the domains Ω ε and Ω and that the perforations do not intersect the global boundary ∂Ω or the edges and vertices of the discretization T h . Moreover, we suppose that the Poincaré constants in perforated domains do not interfere with the estimate and satisfy the uniform bound (16). These assumptions may be loosened by periodic unfolding ( [6]) or carefully tracking the Poincaré constants in perforated domains ( [4,29]), however, these may be considered in future works. First we proceed by noting that from the so-called Cea's Lemma we have the following error.
Theorem 2.6. Suppose that c ε is a solution to (1) and c h satisfies the variational form (10). Further, we suppose that the Poincaré constant C P satisfies a uniform bound (16). Then, we have the following error estimate, that there exists a C > 0 independent of h and ε such that Proof. From the classical Cea's Lemma and Galerkin orthogonality we have for C > 0, independent of h and ε, that Taking c I = I h ms (c 0 ) given by (11), and using Theorem 2.7, we arrive at our result.
The above error relies on the following estimate.
Theorem 2.7. Suppose that c ε is a solution to (1) and I h ms (c 0 ) given by (11). Further, we suppose that the Poincaré constant C P satisfies a uniform bound (16). Then, we have the following error estimate, that there exists a C > 0 independent of h and ε such that Proof. Using the expansions (3) and (22) for c ε and I h ms (c 0 ) respectively and the corresponding corrector estimates (26) and (27) we have We begin by estimating each of these term by term. Recall that from (23) I h 0 (c 0 ) is in V h 0 and is a finite element approximation to (7) with the basis spanned by φ i 0 . Thus, we have the estimate Here, we used that both c 0 and I h ms (c 0 ) are able to be represented as unperforated quantities. Note that assuming sufficient smoothness of the perforations, we have Q L ∞ ≤ C and ∇Q L ∞ ≤ C/ε. Using the expression c 1 = Q ε ∇c 0 , we have element-wise Taking the integral over the whole domain on the unperforated coarse-grid quantities, squaring, and summing over K, and using the approximability of V h 0 we have Finally, using the estimates (45) and (47) in the Appendix A for θ ε , θ I,ε we have Combining the estimates (33), (34), and (35) into (32) and summing over K we have Remark 2.8. In terms of the analysis from homogenization, the assumptions that perforations do not intersect the global boundary or the boundary of the coarse-grid may also be relaxed. This can be achieved through the method of periodic unfolding cf. [6] and references therein. In this methodology, assumptions on perforations intersecting boundaries may be relaxed. In addition, corrector estimates such as those derived in Appendix B may be extended to this setting by combining periodic unfolding methods to standard corrector estimate techniques [12]. However, this comes at the cost of much technical overhead and we focus on the theory without intersections and verify that it may be extended via our numerical results.
Numerical examples.
We will now demonstrate our error estimate from Theorem 2.6 on a few numerical examples. We begin by setting up how we will compute the numerical solutions by constructing multiscale basis functions. We will set up numerical experiments that vary both scales ε and coarse-grid value h. This will be done for coarse-grids not intersecting perforations and coarse-grids intersecting the perforations to show that the theory may be extended beyond the theory of non-intersection.
3.1. Problem setup. In our numerics we have the flexibility to add in an oscillatory coefficient, k ε , in addition to oscillations created by the multiscale geometry. We restate the main problem (1) here for our specific case. Recall, we wish to solve the following boundary value problem in the perforated domain Ω ε = Ω\S ε , where Ω = [0, 1] × [0, 1] ⊂ R 2 , S ε is the collection of the perforations, and Γ ε is the interface as shown in Figure 3. We wish to solve Again, we will not utilize the flexibility in our code to add in the oscillatory coefficient in our results, we just state it here to show that it is an easy extension.
Figure 3. Perforated solution domain with coarse mesh of finite elements
To begin construction of the multiscale basis functions, let T H be a partition of the perforated domain Ω into elements K, which are shown in Figure 4. Then, we denote the perforated element K = K ∩ Ω ε and we have that Ω ε = K∈T H K. For a single element, we denote with B the domain of the perforation inside K and ∂B is its boundary. With "•", as shown in Figure 4, we indicate the vertices of K in which the coarse-grid nodal values are calculated, and with N we denote the number of vertices in T H . Then, the multiscale basis functions φ M i , i = 1, 2, . . . , N , are solutions to the following local problems which we solve for each coarse finite element K ∈ T H and {φ L i (x)} N i=1 is the standard piecewise bilinear basis. We show an example of a perforated multiscale basis function with 9 holes in Figure 5. 3.2. Numerical setup and methods. Again, we do not consider the case when k ε (x) is a highly oscillating function of x because we are interested only in oscillations coming from the perforations. Therefore, we take k ε to be a constant. In the MsFEM framework, we use finite elements which are perforated squares and the coarse-grid size is denoted by h. The small parameter ε characterizes the size of the microstructures. For a consistent numerical analysis of the convergence of the method, we need to decrease uniformly h and ε. Hence, we consider solution domains Ω ε with periodically arranged identical perforations. However, the method as a computational tool is not restricted to periodic media, only the analysis is restricted to such simple cases. For the reference solution, we use the total fine-scale grid used in the computation of the local problems with size n f . Thus, the fine-grid is also of order N micro , however, we lose the parallel structure and it must be solved in entirety. This is done using standard bilinear finite elements on the fine-grid rectangles. We run numerical simulations with different number of holes per coarse element. We will vary both h and ε and record the results in tables. In Figure 6 we show example of two geometries and the placement of the coarse-grid elements. In Figure 6(a) we show macro-elements with 4 holes entirely included in them. The second geometry has coarse elements with the edges and vertices inside the perforations, see Figure 6(b). The first example matches our theory, while the second demonstrates the applicability beyond the assumption of the perforations not intersecting the coarse-grids.
We solve the local problems (38) using the standard Finite Element Method with linear Lagrange triangular elements. For the numerical integration we use a Gaussian quadrature rule and we use the preconditioned Stabilized Biconjugate Gradient Method as a linear solver. We show numerical results for the L 2 norm and the L ∞ norm. We recall that for the standard MsFEM , the following L 2 error estimate holds, for the case of oscillatory coefficients in [16], and the proof of Theorem 2.6 in the case for perforated Neumann problems where this estimate can be achieved by a use of the standard Aubin-Nitsche lemma.
The following improved error estimate is shown numerically again in [16], 3.3. Numerical results. We now will present our numerical results. Throughout we take the coefficient to be k ε (x) = 5 and the right-hand side to be f (x) = 16. We begin by performing tests on the first geometry in Figure 6(a) where the coarse-grid does not intersect the geometry. Then, we will perform similar test on the second geometry where the coarse-grid nodes and edges intersect the perforations in Figure 6(b). We perform two tests on each geometry. Fixing h then decreasing ε and then fixing ε and decreasing h.
3.3.1.
Non-intersecting coarse-grid. We now begin with tests on Figure 6(a). Here we keep h = 1 16 fixed and decrease ε. In this test case we have 16 2 = 256 coarse-grid blocks with varying number of holes per block. We start with 1 perforation per block and we finish with 8 2 = 64 holes per macro finite element. The results from Table 1 show a convergence rate, which is even better than the theoretical one given by (39). We also observe that when ε becomes very small, which is equivalent to the ε h 1 2 term getting very small, the convergence rate starts to decline. This is most likely due to the fact that the h 2 term becomes dominant for very small values of ε. In Figure 7 and 8, we show numerical results for different values of ε. Now keeping ε = 1 128 fixed and decrease h. In this experiment we have 128 2 = 16384 perforations and a varying number of coarse-grid blocks. The convergence rates are given in Table 2. When h is relatively large, the error first decreases, and then, when h becomes small enough, the error starts to increase as predicted by the theoretical estimate with a convergence rate close to the improved one (40). Note that the final error in in Table 2, when h = 0.0078125, we are in the h = ε regime. Here we may experience resonance errors from the order ( ε h ) 1 2 error terms. The microscale and the MsFEM solutions are shown in Figures 9 and 10. 3.3.2. Intersecting coarse-grid. Now we will perform the same tests, but this time on a geometry that has the perforations intersecting the coarse-grids, moreover, the nodes of the coarse-grid are directly centered on the perforations. Let us assume that we have some fixed coarse-grid size h and we want to vary the number of holes Figures 11(a), 11(b), and 11(c), respectively.
We now begin with tests on 6(b). Here we fix h = 1 16 and decrease ε. In this test case we have 16 2 = 256 coarse-grid blocks with different number of holes per block.
Here h is sufficiently small and, as we can see from Table 3, we obtain a very good convergence rate, which is even better than the theoretical result (39) and coincides decreases and then, when h becomes sufficiently small we obtain a convergence rate, which tends to the improved one (40). The microscale and the MsFEM solutions are given in Figures 14 and 15.
Conclusion.
In this work we developed and analyzed a multiscale finite element for domains with porous microstructures. The standard error estimates for the case of oscillatory coefficients were shown to also hold in the case of perforated media. Due to the complexity of the Poincaré constants in perforated domains a discussion on their dependence of the microstructure is warranted and was presented. It should be noted that for our case of isolated particles the geometry has no effect on this constant. Further, to develop and support the theory, we provided corrector estimates and boundary correction estimates. Finally, we implemented the algorithm on two different geometries and recorded the results in tables varying geometry size and coarse-grid sizes. The results were in good agreement with the theory. Future works include the incorporation of these methods into more complicated nonlinear and multiphysics problems such as lithium ion transport to speed up computation. In addition, the investigation of classical improvements methods such as oversampling would also be of interest.
Appendix A. Estimating boundary correctors θ ε , θ I,ε . We will need the following inequalities. The following Gagliardo-Nirenberg interpolation inequality [28] w H 1/2 (∂K) ≤ C w 1/2 and the trace inequality for w ∈ H 1 (K) Figure 11. Coarse-grid blocks with different number of perforations per block. we refer the reader to [2] for proof. From the global boundary correction equation, −∇θ ε · n = 0 on Γ ε (43b) we see that we may easily obtain the bounds using the interpolation inequality (41) on the global boundary ∂Ω ε \Γ ε = ∂Ω, (since the perforations do not intersect the global boundary) we have using the classical estimate from [22] θ ε H 1 (Ωε) ≤ C c 1 H 1/2 (∂Ω) ≤ C c 1 1/2 By smoothness, we have Q L ∞ ≤ C and ∇Q L ∞ ≤ C/ε , and so using a trace inequality we have the estimate Thus, since we suppose global regularity of the homogenized quantity c 0 ∈ C 2,1 (Ω), the solution to (7), and therefore is in H 3 (Ω). We have the bound on the quantity of interest εθ ε H 1 (Ωε) ≤ Cε 1/2 . Similarly on the element K, from (24) we see that using (41) and estimates of L ∞ estimates of Q, ∇Q we obtain Finally, using (42), we obtain θ I,ε H 1 ( K) ≤ h −1/2 ε −1/2 C|∇c 0 | H 1 (K) + C(h −1/2 |∇c 0 |) Here we have used the global regularity c 0 ∈ C 2,1 (Ω). Thus, we have after summing over K, εθ I,ε H 1 (Ωε) ≤ C ε h 1/2 . Remark B.2. In addition, to avoid technical details we suppose the perforations do not intersect the global boundary. These assumptions may be relaxed with the use of periodic unfolding techniques to handle perforations intersecting the boundaries [6,12].
Here, we used ω ε = 0 on ∂Ω, (6b), and (43b). On the right hand side, using (7), and for clarity moving to indicial notation we have Taking a closer look at the first term we see that using n j=1 ∂ ∂x j δ ij + ∂Q i ∂x j = 0, from (6) and using the symmetry of D * , we obtain This last term is precisely equation 2.18 of [27], where the oscillatory coefficients are constant and the oscillations arise from the domain alone. In the same way as [27], | 8,514.8 | 2016-10-01T00:00:00.000 | [
"Engineering",
"Materials Science",
"Environmental Science",
"Mathematics"
] |
Spin-Hall-effect-modulation skyrmion oscillator
The electric-current-induced spin torque on local magnetization allows the electric control of magnetization, leading to numerous key concepts of spintronic devices. Utilizing the steady-state spin precession under spin-polarized current, a nanoscale spin-torque oscillator tunable over GHz range is one of those promising concepts. Albeit successful proof of principles to date, the spin-torque oscillators still suffer from issues regarding output power, linewidth and magnetic-field-free operation. Here we propose an entirely new concept of spin-torque oscillator, based on magnetic skyrmion dynamics subject to lateral modulation of the spin-Hall effect (SHE). In the oscillator, a skyrmion circulates around the modulation boundary between opposite SHE-torque regions, since the SHE pushes the skyrmion toward the modulation boundary in both regions. A micromagnetic simulation confirmed such oscillations with frequencies of up to 15 GHz in media composed of synthetic ferrimagnets. This fast and robust SHE-modulation-based skyrmion oscillator is expected to overcome the issues associated with conventional spin-torque oscillators.
Spin-torque oscillators offer promising applications such as wide-range-tunable frequency generation/detection 1,2 , signal processing 3 , and dynamic recording 4 . The spin-torque nano-oscillator (STNO) [5][6][7][8] was the first example, which utilizes the spin precession induced by the spin-polarized current 5,[9][10][11][12] passing through the point contact. Despite a successful demonstration of its high oscillation frequency with wide-ranging tunability 5,[9][10][11][12] , the STNOs still require improvement of their output power and linewidth, as well as their method of magneticfield-free operation 13 . To overcome these issues, techniques such as synchronization between multiple point contacts 14 and self-injection locking 15 have been investigated. Otherwise, an enhanced output power with a much narrower linewidth has been attained by a spin-torque vortex oscillator (STVO) 16,17 , which utilizes the gyration of a vortex core, confined within a point contact. By replacing the vortex core with a magnetic skyrmion, the concept of a spin-torque skyrmion oscillator (STSO) [18][19][20][21][22][23][24] has also been suggested, which would further improve the output power and be able to operate without an external magnetic field.
The narrow linewidth of the STVOs and STSOs is inherently attributed to the larger magnetic volume involved in the gyration 16 . However, the resulting frequencies are limited up to a maximum of a few GHz, which is much lower than typical STNOs [16][17][18][19][20] . To increase the frequencies, it is essential to improve the speed efficiency of the vortex and skyrmion to the external current. Recently, many studies have reported on faster domain walls and skyrmions at the angular-momentum-compensation points of synthetic ferrimagnetic (SFi) systems [25][26][27] . It has also been shown that the frequency of a STSO can increase up to tens of GHz utilizing the SFi systems 21 . Albeit all these promising qualities, there remains several issues in the STSOs. Due to the current-perpendicular-toplane (CPP) geometry of the conventional nano-pillar structures 18,21 , it is challenging to separate the detecting current channel 19,20 from the driving current channel without breaking the cylindrical symmetry. Moreover, the driving channel is relatively large, of which the cross section has to cover the whole area of the skyrmion oscillation path. These issues can be solved by introducing current-in-plane (CIP) geometry, which enables easy separation of the detecting and driving current channels and also, reduces the cross-sectional area of the driving current channel. In this sense, we propose a new spin-torque oscillator-namely, the spin-Hall-effect-modulation skyrmion oscillator (SHEM-SO)-utilizing skyrmion motion directly driven by the horizontal SHE current. In this SHEM-SO geometry, the CPP detecting channel can be separated from the CIP driving channel and also, the cross sectional area of the CIP driving channel is reduced in comparison to the CPP driving channel by the factor of the film thickness (several nanometers) over the skyrmion oscillation path diameter (several tens of nanometers). As the cross-sectional area decreases, the total driving current through the structure and thus, the operation power also decreases by the same factor for the case of a given driving current density. By utilizing Scientific RepoRtS | (2020) 10:11977 | https://doi.org/10.1038/s41598-020-68710-y www.nature.com/scientificreports/ a SFi system, we demonstrate that the SHEM-SO produces a high frequency comparable to the STNOs, while maintaining all the other merits including the narrow linewidth of the STVOs. Figure 1a is a schematic illustration of the spin-Hall current being injected from adjacent nonmagnetic (NM) layers into the ferromagnetic (FM) layer. Upon the injection of an electric current ( I ), the top and bottom NM layers generate vertical spin-Hall current ( I SH ) of opposite spin polarizations 28,29 . The counterbalance between the I SH 's of each NM layer determines the net spin polarization injected into the FM layer. Since the amount of the I SH 's depends on the thicknesses of the NM layers 29,30 , it is possible to control the sign of the net spin polarization by adjusting the thicknesses of the NM layers, as exemplified by the two tri-layered structures with thicker and thinner top NM layers. These two structures experience opposite signs of net spin polarization and consequently opposing SHE torques (Fig. 1b). Due to the skyrmion-Hall effect 31,32 , the gyrational torque tilts the skyrmion trajectories by the skyrmion-Hall angle ( θ SkH ) from the driving force direction (Fig. 1c). Note that these skyrmion trajectories are bound to opposite sides of the wire. Due to these reversed tendencies to the wire sides, if these two structures are joined www.nature.com/scientificreports/ as shown in Fig. 1d, it becomes possible for a skyrmion to form a closed oscillation path around the modulation boundary. We named this system "SHEM-SO". A micromagnetic simulation was performed to confirm this prediction (see "Methods" section for parameter details). The skyrmion indeed exhibited closed steady-state oscillation around the modulation boundary ( Fig. 1e) (see also Supplementary Movie S1, 2). The simulation also confirmed that a skyrmion converges to a single steady-state oscillation path regardless of its initial position. Since the oscillation continued for more than thousands of periods (~ 10,000 ns), we could conclude that the oscillation is not a transient behavior.
Results
The key features of the SHEM-SO are the non-zero θ SkH and the tilted modulation boundary with an angle ( θ B ). Depending on the relation between θ SkH and θ B , two distinctive oscillation paths appear. Figure 2 depicts (a) the parallelogram-like path for θ SkH ≥ θ B and (b) the parallel path along the modulation boundary for θ SkH < θ B , respectively.
For θ SkH ≥ θ B , the SHE torque exerts a driving force on the skyrmion in the direction of the electric current and then, the skyrmion moves in the direction at an angle θ SkH from the F SHE (Fig. 2c). As the skyrmion approaches the edge, the edge repulsion force ( F edge ) increases until the skyrmion moves parallel to the edge (Fig. 2d);The F edge comes from the exchange interaction between the opposite in-plane magnetizations of the wire edge and the skyrmion boundary, whose magnetization directions are fixed due to the chirality induced by the Dzyaloshinskii-Moriya interaction (DMI). When the skyrmion reaches the modulation boundary, the F SHE starts to decrease due to the cancellation between the opposite SHE regions and the skyrmion drifts away from the edge by F edge (Fig. 2e). The same processes then repeat in the opposite SHE region to form a closed path.
For θ SkH < θ B , the skyrmion first reaches the modulation boundary. The net F SHE at the modulation boundary now tilts away from the electric current direction until the trajectory becomes parallel to the modulation boundary (Fig. 2f) (see Supplementary Discussion). Then, upon approaching the wire edge, F edge pushes the skyrmion into the opposite SHE region (Fig. 2e). The same processes then repeat in the opposite SHE region to form a closed path.
The oscillation characteristics of the SHEM-SO is now discussed. The length of the oscillation path and the skyrmion speed along the path determine the oscillation frequency ( f ). For heuristic reasons, θ B is fixed to 45°, since f is a slow varying function of θ B with a maximum near θ B ∼ = 45° (see Supplementary Discussion). Since the oscillation path is confined within the wire, the path length directly depends on the wire width ( w ) and therefore, a narrower wire is preferred for a higher f . The simulation results confirm that f is inversely proportional to w over a range down to the practical lower limit (around 50 nm), close to the skyrmion size (Fig. 3a).
For fixed oscillation paths with a given geometry of w and θ B , the skyrmion speed now solely determines f . Since the skyrmion motion is driven by the SHE torque by a DC current, a higher current density ( J ) leads to a faster skyrmion and consequently, a higher f . However, since J also increases the skyrmion-Hall force, the skyrmions can be destroyed at the wire edge, if one injects excessively large J . Therefore, the maximum applicable current density ( J max ) is determined by the counterbalance between the skyrmion-Hall force and the edge-repulsion force. The maximum possible frequency ( f max ) is attained at J max .
To increase f max further, the SFi system was adopted that are known to exhibit high DW/skyrmion speeds near the angular-momentum compensation point [25][26][27] . The SFi system consists of two FM layers with an antiferromagnetic coupling separated by a spacer. By controlling the compositions of the FM layers, the ratio r www.nature.com/scientificreports/ ( ≡ γ 2 M S1 t 1 /γ 1 M S2 t 2 ) can be adjusted to reach the angular-momentum compensation condition (i.e. r = 1 ), where γ 1,2 , M S1,2 , and t 1,2 are the gyromagnetic constants, saturation magnetizations, and thicknesses of the FM layers, respectively. As r approaches 1, the SHE-torque efficiency increases considerably and the gyroscopic force eventually decreases to zero [25][26][27] . The former enhances the skyrmion speed while the latter enhances J max . Figure 4 clearly shows that f max increases drastically as r approaches 1 and, f max up to about 15 GHz is successfully demonstrated at J max = 5.0 × 10 11 A/m 2 , which corresponds to 30 μA through the magnetic layer.
Discussion
Ideally, the SHEM-SO can exhibit frequencies from 0 to f max since the magnitude of the applied dc current can linearly control the speed of a skyrmion and the resulting frequencies of the system. Thus, highly tunable nanoscale oscillator can be realized via the SHEM-SO. Additionally, one can find in the Fig. 3b-d, that as w decreases, the skyrmion motion becomes more confined, but still exhibits an elliptical path, elongated along the wire length. Since the skyrmion is driven by the CIP geometry, it is easy to put a CPP magnetic tunnel junction (MTJ) nano-pillar structure on the tip of the elliptical path and thus, makes it possible to detect a large output signal. Since the magnetization under nano-pillar completely switches in this case unlike the STNOs, where the magnetization rotates slightly away from the magnetization direction of the pinned layer, the output power from magnetoresistance (MR) effect is expected to increase significantly. Moreover, a skyrmion-creation channel can be placed outside the oscillation area, where the created skyrmion can be transported to the oscillation area. Therefore, the oscillation, detection, and creation can be operated independently by different channels, allowing much more versatile architecture. Alike the STVOs where the oscillation of a vortex core exhibits narrow linewidth 16 , the large magnetic volume of the well-defined skyrmion structure is also expected to provide a robust oscillation resulting in a narrow linewidth. Since the SHEM-SO operation does not require external magnetic field, the SHEM-SO can solve all the issues of the conventional spin-torque oscillators, with frequencies up to tens of GHz utilizing the SFi systems. www.nature.com/scientificreports/ Although we can claim that the SHEM-SO can have all the great features described above, we should also discuss the obstacles in realizing the concept. First, we assume that a stable and isolated skyrmion that does not annihilate at the wire edge is possible, while this is not true at least for now. Further studies to realize a stable skyrmion is needed. Second, the SHEM-SO does not account for the nucleation of a skyrmion, which is a quest of its own. Although we stated that a separate nucleation channel can be placed outside the oscillation area, the nucleation method should be well established beforehand. Most nucleation methods include perturbation of a skyrmion-favoring system, mostly with high enough current 33,34 . The method described in the Ref. 34 should be attachable to the SHEM-SO rather easily, since it is also in CIP geometry. Finally, for a reading mechanism, integrating a small MTJ onto the oscillating path of a skyrmion would not be so easy since the frequency of the SHEM-SO is directly related to the dimension. Scaling an MTJ would be a bottleneck in realizing high-frequency SHEM-SO.
In summary, we have proposed an entirely new concept of a spin-torque oscillator. With the promising features regarding output power, linewidth, magnetic-field-free operation and versatile CIP geometry, the SHEM-SO can open a whole new chapter of designing nanoscale tunable microwave oscillators. Although the concept still faces some obstacles to overcome, such as the realization of a stable skyrmion and integrating/scaling rather complicated structure, the realization of the SHEM-SO would not be too hard/far considering all the efforts being put into skyrmion studies nowadays.
Methods
Micromagnetic simulation. A finite-difference micromagnetic simulation was carried out using OOMMF code 35 with a DMI module 36 . The cell-size was set to 1 nm * 1 nm * x (thickness of a single ferromagnetic layer) nm. For the tri-layered ferromagnetic films (Figs. 1, 2, 3), the FM layer thickness was set to 0.6 nm. Typical magnetic parameters of Pt/Co/Pt films were used 33 , that are 580 kA/m for the saturation magnetization, 15 pJ/m for the exchange stiffness (~ 8.4 nm exchange length), 0.8 mJ/m 3 for the perpendicular magnetic anisotropy, and 3.5 mJ/m 2 for the DMI strength. For the synthetic ferromagnetic films (Fig. 4), the thicknesses of the two FM and space layers were set to 0.4 nm. The same magnetic parameters as those of the above tri-layered ferromagnetic films were used, with the exception of the magnetization of the top FM2 layer, which varied from 0 to 580 kA/m. The exchange stiffness between the layer 26 was set to − 0.3 pJ/m. The damping-like spin-orbit torque efficiency from the SHE was set to ± 10 -13 Tm 2 /A for each opposite SHE region. The current density varied over a range of 0.01-0.5 × 10 12 A/m. The Gilbert damping parameter was 0.01 in most cases, except in the case of Figs. 1 and 2 that utilized a value large enough to ensure the clear visualization of the oscillation paths. Finally, the initial state of the magnetization was manually set to be an isolated single skyrmion, which automatically stabilizes into its energetically-favored profile and size (see Supplementary Discussion for the image of the initial state). The type of a skyrmion was Nèel-type and the chirality were chosen to be counter clockwise, however the chirality does not affect the result in any ways, since it only reverts the direction of an oscillation. | 3,575.6 | 2020-07-20T00:00:00.000 | [
"Physics"
] |
Mathematical model of blood and interstitial flow and lymph production in the liver
We present a mathematical model of blood and interstitial flow in the liver. The liver is treated as a lattice of hexagonal ‘classic’ lobules, which are assumed to be long enough that end effects may be neglected and a two-dimensional problem considered. Since sinusoids and lymphatic vessels are numerous and small compared to the lobule, we use a homogenized approach, describing the sinusoidal and interstitial spaces as porous media. We model plasma filtration from sinusoids to the interstitium, lymph uptake by lymphatic ducts, and lymph outflow from the liver surface. Our results show that the effect of the liver surface only penetrates a depth of a few lobules’ thickness into the tissue. Thus, we separately consider a single lobule lying sufficiently far from all external boundaries that we may regard it as being in an infinite lattice, and also a model of the region near the liver surface. The model predicts that slightly more lymph is produced by interstitial fluid flowing through the liver surface than that taken up by the lymphatic vessels in the liver and that the non-peritonealized region of the surface of the liver results in the total lymph production (uptake by lymphatics plus fluid crossing surface) being about 5 % more than if the entire surface were covered by the Glisson–peritoneal membrane. Estimates of lymph outflow through the surface of the liver are in good agreement with experimental data. We also study the effect of non-physiological values of the controlling parameters, particularly focusing on the conditions of portal hypertension and ascites. To our knowledge, this is the first attempt to model lymph production in the liver. The model provides clinically relevant information about lymph outflow pathways and predicts the systemic response to pathological variations.
Introduction
The liver is one of the vital organs in the human body, and it plays a fundamental role in numerous functions, including protein synthesis, metabolism, bile secretion, and detoxification. Diseases of the liver are increasingly prevalent in the West, and they represent the fifth most common cause of death in Europe. There are many possible causes of liver disease, including alcohol, viruses, and drugs.
The liver has a circulatory system specific to its function. It is supplied by two major blood vessels: the hepatic artery, which contains fully oxygenated blood, and the hepatic portal vein, which contains partially deoxygenated blood that is rich in nutrients, since it originates from the intestines. Blood flows out of the liver through the hepatic veins. Within the liver, each of the hepatic artery and hepatic portal vein repeatedly bifurcates into successively smaller vessels forming two trees of vessels. On the microscale, the terminal generation of the trees of the hepatic artery and the hepatic portal vein lies, together with bile ducts, in structures called portal tracts. From the portal tracts, blood flows into the sinusoids, a network of small, tortuous, interconnected vessels that carry blood to the central vein, the terminal generation of the hepatic venous tree of vessels. Through successive confluences, blood is carried to the hepatic veins that drain into the inferior vena cava. There are typically around three to seven portal tracts supplying each central vein, and each portal tract supplies about three central veins (Teutsch et al. 1999).
The sinusoids are lined by a layer of fenestrated endothelium. Fenestrations are small holes of approximately 100 nm diameter covering 2-3 % of the area (Burt et al. 2006), which allow plasma to pass from the sinusoids to the space of Disse, a region surrounding each of the sinusoids that is filled with interstitial fluid. The flow from the sinusoids to the interstitial space is driven by both mechanical and oncotic pressure differences between the two spaces. The oncotic pressure difference arises due to proteins in the plasma, but it is normally small compared to the mechanical pressure differences (Laine et al. 1979). The rate of flow from the sinusoids to the interstitium is given by the hepatic filtration coefficient multiplied by the total pressure difference (mechanical plus oncotic) between sinusoids and interstitium. An estimate of this coefficient for cats was found by Greenway et al. (1969).
On the microscale, the liver can be visualized as being composed of functional units called lobules (Vollmar and Menger 2009). The classic model of a lobule is a prism with a hexagonal cross section, a cylindrical central vein running along the central axis of the prism, and portal tracts along each of the six axial edges (see Fig. 1). The boundaries between lobules are called vascular septa; in some species, such as the pig, these are quite distinct, while in humans the distinction between lobules is less clear (Lautt 2010).
Interstitial fluid is removed from the liver via one of two pathways. The first is through the lymphatic ducts within the liver. There are lymphatic vessels distributed throughout the lobule, and these take up interstitial fluid actively at a regulated rate; however, the dependence of the rate of uptake upon the interstitial pressure and other parameters is not fully known. Elk et al. (1988) performed experiments on livers of anesthetized dogs to determine typical rates of uptake by the lymphatic vessels. In their experiments, they determined the effective resistance of the lymphatic vessels, that is, the increase in interstitial pressure required to produce a unit increase in volumetric flux taken up by the lymphatics. The lymphatic vessels have valves to prevent backflow, and they transport the fluid toward the main lymphatic vessels located in the portal tracts, from where the fluid flows out of the liver. The fluid eventually drains into the venous system at the junction of the left subclavian vein and left jugular vein.
Secondly, interstitial fluid can leave the liver by passing directly through its surface. Conditions of high intrahepatic pressure lead to a pressure imbalance across the surface of the liver, which drives more fluid across it. Different regions of the surface have different properties: On the lower surface, a double membrane comprising Glisson's capsule and the peritoneal membrane separates the liver from the peritoneal cavity, while the upper surface of the liver is not peritonealized, and there is a space between the liver and the diaphragm in which interstitial fluid can collect. Flow across the liver surface is of particular interest in this paper, because if the flow of interstitial fluid into the peritoneal cavity is too large, fluid can build up in the cavity, leading to a condition called ascites. Ascites, in turn, causes the peritoneal pressure to rise; for example, Laine et al. (1979) performed experiments on anesthetized dogs and found that for every 9.5 ml per kg body weight added to the peritoneum, there is a 1 mmHg rise in the pressure there.
In this paper, we investigate the effect of changes in blood pressure within the liver on the production of lymph by the liver. Such changes are common in small-for-size liver syndrome, which occurs when the functioning liver mass is too small relative to the patient's body weight and is a relatively (a) (b) Fig. 1 a Sketch of a cross section of a single lobule, showing relevant geometrical parameters. b Sketch illustrating the arrangement of lobules in the model liver. A section of the outer surface of the model liver is also shown. The surface is assumed to be flat and the axes of the portal tracts parallel to the surface. The surface cuts the lobules so that the out-ermost lobules have area equal to the interior lobules, although they are pentagonal, rather than hexagonal, as shown. With this arrangement, the outer surface of the liver is at a distance L lob /4 from the nearest portal tracts and 3L lob /4 from the nearest central veins frequent complication after partial resection of the liver, after a liver transplantation when the donor is smaller than the recipient, or after living-donor liver transplantation, in both donor and recipient.
There are some previous works on mathematical modeling of the hemodynamics in the liver. Rani et al. (2006) developed a computational fluid dynamics model of flow along a terminal portal vein, hepatic artery, and two sinusoids with fenestrations. They used a non-Newtonian shear-thinning model for the blood rheology. Van Der Plaats et al. (2004) and Debbaut et al. (2011) used electrical analog models to describe the generations of vessels, finding the pressure and flow in each generation. Hoehme et al. (2010) developed a model to quantify regeneration of the liver after lobular damage.
Since the sinusoids are small, numerous, and interconnected, it is reasonable to describe them as a porous medium, and a few models have used this technique. Ricken et al. (2010) developed a poroelastic model of the liver tissue and combined this with a model of the development of sinusoidal orientation to model remodeling of the liver tissue after injury. Bonfiglio et al. (2010) also considered a porous medium model of a single classic hexagonal lobule and analyzed the effects of anisotropic permeability, non-Newtonian effects, and compliance of the tissue. Debbaut et al. (2012a) used a cast of a liver, combined with a computational fluid mechanical simulation, to find the effective permeability of the sinusoids in different directions through the tissue, while Debbaut et al. (2012b) employed these data to develop a three-dimensional lobular model, which they used to investigate the role of the vascular septa.
In this paper, we develop a mathematical model of blood and interstitial fluid flow in a lobular model of the liver, in order to estimate the rate of uptake of lymph and the flux of fluid across the surface of the liver. Following Bonfiglio et al. (2010) and Debbaut et al. (2012b), we treat the liver as composed of lobules that are prisms all of equal length, and with no variations in the third dimension. We use a porous medium description of the tissue of the lobules, to describe both the flow in the sinusoids and that in the interstitium. We assume that each spatial point in the model represents a multitude of both sinusoidal vessels and interstitial space, as illustrated in Fig. 2. We prescribe the blood pressure at the portal tracts and central veins, and we assume that blood vessels do not cross the vascular septa from one lobule to its neighbor.
Geometry
In the classic lobule model by Kiernan (1833), each lobule is described as a regular hexagonal prism with portal tracts at each vertex and a central vein along the axis. Since then, this morphological description has been generally accepted as a Burt et al. 2006); and 'I'-interstitial space (typical width of space of Disse approximately 500 nm, Straub et al. 2007) good idealized representation of the liver lobular structure, and it is reported in many anatomy textbooks. In this paper, we adopt this model and model the entire liver as a lattice of identical hexagonal lobules, each of which has a circular central vein of diameter D CV along its axis and a number of circular portal tracts of diameter D PT at each vertex, as shown in Fig. 1.
We assume that the axial dimension of the lobules is long compared to their width and that the axial pressure gradient is sufficiently small, so that we may treat the flow as twodimensional in the cross-sectional plane.
The Glissonian-peritoneal membrane is treated as a covering of the lateral faces of the outermost lobules (and not the end faces of the lobules, because end effects are neglected), and similarly to the bare part of the liver surface. We denote the total volume of the liver by V liv and its total surface area by A liv (estimated in Appendix 6.1) and define χ as the proportion of the surface area covered by the bare area (the rest being covered by the Glisson-peritoneal membrane). We also assume that the axes of the lobules are parallel to the surface and furthermore assume that the surface cuts the lobules in such a way that the areas of the outermost lobules (now pentagonal prisms) are the same as those of the interior hexagonal lobules, as shown in Fig. 1b.
Governing equations
Within each lobule, the sinusoids and lymphatic vessels are numerous, and they are small compared to the lobule size.
This motivates using a homogenized model for the flow in the sinusoids and in the interstitial space, similar to those considered by Bonfiglio et al. (2010); Debbaut et al. (2012b) and Ricken et al. (2010). We work in terms of the spatially averaged flux per unit area u instead of the particle velocities v; the spatially averaged flux is the Darcy velocity. In particular, we introduce u S and u I as the volume-averaged flux per unit area in the sinusoids and in the interstitium, respectively, defined as (1) In the above expressions, Ω is an elementary volume, which is significantly larger than the microscale (see Fig. 2) but much smaller than the characteristic scale of a lobule. Moreover, Ω S is the blood volume contained within Ω and Ω I the volume of interstitial space in Ω, so that φ S = Ω S /Ω and φ I = Ω I /Ω are the corresponding porosities, with φ S + φ I ≤ 1. We model the flow using Darcy's law for flow in a porous medium. Thus, where p S and p I are the mechanical pressures of the blood and interstitial fluid, respectively, k S and k I are the permeabilities of the sinusoids and interstitial space, respectively, μ S is the viscosity of blood, and μ I is the viscosity of interstitial fluid. The value of k I is estimated in Appendix 6.2. Following Laine et al. (1979), we assume that fluid passes from the sinusoids to the interstitium through the fenestrations in the walls of the endothelial cells at a rate proportional to the pressure difference between the blood and interstitial fluid. The effective pressure difference equals the mechanical pressure difference plus the oncotic pressure difference, but Laine et al. (1979) argue that both the osmotic reflection coefficient and the typical oncotic pressure differences are small, meaning that the flux of plasma from sinusoids to interstitium per unit volume of liver tissue only depends on the mechanical pressure difference and is given by where C f is the hepatic filtration coefficient, equal to the volume flux from the microcirculatory system to the interstitium per unit pressure drop per unit volume of tissue, found experimentally by Greenway et al. (1969) (see also Appendix 6.3).
Within the interstitium, following Elk et al. (1988), we assume that lymph uptake follows a linear relationship where C l is the conductance of the lymphatic vessels and p 0 is the pressure within the flowing lymph; negative uptake is not possible, due to the presence of valves. See also the papers by Stewart and Laine (2001) and by Quick et al. (2008), and Appendix 6.4. Applying conservation of mass in both the sinusoids and interstitium, we have We can rewrite the system of Eqs.
(2)-(5) in terms of the pressures alone as
Boundary conditions
We assume that blood does not flow across boundaries between neighboring lobules, due to the presence of the vascular septa, while interstitial fluid flows freely between them, the former condition corresponding to no flux and the latter to continuity of pressure and flux at the boundaries. At the boundaries of the portal tracts and central veins, and for both the sinusoidal space and the interstitial space, we could choose to prescribe either the pressure or the flux there. In this paper, since there are more relevant data available on the blood pressure, we prescribe the sinusoidal pressures, which are p S,PT at the portal tracts and p S,CV at the central veins. In Sect. 3.3, we argue that reasonable choices of the boundary conditions on the interstitial flow and pressure at the portal tracts and central veins do not significantly affect the results, and in this paper, we assume that there is no flux of interstitial fluid into these vessels.
At the outer surface of the liver, we assume that no blood crosses the surface, corresponding to a no-flux condition, and for the interstitial fluid, we assume that the conductivity of the surface for the interstitial flow equals M, and thus, where p ext is the pressure external to the liver. The liver surface has two distinct regions with different properties: -the 'bare area' at the upper surface, which has permeability M = M B A and external pressure p ext = p DS , and -the lower surface, which is covered by the Glissonianperitoneal membrane, with permeability M = M G P and external pressure p ext = p PC .
Parameter values
A list of the relevant physiological parameters and their typical values is given in Table 1, along with references. These values will be used to produce the results presented in this paper, except where stated otherwise.
Numerical computation
We developed a code to simulate the mathematical model using the commercial software COMSOL Multiphysics, which uses a finite element algorithm. The results were validated by successively refining the mesh and checking for convergence, and also comparing against previous data, where possible. In Appendix 5, we also present the analytical solution of a similar problem, in which portal tracts and centrilobular veins are treated as point sources and point sinks, respectively, and their strength is prescribed, rather than the value of the pressure.
The graphical results presented in this paper were plotted using either COMSOL Multiphysics or Matlab.
Single lobule
We first consider the solution for a single lobule in an unbounded lattice of lobules, representing a lobule well into the interior of the liver. Due to the symmetrical setting of the lobule, we apply no-flux boundary conditions on the interstitial flow at each of its straight edges.
With the parameter values listed in Table 1, the sinusoidal and interstitial pressures are shown in Fig. 3. As expected, the sinusoidal pressure peaks near the portal tracts and is minimized near the central vein. The sinusoidal pressure in the absence of interstitial flow (C f = 0), which was studied by Bonfiglio et al. (2010), only differs by about 0.008 % from that obtained in this study (using the same parameter values). The rows corresponding to N and L liv have been deleted The interstitial pressure follows a similar qualitative pattern, but its range is only about 20 % of that of p S . The ranges can be seen in Fig. 4, which shows the pressure on two cut lines through the lobule. The pressure is minimized at the central vein and rises steeply away from this point, which is also where the fastest Darcy velocities are obtained. Figure 5 shows the magnitudes of the Darcy velocities in the sinusoids and interstitium. In the models by Bonfiglio et al. (2010) and Debbaut et al. (2012b), it was found that the magnitude is maximized near the vessels and minimized at points midway between portal tracts, which is also found in this model. On the other hand, interstitial velocity is minimized near the vessels, due to the boundary conditions imposed there.
The total volume flux of blood into the liver can be estimated using the following formula: where n is the outward-pointing unit normal vector, and the integral is taken around the edge of the lobule. We find this to be approximately 0.66 l/min, which is around 39 % of the measured physiological value, 1.717 l/min (Wynne et al. 1989). Using the higher value of the permeability, 3.3 × 10 −13 m 2 , estimated by Bonfiglio et al. (2010) (and a proportionately higher value of k I , given by Eq. 37), we find Q blood ≈ 14.0 l/min, about eight times the physiological value, which is almost exactly in proportion to the increase in k S . It is also of interest to find the rate of fluid taken up by the lymphatics, which equals the average flux per unit volume, q l , integrated over the volume of the liver: This gives approximately 0.17 ml/min, corresponding to about 0.026 % of the total blood volume flux, which is slightly higher than the experimentally derived proportion, γ ≈ 0.01%, estimated in Appendix 6.5.
The principle of mass conservation implies that there is a relationship between the spatial averages of the pressures, owing to the fact that the flux of blood into the sinusoids minus the flux out equals the net volume flow rate from sinusoids to interstitium equals the rate of uptake of lymph. As long as p I > p 0 everywhere (which is expected to be the case in normal physiological conditions), we have where a bar indicates the spatial average. This equation can also be derived by integrating Eq. (7) over the domain.
Multiple lobules
Simulations on a lattice consisting of as many lobules as was possible to resolve indicate that the fluid pressure distribution in the lobules that are away from the outer boundary of the model is very close to the pressures in the single lobule simulation that was described in Sect. 3.1. Thus, the effects of the outer surface of the liver seem to be confined to those lobules that are very close to, or bordering, the surface. This suggests that the arrangement of the interior lobules does not significantly influence the rate of interstitial fluid crossing the liver surface; instead, it depends only on the arrangement of the lobules near the outer surface. Thus, in order to estimate the flux across the surface, there is no need to consider a model incorporating the details of the whole liver, and only a model of the near-surface region is required. Therefore, in this section, we consider a simulation of the flow and pressure in a few lobules in a region that borders on the outer surface (see Fig. 6). The interstitial pressure in the few outermost lobules at the Glissonian-peritoneal membrane is shown in Fig. 7a. As expected, the interstitial pressure in the innermost lob-ules in this model is similar to that of the single lobule presented in Sect. 3.1, while the pressure distributions in the two outermost lobules are visibly different, which is due to the boundary conditions imposed at the outer boundary. The range of interstitial pressures in the outermost lobule is about six times that of an internal lobule. Figure 7b shows the interstitial pressure in the outermost lobules near to the bare area. In this case, the effect of the liver surface penetrates through a larger number of lobules than it does near the Glissonianperitoneal cavity, so a larger number of lobules are needed to resolve the solution. As in Fig. 7a, the pressure distributions in the innermost lobules in Fig. 7b are similar to that in the single lobule solution in Sect. 3.1. The range of interstitial fluid pressure in the outermost lobule at the bare area is about nine times that of the inner lobules.
The fluid loss through the surface of the liver equals the average flux per unit area through the surface multiplied by the surface area. Thus, the flux through the Glissonianperitoneal membrane equals where A G P = (1 − χ)A liv is the area covered by the Glissonian-peritoneal membrane, the integral is taken along the lower edge of the bottom lobule in Fig. 7a, and the numer- ical value is derived using the parameter values in Table 1. Similarly, the flux through the bare area is given by where A B A = χ A liv is the bare area, and the integral is taken along the bottom edge in Fig. 7b. The proportion of the space covered by the bare area, χ , is not well known: However, the results presented here are qualitatively independent of its value. The total flux crossing the surface is Q surface = Q G P + Q B A ≈ 0.19 ml/min under normal physiological conditions, which is about 0.028 % of the flux Q blood listed in Table 1, and about 1.1 times Q L . This implies that the bare area leads to an increase in the total flux crossing the liver surface by around 10 % compared to the flux that would be obtained if the entire surface were peritonealized. The total rate of lymph production by the liver equals Q liver−lymph = Q L + Q surface ≈ 0.36 ml/min, which corresponds to 0.51 liters per day of fluid production; this is of the same order of magnitude as the measured physiological values.
Effect of abnormal physiology and variation in the model parameter values
In this section we consider the effect of changing certain model parameters on the rates of lymph and peritoneal fluid production; we investigate the parameters whose values are uncertain and also parameters that are known to vary in medical conditions of interest. One of the parameters to which the sensitivity of the model is of most interest is the sinusoidal pressure at the portal tracts, because this is known to increase during portal hypertension in small-for-size liver syndrome. As can be seen in Fig. 8, the flows increase linearly with increasing pressure, and the rate is about 0.032 ml/min per mmHg pressure rise for lymphatic uptake and about 0.035 ml/min per mmHg for fluid crossing the liver surface.
In Fig. 9 we show the effect of changing the distance between neighboring portal tracts L lob on lymph uptake in the liver and production of peritoneal fluid by the liver (a) and on blood flux (b). The first figure shows that the effect of the lobule size on lymph flow is fairly small over a very wide range of values of L lob (much wider than physiologically realistic). The effect becomes strong only for unrealistically small values of the size of the lobule. This is because, in this case, the lymph has to flow around the vessels that for small values of L lob occupy a large percentage of the whole cross section of the lobule. On the other hand, as expected, blood flux is extremely sensitive to the lobule size, as shown in Fig. 10b.
Since histological images show wide variation in vessel diameters, in Fig. 10a we plot the effect of vessel diameter on the flux. Increasing the size of the portal tracts increases both the uptake of lymph and the production of fluid by the liver. This is because in this case there is less resistance near the portal tracts, meaning the sinusoidal pressure is higher in regions close to them. In turn, this means the interstitial pressure has a higher average value, and thus, both the uptake of lymph (directly related to interstitial pressure via Eq. (4)) and the flux across the surface of the liver (given by (8)) are increased. Increasing the size of the central vein decreases both of these fluxes because there is less resistance, meaning the sinusoidal pressure is lower near it, and thus, the average interstitial pressure is smaller too, leading to both a lower rate of lymphatic uptake and also less fluid crossing the liver surface. Increasing the diameters of both portal tracts and central veins in proportion to one another has relatively little effect on these fluxes because the sinusoidal pressure distribution stays approximately unchanged, and thus, the interstitial pressure is largely unaffected. The flux of blood, shown in Fig. 10b, increases monotonically if the size of any vessel increases, since the vessel's surface area increases, which decreases the resistance to blood flow. The size of the central vein has a greater effect on the flux than the portal tracts, because there are more portal tracts, so they collectively offer less resistance. During ascites, the peritoneal pressure increases. In Fig. 11 we investigate the effect of different values of p ext . For simplicity, we used p PC = p DS and denote them both by p ext . In this case, the lymph uptake, Q L , does not change significantly, whereas the outflow from the liver surface, Q surface , decreases significantly and, according to the model, might even be reversed. Although the mathematical boundary condition (8) does not prevent flow from the abdomen to the liver, we are not aware of any evidence of its possible occurrence.
We also investigated the effect of changing the value of the flowing lymph pressure, p 0 , as values for this parameter were not found in the literature, which is shown in Fig. 12. Increasing p 0 has only a small effect on the outflow from the liver surface, whereas it strongly affects the uptake from lymphatic vessels, which decreases approximately linearly with p 0 as p 0 increases. For sufficiently large p 0 , it vanishes, because p I < p 0 everywhere, so no fluid is taken up by the lymphatics. We also note that the effects of p ext on Q surface and of p 0 on Q L are analogous to one another; how- Fig. 12 Effect of flowing lymph pressure p 0 on the rates of uptake by the lymphatics and flow through the surface of the liver ever, there is a qualitative difference for high values of these external pressures, which occurs because, for high p 0 , the valves in the lymphangions prevent backflow, and there is no corresponding mechanics for high p ext . The permeability of the interstitial space, k I , could not reliably be determined from experimental data, and it is estimated in Appendix 6.2. In Fig. 13 we show how the lymph production depends on k I . The flux through the surface increases as k I increases, and tends to zero for vanishingly small values of the permeability, while Q L is unaffected by the value of k I . The increase in Q surface is due to a reduction in the resistance of the outflow pathway through the liver surface for higher values of k I .
Since there could be interstitial flow within the portal tracts and central veins, the authors also considered a modified model. In this model, within the portal tracts and central veins, the interstitial flow satisfies along with continuity of pressure and flux conditions on the interface between the vessel and the interior of the lobule. Implementing these conditions leads to an increase in the predicted lymph production of just under 1 %, while the proportionate change in the predicted overall blood flow was much smaller. We also investigated the possible effect of alternative geometrical arrangements of the lobules. To do this, we considered a cuboid liver model consisting of lobules with a square cross section. We scaled the lobules so that their crosssectional areas and the proportion of this area taken up by both portal tracts and central veins were preserved. We also ensured that the surface area and volume of the liver were preserved. With this model, we found that the predicted blood flow was reduced by about 24 %, while both the rate of lymph uptake and the total flux of fluid across the liver surface were about 12 % smaller than those in the case of the hexagonal lattice. The reduction in blood flow is to be expected; since the hexagonal arrangement has six portal tracts supplying each central vein, that arrangement has less resistance to flow than the square one.
Finally, we note that the results presented are based on the geometrical assumption that the lobules are orientated faceon to the surface of the liver, as opposed to end-on. To our knowledge, there is no indication about which case is more realistic. However, since our model suggests that variations in interstitial pressure are small, it is likely that such changes would have a relatively small effect on the predicted rate of lymph production.
Concluding remarks
We have developed a new model of the microcirculation in the liver, which incorporates production and flow of lymph through the two major pathways: uptake by the lymphatic vessels and flow out of the liver through the surface into the peritoneal cavity or diaphragmatic space. We were able to estimate nearly all of the parameters from experimentally derived measurements, and we showed that the expected effect of geometrical variations in the lobules is relatively small. Even though the model is idealized, it provides useful information about lymph outflow and response to pathological states. The results of the model are consistent with physiological measurements.
The model is based on numerous simplifying assumptions on the geometry and mechanics. The most major geometrical assumptions are as follows: -Cylindrical vessels (portal tracts and central veins) that are parallel to one another. -Vessels arranged in a regular hexagonal lattice.
-With regard to the surface of the model liver, the vessels run parallel to it, the outermost lobules have the same cross-sectional area (see Fig. 1).
The main assumptions on the mechanics are as follows: -Both sinusoids and interstitium can be modelled as a porous material obeying Darcy's law. -The flow is two-dimensional.
-Flux from sinusoids to interstitium is proportional to pressure difference (no oncotic effects). -Lymph uptake has a linear relationship to pressure. -Flux across liver surface is proportional to pressure difference.
The major weaknesses of the model are as follows: -No account of effects of irregular geometry, especially near the surface. -Various pressures are required as inputs to the model (pressures in portal tracts and central veins, base lymphatic pressure, and pressures in peritoneal cavity and diaphragmatic space). In practice, these pressures vary in response to blood flow conditions, and ideally, the model should be extended so that these are an output. -Model cannot account for other orientations of lobules relative to the surface.
Many processes take place during liver disease, some of which are not fully understood. Gordon (2012) describes the current understanding of the main processes leading to the development of ascites. These commonly include fibrosis of the liver and active vasodilation, which are not accounted for in the model described in this paper. There is scope to extend our model to include some of these effects, and this should be undertaken in a future work.
Under normal physiological conditions, spatial variations in the interstitial pressure are much smaller than those in the sinusoidal pressure, while approximately 1.1 times as much fluid leaves the liver through the surface as that leaving via the lymphatic ducts.
If the portal pressure were increased, such as would occur in small-for-size liver syndrome, the model predicts significant increases both in the uptake by the lymphatic ducts and in the rate of fluid leaving through the surface of the liver. In order to develop this model into a predictive model for the severity of ascites, a model of the portal venous tree must be added so that pressures in the portal tracts can be related to those in the portal vein, and a model of the peritoneal cavity must be added so that the equilibrium pressure for a given flow rate of lymph from the liver can be found. The extended model would, for example, enable us to predict the consequences of different applied drainage rates.
Appendix A: Analytical model
Here we consider a simplified model of a lobule in the interior of the liver that we can solve analytically, which is based on the analytical model by Bonfiglio et al. (2010). As in that paper, we consider a regular lattice of lobules with portal tracts at the vertices and central veins along the axis and assume symmetry, but for simplicity, we treat these vessels as points in the plane. We denote by x PT,i the location of the ith portal tract, and by x CV, j the location of the jth central vein. In the case of point vessels, we cannot prescribe the sinusoidal and interstitial pressures there, so instead we prescribe the fluxes per unit length of the vessels. We assume that the flux of interstitial fluid from each of the portal tracts and central veins into the interstitium is zero, since these vessels have zero size in the model. The total length of all the lobules is the volume of the liver divided by the area of a lobule, V liv /(3 √ 3L 2 lob /2), and thus, the total length of the portal tracts is twice this value. The volumetric flux of blood per unit length of portal tract from the portal tract into the sinusoids is assumed to be homogeneous throughout the model and equal to where we used the flux found in the numerical calculations in the main part of the paper, rather than the physiological flux, for the purposes of comparison. We also assume that the volumetric fluxes from the sinusoids into the central vein per unit length are homogeneous and define these as where γ is the fraction of blood that is taken up by the lymphatic vessels, which is estimated in Appendix 6.5. Across the boundaries of the lobules, there is no flux in the sinusoids and free flow in the interstitial space. The principle of superposition allows us to write the solutions of the governing Eqs. (6) and (7) where p S,PT,i and p I,PT,i are the pressures in the sinusoids and the interstitium, respectively, in the case with a single portal tract (with the same boundary conditions) at x PT,i and no central veins, and p S,CV, j and p I,CV, j are the corresponding pressures in the case with a single central vein at x CV, j . Solving (6) for p I , substituting into (7), and rearranging, we obtain a single governing equation for p S : where Assuming that the pressures p S,PT,i and p I,PT,i are axisymmetric about the portal tract at x PT,i , where r PT,i is the distance from x PT,i , K 0 is a modified Bessel function of the second kind, C 1 and C 2 are constants to be determined by applying boundary conditions, and we have used the fact that the pressures must decay far from the vessel to eliminate the modified Bessel functions I 0 that would also appear in the general solution. Hence, Similarly p S,CV, j = C 3 K 0 λ 1 r CV, j + C 3 K 0 λ 2 r CV, j , where r CV, j is the distance from the jth central vein and C 3 and C 4 are constants to be determined. The volumetric flux per unit length out of the ith portal tract into the sinusoids equals where we used the fact that lim z→0 (zK 0 (z)) = −1, and, similarly, the flux per unit length from sinusoids to central vein is where the minus sign comes from the direction of the flux. The corresponding fluxes per unit length from the portal tracts into the interstitium and from the interstitium into the central vein both equal to zero, and hence, Solving these for the constants yields Substituting these expressions into (21)-(24), and then into (16) and (17), we obtain expressions for the pressures: We note that, by symmetry, these solutions automatically satisfy the conditions on the boundaries between lobules. The solution is calculated using Matlab and is shown in Fig. 14.
The analytical solution has the advantage with respect to the numerical one presented in the main part of the paper that it allows us to resolve better the details of the pressure near portal tracts and central veins, which is where changes are more significant.
Appendix B: Estimation of model parameters from experiments
6.1 Surface area of the liver Negrini et al. (1990) measured the surface area of five rabbit livers and found them to be 240 ± 13cm 2 . We use the data from Boxenbaum (1980) to scale this up to the human: Typical body masses for rabbit and human are 2.88 and 62.8 kg, while liver masses as a fraction of body mass are 4.78 and 2.42 %, respectively. Estimating that areas scale as the twothirds of the power of volumes gives a surface area of 6.2 Interstitial permeability We were unable to find experimental data on the value of the model parameter k I , so here we develop a model to estimate its value. Interstitial fluid is contained in the space of Disse and also in the gaps between cells of the liver. The space of Disse surrounds the sinusoids and contains the vast majority of the interstitial fluid and also, due to its relatively large width, offers much less resistance to fluid flow than the gaps between cells. Thus, we assume that the permeability of the interstitial space as a whole is dominated by the permeability of the network of vessels comprising the space of Disse.
We use a simplified model of the geometry of the space of Disse and the Kozeny-Carman relationship to estimate the permeability. This relationship states that the permeability equals φ 3 /(cS 2 ), where φ is porosity, S is the specific surface, defined as wet surface area per unit total volume, and c is the Kozeny constant. We treat the sinusoids as cylinders of diameter D sin = 10 µm (Burt et al. 2006) surrounded by an annular region of width D S D = 0.5 µm (estimated from a diagram in Burt et al. (2006)) representing the space of Disse. Assuming that the Kozeny constants of the sinusoids and interstitium are equal gives the relationship: where φ S and φ I are the porosities of the sinusoids and interstitium, respectively, and S S and S I are the corresponding specific surfaces. The ratio of porosities is estimated as the ratio of cross-sectional areas, that is, The wet surface area of the space of Disse is approximately twice that of the sinusoid, meaning that S I /S S ≈ 2. Hence, 6.3 Hepatic filtration Greenway et al. (1969) performed experiments on the livers of anesthetized cats in which they controlled the hepatic venous pressure and measured arterial and portal pressure, liver volume, and blood flow through the liver in order to determine the filtration coefficient, which is the volumetric flow from the sinusoids to the interstitium per unit mechanical pressure difference between the sinusoids and the interstitium and per unit liver mass. They found that the flow rate was F = 0.30 ± 0.03 ml/min per mmHg pressure difference between sinusoids and interstitium per 100 g of liver tissue. In our model, we define the hepatic filtration coefficient, C f , as the volumetric rate of blood flow from sinusoids to interstitium per unit pressure drop between sinusoids and interstitium per unit volume of tissue. This is given by C f = ρ t F = 1060 × 0.30 × 1 10 6 × 60 × 10 = 5.3 × 10 −5 /(mmHg s), where ρ t = 1,060 kg/m 3 is the density of liver tissue (Kotiluoto and Auterinen 2004).
6.4 Conductance of the lymphatic ducts Elk et al. (1988) performed experiments on anesthetized dogs weighing 20-30 kg to determine the flow rate into the lymphatic vessels as a function of interstitial pressure. They found that the volumetric flux of lymph leaving the liver equaled max( p I − p 0 , 0)/R l , where R l = 0.056 cmH 2 O min/μl = 0.056 × (10/13.6) × (60 × 10 9 ) = 2.5 × 10 9 mmHg s/m 3 is the resistance of the ducts. Boxenbaum (1980) gives the typical mass of the dog liver as 2.91 % of body weight, and, taking the typical mass of the dogs in the experiment as 25 kg, this gives the volume flux per unit volume of liver tissue as C l max( p I − p 0 , 0), where C l = 1 R l ρ t 0.0291 × 25 = 1 2.5 × 10 9 1060 0.0291 × 25 = 5.9 × 10 −7 /(mmHg s), where ρ t = 1,060 kg/m 3 is the density of liver tissue (Kotiluoto and Auterinen 2004).
6.5 Fraction of blood that is taken up by the lymphatic vessels Laine et al. (1979) measured the typical outflow of lymph via the lymphatic vessels (not the surface) from the livers of anesthetized dogs, finding it to be 3.5 ± 1.19 ml/h. We scale this up to the typical flow rate for humans by multiplying by the ratio of liver mass of humans to that of dogs. The weight of the animals was recorded as at least 17 kg (here we take it as 17 kg), 62.8 kg is used as a typical human body mass, and the liver masses are 2.91 and 2.42 % of the body masses for dogs and humans, respectively (Boxenbaum 1980). Thus, the flux of lymph uptake from a human liver is estimated as Q L = 3.5 × 62.8 × 0.0242 17 × 0.0291 = 10.75 ml/h = 3.0 × 10 −9 m 3 /s.
The flux of blood through the liver is 1,717 ml/min (Wynne et al. 1989 see also Table 1), and thus, we estimate that under normal physiological conditions, γ = Q L Q blood = 1.0 × 10 −4 . | 10,540.6 | 2013-08-02T00:00:00.000 | [
"Mathematics",
"Medicine"
] |
Inferring gene-to-phenotype and gene-to-disease relationships at Mouse Genome Informatics: challenges and solutions
Background Inferring gene-to-phenotype and gene-to-human disease model relationships from annotated mouse phenotypes and disease associations is critical when researching gene function and identifying candidate disease genes. Filtering the various kinds of genotypes to determine which phenotypes are caused by a mutation in a particular gene can be a laborious and time-consuming process. Methods At Mouse Genome Informatics (MGI, www.informatics.jax.org), we have developed a gene annotation derivation algorithm that computes gene-to-phenotype and gene-to-disease annotations from our existing corpus of annotations to genotypes. This algorithm differentiates between simple genotypes with causative mutations in a single gene and more complex genotypes where mutations in multiple genes may contribute to the phenotype. As part of the process, alleles functioning as tools (e.g., reporters, recombinases) are filtered out. Results Using this algorithm derived gene-to-phenotype and gene-to-disease annotations were created for 16,000 and 2100 mouse markers, respectively, starting from over 57,900 and 4800 genotypes with at least one phenotype and disease annotation, respectively. Conclusions Implementation of this algorithm provides consistent and accurate gene annotations across MGI and provides a vital time-savings relative to manual annotation by curators.
Background
Genetic mutations in mouse models have proven a valuable tool in investigating gene function and facilitating research into human disease. The phenotypes associated with these mutations in mice occur in the context of other defined or undefined mutations in their genome. To determine if a phenotype is caused by a mutation in a specific gene, providing insight into gene function, the impact of each allele in the genotype needs to be evaluated. Doing this manually is a laborious and time-consuming process. Intensely researched genes may have dozens of alleles each with multiple genotypes. The mouse gene Pax6 (MGI:97490) alone has 53 mutant alleles present in some 150 mouse genotypes with phenotype annotations in Mouse Genome Informatics (MGI, as of 12/29/2015). Only a fraction of these reported phenotypes are caused solely by the mutation(s) in Pax6.
MGI (www.informatics.jax.org) provides gold-standard annotations to describe mouse models in the context of both the known alleles and strain backgrounds of the mice [1]. In MGI, phenotype and disease annotations are ascribed to a genetic representation (allele pairs and strain background) of the mice that displayed the phenotype. Sophisticated genetic engineering techniques have allowed for the production of multi-genic models with spatiotemporal control of gene expression and the introduction of multi-color reporters. These increasingly complex models may include both causative mutations and non-causative transgenic tools [2]. To relate phenotype and disease annotations made to a genotype in MGI with the gene, genomic marker, or transgene containing the causative mutation, non-causative markers, such as transgenic tools (e.g., recombinases and reporters), need to be computationally excluded from consideration. For example, mice carrying an inducible knock-in of a mutant form of mouse Kcnj11 in the Gt(ROSA)26Sor locus and a transgene expressing cre recombinase in pancreatic cells, Tg(Ins2-cre)23Herr (genotype MGI:4430413), are annotated to the Mammalian Phenotype ontology (MP) [3] term 'decreased insulin secretion' (MP:0003059) and are a model of permanent neonatal diabetes mellitus (OMIM:606176) [4]. The phenotype and disease annotations are correctly associated with Kcnj11. However, the annotations should not be linked with the cre recombinase transgene or Gt(ROSA)26Sor since neither directly causes the phenotypes or disease displayed by the mice.
MGI is implementing improvements throughout the database to enhance the ability of users to evaluate the function of genes. As part of this, phenotype and disease associations at the level of the gene are now being presented (see below) in multiple locations in the MGI website. The gene-level associations give users an overview of the phenotypes and diseases associated with a gene that can be challenging to decipher from detailed model annotations. For both phenotypes and disease, creating a gene-level annotation implies that mutations in this gene cause the associated phenotype or disease. Therefore, the gene-level annotations may be useful to identify candidate genes for specific phenotypes and/or diseases. To create these gene-level associations, we have developed rules to algorithmically identify and computationally separate causative mutations from transgenic tools in complex mouse genotypes.
The first and simplest implementation of the rules excluded all complex genotypes and removed recombinase and wild-type alleles prior to inferring relationships. The need to separate causative mutations from transgene tools can best be illustrated by example. The complex genotype Apoe tm1Unc / Apoe tm1Unc Fasl gld /Fasl gld on an inbred C57BL/6 strain genetic background (MGI:5514345) is annotated to the human disease Systemic Lupus Erythematosus, SLE (OMIM:152700) [5]. Inferring a causal relationship between Apoe and/or Fasl and SLE may or may not be correct, since it is unclear whether one or both genes are responsible for the observed phenotype. For complex genotypes such as this one, the algorithm does not derive any gene annotations. Conversely, Smo tm1Amc /Smo tm2Amc Isl1 tm1(cre)Sev /Isl1 + mice on a mixed 129 strain genetic background (MGI:3689403) are annotated to the phenotype 'perinatal lethality' (MP:0002081) [6]. The Isl1 recombinase allele is present to drive deletion of the loxP-flanked Smo allele in the cardiovascular system; thus, we do not want to associate the perinatal lethality phenotype with Isl1. As we can clearly identify the non-causative allele and distill this genotype to alleles associated to a single gene, we derive a relationship between the phenotype 'perinatal lethality' and the gene Smo.
Other databases presenting phenotype and disease annotations for model organisms also have to decide when an annotation to a model can used to infer information about gene function. For example, the Zebrafish Model Organism Database (ZFIN, www.zfin.org, [7]) annotates phenotypes to a fish line that includes the alleles, transgenes and/or morpholinos used in an experimental cohort. Each allele and morpholino has an asserted relationship to a gene. Gene level annotations are then inferred for lines where only 1 asserted gene relationship exists (Y. Bradford, personal communication). Gene level annotations are not inferred for fish with more than one asserted gene relationship or for fish expressing nonreporter transgenes. This is similar to the early stages of the MGI algorithm. A key difference between mouse and zebrafish models, for the purpose of inferring gene annotations, is the widespread use of knock-in mutations in mouse where asserting the gene to allele relationship is less straightforward.
In contrast to the restrictive approach taken by ZFIN and MGI, the Monarch Initiative (monarchinitiative.org, [8]), which integrates data from both MGI and ZFIN as well as many other sources, infers gene annotations for all genes in a model. Thus, in the example above (Apoe tm1Unc / Apoe tm1Unc Fasl gld /Fasl gld ) gene annotations would be inferred for both Apoe and Fasl (M. Brush, personal communication). This approach maximizes the number of gene-to-phenotype annotations but means the user will need to evaluate the results to remove false positive associations.
In the current implementation, presented below, the algorithm we have developed excludes additional transgenic tools, accounts for the introduction of expressed genes in alleles, and deals with multi-genic mutations. This approach increases the number of derived gene annotations, while attempting to reduce both the number of false positive and false negative annotations. While the precise implementation would not be of use to other databases the logic behind the algorithm should be transferable.
Gene annotation derivation rules
Refinement of the derivation rules to eliminate additional types of transgenic tools has been an iterative process. Various changes to the MGI database schema have facilitated the identification and removal of many types of transgenic tools and non-causative marker associations. Throughout this process we have worked to minimize the number of false positive associations. The overall goal of these rules is to eliminate transgenic tools alleles and then infer gene, multi-genic marker, or transgene relationships from genotypes with only a single remaining associated locus. Genotypes with multiple associated loci are not used to infer gene relationships, with a few exceptions (see below). Recent re-implementation of these rules in a consistent manner across all MGI products has improved the gene annotation data quality at the display level and allowed us to make this data set available for export.
Details of the annotation derivation rules
In the application of the derivation rules, genotypes are processed in a step-by-step fashion (see Fig. 1). First, the number of genetic loci associated with all alleles in the genotype is determined (Fig. 1, box 1). Genetic loci include: genes within the mutation region, genes expressed by the allele, transgene markers, and phenotypic markers. For example, the alleles App tm1Dbo , Tg(tetO-Notch4*)1Rwng, and Del(7Coro1a-Spn)1Dolm (MGI:2136847, MGI:4431198, MGI:5569506 respectively) are associated with one, two, and forty loci, respectively. The two loci associated with Tg(tetO-Notch4*)1Rwng are the transgene itself and the expressed mouse gene, Notch4. The forty loci associated with Del(7Coro1a-Spn)1Dolm include the deletion region itself (recorded in MGI as a single, unique genetic marker) and all thirty nine endogenous mouse genes overlapping the deletion region. Gene-to-phenotype and gene-to-disease annotations can then be derived for the genes in nearly all genotypes with a single associated genetic locus (see docking sites below for the exception).
For genotypes including more than one locus, such as those described above, non-causative alleles are identified and computationally excluded from consideration. Non-causative allele types in the algorithm include: transgenic transactivator alleles, transgenic reporter alleles, knock-in and transgenic recombinase alleles, and wild-type alleles. Since many knock-in transactivator and reporter alleles may also be knock-out alleles that are causative for a phenotype, only transgenic alleles of these types are excluded. For recombinase alleles, curation in MGI distinguishes between conditional genotypes, where these alleles function as a recombinase, and nonconditional genotypes, where these alleles may be causative; therefore, both transgenic and knock-in recombinase alleles may be eliminated when the genotype is conditional. When the genotype is not conditional, recombinase alleles are retained. For a recombinase or transactivator allele to be excluded, it must express only a single gene. In cases where another gene is expressed, the allele is retained. For example the recombinase allele Tg(Tyr-cre/ERT2)1Lru (MGI:3617509) is excluded at this stage, so no derived annotation to the transgene is computed as a result of this allele. But the allele Tg(Tyr-cre/ERT,-Hras1*,-Trap1a)10BJvde (MGI:4354013) is retained, as it expresses both Hras1 and Trap1a in addition to cre. Additional rules described below address whether and how to derive annotations to those genes. Motifs (ERT2, ERT) designed to alter the expression of cre are not curated as expressed genes and are therefore ignored by the algorithm.
After excluding non-causative alleles, the number of remaining loci is determined for each genotype. Geneto-phenotype and gene-to-disease annotations are then derived for genes and genomic markers in genotypes with a single remaining locus. For genotypes with more than one remaining locus, further processing is done to identify additional cases where gene annotations can be derived. If the genotype is associated with a single multigenic marker (e.g., Del(7Coro1a-Spn)1Dolm) and one or more affected genes located in the region, then annotations are derived for the multi-genic marker and not for the individual endogenous genes in the region (Fig. 1, box 4). Genotypes associated with more than one multigenic mutation or with a multi-genic marker and any markers outside the mutation region are excluded and The number of inserted expressed genes is then considered. Inserted expressed genes are genes that have been introduced into the mouse genome and the gene product is expressed in one or more tissues of the mouse. Genotypes with multiple associated markers and no inserted expressed genes are eliminated. Genotypes associated with multiple inserted expressed genes are associated to the transgenic locus only, if there is a single transgene associated with the genotype and no additional endogenous genes ( Fig. 1, box 6). In this case, it is assumed that the transgene is expressing all of the inserted expressed genes and that the transgene as a whole, not the individual expressed genes, is causative for the phenotypes or diseases annotated to the genotype. For these genotypes, transgene-to-phenotype and transgene-to-disease annotations are derived. Derived annotations are not created for the inserted expressed genes. Other genotypes having more than one inserted expressed gene are excluded and no gene or transgene annotations are derived.
Genotypes associated with only a single inserted expressed gene (Fig. 1, box 7) are divided into two types: those expressing a mouse gene and those expressing a non-mouse gene. Genotypes associated with an expressed non-mouse gene are eliminated. No assumption is made that the phenotypes or diseases displayed would also be produced if the orthologous mouse gene had been used instead. Gene-to-phenotype and gene-to-disease annotations may be derived for a transgene and also an endogenous mouse gene in two cases: 1) if the genotype contains only a single transgene which carries a single inserted expressed mouse gene ( Fig. 1, box 8); 2) if the transgene, inserted expressed mouse gene, and the single endogenous gene that is the same as the inserted expressed mouse gene are associated with the genotype (Fig. 1, box 9). In both cases annotations are derived for both the endogenous mouse gene and the transgene (Fig. 1, "transgene + ").
Three genes (Gt(ROSA)26Sor, Col1a1, Hprt) are commonly used, based on examination of alleles in MGI, as 'docking sites' in mouse to knock-in expressed genes, frequently under the control of a heterologous promoter sequence. For example, of the 63 alleles of Col1a1 in MGI with the attribute "inserted expressed sequence", 55 have a construct inserted in the untranslated region based on the molecular description in MGI (12/7/15). For genotypes associated with a docking site and a single expressed mouse gene, gene-to-phenotype and gene-to-disease annotations are derived for the expressed gene and not for the docking site. There are no known phenotypes or diseases ascribed to mutations in Gt(ROSA)26Sor (MGI:104735, [9]). Therefore, no derived annotations are created for Gt(ROSA)26Sor, even when there are no associated expressed genes in MGI. MGI currently only annotates expressed genes with an ortholog in mouse; therefore, not all Gt(ROSA)26Sor alleles with an inserted expressed gene have an associated expressed gene. For example the allele Gt(ROSA)26Sor tm1(gp80,EGFP)Eces (MGI:5004724) expresses a gene from the Kaposi sarcoma herpes virus that does not have an ortholog in mouse. The phenotypes displayed by mice carrying this allele are the result of expression of the viral gene but as there is no display in MGI for any gene-to-phenotype annotations for a viral gene with no mouse ortholog, no derived annotations are created. Insertions in Col1a1 (MGI:88467) and Hprt (MGI:96217) are typically made without altering normal endogenous gene function. For Col1a1 and Hprt alleles, annotations are derived for the inserted expressed gene when one is present. If no expressed genes are present then annotations are derived for the docking site gene itself ( Fig. 1, box 10).
The final case where gene annotations are derived is when the inserted expressed mouse gene is identical to the endogenous gene ( Fig. 1, box 11). No gene annotations are created for any remaining genotypes.
Gene annotation derivation examples
To illustrate the function of the derivation algorithm, four example genotypes have been overlayed on the flow chart (Fig. 2). For mice hemizygous for Tg(tetO-Notch4*)1Rwng and Tg(Tek-tTA)1Rwng (genotype MGI:5502689, Fig. 2a), the transactivator expressing transgene Tg(Tek-tTA)1Rwng is excluded from consideration. This leaves 2 remaining genes, Tg(tetO-Notch4*)1Rwng and Notch4. As this leaves a single transgene marker and a single expressed mouse gene, gene level annotions are derived for both the transgene and the expressed mouse gene. For mice homozygous for Prnp tm1Cwe and Tg(Prnp*D177N*M128V)A21Rchi (genotype MGI:3836994, Fig. 2b) there are no non-causative alleles to remove. The single transgene in this case expresses the same mouse gene that is mutated by the allele Prnp tm1Cwe leaving the genotype associated with two genes, mouse Prnp and Tg(Prnp*D177N*M128V)A21Rchi. As this fits the requirements for the transgene exception (Fig. 2, box 9) annotations are derived for both the endogenous mouse gene and the transgene. For mice heterozygous for the deletion Del(7Coro1a-Spn)1Dolm and hemizygous for the reporter transgene Tg(Drd2-EGFP)S118Gsat (genotype MGI:5571091, Fig. 2c), the reporter transgene is excluded from consideration. As the deletion marker is associated with the 39 genes in the deletion region, this genotype falls into the Phenotypic mutation class for purposes of the algorithm. Gene annotations are derived for the deletion marker but not for the 39 genes in the deletion region (Fig. 2c, box 4). Mice heterozygous for Ewsr1 tm2(FLI1*)Sblee and hemizygous for Tg(CAG-cre/Esr1*)5Amc (genotype MGI:4429149, Fig. 2d) illustrate a case where gene annotations are not derived. While two non-causative alleles are removed by the algorithm, the cre transgene and wild-type allele of Ewsr1, after processing is complete there are still two genes associated with the genotype, Ewsr1 and FLI1. As the gene knocked into Ewsr1 is not a mouse gene this genotyope is excluded at box 7 in the flow chart. Even if the expressed gene had been a mouse gene this genotype would have been excluded as the expressed gene is not the same as the mutated endogenous gene.
Output of the rules
Once all genotypes with phenotype or disease annotations have been processed by the derivation rules the set of derived gene annotations are used throughout MGI, HMDC and MouseMine. As currently implemented, the rules result in derived gene-to-phenotype and gene-todisease annotations for over 16,000 and 2200 mouse markers, respectively, starting from over 57,000 and 4800 genotypes with at least one phenotype and disease annotation, respectively (as of 1/4/2016). Of the over 57,000 genotypes processed, almost 40,000 contain only mutations in a single marker (Table 1). Gene level annotations could be derived from these genotypes using the simplest possible rule (only derive annotations when there is one marker associated with the genotype). Use of the derivation algorithm allows a further almost 8000 genotypes to be processed and marker level annotations created. This represents an almost 14 % increase in the number of genotypes contributing phenotype annotations at the marker level. Of the approximately 18,000 multiple marker genotypes, conditional genotypes and genotypes involving alleles expressing inserted genes are two important subsets. Conditional genotypes are primarily processed by removal of recombinase alleles. There are currently over 7000 genotypes where a recombinase allele is removed ( Table 2). The ability to include special and temporal specific phenotypes in the gene level annotations enhances the overall picture of gene function MGI provides to users. There are over 3700 alleles (knock-in and transgenes) expressing at least one inserted sequence involved in nearly 4800 genotypes currently in MGI (as of 12/28/15). Over 2000 of these alleles express a mouse gene and may therefore potentially contribute to gene level annotations. Incorporation of these overexpression and misexpression induced phenotypes improves both the overall picture of gene function and the relation of mouse models of human disease to genes.
There is a potential for the creation of false positive and false negative annotations by the derivation algorithm. One possible source of false positive annotations is the use of expressed gene relationships to identify when an allele is expressing a transcript that may alter the phenotype. For example, the gene Col1a1 has 64 targeted alleles with the attribute "inserted expressed sequence" of these 58 have an association to an expressed gene. Of the remaining 6 alleles, 5 are alleles where an interfering RNA (RNAi) has been inserted into the gene. Determining how to represent the relationship between an RNAi expressing allele and the gene targeted by the RNAi is one of MGI's future projects. During the development of the algorithm the use of the "inserted expressed sequence" attribute was still in development so the presence of an association to an expressed gene was used. We are reviewing the possibility of changing the algorithm to use the presence of the "inserted expressed attribute" instead of the presence of an expressed gene association, as this would improve our handling of these cases.
(See figure on previous page.) Fig. 2 Overlay of specific genotype examples on the flow chart of the gene annotation derivation rules. a Processing of a genotype that results in annotations to a transgene and endogenous mouse gene. b Processing of a genotype that fits the transgene exception rule, where the transgene expresses a mouse gene and the same endogenous mouse gene is mutated in the mice. c Processing of a genotype with a reporter transgene and phenotypic mutation affecting multiple genes. d Processing of a conditional genotype where no gene annotations can be derived One possible source of false negative annotations is the limitation of "docking site" alleles to only Col1a1, Hprt and Gt(ROSA)26Sor. For example, annotations from the genotype MGI:5544092 could be associated with the mouse gene Edn2 if the marker for the intergenic insertion site in the allele Igs1 tm11(CAG-Bgeo,-Edn2)Nat was excluded from consideration. Instead of expanding the list of markers used for docking sites, we are exploring implementation of a "Docking Site" attribute that could be applied to specific alleles. This would avoid the need to modify the algorithm when new docking sites are encountered but would require back annotation of existing alleles. Another source of false negative annotations is the use of reporter genes that are a mouse gene or with an ortholog in mouse. For example, there are 63 knock-in alleles that use the mouse gene Tyr as a coat color reporter. Other than the pigmentation phenotype, phenotypes in these mice are the result of the mutated endogenous locus and not due to the expression of Tyr. However, using the current algorithm gene annotations are not derived for any of the annotated phenotypes. Correcting these would require modifying the algorithm to both ignore Tyr and teasing apart the phenotypes due to the reporter from those due to the mutated endogenous locus.
Impact of MGI improvements
The development of these rules has relied heavily on the implementation of other database improvements in MGI. For example, the introduction of allele attributes allowed a distinction to be made between reporter transgenes that express only a reporter and transgenes that express a reporter and some other gene. The attributes were introduced as part of a restructuring of allele types into generation method and attributes. Attributes include both changes to the endogenous gene function (null/knockout, hypomorph) and characteristics of the inserted sequence (reporter, recombinase). Some attributes may apply to either the endogenous gene or the inserted sequence (hypomorph, modified isoform). An allele may have zero to many attributes but only one generation method. Certain attributes were then incorporated into the rules. These attributes include: reporter, recombinase, transactivator, and inserted expressed sequence. For example, exclusion of a reporter transgene requires the allele to have the generation method "transgenic" and the attribute "reporter" but not the attribute "inserted expressed sequence". Therefore, the reporter transgene Tg(Cspg4-DsRed.T1)1Akik (MGI:3796063) that has only the attribute "reporter" is excluded as a non-causative allele. However, the reporter transgene Tg(CAG-Bmpr1a*,-lacZ)1Nobs (MGI:5473821) has multiple attributes including "reporter" and "inserted expressed sequence" and is retained.
The recent introduction of formalized data associations between transgenic and knock-in alleles and the genes expressed by these alleles has also been incorporated into the rules. MGI now annotates alleles expressing either a mouse gene or gene with a mouse ortholog to the gene being expressed. Alleles expressing inserted genes are then displayed on both the detail page for the endogenous locus where the insertion occurred and on the detail page for the mouse gene or mouse ortholog of the inserted gene being expressed. The rules make use of these associations to avoid assigning phenotypes to the endogenous gene in cases where an inserted expressed gene may be causative. They also allow annotations for phenotypes and diseases caused by transgenes expressing a mouse gene to be derived for the expressed mouse gene. For example, phenotypes for the knock-in allele Ctnnb1 tm1(Nfkbia)Rsu (MGI:3039783) may be the result of loss of expression of Ctnnb1 or the expression of Nfkbia and therefore no derived annotations are created. However, phenotype and disease annotations for the transgene Tg(Prnp*D177N*M128V)A21Rchi (MGI:3836986) are assumed to be the result of the expression of the mouse Prnp gene and derived annotations may be created for both the transgene and the expressed mouse gene.
Use of the derived annotations in MGI
Implementation of the annotation derivation rules described here has improved both searching and display of gene-to-phenotype and gene-to-disease annotations in MGI. Gene level annotations are used on multiple displays and by multiple search tools in MGI. These displays and tools provide users with different ways to access, group, and filter the data. Regardless of how the user accesses the data, consistent results sets are now returned when searching for genes by a phenotype or disease.
One way a user may access the derived annotations for a gene or set of genes is using the Human-Mouse: Disease Connection (HMDC, www.diseasemodels.org, Fig. 3). In the HMDC, searches for mouse data are restricted to only the derived gene-to-phenotype and gene-to-disease annotations. In the results, users may also access the set of genotype annotations used to generate the gene annotations, but multi-genic genotypes are excluded from the display. In MGI, the display of a mouse gene on a disease detail page is based both on the derived gene-to-disease annotations and on orthology relationships to known human disease genes. A gene that has both a derived gene-to-disease annotation and is orthologous to a known human disease gene is displayed in the human and mouse section of the page. Those without an orthology relationship but with a derived annotation are shown in the mouse only section. A similar division is made on the all models page for a disease, with multi-genic models that have neither gene orthologs nor derived annotations shown in the additional complex models section. The derived gene annotations are also incorporated into the updated design of the MGI gene detail page. With this modification, users see a summary graphic of the types of phenotypes caused by mutations in the gene (Fig. 4). On both the gene detail page and in the HMDC, gene level annotations are shown at the MP system level. Users may click through to see the detailed MP terms and associated allele pairs. This avoids the problem of displaying conflicting phenotypes (i.e., increased vs decreased body weight) at the gene level. From both locations users can access details and references to follow up on annotations of interest.
The Genes & Markers Query Form uses the derived annotations when a user searches by phenotype or disease to determine the set of genes and markers returned. The Batch Query tool uses the derived annotations to determine the set of phenotype terms returned for a gene. In this case, unlike in the HMDC, the details link includes both the genotypes used to derive the annotations and complex genotypes annotated to the same term or to a subclass of that term. The Gene Expression Database (GXD) Query Form uses the Fig. 3 Display of derived gene-to-phenotype and gene-to-human disease annotations in the HMDC. A search was done for the genes Apc, App, Erbb2, Fig4 and Kcnj11. Each row shows the derived gene-to-phenotype and gene-to-disease annotations for a mouse gene (in blue). Direct annotations of human genes to disease (in orange) are shown in the same row as the homologous mouse gene. Results have been filtered to reduce the number of rows and columns Fig. 4 Display of derived gene-to-phenotype annotations on the Shh gene detail page in MGI. All Mammalian Phenotype system-level terms are shown. Blue boxes indicate abnormal phenotypes have been reported for that system. Blank boxes indicate absence of data for Shh mutants in that system in MGI derived annotations to define a set of genes associated with a phenotype or disease. Users can then retrieve expression data for the genes in the set. MGI FTP reports for gene-to-phenotype and gene-to-disease associations (HMD_HumanPhenotype.rpt and MGI_OMIM.rpt) include only the derived annotations. Finally, MouseMine (www.mousemine.org [10]) makes use of the same set of rules and allows users to trace back to the alleles and genotypes underlying the derived annotation set. The connection to the source alleles allows users to filter the phenotypes based on allele attributes to find, for example, phenotypes for a gene caused by null mutations.
Other searches in MGI, such as the Quick Search and Phenotypes, Alleles & Disease Models Search, return the set of alleles for a phenotype or disease term and include annotations for both single-and multi-genic genotypes. Since these queries return alleles rather than genes, the rules for the derived annotations are not applied.
The return and display of gene-to-phenotype and gene-to-disease annotations are critical to evaluation and comparison of genes and disease models. In the HMDC, the gene level annotations allow users to refine a set of genes based on the phenotypes or diseases resulting from mutations in the gene before delving into the specifics of the models. On a disease detail page, users can identify disease models associated with mouse genes that are orthologous to known human disease genes and those that are not. The latter class provides a valuable source of potential new candidate human disease genes. With the Batch Query tool, a user can retrieve all phenotypes and diseases associated with a gene that can be exported for further analysis. The summary graphic on the gene detail page will allow users to rapidly review and compare the phenotype profiles of genes.
Discussion
The use of rules to derive annotations has two major advantages over direct curation. First is the hands-on curatorial time-savings benefit. Curators need to enter only the genotype-to-phenotype or genotype-to-disease annotations and do not need to also annotate the gene relationships. Given the large number of existing annotations and the ongoing need to focus curation efforts to newly published literature, the elimination of the requirement for manual curation of gene relationships is vital. Second, using the rules insures consistency of annotation. While we strive for inter-curator consistency at MGI, some variability is inevitable. With the use of unified rules, the derived annotations are always consistent.
Despite the advantages of the derived annotation rules, a limitation of the use of rules to derive annotations as opposed to direct curation of these relationships is the loss of some potential annotations. One way annotations may be lost is due to failure to exclude non-causative alleles. For example, knock-in transactivator alleles cannot currently be excluded. Thus, no derived annotations can be made for mice with the genotype Foxg1 tm1(tTA)Lai / Foxg1 + , Tg(tetO-Gsx2,-EGFP)1Kcam/0 (MGI:4412090). Further, cases where a reporter gene is a mouse gene or has an ortholog in mouse (e.g., mouse Tyr, human ALPP) are captured in the count of expressed genes, but rarely do these genes contribute to a disease phenotype, when one is displayed. With modifications to MGI annotations and additional refinements to the rules we may be able to eliminate more of these allele types from gene relationship consideration, through automated processing.
The use of these rules currently also limits the derived annotations to only those caused by a single gene. The inclusion of disease and phenotype annotations that rely on the presence of mutations in multiple genes are completely excluded by the current algorithm. So gene-tophenotype annotations are not created for either gene based on annotations for mice homozygous for both Epn1 tm1Ocr and Epn2 tm1Ocr (MGI:4356019), where the phenotypes are the result of combined loss of both genes and loss of either gene alone does not produce an abnormal phenotype [11]. While it would be possible in such a case to ascribe all phenotypes from the double homozygote to both genes, the situation is frequently more complex. In many cases, only some of the phenotypes displayed are caused by the double mutation while others are caused by only one of the mutations. Thus, decisions may need to be made at the individual Mammalian Phenotype term annotation level and not at the level of the genotype. In addition the potential for differences in strain background and annotation depth between genotypes to create false positive associations is increased relative to annotations inferred for genotypes with a single causative gene. For example, a subsequent paper looking at the impact of loss of expression of both Epn1 and Epn2 in the vasculature on tumor development [12] did not include either single homozygote as a control making it difficult to determine conclusively that loss of both genes is required for the phenotype. Similarly, mice homozygous for mutations in both Cd80 and Cd86 (MGI:3620124) have been reported to be a model for Insulin-Dependent Diabetes Mellitus (OMIM:222100) but single homozygotes were not examined and the strain background is different from that reported previously for the single homozygotes [13]. In this case, it is likely the mutations in Cd80 and Cd86 modify the disease phenotype but do not cause the disease as the mutations were moved into a strain (NOD) known to develop diabetes. Due to these issues and questions of how to distinguish multi-genic from monogenic phenotypes in the web display, attempting to distinguish between causal mutations, modifying mutations and annotation gaps for multi-genic genotypes was determined to be beyond the scope of the current algorithm.
Clarity of display also drove the decision to infer only gene-to-phenotype and gene-to-disease annotations for expressed mouse genes and not for expressed orthologs of mouse genes. Inferring a gene-to-disease relationship to the mouse gene for phenotypes in mice heterozygous for Col1a1 tm1(CAG-IDH2*R140Q)Kkw (MGI:5582197) [14] would have resulted in the display of the mouse gene Idh2 on the disease detail page for D-2-Hydroxyglutaric Aciduria 2 (OMIM:613657), giving the impression that the mouse gene has been used to model the disease when it is the human gene being expressed. However, since the species of the ortholog is currently stored in the database, future implementations of the MGI disease displays could use this information by, for example, providing links to humanized mouse models of a disease.
Another focus for improvement of the algorithm is the reduction of the number of remaining false-positive derived annotations. One source of false positives is genotypes where the strain background is responsible for the phenotype or disease displayed. In Mora et al. [15], mice homozygous for Sell tm1Flv on a congenic NOD background (MGI:3039435) were generated to investigate the effect of loss of Sell expression on insulin dependent diabetes (OMIM:222100). These mice show the same diabetic phenotype as wild-type NOD controls. However, the rules derive an annotation of Sell to diabetes based on the annotation of this genotype to this OMIM term. Refinements to MGI annotations and incorporation of strain background information into the derivation rules may allow us to exclude these genes from the results sets in the future.
Conclusion
The conversion of gene-to-phenotype and gene-to-disease relationships in MGI from several variable rules used only for web page display to a single set of welldefined rules used to create derived annotations in the database improves both the consistency and accessibility of these relationships, as well as facilitates easier modifications to the rules. The derived gene-tophenotype and gene-to-disease annotations are used for web display, downloads, and public reports and are available for export. Consumers of the exported data need to be aware of the restrictions placed on the annotations by the algorithm as this may alter interpretations of the data. Changes made to the rules can be seen throughout the database after any data update. The increased adaptability of these rules will aid our ability to keep pace with the changes in transgenic technology in the future. | 8,202.6 | 2016-05-20T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Automatic Detection of Atrial Fibrillation Based on CNN-LSTM and Shortcut Connection
Atrial fibrillation (AF) is one of the most common persistent arrhythmias, which has a close connection to a large number of cardiovascular diseases. However, if spotted early, the diagnosis of AF can improve the effectiveness of clinical treatment and effectively prevent serious complications. In this paper, a combination of an 8-layer convolutional neural network (CNN) with a shortcut connection and 1-layer long short-term memory (LSTM), named 8CSL, was proposed for the Electrocardiogram (ECG) classification task. Compared with recurrent neural networks (RNN) and multi-scale convolution neural networks (MCNN), not only can 8CSL extract features skillfully, but also deal with long-term dependency between data. In particular, 8CSL includes eight shortcut connections that can improve the speed of the data transmission and processing as a result of the shortcut connections. The model was evaluated on the base of the test set of the Computing in Cardiology Challenge 2017 dataset with the F1 score. The ECG recordings were cropped or padded to the same length. After 10-fold cross-validation, the average test F1 score was 84.89%, 89.55%, and 85.64% when the segment length was 5, 10, 20 s, respectively. The experiment results demonstrate excellent performance with potential practical applications.
Introduction
Cardiovascular disease is one of the major causes of death worldwide. According to an uncompleted statistic, three million people die of cardiovascular disease every year in China (i.e., one patient dies of cardiovascular disease every 10 s 1).
Atrial fibrillation (AF) is one of the most common persistent arrhythmias, which has a close connection to a large number of cardiovascular diseases [1][2][3][4]. When AF occurs, the patient's heart rate is fast, sometimes up to 100-160 beats/min, and irregular. AF can be subdivided into paroxysmal AF, persistent AF, and permanent AF, according to the duration. However, if detected early, it can improve the clinical treatment effect and effectively prevent the occurrence of serious complications.
The electrocardiogram (ECG), invented by Muirhead in 1872, is a non-invasive method that is widely used in the clinical diagnosis of AF and other types of arrhythmia. Furthermore, ECG records the heartbeats by connecting wires to the wrists. RR interval refers to the time limit between two R waves on an ECG. The normal RR interval should be between 0.6 and 1.0 s. Additionally, AF has distinctive characteristics such as the disappearance of P waves or different RR intervals.
In recent years, researchers have proposed many automatic detection methods before deep learning, and sometimes, the cost of feature extraction is too large to improve the effect significantly.
In recent years, with the development of deep learning, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been widely used in detecting AF with excellent results. As for CNN, it performs well in feature extraction and achieves good results in image classification and retrieval [5][6][7]. Furthermore, many researchers use CNNs to process and identify AF. For example, Ghiasi et al. [8] proposed a CNN algorithm for automatically detecting AF signals from ECG signals. Pourbabaee et al. [9] developed a new automatic AF detection method based on deep convolutional neural networks (DCNN). Qayyum et al. [10] proposed converting ECG signals into 2D images using short-time Fourier transform and put them into a pre-trained CNN model. Cho et al. [11] proposed an approach for the prediction of AF by using DCNN. Wang et al. [12] adopted the CNN and the improved Elman neural network for detection of AF. Xiong et al. [13] proposed a 16-layer 1D CNN to classify the ECGs including AF with a testing accuracy of 82%. For RNN, it takes the time series of data into account while processing data, so of course, it can also be applied to ECG signals. For example, Schwab et al. [14] introduced a novel task formula to simplify the learning of the time dimension and RNN was used to detect AF signals. Sujadevi et al. [15] explored and adopted three deep learning methods: RNN, the long short term memory network (LSTM), and the gate recurrent unit (GRU) neural network, and achieved accuracies of 0.950, 1.000, and 1.000, respectively, on the MIT-BIH Physionet dataset. Faust et al. [16] proposed the LSTM combined with RR intervals to detect AF. Additionally, studies have shown that LSTM is superior to other traditional RNN architectures [17]. Wang et al. [18] developed a novel approach of an 11-layer neural network and the modified Elman neural network (MENN) for the automated AF detection, and the proposed model achieved exceptional results with the accuracy, sensitivity, and specificity of 97.4%, 97.9%, and 97.1%, respectively. Jonathan et al. [19] combined a signal quality index (SQI) algorithm and CNN to detect AF, the results achieved on the test dataset were an overall F1 score of 0.82. In the above research, the researchers did not better solve the long-term dependence between ECG data under the premise of ensuring accurate feature extraction, and did not pay attention to the time of processing and transmitting data. Shortcut connection has also been proven in theory and practice for a long time. In [20][21][22], the importance of shortcut connections in improving neural networks was introduced through theoretical research. In [23,24], some intermediate layers were connected directly with auxiliary classifiers for addressing vanishing and exploding gradients. In [25,26], to solve the problems of response, gradient, and propagation error in the middle layer, the validity of the shortcut connection was verified through a comparison of several common methods and the shortcut connection. Table 1 shows the arrangement of the methods and results in the references. However, some problems still remain. For example, the amount of data that deep learning needs to process is substantial and diverse. Thus, two challenges still exist: (1) the speed of data transmission and processing; and (2) the right deep learning model for the data type to achieve excellent results.
In this paper, we proposed a model that employed CNN based on the shortcut connection and LSTM to address these two challenges. The proposed model was named 8CSL. The main contributions are: (1) The data transmission and the data processing were sped up by 38% by using the shortcut connection; and (2) combining CNN and LSTM, and adjusting the number of network layers and parameters to improve the accuracy of AF detection while ensuring efficient feature extraction where the best F1 score was 89.55%.
The rest of the paper is organized as follows. In Section 2, the basic knowledge of CNN, LSTM, and the shortcut connection is introduced. In Section 3, the Computing in Cardiology Challenge 2017 dataset and the data processing are described. In Section 4, the 8CSL is proposed. In Section 5, the experiments were designed to validate the performance of the proposed model and discuss the effects of different segment lengths.
Convolutional Neural Network (CNN) Structure
CNNs can extract features skillfully and reduce network complexity at the same time [27]. Weight sharing and receptive field play an important role in this.
Receptive Field
In CNN, each neuron only needs to sense the local part and integrate it at a higher level to obtain global information. The temporal convolution is shown in Figure 1. In this paper, we proposed a model that employed CNN based on the shortcut connection and LSTM to address these two challenges. The proposed model was named 8CSL. The main contributions are: (1) The data transmission and the data processing were sped up by 38% by using the shortcut connection; and (2) combining CNN and LSTM, and adjusting the number of network layers and parameters to improve the accuracy of AF detection while ensuring efficient feature extraction where the best F1 score was 89.55%.
The rest of the paper is organized as follows. In Section 2, the basic knowledge of CNN, LSTM, and the shortcut connection is introduced. In Section 3, the Computing in Cardiology Challenge 2017 dataset and the data processing are described. In Section 4, the 8CSL is proposed. In Section 5, the experiments were designed to validate the performance of the proposed model and discuss the effects of different segment lengths.
Convolutional Neural Network (CNN) Structure
CNNs can extract features skillfully and reduce network complexity at the same time [27]. Weight sharing and receptive field play an important role in this.
Receptive Field
In CNN, each neuron only needs to sense the local part and integrate it at a higher level to obtain global information. The temporal convolution is shown in Figure 1. where represents the length of the input signal; stands for the size of the receptive field; and the three corresponding weights ( , , ) are the ℎ filter.
Weight Sharing
To further reduce the number of parameters, weight sharing is employed. This means that different convolution kernels working on a CNN will not change the weight of the convolution kernel as the position changes.
Long Short-Term Memory (LSTM) Structure
The working mechanism of LSTM is the continuously updated memory . The LSTM memory block is shown in Figure 2. [28]. (Where N represents the length of the input signal; k size stands for the size of the receptive field; and the three corresponding weights (w i1 , w i2 , w i3 ) are the ith filter.)
Weight Sharing
To further reduce the number of parameters, weight sharing is employed. This means that different convolution kernels working on a CNN will not change the weight of the convolution kernel as the position changes.
Long Short-Term Memory (LSTM) Structure
The working mechanism of LSTM is the continuously updated memory c n . The LSTM memory block is shown in Figure 2. where is the input data at time ; ℎ is the data output by LSTM at time − 1; is the sigmoid activation function; is the input gate; is the forget gate; is the output gate; and the is updated by partially forgetting the existing memory and adding a new content .
The architecture of an LSTM memory block has also been named a cell, which has three gates: the input gate, forget gate, and output gate. Data are sent in the LSTM through the input gate, processed through the sigmoid layer, the status is updated to the cell, and outputted through the output gate.
It is noteworthy that the output gate depends not only on the input gate and the previous output gate, but also on the current memory [29].
Shortcut Connection
A neural network containing shortcut connections to jump over some layers in residual neural networks is called a residual block. The architecture is shown in Figure 3.
The idea of a shortcut connection of the residual neural network is adopted to streamline the network optimization and speed up data transmission and processing. In this paper, the part of the data were transferred to the shortcut connection and finally pooled with the rest. where is the input data; ( ) is the network map before summation; and ( ) + is the network map after summation. Figure 2. The architecture of a long short-term memory (LSTM) memory block [28]. (Where X n is the input data at time n; h n−1 is the data output by LSTM at time n − 1; ϕ is the sigmoid activation function; i n is the input gate; f n is the forget gate; o n is the output gate; and the C n is updated by partially forgetting the existing memory and adding a new content C n .) The architecture of an LSTM memory block has also been named a cell, which has three gates: the input gate, forget gate, and output gate. Data are sent in the LSTM through the input gate, processed through the sigmoid layer, the status is updated to the cell, and outputted through the output gate.
It is noteworthy that the output gate depends not only on the input gate and the previous output gate, but also on the current memory [29].
Shortcut Connection
A neural network containing shortcut connections to jump over some layers in residual neural networks is called a residual block. The architecture is shown in Figure 3. where is the input data at time ; ℎ is the data output by LSTM at time − 1; is the sigmoid activation function; is the input gate; is the forget gate; is the output gate; and the is updated by partially forgetting the existing memory and adding a new content .
The architecture of an LSTM memory block has also been named a cell, which has three gates: the input gate, forget gate, and output gate. Data are sent in the LSTM through the input gate, processed through the sigmoid layer, the status is updated to the cell, and outputted through the output gate.
It is noteworthy that the output gate depends not only on the input gate and the previous output gate, but also on the current memory [29].
Shortcut Connection
A neural network containing shortcut connections to jump over some layers in residual neural networks is called a residual block. The architecture is shown in Figure 3.
The idea of a shortcut connection of the residual neural network is adopted to streamline the network optimization and speed up data transmission and processing. In this paper, the part of the data were transferred to the shortcut connection and finally pooled with the rest. where is the input data; ( ) is the network map before summation; and ( ) + is the network map after summation. [30]. (Where X is the input data; F(X) is the network map before summation; and F(X) + X is the network map after summation.) The idea of a shortcut connection of the residual neural network is adopted to streamline the network optimization and speed up data transmission and processing. In this paper, the part of the data were transferred to the shortcut connection and finally pooled with the rest.
Data Source
The dataset used in this paper was from the Computing in Cardiology Challenge 2017 including 8528 pieces of data ranging in length from 9 s to 61 s. The ECG recordings were collected by the AliveCor device with the data sampling rate of 300 Hz. In this challenge, we treated all non-AF abnormal rhythms as a single class. Datasets were manually annotated as four categories: normal sinus rhythm (N), AF rhythm (A), other rhythms (O), and noisy recordings (∼). Detailed information on the data are shown in Table 2. Figure 4 is an example of four categories.
Data Source
The dataset used in this paper was from the Computing in Cardiology Challenge 2017 including 8528 pieces of data ranging in length from 9 s to 61 s. The ECG recordings were collected by the AliveCor device with the data sampling rate of 300 Hz. In this challenge, we treated all non-AF abnormal rhythms as a single class. Datasets were manually annotated as four categories: normal sinus rhythm (N), AF rhythm (A), other rhythms (O), and noisy recordings (∼). Detailed information on the data are shown in Table 2. Figure 4 is an example of four categories.
Normalization
The amplitude of ECG data varies greatly among different people or even the same people with different lead positions [31]. When the data distribution is uniform, the convergence of the neural network is better. Therefore, Equation (1) is used to reduce the impact of different amplitudes in the data: subtract the average value from each ECG data, and then divide it into standard deviations.
Normalization
The amplitude of ECG data varies greatly among different people or even the same people with different lead positions [31]. When the data distribution is uniform, the convergence of the neural network is better. Therefore, Equation (1) is used to reduce the impact of different amplitudes in the data: subtract the average value from each ECG data, and then divide it into standard deviations.
where X refers to the ECG recording values, X and S refers to the average and standard deviation of these values, correspondingly.
To verify that the normalized data were conducive to the classification of the model, the model was used to classify the normalized data and the unprocessed data, from which the accuracy was obtained. The experimental results are shown in Table 3. Table 3. Accuracy for the different data.
Data Acc
Normalized Data 85.06% Original Data 80.23% It can be seen from Table 3 that the accuracy of the model processing normalized data was significantly higher than that of the original data.
Data Balance
The distribution of datasets also affects the results of the training. It can be seen from Table 2 that the number of AF and other rhythm ECG data were far less than that of the normal ECG data, and there were only 46 noisy data.
This imbalanced dataset made it more difficult to detect AF than normal ECG. At the same time, because the number of normal ECG was much larger than that of AF, the normal ECG will play a leading role in the process of model training, and the over-fitting phenomenon will appear.
To solve this problem, in this paper, noise and other rhythm ECG data were discarded, and the experiment was converted to detect AF in AF data and normal data. The dataset was randomly divided into a training dataset and test dataset with a proportion of 7:3. To balance the amount of ECG data with normal ECG data, four copies of AF ECG data was added to the training dataset [31].
Cropping
All of the data in the input neural network model must be consistent in length. However, the length of the ECG data was in the range of 9 s to 61 s. Therefore, in this paper, we cropped the data into 5 s segments. The sampling rate of the experimental data was 300 Hz. A total of 1500 points were taken as a segment, and the segment with less than 1500 points was deleted. At the same time, the data were transformed into 10 s and 20 s segments. In this process, longer data should be cropped and shorter data should be deleted. To verify whether the time of data can affect the performance of the model in the experiment, we cropped it into 5 s, 10 s, and 20 s.
Model
In this paper, three deep learning models were compared: recurrent neural networks (RNN), multi-scale convolution neural networks (MCNN), and the proposed model of the combination of the 8-layer CNN with shortcut connection and one layer LSTM (8CSL) for the ECG classification task. where is the input data of time and ℎ is the output data of time .
Multi-Convolutional Neural Network (MCNN)
Instant heart rate sequence is extracted from the ECG signal, then an end-to-end multi-scale convolution neural network (MCNN) uses the instantaneous heart rate sequence as the input and the detection result as the output to detect AF [33]. MCNN automatically extracts features at different locations and scales, which makes the model obtain better accuracy in time series data. The overall architecture of MCNN is shown in Figure 6. The architecture of the MCNN is shown in Figure 6. As shown in the figure, the MCNN framework has three sequential stages: the transformation stage, the local convolution stage, and the full convolution stage.
The MCNN detects AF with the instant heart rate sequence (IHR) as input. First, the R location is read from the corresponding annotations. Determine RR intervals according to the R position. Then, IHR is calculated by: where is the ℎ IHR; is the sample rate of ECG signal; and is the ℎ RR interval. Afterward, considering that 128 beats are required for the detection of AF, we took 63 IHRs forward and backward for each IHR.
Multi-Convolutional Neural Network (MCNN)
Instant heart rate sequence is extracted from the ECG signal, then an end-to-end multi-scale convolution neural network (MCNN) uses the instantaneous heart rate sequence as the input and the detection result as the output to detect AF [33]. MCNN automatically extracts features at different locations and scales, which makes the model obtain better accuracy in time series data. The overall architecture of MCNN is shown in Figure 6. where is the input data of time and ℎ is the output data of time .
Multi-Convolutional Neural Network (MCNN)
Instant heart rate sequence is extracted from the ECG signal, then an end-to-end multi-scale convolution neural network (MCNN) uses the instantaneous heart rate sequence as the input and the detection result as the output to detect AF [33]. MCNN automatically extracts features at different locations and scales, which makes the model obtain better accuracy in time series data. The overall architecture of MCNN is shown in Figure 6. The architecture of the MCNN is shown in Figure 6. As shown in the figure, the MCNN framework has three sequential stages: the transformation stage, the local convolution stage, and the full convolution stage.
The MCNN detects AF with the instant heart rate sequence (IHR) as input. First, the R location is read from the corresponding annotations. Determine RR intervals according to the R position. Then, IHR is calculated by: where is the ℎ IHR; is the sample rate of ECG signal; and is the ℎ RR interval. Afterward, considering that 128 beats are required for the detection of AF, we took 63 IHRs forward and backward for each IHR. The architecture of the MCNN is shown in Figure 6. As shown in the figure, the MCNN framework has three sequential stages: the transformation stage, the local convolution stage, and the full convolution stage.
The MCNN detects AF with the instant heart rate sequence (IHR) as input. First, the R location is read from the corresponding annotations. Determine RR intervals according to the R position. Then, IHR is calculated by: where IHR i is the ith IHR; f is the sample rate of ECG signal; and RRI i is the ith RR interval. Afterward, considering that 128 beats are required for the detection of AF, we took 63 IHRs forward and backward for each IHR.
8-Layer CNN with Shortcut Connection and 1-Layer LSTM (8CSL)
A combination of the 8-layer CNN with shortcut connection and 1-layer LSTM was developed for the ECG classification task. The model was named 8CSL, and includes eight shortcut connections to improve the data transmission and processing speed of traditional CNN.
When training, the data are sent in convolutional neural networks in batches. This article modified the network while ensuring network convergence and improving the generalization ability of the network, which is why the model uses batch-normalization [35]. Rectified linear activation (ReLU) units are introduced as a model requiring a non-linear relationship [36]. Due to the definition of ReLU, when the input is positive, the problem of gradient saturation can be better avoided. The ReLU function is shown in Equation (3).
Dropout is then used to reduce the over-fitting of CNN on the training data after the convolution layer [37]. Figure 7 presents the architecture of the dropout.
where ( ) represents the weighted sum of the input of the ℎ unit in the ( + 1) ℎ layer; weight where is the mask vector randomly generated by the Bernoulli probability distribution (0-1); the vector element is 0 or 1; the probability of 1 is ; and the probability of 0 is 1 − . In the dropout layer, the vector is multiplied by the corresponding element of the neuron. If the element in is 1, it is reserved; if it is 0, it is set to 0. Then only the corresponding parameters of the reserved neuron are trained. The convolution layer is an important component of the learning features of CNN including a 10 × 1 filter to extract features from the data. When AF features are detected, the features are marked by convolution kernels.
To reduce the risk of over-fitting and the calculation of parameters, the average pool layer and max pool layer are used. Primarily, in this paper, the max pool layer was used as a shortcut connection, which processes a part of the transmitted data.
In 8CSL, a convolution block based on shortcut connections is composed of batch-normalization, 1D CNN, ReLU, Dropout, 1D Average pool, and 1D Max pool, as shown in Figure 8. Equations (4) and (5) are the network calculations without the dropout layer: represents the weighted sum of the input of the ith unit in the (l + 1)th layer; weight w (l+1) i is the (l + 1)th neuron; y l is the neuron of lth; b (l+1) i is the bias of the ith unit in the (l + 1)th layer; f is the activation function; and y (l+1) i represents the weighted sum of the input of the ith unit in the (l + 1)th layer.
Equations (6)-(9) are the network calculations with the dropout layer: where r is the mask vector randomly generated by the Bernoulli probability distribution (0-1); the vector element is 0 or 1; the probability of 1 is p; and the probability of 0 is 1 − p. In the dropout layer, the r vector is multiplied by the corresponding element of the neuron. If the element in r is 1, it is reserved; if it is 0, it is set to 0. Then only the corresponding parameters of the reserved neuron are trained. The convolution layer is an important component of the learning features of CNN including a 10 × 1 filter to extract features from the data. When AF features are detected, the features are marked by convolution kernels.
To reduce the risk of over-fitting and the calculation of parameters, the average pool layer and max pool layer are used. Primarily, in this paper, the max pool layer was used as a shortcut connection, which processes a part of the transmitted data.
In 8CSL, a convolution block based on shortcut connections is composed of batch-normalization, 1D CNN, ReLU, Dropout, 1D Average pool, and 1D Max pool, as shown in Figure 8. The convolution layer is an important component of the learning features of CNN including a 10 × 1 filter to extract features from the data. When AF features are detected, the features are marked by convolution kernels.
To reduce the risk of over-fitting and the calculation of parameters, the average pool layer and max pool layer are used. Primarily, in this paper, the max pool layer was used as a shortcut connection, which processes a part of the transmitted data.
In 8CSL, a convolution block based on shortcut connections is composed of batch-normalization, 1D CNN, ReLU, Dropout, 1D Average pool, and 1D Max pool, as shown in Figure 8. The feature extracted by the convolutional neural network was input into LSTM to process the feature data. LSTM was added to address the long-term dependency of the data.
The reason why LSTM can better handle the long-term dependency of data is that it relies on the internal memory cell , which is controlled by various gates to add or delete information.
Equations (10) and (11) are used to describe the information update of the memory cell.
where is the forget gate; is the input gate; ̃ is the newly added information; is the bias; is the vector corresponding to the input gate; is the data of the input sequence at time ; and ℎ is the hidden layer information at time n−1.
To generate a result, the data processed by LSTM were converted into vector values of 2 × 1 using a full connection layer, corresponding to each class (N, A). A Softmax function is used to represent these values as a probability by normalizing them between 0 and 1. The feature extracted by the convolutional neural network was input into LSTM to process the feature data. LSTM was added to address the long-term dependency of the data.
The reason why LSTM can better handle the long-term dependency of data is that it relies on the internal memory cell c n , which is controlled by various gates to add or delete information.
Equations (10) and (11) are used to describe the information update of the memory cell.
c n = f n c n−1 + i n c n (10) where f is the forget gate; i is the input gate; c n is the newly added information; b is the bias; U is the vector corresponding to the input gate; X n is the data of the input sequence at time n; and h n−1 is the hidden layer information at time n−1.
To generate a result, the data processed by LSTM were converted into vector values of 2 × 1 using a full connection layer, corresponding to each class (N, A). A Softmax function is used to represent these values as a probability by normalizing them between 0 and 1.
To verify the effect of length factors on performance, the model took 5, 10, and 20 s long segments as input. The output of the model was the probability of each class, and the predictive class of the experimental results is the class with the maximum probability. To reduce memory requirements and better tune parameters, the Adam optimizer was used in the model.
The overall architecture of 8CSL is shown in Figure 9. To verify the effect of length factors on performance, the model took 5, 10, and 20 second long segments as input. The output of the model was the probability of each class, and the predictive class of the experimental results is the class with the maximum probability. To reduce memory requirements and better tune parameters, the Adam optimizer was used in the model.
The overall architecture of 8CSL is shown in Figure 9.
Classification Performance Evaluation Index
In this paper, the experiment was carried out in the Keras framework of the Windows 7 operating system. Three models were evaluated based on the test set of the Computing in Cardiology Challenge 2017 dataset with sensitivity (Sen), specificity (Spe), precision (Pre), accuracy (Acc), and F1 score. The experiments were divided into three groups, and the variables were the length of the experimental data.
To calculate the evaluation indexes, true positive (TP), true negative (TN), false positive (FP), and false negative (FN) were adopted, and the calculation formulae are as follows: At the same time, the loss and accuracy curves of the three deep learning models were also calculated. Then, the experimental results with the experimental data of 5 s, 10 s, and 20 s are described, respectively.
In this paper, we used categorical_crossentropy loss as the loss function of the model, which is used to evaluate the difference between the probability distribution obtained from the current training and the true distribution.
The loss is derived from Equation (17): Figure 9. The overall architecture of 8CSL.
Classification Performance Evaluation Index
In this paper, the experiment was carried out in the Keras framework of the Windows 7 operating system. Three models were evaluated based on the test set of the Computing in Cardiology Challenge 2017 dataset with sensitivity (Sen), specificity (Spe), precision (Pre), accuracy (Acc), and F1 score. The experiments were divided into three groups, and the variables were the length of the experimental data.
To calculate the evaluation indexes, true positive (TP), true negative (TN), false positive (FP), and false negative (FN) were adopted, and the calculation formulae are as follows: At the same time, the loss and accuracy curves of the three deep learning models were also calculated. Then, the experimental results with the experimental data of 5 s, 10 s, and 20 s are described, respectively.
In this paper, we used categorical_crossentropy loss as the loss function of the model, which is used to evaluate the difference between the probability distribution obtained from the current training and the true distribution.
The loss is derived from Equation (17): where y is the desired output and a is the true output.
Experiments of 5 Second Segment
The loss and accuracy curves of the three models are shown in Figures 10 and 11 when the segment length is 5 s.
Experiments of 5 Second Segment
The loss and accuracy curves of the three models are shown in Figures 10 and 11 when the segment length is 5 s.
As can be seen from the Figures 10 and 11, in 8CSL, the minimum loss was 0.4254, and the maximum accuracy was 83.06% on the training set with the minimum loss of 0.4382 and maximum accuracy of 81.53% on the test set. Compared with the other models, 8CSL had the highest accuracy and the lowest loss value, according to the stability and minimum value of the loss curve change and the maximum value of an accuracy curve to judge the best performance of 8CSL.
Experiments of 10 Second Segment
The loss and accuracy curves of the three models are shown in Figures 12 and 13, when the segment length was 10 s.
As can be seen from Figures 12 and 13, in 8CSL, the minimum loss was 0.4168, and the maximum accuracy was 86.23% on the training set with the minimum loss of 0.4285 and maximum accuracy of 85.06% on the test set.
Experiments of 5 Second Segment
The loss and accuracy curves of the three models are shown in Figures 10 and 11 when the segment length is 5 s.
As can be seen from the Figures 10 and 11, in 8CSL, the minimum loss was 0.4254, and the maximum accuracy was 83.06% on the training set with the minimum loss of 0.4382 and maximum accuracy of 81.53% on the test set. Compared with the other models, 8CSL had the highest accuracy and the lowest loss value, according to the stability and minimum value of the loss curve change and the maximum value of an accuracy curve to judge the best performance of 8CSL.
Experiments of 10 Second Segment
The loss and accuracy curves of the three models are shown in Figures 12 and 13, when the segment length was 10 s.
As can be seen from Figures 12 and 13, in 8CSL, the minimum loss was 0.4168, and the maximum accuracy was 86.23% on the training set with the minimum loss of 0.4285 and maximum accuracy of 85.06% on the test set. Figure 11. Accuracy curves of the three models on the five second data.
As can be seen from the Figures 10 and 11, in 8CSL, the minimum loss was 0.4254, and the maximum accuracy was 83.06% on the training set with the minimum loss of 0.4382 and maximum accuracy of 81.53% on the test set. Compared with the other models, 8CSL had the highest accuracy and the lowest loss value, according to the stability and minimum value of the loss curve change and the maximum value of an accuracy curve to judge the best performance of 8CSL.
Experiments of 10 Second Segment
The loss and accuracy curves of the three models are shown in Figures 12 and 13, when the segment length was 10 s.
Experiments of 20 Second Segment
The loss and accuracy curves of the three models are shown in Figures 14 and 15 when the segment length was 20 s.
Experiments of 20 Second Segment
The loss and accuracy curves of the three models are shown in Figures 14 and 15 when the segment length was 20 s. As can be seen from Figures 12 and 13, in 8CSL, the minimum loss was 0.4168, and the maximum accuracy was 86.23% on the training set with the minimum loss of 0.4285 and maximum accuracy of 85.06% on the test set.
Experiments of 20 Second Segment
The loss and accuracy curves of the three models are shown in Figures 14 and 15 when the segment length was 20 s.
As can be seen from Figures 14 and 15, in 8CSL, the minimum loss was 0.4168, the maximum accuracy was 86.23% on the training set with the minimum loss of 0.4285 and maximum accuracy of 85.06% on the test set. Figure 13. Accuracy curves of the three models on the 10 second data.
Experiments of 20 Second Segment
The loss and accuracy curves of the three models are shown in Figures 14 and 15 when the segment length was 20 s. As can be seen from Figures 14 and 15, in 8CSL, the minimum loss was 0.4168, the maximum accuracy was 86.23% on the training set with the minimum loss of 0.4285 and maximum accuracy of 85.06% on the test set. Table 4 is derived from Equations (12)- (16). The overall experiments of the three models are shown in Table 4. As shown in Table 4, in 8CSL, when the segment length is 5 seconds, the Sen, Spe, Pre, Acc, and F1 score are 84.36%, 89.26%, 85.43%, 81.53%, and 84.89% respectively. When the segment length is 10 seconds, the Sen, Spe, Pre, Acc, and F1 score are 87.42%, 91.37%, 91.78%, 85.06%, and 89.55% respectively. When the segment length is 10 s, the Sen, Spe, Pre, Acc, and F1 score are 83.08%, 87.21%, 88.37%, 81.86%, and 85.64% respectively. Compared with other models, 8CSL has the best performance on the test set. Because 8CSL has a deeper network model and more ways to prevent over-fitting.
Overall Results
And compared with the same model, the three models all perform best in the segment length is 10 s. Because most of the processed data is distributed around 10 s, the information is relatively complete. Table 4 is derived from Equations (12)-(16). The overall experiments of the three models are shown in Table 4. As shown in Table 4, in 8CSL, when the segment length is 5 s, the Sen, Spe, Pre, Acc, and F1 score are 84.36%, 89.26%, 85.43%, 81.53%, and 84.89% respectively. When the segment length is 10 s, the Sen, Spe, Pre, Acc, and F1 score are 87.42%, 91.37%, 91.78%, 85.06%, and 89.55% respectively.
Overall Results
When the segment length is 10 s, the Sen, Spe, Pre, Acc, and F1 score are 83.08%, 87.21%, 88.37%, 81.86%, and 85.64% respectively. Compared with other models, 8CSL has the best performance on the test set. Because 8CSL has a deeper network model and more ways to prevent over-fitting.
And compared with the same model, the three models all perform best in the segment length is 10 s. Because most of the processed data is distributed around 10 s, the information is relatively complete.
Efficiency Experiment of Shortcut Connection
To validate the effectiveness of shortcut connection in speeding up the data processing, this paper compares the average time of 10 epochs in the case of adding the 8 shortcut connections and no shortcut connection, as shown in Table 5. Table 5. Data processing speed with or without shortcut connections.
Model Speed
Shortcut connection 11 s No shortcut connection 18 s As shown in Table 5, if the model use shortcut connection, the data processing time will be saved by 7 s, i.e., the data transmission and data processing are speeded up by 38%.
To verify whether the number of shortcut connections affects the experimental results, we use 6 shortcut connections, 7 shortcut connections, 8 shortcut connections, 9 shortcut connections and 10 shortcut connections to compare.
As shown in Figure 16, the time decreases with the increase of the number of shortcut connections, but the accuracy reaches the highest when the number of shortcut connections is 8. In this paper, on the premise of ensuring the accuracy and time, we choose 8 shortcut connections.
Efficiency Experiment of Shortcut Connection
To validate the effectiveness of shortcut connection in speeding up the data processing, this paper compares the average time of 10 epochs in the case of adding the 8 shortcut connections and no shortcut connection, as shown in Table 5. Table 5. Data processing speed with or without shortcut connections.
Model
Speed Shortcut connection 11 s No shortcut connection 18 s As shown in Table 5, if the model use shortcut connection, the data processing time will be saved by 7 s, i.e., the data transmission and data processing are speeded up by 38%.
To verify whether the number of shortcut connections affects the experimental results, we use 6 shortcut connections, 7 shortcut connections, 8 shortcut connections, 9 shortcut connections and 10 shortcut connections to compare.
As shown in Figure 16, the time decreases with the increase of the number of shortcut connections, but the accuracy reaches the highest when the number of shortcut connections is 8. In this paper, on the premise of ensuring the accuracy and time, we choose 8 shortcut connections.
Conclusions
In this paper, we proposed a combination of an 8-layer CNN with eight shortcut connections and a 1-layer LSTM model, which was to detect AF from single lead short ECG recordings. This can speed up the data transmission and processing of traditional convolutional neural networks by adding shortcut connections. It also consists of a 1-layer LSTM and a fully connected layer, which can not only extract features skillfully, but also deal with long-term dependence between data. The three deep learning models were evaluated based on the test set of the Computing in Cardiology Challenge 2017 dataset with an F1 score. Through three groups of comparative experiments, performance on all indexes of 8CSL was better than that of RNN and MCNN. At the same time, the effectiveness of adding shortcut connections was verified through an efficiency experiment of the shortcut connection. Moreover, 8CSL can be improved by detecting atrial fibrillation in a 12-lead ECG. This direction is the focus of our future research.
Conclusions
In this paper, we proposed a combination of an 8-layer CNN with eight shortcut connections and a 1-layer LSTM model, which was to detect AF from single lead short ECG recordings. This can speed up the data transmission and processing of traditional convolutional neural networks by adding shortcut connections. It also consists of a 1-layer LSTM and a fully connected layer, which can not only extract features skillfully, but also deal with long-term dependence between data. The three deep learning models were evaluated based on the test set of the Computing in Cardiology Challenge 2017 dataset with an F1 score. Through three groups of comparative experiments, performance on all indexes of 8CSL was better than that of RNN and MCNN. At the same time, the effectiveness of adding shortcut connections was verified through an efficiency experiment of the shortcut connection. Moreover, 8CSL can be improved by detecting atrial fibrillation in a 12-lead ECG. This direction is the focus of our future research.
Conflicts of Interest:
The authors declare no conflicts of interest. | 9,902.2 | 2020-05-20T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Effect of Potassium Periodate Oxidizer on Germanium Chemical Mechanical Planarization Using Fumed Silica as Abrasive
The chemical mechanical planarization (CMP) removal rate (RR) of germanium using potassium periodate as oxidizer with fumed silica based slurries have been investigated. Static etch rate (ER) of germanium was performed as a function of solution temperature, pH and concentration of potassium periodate. Change in the enthalpy ( (cid:2) H act ) and entropy ( (cid:2) S act ) of activation for the Ge in the proposed oxidizer was found to be 11.029 kJ/mol and − 271.06 J/mol.K, respectively. The values suggest that the dissolution of germanium in the proposed oxidizer is endothermic in nature and the etching is controlled by activation complex. The effect of slurry pH, KIO 4 concentration, turntable speed and down pressure on Ge RR were studied. ER and RR of Ge was found to increase with pH. In the absence of oxidizer, polishing of germanium with 3 wt% fumed silica showed almost zero removal. With the addition of 1 wt% KIO 4 + 0.1M KOH to 3 wt% fumed silica slurry, significant increase in the RR of Ge was observed for complete range of pH. Ge is oxidized to form germanium dioxide in the presence of KIO 4 , which on subsequent oxidation resulted in the formation of soluble species.
Chemical mechanical planarization process is an crucial step for successful integration of integrated circuits (IC) using complementary metal oxide semiconductor (CMOS) technology. Since 1947, silicon (Si) is dominating the semiconductor industry. 1 However, with the continuous shrinking size of the device, there is a potential need to replace silicon with higher mobility channel materials. Germanium (Ge) is promising candidate, with lower bandgap and three times higher electron and hole mobility as that of Si, to manufacture the next generation advanced, fast and miniaturized devices in microelectronics industry. 2 Ge is considered for integrating p-type metal oxide semiconductor (PMOS) channel for future metal oxide semiconductor field effect transistor (MOSFET) devices. 3 Chemical mechanical planarization step is of great importance to take away the excess material from the surface during device integration to provide a flat planarized surface. 4 For Ge CMP, very few research articles are available in public literature. Peddeti et al. 5 reported H 2 O 2 based colloidal silica slurry to polish germanium. Ge removal rate was found to be maximum at pH 12. Zeta potential of silica particle and Ge surface were measured as a function of pH. Strong repulsion exists between the particles and surface at pH 6 and 10. Higher silica concentration of 3 wt% silica with oxidizer was used to enhance the Ge removal rate. The importance of ionic strength of silica-based slurry containing H 2 O 2 oxidizer with KCl, K 2 SO 4 , KNO 3 and NaNO 3 electrolytes was studied by Matovu et al. 6 The bond strength reported for Ge-Ge bond and Ge-O bond is 37.6 kcal/mol and 86 kcal/mol, respectively. The material removal reported was due to the rupture of weaker Ge-Ge bond. Hsu et al. 7 used Ge removal rate enhancers such as methylpyridine compounds, in addition to H 2 O 2 . Out of various methylpyridine derivatives, 4-methylpyridine showed highest Ge removal of 3,478 Å /min. Tooth lock removal mechanism was reported by Gupta et al. 8 for Ge polishing in the absence of oxidizer. Maximum removal rate of 160 nm/min was reported for germanium with rutile titania based slurry at pH 3. Higher material removal was attributed to structural similarity between germanium dioxide and the rutile titania, which resulted in the formation of Ti-O-Ge bond. In alkaline region, pH≥11, germanium dioxide was found to change in amorphous form, which suppresses tooth lock between germanium dioxide and the rutile titania and resulted in lower removal rate. Reshma et al. 9 proposed oxone as oxidizer for polishing Ge using fumed silica based slurry.
In the presence and absence of 3 wt% fumed silica, higher material removal was reported in alkaline region as compared to acidic and neutral region. Maximum removal of ∼284 nm/min was reported at pH 11 with 1 wt% oxone in 3 wt% fumed silica based slurry.
The presence of sodium salts is not preferred in IC chip fabrication. 10 Few literature suggests KIO 4 as oxidizer to polish ruthenium and copper. 11-14 H 2 O 2 is widely used oxidizer for polishing Ge. But the hydrogen peroxide is a thermodynamically unstable compound, which decomposes to give oxygen and water. Moreover, hydrogen peroxide can reduce the pot life of the CMP slurry. Hence, there is always a need for an alternate oxidizer to overcome the above challenges. [15][16][17] This is the first report proposing KIO 4 as oxidizer to polish Ge using fumed silica based slurry.
Materials and Methods
Ge disk with 99.999% purity, 1 in. diameter and 0.5 in. tall, was procured from RWMM (Rare World Metals Mint, USA). Fumed silica procured from Cabot India (CAB-O-SIL M-5) with an average particle length of 0.2-0.3 μm 18 and KIO 4 purchased from Pallav Chemicals & Solvents Pvt. Ltd, India were used for the slurry formulation. Prior to CMP experiments, dissolution study was conducted to determine the Ge ER and to analyze the impact of chemical action on the material removal. Etchant solution was prepared using oxidizer, KIO 4 , which has very low solubility of 0.018 moles/liter at 20°C. 11 According to literature, 11,13 addition of KOH increases the solubility of KIO 4 in water. Synergetic effect of both the compounds results in the formation of a soluble dimesoperiodate complex, which is an effective oxidizer with similar properties as of KIO 4 . 11 Unless mentioned otherwise, KOH concentration of 0.1 M was used with all the etchant solutions and slurry for the CMP experiments. Slurry pH was adjusted using HNO 3 . Benchtop chemical mechanical polisher, purchased from Struers, Denmark (LaboPol-5/LaboForce-3), was used for performing the polishing experiments. The detailed experimental procedure for polishing Ge is reported elsewhere. 8,9,19 Zeta potential and effective diameter of the particles in the slurry with and without oxidizer were investigated over the pH using NanoBrook 90 Plus PALS zeta potential-cum-particle size analyser. Surface roughness of polished and unpolished samples were evaluated using 3D optical microscope, Bruker, Contour GT-K0, Germany. Etch rate and Ge removal rate was calculated using difference in intial and final weight of Ge. As the density and polishing area are available the weight difference can be converted to film thickness. Ge ER and RR were found by dividing film thickness by polishing time. 20 Ge removal using the proposed slurry was tested for Prestonian behavior, RR(nm/min) = APv where A is Preston coefficient, P is the down pressure and v is the table speed. 4
Results and Discussion
Etching experiments.-Effect of KIO 4 concentration on Ge ER.-Etching experiments on Ge were performed by varying KIO 4 concentration ranging between 0.25 wt% to 1.5 wt%. The pH of the solutions with different KIO 4 concentration were adjusted to 11 using HNO 3 . It can be depicted from Figure 1 that ER significantly depends on KIO 4 concentration. Ge ER was increased to ∼52 nm/min from ∼6 nm/min with KIO 4 concentration increased to 1 wt% from 0.25 wt%, respectively. This might be due to increased oxidation of Ge surface with IO − 4 ions and also due to the formation of soluble dimesoperiodate complex. 11,13 Further increase beyond 1 wt% of KIO 4 concentration (1.25 wt% and 1.5wt%), as seen in Figure 1, Ge ER was found to be saturated. The observed saturation can be attributed to the limitated Ge surface available for oxidation. 21 KIO 4 concentration of 1 wt% with 0.1 M KOH was selected for further study, as there was no remarkable increase in Ge ER above 1 wt% KIO 4 concentration. Ge ER was zero for the pH adjusted distilled water in the absence of KIO 4 . 9 Effect of pH on Ge ER.-At fixed KIO 4 concentration, the effect of solution pH on Ge ER was studied. KIO 4 concentration of 1 wt% was used by adjusting the pH using HNO 3 for this study. An increase in Ge ER with increasing solution pH can be seen in Figure 2. Ge ER was enhanced to ∼52 nm/min from ∼15 nm/min, when pH of the solution was increased to pH 11 from pH 3, respectively. Increase in ER while moving toward alkaline region might be due to the oxidation of Ge surface with IO 4 − anions formed from dissociation of KIO 4 , as shown in Eq. 1 and with * OH radicals, formed from the dissociation of KOH, as shown in Eq. 2. The observed ER is in line with the reported literature of H 2 O 2 system. 5,6 Effect of temperature on ER.-The relation between solution temperature and ER was studied in 1wt% KIO 4 range 30°C-70°C was selected for dissolution study. The results are shown in Figure 3. The etch rate was found to increase with increase in temperature. At 30°C the ER was ∼58 nm/min, which enhanced to ∼114 nm/min at 70°C. Activation energy for Ge dissolution in potassium periodate solution was calculated using Arrhenius equation, as shown in Eq. 3. Where, A is the Arrhenius pre-exponential factor, E a is the activation energy, R is the universal gas constant (8.314 J/mol.K) and T is the absolute temperature (K). 22 Figure 4 shows the semilogarithm plot of etch rate and 1/T. The activation energy was found to be 13.7 kJ/ mol.
303RT [4] In order to determine the thermodynamic property, the transition state equation was applied to evaluate enthalpy and entropy of activation, as shown in Eq. 4. Where, H act is the enthalpy of activation, S act is the entropy of activation, R is the universal gas constant, h is Planck's constant (6.626176 × 10 −34 J-s) and N is Avogadro's number (6.02252 × 10 23 mol −1 ). Figure 5 shows the plot of log (ER/T) versus 1/T. It can be seen that a straight line is obtained with a slope of (-H act /2. Figure 6 illustrates the effect of pH on Ge RR. In the absence of fumed silica, Ge RR enhances from ∼37 nm/min to ∼160 nm/min when the pH of the slurry was varied from 3 to 11, respectively. Whereas, in the presence of 3 wt% fumed silica with 1 wt% KIO 4 + 0.1 M KOH slurry, Ge removal rate was found to be ∼49 nm/min at pH 3 and ∼222 nm/min at pH 11. The main cathodic reactions are the reduction of IO − Effect of KIO 4 concentration on Ge RR.- Figure 7 shows Ge RR at constant pH in presence and absence of 3 wt% fumed silica. For this study the pH was maintained at 11. Ge RR increased with increase in KIO 4 concentration from 0.25 wt% to 1 wt%. Ge RR was found to be zero with 3 wt% fumed silica over the entire range of pH. In the absence of abrasive, the increase in oxidizer concentration from 0.25 wt% to 1 wt% enhances Ge RR from ∼61 nm/min to ∼160 nm/min. Whereas with addition of 3 wt% fumed silica in polishing slurry, Ge RR was found to increase from ∼74 nm/min to ∼222 nm/min with KIO 4 concentration increasing from 0.25 wt% to 1 wt%, respectively. The enhanced RR of Ge with increasing KIO 4 concentration might be due to the increased oxidation of Ge with IO − 4 and IO − 3 anions, as shown in Eqs. 6 and 7. As the concentration of KIO 4 increases, Ge surface oxidizes rapidly to form GeO 2 with subsequent formation of soluble species. With the addition of abrasive, Ge RR is relatively higher, which might be attributed to the combined effect of free abrasive polishing and chemical etching. 5 Effect of pH on zeta potential.-The effect of pH on zeta potential (ζ ) of fumed silica, fumed silica with KOH and fumed silica with KOH + KIO 4 is shown in Figure 8. It can be clearly seen that fumed silica particles are negatively charged over entire pH range in the presence and absence of KIO 4 and KOH. There is no significant variation in the zeta potential value with the addition of the KOH and KIO 4 to the fumed silica. The isoelectric point (IEP) of Ge surface in DI water is between pH 4 and 4.5. 5,6 Thus, at pH 3 Ge surface is positively charged and fumed SiO 2 particles are negatively charged ( Figure 8) leading to electrostatic attraction between them. Whereas, for the slurry pH between 5 and 11, both Ge surface and SiO 2 particles are negatively charged leading to repulsion between them. Thus, the observed removal beyond pH 5 cannot be attributed alone to electrostatic interactions. There might be some other reason for removal observed in this study.
Effect of pH on effective diameter.-The effective diameter of fumed silica particles in the presence and absence of KIO 4 and KOH is shown in Figure 9. For the entire pH range, the effective diameter was found to be larger for 3 wt% fumed silica slurry as compared with slurry containing KIO 4 and KOH or either of them. Larger effective diameter of fumed silica particles is due to its presence in agglomerated form. It can be clearly seen that effective diameter is reduced with the addition of KOH for entire pH range. This reduction in effective diameter can be attributed to KOH, which increases the solubilty of KIO 4 and acts as a dispersing agent. The oxidizer, KIO 4 , serves as a stabilizing agent for the slurry, by refraining the particles to get agglomerate. Literature suggests that the total contact area of active abrasive particles between wafer and pad affects material removal. 24 An increase in concentration of abrasive particle, enhances the total contact area resulting in higher removal till the saturation limit. While increasing the pH toward the alkaline region in presence of KIO 4 and KOH, agglomerated fumed silica particles gets dispersed providing more total contact area between Ge surface and the abrasive particle and resulting in higher removal rate.
Effect of pressure on Ge RR.- Figure 10a shows the effect of pressure on Ge RR using 3 wt% fumed silica containing 1 wt% KIO 4 + 0.1 M KOH at pH 11. It was observed that with increase in pressure from 1.974 N/cm 2 to 7.898 N/cm 2 Ge RR enhances from ∼222 nm/min to ∼309 nm/min, respectively. In CMP, planarization is achieved when higher points on the surface were subjected to more pressure when compared with the lower points. 25 With increase in downpressure on Ge surface, the effective local pressure increases resulting in higher contact area between germanium surface and abrasive. 26 At zero pressure, Ge RR has a non-zero intercept following non-Prestonian behavior. This might be attributed to the formation of oxide film, which acts as a rate limiting step. 27 Non-prestonian behavior of material removal rate with pressure was distinctly reported for ruthenium/germanium CMP in literature. 8,9,28 Effect of table speed on Ge RR.- Figure 10b shows the effect of table speed on Ge RR using 3 wt% fumed silica + 1 wt% KIO 4 + 0.1M KOH slurry at pH 11. From Figure 10b, it is clear that as the platen rotational speed is increased from 50 rpm to 250 rpm, Ge RR increases from ∼98 nm/min to ∼321 nm/min. Higher RR is likely due to the rapid formation of GeO 2 film on the Ge surface 27 Surface morphology.-Surface roughness (Ra) of Ge coupons were evaluated at pH 3, pH 7 and pH 11. Before polishing, the surface roughness was found to be 0.72 ± 0.03 μm. Ge coupon polished using 3 wt% fumed silica + 1 wt% KIO 4 + 0.1 M KOH slurry at pH 3 showed no significant change in surface roughness. While the germanium coupon polished using the proposed slurry at pH 7 and pH 11, shown surface roughness of 0.66 ± 0.05 μm and 0.41 ± 0.05 μm, respectively. With increase in the pH value of the slurry, Ge surface roughness was found to be decrease. Ge coupons polished with 3 wt% fumed silica + 1 wt% KIO 4 + 0.1M KOH at pH 11, shows a surface roughness reduction of ∼42%.
Removal mechanism.-Fumed silica particles are negatively charged over the entire pH range for all the three slurries i.e., 3 wt% 5,6 Ge surface have isoelectric point (IEP) between pH 4 and 4.5 in pH adjusted DI water. At pH 3, Ge surface is positively charged and SiO 2 particles are negatively charged leading to electrostatic attraction among them and from pH 5 onwards both Ge surface and SiO 2 particles are negatively charged leading to repulsion between them. In alkaline region, Ge polishes at higher rate with higher concentration silica based abrasives, where electrostatic repulsion exists between the SiO 2 particles and Ge surface. 5 Apart from electrostatic interactions, there might be some other reason behind for the higher material removal in the alkaline region.
The increase in material removal with abrasive concentration in the solid-solid contact mode of CMP was reported by Luo 24 with a qualitative explanation about active and inactive abrasives. The material removal increases with more number of active abrasive between the wafer and pad surface, which in turn increases the total contact area among them, till saturation occurs. Ge polished with 3 wt% fumed silica has larger effective diameter, which is due to its presence in agglomerated form contributing to less total surface area. With the addition of 0.1 M KOH, which acts as dispersing agent, the abrasive particles are dispersed with pH, providing more total surface area for interaction. Potassium periodate works as a stabilizer and refrains particles to agglomerate. The interaction between agglomerated / dispersed fumed silica particles and Ge surface, which in turn affects the total contact between the particles and the Ge surface is schematically shown in Figure 11.
Potassium periodate dissociates to form potassium (K + ) cation and periodate (IO − 4 ) anion, as shown in Eq. 1. The main cathodic reac-tions for polishing Ge in KIO 4 solution is the reduction of IO − 4 to IO − 3 anions, [11][12][13][14] as shown in Eq. 5. The IO − 4 and IO − 3 anions oxidizes Ge to form germanium dioxide, according to Eqs. 6 and 7. Literature suggest that the GeO 2 undergoes hydration in the aqueous environment and form germanium hydroxide complex, GeO(OH) − 3 and GeO 2 (OH) 2− 2 . 28 However, the formation of those species strongly depends on the pH value. It can be explained that in the acidic region (pH<8), the reaction proceeds as shown in Eq. 8 to form of germanic acid, Ge(OH) 4 , which is not easily soluble in water resulting in lesser RR. While in the weak alkaline pH range between 8 and 11, the concentration of * OH radicals increases, which enhances the oxidation of germanium and hydration of GeO 2 to form a more soluble dissolution product GeO(OH) − 3 as shown in Eq. 9. In the highly alkaline region (pH>11), the most soluble species, GeO 2 (OH) 2− 2 is prevalent in aqueous solution as shown in Eq. 10. 5,6 pH < 8 : GeO 2 + 2H 2 O ⇔ Ge(OH ) 4 (aq) [ 8 ] 8 < pH < 11 : pH > 11 :
Conclusions
CMP slurry for polishing Ge disk using 3 wt% fumed silica + 1 wt% KIO 4 with 0.1 M KOH was proposed. With increase in temperature ER was found to increase, the activation energy was found to be 13.7 kJ/ mol. The enthalpy and entropy of activation are 11.029 kJ/mol and −271.06 J/mol.K, respectively, which suggest that the dissolution of germanium is endothermic process and the dissolution is controlled by means of activation complex. For the same potassium periodate concentration and pH, there was an appreciable increase in the Ge MRR over Ge ER, due to the synergic effect of free abrasive polishing and chemical etching. The effective diameter of fumed silica abrasive is larger for 3 wt% fumed silica slurry than 3 wt% fumed silica + 0.1 M KOH and 3 wt% fumed silica + 1 wt% KIO 4 + 0.1 M KOH slurry for complete range of pH. At pH 11, maximum RR of ∼222 nm/min was obtained with 3 wt% fumed silica + 1 wt% KIO 4 + 0.1 M KOH slurry. The higher removal in alkaline region could be due to the presence of * OH radicals with increasing pH and also due to increased oxidation of Ge with IO − 4 and IO − 3 anions. Moreover, with increase in the pH, agglomerated particles gets dispersed, leading to result more contact between Ge coupon and SiO 2 particles. Subsequently, Ge oxidizes to form germanium dioxide followed by formation of rapidly soluble Ge hydroxide species. Surface roughness was found to be reduced by ∼42% at pH 11. Ge removal with potassium periodate follows non-Prestonian behavior. Figure 11. Interaction between aggromerated and dispersed fumed silica particles in presence and absence of KIO 4 for Ge removal. | 4,964.4 | 2019-01-01T00:00:00.000 | [
"Chemistry"
] |
Effect of Electrolytic Bath Temperature on Magnetic and Structural Properties of Electrodeposited NiFeW Nano Crystalline Thin Films
Nickel, iron films are vitally used in new generation recording heads, designing of microelectro mechanical devices and magnetic random access memory. The addition of tungsten (W) can enhance magnetic and mechanical properties of NiFe thin films. Electrodeposited Ni-FeW films were prepared at different temperature (30, 50, 70 and 900C) and they were subjected tomorphological, structural, magnetic and mechanical characterization analysis. Nickel content was maximum as 61.41 wt% at 90oC. The tungsten content increased when electrolytic bath temperature was increased. Ni-Fe-W films were bright and uniformly coated on the surface. Also the deposits of Ni-FeW films were in nano scale and the average crystalline size was around 22 nm. Thin films prepared at high temperature exhibited a high saturation magnetization and low coercivity. The micro hardness of Ni-Fe-W was 221 VHN at 90oC.
INTRODUCTION
The chemical,mechanical and thermal electrochemical interaction of materials is initiated from materials surface 1 .Therefore, surface is the most important engineering part of a material.The role of technologies which modifying surface in the production and manufacturing process is predictable [2][3][4] .Also the surface technology has vital role in production process.Electrodeposition is an electrochemical process and an old method which applied for surface structure modification 5,6 .Nickel and its alloys have important advantages such as good wear and corrosion resistance.So deposition of nickel is needed to improve wear and corrosion resistance in the industry, repairing eroded metals, to improve magnetic properties, to change the dimensions of small metal pieces and preparing organic coating.NiFeW, NiCoW, and NiW are most commonly used magnetic thin film materials in MEMS and NEMS [7][8][9][10] .The electroplating of NiFeW thin film is used to obtain enhanced electrical conductivity, good soft magnetic characters, special optical properties and improved corrosion resistance [11][12][13][14] .
Electrodeposition method is widely used in MEMS, NEMS, communication, optical and sensors industries 15 .In this study, the effects of different temperatures on characters of NiFeW alloy thin films are analysed.This paper explains the preparation and characterization of electroplated Ni-Fe-W thin films.
EXPERIMENTAL
The electroplated NiFeW alloy films were prepared for different temperature 30, 50, 70 and 90 o C. The time duration of deposition process was 15 minute.In this investigation, copper and stainless steel substrates acted as cathode and anode respectively with 1.5 cm x 7.5 cm.dimension [16][17][18][19] .The NiFeW thin films were prepared from electrolytic solution which contain Ferrous Sulphate (10 g/L), Nickel sulphate(30 g/L), Sodium tungstate(15 g/L), Ammonium Sulphate(40 g/L) , Boric acid(10 g/L) and Saccharin(10 g/L) [20][21] .The pH value of electrolytic solution was fixed as 6.0 by mixing ammonia solution and electroplating process was carried out with current density 3 mA/cm 2 .The copper or cathode was carefully removed from the bath after 15 min and dried for few minutes [22][23] .The surface nature of Ni-Fe-W films was characterized by Scanning Electron Microscope.The elemental content of deposits of films was examined by Energydispersive X-ray Spectroscopy and crystal structure of deposits was analyzed by X-ray diffraction study.The micro hardness of films was measured by Vickers Hardness Test.The important magnetic properties of deposits are saturation magnetization and coercivity.Analysis of thin films with Vibrating Sample Magnetometer gives magnetic properties.
Composition of NiFeW films
Energy-dispersive X-ray analyser (EDAX) gives elemental composition of deposits.Table 1 shows weight percentage of Fe, Ni and W at different electrolytic bath temperature by EDAX analysis.From results, films prepared at 90 o C have high content of tungsten.The highest ferrous content of 22.67 wt% was found at electrolytic temperature of 30 o C. While electrolytic temperature increases, the content of nickel and tungsten also increase.The higher nickel content of 61.41 wt% was found at 90 o C temperature.
Morphological study of nifew films
The surface structure of Ni-Fe-W thin films with temperature 30, 50, 70 and 90 o C were analysed by Scanning Electron Microscope (SEM) images and they are shown in Fig. 1.The thin films are bright and uniformly coated on the surface.They looks crack free in appearance.
Structural analysisof NiFeW films
Crystal structure of Ni-Fe-W alloy films was analyzed by powder X-ray diffraction (XRD). is concluded that crystal size of thin film deposits decreases by increasing electrolytic temperature.Also deposits of thin films reveals nano scale and average crystalline size is around 22 nm.When bath temperature increases, strain value increases from 1.333 x 10 -3 to 2.095 x 10 -3 dislocation density increases from 13.57 x 10 14 / m 2 to 33.49 x 10 14 / m 2 .
The crystal size, strain value and dislocation density of Ni-Fe-W alloy films are shown in table 2. The crystalline size of deposits decreases due to onset orientation of crystals while bath temperature increases.Fig. 3 shows that crystal size reduces with increasing bath temperature.
Mechanical propertiesof NiFeW films
Micro hardness measurement of deposits was done by Vickers hardness tester.The hardness values of thin films prepared for temperature 30, 50, 70 and 90 o C are 114, 139, 187, 221 VHN respectively.So hardness test shows that the micro hardness increases by increasing electrolytic bath temperature due to lower stress associated with thin films.Fig. 4 shows the variation of hardness with increasing bath temperature.
Magnetic properties of NiFeW films
The important magnetic characters coercivity, saturation magnetization, retentivity and Squareness of NiFeW alloy films were obtained by VSM and they are given in table.3.The hysteresis curves for thin film NiFeW with various temperature are shown in Fig. 5.The elemental composition of deposits, applied potential, additive materials, temperature and electrolytic bath are the deciding factors of thin film magnetic properties.
When electrolytic temperature is changed as 30 o C, 50 o C, 70 o C and 90 o C, the magnetization increases from 38.713 x 10 -3 emu/cm 2 to 152.824 x 10 -3 emu/cm 2 .Also when temperature is increased from 30 o C to 90 o C, the coercivity decreases from 220.634 G to 121.82 G. From result of vibrating sample magnetometer, it is concluded that films obtained at higher temperature reveals high value The magnetization and other important magnetic property coercivity can be changed by decreasing the grain size of deposits.The film stress is reduced by increasing bath temperature.The Ni-Fe-W thin films prepared at 90 o C have lower coercivity and higher magnetization due to low stress formation during deposition.So it is concluded that soft magnetic nature of NiFeW thin films are enhanced by increasing bath temperature.
CONCLUSION
The NiFeW alloy thin films with different temperature were prepared by electrodeposition by maintaining current density 3 mA/cm 2 and pH of solutions 6.0.The thin films are bright and uniformly coated on the surface.The XRD result reveal the existence of FCC type crystal .The crystalline size of deposits decreases due to online orientation of crystal while increasing bath temperature.The hardness increases with increasing bath temperature due to lower stress associated with thin films.When bath temperature is increased from 30 o C to 90 o C, the magnetization increases from 38.713 x 10 -3 emu/cm 2 to 152.824 x 10 -3 emu/ cm 2 .Also when temperature is increased from 30 o C to 90 o C, the coercivity decreases from 220.634 G to121.82 G.This is due to nano crystalline structure of deposits.The addition of tungsten (W) with NiFe alloy in electroplating can enhance its magnetic, mechanical and structural properties and this alloy films can be used in NEMS,MEMS and memory devices.
Fig. 2 .
shows diffraction patterns of Ni-Fe-W films prepared with different temperature.The occurrence of sharp peaks in X-ray diffraction pattern exhibit crystalline nature of deposits.The XRD patterns of all samples deposited at 30, 50, 70, and 90 o C are being indicative of the (111), (200) and (220).Also XRD result reveal the existence of FCC type crystal.The particle sizes of Ni-Fe-W deposits are 27.15 nm, 22.18nm, 20.72nm and 17.28 for the bath temperature 30, 50, 70, and 90 o Crespectively.So it | 1,832.6 | 2017-11-10T00:00:00.000 | [
"Materials Science"
] |
Higher-Derivative Corrections to Entropy and the Weak Gravity Conjecture in Anti-de Sitter Space
We compute the four-derivative corrections to the geometry, extremality bound, and thermodynamic quantities of AdS-Reissner-Nordstr{\"o}m black holes for general dimensions and horizon geometries. We confirm the universal relationship between the extremality shift at fixed charge and the shift of the microcanonical entropy, and discuss the consequences of this relation for the Weak Gravity Conjecture in AdS. The thermodynamic corrections are calculated using two different methods: first by explicitly solving the higher-derivative equations of motion and second, by evaluating the higher-derivative Euclidean on-shell action on the leading-order solution. In both cases we find agreement, up to the addition of a Casimir energy in odd dimensions. We derive the bounds on the four-derivative Wilson coefficients implied by the conjectured positivity of the leading corrections to the microcanonical entropy of thermodynamically stable black holes. These include the requirement that the coefficient of Riemann-squared is positive, meaning that the positivity of the entropy shift is related to the condition that $c - a$ is positive in the dual CFT. We discuss implications for the deviation of $\eta/s$ from its universal value and a potential lower bound.
IV. Thermodynamics from the On-Shell Euclidean
The aim of the swampland program [1] is to understand what subset of the infinite space of possible effective field theories can arise at low energies from theories of quantum gravity.
This requires finding simple criteria for classifying these theories; those that admit UV completions including quantum gravity are in the landscape, and those that do not are in the swampland. Such criteria are far more useful if they are detectable in the infrared without any knowledge of the particulars of the UV completion. In practice, one fruitful source of such IR intuition is the physics of black holes, which are believed to conform to general relativity and the laws of black hole thermodynamics in the semiclassical regime.
So far a number of criteria have been proposed (see e.g. [2] for a review of the program).
One that has attracted considerable interest is the Weak Gravity Conjecture (WGC), which roughly states that there are states whose mass is smaller than their charge [3] -hence, they are states for which "gravity is the weakest force." Such states are labelled "superextremal," and may be provided by fundamental particles or non-perturbative states such as black holes. The original motivation was to provide a mechanism for black holes to decay. These black holes are believed to obey an "extremality bound" on its mass to charge ratio, which arises from requiring that the solution does not contain a naked singularity. It was believed that if a theory of gravity did not have a superextremal particle in its spectrum, then simple conservation of charge would prevent it from decaying to lighter components without violating cosmic censorship. This, in turn, would be problematic because it would lead to an infinite number of stable states or remnants.
Another mechanism for the decay of nearly extremal black holes in flat space was pointed out in [4]. Generically, the low-energy limit of a theory of quantum gravity should include higher-derivative corrections that encode the UV physics in a highly suppressed way. Such corrections can change the allowed charge to mass ratio of Reissner-Nordström black holes so that they can decay to smaller black holes. In particular, the classical extremality bound would be corrected into a form that schematically may look like M Q > 1 − 1 Q 2 (2c 1 + 8c 2 + ...) , (1.2) shifted slightly by the Wilson coefficients c i of the higher-derivative operators. It then becomes possible for the nearly extremal black holes to decay as long as this combination of coefficients is positive. The statement that black holes can always decay through these higher-derivative corrections is sometimes called the "Black Hole Weak Gravity Conjecture." This idea has inspired a large amount of work [5][6][7][8][9][10][11] on bounding the EFT coefficients, thereby proving the conjecture. While no existing proof is completely general, the work so far covers a large number of possibilities and assumptions.
One intriguing proof of the WGC in flat space relates the extremality shift to the shift in the Wald entropy [7]. The authors first show that, near extremality, the shift to the extremality bound at fixed charge and temperature is proportional to the shift in entropy at fixed charge and mass. They then present an argument that the higher-derivative corrections should increase the entropy, thereby proving the Black Hole WGC. The argument for the entropy shift positivity is not expected to be fully general; it applies to higher-derivative corrections that arise from integrating out massive particles at tree-level. Nonetheless, it is curious that the entropy shift is proportional to the extremality shift. This fact was given a simple thermodynamic proof in [12], where no assumptions were made about the particulars of the background.
So far, however, these ideas have not fully made their way to Anti-de Sitter space. From the WGC point of view, it is easy to see why: the relationship between mass and charge of an extremal black holes in AdS is already non-linear at the two-derivative level 1 . Therefore it is not at all clear what is gained by studying the higher-derivative corrections to the extremal mass-to-charge ratio 2 . Furthermore, massive particles emitted from a black hole cannot fly off to infinity in AdS as they can in flat space, so if the WGC allows for the instability of black holes in AdS, it must be through a completely different mechanism. (See the Discussion for more commentary on this possibility).
Regardless, the entropy-extremality relationship is expected to hold in AdS as it does in flat space (and indeed, an example in AdS 4 was given in [12]). Therefore, this paper 1 By "extremal," we mean that the temperature is zero. This is not the same as the BPS limit in AdS. 2 Other aspects of the WGC have been discussed in AdS. See e.g. [13][14][15][16].
addresses two main issues in Anti-de Sitter space. First, we check the purported relationship between the entropy shift and the extremality shift, and indeed we find that it holds for the AdS-Reissner-Nordström backgrounds. Second, we examine the conjecture that the entropy shift is positive when the leading-order solution is a minimum of the action. By computing the entropy shift explicitly, we see that its positivity for stable black holes implies that the coefficient of Riemann-squared is universally positive. This has interesting consequences for the structure of η/s, as we comment on in the Discussion section.
This paper is organized as follows. In section II we introduce the theory we will examine, and find the solutions to the equations of motion at first order in the EFT coefficients. In section III we use the solution to compute the shift to extremality, considering the result both at fixed charge and fixed mass. We then compute the shift to the Wald entropy, and we find that the conjectured relationship of [7,12] between the shift to mass and shift to entropy is valid for AdS-Reissner-Nordström black holes. Furthermore, we notice that both the mass shift and entropy shift are also proportional to the charge shift, which allows us to We present a simple thermodynamic derivation of these relationships in Appendix A.
In section IV, we reproduce these results from a thermodynamic point of view. It has recently been shown [17] that the first-order corrections to the solutions are not needed to compute the first order corrections to thermodynamic quantities. In this section, we verify that this is the case for AdS-Reissner-Nordström backgrounds by computing the fourderivative corrections to the renormalized on-shell action. From this we may compute the free energy and other thermodynamic quantities. We find that the results of this calculation match the results from section III in even dimensions, while in odd dimensions the free energy and associated thermodynamic quantities are renormalization scheme dependent, and agree with the geometric calculation in a physically motivated zero Casimir scheme.
In section V, we first review the argument given in [7] for the positivity of the entropy shift, and comment on a potential issue with applying it to AdS. The positivity of the entropy shift requires that the black hole solutions are local minima of the path integral, so we compute the specific heat and electrical permittivity to determine the regions of parameter space where the black holes will be stable. Finally, we determine the constraints placed on the EFT coefficients by assuming that the entropy shift is positive for all stable black holes.
We compare this to the results obtained by requiring that the entropy shift is positive for only extremal black holes, which is equivalent to the condition that the extremality shift at fixed charge and temperature is positive. The constraints include the requirement that the coefficient of Riemann-squared is positive. As this coefficient is proportional to the difference c − a between the central charges of the dual CFT, we conclude that the positivity of the entropy shift will be violated in theories where c − a < 0.
We summarize our results in the Discussion section, where we comment on the implications of our results for the behavior of η/s, as well as the nature of the WGC in AdS.
We relegate to Appendix A the specific form of the entropy shifts and bounds on the EFT coefficients for AdS 5 through AdS 7 . In Appendix B, we present a general proof of the entropy-extremality relation of [12], and we comment on some recent results concerning the entropy-extremality relationship in specific stringy models [18,19].
II. CORRECTIONS TO THE GEOMETRY
We consider Einstein-Maxwell theory in the presence of a negative cosmological constant in a (d + 1)-dimensional AdS spacetime of size l. The first non-trivial terms in the derivative expansion of the effective action arise at the four-derivative level, and by appropriate field redefinitions we may choose a complete basis of dimension-independent operators: (2.1) Note that additional CP-odd terms can arise in specific dimensions, but will not contribute to the static, stationary spherically symmetric black holes that we are considering here. This basis parallels that of [20], which used the same set of dimensionless Wilson coefficients, but focused on the (4 + 1)-dimensional case. Depending on the origin of the AdS length scale l, one may expect these coefficients to be parametrically small, of the form c i ∼ (Λl) −2 , where Λ denotes the scale at which the EFT breaks down. In particular, this will be the case in order for the action (2.1) to be under perturbative control. We have also introduced the small bookkeeping parameter , which will allow us to keep track of which terms are first order in the c i coefficients.
A. The Zeroth Order Solution
At the two-derivative level, this action admits a family of AdS-Reissner-Nordström black holes parametrized by uncorrected mass m and charge q, Here r h is the outer horizon radius, and the parameter k = 0, ±1 specifies the horizon geometry, with k = 1 corresponding to the unit sphere. The constant Φ is chosen so that the A t component of the gauge field vanishes on the horizon, and represents the potential difference between the asymptotic boundary and the horizon.
Typically, we will consider lower case letters (m, q, ...) to be parameters in the theory, while upper case letters (M, Q, S, T, ...) will denote physical quantities that may or may not receive corrections. We will add a subscript zero (e.g. M 0 ) to denote the uncorrected contribution to quantities that do receive order c i corrections. The shifts, which are equal to the corrected quantities minus the uncorrected ones, will be denoted by the derivative.
However, we will sometimes use ∆ when it is convenient, with subscripts indicating quantities held fixed, for example, we have (2.3) Finally, in sections IV and V we will use dimensionless quantities (ν, ξ) for convenience.
B. The First Order Solution
We now turn to the first order solution in terms of the Wilson coefficients c i . We follow the procedure outlined in Ref. [21], but work in an AdS d+1 background. While general (d + 1)-dimensional results may be worked out analytically, we took a shortcut of working with explicit dimensions four through eight and then fitting the coefficients to extract results for arbitrary dimension. Since the four-derivative terms are built from tensors with eight indices and hence four metric contractions, the resulting expressions will scale at most as The coefficients are hence fully determined by working in five different dimensions.
Following [21], we start with the effective stress tensor, where corrections come from two sources. The first is from substituting in the corrected Maxwell field to the zeroth order electromagnetic stress tensor, and the second is from the explicit four-derivative corrections to the stress tensor evaluated on the zeroth order solution. The result of computing both of these contributions to the time-time component of the stress tensor is (2.4) The shift to the geometry may be obtained from the corrections to the stress tensor [21], (2.6) The time component of the metric can then be obtained using the relation [21] f (r) = (1 + γ(r))g(r), where γ(r) is defined by 3 drr T t t − T r r . (2.8) For our particular case we find: Finally, we have which we note is independent of the geometry parameter k, as was the case in [22]. 3 We note that the definition of γ implies that it is positive provided that the null energy condition holds.
C. Asymptotic Conditions and Conserved Quantities
The first order solution can be summarized as (2.12) The corrected metric functions, ∆g and γ(r), are given in (2.6) and (2. In order to make the correspondence between the parameters of the solution, m and q, and the physical mass and charge of the black hole more precise, consider the part of ∆g that is leading in r. We can see that there is a term that goes like c 1 r 2 l 2 that dominates over all other terms in the correction. Therefore, for large values of r, the solution takes the form (2.13) Our first observation is that the AdS radius gets modified because the Riemann-squared term is non-vanishing on the original uncorrected background. This suggests that we define an effective AdS radius This shift by λ is unavoidable when turning on the c 1 Wilson coefficient. However, in principle we still have a choice of whether we hold l or l eff fixed when turning on the fourderivative corrections.
In what follows, we always choose to keep l fixed. Then, since the effective AdS radius is shifted, the asymptotic form of the metric is necessarily modified as well. From a holographic point of view, this leads to a modification of the boundary metric This is generally undesirable, as we would like to compare thermodynamic quantities in a framework where we hold the boundary metric fixed while turning on the Wilson coefficients.
One way to avoid this shift in the boundary metric is to introduce a 'redshift' factor t =t/λ, (2.16) to compensate for the shift in l eff . In terms of the timet, the solution now takes the form (2.18) We now turn to the charge and mass of the solution measured with respect to the redshiftedt time. For the charge Q, we take the conserved Noether charge where F is the effective electric field The result is where ω d−1 is the volume of the unit S d−1 . The 1/16π factor arises from the prefactor in the action (2.1) where we have set Newton's constant G = 1.
Unlike in the asymptotically Minkowski case, some care needs to be taken in obtaining the mass of the black hole. With an eye towards holography, we choose to define the mass from the boundary stress tensor [23]. The standard approach to holographic renormalization involves the addition of appropriate local boundary counterterms so as to render the action finite. This was performed in [22] for R 2 -corrected bulk actions, and since only the c 1 R abcd R abcd term in (2.1) leads to an additional divergence, we can directly use the result of [22]. The result is where we have taken into account the scaling of the mass by the redshift factor λ. Substituting in λ from (2.14) then gives Note that we are taking the mass here to exclude the Casimir energy that is normally part of the boundary stress tensor. This will be important when comparing with the thermodynamic quantities extracted from the regulated on-shell action in section IV. Working in the setup of holographic renormalization ensures that the mass M and charge Q defined in (2.23) and (2.21), respectively, yield a consistent framework for black hole thermodynamics.
III. MASS, CHARGE, AND ENTROPY FROM THE GEOMETRY SHIFT
Given the first-order solution, we now consider shifts to the mass, ∆M , and entropy, ∆S, of the black hole induced by the four-derivative corrections. In these computations it is important to keep in mind what is being held fixed as we turn on the Wilson coefficients c i .
The main parameters we consider here are the mass M and charge Q, which are related to the two parameters, m and q, of the solution by (2.23) and (2.21), respectively. In addition we consider the thermodynamic quantities T (temperature) and S (entropy), although they are not all independent. Note that we always consider the AdS radius l to be fixed, although interesting results have been obtained by mapping it to thermodynamic pressure.
Singly-charged, non-rotating black holes may be described by any two of mass M , charge Q and the horizon radius r h . Of course, any number of other parameters may be used as well, such as the temperature T or an extremality parameter, such as was used in [7]. If we further impose the extremality condition T = 0 on the solution, then only a single parameter is needed. Clearly this is only true for non-rotating black holes with a single gauge field, as more general solutions may have additional charges or angular momenta.
Here we mainly focus on the effect of higher-derivative corrections on extremal or near extremal black holes. In particular, we consider the extremality shift ∆(M/Q) and the entropy shift ∆S. However, it is important to keep in mind what is being held fixed when we turn on the higher-derivative corrections, as the results will depend on this choice. For example, we will see below that the shift to M/Q depends on whether the mass, charge or horizon radius is held fixed when comparing the corrected with uncorrected quantities.
A. Mass, Charge, and Extremality
Recall that, in our first-order solution, the geometry is essentially given by the radial function where ∆g denotes the contributions of the higher-derivative corrections to the geometry, and is a small parameter we use to keep track of where O(c i ) corrections come in. Using the fact that both g(r h ) and g (r h ) vanish at extremality, we may express the extremal mass and charge as a function of the horizon radius, ∆g ,
Extremality at Leading Order
Before discussing the extremality and entropy shifts, we consider the leading order relations between M 0 , Q 0 and (r h ) 0 for extremal black holes. We will repress the 0 subscripts in this subsection, but we mean the uncorrected quantities. Setting = 0 in (3.2) immediately gives the relations In principle, we can eliminate r h from these equations to obtain the relation between mass and charge for extremal AdS black holes. However, for general dimension d, there is no simple expression that directly encodes this relation. Nevertheless, we can consider the limit of small and large black holes.
For small black holes (r h l), we take k = 1 (ie a spherical horizon) and find In fact, this is precisely the scaling behavior expected based on the relationship between minimal scaling dimension and charge for boundary operators with large global charges [24].
Mass Shift at Fixed Charge
Now we consider the effect of four-derivative corrections. If we hold the charge fixed, then the shift to extremality is entirely due to the change in the mass. This may computed from the expression (3.2) for the mass by taking a derivative with respect to , which parametrizes the higher-derivative corrections, leading to where we have taken into account the fact that when the charge is fixed, we must allow the horizon radius r h to vary with . To compute the shift ∂r h /∂ , we use the fact that we are holding Q fixed. Then we use the expression for Q ext in (3.2) and demand that (∂Q/∂ ) T =0 = 0 to obtain an equation for ∂r h /∂ . This procedure leads to the rather simple Note that the dependence on ∆g has vanished. From the geometric point of view, this non-trivial cancellation is crucial for the extremality-entropy relation to hold.
Charge Shift at Fixed Mass
If we instead hold the mass fixed, the entire shift in the extremality is due to the shift in charge. Following the same procedure as in the fixed charge case, but this time demanding ∂M ext /∂ = 0, we find the relation: Here we also find a cancellation of all ∆g terms. Moreover, this shift is proportional to the mass shift at fixed charge This relationship more clear when we write this as the shift of Q rather than Q 2 . Using So we see that the mass shift is related to the charge shift times the potential. In Appendix A, we derive this statement for a general thermodynamic system and show that it holds for any extensive charge and its conjugate.
One physical consequence of this fact is that the entropy-extremality relationship (with a different proportionality factor) will hold regardless of whether the mass or charge is held fixed. As far as we know, this has not been noticed before in the literature.
Summary of Extremality Shifts
The shifts to extremality may be obtained from these mass and charge shifts. For completeness, we also present calculation at fixed horizon radius, as this extremality shift has previously been considered in the literature as well [20,22], Note that, in (3.12), the horizon radius r h may be taken to be the uncorrected radius, and can be obtained from either M or Q using the leading order expressions (3.3). In (3.13), the leading order expression for r h should be used. Finally, note that ∆g depends on the parameters m and q as well as the radius r. The m and q parameters are directly obtained from M and Q using (2.23) and (2.21), and again the leading order horizon radius can be used in ∆g.
B. Wald Entropy
We now compare the shift in mass at fixed charge and temperature to the shift in entropy at fixed mass and charge. The entropy for black holes in higher-derivative theories is given by the Wald entropy [25]: For spherically symmetric backgrounds, the integral over the horizon Σ gives a factor of the area A. The two-derivative contribution to the entropy is simply S (2) = A/4, while the four-derivative terms yield The total entropy is the sum of these terms, where we once again introduced to parametrize the expansion. Here the horizon area is where r h is the corrected horizon radius. On the other hand, the R trtr and F tr F tr terms need only be computed on the zeroth-order background, (3.17) It does not matter whether we use the corrected or uncorrected quantities here because they already show up in a term that is order . Note also that, while the expression for the Wald entropy (3.16) is given in terms of M , Q and r h of the fully corrected solution, only two of these quantities are independent.
We now examine the entropy shift for a given solution at fixed mass M and charge Q.
For the moment, we work at arbitrary M and Q, and not necessarily at extremality. The general expression for the entropy shift is then where the first term was obtained by Here it is important to note that the horizon radius r h receives a correction when working at fixed M and Q. If, on the other hand, we were to keep the horizon radius fixed (as is done in [20]), we would find only the second (interaction) term in (3.18), and the entropy shift would be independent of c 3 and c 4 .
To compute ∂r h /∂ , we start with the horizon condition g(r h ) = 0 where g(r) is given by (3.1) with m and q rewritten in terms of M and Q. Taking a derivative and solving for .
where (M ext ) 0 is the leading order extremal mass given in (3.3). As we can see, this expression diverges if the leading order solution is extremal. This is in fact not a surprise, as leading order extremality implies a double root at the horizon. The higher order corrections will lift this double root and hence cannot be parametrized as a linear shift in .
In order to avoid the divergence, we can instead consider a leading order solution taken slightly away from extremality. As long as we are sufficiently close to extremality, the first term in (3.18) will dominate the entropy shift. Noting further that, at extremality, the numerator of (3.20) becomes proportional to the mass shift (3.7) at fixed charge, we can rewrite (3.18) as The deviation away from extremality can be written in terms of the leading order temperature, The total shift to the entropy is then given by Finally, as T 0 → 0 we reproduce the relation [7,12] ∂M ∂ Note that this relation was obtained using only the general feature that the corrected geometry may be written in terms of a shift ∆g to the radial function g(r). In particular, we never had to use the explicit form of ∆g given in (2.6).
C. Explicit Results for the Entropy Shifts
In order to compare with the next section, we include some explicit results for the mass shifts. In section V, we will see what constraints may be placed on the EFT coefficients by imposing that entropy shift is positive. We'll use the mass shift here, to remove the factor of T 0 . The entropy shift is positive when the mass shift at constant charge is negative. It is easy to see that the shifts here are positive when all the coefficients are positive.
IV. THERMODYNAMICS FROM THE ON-SHELL EUCLIDEAN ACTION
The ultimate goal of this paper is to determine the leading higher-derivative corrections to relations between certain global properties of black hole solutions. These relations are of a thermodynamic nature, and arise by taking various derivatives of the free-energy corresponding to the appropriate ensemble. As is well-known [26], the classical free-energy of a black hole can be calculated using the saddle-point approximation of the Euclidean path integral with appropriate boundary conditions. In the Gibbs or grand canonical ensemble, the appropriate quantity is the Gibbs free-energy, which may be calculated from the on-shell Euclidean action where β = T −1 , and g E µν (T, Φ) and A E µ (T, Φ) are Euclideanized solutions to the classical equations of motion with temperature T and potential Φ. Similarly in the canonical ensemble the corresponding quantity is the Helmholtz free-energy, given by where g E µν (T, Q) and A E µ (T, Q) are Euclideanized solutions with temperature T and electric charge Q. In both expressions, I E is the renormalized Euclidean on-shell action.
The Euclidean action with cosmological constant is IR divergent when evaluated on a solution. However, it may be given a satisfactory finite definition by first regularizing the integral with a radial cutoff R. To render the variation principle well-defined on a spacetime with boundary we must add an appropriate set of Gibbons-Hawking-York (GHY) [27,28] (in the case of the canonical ensemble, also Hawking-Ross [29]) terms in addition to a set of boundary counterterms. The complete on-shell action then consists of three contributions If the counterterms are chosen correctly, they will cancel the divergence of the bulk and Gibbons-Hawking-York terms, rendering the results finite as R → ∞. In AdS there is a systematic approach to generating such counterterms via the method of holographic renormalization [23,30,31]; since the logic of this approach is well-described in detail elsewhere (see e.g. [32]) we will not review it further, but simply make use of known results. Explicit expressions for the needed GHY and counterterms (including the four-derivative corrections used in this paper) valid in AdS d , d = 4, 5, 6 can be found in [22,33].
Once the free-energy is calculated, the remaining thermodynamic quantities can be determined straightforwardly by using the definitions of the free-energies and the first-law of black hole thermodynamics The expressions calculated using these Euclidean methods should agree with the Lorentzian or geometric calculations in the previous section. Note, however, that there is a bit of a subtlety with the notion of black hole mass here, as the thermodynamic relations are for the energy E of the system. In holographic renormalization, there is always an ambiguity in the addition of finite counterterms that shift the value of the on-shell action. The standard approach is to fix the ambiguity by demanding that even-dimensional global AdS has zero vacuum energy while odd-dimensional global AdS has non-zero vacuum energy that is interpreted as a Casimir energy in the dual field theory. In this case the thermodynamic energy is the sum of the black hole mass and the Casimir energy 5) and the mass M of the black hole is only obtained after subtracting out the Casimir energy contribution, as we did in section II.
The purpose of introducing this alternative approach is not just to give a cross-check on the results of the previous section, but also to verify a recent general claim by Reall and Santos [17]. In this paper, the O( ) corrections we are considering can be calculated by first evaluating the free-energy or on-shell action at the same order. Naively, this would require evaluating three contributions where (2) and (4) denote two and four derivative terms in the action and their corresponding perturbative contributions to the solution. The central claim in [17] is that the first term at O( ) is actually zero, and that therefore we do not need to explicitly calculate the O( ) corrections to the equations of motion. For black hole solutions of the type considered in this paper, we can evaluate the leading corrections without much difficulty, but for more general situations with less symmetry this may not be possible. In such a case the Euclidean method is more powerful, as has recently been demonstrated with calculation of corrections involving angular momentum [34] or dilaton couplings [35].
Although the result of [17] was demonstrated in the grand canonical ensemble, it is straightforward to see that it implies an identical claim about the leading corrections in the canonical ensemble. While the quantities of interest can be extracted from either, the explicit expressions encountered in the latter are usually far simpler and therefore more convenient. Recall that we can change ensemble by a Legendre transform of the free-energy where the right-hand-side is defined in terms of the implicit inverse function Φ(Q). At fixed T and Q, the potential Φ receives corrections from the higher-derivative interactions, and so, expanding the right-hand-side to O( ), we have Recognizing that we see that the leading correction to the Helmholtz free energy is simply given by In terms of the on-shell Euclidean action, using the result of Reall and Santos, this is then equivalent to where here I E denotes the contribution of the four-derivative terms to the renormalized onshell action. Note that this includes potential four-derivative Gibbons-Hawking-York terms, but as this argument makes clear, will not include any additional Hawking-Ross terms.
This expression is the analogue of the Reall-Santos result, but in the canonical ensemble.
It says that the leading correction to the Helmholtz free-energy is given by evaluating the four-derivative part of the renormalized on-shell action on a solution to the two-derivative equations of motion with temperature T and charge Q.
Below we will give a brief review of the well-known thermodynamic relations at twoderivative order, and then using the above result we will calculate the leading corrections and verify explicitly that they agree with the results of the previous section.
A. Two-Derivative Thermodynamics As described above, the regularized on-shell action has a bulk as well as various boundary contributions. At two-derivative order and in d-dimensions these have the explicit form where h ab and R ab are the metric and Ricci tensor of the induced geometry on the boundary at r = R. Note that in I CT we have included the minimal set of counterterms necessary to cancel the IR divergence in d = 3 and d = 4. For d > 4, additional counterterms beginning at quadratic order in the boundary Riemann tensor are necessary to cancel further divergences.
The regularized bulk action has a well-defined variational principle provided that δA a = 0 at r = R. This amounts to holding Φ fixed, and thus it corresponds to boundary conditions compatible with the grand canonical ensemble. For many applications, we will want to hold the charge fixed. From a thermodynamic point of view, we want to use the extensive quantity Q instead of the intensive Φ, so we must compute the Helmholtz free energy instead of the Gibbs free energy. Holding Q fixed requires different boundary conditions, and in particular the further addition of a Hawking-Ross boundary term [29] I (2) where n µ is the normal vector on the boundary and A a is the pull-back of the gauge potential.
To summarize, the total two-derivative on-shell action bulk + I GHY + I HR + I CT , (4.14) evaluated on the Euclideanized solution to the two-derivative equations of motion is equal to βF (2) (T, Q), where F (2) is the two-derivative contribution to the Helmholtz freeenergy. In the above we have introduced the dimensionless variable ν ≡ (r h ) 0 /l, where (r h ) 0 is the location of the outer-horizon of the two-derivative solution with temperature T and charge Q. Note also that here, and for the remainder of this section, we will consider only spherical k = 1 black holes. Since ν satisfies f (ν) = 0, we can solve for the parameter m as (4. 16) In the Euclidean approach to calculating the leading corrections to the thermodynamics, it will prove natural to continue to use ν and q to parametrize the space of black hole solutions, even when the four-derivative corrections are included. This means that it is also natural to write all thermodynamic quantities in these variables, which requires the use of standard thermodynamic derivative identities to rewrite derivatives. Recall that the parameter q and the physical charge Q are not the same, but are related by an overall constant given in (2.21).
Therefore holding Q fixed is the same as holding q fixed. Explicitly, the two-derivative free-energy calculated in this way in AdS 4 is given by , (4.17) and in AdS 5 by Once the free-energy is calculated, the entropy and energy are given by In terms of our natural variables, we can reexpress the entropy as where the temperature is given by Note that this expression is exact, meaning it does not receive corrections when we include the four-derivative interactions. It is therefore useful to introduce the function such that taking the limit q 2 → q 2 ext (ν) is equivalent to taking the extremal limit T → 0. Physically, it is useful work in a scheme in which the energy E coincides with the mass M of the black hole, without a Casimir contribution. In such a zero Casimir scheme, the energy of pure AdS 5 is defined to be zero. Calculating the free-energy from the on-shell action of pure AdS 5 with generically parametrized four-derivative counterterms we find that this scheme requires the following modification from the minimal subtraction counterterms The free energy calculated with this modified on-shell action agrees exactly with the expectation using (2.23). Note that the entropy, since it is given by a derivative of the free-energy, is independent of the choice of scheme. The zero Casimir scheme is a physically motivated choice, but certainly not unique.
B. Four-Derivative Corrections to Thermodynamics
To evaluate the four-derivative corrections we make use of the result (4.11). As in the two-derivative contribution, the on-shell action is properly defined by a regularization and renormalization procedure. For the operators in (2.1) with Wilson coefficients c 2 , c 3 and c 4 the required I bulk contribution is actually finite, while for the term in (2.1) proportional to c 1 , we must again regularize and renormalize by adding infinite boundary counterterms. The required explicit expressions, as well as the complete set of four-derivative GHY terms, can be found in [22,33]. The calculation is otherwise identical to the two-derivative contribution described above, and in AdS 4 we find (4.24) The complete free-energy, up to O( 2 ) contributions, is then given by From this explicit expression we can then calculate the entropy and mass (which coincides with the thermal energy) Taking the extremal limit we find the following expression for the mass shift Similarly we can calculate the shift in the microcanonical entropy, which will be important in the subsequent section for analyzing conjectured bounds on the Wilson coefficients. The actual expression is given in (5.10), and can be calculated straightforwardly using standard thermodynamic derivative identities (4.29) The calculation for AdS 5 is similar, but in this case we have to be cautious about the Casimir energy. We calculate the free-energy in the physically motivated zero Casimir scheme. To do so, we again fix the finite counterterms by evaluating the four-derivative on-shell action on pure AdS 5 . Requiring the Casimir energy to vanish requires the following modification from the minimal subtraction counterterm action Using this we calculate the four-derivative contribution to the renormalized free-energy We also obtain the entropy and mass The extremal mass shift is given by (4.29), the explicit expression is given in (A2).
V. CONSTRAINTS FROM POSITIVITY OF THE ENTROPY SHIFT
Having derived the general entropy shift at fixed mass, we may now determine what constraints on the EFT coefficients are implied by the assumption that it is positive. Recall that the argument of [7] for the positivity of the entropy shift assumes the existence of a number of quantum fields φ with mass m φ , heavy enough so that they can be safely integrated out.
In particular, such fields are assumed to couple to the graviton and photon in such a way that, after being integrated out, they generate at tree-level the higher-dimension operators we are considering (with the corresponding operator coefficients scaling as c i ∼ 1/m φ ). This assumption is essential to the proof; it may be that the entropy shift is universally positive (see [34] for a number of examples), but proving such a statement for non-tree-level completions would require a different argument from the one laid out here.
We revisit the logic of [7] in the context of flat space, before discussing how it may be extended to AdS asymptotics, and denote the Euclidean on-shell action of the theory that includes the heavy scalars φ by I UV [g, A, φ]. First, note that when the scalars are set to zero and are non-dynamical, the action reduces to that of the pure Einstein-Maxwell theory, This is a statement relating the value of the functionals I UV and I (2) (the two-derivative action) when we pick particular configurations for the fields. These fields may or may not be solutions to the equations of motion. Next, consider the corrected action, I C = I (2) + I (4) , and note that it obeys as those of the UV theory. One then finds the following inequality, In general, different theories will have different relationships between mass, charge, and temperature. We are interested in the entropy shift at fixed mass and charge. Therefore we must compare the two action functionals at different temperatures. For simplicity, we use T 4 /T 2 for the temperature that corresponds to mass M and charge Q for the theory with/without higher-derivative corrections, respectively. Then we have: ∆S > 0, (5.4) at fixed M and Q (and in the zero Casimir energy scheme). Now that we have outlined the argument in flat space, we can ask whether it can be immediately extended to AdS. One subtle point in the derivation outlined above is that the free-energy is only finite after the subtraction of the free-energy of a reference background.
In the flat space context, the contributions of such terms to the two actions are identical because the asymptotic charges are the same. Thus, this issue does not affect the validity of the argument.
In AdS, the story is a little different-the free-energy is computed using holographic renormalization. Different counterterms are required to render the two-derivative action I (2) and the corrected action I C finite. Moreover, I U V may also require a different set of counterterms involving contributions from the scalar, and unlike the bulk contribution, there is no reason to expect that their on-shell values are less than their off-shell values. This is a potential hole in the positivity argument in AdS. Apart from this issue, the rest of the argument can be immediately applied to AdS.
A. Thermodynamic Stability
As we've seen, the above proof requires that the uncorrected backgrounds are minima of the action. Thermodynamically, this amounts to the condition that the black holes are stable under thermal and electrical fluctuations. This translates to the following requirements on the free-energies, These conditions may be rewritten in terms of the specific heat and permittivity of the black hole, which can be used to determine, respectively, the thermal stability and electrical stability of the black hole [36,37]. We'll ignore the specific heat at constant Φ now, as we are interested in the stability in the canonical ensemble, and consider Positivity of the specific heat is equivalent to the statement that larger black holes should heat up and radiate more, while smaller ones should become colder and radiate less. When the quantity T is negative the black hole is unstable to electrical fluctuations, meaning that when more charge is placed into it, its chemical potential decreases. We expect that it should instead increase, to make it more difficult to move a charge from outside to inside the black hole -thus making it harder to move away from equilibrium [37]. We may compute these quantities using the results of the previous section. For AdS 4 , we find where we recall that ν = r h /l and Q = (1 − ξ)Q ext . These results have been obtained previously e.g. in [38]. We find that both of these coefficients are positive when either holds, or when is satisfied.
Thus, for small black holes stability requires that the extremality parameter be less than some function of the radius, ξ < ξ * . In particular, extremal black holes, for which ξ → 0, are stable while neutral black holes, which correspond to ξ → 1, are not. The implication of (5.9) is that above a certain radius (r h > l/ √ 3) all black holes are thermodynamically stable. This behavior is visible from Fig. 1, where we have plotted the allowed parameter space based on the C Q and T conditions separately. This raises an interesting point in making contact with the flat space limit: if we require both parameters to be positive, there are no stable black holes at ν = 0. Note that in [7] only C Q was considered. However, in applications involving AdS/CFT, we believe that both the specific heat and electrical permittivity should be taken into account.
Here we have only considered the leading-order stability. The higher-derivative corrections will shift the point where the specific heat crosses from positive to negative. However, in proving the extremality-entropy relation, we are only interested in the extremal limit, which is not affected by this consideration. In principal we could compute the order shifts to the stability conditions to obtain small corrections to the entropy bounds.
B. Constraints on the EFT Coefficients
The entropy shift in AdS 4 for a black hole with an arbitrary size and charge takes the following form, where the temperature is given by the expression We can see from the ξ dependence of the latter that in the ξ → 0 limit the shift to the entropy blows up. If we examine the leading part in 1/ξ, we find that it is proportional to the mass shifts we have computed above. Thus, in the extremal limit we have ∂S ∂ ξ→0 = l 2 5r h T 4c 1 (1 + 3ν 2 ) 2 + 2c 2 (1 + 3ν 2 )(1 + 18ν 2 ) + 8(2c 3 + c4)(1 + 3ν 2 ) 2 .
(5.11)
It is also interesting to note that in the chargeless limit ξ → 1 the dependence of (5.10) on c 2 , c 3 and c 4 drops out entirely, and we are left with an entropy shift of the simple form Our results above show that the large black holes are stable in the chargeless limit, which implies that the c 1 coefficient must be positive.
In Fig. 2, we have graphed the constraints on the coefficients that arise from demanding that the entropy shift is positive. We have included both the constraints from the extremal entropy shift and from considering the shift of all stable black holes. Considering only extremal black holes may be interesting because it is equivalent to the condition that the extremality shift, ∆(M/Q), is negative. Thus we may look at the constraints implied by positive entropy shift and by negative extremality shift independently. Note that we have divided by c 1 , which we have already proven to be positive. We may write out the all the constraints obtained: We have computed the corresponding bounds for AdS 5 through AdS 7 . The results may be found in Appendix B. We would, however, like to comment on AdS 5 , where the positivity of the coefficient of the Riemann-squared term is of particular interest. The stability analysis yields results that are qualitatively similar to (5.8) and (5.9), but with the following (5.14) Once again, we see that large black holes are stable for all values of the charge.
When we examine the entropy shift in the neutral limit, we find Thus, a positive entropy shift for stable black holes implies that c 1 is positive. In fact, a positive value of c 1 was the necessary ingredient in [4] for obtaining the violation of the KSS bound 4 . It is also interesting to note that in d > 3, this sign constraint was shown to follow from the assumption of a unitary tree-level UV completion [39]. The entropy constraints given in this paper are then strictly stronger since they also apply in d = 3.
In closing, we stress that we are not claiming that the entropy shift should be universally positive; the proof outlined above only applies when the higher-derivative corrections are generated by integrating out massive fields at tree-level (and relies on assuming that the corresponding solutions minimize the effective action). However, it is interesting that the conjecture that the entropy shift is universally positive appears to suggest that violations of the KSS bound are required to occur. Our results extend and make more precise the earlier claim by some of us [22] of a link between the WGC and the violation of the KSS bound.
We will come back to this point in the discussion section. 4 We have checked the calculation with a different basis, choosing to use Gauss-Bonnet instead of Riemann squared. As expected, we find that the coefficient of the Gauss-Bonnet term is positive.
C. Flat Space Limit
As we've pointed out above, we can not compare the results we have given above to the flat space limit. This is because if we impose both C Q > 0 and T > 0, we find that there are no stable black holes in the flat space limit ν → 0 (as suggested by figure 1). In AdS/CFT, we expect that both conditions are necessary to ensure thermodynamic stability; nonethless, we may remove the condition T > 0 in order to compare with the flat space limit. In this case, we find that stability requires for the AdS 4 black holes, and for the AdS 5 black holes. This allows for a more direct comparison between the two cases.
In figure 3, we contrast the bounds obtained in AdS and flat space. The bounds in AdS are stronger, as they should be given that there is an extra parameter's worth of stable black holes. Note also that c 1 > 0 is implied by positivity in AdS, but not in flat space, because in flat space there are no stable neutral black holes.
VI. DISCUSSION
In this paper, we have examined the relationship between the higher-derivative corrections to entropy and extremality in Anti-de Sitter space. As we have seen, extremality is considerably more complicated in AdS because the relationship between mass, charge, and horizon radius at extremality is non-linear. Nonetheless, we have verified the relation [7,12] between the entropy shift at fixed charge and mass and the extremality shift at fixed charge and temperature. There is a sharp dependence on which quantities are held fixed in AdS. This is in contrast to flat space, where the linear relationship between mass, charge, and horizon radius removes this issue. We have also provided a more general proof of this relation in Appendix B, and extended the result to show that there is a third proportional quantity, which is the extremality shift at fixed mass and temperature.
When viewed geometrically, these statements seem almost accidental. In section IV, we performed the same calculation from a thermodynamic point of view by computing the free energy from the renormalized on-shell action. From this point of view, issues concerning "which quantity is held fixed" translate to "which ensemble is used." In addition to providing an additional check on the results from section III, this provides a non-trivial confirmation of the results of [17], which states that the shifted geometry is not needed to compute the thermodynamic quantities.
Assuming that the entropy shift is positive places constraints on the Wilson coefficients.
However, a crucial difference appears in AdS when compared to flat space. The stability criterion depends on the horizon radius over the AdS length, and goes to zero at large horizon radius. This means that there are stable neutral black holes that are asymptotically AdS. For neutral black holes, the entropy shift is dominated by c 1 , which is the coefficient of the Riemann squared term, so the positivity of the entropy shift implies the positivity of this coefficient. In AdS 5 , this coefficient may be related to the central charges of the dual field theory [30,40,41] by Thus, the positivity of the entropy shift appears to be violated in theories where c − a < 0.
In [42], a number of superconformal field theories were examined, and all were found to satisfy c − a > 0. It is worth noting there are non-interacting theories where c − a < 0; for example, a c = 31 18 for a free theory of only vector fields [43]. However, such theories do not have weakly curved gravity duals. If there are any bulk theories where c 1 < 0, we are not aware of them. The question of whether holographic theories necessarily correspond to c − a non-negative is interesting for a number of reasons -both from a fundamental point of view and for phenomenological applications.
In particular, recall that the range of the Wilson coefficients and the sign of c−a played an important role in the physics of the shear viscosity to entropy ratio η/s and how it deviates from its universal 1/4π result [44,45], as discussed extensively in the literature (see [46] for a review of the status of the shear viscosity to entropy bound). Indeed, it is interesting to compare our results to the higher-derivative corrections to η/s, which (for the AdS 5 case of interest to us here) were shown [20] to be given by where r 0 is a parameter of the solution defined in [20]; the factor q 2 /r 6 0 goes from 0 (for neutral black holes) to 2 (at extremality). Our bounds on c 1 imply that neutral black holes will necessarily have a negative viscosity shift, violating the KSS bound. Models where this is realized are known to exist-the first UV complete counter-example to the KSS bound was given in [21]. For extremal black holes, the dependence on c 1 drops out and only the sign of c 2 matters, η/s = 1 4π (1 + 8c 2 ). For AdS 5 , the c 2 coefficient may have both positive and negative values. However, imposing the null energy condition implies an additional constraint on the range of c 2 , which in AdS 5 takes the form 13 12 This may be seen by first noticing that the definition of the parameter γ in equation (2.8) implies γ > 0 as long as the null energy condition holds. Then the bound in (6.3) may be derived from the specific form of γ given in (2.9). This alone is sufficient to bound c 2 from below, when c 1 is non-negative. Thus, one can see that utilizing such constraints it is at least in principle possible to bound η/s from below, in specific cases. To what extent this can be done generically is still an open question.
It might be interesting to try to relate the extremality bounds to the transport coefficients of the boundary theory in a more concrete way. As the corrections to η/s depend only on c 1 and c 2 in five dimensions, it is clear that the shift to extremality is not captured by the physics that controls η/s alone. One might wonder, however, if some other linear combination of transport coefficients, such as the conductivity or susceptibility 5 , might be related to the extremality shift. From a purely CFT point of view, this is certainly not that strange; the philosophy of conformal hydrodynamics is that scaling symmetry ties together ultraviolet quantities (a, c) that characterize the CFT to the transport coefficients, which characterize the IR, long-wavelength behavior of the theory. If we believe that EFT coefficients in the bulk are related to these UV quantities (as is known in the case of c 1 ), then a correspondence between higher-derivatives and hydrodynamics is very natural. The question is to what extent this can be used to efficiently constrain IR quantities. Finally, we should note that extending our analysis to holographic theories that couple gravity to scalars would be useful to make contact with the efforts to generate non-trivial temperature dependence for η/s (see e.g. the discussion in [48,49]), which is expected to play a key role in understanding the dynamics of the strongly coupled quark gluon plasma.
Our results also have potential to make contact with the work on CFTs at large global charge [50]. As we've seen above, the extremality curve for AdS-Reissner-Nordström black holes is non-linear even at the two-derivative level. In an analysis of the minimum scaling dimension for highly charged 3D CFTs states of a given charge, it was found [24] that ∆ ∼ q 3/2 . This is in striking agreement with the extremality relationship m ∼ q 3/2 that holds for large black holes. The large charge OPE may be powerful because it offers an expansion parameter, 1/q, which may be used even for CFTs which are strongly coupled. In principle, it should be possible to match our higher-derivative corrections to the extremality bound with corrections to the minimum scaling dimension that are subleading in 1/q. This might allow one to use the large charge OPE to compute the EFT coefficients of the bulk dual of specific theories where the minimum scaling dimensions are known.
A. Weak Gravity Conjecture in AdS
One of the motivations for this work is to address to question of to what extent the WGC is constraining in Anti-de Sitter space. It is not obvious that it should be. In flat space, one looks for higher-derivative corrections to shift the extremality bound m(q) to have a slope that is greater than one. In that case, a single nearly extremal black holes is (kinematically) allowed to decay to two smaller black holes, which can fly apart off to infinity and decay further if they wish.
In AdS, the extremality bound m(q) has a slope that is greater than one at the twoderivative level. Therefore one might expect that large black holes are already able to decay without any new particles or higher-derivative corrections. This picture may be too naive, however; the AdS radius introduces a long range potential that is proportional to r 2 l 2 . This causes all massive states emitted from the black hole to fall back in, contrary to the situation in flat space.
A different decay path is provide by the dynamical instability [51][52][53][54], whereby charged black branes are unstable to formation of a scalar condensate. This occurs only if the theory also has a scalar with charge q and dimension ∆ that satisfies Note that, even in the limit of large AdS-radius l, this does not approach the bound we have for small black holes, which is m ≤ q. Numerical work in [52] suggests that the endpoint of the instability is a state where all the charge is carried by the scalar condensate. Similar requirements appear for the superradiant instability of small black holes [55,56]. For a more thorough review, see [13]. In either case, it is curious that in AdS, a condition similar to the flat space WGC allows for black holes to decay through an entirely different mechanism.
Another remarkable hint of the WGC comes from its connection to cosmic censorship. In [57,58], it is shown that a class of solutions of Einstein-Maxwell theory in AdS 4 that appear to violate cosmic censorship [59] are removed if the theory is modified to include a scalar whose charge is great enough to satisfy the weak gravity bound 6 .
It may be possible to study these solutions in the presence of higher-derivative corrections. 6 The bound they consider is the bound for superradiance of small black holes, which requires ∆ ≤ ql.
One might ask whether there is a choice of higher-derivative terms such that the singular solutions are removed. It would be interesting to check if this occurs when the higherderivative terms are those that are obtained by integrating out a scalar of sufficient charge.
It would also be interesting to compare constraints obtained by requiring cosmic censorship with constraints due to positivity of the entropy shift.
A more general proof of the WGC in AdS was given in [16]. In that paper, it was shown that, under mild assumptions, entanglement entropy for the boundary dual of an extremal black brane should go like the surface area of the entangling subregion, which is in tension with the volume law scaling predicted by the Ryu-Takayanagi formula. The contradiction is removed when one introduces a WGC-satisfying state. This violates one of the assumptions that imply the area law for the entropy-that is, the assumption that correlations decay exponentially with distance.
This form of the WGC in particularly interesting to us because it makes no reference to whether or not the WGC-satisfying state is a particle, or a non-perturbative object like a black hole. Therefore, the contradiction pointed out in that paper may be lifted if the higher-derivative corrections allow for black holes with charge greater than mass. Heavy black holes in AdS have masses far greater than their charge-therefore we expect that the WGC-satisfying states might be provided by small black holes whose higher-derivative corrections shift the extremality bound to allow slightly more charge. We find the following expression for the extremal limit, while in the neutral limit we have Once again, the entropy shift is proportional to c 1 in this limit. It is interesting that we do not find a positivity constraint on c 2 , as we did in AdS 4 . There is a lower bound on c 3 /c 1 of about -0.5339. The general constraints obtained by the Reduce function of Mathematica are extremely complicated and probably of little interest.
(A8) Finally, in the neutral limit we find Note that no Casimir energy subtraction is needed in AdS 6 . We again find that c 1 is positive.
The other bounds are displayed in figure 5. In AdS 6 and AdS 7 , the Reduce function of Mathematica was not able to find the general constraints over all stable values of ξ and ν. However, we believe that the strongest constraints will come from the boundaries of the region of stable black holes. Specifically, we imposed positivity at the neutral ξ → 1 limit, the extremal ξ → 0 limit, the planar limit ν → ∞ limit, and at ξ = ξ * . We believe this method should give the same answer, and we have checked explicitly that it does in the case for AdS 4 and AdS 5 .
The Casimir energy that must be removed from the thermodynamic energy in AdS 7 is where ω 5 = π 3 .
Once again, c 1 is positive. The other bounds are displayed in figure 6. Again, we used the method of extremizing over the boundaries of the space of stable black holes.
Appendix B: Another Proof of the Entropy-Extremality Relation
Recent work [7,12] suggests a remarkable universal relationship between the corrections to extremality and corrections to entropy. Here we will present a simple derivation of this relation using standard thermodynamic identities, including a slight generalization of the relation away from extremality. The statement itself is not specific to black holes, and is in fact a relatively universal statement about infinitesimal deformations of thermodynamic systems.
Consider a thermodynamic system, let E be the total thermal energy, T the temperature, S the entropy and X collectively label a set of extensive thermodynamic variables (for black holes this could be the charge Q and spin J). Now consider a small deformation of this system parametrized by a continuous parameter . The only assumption we will make about this deformation is that it preserves the third law of thermodynamics in the form Recent work has considered the leading α -corrections to dyonic Reissner-Nordström black holes embedded in heterotic string theory [18,19]. Though the four-dimensional backgrounds considered in these papers are asymptotically flat, we would like to briefly comment on them in connection with the universal entropy-extremality relationship.
From the dimensionally reduced, effective four-dimensional solutions the authors calculated explicit expressions for the Wald entropy, and Hawking temperature, where we have defined µ = M 2 − P 2 2 . Here P denotes the charge of the black hole, and we have adopted the same small expansion parameter as earlier. From these results it is straightforward to verify the following relation up to errors of O( 2 ). Consequently, the parameter M corresponds to the thermal mass of the black hole. With these explicit expressions we can verify the entropy-extremality relation derived in [12]. The differential change in the mass at fixed temperature is given by a simple application of the triple product identity, . (B19) The differential change in the extremal mass is given by taking T → 0 + . This limit must be taken indirectly since the function (B17) is too complicated to be inverted directly. The zero temperature limit is then the same as taking M → M ext . Since (B19) is already a relation between two quantities at O(α ), we only require M → (M ext ) 0 , which is the leading-order extremality relationship The correction to the extremal mass is then found to be in agreement with [19]. To verify the entropy-extremality relation we also need the shift to the entropy at fixed charge and mass. Since the Wald entropy given above is already parametrized in terms of the thermal mass and charge, this is trivial to calculate, ∂S ∂ M,P = α π (M 2 + 18M µ + 21µ 2 ) 40µ (µ + M ) . (B22) According to [12], we should take the zero temperature limit of this expression multiplied by the uncorrected temperature. This is equivalent to taking M → (M ext ) 0 . Indeed, taking | 15,265.6 | 2019-12-24T00:00:00.000 | [
"Physics"
] |
ENTERPRISE RISK MANAGEMENT IN KOSOVO ’ S BANKING SECTOR
Today risk management plays a vital role in business. Each firm, whether big or small, makes an effort to manage risk more effectively. Risk management is very important in the financial system, especially in banks. Billions of Euros are spent each year on the financial reporting of banks. Banks should implement effective solutions in risk management to mitigate their risks. Great financial debate that originated in the 1990s is reportedly linked to errors that occurred in the banking sector due to poor risk management. It should be noted that today technology plays a key role in risk management and it has already had a positive effect on the financial industry. Analysis of risk and its management has become significant in the Kosovo economy since the post-war period. The nature of the banking business is threatened by risks because more financial products are becoming complicated. The main role of banks is intermediation between those who have resources and those seeking them. Banks face various risks at the corporate level, such as operational, liquidity, legal, credit, and market risks; thus, these risks should be converted into a composite measure. This research aims to determine practices and effects of risk management in the banking sector. Relevant data were collected from banks through questionnaires and telephone interviews; analysis has been conducted using statistical tools. This study will engage both the quantitative and qualitative methods of data analysis. Dependent variables will be separated from independent variables, and regression analysis will be used to analyse the quantitative data.
INTRODUCTION
The concept of risk includes various financial and managerial aspects.It is used to refer to different agents and different events.Whenever it is used, the word risk concerns related to financial issues or can be used as a technical term that is difficult to define precisely.Indeed, the word risk means the uncertainty that comes from different sources that impact directly or indirectly the nature of a business.When a business functions, it faces operational risk, for example, when British Petroleum decided to go to the Gulf of Mexico, the company did not anticipate that its routine exploration activity for oil would result in the biggest environmental catastrophe in the United States of America.This event also took some lives in an explosion and caused a great amount of oil having leaked into the ocean despite many attempts to stop it.
Financial systems worldwide have a fundamental effect on the growth and development of the economy, especially in mediating between units in surplus and in deficit.Effectiveness and efficiency in performing these roles depend on the level of development of the financial system.To ensure viability, the financial sector should be controlled by the government and its bodies.
The stage of development and efficiency of the financial system varies depending on the time and state.The more sophisticated and developed financial systems tend to be associated with mature economies, while less developed financial systems appear in emerging economies.The financial system, as a process, adapts to changes in the real economy.Moreover, as a part of this process, banks are affected by the performance of their three main functions: − To transform short-term deposits held by households into liquid resources generated by firms; − Monitor debtors towards depositors; − Facilitate transactions between agents providing free services.Recently, the banking sector is strengthening the rules and will set limits on the granting of loans.At the same time, banks are increasing their internal controls, especially in strengthening the management of credit risk.Bad loans negatively affect economic development and lead to deterioration in the efficiency of the banking system.
Nature of Risk
Genesis of the term risk comes from the Latin and Arabic, Risicum and risk, respectively (Kedar, 1970), and according to the definition the Oxford English Dictionary, risk is the possibility of financial loss.Thus, risk is the probability of changes in results.
According to the ARMIC, Alarm, IRM (2010), risk is an effect on uncertainty of objectives of an event.Edwards (2004) and Jorion and Khoury (1995) defined risk as unsustainable results.Pike and Neale (2003) provided a justification that results can be assessed for risk and assigned probabilities but not for probabilities and outcomes that may have uncertainty.According to Allayannis, G., U. Lei, and D. Miller. (2005), risk can be measured mathematically and statistically to find the standard deviation of the cash flow of a firm, although it can be measured in simple terms by assigning probability figures of less than 1 for the possibility of the occurrence of an event.
Risk brings unforeseen results (Edwards, 2004).Risk exists when we make a decision and are not sure about the outcome.Risk occurs in all existing businesses taking risky actions and being charged with managerial responsibilities to maximise their profit.Financial institutions face special types of risks because financial products are becoming more complex and, at the same time, financial institutions have closer links with risk management, given the challenges to changes in the financial markets because of innovations of products, high speed of transactions, new technologies and the increased level of regulatory requirements.The value of _____________________________________________________________________________ 2017 / 5 40 products and instruments in the financial world is unstable.Variables can move up or down and can cause loss or profit due to financial indices and variables in financial markets (Jorion & Khoury, 1995).Other risks can happen when we extend the award of loans by banking institutions (Jorion, 2009), which indicates that it is a risky business.
Risk Management by Financial Institutions
Banks and financial institutions face financial and non-financial risks (Jorion, 2009).Financial risks are losses that occur if there is an error in the financial markets due to movement in the exchange rate and loan interest (Jorion, 2009), whereas operational risks, legal risks, and so on are non-financial risks.
The role of the banking sector in each economy is very important because it provides financial resources for economic development from those who save money to those who require financial resources.This role becomes even more important in emerging economies, such as Kosovo, where borrowers have limited access to capital markets.Barth et al. (2004) stated that when institutions work, economic growth occurs, while when banks malfunction, this delays economic growth and intensifies poverty.
The core business of banks is to attract funding and invest these resources.Banks must manage risk to maintain their boundaries and fulfil their role in the economy.When banks take extreme risks, they may soon fail and go bankrupt.Risk is the probability of a negative uncertain event (Van Gestel & Baesens, 2008).Banking risk is associated with the potential loss of financial products (Bessis, 2003) that deal with different risk factors that must be understood, identified, measured, and managed.
The Basel Committee on Banking Supervision (2006) has identified four main sources of risk with regard to risk management.− Credit risk; − Market risk; − Liquidity risk; − Operational risk.
In credit risk, borrowers face uncertainty that exists in the possibility of failing to meet the obligations that have been agreed upon with the bank (Fatemi & Fooladi, 2006).For most banks, this is the main source of risk.On the other hand, if the bank credit is not sufficient, it may result in substantial loss of profit (Glantz, 2003).For many banks, lending to people and organisations is a significant risk that is noticeable in the risk category.Banks face also other types of risks including acceptance of international transactions, trade financing, foreign exchange transactions, bonds, and equity options.Management associated with credit risk requires thorough analysis to arrive at a solution to this problem.Exposure to credit risk is a danger, and management should learn from past experience and develop techniques to avoid this type of risk (Basel Committee on Banking Supervision, 2006).Thus, credit risk is the risk that a borrower is not able to fulfil the obligation to redeem the debt (Van Gestel & Baesens, 2009), which can be manifested in several ways: − When the borrower is unable to repay the obligation on time; and − When the borrower refuses to perform any obligation due to fraud or conflict of laws.Market risk is the risk that relates to financial products, instruments, and assets that are traded on financial markets (Edwards, 2004).These are exposed to market risk due to the movement of prices and the indices of these instruments.Change of these indicators constitutes a potential threat to the banking business.Banks that compete in the market with instruments such as debt, equity, money exchange, goods, and various derivatives are exposed to the risk of loss due to price change positions.In market risk, two types of risks exist: systematic and unsystematic risks.Systemic risks relate to the movement of prices of all products in general, and unsystematic risks focus on changing the price of a specific product.
There are several types of market risks; this diversity exists because of the difference in prices of the same products in different markets.The most common types of risk are interest rate risk, exchange rate risk, currency risk, and capital risk.It should be emphasised that the risk of interest rates is more important for banks and financial institutions that provide loans to their clients.
Liquidity Risk
Banks are obliged to fulfil their obligations.When liquidity becomes insufficient, liquidity risk occurs (Saiful, 2005).A bank faces liquidity risk when it is in financial difficulty and when it fails to finance its assets and perform its obligations.
Operational Risk
Operational risk is the risk that relates to the conduct of daily duties and tasks of the enterprise.All operational risks include internal operating procedures, control systems, information technology system, organisational structure, accounting systems, training and quality of staff members.Mismanagement of these can heavily affect the business and result in significant financial losses.
Operational risk is the risk of loss of a business from staff that fail due to not following these procedures (Edwards, 2004).These losses can arise from fraud, waste, errors, and inefficiencies in the banking system.
BANKING SYSTEM IN KOSOVO
After the war with Serbia in 1999, Kosovo implemented considerable economic transformations from a centralised economy into a market economy.At present, Kosovo has a new and dynamic economy.The banking sector in Kosovo is evaluated among the sectors with the best performance in the economy.Loans and deposits are growing, while the rate of financial services is being increasingly advanced.The Central Bank plays the leading role and has the authority to license, supervise, and regulate financial institutions in Kosovo.
Baltic Journal of Real Estate Economics and Construction Management
_____________________________________________________________________________ 2017 / 5 42 The banking sector in Kosovo consists of 10 commercial banks.These banks provide various services for their clients including loans, guarantees, current accounts, savings accounts, time deposits, transfers in the country and abroad as well as services for storing items of value (CBK Financial Supervision, 2015).
The banking sector, which represents the bulk of the financial sector, is characterised by an increase in financial intermediation (Ministry of Foreign Affairs, 2015).The main source of deposits continues to be households, dominated, in terms of maturity, by short-term deposits (Ministry of Foreign Affairs, 2015).
The Central Bank of Kosovo continues to be dedicated to ensuring financial stability in the country, which represents the main target of the law (Central Bank of the Republic of Kosovo, 2014).
The Central Bank of Kosovo, like all the banks of other countries, functions in accordance with the Basel II framework (Feiguine & Nikitina 2008) that was standardised from 1 January 2008.The Basel II framework determines the minimum level of capital requirement that is required to be created by banks to maintain the funds of depositors and investments in their value.As a form of legislation, Basel II directs banks to consider the risks that they face and develop capacity for their risk management.Banks are obliged to submit their annual accounts in accordance with International Financial Standards (IFRS) and accounting standards.
Organisational Structure of Risk Management through Kosovo Banks
Kosovo commercial banks have adopted the organisational structure of risk management by KPMG, as it better fits the business environment to mitigate potential risks.Risk management committees that exist in banks determine the policies of risk management and then make recommendations to the board of directors.As a result, strategic objectives of banks for risk management are set by the board of directors who determine the limits and suitable methods related to risk actions.A robust system for management is based on reporting of adequate processes for internal monitoring and includes appropriate procedures for granting and setting deadlines.Risk assessment, monitoring, and control functions are connected to each other to meet the objectives.
Challenges Banks Face Regarding Risk Management
With the development of technology, consumers are entering the market with rising expectations, and this has increased the efforts of banks and financial institutions to adapt to those changes.Economic globalisation is developing rapidly, giving space to national and international players operating in the financial area, especially in banking.If banks have not managed to improve their delivery mechanisms with the best customer service, they are exposed to environmental risk (Raghavan, 2003).
It is almost impossible to decrease risk without the help of key players in this industry, and this paper focuses on a company that has been chosen as an object of a case study and can help us understand the topic of risk management.In Table 1, the time challenges the bank faces are described regarding the adaption of ERM for business.
Table 1. Challenges and procedures of banking institutions in the adoption
of ERM are described by Vaidyula & Kavala (2011) as shown below: Improving efficiency Achieving greater efficiencies in risk and control processes, improving coordination, and unifying and streamlining approaches.
Challenging the regulatory environment
Ever-changing regulatory demands, high degree of regulatory scrutiny, variation of regulations across jurisdictions, preparing to operationalize, and compliance with Basel II.
Keeping pace with business growth and complexity
Rapid business growth, competitive intensity, M&A activity, global expansion, increasing product complexity, and increasing customer expectations.
Attracting and retaining talent
Shortage of good talent in competitive markets, especially in specialised areas or emerging geographies.
Managing change
Dealing with people and organisational issues as new processes demand new methods of work.
Fear of compliance failures and emerging risks
Fear of compliance failures despite best efforts due to human error or unanticipated events and identifying and preparing for future risks.
Baltic Journal of Real Estate Economics and Construction Management
_____________________________________________________________________________ 2017 / 5
44
The implementation of ERM is a great challenge, and all aspects should be supported and directed by senior managers.Risk management is a process that never ends, and the phases of risk management can be divided into coordination, alignment, and integration.Good implementation of the ERM provides us with better control of the business and helps reduce costs, also increases public and stakeholder confidence in banking actions (Vaidyula & Kavala, 2011).The phase of risk management process cannot be successful if high-level managers do not interfere because of the complexity of the ERM processes.
Procedures and Steps that Should be Included in the Management of Banking Risk
There are adequate procedures for effective risk management.The question raised is which techniques and guides are set for diverse types of risk and what is the effect of each.Should there be adequate procedures for effective risk management?What techniques are established methods for diverse types of risk, and what is the effect of each risk?Banks should conduct large-scale research on risk management and not spend a lot on the ERM system (Santomero, 1996).There are four steps to be involved in banking risk management: Implementation of standards and reports -For risk management in banks, the first step that should be included is the establishment of standards and financial reporting.These are connected to each other because they represent the backbone of any risk management system.Standards related to risk assessment and risk control enable understanding of the nature of the risk portfolios and enable the stakeholders to see what actions should be taken to reduce the risk.Financial reporting should be standardised because it presents the ranking of quality assessment so that investors take further steps for investment.
Rules, borders, and positions to be taken -The second method for monitoring internal control of risk management are rules, set positions and limits, and aspects that involve minimum standards.Thus, every person who is in a set position must have a clear understanding of limitations.This is important for lenders, traders, and portfolio managers who must operate within the rules and boundaries set by the company to avoid risk.
Investments and strategies -Third, strategies are described in the instructions regarding which conditions and commitments should be allocated to the market.This method also defines the guidelines of the above mentioned investment-related risks.
Simulation schemes -During risk management, contracts are given to managers to stimulate the control of expenditures and to manage the financial situation of the institution.These incentive contracts require accurate assessment of positions and internal control.
RESEARCH METHODOLOGY
Research is the core of practice for the detection of proven achievements, tested and trusted in response to questions of interpretation between the systematic collections and processing of data.Research methodology is a procedure in which the researcher conducts a research project or gathers information about research topics, irrespective of its nature (Turabian, 2013).
This research adapts the qualitative and quantitative methodology.Quantitative data are processed by statistical methods.Results are shown using numbers and tables (Emory, 1991).The qualitative method is a technique of data collection that is supported by the description and explanation of qualitative data through the use of concepts (Saunders, 2000).
The primary data used in this research are collected from the commercial banks in Kosovo through a questionnaire.The questionnaire in this survey consists of 15 questions; using this questionnaire, we will discover the significance of risk management in banks.The questionnaire includes closed and open questions rated on a Likert scale to examine to what extent participants agree or disagree with the given statements to avoid misunderstandings.This form of the questionnaire can easily be transferred into a numerical design, which is suitable for statistical analysis.
Mean and standard deviation -Mean describes the central data position.The statistical mean is a discrete random variable in all its conditions (Patton, 1980).The standard deviation is useful to compare datasets with the same meaning but in a different range (Ryan, 1999).The mean is used with the standard deviation and describes the central position of data, whereas the standard deviation is a distribution measure of probability of a variable case, a population, or a group.Based on the administrative risk results, the mean is 4.2 and the standard deviation is 1.7, which adds a special importance for services because the administrative risk is high.The analysis of risk liquidity shows that the average is 5 because banks have receivable assets and therefore face very high risk.Analysis of the lending section provides a mean of 4.5.Respondents stated that banks verify customers before lending money to them because it is one of the key points of banking risk.Analysis of the importance of risk management policies (mean is 4.3) shows that banks pay attention to risk management policies and, through these policies, have achieved success in preventing losses and increasing financial performance.The analysis of the importance of crediting policies shows a mean of 3.6, which indicates that there is policy enforcement because of increased level of bad loans in financial institutions.The analysis of internal control has a mean of 4.1, which shows high internal control starting from the administration all the way to the head office.The analysis of performance of risk management shows a mean of 2.4, which indicates that employees have not achieved the desired level of performance, whereas the analysis of the importance of training shows that banks do not provide regular training for their employees; they pay less attention to this issue.
Percentage of Respondents
The questionnaire was sent out to 40 managers and employees in commercial banks in Kosovo.Only 79% of them responded.This shows that banks have a high degree of interest in analysis during the lending process.In turn, this indicates that borrowers pay their obligations on time.On the other hand, 21% of the respondents indicated that bank tariffs are very high because of high interest rates.This makes those who borrow unable to pay their loans on time.
Regression Analysis
The earliest form of regression was the method of least squares, which was published by Legendre in 1805 (Legendre & Adrien-Marie 1805)) and Gauss in 1809 (Stigler, 1981) 47 moindres carrés.However, Gauss claimed that he had known about the method since 1795.Regression analysis is a technique for modelling and analysing numerical data containing a dependable variable (response variable) and one or more independent variables (explanatory variables).The dependent variable in the regression equation is modelled as a function of the independent variables, corresponding parameters (constants), and an error term.The error term is treated as a random variable (Fuller, 1987).It represents the unexplained variation in the dependent variable (Hamel & Dufur, 1993).The factors are evaluated to give the best fit of the data.Most usually, the best fit is analysed using the least squares method, but other criteria can also be used (Hamel & Dufur, 1993).Regression can also be used for prediction (including forecasting of time-series data), inference, hypothesis testing, and modelling of casual relationships.In linear regression, the model specification is that the dependent variable yi is a linear combination of the parameters (but need not be linear in the independent variables (Fisher, 1922) For example, in simple linear regression for modelling N data points, there is one independent variable: xi, and two parameters, β0 and β1: Based on the response, we see that the maximum loan that a financial institution or a person could take depends on the source of their revenues.Kosovo commercial banks, apart from individual and business loans, also provide mortgage loans and overdraft salary.Analysis of the credit limit shows regression on the risky assets of banks with the value of 1.723 using t-statistics.Analysis of the profitability of credit shows regression of −0.613.It did not show any relevance on the level of profitability.For the analysis of credit control, based on the results, the regression of 3.105 shows a higher degree of credit control and that risk management in banks is very important.
From the table, the regression of the question shows the effect of risk management.The effect of risk management is the dependent variable, while the net profit level, risk management and credit control, and the importance of risky assets are the independent variables.The reasons we use three questions is to test the effect of risk management on the performance of banks and financial institutions.Interviews were also done to verify the effect of risk management and that the staff that fill out the survey all have knowledge of the questions.The qualifications of the members of staff include PhD, masters, undergraduate, and diploma holders.
CONCLUSION
Risk management is a necessity for financial institutions to survive and thrive in the long term.Financial institutions should ascribe critical importance to a tight system of control when providing loans.Banks have a stable cultural system of risk management.During the study, issues that relate to credit risk management and internal control techniques were treated in practical as well as in conceptual terms.
From the presented and analysed data, risk management is one of the most important operational aspects of banking and financial institutions.It should be noted that the findings show that banks that have huge profits have an advanced risk management system in place.Based on this, they also gain public support and trust of clients as a result.Risk management at the banks covers the following activities: internal control, credit control, market control, and investment analysis.
The following recommendations for banks in Kosovo are made to improve banking efficiency: − First, banks need to be aware of assessments of loan requests and try to get as much information of value to assess the general effectiveness of the department.− Second, banks must have an efficient control mechanism in granting loans and must be equipped with reliable information about customers requesting loans.It should be noted that the banks do not need to make decision on granting the loan based on one or more pieces of client information that they provide, rather, the information should be gathered on a broader level to determine correctly the required decision.− Third, the staff should undergo training and should also attend seminars to keep up with the current global trends.− Last, banks should focus more on credit control, since it is the main aspect of banking risk management, and banks must continue to give it great consideration, as the main profits come from crediting activities.
Table 2 .
Mean and standard deviation [table made by authors]
Table 3 .
Questionnaires with regard to borrowing [table made by authors]
Analysis of internal control
-Analysis of the data from this table shows that 80 % of the respondents indicate that banks have internal control systems in place, which are fully in compliance with bank regulations.
Table 4 .
Analysis of internal and external control [table made by authors] . The term least squares comes from Legendre's vocabulary,
Table 5 .
Importance of risk in assets management [table and calculations made by authors] | 5,812.8 | 2017-11-27T00:00:00.000 | [
"Business",
"Economics"
] |
Surface Water Body Detection in Polarimetric SAR Data Using Contextual Complex Wishart Classification
Detection of surface water from satellite images is important for water management purposes like for mapping flood extents, inundation dynamics, and water resources distributions. In this research, we introduce a supervised contextual classification model to detect surface water bodies from polarimetric Synthetic Aperture Radar (SAR) data. A complex Wishart Markov Random Field (WMRF) combines Markov Random Fields with the complex Wishart distribution. It is applied on Single Look Complex Sentinel 1 data. Using Markov Random Fields, we utilize the geometry of surface water to remove speckle from SAR images. Results were compared with the Wishart Maximum Likelihood Classification (WMLC), the Gaussian Maximum Likelihood Classification, and a median filter followed by thresholding. Experiments demonstrate that the statistical representation of data using the Wishart distribution improves the F‐score to 0.95 for WMRF, while it is 0.67, 0.88, and 0.91 for Gaussian Maximum Likelihood Classification, WMLC, and thresholding, respectively. The main improvement in the precision increases from 0.80 and 0.86 for WMLC and thresholding to 0.96 for WMRF. The WMRF model accurately distinguishes classes that have a similar backscatter, like water and bare soil. Hence, the high accuracy of the proposed WMRF model is a result of its robustness for water detection from Single Look Complex data. We conclude that the proposed model is a great improvement on existing methods for the detection of calm surface water bodies.
Introduction
Surface water body detection using satellite data has been addressed in many studies. The retrieved information has been utilized for water management tasks like indication the presence of water bodies and their extent, inundation dynamics, and flood extent. Yet there is at present insufficient knowledge of the spatial and temporal dynamics of available surface water (Alsdorf et al., 2007), since the variation of the spatial extent of inland water bodies both seasonally and interannually is strong (Papa et al., 2010). Also, over the past three decades several permanent water bodies have vanished or become seasonal, due to human and natural causes. Over 70% of global permanent water loss has occurred in five countries. Iran is among these five countries with 56% loss of permanent surface water between 1984 and 2015 (Pekel et al., 2016). Such losses raise major concerns of water security and sustainability. It is thus important to obtain accurate and updated information about the distribution of available surface water bodies.
Remote sensing technology provides advanced means for detecting, characterizing, and monitoring water bodies. It overcomes shortcomings of traditional ground-based surveys, such as being expensive, timeconsuming, and influenced by other unknown factors in the field (Wang et al., 2011). Synthetic Aperture Radar (SAR) data have many advantages over optical images as they are independent of cloud cover; the sensors are able to operate day and night and are not subject to sun glint (Kutser et al., 2009). Applicability of SAR data for surface water detection has been demonstrated in the past (Henry et al., 2006;Hoque et al., 2011;Mertes, 2002;Tholey et al., 1997). Thresholding methods have been extensively used based upon the assumption of a strong contrast between the low backscatter of water and the higher backscatter of main land cover classes in the intensity images (Brisco et al., 2009;J. Li & Wang, 2015;White et al., 2014). Since backscatter values vary depending upon the incidence angle, image quality, and wind-induced surface water roughness, the threshold needs to be modified on a scene-by-scene basis and automating thresholding methods is still a challenge (Bolanos et al., 2016). Also, active contour methods have been applied for water mapping in SAR images (Hahmann & Wessel, 2010;Heremans et al., 2003;Horritt et al., 2001;Mason et al., 2007;Silveira & Heleno, 2009). Although these studies have improved thresholding methods for water delineation, they involve postprocessing steps that rely upon availability of ancillary data to determine candidate pixels for water as well as on morphological operators (Bolanos et al., 2016). Recently, segmentation algorithms using auxiliary data have provided more successful results Martinis et al., 2009). Their dependence upon the availability of high-resolution digital elevation models or a predetermined water mask limits their usefulness. In addition, they are not useful for mapping small ephemeral water bodies (Bolanos et al., 2016). Automatic processing chains also have recently been applied for inland water and flood mapping. Huang et al. (2017) and Twele et al. (2016) used threshold-based algorithms and Shuttle Radar Topography Mission (SRTM) data for water detection and compared automated classification results from using SRTM and Dynamic Surface Water Extent (DSWE). Bioresita et al. (2018) defined an automatic processing chain using the Finite Mixture Models to produce probability maps. Although encouraging results have been obtained, the preprocessing steps are time consuming and they rely as well on predetermined water mask data.
Using contextual information in classification of optical and SAR images leads to improvement in the accuracy and reliability of classification (Ardila et al., 2011;Hiremath et al., 2013). The potential of Markov Random Field (MRF) to effectively integrate contextual information associated with the image data during the analysis is desired, and research has been applied on MRF for SAR image classification (Fjortoft et al., 2003;Kenduiywo et al., 2014;Moser & Serpico, 2009;Reigber et al., 2010;Serpico & Moser, 2006). According to criteria of the maximum a posteriori probability (MAP), MRFs allow a global Bayesian optimization of the classification results. Also, a complex Wishart distribution in classification of polarimetric SAR images has been applied (Akbari et al., 2011;Doulgeris et al., 2008Doulgeris et al., , 2011Frery et al., 2007;Lee et al., 1994). Akbari et al. (2012) proposed an unsupervised contextual clustering model for classification of multilook SAR images with a κ-Wishart distribution for data statistics and the Pott model for the spatial context. Their research shows a clear improvement in using an appropriate statistical representation, but it does not use any backscattering signature of classes. Therefore, it affects detection of any class of interest, which in our case is water. A major and recurring problem is that this class could be mixed with other similar classes and that it could be completely missed by an unsupervised classifier.
The main objective of this research is to develop a contextual supervised algorithm based upon Bayesian statistics for mapping surface water bodies from Single Look Complex (SLC) images. We use complex values of single look polarimetric SAR data to avoid speckle filtering and information loss, whereas the high resolution of SLC images is preserved. We use a contextual model that effectively tackles speckle in SAR images. Using MRFs, we are utilizing the geometry of surface water to remove speckle in SAR images and improve classification. The proposed model is applied on Sentinel 1 data, and results are compared with the Gaussian Maximum Likelihood Classification (GMLC) Wishart Maximum Likelihood Classification (WMLC) models and thresholding. We finally investigate the effect of class definition by using different numbers of classes.
WMRF
Bayesian statistics is widely used in remote sensing classification. Based upon backscatter values, we classify each pixel to the desired categories. The pixel values of a SAR image are denoted by d = {d i , i = 1, …, L}, where L represents the entire image. Each pixel takes a label from user-defined information classes defined as w i ∈ {1, …, m}, where m is the number of classes. The classification criterion is based upon the MAP probability criterion which maximizes the product of conditional and prior probability. The posterior probability can be expressed in terms of the posterior energy function (Li, 2009) where Here, U w i jw Ni ð Þis the prior energy function and U(d i | w i ) is the conditional energy function for pixel i. Use of energy functions allows a more convenient way to express contextual relation compared to probability (Geman & Geman, 1984).
The prior energy, U w i jw Ni ð Þ;is modeled as an MRF for each pixel i,adapted to a neighborhood system, N i . For an image with L pixels, it is defined as follows: where s il is the distance from pixel i to pixel l, in the neighborhood N i of pixel i, and ω(s il ) denotes the weight of contribution from l ∈ N i to the prior energy. The term θ(w i , w l ) is defined as θ(a,b) = 1 if a ≠ b, and 0 otherwise. Weights s il are defined as ω s il ð Þe 1 sil and normalized, such that ∑ l∈Ni ω s il ð Þ ¼ 1. This term favors smooth class labels in the neighborhood and penalizes deviations for smooth classification.
The conditional energy, U(d i | w i ),is modeled by a complex Wishart density function (Goodman, 1985). Dual polarization sensors have a scattering matrix u with two elements that can be considered as a vector where S vhi is the polarized backscattering element of a vertically transmitted and horizontally received signal for pixel i and S vvi is a vertically transmitted and vertically received signal. These elements are complex numbers since they carry both the magnitude and phase of the signal. Then the estimated single look covariance matrix equals where n is the number of pixels to estimate A i . The distribution of A i follows by a complex Wishart probability density function (Goodman, 1985) where Tr(B) is the trace of B; C wi is the complex covariance matrix given class w i , and K(n,q) is defined as The parameter q is dimension of the matrix u i , and Γ(.) is the Gamma function (Lee et al., 1994).
Then, the corresponding conditional energy based on equation (6) is The posterior energy function, equation (2), based on equations (8) and (3) can be written as follows: Minimizing equation (8) with respect to w yields the MAP solution, where an additional parameter λ controls the relative contribution of the prior and conditional energy functions with 0 ≤ λ < 1. For λ = 0, the prior model is completely ignored and the classifier is not contextual. For 0 < λ < 1 the MAP classifier is contextual, explicitly incorporating spatial contextual information by means of prior energy.
Energy Minimization
To maximize P(w i | d i ), the energy function equation (9) has to be minimized. In order to find the global minimum of the energy function, simulated annealing is employed (Geman & Geman, 1984;Metropolis et al., 1953)). This research applies the Metropolis-Hastings sampler (Geman & Geman, 1984). The algorithm starts at a high temperature τ = τ 0 . The value of τ decreases using a cooling schedule. An iterative process follows until the system becomes is frozen (τ → 0). The temperature at the iteration k is changed such that 10.1029/2019WR025192
Water Resources Research
for σ ∈ (0; 1). Any τ 0 can be chosen for optimization, but its value can affect the solution. Optimal values of the annealing schedule (τ 0 and σ) depend upon complexity of the problem which in our study depends on class separability. For each iteration, the Metropolis-Hastings sampler updates all pixels and the number of successful updated pixels is counted which leads to a change of pixel value. A threshold of 0.1% of the total number of pixels (Tolpekin & Stein, 2009) is defined to stop the optimization process when for three consecutive iterations the counted updated pixels are below the threshold.
We compare our WMRF model with three current methods from the literature. One is the GMLC, which is based upon a Bayesian probabilistic framework, but without contribution of a prior energy function. For this classifier, the conditional energy, that is, the distribution of u i is assumed to follow a Gaussian distribution. Then, equation (9) can be written as follows: The second method for comparison is the WMLC which follows a similar framework as the WMRF model but only considers the conditional energy. For the WMLC model, the distribution of the single look covariance matrix is assumed to follow the complex Wishart distribution. Then the posterior energy function for all pixels in a SAR image is based on equation (8).
The third method for comparison is the thresholding method based upon the low backscattering of water with respect to land in SAR images. We applied a median filter followed by thresholding.
Accuracy Assessment
Evaluation of the results is done using the m × m confusion matrix as a common way to summarize performance of classification methods. A confusion matrix assesses the results by means of labeled pixels that relate classified data to reference data. Using the confusion matrix, two measures of precision and recall are used to evaluate the success rate of classifiers. Precision denotes the proportion of the number of true positive predictions divided by the total predicted positives, and recall is the number of true positive predictions divided by the total number of actual positives. We also use the F-score as a measure for accuracy assessment, defined as follows: In addition, Cohen's κ (Hudson & Ramm, 1987) measures how closely the instance classified by the classifier matched the data labeled as reference data. To test the significance of results, we used the test statistic Z, (Congalton & Green, 2009) defined as follows: It tests the null hypothesis H 0 : Z = 0 against the alternative hypothesis H 1 : Z ≠ 0. Under the assumption of normality, we decide in favor of H 1 if Z > 1.96; otherwise, we decide in favor of H 0 .
As our purpose is to detect water, the confusion matrix for more than two classes is formed by two elements: water and nonwater. More classes are defined during the classification to adequately model spectral variation of nonwater pixels and overcome the problem of misclassification due to the poor class separability of water and bare soil.
Water Resources Research
GOUMEHEI ET AL.
Data Description
We used Sentinel-1 SAR image in Interferometric Wide swath mode to test the performance of the proposed method. Sentinel-1 is operating in C-band and collects images of a 250-km swath and a 5 × 20-m spatial resolution. The image acquisition date is 17 April 2017. We used a Level 1 SLC image with amplitude and phase of the SAR signal. It contains dual-polarization (VV+VH) data with both real and imaginary information.
For preprocessing of the image, the Sentinel-1 toolbox (S1TBX) from SNAP software, version 5.0.8 of European Space Agency (ESA), was used (Veci et al., 2015). Interferometric Wide products have three subswaths, and each subswath image consists of a series of bursts, thus requiring debursting. Next, orbit file correction, radiometric calibration, and geometric terrain correction were applied. We used geometric terrain correction to reduce topographic effects. Range Doppler terrain correction (Small & Schubert, 2008) was applied, using SNAP software. The Digital Elevation Model for terrain correction was the SRTM elevation model of a 3-arcsecond resolution, whereas the nearest neighbor method was chosen as the resampling method. Complex values of the image data were preserved during the preprocessing steps. Figure 1 shows the intensity of the SLC image.
Two different reservoirs, located in the western part of Iran, were selected as the study area. These two reservoirs have different characteristic in terms of shape and size. Reservoir A is a reservoir with 10-km 3 capacity covered by 1,687 × 649 pixels, whereas Reservoir B has a triangular shape with a rocky island in the middle. It is smaller with capacity of 1 km 3 and is covered by 401 × 220 pixels. The two reservoirs are both in a mountainous environment that mainly consists of surface water bodies, bare soil, rocks, and agricultural lands.
Three different legends were considered: C2 (water, nonwater), C3 (water, bare soil, and others), and C4 (water, bare soil, rock, and others). Upon the classification results of C3 and C4, nonwater classes were recorded into a single class.
Training samples of Reservoir A contain 10 polygons for each class. The class water has 1,123 pixels, bare soil has 2,178 pixels, rocks has 1,876 pixels, and the class other includes 2,660 pixels. Training samples of Reservoir B consist of six polygons for each class. The number of training pixels for class water is 138, for bare soil it is 388, for rocks it is 245, and for the class other it is 173. Training samples for the two reservoirs were selected manually from optical images with a few days difference. Training samples were chosen based upon the literature, considering distribution, size, and balance aspects (Congalton & Green, 2009;Zhu et al., 2016).
The first reference set (rs1) contains 400 points for Reservoir A and 192 points for Reservoir B. These points were obtained by stratified random sampling using the WMRF classification results for C3. Among the 400 points, Reservoir A contains 136 points for water and 264 points for nonwater. Reservoir B has 74 points for water and 116 points for nonwater. These points are evenly distributed throughout the study area, and the points are labeled by an expert using visual interpretation of SAR and the Sentinel 2A optical image of the study area collected in 26 April 2017.To ensure that the rs1 data do not inform the calculation of accuracy metrics, a second reference set (rs2) was generated. This set has 400 stratified random points created using the same classified optical image of the area and contains 200 points for each class. We evaluated the WMRF classified images using the second reference dataset.
The algorithm was written in R (R Core Team, 2018), version 3.5.0. The Rcpp and rgdal packages have been used.
Classification 3.2.1. Simulated Annealing Parameter Optimization
Parameters for energy minimization part of classification were set equal to τ 0 = 4.0 and σ = 0.s9. A neighboring system N i was chosen as the eight nearest pixels. The covariance matrix in equation (4)
Water Resources Research
the F-score for τ 0 = 1.0 and τ 0 = 4.0 reduced with an amount of only 0.006 and 0.0003 from 10 to 100 repetitions, respectively. Therefore, we decided that 10 repetitions provided an adequate representation of the variation in the results.
The suggested range for initial temperature found in the literature is between 3 and 4. This corresponds with our optimized value, so we selected τ 0 = 4.0 to optimize the value of σ. Figure 2b shows that the F-score is increasing with increasing σ values and that at the same time SD is decreasing. The results show that the optimal value equals σ = 0.99, corresponding with the highest F-score and the lowest SD values. In order to investigate the efficiency of the optimized value in terms of time consumption for energy minimization, the number of iterations for each value of σ is plotted in Figure 2c. From this figure we observe a slight growing in the number of iterations by increasing the value of σ. All σ values require approximately k = 150 iterations except for σ = 0.99, where more than k = 400 iterations were needed. As it turned out, there is trade-off between k and speed of the model. Hence, we suggest to select σ = 0.9 with admissible F-score and SD values, specifically for large data classification.
WMRF Model
To evaluate the performance of the WMRF model for surface water body detection, the WMRF was applied on the SLC data for different values of parameter λ. To deal with similar scaling variables, normalization has been applied on the conditional energy values, as we are dealing with complex values which cause unbalanced ranges of prior and conditional energy. Figure 3 shows average value of F-score, precision, and recall for 10 runs and their variance for 0 ≤ λ ≤ 1. The model classified the data into two classes (C2 legend) as the maximum observed F-score = 0.73 for Reservoir A and 0.81 for Reservoir B. Accuracy increases smoothly with increasing λ. It reaches its maximum F-score for λ = 0.9, whereas for λ = 1 it drops to 0.41 and 0.44 for study areas A and B, respectively. The maximum observed precision equals 0.58 for Reservoir A and 0.70 for Reservoir B. Recall results show the highest performance for all values of λ, whereas recall = 1 for Reservoir A and recall = 0.97 for Reservoir B. These results show that the model can successfully classify water pixels as class water but for many pixels misclassifies nonwater pixels as water. The following section aims to overcome this problem by defining new classes.
Number of Classes
We now evaluate the choice for legends C2, C3, and C4. Results for C3 and C4 show a higher sensitivity to changing λ than C2 (Figure 4). We observe a remarkable improvement in F-score and precision results from C2 to C3 and C4, for both reservoirs. The highest F-score equals 0.95 for C3 and C4 and 0.72 for C2, for Reservoir A (Figure 4a). Reservoir B also experiences an improved F-score from 0.81 for C2 to 0.95 for C3 and C4 (Figure 4d). From Figures 4b and 4e, we note that the improvement of the results for C3 and C4 is even stronger in terms of precision as compared to C2. For Reservoir A, the precision increases to 0.94 for C3 and 0.93 for C4 from 0.58 for C2 (Figure 4b). A similar increase is observed for Reservoir B: The precision increases from 0.70 for C2 to 0.97 for C3 and C4 (Figure 4d).
Including a class bare soil in the legend remarkably improves the precision of water classification and only slightly decreases the recall of the classification. Recall of C3 and C4 decreases 3% to 4% with respect to C2 for Reservoir A (Figure 4c), because the class separability of water and bare soil is poor and these classes have overlapping distributions. For C2, all low backscatter pixels are assigned to class water, so recall is high and all water pixels are classified correctly. For C3 and C4, pixels in the overlapping area, which have low backscatter, can be labeled as either water or bare soil.
To ensure that the reference data, rs1, do not inform the calculation of accuracy metrics, we evaluated WMRF classified images using the second reference data set, rs2, and compared those with our results using pairwise the test statistics Z, for testing the significance of the difference between two independent error matrices (Congalton & Green, 2009). The test statistics is Z = 0.16 and shows that the results are not significantly different.
Comparison With GMLC, WMLC, and Thresholding
As the last experiment to assess the performance of the WMRF model, we compared its results with those from the GMLC, the WMLC models, and thresholding. The strength of the WMRF model is more obvious for C3 and C4. The F-score for the WMRF model raises to 0.95 from 0.88 for WMLC and 0.67 for GMLC. This increase is also evident for precision that increases from 0.81 and 0.62 for WMLC and GMLC to 0.94 for WMRF. This improvement of precision shows that the WMRF model benefits from contextual information for classifying nonwater pixels properly. It accurately turns individual predicted water pixels to nonwater label, using labels of neighboring pixels. Therefore, misclassified nonwater pixels will be correctly labeled ( Figure 5).
Water Resources Research
Performance of the WMRF model for both reservoirs is more robust than the other two models ( Figure 6). The WMRF model acquires similar results for both study areas, specifically for C3 and C4, while results are varying for the GMLC model. The test statistic for testing the significance of κ represents the trustworthiness of the results. The high Z value of 43.39 and 29.81 for κ shows significance at the 95% confidence level, indicating that the results for WMRF model are substantially better than random.
10.1029/2019WR025192
Water Resources Research Figure 5 shows the results for the WMRF, WMLC, and GMLC models. The WMRF model improves the classification results. Surface water in both WMRF and WMLC model is homogenous, and it can be clearly interpreted as a water object. Nonwater pixels that are misclassified as water are obvious in the GMLC and WMLC models classification, whereas the WMRF model achieves smoother results.
A final issue in the evaluation of the different models concerns the efficiency of the algorithms. Computation time for one run of the algorithm is on average 19.388 s for 149 iterations, that is, 0.130 s per iteration, so the model runs on average 1.187 × 10 −7 s per pixel per iteration. The algorithm has run on an Ultrabook with processor of Intel® Core™ i7-6560U CPU @ 2.20 GHz. The compared models, GMLC and WMLC, have computation times equal to 9.41 × 10 −7 s per pixel and 11.23 × 10 −7 s per pixel, respectively.
For the third comparison we selected a median filter with size of 5 × 5 window, comparable with the neighborhood system of our WMRF model. We used 80 randomly chosen reference points to estimate the optimal threshold value. These points were removed from the reference set. Next, we compared the results of the thresholding with the results of the WMRF model using a pairwise test statistic, Z. We obtained Z = 3.21 showing a significant difference in the results of two different methods. When a smaller 3 × 3 median filter is used, the pairwise test is even higher, Z = 5.48. In addition to the statistical analysis, also, a visual interpretation of the results showed the superiority of the proposed model. Figure 6 indicates that the thresholding method has a higher commission error (lower precision) in comparison to the thematic map of the proposed method. Detailed results are provided in Table 2.
Discussion
This study proposed a supervised contextual algorithm for SLC polarimetric SAR imagery. The main difficulty in mapping water in SAR images is differentiating water from other classes with similar backscattering properties.
When applying a contextual classifier for the detection of a specific class, one can decide on the number of background classes to be defined. One possibility is to define all spectral classes present in the image. This would lead to higher accuracy, although at the expense of additional training efforts. For a spectral Our results show that a legend with a single background class, C2, is insufficient, because existence of a class with similar backscatter decreases the accuracy. Therefore, addition of a background class, bare soil (C3), spectrally similar to target class substantially improves the classification accuracy. Further splitting of background class, C4, however, did not lead to more improvement.
Our main focus was using SLC SAR data. In polarimetric SAR classification, most research, for example, Akbari et al. (2012) and Wu et al. (2008), used multilook data. Multilooking reduces speckle by averaging adjacent pixels at the expense of losing spatial resolution. Therefore, multilooked SAR data have a coarser resolution as compared to SLC data. The aim of using SCL data is to preserve finer resolution which is of interest for smaller surface water bodies. In this study, we use a contextual model that is effective to tackle speckle in SAR images. Using MRF, we build on the geometry of the surface water bodies to remove speckle from SAR images and improve classification. Common speckle filtering algorithms smooth away high-frequency information (Mather & Tso, 2009). Using MRF, class information of neighboring pixels are taken to account to suppression speckle.
One of the main advantages of using SLC data is the preservation of the spatial resolution of the sensor. This is most critical in small objects or objects with an irregular boundary, as is the case for reservoirs in mountainous areas considered in this paper. The proposed model is a supervised classifier so its application is not restricted to water body detection. The WMRF model assumes that the classes can be modeled with the complex Wishart distribution. As long as this assumption can be justified and the covariance matrices of the classes are different, the proposed model can distinguish different classes from SLC images. The reason is that the covariance matrix of a class includes information on the Radar Cross Section and polarimetric scattering mechanism of the object.
This research focused on homogenous surface water body detection. Although the study of surface water in arid and semiarid climate is of importance of this research, the integration of water and vegetation should be further investigated. Our study area is an arid and semiarid area without vegetation and reeds in water. Presence of vegetation is a problem, for example, in wetlands. The case study of this research is a calm water body with weak wind conditions. In case of wind or turbulent water, a choice of proper satellite data with appropriate wavelength is crucial. The Sentinel 1 SAR instrument operates in C-band corresponding to a radar wavelength of about 5.6 cm. This means that Sentinel 1 ignore water waves smaller than this wavelength, which is applicable for mapping calm water. For turbulent water or wind-generated waves higher than 5.6 cm, other satellite data with longer wavelengths are required. In addition, at substantial wave height in relation to radar wavelength, relative orientation and incidence angle with respect to wave should be considered.
Several studies, like Martinis et al. (2009, and Westerhoff et al. (2013), have benefited from using auxiliary data, such as digital elevation models. Although the use of auxiliary data improves the results, it may not be available everywhere. One benefit of our model is that it achieves high accuracy without use of any auxiliary data except for training data set. Therefore, our model is generally applicable in semiarid environments, even without auxiliary data.
During optimization, there is a clear trade-off between accuracy and computation time. Taking the advantage of high accuracy requires more computational time, which is reflected in the number of iterations. For large data sets, a higher value of σ increases the computational time, whereas a smaller data set can benefit from a higher accuracy and a lower number of iterations. The value of σ, however, can be controlled and be selected based on user preference.
Conclusions
This study showed how the complex Wishart distribution was satisfactorily incorporated into MRFs. The experimental results show a classification accuracy of 95% for two lakes in Iran. The high recall values for all experiments illustrate the strength of the model for correctly classifying water pixels. Using two classes only, we noted that pixels with low backscatter like bare soil are incorrectly labeled as water. Such a misclassification was prevented by splitting class water and bare soil which significantly improves 10.1029/2019WR025192
Water Resources Research
accuracy. We concluded that in the case of calm surface water bodies, the WMRF can perform robustly. The strength of the proposed model is a high classification accuracy of SLC polarimetric SAR data. It is reliable for differentiating classes with similar backscatter. | 7,169.4 | 2019-08-01T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
EFFECTS OF HOMOGENEOUS-HETEROGENEOUS CHEMICAL REACTION AND SLIP VELOCITY ON MHD STAGNATION FLOW OF A MICROPOLAR FLUID OVER A PERMEABLE STRETCHING/SHRINKING SURFACE EMBEDDED IN A POROUS MEDIUM
We report on a mathematical model for analyzing the effects of homogeneous-heterogeneous chemical reaction and slip velocity on the MHD stagnation point flow of electrically conducting micropolar fluid over a stretching/shrinking surface embedded in a porous medium. The governing boundary layer coupled partial differential equations are transformed into a system of non-linear ordinary differential equations, which are solved numerically using the MATLAB bvp4c solver. The effects of physical and fluid parameters such as the stretching parameter, micropolar parameter, permeability parameter, strength of homogeneous and heterogeneous reaction parameter on the velocity and concentration are analyzed, and these results are presented through graphs. The solute concentration at the surface is found to decrease with the strength of the homogeneous reaction, and to increase with heterogeneous reactions, the permeability parameter and stretching or shrinking parameters. Comparison between the previously published results and the present numerical results for various special cases has been done and are found to be an excellent agreement.
INTRODUCTION
Micropolar fluid is a non-Newtonian fluid that belongs to a class of fluids with non-symmetrical stress tensor and is referred to as polar fluid.Micropolar fluids are fluids with internal structures in which coupling between the spin of each particle and the microscope velocity field is taken into account.They represent fluids consisting of rigid, randomly oriented or spherical particles suspended in a viscous medium, where the deformation of fluid particles is ignored.Micropolar fluid theory was introduced by Eringen (1966) in order to describe physical systems, which do not satisfy the Navier-Stokes equations.The equations governing the micro polar fluid involve a spin vector and a micro inertia tensor in addition to the velocity vector.The dynamics of micro polar fluids provides some practical applications, for example turbulent shear flow, the flow of colloidal suspensions, polymeric fluids, liquid crystals, additive suspensions, human and animal blood, analyzing the behaviour of exotic lubricants.The potential importance of micro polar fluids in industrial applications has motivated many researchers to extend the study in numerous ways to include various physical effects.The essence of the theory of micro polar fluid lies in particle suspension (Hudimoto and Tokuoka, 1969), liquid crystals (Lockwood et al., 1987); animal blood (Ariman et al., 1974a), exotic lubricants (Erigen, 1976), etc.An excellent review of the various applications of micro polar fluid mechanics was presented by (Ariman et al.,1974b), Hayat et al., 2016, Ramzan et al., 2016, Sajid et al., (2009a) investigated the exact analytic solution for the three thin film flow problems of a micro polar fluid.The main advantage of using a micro polar fluid model to study the boundary layer flow in comparison with other classes of non-Newtonian fluids is that it takes care of the rotation of the fluid particles by means of an independent kinematic vector called the micro-rotation vector was investigated by Sajid et al., (2009).
The flow of a non-Newtonian fluid over a stretching sheet has attracted considerable attention during the last two decades due to its vast applications in industrial manufacturing such as hot rolling, wire drawing, glass fiber and paper production, drawing of plastic films, polymer extrusion of plastic sheets and manufacturing of polymeric sheets.For the production of glass fiber/plastic sheets, thermo-fluid problem involves significant heat transfer between the sheet and the surrounding fluid.Sheet production process starts solidifying molten polymers as soon as it exits from the slit die.The sheet is then collected by a wind-up roll upon solidification.To improve the mechanical properties of the fiber/plastic sheet we use two ways, the extensibility of the sheet and the rate of cooling.Crane (1970) was the first who reported the analytical solution for the laminar boundary layer flow past a stretching sheet.Several researchers viz.Gupta and Gupta (1977), Dutta et al. (1985), Chen and Char (1988) extended the work of Crane by including the effects of heat and mass transfer under different situations.
Magneto hydrodynamic (MHD) is the science which deals with the motion of highly conducting fluids in the presence of a magnetic field.The motion of the conducting fluid across the magnetic field generates electric currents which change the magnetic field and the action of the
Frontiers in Heat and Mass Transfer
Available at www.ThermalFluidsCentral.orgmagnetic field on these currents gives rise to mechanical forces which modify the flow of the fluid.The Magneto hydrodynamic (MHD) character of fluid especially in physiological and industrial processes seems too much important.Such consideration is useful for blood pumping and magnetic resonance imaging (MRI), cancer therapy, hyperthermia etc. Abo-Eldahab and Ghonaim (2003) investigated convective heat transfer in an electrically conducting micropolar fluid at a stretching surface with uniform free stream.Wang et al., (2011) studied with the magnetohydrodynamic flow of a micropolar fluid in a circular cylindrical tube.Eldabe and Ouaf (2006) solved the problem of heat and mass transfer in a hydro magnetic flow of a micropolar fluid past a stretching surface with Ohmic heating and viscous dissipation using the Chebyshev finite difference method.Hiemenz (1911) first reported the stagnation point flow towards a flat plate.It is worthwhile to note that the stagnation flow appears whenever the flow impinges to any solid object and the local fluid velocity at a point MHD stagnation point flow of a micropolar fluid over a stretching surface with heat source (called the stagnation-point) is zero.Chiam (1994) extended the works of Hiemenz (1911) replaced the solid body a stretching sheet with equal stretching and straining velocities and he was unable to obtain any boundary layer near the sheet.Whereas, Mahapatra and Gupta (2001) reinvestigated the stagnation-point flow towards a stretching sheet considering different stretching and straining velocities and they found two different kinds of boundary layers near the sheet depending on the ratio of the stretching and straining constants.The study of a steady two-dimensional stagnation point flow of a micropolar fluid over a stretching sheet when the sheet was stretched in its own plane and the stretching velocity was proportional to the distance from the stagnation point was examined by Nazar et al. (2004).The resulting coupled equations of nonlinear ordinary differential equations were solved numerically.Hayat et al. (2009a) investigated the two-dimensional Magneto hydrodynamic (MHD) stagnation-point flow of an incompressible micropolar fluid over a nonlinear stretching surface.Hayat et al. (2009b) analyzed the steady two dimensional MHD stagnation point flow of an upper convected Maxwell fluid over the stretching surface.The governing nonlinear partial differential equations were reduced to ordinary ones using the similarity transformation.The homotopy analysis method (HAM) was used to solve these equations.Bhattacharyya (2013) investigated the boundary layer stagnation-point flow of Casson fluid and heat transfer towards a shrinking/stretching sheet.Yacos et al. (2011) have been investigated melting heat transfer in boundary layer stagnation-point flow toward a stretching/shrinking sheet in a micropolar fluid.
The combined heat and mass transfer problems with chemical reactions are of importance in many processes, and therefore have received a considerable amount of attention in recent years.In processes, such as drying, evaporation at the surface of a water body, energy transfer in a wet cooling tower and the flow in a desert cooler, the heat and mass transfer occurs simultaneously.Many chemically reacting systems involve both homogeneous and heterogeneous reactions, with examples occurring in combustion, catalysis, biochemical systems, crops damaging through freezing, cooling towers, fog dispersion, hydrometallurgical processes etc.The interaction between the homogeneous reactions in the bulk of fluid and heterogeneous reactions occurring on some catalytic surfaces is generally very complex, involving the production and consumption of reactant species at different rates both within the fluid and on the catalytic surfaces.A simple mathematical model for homogeneousheterogeneous reactions in stagnation-point boundary-layer flow was initiated by Chaudhary and Merkin (1995(a)).They modeled the homogeneous (bulk) reaction by isothermal cubic kinetics and the heterogeneous (surface) i reaction was assumed to have first-order kinetics.Later Chaudhary and Merkin (1995(b)) extended their previous work to include the effect of loss of the autocatalyst.They studied the numerical solution near the leading edge of a flat plate.A model for isothermal homogeneous-heterogeneous reactions in boundary layer flow of a viscous fluid flow past a flat plate was studied by Merkin (1996).Effects of homogeneous and heterogeneous reactions in flow of nanofluids over a nonlinear stretching surface with variable surface thickness was reported by Hayat et al., (2016) and observed that the homogenous and heterogeneous parameters have opposite behaviors for concentration profile.Ziabakhsh et al. (2010) studied the problem of flow and diffusion of chemically reactive species over a nonlinearly stretching sheet immersed in a porous medium.Chambre and Acrivos (1956) studied an isothermal chemical reaction on a catalytic in a laminar boundary layer flow.They found the actual surface concentration without introducing unnecessary assumptions related to the reaction mechanism.The effect of flow near the twodimensional stagnation point flow on an infinite permeable wall with a homogeneous-heterogeneous reaction was studied by Khan and Pop (2010).They solved the governing nonlinear equations using the implicit finite difference method.It was observed that the mass transfer parameter considerably affects the flow characteristics.Melting and homogeneous/heterogeneous reactions effects in nanofluid flow by a cylinder are addressed by Hayat et al., (2016).It is found that maximum heat transfer and minimum thermal resistance for base fluid suspended multi-wall carbon nanotubes (MWCNTs) when compared with other nanofluids.The behavior of homogeneous parameter K on concentration profile is sketched for water and kerosene oil base fluids by Hayat et al., (2016).It is analyzed that the concentration field is decreasing function of homogeneous parameter K for base fluids water and kerosene oil.In fact higher values of homogeneous reaction parameter correspond to larger chemical reaction which consequently reduces the concentration distribution.Hayat et al., (2016) developed numerical analysis for homogeneous-heterogeneous reactions and Newtonian heating in magnetohydrodynamic (MHD) flow of Powell-Eyring fluid by a stretching cylinder and noticed that the flow accelerates for large values of Powell-Eyring fluid parameter.Hayat et al., (2016) disclose the effects of homogeneous-heterogeneous reactions and melting heat phenomenon in the Magnetohydrodynamic second grade fluid flow.Heat transfer is tackled with heat generation/absorption.Khan and Pop (2012) studied the effects of homogeneous-heterogeneous reactions on the viscoelastic fluid toward a stretching sheet.They observed that the concentration at the surface decreased with an increase in the viscoelastic parameter.
Flow through porous media has various physiological applications such as the flow of blood in the micro-vessels of the lungs which may be treated as a channel bounded by two thin porous layers (Misra and Ghosh (1997)).It is realized that fluid slips at the walls in certain physiological and engineering situations.The no slip boundary condition is a core concept in fluid dynamics in which the fluid and the boundary move with same velocity.Beaver and Joseph (1967) were the first to propose slip boundary condition.The boundary condition proposed by Beaver and Joseph was simplified by Saffman (1971).The existence of slip phenomenon at the boundaries and interfaces has been observed in the flows of rarefied gases, physiological flows, hypersonic flows of chemically reacting binary mixture etc.Also, flows with slip occur for certain problems in chemical engineering, for example, flows through pipes in which chemical reactions occur at the walls, certain two-phase flows and flows in porous slider bearings.Haliza Rosali et al. (2012) studied a micropolar fluid flow towards a permeable stretching or shrinking sheet in a porous medium.Mhd Flow and Heat Transfer through a Porous Medium over a Stretching/Shrinking Surface with Suction was analyzed by F. Ahmad1 et al. (2015).Homogeneousheterogeneous reactions in micropolar fluid flow from a permeable stretching or shrinking sheet in a porous medium was studied by Shaw et al. (2013).
At the macroscopic level, it is well accepted that the boundary condition for a viscous fluid at a solid wall is one of no-slip, i.e., the fluid velocity matches the velocity of the solid boundary.While the noslip condition has been processed experimentally to be accurate for a number of macroscopic flows, it remains an assumption that is not based on physical principles.In many practical applications, the particle adjacent to a solid surface no longer takes the velocity of the surface.The particle at the surface has a finite tangential velocity.It slips along the surface.The flow regime is called a slip-flow regime, and this effect cannot be neglected.The study of magneto-micro polar fluid flows in the slip-flow regimes with heat transfer has important engineering applications, e.g., in power generators, refrigeration coils, transmission lines, electric transformers, and heating elements.Mahmoud and Waheed (2010) performed a theoretical analysis to study heat transfer characteristics of magneto hydrodynamic mixed convection flow of a micro polar fluid past a stretching surface with slip.Hayat et al., (2016) presented the effect of Partial slip effect in flow of magnetite Fe3O4 nanoparticles between rotating stretchable disks.Bakr (2011) analyzed Chemically Reacting Unsteady Magneto hydrodynamic Oscillatory Slip Flow of a Micropolar Fluid in a Planer Channel with Varying Concentration.Hayat et al., (2016) analyzed that larger values of first order slip velocity parameters and magnitude of second order slip velocity parameters correspond to lower velocity.With an increase in slip velocity parameters, stretching velocity is partially transferred to the fluid so velocity profiles decrease.The Effects of Chemical Reaction, Hall, and Ion-Slip Currents on MHD Micropolar Fluid Flow with Thermal were studied by S. S. Motsa1 and S. Shatey (2012).Hayat et al., (2016) studied the MHD three-dimensional flow of nanofluid with velocity slip and nonlinear thermal radiation.Hayat et al., (2016) looks at the influence of an inclined magnetic field on peristaltic transport of hyperbolic tangent nanofluid in inclined channel having flexible walls.Alireza et al. (2013) presented an analytical solution for MHD stagnation point flow and heat transfer over a permeable stretching sheet with chemical reaction.
MATHEMATICAL FORMULATION
Let us consider the steady two-dimensional stagnation point flow of viscous, incompressible and electrically conducting micropolar fluid over a stretching sheet embedded in a porous medium.The Cartesian coordinate system is used with the x-axis along the sheet and the y-axis normal to the sheet.Two equal but opposite forces are applied to the stretching sheet so that the surface is stretched, keeping the position of the origin unaltered.A magnetic field B0 is applied perpendicular to the sheet.It is assumed that the magnetic Reynolds number is much less than unity so that the induced magnetic field is negligible in comparison to the applied magnetic field.Keeping the origin fixed, it is assumed that the surface is stretched/shrunk with a linear velocity UW(x) = Uwx, where UW is a constant with UW > 0 for a stretching sheet, UW < 0 for a shrinking sheet and UW =0 for a static sheet.Also, we consider a simple model for the interaction between a homogeneousheterogeneous reaction involving the two-chemical species A and B in a boundary layer flow proposed by Chaudhary andMerkin (1995a,1995b) of the following form: Where a and b are concentrations of chemical species A and B respectively, and kc, ks are the constants.It is assumed that the ambient fluid moves with a velocity ( ) , where U is a constant, in which there is a uniform concentration a0 of reactant A and in which there is no auto catalyst B over a flat surface.Under these assumptions and boundary layer approximations the steady two-dimensional stagnation point flow of micropolar fluid towards a stretching sheet embedded in a porous medium is described by the following equations: Continuity Equation where u and v are the velocity components in the x and y directions respectively, e u is the velocity outside the boundary layer, is the dynamic viscosity, eff is the effective dynamic viscosity, k is the vortex viscosity, is the density of the fluid, N is the micro rotation, is the electrical conductivity,, 0 B is the uniform magnetic field, 1 k is the permeability of the porous medium, j U is the micro inertia per unit mass, is the spin gradient viscosity defined as Where w v is the constant mass flux with w v < 0 for suction and w v >0 for injection (blowing) respectively; N is the slip velocity coefficient and n is a constant ( 0 1 n ).Here n = 0 represents the strong concentration (Guram and Smith, 1980), and n = 1 represents the turbulent boundary layer flow (Peddieson, 1972).The case n = 1/2 indicates the vanishing of the antisymmetrical part of the stress tensor and denotes weak concentration (Ahmadi, 1976), which is the case considered in the present study.Now, introducing the following transformation 0 0 , ( ), ( ), ( ) , ( ) Where is the similarity variable and ( , ) x y
is the stream function.
The velocity components are defined by Substituting (9) into the Equations ( 4) -( 7) and ( 8), we get the following set of ordinary differential equations The corresponding boundary conditions are (0) , (0) (0),p(0) (0), where the primes denote the differentiation with respect to , is the magnetic parameter, Using the similarity variables in Eq. ( 19), we get Where ( ) Re
SOLUTION OF THE PROBLEM
The set of equations ( 12) to ( 13 Here p is a vector of unknown parameters.Boundary value problems (BVPs) arise in most diverse forms.Just about any BVP can be formulated for solution with bvp4c.The first step is to write the ODEs as a system of first order ordinary differential equations.The details of the solution method are presented in Shampine and Kierzenka (2000).
RESULTS AND DISCUSSIONS
The numerical computations have been carried out using the MATLAB bvp4c solver for several values of the physical parameters arised in the study then acquired results are presented in graphs.
The variations of the velocity and concentration profiles are plotted as a function of η for some values of λ in Figures 1 and 2 for Kp = 0.1, χ = 0.1, Sc = 1, K = 1, s = 0.5, n = 0.5, Ks = 1.In Fig. 1, (i) for λ >0 (stretching surface), the fluid velocity is becoming increasingly greater than the free stream.In this case the fluid velocity decreases with the value of η and converges at unity as per the condition.(ii) For λ=0 (static surface), the fluid velocity initially is stationary, but with η value it increases in a non-linear way.(iii) For λ < 0 (shrinking surface), the fluid velocity is initially negative, but it increases with η, and after a certain value of η, it becomes positive.For the concentration profile in figure 2, all the curves are started from the origin and they increase nonlinearly with η to follow 'S' shape and finally reach unity according to the given condition.The variations of the velocity and concentration profiles are plotted for different values of M in Figures 3 and 4. In Fig. 3, for λ >0 (stretching surface), the fluid velocity is becoming increasingly greater than the free stream.In this case the fluid velocity decreases with the value of η and converges at unity as per the condition.For λ< 0 (shrinking surface), the velocity decreases with the increase of magnetic parameter M for opposing, assisting and steady state cases.That is because the application of a magnetic field in the y-direction to an electrically conducting fluid gives rise to a flow resistive force called the Lorentz force.The concentration profile in figure 4, for λ >0 (stretching surface), the fluid concentration increases with increasing magnetic parameter M and opposite case is observed for λ< 0 (shrinking surface).All the curves are started from the origin and they increase nonlinearly with η to follow 'S' shape and finally reach unity.
The solute velocity, however, increase with the permeability for stretching/shrinking parameters is observed from Fig. 5.
A concentration profile for different values of permeability parameter is shown in Fig. 6.For λ >0 (stretching surface), the fluid concentration decreases with increasing the permeability parameter D and opposite case is observed for λ < 0 (shrinking surface).M = 0.0, 1.0, 2.0 M = 0.0, 1.0, 2.0 The variations of the velocity and concentration profiles are plotted for different values of slip parameter (Sν) in Figures 7 and 8.In Fig. 7, for λ > 0 the fluid velocity increases with the increase of slip parameter and an opposite effect is seen when λ < 0. The increase in the slip parameter has the tendency to reduce the friction forces which reduces the fluid velocity.The fluid concentration enhances when Sν enhances for λ > 0 and decreases for λ < 0 in Fig. 8.
The effect of heterogeneous and homogeneous reactions on the concentration profile are separately shown through Figures 9 and 11 for stretching sheet and Figures 10 and 12 for shrinking sheet respectively.It is evident that the concentration boundary layer of the reactants is increasing with η in both cases, and after a certain η value, they all coincide, i.e., after a certain η value, the homogeneous and heterogeneous reactions have no e ect on the concentration of the reactants.This critical value of η (η∞) depends on the strength of the homogeneous reaction and increases with the value of K, but it does not depend on the strength of the heterogeneous reaction.A similar phenomenon is observed for the second solution.The graphs for these condition solutions with Ks = 0.2 and 1 coincide.It is observed that the first solution is more stable and converges more easily than the second solution.The concentration of the reactants depends on the Schmid tnumber (Sc) and heterogeneous reaction parameter.The variation of the concentration with K for di erent values of the Schmidt number is shown in Figures 13 and 14.The Schmidt number is the ratio between a viscous di usion rate and a molecular di usion rate.For a fixed molecular di usion rate, with increase in Schmidt number, the viscous di usion rate increases, which helps to increase the concentration of the fluid for both stretching and shrinking sheet.
Figs.15 and 16 are aimed to shed light on the effect of suction ( 0 S represents impermeable, 0 S represents suction and 0 S represents the injection or blowing) on the velocity and concentration profiles.From these, we observed that the velocity decreases with an increase in the suction parameter whereas concentration increases for stretching sheet this is due to the fact that the heated fluid is pushed towards the wall where the buoyancy forces can act to retard the fluid due to high influence of viscosity.This effect acts to decrease the wall shear stress.The effect of the velocity and concentration profiles for different suction parameter S is shown in Figures 17 and 18.The velocity of the fluid increases due to increase of S, and this leads to an increase in the solute concentration for shrinking sheet.Sc = 1.0, 2.0, 3.0, 4.0, 5.0 Table 1.The values reported by Katagiri (1971) using an iterative numerical quadratures and by Lok et al. (2007) using the Keller-box method were also included in this table.It is seen that the present results are in excellent agreement with both results obtained by Katagiri (1971), Lok et al. (2007) and Khan and Pop (2010).We notice that for an impermeable wall (S = 0) the values of (0) f reported by Hiemenz (1911) is S = 1.233.[Wang (2008) -7.086378 -7.086378 -6.904439 -7.086378 -6.904439 5.0 -10.26475 -10.26479 -10.26479 -9.837608 -10.26479 -9.837608From Table 3 it is clear that as the micro polar parameter or the magnetic parameter increases both (0) f and g(0) increases.As the diffusion coefficient and Schmidt number increases both (0) f and g(0) remains constant.As increases a tremendous decrease is seen in (0) f , a reverse effect is seen when a slip parameter increases.
CONCLUSIONS
The present analysis investigates the effect of the homogeneous and heterogeneous chemical reaction and slip velocity on MHD stagnation flow of a micropolar fluid flow through a permeable stretching/shrinking sheet embedded in a porous medium.The momentum and concentration equations were transformed into a set of S = -2.0,-1.0, 0.0, 1.0, 2.0 Fig. 18 Velocity profiles for some values of S for shrinking sheet g( ) coupled nonlinear differential equations using similarity transformations and solved numerically by Matlab bvp4c package.We discussed the effects of the governing parameters on the fluid flow and concentration characteristics.A new feature that emerges from our results It is found that these solutions terminate at = 0 with values given in Table 1.The concentration profiles g(g) appear to be similar in shape for different values of , K and Ks.There is an excellent correlation between previous literatures and the present study.
the forced convection flow towards the stagnation point on a static surface, number.It is assumed that the diffusion coefficients of chemical species A and B to be of a comparable size.This argument provides us to make further assumption that the diffusion coefficients DA and DB are equal i.e.
) were reduced to a system of first-order differential equations and solved using a MATLAB boundary value problem solver called bvp4c.This program solves boundary value problems y f x y p a x b , by implementing a collocation method subject to general nonlinear, two-point boundary conditions
Fig. 1
Fig. 1 Velocity profiles for some values of
Fig. 3 Fig. 4 Fig. 7 Sv
Fig. 3 Velocity profiles for some values of M
Fig. 10 Fig. 8 Fig. 9
Fig. 10 Concentration profiles for some values of K for shrinking sheet
Fig. 13 Fig. 15 Fig. 16 Fig. 17
Fig. 13 Concentration profiles for some values of Sc for stretching sheet
0
,Ishak et al. (2010),Rosali et al. (2012)] and the present study of the comparison of (0)f for several values of in the absence of micropolar parameter, magnetic parameter, permeability parameter, suction parameter and the slip velocity parameter for a stretching sheet.This investigation confirms that the existence and uniqueness of solution depends on the stretching/shrinking sheet parameter.Represents the forced convection flow towards the stagnation point on a static surface.It is clear that the skin friction is a decreasing function of .All values of the skin friction coefficient are positive for <1, while they are negative when >1.Physically, the negative values of the skin friction coefficient correspond to the surface exerting a drag force on the fluid and the opposite sign implies the inverse phenomenon.The skin friction coefficient is zero when = 1 regardless of the values of other parameters.This is because for = 1, there is no shear stress at the surface as the surface and fluid move with the same velocity.
1.88731 -1.887307 -1.887307 -1.941163 -1.887307 -1.941163 3.0 --4.276541 -4.276541 -4.260253 -4.276541 -4.260253 4.0 - . Here n = 0 represents the strong concentration and n = 1 represents the turbulent boundary layer flow.The case n = 1/2 indicates the vanishing of the anti-symmetrical part of the stress tensor and denotes weak concentration which is the case considered in the present study.
Table 1
Comparison of
Table 2
Comparison of (0)f for several values of in the absence of micropolar parameter, magnetic parameter, permeability parameter, suction parameter and the slip velocity parameter
Table 3
The values of skin friction coefficient and dimensionless concentration for various values of Kp, M, D, Sc, , | 6,293 | 2017-01-25T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Chemistry",
"Physics"
] |
Exploiting Vitamin D Receptor and Its Ligands to Target Squamous Cell Carcinomas of the Head and Neck
Vitamin D (VitD) and its receptor (VDR) have been intensively investigated in many cancers. As knowledge for head and neck cancer (HNC) is limited, we investigated the (pre)clinical and therapeutic relevance of the VDR/VitD-axis. We found that VDR was differentially expressed in HNC tumors, correlating to the patients’ clinical parameters. Poorly differentiated tumors showed high VDR and Ki67 expression, whereas the VDR and Ki67 levels decreased from moderate to well-differentiated tumors. The VitD serum levels were lowest in patients with poorly differentiated cancers (4.1 ± 0.5 ng/mL), increasing from moderate (7.3 ± 4.3 ng/mL) to well-differentiated (13.2 ± 3.4 ng/mL) tumors. Notably, females showed higher VitD insufficiency compared to males, correlating with poor differentiation of the tumor. To mechanistically uncover VDR/VitD’s pathophysiological relevance, we demonstrated that VitD induced VDR nuclear-translocation (VitD < 100 nM) in HNC cells. RNA sequencing and heat map analysis showed that various nuclear receptors were differentially expressed in cisplatin-resistant versus sensitive HNC cells including VDR and the VDR interaction partner retinoic acid receptor (RXR). However, RXR expression was not significantly correlated with the clinical parameters, and cotreatment with its ligand, retinoic acid, did not enhance the killing by cisplatin. Moreover, the Chou–Talalay algorithm uncovered that VitD/cisplatin combinations synergistically killed tumor cells (VitD < 100 nM) and also inhibited the PI3K/Akt/mTOR pathway. Importantly, these findings were confirmed in 3D-tumor-spheroid models mimicking the patients’ tumor microarchitecture. Here, VitD already affected the 3D-tumor-spheroid formation, which was not seen in the 2D-cultures. We conclude that novel VDR/VitD-targeted drug combinations and nuclear receptors should also be intensely explored for HNC. Gender-specific VDR/VitD-effects may be correlated to socioeconomic differences and need to be considered during VitD (supplementation)-therapies.
Introduction
In the last three decades, there have been tremendous attempts to undercover the role of vitamin D (VitD) in the prevention, prognosis, and treatment of cancer. Unfortunately, the results have been contradictory, and until now, no general recommendations or standard treatment options considering VitD for cancer patients exist [1,2]. However, the majority of the observational studies supported a benefit of higher vitamin D intake concerning the reduction in cancer incidence (e.g., colon and breast cancer) [3,4]. Other studies showed a correlation between high serum VitD levels and lower cancer risk [5,6]. Nevertheless, for the nuclear receptors are able to activate various transcriptional programs [3,4,[30][31][32]34,36]. Importantly, for VDR, anti-tumoral effects have been already suggested [3,4,32,34,37,38].
Hence in this study, we investigated the (pre)clinical and potential therapeutic relevance of the VDR/VitD-axis to assess the association between the VitD level and VDR expression for HNC. Aside from analyzing the HNC dataset of The Cancer Genome Atlas (TCGA), a case-control study was analyzed. To mechanistically uncover VDR/VitD's pathophysiological relevance, we further combined the evaluation of clinical data with comprehensive dry and wet lab systematic studies of innovative HNC cell models. Besides the use of the 2D tumor cell model, there is increasing evidence that advanced 3D tumor spheroids react differently compared to conventional 2D cultures when exposed to drugs, radiation, or signaling ligands [39][40][41][42]. Hence, we established a 3D cell culture model aiming to approach the tumor situation in vivo. In comparison to the 2D culture systems, 3D spheroids exhibit a number of advantages, for example, they mimic a more realistic 3D architecture of a tumor including the supply of nutrients, oxygen, and anti-cancer drug. Another advantage is the development of polarity in the spheroid culture due to neighboring cell-to-cell contacts [39,40]. Collectively, cells in 3D tumor spheroids seem to preserve key morphological and signaling patterns closely associated with tumor development and drug resistance in animal models and patients [39][40][41][42].
VDR Expression and VitD Levels Correlate with HNC Patients' Clinical Parameters
As knowledge of the VDR/VitD-axis for HNC is limited, we first investigated the VDR expression and VitD serum levels in a cohort of newly diagnosed HNC patients (n = 40) compared to the healthy individuals (n = 40) (details see Table 1, Supplementary Tables S3 and S4). The most common site of occurrence was the tongue (60%) and the least was the lip (Figure 1a). Notably, regarding gender, there was a significant difference in the male-to-female ratio (Figure 1b, Supplementary Tables S3 and S5) which is often observed in the Middle East and North Africa (MENA) region [43][44][45][46]. Histopathologically, the most common differentiation subtype was moderately differentiated HNC (60%, n = 24; Figure 1c, Supplementary Table S5). Table 1. Comparison between the two studied groups, according to different parameters. Chi-square test U: Mann-Whitney test, t: Student t-test, p: p-value for comparing between the two groups *: Statistically significant at p ≤ 0.05. (23)(24)(25)(26)(27)(28)(29)(30)(31)(32)(33)(34)(35)(36)(37)(38)(39)(40) In order to correlate the VitD serum levels with VDR expression in the tumor tissues, peripheral blood samples were taken from the patients before or during surgery. Total serum VitD was quantified by using fully validated, modified highperformance liquid chromatography (HPLC) [47]. The VitD serum levels were lowest in patients with poorly differentiated cancers (4.1 ± 0.5 ng/mL), increasing from moderate (7.3 ± 4.3 ng/mL) to well-differentiated (13.2 ± 3.4 ng/mL) (Figure 1d, Table 2). The mean serum VitD level was 7.4 ± 4.5 ng/mL in cancer patients in comparison to 28.7 ± 4.6 ng/mL in healthy individuals ( Figure 1e, Table 2). Notably, females showed higher VitD insufficiency compared to males, correlating with poor tumor differentiation (Table 1). Table 2. Overview of obtained results (shown in Figure 1) summarizing the histopathological differentiation, VitD serum levels, and VDR/Ki67 expression in the cancer patients and healthy controls. VDR expression in the cancer tissue was inversely correlated with the VitD serum levels. High VDR expression occurred in poorly differentiated, highly proliferative tumor tissues. Additionally, the VDR protein expression was analyzed by immunofluorescence and immunohistochemical staining in tumor biopsies classified as poorly, moderately, and welldifferentiated (Figure 1g-i). Here, a significant inversely proportional correlation between the VitD levels and VDR expression was found ( Figure 1f). As shown in Figure 1g-i, all studied cases showed immunofluorescence reactivity to the VDR antibody with varying intensities. Moreover, we found that the VDR levels correlated with the patients' clinical and pathobiological tumor parameters. Particularly, poorly differentiated tumors showed high VDR and Ki67 expression (Figure 1g), whereas VDR and Ki67 levels decreased from moderate to well-differentiated tumors (Figure 1h,i). (f) VDR expression was inversely correlated with the VitD serum levels. (g-i) Staging of the HNSCC cases according to UICC (8th edition). (g) The most common stage was S-I with 35%, followed by S-II (30%), and S-III and S-IV with 22.5% and 12.5%, respectively. (h,i) Quantification and correlation of the VitD serum levels (h) and corresponding VDR expression (i). (j-l) Tumor size classification of HNSCC cases according to UICC (8th edition). (j) The most common subtype was T 1 followed by T 2 with 45% and 37.5%, respectively. Only a few cases were classified as T 3 (10%) and T 4 (7.5%). (k,l) Quantification and correlation of the VitD serum levels (k) and corresponding VDR expression (l). (m-o) High VDR expression occurred in poorly differentiated, highly proliferative tumor tissues. Expression of VDR and Ki67 was determined by immunofluorescence (IF) and immunohistochemical (IHC) staining of the tumor biopsies classified as poorly (m), moderately (n), and well-differentiated (o). IF staining of VDR (green) was visualized by confocal laser scanning microscopy, and the intensity of fluorescence (mean area percent, MA%) was measured using ImageJ (shown in (f)). Cells at higher magnification are included in the IHC image overviews. Representative examples are shown. Tissues were stained with H&E and specific Abs as indicated. Scale bars, 50 µm/12.5 µm (magnifications). Statistical significance is represented in figures as follows: * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001. A p value that was less than 0.05 was considered statistically significant.
Clinical Relevance of VitD Receptor (VDR) and Retinoid X Receptor Alpha (RXRα) Expression in HNC Patients
To independently confirm the relevance of VDR expression in the HNC patients, we bioinformatically analyzed the PANCAN dataset acquired from The Cancer Genome Atlas (TCGA), encompassing more than 12,000 samples of cancer patients of various entities and clinical backgrounds. Moreover, upon binding of its ligand VitD and nuclear translocation, VDR is also able to form heterodimers with the retinoid X receptor (RXR), thereby activating various cancer-relevant transcriptional programs (Supplementary Figure S1) [23,25,32,34,[48][49][50]. As VDR/RXR expression has not been studied for HNC, we also studied the expression of RXR in the datasets.
We found VDR overexpressed in the primary tumors. Comparing the different entities, the highest expression of VDR was found in rectal and colon adenocarcinoma and kidney cancer, directly followed by HNC (Supplementary Figure S2), supporting our conclusions obtained from the analyses of our cohort (see also Figure 1).
Thus, in the second step, we focused on the analysis of the TCGA HNC cohort (n = 604) showing upregulation of VDR in tumor versus non-tumor tissues (Figure 2a, n = 564, p = 0.0059 **). Interestingly, RXRα expression showed no correlation with the disease markers (Figure 2b, n = 520 p = 0.4931). VDR expression highly correlated with the histological differentiation of the tumor, in contrast to the RXRα levels ( Figure 2c,d, n = 540, p = 0.0002 ***/p = 0.0056 **). Since the HPV status affects the therapy outcome and prognosis of HNC patients, we analyzed HPV-negative versus HPV-positive patients. VDR expression was significantly increased in HPV-negative HNC patients (Figure 2e, n = 114, p < 0.0001 ****). Again, changes in the RXRα levels were less significant (Figure 2f; n = 114; p < 0.0108). Moreover, high VDR expression correlated with perineural invasion (Figure 2g, n = 393, p = 0.0006 ***) in contrast to the RXRα levels ( Figure 2h, n = 393, p = 0.4154), underlining again the relevance of VDR but not of RXRα as a biomarker and/or therapeutic target for HNC. p = 0.0002 ***/p = 0.0056 **). Since the HPV status affects the therapy outcome and prognosis of HNC patients, we analyzed HPV-negative versus HPV-positive patients. VDR expression was significantly increased in HPV-negative HNC patients (Figure 2e, n = 114, p < 0.0001 ****). Again, changes in the RXRα levels were less significant (Figure 2f; n = 114; p < 0.0108). Moreover, high VDR expression correlated with perineural invasion ( Figure 2g, n = 393 p = 0.0006 ***) in contrast to the RXRα levels ( Figure 2h, n = 393, p = 0.4154), underlining again the relevance of VDR but not of RXRα as a biomarker and/or therapeutic target for HNC. Overexpression of VDR, but not RXRα was found in the primary tumors versus normal tissue (a,b) and correlates with (c) but to a lesser extent in RXRα. VDR expression correlates with tumor differentiation (c), negative HPV status (e), and perineural invasion (g). For all of the studies' clinical parameters, RXRα showed less or no significant correlations compared to VDR (d,f,h). Significance p values and sample size (n) are indicated. Statistical significance is represented in figures as follows: * p < 0.05, ** p < 0.01 and *** p < 0.001. A p value that was less than 0.05 was considered statistically significant.
Nuclear Receptor Profiling and Translocation Kinetics in HNC Cells
It is accepted by the field that the superfamily of nuclear receptors are key regulators in many pathologies including cancer [32][33][34]. Thus, we used 'omics' approaches to profile nuclear receptor expression and the potential pathobiological relevance in HNC tumor cell models. As HNC treatment is often complicated by recurrence due to resistance to cisplatin-based treatments, we analyzed the chemoresistant HNC cells. The cisplatinresistant cell line, Pica res , was established by selecting HNC Pica cells with sub-toxic concentrations of cisplatin (3-5 µM) for six months. Hence, Pica wt and Pica res allow for the comparison of cisplatin-sensitive and resistant HNC cells. Here, next-generation RNA sequencing transcriptomics was used to analyze the expression of various nuclear receptors (Figure 3a, Supplementary Table S6). As illustrated in the heat map analysis (Figure 3a; green: downregulated, red: upregulated), VDR and several other receptors such as Nuclear Receptor Subfamily 4 Group A Member 2 (NR4A2) or RXRα were differentially expressed in therapy-resistant (res) versus sensitive (wt) Pica cells. These data also suggest investigating the pathobiological relevance of other nuclear receptors for HNC in comprehensive followup studies.
When studying the impact of nuclear receptors, it is also the key to control if the respective receptor is expressed and indeed capable of cytoplasmic to nuclear trafficking upon ligand binding in the relevant cell model. Nuclear translocation is required to activate ligand-dependent transcriptional programs [3,4,[30][31][32]34,36]. The activation of VDR by ligand binding typically involves VDR-RXRα dimerization and the initiation of downstream signaling (Supplementary Figure S1). The immunofluorescence staining of endogenous VDR and RXR receptors demonstrated that VitD triggered nuclear accumulation of the receptors, in contrast to retinoic acid (RA) treatment alone (Figure 3b). When referring to VitD, the active form calcitriol (1,25(OH)2D3) was used if not indicated otherwise. Hence, although both receptors are capable of cytoplasmic to nuclear trafficking, VitD and VDR seem more relevant in HNC cells. We also confirmed and quantified the VDR expression in different HNC cell lines (Figure 3c,d). Statistical significance is represented in figures as follows: * p < 0.05, ** p < 0.01, and **** p < 0.0001. A p value that was less than 0.05 was considered statistically significant.
To further study the kinetics of VDR translocation in real-time, we established HNC cell lines stably expressing VDR fused to GFP. Therefore, the VDR reading frame was cloned from primary HNC tumor cells and stably expressed VDR-GFP in the HNCUM-02T or HNC FaDu cells (Figure 3e). An important question regarding VDR's nuclear translocation is the determination of the most effective ligand dose and the time kinetics of the process. Using the high-content screening microscopy platform, Array Scan VTI, we automatically quantified VDR translocation. Here, cells were treated with different clinically relevant doses of VitD (0-100 nM) for 30 min (Figure 3f,g). Fluorescence microscopy showed dosedependent VDR translocation into the nucleus by VitD, which was most effective at a VitD concentration of 100 nM. Importantly, RA alone did not trigger the nuclear translocation of VDR (Supplementary Figure S3).
VitD/VDR Targeting Synergistically Improves Cisplatin-Mediated Killing of HNC Tumor Cells
Chemoresistance is not only one of the main causes influencing cancer progression, but it is also strongly correlated to the cancer mortality rates. Hence, developing strategies for enhancing chemo sensitivity, potentially also by functional food supplementation with VitD, is expected to benefit patients. Indeed, such efforts have been made to correct VitD deficiency in cancer patients [11,12,51,52]. However, the success of VitD/VDR targeting therapies requires mechanistic knowledge and experimental investigation in vitro.
To examine the effect of combination therapy on HNC, we thus measured the cell viability after the VitD/cisplatin treatments. To also mimic the pathophysiological conditions of high and low VitD serum levels, cells were seeded in the presence or absence of 100 nM VitD, which we found to trigger efficient VDR nuclear translocation, and thus biological activation (see Figure 3f,g). After 24 h of VitD pre-treatment, cells were additionally treated with physiological concentrations of VitD (100 nM), 15-20 µM cisplatin, or a combination ( Figure 4). As expected, VitD alone did not affect cell viability. However, the combination treatments significantly enhanced tumor cell death compared to cisplatin alone in the three HNC cell lines tested (Figure 4a; Supplementary Figure S4). To objectively uncover a potential synergistic effect of VitD/cisplatin combinational treatments, we performed the Chou-Talalay method. The calculation of the combination index (CI) using the Chou-Talalay algorithm allowed us to uncover additive (CI = 1), synergistic (CI < 1), or antagonistic effects (CI > 1) of the drug combinations [53]. As shown in Figure 4b-d, all calculated indices were less than 1, revealing a synergistic effect on tumor cell killing for the VitD/cisplatin combinations in the tested HNCUM 02T, FaDu, and Pica cell lines.
Impact of VitD/VDR Targeting on HNC 3D Tumor Spheroids
Conventional 2D tumor cell models are well-established tools to assess various aspects of tumor pathobiology. However, there is increasing evidence that advanced 3D tumor spheroids react differently compared to conventional 2D cultures when exposed to drugs, radiation, or signaling ligands [39,40]. The architecture of spheroids leads to a gradient of nutrition and oxygen from the outer surface to the core, and drug delivery to parts of the 3D cell cluster also seems to differ. Additionally, cells in 3D tumor spheroids seem to preserve certain distinct signaling patterns that are closely associated with drug resistance in animal models and patients.
In order to closely approach the tumor situation in vivo, we next established HNC 3D tumor spheroids to investigate the effects of VitD/VDR targeting in an experimental setting, more closely mimicking the patients' tumor microenvironments. Here, cells were cultured in ultra-low adhesion cell culture vials that promoted the formation of 3D spheroidshaped tumor cell clusters. As summarized in Figure 5a, various pathobiological relevant properties of the established 3D spheroids were subsequently analyzed by fully automated high-content microscopy, allowing for an objective assessment of the tumor spheroids' growth, morphology, and vitality.
First, we found that the synergistic killing effect of the VitD/cisplatin combinations observed in the 2D cultures was also relevant for the 3D spheroids (Figure 5b). Cotreatment significantly reduced the mean objective area and viability (Figure 5b,c). Interestingly, although VitD alone did not affect the vitality of the 2D cultures, it already affected the 3D spheroid formation and induced morphological and architectural changes. As shown in Figure 5b-e and Supplementary Figure S5, automated high-content microscopy revealed that spheroid formation and growth were significantly impaired, suggesting that the expression of epithelial surface markers may be reduced. Notably, the effect was more prominent for the cisplatin-resistant cell line Pica res (Figure 5d,e, Supplementary Figure S5), although the molecular details are not known. In conclusion, these data uncover a novel effect of VitD and also demonstrate that 3D tumor spheroids are a valuable experimental tool to uncover aspects of tumor pathobiology potentially occluded in conventional 2D tumor cell models.
VitD Enhances the Chemotherapeutic Effect via mTOR-PI3K/Akt Downregulation in HNC
To further investigate how VitD or VitD/cisplatin combinations inhibit the proliferation and clonogenic survival of HNC cells, we examined the cancer-relevant signaling pathways. First, bioinformatics analyses employing the Ingenuity Pathway Analysis software (Version v01-04) revealed multiple molecular mechanisms involved in cancer pathogenesis and treatment resistance (Supplementary Figure S6). Subsequently, we focused on potential VDR-RXR activation pathways (Supplementary Figure S1) and further explored the literature [54][55][56][57]. As summarized in Figure 6a,b, VitD has been suggested to regulate several pathways including the cancer-relevant mTOR/PI3K-Akt pathways. Here, key regulatory proteins are (in)directly affected by VitD overlap such as the Akt kinase (Figure 6a,b). Under 'healthy' conditions, the mTOR/PI3K-Akt pathways are important players in development, cellular homeostasis, and health control.
However, in cancer, abnormally activated mTOR/PI3K-Akt signaling stimulates tumor cells to grow, metastasize, and become resistant to treatment [54][55][56][57][58]. Notably, when we examined the impact of VitD and VitD/cisplatin treatment combinations in HNC cell models, we found that expression of the active, phosphorylated forms of mTOR and Akt (i.e., of pmTOR and pAkt) was particularly decreased upon VitD/cisplatin cotreatment (Figure 6c,d). No significant reduction in pmTOR and pAkt was detected upon cisplatin treatment alone. These findings not only provide a potential molecular explanation for the enhanced cisplatin-killing effect on the cancer cells by VitD, but also suggest the further experimental exploitation of additional cotreatment combinations such as using mTOR and Akt inhibitors in combination with VitD. cused on potential VDR-RXR activation pathways (Supplementary Figure S1) and further explored the literature [55][56][57][58]. As summarized in Figure 6a,b, VitD has been suggested to regulate several pathways including the cancer-relevant mTOR/PI3K-Akt pathways.
Here, key regulatory proteins are (in)directly affected by VitD overlap such as the Akt kinase (Figure 6a,b). Under 'healthy' conditions, the mTOR/PI3K-Akt pathways are important players in development, cellular homeostasis, and health control.
Discussion
The VDR/VitD-axis has been intensively investigated for more than a decade for the prevention and/or treatment of many cancers. Such (pre)clinical studies range from VitD food supplementation and cancer-prevention trials to different combination therapies [3,4,32,34,37]. Indeed, various anti-tumoral effects have been suggested for this member of the nuclear receptor superfamily, and VitD deficiency is often observed in cancer patients [21,22,59,60]. However, the underlying mechanisms of the VitD/VDRmediated effects are not understood in detail and sometimes conflicting reports underline that its role, especially in specific cancer types, remains to be dissected [3,4,34,37].
Our clinical and experimental data support a significant role of the VDR/VitD-axis in the prognosis and clinical outcome of HNC patients. First, by analyzing our cohort of n = 40 HNC patients compared to healthy controls (n = 40), we demonstrated that both the VitD serum levels and VDR expression correlate with clinical parameters such as histopathological tumor classification. Although we could not provide specific data on patient prognoses such as survival curves, in general, the HNC patients' overall survival correlates with histopathological differentiation of the tumor (see Supplementary Figure S7). Our finding that patients with poorly differentiated tumors and thus poor prognosis exhibited the lowest VitD levels is in line with previous studies of other entities. For example, Yao et al. found that low serum 25OHD levels at diagnosis were associated with poorer survival and worse prognosis in breast cancer patients [61]. Additionally, there have been studies observing an inverse relationship between cancer mortality and serum VitD level [59,62], suggesting that VitD supplementation therapy was most effective in patients with VitD deficiency at diagnosis [62]. However, in contrast to other clinical studies, we here paid attention to recruiting an age-sex-matched control group of non-cancer patients, allowing us to draw conclusions about a potential (gender-specific) correlation between the serum VitD level and HNC. This study's confinement of cases to 40 patients due to the complexity of the subject matter could be seen as a potential limitation. It also has to be mentioned that the study cohort includes tumors of different sites such as the tongue and lip, which can differ in their prognosis. Nevertheless, the cohort is suitable to represent the commonly observed distribution of subsites and histopathological differentiation.
Of note, our study cohort was recruited in Egypt, exhibiting socio-economical characteristics, which we feel worth discussing. First, the study cohort differed in its gender composition from typical Northern European and American study groups because it consisted mainly of women (male-female ratio 1:4). This is often observed in the Middle East and North Africa (MENA) region [43][44][45][46], which among other factors such as increased smoking [63,64] could be explained by differences in VitD supply. A normal VitD supply is defined as when the 25(OH)D serum concentrations ranged between 30 and 50 ng/mL, whereas levels <20 ng/mL were classified as VitD deficiency [65,66]. The mean level of serum VitD (25(OH)D) in the healthy population differs depending on the geographical residence, whereas mean VitD levels in adults in North America, Asia Pacific, and Europe range between 20.4 and 28.9 ng/mL, and thus could be classified as insufficient, but not yet deficient. Interestingly, in the Middle East and North Africa region (MENA) the mean VitD levels seem to be significantly lower with 13.6-15.2 ng/mL (applies to the same age group, does not take differences in sunshine duration into account) [67]. Different reasons may explain lower VitD levels in the MENA region such as increased air pollution, reducing the amount of UVB rays available for VitD production in the skin [17,68,69]. Another explanation could be a physiological de-toxification mechanism of VitD, which is activated after longtime sunlight exposure to prevent the toxic effects of very high VitD levels in the human body [15,16]. While the analyzed healthy patients of our study cohort lay above the statistic MENA value with a mean VitD concentration of 28.7 ng/mL, the HNC patients exhibited very low VitD levels (mean 7.4 ng/mL), classified as severe VitD deficiency (<12 ng/mL). Here, especially the female patients exhibited very low VitD levels (5.3 ng/mL), which is supported by other studies describing the female gender as a risk factor for hypovitaminosis [67]. Aside from the general reasons for VitD deficiency in the MENA region described above, additional circumstances such as veiling and/or reserved clothing style, lower socio-economic standard, and predominant indoor activity may contribute to VitD deficiency in women [67,70,71]. These factors come along with the lack of awareness about the importance of VitD to the human body [67]. Assuming a significant role of VitD in the pathogenesis of HNC, this could partly explain the increased incidence of HNC in females. Such a correlation has also been suggested for colorectal cancer. Here, the VitD levels were inversely proportional to the risk of cancer in women, but not statistically significant in men [24]. Furthermore, it has been proposed that VitD supplementation could be protective against breast cancer in menopausal women, underlining its effect on tumorigenesis [72]. Again, for the gender-specific conclusions also drawn from our study, the restricted sample size of n = 34 females should be considered, suggesting further larger studies focusing on the gender-specific relevance of VitD in HNC.
Since VitD executes its biological functions via nuclear receptor binding, we analyzed the clinical relevance as well as expression and ligand-dependent activation of VDR and its heterodimerization partner RXRα. Here, we showed that VDR, but not RXRα, was significantly overexpressed in the primary HNC patients, which also correlated with clinically relevant disease markers such as HPV status, perineural invasion, and histopathological differentiation. However, there are some conflicting studies about the clinical relevance of VDR overexpression for tumorigenesis [73][74][75]. For example, Choi et al. correlated the VDR overexpression with negative prognosis in thyroid cancer [73], supporting our data showing that VDR overexpression in poorly differentiated, highly proliferative tumor tissue. Other studies have correlated the high VDR expression with an improved prognosis of patients [74][75][76]. RXRα expression and its clinical relevance in HNC have also been controversially discussed. RXR agonists such as bexarotene can benefit HPV-negative HNC patients [77]. Bexarotene combination therapy was also effective in a preclinical trial [78]. For breast cancer, there are studies demonstrating a concurrent overexpression of VDR and RXRα [79], partially also describing a worse disease-free survival when RXRα is overexpressed [80][81][82]. Hence, RXR might be worth investigating in future experimental and clinical VitD/VDR studies in general.
Chemoresistance is a major cause of cancer progression and impacts the mortality of cancer patients, particularly for HNC [9, 10,39,83]. Hence, developing strategies for enhancing chemosensitivity, potentially also by food supplementation with VitD, is needed and may benefit patients. Indeed, such efforts have been made to correct VitD deficiency in cancer patients [11,12,51,52]. Through our comprehensive in vitro studies applying established 2D as well as 3D spheroid HNC cell models, we could show that VitD treatment improves chemotherapeutic killing, especially of therapy-resistant HNC tumor cells, suggesting VitD supplementation during the primary (radio)chemotherapy of HNC patients. Of course, the serum VitD levels of respective patients should be carefully monitored during therapy, and other clinically relevant factors also have to be considered. Previous observational studies and clinical trials have partially reported improved survival of cancer patients after VitD supplementation, but the findings are not conclusive yet, and further studies combining clinical with wet lab investigation are needed [84].
Our nuclear receptor profiling by next-generation RNA sequencing transcriptomics provides the first data suggesting that other nuclear receptors may also be relevant for cisplatin-chemoresistance in HNC. Furthermore, VDR or RXRα investigated here, differentially expressed receptors such as Nuclear Receptor Subfamily 4 Group A Member 2 (NR4A2) seem to be relevant for various aspects of HNC pathobiology including HPV status and mTOR/Akt signaling, underlining the value of our datasets [85][86][87]. Due to the complexity of this area, we did not explore other nuclear receptors in this study, which might be considered as a potential limitation. Hence, we refer the reader to the literature regarding the specific receptor of interest. We conclude that the data provided here may stimulate the field to further explore the relevance of the nuclear receptor superfamily for therapy resistance in HNC.
In cancer, abnormally (de)activated signaling pathways such as mTOR/PI3K-Akt and NFκB signaling stimulate tumor cells to proliferate aggressively, metastasize, and become even more resilient to therapy [54][55][56][57][58]. Here, we found that the VitD/VDR-axis enhances the chemotherapeutic effect via mTOR-PI3K/Akt downregulation in HNC. The potential relevance of the Akt-and mTOR pathways in VitD/VDR signaling is supported by reports in other tumor types [56,88]. It has to be mentioned that VitD executes its biological functions by various cellular pathways, and thus is likely that additional proapoptotic pathways may contribute to cancer-associated VitD effects. Of note, bioinformatic modeling and predictions, as performed in our study, will aid in hypothesis building, but detailed investigations are needed to confirm the candidates' relevance. Our findings not only suggest an additional molecular mechanism for the observed beneficial effects of VitD supplementation, but also suggest further exploitation of additional cotreatment combinations such as using mTOR and Akt inhibitors (e.g., ICSN3250, LY3023414, AZD8055, or rapamycin) [58,89]. However, these preliminary results give the first molecular evidence for further co-treatment options, and detailed analyses have to be performed in future (pre-)clinical studies.
Collectively, we can conclude that novel VDR/VitD-aided drug combinations should be intensely investigated in (pre)clinical studies. Here, gender-specific VDR/VitD-effects impacted by country-specific socioeconomic differences may need additional attention. Moreover, nuclear receptors should be further explored not only for breast or colon cancer, but also for HNC.
Study Population
The investigation was conducted following the ethical standards according to the Declaration of Helsinki of 1975 and according to the local, national, and international guidelines. Tissue samples were obtained from patients undergoing surgical resection of HNC at the Department of Oral and Maxillofacial Surgery at the Faculty of Dentistry of Alexandria University from December 2017 to November 2018. In that period, the cases were consecutively enrolled in the study. The study protocol was approved by the local ethics committee (#0008839) after obtaining the patient's informed consent to participate in the study and was processed anonymously. Patients undergoing simultaneous chemo-or radio-treatment before or during the surgery were excluded from the study. All cases were diagnosed histopathologically as HNC and staged according to the TNM classification of malignant tumors recommended by the 'Union International Contre le Cancer UICC (8th edition). All experiments were performed in accordance with the relevant laws and the Alexandria University Guidelines and approved by the institutional ethics committee at the Faculty of Dentistry, Alexandria University. In this study, tumor specimens and corresponding non-malignant tissue were analyzed, different tumor sizes (T1-T4), lymph node status (N0-2), and grading G1-G3. Upon resection, samples were immediately fixed in formaldehyde. Histological analyses were performed to ensure that each specimen contained >70% tumor tissue and <10% necrotic debris. Samples not meeting these criteria were rejected. Specimens were handled as usual (i.e., paraffin-embedded, sectioned, and H&E staining). The H&E stain was implemented by staining the specimens with Harris' hematoxylin as described [83,90]. The interpretation was performed by oral pathologists at Alexandria University. Peripheral blood samples were taken from patients before or during surgery. The total serum 25-hydroxyVitD concentration (sum of D3 & D2 forms) is regarded as the best single marker of VitD status in the human body. Total serum VitD (25-hydroxyVitD3) was quantified by using fully validated, modified high-performance liquid chromatography (HPLC) [47]. Publicly available gene expression and survival datasets were obtained from The Cancer Genome Atlas (TCGA) Research Network (http://cancergenome.nih.gov/, accessed on 1 October 2022), filtering for patients with HNCs (TCGA HNC). Of note, the expression values were not detectable for all genes of interest for every patient in the TCGA database. Here, VDR and RXR expression was found in n = 604 patients and analyzed as described [39]. Data were assessed via the USCS Xena server and patients were grouped according to the indicated phenotypic or clinical characteristics as described [37].
Cell Culture
Authenticated and characterized cell lines FaDu and SCC-4 were purchased from the ATCC repository, expanded, stocks prepared at early passages, and frozen stocks kept in liquid nitrogen. SCC-4 cells were established from a tongue squamous cell carcinoma. HNCUM-01T and -HNCUM-02T were established from tongue squamous cell carcinoma as described by Welkoborsky et al. [91]. The Pica cell line was established from laryngeal squamous cell carcinoma and maintained as described [39]. The FaDu cell line was established from a hypopharyngeal squamous cell carcinoma [92]. Thawed cells were routinely monitored by visual inspection and growth-curve analyses to keep track of the cell-doubling times, and were used for a maximum of 20 passages for all experiments. Depending on the passage number from purchase, cell line authentication was further performed at reasonable intervals by short tandem repeat (STR) profiling. We cultured the HNCUM-01T, HNCUM-02T, and SCC-4 cells in Dulbecco's modified Eagle's F-12 medium. Pica and FaDu cells were cultured in Dulbecco's modified Eagle's medium. We added 10% fetal bovine serum (FBS), and 1% penicillin-streptomycin to all medium types. Cells were cultured under a 5% CO 2 atmosphere at 37 • C and subcultured every 3 days as described [39]. We checked the absence of mycoplasma regularly via the Venor GeM Advance Detection Kit (Minverva Biolabs, Berlin, Germany) according to the manufacturer's instructions. The cell numbers were determined using Casy Cell Counter and Analyzer TT (OMNI Life Science GmbH & Co KG, Bremen, Germany). To treat the cells, Hy-clone fetal bovine serum (FBS) (Sigma Aldrich, Munich, Germany) was used instead of standard FBS to ensure the absence of VitD in the controls and the control VitD treatment doses in the treated samples.
Generation of Cisplatin Resistant Cell Model
We generated constantly selected cell lines by treatment with sub-toxic doses of cisplatin corresponding to IC90 (5 µM) and then constant treatment (3 µM). We used the resistant cell line for experiments 6 months after constant exposure to cisplatin and the re-establishment of relatively regular proliferation.
Cell Viability Assays
To probe cell viability, we seeded the cells in 96-well plates (5000 to 15,000 cells/well) depending on the cell line and the treatment duration and treated them with the indicated substances and concentrations (n = 3) starting 24 h after seeding. After 48/72 h treatment, we performed a commercially available assay CellTiter-Glo ® 2.0 (Promega, Walldorf, Germany) according to the manufacturer's instructions and recorded the luminescent signals using a Tecan Spark ® (Tecan Group Ltd., Männedorf, Switzerland). Later, we normalized the signals to the untreated control samples.
In order to objectively determine the pharmacological effect of the proposed drug combination, we used the combination index equation described by Chou-Talalay [53]. In this algorithm, synergy is defined as combination index values < 1.0, antagonism as values > 1.0, and additivity as a value = 1.0.
Fluorescence Microscopy
Fluorescence images were acquired, analyzed, and quantified using an Axiovert 200 M fluorescence microscope (Zeiss, Oberkochen, Germany) or an automated highcontent screening microscope Array Scan VTI (Thermo Fisher, Dreieich, Germany) as described [39,93,94]. We seeded cells in microscopic dishes (35 mm, MatTek, Ashland, MA, USA) or clear-bottom 96-well plates (Greiner, Kremsmünster Austria) and fixed them with 4% PFA (20 min, RT). For immunofluorescence staining, we additionally permeabilized the cells via incubation with Triton-X 100 (0.1%, 10 min, RT). Antibodies were diluted in 10% FBS/PBS and incubated with samples for 1 h at RT. We washed the cells (n = 3) in PBS and then incubated the samples with fluorophore-labeled antibodies for 1 h at RT. Finally, we stained the nuclei by adding Hoechst 33342 (50 ng/mL in PBS) for 30 min at RT. For automated high-content screening, regions of interest were created using the nucleus signal and each sample was acquired in triplicate, imaging at least 5000 events per sample according to [39].
RNA Sequencing and Visualization
RNA sequencing was then performed as described in [95] and the visualizations were achieved with the help of GraphPad Prism. Ingenuity Pathway Analysis (Qiagen, Hilden, Germany) was used to visualize the mTOR and PI3K-AKT signaling pathways.
Plasmids and Transfection
To construct a VDR expression plasmid, cDNA was isolated out of the HNC cancer cell lines, and the full open reading frame of human VDR cDNA was cloned into the pcDNA3.1 mammalian expression vector (Invitrogen, Karlsruhe, Germany) with the C-terminal GFPtag (for primer sequences, please see Supplementary Table S2). Colony PCR was performed to check for positive clones [93,94,96].
For cellular transfection, plasmid DNA and Lipofectamine 3000 (Fisher Scientific, Schwerte, Germany) were mixed according to the manufacturer's instructions and added to the cells, which were cultured in Opti-MEM medium as described [97].To mark VDRexpressing cells, plasmid pC3 coding for GFP expression was co-transfected. To exclude artifacts, a control transfection of empty plasmid pC-DNA3 and the GFP-coding plasmid was conducted in parallel. The medium was changed 5 h post-transfection to a normal cell culture medium. We confirmed the VDR overexpression of cell lines via Western blot analysis, and positively transfected cells were selected by the addition of puromycin (1 µg/mL; Sigma Aldrich, Munich, Germany). To establish a uniform expression of the VDR transfected cells, the cells were sorted into low, medium, and high fluorescence using FACS as previously described [96].
Protein Extraction, Immunoblot Analysis
Whole-cell lysates were prepared using low salt lysis RIPA buffer (50mM Tris pH8.0, 150 mM NaCl, 5 mM EDTA, 0.5% NP-40, 1 mM DTT, 1 mM PMSF, Complete EDTA-free from Roche Diagnostics, Mannheim, Germany) and samples were separated on 8-12% SDS gels, as has previously described [96,98,99]. Blotting onto activated PVDF membranes was achieved with Trans-Blot Turbo (Bio-Rad, Munich, Germany) and blocking and antibody incubations (1 h/RT or 16 h/4 • C depending on antibody) were performed in 5% milk powder or BSA in TBST or PBS. The detection of the luminescence signal of HRP-coupled secondary antibodies after the addition of Clarity Western ECL Substrate was performed using the ChemiDoc TM imaging system (Bio-Rad). Equal loading of lysates was controlled by reprobing blots for housekeeping genes (Actin). At least n = 2 biological replicates were performed and representative results are shown. Results of the densitometric analyses of all Western blots can be found in the Supplementary Materials.
Statistical Analysis
Statistical analyses were performed using GraphPad Prism (version 9.3.1) as described [39]. Survival data were obtained from the USCS Xena server, visualized, and analyzed by GraphPad Prism (Log-rank/Mantel-Cox test; Hazard Ratio (Mantel-Haenszel)). For two groups, a paired or unpaired Student's t-test, for more group analysis of variance (ANOVA) was performed. Unless stated otherwise, p values represent data obtained from two independent experiments conducted in triplicate. Statistical significance is represented in the figures as follows: * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001, and n.s. indicates not significant. A p-value that was less than 0.05 was considered statistically significant.
Informed Consent Statement:
All patients were recruited via the TCGA Research Network. The TCGA informed consent guidelines are publicly available from https://www.cancer.gov/tcga, accessed on 1 March 2022. Tissue samples were obtained from patients at the University of Alexandria after obtaining the patient's informed consent to participate in the study and were processed anonymously.
Data Availability Statement:
The cell line raw data required to reproduce these findings are available upon request. The clinical results shown here are based on data generated by the TCGA Research Network: https://www.cancer.gov/tcga, accessed on 1 October 2022. | 9,108.4 | 2023-02-28T00:00:00.000 | [
"Biology",
"Medicine"
] |
GLOBALIZATION AND TEACHER DEVELOPMENT FOR SPOKEN ENGLISH INSTRUCTION
The impact of globalization is experienced most strongly in business and commerce but also increasingly in education. As a result, some scholars have called for a re-envisioning of the role of teachers to model what it means to be a global citizen. In this paper, I acknowledge the need for ESL/EFL teachers to re-examine their identity and roles in light of these global developments. At the same time, I argue that teachers should not lose sight of the importance of honing the craft of teaching English so as to increase their professional capital to mediate the impact of globalization for their students. This article first discusses the changing roles of teachers in a globalized world and highlights the implications for English language teaching and learning. The ideas are further related to teaching second language oracy (speaking and listening) because of its centrality in developing important 21st Century skills in the globalized world. The article also offers ways in which teacher education that takes cognizance of globalization forces can develop ESL/ EFL teachers’ knowledge and beliefs to play their new roles more
Kata kunci: globalisasi, warga global, pembangunan global, modal profesional Globalization refers to the process of interaction, interconnection and integration of people, businesses and cultures across national boundaries, supported by new technologies that enable speed in communication and exchanges.It is "characterized by the extensive flow of information, ideas, images, capital and people across increasingly permeable political border due to economic and technological change" (Harper & Dunkerly, 2009, p.56)Although these features of connectedness beyond political boundaries have been present in the past centuries ago, the scope, volume and speed of such exchanges are unprecedented.The impact of globalization can be felt in many countries in local lives and on the national stage while very few countries are exempted from this experience.One of the areas in which this impact is experienced most keenly is in business and commerce, but the forces of globalization are also increasingly experienced in education.This has given rise to many academic discussions about education and teaching in a globalized world, with some scholars, such as Luke (2004) calling for the role of teachers to be re-envisioned as "transcultural and cosmopolitan", as they are increasingly expected to model what it means to be a global citizen (p.1438).
How should teachers and teacher educators in the field of English language teaching respond to such a call?In this paper, I argue that while it is important to acknowledge the broad social implications of globalization for teacher education, teacher educators should continue to help teachers-to-be and practising teachers hone the craft of teaching English and develop greater professional capital by being a versatile practitioner with a strong knowledge and skills base.These competencies that they develop will undergird teachers' changing beliefs about their roles as mediators of learning in the globalized world and enable teachers to adapt to new educational landscapes from an informed professional position.This article first discusses the changing role of teachers in a globalized world and highlights the implications for ESL/EFL teaching and learning.These ideas are further discussed in the teaching speaking and listening, offering suggestions on how teachers can teach these two language skills effectively to address their students' needs for the 21 st Century.Luke (2004) argues that teachers for our globalized world need to be ready to engage with the global while rooted in the local; hence, the currency of the term "glocal" which has been used by many authors.'Glocal' teachers are teachers who are willing to interact and collaborate with teachers, researchers and other educationalists across national boundaries.Teachers for a globalized world are ones who are ready to accept and embrace diversity within and beyond their own culture and educational and professional space.Luke's re-envisioned role of teachers for a globalized world that is driven by new needs and technologies have important implications for teacher development programmes.Such a view of teachers is an interesting and appealing one, and speaks directly to the type of globallyready learners that many education institutions aim to nurture.More importantly, it speaks to the pressing need for teachers to reassess their roles and contributions to the education of the young people in their countries in a time of great economic and cultural changes, and to reclaim the prestige and symbolic capital that teaching as a profession holds.
ENGLISH LANGUAGETEACHERS IN A GLOBALIZED WORLD
While this view of a re-envisioned teacher for the globalized world is important to the shaping of teacher identity, it does not illuminate the importance of other aspects of teachers' work with reference to the subjects that they have to teach.In fact, such a broad and generic view of the teacher's role runs the danger of ignoring the craft of teaching for specific subject areas, such as EFL/ESL.A broad sociocultural view of teachers' role offers insights for teacher development in the way teachers should view themselves and how other people should be encouraged to view the profession.It does not, however, address what is happening in the language classroom and veils rather than illuminate the way teachers can build up the symbolic and professional capital that is needed for them to conduct their work effectively.This is not a criticism of Luke's view as he is concerned with more macro issues about teacher roles and identity in general in the age of globalization and does speak directly to the teaching of any curriculum subject to begin with.He is mainly cautioning against a narrow and parochial view of teacher preparation that has been the course of the day in many teacher education programmes all over the world, as well as pointing to the problem with some new practices afforded by globalization.Nevertheless, a socialized view of teacher learning remains an attractive one, but it can only be beneficial to teachers and teachers-inpreparation when it is incorporated into models of language teacher education where domain-specific competencies are developed.Importantly, ESL/ EFL teacher development should not lose sight of the role of teachers in facilitating learners' mental and emotional processes needed for successful language learning.Teachers will also need to be cognizant of the implications of globalization for teaching English which is increasingly viewed as an international language (McKay, 2002).
According to a Warschauer (2000, p. 511), globalization has implications for English language teaching and learning in the three ways.Firstly, we will see further spread of English as an international language and a shifting of authority to nonnative speakers and dialect.Secondly, economic and employment trends is changing the way English is used, particularly in presentation, discussion and interpretation of ideas.Thirdly, old notions of literacies limited to reading and writing from traditional print sources are being transformed by new information technologies to include a broad range of multiple literacy skills.
Globalization has indeed resulted in an unprecedented number of people speaking and learning English.Traditionally, English is used when people from a non-English speaking country have to communicate with people from an Anglophone country, such as the US, UK and Australia/ New Zealand.Increasingly, however, communication in English takes place predominantly among speakers from non-Anglophone countries who needed to speak in a language that both parties can understand.In many countries there is also the phenomenon of children growing up speaking English rather than or in addition to a language of their parents' ethnicity.The exponential spread of English and the ways it is acquired and used as a result of globalization has given rise to a variety of new nativized or localized Englishes, including Asian Englishes (Rubdy, Zhang & Alsagoff, 2011).Because of the transnational movements of people, the learning of English now increasingly occurs in a linguistically diverse context in the same country.In the past, English classes in a non-Anglophone country traditionally consisted of students who spoke the same language or at least share broad linguistic backgrounds.In countries such as Japan, it is not unusual to find non-Japanese students sitting alongside Japanese students in an English language class because the latter have come from other non-English speaking countries to work and live in Japan (Kubota & McKay, 2009).
Economic changes and shifts in seats of economic power have further influenced the way intercultural communication is perceived and learned.Kumaravadivelu 2008) argues that instead of an ethnocentric deductive approach where western cultural behaviours are treated as the norm against which others are measured, intercultural communication should be viewed as a complex process that requires the development of both linguistic and pragmatic skills and understanding across all cultural groups.The question of which standards to be promoted in language classrooms will naturally arise (McKay, 2002).Technological innovations in the way information is communicated means that learners have to develop new skills in searching, using and producing online information in English.In addition the proliferation of synchronous and asynchronous on-line communication will require skills that transcend basic reading and writing skills to include abilities to demonstrate cognitive and social features of oral communication.
In our discussion of globalization it would be impossible not to consider the skills that individuals need to succeed in the 21 st Century.Partnership for 21 st Century Skills (2011) identified these to be learning and innovation skills, information, media and technology skills, and life and career skills.Of these skill sets, learning and innovation skills are most relevant to our discussion about teaching spoken English to language learners.The component skills of creativity, critical thinking, communication and collaboration rarely develop individually or in a vacuum, but frequently through speaking with and listening to others.The process of interacting with peers and more knowledgeable others through the process of talk can help individuals express, synthesize, evaluate and apply ideas that are jointly constructed.
What do these global changes in the way individuals learn, make meaning and produce knowledge mean for English language teachers?How should teacher educators view these developments and draw support in preparing teachers for the challenging task of developing language learners' English competencies of speaking and listening?As I have argued earlier, while it is important to envision an enhanced role for English language teachers in the globalized world, it is just as important if not more to ensure that teachers acquire competencies for teaching English language knowledge and communication skills.In other words, teachers and teacher educators would need more than acknowledging a socialised view of a teacher's role and function in our globalized world.They need theoretical perspectives and pedagogical principles that can speak to the practical and day-today endeavours of teaching and understanding how students learn in this new context.To illustrate this point, examples from teaching speaking and listening are discussed next.
TEACHING ENGLISH COMPETENCIES OF SPEAKING AND LISTENING
Teachers' attention to speaking and listening skills must begin with an understanding of the concept of oracy.A term coined by British professor of education, Andrew Wilkinson in 1965, oracy refers to an individual's general ability in using the oral skills of speaking and listening.It aimed to capture the essence and the importance of oral skills in the way the terms 'literacy' and 'numeracy' represented abilities to read and write and to think scientifically or quantitatively.Wilkinson noted that the absence of a term to represent mastery of oral skills was indicative of the neglect in developing speaking and listening skills.This neglect was not only felt in the first language context that Wilkinson was working from but it was also true in the teaching of second and foreign languages at the same time.The importance of oracy is further emphasised by Barnes (1988) who argued that the function of oracy is to serve as a tool by which students can "engage through speech with important aspects of the social and physical world […] Using language in the world is not just a matter of skills but of understanding that world in all its complexity and variety, and knowing how to influence it" (p.48, 52).This observation is even more relevant today for English language learners in the globalized world where English has become a default language for transnational and intranational communication.
Good oracy abilities are important for all face-to face communication, and with the advances in technologies, oracy is the channel by which much communication occurs between people separated by physical space through facilities such as Skype and other types of on-line talk.Since speaking and listening often occur together in face-to-face interactions, the two skills are often practiced in an integrated manner in many oral communication lessons.In research and theoretical discussions, however, they are often treated individually, as each is a different construct and needs to be understood differently.Accordingly, in this article, speaking and listening are discussed separately to highlight the developments in approaches to teaching these two skills.
TEACHING SPEAKING
Approaches to speaking instruction have seen many changes over the past few decades.Burns (1998) categorised these approaches into two main types.The first is the direct/controlled approach.It focuses directly on developing isolated speaking skills and is concerned with accuracy of sentence structures and other language forms, chiefly being pronunciation.It emphasizes practice of language forms to produce increasingly accurate production while at the same time aims to raise learners' awareness about grammar and discourse structures.Common practice activities in the past include drills for language pattern practice and structure manipulation.Another type of controlled learning activity involves language analysis tasks where learners' attention is drawn to specific language features as a way of increasing their language awareness.Direct/controlled speaking lessons tend to be led by the teacher who provides the model for speech practice and leads in consciousness-raising activities.
The second approach is called the indirect/transfer approach.It focuses on the production of speech during communicative activities, such as pair work, and is more concerned with speech fluency.It involves learners in practising spoken language for special functions and purposes.For example, they may be asked to describe a picture to a partner who has not seen the picture, or role play a situation where a customer complains about poor service or damaged goods.The basic assumption of the indirect approach is that when learners practise how to speak effectively in class, they will transfer the speaking skills developed through such communicative activities to real-life situations.Learning activities have a high degree of authenticity (similarity to real life communication) in the topics that learners work on and the skills that they practice.These activities are typically learnercentred, where learners do most of the talking and the teacher takes on the role of facilitating understanding of task requirements and use of language.Unlike the direct approach, there is no explicit focus on language in transfer activities for speaking practice.
Both direct and in-direct approaches help learners to practise speaking skills in a variety of ways to improve accuracy and fluency, but neither of them effectively supports key processes of second language speaking development.For example, the trade-off for accuracy practice is the lack of authenticity in communication through face-to-face communication, particularly in situations where negotiation for meaning is necessary.Thus learners practise pronunciation and grammar using mainly isolated sentences without consideration for the context in which oral communication takes place.The current discussion of globalization would also raise the questions of whether pronunciation accuracy has not been over-emphasized and what should be accepted as standard in learners' pronunciation.
The indirect approach, on the other hand, places so much emphasis on fluency practice that attention to grammatical accuracy and discourse structures is often neglected, as Bygate(2001) has pointed out.
To address this mainly dichotomous approach to speaking development, a holistic approach which uses the Teaching Speaking Cycle as a model for planning tasks and activities for speaking classes has been proposed (Goh & Burns, 2012).The Cycle guides teachers in their instruction through seven phases: 1. Focus learners' attention on speaking.2. Provide input and/or guide planning.3. Conduct speaking tasks.4. Focus on language/discourse/skills/ strategies. 5. Repeat speaking tasks.6. Direct learners' reflection on learning.7. Facilitate feedback on learning.
The different stages provide opportunities for learners to focus their attention on accuracy as well as practise their use of language in both planned and spontaneous speech.More importantly, it increases the learners' metacognitive awareness by helping them reflect on their own experiences as speakers of another language and providing them with feedback on their learning.These experiences are often absent from speaking activities where students are often asked to complete tasks all by themselves with minimal input and feedback from teachers and peers, and where the value of individual reflection on learning is also often overlooked.
TEACHING LISTENING
Over the past five decades, listening instruction has gradually shifted its focus from a heavily text-based approach to a greater concern for the communication and learning needs of learners, reflecting a deepening understanding of the construct of learner listening as a cognitive, social and communication skill.In the 1950's and 6o's, text comprehension-based techniques were the order of the day as listening instruction was influenced by practices in the teaching of other language communication skills, particularly reading.
Learners were asked to listen to written passages read aloud and they had to demonstrate their understanding of listening passages by giving the correct answers.These texts often contained very few grammar features of spoken texts; they contained long sentences with many embedded ideas and were not easy to process in real time when listening.The activities were in fact a disguised form of reading comprehension done through the spoken medium.Although techniques involving written texts have largely given way to others that are more communication-oriented.Such comprehension-based techniques have persisted even till today in some listening classes.
We have indeed seen changes in the types of spoken texts used for listening practice.In the 1970's and 80's, when Communicative Language Teaching methodology gained popularity in many places such as Europe and Asia, texts used in listening classes consisted of spoken texts that attempted to simulate a high degree of authenticity or that were recorded from non-teaching materials such as movie dialogues, songs and radio programmes.Recorded texts that were scripted for language teaching included many features of spontaneous speech found in real life interaction, such as repetitions, hesitations and shorter or incomplete utterances.Learners were also asked to do tasks that had greater authenticity, for example, listen to a talk and take notes.In the 1990's to the present, listening instruction has been heavily influenced by a socio-cognitive paradigm that addresses learner needs such as cognitive processing demands and anxiety.The strategy approach within this paradigm aimed to train learners to use a range of strategies to handle the demands of listening (Mendelsohn 1998).
Recently, the strategy approach was enhanced in the form of the metacognitive approach which engages learners in a range of listening activities that focus on learning and language (Vandergrift & Goh, 2012).Metacognitive activities are integrated into listening lessons to help learners deepen their understanding of themselves as L2 listeners and the demands and process of L2 listening.The cognitive and social processes of listening were acknowledged, and learners have opportunities to explore and practise these processes in their own listening.In addition, the activities also provide opportunities for learners to practise and acquire strategies on how to manage their comprehension and learning through planning, monitoring and evaluating.Two complementary methods were proposed for implementing the approach: a) A pedagogical sequence for listening to pre-recorded texts which consists of five stages which take learners through the process of listening: Pre-listening-planning/predicting stage First verification stage Second verification stage Final verification stage Reflection and goal setting b) Task-based lessons for one-way and interactional listening that make use of three stages of pre-listening, whilelistening and post-listening.The meta-cognitive approach expands the scope and purposes of the pre-and postlistening phases to include activities that develop learners' have orientations towards language and metacognitive awareness.
TEACHER COGNITION AND THE TEACHING OF SPEAKING & LISTENING
The above discussion of teaching approaches to second language oracy alluded to changes in the way the constructs of speaking and listening have been conceptualized respectively for language development.These changes have mainly been informed by theories of speech processing and language comprehension, second language acquisition, language performance and language use, learning and motivation, and last but not least, metacognition.How should teachers and teacher educators evaluate and apply these ideas in light of the impact that globalization has on English language teaching and the role of teachers?Before an attempt is made to answer this question, it is necessary to consider the way teacher thinking can influence the decisions teachers make about their teaching.The knowledge, beliefs and thoughts that teachers possess are collectively known as teacher cognition, "the unobservable cognitive dimension of teaching" which heavily influences the observable actions of teachers' practice (Borg, 2003, p.81).The way teachers think about theory and principles pertaining to oracy development will directly influence the way they plan and deliver their lessons for speaking and listening.As beginning teachers develop to become active and reflective professionals, they have to not only master the routines of teaching but also make moment to moment decisions that influence these routines.Like experienced teachers, they also have to make on-the-spot decisions in class which may cause them to modify or abandon what they have planned to do when preparing for a lesson.At the same time, they are also cultivating beliefs or views about themselves, their teaching and their learners through their lived experiences in day-to-day teaching.
Although English teachers have become the focus of much teacher cognition research in recent years, relatively little has been documented about teacher cognition concerning the teaching of spoken English skills.In a study in Singapore, it was found that English Language teachers felt they were less knowledgeable in teaching listening and speaking compared with teaching other areas of English such as reading, writing, grammar and vocabulary (Goh, Zhang, Ng & Koh, 2005).While the teachers believed that oracy development was important for their students, those teaching secondary schools also admitted to spending the least amount of class time on these two skills because the skills carried less weighting in high stakes examinations.In a survey study in the USA that investigated elementary school teachers' self-reported knowledge about oral English instruction, DeBoer (2007) found that one third of her 275 respondents reported less than adequate knowledge for teaching oral skills.
Research has also been conducted among East Asian teachers' to understand their perceptions about their own spoken English proficiency and their confidence in teaching English.Butler (2004) found that the majority of the 500 plus elementary school teachers surveyed in Korea, Japan and Taiwan felt that they did not have the necessary proficiency to teach English at the elementary school level.In a more recent study in China, Chen and Goh (forthcoming) found that the majority of 527 teacher respondents reported inadequate knowledge about how to teach oral skills, and that the teachers' knowledge about teaching spoken English and their students' needs were significantly influenced by their own learning experiences, self-perceived speaking ability and familiarity with teaching methodologies.
Research in teacher cognition has shown the important influence that it has on teaching and learning a second language.None of the studies so far, however, has directly taken into consideration the impact of globalization on teachers' thinking.It would also be helpful for teachers themselves to re-examine their beliefs and knowledge about teaching speaking and listening in light of the effects globalization might have on their decision-making processes and classroom practice.
IMPLICATIONS FOR TEACHER EDUCATION
At the start of this paper, I discussed Luke's (2004) view concerning the changing roles of teachers in the face of the economic and sociocultural changes resulting from globalization.I also put forward the view that although the reenvisioning of teachers' role as one of a transnational and cosmopolitan professional working and learning beyond national boundaries is an appealing one, it does not speak directly to the day-to-day needs of teaching a language in the classroom.Language teachers would still need to have knowledge and skills for helping their students acquire the language.While pre-service teachers have to develop rudimentary understanding and skills for their work, in-service teachers need to hone their craft of teaching through improving their knowledge about teaching English that they gather from reflecting on experiences and professional development courses.
I would like to suggest that teachers consider the existing literature on teaching speaking and listening and study the implications of globalization for the planning of lessons and instructional materials.Teachers who are familiar with current discussions on theory and principles will be in a good position to evaluate and apply or adapt these ideas for their respective contexts.Existing methodology courses can further provide new and additional areas of focus such as the following: The importance of oracy development for English language learners' participation in the 21 st Century Skills for teaching, managing and modeling speaking and listening processes in the language classroom Teachers' identity and awareness of the world in their chosen profession of teaching ESL/EFL Teacher educators can also re-envision the role of English teachers in their respective countries so that the teachers not only develop personally but also help their students develop in ways that are relevant to the 21 st century.Here are some questions that could guide this re-envisioning, particularly in relation to developing teacher competencies for teaching speaking and listening: How can teachers from their respective local backgrounds develop a global outlook? How can teacher preparation and professional development courses help teachers themselves acquire 21 st century skills for learning and innovation (creativity, critical thinking, communication and collaboration)? What dispositions and skills would help teachers in their task of preparing their students for the use of English in the globalized world? What should the goals be for preparing teachers to acquire and teach the standards accepted for spoken English? How can teacher education programmes include new technologies and forms of new literacies to enhance oracy development?
CONCLUSION
With so much talk about the rapid globalization of our world, there is a tendency for us to focus our attention only another macro social context of teaching and learning.Sensible as this might seem, it does not directly address the professional needs of many EFL/ESL teachers and the learning needs of their students.In addition to embracing a socialised view of spoken English, teachers need to continually reexamine their own theories about how learners develop spoken English competencies, as well as increasing their own understanding from the available literature.
In relation to teaching spoken English competencies that we are concerned with in this article, it is important that adequate time is spent on improving their knowledge about the constructs of second language speaking and listening, the nature and demands of speaking and listening tasks for students, and methods and principles for planning and delivering speaking and listening lessons.Teachers-to-be and practising teachers also need to examine their own attitude and beliefs concerning new and emerging varieties of English and how they would like to respond to these developments in their own speaking and listening classes.They should also explore their own understanding of notions of communication resulting from new technologies and transnational interactions.
A teacher of spoken English for our globalized world will therefore need to develop heightened awareness of their cosmopolitan role and acquire a sound understanding of principles for teaching in a new social, economic and linguistic landscape. | 6,259.4 | 2013-07-01T00:00:00.000 | [
"Education",
"Linguistics"
] |
Transmission characteristics of a bidirectional transparent screen based on reflective microlenses
A microlens array (MLA) based see-through, front projection screen, which can be used in direct projection head-up displays (HUD), color teleprompters and bidirectional interactive smart windows, is evaluated for various performance metrics in transmission mode. The screen structure consists of a partially reflective coated MLA buried between refractive-index-matched layers of epoxy as reported in Ref [1]. The reflected light is expanded by the MLA to create an eyebox for the user. The brightness gain of the screen can be varied by changing the numerical aperture of the microlenses. Thus, using high gain designs, a low-power projector coupled with the screen can produce high-brightness and even 3D images as the polarization is maintained at the screen. The impact of the partially reflective coatings on the transmitted light in terms of resolution and modulation transfer function associated with the screen is studied. A condition similar to the Rayleigh criteria for diffraction-limited imaging is discussed for the microlens arrays and the associated coating layers. The optical path difference between the light transmitted from the center and the edges of each microlens caused by the reflective layer coatings should not exceed λ/4. Furthermore, the crosstalk between the front and rear projected images is found to be less than 1.3%. ©2013 Optical Society of America OCIS codes: (220.0220) Optical design and fabrication; (230.1980) Diffusers; (230.3990) Micro-optical devices; (310.6860) Thin films, optical properties. References and links 1. M. K. Hedili, M. O. Freeman, and H. Urey, “Microstructured head-up display screen for automotive applications,” Proc. SPIE 8428, 84280X1–84280X-6 (2012). 2. P. Görrn, M. Sander, J. Meyer, M. Kröger, E. Becker, H. H. Johannes, W. Kowalsky, and T. Riedl, “Towards see‐through displays: fully transparent thin‐film transistors driving transparent organic light‐emitting diodes,” Adv. Mater. 18(6), 738–741 (2006). 3. A. Olwal, C. Lindfors, J. Gustafsson, T. Kjellberg, and L. Mattsson, “ASTOR: an autostereoscopic optical seethrough augmented reality system,” Mixed and Augmented Reality, Proceedings. Fourth IEEE and ACM International Symposium on. IEEE (2005). 4. J. P. Rolland and H. Fuchs, “Optical versus video see-through head-mounted displays in medical visualization,” Presence (Camb. Mass.) 9(3), 287–309 (2000). 5. M. K. Hedili, M. O. Freeman, and H. Urey, “Microlens array-based high-gain screen design for direct projection head-up displays,” Appl. Opt. 52(6), 1351–1357 (2013). 6. H. Urey and K. D. Powell, “Microlens-array-based exit-pupil expander for full-color displays,” Appl. Opt. 44(23), 4930–4936 (2005). 7. H. Urey, “Diffractive exit-pupil expander for display applications,” Appl. Opt. 40(32), 5840–5851 (2001). 8. G. Hass and J. E. Waylonis, “Optical constants and reflectance and transmittance of evaporated aluminum in the visible and ultraviolet,” JOSA 51(7), 719–722 (1961). 9. M. Estribeau and P. Magnan, “Fast MTF measurement of CMOS imagers using ISO 12233 slanted edge methodology,” Proc. SPIE 5251, 243–252 (2004). #188993 $15.00 USD Received 17 Apr 2013; revised 24 Jun 2013; accepted 19 Aug 2013; published 8 Oct 2013 (C) 2013 OSA 21 October 2013 | Vol. 21, No. 21 | DOI:10.1364/OE.21.024636 | OPTICS EXPRESS 24636 10. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986). 11. K. Pearson, “LIII. On lines and planes of closest fit to systems of points in space,” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2.11, 559-572 (1901). 12. D. G. Voelz, Computational Fourier Optics: A Matlab Tutorial (SPIE, 2011). 13. W. J. Smith, Modern Optical Engineering (Tata McGraw-Hill Education, 2000). 14. A. D. Rakić, “Algorithm for the determination of intrinsic optical constants of metal films: application to aluminum,” Appl. Opt. 34(22), 4755–4767 (1995).
Introduction
Transparent displays have been of great interest as they create a magical feeling by displaying information as if it were hanging in the air.They have many applications like head-up displays (HUD), teleprompters, transparent computer/laptop displays, holographic displays and augmented reality wearable displays [1][2][3][4].
In this paper we present a partially reflective microlens array (MLA) based, see-through, front projection screen that can be projected onto from two sides simultaneously as seen in Fig. 1.Projecting two different images on either sides of a single screen and maintaining the transparency at the same time allows new kinds of interactive displays.A partial reflective MLA sandwiched between refractive-index-matched epoxy layers forms the basis of our seethrough screen technology [1].Due to this structure, the screen creates a bright display in the reflection mode (due to high gain) while maintaining excellent transparency.Ref [5].focused on the design methodology for screens with gains on the order of 100, using rotated microlens designs.The focus of this paper is the analytical and experimental characterization of the reflective coating's impact on the transmittance function of the screen, which was first reported in ref [1].Fig. 1.Transparent microlens array screen can be used from both sides simultaneously with negligible crosstalk between the images on two sides.The screen offers a gain and viewing angle controlled by the numerical aperture of the MLA.
In Section 2, the basic operation principle and fabrication steps are explained, and the eyebox related issues are discussed using ray optics simulations.The transmitted light through the MLA screen passes through the layered media with non-flat microstructures and layers of coatings.The impact of this micro-optical element on the resolution and modulation transfer function (MTF) is studied in detail in Section 3 and a condition similar to Rayleigh criteria for diffraction limited imaging is derived.
See-through MLA screen
A good diffuser screen, i.e.Lambertian screen, expands the light in every direction.Thus the projected image can be seen from anywhere within the semi-sphere centered on the screen.In many applications the user has limited mobility in front of the screen, such as computers and HUDs; that's why expanding the incident light in every direction is inefficient.A screen that concentrates the diffused light in a useful area, i.e. an eyebox, has a gain compared to a Lambertian screen.An eyebox can be described as a virtual box suspended in the air where, if the users' eyes are within that box, they can see the image on the screen [5].The brightness gain of the screen makes it possible to be used with low-lumen projectors and it is particularly important with transparent displays where a considerable portion of the light is lost to transmission.We used a reflective MLA to diffuse the light in a controlled manner to achieve high brightness and a large enough eyebox for comfortable operation.
A partially-reflective coated MLA is sandwiched between refractive-index-matched layers of glass and epoxy, as illustrated in Fig. 2. As a result, the transmitted light sees a phase object with negligible phase variation across the MLAs, whereas the reflected light gets expanded by the MLA and creates an eyebox for the viewer.In other words, for the reflected light the screen behaves like a bright screen with a limited viewing window and for the transmitted light it behaves essentially like ordinary glass.Although the whole structure is index matched, the thickness of the partially-reflective coating introduces some phase function to the transmitted light, whose impact is discussed later in this paper.Fig. 2. The partially reflective coated MLA is buried between index-matched layers of epoxy and glass.The reflected portion of the incident light is expanded by the MLA towards the user to create an eyebox, whereas the transmitted light remains unaffected due to the index-matched structure.The screen can be used from both sides at the same time with less than 1.3% crosstalk, if the input beam directions are adjusted accordingly.
A quartz substrate is micro-fabricated to create the MLA mold, which is used as the master for making many screens.Using the mold and a UV-curable resin (n 2 in Fig. 2), an MLA is fabricated on a glass or plastic substrate (n 1 in Fig. 2).The surface of the MLA is coated with a thin-film, partially reflective coating.Choices for this coating are discussed in greater detail below.Once the coating has been applied, a second glass or plastic substrate is mounted onto the MLA using the same UV-curable resin.
One key advantage of MLAs compared to diffractive optical elements is that the eyebox size is the same for all the wavelengths [6,7].As a result, the color balance across the overlapping eyeboxes is very good.Moreover, as the MLA surface is very smooth, the angular expansion is very uniform and there is virtually no speckle on the screen, which is particularly important when using it with laser projectors.
Since the MLA is a periodic structure, it creates diffraction orders.The MLA pitch (Λ) and wavelength (λ) determines the angular separation (θ) of diffraction orders as given by the grating equation, sin(θ) = λ/ Λ.The MLA pitch should be smaller than the pixel size of the display so that each pixel is expanded fully, but it should also be large enough to set the diffraction order spacing smaller than the minimum pupil size of the human eye, which is assumed to be 3mm.As an example, for HUD applications, the typical distance between the screen and the user is about 0.5m-1m, so the angular separation between the diffraction orders should be at most 3mrad.With these constraints, 300μm is a good choice for the MLA pitch for the screen distances in the range 0.5m-1m.
The screen with the parameters above has been modeled and simulated using Zemax software.The simulation layout can be seen in Fig. 3(a).The radius of curvature of the microlenses is set at 625μm, which is optimized using Zemax to yield the desired eyebox size of about 65cm at the user's position.As illustrated in Fig. 3(b), a total of 50 equidistant sample points were chosen on 5 equidistant rows across the screen of size 87.5mm x 175mm.The screen size is based on what is currently available for the experiments.The model represents using a scanned laser projector to illuminate the MLA screen.Each incident beam results in a hexagonal eyebox at the driver's position.The eyeboxes shift laterally as the scan angle increases.The sum of eyeboxes in all their shifted positions for the 50 points in the Zemax model can be seen in Fig. 3(b).The bright white region at the center of the Fig. 3(b) shows where all of the individual eyeboxes overlap that is the useable full viewing window, where every pixel on the screen can be seen by the user.The width of the full viewing window is ± 18°, which corresponds to about 65cm at the user's position that is 1m away from the screen.Due to the rectangular shape of the screen, the maximum incidence angle in the horizontal direction is twice as much as the maximum incidence angle in the vertical direction.As a result the eyeboxes shift more in the horizontal direction, thus their overlapping region is reduced and the overlapped eyebox becomes an elongated hexagon, as seen in Fig. 3(b).Even though the shape of the microlenses is a regular hexagon, the shape of the eyebox is an elongated hexagon due to the increased incidence angles along the horizontal direction.
Since the screen has a symmetrical structure it can be used from both sides simultaneously.The only difference is the curvature of the microlenses, i.e. for one side it is concave and for the other side it is convex.This difference does not have an impact on the screen operation because for the concave side the image is formed less than 1mm in front of the screen and for the convex side the image is formed less than 1mm behind the screen.The different image locations cannot be perceived by the human eye.The crosstalk between the two sides of the screen when used by two people, as in Fig. 1, has been measured by projecting a white page from one side only.The luminance both on the projected side and the back side are measured.The average luminance on the back side is divided by the average luminance on the projected side to find the cross-talk ratio.The experiment was also repeated for the other side.The crosstalk is measured to be less than 1.3% as shown in Table 1.The screen has a gain of about 3 compared to a Lambertian scatterer.The gain is calculated using Zemax simulations where a 100% reflective Lambertian scatterer and 100% reflective MLA screen are compared.The average intensity in the overlapped eyebox for the MLA screen is divided by the average intensity for the Lambertian scatterer to calculate the gain.By rotating the microlenses towards the user, we can improve the efficiency of the screen substantially and can offer brightness gains on the order of 100, as illustrated in Fig. 4.However, the fabrication of such a screen still remains as a major challenge [5].
Coating design
Many choices are available for the partially reflective coating.We have tried thin metal coatings and two different designs for wavelength selective notch coatings as the reflector layer on the MLA.A single layer metal coating is the simplest for our demonstrators.The thickness of the metal controls the reflectance of the screen and in our experiments: 40Å aluminum coating resulted in average values across the visible band of about 35% reflectance and 50% transmittance, and 15% absorption [8].With metal coatings there is a trade-off between the transmittance and the reflectance, so the thickness of the coating should be optimized for specific applications.A thin metal coating is a good choice for broadband sources like LED based projectors.If a laser projector is used, more advanced coatings are possible, such as a notch coating that reflects nearly 100% of the RGB laser wavelengths and transmits nearly 100% of the visible spectrum outside of the reflective notches.We designed the notch coating shown in Fig. 5 for a laser pico-projector that has the RGB wavelengths of 645nm, 532nm and 445nm.The transmission bands of the fabricated screen was measured with a grating spectrometer and found to be shifted from the original coating specifications as seen in Fig. 5.This resulted in some error in the transmittance and reflectance values and the coloration of the screen.Figure shows the imaging setup to test the coated screens.While the metal coated screen appears in the correct color, the notch coated screen has a pinkish hue, which is primarily due to the measured transmittance characteristics in Fig. 5. Improving the coating process can in principle eliminate this problem.While the metal coated screen produces a sharp image, the notch-coated screen degrades the resolution of the transmittance as seen in Fig. 6(b).The blurring effect is quantified by measuring the MTF of the screens as discussed below.Fig. 6. a) A resolution chart imaged by a camera lens through the transparent screen.b) Three pictures correspond to no screen, metal coated screen and notch coated screen placed at 1m distance to the camera.Note that the notch coating has a blurring effect on the image compared to the metal coating and it introduces undesired coloration to the image, which makes this notch coating unacceptable for a see-through screen.
MTF measurements
We used the slanted edge technique to measure the MTF of the MLA screens [9].An experimental setup similar to Fig. 6(a) was used, where the resolution chart was replaced by an LCD computer monitor.A slanted edge with a 5° angle was displayed on the LCD located 80cm from the MLA screen, followed by a CCD camera at 1m distance to the MLA screen.Figure 7(a) shows the slanted edge image on the camera.First the Canny edge detection algorithm is applied to find the edge [10].The angle of the slanted edge is subsequently computed using Principal Component Analysis (PCA) [11].The image is up-sampled by a factor of four and the edge is straightened by an affine transformation of the whole image using the computed angle of the edge.The resultant image is shown in Fig. 7(b).The average of the columns of Fig. 7(b) results in the oversampled edge profile, which is the 1D edge spread function (ESF) of the system, as shown in Fig. 7(c).Figure 7(d) shows the derivative of the ESF, which is the point spread function (PSF) of the system.MTF is obtained by calculating the modulus of the Fourier Transform of the PSF and normalizing the resultant transfer function.With the method described above, the MTF curves for the thin metal and notch-coated screens are obtained, as illustrated in Fig. 8.The free space MTF shows the MTF of the imaging system without any screen for reference.The camera lens diameter and f/# were adjusted to obtain a cut-off frequency of around 30cyc/deg in the experiments to make it consistent with the performance of the 20/20 vision for human eye.As seen in Fig. 8, the metal coating and the 'no screen' MTF curves match very well, showing that the indexmatched structure behaves as expected.For MTF50, that is the MTF falling to 50%, the bandwidth of the notch coated screen is reduced almost by half, compared to the 'no screen' MTF.The reduced bandwidth explains the blurring observed in Fig. 6(b).
Coating thickness effect and the Strehl ratio
With the index matched screen structure that was shown in Fig. 2, the screen should not have any effect on the transmitted light.However, as seen from the MTF curves, thick coatings can degrade the resolution.Since the notch coating is thick, it introduces some phase function across each microlens and as the MLA structure is periodic, the phase function results in diffraction orders.To test the presence of diffraction orders emanating from the screen, the screen was illuminated with a 3mm collimated beam from a laser diode.The diffraction orders are faint compared to the central spot and only visible when the logarithm of the image intensity is displayed as seen in Fig. 9(a).Since MLAs are packed in a hexagonal fashion, there are six 1st order diffraction spots surrounding the central 0th order.Figure 9(b) shows the physical optics simulations for the same scenario.The details in between the diffraction orders observed in logarithmic scale in Fig. 9(b) are mostly due to interference and are missing in Fig. 9(a) due to the limited dynamic range of the camera.To quantify the noise due to diffraction and scattering, the encircled intensity plot of the experimental PSF is calculated as shown is Fig. 9(c), which is the integral of the intensity inside a circle with an increasing radius.If the phase variations due to the partially reflective coating were negligible and there were no noise due to scattering and diffraction, we would expect to see a step function with a smooth transition from zero to one.In the real case we have diffraction orders due to the coating thickness, thus we observe two steps in the encircled intensity plot.The variations between those steps are mainly due to diffraction and scattering noise.
The main problem associated with the MTF degradation is due to the thickness of the coating layers.As shown in Fig. 10(a), even though the coating thickness is uniform, the lens curvature introduces an optical path difference (expressed as Δ in Eq. ( 1)) between the light transmitted through the center and the edges of the lenses, where p(x,y) is the length of the ray path inside the coating, which varies between t and t max from the center to the edge of each microlens, n 2 and n 3 are the refractive indices of the medium and the coating, respectively.A parametric plot of the Δ created by the coating is given in Fig. 10(b).The phase function associated with a single microlens is shown in Eq. ( 2) and the phase function for an array of microlenses can be expressed as in Eq. ( 3), where ** denote 2D convolution and d x and d y denote the pitch of the MLAs in each axis.
x y i j t x y x y x id y jd We performed physical optics simulations to see the effect of this phase function using MATLAB [12].In our code, a 3mm diameter area on the screen is illuminated with a collimated, monochromatic light with λ = 550nm.The screen is a hexagonally packed MLA with 300μm pitch, where each hexagon is filled with the phase function exp(j2πΔ/λ), as shown in Fig. 11.After the light passes through the screen, it propagates 1m and is focused by a 3mm-diameter lens.The resulting intensity is compared to the diffraction-limited system, i.e. uniform phase function, to find the Strehl ratio of the actual system.We simulated three different cases to see the relationship between the Δ and the Strehl ratio, as seen in Fig. 12.In each subsection of Fig. 12 the real part of the phase function exp(j2πΔ/λ) is shown on the left and the Strehl ratio for the corresponding Δ is shown on the right.In Fig. 12(a) the Strehl ratio is 0,91 and the real part of the phase function is nonnegative.In Fig. 12(b) the Strehl ratio is 0,79 and the real part of the phase function starts to show negative values at the corners of the hexagon.In Fig. 12(c) the Strehl ratio is 0,31 and the negative values gets more dominating in the real part of the phase function.It is generally assumed that the human eye can differentiate the aberration effects for Strehl ratios less than 0,8 [13].From the simulations we conclude that the real part of the phase function should be greater than zero to satisfy this condition.This means cos(2πΔ/λ) ≥ 0, so Δ ≤ λ/4 to eliminate aberration artifacts introduced by the coating.This is essentially identical to the well known Rayleigh criteria.The metal-coated screen has a film thickness of about 40Å and a refractive index of about 1.09 in the visible band [14], which creates a peak-to-valley OPD of 0.0006λ and results in a Strehl ratio of 0.998.However, the notch coating has more than hundred layers of coatings with refractive index of about 2,5 to 3 for some coating layers, which violates the condition Δ ≤ λ/4.
Conclusion
In this paper the design methodology, fabrication steps and the partial reflective coating analysis for the see-through screen were discussed.We demonstrated that a partial reflective MLA sandwiched between index-matched layers can be used as a see-through screen that can be projected onto from both sides simultaneously, as seen in Fig. 1.The large eyebox creates a comfortable operating zone for the user.The screen can be used in any application where the desired information needs to be superimposed onto the real world scene such as direct projection HUDs and teleprompters.The effect of coating selection and its impact on the screen transparency has been discussed.A limiting condition for the thickness and refractive index of the partially reflective coating has been demonstrated that can be used to ensure diffraction limited performance of the see-through screen.
Fig. 3 .
Fig. 3. a) The Zemax layout of the simulation scheme.b)The grayscale image of the simulated eyeboxes corresponding to the 50 sample points on the screen that are shown above.The overlapped region of the eyeboxes is 65cm wide at the desired viewing distance of one meter.Even though the shape of the microlenses is a regular hexagon, the shape of the eyebox is an elongated hexagon due to the increased incidence angles along the horizontal direction.
Fig. 4 .
Fig. 4. a) For a planar MLA screen the eyeboxes for each pixel on the screen shift laterally with changing angle of incidence.The overlapped region of these shifted eyeboxes becomes the vignetting-free eyebox.b) By rotating each microlens individually to achieve 100% overlapped region increases the screen efficiency substantially.
Fig. 5 .
Fig. 5.The transmittance spectrum of the desired notch coating at the top, the measured transmittance spectrum of the fabricated screen at the bottom.The green lines mark the laser wavelengths of the laser pico-projector.
Fig. 7 .
Fig. 7. a) The image of a slanted edge captured by the camera without using an MLA screen.b) The edge is straightened by the image processing software based on the computed angle of the edge.c) The average of the columns in Fig. 7(b) is computed to get the 1D edge spread function (ESF) of the system.d) The derivative of the ESF in Fig. 7(c) results in the point spread function (PSF) of the system.
Fig. 8 .
Fig. 8. Experimental MTFs of the screens with different coatings.The metal coating has negligible effect on the MTF but the notch coating reduces the bandwidth at MTF50 almost by half, which explains the blurring effect of the notch coating.
Fig. 9 .
Fig. 9. a) The log10 of the experimental PSF of the notch coated screen.b) The log10 of the simulated PSF of the notch coated screen.c) The encircled energy plot of the experimental PSF of the notch coated screen.With the see-through screen geometry in Fig. 2, the transmitted light should not have seen a phase variation across the screen but the diffraction orders in the PSFs show that the notch coating violates this condition.
Fig. 10 .
Fig. 10.a) Cross-section of a single lens with a uniformly thick coating.The transmitted light through the coating travels different path lengths due to the curvature of the microlenses and the refractive index difference, which creates an optical path difference across the microlenses.b) A parametric plot of the OPD as a function of lens and coating parameters shown in Fig. 10(a).
Fig. 11 .
Fig. 11.a) The simulated periodic phase function to find out the effect of the coating on the quality of the screen transparency.Each hexagon has the phase function shown in Eq. (1).b) The horizontal cross-section of the phase function in (a).
Fig. 12 .
Fig. 12.The left column shows the real part of the phase function of a single microlens for different coating thicknesses and the right column shows the Strehl ratio for the corresponding optical paths.As seen from the three examples above, the real part of the phase function should be greater than zero for a diffraction limited system, which is defined as Strehl ratio greater than 0,8. | 5,937.4 | 2013-10-21T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
A Nanoengineered Stainless Steel Surface to Combat Bacterial Attachment and Biofilm Formation
Nanopatterning and anti-biofilm characterization of self-cleanable surfaces on stainless steel substrates were demonstrated in the current study. Electrochemical etching in diluted aqua regia solution consisting of 3.6% hydrogen chloride and 1.2% nitric acid was conducted at 10 V for 5, 10, and 15 min to fabricate nanoporous structures on the stainless steel. Variations in the etching rates and surface morphologic characteristics were caused by differences in treatment durations; the specimens treated at 10 V for 10 min showed that the nanoscale pores are needed to enhance the self-cleanability. Under static and realistic flow environments, the populations of Escherichia coli O157:H7 and Salmonella Typhimurium on the developed features were significantly reduced by 2.1–3.0 log colony-forming unit (CFU)/cm2 as compared to bare stainless steel (p < 0.05). The successful fabrication of electrochemically etched stainless steel surfaces with Teflon coating could be useful in the food industry and biomedical fields to hinder biofilm formation in order to improve food safety.
Introduction
Biofilms are sturdy microbial communities commonly found at liquid-air interfaces, and are the cause of many medical, industrial, and environmental problems [1,2]. Within these biofilms, bacteria attach to, multiply, and encase themselves in a slimy surface-associated community of extracellular polymeric substances (EPS). The EPS is generally composed of proteins, lipids, polysaccharides, and nucleic acids [3]. Importantly, bacterial cells embedded in this EPS layer tend to express phenotypic features that differ from those of their planktonic counterparts, such as augmented tolerance to antibacterial agents [1].
Two bacterial pathogens, Escherichia coli O157:H7 and Salmonella enterica, have been shown to form biofilms on solid surfaces, including plastic, metal, glass, and rubber [4][5][6][7]. The presence of these biofilms is an ongoing concern in the food industry, as it has been demonstrated that this may lead to enhanced environmental persistence [8,9]. In humans, E. coli O157:H7 is a major cause of various illnesses, including bloody diarrhea and hemolytic uremic syndrome [10]. Additionally, S. Typhimurium infection leads to fever, chills, diarrhea, nausea, vomiting, and abdominal pain [11]. Biofilm contamination within both the food processing and biomedical sectors is a common and significant source of human infections, outbreaks, and recalls [12,13].
Surface technology has been investigated as a means of inhibiting bacterial adhesion and biofilm development [14][15][16][17]. This approach typically relies on the release of biocidal agents or inhibition of bacterial adhesion [18]. For instance, such strategies include coatings that release microbiocidal agents, such as silver, antibiotics, enzymes, polycations, and antimicrobial peptides, into the surrounding aqueous environment [16]. On the other hand, adhesion prevention strategies have been used to assess surface chemical functional groups that hinder protein adsorption, such as weakly polarizable materials that reduce van der Waals forces [19], low surface energy [20], and hydrophilic polymeric materials that create highly hydrated features [21]. However, these approaches are mostly transient, and it is difficult to combat bacterial attachment by surface structuring or surface chemistry alone [22]. As a result, slippery liquid-infused porous surfaces (SLIPS) were proposed for anti-biofouling [22,23]. In 2016, however, the United States Food and Drug Administration (FDA) revoked the use of perfluorinated chemicals, which are used for SLIPS involving food contact [24].
Nanostructures, which can effectively prevent attachment of water droplets, were initially observed on the lower surface of a lotus leaf [25]. Leaves of the lotus flower are extremely hydrophobic and are thought to have "self-cleaning" characteristics, as water droplets completely roll over the surface, removing undesirable particles [26]. Furthermore, several researchers have observed nanoscale features on softwood fiber, titanium, and aluminum that prevent bacterial adhesion (i.e., smaller than the size of a bacterium) [27][28][29].
Stainless steel is an alloy material that is widely used in the food industry and in various environments because of its superior corrosion resistance and workability [30]. Currently, there are very limited nanoengineered surface studies that have assessed the combination of etching techniques and Teflon coating and their inhibitory effects for bacterial adhesion on stainless steel. Etching, the simplest method used to fabricate a nanostructured surface, creates the nanofeatures on the surface arbitrarily, however the overall roughness is homogeneous [31].
Previous studies have shown the surface topography and surface chemical composition to strongly affect cell behavior, including bacterial attachment and biofilm development. Therefore, the main purpose of this study was to investigate the effects of a Teflon-coated electrochemically etched surface (ET) on the adhesion capabilities of E. coli O157:H7 and S. Typhimurium, and to evaluate their biofilm formation ability under static and dynamic conditions.
Fabrication of Nanoengineered Stainless Steel Surface Using Electrochemical Etching
The 304 stainless steel was cut into 50 × 25 × 0.2 mm specimens to serve as the substrate for the nanoengineered surfaces. Each specimen was submerged in a diluted aqua regia solution, which was mingled in a 1:1 ratio (v:v) of 3.6% (1N) HCl and 1.2% HNO 3 and placed in parallel to a carbon plate at a distance of 5 cm. Both were positioned in a double-walled beaker set at 4 • C and connected to a circulation temperature controller ( Figure 1). Constant electric potentials at 5, 10, and 15 V provided by a DC power supply (CPX400SP; Aim TTI, Huntingdon, United Kingdom) were applied to the stainless steel specimen for 5, 10, and 15 min. The stainless steel was employed as a working electrode (anode) and the carbon was used as a counter electrode (cathode) in the electrochemical etching. The substrates were rinsed in deionized (DI) water immediately after etching and completely dried. This method was adapted from Lee et al. [32] with minor modifications.
Creation of Superhydrophobic Surface on Stainless Steel
For the superhydrophobic structure, the electrochemically etched surfaces (E) were coated with 0.2 wt % Teflon AF1600 powder (DuPont Inc., Wilmington, DE, USA) in FC-40 perfluoro compound (Sigma-Aldrich, St. Louis, MO, USA). Each specimen was consecutively baked on a hot plate at 110 • C for 10 min, at 165 • C for 5 min, and at 330 • C for 15 min. Finally, the samples were rinsed with DI water for 5 min and air-dried before water contact measurements and bacteriological tests. The measurements of apparent contact angles of sessile water droplets (~3 µL) were conducted on each specimen using a contact angle goniometer (FTA-1000, First Ten Angstroms, Newark, CA, USA).
Bacterial Strains and Culture Preparation
E. coli O157:H7 C7927 and S. Typhimurium ATCC 14028 were provided by the Food Microbiology Laboratory (University of Hawaii at Manoa, Honolulu, HI). Each strain of E. coli O157:H7 and S. Typhimurium was grown in 10 mL of tryptic soy broth (TSB) at 37 • C for 24 h under static incubating conditions, collected by centrifugation at 4000× g at 4 • C for 20 min, and washed three times with 100 mM phosphate-buffered saline (PBS) at pH 7.1-7.4. The final pellets were resuspended in sterile PBS corresponding to approximately 10 8 -10 9 colony-forming unit (CFU)/mL.
Bacterial Attachment and Biofilm Forming Ability Test
For cell attachment, we adapted a method from Kim et al. [33]. Each fabricated specimen was moved to a sterile 50 mL centrifuge tube (VWR International, Radnor, PA, USA) containing 10 mL bacterial cell suspensions in PBS (≈10 8 CFU/mL). The tubes containing the specimens were kept at 4 • C for 24 h for cell attachment. The cell-attached specimens were transferred and submerged in 200 mL of sterile distilled water (22 ± 2 • C) and softly agitated for 5 s. Rinsed samples were placed in 50 mL tubes containing 30 mL of TSB, then stored at 25 • C for 6 days of static incubation.
Biofilms were generated after bacterial attachment according to the methodology previously described by Kim et al. [34] based on the Center for Disease Control (CDC) biofilm reactor, which contained eight specimens (Biosurface Technologies, Corp., Bozeman, UT, USA). Bacterial cultures were prepared by incubating E. coli O157:H7 and S. Typhimurium in 1/10 strength TSB at 37 • C for 20 h. The reactor was placed on a stirring plate connected to the peristaltic pump (Manostat Corp., New York, NY, USA). Then, 3.5 mL of the bacterial culture solution was inoculated into 350 mL of 1/100 strength TSB and added to the biofilm reactor. The initial bacterial population in this medium was approximately 10 6 CFU/mL. The developed nanoengineered coupons were placed in the reactor, which was operated in batch mode for 24 h at 100 rpm at room temperature. After completion of the batch mode, the reactor was then connected to a carboy containing 1/300 strength TSB and a waste collection carboy. The system was then operated in continuous flow mode at a rate of 11.67 mL/min for 24 h. Differential strengths of TSB were applied to the batch and continuous flow modes to maintain the final volume and concentration.
Field Emission Scanning Electron Microscopy (FE-SEM)
In order to characterize the nanoscale features of the fabricated surfaces and visually ascertain the effects of electrochemical etching and Teflon coating on bacterial adhesion, the developed surfaces were photographed using field emission scanning electron microscopy (FE-SEM, Hitachi S-4800, Ibraraki, Japan). The specimens were submerged in 2.5% glutaraldehyde suspended in 0.1 M sodium cacodylate buffer (pH 7.4) at 25 • C for 1 h and washed in 0.1 M cacodylate buffer twice for 10 min each. For post-fixation, specimens were submerged in a mixture of 1% osmium tetroxide and 0.1 M cacodylate buffer for 30 min. The specimens were dried with a graded ethanol series of 10, 20, 30, 50, 70, 85, and 95% (5 and 15 min incubation times for each change), and then three changes of 100% for 10 min each. After dehydration, the specimens were positioned in a critical point dryer filled with liquid carbon dioxide and softly dehydrated by evaporating liquid carbon dioxide. Dried samples were mounted onto aluminum stubs using conductive carbon tape and coated with gold-palladium in a Hummer 6.2 sputter coater using a vacuum coater. Finally, photomicrographs were obtained using FE-SEM.
Statistical Analysis
All experiments were repeated at least three times with independently prepared samples. All bacterial counts of the specimens were log-transformed. The data were analyzed via analysis of variance and Duncan's multiple range tests (the separation of means was tested at a probability level of p < 0.05) using the Statistical Analysis System (SAS Institute, Cary, NC, USA).
Surface Morphology of Nanoengineered Stainless Steel Surface
We observed a morphology typical of a stainless steel surface (e.g., possessing deep crevices) (shown in Figure 2a,b). Crevasses within stainless steel are difficult to clean and represent protective harborages for bacterial cells [35]. In contrast, Figure 2c,d show the FE-SEM images of nanostructures developed on the 304 stainless steel substrates using the electrochemical etching technique. The electrochemical etching conditions, including the applied voltage and treatment time, were optimized for the desired nanostructure formation via parameter optimization. As a result, the morphology of the developed E treated at 10 V for 10 min had distinct nanoscale pores. This is in accordance with Lee et al. [32], who observed that a substrate etched at 10 V had nanopores and microsized bumps that resulted in a hierarchical surface and a significant change in wettability. While many researchers have fabricated well-ordered nanoporous aluminum and nanopillared structures by transforming the hexagonal array of nanoporous aluminum [28,36], stainless steel is a steel alloy, meaning formation of uniform surface nanostructures is difficult. Song et al. [37] previously observed that after anodization on pure titanium (grades 2 and 3) and titanium alloys (Ti6Al4V and TiAl7Nb), nanopores were distributed regularly over the surface of anodically oxidized pure titanium metals, however the distribution and size of the nanopores on the anodically oxidized titanium alloy varied. Based on the FE-SEM micrographs, the pore sizes of the nanopore structures on E were between 50 and 100 nm. The adhesion of Staphylococci bacteria, which are approximately 1 µm in diameter and spherical, dramatically decreased on poly(ethylene glycol) when the distance between the microgels was 1.5 µm or less [38]. This finding implies that the surface structures for operational inhibition of bacterial deposition should be nanosized, which is smaller than the dimensions of a bacterium.
Wetting Properties of Nanoengineered Stainless Steel Surface
The water contact angles of the resulting electrochemical etching and Teflon coating were measured, with the results shown in Figure 3. The contact angle on the flat bare stainless steel surface before electrochemical etching was 89.5 • (Figure 3a), indicating low hydrophobicity. Nanoscale features on a solid surface are essential for superhydrophobicity, which can generate a high contact angle. The contact angle on the etched surface without Teflon coating was 128.2 • (Figure 3b). Lee et al., (2015) reported that E was hydrophobic, but the water droplet adhered onto the E instead of rolling over it, which is characteristic of hydrophobicity and illustrates the rose petal effect related to high contact angle hysteresis [39]. On the other hand, the contact angle measured on the E when coated with Teflon increased to 151.1 • (Figure 3c). Accordingly, the Teflon coating influenced the surface superhydrophobicity, having a water contact angle greater than 150 • . When tilted, a water droplet rolled off the ET (Figure 4b,c), as compared to the bare stainless steel, which held the water droplet due to capillary interactions (i.e., surface tension) between the surface and the liquid (Figure 4a). When a water droplet contacts a rough surface possessing an appropriate combination of surface texture and solid-liquid surface energy in such a configuration, water may not fully enter into the surface pores, but could rather position itself on the posts and form a composite solid-liquid-air interface, in line with the Cassie-Baxter model [40,41]. The Cassie-Baxter state brings about liquid-repellent surfaces, and the high liquid-air fractional contact area facilitates self-cleaning [42]. Such self-cleaning, nanostructured, superhydrophobic features can be useful in diminishing bacterial interactions with surfaces. Figure 5 displays the attachment responses of E. coli O157:H7 and S. Typhimurium on bare stainless steel; E treated at 10 V for 5, 10, and 15 min; and E treated at 10 V for 5, 10, and 15 min and followed by Teflon coating. The initial levels of E. coli O157:H7 and S. Typhimurium in PBS were 7.9 and 8.2 log CFU/cm 2 , respectively; and the attached populations of these bacteria on the control stainless steel measured 6.6 and 6.7 log CFU/cm 2 , respectively. The attachment levels of E. coli O157:H7 and S. Typhimurium were reduced by 1.1-1.5 log CFU/cm 2 and 0.8-1.5 log CFU/cm 2 on the E treated at 10 V, respectively. Slight reductions of bacterial attachment occurred on the E because of the diminished contact angle and the resulting water deposition led to bacterial adhesion. Enhancing the contact angle from 128 • to 151 • on the ET reduced attachment levels of E. coli O157:H7 and S. Typhimurium by 1.6-2.3 log CFU/cm 2 and 2.5 log CFU/cm 2 , respectively. This was caused by the weakly polarizable Teflon layer with low surface energy on the nanoporous feature, minimizing the van der Waals interaction between the bacteria and solid surface [43]. Previously, we showed that attached E. coli K-12 decreased by 1.5 log CFU/cm 2 on the Teflon-coated large nanoporous anodic aluminum oxide when compared to the bare aluminum [44]. Hizal et al. [28] reported that Teflon coating significantly decreased the adhesion levels of Staphylococcus aureus and E. coli as compared to the flat surface. In the present study, attachment levels between E. coli O157:H7 and S. Typhimurium were not significantly different (p > 0.05) on the developed surfaces, but a significant decrease in the population of S. Typhimurium was detected on the ET treated for 5 and 10 min. Fadeeva et al. [45] observed different attachment responses for S. aureus and Pseudomonas aeruginosa on superhydrophobic titanium surfaces made using femtosecond laser ablation. Although colonization of P. aeruginosa cells was not seen on the fabricated titanium structures, in contrast to S. aureus, its biovolume was comparable to that observed for S. aureus because P. aeruginosa created a substantial amount of EPS [45]. Wang et al. [46] reported that both E. coli O157:H7 and S. Typhimurium strains had different abilities to produce EPS, which can affect bacterial growth and colonization on solid surfaces. Bacterial attachment is considered the first step of biofilm formation and biofouling, and involves the accumulation of bacterial cells and organic materials. Under static conditions, the E. coli O157:H7 and S. Typhimurium cells in biofilm formed on the ET significantly decreased by 2.4 and 2.1 log CFU/cm 2 , compared to the bacteria on the E, which decreased by 0.9 and 0.4 log CFU/cm 2 , respectively ( Figure 6a). When immersed, biofilm formation and development usually occurs under dynamic flow conditions (e.g., in food processing facilities, venous catheters, and dental water lines), and biofilms have been known to attach strongly to surfaces under flow [47]. For this reason, biofilm formation was tested on the developed surfaces placed in the CDC reactor, which imitates nature-like shear forces and renewable nutrient sources. In the present study, the levels of biofilm-forming E. coli O157:H7 and S. Typhimurium cells on the ET significantly decreased by 2.9 and 3.0 log CFU/cm 2 (p < 0.05), respectively, as compared to bare stainless steel. Slightly additional reductions of bacterial biofilm cells on the ET were observed under flow conditions. Fluid shear stresses can cause detachment of bacterial cells by slipping and rolling off surfaces [48]. Hizal et al. [28] observed that nanopillared surfaces with Teflon coating caused the greatest reductions under flow, with more pronounced effects in S. aureus than E. coli. Yin et al. [49] reported that the use of SLIPS on the enamel surface significantly hindered biofilm development of Streptococcus mutans in vitro. In another study, SLIPS prevented 99.6% of P. aeruginosa biofilm adhesion over a 7 day duration, as well as 97.2% of S. aureus and 96% of E. coli biofilm attachment under static and low flow conditions [22]. Comparatively, in our study, ET reduced more than 99% of E. coli O157:H7 and S. Typhimurium in terms of attachment and biofilm development.
Reduced Bacterial Adhesion and Biofilm Formation on a Nanoengineered Stainless Steel Surface
After 4 h of the bacterial attachment treatment, the density of E. coli O157:H7 attached on the bare stainless steel was higher than the developed surface ( Figure 7). While dense biofilm coverage with EPS was seen on the bare stainless steel, we observed sparse biofilm cells on the ET under static conditions. Similarly, under flow conditions, clustered bacterial cells surrounded by biofilm mass were observed on the bare stainless steel, whereas poor biofilm deposition was observed on the ET. Further investigations are needed to enhance the nanostructure of stainless steel, and to understand the interactions between bacteria and the nanofeatures of stainless steel.
Electrochemical etching followed by Teflon coating treatment has been shown to be an effective strategy for hindering bacterial attachment and biofilm development on stainless steel in laboratory settings. As stainless steel is frequently used in the food industry, the developed nanoengineered surfaces could be a preventive control for microbial populations in food production sites, without requiring heat or chemical involvement. Biofilm formation of E. coli O157:H7 and S. Typhimurium on the developed surface significantly decreased in both static and flow aquatic environments as compared to bare stainless steel (p < 0.05). The combination of nanostructured stainless steel and coating with a low surface energy material holds great promise for antibiofilm and antibiofouling applications, including water systems, food industry settings, and biomedical spaces where bacterial adhesion is widespread. For commercialization of this technique for mass production in the food industry, the wear rate of the nanostructured stainless steel with Teflon coating and scaling factors in consideration of the etching process should be investigated in future studies. | 4,590.6 | 2020-10-22T00:00:00.000 | [
"Materials Science"
] |
Performance Analysis of the Differences Restricted Access Window ( RAW ) on IEEE 802 . 11 ah Standard with Enhanced Distributed Channel Access ( EDCA )
IEEE 802.11 standard is a WLAN (Wireless LAN) standard that has been used in all over the world. IEEE 802.11ah is the newer technology that designed to supports Internet of Things (IoT) and Machine-to-machine Communication (M2M). IEEE 802.11ah has a feature called Restricted Access Window (RAW) that capable to reduce power usage and have satisfying Quality of Service (QoS). In this research, Enhanced Distributed Channel Access (EDCA) is also applied. Same as RAW, EDCA also be able to affect QoS by modified the MAC Layer in 802.11 standard. This research used 3 different scenarios for RAW parameters: Modifying the number of RAW Group, Modifying the number of RAW Slot, and Comparing 2 Datamode. The EDCA Parameters that used in this research were: Contention Window and Arbitrary inter-frame Spacing Number. The values that expected to be the output in this research are: Delay, Throughput, Packet Delivery Ratio, Availability, and Reliability. After the research has been simulated, the results are: First, the lowest of average delay was Ngroup = 1, the highest of PDR was Ngroup = Nsta/2, and the highest of Throughput was Ngroup = Nsta/2. Second, the lowest of average delay was RAW Slot = 6, the highest of PDR were RAW Slot = 3 and 4, and the highest of Throughput was RAW Slot = 4. Third, the lowest of average delay was Datamode 3,9 Mbps BW 2 MHz, the highest of PDR was Dat mode 3,9 Mbps BW 2 MHz, and the highest of Throughput was Datamode 3,9 Mbps BW 2 MHz. Reliability, Availability, and Energy Consumption also can be affected by modifying RAW parameters, in 802.11ah Energy Consumption can be reduced by increasing the number of RAW Stations and RAW Groups.
I. INTRODUCTION
The background of 802.11ah development is to increase the performance of industry automation, home smart appliance, smart metering, farm industries and health solution.These industries mostly used wireless sensor to monitor the physical condition around the industry environment.802.11ah would increase the efficiency of information distribution in the network with further reduced power consumption by its feature of its advanced power saving function in significant compared to conventional Wi-Fi network uses [1].
Clear Channel Assessment (CCA) is a logic function in PHY layer to do Carrier Sensing and Collision Detection [1].This function will decide a Wi-Fi device to use available channel and check the distribution channel for its availability.In different case CCA will calculate the energy efficiency level in the wireless device to check the their energy threshold, and availability [1].The disadvantage for DCF method is treating all kinds of traffic types as same as each other.Then the newest contention-based method introduced on 802.11e standard, i.e.Enhanced Distributed Channel Access (EDCA).In EDCA, traffic data divided into four types, i.e.Voice, Video, Background, and Best Effort.And the channel sensing changes to more flexible duration known as Arbitration Interframe Space (AIFS).
Restricted Access Window (RAW) is a duration of time consists of several time slot.RAW could be used for certain usage [1], and its often used to avoid data collision in data communication [2].RAW also helps STA to prevent the STA using the channel continuously 164 Jurnal Infotel Vol. 10 No. 4 November 2018 https://doi.org/10.20895/infotel.v10i4.397but limited into fixed duration.So that a STA access is controlled by RAW.As the network controlled by RAW the data transfer process will adjust according to their respective rules of transfer.RAW also can forbid a STA to use particular slot in data transfer [3].In [4], RAW feature is also able to divide STA into groups, limiting the channel usage by STA itself, this feature help to reduce the chance of data collision while network is busy.While 802.11ah not giving specific method on how dividing and giving duration limit and also for the dividing method itself.IEEE 802.11ah has 1 MHz and 2 MHz channel for its usage in every country to be used and claimed will fulfill the requirement of 802.11 standards.The 1 MHz channel transmission will also increase the coverage and reliability performance for the standard itself [2].
Even though the 802.11ah has not been officially released, there are other research that investigating this standard already, for both the PHY Layer or the MAC Layer.The researches have been done to help for reach the better performance for 802.11ah.
In this paper [5], comparison between the normal AIFS number and the fixed AIFS number, by comparing the number of the RAW Groups =1 and RAW Groups = Nsta/2, and testing the differences number of RAW Slots.Also the result between [5] and this research is, this research got higher PDR percentage and lower Delay number but lower Throughput rate, it is because the Queue Packet number and time in this research given more time than [6], comparing two Error Rate Model, Yans Error Rate and Nist Error Rate and also implementing the RAW.And mainly, the 802.11ah researches are implemented by RAW scenario, because RAW is a new feature built in 802.11ah, that used to minimalize system power consumption.Also, [7], implementing Traffic-Adaptive RAW Optimization Algorithm (TAROA), the using if TAROA is aim to solve the aforementioned problem by estimating the transmission interval of each station on AP side and maximizing the number of successful transmission.Also compared TAROA to EDCA/DCF mechanism.
And there are also researches that implying Hidden Node condition to 802.11ah standard.Hidden Node is a condition where a station cannot hear the other station that located outside the range.Example, there are station A, B, and C, B is in A's range, but C is outside A's range.So when A transmit any data to B, C cannot hear, and C also transmit data to B at the same as A transmit [8].This problem will cause many data collisions in the network and it will occure performance issue.The Comparison between the 802.11ah standard that implemented with Hidden Node problem and the normal 802.11ah was done by this research [8].The result is the 802.11ah without Hidden Node has higher Throughput compared to the result for 802.11ah with Hidden Node problem [8].This research [7] was also implementing the Hidden Node problem in their research.
A. Research Scenario
This research uses 3 different scenarios, the RAW group, the RAW Slot, and the Datamode.The focus outcome of this research for every scenarios are, Throughput, Packet Delivery Ratio (PDR), Delay, Energy Consumption, Availability, and Reliability.The basic parameters that used in this research will be shown in Table 1.The first scenario is the RAW Group (Ngroup
B. Tools and Material
For this research, Network Simulator 3 (NS-3) is used for implementing the 802.11ah network, the version of NS-3 for this research is NS-3.25.The Operating System for this research is Linux Ubuntu 12.04 63-bit and used VMWare Workstation Pro for the Machine Virtualization for the Ubuntu.
C. Performance Test Parameter
After the research, the results of the simulation are examined with six test parameters.The test parameters are: Throughput is a receive rate for data that received at the receiver in a particular observation time, the measurement for Throughput is Mbit/s or Mbps.The formula for throughput is: b) Delay Delay is the average of transmitting time from the transmitter to the receiver.The formula for delay is: In the equation ( 2), Received Packet represents the total data that arrive in the receiver, and t is total of time it takes to send the data.
c) Energy Consumption
Energy Consumption is the energy that the system takes for doing its work, from the Access Point (AP) to the end-device.
d) Reliability
Reliability defined as the probability that shows the system can operating properly without failure or failure free condition within normal operating condition [9][10].
e) Availability Availability is the probability that the system can perform its required function when it needed to, that it is not failed or under a repairing action [10] [11] [12].
III. RESULT
This research analyzed using 3 given scenario.The first scenario is changing the amount of Ngroup and NSta, the second scenario is changing the amount of RAW slot and NSta and the third scenario is changing the Datamode type and NSTA.From these methods there will be a parameter to be measured, those parameters are: packet delivery ratio (PDR), throughput, delay, availability and reliability.
A. Result for First Scenario
First scenario is the Ngroup scenario with the number of Ngroup are Ngroup = 1 and Ngroup = Nsta/2.This scenario aims to find out how much the throughput can be by changing the number of Ngroup.In Fig. 1, the highest throughput rate for Ngroup = Nsta/2 is 0.148 Mbit/s for 90 stations, while the lowest number is 0.12 Mbit/s for 60 stations.Meanwhile, for the Ngroup = 1, the highest throughput rate is 0.1446 Mbit/s for 90 stations.And the lowest rate is 0.1089 Mbit/s for 150 stations.In Fig. 2, the highest delay time is in 150 stations for both Ngroup = 1 and Ngroup = Nsta/2.For Ngroup = 1, the highest delay is 4.7127 s and for Ngroup = Nsta/2 is 4.425 s.For the lowest delay time, for Ngroup = 1 is in 60 stations with 0.0019 s.For Ngroup = Nsta/2, the lowest delay is also in 60 stations with 0.0085 s.
B. Result for Second Scenario
In this scenario, the amount of RAW slot will be adjusted and also the amount of STA will be adjusted.This scenario will use certain amount of RAW slot, there will be 3, 4, 5, 6, and 7 slots RAW to be examined in this scenario.The amount of STA will be also adjusted by 60, 90, 120, and 150.From Fig. 3, the highest throughput rate for RAW Slot = 4 is 0.16033 Mbit/s in 90 stations while the lowest rate is 0.1091 Mbit/s in 60 stations.From Fig. 3, RAW slot = 4 has the highest average throughput rate with 0.136 Mbit/s, the second highest is RAW Slot = 3 with 0.133 Mbit/s, the third is RAW Slot = 5 with 0.124 Mbit/s, the fourth is RAW Slot = 7 with 0.02581 Mbit/s.From the Fig. 4 the highest delay recorded is while the amount of STA is 60 and 90, following by the amount of RAW slot = 7.The recorded delay was 0.3843 s and 0.9847 s.This delay is caused by the maximum duration per slot is increased following by the increasing slot.In this scenario the slot used is SlotFormat =1, where the maximum duration per-slot is 246,14 ms [6].
C. Result for Third Scenario
In the third scenario, the scenario being implemented in the third scenario is the changes of 2 type of datamode with a particular bandwidth and datarate of 2,7 Mbps BW 1 Mhz and 3.9 Mbps BW 2 MHz, and there are an increased pattern of STA from 60, 90, 120, and 150 STA.From Fig. 5, the highest throughput rate for Datamode 3.9 Mbps BW 2 MHz is in 90 stations with 0.16324 Mbit/s and the lowest is in 60 stations with 0.12 Mbit/s.Meanwhile, for the Datamode 2.7 Mbps BW 1 MHz, the highest rate is also in 90 stations with 0.1597 Mbit/s and the lowest rate is in 60 stations with 0.1199 Mbit/s.From Fig. 6, we can see the lowest delay is achieved while the datamode is 3.9 mbps BW 2 MHz by 0.01363 s STA = 60.While STA = 90, the delay is increased to 0.04716 s, and while STA = 120 and 150 STA there is a significant increased delay with respective score delay of 2.2985 s and 4.1578 s.
E. Result for Availability and Reliability
For the availability and reliability result, this research uses the first-thirty failed packets data to calculate the downtime from the pcapfile.This method applied to all availability and reliability calculations and graphs in this research.a) Availability and Reliability for First Scenario In Fig. 8, we can see the highest reliability we could get is in Ngroup = Nsta/2 with the amount of sta = 120 with score of 99.150% and the lowest score of availability is 98.510% where the amount of sta is 60.From Fig. 9, the highest availability for Ngroup = Nsta/2 is 99.160% in 120 stations and the lowest is 98.53% in 60 stations.For the Ngroup = 1, the highest availability is 99.14% in 150 stations and the lowest is 98.7% in 60 stations.In the Fig. 8 and Fig. 9, the degradation occurred on 120 stations to 150 stations.That is because the channel idle time is decreasing caused by the re-transmission request.In the Fig. 10, the highest average reliability is 98.72% by RAW Slot = 3, the second is 98.68% by RAW Slot = 4, the third is 98.648% by RAW Slot = 5, the fourth is 98.58% by RAW Slot =7, and the last is 98.568% by RAW Slot =6.From Fig. 11, the highest average availability percentage is RAW Slot = 3 with 98.740%, second is RAW Slot = 4 with 98.698%, third is RAW Slot = 98.655%, fourth is RAW Slot = 7 with 98.6%, and lastly is RAW Slot = 6 with 98.588%.c) Availability and Reliability for Third Scenario From Fig. 13, the highest availability percentage for Datamode 2.7 Mbps BW 1 MHz is 98.88% at 150 stations and the lowest is 98.6% at 60 stations.For the Datamode 3.9 Mbps BW 2 MHz, the highest percentage is 98.95% at 150 stations and the lowest percentage is 98.630% at 60 stations.
b) Availability and Reliability for Second Scenario
IV. DISCUSSION This chapter will discuss about the result that the research has been done.And also analyze about the graphs that computed from the result of this research.
A. QoS Analysis
In the Fig. 1 we have analyzed that the Ngroup = Nsta/2 had better Packet Delivery Ratio comparing to Ngroup = 1.Dividing STA in groups improve the performance of the network in case of packet transmitting.This result was also showing by increasing the Ngroup reduce contention, collision and boost the process of re-transmission.From Fig. 2, increasing the amount STA will cause STA delayed their packet transmission.However, the advanced method of re-transmission will reduce the throughput but increasing delay.There is an increased delay shown in the amount of increased STA by 90, 120 and 150, mostly caused by the activity of re-transmission which did by the system [6].Increasing the amount of STA will also reduce the throughput thus it also reduces the idle time of a channel.The more STA is also causing there are more packet to be send.
From Fig. 3, the declining trend of the line graph it's caused by re-transmission activity is higher and higher as the stations increase.It is because more number of stations will increase the packets transmitted in the network.Because the number of packet is increase and the re-transmission activity, it will affects the channel idle time, the more the transmit activity in the network, the less the channel idle time will be.
We can also see from the Fig. 4, the more the STA, the delay will also higher.Mostly by the increasing amount of STA will also increase the amount of packet send in the network, that also creating packet collision in the network and also increasing the contention.However, with the rule of RAW slot = 7 and amount STA = 150, the contention rate is lower [13].The previous set up resulting better performance graph of delay with the best delay achieved by 2.9838 s.From the scenario above we can conclude the more RAW slot we implement is not the main solution to an increased amount of STA, because RAW slot will makes a transmission duration shorter while increasing the packet size transmitted over the network.
From Fig. 6, we can see the lowest delay is achieved while the Datamode is 3.9 Mbps BW 2 MHz by 0.01363 s STA = 60.While STA = 90, the delay is increased to 0.04716 s, and while STA = 120 and 150 STA there is a significant increased delay with respective score delay of 2.2985 s and 4.1578 s. we can conclude from the graph with the Datamode of 3.9 Mbps and BW 2 MHz the scenario achieved a lower delay score for each amount of STA compared to the Datamode of 2.7 Mbps BW 1 MHz.this case is happening because of the higher datarate and the higher the bandwidth will reduce the delay following a narrowed packet transmission lane.From Fig. 5, he throughput rate in Datamode 3.9 Mbps BW 2 MHz is having higher score by 0.144 Mbit/s compared to Datamode 2.7 Mbps BW 1 MHz which is scored by 0.138 Mbit/s.It caused by faster re-transmission in the Datamode 3.9 Mbps BW 2 MHz so that the demand of re-transmission can be fulfilled much more rather than Datamode 2.7 Mbps BW 1 MHz.
B. Energy Consumption Analysis
For the Energy Consumption, from Fig. 7 with the increasing amount of sta, the energy to transmit is much smaller as stated in [14] [15], the more sta we deploy there is much less energy consumed in the system.The main point of energy consumption is caused by RAW usage a non-transmitting sta will be stated as idle sta.without RAW, increasing the number of sta will cause the packet queueing longer so that the idle time is decreased because of the contention.
C. Availability and Reliability Analysis
From Fig. 8, we can see from the line graph of Ngroup = nsta/2, starting from 60 STA to 12 sta, there is an increase to reliability score, while the highest increase is happening while the sta increase from 60 to 90 sta.this case is caused by self-healing factor of the system.From Fig. 9, the score of availability is affected by the score of downtime and also uptime.The Ngroup = nsta/2 scenario there is a downtrend of performance with sta = 150 sta which is mainly caused by the increasing trend of downtime, the root of this problem is caused by the inability of system self-healing so that the packet delivered slowly.There is also a chance of collision because of the increasing amount of sta, resulting more packet to be delivered in the network.
For the RAW Slot scenario, in Fig. 10, the score of reliability is showing an uptrend following the increase of sta number.We can conclude that the amount of packet circulating in the network will not affect the performance of the network in their self-healing factor.From Fig. 11, from the pattern of the lien graph, Jurnal Infotel Vol.10 No.4 November 2018 https://doi.org/10.20895/infotel.v10i4.397scenario is affecting the availability.This mostly caused, while we increase the RAW slot, the idle status of the channel will be shorter caused by lot of RAW attempting to access the channel in the same time.
Lastly, for the Datamode scenario's availability, in Fig. 12 and Fig. 13 we can see the relation between data rate changes and bandwidth changes from 2 type of Datamodes compared to current availability and reliability respectively.According to the result it is expectable, with higher bandwidth, it is easier to improve the QoS score, which is also affecting the availability score and also reliability.A high data rate and bigger bandwidth will reduce contention in the system.
V. CONCLUSION
Based on the test results, we can conclude that, increasing the Ngroup will affect the performance of 802.11ah network.This is due to the ability of the standard to divides Ngroup will reduce the traffic load.Increasing the RAW slot on the network in a less station will increase the chance of data collision.It is proven by the score of throughput and percentage of packet delivery ratio to the amount of raw slot of 3,4, and 5 having higher score than raw slot of 6 and 7. From the Datamode scenario, we can conclude with a higher speed and bigger bandwidth, the network itself will be more productive and performing better.It is because, providing better datarate and bandwidth can reduce the chance to drop the packets.
Availability is a comparison between a time where the network is working with its total amount of time working to the downtime where the network is unavailable.While reliability is a terminology where the network is able to work correctly and flawless subtracted by the factor of the network downtime.However, the score of availability and reliability is affected by the QoS score of the network, resulting a variable of uptime and downtime of the network.Increasing the amount of station in 802.11ah will cause less energy consumption.It is due to one of 802.11ahRAW features.With RAW, the station which is not transmitting at the time will be considered as perfectly idle station.In the idle state, the station is not consuming a noticeable amount of energy.
) and the RAW Station (Nsta), the Ngroup uses 2 different numbers which are Ngroup = 1 and Ngroup = Nsta/2 and the RAW station uses 60, 90, 120, and 150 stations.The Datamode for first scenario is Datamode 2.4 Mbps BW 1 MHz or MCS5 BW 1 MHz and RAW Slot = 3.The second scenario is the RAW Slot and the RAW Station (Nsta).The RAW Slot uses 5 different numbers which are 3, 4, 5, 6, and 7 RAW Slots and the RAW station uses 60, 90, 120, and 150 stations.The Datamode for the second scenario uses the Datamode 2.4 Mbps BW 1 MHz, and the Ngroup =1.The third scenario is the Datamode and the RAW Station (Nsta).This research uses 2 different kinds of Datamode, first is Datamode 2.7 Mbps BW 1 MHz or MCS 6 BW 1 MHz and second is Datamode 3.9 Mbps Jurnal Infotel Vol.10 No.4 November 2018 https://doi.org/10.20895/infotel.v10i4.397BW 2 MHz or MCS 4 BW 2 MHz.The RAW Station uses 5 different numbers, same as other scenario, which are 60, 90, 120, and 150 stations.Number of Ngroup is Ngroup = 1 and the RAW slot = 3.
Fig. 1 .
Fig.1.Result for Ngroup and Number of STA to Throughput
Fig. 2 .
Fig.2.Result for Ngroup and Number of STA to Delay
Fig 1 .
Fig 1. Result for RAW Slot and Number of STA to Throughput
Fig. 2 .
Fig.2.Result for RAW Slot and Number of STA to Delay
Fig. 3 .
Fig.3.Result for Datamode and Number of STA to Throughput
Fig. 4 .
Fig.4.Result for Datamode and Number of STA to Delay
Fig. 5 .
Fig.5.Result for All Scenarios to Energy Consumption From Fig.7, the biggest average score of energy consumption is achieved by the RAW Slot scenario.RAW Slot = 6 has the highest average energy consumpted with 10.39 Joules, followed by RAW slot = 7 with 10.315 Joules.The Datamode scenario has the
Fig. 8 .
Fig.8.Result for Ngroup and Number of Stations to Reliability
Fig. 9 .
Fig.9.Result for Ngroup and Number of Stations to Availability
Fig 10 .
Fig 10.Result for RAW Slot and Number of Stations to Reliability
Fig 11 .
Fig 11.Result for RAW Slot and Number of Stations to Availability
Fig 12 .
Fig 12. Result for Datamodes and Number of Stations to ReliabilityFrom Fig.12, the highest reliability percentage for Datamode 3.9 Mbps BW 2 MHz is 98.940% at 150 stations.The lowest percentage is 98.610% at 60 stations.For the Datamode 2.7 Mbps BW 1 MHz, the highest is at 120 and 150 station with 98.86% and the lowest is at 60 stations with 98.6%.
Fig. 13 .
Fig.13.Result for Datamodes and Number of Stations to Availability
Table 1 .
Basic Parameters | 5,418 | 2018-11-30T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
DCM for complex-valued data: Cross-spectra, coherence and phase-delays
.
Introduction
This technical note describes a dynamic causal or generative model for time-series, under ergodic assumptions.It is based on a linearization of mean-field models of coupled dynamical systems; in our case, neuronal subpopulations.Under the assumption that the system is driven by exogenous fluctuations with known (or parametric) spectral densities and uniform phase-distributions, it is possible to predict the coherence and phase-differences observed among system responses; in our case electrophysiological measurements.This enables the model to be optimized using empirical measures of (complex) cross-spectral densities and thereby access hidden parameters governing the datagenerating process (e.g., coupling parameters and rate constants).We validate this model using simulations and illustrate its application using real local field potential (LFP) data.
The contributions of this work are threefold.First, it generalizes variational Bayesian techniques (Variational Laplace: Friston et al., 2007) used to invert models of empirical data so that they can be applied to complex data.This generalization can be applied in any setting that uses variational Laplacian schemes to select and optimize models.It is illustrated here in the context of Dynamic Causal Modeling of steadystate responses in electrophysiology.Second, we use the generalization to show how conventional time-series measures of activity and coupling in the frequency domain can be derived from Dynamic Causal Models that have a biophysically plausible form.Finally, we show how this enables the computation of conventional measures, such as coherence and phase-delay functions, between hidden states; in other words, between sources as opposed to sensors.
Current applications of Dynamic Causal Modeling have been used to explain real-valued data features, including evoked transients, induced responses and power spectra (Kiebel et al., 2008a,b), using biologically informed mean-field models of coupled dynamical systems.The ability to model complex-valued data features offers two key advantages.First, while current DCMs can estimate conduction delays, the estimates do not have access to phase information.In principle, models that can predict or generate complex data enable the phase relationships among observed responses to inform and constrain estimates of model parameters, like conduction delays.More importantly, the extension to complex valued data bridges the technical divide between modelbased and model-free analyses (Brown et al., 2004;Kay and Marple, 1981), as we hope to show.
The generalization to complex data was motivated by DCM for steady-state responses, as applied to electrophysiological time-series acquired under particular brain states (Moran et al., 2007(Moran et al., , 2008)).Our focus here is on the potential importance of complex-valued data features and the implications for the inversion or optimization of models of those features.We try to illustrate the potential of this scheme by looking at how precisely axonal conduction delays can be estimated, when fitting complex cross-spectra, in relation to real-valued crossspectra.More generally, we anticipate that this DCM will provide a useful link between the generative modeling of biological time-series and conventional (linear systems) characterizations (e.g.Kay and Marple, 1981) that predominate in electrophysiology.This link rests upon the fact that conventional measures (e.g., coherence, phase-delay and auto-correlation functions) are caused by neuronal circuits with particular biophysical parameters (e.g., synaptic efficacy, time constants and conduction delays).This means that biophysical parameter estimates can be used to create conditional estimates of coherence and cross-correlation functions.In turn, this means that it is possible to infer which biophysical parameters are responsible for observed coherence or phase-delays.Conceptually, the difference between DCM and conventional measures of coupling (i.e., functional connectivity) lies in the fact that DCM appeals to an explicit generative or forward model of how data features are caused (i.e., effective connectivity).In this instance, the data features are provided by the spectral behavior of observed time-series that are, usually, the end point of conventional analyses.The advantage of having an underlying generative model is that one can estimate the spectral behavior and relationships, not just among observed sensors or data channels, but between the neuronal sources generating those data.Furthermore, one can map quantitatively between the underlying biophysical parameters and spectral summaries.We will illustrate these points using simulated and real data.
Dynamic Causal Modeling for steady-state responses has been used to make inferences about hidden neuronal states and parameters using both invasive and non-invasive data.There has been a considerable effort to validate this approach using simulations, developmental manipulations and psychopharmacological interventions (Moran et al., 2008(Moran et al., , 2009)).There is now a large literature on Dynamic Causal Modeling in electrophysiology (Chen et al., 2008;Daunizeau et al., 2009;David et al., 2006a,b;Kiebel et al., 2007) and, in particular, models for steady-state activity (Moran et al., 2007(Moran et al., , 2008(Moran et al., , 2009(Moran et al., , 2011)).We take DCM for steadystate responses as the starting point for the causal modeling of complexvalued data.This is because this current scheme uses the absolute value of the cross-spectrum between channels.However, the cross-spectrum is a complex quantity, which means that it has the attributes of modulus (absolute value or amplitude) and argument (angle or phase).This means that one is throwing away information when using absolute measures to invert or fit generative models.In what follows, we look at the advantages of using both amplitude and phase information.
The phase of the cross-spectrum is usually taken to indicate something about systematic lags or delays between two signals.If one signal appears in the other, after a constant time lag, then the phasedifference scales with frequency.In practice, this notion has been used, for example, in epilepsy research where authors have used phase differences between signals from different intracerebral electrodes or EEG channels to estimate conduction or propagation delays (Brazier, 1972;Gotman, 1981).A regression of the phase-difference on frequency is also often used to estimate temporal delays over a particular frequency range (Rosenberg et al., 1989).More generally, one can divide the phasedifference by frequency to quantify time lags as a function of frequency.A further advantage of working with the complex cross-spectrum is that its inverse Fourier transform provides the cross-correlation function between two time-series.If the correlation structure is dominated by a single time delay, the latency of this delay can often be inferred from the timing of a peak in the cross correlation.In short, a generative model of complex-valued data features (e.g.cross-spectra) provides a more complete model of data and provides conditional estimates of coherence, phase-delay and cross-correlation functions that are implicitly constrained by the functional architectures inducing those correlations.The examples in this paper attempt to illustrate this in a practical fashion.
This note comprises four sections.In the first, we consider the nature of cross-spectra and their relationship to coherence and phasedelays.This section is used to frame conventional measures of coupling in terms of the underlying transfer functions between sources generating data.Crucially it shows that although coherence is sensitive to the dispersion of phase-differences between two sources or sensors, coherence does not provide a complete picture of coupling, because it is insensitive to phase-delays per se.As indicated above, a more comprehensive summary rests upon the complex cross-spectra that include both real and imaginary parts that embody phase-delays.However, the phase-delay does not report the time-delays between two signals directly and can only be interpreted in relation to a model of how time-delays are manifest in data.The models we use here are biologically plausible (neural mass) models, based on delay differential equations that make time-delays an explicit model parameter.This section concludes with a brief description of these neural mass models that constitute a generative model for steady-state responses.The second section briefly reviews the inversion of these models, with a special focus on constructing free-energy bounds on model logevidence for complex-valued data.In the third section, we present an illustrative analysis of simulated data, in which we know the coupling strengths and time-delays among a small number of neuronal populations.These simulations are used to verify that the various conditional estimates of coupling, in time and frequency space, can recover the true values in a reasonably precise way.Our special interest here is in the direction of coupling and conduction delays, as inferred through the conditional distribution over time-delays and how they manifest in phase-delay and cross-correlation functions.There are many interesting aspects of the mapping between model parameters (effective connectivity) and spectral characterizations (functional connectivity): we have chosen conduction delays as one of the more prescient.This is because there is a non-trivial relationship between (axonal) conduction delays and delays that manifest in terms of phasedelays and cross-correlation lags.In the final section, we repeat the analysis of the third section but using real data from local field potential recordings in rats.
Coherence and causal modeling
In this section, we look at the nature of conventional measures of functional connectivity from the point of view of phase-differences and their distribution.The main points made in this treatment are: (i) Coherence is a function of the absolute value of the cross-spectra and, as such, provides an incomplete picture of spectral dependencies among stationary time-series.(ii) Furthermore, even if we consider complex cross-spectra, their phase information cannot be interpreted in terms of time-delays, unless we know (or model) how they were caused.Specifically, when the coupling between two sources of data is bidirectional, there is no straightforward correspondence between phase and time-delays.The resulting ambiguity can only be resolved by reference to a model of how phasedifferences are generated.This section concludes by briefly reprising the generative models used in DCM of steady-state responses that resolve this ambiguity.
Notation and preliminaries
Where possible, we will denote Fourier transforms by upper case letters such that S i (ω) ∈ C is the Fourier transform of a stationary timeseries s i t ð Þ∈R.We will also make a crucial distinction between the observed or sample cross-spectrum g ij (ω) (referred to as the sample cross-spectrum) and a cross-spectrum g ij (ω, θ) predicted by a model with parameters θ (referred to as the modeled cross-spectrum).Unless used as a subscript, j = ffiffiffiffiffiffiffi ffi −1 p .
Coherence and phase-synchronization
The complex coherence function between two wide-sense stationary signals s i (t) and s j (t), is equal to their cross power spectrum g ij (ω)= 〈S i S j *〉 ∈ C divided by the square root product of the two auto power spectra (Carter, 1987;Priestley, 1981).The magnitude-squared coherence (Carter, 1987), C ij ω ð Þ∈ R (herein referred to as simply 'coherence') is given by: The coherence can be factorized into the correlation between the signal amplitudes and the (circular) dispersion of their phasedifferences (e.g.Priestley, 1981;Friston et al., 1997, (Eq. A3)): Here, α i =|S i | corresponds to signal amplitude and ϕ i =arg{S i } to phase (where radial frequency is ω : = ˙ϕi ).Eq. ( 2) shows that coherence depends on a function of the density p(δ ij ) over phasedifferences: δ ij = φ i − φ j .This function Φ(ω) ij is called the phasesynchronization index (Mardia and Jupp, 1999;Pascual-Marqui, 2007) and reflects the circular variance or dispersion of phase-differences.Eq. ( 2) means that coherence is effectively the normalized absolute value of the cross-spectrum, while phase-synchronization is the absolute value of the cross-spectrum when derived from normalized Fourier transforms (see Pascual-Marqui, 2007 for a generalization to multivariate time-series).The key thing to note here is that coherence does not change with the average phase-difference, only its dispersion; that is, coherence reflects the stability of the phase difference.
In what follows, we will be concerned with generative models of data features that disclose the mapping between some exogenous (neuronal) fluctuations or innovations u k t ð Þ∈ R with Fourier Transform U k (ω) ∈ C and observable signals s i t ð Þ∈ R, under ergodic assumptions.Under linear assumptions, this mapping is specified by a kernel, κ i k (τ, θ)=∂ s i (t)/∂ u k (t − τ), whose parameters θ we wish to estimate.Usually, one would associate each innovation with a neuronal population or source of signals (although there may be others, like common input).The kernels are then defined by a model of how the innovations are transformed by synaptic processing in connected sources and the physical transmission of source activity to one or more sensors.Hence, the parameters of the model (or the kernels) include the effective connectivity among sources (with time-delays) and the parameters of any mapping from sources to channels (e.g., an electromagnetic forward model for EEG data).
The cross-spectral density is the sum of cross-spectra induced by each innovation, where there is an innovation for each source of activity that contributes to observed signals.The cross spectrum due to the k-th innovation is simply the product of the transfer functions (Fourier transforms) of the corresponding kernels, K i k (ω, θ) and the spectral density, γ k (ω)=〈|U k ||U k |〉 of (statistically independent) innovations Here, ϕ i k = arg{K i k } is the phase-delay induced by the kernel mapping the k-th innovation to the i-th channel.Eq. (3) just means that the predicted cross-spectrum is a linear mixture of cross-spectra induced by each innovation.This mixture depends on the mapping from each innovation or source to the channels in question.For example, in local field potential recordings, the number of innovations and channels could be the same.However, in non-invasive electromagnetic studies, the number of channels can be much greater than the number of sources. Eqs.
(2) and ( 3) provide a generative model of sample crossspectra.We have exploited this sort of model for steady-state responses extensively, when trying to infer the neuronal architectures generating local field potentials and other electromagnetic signals (Moran et al., 2007(Moran et al., , 2008)).However, these models used real-valued cross-spectra g ij ω ð Þ∈ R, which ignore systematic phase-differences.So how do phase-differences induced by the transfer functions appear in complex cross-spectra?Eq. (3) shows that the cross-spectrum is a mixture of complex components due to each innovation, where the phase-differences | of the associated transfer functions.This means the phase of the cross-spectrum is a complicated mixture of phase-differences that is related to the average phase-difference between channels.The average phase-difference, induced by all the innovations together is: This is a rather complicated integral to evaluate (involving Gauss hypergeometric forms; see Lee et al., 1994).Fig. 1 (left panel) shows the implicit density over phase-differences for a simple example with asymmetries in how innovations drive the signals.It can be seen that the density peaks at the phase-difference induced by each innovation.If we now plot the phase-delay function arg{g ij (ω, θ)} against the average phase-difference (the numerical integral in Eq. ( 4)), we see that the two are closely related (but are not the same because the phase of an average is not the average of a phase).Fig. 1 (right panel) shows the relationship (dots) between the phase of the crossspectrum and the average phase-difference; this relationship was disclosed by varying the relative power of the two innovations, where ln γ 2 /γ 1 ∈ [− 8, 8], while keeping the time-delays fixed.
Fig. 1 illustrates the key problem with interpreting phase-delays in terms of time-delays, when the sources of data are reciprocally coupled: the phase-delay function arg{g ij (ω, θ)} is zero for certain combinations of power.However, the time-delays did not change.Put simply, the phase-delays induced by different innovations can cancel each other out.Only when the power of one innovation predominates is the symmetry broken.In this situation, the phasedelay function reflects the time-delays associated with the larger innovation.This means that the phase-delay function is a lower bound on the phase-delays caused by each innovation (that depend on the time-delays between sources).This can be seen in the right panel of Fig. 1, which shows that phase-delays are bounded by the phase-differences induced by the two innovations.This means that the strength and time-delay of connections among distributed sources of data cannot be recovered from cross-spectra (or phasedelay functions) in the absence of a generative model that specifies how sources are connected.
Summary
In summary both the real and imaginary parts of cross-spectra contain useful information about the underlying system.The modulus relates to the measure of coherence, while the argument (phase or angle) is a complicated function of phase-delays induced by exogenous fluctuations (innovations).However, neither provides a unique or complete description of how data are generated and may be better thought of as data features that have yet to be explained by a generative model.In what follows, we will therefore consider the key data feature the sample cross-spectra g ij (ω) and treat both its real and imaginary parts on an equal footing.We will use conditional predictions of |g ij (ω, θ)|, arg{g ij (ω, θ)} and FT − 1 {g ij (ω, θ)} to report the coherence, phase-delay and cross-correlation functions for pairs of hidden neuronal states and observed signals.To generate these predictions we need the system's kernels, κ i k (τ, θ).These are specified in a straightforward way by the form of the model and its parameters, as described next.
From models to kernels
The kernels obtain analytically from the Jacobian I = ∂f = ∂x describing the stability of flow ẋ = f x; u; θ ð Þof hidden neuronal states, x(t) and a mapping (forward model) s(x, θ) : x → s that couples hidden states to observed signals (channel data).For channel i, and innovation k, the kernel (which can be evaluated numerically) is This means the kernels are functions of the model's equations of motion and output mapping.The output mapping may be a simple gain function (for LFP data) or an electromagnetic forward model (for EEG and MEG data).The use of the chain rule follows from the fact that the only way past inputs can affect current outputs is through hidden states.The particular equations of motion used here correspond to a neural-mass model that has been used extensively in the causal modeling of electromagnetic data (David and Friston, 2003;Jansen and Rit, 1995;Moran et al., 2008).These equations implement a simple but biologically motivated (alpha-function based) model (Jansen and Rit, 1995) that captures the key aspects of synaptic processing; fast excitation and inhibition in layered cortical sources (Moran et al., 2011).The equations for a single source are summarized in Fig. 2.
Endogenous inputs
In a DCM comprising N sources, firing rates provide endogenous inputs from subpopulations that are intrinsic or extrinsic to the source (see Fig. 2).These firing rates are a sigmoid function of depolarization, which we approximate with a linear gain function (evaluated at the system's fixed point; Moran et al., 2007).Subpopulations within each source are coupled by intrinsic connections (with a conduction delay of 4 ms: Lumer et al. (1997)), whose strengths are parameterized by γ = {γ 1 , …, γ 5 } ⊂ θ.These intrinsic connections can arise from any subpopulation.Conversely, in accordance with cortical anatomy, extrinsic connections arise only from the excitatory pyramidal cells of other sources.The strengths of these connections are parameterized by the forward, backward and lateral extrinsic connection matrices; A F ∈R NxN , A B ∈R NxN and A L ∈ R NxN respectively, with associated conduction delays Δ∈ R NxN .
Exogenous fluctuations
The innovations correspond to exogenous fluctuations u t ð Þ∈R Nx1 that excite the spiny stellate subpopulation in the granular layer.We
A B
Fig. 1.Phased distribution functions and expected phase-differences.Panel A shows the distribution over phase-differences between two channels or sources.In this (toy) example, we have introduced an asymmetry in the amplitude of the innovations driving each source (and the coupling between them).This results in a rather complicated distribution with two peaks corresponding to the phase-delays induced by the innovations at each source respectively.Panel B shows the relationship between the phase-difference of the (complex) cross-spectrum and the mean of the phase-difference.This relationship (dots) was disclosed by varying the relative amplitude of the innovations driving the sources.The lower panel details the simple form of the transfer functions assumed for this illustrative example.
parameterize their spectral density, γ(ω), in terms of white and pink spectral components; where these power law terms are ubiquitous features of neuronal noise (Freeman et al., 2003;Stevens, 1972): As noted above, one innovation is associated with each neuronal node or source.
Neuronal responses
The observer function is a mapping from N sources to observed data features expressed at M channels: is a mixture of the depolarizations over subpopulations in each source.For invasive LFP recordings (that are obtained close to neuronal sources) this mapping can be reduced to a simple gain matrix, L = diag(exp(η 1 , …, η N )) where the parameters model electrode-specific log-gains.In EEG and MEG (electro-and magnetoencephalography) the mapping is specified with a gain matrix of lead-fields, L η ð Þ∈R MxN , with unknown spatial parameters, η ⊂ θ, such as source location and orientation.Generally, this matrix rests upon the solution of a conventional electromagnetic forward model.
This completes the description of the neuronal model and, implicitly, the generative model for modeled cross-spectra.This model contains unknown parameters θ ⊃ {γ, A, Δ, α, β, η, …} controlling the strength and delays of intrinsic and extrinsic connections, the auto-spectra of innovations and the electromagnetic forward model.These parameters define the kernels and associated cross-spectra in Eq. (3).To complete our specification of a generative model, we presume the data (sample cross-spectra) to be a mixture of the predicted cross-spectra, channel noise and Gaussian prediction error (see Moran et al., 2009 for details) The channel noise, like the innovations, is parameterized in terms of (unknown) white (α) and pink (β) components, which can include channel-specific and non-specific components.Please see Moran et al. (2009) for more details.
Summary
This section has motivated the use of complex cross-spectra as data features that summarize the behavior of ergodic time-series.We have seen that only the absolute values of cross-spectra are used to form measures of coherence.Although coherence depends upon the dispersion of phase-differences, it is not sensitive to the expected or systematic phase-differences that could be introduced by neuronal dynamics or conduction delays.A simple solution to this is to use a generative model of both the real and imaginary parts of the crossspectra and fit these predictions to sample cross-spectra.To do this we need a Bayesian model inversion scheme that can handle complexvalued data.
Fig. 2. Equations of motion for a single source.This schematic summarizes the equations of motion or state equations that specify a neural mass model of a single source.This model contains three sub-populations, each loosely associated with a specific cortical layer; corresponding roughly to spiny stellate input cells, deep pyramidal output cells and inhibitory interneurons.Following Felleman and Van Essen (1991), we distinguish between forward connections (targeting spiny stellate cells in the granular layer), backward connections (with slower kinetics and targeting pyramidal cells and inhibitory interneurons in both supra-and infragranular layers) and lateral connections (targeting all subpopulations).The output of each source is modeled as a parameterized mixture of the depolarization of each subpopulation (primarily the pyramidal cells).The second-order differential equations describe changes in (vectors of) hidden states x(t) (e.g., voltage and current) that subtend observed local field potentials or EEG signals.These delay differential equations effectively mediate a linear convolution of presynaptic activity to produce postsynaptic depolarization v(t).Average firing rates within each sub-population are then transformed through a nonlinear (sigmoid) voltage-firing rate function σ(•) to provide inputs to other populations.These inputs are weighted by connection strengths.Here, v(t − Δ) represents a vector of (primarily) pyramidal depolarization in all sources, delayed by a connection-specific time-lag.Intrinsic connection strengths γ i : i = 1, …, 4 are shown connecting the different populations in different layers.When these equations are linearized around the system's fixed point, they specify the systems transfer function, and implicitly, the complex crossspectra mapping from exogenous neuronal fluctuation or innovations u(t) to observed responses s(t).These functions depend on the parameters of the model that include the extrinsic connection strengths and other parameters like H j , τ j : j ∈ {i, e} that control post-synaptic responses of inhibitory and excitatory populations.Under assumptions about the spectral form of the innovations, this constitutes a generative model of observed cross-spectra over multiple channels.
Inverting models of complex-valued data
In this section, we consider a generalization of the variational scheme (Friston et al., 2007) used to invert Dynamic Causal Models that can handle complex-valued data.In what follows, we will briefly summarize the overall principles of model inversion and list the special differences that attend the analysis of complex-valued data.
Almost universally, the fitting or inversion of Dynamic Causal Models uses a variational free-energy bound on the log-evidence for a model m (see Friston et al., 2007 for details).This bound is optimized with respect to a variational density q(θ) (which we assume to be Gaussian) on unknown model parameters.By construction the freeenergy bound ensures that when the variational density maximizes free-energy, it approximates the true posterior density over parameters: q(θ) ≈ p(θ|y, m).At the same time, the free-energy itself F y; q ð Þ≈lnp yjm ð Þ becomes a bound approximation to the logevidence of the data.The (approximate) conditional density and (approximate) log-evidence are used for inference on parameter and model spaces, respectively.
Usually, one first compares different models (e.g., with and without particular connections) using their log-evidence and then turns to inferences on parameters, under the model selected (for an overview of procedures for inference on model structure and parameters in DCM, see Stephan et al., 2010).Here, we focus on the use of the conditional density, given a single model, which we assume has a Gaussian form This density is quantified by the maximum a posteriori (MAP) value of the parameters μ (corresponding to their conditional mean or expectation) and their conditional covariance Σ (inverse precision) that encodes uncertainty about the estimates and their conditional dependencies.Crucially, the conditional mean μ or MAP estimate of the parameters implicitly defines the conditional estimate of the system's transfer functions κ i k (τ, μ) and through these, the modeled cross-spectra g ij (ω, μ) and associated functions.In other words, having optimized the model and parameters with respect to free-energy, we can recover all the conventional spectral characterizations, such as coherence, phase-delay and crosscorrelation functions.However, these are not descriptive characterizations, but are mechanistically interpretable (in the context of the model).To access these summaries, we need to express the freeenergy of the variational density in terms of complex-valued data.
The free-energy of complex-valued data
The free-energy is the average of the log-likelihood and log-prior of the model under the variational density and its entropy (see Friston et al., 2007;Kiebel et al., 2008a,b).For nonlinear models, under Gaussian assumptions about the variational density and observation noise, this has a very simple form: Here, g(ω, μ) represents any nonlinear prediction or mapping from model parameters to data features (cf, Eq. ( 6)) and ε(μ) ∈ C are the corresponding prediction errors (i.e., discrepancies between the sampled and predicted cross-spectra).Similarly, v(μ) ∈ C are prediction errors on the parameters, in relation to their prior density p θ j m ð Þ= N υ; Π −1 ν À Á .For complex-valued data, we have to separate the real and imaginary parts of the implicit sum of squared prediction error in Eq. ( 8).This is because the sum of an absolute value is not the absolute value of a sum.This means the sum, implicit in the linear algebra above, has to be performed separately for real and imaginary parts.In a similar vein, the partial derivatives of the Gibb's energy G μ ð Þ with respect to the parameters are again separated into real and imaginary parts: The gradients in Eq. ( 9) are used in a Gauss-Newton scheme to optimize the parameters iteratively, until the free-energy has been maximized.In practice, things are a little more complicated because one often makes a mean-field assumption when estimating parameters of the model and the noise precision, Π ε (inverse covariance).In other words, the precision of the prediction error is usually assumed to be conditionally independent of the parameters.The gradient ascent then becomes a coordinate ascent that optimizes the conditional expectations of the model and precision parameters respectively.This is called Variational Laplace, which reduces to classical expectation maximization under some simplifying assumptions.A full description of these schemes, and their relationship to each other, can be found in Friston et al. (2007).
Summary
In this section, we have considered the central role of the freeenergy bound on log-evidence used in model selection and inversion.
The only thing we have to worry about, when dealing with complexvalued data, is to separate the real and imaginary parts of the data (and implicitly prediction errors), when evaluating the free-energy and its gradients.Having done this, we can then use standard schemes to optimize the parameters of any Dynamic Causal Model and select among competing models to find the one that has the highest freeenergy (log-evidence).We now illustrate the application of this scheme using simulated data.
Simulations and validation
In this section, we use simulated data from four sources, with known directed connections and delays, to establish the face validity of the inversion scheme of the previous section.Our particular focus here is on the improvement in the precision of parameter estimates, when including the phase information in complex cross-spectra.To illustrate this we will look closely at the conditional density over conduction delays.We will then be in a position to compare these estimates with true values and how these conduction delays translate into phase-delays and time-lags at the level of simulated population dynamics.
Simulations
To simulate data, we used the (David and Friston, 2003) neural mass model above to simulate four sources, organized into two pairs.The sources within each pair were coupled with lateral connections, whereas there was an asymmetric directed coupling between the first and second pair.This allowed us to look at predicted and estimated cross-spectra within and between pairs and illustrate the consequences of reciprocal connections between sources.The data were generated using the model parameters estimated from the empirical data of the next section.The only difference was that we suppressed backward connection to enforce an asymmetric (directed) coupling between the two pairs.The data were modeled as arising from pairs of sources in the Globus pallidus (GP) and subthalamic nucleus (STN).The connections from the GP to the STN constitute the forward (GABAergic) connections of the indirect basal ganglia pathway, while the reciprocal (glutamatergic) connections are from the STN to the GP.To suppress these backward connections we set them to a half, while the forward connections were given a value of four.The lateral connections were given intermediate values of one.This means our forward connections were 2 × 4 = 8 times stronger than the backward connections (i.e., the first pair of sources drove the second).The conduction delays were as estimated from the empirical data.See Fig. 3 (left panels) for a schematic of this small network of sources.
Simulating spectra
Data were simulated over 4 s with time bins of 4 ms.Cross-spectra were generated directly in frequency space, assuming that each source was driven by random fluctuations and that LFP data were observed with a signal to noise ratio of about 10%.The spectral characteristics of the innovations and channel noise were controlled by mixing white and pink noise components (see Eqs. ( 6) and ( 7)), using the conditional parameter estimates from the empirical analysis reported in the next section.
An example of these simulated data (sample cross-spectra) is shown in Fig. 3 (right panels) and illustrates the characteristic beta coupling seen in patients with Parkinson's disease and animal lesion models thereof (Lehmkuhle et al., 2009;Silberstein et al., 2005).These simulated cross-spectra were then used to invert the neural mass model described.Because connections strengths and time delays (and other model parameters) are nonnegative quantities, their prior mean is scaled by a free parameter with a log-normal distribution.We refer to these as the log-scale parameters, such that a log-scaling of zero returns the prior mean.Priors, p θ jm ð Þ= N υ; Π −1 ν À Á (Eq.( 8), above) are specified in terms of their prior mean η and variance ζ (as detailed in Moran et al., 2008 Table 1 and available in SPM8 http://www.fil.ion.ucl.ac.uk/spm).The prior variance determines how far the scaleparameter can move from its prior mean.Parameters, like the maximum excitatory potential and channel time constants have a prior variance of ζ = 1/8 : {ζ ∈ Π ν − 1 }, allowing for a scaling up to a factor of about four.In contrast, relatively flat priors are used for effective connectivity measures (the parameters of interest) to allow for an order of magnitude scaling (with a prior variance of ζ = 1/2).This ensures that their posterior estimates are determined primarily by the data.In other words, the scheme will optimize the strength of all connections in the model, both intrinsic to each source (Fig. 2) and extrinsic between sources (Fig. 3).There is no bias in the estimates; however, the prior variances of the extrinsic parameters are larger than those of the intrinsic parameters, allowing for greater divergence from their prior mean in posterior estimation (c.f.Eq. ( 8)).We have chosen to highlight these extrinsic connectivity estimates in Fig. 4 because these quantify inter-regional coupling and determine the delays, coherence and phase at the source and sensor levels.
Parameter estimates and their cross-spectra corresponding absolute values or modulus (lower panel).The auto and cross-spectra for all four simulated channels are shown as dotted (colored) lines.Following optimization of the model parameters, the modeled cross-spectra are shown as solid lines and illustrate the goodness of fit or accuracy of model inversion (they are barely distinguishable in many cases).The key thing to take from this example is the pronounced cross-spectral density in the beta range (20 Hz) that can be seen in both the real and imaginary parts.The relative contribution of the complex part is only about 10% of the real part but is concentrated in the frequency ranges over which coherence induced by coupling among the sources is expressed.The imaginary part of the complex crossspectra (upper right panel) contains information that enables the estimation of phase-delays.These predicted cross-spectra are based on the conditional means of the parameters shown in the next figure.
The upper panel of Fig. 4 shows the true and prior values of the key coupling parameters in this DCM (for clarity, only the strengths and conduction delays of the extrinsic connections among sources are shown).The lower two panels show the posterior or conditional densities after fitting the model to complex (middle panel) and absolute (lower panel) cross-spectra from the four regions.These estimates allow us to quantify any improvement in the accuracy or precision of parameter estimates, in relation to the true values, when inverting complex data relative to absolute data (Moran et al., 2009).The model comprises four forward connections (A F ) from the GP to STN (Fig. 3), four backward connections (A B ) from the STN to GP, and four lateral connections (A L ).Given the conditional densities over these parameters, we can not only assess whether the complex scheme provides estimates that are closer to the true parameter values than the corresponding modulus-based estimates, but also whether the conditional precision or confidence increases.In the upper panel, we see that a priori all log-scale parameters are zero; these priors regularize the estimates and induce a "shrinkage" effect on the posterior The pink bars correspond to the prior 90% confidence interval.The true values of the simulated parameters are shown as blue crosses.The true strengths disclose the asymmetry in this directed connectivity, which we hoped would subtend substantial phase-delays.
Conditional densities over extrinsic connections
First we consider the first twelve parameters corresponding to the connections strengths, A F , A B and A L .These are shown in the lower panels, where the gray bars report the posterior mean and pink bars denote 90% posterior confidence intervals.In the middle panel of Fig. 4, one can see that the asymmetry in the GP-STN network has been detected using the complex spectra, with larger values for the forward connections than for the backward connections.However, the shrinkage priors have precluded the forward connections from attaining their true values of log(4).Interestingly, the inversion has failed to decrease the backward connections to their true value of −log(2).These posterior densities should be compared with the lower panel in Fig. 4, illustrating the equivalent densities obtained after fitting the absolute cross-spectra.In this example, the conditional estimates using the complex and modulus schemes are roughly the same.
Conditional densities over conduction delays
However, Fig. 4 reveals a greater improvement in the estimation of the delays (Δ) for the complex compared to the modulus-data scheme.These are the second set of twelve parameter estimates.In particular, we observe that the 90% confidence intervals encompass the true simulated values in ten of the twelve parameters in the complex-scheme, compared to only seven of twelve parameters in the modulus scheme.Crucially, the estimates of the delays are more uncertain for conventional modulusbased schemes than when using complex-valued data.We now look at this more closely.
The upper panel of Fig. 5 shows the posterior uncertainty (covariance) for all (127) unknown or free parameters in this DCM.
Here, we have plotted the conditional uncertainty after fitting the modulus data against the equivalent uncertainty when using complex cross-spectra.As might be anticipated, the uncertainties about estimates that are informed by the modulus only are higher than when both real and imaginary parts are used.These differences are particularly marked for the estimates of conduction delays (marked by red dots).In some instances, there has been more than a doubling of the conditional precision or certainty when using complex data.This is exactly the sort of behavior we hoped to observe and reflects the improvement afforded by generative models of complex data.The bottom panel provides the full conditional density on the conduction delay for one lateral (within pair) connection (the connection from the third source to the last).This is the parameter that showed the greatest change in conditional covariance under the two inversions (indicated by the connecting line in Fig. 4).The true value of the conduction delay was about 5 ms and falls within the posterior (shown in blue) when using complex-valued data.This is very distinct from the broader prior density shown in red.Interestingly, the posterior density obtained when using the modulus data has an intermediate value and fails to include the true value within its 90% posterior confidence interval.These results illustrate the increased accuracy and precision of posterior inferences, particularly on delay parameters, that are afforded by using complex-valued data with both real and imaginary parts.
We now turn to the implicit coherence, phase-delay and crosscorrelation functions predicted by the parameter estimates.In what follows, we will actually use the cross-covariance functions to present quantitatively, the shared variance in two signals.Furthermore, we will divide the angular phase-delay by frequency and display it in units of milliseconds.
Predicted coherence and phase-delay functions
The upper right panels in Fig. 6 show the sample (dotted lines) and modeled (solid lines) coherence among the four simulated channels.The panels below the leading diagonal show the corresponding phase-delay functions in milliseconds.The leading diagonal panels (pertaining to auto-spectra) have been omitted, because the associated phase-delay is zero and the coherence is one for all frequencies.Note that the coherence between one channel and another is the same (calculated as the angular phase-delay divided by frequency).There are two important points to be made using these results.First, there is a relatively poor correspondence between the sample and modeled coherence.This is because coherence (defined in Eq. ( 1); technically the magnitude squared coherence) is a highly nonlinear function of the original data features (the complex cross-spectra).From Eq. ( 1) we can see that the nonlinearity results from normalizing the absolute squared cross-spectra by the product of the auto-spectra from the two channels (Carter, 1987).It is possible that the large gamma coherence (~0.4 in some cases) observed from the channel data and not capitulated in our estimate result from unstable ratios at frequencies with low power in the auto-spectra.This contrasts with the modeled coherence based upon the modeled cross-spectra, which was produced by a biologically-motivated model (the DCM).The second point to note here is that the phase-delay functions are not constant over frequencies, despite the fact that the conduction delays were fixed during data generation.Like the coherences, the most interesting excursions are contained within the (beta) frequencies mediated by simulated interactions among the underlying sources.However, these are not estimates of conduction delays; as shown in the previous sections, they are lower bounds.For example, in the highlighted panel in Fig. 6 we see how the phase-delay function would suggest that this lateral (within-source pair) connection has a conduction delay of 5 ms, even though the reciprocal connections have different time-delays (5 ms for the connection from source 3 to 4, and 16 ms for the connection from source 4 to 3).This illustrates that there is no one-to-one mapping between the phase-delay and the underlying conduction delay.One can see this immediately by noting that in general the phase-delay (at any frequency) between two nodes is by definition anti-symmetric (a sign-reversal), even though there may be a greater conduction delay from one source to a second, compared with the conduction delay from the second to the first.The bottom line here is that it is extremely difficult to infer the direction of coupling or delays from phase-delay functions in the setting of reciprocal connections.In contrast, the conditional estimates of the parameters of the DCM afford an unambiguous characterization of conduction delays.We will pursue this in the next section but in the context of coupling among hidden sources, as opposed to channels.
Frequency specific indices of coupling among channels and sources
To highlight the distinction between measures of coupling in channel and source space we will focus on a forward connection (between the first and fourth regions).Fig. 7 reports on this coupling between channels (left panels) and sources (right panels).For this pair of channels (resp.sources) the first panel shows the sample and modeled covariance as a A B Fig. 5. Conditional densities and conduction delays.Panel A shows the posterior uncertainty (covariance) for all (127) unknown or free parameters in this DCM.Here, we have plotted the conditional uncertainty (after fitting the absolute cross-spectra) against the equivalent uncertainty using complex cross-spectra.As might be anticipated, the uncertainties in the estimates are (in general) reduced when fitting complex-valued data features; i.e., most dots are above the identity line.These differences are particularly marked for the estimates of conduction delays, marked by the red dots.The bottom panel (B) provides the full conditional density on the conduction delays for one connection (the connection from the third source to the last).This is the parameter that showed the greatest change in conditional covariance under the two inversions.The true value of the condition delay was about 5 ms and falls within the posterior density, using complex-valued data (blue line).This is very distinct from the (broader) prior density shown in red.Interestingly, the posterior density obtained when using the absolute data has an intermediate value but fails to include the true value within its 90% confidence interval.function of lag in milliseconds (noting that the covariance function can be recapitulated in terms of a cross-correlation measure).The second panel shows the corresponding sample and modeled coherence as a function of frequency and the third panel shows the sample and modeled phasedelay in milliseconds as a function of frequency.Finally, the fourth panel shows the conditional density over its conduction delay.In all panels, the solid blue lines represent the true values used to generate the simulated data.The thin blue lines correspond to the conditional expectation, while the gray regions correspond to 90% posterior confidence intervals.The modeled covariance, coherence and phase-delay functions are all functions of the modeled cross-spectra, which depend upon the parameters of the generative model.In other words, there is a direct mapping from any set of parameter values to a particular covariance, coherence or phase-delay function.This means that we can compute the posterior confidence intervals simply by sampling parameters from the posterior or conditional density to produce a density on these functions.We have shown the conditional densities in channel and source space side by side to emphasize some key points.
The channel space characterizations include both specific and nonspecific instrumentation or channel noise that has both white and colored components (see Eq. ( 7)).In contrast, the source-space functions are what would have been observed in the absence of channel Fig. 6.Observed and predicted coherence and phase-delays.The upper right panels show the observed (dotted lines) and predicted (solid lines) coherence among the four simulated sources.The panels on the lower left show the corresponding phase-delay functions in milliseconds.The coherence between one channel and a second is the same as the coherence between the second and the first.Conversely, the reciprocal phase-delay functions have the opposite sign.noise (and with unit gain on the LFP electrodes).This means the characterizations in channel space (left panel) are a mixture of neuronal and non-neuronal spectral features, whereas the source space results in the right panel reflect the components or coupling due only to neuronal fluctuations or innovations.Specifically, one can see that the modeled cross-covariance function in channel space is higher, tighter and estimated with a greater conditional confidence than the corresponding modeled and sample covariance function in source space.This is because the channel data contain a substantial amount of white noise that is common to all electrodes, resulting in a more peaked cross covariance function.When removed, one can see more clearly the underlying crosscovariances due to the neuronal fluctuations.These have a clear oscillatory pattern in the beta range (note the peaks around delays 50 and −50 ms) that has been shaped by the neuronal transfer functions associated with each source.Similarly, the modeled and sample coherence in channel space are much smaller than in source space.This is due to the channel-specific noise component, which disperses the phase-differences and suppresses coherence.When this effect is removed, the coherence increases markedly, particularly at higher frequencies.In terms of phase and conduction-delays it can be seen that the modeled phase-delay increases, when considering sources in relation to channels.This effect can be explained in terms of nonspecific channel noise that changes the distribution of phase-differences, so that most of its probability mass is centered at zero lag.This means the (average) phase-delay shrinks towards zero (Daffertshofer and Stam, 2007).The implication here is that the phase-delay between channels represents a lower bound on the neuronal phase-delay between sources.For example, in the range 20-30 Hz, the phase-delay between channels does not exceed 5 ms, whereas it is nearly 10 ms between sources.Crucially, this is not the conduction delay (which would be the same for all frequencies).The true conduction delay in this example was ~15 ms and was estimated to be about 20 ms.Happily, the true value fell within the 90% conditional confidence interval (note that the conduction delays are the same for sensors and sources because they are an attribute of the underlying system not its measurement).It is also important to note that one could not deduce the conduction delay from peaks in the modeled or sample cross-covariance functions (zero and the conduction delays are shown as vertical lines).Although there is a small peak in the (sample and modeled) cross-covariance function between the two channels, there is no hint of such a peak in the modeled cross-covariance between sources.This speaks to the complicated relationship between the true (conduction) delay and how it is expressed both in terms of phase-delay functions and cross-covariance (and cross-correlation) functions (for an example from cortico-muscular recordings see Williams and Baker, 2009).
Summary
In summary, we have used simulations to show that it is possible to recover the biophysical parameters of a reasonably realistic model of distributed responses from complex-valued data, summarized in terms of their sample cross-spectra.We have also seen that some parameters (especially conduction delays) are estimated more precisely when one uses complex cross-spectra, as opposed to its modulus.By identifying the system in terms of its parameters, one can derive coherence, phase-delay and other functions used in conventional measures of functional connectivity.However, it can be difficult to map back from these spectral A B Fig. 7. Indices of coupling among channels and sources.The panels on the left (A) describe the coupling between the first and fourth channels whereas the corresponding panels on the right (B) describe the same coupling between the first and fourth sources.For this pair of channels (resp.sources) the first panel shows the sample and modeled covariance as a function of lag in milliseconds.The second panel shows the corresponding coherence as a function of frequency and the third panel shows the phase-delay in milliseconds.Finally, the fourth panel shows the conditional density over the conduction delay associated with this connection.The solid blue lines represent the true (sample) values, the thinner blue lines correspond to the modeled values while the gray regions correspond to 90% confidence intervals and the vertical in the fourth panel is the true conduction delay.The covariance, coherence and phase-delay functions are all functions of the modeled cross-spectra, which depend upon the conditional means of the parameters of the generative model shown in previous figures.The functions in channel or sensor space (A) include both specific and nonspecific channel noise that has both white and colored components.In contrast, the source space functions are what would have been seen in the absence of noise (and unit gain on virtual LFP electrodes).This means the characterizations in source space are a mixture of neuronal and non-neuronal spectral features, whereas those on the right reflect the components or coupling due only to neuronal fluctuations or innovations.
characterizations to the architectures that caused them.In the next section, we consider an analysis of real data.
Analyses of real data
In this section, we apply the analysis of the previous section to real data to demonstrate the reconstruction of conditional estimates of conventional measures of coupling among hidden sources and to highlight the complex relationship between these measures and underlying conduction delays.It should be noted that we are not presenting this analysis to draw any neurobiological conclusions but just to illustrate some technical points (an analysis of these data can be found in Mallet et al., 2008a,b).These data were acquired from adult male (6-OHDA-lesioned) Sprague-Dawley rats (Charles River, Margate, UK) in accordance with the Animals (Scientific Procedures) Act, 1986 (UK).Briefly, anesthesia was induced with 4% v/v isoflurane (Isoflo™, Schering-Plough Ltd., Welwyn Garden City, UK) in O 2 , and maintained with urethane (1.3 g/kg, i.p.; ethyl carbamate, Sigma, Poole, UK), and supplemental doses of ketamine (30 mg/kg, i.p.; Ketaset™, Willows Francis, Crawley, UK) and xylazine (3 mg/kg, i.p.; Rompun™, Bayer, Germany).Extracellular recordings of LFPs in the, external GP and STN were made simultaneously using 'silicon probes' (NeuroNexus Technologies, Ann Arbor, MI).Each probe had one or two vertical arrays of recording contacts (impedance of 0.9-1.3MΩ measured at 1000 Hz; area of ~400 μm 2 ).Neuronal activity was recorded during episodes of spontaneous 'cortical activation', defined according to ECoG activity.For the present paper, we used 4 s of data (downsampled to 250 Hz) from a single rat, comprising two (arbitrary) channels from the GP and STN probes.The cross-spectra were constructed from these time series using a vector autoregressive model (with order p=8 chosen to reflect the order of the neural state-space used in DCM see Moran et al., 2008).We then treated these empirical data in exactly the same way as the simulated data, i.e., we inverted the model with the same structure and priors as above.The results of this analysis are shown in Figs. 8 to 11, using the same format as for the simulated data.
Spectral and parameter estimates
Fig. 8 shows the estimated extrinsic connections strengths and predicted data features (cross-spectra) using the real data from the four LFP channels described above.The free-energy objective function maximized during estimation (Eq.( 8)) ensures maximum accuracy under complexity constraints, where complexity is the divergence between the prior and posterior densities (Penny et al., 2004).In other words, to avoid over-fitting, the model is constrained by priors over the parameters.In the present analyses, it is noteworthy that despite these constraints, the predictions in Fig. 8 are very accurate, capturing most of the salient features in both the real and imaginary parts of these crossspectra (with the exception of frequencies above 50 Hz).The images (lower panels) show the conditional estimates of the extrinsic coupling strengths for forward, backward and lateral connections respectively (on the left, middle and right).The connection strengths and the posterior probability of exceeding their prior mean (of 32, 16 and 4 [arbitrary units] for forward, backward and lateral connections, respectively) are displayed alongside the connections in the left panel: The strongest connection was from the second pallidal source to the second subthalamic nucleus source.In terms of backward connections, the most prominent was from the first subthalamic to the second pallidal source (although both backward connections were weaker than their forward homologues).The most salient aspect of the ensuing architecture is a predominantly forward connectivity from GP to STN sources.This is consistent with the role of these connections in the A B C Fig. 8. Parameter and state estimates using empirical data.A: Schematic showing the conditional estimates of coupling strengths among the four sources analyzed.We have only shown connections with a posterior probability (in brackets) of exceeding their prior mean was about 80%.This panel uses the same format as Fig. 3. B: These panels shows the real (left) and imaginary (right) predicted and observed data features (complex cross-spectra) using the real data from the four LFP channels described in the main text.As with the examples in Fig. 3 (using simulated data) the accuracy of these predictions is extremely high and captures most of the salient features in both the real and imaginary parts of these cross-spectra (with the exception of high frequencies).C: These panels shows the conditional estimates of the extrinsic coupling strengths for forward, backwards and lateral connections respectively (in the left, middle and right panels: The numbers over each panel specify the range of the grayscale used).The strongest connection was from the second globus pallidus source to the second subthalamic nucleus source.In terms of backward connections, the most prominent was from the first subthalamic to the second globus pallidus source.The corresponding predictions of this architecture, in terms of absolute cross-spectra are shown in the next figure.
indirect pathway.The predictions of this architecture, in terms of absolute cross-spectra, are shown in Fig. 9.
Fig. 9 shows the modeled (solid lines) and sample (dotted lines) absolute cross-spectra among the four channels.The auto-spectra along the leading diagonal are gathered together on the lower left.In these data, we see a pronounced spectral peak at 20 Hz in most channels; although relatively suppressed in the cross-spectra involving the fourth channel.The corresponding modeled coherence and phase-delay functions among the underlying sources are shown in Fig. 10.This figure follows the same format as Fig. 6 but presents the modeled coherence and phase-delay functions in source space (as opposed to sensor space) having removed channel noise.The most salient feature of these results is the marked phase-delay (more than 10 ms) between the second STN source and the remaining sources.Interestingly, the greatest coherence between this source and the remaining sources is seen in the gamma range (40-60 Hz in these data), whereas beta (20 Hz) coherence appears to be restricted to exchanges between the globus pallidus and first subthalamic source.In some cases these sources appear to have coherence approaching one.Fig. 11 uses the same display format as in Fig. 7 and shows the covariance, coherence and phase-delay functions for the connection between the first globus pallidus source and the second subthalamic nucleus source.The asymmetry in this bidirectional coupling has induced a profound asymmetry in the modeled crosscovariance function, with greater covariances at lags up to about 30 ms.Fig. 9. Predicted and observed cross-spectra for the empirical data.This figure shows the predicted (solid lines) and observed (dotted lines) absolute cross-spectra among the four empirical channels analyzed in the illustrative analyses.The auto-spectra occupy the leading diagonal and are gathered together on the lower left.In these data, we see a pronounced spectral density at 20 Hz, in most channels; although relatively suppressed in the cross-spectra involving the fourth channel.Again, these results show the high degree of accuracy seen in Fig. 8.The corresponding coherence and phase-delay functions are shown in the next figure.
The modeled phase-delay function peaks at around 12 ms and is upperbounded by the conditional estimate of the (forward) conduction delay (just above 15 ms).From a linear systems perspective, the coupling here appears to be mediated by gamma coherence (upper right panel).This is consistent with the (asymmetrical) peaks of the cross-covariance function, where the first peak (for positive lag) occurs around 25 ms: this lag is not inconsistent with the high coherence at 40 Hz shown on the upper right.However, it would be a mistake to interpret these results as showing that signals from the GP to the STN source are delayed by 25 ms.Furthermore, the differential phase-delay (of 12 ms) in the beta range and (5 ms) in the gamma range does not suggest that fast frequencies are propagated with a smaller conduction delay than slow frequencies: The conduction delay is the same for all frequencies (about 15 ms).The frequency dependency of phase-delays is a result of interactions within and between sources, modeled here in terms of linear differential equations.6 but presents the coherence and phase-delay functions following analysis of the real LFP data.Here, we have shown the coherences and phase-delays among sources, having removed channel noise.The most salient feature of these results is the marked phase-delay (more than 10 ms) between the second source in the STN and the remaining sources.Interestingly, the greatest coherence between this source and the remaining sources is seen in the high gamma range, whereas the beta coherence appears to be restricted to exchanges between the globus pallidus and the first subthalamic source.many more (or less) channels than sources.In this context, the ability to recover conditional estimates of coupling among sources (as opposed to channels) is crucial and finesses some of the issues associated with interpreting coherence among channels, e.g., volume conduction effects (Schoffelen and Gross, 2009;Stam et al., 2007;Winter et al., 2007) or correlated noise in the context of Granger causal estimates (see Valdes-Sosa et al., 2011 for a discussion).
Phase synchronization and Granger causality
In principle, any metric that has proven fruitful for connectivity analyses at the sensor level (such as phase-synchronization, transfer entropy or Granger-causal measures; e.g., Brovelli et al., 2004;Bressler et al., 2007;Dhamala et al., 2008;Lachaux et al., 1999;Rodriguez et al., 1999;Vakorin et al., 2010;Varela et al., 2001) can be derived from the conditional estimates provided by DCM.This is because conventional measures can be derived from the transfer functions that are determined uniquely by the parameters of a biophysical DCM.With the developments described in this paper, it is now possible to reproduce conventional metrics of coupling by replacing the conventional modelfree (sample) estimator with a model-based (conditional) estimator.Crucially, this can be done in either sensor or source space.
Phase-synchronization is usually used to quantify the amount of nonlinear coupling between channels (e.g., Rosenblum et al., 1996;Tass and Haken, 1996).The phase-synchronization index (Eq.( 2)) can be computed from the distribution of phase-differences (Eq.( 4)), which is specified by the conditional estimates of a DCM.However, the underlying DCM can be linearized (as in this paper), which provides an interesting perspective on phase-synchronization.Many people (including ourselves; Chawla et al., 2001) have tried to understand how zero-lag phasesynchronization can emerge in nonlinear coupled neuronal oscillators.However, the linear systems perspective provides a rather trivial explanation: the phase-delays induced by random fluctuations that are passed between reciprocally connected sources cancel.In fact, it is rather hard to generate non-zero lag phase synchronization unless one introduces substantial asymmetries in the coupling (see Fig. 1).Whether this is a useful perspective remains to be established, particularly in the context of DCMs that model nonlinear coupling (e.g., Chen et al., 2008).
It is hoped that these developments may harmonize DCM and conventional time-series analysis.This is meant in the sense that conventional analyses in electrophysiology can now be complemented with conditional estimates of spectral behavior that are informed by the neuronal architecture generating these behaviors.This should allow intuitions about how phase relationships and coherence arise to be tested.The marriage between conventional (linear systems) time-series analysis and DCM is evident in this work at two levels.First, we use a linearization around the fixed point of the system to enable the use of linear systems theory to generate predicted spectral responses.Secondly, the data features predicted are themselves motivated by linear systems theory.However, the estimation of these sample spectra highlights the fundamental difference between the spectral characterizations used in conventional analyses and those furnished by DCM.This difference rests upon the underlying generative model.Our sample spectral data were constructed from time series using a vector autoregressive model At this point conventional approaches would stop and report power, coherence, and other metrics of functional connectivity and interpret these quantities directly.However, from the point of view of DCM, this autoregressive process (or spectral estimates derived from Fourier or wavelet based techniques) serves as a feature selection step to provide a compact summary of the data in terms of their sample cross spectra.The desired spectral estimates are those that are conditioned upon a biologically plausible DCM, which best accounts for the sample (conventional) spectra.In short, the difference between conventional and conditional cross-spectra (in sensor space) is that the latter are constrained by a model that allows one to put formal constraints and prior beliefs into the estimation.Furthermore, there is a unique mapping between the parameters of the underlying model and the conditional spectra provided by DCM.Employing complex-valued data features, as we have shown, becomes especially important when trying to establish spectral asymmetries in reciprocal connections (e.g. between forward and backward message-passing in the brain) and associating these asymmetries with the laminar specificity of forward and backward connections.To address these sorts of issues it will be necessary to examine conditional coherence between different subpopulations (i.e., cortical layers), which is, in principle, possible with DCM.We will pursue this in future work using ECoG recordings in awake-behaving monkeys (Rubehn et al., 2009).
Conclusion
Perhaps the simplest and most important point made by the analyses in this paper is that conventional characterizations of coupling among observed channel data are basically the starting point for Dynamic Causal Modeling.In other words, we are interested in establishing how particular data features like coherence and phase-delay are generated biophysically.Once one has an explicit mapping between the underlying biophysical parameters of a generative model and the predicted behavior in terms of cross-spectral density (and associated functions) the rather complicated relationship between conduction strengths and delays and how they manifest in terms of coherence and cross-correlation functions becomes more evident.In this sense, Dynamic Causal Modeling of observed cross-spectra may allow one to further qualify and understand the subtleties of conventional summaries.Perhaps one of the most important (and unforeseen) aspects of the analyses presented here was how channel noise can influence sample covariance and coherence functions in such a qualitative fashion.One of the key advantages of having a generative model is that one can partition observed coherence into those parts that are mediated neuronally and those parts which are not.This may represent one step towards a more quantitative assessment of coherence and phase-delays and how they relate to asymmetries in the strength and conduction delays of underlying neuronal connections.
Software note
All the inversion schemes and DCM analyses described in this paper can be implemented using Matlab routines that are available as part of our academic freeware from http://www.fil.ion.ucl.ac.uk/spm/ software/spm8/.
Fig. 3 (Fig. 3 .
Fig. 3 (right panels) shows the simulated sample cross spectra, in terms of their real and imaginary parts (upper panels) and the
Fig. 4 .
Fig. 4. True and predicted parameters of the DCM.The upper panel (A) shows the true and prior values of key coupling parameters in the DCM of the previous figure, while the lower panels show the posterior or conditional densities after fitting the model to complex (B) and absolute (C) cross-spectra from four regions.Only the extrinsic (forward, backward and lateral) connection strengths (first twelve) and associated conduction delays (second twelve) are shown.These were the key model parameters that define the network architecture.The blue crosses are the true values and the pink bars correspond to 90% confidence intervals (prior confidence in A and posterior confidence in B and C).The conditional means are depicted as gray bars.These are the expected log-scale parameters that scale the connection strengths and delays.The true values disclose the asymmetry in this directed connectivity, which we hoped would reveal substantive phase-delays.A priori, the connection strengths and delays have logscaling parameters of zero (i.e., are equal to their prior mean).The curved line highlights a conduction delay that is the focus of the next figure.
Fig. 10 .
Fig.10.Coherence and phase-delay functions for empirical data.This figure uses the same format as Fig.6but presents the coherence and phase-delay functions following analysis of the real LFP data.Here, we have shown the coherences and phase-delays among sources, having removed channel noise.The most salient feature of these results is the marked phase-delay (more than 10 ms) between the second source in the STN and the remaining sources.Interestingly, the greatest coherence between this source and the remaining sources is seen in the high gamma range, whereas the beta coherence appears to be restricted to exchanges between the globus pallidus and the first subthalamic source. | 15,106 | 2012-01-02T00:00:00.000 | [
"Computer Science",
"Physics"
] |
Measurement of Serum Low Density Lipoprotein Cholesterol and Triglyceride-Rich Remnant Cholesterol as Independent Predictors of Atherosclerotic Cardiovascular Disease: Possibilities and Limitations
The serum low density lipoprotein cholesterol (LDL-C) concentration is the dominant clinical parameter to judge a patient’s risk of developing cardiovascular disease (CVD). Recent evidence supports the theory that cholesterol in serum triglyceride-rich lipoproteins (TRLs) contributes significantly to the atherogenic risk, independent of LDL-C. Therefore, combined analysis of both targets and adequate treatment may improve prevention of CVD. The validity of TRL-C calculation is solely dependent on the accuracy of the LDL-C measurement. Direct measurement of serum LDL- C is more accurate than established estimation procedures based upon Friedewald, Martin–Hopkins, or Sampson equations. TRL-C can be easily calculated as total C minus high density lipoprotein C (HDL-C) minus LDL-C. Enhanced serum LDL-C or TRL-C concentrations require different therapeutic approaches to lower the atherogenic lipoprotein C. This review describes the different atherogenic lipoproteins and their possible analytical properties and limitations.
Introduction
Increased serum total cholesterol (TC) is associated with an increased risk of developing atherosclerotic cardiovascular disease (ASCVD) [1][2][3]. High serum low density lipoprotein C (LDL-C) is generally considered as the predominant cause of ASCVD progression [4][5][6]. For decades, serum LDL-C has been the main target to be lowered with statins, either alone or in combination with ezetimibe [7]. Recently, bempedoic acid has been introduced as a possible replacement for statins when these cannot be tolerated by the patient [8]. In a considerable number of patients, LDL-C lowering targets are not reached [9][10][11]. Additional LDL-C lowering therapies have been developed, such as inhibition of proprotein convertase subtilisin/kexin type 9 (PCSK9), which intercellularly degrades the LDL-receptor (LDLR) [12,13]. LDLR is the carrier protein that enables LDL to enter the cell. LDL is formed by very low density lipoprotein (VLDL) through intermediate density lipoprotein (IDL) after progressive removal of triglyceride (TG) by lipoprotein lipase (LPL) and hepatic lipase (HL) [14] (Figure 1). VLDL, IDL, and LDL differ in particle size and density [15].
It is important to realize that VLDL, IDL, and LDL are density classes only composed of differently sized particles. Since VLDL secretion and LPL activity may vary over time, a large variety of lipoprotein particles with a range of TG and C contents is simultaneously present in serum [13,[16][17][18]. These particles, which are larger and more TG-rich than LDL, are called TG-rich lipoproteins (TRLs). Additionally, highly TG-rich chylomicrons (CM) are continuously released from the intestine two and four hours after a fat-rich meal [19][20][21][22]. CMs are gradually converted to chylomicron remnants (CMR) via loss of TG by LPL. The sum of C in VLDL derived TRLs and CMs plus CMRs may be called total TRL-C. Recently, the measurement of remnant cholesterol (remnant C) has attracted considerable attention and its abundance has been proven to be associated with the development of various types of atherosclerotic events independent of LDL-C [23]. However, the definition of remnant-C and the determination of serum remnant-C concentration are subjects of discussion. According to the generally applied calculation, remnant C equals TRL-C. TRLs contain lipoprotein particles in between VLDL and LDL including IDL in combination with CMR [24]. All of these remnants and LDL are supposedly atherogenic. Graduation in atherogenicity cannot be clearly defined, despite the fact that small, highly dense LDL particles ("small-dense LDL") are more atherogenic than larger ones [25]. In addition, while no clear general description of atherogenic lipoproteins can be provided, the presence of apolipoprotein B (ApoB) as carrier protein is at least characteristic. ApoB is represented as ApoB100 in VLDL-derived particles and ApoB48 for chylomicron-derived particles. This separates atherogenic particles from high density lipoprotein (HDL) which carries apolipoprotein A1 (ApoA1) as the unifying apolipoprotein. Additionally, the cholesterol ester (CE) content adds further to the atherogenicity [26][27][28]. Different risk indicators have been introduced to predict atherosclerotic risk. Apart from clinical indicators, such as obesity, smoking and/or diabetes, these include the serum concentrations of LDL-C, VLDL-remnant C (TRL-C), non-HDL-C and ApoB. ApoB has been introduced as a risk indicator based on the knowledge that CMs, CMRs, TRLs, and LDLs contain one ApoB molecule per particle and that the number of particles may be more conclusive than the concentration of lipoprotein C [29,30]. To date, clinicians tend to rely on LDL-C as the best marker for pro-atherogenic lipoproteins and HDL-C as the marker for anti-atherogenic lipoproteins [31][32][33]. It should be realized that HDL particles exchange CE with ApoBcontaining particles in exchange for TG [34] mediated by cholesterol ester transfer protein (CETP). Reduction of CETP activity is considered a potential target for increased reversed cholesterol transport [35]. Thus, depending on CETP activity the TG and CE proportions in HDL and ApoB lipoproteins may vary. HDL mainly delivers phospholipids and CE to the liver, whereas ApoB remnants and LDL are taken up to some extent by extrahepatic tissues, but predominantly by the liver via the LDL-receptor (LDLR) and the LDL-receptor related protein (LRP) [36]. One particular lipoprotein is the lipoprotein (a) (Lp(a)). It represents the densest ApoB-100-containing particle with a density higher than LDL. The measured LDL-C contains C originating from Lp(a). The lipoprotein lipid metabolism is presented in Figure 2.
VLDL is formed in the liver and transports TG and CE into the blood. It is gradually converted into LDL via intermediate formation of IDL. VLDL remnants and IDL may partly return to the liver before being converted to LDL. LDL is extracted into extrahepatic cells, but predominantly into the liver. CMs are produced in the enterocyte, transporting TG and CE from absorbed fatty acids from the diet and FC from the diet and from bile. They are converted to CMRs by the action of LPL and then delivered to the liver. The TG content of VLDL is provided by TG derived from CMR, fatty acids (FA) synthesized in the liver and FA taken up from blood (Figures 2 and 3). The hepatic C pool is composed of C derived from extracted HDL, LDL, VLDL remnants, IDL and CMR as well as from synthesized C. Hepatic TG is secreted into the blood in VLDL particles. Hepatic C is secreted in VLDL as CE, secreted into bile as FC, and as bile acids. The distribution of C divided over these three fluxes is largely unknown. HDL-C appears to be dominantly secreted into bile [37,38] and converted to bile acids [39]. The flux distribution may be dependent on the hepatic C concentration. In this review we critically evaluate the proposed predictive markers according to the characteristics of various lipoproteins, their metabolism, and their analysis or calculation procedures.
Atherogenic Lipoprotein C Concentrations as Indicators of Enhanced Risk for Atherosclerosis Development
The pro-atherogenic character of lipoproteins is not yet fully understood. In principle, HDL particles are considered anti-atherogenic. Accepted atherogenic characteristics are the presence of ApoBs (ApoB100 and ApoB48), reduced size and increased density beyond undefined limits, and a high load of CE. To actually cause atherosclerosis, the turnover of lipoproteins must be delayed. This increases the exposure of the arterial wall to the toxic lipoproteins, thus enhancing the atherosclerotic process. A delay of turnover may be caused by reduced activity of LPL, HL, and/or LDLR. During the day, the presence of intestinal-derived CMs and CMRs is maximized with fat-rich meals. Therefore, lipoprotein analysis in fasting blood will not represent the daily exposure to atherogenic lipoproteins. The time to transport CMs, convert CM to CMR, and transport CMR to the liver affect the time-dependent contribution of CM-derived lipoproteins in the postprandial phase. Thus, the residence time of TRLs in plasma determines an individual's atherosclerotic risk. Patients with obesity, diabetes mellitus, kidney disease, and/or familial history of cardiovascular disease have an increased risk of developing ASCVD. This risk must be conveyed to the patient and followed by treatments for risk reduction. After cardiovascular events have been treated, the residual risk for repeated events must be considered. Continuous treatment is required to reduce any residual risk. The degree of atherogenicity is associated with the level of circulating C-rich lipoproteins (LDLs, remnants, Lp(a)). Initially, serum TC was used as a predictive marker and serum C lowering therapies were developed as the treatment of choice. Thereafter, the target became the lowering of serum LDL-C concentration. In the last decade, initiatives were undertaken to extend the focus to the other potentially atherogenic lipoprotein C, by applying non-HDL-C as the predictive marker [40]. Non-HDL-C is calculated as TC minus HDL-C and contains C present in all potentially atherogenic lipoproteins, including LDLs, remnants and Lp(a). Additional evidence identified the C content of the TRLs, i.e., all ApoB containing lipoproteins, excluding LDL-C, were an additional risk marker in patients with normal LDL-C levels or those with a sufficiently reduced LDL-C level following C-lowering treatment [41]. A third suggestion has been to measure total ApoB, i.e., ApoB-100 plus ApoB-48 as a risk marker [42,43]. Lp (a) is an independent risk marker in specific patients and should always be measured during the original diagnosis. Next, the validity of various markers is discussed.
LDL-C
For many years, a high serum LDL-C concentration has been considered the central factor associated with an increased risk for development of atherosclerosis and cardiovas-cular events [1,3,5]. The definition of LDL-C is the lipoprotein consisting of Apo B-100 as the associated apolipoprotein (20% of total content), with a density of 1.019 to 1.063 g/mL and a diameter of 20 to 25 nm. The lipid core contains 12% TG and 59% cholesteryl esters. Small LDL particles are more atherogenic than larger ones [44]. The LDL-C concentration is determined in every hospital all over the world. For the most accurate analysis of LDL-C, serum must be treated with ultracentrifugation [45,46] in order to isolate the LDL density fraction for C analysis. This is the official reference method, which is considered optimally selective. A second approach to separate lipoproteins is via electrophoresis [47] and a third approach is via nuclear magnetic resonance (NMR) [48]. In addition, liquid mass spectrometry methods are now being developed [49]. However, for routine clinical laboratories, these techniques are time consuming, laborious, and expensive. In 1972, Friedewald established a simple estimation procedure [50]. In fasting serum, total C is predominantly composed of HDL-C, LDL-C, and VLDL-C (plus IDL-C). From measurement of VLDL-C and VLDL-TG after isolation with ultracentrifugation, Friedewald found that the mean TG/C ratio in fasting serum was 5.0 and considered this number to reflect the ratio in the healthy population. Thus, he expressed the LDL-C calculation as: LDL-C = TC minus HDL-C minus TG/5. Obviously, the Friedewald formula means that the calculated LDL-C strongly depends on the TG concentration. TG is bound to the abovementioned lipoproteins and is not limited to VLDL. VLDL contains about 55% TG. CMs carry 88% TG, which decreases during conversion to CMR. The TG content of lipoproteins is dependent on VLDL and chylomicron production and release as well as LPL activity. Furthermore, CE is transferred from HDL into LDL in exchange for TG. From VLDL, CM, and CMR, low LPL activity leads to high TRLs. It has been observed that the Friedewald equation starts to lose accuracy when the LDL-C concentration is low (<1.8 mmol/L, <70 mg/dL) and the TG concentration is high (>1.69 mmol/L, >150 mg/dL). Generally, it is acceptable to use the Friedewald equation when LDL-C > 1.4 mmol/L (>54 mg/dL) and the TG concentration < 4.5 mmol/L (<400 mg/dL) and Lp(a) is within the reference range [51]. This makes the equation unsuitable in patients with hypertriglyceridemia and mixed hyperlipidemia and increased Lp(a) levels. Furthermore, recent literature advocates lipoprotein profiling to be performed in the postprandial phase, when TG is at higher levels and CMs and CMRs are present at variable higher concentrations [52][53][54][55]. Recent evidence has been provided indicating that production and release of chylomicron particles are slow processes. Fat is temporarily stored as liquid droplets in the intestinal cells [56]. The supply of chylomicrons and as a consequence chylomicron remnants are spread out over time. This way, the body is protected against an excessive load of fat after meal consumption. Depending on the dietary fat intake, the time point of the last meal, and the delay of CM secretion, the presence of CMs and CMRs in fasting serum may become relevant. Approaches have been made to improve the weakness of the Friedewald equation. The most accepted improvements are the approaches of Martin [57] and Sampson [58]. They correct the LDL-C value according to the combination of TG and HDL-C in the sample. Both approaches extend the range of TG concentrations at least up to 9 mmol/L (800 mg/dL). The Martin-Hopkins approach also provides more accuracy at low LDL-C concentrations. However, direct measurement of LDL-C is highly recommended. Many commercially available direct homogeneous LDL-C assays are on the market [59], enabling a rapid and selective analysis. These assays are based on the fact that they exclude HDL, VLDL, and CMs from the C measurement. However, CM-and VLDL remnants with reduced TG content-if present in fasting serum-may potentially interfere with the measurement. Another potentially interfering factor is LP(a). LP(a) equals LDL in size and density and may be included in the measurement of LDL-C if present.
TG Rich Lipoprotein C (TRL-C) or Remnant C
Almost three decades ago it was indicated that TG enriched lipoproteins (TRL-C) in serum correlate with the severity of coronary artery disease [60]. Serum TRL-C, also called remnant C, received much attention as an atherogenic component associated with cardiovascular events and independent of serum LDL-C. In 2022, over 2000 hits were obtained when searching the Pubmed data base for "remnant cholesterol". A small extract is shown here [61][62][63]. It has frequently been proposed that "remnant C" should be determined in the individual patient at risk and that remnant-C lowering therapies need to be established. Interestingly, according to the calculation procedure, remnant-C equals TRL-C in fasting serum. Therefore, we will continue using the term TRL. Elevated serum TRL concentrations may be caused by various factors, such as excess dietary TG intake, high secretion rates of CMs, high hepatic VLDL secretion, and most importantly, by reduced efficiency of LPL [14,64,65]. A high serum TRL concentration is most likely the result of the combination of enhanced secretion and reduced lipolysis. This will initially result in relatively large TG rich particles that may be less atherogenic. In the extreme situation of genetically caused inhibition of LPL, hyperlipidemic pancreatitis is more common than ASVD [66,67]. As indicated before, TRL-C is calculated as TC minus HDL-C minus LDL-C. When the Friedewald formula is used for the LDL-C calculation, the inaccuracy in the determination of LDL-C affects the TRL-C calculation. As a matter of fact, the equation can then be rewritten as TRL-C = TG/5. In a healthy situation, the TRL-C concentration calculated via the Friedewald equation is on average about 10% of the LDL-C concentration. The LDL-C concentration calculated by the Friedewald equation tends to underestimate LDL-C by about 10% when compared to LDL-C measurement after LDL isolation using ultracentrifugation [68]. Correcting LDL-C for a potential 10% underestimation leads to about 50% reduction in TRL-C. This suggests using direct measurement of LDL-C to calculate a reliable TRL-C concentration. Recently Varbo et al. [69] described an alternative technique to measure TRL-C independent of LDL-C and HDL-C. Using a commercial assay (Denka, TRL-C, Denka Company Limited, Tokyo, Japan), LDL and HDL are degraded and removed. Thereafter C is measured. It was found that directly measured TRL-C identified 5% more patients with increased risk of cardiovascular disease than calculated TRL-C applying the Martin-Hopkins equation. The question arises as to how the patients involved should be treated. Probably, their clinical and nutritional status must be closely studied. An obese patient with a high fat intake may be successfully treated by reduction of dietary fat intake. This may be achieved with a low fat, fiber rich diet, eventually combined with orlistat, which binds to pancreatic lipase reducing fat digestion and promotes weight loss [70][71][72]. A patient with high sugar intake may limit sugar intake and thereby potential endogenous fat synthesis. New therapies are under development such as pemafibrate [73] and the omega-3 fatty acid icosapent ethyl [74]. Decreased serum TG was observed under statin treatment and more pronounced under combination of statin with ezetimibe [75,76]. The mechanism for this serum TG reduction during LDL-C reduction therapy is unclear.
Non HDL-C and ApoB
Interestingly, discussion has focused serum LDL-C and TRL-C as determinants of increased cardiovascular risk. Apparently, a subgroup of patients develops cardiovascular disease despite a normal LDL-C concentration [1]. Other research groups promote non-HDL-C as the ultimate marker of atherogenic cardiovascular disease risk. Non-HDL-C is a calculated parameter obtained as non-HDL-C = TC − HDL-C. TC as well as HDL-C are measured directly with generally accepted methods. The difference is well defined and without discussion. It reflects LDL-C plus TRL-C. Thus, non-HDL-C contains all atherogenic components. However, large and potentially less atherogenic TRL components may be included, particularly when LPL activity is low. This may decrease the prognostic efficiency. Furthermore, at TG > 400 mg/dL, HDL-C measurement is inaccurate since TRLs are not sufficiently precipitated and thus TRL-C is partly included in the HDL-C value. Normally, LDL-C comprises the majority of non-HDL-C. However, non-HDL-C is considered a better predictor for a residual risk for cardiovascular disease than LDL-C [40]. It is also known that an undefined subgroup of ApoB-containing lipoproteins expresses the highest atherogenic action and it has been established that small dense LDL particles are more atherogenic than larger ones. Thus, the number of lipoprotein particles reflects the atherogenicity better than the lipoprotein concentration. Since each ApoB-containing lipoprotein carries only one ApoB molecule, it has been proposed to determine the total ApoB concentration as a measure of atherogenicity [43,77], also under statin treatment [78]. This uncouples atherogenicity from C. Apo-B and Apo-AI can be assayed using commercial test kits based on automated immunoturbidimetric methods (Randox, Crumlin, United Kingdom). First, Apo-B-containing particles are precipitated from serum by phosphotungstic acid-MgCl 2 . ApoA1 is measured in this fraction while ApoB in the residual fraction. For optimal differentiation, the separation of ApoB100 from ApoB48 may be considered in distinguishing between liver-derived and gut-derived ApoB containing TRL particles.
Personalized Diagnostics and Therapy
This review highlights a discrepancy between available research findings and daily clinical routine. It may take some time before the measurement of TRL-C and ApoB concentrations in serum become routine analysis in the clinical laboratory. Routine daily measurements include TC, TG, HDL-C, and LDL-C. Measurement of LDL-C via direct methodology is slowly being introduced and must be further standardized in all clinical laboratories. While LDL-C may remain a leading predictive parameter, the additionally acquired data for TRL-C should also be considered. Table 1 outlines a diagnostic and personalized treatment strategy. It must be realized that Lp(a) is included in LDL. Therefore, this has to be measured in each patient at least once in a lifetime. LDL-C lowering should consist of combined statin or bempedoic acid and ezetimibe treatment in order to obtain the maximal response. When LDL-C lowering is insufficient, PCSK9 inhibition should be added to the combination treatment.
Limitations
The proposed extended diagnosis procedure of the atherogenic lipoprotein components depends to a large extend on the quality of the LDL-C measurement. Isolation of LDL using ultracentrifugation followed by C measurement will ensure ultimate quality. However, this technique is too time consuming and complex to be incorporated into daily clinical routine. Homogeneous direct assays are now available to isolate LDL by chemical means [59,79]. However, these various commercial assays may produce different results. In addition, the validity of the assay appears high in healthy subjects and lower in patients with cardiovascular disease [80]. Furthermore, the analytical result may differ when measured in fresh serum or frozen serum. Therefore, the most reliable assay must be chosen and applied under controlled conditions. Any reasoning for applying the measurement must be well defined. When the risk of development of cardiovascular disease needs to be determined by measuring atherogenic lipoprotein C in documented patients, LDL-C measurement must be performed exactly under cardiovascular conditions. A concentration above the cut off level of the normal range is the criterion for treatment. It will suffice to apply the same assay and quality control criteria continuously. The patient should be followed over time during treatment.
Summary of Results
VLDL and CM-derived remnants including CMR, IDL, and LDL in serum are considered atherogenic. Their C concentrations in serum are documented as predictive atherogenic indicators of cardiovascular disease risk. VLDL, IDL, and LDL are the dominant lipoproteins in fasting serum. CMRs are added to postprandial serum in amounts depending on the dietary fat intake. Serum LDL-C is used as the gold standard for risk prediction and treatment is focused on lowering serum LDL-C. VLDL-C, IDL-C, and CMR-C are called TRLs. Atherosclerosis may develop in patients with low LDL-C and high TRL-C concentrations. The accuracy of LDL-C and TRL-C determinations is procedure-dependent, i.e., on direct measurement or estimation procedures. It is unknown whether all TRLs are equally atherogenic. Non-HDL-C combines TRL-C and LDL-C and thus all potentially atherogenic lipoprotein species. Non-HDL-C may be considered the best and simplest marker of lipoprotein atherogenicity. At higher serum TG concentrations (>300 mg/dL) HDL-C also includes TRL-C because those particles are not completely precipitated by the HDL-determination method. LDL-C and TRL-C concentrations in fasting serum do not usually reflect the daily exposure to atherogenic lipoproteins, which is highest in the postprandial phases. Potentially atherogenic lipoproteins all contain ApoB. The serum ApoB concentration reflects the number of atherogenic particles and thus the cardiovascular risk. Patients with elevated serum TRL-C concentrations may be detected when serum LDL-C is measured directly with sufficient accuracy. Homogeneous, direct assays are available allowing rapid analysis in a clinical routine setting. However, selection of the preferred assay must be performed carefully.
Conclusions
Epidemiologic and genetic studies have established TRL and their remnants as important contributors to ASCVD. Combinations of LDL-C, non-HDL-C, TRL-C, and ApoB concentrations must be evaluated as the utmost predictive risk marker for development of cardiovascular disease and are recommended in the current guidelines. For clinical routine, direct measurements of TGs, TC, HDL-C, and LDL-C allow semi accurate calculation of TRL-C and non HDL-C. Patients with elevated LDL-C may be treated with conventional C lowering therapies. Patients with elevated TRL-C should be detected and treated specifically. The first step of treatment is the implementation of lifestyle interventions. Second, LDL-C lowering with statins or bempedoic acid-in case of statin intolerance-with or without ezetimibe are recommended to reduce vascular risk, independent of statin-associated lowering of TRL itself. Novel and emerging data, e.g., on omega-3 fatty acids (high-dose icosapent ethyl) and new generations of selective peroxisome proliferator-activated receptor (PPAR) modulator pemafibrate may identify patients who will benefit from TRL lowering. | 5,283.6 | 2023-05-01T00:00:00.000 | [
"Biology"
] |
Convergent Synthesis of Two Fluorescent Ebselen-Coumarin Heterodimers
The organo-seleniumdrug ebselen exhibits a wide range of pharmacological effects that are predominantly due to its interference with redox systems catalyzed by seleno enzymes, e.g., glutathione peroxidase and thioredoxin reductase. Moreover, ebselen can covalently interact with thiol groups of several enzymes. According to its pleiotropic mode of action, ebselen has been investigated in clinical trials for the prevention and treatment of different ailments. Fluorescence-labeled probes containing ebselen are expected to be suitable for further biological and medicinal studies. We therefore designed and synthesized two coumarin-tagged activity-based probes bearing the ebselen warhead. The heterodimers differ by the nature of the spacer structure, for which—in the second compound—a PEG/two-amide spacer was introduced. The interaction of this probe and of ebselen with two cysteine proteases was investigated.
Introduction
Ebselen represents a lipid-soluble, selenium-containing, multifunctional drug with a broad range of pharmacological effects including both beneficial and harmful actions. The general mechanism of action is mainly based on reactions with specific thiol groups. This reactivity makes it a potent modulator for proteins that require cysteine for normal function [1,2]. Ebselen has been shown to be an efficient antioxidantin vivo. It was considered to be a relatively nontoxic compound because its selenium is not bioavailable [1][2][3][4]. However, it can act detrimentally through the depletion of glutathione [5,6]. Ebselen targets a wide variety of enzymes and modulates several biological processes. The inhibition of enzymes is based on the high reactivity of ebselen with critical protein thiol groups, leading to the reversible formation of relatively stable seleno-sulfide bonds [1,3]. However, such formation can be reversed by the addition of reducing agents [1], as has been shown, for example, for cerebral Na + , K + -ATPase and indoleamine 2,3-dioxygenase [7,8]. Ebselen exhibits anti-inflammatory effects due to its ability to directly inhibit inflammation-related enzymes [1,3,9].
The drug ebselen interferes with certain selenoenzymes, an important class of antioxidant biocatalysts that include glutathione peroxidase (GPx) and thioredoxin reductase (TrxR). Glutathione peroxidase protects biomembranes and other cellular components by using glutathione as the reducing substrate for the detoxification of a variety of hydroperoxides. Thioredoxin reductase catalyzes the reduction of thioredoxin (Trx) with NADPH as the cofactor. These transformations create a basis for a number of processes, such as defense against oxidative stress, the synthesis transformations create a basis for a number of processes, such as defense against oxidative stress, the synthesis of desoxyribonucleotides, redox regulation of gene expression, and signal transduction [5,10,11]. Ebselen, on the one hand, mimics GPx activity by catalyzing the reduction of peroxides with glutathione, and, on the other hand, has been demonstrated to be an excellent substrate for human TrxR [1,9,12]. As an antioxidant compound and a GPx mimic, ebselen appears to be a potential drug for the treatment of several disorders including diabetes-related diseases associated with reduced GPx levels such as atherosclerosis and nephropathy as well as a prospective treatment for cerebral ischemia. Hence, ebselen has been used in clinical trials for the prevention and treatment of different disorders [1,3].
In order to provide tool compounds for the ongoing scientific efforts to further characterize the biological activity of ebselen, we designed two activity-based probes containing the intact ebselen structure and a rigidified 7-amino coumarin (coumarin 343) as the fluorescent tag. Coumarins represent a widely used type of fluorophores and are characterized by their small molecular size, large Stokes shifts as well as chemical and enzymatic stability [13][14][15][16][17][18][19][20][21][22][23][24][25][26][27]. In the second heterodimer, the coumarin tag should be connected via a PEG/two-amide linker with the ebselen substructure ( Figure 1). It was the aim of this study to synthesize these probes as biochemical tools for future studies. Moreover, we investigated the interaction of both ebselen and the second probe with two model cysteine proteases, i.e. the human cathepsins B and L.
Results and Discussion
The two convergent synthetic routes start with the procedure for synthesizing ebselen (2-phenyl-1,2-benzoselenazol-3-one) [28] (Scheme 1). Anthranilic acid (1) as the starting material was converted into a diazonium salt 2, which was treated with a disodium diselenide containing solution to obtain 2,2'-diselenobisbenzoate (3). The diselenide required for this step was obtained through a hydrazine-promoted reduction of selenium. After reacting 3 with thionyl chloride and a catalytic amount of DMF, Boc-semiprotected 4-aminomethylaniline was added. The resulting intermediate 5 was deprotected with trifluoroacetic acid to give the ebselen building block 6. The fluorescent coumarin 343 (8) was synthesized by submitting 8-hydroxyjulolidine-9-carboxaldehyde to a Knoevenagel condensation [20]. This was further coupled with building block 6 by using HATU in DMF to yield the first heterodimeric ebselen-coumarin probe 9.
To incorporate a PEG linker between the ebselen and coumarin substructures, we successfully employed a synthetic strategy in which the linker was first connected with the coumarin and the product was subsequently finalized by introducing the ebselen building block (Scheme 2). To generate the linker structure 13 [20], the amino group of 2-(2-aminoethoxy)ethanol (10) was Cbz-protected with benzyl chloroformate, then alkylated at the hydroxy group with tert-butyl bromoacetate using potassium tert-butoxide as a base to give compound 12 [29]. Afterwards, the linker structure 13 was achieved by a Cbz deprotection with hydrogen and palladium/carbon [20]. The fluorophore 8 was then reacted with 13 by using HATU in dichloromethane. The resulting compound 14 was deprotected with trifluoroacetic acid and finally coupled to the ebselen building block 6 by means of HATU in DMF to assemble the second probe 15.
Results and Discussion
The two convergent synthetic routes start with the procedure for synthesizing ebselen (2-phenyl-1,2-benzoselenazol-3-one) [28] (Scheme 1). Anthranilic acid (1) as the starting material was converted into a diazonium salt 2, which was treated with a disodium diselenide containing solution to obtain 2,2'-diselenobisbenzoate (3). The diselenide required for this step was obtained through a hydrazine-promoted reduction of selenium. After reacting 3 with thionyl chloride and a catalytic amount of DMF, Boc-semiprotected 4-aminomethylaniline was added. The resulting intermediate 5 was deprotected with trifluoroacetic acid to give the ebselen building block 6. The fluorescent coumarin 343 (8) was synthesized by submitting 8-hydroxyjulolidine-9-carboxaldehyde to a Knoevenagel condensation [20]. This was further coupled with building block 6 by using HATU in DMF to yield the first heterodimeric ebselen-coumarin probe 9.
To incorporate a PEG linker between the ebselen and coumarin substructures, we successfully employed a synthetic strategy in which the linker was first connected with the coumarin and the product was subsequently finalized by introducing the ebselen building block (Scheme 2). To generate the linker structure 13 [20], the amino group of 2-(2-aminoethoxy)ethanol (10) was Cbz-protected with benzyl chloroformate, then alkylated at the hydroxy group with tert-butyl bromoacetate using potassium tert-butoxide as a base to give compound 12 [29]. Afterwards, the linker structure 13 was achieved by a Cbz deprotection with hydrogen and palladium/carbon [20]. The fluorophore 8 was then reacted with 13 by using HATU in dichloromethane. The resulting compound 14 was deprotected with trifluoroacetic acid and finally coupled to the ebselen building block 6 by means of HATU in DMF to assemble the second probe 15. The structures of the new compounds were confirmed by analytical data, including high-resolution mass spectra (HRMS) which gave two main signals according to the two predominant isotopes of selenium. We have recorded UV/Vis and fluorescence spectra of 9 and 15 in three different solvents. Coumarins exhibit advantageous features, i.e. a high fluorescent intensity and a large Stokes shift. Such properties were also obtained for our heterodimers, as shown in Figure 2 and Table 1. The structures of the new compounds were confirmed by analytical data, including high-resolution mass spectra (HRMS) which gave two main signals according to the two predominant isotopes of selenium. We have recorded UV/Vis and fluorescence spectra of 9 and 15 in three different solvents. Coumarins exhibit advantageous features, i.e. a high fluorescent intensity and a large Stokes shift. Such properties were also obtained for our heterodimers, as shown in Figure 2 and Table 1. The structures of the new compounds were confirmed by analytical data, including high-resolution mass spectra (HRMS) which gave two main signals according to the two predominant isotopes of selenium. We have recorded UV/Vis and fluorescence spectra of 9 and 15 in three different solvents. Coumarins exhibit advantageous features, i.e. a high fluorescent intensity and a large Stokes shift. Such properties were also obtained for our heterodimers, as shown in Figure 2 and Table 1. Wavelengths of absorption were used for excitation. The heterodimer 9 was measured similarly (see Table 1). All fluorescent measurements were carried out with a PMT value of 200 V. Ebselen was reported to react with its targets through the formation of covalent seleno-sulfide bonds. This covalent mode of interaction provides an impetus for the development of activity-based probes. Recently, an ebselen-cyanine probe was reported to be utilized for the real-time imaging of the cellular redox status changes [30]. In this study, we have coupled the ebselen warhead with coumarin 343. The fluorophor coumarin 343 is valued for its bathochromic shift of absorption and emission and its high fluorescence quantum yield, even in aqueous medium [31]. These desirable properties arise from its rigidified tetracyclic structure. Compared to less rigid 7-donor substituted coumarins, the rotation of the amino group in coumarin 343 is constrained and the nitrogen lone pair can maximally interact with the aromatic system [32,33]. Accordingly, coumarin 343 was incorporated as a fluorescence label in various activity-based probes [13,18,20].
Besides the direct connection of the ebselen structure with that of coumarin 343, realized in compound 9, we have designed the second probe 15 comprising a PEG/two-amide spacer. The incorporation of this flexible spacer is thought to facilitate the interaction of the ebselen part with putative targets by preventing a possible steric hindrance caused by the coumarin part of the heterodimer. Moreover, 15 was unsurprisingly found to be more soluble than 9 in different organic solvents. Compounds 9 and 15 expand the portfolio of ebselen homo-and heterodimers [2,4,12,30,34,35]. Our ebselen containing fluorescence-labeled probes are expected to be suitable pharmacological tools to continuously elucidate the biological activity of ebselen.
The known reactivity of ebselen with cysteine residues of several proteins [1,7,8,[38][39][40] prompted us to investigate its interaction with two model cysteine proteases, the human cathepsins B and L. An inactivation of these enzymes through a seleno-sulfide bond formation would provide a possible starting point for the development of probes for cysteine proteases. Ebselen and probe 15 were added at a final concentration of 10 μM to the cathepsin activity assays [41,42]. Duplicate experiments revealed no inhibition of the proteolytic activity by using peptidic chromogenic substrates. However, the covalent interaction of ebselen and assumedly of 15 with critical protein thiol groups can be reversed by the addition of reducing compounds, such as dithiothreitol (DTT) [1,8,38,39]. In our usual cathepsin measurements, DTT is applied for the activation of the cathepsins, were used for excitation. The heterodimer 9 was measured similarly (see Table 1). All fluorescent measurements were carried out with a PMT value of 200 V. Ebselen was reported to react with its targets through the formation of covalent seleno-sulfide bonds. This covalent mode of interaction provides an impetus for the development of activity-based probes. Recently, an ebselen-cyanine probe was reported to be utilized for the real-time imaging of the cellular redox status changes [30]. In this study, we have coupled the ebselen warhead with coumarin 343. The fluorophor coumarin 343 is valued for its bathochromic shift of absorption and emission and its high fluorescence quantum yield, even in aqueous medium [31]. These desirable properties arise from its rigidified tetracyclic structure. Compared to less rigid 7-donor substituted coumarins, the rotation of the amino group in coumarin 343 is constrained and the nitrogen lone pair can maximally interact with the aromatic system [32,33]. Accordingly, coumarin 343 was incorporated as a fluorescence label in various activity-based probes [13,18,20].
Besides the direct connection of the ebselen structure with that of coumarin 343, realized in compound 9, we have designed the second probe 15 comprising a PEG/two-amide spacer. The incorporation of this flexible spacer is thought to facilitate the interaction of the ebselen part with putative targets by preventing a possible steric hindrance caused by the coumarin part of the heterodimer. Moreover, 15 was unsurprisingly found to be more soluble than 9 in different organic solvents. Compounds 9 and 15 expand the portfolio of ebselen homo-and heterodimers [2,4,12,30,34,35]. Our ebselen containing fluorescence-labeled probes are expected to be suitable pharmacological tools to continuously elucidate the biological activity of ebselen.
The known reactivity of ebselen with cysteine residues of several proteins [1,7,8,[36][37][38] prompted us to investigate its interaction with two model cysteine proteases, the human cathepsins B and L. An inactivation of these enzymes through a seleno-sulfide bond formation would provide a possible starting point for the development of probes for cysteine proteases. Ebselen and probe 15 were added at a final concentration of 10 µM to the cathepsin activity assays [39,40]. Duplicate experiments revealed no inhibition of the proteolytic activity by using peptidic chromogenic substrates. However, the covalent interaction of ebselen and assumedly of 15 with critical protein thiol groups can be reversed by the addition of reducing compounds, such as dithiothreitol (DTT) [1,8,36,37]. In our usual cathepsin measurements, DTT is applied for the activation of the cathepsins, leading to rather high final DTT concentrations of 100 µM (cathepsin B) and 200 µM (cathepsin L) in the assays. Hence, the strong excess of DTT might prevent enzyme inhibition by the two compounds. We have modified the assays as follows: (i) the final DTT concentration was reduced to 3.5 µM; (ii) the final DMSO concentration was changed from 2% to 5%; (iii) the enzymes were preincubated for 10 min with the substrate prior to the addition of the test compounds; and (iv) the final enzyme concentration was increased 3.5-fold (cathepsin B) and 1.75-fold (cathepsin L). Under such conditions, the enzymes could be assayed appropriately with the decreased amount of DTT. Ebselen and probe 15 were investigated at final concentrations of 1 µM and 0.2 µM in duplicate measurements. Unexpectedly, the proteolytic activity was stimulated, in particular that of cathepsin B, when each of the two compounds was added. Obviously, the test compounds were able to interfere with the redox equilibrium between the protein and DTT. It could therefore be concluded that probe 15 is not suitable for labeling cysteine cathepsins. Nevertheless, the fluorescent ebselen derivatives might be helpful to identify proteins which are targeted by this drug and might therefore be valuable in future biological studies.
General Methods and Materials
Thin-layer chromatography was carried out on Merck aluminum sheets, silica gel 60 F254. Detection was performed with UV light at 254 nm. Preparative column chromatography was performed on Merck silica gel 60 (70-230 mesh). Melting points were determined on a Büchi 510 oil bath apparatus (Büchi, Essen, Germany). Mass spectra were recorded on an API 2000 mass spectrometer (electron spray ion source, Applied Biosystems, Darmstadt, Germany) coupled with an Agilent 1100 HPLC system (Agilent Technologies, Santa Clara, CA, USA) using a Phenomenex Luna HPLC C 18 column (50ˆ2.00 mm, particle size 3 µm).
Syntheses
2,2'-Diselenobisbenzoic acid (3). A disodium diselenide solution was prepared by the reaction of selenium powder (5.9 g, 75 mmol), 100% hydrazine hydrate (0.98 g, 0.95 mL, 20 mmol) and sodium hydroxide (9 g, 22.5 mmol) in MeOH (150 mL) carried out at rt for 24 h. Meanwhile, to a stirred solution of anthranilic acid (1, 10.3 g, 75 mmol) in 3N HCl (60 mL) cooled with an ice/salt bath, a solution of sodium nitrite (5.7 g, 83 mmol) in water (15 mL) was added dropwise, and the temperature was maintained below 5˝C. After stirring for additional 15 min, this solution was added dropwise to the stirred solution containing disodium diselenide cooled in an ice/salt bath below´5˝C. The temperature was kept for 2.5 h and then increased to rt for another 20 h. The reaction mixture was filtrated and the filtrate was acidified to pH 3 by adding 3N HCl (300 mL). After 30 min, the obtained precipitate was filtered off, washed with hot water (2 L), dried at 90˝C for 4 h, then on air for another 24 h, and recrystallized from 1,4-dioxane resulting in the pure product 3 (2.7 g, 18%); m.p. > 250˝C, lit. [41] tert-Butyl 4-(3-oxobenzo[d] [1,2]selenazol-2(3H)-yl)benzylcarbamate (5) [34]. Compound 3 (8.0 g, 20 mmol) was added to thionyl chloride (40 mL) and DMF (1 mL), and the reaction mixture was refluxed at 85˝C for 3 h. The solvent was then evaporated, and the obtained product was recrystallized from n-hexane resulting in pale yellow prisms of 2-(chloroseleno)benzoyl chloride 4 (4.65 g, 46%); m.p. 64-66˝C, lit. [4] mp 64-66˝C, which was used without further characterization. In the next step, a solution of compound 4 (2.54 g, 10 mmol) in dry CH 2 Cl 2 (16 mL) was added dropwise to a stirred solution Ebselen-coumarin Heterodimer (15). Compound 14 (292 mg, 0.6 mmol) was dissolved in dry CH 2 Cl 2 (30 5 mL) and trifluoroacetic acid (6 mL) was added. The resulting reaction mixture was stirred at rt for 2 h. The solvent was then evaporated, and the residue was diluted with CH 2 Cl 2 (4ˆ10 mL) and evaporated to remove the excess of trifluoroacetic acid. The crude product was dissolved in dry DMF (10 mL), and HATU (255 mg, 0.67 mmol) and DIPEA (0.70 mL, 517 mg, 4 mmol) were added. After stirring at room temperature for 30 min, compound 6 (334 mg, 0.80 mmol) was added and the reaction mixture was stirred at rt for 48 h. The solvent was then evaporated and the resulting residue was redissolved in DMF (3 mL). To this solution silica gel was added and the solvent was again evaporated. The crude product attached to silica gel was added to a column and purified using CH 2 Cl 2 /MeOH (9:1) as eluent. The product-containing fractions were combined and evaporated to dryness. MeOH (
Modified Enzymatic Assays
Cathepsin B. Human isolated cathepsin B (Calbiochem, Darmstadt, Germany) was assayed spectrophotometrically (Cary 50 Bio, Varian, Agilent Technologies, Santa Clara, CA, USA) at 405 nm and at 37˝C. Assay buffer was 100 mM sodium phosphate buffer pH 6.0, 100 mM NaCl, 5 mM EDTA, 0.01% Brij 35. To 2 µL an enzyme stock solution of 1.81 mg/mL in 20 mM sodium acetate buffer pH 5.0 and 1 mM EDTA, a volume of 9.98 µL assay buffer containing 5 mM DTT was added. Then, 988.02 µL of assay buffer was added. This enzyme solution was incubated for 30 min at 37˝C. Stock solutions (10 mM) of ebselen and 15 were prepared in DMSO. A 100 mM stock solution of the chromogenic substrate Z-Arg-Arg-pNA was prepared with DMSO. The final concentration of DMSO was 5%, the final concentration of the substrate was 500 µM, and the final DTT concentration was 3.5 µM. Assays were performed with a final concentration of 253 ng/mL of cathepsin B. Into a cuvette containing 176 µL assay buffer, 7 µL DMSO, 1 µL of the substrate solution and 14 µL of the enzyme solution were added, thoroughly mixed, and incubated for 10 min at 37˝C. The reaction was initiated by adding 2 µL of DMSO or inhibitor solution and followed over 30 min.
Cathepsin L. Human isolated cathepsin L (Enzo Life Sciences, Lörrach, Germany) was assayed spectrophotometrically (Cary 50 Bio, Varian) at 405 nm and at 37˝C. Assay buffer was 100 mM sodium phosphate buffer pH 6.0, 100 mM NaCl, 5 mM EDTA, and 0.01% Brij 35. To 10 µL of an enzyme stock solution of 135 µg/mL in 20 mM malonate buffer pH 5.5, 400 mM NaCl, and 1 mM EDTA, a volume of 10 µL assay buffer containing 5 mM DTT was added. Then, 980 µL of assay buffer was added. This enzyme solution was incubated for 30 min at 37˝C. Stock solutions (10 mM) of ebselen and 15 were prepared in DMSO. A 10 mM stock solution of the chromogenic substrate Z-Phe-Arg-pNA was prepared with DMSO. The final concentration of DMSO was 5%, the final concentration of the substrate was 100 µM, and the final DTT concentration was 3.5 µM. Assays were performed with a final concentration of 94.5 ng/mL of cathepsin L. Into a cuvette containing 176 µL assay buffer, 6 µL DMSO, 2 µL of the substrate solution and 14 µL of the enzyme solution were added, thoroughly mixed, and incubated for 10 min at 37˝C. The reaction was initiated by adding 2 µL of DMSO or inhibitor solution and followed over 30 min.
Conflicts of Interest:
The authors declare no conflict of interest. | 4,783.8 | 2016-07-08T00:00:00.000 | [
"Chemistry"
] |
Magnetic and Electronic Properties of Spin-Orbit Coupled Dirac Electrons on a $(001)$ Thin Film of Double Perovskite Sr$_2$FeMoO$_6$
We present an interacting model for the electronic and magnetic behavior of a strained $(001)$ atomic layer of Sr$_2$FeMoO$_6,$ which shows room-temperature ferrimagnetism and magnetoresistance with potential spintronics application in the bulk. We find that the strong spin-orbit coupling in the molybdenum 4$d$ shell gives rise to a robust ferrimagnetic state with an emergent spin-polarized electronic structure consisting of flat bands and four massive or massless Dirac dispersions. Based on the spin-wave theory, we demonstrate that the magnetic order remains intact for a wide range of doping, leading to the possibility of exploring flat band physics, such as Wigner crystallization in electron-doped Sr$_{2-x}$La$_{x}$FeMoO$_6.$
We present an interacting model for the electronic and magnetic behavior of a strained (001) atomic layer of Sr2FeMoO6, which shows room-temperature ferrimagnetism and magnetoresistance with potential spintronics application in the bulk. We find that the strong spin-orbit coupling in the molybdenum 4d shell gives rise to a robust ferrimagnetic state with an emergent spin-polarized electronic structure consisting of flat bands and four massive or massless Dirac dispersions. Based on the spin-wave theory, we demonstrate that the magnetic order remains intact for a wide range of doping, leading to the possibility of exploring flat band physics, such as Wigner crystallization in electron-doped Sr2−xLaxFeMoO6.
PhySH: Monolayer films, Ferrimagnetism, Half-metals The coexistence of a strong spin-orbit coupling (SOC) and low dimensionality gives rise to novel quantum phases of matter [1]. Two-dimensional (2D) systems confined in the atomically thin films can possess rich electronic properties different from the bulk and could host various new correlated phenomena. Especially, the rapid progress in synthesizing atomic-scale slabs, superlattices and heterostructures of correlated transition metal oxides by pulsed laser deposition or molecular beam epitaxy has motivated the exploration of various (perovskite) compounds epitaxially grown on different cubic substrates as potential nano-scale devices [1][2][3][4]. An advantage of the epitaxial growth is that by changing the substrate, one can introduce strain to thin films due to a mismatch of the lattice constants and thereby control the electronic state, which we call strain engineering [5]. By replacing 3d transition metal ions with heavier 4d or 5d ions, one can even control the strength of the SOC. These flexibilities of epitaxially grown atomic-scale layers could pave a way to search for unusual spin-orbit coupled correlated phenomena in 2D systems. In this context, theoretical exploration of possible phases can provide a useful guidance.
In order to explore such collective phenomena, perovskite oxide is one of the best-established platforms [1]. For instance, LaAlO 3 /SrTiO 3 (LAO/STO) interface is known as a good 2D electron gas system [6] and hosts various electronic phenomena, including 2D superconductivity [7] and Rashba SOC effects [8]. Although there have appeared new types of atomic-scale 2D systems like the transition metal dichalcogenides [9], where we can expect a stronger effect of SOC than in graphene [10], perovskite systems are still important playgrounds because the knowledge on their synthesis and chemical properties has been accumulated for a long time.
Specifically, the bulk double perovskite compounds, such as Sr 2 FeMoO 6 (SFMO) and A 2 FeMoO 6 (A = Ca, Ba, Pb), have been investigated intensely as examples of half-metallic ferrimagnets (FiM) with enhanced magnetoresistance at room temperature and as possible spintronics devices based on the high spin polarization of charge carriers [11][12][13][14]. Theoretical studies of the carrier induced FiM in cubic SFMO have been previously discussed within the ab initio [15][16][17][18] and model Hamiltonian approaches [19][20][21] without including the SOC. Since double perovskite compounds have twosublattice structure, the synthesis of high-quality SFMO thin films with completely staggered Fe/Mo sublattices is experimentally challenging. Here, motivated by a recent fabrication of well-ordered thin films of double perovskite SFMO epitaxially grown along the (001) direction on various perovskite substrates that showed a ferrimagnetic ground state [22], we theoretically explore the combined effects of the strong SOC, tetragonal elongation, and carrier doping in a (001) layer of SFMO. For example, a typical perovskite substrate STO has a slightly shorter lattice constant (∼ −1.1%) than SFMO, so STO/SFMO heterostructures [23] would be ideal systems to investigate such effects.
In the insulating compounds such as Ba 2 YMoO 6 , the SOC locally stabilizes the j = 3/2 quartet of Mo 5+ and triggers rich multiorbital physics [24][25][26], unlike in the insulating iridates, where the orbital shape of the lowest energy j = 1/2 state is fixed, and the SOC manifests itself in the anisotropic exchange interactions [27,28]. On the other hand, in SFMO the molybdenum 4d electrons are itinerant, forming a conduction band. In this case, a strong impact of the SOC on the band structure is expected. Indeed, it has recently been proposed that the SOC may stabilize a Chern insulator phase in the (001) and (111) monolayers of double perovskites [29,30] and lead to a topologically nontrivial band structure in BaTiO 3 /Ba 2 FeReO 6 /BaTiO 3 2D quantum wells [31]. It should be noted that the tight-binding model for this system with the SOC and tetragonal compression has been investigated [30] in the free fermion level. However, the effect of carrier doping on the magnetism in the presence of the SOC is still illusive.
In this Letter, we study a (001) layer of pure and doped SFMO based on a minimal microscopic model within a large-S expansion. We find that the strong SOC gives rise to a robust nontrivial magnetic state with an electronic structure consisting of four spin-polarized massive or massless Dirac dispersions as well as flat bands. Based on the spin-wave theory, we demonstrate that such an unusual magnetic state is stable in a large, experimentally relevant range of carrier doping. In the electron-doped Sr 2−x La x FeMoO 6 , we suggest that the extra electrons would occupy a fully polarized flat band.
The model.-In the (001) layer of double perovskite SFMO, Fe and Mo ions form a checkerboard pattern on a square lattice [see Fig. 1(a)], and these metal ions reside inside the oxygen octahedra. In the ionic picture, iron is in the Fe 3+ valence state with five 3d-electrons coupled by Hund's rule into the high-spin state forming a localized S = 5/2 moment. The Mo 5+ ion has a single 4d-electron in the t 2g manifold of degenerate xy, xz, and yz orbitals. The lowest energy coherent charge transfer process takes place when this single electron moves from Mo 5+ to a neighboring Fe 3+ through the hybridization between the same t 2g states along a given bond. For example, on a bond along the x-direction there is a finite overlap either between xy or xz neighboring orbitals with a real amplitude −t.
We consider a tetragonal elongation of the oxygen octahedra due to a substrate-induced compressive strain. The corresponding tetragonal crystal field ∆ T > 0 lifts the threefold t 2g orbital degeneracy by stabilizing an orbital doublet of the axial xz and yz orbitals and by placing the planar xy orbital at a higher energy. It should be noted that for the xy orbital, where the SOC is completely quenched by the tetragonal distortion, we can repeat the analysis of the cubic case without the SOC [20] to discuss the stability against doping. In addition, we include the SOC λ > 0 in Mo 5+ that further lifts the degeneracy of the xz and yz orbitals by stabilizing j z = ±3/2 Kramers doublet of the effective total angular momentum j = s + l = 3/2 quartet [32]. Here, s = 1/2 and l = 1 are spin and effective angular momentum of a t 2g electron, respectively [33]. We will not include the SOC for the Fe 3d-orbitals because it is much weaker than that for the Mo 4d-orbitals. The resulting local energy structure of Mo 5+ is depicted in Fig. 1(c), and the explicit forms of j z = ±3/2 wave functions are given by and are labeled hereafter by fermionic annihilation operators c σ with a pseudospin σ = ↑, ↓.
Here we take the limit of a strong SOC λ and a tetragonal field ∆ T compared to the nearest-neighbor (NN) hopping amplitude |t|. Projecting the t 2g orbitals onto the lowest energy states (1), we have obtained a lowenergy Hamiltonian for a charge transfer between NN Fe and Mo ions [see Fig. 1(d)], as follows.
where i(j) labels Fe(Mo) ions, ij ∈ x(y) refers to each NN bond along the x(y)-direction, σ = ↑, ↓ = ±1 stands for a spin index, 2∆ is a charge transfer gap between Mo 5+ and Fe 3+ , the number operators n j measure carrier density d † i d i and c † j c j , respectively. In the undoped SFMO, there is one carrier n = n (d) + n (c) = 1 per formula unit, ignoring the localized half-filled Fe d-shell. The SOC manifests itself in the spin-dependent hopping in Eq. (2) that explicitly breaks the original SU (2) symmetry. Hereafter, we set t = 1 as the energy scale for simplicity.
When an itinerant electron visits the Fe 3+ ion with a core spin S = 5/2, the resulting total spin S of Fe 2+ could, in principle, take one of the two possible values S = 2 and S = 3. However, the maximum allowed spin quantum number for six electrons in a d-shell is S = 2. The unphysical S = 3 states appear because the local and itinerant spin operators on the Fe site are treated as independent variables. In order to project the enlarged Hilbert space onto the physical one, we supplement the hopping Hamiltonian (2) by a local antiferromagnetic (AF) coupling J → ∞ between the local and itinerant spins [20]. The total Hamiltonian then becomes The sum is taken over every Fe site i, and S i and s i are operators for the local and itinerant spins, respectively. Ferrimagnetic ground state.-The model defined by Eqs. (2) and (3) is one version of canonical double exchange (DE) problems with an infinite exchange coupling between the local and itinerant spins. Similarly to the DE, a maximum kinetic energy gain is achieved when the local moments align ferromagnetically (FM) and, in the present case, antiparallel to the itinerant spins, giving rise to an FiM state.
We consider the FiM order along the tetragonal symmetry z-axis and discuss later its stability within the large-S spin-wave theory. We introduce fermionic operators D ↓(↑) for the carriers on the Fe sites with their spins quantized along the local moments. This representation diagonalizes the spin part of the Hamiltonian Eq. (3) and projects out the fermionic states D ↑ corresponding to the unphysical states with S = 3 (see Ref. [20] for details). The d-operators in Eq. (2) in terms of the new ones are expressed as d xz(yz) where b is a bosonic annihilation operator for a single magnon state. This is created when a spin-down electron moves away from Fe 2+ , which is in the entangled S = 2 state of the local and carrier spins, leaving an Fe 3+ local moment in the S z = S − 1 = 3/2 single magnon state which is tilted away from the fully polarized S = S z = 5/2 state [see Fig. 1(b)]. This representation provides the correct matrix elements of fermionic operators within the eigenstates of the allowed total spin, S = 5/2 and S = 2, in the perturbative level and retains a quantum nature of the local moments.
Inserting the above representation into Eq. (2), we get H = H 0 + H 1 + H 2 , where H 0 is a single-particle part Diagonalizing the noninteracting H 0 part, we get the following expression in the momentum space.
where E k = ∆ 2 + 2(cos 2 k x + cos 2 k y ) and the eigenstates α k↓ , β k↓ , γ k↓ , and c k↑ have been obtained by a unitary transformation [34]. The band structure (4) is composed of four bands and is shown in Fig. 2(a)-(b). The two flat α ↓ and c ↑ bands correspond to a nonbonding state composed of the d xz↓ and d yz↓ orbitals of Fe and a localized j z = 3/2 state of Mo, respectively. The dispersive antibonding β ↓ and bonding γ ↓ bands are made of spin-down states of Mo and Fe. The next-nearestneighbor (NNN) hopping between the same Fe (or Mo) ions, not considered here, might in principle give a finite dispersion to the flat bands. However, the corresponding hopping is between the d xz and d yz orbitals, and is extremely small (∼ few meV) [29]. Moreover, it exactly vanishes when projected onto the complex wave functions of the Mo j z = ±3/2 states due to a destructive quantum interference.
We now discuss the effects of a uniform external magnetic field H on H 0 . This just splits the four bands without hybridization because H 0 conserves the z-component of the real spins. While the external field is uniform, the Zeeman splitting gµ B H of the itinerant electrons on the Fe and Mo ions become staggered due to the difference in the g-factors. As shown in Fig. 3, this would allow us to control the mass of Dirac dispersions and also to dope the flat band just above the Fermi energy.
The flat bands, which are already fully spin-polarized, are different from the unpolarized ones, such as the ones in the (110) thin films of STO [36] or in (metal)organic systems [37,38], supporting the flat-band ferromagnetism [39][40][41].
Therefore, in electron-doped Sr 2−x La x FeMoO 6 or SFMO under a strong magnetic field, where extra electrons occupy the nondispersive Mo band, we anticipate other types of instabilities, such as Wigner crystallization [42] or various types of complex charge ordered patterns, as well as the formation of selftrapped polaronic states of minority spins at the Mo sites. As confirmed in the following part, the FiM state is stable in a wide carrier doping range of the electron doping, and, consequently, the minority-spin flat Mo band can indeed be electron-doped.
Spin-wave spectrum.-We now analyze the stability of the FiM order state postulated above. To this end, we derive a spin-wave excitation spectrum from the magnon Green function G q,ω = 1/[ω − Σ q,ω ] evaluated within the leading order of the large-S expansion. First, we note that in the classical S → ∞ limit, the magnons are localized, and they become dispersive only due to quantum corrections. The corresponding magnon selfenergy corrections (Σ q,ω ∼ 1/S), shown in Fig. 1(c), stem from magnon interactions with propagating transverse and longitudinal particle-hole excitations. Their expressions are quite lengthy and are given in Ref.
[34]. We find that a coherent spin-wave mode emerges below the Stoner continuum with the following dispersion relation in the low-energy limit.
where Γ 1q = cos q x cos q y , Γ 2q = (cos 2 q x + cos 2 q y )/2, J 1 and J 2 are the carrier induced exchange couplings between the NN and NNN local moments, respectively. They depend only on the carrier density and the charge transfer gap ∆ [34]. The spectrum (5) is gapless at q = (π, 0) and at the symmetry-related points, which is very surprising because (i) the model defined by Eqs. (2) and (3) does not host any apparent continuous spin-rotation symmetry, and (ii) the gapless points are away from the FM Bragg point q = (0, 0). Actually, the model has a hidden SU (2) symmetry that can be uncovered by a gauge transformation [34].
For the spectrum (5), the spin stiffness of the FiM ordered state is given by D = 2J 1 S + 4J 2 S. Shown in Fig. 4(a) is the dependence of D on the carrier density n for 0 < n < 1 at various values of the band gap ∆. In the range 1 < n < 2, D remains constant for ∆ = 0. This is because the added electron carriers occupy the unpolarized flat band, and no additional potential or kinetic energy is gained. For ∆ = 0, D becomes very weakly renormalized (∼ few percent [34]) as carriers occupy the minority spin flat band. For comparison, in Fig. 4(b) we plot the spin stiffness obtained at a zero SOC [20]. D vanishes at some critical doping, signaling the instability of the FiM order. Without SOC the FiM ground state cannot be stabilized at n = 1 or n slightly larger than 1.
Thus, a strong SOC extends the stability window of the FiM order to the experimentally accessible electron doping range. The reason behind the extended stability is the SOC-induced electronic band structure reconstruction which transforms a large Fermi surface centered around the q = (0, 0) to four small Fermi pockets around (± π 2 , ± π 2 ), as shown in Fig. 4(c), allowing more kinetic energy gain with an increasing carrier density. Indeed, as seen experimentally [43], electron-doped Sr 2−x La x FeMoO 6 thin films do exhibit the stable FiM order in a wide range of doping as correctly shown here by the model with the SOC.
In conclusion, we have proven the stability of the FiM ground state of SFMO thin films against doping within a perturbative analysis. We discovered that the SOC plays a critical role in the enhancement of the stability and results in a phase with an unusual band structure including Dirac dispersions and flat bands. We anticipate that this gives rise to interesting collective behaviors, such as Wigner crystallization [42]. | 4,146.8 | 2017-11-23T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Sewage Sludge‑Derived Producer Gas Valorization with the Use of Atmospheric Microwave Plasma
Atmospheric microwave plasma was applied to the processing of the partially cleaned producer gas obtained from sewage sludge gasification. The plasma processing resulted in residual tar compounds conversion and changes in the gas composition. During the tests with a different gas flow rates and microwave power inputs, liquid and gaseous samples were collected to evaluate the plasma reactor’s performance. The conversion efficiency ranged from 19 to 100% and it depended on the specific energy input ( SEI ), gas flow rate, initial tar concentration, and the nature of the tars compounds. Generally, it was shown that the conversion rate increased with the SEI and that the aliphatic, cyclic and substituted compounds were converted much easier than benzene. Moreover, applying plasma led to the production of heavier aromatics (i.e. naphthalene, indene, acenaphthylene) but the rise in their concentration was significantly smaller than the amount of converted compounds. The gas composition changes revealed in the increase of H 2 and CO concentration that was an effect of hydrocarbons and CO 2 conversion. Additionally, it was indicated that the microwave plasma reactor’s performance was noticeably worse than in the case of the laboratory test with a simulated producer gas. This was mainly attributed to differences in the reactors’ geometry, lower hydrogen concentration and the presence of inorganic deposit on the reactor’s walls that might have inhibited microwaves transfer. In general, the microwave plasma technology seems promising in the context of cleaning and upgrading the producer gas, however, further optimization research is necessary to make it more reliable and less energy consuming. Abstract
Introduction
Biomass gasification is gaining rapt attention in the context of renewable energy and sustainable development [1,2]. One of the reasons for it is that gasification can be considered as one of the most flexible fuel conversion processes. The producer gas can be utilized in boilers, engines, turbines, or even fuel cells [3]. Moreover, it can be converted into syngas or hydrogen which might be applied in chemical syntheses of many products, including components of liquid fuels, methanol, and fertilizers [4]. Additionally, biomass can be considered as a renewable source (neutral in terms of CO 2 emission [2,5]) that is widely and relatively evenly distributed along the earth thus accessible locally [5]. The biomass gasification may become even more attractive if wastes were considered as the gasification fuel. Gasification of the sewage sludge could be a good example of such an approach.
The abundancy of sewage sludge is now a major concern in many countries [6][7][8]. The development of sewage treatment plants, being a result of directives and large financial expenditures of the European Union [8,9], results in a significant increase in the production of sewage sludge [7,8,10]. Moreover, there are directives that significantly limit or even completely prohibit landfilling of the sewage sludge [7,11]. The use of sewage sludge as fertilizers is also significantly limited [7,9], due to a high content of heavy metals and pathogens contained in it [6,7]. A solution to this was found in thermal processing of sewage sludge. The most common approach is the incineration [7,12], but it involves the emission of significant amounts of pollutants such as SO 2 , NO x , and heavy metals [7,13]. Alternatively, sewage sludge could be processed via gasification resulting in the production of valuable producer gas of flexible utility [7,10,11].
Despite its advantages, biomass gasification is inseparably connected with tars production and sewage sludge gasification is no exception. Tars may be defined in many ways and assigned to a few categories but generally, they are a mixture of heavy organic compounds-mainly aromatics [14,15]. These compounds tend to condensate at the temperature range of 150-350 °C [15,16] and elevated pressure [17] thus resulting in malfunctions of mechanical devices as well as fouling and blocking of pipelines and filters [15,18]. In fact, tars may be considered as a waste stream from the gasification process, that is the main obstacle in the process commercialization [14,18,19]. The most common approach to remove tars from the gas stream is to use mechanical, usually wet methods that involve scrubbers or washing towers [15,19,20]. However, this method does not solve the tars problem completely but instead "pushes" it away creating a wastewater stream that consists of tars [19]. A more proper approach involves tar conversion rather than its removal. Since the yield of tars may vary from few to even 10 wt% (or more) [14] they may carry a considerable amount of energy. Methods that can allow for tar conversion may be classified into thermal, catalytic, and plasma methods [18]. While the thermal methods seem to be the most simple, they require a very high temperature (e.g. 1200 °C [21,22]) that may be hard to achieve and can cause material problems. Catalytic methods require a significantly lower temperature of ca. 500-700 °C [15,23] but the catalyst itself may be expensive and vulnerable to poisoning and deactivation (as it is in the case of the most commonly tested nickel catalyst [23,24]). The poisoning may be especially problematic in the case of sewage sludge gasification, due to its high content of S and N [12,13]. Sulfur and nitrogen compounds present in the produced gas are a well-known catalyst poison [25][26][27]. Plasma methods, although the most expensive in terms of investment and operational cost, do not suffer from the drawback that can be attributed to other conversion methods. In fact, they might be considered as a specific hybrid of catalytic and thermal methods due to a usually high temperature of plasma and the presence of reactive species (e.g. radicals, electrons, ions, and excited molecules), that may significantly enhance the decomposition process [18,28]. Consequently, similarly to catalytic methods, plasma may provide high conversion rate of tars into valuable products that will result in a general improvement of biomass conversion. However, the main drawbacks of plasma methods are high investment and operational costs. The operational costs are mostly connected with electric energy consumption thus a significant cost reduction could have been achieved if the plasma methods would be coupled with renewable energy 1 3 sources. Such a connection seems especially appropriate since the quick start/stop procedure of plasma reactors is advantageous in terms of quickly changing natural sources of energy (wind of sunlight) [29]. Energy efficiency of plasma reactors variates widely depending on the process conditions and plasma types, but the recent research show that as high values as 20-60 g of converted tar compounds per kWh could be achieved [29,30] with the specific energy input (SEI-ratio between power supply and volumetric gas flow rate, see "Assessment Methods") section usually below 1 kWh/Nm 3 . With the SEI close to 1 kWh/ Nm 3 or even higher, it can be concluded that application of plasma method is not reasonable in case of air gasification with the gas calorific values usually around 4-6 MJ/Nm 3 (1.11-1.67 kWh/Nm 3 ) [14]. However, the application of high SEI plasma could be justified in the context of more valuable products, like syngas, hydrogen and subsequent chemicals. Many plasma techniques have been investigated in terms of tars conversion, e.g. corona discharge [31,32], dielectric barrier discharge [33,34], arc plasma [12,35], gliding arc plasma [36][37][38], and microwave plasma (MWP) [28, [39][40][41]. The previous research of the authors proved that MWP might allow high conversion of tar compounds, while at the same time it influences gas composition increasing the content of H 2 and CO at the cost of CO 2 and hydrocarbons [28,40]. This can be achieved not only by a high temperature of the atmospheric MWP, which can usually range from ca. 5000 to 6000 K, but also due to the high concentration of radicals, i.e. O, OH, and H, that enhanced the conversion reaction [28, 40,42]. MWP may also be considered as a promising method for tar conversion due to its electrodeless character (electrodes are the most life-limiting factor in plasma techniques [18], what may be especially problematic in the context of aggressive producer gas conditions), relatively cheap components [43], and the possibility for scaling up due to commercial production of microwave generators with a wide range of power from few to hundreds of kW.
Although there is a lot of research on plasma application in tar conversion, most of it involves simplified conditions, e.g. using nitrogen and/or argon as a plasma gas or model tar compounds like toluene. There are very few articles where a real producer gas [12,31,35] is applied and none of them involve MWP. While the authors' recently published article [28] presents the results from the labscale MWP tar conversion in the atmosphere of simulated producer gas, it is still quite a simplification. Therefore, the goal of the experiments presented in this article was to test the MWP tar conversion when a real producer gas, derived from sewage sludge gasification, was applied. The motivation behind those works was to verify the results from simplified, simulate, lab-scale experiments [28,40] and to recognize any potential problems that may arise when a real producer gas is converted in the MWP reactor.
Experimental Setup
For the purpose of the experiment, REMIX S.A. company (Poland, Świebodzin) provided a test facility with a gasifier. The pilot-scale gasification plant investigated in this research was composed of four basic units: supply unit of feedstock, gasifier itself, producer gas cleaning line, and the furnace for the utilization of unburnt pellets. The gasified material was prepared from dried sewage sludge in the form of pellets with the diameter of 6 and 8 mm and stored in a silo, from which they were conveyed to the fuel hopper on top of the gasifier. The gasification reactor was the downdraft gasifier made of heat-resistant steel with an internal diameter of 0.42 m and a height of 1.82 m. The pilot plant operated periodically: the prepared batch of sewage sludge pellets was loaded into a sealed container from which the feedstock was delivered to the gasifier by means of a cellular feeder and bucket elevator with a capacity of 20 kg/h. Air electrically heated up to 400 °C was supplied by the compressor to the reactor throat as a gasifying agent with a capacity of about 20 Nm 3 /h. The average equivalence ratio (ER-ratio of air used to stoichiometric air) was 0.4. The incompletely gasified char-pellets were burned-out in the BFB furnace. The amount of char collected after gasification was approximately 60% in relation to the mass of pellets. Figure 1 presents the scheme of the producer gas cleaning line. In the initial step, the gas was cleaned with the use of mechanical methods that include cyclone, disintegrator (centrifuge), two scrubbers (one with oil and one with water), demister, and a filter with sawdust. After passing the initial cleaning step, the gas was heated up to 120 °C to prevent condensation of water at the inlet of the MWP reactor.
The principle of work of all atmospheric, electrodeless MWP reactors is identical [28,39,41,42]. The plasma discharge is initiated and sustained due to absorption of microwave radiation. The microwave generator provides radiation (in this case with the frequency of 2.45 GHz) that transfers through a waveguide. This kind of plasma reactors may work without electrodes and the plasma is generated in a quartz tube where microwave radiation affects flowing gas. The only exception to the electrodeless character of the reactor is the moment of plasma ignition. During this short stage, a tungsten rod is introduced into the quartz tube to focus microwave energy and provide ionization of the gas and further development of the plasma discharge. After the ignition, the rod is removed from the reactor. The plasma gas is introduced into the reactor tangentially, what increases the 1 3 stability of the plasma discharge and protects the reactor's walls from contact with the plasma. Practically, most of the electrodeless, atmospheric plasma reactors have very similar construction [28, 39, 41, 42] but some differences may occur in the geometry of the reactor, the generator's power and the presence of additional auxiliary devices. The major difference in the case of presented results was using four MW generators in series. The upper two generators were of 5 kW power while the two lower ones were of 3 kW. Since the microwave generation efficiency is ca. 60%, the microwave power was 3 kW and 2 kW, respectively. Each of the microwave generation lines ended in a movable plunger which allows optimizing microwave absorption/reflection ratio [42]. The reactor's quartz tube was 1500 mm length and its inner diameter was 36 mm. The quartz tube was cooled with the use of five auxiliary fans located alongside the reactor. The purpose of cooling was to prevent overheating of the quartz and damaging of the reactor. For the same reason, no thermal insulation was applied. The MWP reactor was designed and manufactured by PROMIS At the outlet of the reactor, a two-stage cooler (fed with air and water) was connected. After the cooler, the gas was passing a gas meter (Intergaz, BK-G16 M) and eventually was stored in gasbag or burned in a flare.
Material
The feedstock, sewage sludge, was obtained from a wastewater treatment facility in Świebodzin (Poland). The material was dried and pelletized at REMIX S.A. test facility prior to gasification. The properties of the sewage sludge are presented in Table 1.
Experimental Procedure
The initial operating parameters of the gasifier were as mentioned above ("Experimental Setup" section). After initiation of the gasification process with the use of hot air (400 °C), the air feed was controlled to keep the reduction zone temperature at the level of 725 ± 50 °C. After reaching a reasonable repeatability of the temperature and the gas composition (usually 3-4 h after the start-up), the producer gas (preliminarily cleaned by with mechanical methods) was passed through the MWP reactor as the only plasma gas. Prior to that, the MWP reactor was started-up and worked for ca. 10 min on air, that was gradually shut down with the producer gas introduction.
The whole experiment concerning MWP reactor application in gas cleaning was divided into two stages. During the first stage, only two upper microwave generators were applied resulting in microwave power of 3 and 6 kW, respectively. The second stage was performed with the use of three and all four generators thus the microwave power was respectively 8 and 10 kW. These two stages have been It should be also noted that the main parameter used to control the gasification process was the temperature inside the bed. This temperature was controlled by the stream of gasifying agent-air. As a result, the gas flow rate in the MWP reactor changed as well. Additionally, partial blocking of the producer gas cleaning line filters and pellets sintering in the reactor also could have affected the gas flow rate. Table 2 summarizes the conditions of all four experiments with different microwave power, gas flow rate, and resulting SEI. Since the gasification was performed in a periodic, semi-continuous fix bed gasifier, the gas composition could have varied within the process. Moreover, as it can be seen in Table 2 even the gas flow rate could vary significantly depending on the run. Due to hard to achieve stability and repeatability of the process, as well as the time-consuming character of the experiments, the sampling of the gas was done only once for every case. As a result, no error analyses were performed and it should be noted that the results presented in the further part of the article might be burdened with a high uncertainty. Nevertheless, the results show clear trends and they might be used for comparison with previous results or provide valuable leads considering process improvement.
For the purpose of composition analyses, the gas was sampled before MWP reactor (1st sampling point in Fig. 1) and after passing it (2nd sampling point in Fig. 1). In the latter case, the sampling was done twice for each SEI applied in the stage. The gas was sampled with the use of a conditioning unit (M&C, PSS 5/3) equipped with a peristaltic pump (180 NL/min) for 10 min. Before reaching the conditioner, the gas was passing through an absorption unit that consisted of three Dreschle flasks filled with isopropanol (75 mL) and kept in a chiller (− 10 °C) (PolyScience, SD07R-20). Additionally, the conditioner was followed by a gas analyzer (GEIT, GAS 3100R) allowing for the measurement of CO, CO 2 , and H 2 . Finally, the gas was pumped into the Tedlar bag (5 L) and transported to a laboratory for the gas chromatograph (GC) analyses. The GC analyses were applied for both gas and liquid samples. The liquid samples (a mixture of isopropanol with tars) were analyzed with the use of GC (Agilent 7820) equipped with a mass spectrometer (Agilent MSD 5977) and HP-5M column. These analyses allowed identifying the tar compounds in the producer gas and quantifying some of them. The GC (HP 6980) equipped with a flame ionization detector (FID) and RT-Alumnia Bond KCl column (Restek) was used for gas analyses. The purpose of these analyses was to identify and quantify the main light hydrocarbons (C 1 -C 3 ) present in the producer gas. More information considering chromatographic analyses, including temperature programs, can be found in the previous work [40]. All the GC samples were analyzed at least three times. The results of the GC analyzes present average values with standard deviation.
Measurements of particles concentration in the producer gas were made using the gravimetric method according to the Polish standard (PN-Z-04030-7). This method base on an isokinetic suction of the gas through a fiberglass filter (in 1st sampling point in Fig. 1). The sampling was done with the use of gravimetric dust meter (EMIO, EMIOTEST 2598). Samples of collected dust were dried to evaporate water at 105 °C and then at the temperature of 200 °C to remove organic compounds. Knowing the total gas flow rate, the final mass of the filter was used to calculate the particles concentration. The dust sampling was done twice.
Additionally, a residue deposited on the walls of the reactor during the plasma processing was collected and analyzed with the use of scanning electron microscope (SEM) (Phenom, XL) equipped with energy dispersive spectroscopy (EDS) detector and X-ray diffraction method (XRD) with the use of symmetric θ/2θ Bragg-Brentano geometry system (Philips, X'PERT).
The analyses of the dried sewage sludge (see Table 1) and its ash (see Table 7) were done by an external laboratory (Laboratory of Fuels and Activated Carbons, Institute for Chemical Processing of Coal, Zabrze, Poland).
Assessment Methods
The conversion efficiency (η) of tars compounds was calculated as follows: where, C 0 -refers to the initial, inlet concentration of converted compound (i.e. benzene, toluene) (g/Nm 3 ), C-refers to final, outlet concentration of converted compounds after MWP processing (g/Nm 3 ).
Alternatively, a simplified approach for estimation of the conversion efficiency may have involved area of the peaks from GC analyses instead of the actual concentration. In that case, concentrations (C 0 and C) were substituted with peak's area (S). Specific energy input (SEI) was defined as follows: where, P-MWP generator power supply (kW), V-volumetric gas flow rate (Nm 3 /h). It should be explained that for the purpose of all the calculations an assumption was made that the volumetric gas flow rate did not change noticeably due to plasma treatment. Therefore, the inlet and outlet gas flow rate were equal. This assumption was partly proven during some of the test by the measurements of the gas flow rate with and without plasma working. It was also validated by the fact, that the gas composition did not change significantly due to plasma treatment and the dominating compound was nitrogen (see Table 3). Table 3 presents the analyses of the gas composition (considering permanent compounds) from both stages of the experiment. As it can be seen, the composition of the gas differs in both tests, which confirms that the work of the gasifier was not stable and not repeatable. Moreover, the gas obtained in the gasifier was of low quality, resulting in a low content of CO, H 2 , and CH 4 , and at the same time a high content of CO 2 . Nevertheless, the presented data quite clearly indicate the positive impact of MWP. The use of plasma resulted in a significant increase in the content of H 2 and CO. The
MWP Impact on the Permanent Compounds
increase in the former was around 20-30%, although some doubts may occur in the case when the SEI of 1.14 kWh/ Nm 3 was applied. This result does not fit into the general trend of hydrogen content increase with the use of plasma (Table 3). This might have been caused by an error made during sampling or analysis, or from the unstable gasification process and temporary changes in the gas composition.
There are no such ambiguities in the case of CO, since its concentration significantly increased in each run, and this increase ranged from 200 to 300%. Changes in the CO and H 2 concentration were associated with a simultaneous decrease in CO 2 and CH 4 (as well as other hydrocarbons) yield. The CO 2 may dissociate due to the high temperature of the plasma as in R1: Moreover, carbon dioxide may interact with H 2 (R2) and CH 4 (R3) as well as with unprocessed char and soot (see R5) in Boudouard reaction (R4): Methane (and other hydrocarbons) may simply decompose due to thermal dissociation (R5) but it may also react with H 2 O that is present in a producer gas (R6). It should be also mentioned that CH 4 decomposition may be enhanced by the reaction with H radicals (R7).
(R1) It is also possible, for a water-gas shift reaction (R8) to take place in the lower part of the MWP reactor, where the temperature decreases: A more detailed information considering the interaction between permanent compounds in the presence of MWP may be found in the previous work [28].
Analyzing the impact of plasma on light hydrocarbons present in the gas, it can be seen that methane and ethane show a clear decrease with the increasing SEI. In the case of other compounds, there is no such a clear trend, and their concentrations fluctuate around relatively stable values. With a multicomponent mixture, it is difficult to clearly indicate the reasons for this behavior. It should be noted that all of these compounds can decompose and, at the same time, create due to recombination of methane and other hydrocarbon decomposition products, as indicated in the literature [44,45] and previous research of the authors [28].
In conclusion, the obtained data quite clearly demonstrate a positive effect of MWP on the quality of the gas, resulting in the increase of CO and H 2 concentration at the cost of CO 2 and hydrocarbons. Table 4 shows the concentration of the main, quantified aromatic compounds that were identified in the producer gas and changes in the tar composition resulting from the plasma application. Besides the compounds shown in the table, the producer gas also contained other compounds that are graphically depicted in the chromatograms (Fig. 2a, b).
MWP Impact on the Aromatic Compounds
Analyzing the data in the table and chromatograms, it can be concluded that the producer gas consisted mainly of low-boiling organic compounds. These compounds included aromatics (mainly benzene and its substituted forms), cyclic compounds (substituted cyclopentane and cyclohexane compounds), and aliphatic compounds (C 7 -C 10 ). The main, dominating compounds were benzene and toluene. At the same time, the gas also contained small concentrations of heavier compounds, i.e. indene, naphthalene and acenaphthylene.
Before considering the impact of plasma on the conversion of compounds contained in the gas, it should be noted that the individual tests differed not only in the SEI but also in the volumetric gas flow rate (see Table 2) as well as the initial concentrations of these compounds. While these facts make it difficult to directly and accurately compare the results, it does not affect the general trends and characteristics of the process. Figure 3a, b provide the surface of the peaks of chosen tar compounds that were qualified in samples taken during the experiment. Additionally, the figures include conversion rate estimated on the basis of peak's area (which is usually
Table 4
Changes of the aromatic compounds concentration due to MWP processing Table 4, it can be observed that a definitely higher conversion rate was achieved for cyclic compounds, aliphatic and substituted benzene compounds (including toluen ± e) than for the benzene itself. While the SEI of 0.29 kWh/ Nm 3 provides only a limited conversion rate (ranging from ca. 6 to 80% depending on the compound), the increase in Specific Energy Input up to 0.59 resulted in a significant improvement in the conversion rate (ranging from ca. 60 to 100% depending on the compound). In the case of higher SEI almost all tar compounds were completely converted. Only Styrene and p-Xylene achieved lower but still high conversion rate of 77-86% and 91-96%, respectively. The conversion rates obtained for toluene and benzene were lower, but their initial concentration was definitely higher. At the same time, gas conditioning using MWP also resulted in a noticeable increase in the concentration of indene, naphthalene, and acenaphthylene (Table 4). In general, it can be concluded that the increase in SEI resulted in an increase in the conversion rate of most compounds, but at the same time contributed to the increase in the share of heavier components such as indene, naphthalene, and acenaphthylene. However, the disproportion between the decomposed and created compounds should be noted-the decrease in the concentration of benzene and toluene was about two orders greater than the increase in heavier compounds.
The observed influence of plasma on the changes in tar compounds composition is consistent with literature data and previous experiments. The relatively low degree of benzene conversion is related to the high thermal stability of Fig. 3 a Peak's area and simplified conversion rate of identified, but not quantified, compounds presented in liquid samples collected during 1st stage of experiment. b Peak's area and simplified conversion rate of identified, but not quantified, compounds presented in liquid samples collected during 2nd stage of experiment 1 3 this compound [21,22,40], the presence of light organic compounds that may recombine to benzene [28] and the possibility of benzene formation due to the decomposition of its substituted forms (like toluene [40]). Moreover, the decomposition products of benzene and other aromatics can lead to the formation of heavier compounds, including indene, naphthalene, and acenaphthylene. This process mainly involves reactions between phenyl radical (which is an important intermediate product of benzene decomposition) and light hydrocarbons and their radicals (mainly C 2 H 2 but also C 3 compounds)-eventually, this condensation process may lead to the production of soot [28,40,46]. In fact, thermal decomposition of benzene leading to the creation of phenyl radical (R9) and the following products seems natural in the context of MWP's high temperature [40].
However, the addition of H 2 O, CO 2 , and H 2 makes new reaction pathways available, due to the presence of O, OH and H radicals (R10, R11, R12) [47], thus enhancing the conversion rate [28,40].
Moreover, these compounds, especially CO 2 and H 2 O, strongly influence the final conversion product leading to the formation of CO and H 2 rather than soot and hydrocarbons [28,40]. Therefore, it may be stated that MWP enables tar compounds conversion due to high temperature and the presence of radicals. However, the radicals may be produced not only due to thermal dissociation but also thanks to vibrational excitation, which is typical for MWP and compounds like CO 2 , H 2 , and N 2 [48,49]. The direct influence of electrons or ions is rather negligible in the case of MWP due to low energy of the former [48] one and low concentration of the latter one [50]. More information considering MWP characteristics and its influence on tar compounds conversion may be found in the previous works [28 , 40].
Comparison of the Simulated and the Sewage Sludge-Derived Producer Gas Results
The results presented in this paper are valuable due to the fact that they considered using MWP to clean a "real" produces gas obtained from the gasification process. However, this final step was preceded by an extensive research on a small laboratory scale with the use of model tar compounds and simulated gases [28,40]. Referring to this lab-scale results and comparing them to those presented in this paper should provide a deeper insight into the process and allow pointing out any explicit differences. Table 5 shows a composition of the simulated producer gas used in the previous research [28] and the producer gas obtained from sewage sludge gasification. The figure presents both the composition before and after plasma treatment. Since SEI is one of the most proper parameters allowing for the comparison of the process efficiency, two runs with the SEI closest to the "real" producer gas case (1.25 and 1.67 compared to 1.44 kW/Nm 3 ) were chosen for the presentation of the simulated gas case results. Despite significant differences in the initial concentration of the producer gas components, the changes in the gas composition are consistent, leading to an increase in the H 2 and CO concentration at the cost of CO 2 and hydrocarbons. What is different is the scale of these changes. While the relative increase in H 2 and CO is similar (20-30% in the case of H 2 and 200-300% in the case of CO) the absolute is significantly smaller in the case of gasification derived producer gas. Since CO and H 2 are produced from CO 2 and hydrocarbons this drop in their absolute share can be clearly connected with the decrease in CH 4 and CO 2 conversion rate that characterize the "real" gas case.
Similar conversion drop can be attributed to benzene. Table 6 presents data considering benzene conversion (as a model tar compound or tar component) in both cases: the simulated lab-scale and the sewage sludge gasification experiments. Despite similar SEI, the lab-scale experiments showed a higher conversion rate. This is especially interesting since the higher conversion rate was achieved even Table 5 Comparison of changes in permanent gases due to MWP processing of sewage sludge-derived producer gas and simulated producer gas *Methane was the dominating compound considering light hydrocarbons in the sewage sludge producer gas and the only light hydrocarbon in the simulated gas. However, the values presented in the table include also ethylene and acetylene that were present in a noticeable amounts in both cases (in the simulated gas they were among conversion products [ Fig. 3a, b) concentration is probably an order lower than the concentration of benzene (basing on the peak's area). Moreover, these other compounds (including toluene), are converted much easier than benzene. It should be also mentioned, that the lab-scale experiments were characterized by a higher concentration of CH 4 (compared to C 1 -C 3 compounds concentration in the other case). It is important since the presence of methane (or any other light hydrocarbon) can significantly decrease benzene conversion rate due to the secondary creation of aromatics and competitive consumptions of radicals that are crucial in decomposition/conversion of hydrocarbons [28]. Summarizing, the sewage sludge gasification test showed a lower conversion rate of CO 2 and hydrocarbons despite their lower initial concentration and similar SEI. This phenomenon may have been a result of a few factors. Some influence may be assigned to the reactor's geometry. Firstly, there were some differences in the space velocity comparing both reactors. In the case of simulated lab-scale experiments, the space velocity (calculated on the basis of normal conditions) was 0.64 s (for SEI = 1.66 kWh/Nm 3 ) or 0.48 s (for SEI = 1.25 kWh/Nm 3 ). In the second case (experiments with the producer gas), the space velocity was 0.61 s (SEI = 1.44 kWh/Nm 3 ), 0.39 s (SEI = 1.14 kWh/Nm 3 ), and 0.32 s (SEI = 0.29 and 0.59 kWh/Nm 3 ). Therefore, it can be seen that in most runs in the "real" gas experiments, the space velocity was lower and so was the reaction time. However, it should be noted that the calculation of the space velocity did not include gas temperature or the fact that the gas flow was swirled. The second factor connected with the reactors' geometry comes from a microwave energy distribution. In the case of simulated experiments, the reactor had only one generator focusing the whole microwave power (1.8 kW) in a small volume. The 10 kW reactor had four generators alongside the reactor's quartz tube. Therefore, while the SEI might have been similar in both cases, the energy distribution has been quite different. In the second case, lower energy density might have resulted in a lower concentration of OH, O and H radicals which play a crucial role in organic compounds conversion [28,40].
The stepwise distribution of the microwave energy alongside the quartz tube might have been also unfavorable due to another factor. During the experiments with producer gas, a residue layer was deposited on the inner walls. This deposit could have absorbed microwaves and inhibited their penetration through the gas. Interestingly, the deposit from the "real" gas experiments was quite different from the one obtained in the simulated gas experiments. The latter one was of purely organic origins. It was easy to remove from the quartz tube by blowing it off or washing. It was composed only of C, H, and N and its creation could have been easily and completely inhibited by the addition of CO 2 or H 2 O [28,40]. In the case of the experiments with the sewage sludgederived producer gas, the deposit was hard to remove, and it was created even though the gas included CO 2 and H 2 O. The SEM analyses proved that the deposit included, beside C, H, N and S, also elements like Al, Si, Fe, and O suggesting a partly inorganic origin of the deposit. Moreover, analysis of the experimental diffraction patterns (Fig. 4) allowed to state, that while the most of the deposit was in the form of amorphous, probably carbonous particles, crystalline phases as graphite and quartz, with the most intense peaks position similar to standards from JCPDS base (card numbers: 25-0284 and 33-1161, respectively), were also clearly identified. The presence of few additional peaks, nevertheless not very intensive, indicated the presence of other complex constituents in the deposit, corresponding to the standard cards no: 42-1491 (composed of Al, Si N and O). This is in contrary to the purely carbonous and amorphous structure of the soot obtained during the simulated lab-scale experiments [28]. The inorganic nature of the "real" gas experiments seems natural due to the high amount of ash in the sewage sludge (Table 1). In fact, the measurement of particles showed that their concentration was as high as 2.13-3.30 g/Nm 3 . This fact emphasizes another difference between the "real" and the simulated gas deposits. While the former one could have originated from the particles already present in the treated gas, the latter one was produced only due to the processes inside the plasma reactor. Consequently, conversion of the producer gas deposit involved heterogeneous reactions that could have been additionally limited by inorganic structures. In the case of simulated gas deposit, its creation could have been inhibited due to naturally faster homogenous reactions, i.e. between H 2 O/CO 2 and soot precursors. Additional information considering the deposit might be also derived from the sewage sludge's ash characteristic. Table 7 presents the composition of the ash and its characteristic temperatures. As it can be seen, the ash includes a lot of Si and Al what is consistent with the SEM and XRD analyses. Moreover, the softening and flowing temperatures of the ash are significantly below the temperatures that are obtained in the plasma reactor [28].
Therefore, it seems possible that the inorganic material in the producer gas was melted in the plasma core and deposited on the cooler walls of the quartz tube. The problem of deposit interfering with the microwaves transfer and plasma stability is a common issue in the MWP processing of carbon sources [42,51]. Finally, another important factor that may have caused a difference in the conversion rate could have been connected with the hydrogen concentration. Hydrogen, or more specifically its H radicals (and derived OH radicals), has a positive effect on enhancing decomposition of hydrocarbons both aromatic (like benzene) and light ones (like methane). In the experiment with the producer gas, the H 2 concentration 1 3 was 2-3 times lower than in the case with the simulated gas experiments ( Table 5). As a result, the influence of H radials might have been significantly limited. Additionally, the deposit on the quartz tube walls, inhibiting the microwave transfer, might have lowered the H population even more (as well as the temperature in the reactor). Consequently, the endothermic reaction R8 might have changed its direction (as in R13) and take part in decreasing the conversion rate [40].
Conclusion
Atmospheric MWP was applied as a method for producer gas processing. The producer gas was generated by sewage sludge gasification and partially cleaned via mechanical methods. Applying the MWP allowed to significantly reduce the concentration of residual tar compounds in the gas and positively influenced permanent gases composition.
The results showed that a higher conversion rate was achieved for substituted aromatics, as well as substituted and aliphatic hydrocarbons. A moderate conversion efficiency could have been attributed to benzene. Basing only on the quantified tar compounds, in the case of the highest SEI, the tar concentration was reduced from ca. 2453 to 643 mg/ Nm 3 (including benzene as a tar compound) or from ca. 620 to 125 mg/Nm 3 (excluding benzene). As a result, the achieved conversion efficiency was ca. 73.8% or 79.9%, respectively. However, it should be indicated, that the real conversion rate was definitely higher since these values did not include other, unquantified tar compounds. While their concentration was minor in comparison to benzene or toluene, their conversion efficiency was much higher, often reaching 100%.
Besides tar compounds conversion, MWP treatment resulted in an increase in H 2 and CO concentration due to the conversion of CO 2 and hydrocarbons. This change in the composition may be especially attractive if the gas was to be used for synthesis or H 2 production purpose.
Despite these advantages, the process might have some unwanted features. During the hydrocarbons conversion small amount of heavier aromatic, i.e. naphthalene, indene, and acenaphthylene were produced. However, the amount of these byproducts was disproportionately smaller (few mg/ Nm 3 ) than the amount of converted compounds (thousands of mg/Nm 3 ). Another problematic issue was the formation of a char/inorganic deposit on the reactor's walls. It is believed that this deposit limited the transfer of the microwave radiation into the quartz tube thus lowering the process efficiency and decreasing the conversion rate. This problem was mainly the result of the sewage sludge high ash content and its high melting and flowing temperatures.
(R13) C 6 H 5 + H 2 → H + C 6 H 6 Generally, the presented results and their comparison with the previous ones imply that the MWP technology may be considered as promising in terms of tar conversion and producer gas valorization. However, some further efforts are required to optimize the process, making it less energy consuming, limiting the heat losses and preventing formation of the deposit. | 9,241.4 | 2019-07-27T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Chemistry"
] |
Response to “Alternative theories of morphology in the Parallel Architecture: A reply to Benavides 2022”
<jats:p>N/A</jats:p>
I would like to thank Ray Jackendoff for submitting a reply to my article. I concur with him in hoping that this dialogue will enrich the broader conversation about the nature of morphology and its place in the language faculty.
In this reply I will address the points raised by Jackendoff (2022), in the order they were presented, starting with the abstract.
Before the discussion, it must be noted that, overall, while Jackendoff (2022) addresses several important aspects of my paper, many of the key arguments made in Benavides (2022) regarding RM are left unaddressed, including the confusing and excessive coindexation of schemas; the proliferation of schemas; the lack of a detailed explanation of how RM deals with changes in argument structure in word formation (e.g. inheritance and suppression of arguments); and the issue of unnecessarily ascribing meaning to a morphological construction (making it parallel to syntactic constructions such as the resultative) when meaning is already accounted for by the components of the structure (base and affix), as represented in concatenative models. Using constructional schemas, as Jackendoff & Audring (2020) (J&A) do, may not be the best way to incorporate morphology into the PA or to account for morphological phenomena in general, and it is important to explore other options.
In what follows, I quote relevant parts from Jackendoff (2022) and provide a response for each. Jackendoff's (2022) abstract begins with the following: "The Slot and Structure Model of morphology (SSM: Benavides 2022) presents itself as an extension of the Parallel Architecture (PA : Jackendoff 1997: Jackendoff , 2002." The and in the name, Slot and Structure Model, is not accurate. It is the Slot Structure Model because it is a model based crucially on lexical items being organized in an entry formed by slots that contain information. As noted in Benavides (2022: 13), this arrangement of blocks of information located within their respective slots constitutes the slot structure of each lexical item. The slots are not something separate or additional to the structure; they are the structure, which, along with percolation, determines the configuration of complex words. Jackendoff (2022) continues: "It is shown that (a) SSM does not segregate semantic structure from syntactic structure, violating the fundamental premise of the PA" This non-segregation in SSM only occurs below the word level. RM seeks to have a uniform notation for both phrasal syntax and morphology below the word level, but Benavides (2022) has shown that what happens below the word level is different from what happens above the word level, the phrasal component. Constructions such as the resultative are justified in syntax, not in morphology, because building phrases is different from forming words, especially with respect to the semantics, and this requires a different treatment for phenomena below the word level, as shown in Benavides (2022), § 3, 4. Language below the word level is different and the PA should adapt to that. Preserving a single notation (i.e. schemas) throughout should be warranted only by the data, not out of a desire for symmetry or uniformity, or in order to extend to morphology a formalism, based on constructions, that is used (effectively) for syntax.
In addition, as noted in Benavides (2022: 65), what happens below the word level is supported by Jackendoff's (2013) own theory of processing (based on the PA). In syntax, treelets are simply clipped together, without the need for a full phrase/sentence to be formed before clipping on another treelet. But in morphological processing, according to Jackendoff (2013), lexical items have to be fully-formed before they participate in phrasal structure. This automatically makes morphology different from the phrasal level, requiring, in my view, formations as in SSM.
The abstract continues: "(b) SSM is concerned primarily with deriving productive morphology, while the PA is stated in terms of declarative schemas that license nonproductive as well as productive morphology;" However, unification, which accounts for a significant portion of morphological processes, is procedural, not declarative, in RM. In J&A: 29 it is stated that "this single procedural rule is unification." Thus, it is not accurate to say that declarative schemas in RM license productive morphology; they do so only in part. In J&A: 28-9 it is stated that the schema in its generative function "contributes the procedural character of the rule; it actively manipulates pieces of structure, turning a structure that satisfies the 'input template' into one that satisfies the 'output template.' " Thus, in RM it is schemas in their generative function that implement unification and also license and carry out productive morphology. There is instantiation of variables as part of unification, but there is much more, including manipulation of structure, as seen in the description above (see also J&A: 30, 53, 158, 265).
In addition, schemas in their generative role do have to account for the 92% of Spanish -ble derivatives, the 93% of German -bar derivatives that are regular, as well as for derivations with the many other regular affixes in Spanish and other languages. Even non-productive affixes have relatively high levels of compositionality that need to be accounted for by a regular process (unification). Through a corpus study that included both productive and non-productive affixes in Spanish, Benavides (2014) found that there is an 87% of compositionality for affixes in general, and prefixes have almost 100% compositionality. Even affixes with very low productivity (as measured based on hapax legomena) had a majority of regular derivatives, with some of them reaching a level of regularity of over 85%.
The SSM can be seen as a refinement of the dual route model, and as such, SSM does account for irregular morphology in addition to regular word formation (via lexical redundancy rules). And while the dual route model has focused on morphology, its principles apply to the other components of the grammar as well (see Pinker 2006). For example, in syntax, any structure that does not follow regular syntactic rules, such as idioms or syntactic nuts, is stored in the associative network and undergoes lexical redundancy rules.
(c) SSM enforces a strict division between morphology and syntax, while the PA allows a degree of interpenetration.
In SSM morphology does interact with syntax, via the lexicon. As noted in Benavides (2022: 58), when needed in a sentence, fully-formed complex words separate into their respective components to participate at the interfaces, and instantiate into variables when necessary. More importantly, PA also separates morphology from phrasal syntax, as seen in the discussion below (p. 8). As also noted in Benavides (2022: 60), it is advantageous to represent morphology as operating below the word level as its own subcomponent, as this clearly marks the distinction between the phrasal and the word-based components of the lexicon/grammar. The formalism in (1) explicitly segregates semantic, syntactic, and phonological structures. The links between levels of representation are encoded by subscripting: subscript 1 connects the levels of the base drive, subscript 2 connects the levels of the entire word, and subscript 3 connects the levels of the affix (this last an issue to which we will return). Thus it directly embodies the basic principle of the Parallel Architecture.
These RM schemas segregate semantic, syntactic and phonological information, but they do not show the changes in argument structure (Benavides 2022, p. 64). It has not been shown in RM what exactly are the effects on argument structure in word formation, that is, what arguments are inherited, added or lost. In RM representations (in 1 above and in J&A) it is not shown that the Agent is no longer a part of the entry, and that driver can only take an object argument. The Agent is no longer a part of the derivative, but this is not reflected in the RM entry. It is also important to keep in mind that the derivation for driver is a relatively simple one when compared to the derivations with causative suffixes in Chichewa, Madurese, Malayalam, Chimwi:ni, and Choctaw, all analyzed in Benavides (2022: 31 ff.). Can RM, as currently formulated, account for the changes in argument structure in that type of complex derivations, in a principled and consistent way? That may be, in the future, but that is not shown in J&A.
However, this violates the basic premise of the Parallel Architecture, namely that phonological, syntactic, and semantic levels of representation are independent and internally unified. In RM and CxM, the close link between basic semantic categories and syntactic parts of speech is captured not by putting them in a box together, but rather by specifying the interface between the two levels. Similarly, the SSM slot labeled SUBCAT/SELECT encodes the affix's constraints on its base. It too mixes semantic and syntactic information. Moreover, it makes no connection with the syntax and semantics of the CATEGORIAL slot. The remaining slots deal with aspects of semantics: "core" lexical semantics and argument structure. So semantics is scattered throughout the slots. (p. 4) Lexical entries in SSM are represented that way for convenience and ease of interpretation. The syntactic and ontological features of the Categorial slot can be given a slot each, with the ontological category represented within a slot that is part of semantics The same goes for subcat/select features. Semantic information is not just scattered randomly in a lexical entry in SSM. Recall that there is a horizontal way of representing the LCS of a word, without "boxes" (Benavides 2022: 21). Just as syntactic structures are not necessarily represented in the mind as trees, lexical entries in SSM do not have to be represented with "boxes"; entries can be represented in different ways. However, whether represented vertically, horizontally, as circles within circles, or in other ways, lexical entries in SSM are still triplets of semantic, syntactic and phonological information.
And on the other hand phonology has no slot at all, just an informal listing at the top of the table. In short, on this reading, SSM, unlike RM and CxM, cannot be considered an instantiation of the Parallel Architecture. (p. 4) There are no slots for phonology in SSM representations, but slots could well be added. Their inclusion, however, would neither add nor detract from the way SSM accounts for the formation of complex words. In addition, the role of phonology is addressed (briefly) in Benavides (2022 p. 58-9).
RM and CxM also endorse a dual-route theory of processing, in which compositional derivations are in competition with stored complex items (Jackendoff and Audring 2020, chapter 7; Huettig, Audring, and Jackendoff 2022; Booij 2010, 251-253), and RM develops an extensive account of the network of stored forms. In fact, given the sheer volume of "irregular, semiproductive, or unpredictable forms" that "have to be memorized," RM might well be thought of as primarily a theory of the "relational network that is part of the lexicon." Such a theory should say that on one hand, driver is related to drive through its base, but on the other hand it is related to baker, singer, and winner through its affix. Moreover, it should be related to butcher and carpenter through its affix, even though the base of these words is a "bound root" rather than an independent word on its own. RM encodes these relations in affix. (p. 4) As noted above, there is also a sheer volume of regular forms to be accounted for. Regular forms are not always the most frequent, as we know from the German plural, where the default -s is in the minority when compared to irregular forms. However, there are still massive amounts of regular complex words in a language, and they need to be accounted for. SSM accounts for them in a simple and clear way, but the (generative) schemas of RM have characteristics that make them confusing, as seen in Benavides (2022: § 4). This is particular to RM, because in the PA no emphasis is given to regular or irregular forms. Regarding relating words ending in -er, this can be done with lexical redundancy rules as well. J&A: 82 present a diagram similar to the one in (2), which shows some of the possible relational links between the suffixes in derivatives ending in -er. While J&A do not explicitly label these as lexical redundancy rules, the diagram does illustrate what these rules can do; it is another way to visualize them.
(2) driv er butch er play er sing er bak er carpent er [adapted from J&A] SSM does recognize the creative power of analogy, as implemented with lexical redundancy rules, as noted in Benavides (2022: p. 54): "In fact, some of these words may have acquired their additional meanings due to the operation of lexical redundancy rules, by analogy. For example, the regular apreciable may have acquired its meaning of "worthy or deserving of being appreciated" by analogy to the already stored patrimonials punible and condenable, which have this meaning of "deserving of." Likewise, English translatable, which has the regular meaning of "capable of being translated," could have acquired its additional meaning of "easy to translate" by analogy to readable." " The key, however, is that, by definition, analogy is based on similarity, and when there is no similarity between two or more items, analogy has difficulty producing an appropriate form (Pinker 1999). Here is where the regular, default rule or structure comes in. So, analogy, or relational structures, or lexical redundancy rules do have an important role, but that role is limited, because they are all based on similarity. The regular process, unification, has an extremely important role as well, because the massive amounts of regular forms also have to be accounted for.
These roles, internal to the lexical network, constitute what RM calls the relational function of schemas. Benavides (p. 69) is correct in surmising "that, in essence, relational schemas are a modification and formalization of lexical redundancy rules." However they are not identical, and in particular are no longer represented in the format of Jackendoff 1975, which Benavides appears to adopt (p. 68). (p. 5) While relational schemas are not identical to lexical redundancy rules, they perform the same functions as lexical redundancy rules, which are not simply loose associations, as Jackendoff (2022: 5) says, but rather express a formal relation, as the rule shown in Benavides (2022: p. 68) illustrates.
There is a further consequence. Consider the English regular plural: it clearly can be used generatively to produce novel forms. However, it also appears inside of forms that have to be memorized, for instance clothes, woods, dregs, smarts, best regards, raining cats and dogs. In these cases, the plural schema is being used relationally, capturing the similarity between these forms and regular forms, rather than generating these forms online. This is not an isolated case: it turns out that any productive pattern can also be used relationally. (p.
5)
Lexical redundancy rules can be used for exactly the same purposes as those noted above involving words such as clothes, woods, dregs and others. A network similar to the one in (2) above can be created for words ending in -s, using lexical redundancy rules.
This conclusion undermines any attempt such as SSM to treat productive patterns in isolation, and to set aside nonproductive patterns as a matter for some sort of loose associationor as a matter for lexical redundancy rules. At the same time, RM upholds the distinction between computation and storage in processing by appeal to the difference between generative and relational functions of schemas. (p. 5) Lexical redundancy rules are not simply loose associations, as noted above.
The relation between morphology and syntax
A third difference between SSM and RM concerns the relationship between morphology and syntax. RM proposes an architecture along the lines of (4), in which the upper three components are concerned with the grammar of phrases and the lower three with the grammar of words. The double-headed arrows represent interface correspondences. (Thus Benavides is mistaken in claiming (p. 60) that "[i]n this model, morphology is not seen as being located below the word level.") From the perspective of PA and RM, a theory of morphology has to be concerned not just with morphosyntax but also with its interfaces with phrasal syntax, word phonology, and lexical semantics. (p. 5) SSM does show how morphology interacts with other components. For example, in Benavides (2022: p. 58) it is noted that "fully-formed structures created by WFRs (e.g. caza+dor 'hunter') in turn become a part of the lexicon. When needed in a sentence, they separate into their respective components to participate at the interfaces, and instantiate into variables when necessary," This diagram does not have a separate component called "lexicon," because RM and CxM, along with Construction Grammar, argue that the entire grammar can be said to be "in the lexicon." (p. 6) The use of the word "lexicon" in the diagram is just a matter of terminology, to which Jackendoff (2022) gives undue importance, given that it was explained in Benavides (2022: 4) (and other parts) that it means "below the word level." In the diagram in (1) in Benavides (2022: 4), the label "Word Level" is shown in parentheses below the word "Lexicon," with an explanation in the text that it refers to the lexicon below the word level. The definition of "lexicon" is even given as the subtitle for the paper: "The Centrality of the Lexicon Below the Word Level." Saying that there is a lexicon below the word level automatically implies that there is another part of the lexicon that is above the word level (the phrasal component). However, in order to avoid the use of the word "Lexicon" in such diagrams, the diagram can be represented as in (3) below, a variation of the diagram presented in Jackendoff (2022: 6), taken from Benavides (2022: 58). All the components of the diagram are a part of the lexicon, as per PA principles, with morphology operating below the word level.
( (3) above, linking to the phonology, syntax and conceptual structure, and with word formation (morphology) feeding fully formed words to the lexicon (below the word level). As ten Hacken (2019: 95) notes, in his diagram "the lexicon has been added as a box, but it is not a component of the same type as phonological, syntactic and conceptual structures." While ten Hacken (2019) This is in fact mentioned in Benavides (2022: 76) and in several parts of §2. The lexicon below the word level and above the word level are both a part of the lexicon, and this does not imply any contradictions with SSM.
Benavides says (p. 64): "Another important contrast between the SSM (as incorporated into the PA) and RM is that,…. in the former, morphology does not interface directly with phrasal syntax or semantics. It does so via the lexicon." Benavides approvingly cites Bresnan and Mchombo's (1995) Lexical Integrity Principle, which insulates internal word structure from phrasal effects. In short, SSM apparently considers it a virtue to isolate morphology from phrasal grammar. But RM considers it a vice. (p. 7) In SSM, morphology does interact with syntax, via the lexicon, as noted above; it is not isolated from phrasal syntax. Importantly, however, the PA does separate morphology from phrasal syntax, as noted in many places in J&A, including pp. 5, 16, 17, 20, 21, 134, 136, 139, and 273, where it is stated that morphology is its own subcomponent and that there are boundaries between morphology and syntax. For example, on p. 20, J&A note that "while phrasal syntax deals with how words are combined to form phrases, morphosyntax deals with the structure inside words." (Emphasis in the original.) This supports the idea that what happens below the word level (that is, inside words) is different from what happens at the phrasal level. On p. 273, J&A note that "morphosyntax is its own component of grammar, governing the internal structure of words." (Emphasis mine.) Again, the "internal structure of words" is the lexicon below the word level. And on p. 5 of J&A we see that "the boundary between morphology and syntax is maintained." When there are (at least) nine locations in a book (J&A) that either indicate or suggest that there is a boundary between morphology and syntax, it is a clear indication that there is a separation between morphology and phrasal syntax in PA/RM; that in a sense, morphology is isolated from phrasal grammar. As in SSM, however, the fact that there is a boundary between them does not mean that these two components do not interact. They do interact, not only in PA/RM but also in SSM.
Here are four representative phenomena that bear on the relation of word grammar to phrasal grammar. First, consider inflectional morphology. An inflected form answers to two masters. On one hand, its abstract features such as, say, second person singular dative, have to be licensed by its syntactic position and the features of other items that it must agree with. (p. 7) This is explained in Benavides (2022: 42), as part of the discussion on inherent and contextual features, and in the subsequent pages it is shown how inflectional features participate in morphological trees. Once an inflected word is fully formed, it separates into its components and participates at the interfaces in the phrasal grammar. This is the way there is an interaction between morphology and syntax in SSM.
Second, Booij 2010 points out that the grammar of numerals intercalates what look like compounds (e.g. seventy-six) with phrasal combinations (two and two thirds). Similarly, the grammar of English place names alternates between compounding (Crater Lake, Roosevelt Boulevard) and phrasal combination (The Gulf of Aqaba, The Bay of Biscay) (Jackendoff and Audring 2020, 41).
Places names such as The Bay of Biscay seem to be fully phrasal but they are fixed (as proper names), in a way that is similar to compounds, idioms, and prepositional link compounds (Lang 2013) (e.g. Sp. casa de campo [house + of + countryside] 'country house'), in that no intervening material is allowed. For example, we can say the port of beautiful Aqaba, but not *The Gulf of beautiful Aqaba, because the former is fully phrasal but the latter is not. The same is the case for expressions such as two and two thirds; they are fixed as numbers, in a similar way to place names, and do not allow intercalated material (cf. *two and almost two thirds, which is no longer strictly a number, but rather a phrase that includes numbers).
The fact that compounds are inflected (cf. the left-headed Sp. hombres rana 'frog men'), but do not allow intervening material (*hombres hábiles rana 'skillful frogmen') (see Benavides 2022: 76), actually reinforces the idea that morphology has to happen first, and only when items are fully-formed can they participate in syntax. Compounding and phrasal syntax each have their own, separate combinatorial principles.
Third, there exist paradigmatic relations between stored phrasal combinations and morphological combinations. For instance, alternating with phrasal knock NP out, there is the word knockout; likewise for send NP off and sendoff, and many others. This can also be explained through the use of lexical redundancy rules. Analogy exists not only between words, but among any type of item that is stored. Words such as sendoff were created by analogy with the phrases, which are also stored. However, this process may not be that productive, cf. turn NP in and *a turnin and beat NP up and *a beatup, which shows that these relations are not fully paradigmatic, or may have a significant number of accidental gaps. -it-yourself-er, dark-reddish, can-doism, down-to-earthness, and ex-man-of-steel. (p. 7) Examples such as these are accounted for in Benavides (2022: 64), as part of the discussion on lexicalism. The phrases (can do, down-to-earth) are represented in the mind as stretches of sound pressed into service as a word (Pinker 1999), regardless of what their function was before they were inflected. They are now units similar to words, not phrasal structures any more, that can undergo affixation.
Such phenomena must be accounted for. In CxM and RM, which countenance interactions between phrasal and morphological structure, they are to be expected. In contrast, a theory that demands a strict distinction between syntax and morphology, such as SSM, cannot cope with them. Perhaps we are owed an explanation of why such phenomena (other than inflection) are relatively rare, but it cannot deny their existence or otherwise sweep them under the rug. (p. 7) As noted above and as seen in Benavides (2022), the SSM not only copes with all these issues but also accounts for them in a principled way, far from sweeping them under the rug.
A mistaken interpretation of the RM formalism
To conclude, we must correct a mistaken interpretation of the RM notation.
Here again is the RM analysis of driver and the [N V-er] affix.
(1) a. RM representation of driver (cf. Jackendoff In these examples, coindex 3 connects only morphosyntax and phonology; one might expect it to connect to something in semantics as well. Likewise, one might expect a coindex 1 on the semantics DRIVE in (1a), connecting it to a verb in morphosyntax and the phonology /drajv/. And in (1b), one might expect a coindex z on the variable function F, connecting it to the verb in syntax and the variable in phonology. Benavides evidently has these expectations, as he says (p. 62) (p. 8) These are not my expectations alone. J&A: 129 explicitly say that "Intuitively, on grounds of uniformity, one might expect this link to extend to semantics as well." This means that the expectation is objective and justified, based on the purpose of being consistent. In Benavides (2022: 62) I add that, to solve this problem, J&A discuss several notational variants for coindexed schemas, noting that the issue boils down to the fact that the semantic structure associated with the affix is not always a coindexable constituent. J&A conclude that, given the difficulties associated with the alternatives, the notation adopted throughout their book appears to be a reasonably optimal combination of rigor and practicality. Thus, this problem of non-uniformity in RM is not resolved.
Similarly, "the affix does not contribute to the semantics" (p. 60); "affixes are found in morphosyntax and word phonology but their content or contribution is not found in word semantics (or in any of the phrasal components) (p. 60); "in RM …, the derivational suffix does not contribute any meaning" (p. 62); and "in devour (39), only part of the semantics, the Patient, is linked to phonology and syntax. The core meaning, DEVOUR, is left unlinked" (p. 66).
However, if one looks a little more closely at (1), these issues are resolved. First consider the absence of a coindex 3 in the semantics. The idea behind this notation is that the phonology /ər/ is an overt marker of the entire complex in (1b). The semantics of the complex is linked not to this marker, but rather to the morphosyntax and the phonology of the complex as a whole. (p. 8) Right, the semantics is linked to the complex as a whole, that is, the schema is what carries the meaning, not the affix proper. As Masini & Audring (2019: 369) note, "the semantic contribution of affixes is 'only accessible through the meaning of the morphological construction of which they form a part' (Booij 2010a: 15). Thus, affixes are not stored on their own and do not have an independent meaning outside the structure they occur in. This is part of CxM being a word-based theory." This confirms that the affix itself does not carry any meaning in schemas. Thus, in schemas, while the syntax and phonology of affixes are linked, they are delinked from semantics.
Other examples also show that the linking issue is not fully resolved in RM. The entry in (4) below shows the entry for devour exactly as presented in J&A: 11. Notice that, unlike the V in (1a) for driver (repeated below devour), the V in devour is not linked to the semantics; it is only linked to the phonology. Even if we wanted to link the V in devour to the semantics, as in (1a), there would be circularity, because the verb would redundantly encompass the NP that is already linked with the semantics (subscript y); the V would be linked to its own NP in syntax. Again, there is confusion, and the linking issue is still not resolved. The same comparison goes for other examples in J&A, including baker (p. 71), which has the same coindexation as driver in (1a). There is no mistaken interpretation of the RM formalism in Benavides (2022) [Jackendoff 2022] In addition, the Agent is still showing in the entry for driver, even though it is no longer a participant. As noted in Benavides (2022: 56), in this type of derivation, based on coindexation, it is not shown that the first argument of the base disappears after affix attachment and is no longer an argument of the derivative (cf. *Peter driver of the truck; *driver of the truck by Peter); the argument still appears as a part of the RM representation. Furthermore, in driver, only the semantics of the base should be reflected; the syntactic category of the base, V, is no longer a part of the output, yet it appears in the representation of driver in Jackendoff's (2022) example (1a). That representation, as well as others in RM, show the history of the derivation, but they do not show the final product accurately, as does SSM, as in (5). (5) SSM lexical entry for driver (Benavides 2022 As noted above and in Benavides (2022: 64), it is also unclear in RM how changes in argument structure (e.g. loss or addition of arguments) occur in word formation. According to J&A and Jackendoff & Audring (2019), the morphosyntax-semantics interface is responsible for the effects of morphological combination on argument structure. For example, event or process nominals such as abandonment preserve the argument structure of the corresponding verb abandon, while agentive nominals like baker and result nominals like inscription denote one of the semantic arguments of the corresponding verb. However, Jackendoff & Audring (2019) do not show what exactly are the effects on argument structure (e.g. what arguments are inherited and which are lost). In contrast, this is accounted for in a fine-grained way in the SSM. As for J&A, while they discuss some examples, there are inconsistencies related to those that arise with respect to coindexation.
understanding, the meaning of the affix can be roughly 'person who F's.' Hence the conclusion that RM words and affixes are semantics-free is unfounded. (p. 9) Affixes are indeed semantics-free in RM and CxM, as noted above and in Masini & Audring (2019: 369): affixes "do not have an independent meaning outside the structure they occur in." It is the schema (construction or structure) they occur in that contributes the meaning, as noted by Masini & Audring (2019) in the previous quote above. Regarding words as represented in RM, nowhere in Benavides (2022) is it stated or implied that words are semantics-free; only that the link between the form and semantics in words is not clear and is thus an inconsistency.
Benavides's misapprehension has a further consequence. Consider again "there is no direct mapping between form and meaning, as there should be in a construction." Similarly, in schemas and derived forms, while the link between phonology and morphosyntax is retained, the link to semantics is lost. Since the semantics is delinked, this is no longer a triplet of linked structures, as per the definition of a lexical entry in the PA. (p. 62) The implication is that an item that lacks one of the three levels of representation is not a lexical entry. However, unlike Construction Grammar, PA/RM countenances lexical items that do not involve all three levels (Jackendoff and Audring 2020, 11-12). Fortunately Benavides corrects this error on p. 72, listing some oft-cited examples such as yes (which lacks syntax), the do of do-support (which lacks semantics), and the -duce of reduce (also lacking semantics). (p. 9) Having a missing part of the triplet is fine for defective items; that is why they are given that name. But it is not fine for prototypical items such as the word devour or the suffix -er. It is a part of the definition of (typical or standard) lexical items that they consist of a triplet of semantic, syntactic and phonological structures. Prototypical words and affixes, not being defective items, should satisfy the triplet criterion.
(2 blank spaces before next section)
Closing remarks
Retaking what was mentioned at the outset, several of the key arguments made in Benavides (2022) regarding RM, that are left unaddressed in Jackendoff (2022), are accompanied in J&A by phrases such as "the clumsiness of this solution" (p. 165), for the proliferation of schemas; a "reasonably optimal" solution (p. 131), for the lack of a link between the form and the semantics in the schemas for words and affixes; "not always perspicuous…rather messy" (p. 127) and "impossible to use" (p. 129) for the coindexation used in RM schemas (which raises the issue of lack of plausibility in terms of processing); and others, including with respect to the lack of a detailed explanation of how RM deals with changes in argument structure in word formation (p. 19); and the issue of unnecessarily ascribing meaning to a morphological construction (making it parallel to syntactic constructions such as the resultative) when meaning is already accounted for by the components of the structure (the base and affix).
Clarity and consistency are both crucial traits in a morphological theory. Yet, due to the problematic issues discussed above, it is not clear that RM, as currently formulated, clearly satisfies these traits. Whether or not SSM is a perfect fit for the PA, it still seems to account for word formation in a more consistent way than RM. | 7,907.2 | 2022-09-20T00:00:00.000 | [
"Philosophy"
] |
Effects of Colored Noise in the Dynamic Motions and Conformational Exploration of Enzymes
: The intracellular environment displays complex dynamics influenced by factors such as molecular crowding and the low Reynolds number of the cytoplasm. Enzymes exhibiting active matter properties further heighten this complexity which can lead to memory effects. Molecular simulations often neglect these factors, treating the environment as a “thermal bath” using the Langevin equation (LE) with white noise. One way to consider these factors is by using colored noise instead within the generalized Langevin equation (GLE) framework, which allows for the incorporation of memory effects that have been observed in experimental data. We investigated the structural and dynamic differences in Shikimate kinase (SK) using LE and GLE simulations. Our results suggest that GLE simulations, which reveal significant changes, could be utilized for assessing conformational motions’ impact on catalytic reactions.
Introduction
The intracellular physical environment exhibits complex dynamics due to various factors, including crowding components and the low Reynolds number of the cytoplasm, among others [1,2].Furthermore, this complexity is heightened by certain components, such as enzymes [3,4], which display active matter properties, meaning they possess internal driving forces [5].Within this environment, a single molecule, for instance, a protein chain, is subject to continuous collisions with other components.Collectively, these factors can induce correlations between the motions of any molecule and its surroundings, resulting in memory effects [3,6].
This physical environment is generally not fully captured by molecular simulations in which some of the aforementioned factors are often neglected, such as the effects of the surrounding molecules, except for possibly water and ions.A standard approach to simulate the effects of the surrounding environment, viewed as a "thermal bath", is through the Langevin equation (LE) with an external random force [7][8][9][10].This random force is typically modeled as "white noise" where it is assumed that the interactions of the system and thermal bath are uncorrelated over time [10,11].However, experimental observations indicate that the true physical environment plays a more complex role, for instance, in proteins, in which correlations exist among certain degrees of freedom, which are reflected in the behavior of time-correlation functions [12][13][14].
The investigation of protein dynamics, particularly in enzymes, is critically important due to its potential role in catalysis-a topic that remains elusive [26,27].The influence of the fluctuating environment on dynamics is typically incorporated into simulation studies via white noise [28,29], wherein, as mentioned above, the correlations are neglected a priori.Correlations can be crucial for enzymatic reactions as time-dependent observables, such as rate constants, display a wide range of variation [12].For instance, these correlations can influence the hierarchy of functional motions [27] relevant during catalysis.Previous work reported the uncoupling of dynamic motions and catalysis [28]; however, the results were based on a specific type of noise satisfying the fluctuation-dissipation theorem (FDT).
In this work, we investigated the structural and dynamic differences in the Shikimate kinase (SK) enzyme when LE and GLE simulations are employed.SK is an enzyme involved in the production of chorismate, an essential compound for the functioning of pathogenic bacteria [30].The simulation lengths considered in this work align with the timescales captured in current quantum mechanical and molecular mechanical (QM/MM) simulations [31].Given that the LE and GLE simulations displayed structural and dynamic changes in SK, we propose that GLE simulations could be utilized to assess the influence of conformational motions on catalytic reactions in QM studies.
Materials and Methods
In this section, we outline the methodology employed in this work, specifically the Langevin equation (Section 2.1) and the generalized Langevin equation (Section 2.2), as well as the protocol followed for conducting the simulations (Section 2.3).
Langevin Equation (LE)
The LE is used to simulate the dynamics of a particle i, with a moment p i , in an environment modeled by a Stokes term with a friction coefficient γ. f i and ζ i are the deterministic and random forces acting on the particle, respectively, and expressed as follows: .
In the LE, it is assumed that fluctuations from the heat bath, which is kept at a temperature T, occur on a shorter time scale than those of the particle itself [10,11,25].Because of this, the former fluctuations can be modeled as white noise.This noise satisfies the following expression, which agrees with the FDT [9,32,33]: where k B is the Boltzmann constant, m i is the particle's mass, and δ ij is the Kronecker delta.The Dirac delta function δ(t − t ′ ) reflects the fact that the impacts from the environment are almost instantaneous, and therefore they are not correlated in time.
Generalized Langevin Equation (GLE)
When the motions of the molecules in the system are correlated in time with the thermal fluctuations, the dynamics is better described using the GLE [25,34] as follows: .
A key distinction between this expression and that of the LE (Equation (1)) lies in the friction term, as the former incorporates the history of the momentum weighted by a kernel function, as detailed in Ref. [35].
A similar kernel was used previously by Ceriotti et al. in the context of thermostats for MD [11].Here, t L is the local average memory time that is used to smooth out high frequency while increasing low frequency fluctuations in the system.λ denotes the intensity of the guiding frictional force.Notably, this kernel contains the Ornstein-Uhlenbeck term (second term in Equation ( 4)), which vanishes when one considers long-time averages ( t L → ∞ ) or minimal guiding force contributions ( λ → 0 ).The colored noise, η i , also satisfies the FDT, akin to Equation (2) for LE, but it is related to the kernel in Equation (4) instead.
The fact that noise types described by Equations ( 2) and ( 5) satisfy the FDT guarantees that the systems are in equilibrium.Previous studies have employed various noise models to elucidate single-molecule experimental results [12][13][14].However, it is important to note that colored noise is not limited to the type described here (Equations ( 4) and ( 5)); other types can also be explored that may lead to systems out of equilibrium.
Setting up the Simulations
SK from Helicobacter pylory (PDB code 3MUF, structure resolved at 2.3 Å) was used as a test case.It was studied previously through extensive MD simulations by one of the authors [36,37].The protein structure was solvated with TIP3P [38] water molecules and salt, consisting of Na + and Cl − ions, at a concentration of 150 mM, resulting in a 66 Å 3 cubic box.The setup of the simulation box and the input files' generation for MD were carried out on the web-based graphical user interface for CHARMM [39] using the CHARMM-27 force field set of parameters [40][41][42].Short-range interactions for vdW and electrostatic terms were computed explicitly up to a 12 Å cutoff distance.The former term was truncated by using a force-switching approach [43] starting at a 10 Å distance, while in the latter, the long-range interactions were solved with the fourth-order interpolation particle mesh Ewald (PME) method [44,45] with an FFT grid size of 72 in all directions.Simulations were conducted with the AMBER package (version 2022, San Francisco, CA, USA) [46].
The system was initially minimized during 10 × 10 4 steps, with the first 2500 cycles utilizing the steepest-descent method and the remaining steps employing the conjugate gradient method.After minimization, an equilibration procedure was performed in the NVT ensemble at 303.15 K using the Langevin dynamics thermostat with a friction coefficient of 1 ps −1 during 1.25 × 10 6 steps (1 fs time step).Hydrogen bonds were constraint with the SHAKE algorithm [47] for the protein structure and the SETTLE algorithm [48] for the water molecules.Harmonic positional restraints of 1 kcal mol −1 Å −2 on protein atoms were applied during the minimization and equilibration steps.
The final structure of the equilibration step was used as the initial structure for data production for both LE and GLE simulations.In the former, Langevin dynamics was assessed in the NVT ensemble during 600 ns with similar MD parameters as in the equilibration step but with a larger time step of 2fs.Regarding GLE simulations, a local averaging time (t L ) of 0.2 ps and a momentum guiding factor (λ) of 1.0 were used.These simulations were also conducted in the NVT ensemble with similar parameters for the Langevin dynamics as in the LE case.Frames were saved every 0.02 ns for analysis with the first 100 ns being skipped and considered part of the equilibration step.In total, 3× repetitions for each case, LE and GLE, were run to obtain standard deviation bars.
Analysis of Simulations
Principal component analysis (PCA) [49] was performed by selecting the H, C, O, N, and C α atoms with the AmberTools 2022 [50].The first two PC values were used to generate 2D histograms of counts (heatmaps) that provide information on the explored conformational space using a bin size of 1 Å 2 .The ratios of areas were calculated by taking the number of bins explored divided by the total displayed area of 125 × 97 Å 2 .The root mean square fluctuation (RMSF) was computed with in-house Tcl scripts for VMD software (version 1.9.4,Urbana-Champaign, IL, USA) [51] based on the C α atoms.
An interaction network was obtained, for both LE and GLE simulations, using an in-house Julia script (Code S1 in Supplementary Material), where C α atoms formed a link if they lay within a cutoff distance of 7 Å, as suggested in previous works [52][53][54].The weights for the interacting links were computed by taking the ratio of the frequency of link formation during the trajectory and the total number of frames.Based on these two computed networks an alluvial diagram was generated to compare the structural differences between them with Infomap (version 2.6.1,Umeå University, Umeå, Sweden), for community detection and then applying the mapping change algorithm [55][56][57][58].In total, 100 trials with the two-level algorithm were used to obtain network communities [59].Alluvial diagrams facilitate the visual comparison of the community structures across different networks.Infomap communities and alluvial diagrams were generated by using the online alluvial generator [60].
The normalized autocorrelation function (NACF) was computed according to the following equation (Code S2 in Supplementary Material): Here, we consider the distance between the center of masses (COMs) of the LID (residues numbers 109-123), r LID COM , and SB (residues numbers 32-60), r SBD COM , regions, where d(t) = r LID com (t) − r SBD com (t) , δd(t) = d(t) − ⟨d⟩, and ⟨• • • ⟩ denote time averages.The LID and SB domains, together with the link d connecting their COMs, are shown in the inset of Figure 4.
Results and Discussion
In the following subsections, we describe the structural and dynamic differences that were observed in the simulations for the LE and GLE simulations of SK.
Structural Modifications of SK in the Simulations with White and Colored Noise
Previous studies have indicated that the SB and LID domains of Shikimate kinase exhibit a high degree of fluctuations with and without substrates [36,37].Changes in the fluctuations in these domains are important, as they may be linked to catalysis not only in Shikimate kinase [61] but also in other protein kinases such as the topologically similar [62] adenylate kinase enzyme [63].
Our observations indicate an increase in RMSFs in the SB region of SK for the GLE simulations compared to the LE simulations (see Figure 1).Conversely, the LID domain exhibited a slight decrease in fluctuations.Because both domains are involved in the catalytic step, through the binding of the substrates, we argue that the changes in their motion can be used to assess the influence of dynamics on the catalysis through QM-free energy simulations.This will be discussed in more detail in Section 3.3.
LEU118 from the LID domain was split into two community blocks in the GLE case.Thi was also the situation for the LE community of residues SER36, ASP30, and ILE35, which were in the SB domain.This reveals that the interaction network of residues in the LID and SB domains is modified in the GLE simulation compared to the LE case.Taken to gether, the alluvial diagram allowed us to find out where new interactions appeared in the protein networks and to visually present the differences in both networks.
Behavior of Time-Dependent Observables in White and Colored Noise Simulations
Besides the structural analysis, dynamic properties (in terms of NACFs) were com puted.Because the LID and SB domains showed the largest RMSFs (see Figure 1) in the LE and GLE simulations, we computed the NACF of the distance between the COMs o these domains; the results are plotted in Figure 4.The plot shows that up to 10 ns, both
Differences in the Explored of the Conformational Space with the Two Types of Noise
The exploration of the conformational space was monitored through heatmaps, which were built upon the first two PCs; the results are shown in Figure 2. It was noticed that, in the LE case, the sampled conformational space was smaller (39.6% area ratio) than in the GLE case (48.9% area ratio).This was expected as the original purpose of the GLE method was to enhance the exploration of the conformational space.
In the LE simulations (see Figure 2a), four conformational basins can be distinguished, where the protein stays trapped most of the time.However, in GLE simulations (see Figure 2b), only one basin, around the (25,15) bin, is observed, which corresponds to the initial conformation.In this case, after the GLE gathers enough information on the low-frequency fluctuations, it escapes this basin and explores the conformational space more efficiently than the LE simulation.For systems with a few degrees of freedom, the sampled distributions for both LE and GLE simulations should converge to the canonical distribution, but in more complex systems, such as proteins, the relaxation process could extend over long time scales, and the distribution could differ in practical MD [64] and even more in QM simulations.
The difference in the conformational ensemble was also monitored through the analysis of residue network interactions by using alluvial diagrams.They allow for the detection of structural changes between networks [56,60].In the alluvial diagram, the two interaction networks from LE and GLE simulations were compared, and the results can be seen in Figure 3.
ing that a higher degree of correlation was maintained in the latter.We propose that the difference in these correlation rates between LE and GLE simulations may be used to assess the coupling between enzyme dynamics and the catalytic reaction in QM simulations.This may be achieved, for instance, by running both simulations and noticing the differences between energetics (through free energy differences) and dynamics (through the computation of correlation functions).In this diagram, communities are depicted as blocks (in different colors).When a community block from one network maps to another, it indicates that the two communities share the same network nodes.Otherwise, when a community block in the first network splits into two (or even more) community blocks in the second network, it means that some nodes from the original community interact more in the newly formed community from the second network.Thus, splitting reveals changes in the structure of the networks.
We noticed that the LE community formed by the residues ALA124, ARG112, and LEU118 from the LID domain was split into two community blocks in the GLE case.This was also the situation for the LE community of residues SER36, ASP30, and ILE35, which were in the SB domain.This reveals that the interaction network of residues in the LID and SB domains is modified in the GLE simulation compared to the LE case.Taken together, the alluvial diagram allowed us to find out where new interactions appeared in the protein networks and to visually present the differences in both networks.
Behavior of Time-Dependent Observables in White and Colored Noise Simulations
Besides the structural analysis, dynamic properties (in terms of NACFs) were computed.Because the LID and SB domains showed the largest RMSFs (see Figure 1) in the LE and GLE simulations, we computed the NACF of the distance between the COMs of these domains; the results are plotted in Figure 4.The plot shows that up to 10 ns, both systems behaved in a similar manner, with a slightly slower decaying behavior in GLE simulations.However, for longer times, they started diverging.It can also be noticed that beyond 100 ns, the NACF of the LE case decayed faster than that of the GLE case, indicating that a higher degree of correlation was maintained in the latter.We propose that the difference in these correlation rates between LE and GLE simulations may be used to assess the coupling between enzyme dynamics and the catalytic reaction in QM simulations.This may be achieved, for instance, by running both simulations and noticing the differences between energetics (through free energy differences) and dynamics (through the computation of correlation functions).In these simulations, we observed differences in the NACFs between the LE and GLE cases, where the noise changed from white to colored.Now the question is whether there is any physical basis for colored noise.In experiments, colored noise was found to explain time-dependent correlation functions in enzymatic turnovers [12-14].Also, computational studies suggest that the time-dependent transmission coefficient of the unfolded- In these simulations, we observed differences in the NACFs between the LE and GLE cases, where the noise changed from white to colored.Now the question is whether there is any physical basis for colored noise.In experiments, colored noise was found to explain time-dependent correlation functions in enzymatic turnovers [12][13][14].Also, computational studies suggest that the time-dependent transmission coefficient of the unfolded-misfolded amyloid-β [23] and the viscosity dependence of folding-unfolding rates in proteins [21] are better described by using a colored noise in the GLE.Thus, there is already evidence supporting the idea that colored noise is present in physical systems such as enzymes.
Conclusions
In this study, we investigated the structural and dynamic differences in the Shikimate kinase (SK) enzyme when simulated using both the traditional Langevin equation (LE) with white noise and the generalized Langevin equation (GLE) with colored noise.The primary objective was to understand how incorporating memory effects, as opposed to treating the environment as a thermal bath with uncorrelated interactions, influences the structural and dynamic behavior of enzyme systems.By employing colored noise, which accounts for time-correlated interactions, the GLE can provide a more realistic model of the intracellular environment.
The simulations revealed notable structural and dynamic differences in SK between the LE and GLE approaches.Specifically, compared to the LE case, GLE simulations exhibited enhanced RMSFs in the SB domain, which is responsible for substrate binding during catalysis.The difference in the community networks of LE and GLE cases, monitored through alluvial diagrams, showed that mainly the residues in the SB and LID domains exhibited different interactions.Also, the NACF for the distance between the SB and LID domains displayed noticeable differences beyond 10 ns, which is a time scale achievable in QM simulations.
The ability of GLE simulations to capture these differences in the simulations suggests their potential utility in quantum mechanical and molecular mechanical (QM/MM) studies, where understanding the influence of conformational motions on catalytic events is still elusive.As opposed to using restraints on predefined collective variables in a QM/MM simulation to modify the behavior protein motions, one could run GLE simulations where those variables are not necessary.
The structural changes observed in the SK enzyme under GLE conditions indicate that traditional LE simulations may overlook critical dynamic aspects that are essential for enzymatic function.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foundations4030021/s1.Code S1: Julia script for computing an interaction network based on the C α atoms of a protein by using a cutoff distance of 7 Å.Code S2: Python script for the calculation of the NACFs based on the COM distances between two protein domains.
Figure 1 .
Figure 1.RMSF analysis for the LE and GLE simulations shows that the main differences in th fluctuations come from the LID and SB domains.Fluctuations are increased in the LID domain and slightly decreased in the GLE case compared to the LE simulations.
Figure 1 .
Figure 1.RMSF analysis for the LE and GLE simulations shows that the main differences in the fluctuations come from the LID and SB domains.Fluctuations are increased in the LID domain and slightly decreased in the GLE case compared to the LE simulations.
Figure 2 .
Figure 2. Heatmap for LE (a) and GLE (b) simulations built upon the first two PCs.The sampled ensemble in the GLE case is broader than the LE case, but it is also more localized around the bin(25,15).The boundary of the explored conformational space is indicated by the red line.
Figure 2 .
Figure 2. Heatmap for LE (a) and GLE (b) simulations built upon the first two PCs.The sampled ensemble in the GLE case is broader than the LE case, but it is also more localized around the bin(25,15).The boundary of the explored conformational space is indicated by the red line.
Figure 3 .
Figure 3. Alluvial diagram displaying the difference in the communities for the LE and GLE cases.LE and GLE networks and their respective communities were compared using the mapping change algorithm, which compares the structural changes between communities in both networks.Vertical blocks in the alluvial diagram represent identified communities or modules detected by Infomap.In each column, each distinct community is assigned a unique color to visually differentiate it from
Figure 3 .
Figure 3. Alluvial diagram displaying the difference in the communities for the LE and GLE cases.LE and GLE networks and their respective communities were compared using the mapping change algorithm, which compares the structural changes between in both networks.Vertical blocks in the alluvial diagram represent identified communities or modules detected by Infomap.In each column, each distinct community is assigned a unique color to visually differentiate it from others.Where flow streams merge or split, blending of colors indicates nodes moving between communities.
Foundations 2024, 4 ,
FOR PEERREVIEW 8 others.Where flow streams merge or split, blending of colors indicates nodes moving between communities.
Figure 4 .
Figure 4. NACFs of distance between the com of the LID and SB domains (inset) for LE and GLE cases.NACFs displayed strong divergence after 10 ns for both cases (dashed green line).
Figure 4 .
Figure 4. NACFs of distance between the com of the LID and SB domains (inset) for LE and GLE cases.NACFs displayed strong divergence after 10 ns for both cases (dashed green line). | 5,199.6 | 2024-07-08T00:00:00.000 | [
"Biology",
"Chemistry",
"Physics"
] |
Initial-Condition Effects on a Two-Memristor-Based Jerk System
: Memristor-based systems can exhibit the phenomenon of extreme multi-stability, which results in the coexistence of infinitely many attractors. However, most of the recently published literature focuses on the extreme multi-stability related to memristor initial conditions rather than non-memristor initial conditions. In this paper, we present a new five-dimensional (5-D) two-memristor-based jerk (TMJ) system and study complex dynamical effects induced by memristor and non-memristor initial conditions therein. Using multiple numerical methods, coupling-coefficient-reliant dynamical behaviors under different memristor initial conditions are disclosed, and the dynamical effects of the memristor initial conditions under different non-memristor initial conditions are revealed. The numerical results show that the dynamical behaviors of the 5-D TMJ system are not only dependent on the coupling coefficients, but also dependent on the memristor and non-memristor initial conditions. In addition, with the analog and digital implementations of the 5-D TMJ system, PSIM circuit simulations and microcontroller-based hardware experiments validate the numerical results.
In recent years, memristor-based chaotic circuits and systems have been broadly investigated, since memristor is a special nonlinear circuit component with memory effect and synaptic plasticity [19,28,29]. This particular type of chaotic circuit and system is conductive to deriving coexisting infinitely many attractors. Such coexistence of infinitely many attractors means initial-condition-related extreme multi-stability [30][31][32]. To implement the initial-condition-related extreme multi-stability, an effective and simple method is to introduce memristor into an existing dynamical system to construct a new memristor-based dynamical system, which is different from the method of using periodic trigonometric function to realize the initial condition-boosted infinitely many attractors in some special boostable systems [33,34]. In fact, the memristor-based dynamical system has a total bifurcation route to chaos with the evolution of the initial conditions [19] and can display the coexistence of various types of attractors [27]. However, most of the recently published literature only focuses on the extreme multi-stability related to the initial conditions of memristors [35][36][37][38], and little on the extreme multi-stability related to the initial conditions of non-memristors. In this paper, we present a new 5-D TMJ system and emphatically study complex dynamical effects induced by the initial conditions of memristors and non-memristors therein. Thus, the dynamical effects of the initial conditions on the 5-D TMJ system are disclosed comprehensively, which have not been wholly reported in the previously published literature.
The rest of this paper is arranged as follows. Section 2 first presents a 5-D TMJ system and then studies complex dynamics related to the coupling coefficients. Section 3 focuses on the dynamical effects of memristor and non-memristor initial conditions on the TMJ system. With the analog and digital implementations, PSIM circuit simulations and experimental measurements are carried out to validate the numerical simulations in Section 4. Finally, the paper is summarized in Section 5.
The TMJ System with Complex Dynamics
This section first presents a new 5-D TMJ system and then studies complex dynamics related to the coupling coefficients using multiple numerical methods.
Mathematical Modeling
A jerk system is a third-order ordinary differential equation. It is of the following form: J(·) is a nonlinear function. Since the mathematical model in (1) represents the cubic time derivative of variable x, it is named the 'jerk'. A simple jerk system used in this paper is improved from [39] and its general jerk form can be described as follows: where a and b are two positive control parameters. Denote x = x 1 ,ẋ = x 2 , andẍ = ax 3 , respectively. The simple jerk system in (2) can be rewritten as: where x 1 , x 2 , and x 3 are three state variables. Therefore, the simple jerk system is a threedimensional nonlinear dynamical system. The simple jerk system in (3) only has a cubic polynomial nonlinearity of state variable x 1 . On this basis, and referring to [35], a new 5-D TMJ system is presented in this paper, which is achieved by replacing the cubic polynomial in brackets with one memristor and introducing another memristor into the first equation.
For an input x and an output y, a flux-controlled memristor with state variable ϕ can be described as: The memductance W(ϕ) in (4) is selected as a threshold nonlinear function bounded above and below, which can be expressed by: Thus, the 5-D TMJ system is established using the above two memristors as: where x 4 and x 5 represent the inner state variables of two memristors, and k 1 and k 2 represent two changeable coupling coefficients. Therefore, the system modeled by (6) is a 5-D nonlinear dynamical system. Besides, the initial conditions are defined as ICs = (x 1 (0), x 2 (0), x 3 (0), x 4 (0), x 5 (0)), in which the first three initial conditions IC 1 = (x 1 (0), x 2 (0), x 3 (0)) are called the non-memristor initial conditions and the last two initial conditions IC 2 = (x 4 (0), x 5 (0)) are called the memristor initial conditions. To aim at the revelation of extreme multi-stability, two positive control parameters in (6) are kept unchanged as a = 2.5 and b = 0.8. When the initial conditions are fixed as ICs = (10 −9 , 0, 0, 1, 0) and the two changeable coupling coefficients are determined as k 1 = 1 and k 2 = 2.3, the 5-D TMJ system can generate a representative chaotic attractor, whose phase portraits in two different planes are depicted in Figure 1. Here, a MATLAB ODE45 algorithm with time-step 0.01 and time-span (400, 1000) is utilized. The results show that chaotic dynamics can be established in the 5-D TMJ system.
Coupling Coefficient-Reliant Complex Dynamics
Bifurcation diagram and Lyapunov exponent (LE) spectra of a dynamical system are taken as two dynamical indicators to characterize the type of bifurcation scenario giving rise to chaos. To show complex dynamical behaviors in system (6) intuitively, the twoparameter bifurcation diagram and dynamical map are numerically simulated in the k 1 -k 2 plane, as shown in Figure 2, where the initial conditions are fixed as ICs = (10 −9 , 0, 0, 1, 0). The two-parameter bifurcation diagram is plotted by measuring the periodicities of the state variable x 1 based on a MATLAB ODE45 algorithm with time-step 0.01 and time-span (400, 1000) and the two-parameter dynamical map is depicted by calculating the maximal Lyapunov exponent (LE) based on Wolf's Jacobi method.
In Figure 2a, the red block labeled CH represents chaos, the black block labeled UB represents unbounded behavior, the white block labeled P0 represents stable point, the yellow block labeled MP represents multi-period with the periodicity greater than 8, and the other color blocks labeled P1 to P8 represent period-1 to period-8. Meanwhile, in Figure 2b, the pink-red-yellow-cyan blocks with positive maximal LE denote chaotic behaviors and the blue blocks with zero maximal LE denote stable-point behaviors with different positions or periodic behaviors with different periodicities. Besides, the dynamical distributions described by Figure 2a,b are consistent with each other, which clarifies how the dynamical behaviors evolve in the k 1 -k 2 plane. As can be seen from Figure 2, chaotic behaviors with several periodic windows appear in the center left part of the pictures, implying the existence of tangent bifurcations and chaos crises; the dynamical behaviors have the transitions from P1 to P2, to P4, and to P8, demonstrating the occurrence of period-doubling bifurcations. To demonstrate the dynamical effects of the initial conditions, the initial conditions of the 5-D TMJ system are considered as ICs = (1, 0, 0, 1, 0) and (0, 1, 0, 1, 0). In other words, the memristor initial conditions are kept as IC 2 = (1, 0), while the non-memristor initial conditions are set to IC 1 = (1, 0, 0) and (0, 1, 0), respectively. For these two different cases of the initial conditions, the dynamical distributions of the 5-D TMJ system vary greatly in the parameter intervals concerned. Under these two sets of initial conditions, the two-parameter bifurcation diagrams are numerically simulated in the k 1 -k 2 plane, as shown in Figure 3. Looking at the dynamical distributions in Figures 2a and 3, it is easy to see that they are quite different from each other. This shows that the initial conditions of different state variables have great influence on the dynamical behaviors of the 5-D TMJ system. To describe the evolution of its dynamical behaviors with respect to the singleparameter interval, the single-parameter bifurcation diagram and LE spectra are employed. The initial conditions are set as ICs = (10 −9 , 0, 0, 1, 0) for simplicity.
First, the coupling coefficient k 2 is set as a fixed value of 2.3, and the coupling coefficient k 1 varies within the interval (0.2, 2). Second, k 1 = 1, and k 2 varies within the interval (1.8, 2.6). The single-parameter bifurcation diagrams and the corresponding LE spectra are simulated by MATLAB software and their results are depicted in the lower half and the upper half of Figure 4, respectively. Note that the single-parameter bifurcation diagrams in the lower half of Figure 4 are acquired by plotting the local maxima of state variable x 1 according to the coupling coefficients that are increased in slight steps. When increasing k 1 or k 2 , it is found that the 5-D TMJ system under consideration can undergo abundant bifurcation scenarios and exhibit rich dynamical behaviors. Take the case in Figure 4a as an example to illustrate the bifurcation route to chaos of the 5-D TMJ system. When increasing k 1 , the running orbit begins with unbounded behavior at k 1 = 0.2, transfers into bounded chaotic behavior at k 1 = 0.24, and rapidly degenerates into periodic behaviors via a serial of period-doubling bifurcation. Afterwards, the running orbit breaks into chaos by tangent bifurcation at k 1 = 0.29. In the relatively wide interval k 1 ∈ (0.29, 1.25) the 5-D TMJ system mainly works in chaos state, but accompanied by some periodic window behaviors. The largest periodic window with period-4 appears in the interval k 1 ∈ (0.78, 0.88), during which the chaos crisis and tangent bifurcation happen naturally. When further increasing k 1 , the running orbit ultimately degenerates into periodic behaviors via serial period-doubling bifurcation. Meanwhile, for positive maximal LE, the 5-D TMJ system evolves irregularly in the folded space of a chaotic attractor, while for zero maximal LE, the 5-D TMJ system oscillates regularly on a limit cycle.
Consequently, the 5-D TMJ system can exhibit complex dynamical behaviors that are reliant on to the coupling coefficients, and its dynamical distributions in the parameter plane are greatly affected by the initial conditions of the system.
Memristor and Non-Memristor Initial-Condition Effects
This section focuses on the dynamical effects of the memristor and non-memristor initial conditions on the 5-D TMJ system through theoretically exploring the stability distribution of the plane equilibrium point and numerically investigating the initial-conditionrelated extreme multi-stability.
Stability Distribution for Plane Equilibrium Point
The equilibrium point plays an essential role in the dynamical behaviors of the system and it can be solved by setting the right-hand side of the system equations to zero. Thus, a plane equilibrium point can be easily obtained from system (6) and it can be described by: where µ, η are two arbitrary constants, representing the initial positions of the equilibrium point S 0 .
Evaluated at the equilibrium point S 0 , the Jacobian matrix J of system (6) can be derived as follows: It is obvious that the equilibrium point S 0 has two zero roots and three nonzero roots, similar to that reported in [36]. For the nonzero roots, the corresponding cubic characteristic equation can be derived from the Jacobian matrix of system (6) at S 0 as: where For the three nonzero roots, Routh-Hurwitz criteria for the above cubic characteristic equation are given by: If the three criteria of (10) are satisfied, the equilibrium point S 0 is stable and a point attractor appears; otherwise, S 0 is unstable and periodic or chaotic behaviors may occur in system (6).
The two changeable parameters are determined as k 1 = 1 and k 2 = 2.3, the initial parameter µ is varied in the region (−0.5, 1), and the initial parameter η is evolved in the region (−3, 3). According to the conditions given in (10), the stability distribution identified by the three nonzero roots in the µ-η plane can be depicted, as shown in Figure 5. The different color blocks are used to divide three categories of stability regions consisting of the unstable region I, stable region II, and unstable region III. The equilibrium point S 0 located in the region I and region III is the unstable saddle-focus (USF), whereas S 0 in the region II is the stable node-focus (SNF). Besides, the three nonzero roots marked by 0P3N represent the three negative real parts with no positive real part, and the three nonzero roots marked by 1P2N and 2P1N represent the one and two positive real parts, respectively. Particularly, the boundary marked by HB line between the region II and region III is the Hopf bifurcation (HB) line. Therefore, different types of attractors' behaviors may emerge in the different stability regions, such as point, periodic, chaotic, and unbounded behaviors, demonstrating that the dynamical behaviors are extremely dependent on the initial positions of the equilibrium point S 0 , i.e., the initial conditions of two memristors. Due to the existence of two zero roots, the aforementioned SNF for S 0 is critically stable, indicating that the stability of system (6) cannot be effectively judged only by the three nonzero roots [19]. Just for intuition's sake, the initial conditions are determined as ICs = (10 −9 , 0, 0, µ, η). For different values of µ and η located in different regions of Figure 5, the three nonzero roots, stability regions, and attractor types are numerically simulated, as listed in Table 1. Consequently, for different values of µ and η, various kinds of attractors appear in system (6), leading to the emergence of extreme multi-stability. The results from Figure 5 and Table 1 demonstrate that, with the variations of the memristor initials µ and η, system (6) derives multiple stability distributions and presents complex dynamical behaviors. Particularly, the unbounded behavior appears in system (6) under a very large region of memristor initial conditions, meaning that system (6) is less robust to the initial conditions. Besides, the stable-point attractors also appear in the unstable region I, and are triggered by the two zero roots of S 0 .
Initial-Condition-Related Extreme Multi-Stability
The local basin of attraction is the attracting region of a steady-state attractor in the plane of two initial conditions, within which any initial conditions will settle down to the attractor. The above analysis results indicate that the 5-D TMJ system can produce extreme multi-stability dependent on the memristor initial conditions. Thus, the memristor initial conditions are taken as two invariant measures for classifying the coexisting infinitely many attractors' behaviors [27,40]. Note that the local basin of attraction is colorfully drawn by calculating the periodicities of the state variable x 1 based on a MATLAB ODE45 algorithm with time-step 0.01 and time-span (400, 1000), regardless of the structure and position of the generated attractor.
The representative parameters are determined as a = 2.5, b = 0.8, k 1 = 1, and k 2 = 2.3, and the memristor initial conditions are denoted as IC 2 = (x 4 (0), x 5 (0)) = (µ, η). To depict the local basin of attraction, we examine the periodicities of the state variable x 1 at each point in the µ-η plane. For two sets of tiny non-memristor initial conditions of IC 1 = (10 −9 , 0, 0) and IC 1 = (−10 −9 , 0, 0), the local basins of attraction in the µ-η plane are measured and the results are shown in Figure 6. The color blocks represent the memristor initial conditions that trigger the trajectories of different dynamical behaviors, which are exactly the same as those used in Figure 2a. As a result, the 5-D TMJ system can exhibit complex dynamical behaviors closely dependent on the memristor initial conditions, indicating the emergence of extreme multi-stability therein.
Particularly, as revealed in Figure 6, the local basins of attraction with IC 1 = (10 −9 , 0, 0) and (−10 −9 , 0, 0) are very different. The attracting regions in Figure 6a are completely symmetric with respect to µ, while the right half of the attracting regions in Figure 6b is exactly the same as that in Figure 6a, but the left half is unbounded. This is shocking that completely different dynamical behaviors appear in the µ-η plane when the initial condition x 1 (0) is slightly varied. Unfortunately, the dynamical mechanism is quite difficult to be identified based on the mathematical model of system (6). Moreover, because there are extremely small differences in the initial condition x 1 (0), the dynamical distributions depicted by Figure 6a are different from those depicted by Figure 2, but the dynamical distributions depicted by Figure 6b are similar to those depicted by Figure 5. This is an unexpected and very interesting result.
For manifesting dynamical effects of the non-memristor initial conditions, two sets of large non-memristor initial conditions of IC 1 = (1, 0, 0) and IC 1 = (0, 1, 0) are employed. Accordingly, the local basins of attraction in the µ-η plane are measured and their results are shown in Figure 7. As can be seen, with the changes of the non-memristor initial conditions, the left and right attracting regions appeared in the cases of tiny non-memristor initial conditions are connected as a whole, and the bounded behaviors are mainly confined to the lower plane regions of the memristor initial conditions. In brief, the simulated results further demonstrate that the non-memristor initial conditions have a great influence on the dynamical behaviors of the 5-D TMJ system. Referring to the local basin of attraction revealed in Figure 6a, the memristor initial condition-induced attractors are measured to verify the coexistence of infinitely many attractors in the 5-D TMJ system. For some representative memristor initial conditions IC 2 , the phase portraits of the induced attractors in the x 4 -x 5 plane are depicted in Figure 8, where the stable point attractors are marked by the symbol "*". For better visual effect, only partial phase portraits of the induced attractors are simulated in Figure 8, but they sufficiently demonstrate the memristor initial-condition-related extreme multi-stability. In fact, more phase portraits of the induced attractors with different structures, periodicities, and positions can be numerically measured from the 5-D TMJ system, indicating the appearance of the coexisting infinitely many attractors' behaviors.
Analog and Digital Implementations
This section designs an analog circuit and a digital microcontroller-based hardware platform for implementing the 5-D TMJ system. The PSIM circuit simulations and experimental measurements acquire the memristor initial condition-induced attractors to validate the numerical simulations.
Analog Circuit Design and PSIM Circuit Simulations
PSIM (Power Simulation) circuit simulations are employed to verify the dynamical effects of the initial conditions on the 5-D TMJ system using a physical circuit. Based on the mathematical model given in (6), an analog implementation circuit in a jerk-circuit form is designed for the 5-D TMJ system and its circuit schematic is shown in Figure 9. Here, the memristor used in system (6) is implemented equivalently by the circuit module shown at the top of Figure 9, where the −tanh(·) module can be defined by a function module in PSIM software. When the two capacitor voltages v 3 and v 1 are taken as the input voltages of two memristors, respectively, the generated currents i 1 and i 2 can be expressed as: where τ 0 = RC is the integration time constant, v 4 and v 5 stand for the capacitor voltages of the memristor circuit modules, g 1 and g 2 represent the multiplier gains of the memristor circuit modules, and R k1 and R k2 are used for adjusting the two coupling coefficients. Usually, the multiplier gains g 1 and g 2 are fixed as 1, and the integration time constant τ 0 = RC set to 10 kΩ × 100 nF = 1 ms. Thus, the resistances R a and R b in the main circuit are determined by R a = R/a and R b = R/b, and the resistances R k1 and R k2 in the memristor circuit modules are obtained by R k1 = R/k 1 and R k2 = R/k 2 . For the representative control parameters a = 2.5 and b = 0.8, as well as the coupling coefficients k 1 = 1 and k 2 = 2.3, the four resistances can be calculated as R a = 4 kΩ, R b = 12.5 kΩ, R k1 = 10 kΩ, and R k2 = 4.3478 kΩ. Besides, two inverters should be connected to the capacitor voltage terminals of the memristor circuit modules to achieve the same results of circuit simulations and numerical simulations. The three initial capacitor voltages in the main circuit are preset as v 1 (0) = 1 nV, v 2 (0) = 0 V, and v 3 (0) = 0 V, and only the two initial capacitor voltages v 4 (0) and v 5 (0) of the memristor circuit modules are adjustable to acquire the desired phase portraits. Corresponding to the numerical simulations shown in Figure 8, the phase portraits of the induced attractors in the plane of v 4 and v 5 are simulated by PSIM software and the measured outputs under different values of v 4 (0) and v 5 (0) are displayed in Figure 10, where IC 2 = (v 4 (0), v 5 (0)). Therefore, the circuit simulations in Figure 10 validate the memristor initial condition-induced attractors revealed in Figure 8, indicating the feasibility of the designed circuit.
Digital Hardware Platform and Experiments
Based on a pony-size and low-cost microcontroller, a digitally circuit-implemented hardware platform can be developed for the 5-D TMJ system, in which STM32F407VET6 microcontroller with 32-bit RISC core and DAC8563 digital-to-analog (D/A) converter are utilized. According to the mathematical model given in (6), a discrete-time mathematical model is built by means of the fourth-order Runge-Kutta algorithm, which can be coded in C language and uploaded to the microcontroller. All the system parameters and nonmemristor initial conditions are determined in advance, and only the memristor initial conditions are adjusted to acquire the desired phase portraits using a digital oscilloscope. Figure 11 displays the digitally circuit-implemented hardware platform for the 5-D TMJ system and the memristor initial condition-induced attractors acquired by the digital oscilloscope. Similarly, following the numerical simulations shown in Figure 8, the phase portraits of the induced attractors in the plane of v 4 and v 5 are acquired, as shown in Figure 12, where the memristor initial conditions IC 2 = (v 4 (0), v 5 (0)) are changed as different values according to those given in Figure 8. Therefore, the experimentally acquired results in Figure 12 validate the numerical results well, reflecting the feasibility of the digitally circuit-implemented hardware platform.
Conclusions
In this paper, a new 5-D nonlinear dynamical system based on two memristors was presented by introducing two memristors to a simple jerk system, and its complex dynamics related to the coupling coefficients under different memristor initial conditions were studied using multiple numerical methods. Emphatically, the dynamical effects of the memristor and non-memristor initial conditions on the 5-D TMJ system were explored by theoretical analyses and numerical simulations, which were validated by PSIM circuit simulations and microcontroller-based hardware experiments. However, because of the two zero roots caused by the plane equilibrium point, the dynamical mechanism of the initialcondition effects on the 5-D TMJ system cannot be effectively explained only by the stability distributions of the three nonzero roots. To address this issue, an incremental integral reconstitution model can be used for elaborating the dynamical mechanism of the initialcondition effects [19,35], which deserves further study.
Data Availability Statement:
Data generated during the current study will be made available at reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,651.8 | 2022-01-27T00:00:00.000 | [
"Engineering",
"Physics"
] |
Optimal Sequential Sampling Plans Using Dynamic Programming Approach
In this paper we proposed a dynamic programming procedure to develop an optimal sequential sampling plan. A suitable cost model is employed for depicting the cost of sampling, accepting or rejecting the lot. This model is based on sequential approach. A sequential iterative approach is used for modeling the cost of different decisions in each stage. In addition a backward recursive algorithm is developed to solve the dynamic programming. On the other hand, the purpose of this paper is to introduce a new sequential acceptance sampling plan based on dynamic programming. At the end of this paper a numerical example is solved to show how this model works and then sensitivity analyses of main parameters of acceptance sampling model are carried out.
Introduction
Acceptance sampling is a useful tool which is used in quality control in order to design decision principles for lot acceptance problem.Depending on number of defective items in sample, the decision may be, 1. Accepting the batch 2. Rejecting the batch 3. Continuing the sampling and repeating decision process.
Therefore we expect to have less defective items in accepted batches using this approach.This method encourages mangers to use it for two reasons: 1. Improving the quality level of the batch with respect to inspection characteristics 2. Having reverse relevance between quality level and inspections required (Montgomery 2005).
We use sampling plans usually when we have destructive inspection experiments or cost of 100% inspecting is high or it takes long time.In this work a new decision making approach for accepting or rejecting a sample based on recursive inference is improved.
Recursive inference is used for determining the optimal decision.Also when none of the decisions of accepting or rejecting is optimal, it is assumed that we can have more observations and continue to the next stage.Thus a mathematical model is developed which is optimal solution because of using stochastic dynamic programming approach.
The main objective of model is to optimize cost of sampling system based on fraction of defective items.On the other hand Sequential sampling reduces the number of samples required to evaluate a population level by 40-80% in comparison with classical sampling techniques (Boivin and Vincent 1983).
Sequential sampling plans are designed to enhance the performance of sampling methods.First step of this method is to inspect an initial sample from the lot then we decide to accept, reject the lot or take another sample by analyzing the results of inspected sample.Sequential sampling plans are often applied where minimizing sample size is very important (Montgomery 2005).In such plans, item are inspected successively in different stages, until a decision is made on the lot or process thus the sample size of this method is not determined until the lot is accepted or rejected.Sequential sampling plans select the suitable decision quickly, especially when quality is particularly good or particularly.
Literature Review
The purpose of sampling plans is to provide users with evidence that the items reach the quality levels required and agreed upon.Sampling plans has been commonly conducted by quality inspection for acceptance purposes, which is based on the statistical theory.Acceptance sampling inspection is a statistical tool concerned with sampled items to make a decision about inspected products; especially to check whether items have met pre agreed quality specifications.Sampling inspection process concern with the definition of quality characteristics, sample size, acceptance criteria, and a combination of quality levels required by the producer and the consumer (Tong et al 2011).Golub (1953) presented a method for determining optimal Sampling inspection plans based on the criterion of minimizing sums of both producer's and consumer's risks when sample size is fixed.Tagaras (1998) studied the joint process control and machine maintenance problem of a Markovian deteriorating machine.He assumed that sampling and preventive maintenance were performed at fixed intervals.Kuo (2006) developed an optimal adaptive control policy for joint machine maintenance and product quality control.He included the interactions between the machine maintenance and the product sampling in the search for the best machine maintenance and quality control strategy for a Markovian deteriorating batch production system.Wortham and Wilson (1971) proposed a backward recursive technique for optimal sequential sampling plans.
Most attempts to minimize sampling cost were concentrated on reducing expected sample size and reducing inspection cost.But the costs associated with the accept-reject decisions should be considered in designing the sampling plans.The cost of each defective item in an accepted lot can be evaluated very accurately in some quality control environments.On the other hand, cost of accepting defective item can be estimated by considering its results in practical situations.Cost estimates for rejected lots are readily obtained by considering the rectification method (Wortham and Wilson 1971).
In this research, a new model for acceptance sampling problem is introduced.The objective of the model is to determine the optimal decision that minimizes the total cost including the cost of rejecting the batch, the cost of inspection and the cost of defective items.
The assumptions and notations of the proposed method are presented in section 3, the model is presented in section 4 and the solution algorithm along with numerical demonstration on the application comes in section 5, sensitivity analyses are performed in Section 6 and we discussed and conclude the results in Section 6.
The method to be reported here is based on dynamic programming and the recursive property of sequential sampling plans.This method is coded by excel 2013.Assume there is a batch with N items.A sample of n items is selected.Each defective item is replaced with a good one in sample after inspection.We although call "m" equal to the maximum number of items that can be inspected and it is determined by economic considerations.Based on inspection's output we want to make an optimal decision.The possible decisions are as follows, Rejecting the lot accepting the lot Taking another sample
Sampling Plan Using Dynamic Programming
Dynamic programming is one of the most powerful methods to model the stochastic state of decision-making processes (Ross 1983).In mathematics, computer science and economics, dynamic programming is a method for solving complex problems by breaking them down into simpler sub-problems.In general, to solve a given problem, we need to solve different parts of the problem, and then combine the solutions of the subproblems to reach an overall solution.The dynamic programming approach seeks to solve each sub-problem only once.In sampling plans, in cases where we are to select between accepting and rejecting a lot, since the proportion of defective item is not known, we are in stochastic space.Since the stochastic state of the process may be dynamic and changes by collecting more data, it will be possible to use the concept of the stochastic dynamic programming to model an acceptance-sampling plan.However, before doing so, first we need to have some definitions.Consider a problem where defective items in sample are replaced by good items and all items in the rejected lots are inspected and all defective times are replaced by good items before delivery.This leads to a dynamic programming model.
Let a K (n,x) be the expected cost of accepting the lot after observing x defective items in n sampled items thus we have c s n: is accumulated number of items sampled (n) multiplied by cost of sampling and inspecting one item (c s ), so this item is total cost of inspecting the items in sample.
c r x: is accumulated number of defective items sampled (x) multiplied by cost of replacing, reworking or repairing a defective item (c r ), so this item is total cost of replacing, reworking or repairing defective items.
c a E(y|x): is expected number of defective items in the part of the lot that has not been sampled (E(y|x)) multiplied by cost of one defective item in an accepted lot (c a ), thus this item is total cost of defective items in an accepted lot.
And now let r K (n,x) be the expected cost of rejecting the lot after observing x defective items in n sampled items thus we obtain c s N: is Lot size (N) multiplied by cost of sampling and inspecting one item (c s ), so this item is total cost of inspecting the lot after rejecting the batch.
c r x: is accumulated number of defective items sampled (x) multiplied by cost of replacing, reworking or repairing a defective item (c r ), so this item is total cost of replacing, reworking or repairing defective items.
c r E(y|x): is expected number of defective items in the part of the lot that has not been sampled (E(y|x)) multiplied by cost of replacing, reworking or repairing a defective item (c r ), so this item is total cost of replacing, reworking or repairing defective items in the part of the lot that has not been sampled.
Let s K (n,x) denotes the cost of taking one more sample after observing x defective items in n sampled items.If * K (n,x) denotes the optimal cost of decision making system, thus following is concluded, It is obvious that in decision of taking one more sample, if the item was not defective then we move to state (n+1,x) which its optimal cost is equal to * K (n+1,x) and if the sampled item was defective then we move to state (n+1,x+1) which its optimal cost is equal to * K (n+1,x+1) .Since the expected value of the proportion of defective items is E(p) thus using the conditional mean formula, following result is concluded, Also it is obvious that E(p) obtained as follows, The recursive approach of solving proposed dynamic programming model is summarized in the above Table .It is seen that the model can be easily solved using a computer program.In the next section, sensitivity analysis is performed on different parameters.
Sensitivity Analysis
A sensitivity analysis is performed on the parameters of the problem and the results are summarized in the Table (2).The results are summarized as follows, By comparing case one and case two, it is seen that when the cost of sampling and inspecting one item s c increases, then the optimal decision in the proposed method is to reject the lot.By comparing case one and case three, it is seen that when the cost of sampling and inspecting one item s c decreases, then the optimal decision in the proposed method is to continue to the next decision making stage.So it is logical, because when the cost of inspecting the one item decreases then, optimal policy would be to inspect more items.
Also it is seen that when the cost of replacing, reworking or repairing a defective item r c increases, then the optimal decision in the proposed method is to accept the lot as expected.Also when the cost of replacing, reworking or repairing a defective item r c decreases, then the optimal decision in the proposed method is to continue to the next decision making stage.
It is seen that when the cost of accepting a nonconforming item a c decreases, then the optimal decision in the proposed method is to accept the lot.Since decreasing cost of accepting a nonconforming item leads to decrease the cost of accepting the lot thus optimality of the acceptance decision is logical.Also it is seen that when the cost of accepting a nonconforming item a c increases, then the optimal decision in the proposed method is to continue to the next decision making stage.
Conclusion
This paper presents a dynamic programming procedure to design an optimal sequential sampling plan.That is very important for quality control managers to have a good lot with minimum cost.Since the cost of inspecting all items is very high, it is popular that managers use sampling for this purpose.In this work, a new decision making approach for accepting or rejecting a sample based on recursive inference is improved.The objective of the model is to determine the optimal decision that minimizes the total cost including the cost of rejecting the batch, the cost of inspection and the cost of defective items.Also when none of the decisions of accepting or rejecting is optimal, it is assumed that we can have more observations and continue to the next stage.A mathematical model is developed which leads to optimal solution because of using dynamic programming approach.Also, sequential approach is used for obtaining the cost of different decisions in each stage.A numerical example is solved to elaborate the application of the proposed methodology.At the end sensitivity analyses of main parameters are performed and the results are discussed.
,x) = c n + c x + c E(y|x)Where; ,x) = c N + c x + c E(y|x)Where;
Then the optimal decisions are determined using a backward recursive approach.They have shown that acceptance sampling model based on Bayesian inference is efficient and less expensive than the single sampling plan.Fallahnezhad and Aslam (2013) proposed a decision tree for optimizing the cost acceptance sampling problem based on recursive modeling.
2015 pp575-586 578 3. Dynamic Modeling
Pak.j.stat.oper.res.Vol.XI No.4 r : cost of replacing, reworking or repairing a defective item c a : cost of one defective item in an accepted lot.
Table 1 : Results of the dynamic programming process in the proposed sampling model
* K * K * K * K 72
Table 2 : Optimal solution for different values of parameters
* K | 3,116.4 | 2015-12-03T00:00:00.000 | [
"Mathematics",
"Business"
] |
Thermal rearrangement of harunganin and allylations of some compounds from Harungana madagascariensis
In honor of Prof. Dr. B. M. Abegaz on the occasion of his 60 th birthday anniversary Abstract The thermal rearrangement of harunganin ( 1 ), a major constituent of Harungana madagascariensis , was investigated. The rearranged products are mostly found as natural constituents in this plant. In addition, allylations of some anthranoids including harunganin ( 1 ), harongin anthrone ( 8 ), harunganol B ( 9 ), kenganthranol A ( 10 ) and 1,7-dihydroxyxanthone ( 14 ) with allyl bromide in the presence of potassium carbonate were studied. The chelated phenolic hydroxyl groups were not allylated under theses conditions. Harongin anthrone ( 8 ) and harunganol B ( 9 ) gave the O - and C -bisallylation products 11 and 12 , respectively.
Introduction
Several prenylated anthranoids have been found to occur in the family Hypericaceae and the majority of them are C-prenylated.2][3][4] Apart from the last compound, all were C-prenylated and showed strong α-glucosidase enzyme inhibition activity. 3,4Harunganin [(3,8,9-trihydroxy-6-methyl-4,4,5-tris(3-methylbut-2-enyl)anthracen-1(4H)-one] (1), the major constituent of this plant and the most active in this series, was observed to be instable in solution.The thermal (neat) rearrangement of harunganin methyl ether was studied by Richtie and Talyor 2 and that of harunganin (1) and the related isomers ferruginin A and B by Monache et al. 5 However, in the previous work 2 , only one rearranged product was observed upon heating of harunganin methyl ether whereas several products were formed according to our TLC studies in the solution decomposition of harunganin (1).Therefore, in the course of the present study, we reinvestigated the thermal rearrangement of harunganin (1) in dichloromethane solution.[8][9]
Results and Discussion
Refluxing of harunganin (1) in dichloromethane gave a mixture of five rearranged compounds 2-6.As can be seen from Scheme 1, the sterically congested two prenyl groups at C-4 thermally rearrange into the ortho or para position of the respective phenolic hydroxyl groups at C-3, C-8, and C-9.The anthrone HR 2 (3), harunganol A (2) and B (4), and harongin anthrone (5) were previously obtained upon the thermal rearrangement (20 min at 150 °C, then 10 min at 170 °C) of harunganin (1) by Monache et al. 5 One new compound, the anthrone B (6) was observed in the long term boiling in dichloromethane.][3][4][5] O-Allylation of harunganin (1) gave one major O-allylation product 7 (Scheme 2).During this short term heating in acetone, no major prenyl migration was observed.Not surprisingly, the chelated hydroxyl groups at C-8 and C-9 were not allylated.Harongin anthrone (8) and harunganol B (9) gave bisallylation products.In addition to the O-allylation of the non-chelated hydroxy group at C-3, a C-allylation of the C-H-acidic benzylic anthrone position occurred to afford the products 11 and 12, respectively.The mass spectra of these products showed introduction of one or two allyl units in the molecule.The presence of allyl groups was supported by the signals in the 1 H NMR spectra for compound 11 at δ Interestingly, kenganthranol A (10) with a benzylic hydroxyl group at C-10, did not undergo allylation at this sterically hindered hydroxyl group and gave one major product 13 with allylation of the non-chelated phenolic hydroxyl group at C-3.As expected, 1,7dihydroxyxanthone (14) gave only the mono-allylation product 15 with reaction of the nonchelated hydroxyl group at C-7.
In summary, the long term thermal decomposition in dichloromethane of harunganin (1) gave products also found as natural constituents in H. madagascariensis and similar rearrangements may occur in biosynthesis.During the allylations of some phenolic anthranoids only the more reactive non-chelated hydroxyl groups reacted.C-Allylation was observed in the reaction of harongin anthrone (8) and harunganol B (9) to yield the O-and C-bisallylation products 11 and 12, respectively.
Experimental Section
General Procedure.Melting points were determined on a Büchi 535 melting point apparatus and are uncorrected.The IR spectra were obtained in CHCl 3 a JASCO 302-A spectrophotometer.UV spectra were recorded on a Hitachi UV 3200 spectrophotometer. 1 H and 2D NMR spectra were run on Bruker AMX 400 and AMX 500 MHz NMR spectrometers.Mass spectra were obtained with a Varian Model MAT 311 spectrometer at 70 eV.HREIMS were taken on a JEOL HX 110 mass spectrometer.Silica gel [Kieselgel 60 (0.063-0.200 mm) were used for column chromatography; precoated silica gel plates (Merck, Kieselgel 60 F254, 0.25 mm and 1 mm) were used for TLC and preparative TLC analysis.Spots were visualized under UV light (254 and 366 nm) and by spraying with ceric sulfate followed by heating.Thermal rearrangement of harunganin (1).Harunganin (1) (85 mg) was refluxed in dichloromethane (30 mL) for 90 h.The crude products were purified by silica gel column chromatography.Hexane-ethyl acetate (99:1) eluted successively compounds 2 (13.2 mg), 4 (8.6 mg), 3 (17.0mg), 5 (6.0 mg), and 6 (4.2 mg).][3][4][5] Typical procedure for allylation of phenolic compounds 7-10 and 14.The procedure used was similar for all allylations.A solution of the compound in anhydrous acetone (10 mL) was added to a suspension of potassium carbonate (1.0 g).Allyl bromide was added drop wise and the mixture was reflux for 1.5 h.After removal of the solvent, water was then added and the mixture was extracted with ethyl acetate (3 x 10 mL).The combined organic extracts were washed with water (50 mL), dried (MgSO 4 ) and concentrated in vacuum.The residue was examined by TLC using the mixture of hexane-ethyl acetate (96:4) as eluent.
3-O-Allylharunganine (7).
Allyl bromide (300 mg, 2.4 mmol) was added to a solution of harunganin (1) (15 mg, 0.033 mmol) following the procedure described above.After completion of the reaction (TLC monitoring), work up was carried out and the residue was chromatographed on silica gel.The fraction containing 7 was purified on preparative TLC (PTLC) to give an ARKAT USA, Inc. | 1,324.4 | 2006-10-11T00:00:00.000 | [
"Chemistry"
] |
Polymodal Responses in C. elegans Phasmid Neurons Rely on Multiple Intracellular and Intercellular Signaling Pathways
Animals utilize specialized sensory neurons enabling the detection of a wide range of environmental stimuli from the presence of toxic chemicals to that of touch. However, how these neurons discriminate between different kinds of stimuli remains poorly understood. By combining in vivo calcium imaging and molecular genetic manipulation, here we investigate the response patterns and the underlying mechanisms of the C. elegans phasmid neurons PHA/PHB to a variety of sensory stimuli. Our observations demonstrate that PHA/PHB neurons are polymodal sensory neurons which sense harmful chemicals, hyperosmotic solutions and mechanical stimulation. A repulsive concentration of IAA induces calcium elevations in PHA/PHB and both OSM-9 and TAX-4 are essential for IAA-sensing in PHA/PHB. Nevertheless, the PHA/PHB neurons are inhibited by copper and post-synaptically activated by copper removal. Neuropeptide is likely involved in copper removal-induced calcium elevations in PHA/PHB. Furthermore, mechanical stimulation activates PHA/PHB in an OSM-9-dependent manner. Our work demonstrates how PHA/PHB neurons respond to multiple environmental stimuli and lays a foundation for the further understanding of the mechanisms of polymodal signaling, such as nociception, in more complex organisms.
the nature of such responses of phasmid neurons to environmental stimuli has yet to be determined on a cellular level and the underlying molecular mechanisms remain unclear.
In this study, we show that the PHA/PHB neurons respond to a wide range of aversive stimuli including aversive odors, copper, alkaline solution, hyperosmotic solution, and harsh touch. We further identify critical roles for the TRPV protein OSM-9, the CNG channel protein TAX-4 and the post-synaptic neuropeptide in the sensory transduction of PHA/PHB. Our data suggests that the PHA/PHB neurons are polymodal neurons employing an elaborate combination of intracellular and intercellular signaling pathways to detect and process environmental stimuli.
Results
The PHA/PHB neurons respond to a wide range of aversive stimuli. To monitor the activities of the PHA/PHB neurons, we generated a transgenic strain in which the calcium indicator protein GCaMP5.0 was transcribed under the control of the ocr-2 promoter 9 . Previous studies have reported that PHA/PHB neurons are required for detergent (SDS)-evoked avoidance behavior 6,7 . Consistent with these studies, we observed reliable calcium elevations in both the soma and the processes of the PHA/PHB neurons upon perfusion of 1% SDS to the tail of the worm (Fig. 1a-c).
We then sought to discover whether the PHA/PHB neurons could be activated by other chemical and physical stimuli. We observed robust calcium transients in the PHA/PHB neurons during the stimulation of the worms with aversive odors such as isoamyl alcohola (1:100 IAA) and 1-octanol (1:1000) and an alkaline solution of pH 12. Harsh touch (20 μ m displacement) and hyperosmotic solution (2 M glycerol) also induced robust calcium transients in the PHA/PHB neurons. However, no such response was observed with the perfusion of the bath solution, alkaloid quinine (20 mM), or an acidic solution of pH 3 ( Fig. 2a and b). Interestingly, the calcium levels of the PHA/PHB neurons decreased upon the application of copper heavy metal ions and were increased by copper removal (Fig. 2a,b). No detectable calcium variation was observed with the application of attractive odorants such as butanone. Notably, we did not observed any differences between the responses of PHA and PHB to these stimuli. These observations suggest that PHA/PHB are polymodal neurons responding to noxious chemical and physical stimuli.
PHA/PHB neurons function as primary sensory neurons for sensing odorants. One possibility is that calcium elevations in the PHA/PHB neurons upon exposure to sensory stimuli occur post-synaptically and are induced by other neurons. Therefore, we tested the IAA-induce responses in PHA/PHB in unc-13 mutant worms and unc-31 mutant worms. In this unc-13 encodes the ortholog of the mammalian Munc13 which is required for neurotransmitter release from synaptic vesicle 10,11 and unc-31 encodes the ortholog of the mammalian CAPS proteins and is essential for neuropeptide release from dense core vesicles (DCVs) 10,11 . Notably, IAA-induced calcium elevations in PHA/PHB in unc-13 and unc-31 background were similar to those of wild-type worms. This seems to confirm that PHA and PHB are the primary sensory neurons for sensing IAA (Fig. 3a,b).
IAA-sensing of the PHA/PHB neurons is dependent in TAX-4 and OSM-9.
We then investigated the molecular mechanisms of IAA-sensing in PHA/PHB. TAX-4, a subunit of a cyclic nucleotide gated channel involved in chemotaxis mediated by the AWC neurons, has been implicated as required for PHA/PHB-mediated avoidance response to SDS 6,12 . We found that IAA-induce responses in PHA/PHB were dramatically diminished in tax-4 mutant worms (Fig. 3c,d). Sensory transduction in the ASH neurons in response to noxious osmotic shock, heavy metal ions and volatile chemical and alkaline solutions have all been noted to be mediated by OSM-9, a TRPV-related cation channel 1,13 . OSM-9 is expressed in PHA/PHB as well as in some amphid sensory neurons such ASH and AWA 14 . Interestingly, IAA-induced responses in PHA/PHB were also significantly weaker in osm-9 mutants than in wild-type worms (Fig. 3c,d). This demonstrates that both TAX-4 and OSM-9 are required for IAA-sensing in the PHA/PHB neurons.
Copper inhibits the PHA/PHB neurons. Both IAA and copper activates ASH neurons 1,15 . Unexpectedly, we found that the calcium levels in the PHA/PHB neurons were decreased by the application of copper (an "ON" response), and were increased by copper removal (an "OFF" response) (Fig. 4a,4b). Neither the "ON" response nor the "OFF" response was affected by the loss of UNC-13 ( Fig. 4a,b). However, the "OFF" response was abolished in unc-31 mutant worms (Fig. 4a,b). These results indicate that copper autonomously inhibits PHA/PHB at a cellular level. Meanwhile, the PHA/PHB neurons may be post-synaptically activated by copper removal via neuropeptides. The Cu 2+ -induced "ON" response in PHA/PHB was diminished in osm-9 mutant worms. Nevertheless, TAX-4 was required for "OFF" responses ( Fig. 4c,d). These data suggests that copper inhibits PHA/PHB in an OSM-9-dependent manner, and both TAX-4 and neuropeptides are involved in copper removal-induced calcium elevations in PHA/PHB.
PHA/PHB neurons sense mechanical stimulation in an OSM-9-dependent manner.
Laser ablation of the PHA/PHB neurons reduces response to harsh touch. This shows that these neurons are also involved in mechano-sensation 8 . Consistent with the behavioral phenotype, we observed robust touch-induced calcium elevations in PHA/PHB (Fig. 5a,b). Touch-induced calcium elevations in PHA/PHB were not reduced in unc-13 mutant worms and were only slightly smaller in unc-31 mutant worms than those in the wild-type, which indicates that PHA and PHB are likely mechano-receptor cells (Fig. 5a,b).
Three mechano-gated channels have been identified in C. elegans, the two amiloride-sensitive sodium channel (ENaC) proteins MEC-4 and DEG-1, and the TRPN (nomPC) protein TRP-4 [16][17][18] . Since MEC-4 and TRP-4 are not expressed in the PHA/PHB neurons [17][18][19] , we examined touch-induced response in PHA/PHB in deg-1 mutant worms. We found that the touch-induced calcium elevations in the PHA/PHB neurons were normal in deg-1 mutant worms (Fig. 5c,d). Furthermore, the ENaC blocker amiloride failed to affect the touch-induced calcium elevations in PHA/PHB (Fig. 5c,d). This demonstrates that ENaC channels are not involved in mechano-transduction in PHA/PHB. OSM-9 is required for touch-evoked responses in the ASH neurons 1 . We found touch-induced calcium elevations in PHA/PHB were dramatically reduced in osm-9 mutant worms, indicating that OSM-9 plays a role in mediating PHA/PHB excitation in response to mechanical stimulation.
Discussion
In this study, we demonstrate that the C. elegans phasmid neurons PHA and PHB are polymodal sensory neurons responding to harmful chemicals and mechanical stimulation. We show that the TRPV channel OSM-9 is essential for both IAA-sensation and touch-sensation, but not for copper-induced calcium variations in PHA/PHB. The CNG channel TAX-4 is especially required for chemo-sensation in these neurons. In addition, neuropeptides are likely required for the copper removal induced-calcium elevations in PHA/PHB.
In C. elegans, two GPCR-related signal transduction systems are prominent in chemo-sensation. One relies upon CNG channel and another is mediated by TRPV channels 5,15 . In the AWC neurons, odorants bind to the GPCR receptor and activate Gα proteins. This leads to drop of the intracellular level of cGMP, thereby closing the CNG channels TAX-2/TAX-4 and hyperpolarizing the cell 5,20 . The CNG channels also mediate thermo-sensation in the AFD neurons and photo-sensation in the ASJ neurons 21,22 . The TRPV channel OSM-9 has been proposed to mediate depolarization following all chemical stimuli sensed by the ASH neurons 1,13 . Interestingly, here we found both TAX-4 and OSM-9 were essential for IAA-sensation in the PHA/PHB neurons, while OSM-9 and TAX-4 were involved in the copper-induced "ON" and "OFF" responses, respectively, in PHA/PHB. These observations suggest that the PHA/PHB neurons represent distinct mechanisms of chemo-transduction from the amphid sensory neurons. Additionally, we found that copper removal post-synaptically activated the PHA/PHB neurons via neuropeptides. This indicates that the activities of PHA/PHB can also be modulated by other neurons.
Two classes of mechano-gated channels have been identified in C. elegans. The first is the amiloride-sensitive ENaC channel subfamily and includes MEC-4 and DEG-1. The second is the TRP subfamily which includes TRP-4 [16][17][18] . MEC-4 is expressed in the six touch receptor neurons including ALM, AVM, PLM, and PVM. MEC-4 may form a heteromeric mechano-transduction channel with MEC-10 18 . These proteins interact with the paraoxonase-like MEC-6 and the cholesterol-binding stomatin-like MEC-2 protein which are required to sense gentle mechanical touch along the body wall 18,23 . TRP-4 is an N-type TRP channel which is a close homolog of NOMPC/TRPN1 in Drosophila 24 . TRP-4 is expressed in dopaminergic neurons such as CEP, PDE and DVA, and is involved in slowing the basal response, proprioception and in sensing ultrasound stimulus 17,19,25 . In the ASH neurons, loss of OSM-9 abolishes touch-evoked calcium elevations 1 . However, DEG-1, but not OSM-9, is required for the touch-receptor currents in ASH. This suggests that OSM-9 may act as a calcium modulator, but not as a touch receptor 16 . Here we found that OSM-9 was required for touch-induced calcium responses in the PHA/PHB neurons. Nevertheless, our data excludes a role of either DEG-1 or other ENaC in touch-induced calcium responses in PHA/PHB. Further efforts might be expended to identify which mechano-gated channel(s) function as the mechano-receptor(s) in these neurons. The answer to this question may shed new insights into the long-lasting attempt to identify the mechno-gated channels mediating hearing, touch-sensation and pain in mammals 23,26,27 .
A single type of sensory neuron responding to different kinds of stimuli represents an intriguing problem in neurology. Our data suggests that the combined approach of CNG signaling, TRP signaling, and neuropeptide signaling are responsible for encoding and discriminating between different kinds of stimuli in the PHA/PHB neurons. This observation may help us to uncover the other mechanisms of polymodal signaling such as nociception in more complex organisms.
Materials and Methods
Strains. C. elegans strains were maintained under standard conditions 28 . We generated a transgenic strain kanEx178 [Pocr-2::dsRed + Pocr-2::GCaMP5] to monitor intracellular activities in the PHA/PHB neurons. Mutant strains included: osm-9(ky10) kanEx178; tax-4(ky11) kanEx178; deg-1(u38) kanEx178; unc-13(e51) kanEx178; unc-31(e928) kanEx178. Calcium Imaging. A drop of bath solution containing a D2 adult worm was placed on a coverslip. Then the worm was glued to the pad with a cyanoacrylate-based glue (Gluture Topical Tissue Adhesive, Abbott Laboratories). The anus segment was exposed to chemical and mechanical stimuli. The calcium indicator GCaMP5 was used to measure the intracellular calcium signals. Imaging was acquired in an Olympus microscope (BX51WI) with a 60 × objective lens on an Andor DL-604M EMCCD camera. Data was collected using the Macro-manager software. GCaMP5 was excited by a Lambda XL light source and fluorescent signals were collected at a rate of 1 Hz. The average GCaMP5 signal from the first 3 s before stimulus was taken as F0, and Δ F/F0was calculated for each data point. The data was analyzed using Imaging J. The bath solution contained (in mM):145 NaCl,2.5 KCl, 1 MgCl 2 , 5 CaCl 2 ,10 HEPES, 20 glucose. (325~335 mOsm, pH adjusted to 7.3 with NaOH).
Mechanical Stimulation. Touch stimuli was delivered to the cell using a tip diameter of ~1 μ m borosilicate glass capillary driven by a piezoelectric actuator (PI) mounted on a micromanipulator (Sutter). The needle was placed perpendicular to the worm's body. In the "on" phase, the needle was moved toward the worm's tail so that it could probe into the worm's tail on the cilia and then held on the cilia for 500 ms. In the "OFF" phase the needle was returned to its original position.
Statistical analysis. Data analysis was performed using Excel 2010 and Image J.Error bars were mean ± SEM. N represents the number of cells. P values were determined by Student's t test. P < 0.05 was regarded as statistically significant. | 2,970.8 | 2017-02-14T00:00:00.000 | [
"Biology"
] |
Engineering selection stringency on expression vector for the production of recombinant human alpha1-antitrypsin using Chinese Hamster ovary cells
Background Expression vector engineering technology is one of the most convenient and timely method for cell line development to meet the rising demand of novel production cell line with high productivity. Destabilization of dihydrofolate reductase (dhfr) selection marker by addition of AU-rich elements and murine ornithine decarboxylase PEST region was previously shown to improve the specific productivities of recombinant human interferon gamma in CHO-DG44 cells. In this study, we evaluated novel combinations of engineered motifs for further selection marker attenuation to improve recombinant human alpha-1-antitrypsin (rhA1AT) production. Motifs tested include tandem PEST elements to promote protein degradation, internal ribosome entry site (IRES) mutations to impede translation initiation, and codon-deoptimized dhfr selection marker to reduce translation efficiency. Results After a 2-step methotrexate (MTX) amplification to 50 nM that took less than 3 months, the expression vector with IRES point mutation and dhfr-PEST gave a maximum titer of 1.05 g/l with the top producer cell pool. Further MTX amplification to 300 nM MTX gave a maximum titer of 1.15 g/l. Relative transcript copy numbers and dhfr protein expression in the cell pools were also analysed to demonstrate that the transcription of rhA1AT and dhfr genes were correlated due to the IRES linkage, and that the strategies of further attenuating dhfr protein expression with the use of a mutated IRES and tandem PEST, but not codon deoptimization, were effective in reducing dhfr protein levels in suspension serum free culture. Conclusions Novel combinations of engineered motifs for further selection marker attenuation were studied to result in the highest reported recombinant protein titer to our knowledge in shake flask batch culture of stable mammalian cell pools at 1.15 g/l, highlighting applicability of expression vector optimization in generating high producing stable cells essential for recombinant protein therapeutics production. Our results also suggest that codon usage of the selection marker should be considered for applications that may involve gene amplification and serum free suspension culture, since the overall codon usage and thus the general expression and regulation of host cell proteins may be affected in the surviving cells. Electronic supplementary material The online version of this article (doi:10.1186/s12896-015-0145-9) contains supplementary material, which is available to authorized users.
Background
The approval of tissue plasminogen activator produced in Chinese hamster ovary (CHO) cells in 1986 set the stage for CHO cells to become the dominant mammalian cell line for biopharmaceutical production till date. In addition to its ability to produce glycoproteins with post-translational modifications compatible to humans [1], and its refractory nature to human viruses [2], the availability of well-established gene amplification systems for CHO cells coupled with its ability to adapt and grow in serum-free suspension culture make CHO cells ideal for large scale high-titer cultures in the industry [3][4][5][6]. Currently, the titers of biopharmaceutical production from CHO cells have achieved gram per liter range and this 100 fold improvement since 1980s can be attributed to advances in bioprocess development, media development and cell line development. While many of these bioprocess and culture media improvements were kept as trade secrets [7], cell line development technologies like expression vector engineering, cell line engineering and clone screening technologies were extensively reviewed [1,[7][8][9][10]. Typical cell line engineering strategy focuses on improving time integral of viable cell density (IVCD) and specific protein productivity (q p ) of cells [11][12][13][14][15][16]. The availability of CHO genome data as well as the advancement of omics tools and in silico modelling of mammalian systems have also identified target genes with diverse array of functions to potentially improve the titer of biopharmaceuticals [9,17,18]. Together with the discovery of genome wide editing tools like zinc finger nucleases, transcription activator-like effector nucleases and meganucleases, more of these genes can be validated for their roles in biopharmaceuticals production [19][20][21].
To date, expression vector engineering technologies remain as the most timely and convenient method for new cell line development. The primary objective of expression vector engineering technologies is to improve the efficiency and efficacy of generating and isolating high producing clones. To increase the rate of transcription of gene of interest (GOI), the structure of chromatin can be altered by specific DNA elements that maintain the chromatin in an "open" state to increase transcription of the GOI. Examples of such elements are the ubiquitous chromatin opening element (UCOE) which is a methylation free CpG island [22], and the matrix attachment regions (MARs) which anchor the chromatin structure to the nuclear matrix during interphase [23]. As an alternative to altering the chromatin structure, site specific recombination is also used to introduce the GOI into a pre targeted genomic hotspot of the host cell line which was previously determined to enable stable and enhanced transcription of a reporter gene. Two site specific recombination systems, Cre and Flp, are well established and they are commonly used to insert GOI into targeted hot spots through their respective cis acting 34 bp loxP and 48 bp Flp Recombination Target (FRT) sites [24][25][26][27][28][29][30].
Another expression vector engineering approach is to improve selection stringency [31]. Selection stringency can be improved by using mutant neomycin phosphotransferase II selection markers with reduced affinities for the neomycin drug [32,33], by using a weak Herpes simplex virus thymidine kinase promoter [34], and by codon deoptimization of selection marker gene [35], which reduce the selection markers' activity, transcription initiation and translation rate respectively. With a higher selection stringency, the selection marker gene has to be expressed at higher levels to be sufficient for surviving the selection process. As the GOI is likely integrated near the selection marker, this results in the high expression of the GOI to improve the probability of isolating high producing clones.
In addition to selection stringency, it is also important to co-localize the GOI with the selection marker, for efficient selection and successful amplification of the GOI gene [3,36]. While coexpression of GOI and selection marker using multiple promoters on the same vector may help in the co-localization, we have previously demonstrated that gene fragmentation can occurs at a high level of 14% during stable transfection of dual promoter dicistronic vector in CHO-DG44 cells [37]. As gene fragmentation dissociate the expression of selection marker with that of the GOI, additional cloning and screening steps are necessary for selection of high producing cell clones. To mitigate this, the GOI can be linked to the selection marker with the insertion of an internal ribosome entry site (IRES) [38]. By positioning the selection marker downstream of the GOI and IRES, the transcription of the selection marker is dependent on the successful transcription of the GOI upstream of it in the expression vector. Thus, the probability of selection marker expression without that of GOI and the survival rate of cells with fragmented transgenes are reduced.
In our previous studies, we have similarly shown that specific productivities can be improved when we increased selection stringency by destabilizing the selection marker through the addition of AU-rich elements (ARE) to promote mRNA degradation and murine ornithine decarboxylase (MODC) PEST region to enhance protein degradation [31]. Subsequently, an attenuated IRES element was used together with the PEST region to allow for high recombinant protein titer using stably amplified cell pools [39].
In this study, we evaluated various vector designs for further optimizing the strength of selection marker expression in CHO cells for the production of our model protein: recombinant human alpha1-antitrypsin (rhA1AT).
Alpha1-antitrypsin is a serum protease inhibitor that protects tissues from enzymes secreted by inflammatory cells, and the protein is currently purified from human blood plasma as replacement therapy for patients who developed chronic obstructive pulmonary disease due to deficiency in the protein. rhA1AT expression vectors were constructed with tandem PEST sequence, further attenuation of the IRES element, and codon-deoptimization of the dihydrofolate reductase (dhfr) selection marker. The selection and amplification efficiency, recombinant protein productivity, relative transcript copy numbers and dhfr expression levels were then analyzed.
Transfection, selection and methotrexate (MTX) amplification
For transient transfections, suspension CHO-DG44 cells were seeded at a density of 3 × 10 5 cells/ml into CD DG44 Medium (Life Technologies) with 8 mM L-glutamine (Invitrogen) and 0.18% Pluronic® F-68 (Invitrogen) in a 250 ml shake flask 3 days before transfection. For each transfection, 5 million cells were pelleted at 90 × g for 10 min at room temperature. The cell pellet was resuspended in the vector solution mix consisting of 2 μg of expression vector mixed with 100 μl of the supplemented Nucleofector™ Cell Line SG solution. This was transferred into a Nucleocuvette™ and electroporated with the Amaxa™ 4D-Nucleofector™ using pulse code DT-137 (Lonza). After 2 min, 500 μl of pre-warmed CD DG44 media was added, followed by incubation in a static incubator at 37°C with 5% CO 2 (Sanyo) for 10 min, before the cells were transferred into 1.5 ml of prewarmed CD DG44 medium in a 24 well suspension culture plate. After 24 hours, cell pellets were harvested by centrifugation at 6000 × g for 10 min, washed once with sterile phosphate buffer saline (PBS) and stored immediately in a −80°C freezer for future analysis.
Generation of stable pools was performed on a Nucleofector I Device (Lonza), utilizing program U-24 and Nucleofector Kit V, as per manufacturer's instructions. Briefly, each transfection was carried out using 1.5 × 10 6 CHO DG44 cells and 4 μg of BstBI linearized rhA1AT expressing plasmid. The transfected cells were then resuspended with 2 ml HyQ PF-CHO containing 1 × HT supplement and 1% fetal bovine serum (FBS) (Life Technologies) and transferred into a 6-well adherent culture plate. At 48 h post-transfection, cells were detached, centrifuged at 100 × g rpm for 5 min and seeded at 2000 cells/well into a 96-well adherent culture plate containing the serum supplemented media without HT. Nontransfected cells died 7-14 days after selection. Selection efficiency was quantified as the percentage of wells that became confluent 4 weeks post-transfection. Gene amplification was subsequently induced in 30 randomly picked pools for each vector by passaging in serum media containing increasing concentrations of MTX (Sigma-Aldrich) from 10 to 50 nM MTX with each amplification step taking two week. The two highest producing cell pools from each vector at 50 nM MTX were selected for further MTX amplification to 300 nM MTX. The 50 and 300 nM MTX pools were then adapted to serum-free suspension culture in HyQ PF-CHO while maintaining the same MTX concentrations.
Characterization of rhA1AT producing cell pools
Stably transfected cells pools that survived selection and MTX amplification at 10 and 50 nM MTX in 96 well plate adherent cultures were seeded into new 96 well plates for titer evaluation. These cells were allowed to grow for 14 days in a static incubator at 37°C with 5% CO 2 , with one change in culture medium at Day 7. Culture supernatants were harvested to determine rhA1AT titer.
Cells adapted to serum free culture were characterized by seeding them in 40 ml of serum-free medium at a cell density of 4 × 10 5 cells/ml in 125 ml shake flask on shaker platforms set at 110 rpm in a humidified incubator at 37°C with 8% CO 2 . Cell densities and viabilities were determined daily using an automated cell counter, Vi-Cell XR (Beckman Coulter), according to manufacturer's instructions, until culture viability dropped below 50%. Culture supernatant was sampled daily for analysis to determine rhA1AT titer, and for biochemical analysis using BioProfile 100 Plus (Nova Biomedical).
Analysis of transcript levels by quantitative polymerase chain reaction (qPCR)
Total RNA was isolated from about 1 × 10 6 cells using RNeasy® Mini kit (Qiagen) and treated with RNase-free DNase I (Qiagen cat. no. 79254) as per manufacturer's instructions. Using the NanoDrop 2000 spectrophotometer (Thermo Scientific), RNA quantity and quality were determined by absorbance measurement at 260 nm and by the ratio of absorbance at 260 nm and 280 nm, respectively. 200 ng of total RNA was used for first-strand cDNA synthesis via ImProm-II™ Reverse Transcription System (Promega), with an oligo(dT) 28 (Sigma-Aldrich) primer and treatment with recombinant RNasin® Ribonuclease Inhibitor, as per manufacturer's instructions. RNA samples were stored in aliquots at −80°C and cDNA samples were stored at −20°C until use in qPCR.
qPCR were carried out on rhA1AT, dhfr and β-actin genes. Primer sets used for qPCR are described in Table 1. The qPCR reaction mixture was made up of 12.5 μl of SYBR® Green PCR Master Mix (Applied Biosystems), 2 μl of a primer mixture containing 5 μM of each forward and reverse primer, and 10.5 μl of cDNA template was loaded into each well of the MicroAmp® Optical 96-Well Reaction Plate (Applied Biosystems). Triplicates for each cDNA sample and buffer controls without template cDNA were loaded for each primer set. AB 7500 Fast Real-Time PCR System (Applied Biosystems) was used to analyse the samples using a thermal cycling condition that consisted of 50°C for 2 min, 95°C for 10 min followed by 40 cycles of 95°C for 15 sec and 60°C for 1 min. All run data was analysed using the Sequence Detection Software Version 1.4 (Applied Biosystems). A threshold fluorescence of 0.2 was used to determine the average threshold cycle (Ct) for all sample-primer sets. Results were normalized and analyzed using the comparative critical threshold (ΔΔCt) method [43]. Briefly, ΔCt for each sample was obtained by subtracting Ct of the target gene with Ct of the normaliser gene. ΔΔCt were then determined by subtracting the ΔCt of sample with ΔCt of the reference sample. The relative transcript copy numbers of the samples were calculated as 2 ΔΔCt .
Analysis of proteins by Western blot
To probe for secreted rhA1AT in transiently transfected CHO-DG44 cells, culture media was harvested 3 days post-transfection and clarified by centrifuging at 18000 × g for 5 minutes, prior to Western blotting, as described below. For intracellular proteins (dhfr, β-actin and intracellular rhA1AT), 1 × 10 6 DG44 suspension cells were lysed using 120 μl CelLytic M mammalian cell lysis/extraction reagent (Sigma, C2978, USA) supplemented with protease inhibitor cocktail without EDTA (Nacalai Tesque, #03969-21, Japan), as per manufacturers' instructions. Solubilized proteins were collected using centrifugation at 18000 × g for 15 min. Total protein concentrations were determined using Pierce BCA protein assay reagent (Thermo Scientific, #23227, USA).
13 μl of clarified culture media or 20 μg total protein from each sample were resolved on 4-12% gradient PAGE gel (Thermo Fisher NuPAGE gel, # NP0321, USA). The proteins were transferred onto PVDF membrane using iBlot® (Thermo Fisher, #IB4010-01, USA) and then blocked for 1 h with a blocking buffer comprising of
Calculations
Cell doubling time (t D ) was determined according to Equation 1, where μ is the specific growth rate.
Specific growth rate (μ) was determined as the slope of the linear trendline obtained by plotting ln(N) vs t according to Equation 2, where N is the viable cell density, N 0 is the initial viable cell density and t is the culture time.
q p was determined as the slope of the linear trendline obtained by plotting rhA1AT titer vs IVC according to Equation 3, where IVC is the cumulative integrated viable cell number, P is the cumulative rhA1AT produced and V is the culture volume. Equation 4 where p is the rhA1AT titer obtained from ELISA.
IVC was determined by trapezium rule according to Equation 5.
Student's t-test was performed by comparing data from 2 vector sets by assuming the samples have unequal variance and using a one-tailed distribution, since we wanted to test whether the rhA1AT titers from cell pools transfected with the different vector sets were higher than that obtained using pAID vector.
Results and Discussion
Design of mammalian expression vectors with attenuated dhfr selection marker In our previous study, we have demonstrated that selection marker attenuation with destabilizing sequences to reduce the transcript and/or protein levels of the selection marker gene, can improve the production of the recombinant gene of interest upon selection and MTX amplification [31]. An IRES element was then used for the production of recombinant sDectin-1, to allow for the successful MTX amplification of cell pools by transcriptionally linking the gene of interest and the selection marker [39]. With this design, selection stringency was maintained with the concurrent use of PEST destabilizing sequence for facilitating selection marker protein degradation and an attenuated IRES element for reduced translation of the selection marker.
In this study, we wanted to evaluate the effects of further selection marker attenuation on recombinant protein production in CHO cells using the IRES expression vector. As such, we designed 7 expression vectors expressing rhA1AT that can be classified into 3 sets ( Figure 1B). Using rhA1AT as the gene of interest, the first vector set consists of pAID, pAIDp and pAIDpp. Comparing data from the use of pAIDp against pAID will allow us to validate the use of PEST element in improving stable recombinant gene expression, as observed in our previous studies [31,39]. The application of 2 tandem PEST elements in pAIDpp then allowed us to determine whether an additional PEST can further improve stable recombinant gene expression, as this has not been demonstrated in literature to our knowledge. The second vector set consists of pAI709Dp and pAI772Dp. These 2 vectors incorporated mutations described by Hoffman MA and Palmenberg AC [40] into the attenuated IRES [44,45]. This is to evaluate whether the further attenuation of selection marker expression with these additional impediment in translation initiation can improve stable recombinant gene expression. The third vector set comprised of pAID* and pAID*p. These 2 vectors incorporated a codon de-optimized dhfr selection marker ( Figure 1A) to evaluate the use of codon deoptimization as a strategy to further reduce selection marker expression levels, since it will theoretically reduce translation efficiency, a different aspect of gene expression that is not affected by the attenuated IRES and PEST. Codon deoptimized dhfr has also been used by Westwood AD et al. [35] to improve recombinant protein yield in serumcontaining CHO cell culture that were not exposed to MTX, though a different sequence is used in our study.
To verify the expression of rhA1AT, CHO-DG44 cells were transiently transfected with pAID and pAIDp vectors. For both transfections, rhA1AT protein was detected in the cell lysates and culture supernatants as respective weak and strong bands that were between the 50 and 75 kDa molecular weight markers (Additional file 1: Figure S1). This corresponded to the expected molecular weight of the glycoprotein at 55 kDa, while being bigger than the molecular weight of the full length polypeptide with and without the signal peptide at 46.7 and 44.3 kDa respectively. This suggests that the rhA1AT from CHO cells is secreted and glycosylated, as previously reported by Lee KJ et al. [46]. While we also attempted to probe for intracellular dhfr protein from transiently transfected cells, it was below the detection limit by Western blotting for all our vectors. Nonetheless, rhA1AT and dhfr transcript levels of transfected cells were more than 8 fold higher than that of the untransfected control by qPCR, indicating that all the vectors can be transcribed successfully.
Selection and MTX amplification efficiencies for rhA1AT production
Stable transfections of the rhA1AT expression vectors with modified selection markers were subsequently performed by plating transfected CHO-DG44 cells in 96 well plates for selection in a HT-deficient (−HT) culture medium. 30 randomly picked cell pools for each vector from these 96 well plates were then subjected to sequential MTX amplification at 10 nM and 50 nM concentrations. The selection and amplification efficiencies are listed in Table 2. With selection in -HT medium, pAID, pAIDp, pAI772Dp and pAID* gave high selection efficiencies of more than 80%, pAIDpp gave 70%, pAID*p gave 45%, and none of the pAI709Dp or untransfected control cell pools survived. This suggests that the further attenuation of the IRES element in pAI709Dp did not allow for ample dhfr expression for cells to survive the selection in -HT medium. In addition, tandem PEST and codon deoptimization, in pAIDpp and pAID*p vectors respectively, have the effects of further lowering selection efficiency, compared to pAIDp vector, which suggest that dhfr expression is lowered in these vectors. At 10 nM MTX, the reduced survival fitness of the pAIDpp and pAID*p vector population is also shown by the markedly lower amplification efficiency of these samples as compared to the other vector sets. The trend in survival fitness was more obvious when the cells were subjected to a greater selection pressure of 50 nM MTX, where amplification efficiencies were of the order pAID > pAIDp > pAID* > (pAI772Dp and pAIDpp) > pAID*p > pAI709Dp. This shows that additional selection marker attenuation acted to reduce selection fitness of the transfected cells, for example, pAID > pAIDp; pAIDp > pAI772Dp, pAIDpp, pAID*p and pAI709Dp; and pAID* > pAID*p.
The 30 randomly selected cell pools for each vector that were subjected to sequential MTX amplification were then evaluated for rhA1AT production. The cell pools were re-plated into a new 96 well plate and cultured for 14 days. After which, the culture supernatant was harvested to determine the rhA1AT titers. The titers obtained using the different expression vectors are illustrated in Figure 2. While there is some variability in rhA1AT titers regardless of vector used, the average titers of the pAIDp vector are 1.5 to 2.2 fold higher than those of the pAID vector at various amplification stage ( Figure 2, p value < 0.05). This concurs with our previous report [31] and provided further support to the hypothesis that selection marker attenuation can improve recombinant protein production in the MTX amplification system.
With additional selection marker attenuation by tandem PEST (pAIDpp) and further IRES attenuation (pAI772Dp), it was surprising that the titers in -HT medium were similar to that of pAID vector. Improvements in titers for pAIDpp and pAI772Dp were only observed in the amplification media with 10 nM MTX and 50 nM MTX respectively. These improvements in rhA1AT titers corresponded with the decrease in the amplification efficiencies of the respective cell pools ( Table 2). We thus speculate that the additional dhfr attenuation may have negatively impacted recombinant protein production at the onset of selection. rhA1AT titers then increased when the cells are exposed to MTX, driven by the increased selection stringency and potential increases in gene copy number induced by MTX. With fold amplifications of 17 and 20 at 50 nM MTX for the pAIDpp and pAI772Dp vectors respectively, the Untransfected control 0 0 0 1 Total number of cell pools subjected to -HT selection is 96. 2 30 cell pools that survived -HT selection were randomly selected for sequential MTX amplification at 10 nM followed by 50 nM. Amplification efficiency is calculated as the percentage of cell pools out of the 30 initial cell pools that survived the amplification process. effect of selection stringency became dominant for these cell pools to obtain average rhA1AT titers 3.8 to 4.5 fold higher than that of pAID, higher than the improvement driven by the pAIDp vector ( Figure 2B). As for the vectors with partial dhfr codon deoptimization (pAID* and pAID*p), these gave rhA1AT titers lower or similar to that from pAID cell pools in both the -HT and 10 nM MTX media. These low rhA1AT titers are obtained despite the higher selection stringency as demonstrated by the relatively higher fold amplifications ( Figure 2B) and the lower selection and amplification efficiencies of the pAID*p vector (Table 2), in contrast to the study by Westwood AD et al. [35] where higher selection stringency led to higher recombinant protein titers in serum-containing -HT medium. To understand how the different codon de-optimized dhfr sequences used may have contributed to the different observations, we compared our codon de-optimized dhfr with the 2 codon de-optimized dhfr used by Westwood AD et al. [35] and the wild-type murine dhfr: the codon adaptation indexes (CAI) of these dhfr were calculated with an online CAI calculator [47] to be 0.586, 0.529, 0.437 and 0.741 respectively, based on Cricetulus griseus codon usage from database: http://www.kazusa.or.jp/ codon/ [41]. The CAI for the first 120 amino acids of our dhfr which were codon de-optimized was 0.500, lower than that of the full protein, although it is still higher than one of the sequences used by Westwood AD et al. [35]. This suggests that the codon deoptimized dhfr used in both studies may be comparable. However, in addition to the codon deoptimized dhfr, we are also applying an attenuated IRES to further weaken the expression of dhfr in our study. Hence, we speculate that this additional dhfr attenuation in the pAID* and pAID*p vectors may have further reduced dhfr expression to negatively impact recombinant protein production, similar to pAIDpp and pAI772Dp cell pools.
At 50 nM MTX, the effect of higher fold amplification became dominant when the cell pools from pAID* and pAID*p vector sets separate into 2 clusters: one low producing cluster comprising of 5 pAID* pools producing low titers of rhA1AT (2.8 -29 mg/l), and the other high producing cluster comprising of 2 pools from pAID* and 1 from pAID*p that are producing high titers of rhA1AT (78 -141 mg/l). As the 3 cell pools from the high producing cluster were derived from the top 3 producer cell pools in these vector sets at 10 nM MTX, we postulate that these cell pools may have transgene integration into sites where gene amplification may be facilitated. As such, gene amplification triggered by the higher MTX concentration may be used by these cells as the primary means of rescue from the stringent selection to give rise to the higher rhA1AT titers in these cultures.
Growth and productivity evaluation of top cell pools
The top 2 producing cell pools from each vector set at 50 nM MTX were subjected to further amplification to 300 nM MTX. Both sets of 50 nM MTX and 300 nM MTX cell pools were adapted to serum free suspension culture to obtain their growth and rhA1AT production profiles in batch shake flask cultures (Table 3, and Additional file 2: Figure S2). As only 1 cell pool from vector pAID*p survived the amplification process, and 1 cell pool for vector pAID survived the adaptation to suspension culture, these vector sets were only represented by 1 cell pool in this evaluation.
Comparing the 50 nM MTX cell pools in suspension shake flask culture (Table 3) and adherent 96-well plate cultures (Figure 2), the rhA1AT titers from the suspension cultures were 2.0 to 8.5 fold higher than those from the 96-well plate cultures. This improvement is likely attributable to the higher cell densities of the suspension cultures. It is nonetheless interesting to note that the cell pools derived using the same vector were somewhat consistent in their fold increase in maximum rhA1AT titers, suggesting that these titer improvements due to adaptation to serum free suspension cultures may be characteristics of the different expression vectors. In addition, we noted that both pAI772Dp cell pools have the highest improvement in maximum titers at 8.5 and 8.2 folds, much higher than the 2.0 to 5.6 fold improvements from cell pools derived using other vectors. This suggests that the cell pools derived using the pAI772Dp vector performed much better in serum free suspension culture. The exponential doubling times of these cell pools are also the lowest at 28 and 26 h respectively. These factors resulted in maximum rhA1AT titers of 1.05 and 0.94 g/l for these cell pools in shake flask batch culture. This is noteworthy because we have achieved g/l titers of a recombinant protein using a 2-step MTX amplification that took less than 3 months, and that this titer was obtained from shake flask batch culture of mammalian cell pools in un-optimized culture medium.
While some cell pools adapted to serum free suspension culture successfully, others did not fare too well: the maximum titers for pAIDp Pool 2, both pAID* cell pools and pAID*p cell pool increased by only 2.0 to 3.1 folds, much lesser than that of pAID at 5.6 fold. These cell pools also have lower exponential specific rhA1AT productivity rates compared to pAID, though they have shorter exponential doubling times. As such, despite the higher titers observed in adherent 96-well plate cultures, the maximum titers from these cell pools became comparable to that of the pAID cell pool after adaptation to serum free suspension culture at 50 nM MTX, with pAID* Pool 2 giving a maximum titer lower than pAID.
Of the 4 cell pools that did not adapt well into suspension serum free culture, 3 of which contained the codon-deoptimized dhfr selection marker. This suggests that the selection marker codon-deoptimization may be detrimental to recombinant protein productivity in suspension serum free culture of MTX amplified cells. As codon usage changes to improve survivability of single cell organisms and viruses have been previously reported and discussed [48][49][50][51], we postulate that this observation may be due to codon usage changes in these cells to improve survivability under serum free selective conditions. Clonal differences, such as changes in the cell epigenetics during adaptation and overall cell fitness in serum free suspension culture, may also have contributed to these observations.
Comparing between 50 nM MTX and 300 nM MTX suspension cell pools, cell pools derived using the same vector were also consistent in their fold increase in maximum rhA1AT titers, suggesting that the titer improvements by MTX amplification may also be characteristics of the different expression vectors. For the cell pools adapted to 300 nM MTX, we observed that maximum titers for pAIDp and pAI772Dp cell pools maintained at the similar levels exhibited by their corresponding 50 nM MTX suspension culture, suggesting that no amplification occurred for these cell pools in this step of MTX amplification. Similar observations had been made from our previous experience [39] during MTX amplification of our production cell pool. We postulate that for each step of MTX increase, a critical MTX concentration needs to be reached prior to the occurrence of gene amplification in the cells. Seemingly, the postulated critical MTX concentration for these 4 cell pools transfected with the pAIDp or pAI772Dp vectors was not crossed at 300 nM MTX, resulting in similar cell characteristics compared to the corresponding 50 nM MTX cell pools. In contrast, the pAID*p and pAIDpp cell pools gave 1.5 to 1.8 fold increases in maximum titers, followed by the pAID and pAID* cell pools with fold increases of 2.2 to 2.6. These fold increases in maximum titers were due to corresponding improvements in specific rhA1AT productivities, as indicated in Table 3. This observation also suggests that the MTX amplification step up from 50 nM to 300 nM MTX may be more suitable for pAID*, pAID, pAIDpp, and pAID*p vectors, in that order, and not so suitable for pAIDp and pAI772Dp vectors. As a result of more amplification, the pAIDpp Pool 1 also attained a g/l maximum titer of 1.15 g/l, comparable to that of the pA772Dp Pool 1 at 1.11 g/l. To our knowledge, this is the highest g/l recombinant protein titers reported from shake flask batch culture of stable mammalian cell pools.
This study thus illustrates the importance of expression vector optimization for the generation of high producing stable cell lines that are essential for the manufacturing of recombinant protein therapeutics. In addition, a relatively quick development of high producing stable cell pools, such as the system demonstrated in this study, can also be a viable alternative for the rapid production of milligrams to grams of representative product for preclinical development of therapeutic proteins, a niche that is commonly fulfilled with transient transfection of HEK293 cells: Besides the ease of scale up and having an identical expression system as the final product, the ability to generate stable cell pools with high productivities as described here will also mean that less scale up is required to obtain the desired gram quantities of representative product.
Transcript and protein level characterization of cell pools
To gain some insights on how the different vectors resulted in the different phenotypes, we analysed the relative transcript copy numbers, as well as the relative dhfr protein expression, in the 50 nM and 300 nM MTX cell pools from suspension shake flask cultures (Figure 3).
To derive the relative transcript copy numbers, qPCR results of rhA1AT and dhfr were normalized with the housekeeping gene β-actin to determine their relative expression levels compared to the pAID cell pool ( Figure 3A). We first observed that the relative transcript copy numbers of rhA1AT and dhfr were similar in each cell pool. This was further demonstrated when we derived the relative transcript copy numbers of dhfr with rhA1AT as the normalizer, to obtain similar numbers ranging from 0.86 to 1.04 ( Figure 3A). This transcriptional correlation of the rhA1AT and dhfr genes verified the successful use of a single promoter and IRES mediated translation to physically link these genes on the same transcript with the use of these different vectors, even though the cells were subjected to MTX amplification.
Next, we observed that cell pools with relative q p higher than that of the pAID cell pool also have higher rhA1AT transcript copy number, with the exception of pAIDpp Pool 2 which suggests that its translation efficiency of the rhA1AT gene is higher ( Figure 3A). However, there is no general correlation between relative q p to rhA1AT transcript copy number: for example when the pAI772Dp cell pools at 50 nM MTX were compared, Pool 1 has a lower rhA1AT transcript copy number but a higher relative q p compared to Pool 2. This suggests that the overall productivity of rhA1AT was not solely dependent on transcript levels, but possibly other factors such as translation and secretion efficiencies, which may be peculiar to individual cell pools and not vector specific. Nonetheless, with the pAI772Dp and pAIDpp vectors clearly showing superior q p and maximum titers, we postulate that each cell pool may have underwent varying extent of changes in these different factors during the MTX amplification process, to attain the levels of productivities dictated by the use of the different expression vectors. First-strand cDNA from each sample were analyzed by qPCR. Threshold cycle (Ct) data were analyzed using the ΔΔCt method using pAID as sample reference and β-actin as normalizer. Relative transcript copy numbers were calculated as 2 ΔΔCt . Standard deviations from technical triplicates were determined to be lesser than 10% of the relative values. The relative specific productivity q p was obtained by normalizing exponential q p to that of pAID at the respective MTX concentrations. (B) 20 μg of total protein from each sample were resolved by SDS-PAGE, transferred onto a PVDF membrane, and probed with primary antibodies against dhfr and β-actin.
In the same comparison of relative q p to relative transcript copy number, we noted that both parameters decreased for the pAIDp and pAI772Dp cell pools when these cell pools were amplified from 50 nM MTX to 300 nM MTX. In view of the 1.8 and 2.2 fold respective increases in q p and maximum titers of the pAID cell pool used for normalization, this verified our previous postulate that the rhA1AT transgene had minimal gene amplification and as such lowered relative transcript copy numbers, in the 4 pAIDp and pAI772Dp cell pools during the increase in MTX concentrations from 50 nM to 300 nM MTX.
Similar to the rhA1AT, we observed that the relative dhfr transcript copy numbers varied between the cell pools, even between those generated using the same expression vector ( Figure 3A), although all these cell pools were able to survive at their respective MTX concentrations. This suggests that the overall activity of dhfr to allow for the survival of these cell pools was also not completely dependent on transcript levels, but other factors such as translation efficiency, dhfr protein stability and activity. Examining the cell pools with relative dhfr transcript copy numbers greater than 1.1 as compared to pAID, there are 4 such cell pools at 50 nM MTX (pAIDp Pool 1, pAIDpp Pool 1, pAI772Dp Pools 1 and 2) and 300 nM MTX (pAIDpp Pool 1, pAI772Dp Pools 1 and 2, and pAID* Pool 1). When the intracellular dhfr proteins were analysed, we noted that these higher relative dhfr transcript copy numbers did not translate to higher dhfr protein levels when compared to pAID with the only exception of pAID* Pool 1 at 300 nM MTX, and that the pAID dhfr bands were one of the darkest at both 50 nM and 300 nM MTX ( Figure 3B). This demonstrates that the strategies of further attenuating dhfr translation efficiencies and protein stabilities, with the use of the mutated IRES and tandem PEST respectively, were effective in reducing dhfr protein levels in these MTX amplified cell pools: Specifically comparing the dhfr protein levels in pAID, pAIDp and pAIDpp cell pools, they were generally in decreasing intensity despite the variability in their relative dhfr transcript copy numbers, suggesting that the tandem application of PEST is effective in decreasing the dhfr protein stability. Comparing the pAIDp and pAI772Dp cell pools, although pAID772Dp Pool 2 had relative dhfr transcript copy numbers much higher than that of pAIDp Pool 1, their dhfr protein level seem comparable. On the other hand, pAI772Dp Pool 1 which had relative dhfr transcript copy numbers similar to that of pAIDp Pool 1 had comparable dhfr protein levels at both 50 nM and 300 nM MTX. (Figure 3) These evidences suggest that the IRESatt772 mutant was effective in reducing translation efficiency.
On the contrary, the dhfr protein levels of pAID* suggest that the codon deoptimization applied in this study was not effective in reducing dhfr protein expression in suspension serum free culture, since the pAID* dhfr protein bands were of similar intensities to the corresponding pAID band, despite the lower transcript levels for both pAID* cell pools at 50 nM MTX and for pAID* Pool 2 at 300 nM MTX. With a higher relative dhfr transcript copy number, pAID* Pool 1 had a dhfr protein band that was also darker than that of pAID. These observations suggest that host cell codon usage had changed in these cell pools. This concurs with our previous postulate that codon usage changes in these cell pools may have enhanced their survivability in suspension serum free culture under selective conditions, and that these changes might have negatively affected rhA1AT productivity during adaptation to suspension serum free culture, since the rhA1AT gene was codon-optimized for CHO expression. We noted that previous observations in serum containing media suggest that codon deoptimization is effective in improving selection stringency and the titer of the recombinant protein product: These include our previous observations of lower amplification efficiency ( Table 2) and higher titers of pAID* cell pools in 96 well plate serum containing culture (Figure 2), and the lower dhfr levels measured with the use of codon deoptimized dhfr in the previous study by Westwood AD et al. [35]. Nonetheless, we noted that these are not inconsistent with our observed ineffectiveness of this strategy in suspension serum free culture: we postulate that host cell gene expression and regulation may have become less important to cell survivability during the adaptation to serum free suspension culture, thus allowing changes to host cell codon usage to improve overall cell fitness with better expression of the selection marker in the serum free selective environment, since serum supplemented CHO cell culture is known to grow faster and has different gene expression profile compared to serum-free CHO culture. In an attempt to extend this concept of a changing host cell codon usage to cell biology studies, we noted that this may not be applicable to the common usage of microbe-derived selection markers since most of such cultures are in the presence of serum. Another consideration that may be peculiar to our study is that in our cell pools, the translation of the selection marker may become limiting to cell survival due to the MTX-driven gene amplification. This may be an additional driving force for codon usage changes in these cells to improve cell fitness and survivability. Nonetheless, this result suggests that codon usage has to be carefully considered for applications in genes that may affect survivability of cells, since the overall codon usage in the surviving cells, and thus the general expression and regulation of host cell proteins may be affected.
Lastly, we noted that the molecular weights of the dhfr proteins ( Figure 3B) corresponded with their expected molecular weights of 21, 26 and 30 kDa for the dhfr without PEST, with a single PEST and with tandem PEST sequences respectively. While we have also sequenced the transcript from these cell pools to verify that these cells were indeed transfected with the said expression vectors (data not shown), this protein level data demonstrates that these cell pools were also producing the respective dhfr proteins with the engineered modifications. | 9,517.2 | 2015-06-02T00:00:00.000 | [
"Engineering"
] |
Constructive Type-Logical Supertagging With Self-Attention Networks
We propose a novel application of self-attention networks towards grammar induction. We present an attention-based supertagger for a refined type-logical grammar, trained on constructing types inductively. In addition to achieving a high overall type accuracy, our model is able to learn the syntax of the grammar’s type system along with its denotational semantics. This lifts the closed world assumption commonly made by lexicalized grammar supertaggers, greatly enhancing its generalization potential. This is evidenced both by its adequate accuracy over sparse word types and its ability to correctly construct complex types never seen during training, which, to the best of our knowledge, was as of yet unaccomplished.
Introduction
Categorial Grammars, in their various incarnations, posit a functional view on parsing: words are assigned simple or complex categories (or: types); their composition is modeled in terms of functor-argument relationships. Complex categories wear their combinatorics on their sleeve, which means that most of the phrasal structure is internalized within the categories themselves; performing the categorial assignment process for a sequence of words, i.e. supertagging, amounts to almost parsing (Bangalore and Joshi, 1999).
In machine learning literature, supertagging is commonly viewed as a particular case of sequence labeling (Graves, 2012). This perspective points to the immediate applicability of established, highperforming neural architectures; indeed, recurrent models have successfully been employed (e.g. within the context of Combinatory Categorial Grammars (CCG) (Steedman, 2000)), achieving impressive results (Vaswani et al., 2016). However, this perspective comes at a cost; the su-pertagger's co-domain, i.e., the different categories it may assign, is considered fixed, as defined by the set of unique categories in the training data. Additionally, some categories have disproportionately low frequencies compared to the more common ones, leading to severe sparsity issues. Since under-represented categories are very hard to learn, in practice models are evaluated and compared based on their accuracy over categories with occurrence counts above a certain threshold, a small subset of the full category set.
This practical concession has two side-effects. The first pertains to the supertagger's inability to capture rare syntactic phenomena. Although the percentage of sentences that may not be correctly analyzed due to the missing categories is usually relatively small, it still places an upper bound on the resulting parser's strength which is hard to ignore. The second, and perhaps more far reaching, consequence is the implicit constraint it places on the grammar itself. Essentially, the grammar must be sufficiently coarse while also allocating most of its probability mass on a small number of unique categories. Grammars enjoying a higher level of analytical sophistication are practically unusable, since the associated supertagger would require prohibitive amounts of data to overcome their inherent sparsity.
We take a different view on the problem, instead treating it as sequence transduction. We propose a novel supertagger based on the Transformer architecture (Vaswani et al., 2017) that is capable of constructing categories inductively, bypassing the aforementioned limitations. We test our model on a highly-refined, automatically extracted type-logical grammar for written Dutch, where it achieves competitive results for high frequency categories, while acquiring the ability to treat rare and even unseen categories adequately.
Type-Logical Grammars
The type-logical strand of categorial grammar adopts a proof-theoretic perspective on natural language syntax and semantics: checking whether a phrase is syntactically well-formed amounts to a process of logical deduction deriving its type from the types of its constituent parts (Moot and Retoré, 2012). What counts as a valid deduction depends on the type logic used. The type logic we aim for is a variation on the simply typed fragment of Multiplicative Intuitionistic Linear Logic (MILL), where the type-forming operation of interest is linear implication (for a brief but instructive introduction, refer to Wadler (1993)). Types are inductively defined by the following grammar: where T, T 1 , T 2 are types, A is an atomic type and d − → an implication arrow, further subcategorized by the label d.
Atomic types are assigned to phrases that are considered 'complete', e.g. NP for noun phrase, PRON for pronoun, etc. Complex types, on the other hand, are the type signatures of binary functors that compose with a single word or phrase to produce a larger phrase; for instance NP su − → S corresponds to a functor that consumes a noun phrase playing the subject role to create a sentence -an intransitive verb.
The logic provides judgements of the form Γ B, stating that from a multiset of assumptions Γ = A 1 , . . . A n one can derive conclusion B. In addition to the axiom A A, there are two rules of inference; implication elimination (2) and implication introduction (3) 1 . Intuitively, the first says that if one has a judgement of the form Γ A → B and a judgement of the form ∆ A, one can deduce that assumptions Γ and ∆ together derive a proposition B. Similarly, the second says that if one can derive B from assumptions A and Γ together, then from Γ alone one can derive an implication A → B.
1 For labeled implications d −→, we make sure that composition is with respect to the d dependency relation.
The view of language as a linear type system offers many meaningful insights. In addition to the mentioned correspondence between parse and proof, the Curry-Howard 'proofs-as-programs' interpretation guarantees a direct translation from proofs to computations. The two rules necessary for proof construction have their computational analogues in function application and abstraction respectively, a link that paves the way to seamlessly move from a syntactic derivation to a program that computes the associated meaning in a compositional manner.
Constructive Supertagging
Categorial grammars assign denotational semantics to types, which are in turn defined via a set of inductive rules, as in (1). These, in effect, are the productions a simple, context-free grammar; a grammar of types underlying the grammar of sentences. In this light, any type may be viewed as a word of this simple type grammar's language; a regularity which we can try to exploit.
Considering neural networks' established ability of implicitly learning context-free grammars (Gers and Schmidhuber, 2001), it is reasonable to expect that, given enough representational capacity and a robust training process, a network should be able to learn a context-free grammar embedded within a wider sequence labeling task. Jointly acquiring the two amounts to learning a) how to produce types, including novel ones, and b) which types to produce under different contexts, essentially providing all of the necessary building blocks for a supertagger with unrestricted codomain. To that end, we may represent a single type as a sequence of characters over a fixed vocabulary, defined as the union of atomic types and type forming operators (in the case of type-logical grammars, the latter being n-ary logical connectives). A sequence of types is then simply the concatenation of their corresponding representations, where type boundaries can be marked by a special separation symbol.
The problem then boils down to learning how to transduce a sequence of words onto a sequence of unfolded types. This can be pictured as a case of sequence-to-sequence translation, operating on word level input and producing character level output, with the source language now being the natural language and the target language being the language defined by the syntax and semantics of our categorial grammar.
Related Work
Supertagging has been standard practice for lexicalized grammars with complex lexical entries since the work of Bangalore and Joshi (1999). In its original formulation, the categorial assignment process is enacted by an N-gram Markov model. Later work utilized Maximum Entropy models that account for word windows of fixed length, while incorporating expanded lexical features and POS tags as inputs . During the last half of the decade, the advent of word embeddings caused a natural shift towards neural architectures, with recurrent neural networks being established as the prime components of recent supertagging models. Xu et al. (2015) first used simple RNNs for CCG supertagging, which were gradually succeeded by LSTMs (Vaswani et al., 2016;Lewis et al., 2016), also in the context of Tree-Adjoining Grammars (Kasai et al., 2017).
Regardless of the particular implementation, the above works all fall in the same category of sequence labeling architectures. As such, the type vocabulary (i.e. the set of candidate categories) is always considered fixed and pre-specified -it is, in fact, hard coded within the architecture itself (e.g. in the network's final classification layer). The inability of such systems to account for unseen types or even consistently predict rare ones has permeated through the training and evaluation process; a frequency cut-off is usually applied on the corpus, keeping only categories that appear at least 10 times throughout the training set . This limitation has been acknowledged in the past; in the case of CCG, certain classes of syntactic constructions pose significant difficulties for parsing due to categories completely missing from the corpus ). An attempt to address the issue was made in the form of an inference algorithm, which iteratively expands upon the lexicon with new categories for unseen words (Thomforde and Steedman, 2011) -its applicability, however, is narrow, as new categories can often be necessary even for words that have been previously encountered.
We differentiate from relevant work in not employing a type lexicon at all, fixed or adaptive. Rather than providing our system with a vocabulary of types, we seek to instead encode the type construction process directly within the network.
Type prediction is no longer a discernible part of the architecture, but rather manifested via the network's weights as a dynamic generation process, much like a language model for types that is conditioned on the input sentence.
Corpus
The experiments reported on focus on Dutch, a language with relatively free word order that allows us to highlight the benefits of our nondirectional type logic.
For our data needs, we utilize the Lassy-Small corpus (van Noord et al., 2006). Lassy-Small contains approximately 65000 annotated sentences of written Dutch, comprised of over 1 million words in total. The annotations are DAGs with syntactic category labels at the nodes, and dependency labels at the edges. The possibility of re-entrancy obviates the need for abstract syntactic elements (gaps, traces, etc.) in the annotation of unbounded dependencies and related phenomena.
Extracted Grammar
To obtain type assignments from the annotation graphs, we design and apply an adaptation of Moortgat and Moot's (2002) extraction algorithm. Following established practice, we assign phrasal heads a functor (complex) type selecting for its dependents. Atomic types are instantiated by a translation table that maps part-of-speech tags and phrasal categories onto their corresponding types.
As remarked above, we diverge from standard categorial practice by making no distinction between rightward and leftward implication (slash and backslash, respectively), rather collapsing both into the direction-agnostic linear implication. We compensate for the possible loss in word-order sensitivity by subcategorizing the implication arrow into a set of distinct linear functions, the names of which are instantiated by the inventory of dependency labels present in the corpus. This decoration amounts to including the labeled dependency materialized by each head (in the context of a particular phrase) within its corresponding type, vastly increasing its informational content. In practical terms, dependency labeling is no longer treated as a task to be solved by the downstream parser; it is now internal to the grammar's type system. To consistently binarize all of our functor types, we impose an obliqueness or- welke(which), rol(role), spelen(play), typen(types) WHQ → E (b) Derivation for "welke rol spelen typen" (which role do types play), showcasing object-relativisation via second-order types. Type SV1 stands for verb-initial sentence clause.
Figure 1: Syntactic derivations of example phrases using our extracted grammar. Lexical type assignments are the proofs' axiom leaves marked L. Identity for non-lexically grounded axioms is marked id. Parentheses are right implicit. Phrasal heads are associated with complex (functor) types. Phrases are composed via function application of functors to their arguments (i.e. implication elimination: → E). Hypothetical reasoning for gaps is accomplished via function abstraction of higher-order types (i.e. implication introduction: → I).
dering (Dowty, 1982) over dependency roles, capturing the degree of coherence between a dependent and the head. Figure 1 presents a few example derivations, indicating how our grammar treats a selection of interesting linguistic phenomena.
The algorithm's yield is a type-logical treebank, associating a type sequence to each sentence. The treebank counts approximately 5700 unique types, made out of 22 binary connectives (one for each dependency label) and 30 atomic types (one for each part-of-speech tag or phrasal category). As Figure 2 suggests, the comprehensiveness of such a fine-grained grammar comes at the cost of a sparser lexicon. Under this regime, recognizing rare types as first-class citizens becomes imperative.
Finally, given that all our connectives are of a fixed arity, we may represent types unambiguously using polish notation (Hamblin, 1962). Polish notation eliminates the need for brackets, reducing the representation's length and succinctly encoding a type's arity in an up-front manner. At least one such type is present in a non-negligible part of the corpus (12% of the overall sentences). A significant portion of types (45%) appears just once throughout the corpus.
Model
Even though prior work suggests that both the supertagging and the CFG-generation problems are learnable (at least to an extent) in isolation, the composition of the two is less straightforward. Predicting the next atomic symbol requires for the network to be able to model local, close-range dependencies as ordained by the type-level syntax. At the same time, it needs a global receptive field in order to correctly infer full types from distant contexts, in accordance with the sentencelevel syntax. Given these two requirements, we choose to employ a variant of the Transformer for the task at hand (Vaswani et al., 2017). Transformers were originally proposed for machine translation; treating syntactic analysis as a translation task is not, however, a new idea (Vinyals et al., 2015). Transformers do away with recurrent architectures, relying only on self-attention instead, and their proven performance testifies to their strength. Self-attention grants networks the ability to selectively shift their focus over their own representations of non-contiguous elements within long sequences, based on the current context, exactly fitting the specifications of our problem formulation.
Empirical evidence points to added benefits from utilizing language models at either side of an encoder-decoder architecture (Ramachandran et al., 2017). Adhering to this, we employ a pretrained Dutch ELMo (Peters et al., 2018;Che et al., 2018) as large part of our encoder.
Network
Our network follows the standard encoder-decoder paradigm. A high-level overview of the architecture may be seen in Figure 3. The network accepts a sequence of words as input, and as output produces a (longer) sequence of tokens, where each token can be an atomic type, a logical connective or an auxiliary separation symbol that marks type boundaries. An example input/output pair may be seen in Figure 4.
Our encoder consists of a frozen ELMo followed by a single Transformer encoder layer. The employed ELMo was trained as a language model and constructs contextualized, 1024-dimensional word vectors, shown to significantly benefit downstream parsing tasks. To account for domain adaptation without unfreezing the over-parameterized ELMo, we allow for a transformer encoder layer of 3 attention heads to process ELMo's output 2 .
Our decoder is a 2-layer Transformer decoder. Since the decoder processes information at a different granularity scale compared to the encoder, we break the usual symmetry by setting its number of attention heads to 8.
At timestep t, the network is tasked with modeling the probability distribution of the next atomic symbol a t , conditional on all previous predictions a 0 , . . . , a t−1 and the whole input sentence w 0 , . . . , w τ , and parameterized by its trainable weights θ: p θ (a t |a 0 , . . . , a t−1 , w 0 , . . . , w τ ) We make a few crucial alterations to the original Transformer formulation.
First, for the separable token transformations we use a two-layer, dimensionality preserving, feed-forward network. We replace the rectifier activation of the intermediate layer with the empirically superior Gaussian Error Linear Unit (Hendrycks and Gimpel, 2016).
Secondly, since there are no pretrained embeddings for the output tokens, we jointly train the Transformer alongside an atomic symbol embedding layer. To make maximal use of the extra parameters, we use the transpose of the embedding matrix to convert the decoder's high-dimensional output back into token class weights. We obtain the final output probability distributions by applying sigsoftmax (Kanai et al., 2018) on these weights.
Training
We train our network using the adaptive training scheme proposed by Vaswani et al (2017). We apply stricter regularization by increasing both the dropout rate and the redistributed probability mass of the Kullback-Leibler divergence loss to 0.2. The last part is of major importance, as it effectively discourages the network from simply memoizing common type patterns. ). The first two lines present the input sentence and the types that need to be assigned to each word. The third line presents the desired output sequence, with types decomposed to atomic symbol sequences under polish notation, and # used as a type separator.
Experiments and Results
In all described experiments, we run the model 3 on the subset of sample sentences that are at most 20 words long. We use a train/val/test split of 80/10/10 4 . We train with a batch size of 128, and pad sentences to the maximum in-batch length.
Training to convergence takes, on average, eight hours & 300 epochs for our training set of 45000 sentences on a GTX1080Ti. We report averages over 5 runs.
Accuracy is reported on the type-level; that is, during evaluation, we predict atomic symbol sequences, then collapse subtype sequences into full types and compare the result against the ground truth. Notably, a single mistake within a type is counted as a completely wrong type. 3 The code for the model and processing scripts can be found at https: //github.com/konstantinosKokos/ Lassy-TLG-Supertagging. 4 It is worth pointing out that the training set contains only ∼85% of the overall unique types, the remainder being present only in the validation and/or test sets.
Main Results
We are interested in exploring the architecture's potential at supertagging, as traditionally formulated, as well as its capacity to learn the grammar beyond the scope of the types seen in the training data. We would like to know whether the latter is at all possible (and, if so, to what degree), but also whether switching to a constructive setting has an impact on overall accuracy.
Digram Encoding Predicting type sequences one atomic symbol or connective at a time provides the vocabulary to construct new types, but results in elongated target output sequence lengths 5 . As a countermeasure, we experiment with digram encoding, creating new atomic symbols by iteratively applying pairwise merges of the most frequent intra-type symbol digrams (Gage, 1994), a practice already shown to improve generalization for translation tasks (Sennrich et al., 2016). To evaluate performance, we revert the merges back into their atoms after obtaining the predictions.
With no merges, the model has to construct types and type sequences using only atomic types and connectives. As more merges are applied, the model gains access to extra short-hands for subsequences within longer types, reducing the target output length, and thus the number of interactions it has to capture. This, however, comes at the cost of a reduced number of full-type constructions effectively seen during training, while also increasing the number of implicit rules of the type-forming context-free grammar. If merging is performed to exhaustion, all types are compressed into single symbols corresponding to the indivisible lexical types present in the treebank. The model then reduces to a traditional supertagger, never having been exposed to the internal type syntax, and loses the potential to generate new types.
We experiment with a fully constructive model employing no merges (M 0 ), a fully merged one i.e. a traditional supertagger, (M ∞ ), and three in-between models trained with 50, 100 and 200 merges (M 50 , M 100 and M 200 respectively). Table 1 displays the models' accuracy. In addition to the overall accuracy, we show accuracy over different bins of type frequencies, as measured in the training data: unseen, rare (1-10), medium (10-100) and high-frequency (> 100) types. Table 1 shows that all constructive models perform overall better than M ∞ , owing to a consistent increase in their accuracy over unseen, rare, and mid-frequency types. This suggests significant benefits to using a representation that is aware of the type syntax. Additionally, the gains are greater the more transparent the view of the type syntax is, i.e. the fewer the merges. The mergefree model M 0 outperforms all other constructive models across all but the most frequent type bins, reaching an overall accuracy of 88.05% and an unseen category accuracy of 19.2%. We are also interested in quantifying the models' "imaginative" precision, i.e., how often do they generate new types to analyze a given input sentence, and, when they do, how often are they right (Table 2). Although all constructive models are eager to produce types never seen during training, they do so to a reasonable extent. Similar to their accuracy, an upwards trend is also seen in their precision, with M 0 getting the largest percentage of generated types correct.
Together, our results indicate that the typesyntax is not only learnable, but also a representational resource that can be utilized to tangibly improve a supertagger's generalization and overall performance.
Other Models
Our preliminary experiments involved RNN-based encoder-decoder architectures. We first tried training a single-layer BiGRU encoder over the ELMo representations, connected to a single-layer GRU decoder, following Cho et al. (2014); the model took significantly longer to train and yielded far poorer results (less than 80% overall accuracy and a strong tendency towards memoizing common types). We hypothesize that the encoder's fixed length representation is unable to efficiently capture all of the information required for decoding a full sequence of atomic symbols, inhibiting learning.
As an alternative, we tried a separable LSTM decoder operating individually on the encoder's representations of each word. Even though this model was faster to train and performed marginally better compared to the previous attempt, it still showed no capacity for generalization over rarer types. This is unsurprising, as this approach assumes that the decoding task can be decomposed at the type-level; crucially, the separable decoder's prediction over a word cannot be informed by its predictions spanning other words, an information flow that evidently facilitates learning and generalization.
Type Syntax
To assess the models' acquired grasp of the type syntax, we inspect type predictions in isolation. Across all merge scales and consistently over all trained models, all produced types (including unseen ones) are well-formed, i.e. they are indeed words of the type-forming grammar. Further, the types constructed are fully complying with our implicit notational conventions such as the obliqueness hierarchy.
Even more interestingly, for models trained on non-zero merges it is often the case that a type is put together using the correct atomic elements that together constitute a merged symbol, rather than the merged shorthand trained on. Judging from the above, it is apparent that the model gains a functionally complete understanding of the typeforming grammar's syntax, i.e. the means through which atomic symbols interact to produce types.
Sentence Syntax
Beyond the spectrum of single types, we examine type assignments in context.
We first note a remarkable ability to correctly analyze syntactically complex constructions requiring higher-order reasoning, even in the presence of unseen types. An example of such an analysis is shown in Fig 5. For erroneous analyses, we observe a strong tendency towards self-consistency. In cases where a type construction is wrong, types that interact with that type (as either arguments or functors) tend to also follow along with the mistake. On one hand, this cascading behavior has the effect of increasing error rates as soon as a single error has been made. On the other hand, however, this is a sign of an implicitly acquired notion of phrase-wide well-typedness, and exemplifies the learned long-range interdependencies between types through the decoder's auto-regressive formulation. On a related note, we recognize the most frequent error type as misconstruction of conjunction schemes. This was, to a degree, expected, as coordinators display an extreme level of lexical ambiguity, owing to our extracted grammar's massive type vocabulary.
Output Embeddings
Our network trains not only the encoder-decoder stacks, but also an embedding layer of atomic symbols. We can extract this layer's outputs to generate vectorial representations of atomic types and binary connectives, which essentially are high-dimensional character-level embeddings of the type language.
Considering that dense supertag representations have been shown to benefit parsing (Kasai et al., 2017), our atomic symbol embeddings may be further utilized by downstream tasks, as a highly refined source of type-level information.
Comparison
Our model's overall accuracy lies at 88%, which is comparable to the state-of-the-art in TAG supertagging (Kasai et al., 2017) but substantially lower than CCG . A direct numeric comparison holds little value, however, due to the different corpus, language and formalism used. To begin with, our scores are the result of a more difficult problem, since our target grammar is far more refined. Concretely, we measure accuracy over a set of 5700 types, which is one order of magnitude larger than the CCGBank test bed (425 in most published work; CCGBank itself contains a little over 1100 types) and 20% larger than the set of TAGs in the Penn Treebank. Practically, a portion of the error mass is allotted to mislabeling the implication arrow's name, which is in one-to-one correspondence with a dependency label of the associated parse tree. In that sense, our error rate is already accounting for a portion of the labeled attachment score, a task usually deferred to a parser further down the processing line. Further, the prevalence of entangled dependency structures in Dutch renders its syntax considerably more complicated than English.
Conclusion and Future Work
Our paper makes three novel contributions to categorial grammar parsing. We have shown that attention-based frameworks, such as the Trans- Figure 5: Type assignments for the correctly analyzed wh-question "in hoeverre zal het rapport dan nog een rol spelen" (to what extent will the report still play a role) involving a particular instance of pied-piping. The type of "in" was never seen during training; it consumes an adverb as its prepositional object, to then provide a third-order type that turns a verb-initial clause with a missing infinitive modifier into a wh-question. Such constructions are a common source of errors for supertaggers, as different instantiations require unique category assignments.
former, may act as capable and efficient supertaggers, eliminating the computational costs of recurrence. We have proposed a linear type system that internalizes dependency labels, expanding upon categorial grammar supertags and easing the burden of downstream parsing. Finally, we have demonstrated that a subtle reformulation of the supertagging task can lift the closed world assumption, allowing for unbounded supertagging and stronger grammar learning while incurring only a minimal cost in computational complexity. Hyper-parameter tuning and network optimization were not the priority of this work; it is entirely possible that different architectures or training algorithms might yield better results under the same, constructive paradigm. This aside, our work raises three questions that we are curious to see answered. First and foremost, we are interested to examine how our approach performs under different datasets, be it different grammar specifications, formalisms or languages, as well as its potential under settings of lesser supervision. A natural continuation is also to consider how our supertags and their variable-length, content-rich vectorial representations may best be integrated with a neural parser architecture. Finally, given the close affinity between syntactic derivations, logical proofs and programs for meaning computation, we plan to investigate how insights on semantic compositionality may be gained from the vectorial representations of types and type-logical derivations. | 6,528.2 | 2019-05-31T00:00:00.000 | [
"Computer Science"
] |
Nucleosome positioning based on DNA sequence embedding and deep learning
Background Nucleosome positioning is the precise determination of the location of nucleosomes on DNA sequence. With the continuous advancement of biotechnology and computer technology, biological data is showing explosive growth. It is of practical significance to develop an efficient nucleosome positioning algorithm. Indeed, convolutional neural networks (CNN) can capture local features in DNA sequences, but ignore the order of bases. While the bidirectional recurrent neural network can make up for CNN's shortcomings in this regard and extract the long-term dependent features of DNA sequence. Results In this work, we use word vectors to represent DNA sequences and propose three new deep learning models for nucleosome positioning, and the integrative model NP_CBiR reaches a better prediction performance. The overall accuracies of NP_CBiR on H. sapiens, C. elegans, and D. melanogaster datasets are 86.18%, 89.39%, and 85.55% respectively. Conclusions Benefited by different network structures, NP_CBiR can effectively extract local features and bases order features of DNA sequences, thus can be considered as a complementary tool for nucleosome positioning.
Background
In eukaryotes, nucleosomes are the basic structural unit of chromatin. The nucleosome is composed of a histone octamer core which is formed by four types of histones (H2A, H2B, H3, H4) and DNA that is tightly wound around histone core about 1.65 turns. The winding DNA is called core DNA with 147 bp in length. The DNA that binds to histone H1 and connects two adjacent nucleosomes is called linker DNA, in around 20-60 bp, and it is responsible for stabilizing the structure of nucleosomes [1]. Nucleosomes not only compress the chromatin structure, but also play a key role in biological processes such as genome expression, DNA replication and repair [2][3][4][5]. Therefore, it is of far-reaching biological significance to study nucleosome positioning on the whole genome.
Since DNA needs to be bent and coiled around histone core, the flexible regions of DNA are more likely to form nucleosomes [6]. In the core DNA region found in chicken red blood cells, AA / TT / TA fragments repeat every 10 bp in the direction of the DNA facing to histone core; GG/GC/CC/CG appears every 10 bp in the direction of the back of histone core [7]. Similar periodic laws have been found in the studies of other eukaryotes [8]. In addition, the study found that nucleosomes in the poly (dA:dT) region were significantly lacking [9]. The affinity between DNA and histones obviously depends on the order of the bases, which indicates that DNA sequences do affect the formation of nucleosomes [10]. Peckham et al. extract the k-mer frequency of the DNA sequence and use a support vector machine to clearly distinguish the core DNA and junction DNA sequences of the yeast [11]. These researches indicate to a certain extent that nucleosome positioning is affected by sequence information. Thence, we can construct theoretical models to extract sequence features and distinguish core DNA from linker DNA to predict the location of nucleosomes.
In the past decade, due to the popularity of machine learning, more nucleosome positioning prediction models based on DNA sequence information have been proposed [12][13][14][15][16][17]. In addition, with the widespread popularity of artificial intelligence, deep learning algorithms have also been applied to nucleosome positioning and made great progress. Di Gangi et al. utilize a stacked convolutional layer and long-short-term memory (LSTM) network to establish a deep learning model [18]. LeNup add the Inception module and gated convolutional structure to the convolutional neural network (CNN) [19]. CORENup conduct the parallel method of CNN and LSTM network to show high performance in both classification accuracy and calculation time [20]. These deep learning prediction models all use one-hot encoding to represent DNA sequences.
DNA sequence is composed of A, T, C, and G, and can be seen as a broad language which natural language processing (NLP) technology can be applied to. Word2vec is a technology that converts a single word into a vector, which is mainly used in the field of NLP [21]. It also has a good application on biological sequence processing. Ng utilize the human genome sequence as the learning corpus to exploit the pre-training vector of the DNA sequence (dna2vec) through training word2vec model [22]. Dna2vec has been used to predict the interaction between enhancer and promoter [23]. In predicting the compound-protein interaction, the word2vec method was also used to obtain the word vector of the amino acid sequence [24].
CNN has obvious advantages in image processing. It was initially mainly used in the field of computer vision. In 2014, TextCNN model used convolutional neural networks in text classification tasks, and selected multiple filters of different scales to extract more local information of the text, and the effectiveness was verified [25]. The sequence of bases contains rich information, and there are long-range interactions between each base. Therefore, recurrent neural network (RNN) could be helpful to mine the hidden information in the DNA sequence [26]. Gated recurrent unit (GRU) and long short-term memory (LSTM) networks are two mainstream variants of RNN, which can learn information from a long time ago [18,23].
In this paper, we utilized the k-mer embedding trained by word2vec to represent the DNA sequence. In addition, we built several deep learning models to compare the impact of different network structures on prediction quality. We found that the prediction performance of the hybrid model that integrates CNN and RNN is significantly better than single structure model. Our results also demonstrated that using the k-mer vector to represent the DNA sequence is more effective.
Selection of word vector dimensions
Obviously, the size of k-mer will determine the vocabulary size, then affect the training efficiency. In addition, we also need to notice the dimension of word vector especially. The setting of vector dimension is related to the vocabulary size and experimental requirements. The higher dimensional word vector can more accurately reflect the feature distribution of each k-mer in the sequence space. However, the higher word vector dimension is, the more calculation burden becomes.
In order to determine k and word vector dimension, we train k-mers into word vectors with several different dimensions, for k ranging from 3 to 6 respectively. Then, word vectors of different dimensions are fed to support vector machine (SVM) to find the most suitable k and word vector dimension.
In this paper, we applied python package (gensim 3.8.3) to implement the word2vec model. And we used python package Scikit-learn (Sklearn 0.23) to implement the SVM algorithm. Figure 1 shows the experimental results with combinations of different k and dimensions on the first group of datasets.
In summary, the selection of k and vector dimensions for each species in this experiment are shown in Table 1.
CNN model improves the classification performance
We compare the classification results of CNN model with SVM, as shown in Table 2. For each species, the bold numbers in the table indicate the better model under each evaluation index. Table 2 shows that prediction performance of CNN on C. elegans and D. melanogaster are significantly better than SVM. Especially for C. elegans dataset, CNN is higher than SVM in ACC, S n , S p , MCC by 2.23%, 2.07%, 2.38%, 4.62%, respectively. However, for H. sapiens dataset, CNN is lower than SVM in ACC, S n , S p , MCC by 3.48%, 2.69%, 4.26%, 6.43%.
Performance on BiGRU + BiLSTM model is close to CNN
The performance of the BiGRU + BiLSTM model is also evaluated by tenfold cross-validation, which is shown in Table 3. Table 2, we find that results obtained by these two deep learning models are relatively close, and the difference in accuracy is less than 0.4%. Overall, SVM has obvious advantages for H. sapiens datasets.
The integrative model NP_CBiR yields outstanding performance
NP_CBiR is based on convolutional layers, BiGRU and BiLSTM networks. Table 4 shows classification results of NP_CBiR via tenfold cross-validation.
NP_CBiR has improved prediction performance on each dataset compared with the previous two deep learning model in Tables 2 and 3. Except H. sapiens on which the classification results of NP_CBiR are little lower than SVM, the performance of NP_CBiR on the other two species are all higher than SVM. More precisely, the ACC of NP_CBiR for H. sapiens, C. elegans, and D. melanogaster datasets are 1.9%, 1.2%, and 2.7% higher than the BiGRU + BiLSTM model, respectively. These results show that the performance of hybrid model is better.
We also plot the ROC curves of NP_CBiR on the first set of data, as shown in Fig. 2.
Comparison with other algorithms
The above results show that the prediction performance of jointly using convolutional layers and RNN networks is significantly better than single module neural network. Therefore, we further compare NP_CBiR with other proposed nucleosome positioning algorithms on the second group of datasets. Liu et al. [27] proposed an evaluation method for this group of datasets. The method stipulates that 100 test sample sets are randomly selected from each dataset, and each sample set contains 100 core DNA sequences and 100 linker DNA sequences, then calculates the ROC curve of each sample set and the average of the 100 sample sets.
The experimental results are shown in Table 5, the approximate value is represented by "∼", and the bold number represents the best value. The second column of the table shows the best AUC values of the eight methods reported by Liu et al. [27].
For NP_CBiR, its AUC value on the H-5U, H-LC, and D-PM are better than other methods; the AUC value on the H-PM, D-5U, and D-LC are better than the results of Liu et al. [27] and DLNN [18], but slightly lower than CORENup [20].
We compare the classification results of NP_CBiR model with SVM, as shown in Table 6. For each Dataset, the bold numbers in the table indicate the better model. Table 6 shows that prediction performance of NP_ CBiR on D-5U, D-LC and D-PM are slightly better than SVM, on H-LC is flat with SVM, and on H-5U and H-PM are slightly lower than SVM.
In addition, we compared the prediction results of NP_CBiR with other methods in the first group dataset via tenfold cross-validation. As shown in Table 7, 8 and 9, the best values are in bold.
Compared with other algorithms, for H. sapiens, the classification accuracy of NP_CBiR is higher than DLNN and ZCMM by 0.81% and 8.46%. For C. elegans, the prediction result of the NP_CBiR is close to DLNN, and it is These results show that the combination of CNN, BiGRU and BiLSTM network can make up for the shortcomings of a single module network model and effectively improves the classification performance.
Conclusions
In this work, nucleosome positioning method based on DNA sequence embedding and deep learning is introduced. Word vector embedding of DNA sequence has been verified to be helpful in nucleosome positioning. Moreover, we construct three deep learning models with different network structures to better understand advantages of these structures. Our results demonstrate that NP_CBiR model which integrated convolutional layers, BiGRU and BiLSTM network structures has a better prediction performance. Convolutional layers can extract local features in DNA sequences, but ignore the order of bases and lose the hidden position information. While BiGRU and BiLSTM networks can make up for CNN's shortcomings in this regard, they take the contextual information into account and thus can dig out the correlation information in the sequence. The prediction performance of NP_CBiR to a certain degree is comparable with or better than SVM. Therefore, by combining these two structures, the hybrid model NP_CBiR can effectively extract the local features and long-term dependent features of the sequence and be considered as a complementary model in distinguishing core DNA from linker DNA.
Nucleosome positioning is a complex dynamic process, it still needs to be further researched. In recent years, many excellent and effective models have emerged with the continuous development of deep learning. The proposed models in this paper contain relatively simple architectures. As for future work, we will explore the application of more advanced neural networks and models in nucleosome positioning.
Methods
In this work, we segment a DNA sequence to several k-mers [15], and then apply word2vec model to transform k-length sub-sequence of DNA sequence into the word vectors. Meanwhile, we utilize support vector machine (SVM) to determine the best dimension of the DNA word vector. Then we propose three nucleosome positioning deep learning models with different networks, such as CNN, BiGRU and BiLSTM. In addition, we conduct relatively sufficient experiments for each model to compare and analyze the prediction performance among models. We choose PaddlePaddle deep learning framework to implement related experiments (https:// www. paddl epadd le. org. cn).
Dataset descriptions
This paper mainly uses two groups of datasets downloaded from published papers. The first datasets contain DNA sequence data of H. sapiens, C. elegans, D. melanogaster and D. melanogaster, they were constructed by Guo et al. [12], the length of sequences is 147 bp.) The yeast data was constructed by Chen et al. [28], which is 150 bp in length. In order to avoid redundancy and reduce homology deviation, sequences with more than 80% similarity were eliminated. The core DNA sequences are positive samples (P-S), and linker DNA sequences are negative samples (N-S). The sample size of the first dataset sequence is shown in the Table 10.
The second datasets are from Liu et al. [27]. It contains six subsets of DNA sequences related to two species. They are largest chromosome (LC), promoter (PM) and 5'UTR exon region (5U) sequences from H. sapiens and D. melanogaster. Based on the experimental data provided by Liu, Amato et al. [20] extracted core DNA and linker DNA by downloading the genome file from the UCSC gene browse http:// www. genome. ucsc. edu/ cgibin/ hgTab les. The length of sequences is 147 bp and sample sizes of the second group of datasets are shown in the Table 11. In addition, we downloaded an additional set of Homo sapiens genome sequences containing nucleosome references to implement the genome-wide test to obtain the predictive performance of our model under the real context. We downloaded Healthy_Song data (GSE81314_healthy_Song_stable_100bp_hg38.bed.gz) from GRCh38(hg38) via https:// gener egula tion. org/ NGS/ stable_ nucs/ hg38/, and expanded the length of sequence from 100 to 147 bp. The number of nucleosome sequences is 404565.
Performance evaluation
In this work, we adopted k-fold cross validation (for k = 10) to train and assess the model. Original dataset is divided into k mutually disjunct parts, k-1 parts for training and 1 part for testing. The train/assess-procedure will be conducted k times for k different testing parts, and the average performance on these k testing parts can be seen as model's generalization ability. In classification tasks, it is necessary to set metrics to evaluate the generalization ability of the model. Usually, we use sensitivity ( S n ), specificity ( S p ), accuracy (ACC), and Matthew's correlation coefficient (MCC) to measure the effectiveness of the model [12,19]. The mathematical expressions are:
DNA sequences embedding based on word2vec
One-hot encoding is often used in deep learning to represent DNA sequences [18][19][20]. This method has a limitation that vectors are independent each other so that the model cannot capture the hidden association information in the sequence. While word2vec model that trained by context information maps each word into a dense continuous low-dimensional word vector [22,29], which can generate word vector reflecting the connection between words. Word2vec makes up for the defect that one-hot encoding cannot express the similarity between words. Meanwhile, it has the advantages of simple model hierarchy and short training time. Word2vec's basic structure is a shallow neural network with two types of training modes: Continuous Bag-of-Words (CBOW) and Skip-gram. In practice, Skip-gram has a better processing effect on low-frequency words. Therefore, we choose Skip-gram model to train the DNA sequence word vector in this paper.
To apply word2vec technology to represent DNA sequences, it is necessary to segment the sequences into k-mers firstly [22]. It means that a DNA sequence is divided into substrings containing k bases [15], a sequence with length L is generally divided into L-k + 1 k-mers. We know that the number of all possible combinations of A,C,G,T for 4 digit is 4 k , so the vocabulary size is 4 k . All k-mers in a super large dataset are input into the model for training, then a word vector dictionary of 4 k k-mers can be obtained. According to the dictionary, each k-mer of a DNA sequence can be represented by a word vector, so that a length L DNA sequence can be converted into an embedding matrix. Taking 4-mer as an example, the process of word vector representation of DNA sequence is shown in Fig. 3.
CNN model
Convolutional Neural Network (CNN) is a classic model in deep learning, which has shown extraordinary advantages in computer vision [30,31]. It can also be applied in text classification tasks [25]. Convolutional layer is the core of CNN, and it performs convolution operations through filters to extract features from the input data. Meanwhile, the parameters in the convolutional layer are shared, which greatly reduces parameter scale. Pooling layer reduces the feature dimension by sampling the output, and it is often connected after convolutional layer. Pooling operation can not only simplify the network parameters and reduce the amount of calculation, but also further compress the features and key output features to prevent the model from overfitting. There are two common types: max pooling and average pooling.
We establish nucleosome positioning prediction model based on the TextCNN, as shown in Fig. 4. Recently, DeepInsight [32] can perform non-image to image transformation, and DeepFeature [33] can also find features/genes other than non-image to image transformation which can be then used by CNN. More clearly and concisely, we use pre-trained word vectors of DNA sequences as inputs of the model, several different size of filters (3,4,5) for convolutional operation, and the number of filters is 64. Unlike TextCNN, the model changes global max pooling to max pooling with width and stride 2. This is more conducive for further extracting salient features and reducing the size of output features [34].
The fully connected layer contains 100 neurons, and the dropout ratio is 0.5 [35]. Batch size is 64, and the number of training iterations is 10 epochs, with a learning rate of 0.001. We use Adamax optimizer and cross-entropy loss function.
BiGRU and BiLSTM model
The neurons of the hidden layer in recurrent neural network (RNN) are connected to each other so that the network is endowed with memory ability, which can mine the information hidden in the previous part of sequence. Therefore, RNN is mostly applied in sequence processing or generation tasks [36]. In particular, the bidirectional recurrent neural network (BiRNN) can also take the context into account, and integrate previous and future information, so generally it has a better efficiency. In this work, we try to construct the RNN model using two types of RNN units: LSTM and GRU [37].
LSTM unit is composed of three gates and a memory cell, which is responsible for the storage of information. The element value of each gate is between 0 and 1 to implement forgetting or strengthening [18]. The performance of GRU is almost equivalent to LSTM. While its parameter scale is much lower than LSTM, and it can also achieve long short-term memory function. GRU does not use the memory cell and three gates like LSTM but uses the update gate and the reset gate [38]. Considering that the sequence of bases in the DNA sequence contains hidden long-range correlation information, we constructed the model based on BiGRU and BiLSTM, as shown in Fig. 5.
The input layer of the model is followed by a bidirectional GRU layer. The output vector after bidirectional GRU is spliced and then input to a bidirectional LSTM layer, the information lost in the previous layer is further captured through LSTM network. The output features of bidirectional LSTM are connected together and input to a fully connected layer containing 100 neurons, and then a dropout layer (p = 0.5). Finally, a softmax fully connected layer is added for classification. The hidden size of GRU and LSTM are 100 and 200 respectively. Batch size is 64, and the number of training iterations is 15 epochs, with a learning rate of 0.001. We use Adamax optimizer and cross-entropy loss function here.
Architecture of NP_CBiR
Some studies have shown that integrative models with multiple network structures have better capabilities of feature extraction [19,20,37]. Considering the model characteristics of CNN and RNN, we propose a hybrid model named NP_CBiR, as shown in Fig. 6.
NP_CBiR has been further modified on the basis of previous models. The specific content is as follows: In the convolutional layer, NP_CBiR only use one scale filter, the size is 5 with the number of 50. Although the sampling operation of pooling layer can reduce the feature dimension, it has the risk of destroying the global features. Since each segment in the DNA sequence is equally important, NP_CBiR uses batch normalization (BN) layer to replace pooling layer [39]. The normalization of the BN layer can effectively prevent the model from overfitting and improve the generalization ability. The network structure after BN layer is similar to Section D. The hidden sizes of GRU and LSTM are 50 and 100, respectively. The fully connected layer contains 100 neurons, and the dropout ratio is 0.5. Batch size is 64, and the number of training iterations is 15 epochs, with a learning rate of 0.0001. We also used Adamax optimizer and cross-entropy loss function. | 5,045.6 | 2022-04-13T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Diode-pumped Yb:gso Femtosecond Laser
Compact femtosecond laser operation of Yb:Gd 2 SiO 5 (Yb:GSO) crystal was demonstrated under high-brightness diode-end-pumping. A semiconductor saturable absorption mirror was used to start passive mode-locking. Stable mode-locking could be realized near the emission bands around 1031, 1048, and 1088 nm, respectively. The mode-locked Yb:GSO laser could be tuned from one stable mode-locking band to another with adjustable pulse durations in the range 1~100 ps by slightly aligning laser cavity to allow laser oscillations at different central wavelengths. A pair of SF10 prisms was inserted into the laser cavity to compensate for the group velocity dispersion. The mode-locked pulses centered at 1031 nm were compressed to 343 fs under a typical operation situation with a maximum output power of 396 mW. 3+-doped YVO 4 crystal for efficient Kerr-lens mode locking in solid-state lasers, " Opt. " 60-W average power in 810-fs pulses from a thin-disk Yb:YAG laser, " Opt. Apatite-structure crystal, Yb 3+ :SrY 4 (SiO 4) 3 O, for the development of diode-pumped femtosecond lasers, " Opt. Efficient diode-pumped Yb 3+ :Y 2 SiO 5 and Yb 3+ :Lu 2 SiO 5 high-power femtosecond laser operation " Opt. High power diode-pumped Yb 3+ :CaF 2 femtosecond laser, " Opt. Femtosecond pulse generation with a diode-pumped Yb 3+ :YVO 4 laser " Opt. Efficient laser action of Yb:LSO and Yb:YSO oxyorthosilicates crystals under high-power diode-pumping, " Appl. Diode-pumped continuous-wave and passively mode-locked Yb:GSO laser " , Opt. Express 14, 686-685 (2006).
Introduction
Ytterbium-doped crystals that typically have broad emission bands around 1 μm have been recognized in recent years as very attractive gain media for diode-pumped femtosecond (fs) oscillation and amplification [1][2][3][4][5][6][7][8].Ytterbium ion has a very simple electronic-level scheme involving only two manifolds 2 F 5/2 and 2 F 7/2 , which consequently eliminates undesired effects such as excited-state absorption, cross relaxation, up-conversion, and concentration quenching.Its relatively low intrinsic quantum defects (generally less than 10%) and high radiative quantum efficiency results in a low heat generation that may support efficient and compact diode-pumped lasers.An important advantage of Yb-doped laser crystals over their Nd-doped counterparts is their broad emission spectra, which allows ultrashort pulse generation.Many Yb-doped materials have been already demonstrated as competitive laser gain media in the fs regime [1][2][3][4][5][6][7][8].Among these new materials, Yb-doped glasses exhibit very broad and smooth emission spectra that permit possible generation of ultrashort pulses, but poor thermal conductivity and low stress-fracture which limit their applications in high-power laser systems.High-power compact lasers make use of Yb-doped crystals with comparatively high thermal conductivities and large emission cross-sections.Nevertheless, Yb 3+ ions in crystalline host matrices typically have narrow emission and absorption bands.The splittings of the fundamental manifold 2 F 7/2 of Yb 3+ in most of the Yb-doped crystals are only a few hundreds of cm -1 , comparable to the thermal energy at room temperature, which may cause strong re-absorption at the emission wavelengths due to thermal populating of the terminal laser level.These problems have been partly solved in some recently developed ytterbiumdoped oxyorthosilicates, such as Yb:Y 2 SiO 5 (Yb:YSO), Yb:Lu 2 SiO 5 (Yb:LSO), and Yb:GSO, which have been demonstrated to exhibit broad emission spectra, large ground-state splittings, and high thermal conductivities [9,10].High-power fs laser oscillations of Yb:YSO and Yb:LSO have already been demonstrated [5].Yb:GSO has been demonstrated to exhibit a large fundamental manifold splitting up to 1067 cm -1 , and a broad emission bandwidth in the range 1020-1120 nm with a full-width of half maximum of 77 nm, which supports broadband tunable cw or ultrafast lasers.Tunable cw Yb:GSO lasers have been realized efficiently with low LD pump thresholds [10].In this letter we report on fs operation of Yb:GSO laser.
A large manifold splitting of Yb 3+ ions in the GSO host indicates a quite strong crystalfield interaction notably due to the anisotropic and compact structure of the GSO oxyorthosilicate host matrix.The crystal structure of the GSO belongs to the primitive monoclinic space group P2 1 /c and is composed of a two-dimensional network of corner linked (OGd 4 ) tetrahedra where the (SiO 4 ) tetrahedra are packed.In comparison, the structures of YSO and LSO belong to the end-centered monoclinic I2/a space group, where (SiO 4 ) and (OY 4 ) or (OLu 4 ) tetrahedra share edges and form chains interconnected by isolated (SiO 4 ) tetrahedra.Accordingly, the GSO is more anisotropic and has more compact structure than YSO or LSO, which results in a larger manifold splitting for Yb 3+ .In the GSO host lattice, there are two nonequivalent crystallographic sites of Gd 3+ , site Gd 1 and Gd 2 , which are coordinated with 7 and 9 oxygen atoms, respectively.Accordingly, there exist two substitution sites for ytterbium-doping.In site Gd 1 , the Yb 3+ ion is affected by the stronger crystal-field and results in a larger Stark-splitting of the Yb 3+ manifolds.
Spectral properties of Yb:GSO
In the previous work, we measured room-temperature absorption and emission spectra of a thick Yb:GSO sample [10].A strong re-absorption loss was observed near the zero-phonon emission line, which corresponds to the absorption maximum at 976 nm.To confirm that the abnormal emission around the zero-phonon line are caused by the strong re-absorption in thick samples, we re-examined the emission spectrum of a 0.82-mm-thick 5at% Yb-doped sample with a Triax550 spectrofluorimeter under the 940 nm laser diode excitation.The absorption and emission spectra are presented in Fig. 1.As expected, a clear emission peak appears at the 976 nm, with a cross section of 0.24×10 -20 cm 2 .The emission cross section at 1088 nm was determined to be 0.44×10 -20 cm 2 , nearly equal to the value of the 9.48-mm-thick sample [10].The absorption band around 976 nm overlaps with the shortest-wavelength emission band, corresponding to the zero-line transition between the lowest levels of 2 F 7/2 and 2 F 5/2 manifolds.The emission bands centered around 1013, 1031, 1048, and 1088 nm correspond to transitions from the ground state 2 F 7/2 to the other sublevels of 2 F 5/2 , respectively.The emission band at the longest wavelength around 1088 nm corresponds to the transition from the lowest levels of the 2 F 5/2 manifold to the highest levels of the 2 F 7/2 manifold, which also has the largest cross section.We can therefore estimate the maximum splitting of the 2 F 7/2 manifold as 1067 cm -1 , which is much larger than those in Yb:YSO and Yb:LSO, and even 1003 cm -1 splitting in Yb:GdCOB [11].Such a large fundamental splitting helps to reduce thermal population of the terminal laser level and thus to decrease the laser threshold.This should in principle benefit mode-locking around the broadband emission bands.
Experimental setup
In this paper, we report on fs pulse generation in a compact Yb:GSO laser under direct highbrightness laser diode pumping, where passive mode-locking is started with a semiconductor saturable-absorber mirror (SESAM).In comparison with the so-called Kerr-lens modelocking technique commonly used in ultrafast lasers, SESAM-based mode-locking offers advantages such as release of critic requirements of precise cavity design and alignment, and ease in self-starting mode-locking.It is especially applicable to laser oscillators that are inappropriate to operate Kerr-lens mode-locking, such as the mode-locked operation of Yb lasers that typically exhibit low stabilities intrinsically related with the long excited-state lifetimes of Yb 3+ ions.Recently, we have generated picosecond pulses from SESAM-based passive mode-locking in a Yb:GSO laser end-pumped by a fiber-coupled laser diode with the fiber core diameter of 400 μm [12].It is well-known that the strong tendency toward Qswitched mode-locking of Yb lasers can be suppressed by choosing a proper intracavity beam diameter on the SESAM and optimizing the pump arrangement to get a small mode area in the gain medium.For this purpose, we employed a high-brightness laser diode with a fiber core diameter and numerical aperture of 50 μm and 0.22, respectively.The laser cavity loss was minimized to reach high intracavity pulse energies.
In order to generate stable fs pulses from a compact Yb:GSO laser, we employed a folded resonator as schematically shown in Fig. 2. The laser resonator consists of a SESAM and four mirrors: an input flat mirror M 1 with high transmission at 974 nm and high reflection in a broad band from 1020 to 1120 nm, two folded concave mirrors M 2 (R=500 mm) and M 3 (ROC=300 mm) both with high reflection in a broad band from 1020 to 1120 nm, and an output coupler (OC) flat mirror with a transmission of 2.5%.The length between M 1 and M 2 is about 295 mm, while M 2 and OC are separated by 360 mm, and the length between OC and SESAM is 826 mm.The total cavity length is 1481 mm.Our SESAM is a commercially available one (BATOP GmbH, Germany), which has a 2% saturable absorption at 1064 nm, 70-μJ/cm 2 saturation fluence, and 20-ps relaxation time constant.The experiment was performed with a 2-mm-long 5%-doped Yb:GSO crystal, which was antireflection-coated at 974 nm and a broad band from 1020 to 1120 nm.To efficiently remove the generated heat under diode-pumping, we wrapped the crystal with indium foil and fixed it tightly in a watercooled copper heat sink.The temperature of the laser crystal was controlled at 14°C.The high-brightness fiber-coupled laser diode used for end-pumping was controlled by a temperature regulation to emit at 974 nm with the maximum power up to 5 W. The laser cavity was carefully designed to guarantee a sufficient small mode area in the gain medium and appropriate operation of SESAM in the strong saturation regime within its damage threshold.With the folded cavity as shown in Fig. 2, we can estimate by the so-called ABCD analysis that the beam waists were near 58 μm on the SESAM and 50 μm in the crystal, respectively.By using the setup of Fig. 2(a) with a 2.5% output coupler, the Yb:GSO laser was operated in the picosecond regime at a repetition rate of 101 MHz with the transverse mode remained as TEM 00 .The mode-locked pulse train was detected by a fast photodiode with a rising time of less than 200 ps and recorded with a digital storage oscilloscope.The standard deviations of the cw mode-locked Yb:GSO laser pulses are shown in Fig. 3 along with the pulse power spectrum recorded by a spectral analyzer (Agilent E4411B), which clearly show that the cw mode-locked laser was stably operated at 101 MHz without any observable sidebands.The shot-to-shot fluctuation and long-term mode-locking stability were monitored by checking the output pulse energy.In comparison with the case under high-power pumping with fiber-coupled LD of a large fiber-core diameter, a small mode area in the gain medium made the cw mode-locking to start up more easily and to operate more stably.
Results and discussions
In the absence of intra-cavity prisms for pulse compression, the output pulses reached 1.3 ps at 1031 nm with a maximum average output power of 586 mW.As shown in Fig. 4(a), the measured autocorrelation traces are well-fitted assuming a sech 2 pulse shape.The output spectrum was recorded with a fiber-based spectrometer.As typically presented in the inset of Fig. 4(b), the 1031 nm pulse have a full-width of half-maximum (FWHM) of 4.2 nm.According to the emission spectrum of Yb:GSO, laser action may occur around the strong emission bands of 1013, 1031, 1048, and 1088 nm.Laser emissions around 1013 nm suffer from strong reabsorption losses caused by thermal populating of the terminal laser level.The intermediate bands around 1031 and 1050 nm have medium emission cross-sections where the terminal levels are a little populated.Efficient laser actions are possible.The band around 1088 nm has the strongest emission cross-section with the corresponding terminal laser level very few populated with a probability of 6 ×10 -3 at room temperature according to the Boltzman thermal distribution.As a result, Yb:GSO laser at 1088 nm nearly approaches a quasi-four-level laser scheme, which results in an easy population inversion and a low pump threshold.Depending on the substitution sites, ytterbium ions in Yb:GSO exhibit different but large overall splittings of the ground-state manifold 2 F 7/2 .Nevertheless, ytterbium ions in the GSO host matrix have separate emission bands, corresponding to transitions from the lowest levels of 2 F 5/2 manifold to the split levels of 2 F 7/2 manifold.Separate terminal laser sub-levels limit the continuously tunable range of the cw laser operation.Without any intracavity wavelength selectors, the quasi-four-level nature results in a cw laser oscillation at long wavelengths depending on the laser crystal length and the OC transmission.While in the mode-locked laser operation, the preferred lasing wavelengths differ from that of the cw laser operation.Stable mode-locking could be realized near the emission bands around 1031, 1048, and 1088 nm, respectively.Near the emission valleys between two adjacent emission bands, mode-locking was instable, and multi-wavelength hops were observed as the laser cavity was intentionally aligned for lasing at the valley emission wavelengths.This limited the achievable broadband laser spectra.In the stable mode-locked band near 1088 nm, long pulse durations were observed.For instance, the mode-locked pulse centered at 1092 nm, under the typical operation with a maximum average output power of 602 mW by using a 2.5% output coupler, had a pulse duration of 103 ps (assuming a sech 2 pulse shape).This could be ascribed to the fact that the SESAM used in the experiment, whose saturation absorption centered around 1064 ± 5 nm, had a smaller saturation fluence around longer central wavelengths.It is well known that a smaller modulation depth typically leads to longer pulses.In addition, the dispersion of the gain medium may differ under different emission bands due to the corresponding electronic resonances between the lowest levels of 2 F 5/2 manifold and the split levels of 2 F 7/2 manifold as separate terminal laser sub-levels.As the central wavelength of the mode-locked Yb:GSO laser could jump from one stable mode-locking band to another by slightly adjusting the laser cavity, mode-locked pulse duration could be correspondingly adjustable in the range 1~100 ps by slightly aligning laser cavity to allow laser oscillations at different wavelengths.Wavelength tuning was mainly determined by the reabsorption loss in the gain medium and spatial mode-matching between the pump and intracavity laser.
Conclusions
In conclusion, we have demonstrated what we believe is the first diode-pumped fs Yb:GSO laser.This oxyothosilicate crystal has the unique property of high structural disorder and thus exhibits a very broad emission band of 77 nm.The demonstrated maximum output power of 396 mW is not a limit.Further progress could be expected by improving Yb-doped GSO crystal homogeneity and purity, which may enable a high doping concentration and thus more powerful laser concepts.Driven by these demonstrated promising advantages, high-quality Yb:GSO laser crystals are competitive alternatives to the widely-used Ti:sapphire crystals as compact, tunable fs solid-state lasers operated in the 1 μm wavelength regime.
Fig. 3 .
Fig. 3. (a) The standard deviations of the cw mode-locked Yb:GSO laser pulses.(b) The 101 MHz repetition rate of the cw mode-locked pulses.(c) Power spectrum of a cw mode-locked Yb:GSO laser.
Fig. 4 .
Fig. 4. Autocorrelation trace of the cw mode-locked Yb:GSO laser (a) and that with intracavity SF10 prisms for pulse compression (b).The inset of (b) gives the corresponding spectrum of the 343 fs pulse. | 3,518.2 | 2007-03-05T00:00:00.000 | [
"Physics"
] |
Hb H Disease Diagnosed During Adolescent Pregnancy
Abstract Hb H disease is a moderate to severe form of α-thalassemia (α-thal). Patients with Hb H disease may become symptomatic, especially during infections and pregnancy, and may require transfusions. Herein, we present a 16-year-old female with Hb H disease who was initially diagnosed during adolescent pregnancy and was found to carry the –α3.7/–(α)20.5 deletions. The relatively mild presentation of this case highlights the milder phenotypic consequences of deletional α mutations. The case describes the screening and management of pregnancy with Hb H disease. Additionally, this case demonstrates that screening of some undiagnosed inherited blood disorders is important during pregnancy.
Hb H disease is a mild-to-severe form of inherited autosomal recessive a-thalassemia (a-thal), and is more common in Mediterranean countries, the Middle East, and South East Asia [1]. Hb H disease usually occurs as a result of deletions of three a globin genes; however, nondeletional mutations such as Hb H/Hb Constant Spring (HBA2: c.427T>C) may also occur. The patients have mild-to-severe anemia, hypochromia, microcytosis, and an average or slightly reduced level of Hb A 2 , along with a variable quantity of Hb H (0.8-40.0%) [1]. Excess b-globin chains in the presence of decreased a-globin production form b4 tetramers, namely Hb H, which may be detected by globin electrophoresis and by staining of blood smears with brilliant cresyl blue [2]. The patients may become symptomatic, especially during infections, and may require transfusion therapy. Herein, we present a patient with Hb H disease, initially diagnosed during an adolescent pregnancy.
Case report
A 16-year-old female presented at the obstetrics and gynecology clinic with pallor, tachycardia, recurrent syncopes when she was 20 weeks pregnant. In compliance with ethics standards, informed consent was obtained from the parents of the patient and is included in the hospital's clinical documents. Palpitations and syncopes had been present since the 12th week of gestation. Anemia and arrhythmia were detected, and she was referred to hematology after an erythrocyte transfusion. Personal history revealed that when she was 13 years old, she presented with abdominal pain, and splenomegaly during an upper respiratory tract infection and received an erythrocyte transfusion. Additionally, she had already received an erythrocyte transfusion due to anemia in the 12th week of gestation. The patient's parents were cousins, and the cousin of the patient was suffering from anemia requiring intermittent transfusions.
Physical examination revealed pallor, obliterated Traube's space, but spleen and liver size could not be evaluated due to pregnancy. Weight and height percentiles were 50 and 10 percentile, respectively. Abdominal ultrasonography confirmed hepatosplenomegaly. Laboratory studies of the patient before transfusion showed a hemoglobin (Hb) level of 7.7 g/dL, mean corpuscular volume (MCV) 65.0 fL, mean corpuscular Hb (MCH) 19.2 pg, and red blood cell distribution width (RDW) 28.2%. Complete blood count (CBC) values of the patient, spouse, child, and parents of the patient are shown in Table 1. The peripheral smear demonstrated severe hypochromia, anisopoikilocytosis, tear-drop shaped erythrocytes and microcytosis. Corrected reticulocyte count was 4.4%, serum ferritin level was 176.0 ng/mL, and bilirubin, lactate dehydrogenase (LDH) levels were normal. Hb H inclusions were demonstrated on the patient's peripheral blood with brilliant cresyl blue staining.
High performance liquid chromatography (HPLC) showed Hb A 94.5%, Hb H 4.0%, Hb A 2 1.5%, Hb F 0.0%. A multiplex polymerase chain reaction (PCR) analysis revealed -a 3.7 /-(a) 20.5 confirming the Hb H diagnosis. The patient was put on a transfusion program during pregnancy in order to achieve a Hb level above 10.0 g/dL. The supplementation of folic acid was implemented. On the other hand, the cardiac evaluation revealed Wolff-Parkinson-White (WPW) syndrome, and she underwent radio frequency catheter ablation. Her spouse had average hematological values, peripheral smear, and globin electrophoresis, but genetic studies could not be studied. The pregnancy resulted in a full-term male weighing 3040 g. He is now 5 months old, and his hemogram values are shown in Table 1.
The incidence of a-thal disease is reported to be 0.25-4.10% in Turkey [3]. Although the incidence of a-thal is high, the relatively rare occurrence of Hb H disease may be due to inadequate reporting, insufficient awareness of the disease or the disease is usually asymptomatic without triggering factors such as infection, inflammation induced hemolysis or pregnancy as in our case [4]. In the majority of patients with Hb H disease, especially in deletional types, patients have normal growth, and their first transfusion usually is in the second decade, while jaundice is uncommon [4]. In our case, pregnancy worsened the anemia, and the anemia aggravated the arrhythmia.
According to the hemogram, the father of the patient carries a silent a-thal, and the mother is an a-thal trait carrier who had no genetic counseling. The patient was found to carry the -a 3.7 /-(a) 20.5 deletions. The -a 3.7 (rightward) deletion was reported to be most common, and -(a) 20.5 deletion was the second most common in patients with Hb H disease in a study from Turkey [5]. The -a 3.7 /-(a) 20.5 genotype was also reported as one of the most common genotypes in cohorts from Turkey [5,6]. Moreover, the -a 3.7 and -(a) 20.5 deletions were reported to be common in all Mediterranean populations [1]. Hematological parameters of the patients with a combination of deletional mutations have higher Hb and lower Hb H levels than patients who have nondeletional types of mutations [4,5,7]. The relatively mild presentation of the case highlights the milder phenotypic consequences of deletional a mutations vs. nondeletional a mutations.
It has previously been reported that pregnancies from mothers with Hb H disease might end in a preterm birth, low birth weight, and growth restriction of the developing fetus at higher risk [8]. As maternal anemia may hamper fetal growth by hypoxia, it was suggested to maintain Hb levels higher than 10.0 g/dL [9]. Prenatal screening of the fetus and on-demand transfusion therapy should be implemented according to fetal growth. Additionally, a lower limit of Hb level (10.0 g/dL) may be set up for pregnancies without close follow-up or symptomatic cases, as in the presented case.
The case highlights the need for universal prenatal screening for hemoglobinopathies in countries with highrisk populations. Screening for hemoglobinopathies might be performed at the neonatal period, school age, before marriage, or after marriage as part of family planning [10]. The premarital screening program has been widely available in Turkey since 2018. Unfortunately, the diagnosis of this case was missed before the pregnancy was diagnosed. All women can be screened for hemoglobinopathies as soon as a pregnancy is confirmed to minimize the incidence of new cases. Prenatal screening could be as simple as a CBC and Hb pattern analysis; however, further diagnostic tests are required for a-thalassemias or Hb variants.
In conclusion, the incidence of a-thal is high in Turkey [3]. The case points out deficiencies in prenatal screening. The authors recommend universal prenatal screening for hemoglobinopathies in countries with high-risk populations. Diagnosis of the Hb H disease would prevent unnecessary iron replacement therapies, possible thalassemic baby births, and restore pregnant women and fetal health.
Disclosure statement
No potential conflict of interest was reported by the author(s). | 1,657.6 | 2020-03-03T00:00:00.000 | [
"Medicine",
"Biology"
] |
ERK5 and JNK Regulate the Expression of VCAM-1 Differentially in Insulin-and Tumor Necrosis Factor-α Stimulated Rat Aortic Endothelial Cells
Vascular Cell Adhesion Molecule-1 (VCAM-1) is a cell surface molecule that is expressed on vascular endothelial cells. One of the many functions of VCAM-1 is to interact with serum monocytes, causing them to adhere to the endothelium and transmigrate into the medial portion of the artery. Increased expression of VCAM-1 is part of the pathology of vascular inflammation and atherosclerosis. Insulin and Tumor Necrosis Factor-α (TNFα) are physiological stimulators of cell surface VCAM-1 expression and their intracellular signaling is regulated, in part, by intracellular kinases. We show here that Extracellular Signal-regulate Kinase-5 (ERK5) and c-Jun N-terminal Kinase (JNK) have opposing effects on insulin-and TNFα-stimulated VCAM-1 expression in Rat Aorta Endothelial Cells (RAEC). Using short hairpin ERK5 plasmid DNA and short hairpin JNK plasmid DNA we decreased the expression of ERK5 and JNK, respectively, producing ERK5 Knockdown (KD) and JNK KD cell lines when this hairpin-producing plasmid DNA were expressed in RAEC the following occurred: (1) the simultaneous expression of both ERK KD and JNK KD in the absence of insulin and TNFα increased the basal levels of cell surface VCAM-1, (2) expression of ERK KD decreased insulin and TNFα-stimulated VCAM-1 expression, (3) expression of JNK KD increased the expression of insulin and TNFα-stimulated VCAM-1 and (4) expression of ERK5 KD plus JNK KD stimulated an increase in both insulin and TNFα-stimulated VCAM-1 above that seen for positive controls alone.
Introduction
The integrity of the vascular wall is regulated in part by molecular factors in the serum. Hormones, growth factors and cytokines interact with the endothelium and cause physiological and pathophysiological events within the arteries. Atherosclerosis and inflammation are pathologic conditions of the vascular system and manifest in part as neo-intima formation, decreased arterial lumen diameter and occlusion of the arteries. Additionally, changes in the amounts of cell surface and internal proteins via their regulation play important roles in the physiology and the sequelae of pathophysiology of the vasculature.
Diabetes is a risk factor for cardiovascular disease. The pathology of diabetes is, but not limited to, increased serum insulin (hyperinsulinemia), glucose (hyperglycemia) and inflammatory cytokines. Hyperinsulinemia and increased presence of Tumor Necrosis Factor-alpha (TNFα) are among the many players of diabetes that contribute to the pathology of atherosclerosis. We and others have demonstrated that increased concentrations of insulin and TNFα stimulate increased total and cell surface expression of Vascular Cell Adhesion Molecule-1 (VCAM-1) [1,2]. The transduction of serum insulin and TNFα information into internal cellular events of the endothelial cells is regulated in part by internal cellular proteins of the kinase super family. Extracellular Signal-Regulated Kinase-5 (ERK5) and c-Jun N-terminal Kinase (JNK) are important kinase mediators of cell surface transduction proteins [3,4]. These two intracellular kinases are also important in cellular activities such as cellular proliferation and inflammation [5].
Here we show that ERK5 and JNK are positive and negative regulators, respectively, of insulin and TNFα stimulation of VCAM-1 expression in Rat Aorta Endothelial Cells (RAEC). Additionally, we report here that there appears to be cross-talk between ERK5 and JNK in the regulation of insulin and TNFα-stimulated surface expression of VCAM-1 on rat aorta endothelial cells.
SDS-PAGE and Western blot analysis
Sodium Dodecyl Sulfate Polyacrylamide Electrophoresis (SDS-PAGE) was performed using cleared lysates. Western blot analyses were subsequently performed as previously described, with the following differences [6]. After completion of protein transfer, membranes were washed in ultra-pure water for 5 min. Membranes were then incubated in 3% non-fat milk (milk) in Tris-buffered Saline (TBS) blocking solution for 1 hr at room temperature and then incubated with a designated primary antibody solution (1:1000 in 3% milk/TBS-T[0.1% tween]) overnight at 4 °C . Membranes were washed 4 times with TBS plus Tween (TBS-T) for 5 min at room temperature and then incubated with a goat anti-rabbit secondary antibody (1:5000 in 3% milk/TBS-T) conjugated to fluorochrome IR680RD for 1 hour at room temperature. Membranes were washed 4 times with TBS-T for 5 min each time at room temperature and then incubated with a rabbit anti-tubulin primary antibody solution (1:1000 in 3% milk/ TBS-T) for 3 hr at room temperature. After washing the membranes for 4 times with TBS-T, the membranes were allowed to dry before performing densitometry. Densitometry was performed using an Odyssey Licor system (Lincoln, NE, USA). Alpha-tubulin protein was used to normalize VCAM-1 signals.
Preparation of shRNA stable cell lines
RAEC were grown to 50-70% confluence in CGM in 6-well culture plates. Cells were transfected with shERK5 or shJNK inhibitory plasmids as previously described [7]. Cells were incubated in CGM containing 2 µg/mL of Puromycin (Sigma-Aldrich) for 2-3 weeks for selection of Puromycin resistant transformants.
Dual transfection of stable cell lines
To examine the effect of simultaneous ERK5 and JNK knockdown on VCAM-1 expression, the ERK5 shRNA stable cell line (ERK5 KD) was transiently transfected with shJNK plasmids and the JNK shRNA stable cell line (JNK KD) was transiently transfected with the shERK5 plasmid. These two protocols were carried out in order to see if any differences occurred with respect to transfection sequence. Stable cell lines were transiently transfected with shRNA plasmid DNA as described above and incubated for 5 hr with the DNA transfection mix. Subsequently the transfection mix was aspirated and replaced with 2.0 mL CGM with 2 µg/mL puromycin. Stimulation of cells by insulin and/or TNFα occurred 48 hours after transient transfection was accomplished.
Stimulation of VCAM-1 expression
RAEC were cultured in CGM, whereas shRNA stable cell lines (e.g., JNK KD and ERK5 KD) were cultured in CGM containing 2 µg/mL. Puromycin until assays were performed. After incubating the transfected cells for an additional 48 hr the cells were stimulated without or with insulin (10 nM) alone for 1hour or TNFµ (10 ng/mL) alone for six hours or in combination. Thereafter we evaluated for VCAM-1 expression by Western blot analysis and flow cytometry.
Flow cytometry
Mock transfected RAEC (control), stably transfected ERK5 KD RAEC or JNK KD cell lines were inoculated into 6-well tissue culture dishes, transiently transfected with vehicle, shJNK or shERK5 plasmids, respectively, and stimulated with insulin (10 nM) or TNFα (10 ng/mL) alone or combination as described above. The cells were washed twice with 2 mL of 1X PBS (Gibco). The PBS was aspirated and 0.5 mL of Cell Dissociation Solution Non-Enzymatic (Sigma-Aldrich, St. Louis, MO, USA) was added to each well. After incubating the cells at 37 °C and 5% CO 2 for 30 min, 1 mL of 1% Bovine Serum Albumin (BSA, Sigma-Aldrich) in PBS was added to the cells and then were gently triturated into a single cell suspension. The cells were transferred to 5 mL Falcon polystyrene round bottom tubes (Thermo Scientific) and centrifuged at 500 x g for 5 min. After aspirating the supernatants, the cells were resuspended in 3 mL 1% BSA, pelleted at 500 x g by centrifugation, and the supernatants were removed by aspiration. The cells were resuspended in 200 µL of 1% BSA. Two microliters of DyLight 488-conjugated anti-VCAM-1 antibody (Life Technologies, Grand Island New York, USA) were added to each tube and the cells were resuspended by vortexing. The cells were incubated in the dark for 30 min at room temperature. The cells were centrifuged, washed twice with 3 mL 1% BSA and resuspended in 200 µL of 1% paraformaldehyde (PFA, Electron Microscopy Sciences, Hatfield, PA). After incubating the cells for 5 min at room temperature, the cells were diluted with an additional 300 µL of PBS and analyzed using flow cytometry. The experiments were run on a BD LSRII (BD Biosciences, San Jose, CA). MFI and gating percentages as part of data analysis was done using BD FACSDiva v6 software.
Chamber slide cell preparation 2 x 10 5 of RAEC control (mock transfected), JNK KD or ERK KD stable cell lines were plated separately in 1 mL of CGM in each well of a 4-well chamber slide and allowed to grow for 24 h, 37 °C and 5% CO 2 . The medium was aspirated and 1.0 mL of fresh CGM was applied to the cells. ERK5 KD and JNK KD cells were mock transfected, or transiently transfected with shJNK or shERK5, respectively, for 48 hours. Cells were then treated with either vehicle, insulin (10 nM) or TNFα (10 ng/mL) alone or in combination. The medium was aspirated and the cells were washed three times with PBS and then incubated in 400 µL of 4% paraformaldehyde in PBS for 30 minutes. The medium was aspirated and washed three times with 1 mL of PBS. The final PBS wash was aspirated and 400 µL of a 1:1000 DyLight anti-VCAM-1 antibody soluton in 1% BSA was added to each chamber and incubated for 30 minutes at room temperature. The cells were then washed three times with 500 µL of 1% BSA. The chamber walls were removed and one drop of DAPI Mounting Medium was added Page -03 ISSN: 2475-5591 to each group of cells on the slide. Cells were then sealed with a glass cover slip using clear nail polish. Slides were kept in a dark refrigerator until microscopic visualization.
Confocal microscopy
A single, non-confluent monolayer of cells was imaged with a Leica TSC SP8X white light laser scanning confocal microscope (Leica Microsystems GmbH. Ernst-Leitz-Straße 17-37 Wetzlar, 35578 Germany). All image acquisitions were carried out using the Leica Application Suite X (version 1.1.0.12420, LASX AF). Excitation of the DAPI channel was accomplished using a 405 nm diode laser with an excitation intensity level of 2.67%. Emission signal was captured with standard PMT Channel 1 and an emission gap of 430 nm-480 nm. The Leica Supercontinuum white light excitation laser line (488 nm) with 3 % intensity level was used to for Alexa fluor 488. Emission signals were captured with the Leica HyD 2 detector (Hybird 2 PMT) with an emission gap of 505 nm-555 nm.
Data analysis
Data were analyzed by either unpaired Student's t test (two groups) or ANOVA with subsequent Tukey posttest (several groups) as indicated. A "P" value of less than 0.05 was considered significant. Results were expressed as the mean ± Standard Error of the Mean (SEM) of three or more independent experiments.
Results
We have previously shown that clone #1 (from clones 1-4) of short-hairpin (sh) ERK5 had the best efficacy of downregulating total ERK5 protein [8]. In this report we demonstrate that clone #2 of shorthairpin (sh) JNK was the most potent clone to affect a significant decrease of total JNK protein ( Figure 1). Although we have previously shown kinase knock-down studies of ERK5 and p38 in Western blot analyses and flow cytometry, we have not demonstrated the effects of shERK5 and shJNK on insulin-and TNFα-stimulated VCAM-1 cell surface expression in Rat Aorta Endothelial Cells (RAEC) [8].
In order to accomplish dual transfection experiments we needed to demonstrate that transfection of both short hairpin RNA clones (i.e. shERK5 and shJNK) in the same cells would elicit dual downregulation of each kinase (Figure 2). Here we show that total ERK5 protein was down regulated in the presence of the single transfection of shERK5 and JNK protein was down regulated in the presence of shJNK. Additionally, we demonstrated that the dual transfection of shERK5 and shJNK down regulated both ERK5 and JNK simultaneously. Graph represents the percent of controls of total ERK5 and JNK proteins expressed without or with shRNA and results are expressed as the mean ± SEM for three independent experiments. ERK5 protein (gray bars) and JNK protein (open bars)*, P < 0.05 versus mock-transfected controls (CON).
ISSN: 2475-5591
We have previously demonstrated that Western blot analysis (i.e. total protein content) does not always reflect changes in the amounts of cell surface protein [7]. Thus, we measured changes only in surface VCAM-1 protein by flow cytometry with respect to insulin and/or TNFα stimulation in the absence and presence of shERK5 and shJNK. We first measured VCAM-1 cell surface expression in quiescent cells and cells stimulated with insulin or TNFα alone or in combination in order to determine baseline negative and positive controls, respectively. Insulin alone did not significantly increase surface VCAM-1 expression above that seen for negative controls (Figure 3). In contrast we observed significant (P < 0.05) increases in surface VCAM-1 in the presence of TNFα alone or in combination with insulin as compared to negative controls.
We then measured changes in surface expression of VCAM-1 in RAEC transfected with shERK5 alone or in cells transfected with shERK5 first and then in combination with shJNK as well as in the absence or presence of insulin or TNFα alone or in their combination ( Figure 3). Cells stably transfected with shERK5 (ERK5 KD) and treated with insulin alone exhibited a small, but insignificant (P = 0.06) decrease in cell surface VCAM-1 as compared to positive non-transfected controls. Interestingly, ERK5 KD cells exhibited a significant (P < 0.05) decreased VCAM-1 expression in TNFα and TNFα plus insulin-stimulated cells compared to mock transfected positive controls. In comparison, ERK KD cells transfected with shJNK and then treated with insulin or TNFα alone or in combination exhibited a significant (P < 0.03) increase in cell surface VCAM-1 expression above stimulated positive controls. Interestingly, VCAM-1 expression in non-stimulated ERK5 KD plus shJNK transfected cells was significantly (P < 0.05) greater than that seen in non-stimulated negative controls. These last results suggested that VCAM-1 expression is constitutively down regulated in part by ERK5 and JNK, such that the absence of both kinases and in the absence of stimulation, VCAM-1 expression would be upregulated.
In order to verify that transfection sequence was not an issue, we repeated the above experiments with RAEC transfected with shJNK alone and then followed by transfections with shERK5. In comparison to experiments in Figure 3, we noted a number of interesting differences and analogies in VCAM-1 expression profiles. First, cells transfected with shJNK alone (i.e., JNK KD) and not stimulated with insulin or TNFα, exhibited a slight, but insignificant (P = 0.08) increase in surface VCAM-1 proteins. Second, cells transfected with JNK KD and shERK5 in the absence of stimulus exhibited increased VCAM-1 expression above mock transfected controls suggesting, again, a release of a constitutive negative regulation on VCAM-1 expression. Third, JNK KD cells stimulated with either TNFα alone or in combination with insulin exhibited a significant (P < 0.05) increase in VCAM-1 expression as compared to positive nontransfected controls. Fourth, JNK KD plus shERK5 cells stimulated with insulin or TNFα alone or in combination exhibited a significant (P < 0.05) increase in surface VCAM-1 expression as compared to cells transfected with shJNK alone (Figure 4).
Finally, in order to visualize changes in cell surface expression of VCAM-1 in the presence or absence of insulin and/or TNFα in cells without or with double transfections, we performed immunofluorescence confocal microscopy ( Figure 5). Here we observed that the combination of shJNK and shERK5 in RAEC exhibited the greatest increase in VCAM-1 surface expression in the presence of insulin and TNFα.
Discussion
We report here that ERK5 and JNK regulate insulin and TNFαstimulated rat aorta endothelial cell (RAEC) surface VCAM-1 expression. In particular ERK5 appears to be a positive regulator of insulin and TNFα-stimulated VCAM-1 expression whereas JNK seems to be a negative regulator of insulin and TNFα-stimulated VCAM-1 expression.
To reiterate, RAEC with less JNK protein (JNK KD and JNK KD + shERK5) exhibited increased VCAM-1 expression even in non- stimulated cells. This suggests that JNK may be an internal negative control of VCAM-1 expression even in quiescent conditions. Second, ERK5 KD transfected cells in the presence of shJNK cells exhibited an increase in VCAM-1 surface expression. This suggests that (1) ERK5 has less influence on VCAM-1 expression than JNK and (2) in the absence of JNK, shERK5 cells express more VCAM-1 because of the activation of other important kinases may be upregulated ( Figure 6). Thus, disinhibition of ERK5, along with the disinhibition of JNK appears to cause an even greater expression of VCAM-1.
There have been a vast number of clinical research studies that have measured differences in the amounts of expressed VCAM-1 between control participants versus patients with atherosclerosis. Two such studies have demonstrated that there is no correlation between VCAM-1 expression and vascular disease. Kilic et al. were unable to demonstrate a correlation between aortic stiffness and VCAM-1 levels [9]. Additionally, Hwang et al. observed that circulation levels of VCAM-1 were not significantly different in Coronary Heart Disease (CHD) patients than that measured in control patients [10].
In contrast, a number of clinical studies have determined a correlation between vascular inflammation and VCAM-1 expression. Three studies in particular demonstrated a strong association between VCAM-1 expression and atherosclerosis. Mu et al. demonstrated that patients with atherosclerosis expressed an increase in VCAM-1 on endothelial cells and in the tunica media region noted in immunohistochemically stained explanted vascular tissue [11]. Additional studies have shown that there is a strong association between VCAM-1 expression and intimal leukocyte accumulation and atherosclerosis [12,13]. In addition to the latter group, others have demonstrated that introduction of Maritime Pine Bark Extract, Metformin and Atorvastatin have affected a decrease in VCAM-1 expression [14][15][16]. All of which decreased the detrimental effects of CVD.
There are many clinical research trials that are not favorable to the correlation between atherosclerosis and VCAM-1 expression.
Yet, there appears to be a larger number of studies that illustrate a very strong relationship between vascular pathology and VCAM-1 expression. In addition, the uses of both homeopathic and Western medicine applications have indicated that VCAM-1 expression can be regulated. Obviously, more research is needed in this area of human health and pathology.
Cross-talk between internal kinases appears to be a common thread in cellular physiology [17,18]. In some examples, the kinases are redundant in their regulation; that is, both are positive or both are negative regulators. Each is present to ensure that the transduction of outside stimulators is transmitted into the same internal events. In contrast, many kinases have opposing controls of cellular activities, thus modulating many signals to regulate internal events. The transduction of external signals followed by internal kinase modulation is important to the overall "health" of the cell. Balancing the myriad of external signals into coordinated cellular functions is a complicated feat for a cell.
Diabetes is associated with inflammation and atherosclerosis [19,20]. Among the many sequelae of diabetes is insulin resistance and vascular pathophysiology [21,22]. Insulin's role in inflammation and atherosclerosis is a hotly debated topic [23,24]. In contrast, increased serum and cell surface presence of TNFα is a well-accepted theory of vascular inflammation and increased VCAM-1 expression [4,25].
Our recent studies give thought and evidence that postulate that ERK5 and especially JNK are excellent candidates for molecular therapeutic targets and may be gateways to preventative interventions of vascular inflammation and atherosclerosis. | 4,379.2 | 2017-01-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Nuclear Safety of RBMK Storage Pool under Seismic Impact
Nuclear safety of RBMK storage pool of spent fuel during of the maximum design earthquake is evaluated. The lower ends of the fuel assemblies are not fixed and they can deviate from the vertical position. The seismic action may be one of the reasons for such deviations. 3D model of fuel assemblies movements caused by seismic impact is used. The simulation of the dynamics of a fuel assemblies group under seismic impacts allows to find the dangerous configuration of closest approach of the fuel assemblies. Three-dimensional neutron program STEPAN calculates the Keff of the most dangerous systems. The maximum design earthquake is the design basis accident. In this case according to the regulatory documents the fuel is considered with zero burn-up. Nuclear safety of RBMK storage pool under considered conditions is provided.
1.Introduction
The RBMK spent fuel pool is located directly in the reactor hall. The pool consists of two concrete compartments. Each compartment is covered with corrosion-resistant steel plates and is 10.3 m long, 4.2 m wide and of 17.3 m deep. The compartment is filled with water up to 16.6 m and closed with caps. Fuel assemblies are hanged on the 2 m long cantilever beams. The distance between the centers of the beams is 0.25 m. There is a canyon in the middle of the compartment between the ends of the beams. The canyon is used to transport fuel assemblies under water to their place of storage. In the initial project it was planned to put the fuel assemblies into steel pipes. But to increase the capacity of the pool 2.2 times the fuel assemblies are placed now without the pipes by a triangular lattice. The lower ends of the fuel assemblies are not fixed and they can deviate from the vertical position. The seismic action may be one of the reasons for such deviations.
According to the regulations nuclear safety during storage and transportation of nuclear fuel is provided when the neutron multiplication factor Keff is less than 0.95 in normal operation, in case of violations of normal operations and in case of design basis accidents. The Keff≤0.95 criterion is not required for beyond design basis accidents. It is sufficient to show that a self-sustaining fission chain reaction is impossible. For beyond design basis accidents regulatory documents require a realistic (non-conservative) analysis. This means that the actual fuel burn-up should be considered in the calculations.
The maximum design earthquake is the design basis accident. In this case the burn-up credit can be used if a special device for the measurement of fuel burn-up is available. Otherwise, all of the fuel in the pool should be considered as fresh (with zero burn-up). The conservative static analysis has shown that if a lot of assemblies with fresh fuel deviate from the vertical and their lower ends are in contact, the criterion Keff≤0.95 may be violated. That brought up the task of modeling of the actual behavior of a fuel assemblies group during an earthquake.
Fuel assemblies motion simulation
The problem can be divided into two parts. The first part is the simulation of the dynamics of a fuel assemblies group under seismic impacts with regard to their collisions and water resistance. This simulation allows to find the dangerous configuration of closest approach of the fuel assemblies (FA).
In the second phase the three-dimensional neutron program calculates the Keff of the most dangerous systems. Seismic action moves the pool with cantilever beams. Since the FA fixed only at the top, they start to oscillate like a pendulum relative to the walls and bottom of the pool due to the inertia. When the horizontal oscillations occur, the lower part of the water at the depth of more than ¾ of the pool width is considered to be rigidly connected with the walls, i.e. it moves the same way as the pool itself [1]. Above the level of ¾ of the pool width there is a complex movement of water, including the possibility of wave oscillations of liquid free surface. Water that moves together with the pool walls resists the movement of the FA. Fuel assemblies are fixed to the beams by the thin hangers. Resistance of water to hangers can be neglected. To simplify the task the bending of the fuel assemblies is not considered. Figure 1 shows a scheme of the fuel assembly with the hanger. The hanger consists of two parts 14 and 36 mm diameter. As a result of the seismic action assembly deviates from the vertical by an angle α, which leads to displacement of the lower part of FA by a distance X. Water resistance (force) for dz element of moving fuel assembly at a height z is determined by the formula: (1) where C x (z) -drag coefficient; ρ -water density; μ -added mass of water; u(z), and a(z) -velocity and acceleration at the height z; D, L -diameter and length of the fuel assembly or hanger. The drag coefficient is determined according to the formula C x = cRe -b k [2], where c -constant depending on the geometry; Re -Reynolds number; b -exponent, k -correction factor that takes into account the non-uniformity of movement and the mutual influence of the fuel assembly.
From dynamic equation considering the moments of all the forces, one can get an equation that describes the behavior of a harmonic oscillator with a nonlinear damping and external disturbing force f (force of inertia): where X -deviation of lower part of FA from of the vertical axis; β -coefficient determined by the geometry of the assembly and the viscosity of water; ω 0 -the natural frequency of the assembly with added mass of water. The movement of the lower part of each assembly, composed of the oscillation in X and Y directions, is described by equation (2). All assemblies move in the same way before the collision with the walls or other assemblies. Distance between them is almost constant. The collision of the FA lower parts with the wall and its reflection cause a chain reaction of mutual collisions. As a result regular lattice is broken. In some places the lower parts of fuel assemblies form a compact group with a reduced lattice pitch that leads to an increase of the neutron multiplication factor.
Evaluation of the neutron multiplication factor
STEPAN-БВ code is used for the analysis of nuclear safety. This code is a modification of a wellknown STEPAN code [3] that is certified for calculations of RBMK. Changing lattice pitch in the horizontal direction for the layers with different heights allows to simulate the FA deviation from the vertical axis in emergency situations. Two-group system of constants has been prepared for the threedimensional calculations. The constants depend on fuel burn-up, lattice pitch, water density and temperature. Calculation model was verified by comparison with the precision calculations by the Monte Carlo MCNP code [4]. Keff calculation example for different types of lattice is shown in the table 1. Pool fragment 20x14 FA is shown in figure 3. The pool walls are to the left and at the bottom of the scheme. Canyon is at the top of scheme. The figure 3 shows the moment of closest approach of the FA lower parts. In the initial state before the earthquake Keff=0.790 for 2.6% enrichment fuel with a zero burn-up. For the configuration shown in figure 3 Keff=0. 937. Nuclear safety criterion is ensured. Safety can be improved by leaving the left row empty (free of fuel assemblies). In this case Keff=0.823.
Conclusion
Presented calculations have shown that nuclear safety of storage pool RBMK in the maximum design earthquake of 7 points is provided. | 1,794.8 | 2017-01-01T00:00:00.000 | [
"Geology"
] |
Einstein-Bose condensation of Onsager vortices
We have studied statistical mechanics of a gas of vortices in two dimensions. We introduce a new observable---a condensate fraction of Onsager vortices---to quantify the emergence of the vortex condensate. The condensation of Onsager vortices is most transparently observed in a single vortex species system and occurs due to a competition between solid body rotation (c.f. vortex lattice) and potential flow (c.f. multiple quantum vortex state). We propose an experiment to observe the condensation transition of the vortices in such a single vortex species system.
I. INTRODUCTION
Perhaps the most astonishing aspect of turbulence is not the complexity of its dynamics but rather that it feeds the emergence of ordered structures out of chaos. The ubiquity of large eddies in two-dimensional fluid flows was also noted by Onsager who suggested that it might be possible to obtain a statistical mechanics description of hydrodynamic turbulence of two-dimensional flows based on discrete collections of point-like vortex particles [1]. In particular, Onsager predicted that turbulent two-dimensional systems could support large scale clustered vortex structures, later coined Onsager vortices, and that such structures would correspond to negative absolute temperature states of the vortex degrees of freedom [2]. Notwithstanding the negative absolute Boltzmann temperature states were observed in nuclear spin systems [3][4][5] soon after Onsager's theoretical prediction, and more recently in the motional degrees of freedom of cold atoms confined in optical lattices [6], the negative temperature Onsager vortex states are yet to be uncovered in their original context of two-dimensional (super)fluid turbulence [7].
Kraichnan developed the theory of two-dimensional turbulence further, conjecturing that a scale invariant inverse energy cascade mechanism of incompressible kinetic energy could dynamically lead to the formation of Onsager vortices and even to their condensation and that: "The phenomenon is analogous to the Einstein-Bose condensation of a finite two-dimensional quantum gas" [8]. In the Kraichnan model the system scale Onsager vortex clusters would emerge due to a termination of the inverse cascade that accumulates energy at ever larger spatial scales. Ultimately, such a process could potentially lead to the condensation of the Onsager vortices, which correspond to the highest accessible energy states of the vortex degrees of freedom [8,9]. In a neutral system with N tot vortices in total, the condensation of Onsager vortices occurs at a critical negative temperature T EBC = −αN tot /4 [10][11][12], where α = ρ s κ 2 /4πk B = T HH is the critical positive temperature for the Hauge-Hemmer pair-collapse transition [13,14], which in the case of non-zero vortex core size becomes renormalised to the Berezinskii-Kosterlitz-Thouless (BKT) critical temperature T BKT = T HH /2 [15][16][17]. Here k B is the Boltzmann constant, ρ s is the (super)fluid density and κ = h/m is the circulation quantum with h the Planck's constant and m the particle mass. Inspired by Kraichnan's insight [8,9], we refer to the critical temperature of condensation of Onsager vortices with the acronym (EBC), which stands for Einstein-Bose condensation and in the case of zero-core point vortices is also known as supercondensation [9].
The recent developments of imaging and manipulating compressible superfluids has sparked renewed interest in Onsager's statistical hydrodynamics theory of turbulence. Experiments employing harmonically trapped Bose-Einstein condensates of atoms have ranged from studies of dynamics of vortex dipoles [18] or few vortices [19] to three- [20] and two-dimensional [21][22][23][24] quantum turbulence. Moreover, uniform atom traps are becoming increasingly popular [25][26][27][28][29][30][31][32][33][34] and will be particularly useful for studies of quantum turbulence. This is partly because well defined trap walls enhance the vortex clustering signal in comparison to harmonically trapped systems [12,22,35,36]. In the latter case, strong clustering has not been observed although in both cases Onsager vortices in decaying two-dimensional quantum turbulence has been predicted to emerge via an evaporative heating mechanism of vortices [12,35].
The successes of the recent experimental developments have also spawned novel theoretical investigations [12,[35][36][37][38][39][40][41][42][43][44][45][46]. In addition to visual inspection, the presence of Onsager vortices has been associated with indicators such as the vortex dipole moment [12,35], vortex clustering measures [37,38,41], or a peak in the power spectral density of incompressible kinetic energy [12,39,47]. However, a measurable that would quantify the degree of condensation of the vortices as opposed to their clustering, has been lacking. Here we use a vortex-particle duality to define a condensate fraction that enables quantitative measurements of condensation of Onsager vortices in these two-dimensional systems [48]. We have implemented a vortex classification algorithm based on the prescription by Reeves et al. [38], which can be used as a quantitative measure of vortex clustering. Together with the condensate fraction measurable introduced here that uniquely identifies the condensate of Onsager vortices, these two observables enable acquisition of detailed information on clustering and condensation of vortices. We find that the condensate fraction exhibits universal arXiv:1612.02930v3 [cond-mat.quant-gas] 13 Feb 2018 behavior independent of the number of vortices in the bounded circular system. In contrast to condensation, clustering of vortices is present at all negative temperatures in the sense that the total number of vortices belonging to vortex clusters of varying size is greater than zero [46]. Vortex clustering is a precursor to the condensation of Onsager vortices and is reminiscent of the quasicondensation that precedes the superfluid phase transitions in low-dimensional quantum gas systems [49][50][51][52].
II. VORTEX-PARTICLE DUALITY
We first consider N tot singly quantised point-like vortices with a hard core of radius ξ and equal numbers of clockwise and counter-clockwise circulations confined in a circular disk of radius R • , unless stated otherwise. The pseudo-Hamiltonian describing our system is [12,53]: where r 2 j = x 2 j + y 2 j and x j and y j are the dimensionless Cartesian coordinates of the jth vortex measured in units of the system radius R • and s j = ±1 determines the circulation direction of the jth vortex. The first, singlevortex, logarithmic term is due to the interaction of each vortex with its own image, the second represents the pairwise two-dimensional Coulomb-like interaction between ith and jth vortex separated by distance r ij and the last term, due to the circular boundary, represents the interaction of system vortices with the images of all other vortices.
The dynamics of the point-like vortices are determined by the equations of motion [1] hs j ∂x j ∂t = ∂H ∂y j and hs j ∂y To draw a closer correspondence with Hamiltonian mechanics, we may assign for each vortex a canonical coor- where m v is the vortex mass [54] and ω 0 is an angular frequency. Thus the set of vortex coordinates {x j , y j } in the real space are mapped onto points in the phase space {q j , p j } spanned by the canonical conjugate variables. In this Hamiltonian description the vortex particles move in one-dimensional real space tracing out orbits in the twodimensional phase space, which is bounded by the circular wall of radius R • . Equation (3) establishes the vortexparticle duality-that a vortex in a two-dimensional (2D) fluid may behave as a particle in a one-dimensional (1D) space.
Motivated by the vortex-particle duality and in contrast to Kraichnan's conjecture, we anticipate the condensation of Onsager vortices to be analogous to the condensation of a finite one-dimensional quantum gas.
Interestingly, in the 2D fluid picture the vortex condensate corresponds to maximum kinetic energy states of the fluid whereas in the 1D dual picture the condensate corresponds to zero momentum state of the 1D vortex particles.
III. IDEAL VORTEX GAS APPROXIMATION
By ignoring the vortex-vortex interactions we obtain an ideal-gas model of vortex particles. A Maclaurin series expansion of the single vortex term in Eq. (1) with respect to r j formally yields a one-dimensional harmonic oscillator Hamiltonian with an inverted energy spectrum with respect to the canonical case.
Within the harmonic approximation, a single vortex v of this system will travel along a periodic phase space orbit The Einstein-Brillouin-Keller semiclassical quantisation rule [55] p where n is the principal quantum number and k is the Keller-Maslov index then evaluates to where we have integrated over one period, T = 2π/ω v , of the vortex orbit. The one-dimensional oscillatory motion has two classical turning points, k = 2, and therefore the quantisation rule, the combination of Eqs (7) and (8), yields the energy spectrum E n = (n + 1 This implies a minimum semi-axis min(R v ) = ξ for the vortex trajectories and yields the zero-point en- In correspondence with the Heisenberg uncertainly relation, ∆q∆p /2, the zero-point energy carries the information that the area A of the phase space is quantised in units of = m v ω 0 ξ 2 . This reflects the fact that it is not possible to localize the position of the vortex inside an area smaller than the vortex core. Consequently, any zero-core point vortex model with ξ = 0 violates Heisenberg uncertainty principle and fails to correctly describe the physics of the condensate of Onsager vortices. It is therefore paramount to introduce a non-vanishing vortex core size in order to describe physics of the low entropy states with T /T EBC < 1.
IV. INTERACTING VORTEX GAS APPROXIMATION
The velocity field induced by the vortices mediates strong vortex-vortex interactions such that the idealvortex approximation is strictly only valid for one vortex near the centre of the disk. However, the second term in Eq. (1) may be approximated as a mean-field potential by integrating out the spatial scales smaller than the intervortex spacing.
A neutral superfluid that locally rotates at an orbital angular frequency Ω with N v vortices of the same sign mimics the rotation of a classical fluid by having an areal vortex density Hence, the mean superfluid velocity is v(r) = Ωr, where r is the distance measured from the centre of such a rotating cluster of vortices with radius R. In contrast, in a high-winding number vortex with N v circulation quanta, the superfluid velocity field v(r) = m Nv r is a gradient of a scalar phase function. In general, the velocity field is therefore which is a combination of solid body rotation for r < R * and potential flow for r > R * , where R * is the radius of the vortex cluster. The kinetic energy associated with such a flow field may therefore be approximated by a mean-field interaction The final term in the point vortex Hamiltonian, Eq. (1) describes the remaining interaction with image vortices and yields an energy shift Combining Eqns (6), (14), and (15), we thus arrive at the effective 1D vortex-particle Hamiltonian, that describes a system of one-dimensional strongly interacting harmonic oscillators. It may be worth pointing out that the two forms of Eq. (14) have quite different interpretations. The first line is a long-range interaction of the vortices in 2D, whereas the last line is a strong contact interaction between 1D vortex particles with a coupling strength that is running with the energy scale set by the radius R * of the cluster. In the Tonks-Girardeau-like limit of R * → 0 the effective coupling constant g 1 ∝ ln R• R * → ∞ and Eq. (18) reduces to a semi-classical version of the Lieb-Liniger model [56]. Figure 1 shows the independent contributions of the three terms in the Hamiltonian, Eq. (1), for a system of 100 like-signed vortices as functions of reduced temperature. The details of this calculation are described in Sec. VIII. For comparison, the energy contributions due to the harmonic oscillator and mean-field approximations, Eq. (6) and Eq. (14), respectively, are shown by dashed lines. The harmonic oscillator approximation, Eq. (6), is better at lower reduced temperatures because the vortices clump close to the centre of the disk. However, since the mean-field term, Eq. (14), is proportional to N 2 v , it is overwhelmingly larger than the single vortex terms, which are proportional to N v . These results establish that the mean-field Hamiltonian Eq. (18) is a reasonable approximation for Eq. (1).
V. FRACTION OF CONDENSED VORTICES
On the basis of the vortex-particle duality, we anticipate condensation of Onsager vortices when the phase space density n v λ v 1. Here n v is the one-dimensional mean vortex density and is the thermal vortex de-Broglie wavelength, which in the vortex dual is inversely proportional to the size of an average temperature-dependent vortex orbit in the phase space. For N * vortices confined within length 2R * the condensation criterion becomes πN * ξ 2 / R v R * ∼ 1, which shows that condensation is expected when the vortices concentrate into a phase-space cluster with size of the order of ξ √ N * . These considerations lead us to define the fraction of condensed vortices as the ratio, N 0 /N , of N 0 vortices of a given sign in a single many-vortex cluster to the total number of vortices N of that same sign in the system. The highest density of vortices is found within clusters and by denoting N * to be the number of vortices in the largest cluster, which will be the first to condense, and A 0 = N * = N * m v ω 0 ξ 2 and A * = N * m v ω 0 r nn 2 to be, respectively, the minimum possible phase space area occupied by the N * vortices and the phase space area actually covered by them, we obtain Thus the condensate fraction is the product of the largest cluster fraction N * /N and the square of the ratio of single vortex core radius ξ to the mean radius r nn of the effective area occupied by a vortex within the cluster, where r nn is one half of the distance between the centres of nearest neighbour vortices in such a cluster. Although for single vortex species systems N * /N = 1, in general, the system contains both vortices and antivortices and to measure N * < N in such systems, clusters of like-signed vortices must first be identified by a vortex classification algorithm.
VI. VORTEX CLASSIFICATION ALGORITHM
To quantitatively study clustering and condensation of vortices we have implemented a vortex classification algorithm based on the prescription by Reeves et al. [38]. We assign each vortex in a given configuration of N vortices a unique and arbitrarily chosen label from the set {v 1 , v 2 , . . . , v N }. The vortex configuration is then described by a corresponding set of positions {z 1 , z 2 , . . . , z N } (in two-dimensional complex coordinates, where z j = x j + iy j ) and circulation signs {s 1 , s 2 , . . . , s N }, which here take the value s j = ±1, denoting clockwise or anti-clockwise circulation. The algorithm does not prioritise any vortex and yields the same classification outcome regardless of the choice of vortex labelling. Figure 2 shows an example configuration of twelve judiciously numbered point vortices. The vortex classification algorithm is outlined below.
A. Step 1: Find dipole and cluster candidates
For each vortex v j , we locate the nearest opposite sign (NOS) vortex and label it as (v NOS ) j [i.e. the nearest vortex which satisfies s j (s NOS ) j < 0]. We define the distance to this vortex to be (R NOS ) j ≡ |z j − (z NOS ) j |. Vortices are drawn in blue, while antivortices are drawn in green. In panel (a), dashed circles are drawn centered on v1 and v10, denoting the respective distances (RNOS)j to the nearest opposite signed vortex. Because v2 is the closest vortex to v1 and is of opposite sign, v2 is labelled as a dipole candidate for v1. Vortex v10, on the other hand, is closer to v9, v11 and v12 than it is to v7; hence, these three vortices become cluster candidates for v10. The lines joining clustered vortices in (b) are drawn using a minimum spanning tree algorithm, which is applied once all vortices have been labelled into the sets of clusters, dipoles or free vortices.
We then check to see if any other vortices (which are same-sign, by necessity) fall within the disk of radius Fig. 2(a), v 2 is labelled as a dipole candidate for v 1 ].
(ii) Clusters: If there are n j ≥ 1 vortices which are nearer to v j than (v NOS ) j , then these are labelled as Table I, and (b) drawn directly onto the example vortex configuration from Fig. 2. An arrow is drawn from each vortex vj to all the members of its candidate list lj. Only when arrows point in both directions between vj and v k are they defined to be mutual neighbours. All arrows that are onedirectional have been crossed out in both panels. In panel (b), dashed circles are drawn around clusters (blue/green for positive/negative), dipoles (red) and free vortices (black).
cluster candidates for v j [e.g. vortex v 10 in Fig. 2(a), for which v 9 , v 11 and v 12 are cluster candidates].
Each vortex v j now has a corresponding set of candidate vortex labels, which we denote by l j . For case (i), l j consists of a single opposite sign vortex, which is a dipole candidate. For case (ii), l j is a list of n j same-sign cluster candidates. Table I below displays the lists l j that are constructed in Step 1 of the algorithm when it is applied to the configuration shown in Fig. 2.
Step 2: Find mutually agreeing candidates In the second step of the algorithm, the lists l j are checked sequentially for mutual members. This process is shown schematically in Fig. 3 for the example configuration shown in Fig. 2 and Table I. l k that does contain v j , the two vortices v j and v k are labelled as belonging to the same cluster (e.g. in Fig. 3, vortex v 4 "checks" both l 3 and l 5 to see if it is a member of either. It is found to be a member of both, so all three vortices are placed in a single cluster). For each list l k that does not contain v j , neither vortex label is updated (e.g. vortex v 7 and v 10 in Fig. 3). Note that not all members of a single cluster have to be mutual candidates of one another.
In the example shown in Fig. 3, v 9 is only a mutual neighbour with v 10 , but is still placed in the same cluster as v 11 and v 12 . As the algorithm proceeds, vortices may be assigned to existing clusters, or previously classified clusters may become merged.
Any vortices left unclassified after this process are classified as free vortices, as they have no mutual dipole or cluster neighbours (e.g. vortex v 6 in Fig. 3).
In Fig. 3(b), any two vortices that are connected by a two-directional link are part of the same cluster or dipole, while any vortex that has no two-directional links is a free vortex.
To reduce computation, the checking of mutual candidates can be restricted such that it is only initiated for v j and v k if j > k. Alternatively, once a pair of vortices has been checked, then v j could be removed from l k and vice versa.
VII. TWO-SPECIES MONTE CARLO RESULTS
To study the thermodynamics of the condensation of Onsager vortices, we have performed Monte Carlo calculations using a Metropolis algorithm to find the equilibrium vortex configurations as functions of temperature for systems with 10, 20, 50, 100, 200, 300 and 400 vortices [11,12]. The Monte Carlo calculations, and the conclusions drawn from the results, are obtained using canonical ensemble with hard core vortex core regularization. A hard core diameter of 2ξ = 0.001 R • was imposed on each vortex in the results presented. The Monte Carlo samplings were performed for temperature in the range T ∈ (−∞, −0) with 10 6 microstates at each temperature after initial burn in of 10 6 steps. Out of the 10 6 microstates, 1000 uniformly spaced configurations were recorded and used for vortex classification analysis. Figure 4 shows typical vortex configurations of disordered and strongly clustered neutral vortex states of N tot = 200 vortices obtained from the Monte Carlo calculations at different temperatures. The same sign clusters, dipoles and free vortices are identified using the vortex classification algorithm and the velocity field stream lines are included to visualise the superflow around the vortices. Figure 4 (a) shows a vortex configuration at a high negative temperature T = 10 6 T EBC revealing a fairly disordered configuration of vortices with an abundance of vortex dipoles and small clusters. Figure 4 (b) shows a vortex configuration at T = 1.022 T EBC close to the critical temperature. In Fig. 4 (b) nearly all the Figure 5 shows (a) the largest cluster fraction, (b) the condensate fraction, and (c) the mean radius of the largest cluster in the system as functions of temperature in units of the critical temperature T EBC = −0.25αN tot . The largest cluster fraction Fig. 5(a) is strongly dependent on the total number of vortices in the system. In contrast, the condensate fraction, shown in Fig. 5(b), remains zero at all temperatures |T | > |T EBC | and thereafter increases as the absolute negative zero is approached. Figure 5(c) shows the mean radii of the largest vortex clusters as functions of temperature. As the critical temperature is approached from the disordered side, the largest cluster tends to grow in size as ever more vortices are joining the cluster. In the condensed phase the cluster rapidly shrinks as the phase-space density, and hence the condensate fraction, increases. Importantly, the condensate fraction shows universality in the sense that it is consistent with data collapsing onto a single curve, indicating the condensate fraction becoming a vortex number independent quantity in the large vortex number limit.
With the ability to quantify the condensation of Onsager vortices, we have revisited the dynamical meanfield simulations of Ref. [35]. Figure 6 shows a typical result revealing that in this neutral vortex system, the largest cluster fraction and vortex dipole moment are practically equivalent observables. However, although the system is continually evaporatively heated, the condensate fraction remains zero for all times. The initial vortex number in this simulation is 100 and it decays to the final value of 12. Comparing the largest cluster fraction in Fig. 6 with Fig. 5(a) shows that this system is initially at temperature |T | |T EBC | and evaporatively heats reaching a final temperature of |T | |T EBC |. Quantitatively, the temperature of the vortex system can be found using the vortex thermometry based on the fraction of clustered vortices in the system [46]. However, once the system becomes fully clustered, the evaporative heating mechanism switches off [12] and the condensation is unable to proceed.
VIII. ONE-SPECIES MONTE CARLO RESULTS
Clustering of vortices and their condensation are two separate phenomena. Vortex clusters exist at all negative temperatures [46], where as non-zero condensate fraction only exist in the temperature range 0 > T > T EBC . To demonstrate this more clearly, we have performed Monte Carlo calculations for a charge-polarised, Nv i=1 s i = N v , case where only one species of vortices is present in the system. Figure 7 (a)-(c) shows the vortex configurations at three different temperatures. These vortex configurations illustrate the fact that the vortex positions suddenly collapse when the radius of the host Onsager vortex cluster drops below a critical value, R c . The transition illustrated in Fig. 4 corresponds to independent condensation of two species of vortices at the same temperature due to the equal numbers of vortices and antivortices. In vortex number imbalanced systems there are two, vortex number dependent, critical temperatures T maj = −αN maj /2 and T min ≈ −αN min /2.
The critical temperature for the condensation of an Onsager vortex in a single vortex species system may be predicted by a similar free energy argument as for two vortex species systems [10,12]. The Helmholtz free energy, F = E − T S, of a vortex configuration where all N v vortices are concentrated inside a circular region of where the energy E is that of a multiply quantised vortex of core radius R * and the entropy S is obtained as the logarithm of a statistical weight of the configuration. A change in the sign of the free energy signifies that the probability p F ∝ e −F/kBT of observing such a configuration becomes exceedingly likely and predicts a critical temperature which is the same for a single species system with N v vortices as it is for a two-species system with the same number, N v = N tot /2, of vortices of one species.
In the general imbalanced case with N + vortices and N − antivortices with N tot = N + + N − = N maj + N min and N maj > N min there are two critical temperatures corresponding to separate condensation of each of the two vortex species. When the temperature approaches negative zero, the majority species condenses first at T maj = −αN maj /2, followed by the condensation of the minority species at T min ≈ −αN min /2, where the latter is shifted slightly toward negative zero due to the interaction with the condensate of the majority species.
The condensation of Onsager vortices may be viewed from the point of view of competition between solid body rotation within the core of the vortex cluster and potential flow outside the cluster, see Eqns (11). Balancing the kinetic energy contributions of these two velocity fields in the mean-field interaction energy term in Eq. (18) predicts a critical cluster radius such that for T /T EBC > 1 the whole system prefers to mimic solid body rotation of a classical fluid, Fig. 7(a) and (b), whereas for T /T EBC < 1 the system prefers to mimic the velocity field of a quantised superfluid vortex, Fig. 7(c). Figure 8 shows the condensate fraction measured using the Eq. (20). For T /T EBC > 1, vortices are found scattered everywhere within the circular boundary and the condensate fraction is strictly zero. Near the transition, the vortices begin to clump and at critical radius R c the vortex cluster suddenly begins to collapse. Accompanied with the shrinking of the vortex cluster, the condensate fraction grows almost linearly with the reduced temperature.
According to Eq. (21) the specific heat at the transition where in the last step we have assumed linear dependence of the cluster radius on the temperature in the vicinity of the transition, as suggested by Fig. 8. Figure 9 (a) shows the phase space density, n v λ v of the vortices as functions of position and reduced temperature. The one-dimensional vortex-particle density n(x) is obtained by modeling each vortex-particle by a normalised Gaussian wave packet of waist λ v . The frames (b) -(d) show the 1D density n(x) of the vortex gas for three different temperatures T /T EBC = 2, 1.031, and 0.769. For T /T EBC > 1 the vortex density is spread over the whole system while below the transition the vortex density becomes localised both in real space and in vortex momentum space.
On approaching the condensation transition from the infinite temperature side the asymptotic form of the 2D real-space vortex density is predicted to be [57,58] ρ v (r) = 1 with normalisation Figure 9 (e)-(g) shows the least squares fits of the function (25) to the radial 2D real space vortex density measured from the Monte Carlo calculations. Using the temperature T as the sole fitting parameter the best fitting temperatures are measured to be T fit /T EBC = 2.004, 1.033, and (1.005) and the resulting density profiles predicted by Eq. (25) are shown as orange curves. The parentheses are used here to denote that the Eq. (25) is used in a regime outside its validity. While the theory prediction, Eq. (25), for the radial 2D vortex densities are in excellent agreement with the Monte Carlo data shown in Fig. 9 (e) and (f), the theory prediction in Fig. 9 (g) is clearly unphysical because Eq. (25) diverges at T = T EBC and cannot be used for predicting the vortex density in the condensed phase for which T /T EBC < 1. This is evident in Fig. 9 (g) where the best fitting function has T fit /T EBC = (1.005), as opposed to the actual temperature T /T EBC = 0.769, of the state. Figure 10 shows the two-dimensional vortex density as a function of radial position for T /T EBC = 0.244. The condensed vortices seem to form a fluid like incompressible core of the cluster with a constant vortex density. The number of vortices within the shaded region corresponding to the condensate is equal to N 0 .
IX. DISCUSSION
The physics of the vortex system in the vicinity of the critical negative absolute temperature where ω k is the kelvon frequency and m 0 is the vortex mass per unit length [54], has been discussed extensively in the recent literature [12, 35-44, 46, 59] yet the nature of the condensate has remained unclear. This is partly because of the divergent behaviour of the zero-core point vortex model that becomes invalid at the critical point of condensation and is unable to yield predictions for the condensed phase. The situation is the same as in the positive temperature side where the Hauge-Hemmer transition to pair-collapsed phase in the zero-core point vortex model is divergent and the structure of the vortex core, which is always present in any real physical system, must be accounted for. Including the effects of non-zero vortex cores in the positive temperature systems allows correct treatment of the Berezinskii-Kosterlitz-Thouless phase transition whose critical temperature is shifted by a factor of 2 with respect to the Hauge-Hemmer transition that occurs at T HH = α [14]. Similarly, any selfconsistent treatment of the negative absolute temperature Onsager vortex condensate must include the effects of non-zero size of the vortex core. For the sake of clarity, we discuss the one and two vortex species cases separately below.
A. One vortex species case
First, considering the single vortex species system shows that the condensation of the Onsager vortices occurs when the vortex cores within a cluster of vortices begin to merge into a single vortex structure with multiple circulation quanta, signifying the emergence of large degeneracy in the quasiparticle degrees of freedom of the vortices.
We briefly recall the underpinnings of the quantum Hall effect of two-dimensional electron gas in a strong external magnetic field corresponding to extremely large kinetic energy per electron. This 2D problem is often theoretically mapped onto a dual 1D harmonic oscillator problem, which reveals that the topological phase transitions to the integer quantum Hall states occur when the electrons condense in the highly degenerate lowest Landau level. Although the electrons move in 2D space, the topological phase transitions are quantified in terms of the eigenstates of a 1D harmonic oscillator.
Similar physics is pertinent to the Onsager vortex condensation transition. It is therefore useful to consider the closest known physical realisation of the Onsager's point vortex model, which is a Bose-Einstein condensate with quantised vortices nucleated in the macroscopic condensate wavefunction. A trial wave function for such a system may be expressed as where f is the smooth condensate particle density in the absence of the vortices, θ j = arg[(x − x j ) + i(y − y j )] are the additive phase functions with singularities at the vortex locations {x j , y j }, and the (soft) vortex core function The probability current of the state (28) defines the superfluid velocity field v s , the incompressible component of which is remarkably well approximated by the velocity field of Onsager's point vortex model [60]. The elementary excitation spectrum of a 2D vortex configuration is obtained by solving the Bogoliubov-de Gennes (BdG) eigenvalue problem [61]. The N v phase singularities due to the N v quantised vortices in the system yield N v low energy quasiparticle eigenstates [62] that satisfy the bosonic commutation relations where η † q and η q are the usual Bogoliubov quasiparticle particle creation and annihilation operators. In accordance with the quasiparticle picture of superfluids, the macroscopic multiply connected wave function, Eq. (28), may be expressed in terms of the countably infinite set of such quasiparticle states [61]. These Bogoliubov quasiparticles are bosons and this property is inherited by the host vortices whose circulation is quantised.
For a single vortex with N v circulation quanta the condensate wave function may be expressed as where χ Nv (r) is the structure function of the vortex core with N v circulation quanta. The BdG quasiparticle excitation spectrum of such a state has only one vortex eigenmode, corresponding to the one phase singularity, with orbital angular momentum quantum number equal to w = −N v [63]. This high-winding number bosonic quasiparticle mode is a BEC of N v Bogoliubov quasiparticles associated with the N v vortex circulation quanta, in essence forming a "vortex BEC in a BEC of atoms". Such quasiparticle condensates are not unusual. For example, magnons (spin-waves) have previously been observed to form Bose-Einstein condensates of their own within their host BECs [64][65][66].
The circulation of a classical point vortex measured along a path C that encloses the vortex is invariant with respect to continuous deformations of the path C precisely as for a quantised vortex in a BEC of atoms. In a BEC of atoms the vortex cores trap the bosonic quasiparticles and when these localised bosonic modes overlap they may form a condensate. The vortex density of the point-like vortex cores thus effectively measures the density of states of the Bogoliubov quasiparticles attached to the vortices and the overlapping of the vortex cores is tantamount to the condensation of the N v BdG quasiparticles associated with the vortex degrees of freedom. It is in this sense that the classical point-vortex Hamiltonian describes the bosonic degrees of freedom of the quantised vortices and their quantum statistical condensation at T EBC . Indeed, Eq. (21) applies equally well for both classical point vortices and for quantised vortices in a BEC.
The vortex-particle duality allows a 1D treatment of the 2D vortex gas and motivates the definition of the condensate fraction as The condensate fraction is equal to the area ratio of the minimum possible phase-space area occupied by the N v vortices to the area actually occupied by them. A high vortex condensate fraction is equivalent to strong overlap between the BdG quasiparticle modes of the quantised vortices. The point-vortex model description works well in this extreme states of vortices because in such situations kinetic energy of the BEC of atoms is overwhelmingly larger than the usual mean-field atom-atom interaction.
It is interesting to recall the structure of a simple vortex in a superfluid or a superconductor. Outside the vortex core the superfluid or superconducting order parameter is at its bulk value whereas in the vortex core region the superfluid order parameter vanishes and the original symmetry of the full Hamiltonian is locally restored. A local observer spatially traversing across a vortex core in such systems measures superfluid-normalsuperfluid phase changes along their path.
For a 2 vortex problem, the change in the phase space topology of the point vortex model has been quantified by identification of the phase-space wall that divides the two regions of phase space where the vortex trajectories are either overlapping or non-overlapping [67]. We conjecture that similar phase-space dividing wall is associated with any number N v of vortices and that associated with the condensation of Onsager vortices a multiply connected phase-space topology transforms to a single connected region. In this sense, the condensation of Onsager vortices should be viewed as a topological phase transition.
B. Two vortex species case
In the neutral two vortex species case, the Onsager vortex condensation transition described above for single vortex species systems occurs in both of the vortex types separately, and simultaneously. In the case of vorteximbalanced system the majority vortex species condenses first at T maj = −αN maj /2, followed by the condensation of the minority species at where T min shifted slightly toward negative zero due to the interaction with the condensate of the majority species.
Before the condensation of the Onsager vortices proceeds, the vortices become spatially phase separated as shown e.g. in Fig. 4 and in the Supplemental Figure S1(b) of Ref. [12]. Such phase separation is one step in the sequence of phase space compactification leading to the EBC transition. The critical temperature is the same for a single species system with N v vortices as it is for a two-species system with the same number, N v = N tot /2, of vortices of one species, and the condensation transition occurs in both systems. In contrast, the phase separation is specific to the two-species case.
X. CONCLUSIONS
In conclusion, we have employed a vortex-particle duality to establish a correspondence between vortices in a two-dimensional fluid and a one-dimensional gas of vortex particles. Using this mapping, we have provided a quantitative measure for the condensation of Onsager vortices. The vortex condensate forms due to the overlap of the vortex cores. Ultimately, deep in the condensed phase a phase-space Wigner crystallisation of vortices with hard cores takes place [12] while soft core vortices would yield multiple quantum vortex state [63]. The situation bears resemblance to rapidly rotating neutral superfluids that are predicted to undergo phase changes when the vortex cores begin to significantly overlap and the filling factor, or the number of fluid particles per vortex, approaches unity [68,69]. One interesting future direction will be to study connections between the 1D vortex particle theory and other 1D systems [56,70,71].
Although observation of negative absolute temperatures is most transparent in neutral vortex gas where absolute negative temperature states are readily associated with the emergence of conspicuous vortex clusters [33,34], it seems that the most suitable system to study the critical physics of Einstein-Bose condensation of Onsager vortices is a single species vortex system. We therefore propose an experiment to observe the condensation of Onsager vortices using a BEC or superfluid Fermi gas of atoms by creating a giant vortex with multiple circulation quanta using, e.g., topological phase imprinting [72] or high-winding number Laguerre-Gauss laser beams [73], to imprint a multiply quantised, w = N v , quantum vortex into a superfluid in a preferably uniform trap [25][26][27][28][29][30][31][32][33][34]. Subsequently monitoring the decay of the state into N v singly quantised vortices, evolving from configurations akin to Fig. 7 (c) to those shown in (b), will enable quantitative observation of crossing the critical temperature T EBC . Additional benefit of this approach is that it does not require detection of the vortex circulation signs. Direct measurement of the vortex positions and their core sizes enables direct measurement of the condensate fraction N 0 /N , Eq. (20), in the condensed phase for T /T EBC < 1. Equation (25) enables explicit and accurate measurement of the vortex temperature for T /T EBC > 1, as shown in Fig. 9 (e) and (f). In combination, these two measurements will enable direct and quantitative experimental observation of the condensation transition of the Onsager vortices. | 9,273.6 | 2016-12-09T00:00:00.000 | [
"Physics"
] |
Optical stereometric analysis of an experimental partially-edentulous mandible
Stereooptics method have been successfully used in biomechanical studies of dental models. The aim of this study was to investigate, on the basis of functional deformities, the distribution of occlusal loads on the casts of a partially-edentulous mandible without and with dedicated copings. Precise measurement of strain and displacement of partially-edentulous mandibular control and experimental casts were provided by the digital image correlation method and ARAMIS software. Simulated loads ranged from 0 to 1000 N. Displacements and deformations of abutment teeth within the control cast of a partially-edentulous mandible were 0.48% for incisor without coping, 10.29% for canine without coping, and 6.64% for premolar without coping, and within the experimental cast of a partially-edentulous mandible they were 0.29% for incisor with coping, 7.007% for canine with coping, and 4.98% for premolar with coping. Wilcoxon matched pairs signed ranks sum test was not statistically significant for the majority of the examined parameters, except for the differences between deformations of teeth and copings under pressure p ≤ 0.05. When loading the abutment teeth, the distribution of strain through the remaining tooth substance is specific and various. Abutment teeth covered by protective copings are more resistant to loads.
Introduction
The surfaces of remaining teeth must be particularly prepared to accept the surface of the prosthesis, if indicated. Posˇtic´1 suggested that special preparation consists of making dedicated copings on the remaining solid dental tissues. Conversely the copings can be designed variously otherwise. Posˇtic´1 and Ilic´et al. 2 stated that various designs of copings can cause differences not only in the retention and the stabilization of even the same form of dental prosthesis, but also to the mechanical resistance and the stability of the dental tissue itself onto which the copings are placed. In addition, from a biomechanical aspect, there may be induction of a different stresses and the deformations due to differences in the occlusal loads. The resulting stresses (with or without deformation) can lead to adequate or inadequate responses which will be reflected to the functional adaptation of oral tissues.
Shahr and Weiner, 3 Ahn and Kim, 4 Kultchin et al., 5 and Durbin and Durbin 6 reported the optical stereometric methods based on volumetric measurement mechanisms as up-to dated practice of the biomechanical studies.
Kahn-Jetter and Chu 7 and Brynk et al. 8 established the use of the digital image correlation (DIC) as an optical full-field technique and measuring system for in vitro non-contact three-dimensional (3D) deformation assessments.
Aim
The aim of this study was to investigate the distribution of occlusal loads on the casts of a partially-edentulous lower jaw without copings and with dedicated copings.
Materials and methods
The materials in this study were two, as symmetric as possible, master casts of a partially edentulous lower jaw which were made of specific polymethyl resin material (Photopolymer Resin, Formlabs Inc., Somerville, MA, USA) after processing and manufacturing in a 3D printer. The control and the experimental master casts of a partially-edentulous lower jaw was characterized with the same topography of missing lower molars, the left lower second premolar, and the right missing canine and incisors. Three of the remaining tooth substances of the control master cast were moderately reduced. The remaining teeth of the control cast were not covered by copings. The experimental cast was almost the same as the control, except for the surfaces simulating three copings of oval design covering the remaining teeth.
The surfaces of the control and the experimental casts were sprayed (Primer can spray Motip, Wolvega, Netherlands, Europa) with a thin coat of white layer paint, followed by a thin layer of high contrast black paint placed on top of the white layer to allow the correct performance of the digital image correlation (DIC) method (Figures 1 and 2). Points of the spray occupied distances that were changed under loading and registered by cameras.
Measuring devices and methods
Precise and controlled loading was measured using a gnathodynamometer (SiemensAG, Erlangen, Germany) with horizontal extension. Axial occlusal loads were applied centrally and vertically to the distal abutment tooth (the first lower premolar), intermediate abutment tooth (canine), and mesial abutment tooth (the second lower incisor). The direction of loadings to the experimental cast was the same.
Measurements of the strain and displacement were provided by the Digital Image Correlation Method (GOM-Optical Measuring Techniques, Braunschweig, Germany). This system consists of two digital cameras and the associated software ARAMIS (Version 6.2.0; GOM-Optical Measuring Techniques Correlate Pro software, Zeiss, Europa, https://www.gom.com/en/ company). ARAMIS software, based on the principle of an objective fine-ground procedure, registered 3D changes in the shape and distribution of strain on the surface of statically-or dynamically-loaded objects. Moreover, ARAMIS also determined the shape of the photographed object with high accuracy, its dimensions, the field of 3D movements, vector of distorted field, and features of the biomaterial.
Two mobile cameras at specific time intervals photographed the distance between reference points before loading, in the calibration phase, and during the action force.
Before measuring the strains of the experimental models, a calibration procedure was performed. To measure 3D strains, two cameras were positioned manually and adjusted in accordance with the measuring volume of the calibration object. Windisch et al 9 suggested that strains within the selected area may be measured over a range from 0.01% up to several 100% and strain accuracy was up to 0.01%. Small and large objects, from 1 to 2000 mm, can be measured with the same sensor.
The vertical line (Section 0) ( Figure 2), was set by the software under the slams point of the load acting (in the direction of load). Any increase in the intensity of load is presented in figures of the corresponding stage. Since the study adopted simulated loading over the range between 0 and 1.000.0 N at intervals of 50.0 N, the obtained multistage view and figures showing dimensional changes of the structure under investigation are presented in three stages.
Since the incisor without the coping only reached Stage 1 and cracked after switching to Stage 2, the table shows the values of displacement and deformation of all constituent points of the vertical section at a load of 500.0 N (Stage 1).
This study primarily focused on the strain distribution inside and around the remaining abutment teeth of the control and the experimental casts. The strains are presented in different colors. The regions of sufficient interest were the abutments second incisor, canine, and the first lower premolar. 10 Sample size calculation was provided on the basis of the analyses of coordinates of selected points. Points were determined on the vertical line under the slams point of the direction of action of the load. From 20 to 22 separate points were analyzed for every tooth observed-the incisor without coping, canine without coping, and premolar without coping-the control master cast, as well as for incisor covered by rounded coping, canine covered by rounded coping, and the premolar covered by rounded coping-the experimental master cast. In all cases the points were distributed along the direction of action of the force.
Statistical analysis
Descriptive statistics were used to calculate the deformation and displacements of the teeth of the control cast and the teeth of the experimental casts covered by the copings. The MEANS procedure from the statistical package SAS (SAS Institute, Inc 2010, the SAS System for Windows, release 9.3, Cary (NC): SAS Institute USA) was used. Wilcoxons' Lambda-matched pairs signed ranks sum test was used for the analyses of numerical values of distortions and displacements of the teeth and the copings with significance level of p ł 0.05. The mean value was presented as a measure of central tendency. Nonparametric statistical methods based on the scores of a response variable (percent of deformation and displacement) were used to define the differences between the mean values of the coordinates and positions of selected points along the direction of the force on the teeth of the control cast and the experimental teeth covered by copings.
Results
According to the deformation formula e = (L0 2 L1)/ L0 3 100, where L0 and L1 were the lengths before In the table showing comparison of tooth deformities with and without incisor coping, the values obtained in Stage 1 were used indicating differences in deformation values (Table 1).
Strain intensity values are presented as a gradient of colors in the scale on the right side of each figure This study primarily focused on the strain distribution inside and around the remaining abutment teeth. The strains are presented in different colors. Therefore, the regions of sufficient interest were the abutments second incisor, canine, and the first lower premolar. 10 The displacement and deformation values for each point of the observed vertical section at a maximum load of 1.000 N (Stage 3) are shown (Figures 4-6).
As maximum and significant displacements the maximum displacements were noted for the experimental incisor with coping and for the control canine without coping (Table 2, Figure 4). However, the differences of numerical values obtained were not statistically significant.
The illustrations representing von Misses strains are shown in Figures 4(a) to (c) and 6(a) to (c).
The most intensive deformation value was recorded in canine without coping. (Table 1).
The statistical significance of the differences between deformations of the teeth of the control master cast and the teeth covered by coping of the experimental master cast under pressure are represented in the Table 2. The significance of the differences between displacement of the teeth of the control master cast and the teeth covered by coping of the experimental master cast under pressure are represented in the Table 3. The results of Wilcoxon test showed a direct link only between the percent of the deformation of the teeth of the control master cast and the teeth covered by the presence type of the coping of the experimental cast (p = 0.029). Analyses showed statistically significant greater deformation of teeth without copings comparing to teeth covered by round copings (Table 3). These findings are relevant for all of three referent pressure stages (under 500 N, at 500 N, and up to 1000 N)
Discussion
Previous works by authors dealing with the problem of the application of the optical -stereometric method, have shown good correlation between mathematical measurements under laboratory conditions. [11][12][13][14][15] In this regard, further research using stereooptic methods will certainly reveal more important details and facts regarding the treatment of partially-edentulous jaws with remaining dental tissues.
Strains and deformations of remaining teeth of the casts investigated in the present study were differently distributed. In the control incisor without protective coping, the test did not reach Stage 3 because the tooth of the 3D printed model cracked during Stage 2. The material then exceeded the tensile strength, which according to the literature, is 65 MPa. For this reason, the final deformation of the incisor with and without a coping at a maximum load of 1000 N could not be compared, because the incisor without the coping did not withstand the specified load. Instead, those from stage 1 ( Table 1) were used to compare deformation values. The cross-section of the incisor without the coping was smaller, so at an identical force it would have a significantly higher voltage value, thus it would be expected to fracture first. This is evidenced by the results obtained; the incisor covered with a coping withstood a load of 1000 N while the incisor without a coping did not. By comparing the results in Stage 1 for the incisor with and without the coping, we demonstrated that a greater deformity is present in the incisor without the coping. [10][11][12][13][14][15] Tanasic et al. 10 reported that a major strain field was present around abutment teeth. The difference in the obtained values may be attributed to the effect of the highest stress on the junction of different structures-the dentinal-cement border. Major strain values for the casts of the partially-edentulous lower jaw showed greater strain of the occlusal portion of the premolars either without or with the coping. For the incisor without a coping the major von Misses strains were in a region of remaining tooth substance toward the incisal areas. In contrast, for the incisor with the coping the major von Misses strains were in the cervical areas. Field et al. 16 advocated that loading of incisors, canines, and premolars of the experimental casts caused The maximum displacements were noted for the experimental incisor with a coping and for the control canine (Figures 3 and 5). It would certainly have been noted for the control incisor too, but for the fact that the control incisor cracked earlier, at stage 1 with a load of 500 N, removing the possibility of additional displacements ( Table 1).
The most intensive deformation value, which was observed in the canine without coping, suggested that the coping caused a decrease in the maximum deformation value (Table 1).
According to the results of the present study, the risk of overload appears to be highest around the cervical parts of remaining teeth. The biomechanical distribution of stress occurs primarily where the coping is in contact with the remaining tooth substance, therefore the stress distributions along the coping lengths should reflect their design. Although these differences were not extremely marked, they may be an important contributing factor to negative effects of coping overloading, non-axial loading, and insufficient number of remaining teeth and copings. 17 The optical stereometric method would have revealed whether it would have been better for the remaining tooth substance to be covered by a casted coping or to left uncovered. Based on the results of this study, we concluded that the better option for the remaining teeth was to persist covered by casted (or ceramic) copings. This is because the remaining tooth substance without coping protection could crack even at a low applied force of 500 N. On the other hand, tooth substances covered and protected by copings, because of the form of the copings and the hardness of the selected material (alloy), will not start cracking until the applied force reaches 800 N or more.
The design of the copings of the experimental mandibular cast was original for the purposes of this study. Firstly, the coping shapes used in this study were adapted to minimal dental removals in order to prepare abutments for coping acceptance. In this sense a special design was created of ''semi-anatomical'' forms in which the curves were pointed toward the tips of the cusps and nodules and in accordance with the original morphology of the occlusal surface. 1,2 These forms also resulted in specific distributions of forces which were mostly present on the surfaces of the incisor copings and even the canines according to the cervical portions, while on the other hand, the forces applied to the premolar remained concentrated on the occlusal plateau ( Figure 5). 1,2 Fracture of abutment teeth was frequently reported in the literature. This possibility is particularly expected in situations where the remaining dental substances are only present on one side of the jaw, and even more so if they are at the border of the intercanine and transcanine sectors. Reasons for this could be the unfavorable position of remaining tooth substance, inadequate tooth preparation for copings or overloading. However, Mercouriadis-Howalda et al. 18 assigned the fracture of the abutment teeth as a motive for tooth loss.
The present report performed optic stereometric analysis of forms of teeth-supporting hard tissues. Future studies are needed to evaluate other variables previously tested with other dental materials, such as compressive strength 19 and flexural properties. 20
Limitations of the study
The present study has certain limitations. Firstly, this research was focused to the supporting specifics of the lower jaw without end review to the upper jaw. The second limitation regards to the fact that hard tissuespredominantly the bone under the copings and the teeth were not possible to analyze using the method of optical stereometric analysis. Thirdly, in the present study were applied forces which are excessively over the range of habitual masticatory forces within stomatognathic system of a living human, which could have increased a number of choices to be taken into consideration while making decision on probable resistances or cracks. It is an issue whether this could be comparable to increased number of supporting teeth and copings and whether predictions would have hold across selected type of toothlessness. Finally, a small number of remaining tooth samples (n = 6) were involved in the present research project indicating observations only to a limited and complex situation in the mouth.
Conclusions
The following conclusions can be drawn of subsequent evaluation of the results: when loading the abutment teeth, the distribution of strain through the remaining tooth substance was specific but varied. The applied forces and deformation were mutually linearly dependent. Abutment teeth covered by copings were more resistant to loads.
Based on the evaluation of the results it may also be noted that further research using optical methods of strain measurement should focus on monitoring the deformation of copings of specific types, different forms of copings as well as some other variables of testing.
Authors' note
This work was presented at the FDI Congress in San Francisco, USA, 4-8 September 2019.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Data availability
The data that support the findings of this study are available from the corresponding author and the first author upon reasonable request. | 4,100.6 | 2021-09-17T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Biological Characters of Vancomycin Resistant Staphylococcus aureus Isolates from a University Hospital in Egypt
Background: Vancomycin resistance in Staphylococcus aureus in many cases appears to be associated with some biological changes. Methods and Findings: In this study 88 vancomycin resistant Staphylococcus aureus (VRSA) were isolated from Tanta. Antibiotic susceptibility test was performed. Some of its biological characters were studied including autolytic activity, coagulase production, surface hydrophobicity and biofilm formation. Conclusion: As observed, vancomycin MICs of the selected VRSA isolates were directly proportional to their hydrophobicity index and biofilm production. On the other hand, MICs were inversely proportional to coagulase activity (the higher the dye diffusion, the lower the coagulase activity).
Introduction
Staphylococci have been recognized as an important cause of wide range of infections [1]. Moreover, they are resistant to multiple antimicrobial agents including vancomycin which is considered as the last treatment option for staphylococci [2]. Many researches were conducted to understand characters of vancomycin resistant Staphylococcus aureus (VRSA). Morphologically, colonies of VRSA isolates often look smaller than their susceptible counterparts, which can lead to be confused with coagulase negative staphylococci (CoNS) [3]. Furthermore, VRSA isolates may require more incubation time for coagulase detection. If the coagulase reactions are incubated for less than 4 hrs, the result may be falsely negative and the isolate may be misclassified as CoNS [4]. A common characteristic among VRSA isolates is a thickened cell wall, although the explanation for this phenomenon is unknown. One hypothesized mechanism is a decrease in autolytic activity [5]. Also, hydrophobicity in VRSA may be different from the vancomycin sensitive isolates. Hydrophobic interactions play a role in the adherence of microorganisms to a wide variety of surfaces [6] and facilitate biofilm formation due to bacterial adhesion [7]. Therefore, increased cell surface hydrophobicity is considered as a factor in the ability of staphylococci to form biofilms [8]. It is well established that bacteria embedded in biofilm are much more resistant to antimicrobial treatment when compared to their planktonic counterparts [9]. A biofilm is a community of bacterial cells that is enclosed in a self-produced polymeric matrix and adheres to inert or living surfaces [10]. So it was interesting to study biological characters of VRSA isolates and to find the relation between such characters and vancomycin MICs.
Collection of samples and identification of bacteria
Blood, urine, nasal swaps, pus and sputum samples were collected according to [11]. Then they were cultured on mannitol salt agar and blood agar to isolate Staphylococcus aureus which were then confirmed by standard biochemical tests like coagulase, DNase and catalase tests.
Detection of vancomycin resistance among isolates
This was performed according to [12] guidelines. S. aureus isolates with vancomycin MIC ≥ 32 µg/ml was considered vancomycin resistant.
Determination of autolytic activity
The test was carried out according to the procedures described by [13]. Briefly, cells were grown to an OD 600 of 0.7 in TSB broth at 37°C, and were centrifuged at 3700 rpm for 5 min. The pellet was washed once with saline, and resuspended in 0.01 M phosphate buffer (pH 7.0) to about 0.8 at OD 600 .The cell suspension was incubated at 37°C with continuous shaking. Autolytic activity was measured as the decrease in OD 600 values that was monitored every 1 hr with spectrophotometer (SCHIMADZU, Japan).
Determination of coagulase activity
Coagulase activity of VRSA isolates was determined using dye diffusion test as described by [14]. Two drops of S. aureus bacterial suspension were added to an eppendroff containing 4 drops of citrated plasma solution, then the mixture was incubated at 37°C for 2 hrs to allow coagulation to occur. A drop of crystal violet solution was added to the mixture and allowed to diffuse for 1.5 hrs.
The assumption of this test is that the rate of diffusion of the dye is inversely proportional to the amount of coagulation. 100
Length of dye diffusion The percentage of dye diffusion
Total length of the mixture in the eppendorff = ×
Determination of cell surface hydrophobicity
Cell surface hydrophobicity of VRSA isolates was determined using the method described by [15]. A single colony of staphylococci isolate was inoculated into MHB for 16-18 hrs with shaking at 37°C. Bacterial cultures (10 ml) were centrifuged at 13.000 rpm for 15 min. Pelleted bacterial cells were washed twice with saline followed by centrifugation at 13.000 rpm for 15 min. Pelleted bacterial cells were resuspended in saline and 2 ml of bacterial suspension was transferred to a prewarmed MHB, then incubated for 1 hr at 37°C and centrifuged again. Pelleted bacterial cells were resuspended in PUM (Phosphate urea magnesium sulphate) buffer (pH 6.9). To 4.8 ml of cell suspension in PUM buffer, different volumes (0.3, 0.9, 1.2, 1.8 ml) of n-hexane were added and mixed well for 2 min. After complete phase separation, aqueous phase was separated carefully and the absorbance of the tested isolates as well as the sensitive control strain remaining in the aqueous phase was measured at 540 nm. The hydrophobicity index (HI) was then calculated as follow:
Detection of biofilm production
Biofilm formation was evaluated by the microtitration plate method using crystal violet for semi-quantitative measurement of biofilm production as described by [16]. After adjusting the OD 600 of overnight bacterial suspensions at 1, they were diluted 1 100 in sterile BHI, and 100 µL of bacterial suspension were seeded in 96-well flat bottom plates. Three wells were used for each isolate. Negative control wells contained BHI alone.
After 24 and 48 hrs of growth at 37°C, the contents of each well were removed gently, and the wells were washed three times with water to suppress the free-floating cells. Biofilms formed by adherent bacteria were fixed using 100 µL of methanol solution per well for 20 min and air dried for 1 hr. Fixed bacteria were stained with 100 µL of crystal violet solution per well for 10 min, and the excess stain was rinsed off by washing 5 times with water before air-drying. Dye bound to the biofilm was resolubilized with 100 µl of acetic acid solution per well. The OD 490 was measured in each well with ELIZA AutoReader (Sunrise Tecan, Austria).
Determination of autolytic activity
Autolytic activity was measured in selected VRSA isolates with different vancomycin MICs during incubation at 37°C as the decrease in OD 600 values. As shown in Figure 2, autolytic activity was relatively higher in sensitive control strain in comparison with the tested VRSA isolates that showed a slight decrease in autolytic activity by the increase in vancomycin MICs.
Determination of coagulase activity
Coagulase assay was performed on the selected VRSA isolates. The rate of diffusion was found to be inversely proportional to the extent of coagulation of plasma due to presence of free coagulase enzyme; the more the coagulation, the less the dye diffusion. The total length of the mixture of bacteria and plasma in the eppendorf tube was measured using a ruler and it was found to be 18 mm. It was found that the amount of coagulase enzyme secreted by bacteria was markedly reduced as the vancomycin MICs of staphylococci isolates increased which in turn resulted in an increased diffusion of crystal violet in the mixture as shown in Table 1.
Detection of biofilm production by selected VRS isolates
The selected VRSA isolates with different vancomycin MICs were screened for their ability to adhere to the wells of microtitration plates and hence biofilm production as shown in Figure 4. Biofilm OD values were measured using at wavelength of 570 nm. As shown in Table 2, vancomycin MICs of the tested isolates were directly proportional to their biofilm production. There are some biological changes which are prerequisite for expression of vancomycin resistance in staphylococci such as decreased autolytic activity, decreased coagulase activity, increased cell surface hydrophobicity and increased biofilm formation [13].
In the late 1990s, Sieradzki et al. suggested that alterations in cell wall structure inhibit vancomycin access to its active site in VRSA isolates. These alterations include overproduction and accumulation (in part through reduced autolytic activity) of cell wall material, activated cell wall synthesis leading to cell wall thickening and reduced vancomycin access to its active site [21].
Sieradzki et al. have demonstrated that vancomycin resistance is related to a decrease in autolytic activity in staphylococci isolates. In the current study, autolytic activity of selected VRSA isolates was assessed. Interestingly, it was found that autolytic activity in VRSA isolates relatively decreased in comparison to S. aureus reference strain. Similar results were reported by several authors in this field [5,[22][23][24][25][26].
Coagulase enzyme is a major phenotypic determinant of S. aureus and is usually used to identify it. However, it has been reported that coagulase activity decreased in VRSA isolates, occasionally leading to misidentification if coagulase test is incubated for less than 4 hrs [27]. In this study, coagulase activity was assayed by dye diffusion test. It was found that coagulase activity decreased as the vancomycin MICs increased. This finding was in accordance with [4,27] and who reported that coagulase activity decreased in VRSA isolates which has led to misidentification of VRSA as coagulase negative staphylococci. The reason of this phenomenon is still unknown. It might be due to low expression of coagulase enzyme gene or the enzyme activity affected by the thickened bacterial cell wall [27].
The molecular nature of the bacterial cell surface is critical in the interaction between the microorganism and the host. The hydrophobic or hydrophilic nature of the bacterial cell surface is an important determinant in the adherence of bacteria to living and non-living surfaces [28,29].
In the current study, cell surface hydrophobicity and biofilm formation in selected VRSA isolates were investigated. It was found that MICs of vancomycin were directly proportional to hydrophobicity index and biofilm production. Antune et al.
reported that there was no significant correlation between the biofilm production and the hospital unit, previous use of antimicrobials, length of stay in the hospital, associated infections, predisposing conditions, outcome, age or gender. The only correlation they found was that the biofilm-producing isolates showed higher rates of resistance to some antimicrobials used in therapy like vancomycin compared to the non-producing biofilm isolates. Similar results were obtained by [30][31][32]. This finding can be explained by the slow diffusion of vancomycin into the deeper layers of bacterial biofilms, which may lead to resistance due to the gradual exposure of the bacterial cells to low concentrations of the antimicrobial [10,33]. More research in the future should be conducted to help to determine additional characters of VRSA to further understand all aspects of these organisms to help to develop optimal treatment and to establish suitable control implications. | 2,466.8 | 2018-01-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Optimizing differential equations to fit data and predict outcomes
Abstract Many scientific problems focus on observed patterns of change or on how to design a system to achieve particular dynamics. Those problems often require fitting differential equation models to target trajectories. Fitting such models can be difficult because each evaluation of the fit must calculate the distance between the model and target patterns at numerous points along a trajectory. The gradient of the fit with respect to the model parameters can be challenging to compute. Recent technical advances in automatic differentiation through numerical differential equation solvers potentially change the fitting process into a relatively easy problem, opening up new possibilities to study dynamics. However, application of the new tools to real data may fail to achieve a good fit. This article illustrates how to overcome a variety of common challenges, using the classic ecological data for oscillations in hare and lynx populations. Models include simple ordinary differential equations (ODEs) and neural ordinary differential equations (NODEs), which use artificial neural networks to estimate the derivatives of differential equation systems. Comparing the fits obtained with ODEs versus NODEs, representing small and large parameter spaces, and changing the number of variable dimensions provide insight into the geometry of the observed and model trajectories. To analyze the quality of the models for predicting future observations, a Bayesian‐inspired preconditioned stochastic gradient Langevin dynamics (pSGLD) calculation of the posterior distribution of predicted model trajectories clarifies the tendency for various models to underfit or overfit the data. Coupling fitted differential equation systems with pSGLD sampling provides a powerful way to study the properties of optimization surfaces, raising an analogy with mutation‐selection dynamics on fitness landscapes.
Introduction
Much of science describes or predicts how things change over time. Differential equations provide a common model for fitting data and predicting future observations. Optimizing a differential equation model is challenging. Each observed or predicted point along a temporal trajectory is influenced by the potentially large set of parameters that define the model. Optimizing a model means improving the match between the model's trajectory and the observed or desired temporal path at many individual * Department of Ecology and Evolutionary Biology, University of California, Irvine, CA 92697-2525, USA web: https://stevefrank.org points in time.
Recent advances in machine learning have greatly improved the potential to optimize differential equation models [1][2][3] . However, with actual data, it is often not so easy to realize the promise of the new conceptual advances and software packages.
This article illustrates how to fit ordinary differential equation (ODE) models to noisy time series data. The fitted models are also sampled to develop an approximate Bayesian posterior distribution of trajectories. The posterior distribution provides a way to evaluate confidence in the fit to observed data and in the prediction of future observations. I use the classic data for the fluctuations of lynx and hare populations, an example of predator-prey dynamics 4 . This example illustrates the challenges that arise when fitting models for any scientific problem that can be analyzed by simple deterministic ODEs.
My work follows on the excellent recent article by Bonnaffé et al. 2 They fit neural ODEs (NODEs) to the hare-lynx data. NODEs use neural networks to fit time series data to the temporal derivatives of variables, in other words, NODEs estimate ODEs by using modern neural networks 1 . Bonnaffé et al. 2 emphasized that NODEs have the potential to advance many studies of ecological and evolutionary dynamics. However, they encountered several practical challenges in their application of NODEs to the hare-lynx data, ultimately concluding that "it is our view that the training of these models remains nonetheless intensive."
Materials and Methods
Extending Bonnaffé et al. 2 , I advance practical aspects of fitting ODE and NODE models. I show that several computational techniques provide a relatively easy way to fit and interpret such models. The computer code provides the specific methods by which I achieve each advance. Here, I emphasize six points.
Comparing NODE and ODE models
First, my computer code provides a switch between fitting the same data to a high dimensional NODE or a simple low dimensional ODE. Dimensionality here refers to the size of the parameter space. The switch makes it easy to compare the two types of model.
Conceptually, both NODE and ODE models are basic systems of ordinary differential equations. In practice, the difference concerns the typically much greater dimensionality of the NODE models and the wide variety of high-quality tools available to build, compute, and evaluate complex neural network architectures. The NODE models usually have much greater flexibility and power to fit complex patterns but also suffer from computational complexity and a tendency to overfit data.
Dummy variables
Second, I evaluate the costs and benefits of adding dummy variables to the fitting process. For exam-ple, there are two variables in the case of hare and lynx. To those two variables, we may add additional variables to the system. We may think of those additional dummy variables as unobserved factors 2 . For example, if there is one additional factor that significantly influences the dynamics, then trying to fit a two-dimensional model to the data will be difficult because the actual trajectories trace pathways in three dimensions. Dimensionality here refers to the number of variables in the system.
Data smoothing
Third, I smoothed the original data before fitting the models. Deterministic models can only fit the general trends in the data. Stochastic fluctuations may interfere with the fitting process, which typically gains from a smoother cost function, in which the cost function decreases as the quality of the fit improves. To reduce large fluctuations, I first log-transformed the data and normalized by subtracting the mean for each variable of the times series 2 . Then, to emphasize trends in the data, I smoothed the time series with a cubic spline. I added an interpolated time point between each pair of observed time points. I then smoothed the augmented data with a Gaussian filter. Figure 1 shows the original and the smoothed data.
Sequential fitting
Fourth, simultaneously fitting all points in the time series may prevent finding a good fit. The complexity of the optimization surface may be too great when starting from random parameters. Sequential fitting can help, first fitting the initial part of the time series, then adding later time points in a stepwise manner. However, when adding additional points, weighted equally with prior points, a strong discontinuity may arise in the fitting process. That discontinuity may push the fitting process too far away to recover a smooth approach to a good fit. To fix that issue, I slowly increased the weighting of later points, which provided greater continuity in the optimization process and better convergence to relatively good fits.
Approximate Bayesian posterior
Fifth, I estimate a distribution of temporal trajectories for a fitted model by analogy with sampling the Bayesian posterior distribution of the model parameters. The distribution of trajectories provides a measure of the confidence in the quality of a fit to the data and of predictions for future unobserved observations.
To sample the posterior distribution of fitted parameters and associated trajectories, I first fit the model by standard gradient descent methods, using the Adam learning algorithm 5 . Then, with the fitted model as an initial condition, I calculated the pre-conditioned stochastic gradient Langevin dynamics (pSGLD) 6 . In essence, a deterministic force moves each parameter toward a locally better fit, and a stochastic force causes parameter fluctuations.
For a gradient of the loss function with respect to the parameter, , and a given hyperparameter, , the stochastic force dominates the deterministic force when 1. Thus, when the fit is sufficiently near a local optimum and the associated gradient multiplied by is small, the model parameter value fluctuates randomly in a way that approximately samples the Bayesian posterior. Here, hyperparameter means a parameter that controls the fitting process rather than a fitted parameter of the model, following the common convention in the machine learning literature 7 .
In this study, I used a standard machine learning quadratic loss function 7 . In particular, the loss is the sum of squared deviations between each target time point in the smoothed data and the value of the model trajectory at that time point.
A quadratic loss associates with the Bayesian estimator for the mean of a parameter's posterior distribution. However, in this study, I used pSGLD to sample the distribution of parameters around a local optimum for a fitted model, in which each observation is a multidimensional parameter vector describing a differential equation model. That distribution provides a rough estimate of the confidence in the fitted parameter values and the associated trajectories, inspired by Bayesian principles rather than adhering strictly to the assumptions of Bayesian analysis.
Julia computer language
Finally, I used the Julia programming language 8 . Debate about choice of language often devolves into subjective factors. However, in my interpretation for fitting differential equation models, the current status of Julia provides several clear advantages.
Julia is much faster than popular alternatives, such as Python and R 9 . Speed matters, transforming difficult or essentially undoable optimizations into problems that can easily be solved on a standard desktop computer. I did all of the runs for this article on my daily working desktop computer using only the CPU, completing the most complex runs in at most a few hours, without any special effort to optimize the code or the process. Many useful runs for complex fits could be done in much less than an hour.
The Julia package DifferentialEquations.jl has a very wide array of numerical solvers for differential equations 10 . Using an appropriate solver with the correct tolerances is essential for fitting differential equations. I used the solvers Rodas4P for ODE problems and TRBDF2 for NODE problems. These solvers handle the instabilities that frequently arise when fitting oscillatory time series data. Further experimentation would be useful to test whether other solvers might be faster or handle instabilities better.
Efficient optimization of large models typically gains greatly from automatic differentiation. 11,12 In essence, the computer code automatically analyzes the exact derivatives of the loss function with respect to each parameter, rapidly calculating the full gradient that allows the optimization process to move steadily in the direction that improves the fit.
For fitting differential equations, the special challenge arises because each time point along the target trajectory to be fit must be matched by using the differential equation solver to transform the model parameters into a predicted time point along the calculated trajectory. That means that differentiating the loss function with respect to the parameters must differentiate through the numerical solver for the system of differential equations. It must do so for each target point. In this study, the target trajectory consisted of 362 time points, requiring each calculation of a loss function or derivative of the loss to analyze the match between the data and 362 numerically evaluated trajectory points.
The Julia package Diff EqFlux.jl provides automatic differentiation through many different solvers 3 . Other languages provide similar automatic differentiation but, in my experience, the process is either much slower or more limited in a variety of ways. By contrast, the Julia package works simply and quickly, with many options to adjust the process.
The Diff EqFlux.jl package also provides a broad set of tools to build neural network models. Those models can easily be analyzed as systems that estimate differential equations, NODEs which can be optimized with a few lines of code 13 .
Documentation of the Diff EqFlux.jl package presents several examples of fitting differential equation models. However, the toy data sets do not bring out many of the challenges one faces when trying to fit the kinds of noisy data that commonly arise in practice. The methods discussed here may be broadly useful for many applications.
Overview of the models
Each model has variables, two for hare and lynx and − 2 for dummy variables tracking unobserved factors. The models seek to match the log-transformed and smoothed data shown in Fig. 1. The differential equation for the vector of variables u in the ODE models has the form in which the 2 + parameters are in the × matrix, S, and the vector, b. The function maps the dimensional input to an dimensional output, potentially inducing nonlinearity in the model. One typically selects from the set of common activation functions used in neural network models. For all runs reported here, I used = tanh applied independently to each dimension. It would be easy to study alternative ODE forms. However, in this article, I focus on eqn 1. For each numerical calculation of a predicted trajectory, I set the initial condition to the data values for the first two dimensions. For the other − 2 dimensions, I set a random initial value at the start of an optimization run. My code includes the option to optimize the initial values for the − 2 dummy variables by considering those values as parameters of the model. However, in my preliminary studies, optimizing the dummy initial conditions did not provide sufficient advantages. I did not use that option in the runs reported here.
Typical NODE models are neural networks that take inputs 1 . The outputs are the vector of derivatives, du/d . Common neural network architectures provide a variety of systems for calculating outputs from inputs. The Julia software packages include simple ways to specify common and custom architectures.
I used a simple two-layer architecture, in which each of the inputs flows to internal nodes. Each internal node produces an output that is a weighted sum of the inputs, each weight a model parameter, plus an additional parameter added as a constant. Each of those outputs is transformed by an activation function, , for which I again chose = tanh.
Those outputs were then used as inputs into a second layer that produced outputs as the weighted sum of inputs plus a constant. The second layer did not use an activation function to transform values.
The parameters and output for all computer models are included with the source code files.
ODE versus NODE, varying
I fit ODE and NODE models for = 2, 3, 4. For each of those six combinations, Fig. 2 shows the model fits against the smoothed data for the full 90-year period of observations. The hare and lynx estimated abundances comprise the = 2 core variables of the analysis. Adding dummy variables to make > 2 improves the fit. That improving fit with rising can be seen in Fig. 2 by the better match between the data and model trajectories as one moves down the fitting sets in each column.
NODE models fit better than ODE models, associated with the greater number of parameters in NODE models. The better fits can be seen by comparing the NODE sets in the right column against their matching ODE sets in the left column.
One expects better fits by adding variable dimensions to increase or adding parameter dimensions in NODE models. The benefit here is to see exactly how the fits change with the different changes in the models. For example, the phase plots in the next subsection show clearly how the constraints of the relatively low-dimension parameter space of the ODE models limits the fit when compared to the flexibility of the larger parameter space in the NODE models. Figure 2 shows temporal trajectories for each species and each dummy variable, plotting abundance versus time. By contrast, phase plots draw trajectories by mapping the variables at each time to a point in -dimensional space. Combining the -dimensional points at different times traces a trajectory through phase space. Figure 3 shows phase plots for ODE and NODE models with = 3 variables. The upper plots limit the trajectories to the = 2 dimensions for the hare and lynx data (blue) and model predictions (gold). In two dimensions, the trajectories do not match well. However, one can see that the ODE model in the upper left traces a regular cycle confined to a small part of the two-dimensional phase space, whereas the NODE model in the upper right moves widely over the space. The three-dimensional phase plots in the lower panels of Fig. 3 clarify the differences between the ODE and NODE models. For those three-dimensional plots, I used the third dimension from the models' dummy variable to augment both the data and the model prediction trajectories. For the ODE model in the lower-left panel, the model's phase trajectory remains confined to a limited part of a two-dimensional plane, whereas the data wander over the third dimension.
Phase plots
Adding the third dummy variable for the NODE model in the lower-right panel greatly enhances the match between the model predictions and the data. One can see the trajectory of that third variable in Fig. 2, right column, middle set for NODE and = 3, in the bottom panel of that set. In that case, the dummy variable starts near an abundance of 1 ( 0 ) and declines toward 0 ( −300 ).
By spreading the two-dimensional hare and lynx dynamics over a third dimension that declines steadily with time, the messy and visually mismatched data and model trajectories in the twodimensional phase plot shown in the upper-right panel of Fig. 3 are transformed into smoothly oscillating and matching three-dimensional trajectories in the lower-right panel of that figure.
It could be that the high-dimensional and flexible parameter space of the = 3 NODE model has discovered the simple geometry of the phase dynamics. The loss sums the squared deviations between the smoothed data and the model trajectories. I measured the deviations for both species at the 181 half-yearly intervals, yielding 362 squared-deviation components in the loss calculations. One could of course find that geometry in other ways. The main advantage of the NODE model is that it does the fit quickly and automatically without any explicit assumptions about the shape of the dynamics.
Predicting future observations
Which models do best at predicting future outcomes? Given a single time series, one typically addresses that question by splitting the data. The first training subset provides data to fit the models. The second test subset measures the quality of the predictions. I trained the six models in Fig. 2 on the data from the first 61 yearly observations (0-60). I then compared the predictions of those models to the observed data for the subsequent 30 years (60-90). Figure 4 shows the results.
I obtained predicted values for a model by calculating the model's temporal trajectory for a particular set of fitted parameters. The predictions are the temporal trajectory over the test period, the years 60-90. A single trajectory represents the predictions for one set of fitted parameters.
When making predictions, one wants an estimate of the predicted values and also a measure of confidence in the predictions. How much variability is there in the trajectories when using alternative sets of fitted parameters?
To obtain a distribution of fitted parameter sets, I used the pSGLD method described earlier. That method provides a Bayesian-motivated notion of the posterior parameter distribution. To draw the predicted gold trajectories in Fig. 4, I estimated the posterior parameter distribution and then randomly sampled 30 parameter sets from that distribution.
How do the different models in Fig. 4 compare with regard to the quality of their predictions during the test period, 60-90? Starting at the top left, ODE = 2, the model predictions are precise but inaccurate. The small variation in trajectories during the test period reflects the high precision, whereas the large differences between the predictions and the data with regard to the timing of the oscillations reflect the low accuracy.
The low accuracy (high loss) during the training period and poor fit during the test period suggest that this model is underfit. Here, underfit roughly means that the dimensionality of the variables or of the parameters is not sufficient to fit the data.
Next, consider the lower left model in that figure, ODE = 4. That model has high accuracy during the training period, but very low precision during the prediction period. The model seems to be overfit. During the prediction period, the NODE models for = 3, 4 also have relatively low precision and varying but typically not very good accuracy. Those models may also be overfit, for which overfit means roughly that the models' high dimensionality caused such a close fit to the fluctuations in the training data that the models failed to capture the general trend in the data sufficiently to predict the outcome in the test period. Finally, consider the two best models with regard to predictions during the test period, ODE = 3 and NODE = 2. Those models have intermediate accuracy during the fitted period, which seemed to avoid underfitting and overfitting. During the test period, both models had moderately good accuracy with regard to the timing of oscillations and moderately good precision with regard to variation in the predicted trajectories. Although the fits are far from perfect, they are good given the short training period in relation to the complex shape of the dynamics.
A technical challenge arises when deciding how long to run the sampling period for pSGLD and what hyperparameters to use to control that process. When does one have a sufficient estimate for the posterior distribution of trajectories? In a typical run for this study, I first ran the pSGLD sampling for a warmup period that created 5,000 parameter sets and associated trajectories. I collected 10,000 or more parameter sets and associated trajectories by pSGLD. For each trajectory, I calculated the loss for the full time over both the training and test periods.
I concluded that the sample was sufficient when the distribution of loss values for the first half of the generated parameter sets was reasonably close to the distribution of loss values for the second half of the generated parameter sets. As long as the loss distributions were not broadly different, the plotted trajectory distributions typically did not look very different.
One could also study posterior distributions for individual parameters. However, in this study, there was no reason to analyze individual parameters.
Discussion
The various technical advances greatly enhance the ease of fitting alternative differential equation models. New possibilities arise to analyze dynamics, gain insight into process, improve predictions, and enhance control.
In this article, I focused on fitting observed dynamics from a natural system. Alternatively, one could study how to design a system to achieve desired dynamics or to match a theoretical target pattern 14 .
The pSGLD method to sample parameter combinations near a local optimum also raises interesting possibilities for future study. Technically, it is a remarkably simple and computationally fast method. Conceptually, it creates a kind of random walk near a local optimum on a performance surface, similar to mutation-selection dynamics near a local optimum of a fitness landscape 15 . That analogy suggests the potential to gain further understanding of genetic variation and evolutionary dynamics on complex fitness surfaces. | 5,383 | 2022-04-16T00:00:00.000 | [
"Computer Science"
] |
Extreme ultraviolet photoemission of a tin-based photoresist
Tin is a suitable element for inclusion in extreme ultraviolet photoresists because of its relatively high-absorption cross section at 92 eV. The electrons emitted after photon absorption are expected to generate secondary electrons in the solid film. In this way, several pathways lead to reactive species that cause a solubility switch. Here, we report the photoelectron spectra of tin oxo cage photoresists over the photon energy range 60–150 eV, and the relative yields of photoelectrons from the valence band of the resist, from the Sn 4d orbitals, and of inelastically scattered electrons. The experimental excitation spectra differ considerably from those predicted by commonly used database cross section values, and from the combined computed subshell spectra: the maximum efficiency of ionization of Sn 4d both in the photoresists and in Sn metal occurs near the industrially relevant EUV wavelength of 13.5 nm. VC 2021 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http:// creativecommons.org/licenses/by/4.0/). https://doi.org/10.1063/5.0047269 Extreme ultraviolet lithography is currently finding its way into industrial application in the semiconductor industry, despite the lack of detailed quantitative knowledge about the interaction of the radiation used (92 eV, 13.5 nm) with the materials applied as photoresists. Current implementation of the EUV technology is still based on chemically amplified photoresists, adapted from the well-established ultraviolet lithography that uses laser radiation with a wavelength of 193nm (6.4 eV). Molecular inorganic materials are considered as photoresists for the future because of their higher absorption cross section and potentially high etch resistance. The elements with atomic numbers 49 to 54 (In, Sn, Sb, Te, I, and Xe) have large photoionization cross sections at 92 eV that mainly derive from the core level excitation of 4d electrons. In order to estimate the photoabsorption and photoemission cross sections of the elements, values from the Centre for X-ray Optics, Berkeley database often reasonably reproduce experimental data, especially for the higher photon energies. In this communication, we report our results of a study of metallic tin and tin-based model photoresists (Fig. 1). The latter consist of a cage structure with 12 n-butyltin groups bridged by a total of 20 oxygen atoms. Of the n-butyltin units, six are in the central “belt” part of the molecule (5-coordinated Sn, black in Fig. 1), and six are on the two equivalent “caps” (6-coordinated, blue). Of the oxygen atoms, twelve are in the central part, six are bridging OH-groups, and two are in the interior of the cage. The tin oxo cage has a net charge of 2þ. The three compounds studied differ in the counterions: hydroxide, (TinOH), acetate (TinA), and trifluoroacetate (TinF). We show that the highest yield of Sn 4d electrons in the photon energy range 60–150 eV occurs near the EUV energy of 92 eV, rather than near 60 eV, which is predicted by quantum chemical calculations of subshell ionization cross sections. It is commonly accepted that EUV excitation leads to a cascade of secondary electrons, which can induce multiple chemical reactions per photon. The Sn–C bonds in the tin oxo cages are decisively weakened by removing or adding one electron; the former is the case with the photoionization Appl. Phys. Lett. 118, 171903 (2021); doi: 10.1063/5.0047269 118, 171903-1 VC Author(s) 2021 Applied Physics Letters ARTICLE scitation.org/journal/apl
Extreme ultraviolet lithography is currently finding its way into industrial application in the semiconductor industry, [1][2][3][4][5] despite the lack of detailed quantitative knowledge about the interaction of the radiation used (92 eV, 13.5 nm) with the materials applied as photoresists. Current implementation of the EUV technology is still based on chemically amplified photoresists, adapted from the well-established ultraviolet lithography that uses laser radiation with a wavelength of 193 nm (6.4 eV). 6,7 Molecular inorganic materials are considered as photoresists for the future because of their higher absorption cross section and potentially high etch resistance. [8][9][10][11][12][13][14][15] The elements with atomic numbers 49 to 54 (In, Sn, Sb, Te, I, and Xe) have large photoionization cross sections at 92 eV that mainly derive from the core level excitation of 4d electrons. 16 In order to estimate the photoabsorption and photoemission cross sections of the elements, values from the Centre for X-ray Optics, Berkeley database 17 often reasonably reproduce experimental data, [18][19][20][21] especially for the higher photon energies. In this communication, we report our results of a study of metallic tin and tin-based model photoresists (Fig. 1). 22 The latter consist of a cage structure with 12 n-butyltin groups bridged by a total of 20 oxygen atoms. Of the n-butyltin units, six are in the central "belt" part of the molecule (5-coordinated Sn, black in Fig. 1), and six are on the two equivalent "caps" (6-coordinated, blue). Of the oxygen atoms, twelve are in the central part, six are bridging OH-groups, and two are in the interior of the cage. The tin oxo cage has a net charge of 2þ. The three compounds studied differ in the counterions: hydroxide, (TinOH), acetate (TinA), and trifluoroacetate (TinF). We show that the highest yield of Sn 4d electrons in the photon energy range 60-150 eV occurs near the EUV energy of 92 eV, rather than near 60 eV, which is predicted by quantum chemical calculations of subshell ionization cross sections. 23 It is commonly accepted that EUV excitation leads to a cascade of secondary electrons, which can induce multiple chemical reactions per photon. 24 The Sn-C bonds in the tin oxo cages are decisively weakened by removing or adding one electron; 15,[25][26][27] the former is the case with the photoionization considered herein. The initial kinetic energy distribution after the EUV excitation obtained in the present work is the primary input for modeling the electron cascade and the initiation of chemical conversion in these materials.
Our experiments were performed at the PM4 beam line of BESSY II, using a VG Scienta angle resolved time-of-flight (ArTOF) electron analyzer. 28 The dipole beamline is equipped with a chopper that assures that pulsed x-rays are supplied to the LowDosePESstation. The ArTOF transmission function was measured as a function of electron kinetic energy (see the supplementary material Fig. S1). This was done by fitting the areas of the Au 4f photoelectron spectrum from an Au(111) single crystal sample; the spin-orbit components (binding energy 84.00 eV for Au 4f 7=2 ) were measured in a series of measurements in which the photon energies were varied. The cross section for photoionization of Au 4f in this energy range is well known. 29 The variation of photon flux was monitored via a mirror current in the beam line. The additional variation in detected electron intensity arises from the transmission function of the spectrometer. Since different energy windows sample different kinetic energy ranges, two sets of data were measured (using 15% and 5% energy windows).
The experimental Sn 4d intensities from the metal were recorded on an argon sputtered piece of polycrystalline Sn/Pb alloy. The Pb 5d signal (far from resonance) was used for intensity calibration.
TinOH and TinA were prepared as described previously. 30 TinF was obtained analogously to the synthesis of TinA, by adding two equivalents of trifluoroacetic acid to TinOH in tetrahydrofuran. TinF was characterized by 1 H and 19 F nuclear magnetic resonance spectroscopy (NMR), see Figs. S5(a) and S5(b). The 1 H NMR spectrum is very similar to the spectrum reported (and thoroughly assigned) for TinOH. 31 This result shows that the tin-oxo cage structure has remained completely intact during the conversion of TinOH to TinF. The 19 F NMR spectrum confirms that only one type of fluorine atom is present in TinF. Films of tin oxo cages were prepared by spincoating from toluene solutions on gold-coated silicon as described before. 32 The thickness of the films was ca. 20 nm according to Atomic Force Microscopy measurements.
Molecular structures were optimized using the B3LYP hybrid density functional model with the LANL2DZ effective core potential basis set using Gaussian 16. 33 For more reliable evaluation of (relative) energies, these were calculated using the Def2SVP and Def2TZVP basis sets at the optimized geometries. More details are given in the supplementary material.
The photoelectron spectra obtained for thin films of TinOH spin-coated on gold on silicon are shown in Fig. 2(a). The spectra have been corrected with respect to. spectrometer transmission function. All spectra have been normalized so that the area of the valence band (the energy range 5-15 eV) is set to unity.
To obtain the relative intensity of Sn 4d at different excitation energies, the spectra of TinOH were fitted, in a least squares sense, to a series of Voigt functions using the SPANCF routines (written by Kukk) for IGOR PRO (WaveMetrics Inc., Lake Oswego, OR, USA). The goal with the fit was to reproduce the spectral intensity distribution. The Sn 4d region was fitted with two components reflecting the spin-orbit splitting of this photoelectron peak. The inelastic losses in the background were modeled by fitting wide peaks to the rising background at the high binding energy side to both the valence and the Sn 4d set of lines. The shift between the Sn 4d line and the inelastic loss peak was kept the same as the corresponding background to valence band center shift. The results of the fits, including components are presented in the supplementary material, Fig. S3. We find a reasonable agreement between the spectra and the computed orbital energies. The comparison between the computed DOS and the experimental spectrum can be used to identify plasmon components of the experimental spectrum on the high binding energy side (as has been done for other Sn compounds 34,35 ). The relative areas of the peak originating from the Sn 4d electrons at the different photon energies are given in Table I. The Sn 4d signal is dependent on photon energy and is strongest at 92 eV excitation energy. The behavior of the intensity variation in TinOH is similar to that of metallic Sn [see Fig. 2(b)]. As seen in Table I, the size of the inelastic loss background varies relative to that of the main photoelectron signature.
The photoelectron spectra of TinOH are compared with those of TinA and TinF in Fig. 3 for photon energies of 92 eV and 150 eV, respectively. The spectra show a band of valence electrons in the binding energy (BE) range 5-25 eV and Sn 4d electrons, which give rise to an unresolved spin-orbit doublet (the split is about 1 eV) near 30 eV. In the gas phase, we found an onset of ionization for the bare dication at 12 eV. 26 This agrees well with the ionization energies calculated (B3LYP/Def2TZVP//LANL2DZ) for the bare dication of 11.9 eV. The value calculated for the neutral TinOH structure is lower because of the electrostatic interaction: 7.2 eV. To approximately account for the interactions with the polarizable environment in the solid, we performed calculations with the Polarizable Continuum Model using diethylether (static dielectric constant 4.24). The interaction further stabilizes the charged species relative to the neutral form, giving a predicted vertical ionization potential (IP) for TinOH of 6.6 eV.
In the thin solid films in the present work, the onset of ionization is found experimentally to agree with that number when the spectra are referenced to the vacuum level. The binding energy scale is constructed using that the ionization potential (IP) is shifted by half the solid's bandgap E g . (See, e.g., Ref. 36, the Fermi level of an intrinsic semiconductor is approximately in the middle of the bandgap.) Additionally, the work function / (known from calibration) of the system determines the shift between the Fermi level E F of the solid and the vacuum level. These films are thick enough to safely assume that the interfacial dipole is negligible. 37 The ionization potentials of the molecule and the solid are, thus, related through IP molecule ¼ IP solid À/ À E g =2. The bandgap of 4.91 eV of the molecular films was obtained from a Tauc-plot (see, e.g., Ref. 38) of the UV/Vis absorption spectrum (see the supplementary material Fig. S2). For the Sn 4d electrons of TinOH, the computed orbital energies are $À28 eV, in reasonable agreement with the experimental binding energies. The computed density of states is shown in the supplementary material [ Figs. S4(a) and S4(b)]. In the energy range 10-25 eV, primary electron emission can occur from C 2s (10-20 eV) and O 2s orbitals (20-25 eV) (Fig. S4). The spectra of the three tin oxo cages that investigated here do not differ much, as shown in Fig. 3. The calculations suggest that some of the F 2s electrons are more strongly bound than Sn 4d, but no electrons are detectable in this energy range in Fig. 3(b). In the valence region, the presence of the trifluoroacetate ion gives rise to small extra peaks between 12 eV and 14 eV.
Note that the two different types of Sn atoms in the tin cages are predicted to have slightly different binding energies: the Sn 4d electrons of the five-coordinated Sn-atoms in the central belt of the molecule are more strongly bound by 0.6 eV than those of the sixcoordinated Sn-atoms at the two caps [see computed DOS in Fig. 4(b)]. This difference together with the spin-orbit splitting of Sn 4d of about 1 eV (Table II) is responsible for the broad Sn 4d feature.
The Sn 4d electrons give rise to two sets of spin-orbit pairs at 29 and 30 eV for all three tin oxo cages studied (see Table II). This binding energy is smaller than values observed for tetramethylstannane Sn(CH 3 ) 4 (30.7 and 31.8 eV, 39 or 31.5 and 32.6 eV 40 ). For SnF 2 ca. 2.8 eV, smaller binding energies were found in the solid state than in the gas phase, 41 and if the same difference applies for the tin oxo cages and Sn(CH 3 ) 4 , we find that the 4d electrons are somewhat more strongly bound in the tin oxo cages than in Sn(CH 3 ) 4 . This can be qualitatively explained by the effect of the electronegative oxygen atoms.
The photoionization cross sections for the different subshells were calculated by Yeh and Lindau, 23 and no recent systematic calculations of the same properties are available. The CXRO database contains cross section information based on a combination of experimental and computational data for each element. In Fig. 4(a), we show the predicted cross section for TinOH obtained as a sum of contributions from the different elements (contributions of the different elements are also shown) in the binding energy range between 10 Figure 4(b) shows the computed subshell contributions (weighted with the number of atoms in the structure). The cross section near the ionization threshold of Sn 4d starting at 28 eV is very small. Only above 45 eV, the computed cross section [ Fig. 4(b)] starts to become appreciable, and it peaks near 60 eV. Gas phase studies on tetramethylstannane Sn(CH 3 ) 4 39,40 show a rapid rise of the 4d ionization cross section from the onset at 31 eV up to the highest photon energies used in that work, 70 eV. The CXRO data (which include all electrons of each element) show a similar trend, but the decrease in cross section >60 eV is not as drastic as in the computed data [ Fig. 4(b)]. Our results show that the maximum in the cross section for Sn 4d ionization occurs around 92 eV. A similar discrepancy for 4d ionization has been pointed out in the case of CH 3 I and its measured 4d ionization cross section. 43 This suggest that improved methods are needed for a more accurate computational picture of 4d ionization. 16 From a theoretical point-of-view, Cooper and co-workers point out that going beyond the central field approximation and treating electron-electron interaction with more sophistication can shift the maxima of cross sections toward higher kinetic energies. 44 It is commonly accepted that the chemistry of EUV photoresists is electron-driven. 24,45 After the initial photoionization at 92 eV, the primary electrons emitted from the occupied molecular orbitals have kinetic energies up to 80 eV. These are assumed to initiate a cascade of secondary electrons. When primary electrons are emitted from core levels, such as Sn 4d, Auger processes are additional potential sources of electron emission.
Taking the binding energies of about 30 eV for Sn 4d and about 5 eV for the HOMO, the Auger electrons can be emitted from orbitals with 5 eV < BE < 15 eV, in which molecular materials are delocalized molecular orbitals (MO). Orbitals that contain important contributions from the atomic orbitals of Sn (5s and 5p) are at the top of the valence band [ Fig. 3(b)]. Although Auger processes probably occur upon excitation at 92 eV, it is difficult to observe them because the emitted electrons have low kinetic energies spread over a broad range since they originate from the valence manifold. For atomic Sn, the NOO Auger spectrum 46 is distributed over kinetic energies <12 eV.
In this communication, we have shown that the orbital origin of the primary photoelectrons in tin oxo cage materials depends strongly on the photon energy. At the technologically relevant photon energy of 92 eV, the ratio of Sn 4d core ionization to valence ionization is particularly large. Thus, both Auger electrons and inelastic processes involving photoelectrons contribute to provide low energy electrons in the system. In this study, we analyze the electron spectrum after excitation, focusing on photoionization cross section and how that may give rise to low energy electrons. This is complementary to studies of the photoabsorption cross section of Sn in the context of EUV lithography. 47 The present results underscore the usefulness of Sn as a photon absorbing element in EUV photoresists. The ultimate understanding of EUV photochemistry must come from modeling the entire process, from ionization to reactions. Knowledge of the initial electron kinetic energy distribution is a milestone on the (long) road to this understanding.
See the supplementary material for the data, and more detailed descriptions of methods that support the findings of this study.
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request. Applied Physics Letters ARTICLE scitation.org/journal/apl | 4,261.6 | 2021-04-27T00:00:00.000 | [
"Materials Science"
] |
A Lightweight Deep Neural Network Using a Mixer-Type Nonlinear Vector Autoregression
The design of a lightweight deep learning model would be an ideal solution for overcoming resource limitations when implementing artificial intelligence in edge sites. In this study, we propose a lightweight deep neural network that uses a Mixer-type architecture based on nonlinear vector autoregression (NVAR), which we refer to as Mixer-type NVAR. We applied overlapping patch embedding to enrich the image input and Sequencer architecture for vertical and horizontal operation inside the Mixer-type NVAR. We utilized a window partition technique and general quadratic positional encoding to increase the performance of the proposed model. Our model achieved a top-1 accuracy of 82.48% for the CIFAR-10 dataset with 0.159 M parameters and 98.36% for MNIST with 0.106 M parameters. Moreover, we evaluated its throughput on a central processing unit, which was 190.1 images per second for CIFAR-10 and 106.7 images per second for the MNIST dataset. These results are competitive with the state-of-the-art convolutional neural network-based model, MLP-Mixer, and the traditional reservoir-computing-based Mixer model with the same tuning of hyperparameters.
I. INTRODUCTION
Implementing artificial intelligence (AI) in the edge site offers promising advantages for tackling problems in communication between end users and data management, such as connection bottlenecks to the cloud, bandwidth limitations, scalability, and privacy [1], [2], [3], [4].However, the implementation of AI in the edge site or edge intelligence comes with its own limitations, especially with resource availability, such as for power and storage.Consequently, its implementation is still challenging, even though present end-user devices appear to have high performance, and it still lacks support for the AI model implementation [5].
Deep-learning technology has the potential for lightweight models to be implemented in edge sites, especially for the vision task.We start with models based on convolutional neural networks (CNNs), such as MobileNet [6] and EfficientNet [5], [7].The latest MobileNet (V3) achieved The associate editor coordinating the review of this manuscript and approving it for publication was Mehdi Sookhak .a top-1 accuracy of 92.80 with 4.21 M parameters [8].Moreover, EfficientNetV2 exhibited superior performance on a dataset of tiny images, achieving a top-1 accuracy of 99 with 121 M parameters [5].Besides these CNNbased models, Transformer-based models such as Vision Transformer (ViT) offer higher accuracy in vision-based tasks [9].Implementation of ViT in the edge site requires further study owing to the model's complexity compared with the number of parameters related to the training and testing costs, so a CNN-based model is currently a better solution for implementation in the edge site.
One particularly promising deep-learning model is MLP-Mixer.This model performed well in vision tasks without convolutions in the CNN-based model or self-attention in the Transformer-based model [10].By taking advantage of the Transformer, MLP-Mixer replaces the multi-head attention (MHA) with the stacks of the MLP layer for token and channel mixing.Since token mixing is sensitive to spatial information [11], MLP-Mixer captures the global translation without local operation.Moreover, this model offers high accuracy with a large number of parameters, which is an important consideration for implementation in an edge site.
Furthermore, models based on recurrent neural networks (RNNs) that have been proven to be effective at time series tasks also have the potential to be implemented into edge sites.RNN-based models also offer lighter and less complex operation.Deep long short-term memory (LSTM) in the Sequencer architecture [12] is an RNN-model that was built for image classification.Sequencer applies two bidirectional LSTMs (BiLSTMs) as the building blocks for replacing the MLP-layer in the MLP-Mixer architecture.These BiLSTMs work in the vertical and horizontal directions as parallel operations.In short, Sequencer is able to achieve a higher efficiency than Transformer-based models and MLP-Mixer.The number of parameters used in the Deep Sequencer model is 54 M, which is large compared with CNN-based models and MLP-Mixer.
Reservoir computing (RC) models are strong candidates for implementation in the edge site.Different from RNNbased models, the neural connections in RC are fixed and are generated randomly, thus enabling a faster training process compared with other fully trained RNN-based models [13].In [14], RC was successfully implemented for the vision task.The researchers in that study applied an echo state network (ESN), a type of RC that uses the nonlinear dynamics of the reservoir and the linear layer to recognize the output.In this state, ESN achieved a top-1 accuracy of 99.07 with 4000 nodes in the reservoir.In 2021, [15] introduced a new RC concept called the next-generation of reservoir computing (NGRC).NGRC proved that RC is mathematically equivalent to nonlinear vector autoregression (NVAR).Their results showed that NGRC is faster than traditional RC owing to its fewer fit parameters, and thus its smaller feature vector size.Extending its implementation to the vision task has potential, following what was done with previous RC models.This model provides an optimized solution for an implementation in the edge site, especially with enlarged applications, such as image classification.
In the present study, we proposed a lightweight deep neural network (DNN) model with the goal of achieving reasonable classification accuracy with fewer parameters than current state-of-the-art models.We adopt token and channel mixing by taking advantage of the MLP-Mixer and Sequencer architectures, allowing us to achieve a robust and lightweight model.We used NVAR to replace the MLP in the MLP-Mixer layer and BiLSTM in the Deep Sequencer.The contribution of this work can be summarized as follows.
• An overlapping patch was implemented in the patch embedding to enrich the feature vectors from image input.
• Mixer-type NVAR utilized only one block of the vertical NVAR Sequencer-Mixer and one block of the horizontal NVAR Sequencer-Mixer for the token and channel mixing.
• We used a window partition [16], [17] and general quadratic positional encoding (GQPE) [16] to improve the proposed model performance by performing local operations inside the Mixer layer.
The rest of this paper is structured as follows.Section II is an overview of related work, and Section III is an outline of our model's architecture and its components.In Section IV, we present the evaluation condition, the results, and a discussion of them.We conclude with Section V, in which we present our conclusions and future work.
II. RELATED WORK
A lightweight DNN model is a potential solution for overcoming resource limitations when implementing AI in edge sites.This is especially the case in vision tasks, for which light CNN-based models such as such as MobileNet and Efficienet are the current state-of-the-art deep learning models.MobileNet was proposed for mobile applications that use depthwise separable convolutions, and MobileNetV3 is an extension of MobileNetV2 [6] that applies the new nonlinearity h-swish, which is faster in computation and easier for quantization.MobileNetV3 also uses the squeeze and excited module in the residual layer to adaptively recalibrate features.EfficientNet also offers good performance in vision tasks for mobile applications [7].As an improvement over the previous version, EfficientNetV2 supports a faster and smaller model for image recognition [5].EfficienetV2 also uses adaptive regularization, which can be adjusted depending on the image size, thus enabling a faster training time.Since both MobileNetV3 and EfficientNetV2 have been proposed for mobile applications and have successfully demonstrated good performance in the vision task, they can both potentially be implemented in edge devices.However, these model architectures still require a large number of parameters.
MLP-Mixer, a newer deep-learning model architecture for vision tasks, applies MLP blocks consisting of two linear layers, a GELU activation function, and a fully connected layer as a classifier [10].In general, the architecture of MLP-Mixer is similar to that of ViT.A patch-embedding layer is applied to divide and embed image inputs using linear projection, which are then fed into MLP blocks.The first MLP block is responsible for token mixing and the second is for channel mixing.Both the MLP-Mixer and the ViT models [9] use non-overlap patching as a token for the input.Consequently, MLP-Mixer relies only on global information captured within the respective patch, and it may lack important information from the edge of the patch as a result, especially for tiny images like those in the CIFAR dataset.
Sequencers, which are a new architecture in RNN-based models, are also an optimized solution for enhancing model efficiency and decreasing the training time in vision tasks [12].The Sequencer adopts the architecture from ViT and applies the BiLSTM to replace the MHA block.The BiLSTM is constructed in two parts: a vertical BiLSTM and a horizontal BiLSTM.Since the vertical and horizontal BiLSTMs operate in parallel, the sequence length is reduced, thus decreasing the time involved in training and the inference process.Moreover, Sequencer is flexible with regard to the image input resolution, so it preserves accuracy if the image resolution differs from the training process.Since Sequencer uses recursion and memory saving to mix spatial information, many parameters are required to build the model architecture.Sequencer requires 54 M parameters for a large Sequencer2D model.Similar to MLP-Mixer, Sequencer also applies a non-overlap image input.Thus, Sequencer may lack important features for tiny images.
As an RNN-based model, RC has been successfully implemented in a time series.It offers a faster and lighter training process by applying a fixed and randomly generated network for extracting features.An ESN, as a type of RC, also has a fixed random connection in the network architecture [18], [19].Compared to a traditional RNN, ESN has less complexity in its training process, and is robust against noise and overfitting.After extending the implementation of RC to a vision task, ESN was successfully implemented to perform the recognition task with the MNIST dataset [14].Additionally, ESN is elaborate enough to be able to handle the high-dimensionality of input owing to its random connections, but its application to vision tasks still needs to be improved.The researchers in [13] and [15] demonstrated that RC is equivalent to NVAR, and thus it has been referred to as next-generation reservoir computing.Their NVAR model performed better by reducing the training data and testing time.However, implementation of this model is still limited to time-series tasks, such as forecasting the short-term dynamics and long-term climate of chaotic systems.
In the present study, we applied the Mixer and Sequencer architecture processes by implementing NVAR as a token and channel mixing.We utilized the CIFAR-10, MNIST, Fashion MNIST, and EMNIST datasets to improve our proposed model application in vision tasks and to evaluate its performance for tiny images.
III. PROPOSED LIGHTWEIGHT DNN MODEL USING A MIXER-TYPE NVAR A. OVERVIEW
In this section, we describe our proposed lightweight DDN model that applies a Mixer-type construction and a Sequencer in its architecture.As shown in Fig. 1(a), we adopted the architecture of MLP-Mixer and Sequencer as the backbone of our model.We applied the overlapping patch to generate tokens from image inputs before feeding them to the Mixer blocks and a pooling layer (general average pooling).In the Mixer block, we constructed the vertical NVAR Sequencer-Mixer and the horizontal NVAR Sequencer-Mixer, which mixes the tokens and channels.In the edge of the architecture, a fully connected layer is used as a classifier for image classification.The normalization layer is used to normalize the outputs of patch embedding before feeding them to the NVAR Sequencer-Mixer block.Our model also applies the skip connection to improve the information that is introduced into the Mixer.Fig. 1(b) shows the construction of the Sequencer that builds the Mixer blocks.It is here that the vertical and horizontal Sequencers mix the token and channel information.We can see in Fig. 1(c) how the NVAR model is used in the Sequencer as the base Mixer layer.Additionally, a window partition is used to support the local operation inside the Mixer block.Furthermore, we applied the GQPE as a type of relational positional encoding (PRE) to enhance the token and channel mixing.
B. NONLINEAR VECTOR AUTOREGRESSION
Our model is primarily based on NVAR, which is used as a base in the Mixer layer to replace the MLP in MLP-Mixer and the BiLSTM in the Deep Sequencer LSTM.We utilized one block of NVAR [15] in the vertical and horizontal Mixer blocks, as shown in Fig. 1(b) and (c).Since NVAR is mathematically equivalent to RC, it can serve as a more optimized solution compared with the state-of-the-art CNN and MLP models.In addition to its robustness to noise and overfitting, NVAR has a delicate operation with a smaller number of parameters, making it suitable for overcoming the resource limitations in edge site implementation.As explained in [15], NVAR, as the next generation of reservoir computing, utilizes a constant (c), and both a linear and a nonlinear portion of the feature vector, as shown in (3).
An illustration of the total output of NVAR is given in Fig. 2. The linear features shown in (1) consist of an input vector X at the current time step i and at k − 1, the previous time for space s. (s−1) represents the skipped steps before the input vector of X at i, and k is the number of reflections.The nonlinear feature vector (O nonlin,i ) in ( 2) is obtained by taking the tensor product of each linear feature vector.As important parameters, this model requires only s × k time steps at the starting point of feature vector processing.We also applied the polynomial as a nonlinear function to get the nonlinear output from the linear feature vector of NVAR, as in (4), where (p) is the order of the polynomial feature vector.
C. NVAR SEQUENCER-MIXER
In the implemented Mixer architecture, we utilized a normalization layer, the Sequencer architecture for token and channel mixing, and a residual.As mentioned above, we replaced the MLP blocks in MLP-Mixer with NVAR as a base model.In Fig. 1(b) we can see that the primary process Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply. in our Mixer architecture is composed of the vertical and horizontal Sequencer-Mixer blocks for token and channel mixing.In these Mixer blocks, the inputs are split from four dimensions, (batch size × height × width × channel, or B × h×w×C) yielding two two-dimensional outputs, (h×C) and (w × C).The vertical Sequencer-Mixer crosses the vertical axis while conducting token and channel mixing of (h × C) for the input feature, while the horizontal Sequencer-Mixer shuffles the horizontal axes of (w × C).We also utilized the layer norm and a skip connection for each Mixer block input before feeding them to Sequencer parts, and we utilized a window partition that has previously been proven to work well in [16], [20], and [17].The window partition crops the output of the linear layer into several small windows and captures the local information within each window.
To enhance the performance of Sequencer, we applied GQPE [16] as an additional feature in the Sequencer portion.This positional encoding supports the token position that is applied by the Transformer as essential information, thus enriching the feature data [21].This local operation reduces the computational complexity in the Sequencer-Mixer blocks, enabling a lighter and faster DNN model.
D. OVERLAP
We applied the overlapping technique to obtain beneficial spatial information on tiny datasets, such as the CIFAR-10 dataset.Overlapping can provide local continuity of an input image in the patch embedding process [22].The overlapfiltered window can also provide additional neighboring information, such as the edge information from the patches, which might contain essential features.Additionally, some information might be lost during the filtering process, especially for tiny images that are easily misinterpreted.In [23], it was shown that slight information exchanges between neighborhood windows increase the model's performance.An illustration of the overlapping patch is given in Fig. 3.For example, a 7 × 7 image input size with a 3 × 3 applied kernel size and stride 2 for patch division with padding 1.The patch is overlapped by the pixels stridden in both directions (horizontal and vertical) for each image input.Consequently, we can obtain a number of patches in addition to the nonoverlapping patching.Overlapping also enriches the information by continuously capturing the local information in the image input and collecting the neighboring information from patches (as shown in the generated patches), resulting in 3.
E. POSITIONAL ENCODING
GQPE is a type of PRE that adds encoding based on time lags [24].When implemented in the manner of [16], GQPE provides the positional information lacking in channel mixing.We applied GQPE in our model specifically to tackle this problem in our Mixer block, especially in the horizontal Sequencer performing as a channel token Mixer.Since GPRE offers O(1) for token mixing complexity [16], this benefit might increase time efficiency, thus reducing operation time for both training and inference.The learnable vector, v, and the relative positional, r δ , are defined as [16] and [24] and δ = (δ x , δ y ) is the relative position of the learnable vector, and is the controller for the displacement of the distribution center relative to that belonging at position p i .controls the distribution functions in a learnable vector.
IV. EVALUATION A. DATASET 1) CIFAR-10 DATASET
Since [25] introduced the CIFAR-10 dataset, many deep learning models [5], [6], [10], [26] have applied it to visionbased tasks.The CIFAR dataset contains the CIFAR-10 dataset for ten classes of tiny images and the CIFAR-100 dataset for one hundred classes [27].These datasets provide training images as well as testing datasets.the present study, we applied the CIFAR-10 dataset to assess how our model works on tiny images.This dataset provides 50,000 images for training and 10,000 for testing, with ten classes for each category.
2) MNIST DATASETS
We also utilized the MNIST datasets [28], [29] in this study, of which there are three: MNIST, Fashion MNIST, and EMNIST.The MNIST and Fashion MNIST datasets have ten classes of images, while the EMNIST dataset has 64 classes.This dataset is composed of one channel of grayscale images with a 28 × 28 resolution.The MNIST and Fashion MNIST datasets provide 60,000 images for training and 10,000 images for testing.EMNIST, on the other hand, offers 814,255 [30].These datasets differ in the characters that are used in them.We applied MNIST to classify ten classes of digits, Fashion MNIST to classify ten classes of clothing images, and EMNIST to classify 64 classes of handwritten digits.
B. EVALUATION CONDITIONS
In evaluating the performance of the proposed model, we applied several lightweight models with traditional RC and NVAR as the RNN-based model, MobileNetV3 as the CNN-based lightweight model, and MLP-Mixer itself.Specifically, we used traditional RC and NVAR in the Mixer architecture, as shown in Fig. 4, the goal being to provide a fair comparison to our overlapping Mixer-type NVAR.Each model was tested using the CIFAR-10 and the MNIST family of datasets with the same settings for the hyperparameters and the training and testing environments.We used 200 reservoir nodes for RC-Mixer, and k = 2, s = 1, and p = 2 for the NVAR-based model.We applied the small architecture of MobileNetV3 [6] as a CNN-based model.The MLP-Mixer architecture and configuration used in this study were provided by the TIMM library [31], and are based on [10].We utilized the separated dataset for training, which consists of the training and validation datasets, and the testing dataset for the inference process.After training our proposed model using the hyperparameters listed in Table 2, we tested it and the benchmark models using the M1 processor, the specifications of which are given in Table 1.We calculated the top-1 accuracy, the number of parameters, and the floating point operations (FLOPs) usage for each model with each dataset.Additionally, we also calculated the throughput so that we could evaluate model performance in the inference process.
C. ABLATION STUDIES
We performed ablation studies before attaining our final model, the details of which are given in Table 3.The ablation studies validated the significance of the techniques applied in our proposed model.Starting from the state-of-the-art RC model that was implemented in the vision task, we applied ESN as the base model for RC.This is compared with the new generation of RC (NVAR) in the Mixer architecture shown in Fig. 4(a).Fig. 4(c) and (d) correspond to the RC-Mixer and NVAR-Mixer layers, respectively.First, we applied the traditional RC and NVAR layers in the Mixer architecture without overlapping in the patch embedding, Sequencer, and GQPE, with a window partition in the Mixer block.The results for RC-Mixer and NVAR-Mixer for each dataset with no additional techniques in the Mixer layer are shown in Table 4. Second, after confirming the results for the traditional RC-Mixer and NVAR-Mixer, we then applied the overlapping patch of an image input to the patch embedding block for the NVAR Sequencer-Mixer model.As a result, the NVAR-Mixer architecture with the Sequencer and the overlapping patch embedding increased model performance up to 3% for the CIFAR-10 dataset and 8% for the MNIST and Fashion MNIST datasets.Third, to validate the effectiveness of GQPE with a window partition, we applied the GQPE to NVAR Sequencer-Mixer along with both overlapping and non-overlapping in the patch embedding, as shown in Table 3.The accuracy of the NVAR Sequencer-Mixer with GQPE significantly improved.
As can be seen in Table 4, accuracy improved when we used NVAR-Mixer with the vertical and horizontal Sequencers in the Mixer block, demonstrating their effectiveness for token and channel mixing.The inference time was also reduced significantly.Our proposed model using the Sequencer-Mixer architecture achieved higher throughput compared with the traditional RC-Mixer and NVAR-Mixer.The NVAR Sequencer-Mixer also reduced the number of parameters and the FLOPs usage.This demonstrates that the applied Sequencer architecture in the Mixer block increased resource usage efficiency during the computation of the token and channel input.Even though the throughput was slightly decreased while using the GQPRE with the window partition, the result was still reasonable.Based on the results of the ablation studies, we chose to adopt the overlapping technique for NVAR Sequencer-Mixer as our proposed model.We also applied the GQPE with the window partition to enhance the performance of token and channel mixing in the Mixer block.
D. RESULTS
Upon conclusion of the ablation studies, we performed experiments with the state-of-the-art models using the same set of hyperparameters for each dataset and the same testing environment.The results are summarized in Table 5.Using the same number of hyperparameters for the CIFAR-10 dataset in the training process, our model showed a competitive result compared with MobileNetV3, a CNNbased model, and MLP-Mixer.It achieved a top-1 accuracy of 82.48% in the CIFAR-10 dataset, while the state-of-theart model achieved a top-1 accuracy of 87%.As mentioned previously, the overlapping patch improved model accuracy, especially for tiny images like those in the CIFAR-10 dataset.For the MNIST dataset, the top-1 accuracy of our proposed model was close to that of the state-of-the-art model: our model achieved 98.36%, MobileNetV3 achieved 98.78%, and MLP-Mixer achieved 98.24%.Our model also showed good performance with the Fashion MNIST dataset, achieving a top-1 accuracy of 89.87%, while MobileNetV3 and MLP-Mixer achieved 93.49% and 91.24%, respectively.Last is the EMNIST dataset, for which our model again exhibited good performance: a top-1 accuracy of 84.15%, close to MobileNetV3 with 86.92% and MLP-Mixer with 85.68%.
Additionally, in our attempts to develop a lightweight model for edge implementation, we also evaluated the number of parameters, the FLOPs usage, and the throughput.We found that the number of parameters and the FLOPs used in our model was significantly smaller than in MobileNetV3 and MLP-Mixer when using the CIFAR-10 dataset, even compared with the traditional RC and NVAR Mixers.Table 5 confirms that our model achieved a higher throughput than MobileNetV3 and MLP-Mixer for each dataset.In the MNIST family of datasets, the values for a number of parameters differed slightly, but they were still smaller than other state-the-art models.
E. DISCUSSION
Our goal was to develop a lightweight model with competitive performance that could overcome the limitations in resource allocation in the edge site.Therefore, the number of parameters, the FLOPs usage, and the model performance in inference are the highlights of this discussion.
1) PARAMETERS AND FLOPS USAGE
As shown in Tables 4 and 5, we compared our proposed model with the traditional RC and NVAR in the Mixer architecture, and to the state-of-the-art models MobileNetV3 and MLP-Mixer.Our model outperformed both of the traditional models and showed a competitive performance compared with MobileNetV3 and MLP-Mixer under identical conditions.The number of parameters in our model decreased significantly, as shown in Table 6.NVAR Sequencer-Mixer was 0.104 times the number of parameters used in MobileNetv3.Since the Mixer block becomes the main block in the Mixer architecture using the vertical and horizontal Sequencers, our model used only one Sequencer-Mixer block for token and channel mixing, which led to a reduction in parameter usage.
We also calculated the number of FLOPs used in inference.Since the FLOPs quantify the computational complexity of a model [32], we can evaluate the efficiency of our model by counting them.FLOPs usage in our model was reduced significantly.As mentioned previously, the NVAR model in the Sequencer-Mixer layer, shown in Fig. 1(c), used two-dimensional tensors as inputs, (h × C) for the vertical Sequencer and (w × C) for the horizontal Sequencer, thus reducing the tensor input size from four dimensions.Additionally, NVAR applied only linear and nonlinear projections into the two-dimensional input.This operation reduced the complexity inside the Mixer blocks.Using the same hyperparameters and datasets, our model used only 0.556 times for CIFAR-10 and 1.533 times for the MNIST datasets of FLOPs used in MobileNetV3.Despite the number of FLOPs used in MobileNetV3 being smaller than in our proposed model for the MNIST datasets, the number used by our model was still reasonable compared with MLP-Mixer.103550 VOLUME 11, 2023 Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.The overlapping patch from the image input also increased the number of patches compared to the non-overlapping patch.However, adjusting the appropriate kernel size, stride, and padding values did not significantly influence the number of parameters, especially for tiny images.Table 4 shows that the number of parameters remaining was identical for each dataset regardless of whether they used overlapping or nonoverlapping patches.Regarding FLOPs usage, the NVAR Sequencer-Mixer model with GQPE and a window partition consumed the same number of FLOPs for inference with and without overlapping patches.However, the FLOPs varied slightly when the proposed model used the window partition and GQPE inside the Mixer layer.FLOPs usage increased from 0.021 G to 0.023 G for the MNIST datasets.As stated previously, the window partition and GQPE are performed as local operations inside the Mixer block, and can contribute to the number of FLOPs used.Despite the slight increase in the number of FLOPs, the results overall tell us that our model is promising as a DNN model for edge implementation.
2) MODEL PERFORMANCE
As previously stated, the NVAR had a smaller number of parameters, and it used nonlinear mapping to transform the input data into a high-dimensional space.Using the NVAR inside the Sequencer-Mixer layer reduces computational complexity.We received a smaller number of parameters with competitive performance and a faster inference process.As shown in Table 4, our model improved accuracy compared to the traditional RC-Mixer and NVAR-Mixer without Sequencer construction, increasing to 82.48% for CIFAR-10 and 98.36% for the MNIST dataset.Moreover, the overlapping patch from the image input also increased the performance of our model; it achieved greater accuracy than the traditional RC-Mixer and NVAR Sequencer-Mixer models without overlapping input.The top-1 accuracy of our model for tiny images showed that the overlapping patch enriches the feature map, thereby providing essential information for enhancing model performance in image classification.Additionally, compared with the state-of-theart models shown in Fig. 5(a),(b),(c), and (d) for each dataset, our proposed model performed well with a smaller number of parameters.
Additionally, the Mixer architecture, such as in MLP-Mixer, is linear with respect to model complexity.This stands in contrast to Transformer-based models, which are quadratic in model complexity [11].Therefore, we evaluated the throughput of our proposed model by defining the number of images per second in the inference.As stated previously, overlapping patches improved the accuracy of the model, as well as its throughput.Another feature that improved the throughput of the model was using a window partition and GQPE.Both of these reduced the processing time owing to their lower complexity [16].Table 4 shows that the number of throughputs of our model surpassed that of RC-Mixer and NVAR-Mixer.It reached 190.1 images per second for the CIFAR-10 dataset and 111.4 for the EMNIST dataset, the highest among the MNIST datasets.The slight increase in FLOPs usage in our model when using a window partition and GQPE also increased the computation time.Compared with the state-of-the-art model shown in Table 6, our model outperformed the throughput of MobileNetV3 and MLP-Mixer for the CIFAR-10 dataset.For the MINIST, the throughput of the proposed model is still a competitive result.
V. CONCLUSION AND FUTURE WORK
In this study, we proposed a lightweight DNN model that adopts the Mixer and Sequencer architectures.We utilized overlapping for the image input processing in the patch embedding.We developed a Mixer block with an NVAR layer to replace the MLP layer in the MLP-Mixer and the BiLSTM layer in the Deep Sequencer, respectively.Additionally, GQPE and a window partition were also applied to enhance the performance of the local process inside the Mixer block.Our model showed a significant improvement in the accuracy, the number of parameters, the FLOPs usage, and the throughput compared with the traditional RC-Mixer and NVAR-Mixer models.It achieved a top-1 accuracy of 82.48% for the CIFAR-10 dataset with 0.159 M parameters and 98.36% for MNIST with 0.106 M parameters.Also, its throughput on a central processing unit, which was 190.1 images per second for CIFAR-10 and 106.7 images per second for the MNIST dataset.These results were also competitive with MobileNetV3 and the MLP-Mixer model.The proposed method may be an optimized solution for implementation in the edge site.The lower complexity of the model and the smaller parameter usage are beneficial for tackling edge limitations.
Nevertheless, further study is required to improve the continuity of our model.First, regarding the hyperparameter usage in building the model architecture, another experiment with a varying number of parameters is required to achieve better model performance.Second, this study used CIFAR-10 and the MNIST family of datasets, which are in grayscale with a lower resolution, so we need to expand the implementation to a larger dataset of high-resolution images.Using a larger and higher-resolution dataset will allows us to further assess the effectiveness of the window partition in enhancing local computation efficiency inside the Mixer block.
FIGURE 1 .
FIGURE 1.(a) Main workflow for performing image classification using a Mixer-type NVAR.An overlapping patch of image input is utilized in the patch embedding process and then fed to the Mixer block.(b) The Mixer block consists of vertical and horizontal Sequencer NVAR blocks for token and channel mixing, respectively.(c) The NVAR model is the main portion of the Sequencer in the Mixer layer.After passing the linear layer, the output of the NVAR is parted by a window partition and applied to GQPE as relational positional encoding.A Mixer-type NVAR uses the fully connected layer as a classifier in the end architecture.
FIGURE 2 .
FIGURE 2. Illustration of the linear and nonlinear inputs of NVAR.
FIGURE 4 .
FIGURE 4. (a) Illustration of the Mixer block for token and channel (b),(c), and (d) are the Mixer layers used in the study: (b) for the MLP-Mixer model, and (c) and (d) for the RC-Mixer and NVAR-Mixer models, respectively.
Fig. 4 (
b), (c), and (d) are the respective Mixer layers.Since we utilized the overlap techniques in the patch embedding, we also studied the non-overlapping version of the Mixertype NVAR.
TABLE 4 .
Results of the ablation studies.
TABLE 5 .
Results of image classification . | 7,264.6 | 2023-01-01T00:00:00.000 | [
"Computer Science"
] |
Privacy Preserving Federated RSRP Estimation for Future Mobile Networks
Leveraging location information for machine learning applications in mobile networks is challenging due to the distributed nature of the data and privacy concerns. Federated Learning (FL) helps to tackle these issues and is a big step towards enabling privacy-aware distributed model training; however still prone to sophisticated privacy attacks such as membership inference. In this paper, we implement an FL approach to estimate Reference Signal Received Power (RSRP) values using geographical location information of the user equipment. We propose a privacy-preserving mechanism using differential privacy to protect against privacy attacks and demonstrate the impacts and the privacy-utility trade-off via privacy accounting measures.
I. INTRODUCTION
With the advent of beyond 5G and 6G, next-generation wireless systems are expected to connect everything with enhanced coverage, capacity, energy efficiency, latency for different application-oriented use-cases. Such challenging requirements could be achieved via utilizing advance wireless techniques or intelligent data-driven methodologies.
Existing approaches like network densification, optimal resource allocation, and massive and distributed Multi-Input Multi-Output (MIMO) systems help reaching these requirements. However, for instance, network densification requires frequent carrier and cell measurements to enable interference mitigation and coordination schemes due to high interference resulting from densifying the network. Such intense measurements should be performed by the User Equipment (UE) on the primary, secondary, and any potential target cells on different frequency bands and spatially distributed network sites. Although UE measurement reports are important to guarantee service continuity, frequent Reference Signal Received Power (RSRP) measurements, e.g., in every 40 ms, produce large signaling overhead and increase power consumption.
The application of data-driven intelligent techniques on wireless systems is considered as a potential solution to the problem of high overhead, energy consumption, interference mitigation, resource allocation. Integrating the intelligence to monitor and predict the network status enables network automation and improves UE experience by learning from history and promoting proactive real-time network decisions [1]. Machine Learning (ML) algorithms can be utilized for RSRP prediction, which reduces the signaling overhead, enables proactive network actions, and increases computation efficiency. RSRP prediction with relevant features, e.g., UE contextual information, further enhances beam management and mobility robustness.
Despite the increasing popularity of ML, centralized processing of a big volume of data increases both computation complexity at network sites and privacy concerns for many UEs against data leakage. However, local datasets are valuable assets to increase learning accuracy. In order to benefit from a large dataset that captures independent and heterogeneous scenarios without transferring it, while avoiding centralized exhausting computation, distributed learning mechanisms drew considerable attention recently. Federated learning (FL) [2], [3], [4] is one of such distributed learning schemes addressing privacy concerns by performing coordinated learning on clients without revealing their local dataset to a central entity. FL provides a privacy-aware topology since the client's data is never shared with the server instead, only model updates are exchanged, which do not contain any personally identifiable information, and the size of exchanged updates is less than the raw data. Although these are considered significant privacy enhancements compared to the centralized training scenarios, where clients need to send raw data to the server, it is still possible to launch privacy attacks. Recent advances show that sophisticated privacy attacks such as membership inference, model inference, and model extraction attacks can still exploit the model updates [5]. Differential Privacy (DP), which is a privacy-enhancing technology, is used to prevent such attacks.
In this paper, the geographical location of UEs is utilized as a feature to predict RSRP. Since location sharing with network sites is not preferable due to privacy concerns, differentially private FL is employed to realize the privacypreserving prediction. We defined each capable UE as a client; thus, each UE can help the system by mapping its location to RSRP via updating model parameters. Targeting not only 5G networks but also beyond 5G and 6G networks, this study: • Utilizes UEs' location information in the context of FL to predict RSRP (obtained from a realistic testbed experiment) to enhance beam management and mobility robustness. • Brings a privacy-preserving approach with DP against possible privacy attacks on the above proposed framework. • Provides comprehensive experiment results including, but not limited to, privacy versus utility trade-off and performance metrics. The organization of this paper is described below. In Section 2, we present a background on FL, its privacy concerns, and DP. We explain the proposed solution for RSRP prediction and how it can be performed using differentially private FL in Section 3. Details about our implementation and evaluations are discussed in Section 4. We discuss state of the art in Section 5, and we conclude the study in Section 6.
A. Federated Learning
FL was proposed by [3] and is one kind of distributed learning utilized for the minimization of data transfer, joint optimization, and privacy purposes, and its performance depends on the variety of implementations. Depending on the distribution of samples and features among the clients, training can be done via horizontal or vertical splits over samples or features. Horizontal FL is applicable when the clients share the same feature space but differ in sample space, whereas vertical FL is appropriate when the clients have overlaps in the sample space. When clients have small overlaps in both sample and feature space, federated transfer learning is more suitable.
From an optimization point of view, Federated Stochastic Gradient Descent (FedSGD) and Federated Averaging (FedAvg) are the most used methods. In FedSGD, each client sends every Stochastic Gradient Descent (SGD) update to the server, while in FedAvg, each client performs some number of iterations (called epoch) over a local mini-batch then sends the updated model to the server. In our study, we follow horizontal FL case and FedAvg as it is more communication efficient. Hence, in each iteration, the clients (UEs) do training on their local batch, send the resulted model back to the server located in a centralized network site (e.g., hereafter we call the centralized network site as gNB, but this work is not limited to 5G, as it could be adopted in 6G network), via uplinks signals. It follows that the server aggregates models by averaging the corresponding weights, then shares the aggregated model with the clients (UEs) in the downlink.
Due to the nature of its operation, FL exhibits considerable advantages for the mobile network, described as follows: • Exchange learnings among clients (UEs): The individual learning of each UE is shared among each other when building the global model. This sharing of learning could be global sharing among all UEs (e.g., when no clustering is employed before aggregating), or it could be a cluster-based sharing of learning (e.g., when clustering for a specific type of service or UE, is deployed before aggregating [6]). • Privacy: Since training is performed locally, FL preserves UE information privacy. The original operation of FL does not require the UE to send its private information (e.g., UE location) to gNB, or reveal its identity. Instead, it learns from aggregated information produced by multiple UEs.
• Efficient network footprint by reducing the amount of signaling required at gNB by UEs, due to the need of sending on the gradient of the model's weights.
B. Privacy Concerns
FL is a big step towards enabling privacy-aware model training, as it aims joint model training by only sharing the necessary parameters and not client data. On the other hand, recent studies demonstrate sophisticated privacy attacks by exploiting the observed gradients or using the collected inference results. Privacy attacks to FL such as membership inference [7], attribute property inference [8], deep leakage [9] , and model extraction [10] may be posed by a malicious client or the server trying to infer sensitive information during training or inference phase. The adversarial goal of privacy attacks is to gain more information about the training data and the ML model parameters. The attacker may try to determine whether a specific client's data was included in the training dataset, called membership inference, or try to infer certain properties of other clients' training data, which is not explicitly shared, called property inference. In model extraction attacks, attacker uses a prediction interface as an oracle to obtain the structure of the model by inspecting the probabilities returned from each class. Deep leakage attacks attempt to infer both training input and labels, which is more dangerous since the raw training data can be extracted [9].
From the solution perspective, existing methodologies including DP, Homomorphic Encryption (HE), and Secure Multi-Party Computation (SMPC) are investigated in the literature. However, for large-scale FL scenarios and crossdevice settings, HE and SMPC may not be the best options as they introduce additional computation and communication overhead. DP satisfies the constraints at the cost of reducing accuracy which is controllable based on the privacy budget.
C. Differential Privacy
DP provides a mathematically provable guarantee of privacy protection based on the available privacy budget, which is the limit on the amount of difference that an individual's participation can generate. Hence, DP prevents distinguishing whether or not the individual's data is included in the training. Definition: A randomized function is ε-differentially private if for any subset of the output S in the range of , and for all data sets and differing in a single entry [11]: The given formulation is called (ε, δ)-differential privacy where δ is the relaxation parameter. ε is the control parameter for privacy level, denoting privacy budget. δ limits the probability that the privacy guarantee will not hold; in other words, the probability of information accidentally being leaked. The best practice is to set δ to be less than the inverse of the data size. In the context of FL, D1 and D2 datasets correspond to client -in our case UE-training datasets. Adjacent datasets are those where D2 can be formed from D1 by adding or removing all the training samples associated with a single client. This approach is called user-level privacy [12].
There are two different DP implementations in the FL setting, called Central DP and Local DP. The difference comes from the trust relation. In Central DP, noise is added in and controlled by the server which aggregates the updates, so clients trust the server. Local DP allows the client to control and add noise for cases the server is not trusted by the clients. Although Local DP allows the individual client to set different local privacy guarantees and removes the trusted server assumption, it reduces the model accuracy. In our case, we use Central DP where the noise is controlled by the gNB.
Privacy accounting is essential in DP as it is the control mechanism for the privacy versus accuracy trade-off. The mechanism to control and track privacy uses the moments accountant method [13]. It assures that the defined privacy budget given with the via (ε, δ) parameters stays within the allowed limit. In FL, privacy accounting for multiple iterations can be done using the composability feature of DP to compute and accumulate the privacy cost at each round of training.
A. System Model
Signal quality (either layer-1 Channel State Information-Reference Signal (CSI-RS) or layer-3 "RSRP") prediction is an important feature that can be obtained with the introduction of intelligent and proactive management of the network resources. RSRP measurements is obtained, in the legacy system, via measuring the reference signal in CSI-RS, then it is reported back to the network in a specific configuration. Instead of reporting legacy RSRP, reporting predicted RSRP to the network could be used to proactively 1) perform handover (HO) or allow UE to perform conditional handover, 2) switch beams or carrier for a UE to prevent service interruption. For instance, the UE must report predicted RSRP when the network needs to decide, whereas it could report its own predicted decision if the scenario is like that of conditional HO.
In order for the UE to predict such RSRP, it requires input and output labels to train the UE model (as illustrated better in a later section). The input of the learning model is the UE location. And not any other feature (e.g., time advance (TA)), because running feature importance proves that location is one of the most critical features, plus other features like TA would introduce signaling overhead. The training label is obtained from CSI-RS reference signal.
B. RSRP Prediction via FL
An ML model is created at the gNB and shared with the clients (UEs), in which each UE trains its individual model by mapping geographical location features to RSRP measurements of the serving carrier/cell. Let the data set throughout the network be defined as , , 1, … , , 1, … , where ∈ ℝ d represents the feature space and d is the number of features, denotes the output label per sample , is the UE index out of UEs, and i is the dataset sample. The training algorithm ∶ → , ∈ finds the estimated function, , , for the unknown function, , , that maps the UE location information to RSRP information. contains only geographical location information of the UEs, i.e., latitude and longitude, and consists of the maximum measured RSRP values out of all beams. Then the prediction problem is formulated to establish a relation between the geographical location and the highest RSRP values. The features contain sensitive information, and learning has been performed in a federated framework over the UEs. The global model minimizing the objective function given as, where global objective function is defined as average of local objective functions of each UE such that Herein, local objective functions are defined via global loss function and evaluated at local datasets ' , ( of k-th UE such that !" ) !; , ".
individual model by mapping its location data to RSRP measurements. After local training, each UE sends its trained model parameters to the gNB. gNB aggregates the parameters using FedAvg and sends back the updated model parameters to the UEs, which contains an implicit mapping of all UEs to RSRP, embedded by local training.
C. RSRP Prediction via Differentially Private FL
Differential privacy mechanism perturbs the averaged updates conducted at the gNB using a randomized Gaussian mechanism. The purpose of the randomization process is to hide each UE's contribution within the federated aggregation and thus within the learning procedure.
D. Threat Model
In our study, we consider UEs not being malicious during training activities. UEs obey the FL steps, do not try to inject malformed data to poison the model or alter the protocol. gNB, as the FL server, is considered as trusted data aggregator, i.e., it follows the FL steps and does not attempt to share parameters with anyone. Our framework protects the model against untrusted analysts who can send queries using an inference interface and try to infer sensitive information by collecting model inference results.
A. Simulation Environment
We realized our implementation in the Tensorflow environment. Keras, which is a high-level ML API used to define a Neural Network (NN) in the Tensorflow. Federated computations on decentralized data are performed with Tensorflow Federated (TFF) framework. The corresponding NN is created by using Keras comprised of an input, an output and 3 hidden layers including 10 neurons each. To measure the training loss, Mean Squared Error (MSE) is used.
We integrated DP using Tensorflow Privacy library which enables us to analyze the performance of our implementation by turning on/off DP. Tensorflow privacy library gives the opportunity to set different privacy levels by adjusting privacy hyper-parameters such as noise multiplier, gradient clip-scale and 4. In our setup, we used a fixed clip-scale for all updates. In performance evaluation subsection C, we provide the impact of these parameters. The experimental setup of our implementation is described in Table 1.
B. Dataset
In a real-life environment, we ran a measurement campaign with a realistic base-station and UEs, which was conducted in an urban environment in Stockholm, Sweden. Fig. 2 illustrates the longitude and latitude values, X(m) and Y(m), with respect to a reference coordinate, and the color map indicates the RSRP values in dB scale, obtained at 3.5 GHz carrier on the evaluation area. The collected RSRP measurements over multiple days and locations are gathered in a single gNB node. Dataset generated over a specific region or during a specific period, is assigned a client tag, i.e., k-th client, and is denoted by as in eq. (5), where i is the sample id, D is the number of sequential samples per region or period of and can vary for each UE.
C. Performance Evaluation
In this section, Mean Absolute Percentage Error (MAPE) and epsilon (ε) have been evaluated to show the impact of c, B, and z during the FL training.
In Fig. 3, it is demonstrated how different c values impact the training evaluation loss in terms of MAPE over FL rounds when noise multiplier, z=1 and batch size, B=100. Less gradient clipping, e.g., c={0.7, 1} results in faster convergence but a higher error with oscillations because not only the noise variance increases but also the gradients' sensitivity to the noise will not be the same in different clients. Nonetheless, if c decreases, the convergence becomes more stable. Therefore, the clip scale is chosen based on stability and convergence rate. Further, the privacy spent from the privacy budget increases during the training as the number of rounds increases, i.e., as MAPE decreases, ε, shown as privacy spent in Fig. 3, increases meaning that privacy decreases. Higher clip-scale values converge faster, i.e., require fewer rounds, thus results in better privacy (lower ε), but clip-scale should have an upper bound for sensitivity aspects. Given the setting in Table 1, for c=0.1, the processing time cost of running the experiment is around 25 minutes. Fig. 4 shows that B affects the clip-scale adjustment. When c is low (more clipping), bigger B can be used for faster convergence, thus less privacy will be spent from the privacy budget. When c is high, e.g., 0.7, bigger B impacts the norm of gradient vector, so the sensitivity to the noise, and MAPE eventually oscillates. Fig. 5 shows that MAPE increases with increasing noise multiplier. How much accuracy is lost by turning on the DP can be seen by comparing with No DP results given in black dotted line. This shows the trade-off between utility and privacy. To get more privacy, one needs to sacrifice accuracy. If noise multiplier is set as too high, then the training will not converge as in the settings for z≥10. Fig. 6 demonstrates that more privacy is spent as FL training continues with more rounds, and higher z values enhance privacy. However, one needs to consider the convergence while increasing the noise multiplier.
V. RELATED WORK ML technologies are used to predict different 5G metrics. For instance, the authors of [14] proposed a procedure to predict the strongest (highest RSRP) secondary serving cell. In their proposal, gNB does not require further signaling from UE but only uses existing 3GPP signals, i.e., timing advance and primary cell RSRP. Another work related to downloading auto-encoder ML model to enable UE to estimate radio measurements and reduce the signals transmitted over to the network [15]. This auto-encoder is already trained on the network side and downloaded to UEs. Also, the authors of [16] proposed a duo threshold-based classification to enhance reference signal received quality prediction. FL has been investigated intensively for 5G-beyond applications. A comprehensive review of the application of FL over 6G network is described in [17] and [18]. The authors of [19] proposed an FL scheme to predict the mmWave beamforming vector. In the proposed scheme, the users train the beamforming vector predictor network and share it with the network without sharing their data. Another interesting application of FL to 5G-beyond network is the usage of Gaussian process regression to track Channel State Information (CSI) by UEs [20]. Authors of [21] proposed digital and compressed analog distributed SGD to enable opportunistic scheduling of UEs based on channel conditions.
Preserving privacy and security of FL schemes is of main interest in the 5G and beyond networks. The authors of [22] designed a blockchain-based FL framework to achieve secure and reliable FL considered DP against inference attacks. In vehicular networks, the author of [23] introduced DP into the gradient descent training scheme. In a 5G social Internet-of-Things context, authors of [24] proposed a hybrid of data and content privacy-preserving scheme incorporating Bayesian DP and a new encryption method.
Our work differs from existing literature in multiple aspects. Compared to those works that addressed RSRP prediction, we proposed to use only UE location in a federated learning context. Compared to the existing works that utilized FL to predict CSI for 5G or 6G applications, we proposed the usages of differential privacy in addition to FL to protect the model against inference attacks. We also consider our work to be the first of its kind to use realistic dataset in FL context with privacy-preserving technique.
VI. CONCLUSION
This paper demonstrates a privacy-preserving federated approach for an RSRP prediction framework and focuses on two important aspects: (i) the geographical location of UEs as a feature in an FL framework to predict RSRP (ii) bringing privacy guarantee to FL framework against inference attacks.
The FL training is performed locally on the UEs, using the location information. The local dataset, e.g., UE's location and targeted RSRP measure, are acquired from a real-life environment with realistic base-station and UE over multiple days and different areas. The local updates are aggregated in the gNB without accessing any location information. Our evaluation results showed that our model could successfully predict RSRP values with a 19% loss in terms of MAPE. We enabled DP during the training phase to prevent privacy attacks on the resulting model and presented the impact of DP parameters by turning the DP on/off. RSRP estimation via differentially private FL not only enables location-aware communications and enhances the robustness of the beam management and mobility but also preserves the privacy of the UEs. | 5,121.2 | 2021-12-01T00:00:00.000 | [
"Computer Science"
] |
Tribology of Wire Arc Spray Coatings under the Influence of Regenerative Fuels
In order to further optimize the efficiency of today’s internal combustion engines, specific coatings are used on functional surfaces to reduce internal engine friction and wear. In the current research project, oxymethylene ether (OME) is discussed because it is CO2 neutral and has a strong soot-reducing effect as a fuel or fuel additive. In some operational regimes of the internal combustion engine a dilution of engine oil by fuel must be assumed. In this paper, the frictional contact between piston ring and cylinder raceway is modelled using a pin-on-disk tribometer and the friction and wear behavior between a diamond-like carbon coating (DLC) and a thermal spray coating is characterized. The wear of the spray layer could be continuously detected by radionuclide technology (RNT). With the aid of photoelectron spectroscopic measurements (XPS), the steel thermal spray coating was chemically analyzed before and after the tribometer tests and the oxidative influence of OME was investigated. In addition, confocal microscopy was used to assess the topographies of the specimens. The measurements showed that the addition of OME to the lubricant reduced the viscosity and load-bearing capacity of the lubricating film, which led to an increase in the coefficient of friction. While almost no wear on the pin could be detected at 10% OME, the first visible material removal occurs at an OME content of 20% and the layer delaminated at 30% OME. The evaluation of the RNT wear tests showed that both the tests with engine oil and with engine oil plus 20% OME achieved very low wear rates. No corrosion of the thermal spray coating could be detected by XPS. Only the proportion of engine oil additives in the friction track increased with increasing OME concentration.
Introduction
For more than a century, the combustion engine has been used as a drive unit without its basic structure having changed significantly.However, when considering its efficiency, it becomes clear that only a small proportion of the fuel's primary energy is converted into kinetic energy.The majority is converted into heat and is lost through friction [1].In addition, the current climate targets for greenhouse gas emissions require further optimization of the system.One approach to achieve this is the use of biofuels from renewable energies [2].A major advantage is the increased oxidative effect of biofuels, which results in a more complete combustion, thus reducing emissions [3].However, this raises the question of the compatibility of the existing engine components with an increased oxidative effect.In addition to the fuel-carrying system, the combustion chamber in particular can be considered for a possible corrosive effect due to the increased temperature level and fuel contamination of the lubricating oil.The current state of the art consists of a thermal spray coating that covers the cylinder wall and protects it from thermal and tribological loads [4,5] and a diamond-like carbon coating (DLC) coating on the piston rings that seal the combustion chamber [6].This is where this work starts and aims to investigate the tribological effects of a dilution of the lubricant by a regenerative fuel and to show a possible oxidative effect on the system.For this purpose, experiments were carried out on a pin-on-disk tribometer on a model system using oxymethylene ether as a regenerative fuel.
Materials and Methods
Before investigating the friction and wear behavior under oxymethylene ether (OME) dilution, the influence of Hertzian pressure on the friction coefficient had to be examined more closely.By polishing the flat sides of the pin before coating with DLC, different convexities of the surface were produced, which led to different Hertzian pressures and, associated with this, to different frictional power densities.The main focus of the test design was the analysis of the oxidative effect of the oxymethylene ether and the corrosion tendency of the steel spray coating.Tribometer tests with different OME concentrations in the engine oil were carried out for this purpose.The oil-fuel mixtures contained 10, 20%, 30%, and 50% by volume OME.To quantify the wear in the model system DLC-thermal spray coating, wear measurements were carried out using radionuclide technology (RNT).
Disks and Pins
The pin material for tribological experiments was chosen to represent a DLC coated piston ring while the disk material has been selected to represent a modern cylinder liner surface of a combustion engine.The plasma thermal wire arc (PTWA) spray coating (Terolab Surface GmbH, Langenfeld, Germany) applied to grey cast iron (EN-GJL-250) was deposited from a low-carbon steel (EN 10016-2) and had a thickness of up to 500 µm after deposition.The coating showed typical microscopically large inhomogeneities, characterized by pores, oxides and particles that had already solidified in flight.Figure 1 shows a cross section of the coating and the DLC layer.particular can be considered for a possible corrosive effect due to the increased temperature level and fuel contamination of the lubricating oil.The current state of the art consists of a thermal spray coating that covers the cylinder wall and protects it from thermal and tribological loads [4,5] and a diamond-like carbon coating (DLC) coating on the piston rings that seal the combustion chamber [6].This is where this work starts and aims to investigate the tribological effects of a dilution of the lubricant by a regenerative fuel and to show a possible oxidative effect on the system.For this purpose, experiments were carried out on a pin-on-disk tribometer on a model system using oxymethylene ether as a regenerative fuel.
Materials and Methods
Before investigating the friction and wear behavior under oxymethylene ether (OME) dilution, the influence of Hertzian pressure on the friction coefficient had to be examined more closely.By polishing the flat sides of the pin before coating with DLC, different convexities of the surface were produced, which led to different Hertzian pressures and, associated with this, to different frictional power densities.The main focus of the test design was the analysis of the oxidative effect of the oxymethylene ether and the corrosion tendency of the steel spray coating.Tribometer tests with different OME concentrations in the engine oil were carried out for this purpose.The oil-fuel mixtures contained 10, 20%, 30%, and 50% by volume OME.To quantify the wear in the model system DLC-thermal spray coating, wear measurements were carried out using radionuclide technology (RNT).
Disks and Pins
The pin material for tribological experiments was chosen to represent a DLC coated piston ring while the disk material has been selected to represent a modern cylinder liner surface of a combustion engine.The plasma thermal wire arc (PTWA) spray coating (Terolab Surface GmbH, Langenfeld, Germany) applied to grey cast iron (EN-GJL-250) was deposited from a low-carbon steel (EN 10016-2) and had a thickness of up to 500 μm after deposition.The coating showed typical microscopically large inhomogeneities, characterized by pores, oxides and particles that had already solidified in flight.Figure 1 shows a cross section of the coating and the DLC layer.The average Vickers hardness of the sprayed layer is 400HV0.3and due to the 3% to 5% porosity of the PTWA coating, the density is approx.7.42 g/cm 3 instead of the usual 7.86 g/cm 3 of the solid material.
For finishing, the coated disks were conventionally ground flat on both sides by Kiffe Engineering GmbH, Villingen-Schwenningen, Germany (cBN as hard phase, honing oil as cooling lubricant), which should come close to the surface quality of the cylinder honing (Ra = 0.4 μm; Rz = 4.4 μm).The average Vickers hardness of the sprayed layer is 400HV0.3and due to the 3% to 5% porosity of the PTWA coating, the density is approx.7.42 g/cm 3 instead of the usual 7.86 g/cm 3 of the solid material.
For finishing, the coated disks were conventionally ground flat on both sides by Kiffe Engineering GmbH, Villingen-Schwenningen, Germany (cBN as hard phase, honing oil as cooling lubricant), which should come close to the surface quality of the cylinder honing (R a = 0.4 µm; R z = 4.4 µm).
The DLC-coated pins were a hydrogen-containing amorphous carbon (a-C:H) with a hydrogen content of approx.25% and a 25-30% sp 2 and 45-50% sp 3 hybridization, which was deposited at the Fraunhofer IWM with the help of plasma-assisted CVD.The layer (see Figure 1b) consists of a bonding Lubricants 2018, 6, 60 3 of 9 layer (Si-containing DLC, starting material tetramethylsilane), a gradient layer (exchange of the starting materials tetramethylsilane for toluene) and a functional layer (made of toluene).The process gases were mixed with 50% argon at a process pressure of 1.4 Pa.The acceleration voltage of the plasma (bias voltage) was approx.500 V.The layer thickness of the DLC coating is up to 3.5 µm with an E-modulus of 158 GPa and a hardness of 1800 HV 0.003.
Lubricant and OME
Fuchs TITAN GT1 LONGLIFE III 5W-30 at a constant temperature of 80 • C was used as engine oil for the tests.The chemical formula of OME is H 3 CO-(CH 2 -O) n -CH 3 with the repeating functional group (CH 2 -O) n , called oxymethyl.The number n of oxymethyl units determines the molecular size and properties of the biofuel and is a mixture of 3 to 5 for the present study.The dynamic viscosity at 80 • C for OME is only 0.7 mPas compared to 15.4 mPas of the engine oil.The OME fuel for the experiments was provided by the Institute of Catalysis Research and Technology (IFKT) of the Karlsruhe Institute of Technology.
Tribometry
The friction experiments were carried out on a pin-on-disk tribometer (POD) SST from TETRA Gesellschaft für Sensorik, Robotik und Automation mbH.In order to avoid external temperature influences, the laboratory was air-conditioned and thus kept at a constant temperature.The temperature in the oil circuit was controlled by a circulation thermostat.Part of the circuit was also the RNT wear measuring system.The RNT system was operated using the so-called concentration method which measures the wear particle concentration in the lubricant [7].In order to correct for the half-life an activated reference sample was continuously measured in a second gamma detector.
In order to analyze the frictional behavior over time and to make statements about a possible running-in, a continuous run of three days with a constant load of 150 N (75 MPa) was carried out.
Analytical Techniques
In this work a PHI VersaProbe II from Ulvac-PHI Inc. (Hagisono, Japan) was used for chemical surface analysis of the disks.According to the manufacturer, the resolution of the device is 0.1 eV with a measuring spot of 200 µm.The sputtering rate for generating depth profiles was 2.5 nm/min with argon as process gas (1 kV, 500 nA).
A confocal microscope from Sensofar Tech S.L. (Barcelona, Spain) called PLµ 2300 was used to examine the topography of the pin surface and the disks.Due to a special sensor head, images can also be taken by white light interferometry by switching the measuring device.The measurement data were interpreted using the SensoMap Standard 6.2 evaluation software (Barcelona, Spain).
Tribological Results
To study the influence of OME on friction, tribometer tests were carried out at different OME concentrations (Figure 2).
The friction coefficients that are plotted in Figure 2 as a function of the test duration correlate with the OME dilution at constant temperature.Friction increases with increasing OME concentration at the same temperature.In addition, a time dependence can be seen in a decrease of the coefficient of friction at high temperature and an increase at low temperature.
The continuous increase in the coefficient of friction of 10% OME at 40 • C can be explained by the evaporation of the biofuel.Due to the high viscosity of the lubricant mixture (29.2 mPas), the operating point at the beginning of the test is in the range of hydrodynamics.After evaporation of the OME, viscosity increases even further and approaches the value of pure engine oil at 40 • C (58.5 mPas).
With increasing OME concentration in the lubricating oil, the viscosity of the oil-OME mixture decreases further, so that the solid contact in the mixed friction regime continues to increase.Consequently, the coefficient of friction continues to increase.The friction coefficients that are plotted in Figure 2 as a function of the test duration correlate with the OME dilution at constant temperature.Friction increases with increasing OME concentration at the same temperature.In addition, a time dependence can be seen in a decrease of the coefficient of friction at high temperature and an increase at low temperature.
The continuous increase in the coefficient of friction of 10% OME at 40 °C can be explained by the evaporation of the biofuel.Due to the high viscosity of the lubricant mixture (29.2 mPas), the operating point at the beginning of the test is in the range of hydrodynamics.After evaporation of the OME, viscosity increases even further and approaches the value of pure engine oil at 40 °C (58.5 mPas).
With increasing OME concentration in the lubricating oil, the viscosity of the oil-OME mixture decreases further, so that the solid contact in the mixed friction regime continues to increase.Consequently, the coefficient of friction continues to increase.
Within the first 10 h of the OME tests at 80 °C, a rapid decrease in the coefficient of friction can be observed, which can also be explained by the evaporation of the OME and the higher load-bearing capacity of the lubricant.The friction coefficients in the negative are artifacts and were caused by zero point shifts of the force sensor.The absolute value of the shift is low but can affect the coefficient of friction in the hydrodynamic regime.
The described effect of viscosity reduction and the associated solid contact can also be determined on the surfaces of the DLC pins after tribological testing (Figure 3).
After the experiment, the DLC layers of the pins show wear marks on their surfaces.The amount of marks increases with increasing OME dilution.The pin with 30% OME in the lubricant even has a deep crater reaching down to the substrate.Although distinct groove structures are present on the pin surfaces, the underlying wear mechanism is most likely carbon diffusion, which is typically found in DLC friction contacts [8].Abrasive wear is rather unusual with DLC systems [9].Furthermore, it is noticeable that at 30% OME dilution the DLC layer is ablated down to the substrate.The sharp edges of the wear mark indicate delamination as another wear mechanism.
Figure 4 compares the friction coefficients of 10% OME at 40 and 80 °C with those of pure engine oil.Within the first 10 h of the OME tests at 80 • C, a rapid decrease in the coefficient of friction can be observed, which can also be explained by the evaporation of the OME and the higher load-bearing capacity of the lubricant.The friction coefficients in the negative are artifacts and were caused by zero point shifts of the force sensor.The absolute value of the shift is low but can affect the coefficient of friction in the hydrodynamic regime.
The described effect of viscosity reduction and the associated solid contact can also be determined on the surfaces of the DLC pins after tribological testing (Figure 3).After the experiment, the DLC layers of the pins show wear marks on their surfaces.The amount of marks increases with increasing OME dilution.The pin with 30% OME in the lubricant even has a deep crater reaching down to the substrate.Although distinct groove structures are present on the pin surfaces, the underlying wear mechanism is most likely carbon diffusion, which is typically found in DLC friction contacts [8].Abrasive wear is rather unusual with DLC systems [9].Furthermore, it is noticeable that at 30% OME dilution the DLC layer is ablated down to the substrate.The sharp edges of the wear mark indicate delamination as another wear mechanism.
Figure 4 compares the friction coefficients of 10% OME at 40 and 80 • C with those of pure engine oil.At 80 °C, the coefficient of friction is initially higher due to the reduced viscosity caused by the dilution with OME and the reduced viscosity of the base oil at that temperature.During the experiment the friction drops strongly, which we attribute to the evaporation of OME.Towards the end of the experiment the COF of undiluted and diluted oils are the same within the scatter of the data.For the diluted oil with 10% OME that is tested at 40 °C of the viscosity of the base oil is expected to be higher than at 80 °C, but due to the OME content, which does not evaporate, the friction is slightly higher than the pure base oil.
The addition of OME to the lubricating oil generally reduces viscosity.Friction partners can thus be separated less effectively from each other and the formation of a load-bearing lubricating film can be prevented.From a content of 30% OME (80 °C), no hydrodynamic conditions are achieved under the predefined test conditions.Operating points with very low coefficients of At 80 • C, the coefficient of friction is initially higher due to the reduced viscosity caused by the dilution with OME and the reduced viscosity of the base oil at that temperature.During the experiment the friction drops strongly, which we attribute to the evaporation of OME.Towards the end of the experiment the COF of undiluted and diluted oils are the same within the scatter of the data.For the diluted oil with 10% OME that is tested at 40 • C of the viscosity of the base oil is expected to be higher than at 80 • C, but due to the OME content, which does not evaporate, the friction is slightly higher than the pure base oil.
The addition of OME to the lubricating oil generally reduces viscosity.Friction partners can thus be separated less effectively from each other and the formation of a load-bearing lubricating film can be prevented.From a content of 30% OME (80 • C), no hydrodynamic conditions are achieved under the predefined test conditions.Operating points with very low coefficients of friction (e.g., at 2.5 m/s, 150 N) are already shifted at 10% OME into the mixed friction regime with a growing proportion of solid state friction.This trend is further intensified with increasing OME concentration, as can be seen in Figure 5, where all friction data is plotted over the Hersey-number H = ηv/p, where η denotes the viscosity, v is the entrainment speed, and p the normal load.The Hersey number is related the oil film thickness and indicates the lubrication regime.
friction (e.g., at 2.5 m/s, 150 N) are already shifted at 10% OME into the mixed friction regime with a growing proportion of solid state friction.This trend is further intensified with increasing OME concentration, as can be seen in Figure 5, where all friction data is plotted over the Hersey-number H = ηv/p, where η denotes the viscosity, v is the entrainment speed, and p the normal load.The Hersey number is related the oil film thickness and indicates the lubrication regime.
Figure 6 shows the RNT tests with the correlated friction curves.Figure 6 shows the RNT tests with the correlated friction curves.
Lubricants 2018, 6, x FOR PEER REVIEW 6 of 9 friction (e.g., at 2.5 m/s, 150 N) are already shifted at 10% OME into the mixed friction regime with a growing proportion of solid state friction.This trend is further intensified with increasing OME concentration, as can be seen in Figure 5, where all friction data is plotted over the Hersey-number H = ηv/p, where η denotes the viscosity, v is the entrainment speed, and p the normal load.The Hersey number is related the oil film thickness and indicates the lubrication regime.
Figure 6 shows the RNT tests with the correlated friction curves.The resolution limit of the radionuclide technique depends on the labelling and is approx.0.1 µg/h (=0.05 nm/h for this sample).
In the tests with engine oil and engine oil plus 20% OME, the wear rates were below 1 nm/h.Due to the lower load-bearing capacity of the lubricating film of the oil thinned with 20% OME, higher contact pressures initially prevailed in the tribological system, which explain the higher beginning of the coefficient of friction.
Elemental Composition of the Worn Samples
The admixture of the biofuel could also have favored the formation of ZnDTP (zinc-dialkyl-dithiophosphate) layers, as higher friction densities were converted with increasing OME concentration.In Ref. [10] it was found that an oil dilution with ethanol in the tribological system DLC-cast iron had positively influenced the layer formation of ZnDTP and thus a lower wear could be measured.Oxymethylene ether could have had a similar effect in this context, which could explain the lower wear rates within the first 5 h.The initially strong drop in the coefficient of friction at 20% OME can mainly be associated with the evaporation of the OME and the resulting increase in viscosity.The drop in the friction value below that of the test with pure engine oil could be due to the increased formation of a friction-reducing third body with the participation of ZnDTP.In order to clarify this situation, photoelectron spectroscopic measurements (XPS) was used to analyze the sliding tracks on the disks, which were tested with pure engine oil and with an OME content (Figure 7).
Lubricants 2018, 6, x FOR PEER REVIEW 7 of 9 The resolution limit of the radionuclide technique depends on the labelling and is approx.0.1 μg/h (=0.05 nm/h for this sample).
In the tests with engine oil and engine oil plus 20% OME, the wear rates were below 1 nm/h.Due to the lower load-bearing capacity of the lubricating film of the oil thinned with 20% OME, higher contact pressures initially prevailed in the tribological system, which explain the higher beginning of the coefficient of friction.
Elemental Composition of the Worn Samples
The admixture of the biofuel could also have favored the formation of ZnDTP (zinc-dialkyl-dithiophosphate) layers, as higher friction densities were converted with increasing OME concentration.In Ref. [10] it was found that an oil dilution with ethanol in the tribological system DLC-cast iron had positively influenced the layer formation of ZnDTP and thus a lower wear could be measured.Oxymethylene ether could have had a similar effect in this context, which could explain the lower wear rates within the first 5 h.The initially strong drop in the coefficient of friction at 20% OME can mainly be associated with the evaporation of the OME and the resulting increase in viscosity.The drop in the friction value below that of the test with pure engine oil could be due to the increased formation of a friction-reducing third body with the participation of ZnDTP.In order to clarify this situation, photoelectron spectroscopic measurements (XPS) was used to analyze the sliding tracks on the disks, which were tested with pure engine oil and with an OME content (Figure 7).In all sliding tracks, a comparable depth profile (concentration and depth) of the oxygen concentration was detected independently of the OME content in the lubricant.Roughness on the length scale of the measurement area (several nm at 200 μm spot size) can in general influence the course of the depth signal but since all measured depth profiles show the same depth-concentration trend, does not affect the comparison of different OME concentrations.Therefore, a corrosive effect of oxymethylenether can be ruled out during the friction process.This is confirmed when the reference measurement (black) and the measurement at 30% OME (red) are compared in Figure 6.Moreover, with an oxidative effect of OME, the values of 10% and 20% OME within the first 15 nm sputter depth should not be below the reference measurement.The oxygen content, which does not In all sliding tracks, a comparable depth profile (concentration and depth) of the oxygen concentration was detected independently of the OME content in the lubricant.Roughness on the length scale of the measurement area (several nm at 200 µm spot size) can in general influence the course of the depth signal but since all measured depth profiles show the same depth-concentration trend, does not affect the comparison of different OME concentrations.Therefore, a corrosive effect of oxymethylenether can be ruled out during the friction process.This is confirmed when the reference measurement (black) and the measurement at 30% OME (red) are compared in Figure 6.Moreover, with an oxidative effect of OME, the values of 10% and 20% OME within the first 15 nm sputter depth should not be below the reference measurement.The oxygen content, which does not disappear at a depth of 50 nm, must be caused by oxidation of the spray particles during thermal spraying.
Furthermore, a correlation between the oxygen concentration and the concentration of the additives at a depth of more than 30 nm is found.As the OME concentration increases, oxides of the additives, namely Ca, Zn and P, are increasingly detected, which also led to an increase in the oxygen concentration at greater sputter depths.
By adding OME to the lubricating oil, the viscosity decreases so that the friction power density in the tribological contact increases.The friction track could have been exposed to locally higher temperatures, which could have caused ZnDTP to adsorb to the steel surface.With increasing OME content, the frictional stress increases due to a higher solid content and the formation of tribological layers becomes more probable.This assumption was confirmed by the increase in additive oxides, which had to be measured as the biofuel content increased.
Summary and Conclusions
In order to further optimize the efficiency of today's internal combustion engines, coatings were applied to the piston ring and cylinder liner to reduce internal engine friction and wear.
In tribological tests, oxymethylene ether was used as a fuel additive, which can also come into contact with engine oil in internal combustion engines by mixing.For this reason, the oxidative and corrosive influence of OME on friction and wear of the PTWA coating was investigated using the model system with radionuclide technology.With the help of XPS measurements, the steel spray coating was chemically analyzed after the friction tests and checked for corrosion.In addition, confocal microscope images were taken to assess the surface topographies.
The evaluation of the RNT investigations with and without OME showed comparable wear rates in the one-and two-digit nanometer per hour regime as required for modern combustion engines.However, the addition of OME reduces the viscosity of the oil so that the friction partners are more difficult to separate from each other, which led to an increase in the coefficient of friction.No corrosion of the sprayed steel layer could be detected by surface chemical analyses, so that the use of OME as fuel makes sense if oil-side measures for viscosity stabilization are taken.Thus, the pairing of a-C:H tested here in contact with an iron spray layer can be seen as a promising tribological system for engine operation with OME and should be examined more closely in engine tests.
9 Figure 2 .
Figure 2. Coefficient of friction at different OME concentrations.The moving average value is displayed and the corresponding raw data is shown in the background in the same color.
Figure 2 .
Figure 2. Coefficient of friction at different OME concentrations.The moving average value is displayed and the corresponding raw data is shown in the background in the same color.
Figure 3 .
Figure 3. Confocal microscopy images of the worn DLC pins after the tribological testing.
Figure 3 .
Confocal microscopy images of the worn DLC pins after the tribological testing.
Figure 4 .
Figure 4. Coefficient of friction of engine oil tested at 80 °C (red solid line) and engine oil plus 10% OME at 40 °C (black solid line) and 80 °C (blue solid line).Raw data is shown in the background in the same color.
Figure 4 .
Figure 4. Coefficient of friction of engine oil tested at 80 • C (red solid line) and engine oil plus 10% OME at 40 • C (black solid line) and 80 • C (blue solid line).Raw data is shown in the background in the same color.
Figure 5 .
Figure 5. Stribeck curve of the lubricated system DLC-spray coating and with OME dilution.The coefficients of friction (COF) were determined by varying the speed between 0.1 and 2.5 m/s and the normal force in the range between 60 N and 240 N. The COF is plotted against the Hersey parameter ηv/p, where η is the viscosity of the blend, v the entrainment speed and p the normal load.
Figure 6 .
Figure 6.Friction and wear of experiments with engine oil (light colors) and engine oil plus 20% OME (dark colors).
Figure 5 .
Figure 5. Stribeck curve of the lubricated system DLC-spray coating and with OME dilution.The coefficients of friction (COF) were determined by varying the speed between 0.1 and 2.5 m/s and the normal force in the range between 60 N and 240 N. The COF is plotted against the Hersey parameter ηv/p, where η is the viscosity of the blend, v the entrainment speed and p the normal load.
Figure 5 .
Figure 5. Stribeck curve of the lubricated system DLC-spray coating and with OME dilution.The coefficients of friction (COF) were determined by varying the speed between 0.1 and 2.5 m/s and the normal force in the range between 60 N and 240 N. The COF is plotted against the Hersey parameter ηv/p, where η is the viscosity of the blend, v the entrainment speed and p the normal load.
Figure 6 .
Figure 6.Friction and wear of experiments with engine oil (light colors) and engine oil plus 20% OME (dark colors).
Figure 6 .
Figure 6.Friction and wear of experiments with engine oil (light colors) and engine oil plus 20% OME (dark colors).
Figure 7 .
Figure 7. Photoelectron spectroscopic measurements (XPS) measurements in the friction tracks.Oxygen depth profiles for different OME concentrations are plotted as solid lines.The summed-up signal of typical anti-wear additive elements (Ca, Zn and P) as dashed lines and the iron profile of the base materials as squares.
Figure 7 .
Figure 7. Photoelectron spectroscopic measurements (XPS) measurements in the friction tracks.Oxygen depth profiles for different OME concentrations are plotted as solid lines.The summed-up signal of typical anti-wear additive elements (Ca, Zn and P) as dashed lines and the iron profile of the base materials as squares. | 6,955.2 | 2018-07-09T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Heat-Assisted Multiferroic Solid-State Memory
A heat-assisted multiferroic solid-state memory design is proposed and analysed, based on a PbNbZrSnTiO3 antiferroelectric layer and Ni81Fe19 magnetic free layer. Information is stored as magnetisation direction in the free layer of a magnetic tunnel junction element. The bit writing process is contactless and relies on triggering thermally activated magnetisation switching of the free layer towards a strain-induced anisotropy easy axis. A stress is generated using the antiferroelectric layer by voltage-induced antiferroelectric to ferroelectric phase change, and this is transmitted to the magnetic free layer by strain-mediated coupling. The thermally activated strain-induced magnetisation switching is analysed here using a three-dimensional, temperature-dependent magnetisation dynamics model, based on simultaneous evaluation of the stochastic Landau-Lifshitz-Bloch equation and heat flow equation, together with stochastic thermal fields and magnetoelastic contributions. The magnetisation switching probability is calculated as a function of stress magnitude and maximum heat pulse temperature. An operating region is identified, where magnetisation switching always occurs, with stress values ranging from 80 to 180 MPa, and maximum temperatures normalised to the Curie temperature ranging from 0.65 to 0.99.
Introduction
Non-volatile memories for primary storage are potential candidates for a universal memory, promising both long-term storage and reliability, as well as speeds comparable to volatile memory such as dynamic random access memory (RAM). Currently, the most common types of non-volatile RAM include flash memory and ferroelectric RAM [1]. Whilst these are commercially available, a number of problems prevent their use as a universal memory. Flash memory is relatively slow and unreliable due to limited number of write cycles, whilst ferroelectric RAM suffers from low bit densities. Other approaches are based on the use of magnetic materials. The most widely researched magnetic RAM is the spin transfer torque magnetic RAM (STT-MRAM) [2][3][4], based on switching the magnetisation direction of a free magnetic layer in a magnetic tunnel junction (MTJ) using spin-polarised currents. Whilst this is also commercially available, offering lower power consumption, faster speeds and comparable bit densities to dynamic RAM, the high manufacturing cost required to achieve large bit densities currently prevents it from being widely adopted. This stems in part from the complex multi-layered tunnel junctions in STT-MRAM. Other approaches under research include heat assisted MRAM [5], three-terminal domain wall MRAM [6] and racetrack memory [7,8]. The latter promises greatly increased areal bit densities due to a three-dimensional design allowing multiple bits to be stored per chip area.
Here, a heat-assisted multiferroic memory (HAMM) device is proposed and analysed, based on a magnetoelectrical multi-layered design. Magnetisation switching at room temperature through strain-mediated coupling in multi-layered magnetoelectrical structures has been demonstrated in Materials 2017, 10, 991; doi:10.3390/ma10090991 www.mdpi.com/journal/materials previous studies [9][10][11][12][13][14]. Other electric-field control methods of switching magnetisation in multiferroic structures have also been demonstrated, including electric-field control of spin polarisation [15,16], antiferromagnetic order [17] and interfacial perpendicular anisotropy in MTJs [18]. Combining both electric field control and spin polarised currents to switch the magnetisation in an MTJ has also been proposed [19]. In the HAMM array design introduced here, bits are stored in MTJ elements as with MRAM; however, the writing process uses a low power contactless method, based on triggering thermally activated magnetisation switching towards a strain-induced anisotropy easy axis. This avoids the difficulties encountered with STT-MRAM due to the high tunnel current densities required to induce magnetisation switching, allowing for the simplest possible MTJ stacks to be used.
Heat-Assisted Multiferroic Memory
The HAMM array is shown in Figure 1. The MTJ element is square shaped with bits "0" and "1" encoded as different magnetisation orientations along the two diagonals. The writing process does not use direct electrical contacts to the individual elements, instead relying on a limited number of voltage pads placed on the antiferroelectric layer as shown in Figure 1. These voltage pads take an industry standard 5 V input and through the antiferroelectric layer a directional in-plane stress is generated over a relatively large area. A similar effect is produced by a ferroelectric material, but ferroelectrics display non-zero remanent polarization and strain, while anti-ferroelectrics have zero polarization and zero strain in a relaxed state [20]. This condition is essential especially when the functionality of the memory cell is based on the strain mediated coupling effect, so the possibility of self-erasure or strain-induced reversal in the relaxed state is eliminated. The voltage is applied between the top and bottom electrodes, as shown in Figure 1, and the antiferroelectric layer thickness is chosen such that the resulting electric field strength is sufficient to induce an antiferroelectric to ferroelectric phase transition [21]. We have analysed a suitable antiferroelectric sample, PbNbZrSnTiO 3 , with the polarisation and strain loops shown in Figure 2. Here, the antiferroelectric to ferroelectric phase transition occurs above 30 kV/cm, thus for a fixed potential of 5 V a layer thickness of~1.5 µm is required. The phase transition occurs through domain nucleation processes within each ferroelectric sublattice as analysed in [21]. Note that, for the required thickness, pin-holes could become a problem, which need to be avoided by careful growth of the antiferroelectric layer, possibly using an epitaxial growth method as in [22], where PbZrTiO 3 films with thickness values down to 50 nm were used.
As shown in a previous study, the electric contact geometry of Figure 1 generates an in-plane strain between the top electrode pair [23]. Two in-plane stress directions are defined using the two sets of top electrodes shown in Figure 1. Through strain-mediated coupling, the stress is transmitted to a large number of MTJ elements, but crucially it is not strong enough to change the magnetisation state of the memory elements on its own. The bits are individually addressed by combining the stress input with a heat pulse delivered using a scanned pulsed laser beam. As the temperature of the magnetic layer increases, the equilibrium magnetisation length decreases, tending towards zero at the Curie temperature T C . This reduces the energy barrier that must be overcome in order to switch the magnetisation configuration. In the first laser scan pass, voltage V 0 is activated, generating an in-plane horizontal stress. In order to write bits "0" over the given array block, the scanned laser beam is turned on only over the memory elements where bits "0" must be stored. In the returning laser scan pass voltage V 1 is activated, now allowing all bits "1" to be written. The reading process can be done using electrical contacts to read the resistance state of the MTJ elements. Alternatively, the reading process can also be contactless, using either the optical reading method demonstrated previously [24] or by using a tunneling magneto-resistance [25] read-head element-in this case, a single magnetic layer can be used instead of the MTJ multi-layer. Figure 1. HAMM array. Information is stored in a patterned array of MTJ elements. Information is written using a low-power on-chip laser source and a minimal number of electrical contacts to the antiferroelectric layer, by heat-assisted stress-induced magnetisation switching. Two laser scan passes are used to write a block of information, in the first pass a voltage on the V0 contacts generates stress to write bits "0", whilst, in the second pass, bits "1" are written using the V1 contacts. Heat pulses are delivered to the elements by the laser source as required during the scan passes. Information is stored in a patterned array of MTJ elements. Information is written using a low-power on-chip laser source and a minimal number of electrical contacts to the antiferroelectric layer, by heat-assisted stress-induced magnetisation switching. Two laser scan passes are used to write a block of information, in the first pass a voltage on the V 0 contacts generates stress to write bits "0", whilst, in the second pass, bits "1" are written using the V 1 contacts. Heat pulses are delivered to the elements by the laser source as required during the scan passes. Figure 1. HAMM array. Information is stored in a patterned array of MTJ elements. Information is written using a low-power on-chip laser source and a minimal number of electrical contacts to the antiferroelectric layer, by heat-assisted stress-induced magnetisation switching. Two laser scan passes are used to write a block of information, in the first pass a voltage on the V0 contacts generates stress to write bits "0", whilst, in the second pass, bits "1" are written using the V1 contacts. Heat pulses are delivered to the elements by the laser source as required during the scan passes. Heat-assisted stress-induced switching. (a) Heating and cooling of the MTJ free layer during and after a laser pulse. The inset shows the temperature profile in the free layer; (b,c) heat-assisted magnetisation switching starting from states "1" and "0", respectively, for the two different stress directions, showing the magnetisation along the horizontal direction normalised to its zerotemperature saturation value as a function of time during the heating/cooling cycle.
The advantages over other non-volatile memories, in particular STT-MRAM, include low power required for writing data, minimal number of electrical contacts and simplicity of design. There are clearly a number of engineering challenges as indicated below, although the focus here is on understanding the physical processes and their feasibility for the proposed device. Including the onchip laser writer is an engineering challenge, although significant progress has been made in the related heat-assisted magnetic recording (HAMR) technology [26]. An alternative all-optical switching of MTJs using infrared laser pulses has also been proposed, relying on ferrimagnetic Gd(Fe,Co) as the free layer [27]. In order to speed-up the writing process, it is desirable for the laser array writer to have multiple and independently controllable output beams. These must also be The advantages over other non-volatile memories, in particular STT-MRAM, include low power required for writing data, minimal number of electrical contacts and simplicity of design. There are clearly a number of engineering challenges as indicated below, although the focus here is on understanding the physical processes and their feasibility for the proposed device. Including the on-chip laser writer is an engineering challenge, although significant progress has been made in the related heat-assisted magnetic recording (HAMR) technology [26]. An alternative all-optical switching of MTJs using infrared laser pulses has also been proposed, relying on ferrimagnetic Gd(Fe,Co) as the free layer [27]. In order to speed-up the writing process, it is desirable for the laser array writer to have multiple and independently controllable output beams. These must also be focused on the array surface and have a scanning capability as indicated in Figure 1. These requirements could be satisfied using a microelectromechanical systems-based (MEMS) design, allowing tapping and control of multiple output laser beams from a solid-state laser, as well as scanning using built-in deformable mirrors. In order to increase the data density, the magnetic elements can be reduced in size. The limitation here is set by the focused laser beam diameter, which is required to address individual elements, and this is typically around several hundred nanometres. This can be improved by careful device engineering. Since the focused laser beam has a Gaussian profile, the laser fluence is not uniform, but instead reaches a maximum at the centre. This should allow magnetisation switching of a single central MTJ element even though the laser beam diameter is larger and thus covers multiple MTJ elements-in this design, the temperature reached at the outer MTJ elements is not sufficient to switch the magnetisation. Another possibility is to use a near-field laser configuration, allowing addressing of much smaller MTJ elements.
Temperature-Dependent Magnetisation Switching Modelling
In order to investigate the operation of a HAMM element, the magnetisation switching processes are investigated using a three-dimensional coupled micromagnetics model based on the stochastic Landau-Lifshitz-Bloch (sLLB) equation and heat flow solver, as described previously [28]. The heat flow equation is given below, where C is the specific heat capacity, ρ is the mass density, and K is the thermal conductivity. The magnetic free layer is a square of side 320 nm and 5 nm thickness with the material set as magnetostrictive Ni-rich permalloy (Ni 81 Fe 19 ) and for simplicity only the free layer is simulated, placed directly on the substrate. Parameters for magnetic and thermal properties are given in [27]: Here, Q is the source term (W/m 3 ), which was set to a constant value of 1.6 × 10 18 W/m 3 . For the dimensions given and a laser pulse duration of 3 ns, this can be achieved using a laser fluence of 2.4 mJ/cm 2 , requiring a low powered laser beam of~0.8 mW. The simulated heating and cooling cycle is shown in Figure 3a, plotting the temperature normalised to the Curie temperature for permalloy of T C = 870 K [29]. A snapshot of the temperature distribution is also shown in the inset to Figure 3a, where blue represents the lowest temperature and red the highest-the temperature is lowest at the sides of the HAMM element since there the heat loss rate to the substrate and air is highest.
The effective field H contains a number of contributions: demagnetizing field, direct exchange interaction field, external field, magnetoelastic field, as well as a longitudinal relaxation field (T < T C ) [30]: Here, m e is the temperature-dependent equilibrium magnetization given by [29], The magnetoelastic field is derived from the magnetoelastic energy density [32], given in Equation (4), using the expression H me = −1/µ 0 M e ∂ε/∂m: Here, λ 100 and λ 111 are the magnetostriction coefficients along the crystallographic axes, σ is the stress generated by the antiferroelectric layer and transmitted through to the magnetic layer by strain-mediated coupling, m = (α 1 , α 2 , α 3 ) and (γ 1 , γ 2 , γ 3 ) are the direction cosines of the magnetisation and stress, respectively. Here, for simplicity, the magnetostriction is assumed to be isotropic with λ = λ 100 = λ 111 = −10 −5 [33], and any temperature dependence is not taken into consideration. A uniform compressive stress is used here, initially fixed to σ = 100 MPa. In-plane stress values of this order can easily be achieved in the geometry of Figure 1 using the PbNbZrSnTiO 3 antiferroelectric layer. The stress tensor is obtained from the product of the elastic constant, c E , and strain tensors. For the out-of-plane strain measured in Figure 2, a simple estimation using c 13 ∼ = 85 GPa [22] results in a maximum achievable in-plane stress of~200 MPa. For thin films, further problems can arise due to clamping effects from the substrate [34], which will need to be experimentally determined. It should be noted, however, the stresses required are relatively small, and as shown below the operating point can be set to values as low as 80 MPa. Materials with higher magnetostriction coefficients could also be chosen to further lower the required stress values.
The magnetisation switching process is shown in Figure 3b,c starting from the "1" and "0" states, respectively. Depending on the applied stress direction, the end state is either "0" or "1", respectively, as indicated in the figure. The applied stress induces an easy axis along the opposite diagonal; however, at lower temperatures, this is not strong enough to result in magnetisation switching. As the Curie temperature is approached, the average magnetisation length approaches zero. This reduces the effective energy barrier that must be overcome in order for the magnetisation to switch under the effect of the induced anisotropy due to the magnetoelastic coupling. To capture this process, it is important to include the effect of lattice vibrations on the magnetisation due to the non-zero temperature. This is achieved using the stochastic LLB equation (sLLB), where a thermal field, H th , is added to the transverse damping torque effective field in the explicit form of the sLLB equation, and a thermal torque, η th , is added to the sLLB equation [35]. The thermal field and torque are given in Equation (5), where their spatial and cross-correlations are zero, V is the volume of the computational cellsize, which was set to 5 nm 3 , ∆t is the fixed time-step used in the sLLB evaluation (the Milstein scheme was used here with ∆t = 0.1 ps [36]), and r H , r η are random unit vectors: For magnetic nanoparticles, the switching probability can be described using an Arrhenius law based on the Néel-Brown thermal activation model [37,38]. Here, the switching process tends to be dominated by reverse domain nucleation at the corners. This is illustrated in Figure 4, where snapshots of the magnetisation configuration during switching events are shown at different temperatures in the heating-cooling cycle. Close to T C , due to the strong effect of lattice vibrations on the magnetisation, the configuration is almost random, although a preferential alignment along the starting magnetisation configuration is still maintained. As the sample cools, reverse domains are nucleated along the induced anisotropy axis. The reversed domains quickly grow in size, finally reaching the reversed magnetisation configuration along the opposite diagonal.
To investigate the switching process further, the switching probability is calculated as a function of stress magnitude and maximum temperature during the heating-cooling cycle. Keeping the same laser fluence, this is adjusted by controlling the duration of the laser pulse. For each combination of stress and maximum temperature, the switching probability is calculated out of five heating-cooling cycles. The resultant switching probability is shown in Figure 5. As expected at stronger stress values and temperatures, magnetisation switching always occurs, defining the possible operating region for the HAMM array. A linear boundary delineates this region up to~200 MPa stress, where magnetisation switching can occur even at room temperature given enough time. If the temperature reaches values very close to T C , the starting magnetisation configuration is completely lost and the magnetisation recovery process becomes more complex. In particular, vortex structures tend to be nucleated, preventing the magnetisation from aligning along one of the diagonals. This results in a switching probability that is only weakly influenced by the applied stress magnitude as seen in Figure 5. Note that the antiferroelectric transition temperature of the antiferroelectric material is typically lower than the Curie temperature of Ni 81 Fe 19 , thus the higher operating temperatures may need to be avoided; it should be noted, however, that the temperature of the substrate is lower than that of the top magnetic layer, due to radiative heat loss to air and steep temperature gradient along the substrate thickness. This can be further minimised by insertion of a thermally insulating spacer layer between the MTJ and antiferroelectric layer. To avoid switching of the MTJ fixed layer, a high T C material can be chosen; for example, a simple Co/Al 2 O 3 /NiFe tunnel junction [39] can be used, noting that Co has T C~1 400 K.
Materials 2017, 10, 991 7 of 10 starting magnetisation configuration is still maintained. As the sample cools, reverse domains are nucleated along the induced anisotropy axis. The reversed domains quickly grow in size, finally reaching the reversed magnetisation configuration along the opposite diagonal.
To investigate the switching process further, the switching probability is calculated as a function of stress magnitude and maximum temperature during the heating-cooling cycle. Keeping the same laser fluence, this is adjusted by controlling the duration of the laser pulse. For each combination of stress and maximum temperature, the switching probability is calculated out of five heating-cooling cycles. The resultant switching probability is shown in Figure 5. As expected at stronger stress values and temperatures, magnetisation switching always occurs, defining the possible operating region for the HAMM array. A linear boundary delineates this region up to ~200 MPa stress, where magnetisation switching can occur even at room temperature given enough time. If the temperature reaches values very close to TC, the starting magnetisation configuration is completely lost and the magnetisation recovery process becomes more complex. In particular, vortex structures tend to be nucleated, preventing the magnetisation from aligning along one of the diagonals. This results in a switching probability that is only weakly influenced by the applied stress magnitude as seen in Figure 5. Note that the antiferroelectric transition temperature of the antiferroelectric material is typically lower than the Curie temperature of Ni81Fe19, thus the higher operating temperatures may need to be avoided; it should be noted, however, that the temperature of the substrate is lower than that of the top magnetic layer, due to radiative heat loss to air and steep temperature gradient along the substrate thickness. This can be further minimised by insertion of a thermally insulating spacer layer between the MTJ and antiferroelectric layer. To avoid switching of the MTJ fixed layer, a high TC material can be chosen; for example, a simple Co/Al2O3/NiFe tunnel junction [39] can be used, noting that Co has TC ~ 1400 K. shown at different temperatures, illustrating the switching process. The top row starts from state "1" with a stress applied to induce switching to state "0". The bottom row starts from state "0" and switches to state "1". shown at different temperatures, illustrating the switching process. The top row starts from state "1" with a stress applied to induce switching to state "0". The bottom row starts from state "0" and switches to state "1". Figure 5. Switching probability as a function of temperature and stress magnitude. The probability of switching from state "1" to state "0" was computed as a function of maximum heat pulse temperature (varied by changing the laser pulse duration) from room temperature up to TC, and as a function of stress magnitude. The operating region where switching always occurs is marked.
Conclusions
A bilayer magnetoelectric memory device has been investigated. The device consists of an antiferroelectric layer, used to generate stresses using a minimal number of voltage pads, with information stored as the in-plane magnetisation direction in an MTJ magnetic free layer, placed on the antiferroelectric material. Heat pulses are generated using a low powered laser, and these are used to trigger thermally activated switching of the magnetisation towards a strain-induced anisotropy easy axis. A three-dimensional micromagnetics solver based on the stochastic LLB equation and coupled to a heat flow solver has been used to investigate the magnetisation switching processes in these devices. The switching probability depends on both the applied stress magnitude and maximum temperature reached during the heat pulse, defining an operating region between 80 and 180 MPa and normalised temperatures ranging from 0.65 to 0.99. These results demonstrate the physical processes behind the proposed memory device. This simple architecture retains the advantages of STT-MRAM, namely non-volatility, fast bit reading and writing, reliability and low power usage, whilst avoiding problems related to the high tunnel current densities required for switching the magnetisation in MTJ elements. Whilst the proposed design allows for a simple architecture of the magnetoelectric layers, the most important difficulty that must be overcome is increasing the areal bit density. Below a certain element size, thermal stability becomes a concern and out-of-plane magnetisation devices may need to be investigated. Moreover, with large areal bit densities, magnetic dipolar interactions between the memory elements can become significant. These can be eliminated or reduced using synthetic antiferromagnetic or synthetic ferrimagnetic layers [40]. Before this limit is reached, the limitation rests with the minimum element size that can be addressed using a laser spot in order to deliver a heat pulse. The possibility of tuning the operating region by taking into account the non-uniform laser fluence has been discussed. In this design, using a blue laser of 480 nm wavelength, an array pitch of 80 nm could be achieved, comparable to STT-MRAM devices, resulting in areal bit densities of ~10-14 Gbit/cm 2 allowing for inclusion of top electrodes. Other possibilities include use of a near-field laser design based on a MEMS architecture or delivering localised heat pulses using an alternative method. . Switching probability as a function of temperature and stress magnitude. The probability of switching from state "1" to state "0" was computed as a function of maximum heat pulse temperature (varied by changing the laser pulse duration) from room temperature up to T C , and as a function of stress magnitude. The operating region where switching always occurs is marked.
Conclusions
A bilayer magnetoelectric memory device has been investigated. The device consists of an antiferroelectric layer, used to generate stresses using a minimal number of voltage pads, with information stored as the in-plane magnetisation direction in an MTJ magnetic free layer, placed on the antiferroelectric material. Heat pulses are generated using a low powered laser, and these are used to trigger thermally activated switching of the magnetisation towards a strain-induced anisotropy easy axis. A three-dimensional micromagnetics solver based on the stochastic LLB equation and coupled to a heat flow solver has been used to investigate the magnetisation switching processes in these devices. The switching probability depends on both the applied stress magnitude and maximum temperature reached during the heat pulse, defining an operating region between 80 and 180 MPa and normalised temperatures ranging from 0.65 to 0.99. These results demonstrate the physical processes behind the proposed memory device. This simple architecture retains the advantages of STT-MRAM, namely non-volatility, fast bit reading and writing, reliability and low power usage, whilst avoiding problems related to the high tunnel current densities required for switching the magnetisation in MTJ elements. Whilst the proposed design allows for a simple architecture of the magnetoelectric layers, the most important difficulty that must be overcome is increasing the areal bit density. Below a certain element size, thermal stability becomes a concern and out-of-plane magnetisation devices may need to be investigated. Moreover, with large areal bit densities, magnetic dipolar interactions between the memory elements can become significant. These can be eliminated or reduced using synthetic antiferromagnetic or synthetic ferrimagnetic layers [40]. Before this limit is reached, the limitation rests with the minimum element size that can be addressed using a laser spot in order to deliver a heat pulse. The possibility of tuning the operating region by taking into account the non-uniform laser fluence has been discussed. In this design, using a blue laser of 480 nm wavelength, an array pitch of 80 nm could be achieved, comparable to STT-MRAM devices, resulting in areal bit densities of~10-14 Gbit/cm 2 allowing for inclusion of top electrodes. Other possibilities include use of a near-field laser design based on a MEMS architecture or delivering localised heat pulses using an alternative method. | 6,154.4 | 2017-07-15T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
of Time-resolved synchrotron tomographic quantification of deformation-induced flow in a semi-solid equiaxed dendritic Al–Cu alloy
The rheology of semi-solid alloys has been studied by a novel in situ tomographic technique. Via extruding an equiaxed Al–15 wt.%Cu alloy, the inhomogeneous coherent compression of the a -Al grains was quantified, including the interdendritic channel closure and formation of a liquid extrudate. This investigation not only provides important insights into the microstructural changes occurring during semi-solid deformation, but also offers a validation benchmark for segregation and rheological models.
In solidification processing, deformation in the semisolid can induce a range of defects, including extrusion segregation in squeeze-casting [1] and surface exudation in direct-chill casting [7]. Although several prior investigations have identified deformation-driven melt flow as a possible mechanism of such defects [1,7,12], the influence of stress on a semi-solid alloy and the melt flow through the equiaxed microstructure are not clearly understood. Many models have been developed to predict the formation of those defects, based on the proposition of the mushy zone as a sponge saturated with liquid [2,7,13,14]. However, currently there are no direct validation techniques that capture the kinetics incorporated in this hypothesis; in situ synchrotron tomography is one possible solution.
Recently, high speed X-ray tomography has been utilized to perform four dimensional imaging (4D, i.e. 3D plus time) of the pore-scale fluid flow [15], solidification [16][17][18], and the influence of deformation on semi-sold alloys [19][20][21]. Tensile and uniaxial compression tests have been used previously with the help of 4D imaging to study semi-solid deformation; these were mainly focused on the formation of damage (hot tearing) as a result of the granular response of the mushy zone [19,21,22]. In this paper, we describe the application of an indirect extrusion cell to study the rheological behavior of the mushy zone and the mechanisms responsible for the liquid migration induced by deformation. Such an indirect extrusion cell can also be used to study how extrusion segregation and exudation form, since it mimics their forming conditions.
The sample was semi-solid, equiaxed dendritic Al-15 wt.%Cu; a cylindrical specimen 2.9 mm in diameter by 2.9 mm long was prepared using wire electro-discharge machining, and then inserted in a boron nitride holder with an inner diameter (ID) of 3 mm and outer diameter (OD) of 5 mm. An alumina tube (1.5 mm ID and 3 mm OD) was placed on top of the specimen forming an indirect extrusion cell (Fig. 1). The entire extrusion set-up was enclosed within a resistive furnace [21], mounted on a bespoke mechanical testing rig with inbuilt rotation (P2R [20,21]).
The experiment was conducted using 53 keV monochromatic X-rays on the I12 beamline at Diamond Light Source. A high speed X-ray imaging system was used, consisting of the beamline's custom-built imaging modules coupled to a CMOS camera (Miro 310M, Vision Research, USA). The imaging system provided a field of view (FOV) of 5.12 Â 3.2 mm and 4 lm pixel size. The sample was positioned so that the top half of the billet and extrudate was in the FOV. The sample was heated to 560 ± 2°C (27 ± 3% liquid fraction) in 15 min, and then held for 10 min for thermal homogenization. Subsequently, the top ram was moved down at 1 lm/s, forcing the alumina tube downwards while measuring loads.
Seven tomograms were captured, each comprising 900 radiographs, collected within 9 s at 45 s intervals. A filtered back projection algorithm was used to reconstruct the data to generate a tomography (unsigned 16-bit integral) [23]. Noise reduction was performed using a 3D median filter, followed by an anisotropic diffusion filter [24] using Avizo 8 (FEI VSG, France). Liquid phases were segmented by the Otsu method [25] using MATLAB 2012b (The Mathworks Inc., USA); errors were evaluated by varying the threshold value (24108) by ±50. Figure 2a-c displays the resulting 2D longitudinal slices of the specimen under extrusion at the displacements of 0, 162 and 324 lm, respectively. The dark gray dendrites are the a-Al grains, while the Cu-enriched liquid is light gray. The corresponding 3D volume-rendered image is shown in the Supplementary information. A small amount of liquid segregated into the tube on top of the sample is notable ( Fig. 2a at d = 0 lm); this extrudate is due to the stress caused by thermal expansion during heating. The subsequent response of the mush to the applied deformation is shown in Fig. 2b (162 lm) and Fig. 2c (324 lm). As deformation progressed, more melt flowed into the alumina tube from the semi-solid specimen. The liquid channels under the wall of the extrusion tube closed in response to the deformation (zone D in Fig. 2b and c). The evolution of the extruded liquid ( Fig. 2e-g) displayed the characteristic profile of laminar flow in a pipe. We can also observe the closure of pre-existing porosity (Fig. 2e-i) due to the compressive strain.
In addition to making the above qualitative observations, we performed a detailed, time-resolved quantification of the extrusion. From d = 0 to 324 lm, the volume of the expelled liquid in the tube increased from %0.2 to %2 mm 3 at an almost constant rate of %0.0055 mm 3 per lm displacement. The extruded liquid volume increased at the same rate as the volumetric displacement (%0.0053 mm 3 /lm) of the alumina tube. The liquid fraction in the billet (lower part of the specimen) decreased from 26.7 ± 2.8% to 15.1 ± 2.1%, indicating densification of the mush (Fig. 3b). The extraction of the liquid by compression of the solid skeleton can be understood by considering the mush to be a saturated sponge, consisting of two phases (the solid grains and the liquid phase). This observation is contrary to the shear-induced dilation that is observed during direct shearing [26] and uniaxial semisolid compression of equiaxed dendrites [21] and globular grains [22], where the liquid channels locally open rather than close. This suggests that different stress states can alter the fluid flow via different mechanisms (sponge or granular). The experiment reveals that constrained compressive stress densifies the solid skeleton and expels liquid from the mush (spongy-like behavior); shear stress is known to cause dilation, drawing liquid from the surrounding neighborhood into the dilated spaces between the grains (granular behavior) [21]. Therefore, when modeling semi-solid deformation, the effect of the stress states on the modulation of liquid flow needs be accounted for.
Along with liquid, a small amount of the solid phase was ejected into the die cavity ( Fig. 2d-f). The peak height of extruded solid increased gradually (Fig. 3a). A magnified view of the extruded grains is shown in Figure 2j and k. Those grains located near the extruder inlet were free to move and appear to be sheared by the grains below, leading to dilatant translation and rotation (e.g. the grain A moved Consequently, the liquid-filled interstitial space increased slightly (Fig. 2k). Buoyancy forces might also play a role in the grain movement as the Cu-rich liquid is denser than the a-Al solid. The movement of grains due to deformation and associated changes of interdendritic liquid will cause both compositional and microstructural variation in the final component.
Determining the mechanical response of the mush requires knowledge of the strength of the dendritic/globular a-Al network and the resistance of the liquid flow. Although calculating the strength of a-Al network would require complex simulations, we can use the 3D geometry of the liquid network to directly determine the permeability, or resistance to the flow of the interdendritic liquid. This was done by solving the Navier-Strokes equations on a subset of the mush at each time step. A subvolume of 2 Â 2 Â 0.8 mm was extracted from the central region of the sample within the billet. Avizo XLab flow simulation code (FEI VSG, France) was used for the simulations (conditions detailed in Ref. [16]). The simulation is also compared with the Carman-Kozeny permeability relationship [27]: where f l is the liquid fraction, S V is the surface area of the solid per unit volume of sample measured directly from the 3D data, and k c (the Kozeny constant) is set to 5 as suggested by Duncan et al. [28]. The simulated permeability decreased monotonically from %2.4 to %0.5 lm 2 during the 324 lm of extrusion (Fig. 3b). Although there is disparity between the simulation and Carman-Kozeny equation, this is still within the scatter of previous work [29]. The continuous decrease of permeability shows the extrusion continued to compress the solid skeleton, increasing the flow resistance and blocking further flow of the interdendritic liquid. The force measurement (Fig. 3c) provides additional information on the mechanical response of the semi-solid specimen. The load linearly rose from 9.7 ± 1.6 N at d = 54 lm, to 35.5 ± 2.5 N at 324 lm. The load increase rate is roughly linear at 0.1 N/lm. It is likely that further densification of the mush will significantly increase the stress as observed by Ludwig et al. [30]. Note that although the measured force is a combined response of liquid flow and solid deformation resistance of the mush, it is expected the liquid flow resistance is minimal as compared to the mechanical load of a-Al network. Figure 3a-c established the correlation of the rheological properties with the evolving two phase microstructure.
Although the measured bulk properties (force, liquid fraction, permeability and expelled liquid volume) are linear with time, the deformation is inhomogeneous. This has been quantified by determining the liquid fraction within different regions (A and B in Figure 4a insert) in the billet. Figure 4a reveals that the liquid fraction of Region A decreased faster than that of B. At the initial stage of deformation (d = 0 lm, Fig. 4b and d), the liquid flowed through a complex network, which was homogeneously distributed and well connected with few isolated liquid pockets. During the extrusion, a considerable rise in the number density of isolated liquid pockets was observed from %224 to %896 mm À3 in Region A, while Region B showed a marginal increase (%320 to %448 mm À3 ). At the final stage, more liquid pockets were observed in Region B than in A at 324 lm ( Fig. 4c and e). Compressive deformation narrowed the liquid channels and closed them at their throats. The inhomogeneous nature of deformation is due to the fact that the propagation of compression in granular medium is strongly dependent on the microstructure and tends to follow the percolating pathways [31].
In conclusion, a novel technique combining high speed synchrotron X-ray tomography and mechanical deformation was developed to measure the influence of microstructure on the rheological behavior of semi-solids. The potential of the technique has been demonstrated by observing and quantifying the rheology of a semi-solid equiaxed dendritic Al-15 wt.%Cu alloy. The real time 3D quantification of semi-solid extrusion provided new insights into the behavior of a mush, as follows: the strain distribution is very inhomogeneous due to the sponge-like compression of the partially coherent equiaxed dendritic solid; the strain is mostly accommodated by inter and intra-grain compaction, with only a small amount of granular flow; the interdendritic liquid is driven out of the semi-solid mush and forms an extrudate; and the permeability of the compacting mush approximately follows a Carman-Kozeny relationship. These microstructural level observations can be directly used to develop and validate segregation and rheological models. | 2,648.8 | 2015-07-01T00:00:00.000 | [
"Materials Science"
] |
Room-Temperature Fabricated Thin-Film Transistors Based on Compounds with Lanthanum and Main Family Element Boron
For the first time, compounds with lanthanum from the main family element Boron (LaBx) were investigated as an active layer for thin-film transistors (TFTs). Detailed studies showed that the room-temperature fabricated LaBx thin film was in the crystalline state with a relatively narrow optical band gap of 2.28 eV. The atom ration of La/B was related to the working pressure during the sputtering process and the atom ration of La/B increased with the increase of the working pressure, which will result in the freer electrons in the LaBx thin film. LaBx-TFT without any intentionally annealing steps exhibited a saturation mobility of 0.44 cm2·V−1·s−1, which is a subthreshold swing (SS) of 0.26 V/decade and a Ion/Ioff ratio larger than 104. The room-temperature process is attractive for its compatibility with almost all kinds of flexible substrates and the LaBx semiconductor may be a new choice for the channel materials in TFTs.
Introduction
The flexible active matrix organic light-emitting diode (AMOLED) displays have been attracting great attention because they have many outstanding advantages such as thin width, lightweight, and superior flexibility [1][2][3].As the key part of the AMOLED, thin film transistors (TFTs) with a low temperature process become an inevitable trend in order to match flexible displays.
Over recent decades, amorphous silicon (a-Si) and polycrystalline silicon have been the main choice for channels in TFTs.However, a-Si TFTs has a low mobility of less than 1 cm 2 •V −1 •s −1 , which is too low to drive high resolution displays [4,5].On the other hand, polycrystalline silicon TFTs possess poor uniformity due to the grain boundary, which limits its application in large-size displays [6,7].Compared to silicon based TFTs, metal oxide TFTs (e.g., InGaZnO [8][9][10], , and ZnO [14][15][16]) have a great potential in flat panel displays because of their high mobility, visible-light transparency, satisfactory uniformity, and low temperature process [17,18].However, TFTs based on metal oxides are meeting a great challenge of long-term stability.Actually, InGaZnO (IGZO) was the most representative among oxide material systems.Since Nomura et al. [19] reported the flexible TFTs based on IGZO, the IGZO has attracted extensive attention.Currently, with the efforts of many scientific researchers, AMOLEDs based on IGZO-TFTs have entered people's life.However, Indium is a rare earth element and is becoming rarer.Therefore, the cost is very expensive.Furthermore, considering the cost and stability, it is necessary to develop some new materials to fabricate TFTs at a low temperature.To develop In-free materials, Alston et al. [20] reported TFTs with a GaSnZnO (GSZO) active layer fabricated below 150 • C.However, the mobility was only 0.14 cm 2 •V −1 •s −1 .Park et al. [21] reported solution-processed TFTs with an alkali metal doped ZnO active layer, but the ZnO surface was sensitive to the atmosphere and the device stability was poor.Kim et al. [22] reported TFTs with an Hf doped ZnO active layer, but the electrical performance was poor with a large subthreshold swing (SS) of 1.09 V/decade and a turn-on voltage (V on ) of −7 V. Jiang et al. [23] reported TFTs with an Al doped ZnO active layer, but the mobility was only 0.17 cm 2 •V −1 •s −1 .It seems difficult to attain high-performance TFTs with a ZnO based active layer without the Indium element.Therefore, it is necessary to develop a new semiconductor material system suitable for an active layer in TFTs.
Lanthanum hexaboride (LaB 6 ) is a known functional ceramic material in the photoelectric field due to its high melting temperature, excellent chemical stability, and high hardness [24][25][26][27].Furthermore, La is relatively abundant in the earth's crust with an annual output of 12,500 t compared to the In with an annual output of 75 t and Ga with an annual output of 30 t.In and Ga are limited resources and becoming rarer, so the relatively rich content in the earth's crust means a lower price.Therefore, LaB x is cheaper than In 2 O 3 based material for an active layer in TFTs.Considering the physical properties and cost, there is a great potential for LaB 6 materials in the semiconductor field.Conventionally, LaB 6 is widely used as cathode emission material [28,29].So far, there is no report about its application in the TFTs field.
In our work, we report the fabrication and performance of TFTs that use compounds with lanthanum and the main family element Boron (LaB x ) for the active layer.This is the first time to use LaB x thin film as the channel materials in TFTs.The LaB x -TFTs exhibited obvious field-effect characteristics.The structure and performance of LaB x thin films were investigated by X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), and UV-Visible spectrometer.Compared to the TFTs with ZnO-TFTs with the In element, the outstanding advantages of LaB x -TFTs include the following: the cost of LaB 6 is relatively cheap due to the abundant content of La in the earth crust, which is helpful for reducing the manufacturing cost and the stable chemical properties of LaB 6 , which is beneficial for high-stability devices.Additionally, the LaB 6 has a low coefficient of expansion close to zero, so the stress between LaB 6 and adjacent films is low and it is easy to attain high-stability flexible devices.Therefore, it may provide a new choice for channel materials in TFTs.
Experimental
The LaB x -TFTs were fabricated with a top contact configuration (see Figure 1) by using a heavily doped n-type silicon wafer with a 300 nm thick layer of thermally oxidized SiO 2 (11.4 nF/cm 2 ), which serves as the gate electrode and gate insulator, respectively.The wafers were cleaned in an ultrasonic bath with acetone, de-ionized water, detergent, de-ionized water, and isopropanol for 10 min in sequence.The LaB x thin films with a thickness of 40 nm (optimized thickness) were deposited on the silicon wafer by DC magnetron sputtering with LaB 6 target in a pure argon atmosphere with a flow of 25 sccm and patterned by a shadow mask with an area of 500 µm × 800 µm.For the source/drain electrodes, 380-nm-layer of ITO was sputtered through a shadow mask defining a channel width/length (W/L) of 300/300 µm.The whole preparation process was completed at room temperature.We compared the device A to device B and made a detailed investigation on the different electrical performances between device A and B. In this scenario, the only difference between device A and B is that the LaB x channel layer was prepared under different working pressure.For device A, the LaB x thin film was deposited in pure argon atmosphere with a flow rate of 25 sccm under a working pressure of 0.25 Pa.At the same time, the LaB x thin film was deposited under a working pressure of 3.8 Pa for device B. 2c, respectively.The corresponding properties were summarized in Table 1.Device B exhibited a poor electrical performance with a saturation mobility (µ sat ) of 0.13 cm 2 •V −1 •s −1 , a subthreshold swing (SS) of 0.89 V, a negative turn-on voltage (V on ) of −5.31 V, a negative threshold voltage (V T ) of −2.51 V, and a current on/off (I on /I off ) larger than 10 3 .At the same time, device A exhibited a relative satisfactory electrical performance with a higher µ sat of 0.44 cm 2 •V −1 •s −1 , a lower SS of 0.26 V/decade, a V on of −0.44 V, a V T of −2.27 V, and a I on /I off ratio larger than 10 4 .The significant difference between device A and B was mainly ascribed to the different chemical structure and atom ratio for La/B under a different working pressure [30,31].For LaB x film, the working pressure plays a very important role in the deposition process.Zhao et al. [32] pointed out that LaB 6 thin films, which were deposited at 1.0 Pa, have a higher degree of crystalline structure and superior physical properties in comparison with the other films.Hu et al. [33] also reported that argon pressure strongly influenced the condensing particles' kinetic energy clearly by affecting the scattering processes of sputtered energetic particles and LaB 6 film deposited at 1.0 Pa showed a higher crystallinity degree.However, the optimal conditions are not applicable to LaB x films in this work, which can be used as an active layer for TFTs.It's noted that the huge difference of atomic weight between La and B is extremely large.For the La atom, the atomic weight is 138.9 while, for the B atom, the atomic weight is only 10.8.This means that the scattering probability of those atoms in discharge space by Ar atoms is very different from each other.The scattering of La atoms is small and La atoms are relatively easy to place at the substrate.On the other hand, B atoms are likely to be scattered by Ar.Therefore, some of them will arrive at the substrate level but some will be deposited at the chamber wall or evacuated by the vacuum pump.This implies that the La/B stoichiometric ratio of LaB x film will be changed when deposited under different working pressures.The structure of LaB 6 is similar to the that of CsCl, which exhibits a body centered cubic shape [34].The difference is that the B 6 octahedral clusters occupy the position of the Cl atom and the La atom occupies the position of the Cs atom.To keep the stability of the B 6 octahedral network, two electrons are needed.So the La atom with three electrons in the outermost electron orbital will be electronically spared and the extra electron will be free to move around the La atom.In other words, the electrical properties of LaB x thin film will be largely dependent on the chemical structure and the ratio of La and B. It is reasonable to suppose that the free electrons will increase with the increase of the La/B stoichiometric ratios.However, the resistivity (carrier concentration or mobility) is nonlinear with the La/B stoichiometric ratios because it is also affected by the degree of crystallization and the grain boundary scattering in addition to the La/B stoichiometric ratios [33].To explain the different properties for TFTs with the LaB x active layer prepared under different working pressures, the measurement of XPS, XRD, and UV-visible spectrometer were performed.
XPS Measurement
In order to figure out the composition change of LaB x thin film deposited under different working pressures, the XPS measurement was performed.The 300-nm-thick LaB x thin film samples were prepared on silicon substrate by magnetron sputtering with a LaB 6 ceramic target.The LaB x thin film samples measured for XRD were prepared under the same conditions.Sample A and sample B denoted for the 300-nm-thick LaB x film were prepared with a working pressure of 0.25 Pa and 3.8 Pa, respectively.In addition, the atomic percentage of each element for sample A and B were summarized in Table 2.As shown in Table 2, there are nearly identical atom percentages of La for sample A (14.0%) and B (15.3%) while a significant difference of atom percentage happened between sample A (49.58%) and sample B (36.9%).The atom ratio of La/B increased from 28.1% to 41.5% with the working pressure increased from 0.25 Pa to 3.8 Pa, which indicates that the relative content of La was increased.This resulted in more free electrons in LaB x thin film.Additionally, this is consistent with the transfer characteristics shown in Figure 2c where the LaB x TFT prepared under the working pressure of 3.8 Pa exhibited a more negative threshold voltage.In addition, the relatively small on-current may be ascribed to the carrier scattering with the increase in carrier concentration.
XRD Patterns
The crystal structure of LaB x thin films deposited under different working pressures were investigated by using XRD, which is shown in Figure 3.It is easy to find that the LaB x thin films prepared under different working pressures exhibited obvious crystalline.However, it is noted that we could not match the acquired diffraction patterns to the standard diffraction patterns for LaB 6 .This difficulty can be accounted for by using the following two reasons [35].First, there is a large thermal mismatch between the LaB 6 thin film and the Si substrate.The thermal expansion coefficients are 6.0 × 10 −6 K −1 for LaB 6 versus 2.6 × 10 −6 K −1 for Si and the difference can induce thermal stress in thin films and shift the patterns.Second, due to the Ar implantation, film deposited by sputtering usually has additional problems such as the deviation in the stoichiometric ratio, the defect state's creation and structural change, which result in the mismatch between the acquired diffraction patterns and reference patterns.
Optical Gap
To evaluate the optical bandgap energy (E opt ), the UV-Visible light absorption spectrum was measured.The 40-nm-thick LaB x thin film sample was prepared on quartz glass by magnetron sputtering under a working pressure of 0.25 Pa.The absorption spectrum for LaB x thin film was shown in Figure 4.The Tauc model [36,37] indicates the relationship between the photon energy (hν) and the optical-absorption coefficient (a).Additionally, the plot of (ahν) 1/2 vs. photon energy was shown in the inset in Figure 4.The E opt value can be obtained by extrapolating the linear portion to the photon energy axis in the plot of (ahν) 1/2 vs. photon energy.The E opt value is calculated to be about 2.28 eV.The relatively narrow band gap can lead to smaller activation energies and accumulates the thermally activated carries, which is consistent with the electrical characteristic for LaB x TFT annealed at 400 • C (not shown).
Conclusions
In conclusion, LaB x thin films prepared under different working pressures by DC magnetron sputtering were investigated as an active layer for TFTs.The element distribution and structural properties of LaB x thin films were analyzed by using XPS, XRD, and UV-visible spectrometers.The XPS results demonstrated that the atom ratio for La/B was related to the working pressure during the sputtering process and enhanced with the increase in the working pressure.The XRD results showed that the LaB x thin film was polycrystalline.According to the absorption spectrum, the E opt value was calculated to be about 2.28 eV from the plot of (ahν) 1/2 vs. photon energy.The room-temperature fabricated LaB x -TFT exhibited a µ sat of 0.44 cm 2 •V −1 •s −1 , a SS of 0.26 V/decade, and an I on /I off ratio larger than 10 4 .The room-temperature processes without intentionally annealing steps show a great potential for the applications in the flexible displays.The LaB x may be a new choice for the channel materials in TFTs.
Figure 1 .
Figure 1.The schematic structure of LaB x -TFT.
Figure
Figure 2a,b show the output curves for LaB x TFTs fabricated under the different working pressure of 0.25 Pa and 3.8 Pa, respectively.Both device A and B exhibited strongly-saturated output characteristics.Additionally, device A exhibited a larger output current than device B in a saturation region.The comparison between the transfer curves for device A and B is shown in Figure2c, respectively.The corresponding properties were summarized in Table1.Device B exhibited a poor electrical performance with a saturation mobility (µ sat ) of 0.13 cm 2 •V −1 •s −1 , a subthreshold swing (SS) of 0.89 V, a negative turn-on voltage (V on ) of −5.31 V, a negative threshold voltage (V T ) of −2.51 V, and a current on/off (I on /I off ) larger than 10 3 .At the same time, device A exhibited a relative satisfactory electrical performance with a higher µ sat of 0.44 cm 2 •V −1 •s −1 , a lower SS of 0.26 V/decade, a V on of −0.44 V, a V T of −2.27 V, and a I on /I off ratio larger than 10 4 .
Figure 2 .
Figure 2. Output curves for device A (a) and device B (b). (c) Transfer curves for device A and device B. (Device A: LaB x active layer was prepared in pure argon atmosphere with a flow rate of 25 sccm under a working pressure of 0.25 Pa.Device B: LaB x active layer was prepared in pure argon atmosphere with a flow rate of 25 sccm under a working pressure of 3.8 Pa).
Figure 3 .
Figure 3. XRD patterns of LaB x thin films prepared under different working pressure.(300 nm on silicon substrate).
Figure 4 .
Figure 4. Absorption spectrum of the 40-nm-thick LaB x thin film on quartz glass and the inset shows the plot of (ahν) 1/2 vs. photon energy.
Table 1 .
Comparison of device properties for device A and B.
Table 2 .
The atomic percentage of each element for the LaB x thin films deposited under different working pressure.(Sample A: a 300-nm LaB x was prepared on silicon substrate by magnetron sputtering with a working pressure of 0.25 Pa.Sample B: a 300-nm LaB x was prepared on silicon substrate by magnetron sputtering with a working pressure of 3.8 Pa). | 3,945.2 | 2018-06-01T00:00:00.000 | [
"Materials Science"
] |
Sustainable Retrofitting Solutions: Evaluating the Performance of Jute Fiber Nets and Composite Mortar in Natural Fiber Textile Reinforced Mortars
: Sustainable building materials for integrated (structural and thermal) retrofitting are the need of the hour to retrofit/upgrade the seismic vulnerable and ill-insulated existing building stocks. At the same time, the use of natural fibers and their recyclability could help construct safer and more sustainable buildings. This paper presents three aspects of jute fiber products: (1) the evaluation of the mechanical performance of the jute nets (2.5 cm × 2.5 cm and 2.5 cm and 1.25 cm mesh configurations) through tensile strength tests (with the aim for these to be used in upgrading masonry wall with natural fiber textile reinforced mortars (NFTRM) systems); (2) the hundred percentage recyclability of left-over jute fibers (collected during the net fabrication and failed nets post-tensile strength tests) for the composite mortar preparation; (3) and the evaluation of insulation capacity of the recycled jute net fiber composite mortar (RJNFCM) through thermal conductivity (TC) measurements, when a maximum amount of 12.5% of recycled jute fiber could be added in the mortar mixture at laboratory conditions and with available instruments Notably, when more than the said amount was used, the fiber–mortar bonding was found to be not optimal for the composite mortar preparation. These studies have been carried out considering these products’ applicability for integrated retrofitting purposes. It has been found that the denser mesh configuration (2.5 cm × 1.25 cm) is 35.80% stiffer than the other net configurations (2.5 cm × 2.5 cm). Also, the mesh configuration (2.5 cm × 1.25 cm) shows about 60% more capability to absorb strain energy. TC tests have demonstrated the moderate insulation capacity of these composite mortar samples, and the TC values obtained from the tests range from 0.110 (W/mK) to 0.121 (W/mK).
Introduction
Creating thermally efficient and structurally safe building stocks stands as a key objective within the contemporary construction and building (C&B) sector.This commitment arises from the industry's pursuit of constructing safer, more sustainable, environmentally friendly, and nearly self-sufficient buildings.
Moreover, a building in its complete service life generates vast amounts of waste, some of which is recyclable, and others are not, and therefore, are responsible for creating environmental problems.In the European Union (EU), construction and demolition (C&D) accounts for about 180 tons of waste per year [1].The C&D wastes of the C&B sector are mainly dumped in landfills, thus directly damaging the environment and ecosystem and detrimental to human health [2].The EU, with Directive 2008/98/EC [3], discourages the disposal of 100% of demolition wastes and, to reinforce its Sustainable Development Strategy, in the last decade gradually developed the Integrated Product Policy [4,5] to reduce the environmental impacts of products throughout their life-cycle.Considering environmental and ecological sustainability, different research groups have studied the recyclability of C&D wastes [6][7][8][9][10][11][12] for various C&B applications.Similarly in the literature, the use of argo-wastes also can be found in [6,[13][14][15][16][17][18].
In the EU, buildings stocks constructed before the 1990's are not energy efficient [19], and mostly these buildings are constructed without having properly followed the seismic standard [20].At the same time, these buildings are known to be some of the highest producers of CO 2 and other greenhouse gases, and these figures are about 39% globally and 36% in the EU [21].On the other hand, the C&B sector, directly and indirectly, consumes nearly 36% globally and 40% in the EU, of the total produced energy [22].
Notably, both ancient and modern building stocks are vulnerable to natural and manmade created/caused disasters [23].According to [24], in the Asian continent, the northern Indian traditional Himalayan buildings, particularly the rammed earth and dry-stone buildings, are predominantly vulnerable and susceptible to seismic activity.
Recently, various laws and regulations, obligatorily or voluntarily, obliged the public and private entities to structurally or/and thermally retrofit/upgrade existing buildings according to the latest standards, like Eurocode 8 [25] and near-zero energy building (nZEB) [26].Whereas, Ref. [27] has proposed higher energy-performance buildings by 2030 (2027 for the public sector) with new zero-emission buildings (ZEB) requirements.
Therefore, a wide range of newer building materials and composite materials have been studied by many research groups, with the aim to use these materials for structural, thermal, integrated retrofitting or upgrading purposes, and obviously during new building construction.
Due to superior mechanical properties, man-made fibers, like carbon, basalt, steel, glass, etc., have been used predominantly in raw or in textile form, for structural retrofitting or upgrading [20] of building stocks and various structures.
It is well established that natural fibers are cheaply and abundantly available [28], and are also known to have good thermo-mechanical properties [29].At the same time, they have 78-79.4%lesser carbon footprint [30]; therefore, these fibers can be used for making greener and sustainable building materials.
Whereas due to good insulation capacity, natural fibers like wool, hemp, jute, sisal, etc., are particularly usable for thermal upgrading [20].
Fiber reinforced polymer (FRP) is predominantly used for civil reinforcement applications [48] due to its strength.Particularly carbon FRP [49], glass FRP [50], and basalt FRP [51,52] are mainly used for civil applications, while scholars and researchers are also working on the natural FRP [53,54] or hybrid FRP [55], extensively.
The use and application of a natural fiber (NF) for TRM systems is also gaining momentum, but still its use at commercial level is very limited and its applicability can be found at the research level only.Some important works, and the use of NFTRM, can be found in [64][65][66][67] for jute fiber TRM, hemp fiber TRM, flex fiber TRM systems and banana TRM, respectively.Among all natural fibers, jute fiber ranks second in terms of the amount/quantity being produced [68].Notably, jute fiber-made building materials and composite mortars have been developed, and their mechanical and thermal behaviors have been studied and reported in [31,[69][70][71][72][73][74], respectively.
Authors already have proposed in their previous works [75,76], various compositions of jute fiber composite mortars with different proportions of raw fiber (depending on dry mortar mass) mortar and water combinations, while their physical behaviors and thermo-structural performances are reported and were evaluated.
Also, authors have recycled and used jute fiber waste, derived from various sources, to prepare jute fiber composite mortars [77,78] and other fibers (loofa, sheep wool, hemp shives, thistle fibers) for composite building materials [31,77,78] with the aim that these materials could be used for thermal retrofitting or upgrading.
This paper validates the capability of jute fiber nets that are to be used for NFTRM retrofitting or upgrading masonry walls and analyzes the performance of the recycled jute net fiber mortar that might be used for thermal retrofitting.Therefore, to encourage the United Nationals Sustainable Goals (UNSG) and by following the directive of the EU "EU 2008/98/EC", this research was conducted to encourage the recycling of C&B sector residual wastes.By doing so, 100% of the residual scape thread and net fibers, leftover during the net fabrication process and post-flexural tests were recycled to prepare the jute net fiber composite mortar, with the aim of this composite mortar to be used for the thermal retrofitting purpose.Therefore, the novelty of this research work is threefold: (1) the applicability of natural fiber (jute) for integrated upgrading/retrofitting of masonry walls/structures, (2) the assessment and validation of strength (jute fiber net) and insulation properties of the jute fiber composite mortar, and (3) the possible recyclability of the residual natural fiber (jute) from a previous process (net fabrication), therefore encouraging a sustainable production process.
The structure of this paper can be highlighted as it starts with a brief introduction, thereafter the materials and methods used are explained.Section 3 is subdivided into two parts; in Section 3.1 the observations of the jute net tensile strength tests are reported, while in Section 3.2 the thermal conductivity test results are reported and at the end, in Section 4, the conclusive remarks are stated.
Materials and Methods
For this experimental campaign, the main raw material for net preparation (i.e., jute threads) was collected from the state of West Bengal, India.This three-yarn jute thread type was fabricated in a local jute mill.
While the mortar used for the composite-mortar preparation is a lime-based mortar and has a dry density equal to 750 kg/m 3 .It is a thermo-dehumidifying plaster and it certified as R and T/CSII (EN 998-1 [79]).
Jute Fiber Net Preparation
Figure 1 presents the class 1 mm (1.19 mm with Co.V. of 7.27) [78] jute fiber thread, which has been selected for the jute net fabrications and has tensile strength (f t ) and strain energy (U) measured equal to 122.45 MPa (with Co.V. of 26.16%) and 1.03 kN.mm (with Co.V. of 34.59%), respectively [78].
Two types of jute nets were manually prepared with two distinct inner mesh configurations: (1) 2.5 cm × 2.5 cm and (2) 2.5 cm × 1.25 cm, respectively, in the Strength Laboratory at the University of Salerno, Italy (Figure 2).Notably these configurations have been considered for better mortar penetration during the net application on the wall surface.
The length of each tested sample with net mesh types 2.5 cm × 2.5 cm and 2.5 cm × 1.25 cm, were 0.5 m long.About 18 cm of net (Figure 3) was exposed to the applied loads.The net samples are placed inside two clamps, as shown in Figure 4, and the top and the bottom fixtures were tied with a torque wrench with an adjustable preset torque value of 50 N/m (Figure 4), this was performed to bind the edges of the net uniformly with quasi-equal force.Two types of jute nets were manually prepared with two distinct inner mesh configurations: (1) 2.5 cm × 2.5 cm and (2) 2.5 cm × 1.25 cm, respectively, in the Strength Laboratory at the University of Salerno, Italy (Figure 2).Notably these configurations have been considered for better mortar penetration during the net application on the wall surface.The length of each tested sample with net mesh types 2.5 cm × 2.5 cm and 2.5 cm × 1.25 cm, were 0.5 m long.About 18 cm of net (Figure 3) was exposed to the applied loads.The net samples are placed inside two clamps, as shown in Figure 4, and the top and the bottom fixtures were tied with a torque wrench with an adjustable pre-set torque value of 50 N/m (Figure 4), this was performed to bind the edges of the net uniformly with quasiequal force.Two types of jute nets were manually prepared with two distinct inner mesh configurations: (1) 2.5 cm × 2.5 cm and (2) 2.5 cm × 1.25 cm, respectively, in the Strength Laboratory at the University of Salerno, Italy (Figure 2).Notably these configurations have been considered for better mortar penetration during the net application on the wall surface.The length of each tested sample with net mesh types 2.5 cm × 2.5 cm and 2.5 cm × 1.25 cm, were 0.5 m long.About 18 cm of net (Figure 3) was exposed to the applied loads.The net samples are placed inside two clamps, as shown in Figure 4, and the top and the bottom fixtures were tied with a torque wrench with an adjustable pre-set torque value of 50 N/m (Figure 4), this was performed to bind the edges of the net uniformly with quasiequal force.
Jute Fiber Nets Tensile Strength Tests
The fix-clamp fixtures holding the net sample(s) were fixed to the testing machine (Figure 5) and thereafter the mechanical behavior of these net samples was evaluated through the tensile strength tests.For this test, a Schenck universal machine (Figure 6), was used, and it has a maximum load capacity and maximum workable length of 630 kN and 20 cm, respectively.The tensile tests were conducted at a rate of 2 mm/min.
Jute Fiber Nets Tensile Strength Tests
The fix-clamp fixtures holding the net sample(s) were fixed to the testing machine (Figure 5) and thereafter the mechanical behavior of these net samples was evaluated through the tensile strength tests.For this test, a Schenck universal machine (Figure 6), was used, and it has a maximum load capacity and maximum workable length of 630 kN and 20 cm, respectively.The tensile tests were conducted at a rate of 2 mm/min.
Recycled Jute Fiber Net Composite Mortar (RJFNCM) Preparation
Notably during the net fabrication and after the jute-net tensile strength tests, a significant amount of jute net and thread fibers were left over, and these scrap fibers were recycled along with the tensile test failed net fibers (Figure 7), to prepare the recycled jute net fiber composite mortar (RJNFCM).
The composite mortar's grout was prepared following EN1015-2 [80], while no workability test has been performed as these samples have only been used for the thermal conductivity test.The quantity of water added during the grout preparation was based on the
Recycled Jute Fiber Net Composite Mortar (RJFNCM) Preparation
Notably during the net fabrication and after the jute-net tensile strength tests, a significant amount of jute net and thread fibers were left over, and these scrap fibers were recycled along with the tensile test failed net fibers (Figure 7), to prepare the recycled jute net fiber composite mortar (RJNFCM).author's previous calculation, and experience of working with raw-jute fiber, threads, and composite mortars, while the information about this research work and respective observations can be found in [20,78].
During the mixture preparation, the pre-present aggregates were separated from the mortar, and thereafter 12.5% recycled jute net fibers (Table 1) were added based on the measured mortar mass (without any aggregates).During mixing, about 49.8% of water (Table 1) with respect to the total mixture (mortar + fiber) mass was added slowly to prepare the grout.The mixing process was performed for approximately 7 min.Thereafter, two molds of the dimensions 160 mm × 140 mm × 40 mm were used to prepare two samples to be used for thermal conductivity tests.The samples were left inside molds and in plastic bags for the first 2 days, then they were taken out from the molds and re-placed inside another plastic bag for another 3 days.After, they were left in a quasi-constant environmental condition (22 °C and 65% RH) in a room until the 28th aging day.This first drying process is the part to which the utmost attention must be paid, avoiding the formation of surface depressions or specimen distortions, which would compromise their subsequent thermal characterization tests.After this period, the samples were oven dried (at 50 °C) to remove the remaining moisture, which would influence measurements and thermal conductivity values.
Total Recycled Jute Net Fiber Used
Total Water Used for the Mixture 12.5% of the dry mortar mass 49.8% of the total mixture (mortar + fiber) mass The composite mortar's grout was prepared following EN1015-2 [80], while no workability test has been performed as these samples have only been used for the thermal conductivity test.The quantity of water added during the grout preparation was based on the author's previous calculation, and experience of working with raw-jute fiber, threads, and composite mortars, while the information about this research work and respective observations can be found in [20,78].
During the mixture preparation, the pre-present aggregates were separated from the mortar, and thereafter 12.5% recycled jute net fibers (Table 1) were added based on the measured mortar mass (without any aggregates).During mixing, about 49.8% of water (Table 1) with respect to the total mixture (mortar + fiber) mass was added slowly to prepare the grout.The mixing process was performed for approximately 7 min.Thereafter, two molds of the dimensions 160 mm × 140 mm × 40 mm were used to prepare two samples to be used for thermal conductivity tests.The samples were left inside molds and in plastic bags for the first 2 days, then they were taken out from the molds and re-placed inside another plastic bag for another 3 days.After, they were left in a quasi-constant environmental condition (22 • C and 65% RH) in a room until the 28th aging day.This first drying process is the part to which the utmost attention must be paid, avoiding the formation of surface depressions or specimen distortions, which would compromise their subsequent thermal characterization tests.After this period, the samples were oven dried (at 50 • C) to remove the remaining moisture, which would influence measurements and thermal conductivity values.
Jute Net Fiber Composite Mortar Thermal Conductivity Test
The thermal conductivity values of these samples have been determined at the Applied Thermodynamic and Energetic Laboratory at the University of Cagliari using a TAURUS TCA 300 (Figure 8) device.It is a heat flow meter instrument, that conducts measurements according to ISO 8301 (1991) [81] and EN 1946EN -3 (1999) ) [82].
Jute Net Fiber Composite Mortar Thermal Conductivity Test
The thermal conductivity values of these samples have been determined at the Applied Thermodynamic and Energetic Laboratory at the University of Cagliari using a TAURUS TCA 300 (Figure 8) device.It is a heat flow meter instrument, that conducts measurements according to ISO 8301 (1991) [81] and EN 1946EN -3 (1999) ) [82].The measuring chamber of the TAURUS TCA 300 has two measuring plates, an upper cold plate and a lower hot plate.Notably, the function of these plates can be reversed and can be set accordingly, as required.The original plates have a 300 mm × 300 mm total surface area, while the main measuring zone is located exactly at the center of these plates and the active zones have a 100 mm × 100 mm surface area (Figure 8).Measuring like sample specifications and instrument parameters (Table 2) were set using an instrument- The measuring chamber of the TAURUS TCA 300 has two measuring plates, an upper cold plate and a lower hot plate.Notably, the function of these plates can be reversed and can be set accordingly, as required.The original plates have a 300 mm × 300 mm total surface area, while the main measuring zone is located exactly at the center of these plates and the active zones have a 100 mm × 100 mm surface area (Figure 8).Measuring like sample specifications and instrument parameters (Table 2) were set using an instrumentintegrated computer.
Measuring Intervals min
Intermediate sampling time 1 Total measuring time 300 According to EN 12939 [83], the tests were carried out at sample mean temperatures equal to 10 • C, 20 • C, and 30 • C (Figure 9), always maintaining a difference of 20 • C between the two plates.Consequently, the TC values were calculated based on Equation (1), using the measured heat fluxes values: where Q = heat flux in W/m 2 ; s = sample thickness (m); t H = temperature of the hot plate In Figure 9, the uncertainty bands relating to conductivity values at the three different temperatures are already graphically expressed.While considering the calibration uncertainty of the TAURUS, the repeatability of the measurements obtained and the linear regression error, it is possible to estimate an average standard deviation (SD) equal to 0.004 W/mK.
Results
In this section, the mechanical strength of the jute nets (fabricated by using 1 mm diameter jute threads) and the thermal conductivity value of the recycled jute net fiber composite mortar are reported.
During this experimental campaign, the mechanical strength of the two types of jute Figure 9 presents a typical result sheet with all measured (heat flow in W/m 2 , temperatures of cold and warm side in • C) and calculated (temperature difference in K and thermal conductivity in W/mK) quantities, with respect to the set sample mean temperatures (10 • C, 20 • C, and 30 • C).
In Figure 9, the uncertainty bands relating to conductivity values at the three different temperatures are already graphically expressed.While considering the calibration uncertainty of the TAURUS, the repeatability of the measurements obtained and the linear regression error, it is possible to estimate an average standard deviation (SD) equal to 0.004 W/mK.
Results
In this section, the mechanical strength of the jute nets (fabricated by using 1 mm diameter jute threads) and the thermal conductivity value of the recycled jute net fiber composite mortar are reported.
During this experimental campaign, the mechanical strength of the two types of jute nets, with 2.5 cm × 2.5 cm and 2.5 cm × 1.25 cm mesh configurations, were evaluated through tensile strength tests.
Subsequently, the fiber scraps during the net fabrication and the post-tensile test failed net fibers were collected and recycled to prepare the jute net fiber composite mortar and later the thermal conductivity value of these samples was estimated.
Jute Net Tensile Strength Tests
A total of seven samples of each type of jute thread net (2.5 cm × 2.5 cm and 2.5 cm × 1.25 cm mesh configurations) were used for the displacement-controlled tensile strength tests.The tensile strength and the axial displacement were recorded during the tests.Notably, from each net type, two non-satisfactory results were discarded and were not considered due to faulty measurements.
The results presented in this section clearly highlight that the denser mesh configuration (2.5 cm × 1.25 cm) was found to be significantly more rigid, with an average stiffness increase of 35.80%, compared to the 2.5 cm × 2.5 cm mesh samples (see Figures 10-12).Furthermore, the load-bearing capacity of the denser mesh was observed to be over 50% greater, accompanied by a 14.35% increase in maximum elongation.This mesh also demonstrated superior strain energy transfer, exceeding its counterpart by more than 60%, as evidenced in Table 3.
Further, Table 3 presents a detailed comparison of the mechanical properties, including maximum load, displacement, strain energy, and stiffness, along with the coefficient of variation for each parameter, offering insights into the reliability and variability of the measurements.The different performances of the two mesh configurations can be easily explained, considering that the denser mesh presents more threads for unit area than the other configuration.
Figures 10 and 11 illustrate the force-displacement behavior for each type of mesh configuration.The primary mechanical observations from these tests are further documented in Tables 4 and 5.These tables present specific data points, such as the first collapse load, maximum load, and the corresponding displacement at maximum load for individual samples, providing a granular view of the performance characteristics.Interestingly, none of the net samples experienced complete rupture during testing.Instead, multiple collapses were observed, attributed to failures at knot points or within the thread fiber between knots.This behavior resulted in various spikes in the force-displacement graphs, (Figures 10 and 11), indicating the localized nature of the failures.On the positive side, the denser mesh configuration shows promise for enhanced load-bearing capacity and stiffness, which are desirable traits for applications in construction.However, the occurrence of the multiple collapses suggests that while the material is strong, the nets' reliability is related to the manufacturing processes or material behavior at micro-scales, such as at knots. Figure 12 presents a comparison of the stiffnesses between the jute net mesh configurations 2.5 cm × 1.25 cm and 2.5 cm × 2.5 cm.It is clearly noticeable that the first configuration is stiffer than the latter configuration.The maximum and minimum stiffness for the 2.5 cm × 1.25 cm mesh configuration was found to be 11.75 N/mm and 8.41 N/mm, respectively.While 9.83 N/mm and 5.48 N/mm are the maximum and minimum stiffness for the 2.5 cm × 2.5 cm mesh configuration.
Recycled Jute Net Fiber Composite Mortar (RJNFCM) Thermal Conductivity Tests
The TC values of the jute fiber (12.5% of fiber, with respect to the dry mortar mass) composite mortar samples were evaluated.The samples were prepared based on the authors previous experience, which can be found in [20,75].
Table 6 presents the TC values measured at 10 °C, 20 °C and 30 °C, respectively, and it has been found that the composite sample RJNF(12.5%)CM with 12.5% (with respect to the dry mortar mass) recycled jute fiber is 48.36%, 47.61%, and 46.22%, respectively lower (in average) than the sample [75] with the 6.5% of recycled jute fiber (added with respect to the dry mortar mass) composite mortar (Figure 12).
Figure 13 presents a comparison between the obtained TC values of the RJNFCM (under observation) and similar types of samples (with different fiber lengths and fiber percentages (with respect to the dry mortar mass) combinations) reported in [20,75], while the sample preparation and drying conditions are same.Notably the comparison has been also carried out with the samples made with 6.5% recycled fibers [75].In this regard, it is the authors' opinion that the previous processing of jute fibers does not influence the thermal conductivity of the samples to a significant extent.
Notably, the samples considered here are oven dried before conducting the TC measurements.However, as highlighted by the authors in [78], during the composite mortar mixing, fiber balls are formed when the jute fibers came in contact with water.Therefore, jute fibers not only have the ability to absorb water individually but also can trap some extra water collectively in the fiber balls cavity.As the fiber used in this case were three yarn jute threads, therefore in the yarn cavity, too, there could be the possibility of trapping a small amount of water.Interestingly, none of the net samples experienced complete rupture during testing.Instead, multiple collapses were observed, attributed to failures at knot points or within the thread fiber between knots.This behavior resulted in various spikes in the forcedisplacement graphs, (Figures 10 and 11), indicating the localized nature of the failures.On the positive side, the denser mesh configuration shows promise for enhanced load-bearing capacity and stiffness, which are desirable traits for applications in construction.However, the occurrence of the multiple collapses suggests that while the material is strong, the nets' reliability is related to the manufacturing processes or material behavior at micro-scales, such as at knots.
Figure 12 presents a comparison of the stiffnesses between the jute net mesh configurations 2.5 cm × 1.25 cm and 2.5 cm × 2.5 cm.It is clearly noticeable that the first configuration is stiffer than the latter configuration.The maximum and minimum stiffness for the 2.5 cm × 1.25 cm mesh configuration was found to be 11.75 N/mm and 8.41 N/mm, respectively.While 9.83 N/mm and 5.48 N/mm are the maximum and minimum stiffness for the 2.5 cm × 2.5 cm mesh configuration.
Recycled Jute Net Fiber Composite Mortar (RJNFCM) Thermal Conductivity Tests
The TC values of the jute fiber (12.5% of fiber, with respect to the dry mortar mass) composite mortar samples were evaluated.The samples were prepared based on the authors previous experience, which can be found in [20,75].
Table 6 presents the TC values measured at 10 • C, 20 • C and 30 • C, respectively, and it has been found that the composite sample RJNF(12.5%)CM with 12.5% (with respect to the dry mortar mass) recycled jute fiber is 48.36%, 47.61%, and 46.22%, respectively lower (in average) than the sample [75] with the 6.5% of recycled jute fiber (added with respect to the dry mortar mass) composite mortar (Figure 12).The TC values of RJNF(12.5%)CMsample were compared to the sample combinations with different fiber percentages (0.5%, 1.0%, 1.5%, and 2.0%) and fiber lengths (30 mm, 10 mm, and 5 mm), as highlighted in [75].The TC values of the RJNF(12.5%)CM are lower in the range between 74% and 80% approximately, with respect to the samples mentioned [75].
Figure 13 presents a comparison between the obtained TC values of the RJNFCM (under observation) and similar types of samples (with different fiber lengths and fiber percentages (with respect to the dry mortar mass) combinations) reported in [20,75], while the sample preparation and drying conditions are same.Notably the comparison has been also carried out with the samples made with 6.5% recycled fibers [75].In this regard, it is the authors' opinion that the previous processing of jute fibers does not influence the thermal conductivity of the samples to a significant extent.
Notably, the samples considered here are oven dried before conducting the TC measurements.However, as highlighted by the authors in [78], during the composite mortar mixing, fiber balls are formed when the jute fibers came in contact with water.Therefore, jute fibers not only have the ability to absorb water individually but also can trap some extra water collectively in the fiber balls cavity.As the fiber used in this case were three yarn jute threads, therefore in the yarn cavity, too, there could be the possibility of trapping a small amount of water.
Here, it is worth highlighting that with the available instruments and at the given laboratory condition, it was only possible to prepare the composite mortar with a maximum of 12.5% of recycled jute fiber.Notably, more than the said amount of the fiber-mortar bonding was not optimum for the composite mortar preparation.
When the RJNF(12.5%)CMsamples dried and the trap water was removed, there must be some empty cavities formed inside of the said samples.So, a higher fiber density in the sample means more numbers of empty cavities, consequently improving the insulation capacity of the sample.Therefore, it can be said that RJNF(12.5%)CM is a better insulator than the other compared samples in Figure 13.
where, Co.V. is the coefficient of variation.
Conclusions
This study investigates the mechanical and thermal properties of jute fiber products and the recyclability of residual waste thread and net fibers, which are leftover during the net fabrication process and post-flexural tests.
Tensile strength tests have been conducted to evaluate the feasibility of the jute fiber nets to be used for net fiber textile retrofitting mortar (NFTRM) of the masonry walls or structures.For this campaign, two types of jute nets with 2.5 cm × 2.5 cm and 2.5 cm × 1.25 cm were tested.
Both types of jute nets were manually fabricated using jute fiber threads in the Strength laboratory of the University of Salerno, Italy.During the net fabrication process, scrap thread fibers and net fibers were collected.Additionally net fibers leftovers after the tensile strength tests were also collected and all these collected fibers were recycled to prepare RJNFCM.About 12% of recycled jute fiber (with respect to the dry mortar mass) was used for RJFCM preparation.
The tensile strength tests have demonstrated that the net sample 2.5 cm × 1.25 cm is 35.80% stiffer than the other net configurations (2.5 cm × 2.5 cm).While the previous one also has the capability to dissipate about 60% more applied load in terms of strain energy.
Whereas the TC tests have shown the TC values of the RJNFCM range from 0.110 (W/mK) to 0.121 (W/mK), and these composite samples have shown better insulation capacity in comparison to the author's previous works [20].
Following this experimental campaign, authors have used these jute fiber products (jute fiber nets and jute fiber composite mortars) for integrated upgrading of the masonry walls, and both structural behavior and thermal performance of these walls are evaluated.
Therefore, this work shows how it is possible to use these natural fiber products for integrated retrofitting or upgrading of the masonry wall or structures, to have sustainable, eco-friendly, greener, healthier, safer, and energy-efficient buildings.
More research is needed to optimize the thickness of TRM in order to obtain a given structural or thermal performance.In addition, the production process of jute nets should be improved to be scaled for industrial production.Further research is also scheduled to analyze the aging time of the jute net (prepared with jute threads (Figure 1), which are about one year older) used for the TRM system.
Finally, it is important to highlight that incorporating life cycle assessment (LCA) strategies is essential to comprehensively understand the environmental implications of jute fiber products and fiber-reinforced mortars in the construction industry, see for example [84].LCA can produce a detailed analysis of the carbon footprint at every stage of a product's life; from the cultivation of jute, which typically requires lower inputs of fertilizers and pesticides compared to other crops, thereby reducing the initial environmental impact, through the processing and manufacturing phases, where energy consumption and waste generation can be significant.For jute fiber-reinforced mortars, an LCA would consider not only the direct emissions and energy use during production, but also the potential savings in the use phase, due to improved energy efficiency from the material's insulating properties.Furthermore, the end-of-life recyclability of jute enhances its sustainability profile, potentially reducing landfill waste and allowing for the material's reuse or repurposing, thus contributing to a circular economy [85].By applying LCA to jute products, the construction industry can better align with the EU's sustainability directives, moving towards greener building practices that minimize the carbon footprint and foster long-term ecological balance.
Figure 4 .
Figure 4. (a) Complete view and (b) the set parameter of the adjustable torque-wrench.
Figure 5 .
Figure 5. Net samples with mesh configurations.(a) 2.5 cm × 2.5 cm and (b) 2.5 cm × 1.25 cm were placed in the machine for the tensile strength test.
Figure 5 .
Figure 5. Net samples with mesh configurations.(a) 2.5 cm × 2.5 cm and (b) 2.5 cm × 1.25 cm were placed in the machine for the tensile strength test.
Figure 5 .
Figure 5. Net samples with mesh configurations.(a) 2.5 cm × 2.5 cm and (b) 2.5 cm × 1.25 cm were placed in the machine for the tensile strength test.
Figure 6 .
Figure 6.Schenck universal machine used for tensile strength tests.
Figure 6 .
Figure 6.Schenck universal machine used for tensile strength tests.
Figure 7 .
Figure 7. Leftovers from the jute net fabrication and post-tensile test used for composite mortar preparation.
Figure 7 .
Figure 7. Leftovers from the jute net fabrication and post-tensile test used for composite mortar preparation.
( • C); t C = temperature of the cold plate ( • C); and A = the active zone surface area (m 2 ).Sustainability 2024, 16, x FOR PEER REVIEW 9 of 19
Figure 9 .
Figure 9. Thermal conductivity values of measured sample.
Figure 9 .
Figure 9. Thermal conductivity values of measured sample.
Table 1 .
Amount of fiber and water used.
Table 1 .
Amount of fiber and water used.
Recycled Jute Net Fiber Used Total Water Used for the Mixture
12.5% of the dry mortar mass 49.8% of the total mixture (mortar + fiber) mass
Table 3 .
Mechanical properties of jute fiber nets.
where, Co.V. is the coefficient of variation.
Table 4 .
Mechanical properties of the jute fiber net type N_2.5.
Table 5 .
Mechanical properties of the jute fiber net type N_1.25.
Table 6 .
Thermal conductivity of the RJNFCM. | 7,959 | 2024-01-30T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Materials Science"
] |
Laser-induced solid-solid phase transition in As under pressure: A theoretical prediction
In Arsenic a pressure-induced solid-solid phase transition from the A7 into the simple cubic structure has been experimentally demonstrated [Beister et al., Phys. Rev. B 41, 5535 (1990)]. In this paper we present calculations, which predict that this phase transition can also be induced by an ultrashort laser pulse in As under pressure. In addition, calculations for the pressure-induced phase transition are presented. Using density functional theory in the generalized gradient approximation, we found that the pressure-induced phase transition takes place at 26.3 GPa and is accompanied by a volume change"Delta V"= 0.5 bohr^3/atom. The laser-induced phase transition is predicted for an applied pressure of 23.8 GPa and an absorbed laser energy of 2.8 mRy/atom.
Introduction
Femtosecond lasers create extreme nonequilibrium conditions in matter, namely, one with hot electrons and cold ions [1]. This is primarily because laser light interacts very strongly with electrons but not with ions, and secondarily because the electrons thermalize relatively slowly with the ions compared with the time needed for ultrafast structural changes (typically several ps vs several 100's of fs). The possibility of heating the electrons in a material with an ultrafast laser pulse without immediately changing the temperature of the ions has paved the way to the discovery of interesting physical phenomena, such as, perhaps most importantly, laser-induced phase transitions that cannot be explained by the heating of the ions, but are caused by a change of the electronic bonding properties.
In Arsenic a series of pressure-induced phase transitions exists. At ambient conditions As crystallizes in the A7 structure. At 25 GPa there is a transition to the simple cubic (sc) phase [2], followed by transitions [3] to the so-called As(III) structure at 48 GPa and to body-centered cubic As at 97 GPa. The fact that As in the A7 structure is electronically stabilized by the Peierls mechanism [4] makes it a candidate for an ultrafast laser-induced solid-solid phase transition. This is of fundamental interest, because the majority of the experimentally studied laser-induced phase transitions shows ultrafast melting, for example, in Si [5], GaAs [6], Te [7], and InSb [8]. Examples of laserinduced solid-solid phase transitions are a disorder-to-disorder transition in amorphous GeSb [9], a metallic-to-semiconductor phase transition in SmS [10], a monoclinic-torutile transition in VO 2 [11,12], a ferromagnetic-to-paramagnetic transition in CoPt 3 alloy films [13], and an antiferromagnetic-to-ferromagnetic phase transition in FeRh [14]. The main topic of this paper is to analyze the possibility to induce a solid-solid phase transition by an ultrafast laser pulse in As under pressure.
We now describe the relationship between the A7 and sc structures, the two phases of As between which we will predict a laser-induced transition (see section 3). Starting from an sc atomic packing, the A7 structure can be derived in two steps, which are illustrated in figure 1. First the sc lattice is deformed by elongating it along one of the body diagonals (indicated by 1 2 a 3 in figure 1). The magnitude of this deformation is usually expressed by c/a, where c is the length of the lattice vector a 3 and a is the length of a 1 (figure 1). In the sc lattice c/a = √ 6 ≈ 2.45. In the second step a Peierls instability causes the atoms to be displaced along the same diagonal, in opposite directions (figure 1). The magnitude of the displacements is expressed by the atomic coordinate z: A value of 0.25 indicates no Peierls distortion. Deviations from z = 0.25 give the magnitude of the displacement of the atoms in the a 3 direction. In As at ambient pressure z = 0.228 [15], which leads to a displacement of the atoms amounting to 13% of the average distance between the planes [see figure 1(c)]. Figure 1. Relation between the A7 and sc structures. a) The sc structure. The lattice vectors a 1 , a 2 , and a 3 belong to the A7 structure (the sc structure is a special case of the A7 structure). b) Intermediate structure. Compared to a) the unit cell of the sc lattice is elongated along the vector a 3 keeping the volume per atom constant. The new positions of the atoms are shown together with a cube of the sc lattice. c) The A7 structure of As at ambient pressure. Compared to b) the atomic planes perpendicular to a 3 are displaced alternatingly in the a 3 and −a 3 directions due to a Peierls distortion. The cube still represents the sc structure of a), for reference. Below a) -c) projections of the atomic planes onto a 3 are shown.
Method
To determine the phase of As under pressure before the laser excitation when the electrons are in their ground state we compared the enthalpies of the A7 and sc phases, which were computed in the following way. (i) We calculated total energies for a series of unit cell volumes with the computer program wien2k [16]. This is an all-electron full-potential linearized augmented plane wave code, which has been designed to make very accurate predictions for solids within the limitations of density functional theory [17]. For the sc structure these calculations are straightforward, because there are no internal parameters. However, for the A7 structure the energy is a function of both volume and the internal parameters c/a and z described in the Introduction. In this case we minimized the total energy at each volume by optimizing the c/a ratio and the Peierls distortion parameter z. (ii) We fitted the total energy vs volume data for the sc phase to the Birch-Murnaghan equation of state [18], which has been derived for systems with cubic symmetry. The fitting parameters of this equation of state are the minimal total energy E 0 , the corresponding atomic volume V 0 , the bulk modulus B 0 , and the first derivative of the bulk modulus with respect to pressure B 0 . (iii) We fitted the computed energy differences between the A7 and sc structures to We found that this form described our A7 data much better than when the total energies of the A7 structure are fitted directly to the Birch-Murnaghan equation of state [18].
(iv) We calculated the enthalpies H = E + P V for the sc and A7 structures from the analytical fits for the total energy and P = − ∂E ∂V S .
To describe As after laser excitation we use the following physical picture: The laser pulse creates electrons and holes, which undergo dephasing and collisions on a timescale that is much shorter than the typical time of ionic motion (∼ 130 fs, based on the A 1g optical Γ-point phonon frequency of As in its equilibrium structure). Therefore one can for all practical purposes assume that the excited carriers thermalize instantaneously. The effect of the excitation by an ultrashort laser pulse was thus simulated by heating the electrons. In this case, the phase stability is governed by the free energy F = E−T S, where T is the electronic temperature and S is the electronic entropy (To be sure, the free energy was also used for the ground state calculations, but since the electronic temperature was very low in that case and was used for smearing purposes only, we have ignored this point for the sake of clarity of presentation in the discussion above). Note that the entropic contribution of the ions to the free energy was neglected. This is because we assumed that both the laser pulse duration and the timescale on which the phase transition takes place are much shorter than the electron-ion interaction time, so that no substantial heating of the ions due to the laser occurs during the time of interest. Another consequence of the short timescales that we are looking at is that the system does not have time to expand, so that structural changes should be assumed to take place at constant volume. Therefore, we compared the free energies of the A7 and sc structures to predict under which conditions a phase transition takes place in As under pressure.
Details of our wien2k calculations were as follows. We used the generalized gradient approximation [19] for the exchange and correlation energy. Our basis consisted of plane waves with energies less than or equal to 20.25 Ry. Inside the so-called muffintin spheres around each atom the plane waves were augmented by atomic orbitals with a linearized energy dependence. To get a better convergence with respect to the number of plane waves included in the basis the 3d, 4s, and 4p states were described with a combination [20] of energy independent augmented plane waves and local orbitals. Also, additional local orbitals [21] were added for the 3d and the 4s states. The first Brillouin Zone of the A7 structure was sampled using a grid of 32 × 32 × 32 k points excluding the Γ point, which corresponds to 2992 irreducible k points. To rule out errors due to, for example, a different k-space sampling and to allow for a detailed comparison between the A7 and the sc structures, the latter structure was also computed as an A7 structure with the parameters c/a and z fixed to their sc values √ 6 and 0.25, respectively. The electronic ground state calculations were performed using temperature smearing (T = 1 mRy) to determine the electronic occupation numbers. For the laser-excited state an electronic temperature of T = 19 mRy was used. This choice corresponds to an absorbed laser energy of E laser ≈ 2.8 mRy/atom or, equivalently, the creation of ≈ 0.06 electron-hole pairs per unit cell (when the atomic volume V = 114.0 a 3 0 /atom). Since we assumed an instantaneous thermalization of the excited carriers, our predicted results do not depend directly on the energy of the phonons, but only on the total absorbed energy per atom. The incident laser fluence that leads, at the sample surface, to the above-mentioned value of E laser is, however, a function of both Table 1. Fitting parameters for the electronic ground state (T = 1 mRy) and the laser-excited state (T = 19 mRy). V 0 , B 0 , and B 0 give the best fit of our sc data to the Birch-Murnaghan equation of state [18]. V c , β c , and A c represent our best fit of the computed energy differences between the A7 and the sc structures to (1). the reflectivity and the penetration depth of the laser light, which are dependent on the laser wavelength. A complication is that these dependencies might change due to an applied pressure and under the influence of the laser irradiation. Ignoring these latter points and using a laser wavelength of 800 nm we estimated our incident fluence to be ∼ 1.8 mJ/cm 2 (for this wavelength the reflectivity is ∼ 55% [22] and the penetration depth is ∼ 23 nm, based on the experimentally determined complex dielectric constant of 16 + 27i [23]). Here we would further like to mention that we assumed the laser pulse duration to be much shorter than the timescale of ionic motion (∼ 130 fs).
Results
Before turning to the energy vs volume curves, a few words about the equilibrium structure of Arsenic are at place, because this allows us to get a rough idea about the quality of the predictions that the GGA [19] we used is able to make. According to our calculations the ground state parameters of As in the A7 structure are c/a = 2.86, z = 0.226, and V = 154 a 3 0 /atom. This is in reasonable agreement with the measured values c/a = 2.78, z = 0.228, and V = 144 a 3 0 /atom [15]. As was to be expected, our atomic volume V is larger than the experimental value [15] by about 7%, which is consistent with the general trend of the GGA to overestimate bond lengths, and in agreement with the value of 154 a 3 0 /atom recently obtained from GGA calculations by Shang and co-workers [4]. From calculations for smaller atomic volumes we concluded that our too large values for the magnitude of the Peierls distortion (0.25 − z) and for c/a can in part but not entirely be understood as a consequence of the overestimated unit cell volume.
The total energies of the sc and A7 structures as a function of atomic volume are shown in figure 2. These data are for the electronic ground state, i.e., before laser excitation. The parameters obtained from the fits to the Birch-Murnaghan equation of state and (1), which we made using the data points plotted in figure 2 and the inset of figure 2, respectively, are given in table 1. From figure 2 it is apparent that our fits follow the computed data very closely. Indeed, the root mean square of the residuals Figure 2. Total ground state energy vs volume for sc As and As in the A7 structure. The solid curve shows the Birch-Murnaghan fit to the data for the sc structure. The inset shows the energy difference ∆E = E A7 − E sc (plusses) and the fit to (1) (solid curve). The volume V c at which the A7 structure becomes energetically favored over the sc one is indicated. was only 0.003 mRy/atom for the Birch-Murnaghan fit and 0.005 mRy/atom for the fit to (1).
The enthalpy of the A7 structure relative to the enthalpy of the sc structure as a function of pressure is plotted in figure 3. In a narrow pressure range multiple solutions exist for the enthalpy of the A7 structure, as is shown in the inset of figure 3. Of course, the solution with the lowest enthalpy is the stable one. A consequence of these multiple solutions is that the enthalpies of the A7 and sc structures cross and that the transition point from the A7 to the sc structure can thus easily be determined. From figure 3 we see that the transition takes place at 26.3 GPa, in reasonable agreement with earlier GGA calculations by Häussermann and co-workers [24], who found the transition pressure to be 28 GPa. Another consequence of the multiple solutions seen in the inset of figure 3 is that the unstable solutions, corresponding to a narrow range of possible atomic volumes, are not realized for any given pressure, which means that there is a small volume change at the transition, which according to our calculations is ∆V /V c = 0.4%. Both the transition pressure and the volume change are in nearly perfect agreement with the experimental values of Beister and co-workers [2], who have found the transition at a pressure of 25 ± 1 GPa accompanied by a volume change ∆V /V c 0.6%. We did not find a discontinuity in c/a at the transition, which according to Beister and co-workers [2] jumps from 2.49 to √ 6 ≈ 2.45. However, according to our calculations z changes discontinuously from 0.246 to 0.25.
We now describe the effect of an ultrashort laser pulse. As we have already argued in section 2 the timescale that we are interested in is too short for the system to expand. Therefore we present results at constant volume, and we only briefly discuss the effect of pressure at the end of this paragraph. Our optimized c/a ratio and Peierls parameter z as a function of volume for As in the A7 structure are shown in figure 4. It can be seen that the pressure-induced phase transition from the A7 to the sc structure occurs at a larger volume when the electrons have been heated with a laser compared to when they are in their ground state. As a consequence, it is possible to induce a phase transition at constant volume from the A7 to the sc structure in As under pressure. We have indicated this possibility for a specific volume (V = 114.0 a 3 0 /atom) by two arrows in figure 4. The corresponding potential energy surfaces before and after laser excitation are shown in figure 5. When the electrons are in the ground state this volume corresponds to an applied calculated pressure of 23.8 GPa. The optimized ratio c/a = 2.48 and the Peierls parameter z = 0.242 indicate that the system is in the A7 phase. After the laser excitation the optimal c/a and z become 2.45 and 0.25, respectively, signifying that As has made the transition to the sc structure. At this given volume, the internal pressure in laser-excited As has dropped to 22.4 GPa. If the system were given enough time to adjust to the applied pressure of 23.8 GPa, the volume would, according to our calculations, decrease to 113.1 a 3 0 /atom, in which case the simple cubic phase would clearly still be the most stable one (see figure 4).
Comparing the free energies of the A7 and sc structures it follows that the A7 to sc transition in As can, in principle, be induced for atomic volumes up to V ≤ V c = 115.54 a 3 0 /atom (T = 19 mRy data in table 1), which corresponds to an applied pressure of ≥ 22.1 GPa. However, near the transition point the potential energy surface of As becomes very flat (see figure 5), and as a consequence the uncertainty in the optimized c/a and z values becomes relatively large, which explains why in figure 4 the c/a ratio and z Peierls parameter start to deviate from their sc values at volumes less than V c = 115.54 a 3 0 /atom (but greater than V = 114.0 a 3 0 /atom). So, in order to be on the safe side and to avoid any ambiguity about the final state after the laser excitation, the lowest pressure for which we feel confident to predict an unambiguous transition to the sc structure (as defined by its c/a and z values) is the value of 23.8 GPa derived in the previous paragraph.
As can be noted in figure 4(b), the predicted phase transition involves a change in c/a. Since the volume remains constant, in a polycrystalline sample there is no average effect and we expect that the change in c/a takes place on a subpicosecond timescale. In a monocrystal this change might be limited by the sound velocity.
Conclusion
Summarizing, we demonstrated that it is possible to induce a solid-solid phase transition in As under pressure using an ultrashort laser pulse. In particular, for an absorbed laser energy E laser ≈ 2.8 mRy/atom, we predicted that the phase transition will be induced under an applied pressure as low as ≥ 23.8 GPa. For higher absorbed energies we expect that the transition could be induced under an even lower applied pressure. This transition might be observed using time-resolved x-ray diffraction experiments. | 4,400.2 | 2007-12-20T00:00:00.000 | [
"Physics"
] |