Photocatalytic Methane Conversion over Pd/ZnO Photocatalysts under Mild Conditions: Here, Pd nanoparticles supported on ZnO were prepared by the alcohol-reduction and the borohydride-reduction methods, and their efficiency towards the photocatalytic conversion of methane under mild conditions was evaluated. The resulting Pd/ZnO photocatalysts were characterized by X-ray fluorescence, X-ray diffraction, X-ray photoelectron spectroscopy, UV-Vis spectroscopy, and transmission electron microscopy. The reactions were performed with the photocatalysts dispersed in water in a bubbling stream of methane under UV-light illumination. The products formed were identified and quantified by gas chromatography (GC-FID/TCD/MSD). The principal products were C2H6 and CO2, with minor quantities of C2H4 and CO. No H2 production was observed. The preparation method influenced the size and dispersion of the Pd nanoparticles on the ZnO, affecting the performance of the photocatalysts. The best performance was observed for the photocatalyst prepared by borohydride reduction with 0.5 wt% Pd, reaching a C2H6 production rate of 686 µmol·h−1·g−1 and a C2H6 selectivity of 46%. Introduction Methane (CH4) is the main component of natural gas and has recently been used as a fuel due to its higher heat of combustion per unit mass compared with other hydrocarbons; it is also an important raw material in many industrial chemical processes [1,2]. However, CH4 conversion in these processes requires severe conditions of temperature and pressure in order to break the C-H bond, conditions that are difficult to control and tend to drive the carbon further towards undesired oxidation products. The conversion of CH4 into value-added multicarbon (C2+) compounds under mild conditions has aroused worldwide interest over the past years, emerging as an appealing approach to generate the desired products while avoiding further oxidation of CH4 and the C2 hydrocarbon products to CO2. However, CH4 conversion under mild conditions is challenging given the energy required for its activation, and it has received increasing attention, especially for producing ethane (C2H6) and ethylene (C2H4) [3][4][5][6]. The advantage of photocatalytic reactions is the possibility of promoting even difficult reactions close to room temperature, since the photoenergy provides sufficient activation energy for the chemical reaction [7][8][9][10]. An interesting photocatalytic process was described by Li and collaborators [3,11], which has the advantage of combining CH4 conversion with simultaneous hydrogen (H2) evolution from water. In this process, with bare TiO2 as the photocatalyst, neither methane conversion nor H2 production was observed. On the other hand, the deposition of Pt or Pd nanoparticles on TiO2 greatly improved the production of ethane and hydrogen; the Pd/TiO2 photocatalyst was more selective for ethane production than Pt/TiO2, while the latter was more active for H2 production. The conversion of methane by photocatalytic processes has been extensively investigated under different reaction conditions, mainly using semiconductor and hybrid metal/semiconductor materials as photocatalysts [12].
In this way, extensive work has been devoted to finding a prospective material that combines all the requirements for the efficient mild and direct conversion of methane into high value-added products, which remains a major challenge [5,[13][14][15][16][17][18][19]. Arguably, ZnO has been one of the most commonly used wide-bandgap n-type semiconductors for photocatalytic CH4 conversion [12]. It has been demonstrated that Zn+-O− pairs act as surface active sites, where the O− centers are responsible for breaking the C-H bonds in CH4, while the Zn2+ sites assist the C-C coupling. Nevertheless, pure ZnO is not efficient for CH4 conversion, the high recombination rate of the photoinduced electron/hole pairs being the major drawback [12,20]. Herein, this work aimed to compare two methods of Pd deposition over ZnO nanoparticles and to study their effects on the methane conversion in water in a flow reactor under mild conditions. In this way, it was possible to obtain Pd nanoparticles with different sizes and dispersions on the ZnO semiconductor and to observe the influence of these variables on the photocatalytic CH4 conversion. This work may shed light on the design of modified ZnO photocatalysts to achieve higher efficiency towards the desired products. Table 1 presents the amount of Pd deposited on the ZnO surface as determined by wavelength dispersive X-ray fluorescence (WDXRF). Since the Pd/ZnO (1.00%) photocatalyst synthesized by BRM presented better photoactivity for CH4 conversion than the material prepared by ARM, Pd/ZnO photocatalysts with different Pd concentrations were also produced through BRM in order to observe the influence of the Pd loading on the photoactivity. For all samples, the Pd content is close to the nominal values. Characterization of Catalysts The (102), (110), (103), (200), (112), (201), and (204) diffraction peaks belong to the hexagonal crystal structure of ZnO (CARD 00-900-4180). Given the low Pd content of the samples, no peaks other than those of the ZnO matrix were observed in the XRD patterns of the as-prepared photocatalysts. This can be assigned to the broadening of the Pd peaks caused by the small size of the nanoparticles, as well as to their low concentration in the material [3,11]. The synthesized photocatalysts were also characterized by UV-vis diffuse reflectance spectroscopy (DRS), and the sample spectra are shown in Figure 1b. It can be noticed that the ZnO photocatalyst activation occurs around 380 nm under UV light. For the bandgap energy calculations, the reflectance spectra were converted using the Kubelka-Munk function, the standard model for relating reflectance and absorbance in powder samples [21]. The Tauc plot method was applied considering the direct allowed transition of ZnO [22]. The pristine semiconductor presented a bandgap energy of 3.31 eV, and no change was observed in the bandgap of the Pd-containing samples compared to the pure semiconductor. This was related to the fact that Pd is only deposited on the surface of ZnO, with no structural change in the bulk material, in agreement with the XRD results obtained.
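To make the bandgap extraction concrete, the following is a minimal sketch (not the authors' code) of a Kubelka-Munk/Tauc analysis for a direct allowed transition; the reflectance data are synthetic and the linear-fit window is an assumption that in practice would be chosen by inspecting the absorption edge.

```python
import numpy as np

# Minimal Kubelka-Munk / Tauc sketch for a direct allowed transition.
# Synthetic example data; a real analysis would load the measured DRS reflectance.
wavelength_nm = np.linspace(320, 450, 200)
E = 1239.84 / wavelength_nm                                 # photon energy hv (eV)
R = 0.05 + 0.9 / (1 + np.exp(-(wavelength_nm - 380) / 5))   # fake reflectance edge near 380 nm

F = (1 - R) ** 2 / (2 * R)                                  # Kubelka-Munk function F(R)
tauc = (F * E) ** 2                                         # (F(R)*hv)^2 for a direct allowed gap

# Fit the steep linear region of the edge and extrapolate to (F(R)*hv)^2 = 0.
mask = (tauc > 0.2 * tauc.max()) & (tauc < 0.8 * tauc.max())
slope, intercept = np.polyfit(E[mask], tauc[mask], 1)
Eg = -intercept / slope                                     # x-intercept = bandgap estimate (eV)
print(f"Estimated bandgap: {Eg:.2f} eV")
```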
Figure 2 shows the TEM images of pure ZnO. Nanorods and hexagonal nanoparticles with sizes in the range of 20 to 200 nm could be identified, but no uniform size or morphology was present among the ZnO nanostructures. As shown in Figure 4b, the ARM method produced larger and more agglomerated Pd nanoparticles, while the BRM method led to smaller and more dispersed Pd nanoparticles on the ZnO surface. Such differences between the materials produced by the two methods could be associated with the presence of citric acid as a dispersing agent in the BRM [23]. XPS analysis was used to determine the chemical state of the elements in the samples before and after irradiation, as shown in Figure 5. Figure 5a shows the Zn 2p3/2 and 2p1/2 region of Sample B before irradiation, corresponding to binding energies of 1021.41 eV and 1044.53 eV, respectively. For the same sample after photocatalytic testing, the binding energies show a slight shift to 1021.50 eV and 1044.62 eV, respectively [24]. The splitting of ~23.1 eV between them indicates the presence of Zn in the +2 oxidation state both before and after the irradiation experiments. The peaks located at 335.48 and 340.76 eV are assigned to the Pd 3d5/2 and 3d3/2 levels of Pd in its metallic (Pd0) state, with no +2 or +4 oxidation states detected. After irradiation, these peaks shifted to 335.14 and 340.38 eV, respectively, and the presence of Pd2+ species was observed, although most of the Pd remained in the 0 oxidation state. The XRD (Figure 1a) also confirms that the ZnO structure was preserved after irradiation, showing that the photocatalyst has good stability under the reaction conditions.
Photocatalytic Tests The photocatalytic activities of the ZnO and Pd/ZnO photocatalysts are shown in Figure 6, where the graphs represent the percentages of the different products present in the CH4 flow on the ordinate axis and the chromatographic injections performed during the experiments on the abscissa axis. Among the 12 chromatographic injections performed during the photoactivity analysis, the first was discarded. The subsequent two injections were collected while the light source remained switched off, whereas injections four to ten were performed under illumination. Before the last two injections, the light was turned off again. Initially, a blank experiment (Figure 6a), using only ultrapure water and CH4, was performed in order to observe possible photochemical reactions occurring in the system. In this case, a very slight increase in the production of CO2, C2H6, and C2H4 was observed upon photoirradiation. This was ascribed to photochemical reactions that CH4 can undergo, although the amounts of evolved products were extremely low. The addition of pure ZnO to the system (Figure 6b) increased the CO2 production by almost 12-fold upon photoirradiation, while small amounts of C2H6 and CO were formed. This behavior was associated with the photocatalytic activity of ZnO when photoexcited across the band gap, promoting the formation of electron (e−)/hole (h+) pairs. These species can then promote the formation of methyl radicals (•CH3) through the direct interaction of CH4 with holes, or indirectly through its interaction with other radicals resulting from photocatalytic reactions, such as hydroxyl radicals (•OH) produced from water. The formation of •CH3 is a crucial step in the photocatalytic conversion of CH4 to C2H6, as it allows coupling reactions between these radicals to occur [19]. The Pd/ZnO photocatalyst synthesized by ARM (Sample A), when compared to pure ZnO, showed enhanced photoactivity, leading to an increase in CO2 and C2H6 production, as can be seen in Figure 6c. The Pd/ZnO (1.0%) photocatalyst synthesized by BRM (Sample B) showed enhanced photoactivity compared to Sample A, and its product formation can be seen in Figure 6d. The product formation rates (µmol·h−1·g−1) of the blank experiment and of the ZnO and Pd/ZnO photocatalysts are shown in Table 2. The addition of Pd to the ZnO semiconductor increased the product formation rates and strongly modified the selectivity when compared to the bare ZnO photocatalyst.
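As an illustration of how formation rates and selectivities of this kind can be computed from GC data, here is a minimal sketch; the ideal-gas conversion of the outlet mole fraction to a molar flow is an assumption about the bookkeeping (the paper does not spell it out), and the catalyst mass and mole fractions are hypothetical placeholders, not measured values.

```python
# Minimal sketch: product formation rates and selectivity from GC mole fractions.
# Assumes ideal-gas behavior at the reactor outlet (~25 C, 1 atm); all numbers
# below are hypothetical placeholders, not values reported in this work.
R_GAS = 82.06          # gas constant, mL*atm/(mol*K)
FLOW = 25.0            # CH4 flow, mL/min (from the experimental section)
T, P = 298.15, 1.0     # K, atm
MOLAR_FLOW = P * FLOW / (R_GAS * T)   # total molar flow, mol/min

def formation_rate(mole_fraction: float, catalyst_mass_g: float) -> float:
    """Product formation rate in umol/(h*g) from its outlet mole fraction."""
    return mole_fraction * MOLAR_FLOW * 60.0 * 1e6 / catalyst_mass_g

def selectivity(rates: dict[str, float]) -> dict[str, float]:
    """Equation (1) of the paper: n_product / n_total_products x 100%."""
    total = sum(rates.values())
    return {p: 100.0 * r / total for p, r in rates.items()}

# Hypothetical outlet composition (mole fractions) over 0.1 g of catalyst:
rates = {p: formation_rate(x, 0.1)
         for p, x in {"C2H6": 1.1e-3, "CO2": 1.1e-3, "C2H4": 4e-5, "CO": 1e-4}.items()}
print(rates, selectivity(rates), sep="\n")
```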
In addition, it is important to highlight that the size and distribution of the Pd nanoparticles on ZnO also influence the quantity and selectivity of the evolved products. The best performance was observed for Sample B, with a C2H6 formation rate four times greater than that of Sample A and a C2H6 selectivity of 45%, whereas for Sample A it was only 20%. From the data in Table 3, it is noticeable that the amount of Pd deposited on the ZnO surface affects the product formation rates, with 0.5 wt% Pd (Sample C) showing the best performance. It is worth mentioning that a further increase in the Pd loading causes a decrease in performance, probably because the excess Pd contributes to the recombination of the charge carriers instead of promoting charge separation [25]. In this manner, Sample C showed a C2H6 formation rate of 686 µmol·h−1·g−1, a C2H4 formation rate of 24 µmol·h−1·g−1, and a C2H6 selectivity of 46%. For all Pd/ZnO samples prepared through BRM, a C2H6:CO2 molar ratio of approximately 1:1 was observed. Using Pd/TiO2 as the photocatalyst, Li and Yu [3] achieved a C2H6 formation rate of 55 µmol·h−1·g−1 with a C2H6:CO2 molar ratio of approximately 2.5:1 and an H2 production rate of 122 µmol·h−1·g−1. Curiously, no H2 production was observed for ZnO or for any of the Pd/ZnO photocatalysts. On the other hand, when Pd nanoparticles were supported on TiO2 and Ga2O3 by the ARM or BRM methods and tested for CH4 conversion under the same conditions, the formation of H2 was observed in appreciable quantities (in the range of 10 to 30 mmol·h−1·g−1). A similar result was recently described for the photocatalytic non-oxidative coupling of methane using ZnO as the photocatalyst, which showed C2H6 production but no H2, while H2 formation was observed for the TiO2 and Ga2O3 photocatalysts. The authors suggested that ZnO was possibly reduced by H2 upon photoirradiation [26]. Li and Yu [3] proposed a mechanism for Pd/TiO2 photocatalysts. Initially, water is activated by holes, forming •OH radicals, which react with CH4 molecules to form the •CH3 radicals responsible for the formation of the C2 products. It was also inferred that the H2 production comes primarily from water molecules and that Pd acts both as an electron trap, avoiding recombination with the holes, and as a center for CH4 activation. Recently, the generation of •OH radicals on metal nanoparticles (Pt, Pd, Au, or Ag) supported on TiO2 and ZnO was measured by photoluminescence [6]. The authors showed that the metal/TiO2 photocatalysts were more efficient than metal/ZnO in producing •OH radicals [6]. Based on these results, it is possible that in our system using Pd/ZnO photocatalysts, CH4 is preferentially activated directly by the holes rather than indirectly by •OH radicals, as shown in Figure 7.
Our results showed that the Pd/ZnO photocatalyst with smaller Pd nanoparticle sizes and good dispersion on the ZnO semiconductor contributes to a more efficient separation of the photogenerated charges (holes and electrons) and hence to a greater efficiency of the system, enhancing C2H6 formation. Recently, a high-performance Pd/TiO2 photocatalyst, in which TiO2 was decorated with highly dispersed Pd single atoms, was described for the photocatalytic non-oxidative conversion of methane to C2H6, resulting in a production rate of 910 µmol·h−1·g−1 while suppressing the over-oxidation to CO2 [27]. Compared to the photocatalyst in which Pd nanoparticles were dispersed on TiO2, the single-atom photocatalyst was much more active, demonstrating that the size and dispersion of the metallic atoms on the semiconductor can strongly influence the activity and selectivity of these materials [27]. Photocatalysts Preparation All chemicals were of analytical grade and used without further purification. The Pd/ZnO photocatalysts were prepared by the alcohol-reduction method (ARM) [28,29] and by the borohydride reduction method (BRM) [30].
For both synthesis methods, the prepared Pd nanoparticles were deposited over nanosized commercial ZnO obtained from Sigma-Aldrich (St. Louis, MO, USA; ≤100 nm particle size). An aqueous solution of sodium tetrachloropalladate (Na2PdCl4·3H2O) was used as the Pd precursor. Alcohol-Reduction Method (ARM) The ARM uses ethylene glycol (EG) as the reducing agent [28,29]. Briefly, proper amounts of the Pd precursor and ZnO were dispersed in an aqueous solution containing EG in a 3:1 EG/H2O ratio. The mixture was refluxed (175 °C) under vigorous stirring for 1 h. The solid was separated by centrifugation, washed several times with distilled water, and dried at 80 °C. The resulting material was ground to a fine powder. Borohydride Reduction Method (BRM) The BRM uses sodium borohydride as the reducing agent and sodium citrate as a dispersing agent [30]. The ZnO was dispersed in an aqueous solution containing a proper amount of the Pd precursor and sodium citrate (Pd:citrate ratio 1:3). An aqueous solution of sodium borohydride was added dropwise to the mixture under vigorous stirring at room temperature. The reaction was maintained under stirring for 24 h. The dispersed solid was separated by centrifugation, washed several times with distilled water, and dried at 80 °C. The resulting material was ground to a fine powder. Characterizations The Pd content (wt%) was determined by wavelength dispersive X-ray fluorescence (WDXRF), performed using a Rigaku Supermini200 spectrometer with a 50 kV palladium-anode X-ray tube operated at 200 W and a zirconium beam filter. UV-Vis diffuse reflectance spectroscopy was carried out using a Varian Cary 50 UV-Vis spectrophotometer with a xenon lamp and barium sulfate (BaSO4) as the reference standard. X-ray diffraction analysis was performed using a Bruker D8 Advance 3 kW instrument with a copper tube and a scintillation detector. Transmission electron microscopy images of the synthesized materials were obtained with a 200 kV JEOL JEM 2010 microscope. The X-ray photoelectron spectroscopy (XPS) experiments were carried out on a K-alpha surface analysis system (Thermo Scientific, Waltham, MA, USA) with an Al-Kα X-ray source (1486.6 eV) and a flood gun. Photocatalytic Tests The photocatalytic activity measurements were carried out in a 250 mL Ace photochemical reactor coupled to the GC-FID/TCD/MSD system. The photocatalysts were dispersed in 250 mL of ultrapure water and CH4 was bubbled through at a flow rate of 25 mL·min−1, while a 450 W Hg lamp was used as the light source. In addition, two cooling systems were used: one coupled to a condenser at the output of the photoreactor connected to the GC system to condense the water (15 °C), and the other to cool the Hg lamp (40 °C). In this way, the photocatalytic reactions were carried out at a temperature close to 60 °C. The gas chromatograph (GC) was an Agilent 7890B coupled to an MSD 5977B. The equipment has a thermal conductivity detector (TCD), a methanizer (MET), and a flame ionization detector (FID), as well as a quadrupole mass spectrometer detector (MSD). Two different columns were used to separate the reaction products, namely a PLOT-U column and a 5 Å molecular sieve column. Twelve injections were performed over a total of 7 h of analysis, each one of 33 min.
The first 3 injections took place with the light switched off, injections 4 to 10 with the light switched on, and in the last 2 injections the light was turned off again, making it possible to monitor the influence of the light on the system. Prior to testing the activity of the catalysts, calibration curves were produced to quantify CO2, C2H4, C2H6, C3H8, C4H10, H2, CH4, and CO. The detection limits were 0.001% for CO2 and C2-C4, 0.008% for CH4 and CO, and 0.3% for H2. Two certified gas mixtures containing some of the expected products (carbon dioxide, ethane, ethene, propane, butane, carbon monoxide) at different known concentrations were used to build the calibration curves for analyzing the products formed during the photocatalytic reaction. The selectivity was calculated according to the following equation:

Product selectivity = (n_product / n_total products formed) × 100%   (1)

where n represents the molar amounts. Conclusions Here, Pd/ZnO photocatalysts dispersed in water in a bubbling stream of methane under UV-light illumination were shown to be active for CH4 conversion. The main products formed were C2H6 and CO2, with minor quantities of C2H4 and CO; however, no H2 production was observed. The photocatalyst preparation methods influenced the size and dispersion of the Pd nanoparticles on the ZnO support, playing a pivotal role in the quantity and selectivity of the products formed. The Pd/ZnO photocatalysts with smaller Pd particle sizes, good dispersion, and an optimal Pd content were shown to be more active and selective for C2H6 production. Author Contributions: A.P.M. performed the catalyst synthesis, characterization, and photocatalytic tests; E.R.J. and P.S.F. participated in the catalyst preparation, characterization, and photocatalytic tests; A.P.M. and S.A.C. analyzed the data and wrote the paper; J.M.V. and E.V.S. designed the study and reviewed the paper. All authors have read and agreed to the published version of the manuscript.
Trends in absolute socioeconomic inequalities in mortality in Sweden and New Zealand. A 20-year gender perspective Background Both trends in socioeconomic inequalities in mortality and cross-country comparisons may give more information about the causes of health inequalities. We analysed trends in socioeconomic differentials in mortality from the early 1980s to the late 1990s, comparing Sweden with New Zealand. Methods The New Zealand Census Mortality Study (NZCMS), consisting of over 2 million individuals, and the Swedish Survey of Living Conditions (ULF), comprising over 100,000 individuals, were used for the analyses. Education and household income were used as measures of socioeconomic position (SEP). The slope index of inequality (SII) was calculated to estimate absolute inequalities in mortality. Analyses were based on 3-5 years of follow-up and limited to individuals aged 25-77 years. Age-standardised mortality rates were calculated using the European population standard. Results Absolute inequalities in mortality on average over the 1980s and 1990s for both men and women were similar in Sweden and New Zealand by education, but greater in Sweden by income. Comparing trends in absolute inequalities over the 1980s and 1990s, men's absolute inequalities by education decreased by 66% in Sweden and by 17% in New Zealand (p for trend <0.01 in both countries). Women's absolute inequalities by education decreased by 19% in Sweden (p = 0.03) and by 8% in New Zealand (p = 0.53). Men's absolute inequalities by income decreased by 51% in Sweden (p for trend = 0.06), but increased by 16% in New Zealand (p = 0.13). Women's absolute inequalities by income increased in both countries: by 12% in Sweden (p = 0.03) and 21% in New Zealand (p = 0.04). Conclusion Trends in socioeconomic inequalities in mortality were clearly most favourable for men in Sweden. Trends also seemed to be more favourable for men than women in New Zealand. Assuming the trends in male inequalities in Sweden were not a statistical chance finding, it is not clear what the substantive reason(s) for the pronounced decrease was. Further gender comparisons are required. Background Historically, both New Zealand and Sweden have had a long tradition of universalism and welfarism, and targeted policies for equity. However, in the late 1980s and beginning of the 1990s there was a substantial economic recession in both countries. New Zealand responded with considerable reductions in welfare and public services, as did Sweden, although to a lesser degree [1][2][3]. In parallel, socioeconomic inequalities in health have been shown to be increasing over time in Sweden [4] and, in relative terms at least, in New Zealand too [5]. Likewise, relative inequalities in mortality have been trending upwards in other Western European countries [6][7][8]. In one study, trends in mortality disparities between New Zealand and the Nordic countries other than Sweden (Finland, Norway and Denmark) were examined. The authors demonstrated that, overall, relative inequalities in mortality widened equally rapidly in all four countries [9]. It remains uninvestigated whether Sweden, with its strong history of egalitarianism, has had different trends in health inequalities. The aim of the present study was to analyse trends in absolute socioeconomic inequalities in mortality in Sweden and New Zealand. We hypothesised that trends in socioeconomic differentials in mortality might be more favourable in Sweden than in New Zealand.
As is already evident from the above discussion, the magnitude of and trends in inequality vary depending on the choice of absolute (e.g. rate differences in mortality between low and high socio-economic groups) or relative (e.g. rate ratios) measures. This has been a matter of debate in the past [10]. In this paper we have elected to focus mostly on absolute measures of inequality, although we also present relative measures. Data sources The New Zealand Census Mortality Study (NZCMS) was used for the mortality analyses in New Zealand. The NZCMS comprises four cohorts formed by anonymous and probabilistic linkage of four censuses to 3 years of mortality records. The four cohorts were: early 1980s (1981-84; all census respondents from 24th March 1981); late 1980s (1986-89; all census respondents from 4th March 1986); early 1990s (1991-94; all census respondents from 5th March 1991); and late 1990s (1996-99; all census respondents from 5th March 1996). Detailed methods for the linkages have been described earlier [11][12][13]. The NZCMS was approved by the Wellington Regional Ethics Committee (98/7) in compliance with the principles embodied in the Helsinki Declaration. Similarly linked mortality data were obtained for Sweden using the Swedish Survey of Living Conditions (Undersökningarna av Levnadsförhållanden, ULF). The ULF survey comprises a representative sample of the Swedish population between 16 and 84 years. Each individual participated in a one-hour face-to-face interview. Where a sampled person was not available, a close relative (spouse, parent or child) was interviewed instead; however, this occurred for an insignificant number of sampled persons. These data comprise over 100,000 men and women. Details about the survey have been published elsewhere [14]. The survey data were linked to mortality data using routine registries. The ULF survey linkage was agreed upon by Statistics Sweden and the Swedish National Institute of Public Health (8336836/168603) in compliance with the principles embodied in the Helsinki Declaration. We constructed four open cohorts (i.e. individuals were 'recruited' at each annual survey), and each cohort was followed up for mortality for up to five years (i.e. all deaths up to the end of the follow-up period were included). The following survey years were used: early 1980s (1980-85; all live individuals interviewed on or after 4th March 1980); late 1980s (1985-90; all survey respondents interviewed on or after 4th March 1985); early 1990s (1990-95; all survey respondents interviewed on or after 5th March 1990); and late 1990s (1995-2000; all survey respondents interviewed on or after 5th March 1995 and before 5th March 2000). The last Swedish cohort was truncated at 31st December 2001. We tried to make these cohorts comparable to those from the NZCMS, but the dates are slightly different. Socioeconomic position was measured using education and income. Both variables were classified into three categories, for descriptive presentation and for the calculation of the slope index of inequality (SII; see later in Methods). Education was classified for both Sweden and New Zealand according to the international OECD classification of education. The three levels were: i) low education (no qualifications; primary school, 1 to 9 years of schooling), ii) medium (upper secondary school education), and iii) high (college or university education). There are important differences in the income data between the countries.
In New Zealand, the measure was total income (including transfers and benefits, and before tax) self-reported on the census form using tick-box categories. The total personal income of all adults in the household was aggregated to get the total household income. By contrast, the Swedish income data for each ULF respondent were obtained by record linkage with the Taxation Office for the years 1980, 1985, 1990 and 1995, respectively, for each cohort. These data were post-tax total income, including earned income, government transfers and capital gains. These differences between the countries mean that income was both more accurately collected in Sweden and allowed for tax transfers. The implication is that differences in mortality by income are likely to be bigger in Sweden than in New Zealand, simply due to better exposure measurement. Also, the Swedish income measure actually included some asset wealth, by virtue of including capital gains. In addition, the Swedish income was categorised into tertiles based on the income distribution of each period, while that of New Zealand was based on the 1986 income distribution. In the Swedish data there was education information for almost every person, and the missing values were distributed over the various years. For income, there were no values recorded for the interview years 1979-1982 inclusive, and this accounted for the majority of the missing income values. The remaining missing values were similar in number to the missing education values and had a similar distribution over the years. Total household income was adjusted for inflation using the consumer price indices of both countries, but using 1980 as the base year in Sweden and 1996 in New Zealand. Household income was also equivalised for economies of scale. In New Zealand this was done using a New Zealand-specific equivalisation scale that adjusts for the number of children and number of adults in the household [15]. In Sweden, a similar equivalisation scale of household income was used according to Statistics Sweden [16]. Income was categorised into approximate tertiles for both countries for descriptive analyses and for the calculation of SIIs, as with education above. We believe it is unlikely that the different equivalisation methods will bias the comparisons of trends in this paper. Data analyses Analyses were limited to respondents aged 25-77 years during the follow-up period, i.e., we allowed aging into and out of the cohorts. Age-standardised mortality rates were calculated using the European population standard [17]. We used the slope index of inequality (SII) and the relative index of inequality (RII) to measure absolute and relative differences, respectively, in mortality by income and education [18,19]. Briefly, these rate difference and rate ratio measures are calculated by ranking the population by the categories of the socio-economic factor of interest. Each category is assigned a modified ridit score, equivalent to its mid-point on a cumulative proportion scale. For example, if the first group comprises 20% of the population it is assigned a value of 0.2/2 = 0.1, and if the second group comprises 30% of the population it is assigned a value of 0.2 + 0.3/2 = 0.35, etc.
The mortality rates for each category are then regressed on the modified ridit scores, meaning that the beta or slope coefficient is the expected difference in mortality rates between the lowest (0th percentile rank) and highest (100th percentile rank) socio-economic positions in the population. This is the SII. The RII is calculated by dividing the expected mortality rate for the 0th percentile by that for the 100th percentile. The SII and RII have considerable advantages for cross-national comparisons; in particular, they are not sensitive to different group sizes or (somewhat) different categorisations of the socio-economic factor. Table 1 shows the distributions of person-years, education, income and numbers of deaths. In both Sweden and New Zealand, the proportion of men and women with low education has been decreasing, while that with high education has been increasing (Table 1). Figures 1 and 2 show the standardised mortality rates by income and education, respectively. Although the mortality rates in Sweden were measured with much greater imprecision, as reflected by the wide 95% confidence intervals, there was a clear pattern of higher mortality rates in lower socio-economic groups. Comparing rates Consistent and statistically significant reductions in mortality rates were observed over time in all socio-economic groups in New Zealand for both sexes. Similar statistically significant reductions in Sweden were only observed among men with low income and women with high income, again probably a function of the lower statistical precision for Sweden. Of particular note, the standardised mortality rate per 100 000 among high-income women reached low levels in Sweden: it decreased from 310 (95% CI 94-526) in the early 1980s to 131 (95% CI 66-197) in the late 1990s (p-trend = 0.01) (Figure 1). Comparing absolute inequalities in mortality, on average, over the 1980s and 1990s A visual inspection of the absolute gaps in mortality rates between high and low socio-economic groups in Figures 1 and 2 suggests similar gaps in mortality by education between Sweden and New Zealand (with the exception of greater gaps among Swedish men in the 1980s). Gaps in mortality by income were clearly greater in Sweden. This is confirmed by an inspection of the SIIs shown in Table 3. (The SII is a regression-based estimate of the absolute difference in mortality rates between the lowest and highest socioeconomic groups.) However, as noted in the methods, we would expect greater differences in mortality by income in Sweden simply due to the better measurement of income (including some measurement of assets). [Figure 1: Mortality differentials by income (standardised rates per 100 000).] Comparing trends in absolute inequalities in mortality over the 1980s and 1990s The SII results in Table 3 quantify the visual impression of changing gaps in mortality rates across income and education shown in Figures 1 and 2. Included in Table 3 is a regression-based estimate of the percentage change in the SII from the early 1980s to the late 1990s, and p values for tests of trend. Swedish men clearly stand out as having different trends: a 51% decrease in the income SII (p-trend 0.06) and a 66% decrease in the education SII (p-trend < 0.01).
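Before turning to the reasons for these trends, a minimal sketch may make the SII/RII construction described in the Methods concrete (this is not the study's actual code); the three-group mortality rates and population shares below are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch of the SII/RII computation via modified ridit scores.
# Hypothetical example: three socio-economic groups (low, medium, high).
shares = np.array([0.3, 0.4, 0.3])        # population proportions per group
rates = np.array([900.0, 700.0, 500.0])   # age-standardised mortality per 100 000

# Modified ridit score: cumulative proportion up to the group's midpoint.
cum = np.cumsum(shares)
ridit = cum - shares / 2                  # here: 0.15, 0.50, 0.85

# Least-squares regression of rates on ridit scores, weighted by group size.
slope, intercept = np.polyfit(ridit, rates, 1, w=shares)

rate_low = intercept                      # predicted rate at the 0th percentile (lowest SEP)
rate_high = intercept + slope             # predicted rate at the 100th percentile (highest SEP)
sii = rate_low - rate_high                # slope index of inequality (absolute gap)
rii = rate_low / rate_high                # relative index of inequality
print(f"SII = {sii:.0f} per 100 000, RII = {rii:.2f}")
```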
Returning to Figures 1 and 2, the reasons for these pronounced trends are: notably high mortality for low socio-economic men in Sweden in the early 1980s compared to low mortality for high socio-economic men; and strong decreases in mortality among low socio-economic men compared to no real change in mortality for high socio-economic men. Whilst the statistical imprecision of each of the men's Swedish mortality rates in the Figures is notable, the p for trend results were 0.06 and <0.01 for income and education, respectively. For the three other groups (Swedish women, and New Zealand men and women), there were similarities in trends: approximately 10% to 20% increases in the income SIIs, and approximately 10% to 20% decreases in the education SIIs. Most trends in the SIIs were statistically significant or approaching statistical significance, with the exception of the educational SIIs for women in New Zealand (8% decrease, p-trend 0.53). A closer inspection suggests possibly more favourable trends for men than women in New Zealand, consistent with the clearly more favourable trends for men than women in Sweden. Comparing trends in relative inequalities in mortality The focus of this paper is on absolute inequalities in mortality, but for completeness we also present relative inequalities in Table 4. When average (regardless of socioeconomic position) mortality rates are decreasing, trends in relative inequalities will appear worse than trends in absolute inequalities, but will otherwise reflect the patterns and trends in absolute inequalities. This is confirmed in Table 4. For example, whereas absolute inequalities in mortality by education decreased (Table 3), relative inequalities by education were stable or increasing, with the exception of Swedish men. Likewise, trends in relative inequalities by income were more severe than trends in absolute inequalities by income. Of note, the RII becomes unstable when the estimated mortality rate for the highest socio-economic percentile (i.e. extrapolating beyond the midpoint of the highest socio-economic tertile) becomes close to zero, hence the very large income RIIs for Swedish women in the late 1990s. [Figure 2: Mortality differentials by education (standardised rates per 100 000).] Discussion Consistent and statistically significant reductions in mortality rates over time were observed in all socio-economic groups in New Zealand, while similar reductions in Sweden were only observed among men with low income and women with high income. Regarding absolute inequalities in mortality on average over the 1980s and 1990s, they were similar between Sweden and New Zealand by education for both men and women (with the exception of greater inequalities among men in Sweden in the 1980s). [Table 3 notes: a regression line was fitted (unweighted) to the SIIs to work out the regression-estimated change in the SII over time, the regression-estimated value for 1981-84, and hence the percentage change. ‡ We conducted ordinary least squares regression of the SII on census year (weighted by the inverse of the variance of the SII), and used the p-value for the census year term as our p-value for trend.] Absolute inequalities in mortality by income were greater in Sweden, although this is almost certainly due to better income measurement in Sweden. Regarding trends in absolute inequalities, there was a strong decreasing trend for men in Sweden (66% by education and 51% by income).
For both men and women in New Zealand, and women in Sweden, there were approximately 12% to 21% increases in inequality by income and 8% to 19% decreases in inequality by education. Trends were clearly most favourable for men in Sweden, and possibly also more favourable for men in New Zealand. The results presented in this study should be interpreted with awareness of potential limitations. First, the New Zealand database was larger than that of Sweden, which makes it difficult to draw conclusions in the presence of wide confidence intervals. However, many of the statistical tests of trend were significant. In addition, in spite of the relatively smaller population for Sweden, ULF is a random sample representative of the Swedish population, and the Swedish mortality rates per 100,000 by socioeconomic position were comparable to the national rates (with reservations for varying age groups and standardisation methods) [20]. Second, the sources of income data varied between the two countries. Due to the taxation system, Sweden has better income measurement than New Zealand, such that the inequalities by income in Sweden probably shift up relative to New Zealand. Thus some of the between-country difference in mortality differentials by income is almost certainly due to methodological aspects. It seems likely that mortality disparities by income in Sweden are 'too well captured' to be comparable to countries such as New Zealand at any one point in time. However, comparisons of trends over time are likely to be valid, so long as varying baselines are allowed for. Furthermore, income was measured at the household level and did not distinguish women's from men's individual income. This may make it difficult to draw conclusions on the observed gender differences. However, a study by Fritzell et al. showed that health effects were similar regardless of whether household or individual income was used [21]. There are also conceptual advantages of household income over individual income as a measure of one's ability to purchase items. The advantage of this study is that it provided the opportunity to study trends in socioeconomic inequalities in mortality, using both education and income, among men and women. This is the first study we are aware of in which inequalities are investigated from a gender perspective comparing Sweden with another non-European country over a long period of time. There are no strictly comparable published Swedish studies on trends in socioeconomic inequalities in mortality. Trends in total mortality by socioeconomic status have often been limited to younger populations up to 64 years [20] and to specific causes of death. Gender differences in socioeconomic differentials have often been interpreted as being smaller among younger women than among younger men, but this is in part due to lower overall mortality rates among women than men. Because of this, absolute differences in mortality rates between low and high socio-economic groups are greater among men (but may be similar or greater in relative terms among women), as shown in the present paper. In fact, a previous comparison of absolute mortality rates by occupational class between the late 1980s and early 1990s demonstrated that mortality had decreased among men and women (aged 20 to 64 years) across all occupational classes, with the exception of women in blue-collar jobs [20].
Reinterpreting previous comparisons in this light, trends in socioeconomic differentials in women's mortality are expected to be decreasing at a slower rate than those for men, as demonstrated in the present study. Trends in absolute (and relative) inequality by education and income in New Zealand have been published before [22]. These previous results adjusted for ethnicity (a confounder of the association of socio-economic position with mortality in New Zealand), but the trends over time in inequalities were similar to those published in the current paper. We found trends in socioeconomic inequalities in mortality among women to be similar in the Swedish and New Zealand data. The results of the present study suggest that women have not benefited as much as men from the reduction in socioeconomic inequalities in mortality over the past 20 years, especially in Sweden. That said, men's inequalities by education in Sweden appeared to start at a very high level in the early 1980s, but decreased markedly to smaller inequalities than those for men in New Zealand and for women in Sweden. Whilst statistical chance might explain men's trends in Sweden (although the tests of trend were statistically significant or nearly so), at least two substantive reasons might explain this divergence in trends between men and women in Sweden. Measures of SEP for women, particularly income, have become better (relative to men) in recent years due to increased participation in the labour market, which may reveal more inequality than in the past, or at least cause increasing income-related inequalities to be observed among women despite decreases among men. Another potential explanation, but one that we do not think is likely, is the changing proportion of single women. In Sweden, for example, during the 1990s there was an increase in the proportion of single parents (about 20% of adults, of whom 70% were women) [23]. Single parenting has been associated with economic hardship [24] and increased mortality [25]. Whilst both Sweden and New Zealand have welfare benefits specifically for solo parents, these are probably not sufficient to maintain the same level of equivalised household income as before any separation from an income-earning partner. However, single parents seem an unlikely driver of the results we see, for two reasons: mortality among 25-77 year olds is driven by adults older than those with dependent children; and whilst a greater portion of low-income households may now be made up of single-parent households, it does not necessarily follow that the mortality rate differences between low and high income will also increase. We are not sure of the reasons for the possibly profound declining trend in absolute gaps in men's mortality by socioeconomic position. However, the Swedish trends in ischemic heart disease (IHD) mortality, a major contributor to total mortality, may in part explain the observed declining trends. Rosengren et al. have shown a larger decrease in cardiovascular morbidity among men than women between 1984 and 1999 [26]. In addition, Hallqvist et al. demonstrated that the decline in mortality due to myocardial infarction (MI) among men in high socioeconomic positions started in the 1970s, while that of men in low socioeconomic positions started in the early 1980s [27]. Thus it is possible that the rapid decline in mortality due to MI occurred first among high socio-economic men (say, in the 1970s), and later among lower socio-economic men (say, in the 1980s and 1990s).
If true, this would mean that our study of the 1980s and 1990s missed the rapid fall among higher socio-economic men (and the consequent widening of absolute gaps), and just observed the 'correction' as men in lower socio-economic positions caught up. Such dynamic trends have been proposed by Victora as a result of the inverse equity hypothesis [28]. Regardless, the dynamic nature of trends in inequalities over time is something that both scientists and policy makers must increasingly consider and try to understand. Why might trends in absolute inequalities by education be decreasing, but by income increasing? First, we discussed above that increasing participation by women in the labour market may explain their increasing inequalities by income. Regarding the declines in absolute educational inequalities for both men and women, one possible reason is a shift in western industrialised societies towards income being a greater axis of stratification than education (other than education influencing later income), which may explain why educational gaps are tending to decrease while income gaps are tending to increase. Second, it is possible that education is becoming a weaker marker of socio-economic stratification, particularly in Sweden, due to the fact that there are increasingly fewer people with no qualifications. However, the SII and RII methods used in the present paper deal with this problem. Third, and more simplistically, it may just be a mathematical consequence of absolute inequalities having to decrease at some point when average or background mortality rates are relentlessly falling (although relative inequalities may continue to widen, e.g. Table 4 of this paper). Both New Zealand and Sweden have current national strategies to tackle health inequalities. New Zealand's strategy was established about 5 years ago [29], while Sweden has a long history of tackling health inequalities. In fact, Sweden was the first country to endorse a unique national public health policy, agreed on by a majority of political parties, with the intention of promoting good health for all [30,31]. Based on the results of the present study, these strategies seem to (somehow) have been outstandingly successful for men. It remains to be seen whether these strategies will in the long run also contribute to more successful reductions in socioeconomic inequalities in women's mortality. Conclusion Trends in socioeconomic inequalities in mortality were clearly most favourable for men in Sweden. These trends may also have been more favourable for men than women in New Zealand. Assuming the trends in male inequalities in Sweden were not a statistical chance finding, it is not clear what the substantive reason(s) for the pronounced decrease was. Thus further gender comparisons are required.
Ultra-selective flexible add and drop multiplexer using rectangular optical filters based on stimulated Brillouin scattering We demonstrate an ultra-selective flexible reconfigurable add and drop multiplexer (ROADM) structure enabling separation and aggregation operations for multi-band orthogonal frequency division multiplexing (MB-OFDM) signals with ~2-GHz spectral granularity and a 300-MHz guard band. The ROADM employs rectangular optical filters based on stimulated Brillouin scattering (SBS) in fiber, which have steep edges, ~1-dB passband ripple and a bandwidth tunable from 100 MHz to 3 GHz, realized by two different kinds of electrical feedback pump control approaches. The ROADM performance is measured with MB-OFDM signals in quadrature phase-shift keying (QPSK) and 16-quadrature-amplitude-modulation (16-QAM) formats. For the QPSK signal, the SBS-ROADM induced penalty is ~0.7 dB, while the performance for the 16-QAM format is also acceptable. ©2015 Optical Society of America OCIS codes: (060.4265) Networks, wavelength routing; (290.5900) Scattering, stimulated Introduction With the continuing growth in the amount of traffic, highly spectrally efficient modulation formats and reduced channel spacing are required. Flex-grid networking has emerged as a key requirement for future dynamic and more efficient networks [1][2][3]. In this context, flexible super-channel approaches, such as Nyquist wavelength-division multiplexing (WDM) and multi-band orthogonal frequency division multiplexing (MB-OFDM), are considered to be promising candidates for 400 Gbps and 1 Tbps long-haul WDM transmission due to their high spectral efficiency, small requisite guard band and all-optical sub-band switching superiority [4][5][6]. One of the core techniques for flex-grid networking is the super-channel switching strategy, which is realized in a reconfigurable optical add and drop multiplexer (ROADM). The extraction and aggregation of a sub-band signal from dense WDM channels requires very high-resolution frequency selectivity and makes flexible narrowband filters the most important component.
An ideal solution for a flex-grid ROADM is a rectangular optical filter with precise bandwidth and central wavelength tunability. Such rectangular filters with large bandwidth have already been achieved employing liquid-crystal on silicon (LCoS) [7] and bulk-grating techniques [6], which are widely used in add and drop demonstrations. However, for both techniques, the flat-topped passband shape can only be obtained for bandwidths larger than ~tens of GHz. For small bandwidths, such as the 10 GHz case, the filter passband tends to a Gaussian shape, which cannot completely meet sub-wavelength switching requirements. Moreover, due to the grating and liquid-crystal resolution limitations, it is difficult to realize filters with ~GHz bandwidth using these techniques, making them unfit for supporting ultra-narrow flexible switching. The current state of the art in small-grid flexible ROADMs is based on arrayed-waveguide gratings and the LCoS technique [8]. It can reach ~0.8 GHz resolution and ~GHz bandwidth. However, the in-band filter shape is not flat and shows a slow roll-off, which both induces signal distortions and requires a larger guard band. For implementing GHz-bandwidth rectangular filters, several solutions have been proposed, including specially designed fiber Bragg gratings (FBG) [9], cascaded micro-ring resonators [10], forward stimulated inter-polarization scattering [11] and stimulated Brillouin scattering (SBS) [12][13][14], etc. Among all the above methods, the SBS active filter has been considered a promising technique with inherent flexibility. Based on the SBS effect, we recently demonstrated rectangular optical filters with bandwidth tunable from 50 MHz to 4 GHz [15]. The filter passband ripple is suppressed to ~1 dB using precise digital feedback control. The filter selectivity can reach ~40 dB with a pump-splitting dual-stage scheme [16]. In this paper, we realize an ultra-selective ROADM structure with ~2-GHz spectral granularity and a 300-MHz guard band. We propose two different kinds of feedback methods to achieve rectangular SBS filters for ROADM applications. One is based on a sweeping probe signal generated by an electrical vector network analyzer (EVNA) [15], and the other is based on coherent detection directly using the OFDM signal as a probe. Both methods obtain the desired results and have similar convergence speeds. Based on this rectangular filter, we demonstrate the separation and aggregation of a 3-band OFDM signal in quadrature phase-shift keying (QPSK) and 16-quadrature-amplitude-modulation (16-QAM) formats. As a proof of concept, we limit the demonstration to a single-polarization MB-OFDM signal. The bandwidth of each OFDM band is only 2 GHz and the net bit rate is ~5 Gbit/s using the 16-QAM modulation format. Thanks to the steep edges of the proposed filter, the guard band can be set to as small as 300 MHz without any obvious extra penalty. In fact, the guard band is only limited by the laser drift and can be further decreased. For the QPSK signal, the filter-induced total penalty is only ~0.7 dB, benefiting from the flat passband and smooth phase response. For the 16-QAM signal, the ROADM performance is also acceptable considering the low tolerance to noise and crosstalk. Some preliminary results have been presented previously [17]; in this paper we describe the proposed novel feedback method in detail, including a convergence speed comparison, and we further investigate the influence of the Brillouin gain on the ROADM performance.
The rectangular SBS filter generation

In order to realize a flexible and precise add and drop function in a ROADM, we first implement a rectangular optical filter with high flexibility. The filter generation process is shown in Fig. 1. First, we use an arbitrary waveform generator (AWG) to generate an electrical comb. Then the electrical comb modulates a continuous-wave (CW) light to generate the optical comb acting as the pump. After being boosted to a high power level, the pump gives rise to the SBS effect. If the signal is shifted downward from the pump by the Brillouin frequency, it will be amplified as the Stokes wave, and if it is shifted upward by the Brillouin frequency, it will be absorbed as the anti-Stokes wave [18]. This can also be considered as filtering in terms of signal selection. As shown in Fig. 1, in order to obtain a rectangular gain spectrum from the Lorentzian-shaped natural SBS gain, a pump consisting of equal-amplitude spectral lines with intervals equaling the natural SBS gain bandwidth is required. The programmable AWG allows the amplitude and the initial phase of each spectral line in the electrical comb to be controlled digitally and precisely. This approach brings many benefits. First, the natural SBS bandwidth is only ~20 MHz, so the filter bandwidth can be very small and the control precision very high. Second, the electrical comb is generated precisely by the AWG within a specific frequency range, ensuring steep filter edges. Third, the amplitude of each comb line can be altered precisely to optimize the filter passband flatness. Fourth, the number of comb lines can be changed to adjust the filter bandwidth precisely. Fifth, the filter central wavelength can also be shifted, electrically and optically, by tuning the wavelength of the comb. Thus the proposed filter is almost an ideal rectangular filter with multi-dimensional flexibility. In this section, we focus on the bandpass filter as an example; note that the filter can be either bandpass, with SBS amplification, or band-stop, with SBS absorption. The rectangular optical filter generation has been introduced in our previous publication [15]. Given the nonlinear responses of electrical and optical components, flat electrical spectral lines actually lead to uneven SBS gain; thus a feedback compensation is proposed to digitally control the amplitude of each electrical spectral line according to the measured SBS gain, so as to optimize the shape of the targeted SBS filter. In order to mitigate the incalculable gain induced by the four-wave mixing (FWM) effect among the multiple pump lines, we set the frequency intervals of the electrical spectral lines randomly around the natural SBS gain bandwidth instead of using equal intervals. In this case, the FWM-induced gain is no longer superposed on the original lines, and the feedback process is more accurate. Note that once the feedback compensation is completed, the optimal pump waveform is fixed and can be stored in the AWG memory for future use. Thus it can be considered a software-defined optical filter.
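As a concrete illustration, the following is a minimal Python sketch of the comb construction just described. It is our own sketch, not the authors' AWG code, and the sampling rate, line count, and frequency offset are assumed values: lines are placed at randomized 19/20/21-MHz spacings with random initial phases (for an acceptable peak-to-average ratio), with flat amplitudes that the feedback loop would subsequently reshape.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 10e9          # AWG sampling rate (assumed), Hz
duration = 10e-6   # waveform length (assumed), s
n_lines = 110      # number of comb lines -> roughly 2.2-GHz filter bandwidth
f_start = 1e9      # offset of the first comb line (assumed), Hz

# Line spacing randomized around the ~20 MHz natural SBS bandwidth
# (19/20/21 MHz) so FWM products do not fall back onto the comb lines.
spacings = rng.choice([19e6, 20e6, 21e6], size=n_lines - 1)
freqs = f_start + np.concatenate(([0.0], np.cumsum(spacings)))

amps = np.ones(n_lines)                      # flat start; feedback refines these
phases = rng.uniform(0, 2 * np.pi, n_lines)  # random phases keep PAPR acceptable

t = np.arange(int(fs * duration)) / fs
comb = (amps[:, None] * np.cos(2 * np.pi * freqs[:, None] * t
                               + phases[:, None])).sum(axis=0)
comb /= np.abs(comb).max()   # normalize to the AWG full scale
```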
At first, we used the whole pump to amplify the signal in a single fiber section. However, as the filter bandwidth increases, the filter gain cannot be increased effectively by simply increasing the pump power, due both to the competition between SBS and stimulated Raman scattering (SRS) and to four-wave mixing (FWM) induced out-of-band gain. In order to increase the filter selectivity, we propose a pump-splitting dual-stage scheme [16]. Instead of using a single pump with high power, we split it into two stages and amplify the signal twice successively. In this case, the pump power of each stage is under the SRS threshold and induces fewer out-of-band FWM components. Moreover, the decrease in pump power for each stage reduces the noise induced by spontaneous Brillouin emission [19]. Thus, not only can the filter selectivity be increased dramatically, but better noise performance can also be achieved.

The amplitude and phase responses are measured by amplifying an optical sweeping probe signal which is modulated by an electrical sweeping signal from an EVNA. The SBS gain spectrum can be obtained by comparing the results with the SBS pump switched on and off. We reasonably assume that the SBS gain at a certain frequency is related only to the corresponding electrical spectral line. Thus, once the SBS gain shape has been obtained, we can use the relation between the SBS gain and the electrical spectral lines from the AWG to calculate the new amplitude of each electrical spectral line applied to the AWG [15]. Since the SBS gain is related only to the pump power, which is proportional to the electrical spectral lines, we just set a random phase for each line to maintain an acceptable peak-to-average ratio of the waveform. More details can be found in Ref. [15]. After only 5-10 iterations of the digital feedback compensation, we obtain the long-term stable rectangular filter shape shown in Fig. 2 with different gain values, in other words, with different filter selectivity. The gain can be tuned by changing the total pump power. The filter bandwidth can also be tuned easily by changing the number of electrical spectral lines. Filters with bandwidths from 100 MHz to 3 GHz are illustrated in Fig. 3. The tuning resolution can be as small as ~20 MHz, equaling the natural SBS gain bandwidth. No matter what the filter selectivity and bandwidth are, the passband ripple can always be suppressed to ~1 dB, and the filter edges are very steep. Due to the flat passband, the filter phase responses are very smooth. The passband flatness and the smooth phase response preserve signal fidelity to the extreme. The out-of-band gain is due to the FWM components, which cannot be mitigated completely: the larger the SBS gain, the more severe the FWM.
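The per-line update rule quoted with Fig. 2 (new amplitude = ideal gain in dB / measured gain in dB, times the amplitude used) reduces to a few lines of code. This is a sketch under our own naming; measure_sbs_gain is a hypothetical stand-in for the EVNA or coherent-detection readout, not a routine from the paper.

```python
import numpy as np

def feedback_update(amps_used, gain_meas_db, gain_target_db):
    """One iteration of the digital feedback: rescale each comb-line
    amplitude by the ratio of target to measured SBS gain (in dB),
    i.e. amp_new = (ideal gain / measured gain) * amp_used."""
    gain_meas_db = np.maximum(gain_meas_db, 1e-3)  # guard against divide-by-zero
    return amps_used * (gain_target_db / gain_meas_db)

# Typical loop, converging to ~1 dB ripple within 5-10 iterations:
# amps = np.ones(n_lines)
# for _ in range(10):
#     gain_db = measure_sbs_gain(amps)   # hypothetical measurement routine
#     amps = feedback_update(amps, gain_db, gain_target_db=21.0)
```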
Novel feedback method based on coherent detection

In the previous work, the filter feedback process was based on a sweeping probe generated by an EVNA. This method needs an extra modulator to convert the electrical sweeping signal to an optical signal, a narrowband filter to suppress one probe sideband, and also a photodiode to convert the amplified optical signal back to the electrical domain for measurement, which increases the system complexity dramatically. Thus we propose a novel feedback method based on coherent detection. Note that Nyquist-WDM and OFDM signals present a flat in-band spectrum, so they can be used directly as the probe signal for the feedback process. Besides, using the transmission signal as the probe not only yields a flat SBS gain, but also tracks and compensates small variations in both the channel conditions and the SBS parameters, ensuring a stable and optimal filtering state at all times. The feedback experimental setup is shown in Fig. 4. In the upper branch, an AWG is used to generate the electrical spectral lines with random frequency intervals within a ±1-MHz deviation from the natural SBS bandwidth of 20 MHz (i.e., 19 MHz, 20 MHz, and 21 MHz). These lines then modulate a CW light from a distributed feedback (DFB) laser to generate the optical carrier-suppressed single-sideband (OCS-SSB) SBS pump lines, utilizing an I&Q modulator (IQM). After being boosted by a high-power erbium-doped fiber amplifier (EDFA), the OCS-SSB signal is split into two equal parts and sent into two identical 25-km long single-mode fibers (SMFs), which are kept under the same strain and temperature conditions to ensure the same Brillouin characteristics. In each stage, a polarization controller (PC) is used to maintain the SBS gain at its maximum value. It should be noted that a polarization scrambler or a polarization state switch could eliminate the polarization dependent gain (PDG) issue, as long as the adjustment speed of the scrambler or switch is fast enough that the pump can be treated as depolarized; in that case, the PCs could be removed from the setup. The SBS gain is ~11 GHz away from the pump, as shown in Fig. 4(i). In the lower branch, a QPSK format OFDM signal with constant amplitude from the AWG modulates another CW light via a Mach-Zehnder modulator (MZM) to generate the probe signal, as shown in Fig. 4(ii). After passing through an isolator (ISO), the probe OFDM signal propagates through the two fibers successively, and the central part within the SBS gain region is amplified twice, as shown in Figs. 4(iii) and 4(iv). Finally, the amplified OFDM signal is sent into a coherent receiver (described in detail in section 4), and the amplitude of each OFDM subcarrier can be obtained after cyclic prefix removal and transfer to the frequency domain using off-line processing. By comparing the amplitudes before and after the SBS amplification, the SBS gain spectrum can be obtained; a small sketch of this comparison is given below. The SBS loss spectrum can also be obtained with the same approach. The feedback process is almost the same as that in the sweeping probe method described in section 2; the only difference is how we obtain the SBS gain spectrum. It should be noted that the OFDM signal used for feedback is different from that used for the system performance evaluation in section 4.
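A minimal sketch of this gain readout, with our own assumed FFT size and cyclic-prefix length (the paper does not give these for the feedback probe): strip the cyclic prefix, average per-subcarrier magnitudes over symbols, and compare pump-on against pump-off captures.

```python
import numpy as np

def sbs_gain_from_ofdm(rx_pump_on, rx_pump_off, n_fft=512, cp=32):
    """Estimate the SBS gain spectrum from an OFDM probe: remove the cyclic
    prefix, transform to the frequency domain, and compare the per-subcarrier
    amplitudes with the pump switched on and off."""
    def subcarrier_amps(x):
        n_sym = len(x) // (n_fft + cp)
        syms = x[:n_sym * (n_fft + cp)].reshape(n_sym, n_fft + cp)[:, cp:]
        return np.abs(np.fft.fft(syms, axis=1)).mean(axis=0)

    a_on = subcarrier_amps(rx_pump_on)
    a_off = subcarrier_amps(rx_pump_off)
    return 20 * np.log10(a_on / a_off)   # gain per subcarrier, dB
```

With 20-MHz subcarrier spacing, the returned spectrum has exactly the 20-MHz precision discussed in the next paragraph.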
The bandwidth of the probe OFDM signal is larger than that of the SBS filter, so the full filter passband and the nearby stopband can be detected at the same time. The precision of the gain spectrum obtained with the OFDM probe is 20 MHz, equaling the interval of the OFDM subcarriers, which is worse than the spectral precision obtained with the EVNA. However, feedback compensation at the current precision fully meets the system requirement on filter flatness, as proved by the system performance. Figure 5 illustrates the filter passband shape after the proposed feedback compensation with different gains. As shown in the figure, the OFDM-based feedback method obtains a filter flatness of ~1 dB, similar to the sweeping probe approach, proving its feasibility. However, when the gain increases to ~32 dB, the ripple increases to ~2 dB. This does not imply a failure of the feedback; rather, it is due to the limited dynamic range of the receiver. In order to keep the signal with high SBS gain within the receiver dynamic range, the unamplified original signal must be very small, and it then fluctuates easily under random noise from the receiver.

The feedback convergence speed of the two different methods is shown in Fig. 6. For a fair comparison, we set the same bandwidth of ~2.2 GHz and ~21 dB gain for both approaches. The figure shows that both approaches have fast and similar convergence speeds: only 5-10 iterations are needed to obtain a flat passband. Theoretically, more iterations lead to a flatter filter passband, but the ripple measurement accuracy is limited by the power measurement precision of the EVNA and the coherent detection, so there is little practical value in pursuing a very small ripple. Note that the number of feedback iterations depends on the filter bandwidth and gain: the larger the bandwidth and gain, the more iterations are needed to achieve the same flatness level.

The ROADM experiment and results

Once the rectangular filters have been obtained, the ultra-selective ROADM can be implemented. The flexible-grid ROADM structure is shown in Fig. 7. An SBS gain filter (bandpass) keeps only the desired band to realize the drop function, while an SBS loss filter (band-stop) removes that band from the MB-OFDM signal to empty the spectral slot for another signal to be added. Thanks to the tunability of the filter central wavelength and bandwidth, the ROADM can be flexibly configured with very high resolution. Meanwhile, the guard band between different bands can be set very small, benefiting from the steep edges of the rectangular filter. As shown in Fig. 8,
the experimental setup consists of three parts: the transmitter, the SBS-based ROADM itself, and the coherent receiver. In the transmitter part, the light from an external cavity laser (ECL) and two DFB lasers operating at ~1543 nm is modulated by the electrical OFDM signal. Since we are only interested in the central band signal, we use the same modulator for the two side-band signals to reduce the system complexity. As only a single output of the AWG (Tektronix 7221B) is available for OFDM signal generation, an OFDM signal satisfying Hermitian symmetry is generated. This constraint does not affect our analysis, as all subcarriers are treated independently at the receiver side. Given the instability of the three lasers, the minimum guard band among the three signal bands is set to 300 MHz. For each band, 128 subcarriers are used in order to mitigate the phase noise of the ECL with ~100 kHz linewidth. Both QPSK and 16-QAM modulation formats have been employed for each subcarrier at a sampling rate of 2.5 GS/s. The bandwidth is set to 2 GHz by adjusting the number of empty subcarriers. After passing through an isolator to block the backward-propagating pump light, the single-polarization 3-band signal is split into two parts. The central band is absorbed or amplified by a 2.2-GHz rectangular dual-stage SBS loss or gain filter in the separate branches, respectively. Due to lab constraints, we use 25-km SMF28 fiber for each amplification stage. The extra 200 MHz allows for slight laser drift. After passing through a 12.5-km long fiber, the amplified central band in the upper path is decorrelated from the two side bands in the lower path. Then the three bands are combined with the same polarization state, adjusted using two PCs. The signal from the ROADM is adjusted to the optimal power, and a broadband amplified spontaneous emission (ASE) noise source is added for the bit error rate versus signal-to-noise ratio (BER-SNR) measurement. Finally, the OFDM signal is detected by a typical coherent receiver. A narrow-linewidth ECL is used as the local oscillator. The I and Q parts of the complex optical signal are obtained using 90-degree hybrids and are then converted to electrical signals with balanced detectors. A 50-GS/s real-time oscilloscope (Tektronix DPO72004B) digitizes the waveform and generates the sampled digital sequence, which is then sent to a computer. The QPSK and 16-QAM constellations are then obtained by off-line processing. A more detailed description of OFDM signal generation and detection is given in [20].
Concerning the SBS pump generation, we use the second AWG output to generate a 2.2-GHz wide electrical signal, which modulates the light from two DFB lasers. The amplitude of the electrical comb is controlled using the feedback compensation algorithm described in section 3. An IQM is used to realize OCS-SSB modulation for the SBS gain and loss pump generation. After pre-amplification, the two pump waves, with around 22-GHz frequency spacing, are separated by a Finisar waveshaper and boosted to a higher level to act as the pumps. Note that the waveshaper would not be required if another IQM were used for gain/loss pump generation. In each stage, a PC is used to maintain the SBS gain or loss at its maximum value through the 25-km long fiber. An EDFA after the first loss-filter stage and a variable optical attenuator (VOA) after the second gain-filter stage are used to maintain the three bands at the same power level.

To further prove the feasibility of the proposed ROADM structure, we evaluate the ROADM performance by BER-SNR measurements. The SNR of the signal is measured in the electrical domain, as shown in Fig. 10. First, we measure the average power level of the whole signal band, which is the total power of the signal and noise. Then we measure the average noise level on the two sides of the signal within the same bandwidth as the signal. Since the noise is generated from a broadband ASE source, the noise level should be uniform over a wide range, and the measured noise on the two sides approximately equals the noise level right in the signal band. After we obtain the estimated noise power and the total power of the signal with noise, the SNR can easily be calculated.

Fig. 10. The SNR measurement method. The noise level is estimated from the pure noise region near the signal by assuming that the noise level is uniform over a wide range.

First, we assess the validity of the SBS amplification with different gains for the OFDM signal. We fix the signal bandwidth to 2 GHz and set the guard band between each pair of signal bands to 500 MHz. 2.2-GHz gain filters are used for amplification. The SNR-BER curves for the QPSK format are presented in Fig. 11(a), and those for the 16-QAM case in Fig. 11(b). Different constellation diagrams are also given in the insets. After being amplified by the SBS gain filter with 25-dB gain, the SNR penalties are only ~0.2 dB and ~1.7 dB at a BER of 10−3 for QPSK and 16-QAM, respectively. For 16-QAM, a larger penalty is observed at high SNR in the large-gain cases because of the large SBS-ASE noise. The results prove that the SBS-gain-induced penalty is not significant, especially when the signal format is QPSK, and validate the feasibility of the proposed rectangular SBS gain filter in the OFDM system. Then the performance of the full add and drop function is measured, i.e., the dropped, decorrelated single band from one branch is inserted between the two remaining side bands of the other branch. The signal is still set to 2 GHz, with 500 or 300 MHz guard bands, amplified or absorbed by a 2.2-GHz gain or loss filter. The filter-induced penalty here is twofold: the Brillouin ASE and the crosstalk from the remnants of the absorbed central band. We study these two factors separately. First, we fix the crosstalk and change the Brillouin gain. The SBS loss is ~23 dB, corresponding to a pump power of 24 dBm. With the added central band and the two sidebands at the same power, the 23-dB SBS loss corresponds to 23-dB in-band crosstalk on the central band. The BER performances with 20, 25, and 30 dB amplification are shown in Fig. 12.
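The noise-estimation arithmetic of Fig. 10, used for all of the BER-SNR curves that follow, amounts to the sketch below; the array names and index slices are ours, and the power spectrum bins are assumed to be in linear units.

```python
import numpy as np

def estimate_snr_db(psd, sig_band, noise_bands):
    """SNR from an electrical power spectrum: the mean in-band level is
    signal plus noise; the noise level is taken from flat noise regions on
    either side of the band and assumed uniform across the signal band."""
    p_total = psd[sig_band].mean()                       # signal + noise
    p_noise = np.mean([psd[b].mean() for b in noise_bands])
    return 10 * np.log10((p_total - p_noise) / p_noise)

# Example usage with hypothetical bin ranges:
# snr = estimate_snr_db(psd, slice(400, 600), [slice(250, 350), slice(650, 750)])
```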
For the QPSK-format signal, the SNR-BER curves indicate that the penalty induced by the add and drop function is ~0.7 dB, regardless of the Brillouin gain. For the 16-QAM signal, the penalties increase with the SBS gain. Considering its low tolerance of noise and crosstalk, the performance of the 16-QAM is also acceptable. Then we reduce the SBS loss from ~23 dB to ~15 dB, so the residual central band induces more severe crosstalk onto the added central band. The guard band is set to 500 MHz. As shown in Fig. 12, the penalty for the QPSK signal increases to ~1.8 dB at a BER of 10−3, while the BER performance is dramatically degraded for 16-QAM. This is reasonable, because the constellation points of the 16-QAM format are closer together than those of QPSK and are more sensitive to noise as well as crosstalk. Since the SBS gain corresponds to the in-band crosstalk onto the two side OFDM bands, ~25-dB SBS gain and loss is the basic requirement for the 16-QAM format [21], while for QPSK the requirements are far less demanding.

Meanwhile, we also evaluate the relation between the guard band and the performance. Figure 12 shows that when the guard band is reduced from 500 MHz to 300 MHz, all the performance differences are negligible, benefiting from the precise filtering technique. Thanks to the sharp rectangular response of the SBS filter, it would even be possible to narrow the guard band to as small as 100 MHz, but due to the limits of laser stability, we only set the minimum guard band to 300 MHz.

Conclusion

We have implemented an ultra-selective ROADM structure and demonstrated spectral processing of an MB-OFDM signal with 2-GHz granularity and a 300-MHz guard band, employing flexible rectangular optical filters based on SBS in optical fiber. Steep-edged, flat-topped filters with tunable bandwidth from 100 MHz to 3 GHz have been realized using an optical comb as the pump. The filter passband flatness is controlled to ~1 dB utilizing feedback pump control with coherent detection directly using OFDM signals. Based on this rectangular filter, we demonstrated the separation and aggregation of a 3-band OFDM signal in both QPSK and 16-QAM formats. Thanks to the steep edges of the proposed filter, the guard band can be set as small as 300 MHz without any obvious extra penalty. For the QPSK format signal, the filter-induced total penalty is only ~0.7 dB, benefiting from the flat passband and smooth phase response. For the 16-QAM format signal, the ROADM performance is also acceptable considering its low tolerance of noise and crosstalk. Meanwhile, the flexibility of the OFDM is exploited to the full with the help of the filter bandwidth flexibility. The experimental results validate the SBS-based ROADM structure and prove the feasibility of both SBS amplification and absorption in OFDM transmission and networks.

Fig. 1. The principle of the filter generation.

Equation (2), the feedback update rule: Electrical amplitude_new = [Ideal gain (dB) / Measured gain (dB)] × Electrical amplitude_used.

Fig. 2. The (a) amplitude and (b) phase responses of SBS gain filters with tunable selectivity.

Fig. 3. The (a) amplitude and (b) phase responses of SBS gain filters with tunable bandwidth.
Fig. 4. The dual-stage SBS filter and the feedback process based on coherent detection using an OFDM signal. Inset (i): single-sideband pump fp, SBS gain around fg, and DFB laser frequency fc; (ii) OFDM signal as a probe with bandwidth larger than the pump; (iii) OFDM probe signal amplified by the first-stage SBS gain filter; (iv) OFDM probe signal amplified by the second-stage SBS gain filter.

Fig. 5. Filter passband shapes after the feedback process based on coherent detection using an OFDM signal as the probe. The spectrum precision is ~20 MHz, equaling the interval of the OFDM subcarriers.

Fig. 6. Typical convergence speed of the two different feedback methods.

Fig. 7. Concept of the SBS-based ROADM. The SBS gain filter and loss filter are used to realize the drop and through functions at the same time.

Fig. 8. Experimental setup and the optical spectrum schemes at different points.

The spectra obtained from the oscilloscope are shown in Fig. 9. Due to the low cut-off frequency limitation of the receiver, 12 carriers in the central position are suppressed. Figure 9(b) illustrates the drop function with an SBS gain filter: only small parts of the adjacent bands close to the central position are amplified slightly, giving ~20 dB selectivity. Figure 9(c) illustrates the through function with an SBS loss filter: the central band is completely absorbed, and the central peak is only due to the receiver DC noise. After inserting the dropped band into the central empty position, as shown in Fig. 9(d), the spectrum looks almost the same as the original one shown in Fig. 9(a).

Fig. 9. The electrical spectra of the OFDM signal at different points of the ROADM.
Procedural Data Processing for Single-Molecule Identification by Nanopore Sensors

Nanopores are promising single-molecule sensing devices that have been successfully used for DNA sequencing, protein identification, and virus/particle detection. It is important to understand and characterize the current pulses collected by nanopore sensors, which carry information about the analytes, including their size, structure, and surface charge. Therefore, a signal processing program, based on the MATLAB platform, was designed to characterize the ionic current signals of nanopore measurements. In a movable data window, the selected current segment was analyzed with adaptive thresholds and corrected by multiple functions to reduce the obstruction of pulse signals by noise. Accordingly, a set of single-molecule events was identified, and rich information on the current signals, including dwell time, amplitude, and current pulse area, was exported for quantitative analysis. The program contributes to efficient and fast processing of nanopore signals with a high signal-to-noise ratio, which promotes the development of nanopore sensing devices in various fields of diagnostic systems and precision medicine.

Introduction

Nanopore sensing is a promising single-molecule technology with the advantages of no amplification, label-free operation, high sensitivity, and high throughput. It has been developed in various fields such as gene sequencing, protein profiling, nanoparticle characterization, and biological particle detection [1][2][3][4][5][6]. Generally, a nanopore sensor is based on the resistive-pulse model in an electrolyte solution, where the nanoscale pore, drilled into a thin insulating film, is the only path for the ionic current flowing under the applied bias voltages. A single molecule entering the pore causes a transient fluctuation of the ionic current, referred to as a current pulse, indicating a single-molecule translocation event. The monitored current pulses rising from the current baseline reflect the physical features of the passing analytes and the dynamic interactions between the analytes and the nanopore. The intensity, dwell time, capture frequency, and waveform of these current pulses can provide information on the dynamic changes in analytes, including their volume, concentration, surface charge, and conformational features in solution [5,[7][8][9]. Hence, it is important to collect and characterize the current pulses in nanopore measurements for a better understanding of the behavior of various analytes. However, the current pulse is instantaneous and is recorded by a high-gain, low-noise amplifier. The corresponding current pulses are weak and arise against the strong noise background of the ionic current trace. Thus, a well-defined current readout platform is a necessary component of nanopore sensing in order to detect analytes and to characterize their properties, which will accelerate nanopore development in modern diagnostic and bio-sensing fields. With the rapid development of nanopore sensing, there is great interest in developing tools and methods for robust data analysis within nanopore fields [5,[10][11][12][13][14][15][16]. Currently, most of the data analyses have been performed with various open-source and commercial software packages. In order to account for the complexity and diversity of the current signals, different techniques have been developed for the analysis of nanopore current traces.
First, denoising filters are often applied in software to already acquired digital data for signal enhancement, including low-pass filters, Kalman filters, and wavelet transforms [14,[17][18][19][20]. To separate pulses from noise, Raillon's group introduced the OpenNanopore software, based on the cumulative sums algorithm, to process multi-level events in nanopore translocation [11]. Forstater's group developed an improved data analysis tool called the Modular Single-Molecule Analysis Interface (MOSAIC) for data measured in both biological and solid-state nanopore experiments, based on two key algorithms: ADEPT for short-lived events and CUSUM+ for longer events [12]. Meanwhile, Sun's group provided an automated, adaptive, and robust AutoNanopore platform for event detection in solid-state nanopore current traces with the highest coverage ratio [13,14]. Dekker et al. introduced a local baseline recalculation algorithm, using an iterative operation, for separating folded and unfolded DNA states within translocation events [21]. Long et al. focused on automatic and accurate nanopore data processing with a second-order differential-based calibration method and an integration method to evaluate both the dwell time and the current amplitude [22][23][24]. For baseline fitting profiles, Kim et al. proposed a clustering method (density-based spatial clustering of applications) to identify the boundaries of events and for preliminary estimation of the levels within events [25,26]. Recently, classical machine learning algorithms have been proposed to improve nanopore resolution [14,[27][28][29][30]. Zhang et al. used deep learning based on a bi-path network (B-Net) for feature extraction from nanopore signals, which was capable of processing data with a low signal-to-noise ratio, far beyond the capability of threshold-based algorithms [14,27]. Overall, these programs, customized in individual laboratories, improve the quality of nanopore signals and accelerate signal processing. However, they usually work well only for specific targets in particular models, and the learning process, despite its great potential, usually requires a large number of well-labeled data sets, which is challenging. Predictably, a more adaptable nanopore signal processing platform is required to explore the stochastic nature of a variety of molecular passage events in complex environments. Here, an adaptive program for event detection and information extraction in nanopore measurements was designed based on the MATLAB programming interface. Figure 1A,B shows a flowchart of the nanopore current signal detection and recognition program for molecular pulse signals in the nanopore sensor. The program is divided into several modules, including molecular event detection, molecular event correction, event information extraction, and data output. First, the measured current trace can be loaded directly into the program, and multiple parameters, including the local window width and double thresholds, are set to identify the current pulse signals; a threshold-based sketch of this step is given below. Second, the set of detected pulse signals is fitted and corrected to reduce the interference of background noise and the over-coverage of pulse signals in the evaluation stage. Finally, the rich information from the current signals is exported and saved for further quantitative analysis. These stages are interrelated and progress step by step, with each step utilizing parameters from the previous one.
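The following is a minimal Python sketch of the moving-window, double-threshold detection step. It is our own sketch, not the authors' MATLAB code; the window length and the sigma multipliers for the two thresholds are assumptions.

```python
import numpy as np

def detect_events(trace, fs, win=5000, n_sigma_lo=5, n_sigma_hi=2):
    """Moving-window, double-threshold event search: the baseline and noise
    level are re-estimated in each local window; a blockade event starts when
    the current drops below the lower threshold and ends when it recovers
    past the higher (return) threshold."""
    events = []
    i = 0
    while i < len(trace) - win:
        seg = trace[i:i + win]
        base, sigma = seg.mean(), seg.std()
        u0 = base - n_sigma_lo * sigma    # event-start threshold
        u1 = base - n_sigma_hi * sigma    # return-to-baseline threshold
        below = np.where(seg < u0)[0]
        if below.size == 0:
            i += win                      # no event in this window
            continue
        s = i + below[0]                  # event start point
        e = s
        while e < len(trace) and trace[e] < u1:
            e += 1                        # walk forward to the return point
        events.append((s / fs, (e - s) / fs, base - trace[s:e].min()))
        i = e + 1                         # resume the search after the event
    return events   # list of (start time, dwell time, amplitude)
```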
The program is easily operated and broadly adaptable for analyzing large amounts of nanopore data with high quality and high throughput, which will accelerate nanopore sensing development in more specialized areas of rapid clinical diagnosis and optimized treatment regimens.

Materials and Methods

λ-DNA (TaKaRa Co., Ltd., Dalian, China) was diluted in 1 M KCl at pH 8.0. All other chemical reagents used in the nanopore experiments were of analytical grade and used without further purification. The samples were prepared with Milli-Q super-purified water with a resistivity of >18 MΩ·cm. All solutions were filtered with a 0.02 µm Anotop filter (Whatman Co., Maidstone, Kent, UK) before use. The DNA was detected by nanopore sensors. A patch clamp amplifier (Axon Instruments, Axopatch 700B) was used to measure the ionic current flowing through the nanopore as a function of the applied bias voltage. The sampling frequency was above 100 kHz with a low-pass filter of 10 kHz cutoff frequency. The current signals were recorded by a 1440A digitizer (Molecular Devices, Inc., CA, USA). Data were collected over multiple experiments with the same nanopore. The whole nanopore device was set in a Faraday cage to shield electromagnetic noise.

Nanopore Signal Processing Program

Because the high-bandwidth recordings of nanopore sensors instantaneously produce a large amount of current signal data, it is necessary to process them in a programmed manner. Among digital signal software, MATLAB (MathWorks, Natick, MA, USA) is powerful software with the advantages of rapid operation on text, graphics, and sound, and interactive features such as a human-machine interface.
Thus, a signal processing program based on MATLAB was developed to extract molecular information from large amounts of nanopore data, as shown in Figure 2. Once the nanopore current signals were loaded into the program, a visual interaction interface of MATLAB was established, and the selected current segment appeared in a movable data window. The preliminary detection was performed on the raw data by setting multiple parameters, including the local window width and double thresholds. Here, the baseline current was considered a stable value over a short enough time, termed a local signal window. Therefore, the current signal trace recorded over a long time could be divided into local windows of short enough duration to track the baseline fluctuations, and the whole signal data fragment could be detected and analyzed by moving the analysis window to minimize the effect of the noise background, which improved the efficiency and accuracy of the nanopore data analysis. Additionally, the determined current pulse signals were further corrected by a second-order differential function and a rise time function in the main program. Next, the detected pulse signals were compiled into a list of molecular translocation events, and individual signals could be checked in an enlarged graphics window. The feature parameters of these molecular events, including the dwell time, signal amplitude, peak value, and pulse signal integral area of the translocation events, were exported. Finally, the extracted information could be sorted into different populations by different criteria in the graph. The program's self-adaptive and multiple criteria were used to determine the current pulse signals of nanopore sensors in a moving data window, which significantly improved the statistical efficiency of nanopore data analysis. Meanwhile, the additional signal information, including peak values and current pulse areas, was helpful for better characterizing the single-molecule features of the analytes, such as shape, surface charge, volume, conformational change, and concentration.
Determination of Molecular Events

As the bias voltage was applied, a stable ionic current trace was recorded in the nanopore sensor. Once the target molecules were driven into the pore, the baseline current instantaneously fluctuated in the form of current spike pulses. These transient spike pulses could be categorized either as current-reducing events, with the electrolyte conductance decreasing, or as current-enhancing events, with the electrolyte conductance increasing, generated by molecule translocations [5,8,10]. Because the weak pulse signals, in the picoampere to nanoampere range, emerge in ionic current traces with a strong noise background, it is challenging to recognize them. Typical pulse signals are shown as gray raw lines in Figure 3A,B, and the current trace is fitted with a black line. When the current value of the detected data point in the window returned to the baseline current level, the translocation event came to an end. Meanwhile, the process involved multiple iterations and decoupling calculations of the local baseline based upon the threshold algorithm, in order to remove the influence of previous events. For the local stable baseline, the program ran throughout the whole current trace to evaluate the data points where the current was lower than the corresponding local threshold. The split strategy with a moving analysis window not only accelerated the signal processing but also improved the accuracy of the detected event information. In an analysis window with a stable baseline current, the measured signal can be divided into three components using I(t) = I0(t) + Σ_{k=1}^{N} I_{event-k}(t) + In(t), where I0(t) is the baseline current, Σ_{k=1}^{N} I_{event-k}(t) is the event current, and In(t) is the current noise [14,25].
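To make the three-component model concrete, here is a small synthetic-trace sketch of the kind that can be used to exercise an event detector. The baseline level, noise amplitude, and event parameters are made-up values, not measurements from this study.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 100e3                                   # 100 kHz sampling, as in Methods
n = int(fs * 1.0)                            # 1 s of trace
i0 = 10e-9                                   # assumed 10 nA open-pore baseline
trace = np.full(n, i0) + rng.normal(0.0, 0.1e-9, n)   # I0(t) + In(t)

# Superimpose N rectangular blockade events, the sum of the I_event-k terms:
# (start time in s, dwell time in s, blockade depth in A), made-up values.
for start, dwell, depth in [(0.2, 1e-3, 1.0e-9), (0.5, 0.5e-3, 2.0e-9)]:
    s = int(start * fs)
    trace[s:s + int(dwell * fs)] -= depth
```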
If the data point in the window is indexed by (i), the baseline current is the mean of the current values of all data points in the window, denoted baseline(i); the initial state of detection is i = 0, and the judgment starts from the first point after the window (i + 1). When molecules enter the pore, the ionic current is transiently blocked. Therefore, the current value during a translocation event is less than the baseline current, and all data points of the event lie between the double thresholds. As shown in Figure 3A, the baseline (I0) and double thresholds (the lower threshold u0 and the higher threshold u1) were defined to distinguish between signal and noise in a local window. Once the thresholds are set, the program searches for the start point (S1) and end point (E1) in the entire trace. The start time (S1) is defined when a first level is observed away from the base current, and the event end time (E1) is defined when the signal crosses the base current value again. Considering the signal distortion due to noise filtering and digital-to-analog conversion, it is possible to miss some points at the initial position of the signal. In order to cover all data points accurately, the start point (S2) and the end point (E2) are reset by a tracking-back routine in MATLAB. Inevitably, the over-fitting of the pulse signal still causes a minor deviation in each event after the back-checking algorithm. Thus, a correction of the event is required to improve the measurement system.

Correction for Molecular Events

Typically, the through-pore ionic current is collected along with a range of different noise sources in a nanopore measurement. To better determine the molecular events, the current signal data were smoothed and corrected based on mathematical functions of the MATLAB program. First, the selected fragments of the original current trace with a low signal-to-noise ratio were fitted by Fourier functions, which are equivalent in role to a low-pass filter. Smoothed signal data are preferable for identifying the molecular events due to the reduction in background noise. After the smooth fitting of the electrical signals, the signal detection was analyzed using the correction functions, including rise time correction and second-order difference correction.
After the data fitting, the extreme points were searched by the correction function from the beginning to the end of the current pulses, as shown in Figure 3B. The extreme points of the second-order difference correction were then recognized as the start point (S3) and the end point (E3). Due to the high-bandwidth sampling and low-pass filter denoising, a rise time is required for a current pulse signal to go from the blocked state to the open state, and the pulse current is delayed in returning to baseline. In order to determine the pulse signal model more precisely, the rise time correction function was used to reset the local end point (E4), while the start location of the pulse event remained the same (S3). Although the modified changes may appear minor, these correction methods can improve the quality of the analysis result, especially for pulses of fast translocation and bumping blockage.

Electrical Signal Feature Information Extraction

Once the current pulse signal of the nanopore sensor is determined, the method further extracts multiple parameters, such as the residence time of the characteristic event, the signal pulse amplitude, and the signal integral area of the entire event; the data are then exported, which provides statistical and analytical maps of the molecular event signature information. Generally, the basic theory of the nanopore-based detection procedure mainly focuses on the amplitude (ΔI) and dwell time (Δt) of the current pulse signals. The signal amplitude of an abrupt current blockade from the baseline is associated with the volume and geometry of the molecular objects during translocation, and the dwell time is related to the dynamics of the objects within the pore. Analysis of these signatures can provide insight into the molecular structure, the surface charge, and the interaction between pore and molecules. Here, one more feature, the peak area, was described by the time integral of the current amplitude from the baseline current, referred to as the event charge deficit (ECD): ECD = ∫ΔI(t)dt [31][32][33].
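In code, the ECD of a single event reduces to a time integral of the blockade depth. A minimal sketch (ours, not the program's MATLAB routine), using the trapezoidal rule:

```python
import numpy as np

def event_charge_deficit(trace, fs, start, end, baseline):
    """Event charge deficit: ECD = integral of ΔI(t) dt over one event,
    where ΔI(t) is the blockade depth below the local baseline."""
    delta_i = baseline - trace[start:end]      # blockade depth ΔI(t), in A
    return np.trapz(delta_i, dx=1.0 / fs)      # charge deficit, in A·s
```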
It is known that nonspherical molecules passing through the pore in different conformations, such as DNA in folded and unfolded states, induce compensating effects: for folded molecules, the translocation time decreases while the current blockage increases. However, the integrated areas of the current pulses are equal for the same molecules of the same length in different conformations. Thus, the ECD is an important standard for recognizing individual molecules or particles in varied structures and orientations during translocation in a variable environment. This additional analysis is helpful for determining the molecular species more accurately and for analyzing changes in molecular shape and structure in detail. Thus, the multi-parametric information extracted by our program provides a more comprehensive and accurate description of the analytes from nanopore current signals.

Applications to DNA Detection

Finally, the current signals of DNA translocation through a nanopore were analyzed by the event detection program based on the MATLAB platform. A large number of the transient current pulses generated by DNA passing through the pore were recorded by the nanopore sensors, as shown in Figure 4. The features of the current amplitude, the dwell time, and the integral area of the pulses were analyzed by statistical functions. As represented in Figure 4, three distinct dynamic translocation behaviors were observed from this analysis. Linear DNA molecules translocate over a longer period with a smaller blockade current, while folded ones pass through the nanopore more quickly and with a larger current blockage. In this demonstration of DNA translocation, the phenomenon of agglomeration and packaging of long DNA trapped at the orifice of the nanopore is not negligible; it induces additional pulse signals with longer durations and amplitude changes than expected from DNA translocation. This phenomenon affects the capture statistics of pulse signals in nanopore sensing. In order to better distinguish the real signal of DNA translocation through a nanopore, an additional feature, the event charge deficit (ECD), was characterized along with the amplitude (ΔI) and dwell time (Δt) of the current pulse signals in our study. It was confirmed that the ECD, referred to as the integral area of the obstructed ionic current over the duration of an event, is equal for the same class of DNA passing freely through the nanopore. For trapped molecules, the statistical analysis of the ECD differs from that of DNA freely passing through the pore, as shown in Figure 5. As DNA molecules freely enter the pore in linear and folded forms, the ECD statistics follow a normal distribution model with one peak. More peaks appear in the ECD distribution in the presence of long DNA trapped in the pore. Therefore, our software, with its comprehensive pulse-signal features, provides a better description of the dynamic process of DNA translocation and, in coordination with optimization of the experimental conditions (including temperature, pH, and voltage), can help effectively reduce the DNA agglomeration phenomenon. Therefore, the program will be useful for identifying and evaluating the diverse dynamic behavior of objects interacting with nanopores at the single-molecule level.
Conclusions

Signal processing is an indispensable component of nanopore sensing. However, current research focuses on nanopore equipment and the variety of detection targets; the processing of the huge amount of data from complex nanopore signals is still not deep enough and remains a time-consuming and unstandardized process. Thus, a well-defined current readout platform is necessary to better detect analytes and characterize their properties. In our work, a robust data processing program for current pulse signals detected by nanopore sensors was developed based on the MATLAB platform. The program addresses issues such as signal noise and baseline stability; the selected current segment was rapidly analyzed with adaptive thresholds and corrected by multiple functions to reduce background noise in a moving local window, which greatly accelerated the processing efficiency of nanopore data analysis. Moreover, multi-dimensional information, such as the residence time of the detected pulse signal, the amplitude of the pulse signal, and the integral information of the pulse signal, was extracted and assessed, which will provide distinct scenarios of molecular translocation at the single-molecule level.
Therefore, this automatic and accurate signal processing program can replace manual recognition, which is time-consuming and subject to individual bias, and will promote the development of nanopore sensing devices in various fields of diagnostic systems and precision medicine.
Polynomial Solutions of Differential Equations

We show that any differential operator of the form $L(y)=\sum_{k=0}^{k=N} a_{k}(x) y^{(k)}$, where $a_k$ is a real polynomial of degree $\leq k$, has all real eigenvalues in the space of polynomials of degree at most $n$, for all $n$. The eigenvalues are given by the coefficient of $x^n$ in $L(x^{n})$. If these eigenvalues are distinct, then there is a unique monic polynomial of degree $n$ which is an eigenfunction of the operator $L$, for every non-negative integer $n$. As an application we recover Bochner's classification of second order ODEs with polynomial coefficients and polynomial solutions, as well as a family of non-classical polynomials.

The subject of polynomial solutions of differential equations is a classical theme, going back to Routh [10] and Bochner [3]. A comprehensive survey of the recent literature is given in [6]. One family of polynomials, namely the Romanovski polynomials [4,9], is missing even in the recent mathematics literature on the subject [8]; these polynomials are the main subject of some current physics literature [9,11]. Their existence and, under a mild condition, uniqueness and orthogonality follow from the propositions below. The proofs use elementary linear algebra and are suitable for classroom exposition. The same ideas work for higher order equations [1].

Proposition 1. The operator $L$ maps $P_j$, the space of polynomials of degree at most $j$, into itself, and $L$ has eigenfunctions in each $P_j$; the eigenvalue attached to degree $n$ is the coefficient $\lambda_n$ of $x^n$ in $L(x^n)$. Writing $a_k(x) = \alpha_k x^k + (\text{lower order terms})$, this gives $\lambda_n = \sum_k \alpha_k \, n(n-1)\cdots(n-k+1)$, which is real. Assume that the eigenvalues of $L$ are distinct. Then $P_n$ has a basis of eigenfunctions and, for reasons of degree, there must be an eigenfunction of degree $n$, for every $n$. Therefore, up to a constant, there is a unique eigenfunction of degree $n$ for all $n$.

We now concentrate on second order operators, leaving the higher order case to [1]. Let $L(y) = a_2(x) y'' + a_1(x) y' + a_0 y$ with $\deg a_2 \leq 2$ and $\deg a_1 \leq 1$; by scaling and translation of the variable, $a_2$ may be brought to a normal form. Applying the above proposition we then have the following result.

Proposition 2. If the eigenvalues $\lambda_n$ are distinct, then the eigenspace in $P_n$ for the eigenvalue $\lambda_n$ is one-dimensional; if two of the eigenvalues coincide, an eigenspace can be two-dimensional.

In this proposition there is no claim to any kind of orthogonality properties. Nevertheless, the non-classical functions appearing here are of great interest in physics, and their properties and applications are investigated in [4,9,11]. The classical Legendre, Hermite, Laguerre and Jacobi polynomials make their appearance as soon as one searches for self-adjoint operators. Their existence and orthogonality properties [cf. 8, pp. 80-106; 2; 7] can be obtained elegantly in the context of elementary Sturm-Liouville theory.

Proposition 3. Let $L$ be the second order operator defined above on the space $C$ of functions which are at least two times differentiable on a finite interval $I$. Define a bilinear function on $C$ by $\langle u, v \rangle = \int_I u\, v\, p\, dx$, where $p$ is two times differentiable, non-negative, and does not vanish identically in any subinterval of $I$. Then $L$ is self-adjoint for this bilinear function provided $(a_2 p)' = a_1 p$ and the boundary terms vanish at the endpoints of $I$.

Proof: Let $u, v \in C$ and integrate $\langle L(u), v \rangle$ by parts. Equating coefficients of $u$ and $u'$ on both sides, we get the differential equation for $p$: $(a_2 p)' = a_1 p$.

Examples: (1) Jacobi polynomials. First note that, for any differentiable function $f$ with $f'$ continuous, the integral of $f$ against the weight $(1-x)^{\alpha}(1+x)^{\beta}$ on $[-1,1]$ converges for $\alpha, \beta > -1$, as one sees by using integration by parts.
Consider the corresponding equation; with this weight, $L$ is a self-adjoint operator on all polynomials of degree at most $n$, and so there must be, up to a scalar, a unique polynomial which is an eigenfunction of $L$ for the eigenvalue $-n(n-1) + n\alpha$. These polynomials therefore satisfy the stated differential equation.

(2) The equation investigated in [5], where the eigenvalues were determined experimentally, by machine computations. Here we determine the eigenvalues in the framework provided by Proposition 3. Let $P_n$ be the space of all polynomials of degree at most $n$. As $L$ maps $P_n$ into itself, the eigenvalues of $L$ are given by the coefficient of $x^n$ in $L(x^n)$; they turn out to be $-n^2$. As these eigenvalues are distinct, there is, up to a constant, a unique polynomial of degree $n$ which is an eigenfunction of $L$. The weight function is defined on the interval $[0,1]$ and it is not integrable. However, the operator maps the space $V$ of all polynomials that are multiples of $(1-t)$ into itself. The requirement for $L$ to be self-adjoint on $V$ is that the boundary terms vanish; as $\xi$ and $\eta$ vanish at 1, the operator $L$ is indeed self-adjoint on $V$. Let $V_n = V \cap P_n$, where $P_n$ is the space of all polynomials of degree at most $n$. As the codimension of $V_n$ in $V_{n+1}$ is 1, the operator $L$ must have an eigenvector in $V_n$ for all the degrees from 1 to $(n+1)$. Therefore, up to a scalar, there is a unique eigenfunction of degree $(n+1)$ which is a multiple of $(1-t)$, and all these functions are orthogonal for the weight. Using the uniqueness up to scalars of these functions, the eigenfunctions are determined by the differential equation and can be computed explicitly.

(3) The finite orthogonality of Romanovski polynomials. These polynomials are investigated in Refs. [11,9], and their finite orthogonality is also proved there. Here we establish this in the framework of Proposition 3. The Romanovski polynomials are eigenfunctions of a second order operator of the above type; for $\alpha$ not an integer, there is only one monic polynomial in every degree which is an eigenfunction of $L$, while for $\alpha$ a non-positive integer the eigenspaces can be 2-dimensional for certain degrees (Proposition 2). The formal weight function has only finitely many finite moments, which is the source of the finite orthogonality. For several non-trivial applications to problems in physics, the reader is referred to the paper [9].

Conclusion: In this note, which should have been written at least a hundred years ago, we have rederived several results from the classical and recent literature from a unified point of view, by a straightforward application of basic linear algebra. Some of these polynomials are not discussed in the standard textbooks on the subject, e.g. [8], as pointed out in Ref. [9]. We have also derived the orthogonality, classical as well as finite, of these polynomials from a unified point of view.
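To make the linear-algebra argument of the note concrete, the following small computational check (not part of the original note; it assumes the SymPy library and takes the Hermite operator $L(y) = y'' - 2xy'$, which satisfies $\deg a_k \leq k$, as the test case) reads off the eigenvalue on $P_n$ as the coefficient of $x^n$ in $L(x^n)$ and solves a linear system for the monic eigenfunction.

# Illustrative sketch (assumes SymPy): verify that for L(y) = y'' - 2x y'
# the eigenvalue on P_n is the coefficient of x^n in L(x^n), namely -2n,
# and solve for the monic eigenfunction of degree n.
import sympy as sp

x = sp.symbols('x')

def L(y):
    return sp.diff(y, x, 2) - 2 * x * sp.diff(y, x)

def eigenvalue(n):
    # coefficient of x^n in L(x^n)
    return sp.Poly(L(x**n), x).coeff_monomial(x**n)

def monic_eigenfunction(n):
    # Matrix of L on the basis 1, x, ..., x^n of P_n (column j = L(x^j)).
    M = sp.zeros(n + 1, n + 1)
    for j in range(n + 1):
        p = sp.Poly(L(x**j), x)
        for i in range(n + 1):
            M[i, j] = p.coeff_monomial(x**i)
    lam = M[n, n]
    # Solve (M - lam*I) c = 0 with the leading coefficient fixed to 1.
    c = list(sp.symbols(f'c0:{n}')) + [sp.Integer(1)]
    eqs = (M - lam * sp.eye(n + 1)) * sp.Matrix(c)
    sol = sp.solve(list(eqs), c[:n], dict=True)[0]
    return sp.expand(sum(sol.get(ci, ci) * x**i for i, ci in enumerate(c)))

for n in range(1, 5):
    print(n, eigenvalue(n), monic_eigenfunction(n))

For n = 3 this returns the monic Hermite polynomial $x^3 - \tfrac{3}{2}x$ with eigenvalue $-6$, in line with Proposition 1.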
1,500.8
2010-02-22T00:00:00.000
[ "Mathematics" ]
Melittin Induced G1 Cell Cycle Arrest and Apoptosis in Chago-K1 Human Bronchogenic Carcinoma Cells and Inhibited the Differentiation of THP-1 Cells into Tumour-Associated Macrophages

Background: Bronchogenic carcinoma (lung cancer) is one of the leading causes of death. Although many compounds isolated from natural products have been used to treat it, drug resistance is a serious problem, and alternative anti-cancer drugs are required. Here, melittin from Apis mellifera venom was used, and its effects on bronchogenic carcinoma cell proliferation and tumour-associated macrophage differentiation were evaluated. Methods: The half maximal inhibitory concentration (IC50) of melittin was measured by MTT. Cell death was observed by annexin V and propidium iodide (PI) co-staining followed by flow cytometry. Cell cycle arrest was revealed by PI staining and flow cytometry. To investigate the tumour microenvironment, differentiation of circulating monocytes (THP-1) into tumour-associated macrophages (TAMs) was assayed by sandwich ELISA, and interleukin (IL)-10 levels were determined. Cell proliferation and migration were observed by flat-plate colony formation. Secretion of vascular endothelial growth factor (VEGF) was detected by ELISA. Changes in the expression levels of CatS, Bcl-2, and MADD were measured by quantitative RT-PCR. Results: Melittin was significantly more cytotoxic (p < 0.01) to human bronchogenic carcinoma cells (ChaGo-K1) than to the control human lung fibroblast (Wi-38) cells. At 2.5 µM, melittin caused ChaGo-K1 cells to undergo apoptosis and cell cycle arrest at the G1 phase. The IL-10 levels showed that melittin significantly inhibited the differentiation of THP-1 cells into TAMs (p < 0.05) and reduced the number of colonies formed in the treated ChaGo-K1 cells compared to the untreated cells. However, melittin did not affect angiogenesis in ChaGo-K1 cells. Unlike MADD, Bcl-2 was significantly up-regulated (p < 0.05) in melittin-treated ChaGo-K1 cells. Conclusion: Melittin can be used as an alternative agent for lung cancer treatment because of its cytotoxicity against ChaGo-K1 cells and its inhibition of the differentiation of THP-1 cells into TAMs.

Introduction: The tumour microenvironment can be divided into two broad groups of cells: one comprises immune cells, such as granulocytes, lymphocytes, and macrophages, and the other comprises mesenchymal cells, such as endothelial cells and fibroblasts (Kerkar and Restifo, 2012). Macrophages can be classified into two distinct phenotypes, M1 and M2. The expression of CD38, G-protein coupled receptor 18 (Gpr18), and formyl peptide receptor 2 (Fpr2) genes is unique to M1 macrophages, while expression of early growth response protein 2 (Egr2) and cMyc is unique to M2 macrophages (Jablonski et al., 2015). Additional markers have been reported for M1 macrophages (iNOS, IL-6, SOCS3, and TNF-α) and for M2 macrophages (ARG1 and CCL24) (Qin et al., 2017). Furthermore, the expression pattern of interleukin (IL)-10, IL-12, transforming growth factor β1, and TNF-α is important for investigating M1/M2 polarization (Mohammadi et al., 2017a). The mesenchymal cells play an important role in cell support and homeostasis, and the surrounding extracellular matrix gives strength and flexibility to the tumour (Thanee et al., 2012). Cancer cells in a tumour microenvironment secrete cytokines, chemokines, and growth factors for cancer cell proliferation and metastasis (Wong and Chang, 2018).
An important mechanism in the tumour microenvironment is the activation of circulating monocytes to enter the tumour, where they then differentiate and become tumour-associated macrophages (TAMs) under the influence of IL-4 and IL-10 secreted by the cancer cells (Pilling et al., 2017; Shao et al., 2016). Overall, TAMs may be a major component of the tumour microenvironment and are directly involved in tumour growth, infiltration, and metastasis. In addition, TAMs can suppress the immune functions against the cancer cells, promoting drug and radiation resistance (Shao et al., 2016). Conventional anti-cancer drugs frequently become ineffective due to chemoresistance, leading to invasion, metastasis, angiogenesis, and inflammation (Konrad et al., 2017). Thus, it is essential to find newer compounds to inhibit lung cancer cells and the differentiation of macrophages into TAMs.

For many years, bee venom has been used in apitherapy. Although it is composed of a diverse range of proteins and peptides, melittin is the major protein and comprises about 50% (w/w) of dry bee venom (Gajski and Garaj-Vrhovac, 2013). Melittin has analgesic, anti-inflammatory, and antimicrobial activities (Gajski and Garaj-Vrhovac, 2013; Lin et al., 2017; Rady et al., 2017; Shi et al., 2016). However, no reports on the effects of melittin on lung cancer proliferation and monocyte differentiation are available. Here, the in vitro cytotoxicity of melittin against the human bronchogenic carcinoma (ChaGo-K1), human lung fibroblast (Wi-38), and human monocytic leukaemia (THP-1) cell lines was tested. Cell death and the changes in cell cycle arrest in melittin-treated ChaGo-K1 cells were evaluated in comparison to the Wi-38 cells. Additionally, the effects of melittin on the differentiation of monocytes, in vitro cell migration, colony formation, and down-regulation of vascular endothelial growth factor (VEGF) levels involved in angiogenesis were evaluated. Finally, the changes in gene expression levels of cathepsin S (CatS), B-cell lymphoma-2 (Bcl-2), and mitogen activating protein-kinase activating death domain (MADD) were reported.

Melittin cytotoxicity assay: ChaGo-K1 and Wi-38 cells were suspended in CM-R and CM-M, respectively, at a concentration of 10^5 cells/well and seeded at 200 µL/well in 96-well culture plates. After an overnight incubation at 37°C in a 5% (v/v) CO2 atmosphere, the media were supplemented with melittin at a final concentration of 7, 0.7, 0.007, 0.0007, or 0 µM and cultured for 24, 48, and 72 h at 37°C with 5% (v/v) CO2. Thereafter, 0.12 µM 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) was added and the cells were incubated for another 4 h before the culture medium was replaced with 150 µL dimethyl sulfoxide and the absorbance at 540 nm (A540) was measured using a Multiskan™ FC microplate photometer (Thermo Fisher Scientific Inc., MA, USA). The percentage of viable cells relative to the control was calculated as shown below:

Relative cell survival (%) = (A540 of sample × 100) / (A540 of control)

A graph of the relative cell survival (%) against the concentration of melittin was plotted to derive the IC50 and IC70.
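The IC50 and IC70 read off from the plotted dose-response curve can also be estimated numerically. The following sketch is not part of the original methods; it assumes NumPy and SciPy are available and uses invented survival values purely for illustration, fitting a four-parameter logistic curve and inverting it.

# Hypothetical sketch: fit a 4-parameter logistic dose-response curve to
# relative-survival data and invert it to estimate IC50/IC70.  The
# concentrations and survival values below are illustrative, not the
# paper's raw data.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.0007, 0.007, 0.7, 7.0])   # melittin, uM
surv = np.array([98.0, 95.0, 52.0, 8.0])     # relative survival, %

def logistic4(c, top, bottom, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

popt, _ = curve_fit(logistic4, conc, surv, p0=(100.0, 0.0, 0.8, 1.0))
top, bottom, ic50, hill = popt

def inhibitory_conc(percent_inhibition):
    # concentration at which survival drops to (100 - percent_inhibition)%
    target = 100.0 - percent_inhibition
    return ic50 * ((top - bottom) / (target - bottom) - 1.0) ** (1.0 / hill)

print(f"IC50 ~ {inhibitory_conc(50):.2f} uM, IC70 ~ {inhibitory_conc(70):.2f} uM")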
Programmed cell death: ChaGo-K1 cells were suspended in CM-R medium and seeded at 10^6 cells/flask in 25 mL flat-sided cell culture flasks. Five groups of cells were prepared: (i) unstained cells, (ii) stained cells, stained cells treated with melittin at a final concentration of (iii) 0.7 µM (IC50) or (iv) 2.5 µM (IC70), and (v) stained cells treated with 0.9 µM doxorubicin. After treatment, the cells were incubated for 24 h at 37°C with 5% (v/v) CO2, then harvested, washed twice in 1 mL cold phosphate-buffered saline, pH 7.4 (PBS), and resuspended in 50 µL of 1× binding buffer (10 mM HEPES, pH 7.4, 140 mM NaCl, and 2.5 mM CaCl2). Except for the unstained group, the cells were then stained with 1 µL annexin V-FITC Alexa Fluor® 488 and 0.004 µM PI solution at room temperature in the dark for 30 min prior to flow cytometric analysis using an FC 500 MPL cytometer (Beckman Coulter Inc., CA, USA).

Cell cycle analysis: ChaGo-K1 cells were suspended in CM-R and seeded at 10^6 cells/flask in 25 mL flat-sided cell culture flasks. Four groups of cells were prepared: (i) untreated cells, melittin-treated cells at a final concentration of (ii) 0.7 µM (IC50) or (iii) 2.5 µM (IC70), and (iv) cells treated with 0.9 µM doxorubicin. The cells were incubated for 24 h at 37°C with 5% (v/v) CO2, then harvested and washed with 1 mL cold PBS. The cells were then fixed with 1 mL 70% (v/v) ethanol at −20°C overnight. The ethanol was removed, and the cells were washed in 1 mL cold PBS, resuspended in 250 µL cold PBS with 0.5 U of RNase A, and incubated at 37°C for 30 min. The cells were then centrifuged, resuspended in 37.5 µL of PBS with 0.02 µM PI solution, and incubated at room temperature in the dark for 30 min prior to flow cytometric analysis.

Monocyte-to-macrophage differentiation: THP-1-derived macrophages: THP-1 cells were cultured at 5 × 10^5 cells/well in 24-well culture plates in CM-R supplemented with 200 nM PMA for 72 h at 37°C in a 5% (v/v) CO2 atmosphere to induce differentiation. The differentiated cells displayed a cellular morphology similar to macrophages and adhered to the culture plates. Cells were treated with melittin at a final concentration of 0 (control), 0.044, 0.088, 0.175, or 0.350 µM for 24 h at 37°C with 5% (v/v) CO2. As a control, PBS was added to the PMA-primed THP-1 cells, similar to Mohammadi et al. (2017b). The culture medium was then harvested and screened for IL-10 using the human IL-10 ELISA kit. M2-polarized macrophages: The same method was followed for the preparation of THP-1-derived M2 macrophages. After melittin treatment, the cells were incubated for 24 h at 37°C with 5% (v/v) CO2, but with the addition of IL-4 (25 ng/mL) and IL-13 (25 ng/mL) for a further 24 h to induce differentiation into M2-polarized macrophages. IL-10 levels in the medium were measured using the human IL-10 ELISA kit.

Colony formation: ChaGo-K1 cells were suspended in CM-R at 10^3 cells/well in a 6-well plate and incubated at 37°C in a 5% (v/v) CO2 atmosphere in the presence of melittin at a final concentration of 0, 0.175, 0.35, 0.7, 1.4, or 2.8 µM for 24 h. The cells were then washed twice with cold PBS and incubated in 9 mL of CM-R for 14 d at 37°C in a 5% (v/v) CO2 atmosphere to allow colony formation. The colonies were washed with cold PBS, fixed in 4% (v/v) neutral-buffered formalin for 10 min, and then stained with crystal violet. The number of colonies and their sizes were measured under light microscopy.

Secretion of VEGF: ChaGo-K1 cells were suspended in CM-R at 5 × 10^5 cells/well in a 6-well plate and incubated with 0 or 0.7 µM melittin at 37°C in a 5% (v/v) CO2 atmosphere for 24 h. The culture medium was then harvested, and the concentration of VEGF was measured using the VEGF Human BioAssay™ ELISA Development Kit.

Gene expression: Total RNA was isolated using the RNeasy® Mini Kit (Cat# 74104, Qiagen, Valencia, CA, USA). The concentration and purity of the extracted total RNA were measured by the absorbance at 260 and 280 nm. Quantitative reverse transcriptase polymerase chain reaction (RT-qPCR) was used to amplify CatS, Bcl-2, MADD, and ß-actin mRNA using the One Step SYBR PrimeScript RT-qPCR Kit II (Takara, Tokyo, Japan) as per the manufacturer's protocol. The respective forward and reverse primers of CatS, including the optimal RT-qPCR conditions, were obtained from , while those of ß-actin, Bcl-2, and MADD, including the optimal RT-qPCR conditions, were obtained from Buahorm et al. (2015). The relative expression levels of the target genes were normalized to the expression level of the ß-actin gene as control.

Statistical analysis: Each assay was performed in triplicate, and the results are presented as mean ± 1 standard deviation (SD). Data were analysed by ANOVA, and the significance of differences between means was ascertained by Tukey's-b or Duncan's tests. Differences were considered statistically significant at p < 0.05.

Cytotoxicity of melittin: Melittin inhibited the proliferation of both ChaGo-K1 and Wi-38 cells (Figure 1), with IC50 values of 0.79 ± 0.02 and 1.91 ± 0.10 µM, respectively, during 24 h of incubation (Figure 1A). The melittin IC50 value did not significantly differ when the incubation period was extended to 48 or 72 h.

Apoptosis in ChaGo-K1 cells: To investigate the apoptotic effect of melittin on ChaGo-K1 cells, programmed cell death was analysed by flow cytometry. ChaGo-K1 cells were divided into four groups, as viable cells (annexin V-FITC−/PI−), early apoptotic cells (annexin V-FITC+/PI−), late apoptotic cells (annexin V-FITC+/PI+), and necrotic cells (annexin V-FITC−/PI+), using unstained cells (autofluorescence) as reference. The results showed that 0.7 µM melittin (IC50) gave a similar trend to doxorubicin, primarily increasing the early apoptotic cells during the 24 h incubation period (Figure 2 and Table 1). However, 2.5 µM melittin (IC70) increased the proportion of late apoptotic and necrotic cells (Figure 2 and Table 1).

Cell cycle arrest by melittin: Flow cytometric analysis of PI-stained cells revealed that the in vitro antiproliferative effect of melittin on ChaGo-K1 cells was likely caused by cell cycle arrest, with different concentrations of melittin causing cell cycle arrest in different phases. Melittin at a concentration of 0.7 µM (IC50) caused a slight cell cycle arrest at the G2/M phase, while at 2.5 µM (IC70) the cell cycle arrest was stronger and earlier, at the G0/G1 phase. On the other hand, doxorubicin, a currently used chemotherapeutic drug, caused a strong cell cycle arrest at the G2/M phase at 0.9 µM (IC50) (Figure 3). These results support the inhibitory effect of melittin on the cell cycle progression of ChaGo-K1 cells.

Monocyte-to-macrophage differentiation: The effect of melittin on IL-10 production by PMA-induced THP-1 macrophages and M2-polarized macrophages (induced with IL-4 and IL-13) was investigated. The production of IL-10 in THP-1-derived macrophages increased with increasing melittin concentrations, but the increase was not statistically significant (Figure 4). IL-10 production by the M2-polarized macrophages was significantly greater (1.4-fold) than that of the THP-1-derived macrophages.
However, melittin slightly inhibited the IL-10 production by the M2-polarized cells, and so potentially inhibited the differentiation of monocytes into M2 macrophages at the concentrations of 0.175 and 0.350 µM (Figure 4).

Cell migration, angiogenesis, and changes in gene expression: Melittin could restrain the migration and colony formation of ChaGo-K1 (Figure 5A) and Wi-38 cells (Figure 5B). Melittin at ≥1.4 µM completely inhibited colony formation in both ChaGo-K1 and Wi-38 cells; at a lower concentration (0.7 µM), it inhibited colony formation in ChaGo-K1 cells more than in Wi-38 cells, perhaps owing to the smaller colonies formed by ChaGo-K1 cells. Furthermore, it was found that melittin did not affect VEGF secretion or angiogenesis in ChaGo-K1 cells compared to untreated cells (0.171 and 0.191 ng/mL, respectively). Cathepsin S plays an important role in cell proliferation, angiogenesis, and metastasis (Gocheva et al., 2006), and increased expression of CatS is consistently related to malignancies (Xu et al., 2009). The expression of CatS in the melittin-treated cells was not significantly higher than in the control (Figure 6). Thus, the programmed cell death of melittin-treated ChaGo-K1 cells does not seem to involve angiogenesis or CatS expression. The expression levels of Bcl-2 and MADD, as representative apoptotic genes, were evaluated (Figure 6). A significantly higher Bcl-2 and a lower MADD expression were observed in melittin-treated ChaGo-K1 cells, which supports the melittin-induced apoptosis observed by the annexin V-FITC/PI staining and flow cytometric analysis (Figures 2 and 3).

Discussion: New chemotherapeutic compounds, especially from plants, have been used to treat cancer patients. However, side effects and drug resistance eventually occur. In recent years, peptides and immunotherapy have emerged as other promising therapeutic approaches. Thus, melittin was introduced here as a potential therapeutic agent for the treatment of lung cancers. It is beneficial in inducing early apoptosis in ChaGo-K1 cells after a 4 h exposure at the IC50 dose, which is earlier than cordycepin, isolated from Cordyceps sinensis, which took 48 h (Su et al., 2017). Furthermore, vglycin, a novel natural polypeptide isolated from pea seeds, took 24 h to cause a significant increase in apoptosis in the CT-26, SW480, and NCI-H716 colon cancer cell lines at 10 µM (Gao et al., 2017).

Figure 4. IL-10 production in melittin-treated THP-1 cells. The data show IL-10 production from PMA-induced macrophages (left, in blue) and from IL-4- and IL-13-induced M2-polarized macrophages (right, in green). * represents a significant difference between the untreated groups of the two macrophage types (p < 0.05), and different letters within each group represent significantly different IL-10 production (p < 0.05).

Here, melittin-induced apoptosis of ChaGo-K1 cells was supported by the observed morphological changes (cell shrinkage, rounding, nucleus and organelle condensation, cell floating, a decrease in viable cell density, and the presence of cell debris) and by the significantly higher and lower expression of Bcl-2 and MADD transcripts, respectively. Similarly altered gene expression was seen in the apoptosis of breast cancer BT-474 cells induced by cardanol (Buahorm et al., 2015). However, whether melittin induced apoptosis or necrosis depended on the dose. The tumour microenvironment is important for cancer survival and resistance.
Hence, monocyte-to-macrophage differentiation was evaluated, focusing on the two distinct macrophage phenotypes: M1, which is involved in anti-tumour immunity, and the activated M2, which has pro-tumour properties. Ruffell et al. (2012) reported that M2-polarized macrophages promoted tumour growth and survival, and possibly caused cancer resistance. Thus, M2-polarized macrophages might be involved in TAMs. In this study, it is interesting that melittin inhibited M2 macrophage differentiation, but not that of THP-1 cells. However, the molecular mechanism behind this inhibition by melittin awaits elucidation. It is possible that the M2 macrophages were reprogrammed to the M1 type (Yu et al., 2017), and so the expression of biomarkers for both M1 and M2 macrophages needs to be evaluated. Interestingly, ß-elemene, extracted from the Chinese herb Curcuma wenyujin, could induce the re-differentiation of M2 macrophages to the M1 type, with down-regulation of Arg-1 (an M2 macrophage biomarker) and up-regulation of iNOS (an M1 macrophage biomarker) observed (Yu et al., 2017).

IL-10, which reduces the proliferation, cytokine production, and migratory capacity of effector T-cells, is one of the major obstacles to immunotherapy (Dennis et al., 2013; Joyce and Fearon, 2015). Thus, localized inhibition of IL-10 could be a promising alternative co-treatment strategy. The combination of heterologous vaccination and cyclophosphamide was shown to cause immunopotentiation, in terms of a higher number of effector T-cells and tumour growth inhibition (Xia et al., 2016). In this study, melittin was found to decrease IL-10 production in M2 macrophages, but the underlying molecular mechanism is unknown.

In vitro colony formation is a product of both cell proliferation and migration. That melittin inhibited colony formation in both ChaGo-K1 and Wi-38 cells suggests that its specificity for cancer cells was rather low. In tumour cells and TAMs, CatS is produced to promote tumour growth, angiogenesis, migration, invasion, and metastasis (Fan et al., 2012; Sevenich et al., 2014). In this study, a non-significantly higher expression of CatS was observed in melittin-treated ChaGo-K1 cells. Additionally, there was no significant change in the level of VEGF. It is possible that melittin affected ChaGo-K1 cells via cell migration, but not via the CatS and angiogenesis pathways. Furthermore, since there are many other types of cathepsins, such as CatB and CatD, which are involved in isocitrate dehydrogenase-wild-type glioblastoma (Koh et al., 2017), and CatH, involved in malignant prostate cancer (Jevnikar et al., 2013), the changes in expression of those cathepsins should also be ascertained. For now, compounds synergistic with melittin, especially those that can inhibit CatS expression and angiogenesis, are of interest in order to inhibit ChaGo-K1 cells more effectively.

This study indicated that melittin could suppress ChaGo-K1 cell proliferation and cause apoptosis by up-regulating Bcl-2 and down-regulating MADD expression. This suggests that melittin is a promising anti-lung-cancer peptide. Future in vivo studies are required to determine the underlying molecular mechanisms.

Figure 6. ChaGo-K1 cells were cultured for 24 h in CM-R with 0 µM (control) or 0.7 µM melittin. Data are shown as mean ± 1 SD, derived from three independent repeats, and the relative expression levels are normalized to that of the housekeeping gene ß-actin, used as control. Significant differences between the control and treated cells are shown at p < 0.05 (*).
4,564
2018-12-01T00:00:00.000
[ "Medicine", "Biology" ]
Analysis of Cloud Network Management Using Resource Allocation and Task Scheduling Services

Network failure in a cloud datacenter could result from inefficient resource allocation, scheduling, and logical segmentation of physical machines (network constraints). This is highly undesirable in Distributed Cloud Computing Networks (DCCNs) running mission-critical services. Such failure has been identified in the University of Nigeria datacenter network situated in the south-eastern part of Nigeria. In this paper, the architectural decomposition of a proposed DCCN was carried out while exploring its functionalities for grid performance. Virtualization services such as resource allocation and task scheduling were employed in heterogeneous server clusters. The validation of the DCCN performance was carried out using trace files from Riverbed Modeller 17.5 in order to ascertain the influence of virtualization on the server resource pool. The QoS metrics considered in the analysis are the service delay time, resource availability, throughput, and utilization. From the validation analysis of the DCCN, the following results were obtained: average throughput (bytes/s) for DCCN = 40.00%, DCell = 33.33%, and BCube = 26.67%; average resource availability response for DCCN = 38.46%, DCell = 33.33%, and BCube = 28.21%; DCCN density on resource utilization = 40% (when logically isolated) and 60% (when not logically isolated). From the results, it was concluded that using virtualization in cloud datacenter servers will result in enhanced server performance, offering a lower average wait time even with a higher request rate and longer duration of resource use (service availability). By evaluating these recursive architectural designs for network operations, enterprises ready for the spine-leaf model could further develop their network resource management schemes for optimal performance.

Keywords—Resource Provisioning; Virtualization; Cloud Computing; Service Availability; Smart Green Energy; QoS

A. Background Study

Middleware solutions for heterogeneous distributed cloud datacenters aim to respond to the high requirements of large-scale distributed applications relating to performance, flexibility, portability, availability, reliability, trust, and scalability in the context of a high number of users. These are usually considered in the large geographic distribution of heterogeneous hardware and software resources. The concepts used in the design, implementation, and deployment of systems with such capabilities could be based on demand-side management and monitoring, optimization via scheduling, sharing, load balancing, consolidation, and other high-performance grid-based techniques. In most cases, new services and functionalities could be added to the middleware to enhance data-intensive and highly demanding applications with low cost and high performance. New cloud computing architectures must be designed to incorporate solutions for the management of data, resources, tasks, and applications. They must ensure fault tolerance, accounting, service on demand, and other functions required by user communities to operate effectively in a shared-services environment.
These observations formed the philosophy of an on-going research project known as the Smart Green Energy Management System (SGEMS). The system is a renewable energy system based on a solar PV microgrid, a cloud energy meter, and a Distributed Cloud Computing Network (DCCN). In this system, the cloud datacenter server acts as the supporting platform for Enterprise Energy Tracking Analytic Cloud Portal (EETACP) deployment. But as energy users send their job tasks, fairness must be maintained optimally. Fairness in this context refers to the method of having each job receive an equal (or weighted) share of computing resources at any given moment. The DCCN must satisfy the fairness criteria for the EETACP workload in the SGEMS research (a weighted fair-share sketch follows below).

Studies have shown that datacenters are now the enterprise foundations that support many Internet applications, enterprise operations, and novel scientific computations like cloud computing services for distributed energy management platforms. In fact, they are large-scale data-intensive computing infrastructure. The major challenge facing smart green IT researchers is how to build a scalable cloud-based DCN platform that delivers significant aggregate bandwidth and excellent Quality of Service (QoS) for smart grid web platforms. On this issue, research efforts such as Fat-tree [1], [2], VL2 [3], Monsoon [4], DCell [5], MDCube [6], BCube [7], FiConn [8], DPillar [9], DRweb [10], SVLAN [11], and Scafida [12] have been proposed in recent years based on their switch- and server-centric network architectures, with no attention to excellent resource management schemes. However, these works have primarily made significant contributions to server interconnectivity. For fairness in this type of network, resource allocation and scheduling remain indispensable.
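The fairness notion described above can be made concrete with a short sketch. This is an illustrative weighted max-min fair allocator, not the paper's own scheduler; the function and variable names are invented.

# Illustrative sketch: weighted max-min fair share of a single resource
# pool among jobs (each job receives an equal or weighted share at any
# given moment, capped at its own demand).
def weighted_fair_share(capacity, demands, weights):
    """demands[i]: resource units job i wants; weights[i]: its weight."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(capacity)
    while active and remaining > 1e-12:
        total_w = sum(weights[i] for i in active)
        share = remaining / total_w
        # jobs whose residual demand fits inside their fair share are closed
        satisfied = {i for i in active if demands[i] - alloc[i] <= share * weights[i]}
        if not satisfied:
            for i in active:
                alloc[i] += share * weights[i]
            remaining = 0.0
        else:
            for i in satisfied:
                remaining -= demands[i] - alloc[i]
                alloc[i] = demands[i]
            active -= satisfied
    return alloc

# e.g. 10 CPUs, demands (2, 6, 8), equal weights -> [2.0, 4.0, 4.0]
print(weighted_fair_share(10, [2, 6, 8], [1, 1, 1]))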
According to [13], by running large-scale computation and data-intensive services on inexpensive server clusters and other large-scale data-parallel systems, cloud provisioning (i.e., allocating resources for cluster requests) remains key to consolidating such clusters. Basically, resource management problems in multi-cluster environments are broadly classified into three large categories, viz.:

1) Cloud providers provisioning/delivering raw clusters based on the resource requirements of their customers.
2) Customers running cluster operating systems to manage critical server resources and schedule jobs from multiple frameworks.
3) Task-scheduling frameworks, with or without assistance from the cluster operating system, getting the job done.

The fundamental goal of a well-developed resource management scheme on a server cluster is to create a cost-effective model that takes cognizance of the aforementioned categories while formulating a validation mechanism that justifies the performance of the proposed system. The SGEMS EETACP platform that runs on the DCCN places a computation demand on resource allocation in a cloud-based environment. Since the stability criterion must be satisfied, the use of the virtual machine as the minimum resource allocation unit in the DCCN could suffice. When a user starts an application, a virtual machine that satisfies the minimum resource requirement for the application is allocated via a scheduling map. When the workload of the application increases as a result of user traffic, a new virtual machine is allocated for this application. This must allocate more physical resources (CPU, memory, etc.) to the existing virtual machine, without shutting down the existing virtual machine for resource reallocation. This is most ideal for EETACP as a mission-critical application, since downtime is not an option.

To provide guarantees of server operations, the datacenter with a cluster of servers must provision sufficient resources to meet application needs. Such provisioning can be based either on a dedicated or a shared model. In the dedicated model, a number of cluster nodes are dedicated to each application, and the provisioning technique must determine how many nodes to allocate to the application [14]. In the shared model, running cloud virtualization for resource scheduling can allow running applications to share server resources with other applications. Once the cloud driver allocates a set of resource units such as virtual machines, the MapReduce system uses the resources that are heterogeneously shared among multiple jobs in the EETACP context.

Therefore, this paper focuses on studying resource allocation and scheduling at both the application level and the backend core, to see how to map the physical resources to virtual machines for better resource utilization in the DCCN environment.

B. Contributions

In this work, an experimental investigation of the metrics associated with job scheduling and resource allocation on a shared heterogeneous server cluster was carried out using DCCN, DCell, and BCube as case studies. Virtual machine (VM) algorithms were developed to provide good performance while guaranteeing fairness in an operational setup. Consequently, this will represent the ideal mode for EETACP service provisioning. The perspective offered is that an efficient task resource scheduling algorithm based on virtualization should be implemented at the broker domain of the DCCN. This was carried out to facilitate the deployment of the EETACP service proposed for SGEMS in an earlier work. The aim is to dynamically allocate the virtual resources in EETACP, as well as other services, based on their workload intensities, so as to improve resource utilization, throughput, and availability, and to reduce the usage cost.

The rest of this paper is organized as follows. Section II presents the literature review as well as foundational concepts. Section III discusses the methodology and relevant system models. Section IV presents the system validation from the simulation environment. Section V concludes the paper.

II. LITERATURE REVIEW

In this section, two interrelated concepts, virtualization and resource allocation, will be clarified so as to provide a working foundation for EETACP deployment in the DCCN at large. This section will then present the related works.
A. Cloud Virtualization and Resource Allocation

In [15], virtualization is defined as the mirror imaging of one or more workstations, servers, etc., within a single physical computer utilizing the same system resources. Virtualization makes cloud computing possible, since scalability is the major consideration in cloud computing. Cloud computing servers use the same operating systems, enterprise applications, and web applications as localized virtual machines and physical servers [16]. Other views on the concept are detailed in [17] and [18]. On the other hand, resource allocation is the process of assigning available resources to the needed cloud applications over the Internet via the cloud datacenter. A dynamic resource allocation framework (resource controller) in a cloud environment helps to monitor traffic load changes, analyse workload, and facilitate the implementation of an automated elastic resource controller that ensures high availability.

The resource controller, in this context, controls all the components on the cloud side, and it has access to the load balancer, monitoring data, and front end (OpenNebula) for requesting additional resources on demand [19]. Some vendor services on a distributed cloud may include computational resource configuration of the virtual machines (VMs), the programmer's degree of control, network service configuration, the nature of hardware/software security services, portability guarantees, storage scalability, etc.; as such, there is a need for a comprehensive resource allocation and scheduling system for cloud datacenter networks (CDCNs) [19].

It is worth noting that the allocation of resources to dedicated servers without virtualization schemes could be problematic, while over-provisioning resources based on worst-case workload estimates can result in potential network crash/failure and violate QoS guarantees. An alternative approach is to allocate resources to servers dynamically, based on the variations in user workload profiles. In this approach, each server is given a certain minimum share based on coarse-grained estimates of its user resource needs. Such dynamic resource sharing can yield potential multiplexing gains, while allowing the system to react to unanticipated increases in application load, thereby meeting the QoS guarantees if both are cloud based.

In addition, for an excellent resource provisioning technique, there is a need to determine the influence of service availability and processing delays on the server backend. Sharing server resources can provide guarantees to applications in the cloud datacenter model. However, such guarantees are provided by reserving a certain fraction of node resources (CPU, network, and disk) for each application. In this regard, the size of the resources allocated to each cloud server will depend on the expected workload and the QoS requirements of the application. For these workloads, there is a need to ascertain the influence of resource allocation using virtualization in enterprise cloud computing applications. Knowing the kind of servers that will scale in the event of high traffic density is vital.
Consequently, this paper uses the concept of virtualization to explain the resource allocation and scheduling features of a cloud-based management platform. The emphasis is on user jobs (workload) on server pools for the proposed DCCN. In developing this paper, strong emphasis is placed on a dynamic resource allocation and scheduling technique via virtualization, so as to handle changing application workloads in a shared distributed cloud computing backend environment.

B. Related Research Efforts

Several works have been carried out on resource allocation and scheduling, focusing on cloud infrastructure enhancement. This work discusses these efforts below, leveraging the Systematic Literature Review (SLR) approach.

The authors in [20] classified resource allocation models into three categories, viz. processing resources, network resources, and energy-efficient resources. The work opined that network performance and resource availability could pose the tightest bottleneck for any cloud platform. Traditional Resource Management Systems (RMSs) such as Condor [21], LoadLeveler [21], Load Sharing Facility (LSF) [22], and Portable Batch System (PBS) [168] all adopt system-centric resource allocation approaches that focus on optimizing overall cluster performance. However, these have not been explored in a spine-leaf DCCN. In [24], an SLA-oriented resource management system for cloud computing, built using Aneka [170], was proposed. A representative sample of works on resource allocation in the cloud environment for job task processing and other resource categories is detailed in [25], [26], [27], [28], [29], and [30]. These works did not justify their relevance in a spine-leaf DCCN. Also, process scheduling and its algorithms were presented in [31], [32]; these works considered resource allocation schemes on multiple clouds under both underload and overload conditions. The paper in [33] proposed a model for cloud computing scheduling based on multiple queuing models, to improve the quality of service by minimizing the execution time per job, the waiting time, and the cost of resources so as to satisfy users' requirements. The experimental results indicate that the model increases the utilization of the global scheduler and reduces waiting time. But achieving resource allocation in a DCCN is only feasible through scheduling as a DCCN service. Similarly, the authors in [33] established some scheduling schemes and their strategies, which were explained in [34]. They opined that both cannot be used in cloud computing for Application Processing Requests (APRs), as found in [35], [36], owing to some identified QoS limitations. In [37], the authors proposed cost-optimal scheduling in hybrid IaaS, owing to divergent user requirements and heterogeneous workload characteristics.
The authors observed that the problem of scheduling a user's workload in the cloud remains complex, and they therefore proposed an optimal-cost scheduling scheme. In [38], a genetic algorithm scheduling approach was proposed for addressing the problems of scheduling with traditional algorithms, which result in load imbalance and high migration costs. Other efforts made in the literature in these areas of resource scheduling include: Greedy Particle Swarm Optimization (GPSO) [39], task length and user priority (i.e., credit-based scheduling) [40], cost-based scheduling [41], energy-efficient optimization methods [42], activity-based costing [43], [44], reliability-factor-based scheduling [45], context-aware scheduling [46], dynamic slot-based scheduling [47], [48], multi-objective task scheduling [49], public cloud scheduling with load balancing [50], agent-based elastic cloud bag-of-tasks concurrent scheduling [51], analytic hierarchy process (task scheduling and resource allocation) [52], swarm scheduling [53], profit-driven scheduling [54], dynamic trusted scheduling [55], community-aware scheduling [56], adaptive energy-efficient scheduling [57], and grid, cloud, and workflow scheduling [58]. In these algorithms, job/task length and priority are mostly the parameters analyzed. The SGEMS DCCN inherits characteristics from the above works, but focuses on improving network QoS with respect to the state-of-the-art spine-leaf network model. The research gaps below conclude the findings from the literature review.

C. Research Gaps

The following gaps were clearly identified from the literature study.

• QoS Resource Management. From the literature review, it has been shown that resource allocation, scheduling, and service provisioning are the critical concepts in datacenter network operations which must be considered when designing an efficient cloud-based network. But existing works have not resolved the issue of excellent quality of service in cloud servers via virtualization schemes, particularly for a DCCN running in a spine-leaf operational mode. Consequently, this work proposes a dynamic architecture that handles all the resources in the DCCN by managing client requests, directing resource allocation, eliminating performance constraints, and minimizing cost while ensuring overall QoS. In this paper, resource management for client requests is carried out in the server cluster pool.

• Validation Comparison with DCell and BCube. Based on the heuristic branch-and-bound concept with the Riverbed modeller, a scenario-based study with similar network architectures (DCell and BCube) will be carried out, considering throughput, resource availability, and network density as metrics. To the best of our knowledge, this is the only work that has carried out a scenario-based comparison with scalable DCell and BCube on the basis of heuristic task and priority scheduling in a spine-leaf DCCN.
III. RESEARCH METHODOLOGY

The method used in this work is referred to as procedural benchmarking with Riverbed Modeller 17.5. In this case, a step-by-step approach was employed in studying BCube and DCell legacy x86 server consolidations, as typified by the University of Nigeria Nsukka (UNN) DCN used as a case study. Given the identified QoS issues in that network, this work leveraged server virtualization based on VMware vSphere, a mature and trusted technology, in the enterprise spine-leaf DCCN. In this regard, this work considered three key conditions when applying parallel processing to task execution on the DCCN server, viz.: 1) how to allocate resources to tasks; 2) in what order the tasks are executed in the cloud; and 3) how to schedule overheads when VMs prepare, terminate, or switch tasks. Task scheduling and resource allocation basically address these issues. The procedural benchmarking approach used in this work took care of the initial design specification and the composite process model of the DCCN. This model architecture is presented next.

A. DCCN Model Architecture/Specifications

Considering the DCCN architecture shown in Fig. 1, the design comprises two functional layers, viz. the remote user access layer and the hybrid speed redundancy layer. The gateway load balancer (GLB)/speed redundancy layer is used interchangeably with the Integrated Service OpenFlow Load Balancer (ISOLB) in this work. The ISOLB connects the cloud layer to the broker, which coordinates the VMs.

The cloud computing architecture in Fig. 1 uses the cloud broker to mediate negotiations between EETACP Software as a Service (ESaaS) and the cloud provider. This negotiation is driven by QoS requirements. The broker acts on behalf of ESaaS for the allocation of resources that can meet the application's QoS requirements. In the DCCN, the ISOLB is the major component in the hybrid speed redundancy layer; this layer comprises the virtual-machine-interconnected server subnet clusters and the ISOLB.

The architectural decomposition of the DCCN is discussed next, while exploring its functionalities for grid performance using virtualization metrics. A generalized specification of the proposed DCCN datacenter is presented below.

• Let DCCN_lb be the acronym chosen for the DCCN server cluster managed by the ISOLB controller. DCCN_lb is designed to have various subnets for its clusters (e.g., subnets 1 to n), referred to as DCCN_sa, DCCN_sb, DCCN_sc, and DCCN_sd, interconnected together. DCCN_sa represents one such subnet, as shown in Fig. 1, where n is a subnet factor such that n > 0. Each cluster (DCCN_s) uses High Performance Computing (HPC) servers running VMs, with the ISOLB controller layered in a linearly defined architecture. Since the design of the datacenter network is for efficient server load balancing and EETACP application integration, the requirement of 4 ports from the ISOLB controller and a few servers necessitated the choice of four subnets. Virtual server instances running on the HPC servers expanded the server cluster capabilities.

• Servers in a DCCN cluster are connected to the corresponding ISOLB port of the load balancer, and owing to the running virtual instances Vi, a commodity 4-port switching/routing device with 40 Gb/s per port serves the design purpose. Also, each of the DCCN_s subnets is interconnected to the others through the ISOLB switch ports.
• The virtualized servers used in this work have two Gigabit ports for redundancy. Each server is assigned a 2-tuple [a1, a0] in consonance with its ports (a1 and a0 are the redundancy factors), together with an OpenFlow VLAN id.

• An emulated NEC IP8800 OpenFlow controller was used as the ISOLB in this work; hence, the number K is the maximum number of OpenFlow VLANs that can be created in it. The load balancer switch is a multilayer commodity switch that has a load balancing capability. This capability, together with its OpenFlow VLAN capability, was leveraged to improve the overall DCCN performance.

• Each server has its interface links in DCCN_s. One connects to an ISOLB, and the other servers connect as well, but all are segmented within their subnets via OpenFlow VLAN segmentation, as shown in Fig. 2. The OpenFlow DCCN_s servers have virtual instances running on them and are fully connected with every other virtual node in the architecture.

Virtualization facilitates efficient use of hardware and software resources in the DCCN. Hence, virtual machines (VMs) are allocated to users based on their jobs, in order to reduce the number of physical servers in the cloud environment, particularly in a high-grid environment. But most VM resources are not efficiently allocated based on the characteristics of the job so as to meet Service Level Agreements (SLAs). Hence, this work introduced a smart VM allocation algorithm based on the characteristics of the EETACP job, which can smartly reconfigure virtual resources, thereby improving resource utilization in the server clusters.

Again, the DCCN port interface model for the ISOLB is shown in Fig. 2. This creates advanced redundancy and multiplexing of job requests to the server clusters in the DCCN; for DCell and BCube, these are not visible in their architectures. The algorithm for VM allocation and scheduling in a multiplexed server setup is shown in Algorithm I. The first section checks whether the DCCN_s server subnet cluster is constructed; if so, it connects all the server nodes n to a corresponding ISOLB port and ends the recursion. The second section interconnects the servers to the corresponding switch port, with servers connected by one link each. Each server in the subnet cluster DCCN_lb is connected with 40 Gb/s links for every OpenFlow VLAN id. The role of the service coordinator in the cloud is enormous. The DCCN logical architecture with the OpenFlow VLAN segmentation shown in Fig. 2 uses the linear construction Algorithm II for VLAN resource scalability.

Algorithm I: Smart VM Allocation and Scheduling.
Input: new job; all jobs running on the EETACP host.
Output: execution of all jobs submitted to the EETACP host.
1 Begin
2 A new job for user i arrives into the DCCN (Fig. 1)
3 if (new job.deadline < that of all jobs running on the host)
4   the new job from user i becomes the high-priority job
5   if (a VM is available)
6     allocate the high-priority job to that VM
7   else
8     suspend_job <- selection of a job to suspend for execution of the high-priority job()
9     suspend(suspend_job)
10    allocate the high-priority job to the VM from which the job was suspended
11  end if
12 execute all jobs running in the VM instances
13 if (a job running in a VM completes)
14   resume(suspend_job)
15   allocate the resumed job request to that VM instance
16 end if
17 execute the resumed job in the active state
18 End

Algorithm II: DCCN Service Coordinator OpenFlow VLAN Construction Algorithm.
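For readers who prefer executable form, the following is a minimal sketch of the preemptive policy that Algorithm I above describes: a job with an earlier deadline is treated as high priority and may suspend the running job with the latest deadline. The class and method names are illustrative, not from the paper.

# Minimal sketch of the preemptive policy in Algorithm I (names invented).
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    deadline: float              # smaller deadline = higher priority
    name: str = field(compare=False)

class VmPool:
    def __init__(self, n_vms):
        self.free = n_vms
        self.running = []        # heap of (-deadline, job): worst job on top
        self.suspended = []

    def submit(self, job):
        if self.free > 0:
            self.free -= 1
            heapq.heappush(self.running, (-job.deadline, job))
        elif self.running and -self.running[0][0] > job.deadline:
            # pre-empt the lowest-priority (largest-deadline) running job
            _, victim = heapq.heappop(self.running)
            self.suspended.append(victim)
            heapq.heappush(self.running, (-job.deadline, job))
        else:
            self.suspended.append(job)      # queue until a VM completes

    def complete(self, job):
        self.running = [(d, j) for d, j in self.running if j is not job]
        heapq.heapify(self.running)
        self.free += 1
        if self.suspended:                  # resume a halted job (line 14)
            self.free -= 1
            resumed = self.suspended.pop(0)
            heapq.heappush(self.running, (-resumed.deadline, resumed))

pool = VmPool(n_vms=1)
pool.submit(Job(deadline=10.0, name="low"))
pool.submit(Job(deadline=2.0, name="high"))  # pre-empts "low"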
In the DCCN logical structure shown in Fig. 2, the servers in one subnet are connected to one another through one of the ISOLB ports, which is dedicated to that subnet. Each server in one subnet is also linked to a server of the same order in every other subnet. Incast collapse in a cloud datacenter must be avoided at all costs. As such, each of the servers has two links: with one, it connects to the other servers in the same subnet (intra-server connection), and with the other, it connects to the servers of the same order in all other subnets (inter-server connection). Apart from the communication that goes on simultaneously in the various subnets, the inter-server connection is actually an OpenFlow VLAN connection. This OpenFlow VLAN segmentation logically isolates the servers for security and improved network performance, and together with the other server virtualization schemes it ultimately improves performance in terms of throughput and other QoS metrics. The OpenFlow VLAN segmentation gave each DCCN_s (subnet) the capacity to efficiently support enterprise web applications (EETACP, web portals, and cloud applications such as ESaaS) running on server virtualization on each port, thereby lowering traffic density.

C. Coordinator Logical Isolation of Server Clusters

As shown in Fig. 2, the application of an OpenFlow VLAN in each subnet creates full logical isolation of the DCCN server cluster architecture of Fig. 1. In order to achieve this, each server and node in DCCN_s is assigned a virtualization identity, V_id = [av_1, av_2, ..., av_(n-1)], and an OpenFlow VLAN identity (Vl_id) greater than 0, where av_1, av_2, ..., av_(n-1) are the virtualization instances on the DCCN_s servers. As such, each server can be equivalently identified by a unique Vl_id in the range Vl_id <= 1005. Hence, the total number of Vl_ids for the servers in DCCN_s is given by Equation (1), where N is the maximum number of OpenFlow VLAN ids and ϕ denotes the virtual instances on the DCCN_lb physical servers. The mapping between a unique Vl_id and the DCCN_lb physical servers, considering that there are four subnet clusters in Fig. 2, is given by Equation (2).

Following the DCCN architecture in Fig. 1, in order to minimize broadcast storms and reduce network traffic/demand density, an OpenFlow VLAN mapping scheme for the servers in the subnet clusters DCCN_lb was applied, resulting in the system validation model discussed in Section IV. Now, consider DCCN_sa, DCCN_sb, DCCN_sc, and DCCN_sd with servers S1 to Sn. The servers in each of the subnet clusters are mapped into different OpenFlow VLAN services with their corresponding ids, where S1a, S2a, S3a, S4a are the servers in DCCN_sa; S1b, S2b, S3b, S4b are the servers in DCCN_sb; S1c, S2c, S3c, S4c are the servers in DCCN_sc; and S1d, S2d, S3d, S4d are the servers in DCCN_sd.

With this OpenFlow VLAN mapping scheme, logical isolation of the cluster subnets in the DCCN architecture was achieved. This makes for smart flexibility, improved network security, agility, and control of traffic flow in the DCCN core layer.
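Since Equations (1) and (2) did not survive in this copy, the sketch below only illustrates the kind of subnet-to-VLAN mapping described in the prose: one OpenFlow VLAN id per subnet cluster, with every server in that subnet bound to it. The VLAN ids chosen are arbitrary examples (the text only constrains Vl_id to be greater than 0 and at most 1005).

# Illustrative sketch of the per-subnet OpenFlow VLAN mapping.
SUBNETS = {
    "DCCN_sa": ["S1a", "S2a", "S3a", "S4a"],
    "DCCN_sb": ["S1b", "S2b", "S3b", "S4b"],
    "DCCN_sc": ["S1c", "S2c", "S3c", "S4c"],
    "DCCN_sd": ["S1d", "S2d", "S3d", "S4d"],
}

def vlan_map(subnets, first_vlan_id=100):
    """Assign one VLAN id per subnet and map every server to it."""
    mapping = {}
    for offset, (subnet, servers) in enumerate(sorted(subnets.items())):
        vlan_id = first_vlan_id + offset      # 0 < vlan_id <= 1005
        for server in servers:
            mapping[server] = (subnet, vlan_id)
    return mapping

for server, (subnet, vid) in vlan_map(SUBNETS).items():
    print(f"{server}: {subnet} -> OpenFlow VLAN {vid}")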
D. DCCN Resource Allocation

For user workloads in the virtualized server subnet cluster of Fig. 2, by quantifying the workload requirements for the physical servers, the virtual machines can significantly benefit from the resource allocation strategies offered by the full virtualization scheme. It will be established that virtualization can offer excellent allocation of distributed server resources. In the DCCN, user tasks are assigned resources for effective performance in the server domain. The tasks represent user job requests for server execution; I/Os, CPU, and memory are the resources assigned in this context. Resource allocation via virtualization makes for bandwidth availability, delay reduction, and service availability in general. These are vital QoS metrics analyzed in this work.

From the framework of Fig. 2, let the user task (job) number be given. A user sends a job request accompanied by a vector of QoS parameters, and the weights for these parameters are given in Equations (4) and (5). Here, each weight shows the importance of each parameter. Since the CPU, for instance, is important for job execution, it is activated or enabled by default; the same holds for other resources like I/Os, RAM, storage disks, etc. If a resource provides a solution for a requested service or task, a validity operator is introduced, as in Equation (6), where K is the number of QoS parameters. From Equation (6), the resource specification of the DCCN for users was optimized. When the resource capability exceeds the task demand, the virtualization function in Equation (7) holds for the DCCN server.

To expand the optimization problem of assigning resources to tasks with QoS parameters in the DCCN server cluster subnets, compact matrices were introduced in Equations (8), (9), and (10) for virtualization scheduling. These compact matrices describe the resource allocation model in context. For the users, the task requirement matrix is given by Equation (8), the task weight matrix by Equation (9), and the server resource capability matrix by Equation (10). Dividing (9) by (8) yields a VM vector, given in Equation (11). Resource allocation by instance is given by Equation (12), whose bound is the maximum number of virtual instances that can be accommodated by the physical server. By substituting (10) and (11) into (12), a model for virtualization was developed by putting the task/job numbers, resource numbers, and QoS parameters into Equation (13). If the resource allocation model is given by Equation (13), then the consolidation model via virtualization for the physical servers is obtained by substituting Equation (13) into Equation (14); this gives Equation (15), the total number of server virtual instances in the DCCN server pool PS_K, which satisfies the requirements used for assigning resources to user requests via EETACP (the traffic load on the DCCN server). From Equation (13), if the match ratio equals 1, then the resources precisely meet the task or job requirements while maintaining the QoS requirements; but if it is less than 1, then the resources will be overloaded by the task or job requirements, affecting server performance.
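Because Equations (4) through (15) were lost in this copy, the following sketch should be read only as a plausible illustration of the weighted task-to-capability matching that the prose describes, with invented matrices and an invented match ratio; it is not a reconstruction of the paper's actual formulas.

# Hypothetical sketch: compare each task's QoS-weighted demand against a
# server's capability and report a match ratio (>= 1: capability meets
# demand; < 1: degraded).  Matrix layouts and the ratio are assumptions.
import numpy as np

R = np.array([[6.0, 4.0, 1.0],      # task demand matrix (tasks x resources)
              [1.0, 2.0, 0.5]])     # columns: CPU cores, RAM GB, disk TB
W = np.array([0.5, 0.3, 0.2])       # QoS weights per resource
C = np.array([4.0, 8.0, 2.0])       # server capability vector

def match_ratio(demand, capability, weights):
    # weighted fraction of each resource demand the server can satisfy
    per_resource = np.minimum(capability / demand, 1.0)
    return float(weights @ per_resource)

for t, demand in enumerate(R):
    rho = match_ratio(demand, C, W)
    print(f"task {t}: match ratio = {rho:.2f}",
          "(fully served)" if rho >= 1.0 else "(degraded)")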
Again, in the DCCN, since user requests arrive randomly at the gateway before the ISOLB, it was assumed that job arrival follows a stochastic process with exponentially distributed packet sizes, so the system is considered an M/M/1 queuing system. Besides, traffic capacity management and optimum utilization of server resources take care of network instability issues. To address the issue of traffic flow from the access layer, Little's law suffices: it takes care of the system response time and scheduling distribution, thereby maintaining the traffic flow stability criterion. Now, if the average arrival rate per unit time is given by λ* and µ is the average service rate per unit time, the stability condition for system resource management can be deduced.

Considering the user task/job numbers, resource numbers, and QoS parameters, the proposed DCCN for an enterprise cloud datacenter is considered stable only if λ* < µ. If, on the other hand, the average arrivals happen faster than the job service completions, such that λ* > µ, the job queues on the server clusters will grow indefinitely long, making the system unstable.

In this work, by generating an equivalent DCCN that will support the deployment of the proposed EETACP while exploring the mathematical models derived for the system, the QoS metrics evaluated are primarily service availability, throughput, and fault tolerance. The algorithms for the distributed cloud management architecture were discussed previously.

IV. SYSTEM VALIDATION

As explained previously, procedural benchmarking with server virtualization for the spine-leaf DCCN was used in this work, driven by virtual machines and task schedulers. In carrying out the validation analysis on the DCCN, procedural benchmarking was applied in creating the relevant heuristic algorithms. Virtual machines and task schedulers were configured as extended attributes after importing the objects from the object palette in the Riverbed Modeller/C++ Version 17.5 simulator [60]. Other components configured include the OpenFlow load balancer and the server cluster links. The mathematical characterization above was considered in the design. After comparing the service delay and availability of the two DCCNs using the heuristic algorithms for task length and user priority, this work then introduced two related datacenter architectures having task-based scheduling without virtualization, i.e., DCell [5] and BCube [7]; both have been extensively studied in [61]. The metrics computed include the average throughput (bytes/s), the average resource availability response, and the DCCN density on resource utilization. By extending the work carried out in [62], which focused on the impact of virtualization on server computing clusters, the contribution now focuses on server consolidation via virtualization for fault tolerance improvement, in order to reduce downtime to the barest minimum. In this work, we focused on resource management using virtual machines for improved QoS. The simulation is done under the conditions listed in Tables I, II, and III. Each datacenter consists of several hosts, and each host has its own configuration; here, the same configuration is applied for each host, as depicted in Table II. Each host in the datacenter runs several virtual machines, and each virtual machine has its own configuration; here, the same configuration is applied for each VM, as given in Table III.
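As a quick numerical companion to the stability discussion above, the standard M/M/1 formulas (textbook results, not the paper's own model) give the utilization, the mean number of jobs in the system, and, via Little's law, the mean time a job spends in the system; the rates used here are made-up examples.

# Standard M/M/1 metrics; raises if the stability condition fails.
def mm1_metrics(lam, mu):
    """lam: mean arrival rate, mu: mean service rate (jobs per unit time)."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must satisfy lam < mu")
    rho = lam / mu                 # server utilization
    L = rho / (1.0 - rho)          # mean number of jobs in the system
    W = L / lam                    # mean time in system (Little's law: L = lam*W)
    return rho, L, W

rho, L, W = mm1_metrics(lam=8.0, mu=10.0)
print(f"utilization={rho:.2f}, jobs in system={L:.2f}, mean time={W:.3f}")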
From Fig. 5, it was observed that the proposed DCCN with the optimal virtual-instance allocation coordinator had a relatively better throughput. The average throughput responses in percentages were obtained as follows: DCCN = 40.00%, DCell = 33.33%, and BCube = 26.67%. This shows that virtualization enhances performance. From Fig. 7, lower resource utilization was achieved for the proposed DCCN compared with the BCube and DCell scenarios. When all existing resources (VMs) are allocated to low-priority jobs and a high-priority job arrives, a low-priority job (with a distant deadline) has to be pre-empted so that its resources can be released to run the high-priority job (with a near deadline). When a job arrives, the availability of a VM is checked based on the network density. If a VM is available, the job is allowed to run on it. If no VM is available, the algorithm finds a low-priority job, taking into account the job's lease type; the low-priority job pauses its execution and its resources are pre-empted. In all cases, the high-priority job is allowed to run on the resources pre-empted from the low-priority job. When any other job running on the server VMs completes, the previously halted job can be resumed. This VM process in Algorithm 1 facilitates lower resource utilization at large (a sketch of this preemption logic is given below). DCCN density on resource utilization gave 40% (i.e., when logically isolated) while the others offered 60% (i.e., when not logically isolated).

This paper focused on the critical design factors involved in the deployment of the SGEMS DCCN, as well as analyzing the impact of virtualization on resource allocation and scheduling strategies for managing cloud datacenter server clusters. In this respect, optimizing computational resources (i.e., network resources) at low cost is a vital consideration for a successful cloud computing service deployment and its operations. The work used the CloudSim-equivalent tool Riverbed 17.5 with scenario-based setups for DCCN, DCell, and BCube. The implemented network was simulated while ensuring that VMs were allocated as hosts based on the capacity of the available cloud service coordinator. Jobs are given to the VMs for execution on a first-come, first-served (FCFS) basis. The deadline was checked for high and low priorities. From the work, QoS metrics such as throughput, resource availability, and resource utilization were investigated. Against the latest state of the art in enterprise application networks, the proposed spine-leaf DCCN offered a throughput improvement of 6.67% over DCell and 13.33% over BCube. The network also offered a 5.13% availability improvement over DCell and 10.25% over BCube. Consequently, the larger the resource pool and allocation on a virtualized cloud server, the better the overall system performance, owing to virtualization effects; this can also marginally stabilize DCN operations. Conversely, an inefficient resource allocation/scheduling scheme can adversely degrade network performance. In conclusion, virtualization can smoothly facilitate resource allocation/scheduling schemes in a distributed server domain considering user job workloads. Future work will focus on VM allocation in a high-density spine-leaf architecture for the deployment of the EETACP service in SGEMS (a microgrid framework), while building datasets for big-data analytics using an FPGA-based big-data environment.
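To make the preemption behaviour described above concrete, the following is a minimal sketch of the high/low-priority VM handling. The class and function names are hypothetical, and the lease-type and network-density checks of the full Algorithm 1 are reduced to simple stubs.

# Simplified sketch of the priority-based VM preemption described in the text.
# Names (Job, vm_pool, schedule, complete) are hypothetical; lease-type and
# network-density checks from the full Algorithm 1 are omitted.

from collections import deque

class Job:
    def __init__(self, name, high_priority):
        self.name = name
        self.high_priority = high_priority   # high priority = near deadline

def schedule(job, vm_pool, running, paused):
    if vm_pool:                              # a VM is free: run immediately
        running[job] = vm_pool.pop()
        return
    if job.high_priority:
        # Pre-empt a low-priority job and hand its VM to the new job.
        victim = next((j for j in running if not j.high_priority), None)
        if victim is not None:
            vm = running.pop(victim)
            paused.append(victim)            # victim resumes when a VM frees up
            running[job] = vm
            return
    paused.append(job)                       # nothing available: queue the job

def complete(job, vm_pool, running, paused):
    vm_pool.append(running.pop(job))         # release the VM
    if paused:                               # resume the oldest halted job
        schedule(paused.popleft(), vm_pool, running, paused)

vm_pool, running, paused = ["vm1"], {}, deque()
schedule(Job("batch", high_priority=False), vm_pool, running, paused)
schedule(Job("urgent", high_priority=True), vm_pool, running, paused)  # pre-empts "batch"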
Fig. 1. DCCN resource allocation and task scheduling with virtual machines.

From the DCCN architecture shown in Fig. 1, if the resource allocation model is given by Eq. (13), then the consolidation model via virtualization for the physical servers can be obtained by substituting Eq. (13) into Eq. (14). This gives Eq. (15), PS_K · Vm = ∑(·), for the physical servers. Fig. 3 shows the testbed used for carrying out basic experimentation and validation in the context of application hosting and provisioning. Fig. 4 shows the UNN datacenter, which suffers network failures resulting from traffic imbalance and disturbances; this was addressed in the validation section.

Fig. 5. Plot of network throughput against resource allocation.

As shown in Fig. 6, resource availability refers to the ability to access the DCCN server clusters on demand while completing job requests. The proposed model had fairly good resource availability, leading to enhanced performance; hence, a good resource allocation strategy will enhance performance at large. This again supports the case for VM-based cloud networks, particularly in cell-based and spine-leaf models like the DCCN in this research.

Fig. 6. Plot of resource availability versus job request time. The legacy UNN DCN in Fig. 4 can be improved with this scheme.

Here, the user task (job) number is as defined earlier. A user sends a job request accompanied by a vector of QoS parameters; the weights for the parameters are given in Eqs. (4) and (5), where l stands for the level of the DCCN's subnet links and n is the number of nodes in a cluster.

TABLE III. VIRTUAL MACHINE BASIC CONFIGURATION
8,267
2016-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
SLAC-PUB-14367
A Low-Charge, Hard X-Ray FEL Driven with an X-band Injector and Accelerator

After the successful operation of the Free Electron Laser in Hamburg (FLASH) and the Linac Coherent Light Source (LCLS), soft and hard x-ray free electron lasers (FELs) are being built, designed, or proposed at many accelerator laboratories. Acceleration employing lower-frequency rf cavities, ranging from L-band to C-band, is usually adopted in these designs. In the first bunch compression stage, a higher-frequency harmonic rf system is employed to linearize the beam's longitudinal phase space, which is nonlinearly chirped during the lower-frequency rf acceleration process. In this paper, a hard x-ray FEL design using an all X-band accelerator at 11.424 GHz (from photocathode rf gun to linac end) is presented, without the assistance of any harmonic rf linearization. It achieves LCLS-like performance at low charge using X-band linac drivers, which are more versatile, efficient, and compact than those using S-band or C-band rf technology. It employs initially 42-micron-long (rms), low-charge (10 pC) electron bunches from an X-band photoinjector. An overall bunch compression ratio of roughly 100 is proposed in a two-stage bunch compressor system. The start-to-end 3D macroparticle simulation employing several computer codes is presented in this paper, where space charge, wakefields, and incoherent and coherent synchrotron radiation effects are included. Employing an undulator with a short period of 1.5 cm, a GENESIS FEL simulation shows successful lasing at a wavelength of 0.15 nm with a pulse length of 2 fs and a power saturation length as short as 20 meters, which is equivalent to the LCLS low-charge mode. The overall length of the accelerators and undulators is 180 meters (much shorter than the effective LCLS overall length of 1230 meters, comprising an 1100-meter accelerator and a 130-meter undulator), which makes it possible to build in places where only limited space is available.

A laser (light amplification by stimulated emission of radiation) is a device that emits light with a high degree of spatial and temporal coherence. Unlike conventional lasers, free electron lasers (FELs) use the lasing of a relativistic electron beam traveling through a magnetic undulator, which can reach high power and can be widely tuned in wavelength (via the electron beam energy). The FEL was proposed by Madey and demonstrated for the first time at Stanford University in the 1970s [1,2], using a 24 MeV electron beam and a 5-meter-long wiggler. In this experiment, the lasing occurred in an oscillator configuration. Since then, the radiation wavelength has been pushed downwards as higher beam energies and shorter undulator periods became available. Lacking mirrors with good reflectivity at ever shorter radiation wavelengths, researchers improved the electron bunch quality so that lasing can occur through a collective instability mechanism within a single pass of the electrons through the undulator. This is the so-called self-amplified spontaneous emission (SASE) operation mode of an FEL [3][4][5]. Several FELs are currently being operated, constructed, or proposed [6][7][8] in the x-ray wavelength range. The two currently operating x-ray FEL facilities, the Free Electron Laser in Hamburg (FLASH) in Germany and the Linac Coherent Light Source (LCLS) in the United States, both work under the self-amplified spontaneous emission (SASE) mechanism, which does not require external seeding (modulation) or mirrors.
FLASH is the world's first x-ray FEL and has been open to the photon science user community since 2005. It has a maximum beam energy of 1.2 GeV and can deliver 10-70 fs (FWHM) photon pulses in a wavelength range of 44 to 4.1 nm [6]. The linac is 150 m long and consists of L-band (1.3 GHz) superconducting rf cavities that can accelerate multibunch beams at a "real estate" gradient of 15 MV/m on average. The peak bunch current is increased from 50-80 A to 1-2 kA through two-stage bunch compression, with a bunch charge between 0.5 and 1 nC. Because of the nonlinear energy chirp associated with the long (7 ps rms) initial bunch length, there is a sharp spike in the final longitudinal bunch distribution, as this part is compressed more than the others. Consequently, only the small portion of the charge in this sharp spike can radiate and saturate in the undulator. Recently, a 3.9 GHz harmonic rf module was installed before the first bunch compressor to linearize the bunch energy correlation, which produces a more uniform compression in the first stage and thus improves the x-ray FEL performance.

LCLS is now in routine operation, providing soft and hard x rays to its users with good spatial coherence at wavelengths from 2.2 to 0.12 nm by varying the bunch energy from 3.5 to 15 GeV [7]. Single-bunch electron beam acceleration is achieved with S-band (2.856 GHz) accelerator structures operating at an average real estate gradient of about 17 MV/m. The linac is roughly 1 km long and includes two stages of bunch compression. Similar to the linearization scheme adopted at FLASH, a fourth-harmonic rf linearizer is employed at LCLS, in this case using a 0.6-m-long X-band (11.4 GHz) structure that had been developed for the Next Linear Collider (NLC). Also, a laser heater is used to increase the uncorrelated energy spread, which provides more Landau damping and suppresses collective beam effects, such as the space-charge-induced microbunching instability. The machine can operate with either a high (250 pC) or low (20 pC) bunch charge and achieve a final peak bunch current of 3 kA in either case. With a slice transverse emittance around 0.6 μm and a slice energy spread around 0.01%-0.02%, the x-ray radiation usually reaches saturation within 60 meters.

Newer XFELs (at SPring-8 and PSI) [8] use C-band (5.7 GHz) normal-conducting linac drivers, which can sustain higher acceleration gradients and hence shorten the linac length, and which are more efficient at converting rf energy to bunch energy. The X-band (11.4 GHz) rf technology developed for the NLC/GLC offers even higher gradients and efficiencies, and the shorter rf wavelength allows more versatility in longitudinal bunch phase space compression and manipulation.

In this paper, a 6 GeV hard x-ray FEL design is described that has a compact (150 m) X-band injector and linac and a 30 m undulator, which can radiate at a wavelength of 0.15 nm. A sketch of the accelerator system is shown in Fig. 1.
The real estate acceleration gradient and "active" rf cavity length of each linac section are summarized in Table I. The machine would operate in the short-bunch-length, low-charge regime, which guarantees a small nonlinear energy correlation along the bunch from the X-band acceleration, whose rf wavelength is 26 mm. No harmonic rf linearization is necessary in either stage of bunch compression, as the initial bunch length employed is short enough with respect to the rf wavelength [9]. An average acceleration gradient of 80 MV/m is assumed, which has been achieved routinely in the SLAC NLCTA X-band linac. This produces an effective average real estate gradient of 60 MV/m when components such as quadrupoles are included. A two-stage bunch compression system is adopted, in which two conventional four-dipole chicanes are carefully designed as the bunch compressors. The electron bunch generated by the X-band photoinjector has a low charge of 10 pC, an rms length of 42 μm, and a peak current of 30 A. This electron bunch is compressed to a final length of 0.9-0.3 μm rms and a peak current of 1-3 kA in two stages. A laser heater is located before the first stage of bunch compression to increase the uncorrelated energy spread, which provides more Landau damping to suppress collective beam effects such as the space-charge-induced microbunching instability. The final electron beam energy can be tuned in the range of 2-6 GeV depending on the desired x-ray photon energy.

FIG. 1. Sketch of a compact (150 m) hard x-ray FEL driver using all X-band accelerators, from the X-band photoinjector to the X-band linac end. An average real estate acceleration gradient of 60 MV/m is assumed in the X-band accelerator structures, and a two-stage bunch compression system is adopted with relatively weak dipole magnets to suppress incoherent and coherent synchrotron radiation (ISR and CSR) effects. The linac for the hard x-ray FEL design has a final electron bunch energy of 6 GeV with a peak current of 3 kA, while the soft x-ray FEL design has 2 GeV, 1 kA electron bunches. A final pulse duration of 2 to 6 fs is achieved with a bunch charge of 10 pC. The slice energy spread is between 0.01% and 0.02%, and the slice normalized transverse emittance is below 0.2 μm.

Soft x-ray FEL operation at bunch energies below 6 GeV is not presented in this paper, as the beam dynamics is less challenging in that case. The following sections discuss the X-band photoinjector design and simulation studies, the overall FEL accelerator design configuration, the accelerator optics, X-band wakefield calculations and impacts, start-to-end 3D simulation results, timing jitter sensitivity, misalignment, and FEL simulation with GENESIS [10].

II. X-BAND PHOTOINJECTOR

The initial electron beam is produced in an X-band photoinjector. This photoinjector consists of a 5.59-cell gun running with an rf peak field of 200 MV/m on the cathode. This gun is presently being fabricated and is going to be tested at the X-band gun test area of the NLCTA at SLAC [11].
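As a quick consistency check of the machine-level numbers above (80 MV/m active gradient, 60 MV/m real-estate gradient, 6 GeV final energy), the implied fill factor and linac length can be computed. The treatment below is illustrative; Table I holds the actual per-linac parameters.

# Consistency check of the quoted gradients and linac length (illustrative).
active_gradient = 80.0        # MV/m, achieved in the SLAC NLCTA X-band linac
real_estate_gradient = 60.0   # MV/m, including quadrupoles and other components
fill_factor = real_estate_gradient / active_gradient
print(f"fill factor = {fill_factor:.2f}")              # 0.75

final_energy = 6000.0         # MeV
linac_length = final_energy / real_estate_gradient     # metres of beamline
print(f"length to reach 6 GeV at 60 MV/m: {linac_length:.0f} m")   # 100 m

The remaining roughly 50 m of the quoted 150 m footprint plausibly accommodates the injector, laser heater, bunch compressors, and matching sections.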
A first generation of this gun was commissioned in 2006 for a Compton scattering experiment [12]. It was demonstrated that a 200 MV/m peak field can be sustained with no breakdown and manageable dark current. Given the high peak field of 200 MV/m, and thanks to the short wavelength, the electron beam is very rapidly accelerated to ultrarelativistic energies in this X-band gun. After 1 mm, the Lorentz beta coefficient is 0.4 in this X-band gun, against a value of 0.2 at the same location in the LCLS S-band gun. As a consequence, shorter bunch lengths can be obtained. Simulations were carried out with the computer code ASTRA [13], where the initial laser distribution has a longitudinal Gaussian profile with a FWHM length of 100 fs. The photoemission is thus in the "blow-out" regime [14], which best linearizes the longitudinal phase space of the photoelectrons. In the ASTRA simulations, the initial transverse distribution at the cathode is a uniform disk. The optimum transverse profile in the blow-out regime, which leads to a perfect ellipsoid with uniform space charge density, has a radial distribution function represented by a half circle, as described by Eq. (6) of Ref. [14]. A truncated Gaussian distribution could be used, as it approaches the half circle better than the uniform one and is easier to produce experimentally. However, the purpose of this paper is not to study the blow-out regime in detail. Optimization of the optics to minimize the transverse emittance was done for three different sets of initial spot size radii. In general, the trade-off is made in balancing space charge effects in the transverse and longitudinal planes, or in other words, in choosing between bunch length, energy spread, and transverse emittance. The distribution used in the start-to-end simulations was the one with a 0.3 mm radius and the shortest bunch length, which provides the most linear bunch compression process afterwards. This configuration gives a 42 μm rms bunch length after acceleration to a beam energy of 54 MeV; this optimization result corresponds to the one providing the highest peak brightness, as shown in the fifth column of Table II. The thermal emittance used in the simulations is based on the measured LCLS thermal emittance of 0.9 mm mrad per mm radius, which is twice the theoretical value of the commonly accepted model. The longitudinal phase space, longitudinally sliced energy spread, and transverse emittance of the electron bunch, together with the electron bunch current profile, are shown in Fig. 2 at the end of Linac0 (see Fig. 1), where the electron beam energy is 54 MeV. This output data file from ASTRA describes an electron bunch with half a million macroparticles, and it is taken as the initial bunch distribution in the subsequent simulations.
III. X-BAND LINAC AND BUNCH COMPRESSION

A LiTrack [15] 1D simulation is performed first, to approximately evaluate the required rf and bunch compressor parameters, such as rf phase and linac length, beam energy at the two bunch compressors, and the longitudinal dispersion R56 of these two bunch compressors. In detail, a beam energy of 250 MeV is chosen for bunch compressor one, which is the same as LCLS and is a compromise between balancing space charge impacts and maintaining the uncorrelated relative energy spread necessary to minimize CSR-induced energy change through Landau damping. Given a total final beam energy of 6 GeV, a reasonable beam energy between 1 and 2 GeV should be adopted at bunch compressor two. While the upper limit is set by the design final beam energy, ISR in the bends, and the ability to de-chirp in Linac3, the lower limit is mainly set by the timing jitter tolerance associated with the wakefield cancellation scheme, which will be discussed in Sec. III D. The rf phase (relative to crest) is chosen to be as large as possible, so that the final bunch length is less sensitive to timing jitter; one then also needs a smaller longitudinal dispersion R56 in the bunch compressors for the same required bunch compression ratio. In general, these two choices both result in a more linear bunch compression process [9] (a short numerical illustration of the linear compression relation is given below). However, a large rf phase also makes the linac less efficient and produces larger dispersive emittance dilution associated with the larger energy spread. After choosing the basic parameters from longitudinal 1D simulation studies, the next step is to design the accelerator optics and perform 3D start-to-end simulations with all the collective effects to evaluate the performance of the FEL design.

The X-band linac acceleration and bunch compression starts at a beam energy of 54 MeV, taking the output from ASTRA, which simulates the 3D bunch motion from the laser interaction on the cathode and includes collective effects such as space charge. The ASTRA output bunch distribution, shown in Fig. 2, is matched to the optics of Linac1 using four quadrupoles. The detailed optics design, X-band wakefields, bunch compression and phase space manipulation, timing jitter, and misalignment impacts are presented in the following subsections.
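The linear single-stage compression picture used in the LiTrack studies above can be illustrated with the standard relation σ_zf = |1 + h·R56|·σ_zi, where h is the linear energy chirp. The sketch below is a generic illustration, not the LiTrack model itself; the chirp value is chosen so that the R56 = −14 mm quoted below for bunch compressor one reproduces the stated compression ratio of about 7.

# Generic linear bunch-compression estimate: sigma_zf = |1 + h*R56| * sigma_zi.
# Values are illustrative; real compression also involves wakefields and
# nonlinear terms that this one-line model ignores.

def compression_ratio(h, r56):
    return 1.0 / abs(1.0 + h * r56)

h = 61.0          # 1/m, linear energy chirp (head-tail correlation), illustrative
r56 = -0.014      # m, first-order longitudinal dispersion of BC1
print(f"compression ratio ~ {compression_ratio(h, r56):.1f}")   # ~6.8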
A. Optics design

As stated above, the output bunch distribution at 54 MeV is taken as input for the 3D simulation through the whole linac. It is matched to the periodic FODO cell of Linac1 (where F focuses vertically and defocuses horizontally, D focuses horizontally and defocuses vertically, and O is a drift space or deflection magnet) using four independent quadrupole magnets. All the matching work is done in the accelerator design code MAD8 [16]. The basic X-band rf acceleration cell is based on a FODO focusing structure, with horizontal and vertical betatron phase advances both equal to 72° per cell. The choice of a proper average beta function, or equivalently a proper periodic FODO cell length, depends mainly on controlling the beam trajectory and the emittance dilution from dispersive plus chromatic effects and from wakefield impacts. Assuming that a one-to-one trajectory correction scheme is employed to minimize the beam position monitor (BPM) readings at every quadrupole magnet, the emittance dilution due to dispersive effects, Δε_dis, can be estimated following Ref. [17], given a constant relative energy spread and a periodic FODO lattice with the same focusing in the horizontal and vertical planes, as a function of ⟨y²_BPM⟩ (the mean-square BPM offset relative to the quadrupoles), σ_E (the rms energy spread of the beam), ψ_c (the betatron phase advance per FODO cell), L_cell (the FODO cell length), G (the acceleration gradient), and the initial and final relativistic beam energies γ_i and γ_f. One observes that, under a similar emittance dilution budget, the X-band linac could adopt a shorter cell length for a given betatron phase advance per cell, as it provides a larger acceleration gradient. This also means that the average beta function could be smaller in an X-band linac than in an S-band linac. On the other hand, for a given FODO cell length L_cell, one needs a larger average beta function to keep tan(ψ_c/2) small. Considering accelerator structure misalignment errors that are random about a smoothed trajectory but systematic between a pair of quadrupole magnets, the emittance dilution due to transverse wakefields, Δε_wake, can be estimated following Ref. [17] as a function of ⟨y²_a⟩ (the mean-square accelerator structure offset), N_e (the number of electrons per bunch), and W_⊥ (the transverse wake potential). As the transverse wake potential W_⊥ is inversely proportional to the fourth power of the cavity iris radius, it is much stronger in an X-band linac than in an S-band linac. One then needs to adopt a shorter cell length L_cell and a smaller average beta function β̄ = L_cell / sin ψ_c, in order to keep the emittance dilution Δε_wake from wakefield impacts small.
The final choice of the average beta function in the linac should balance the control of emittance dilution from dispersive effects against that from transverse wakefields. Making an analytical estimate with the two formulas above and the associated accelerator parameters, assuming rms offsets y_BPM = y_a = 200 μm, one finds that dispersive effects dominate the transverse emittance dilution here, given the relatively large energy spread and the very low electron bunch charge of 10 pC. If a true balance Δε_dis = Δε_wake were imposed, one would end up with unrealistically large beta functions. This conclusion is also confirmed by the numerical simulation results shown in the following sections. A relatively long FODO cell length, or quadrupole spacing, should therefore be adopted here to suppress the possible dispersive emittance dilution.

Another consideration is to employ fewer quadrupole magnets in all three linacs of this compact FEL driver, which decreases the total cost. Based on all the above considerations, the quadrupole spacing is chosen to be 5 meters in Linac2 and 10 meters in Linac3, which gives average beta functions of roughly 10 and 20 meters, respectively. This is a relatively weak focusing structure; in Linac3 the energy-normalized quadrupole strength is even weaker, at K1 = 0.6. In comparison, an average beta function of 30 to 40 meters is employed in the LCLS S-band rf linac. The quadrupole length is chosen to be 0.15 m in both Linac2 and Linac3, with a total of 18 quadrupoles in all. The associated quadrupole pole-tip field ranges roughly from 0.2 to 1 kG for beam energies from 250 MeV to 6 GeV. It should be easier to achieve a good field quality given the low quadrupole pole-tip fields employed here.

A laser heater is installed in Linac1 before bunch compressor one to introduce roughly 2.5 keV of uncorrelated energy spread, which helps provide enough Landau damping to suppress collective effects such as longitudinal space charge effects over the whole linac. The length of Linac1 is 5 meters; it accelerates the electron bunch from 54 to 250 MeV at an off-crest rf phase of −15 degrees.

Both bunch compressors are designed as simple four-dipole chicanes with total lengths of 4-5 meters. For bunch compressor one at 250 MeV, the dipole length is 0.2 meters and the first-order longitudinal dispersion is R56 = −14 mm. A bunch compression ratio of 7 is achieved in this first stage, which is a reasonable value for maintaining a quasilinear bunch compression process [9]. Linac2 operates at an rf phase of −20 degrees and boosts the beam energy to roughly 1.5 GeV, with a correlated energy spread of 0.085%. Bunch compressor two employs even weaker dipole magnets than bunch compressor one, resulting in a first-order longitudinal dispersion of R56 = −8 mm. Note that the dipole magnet length is increased from 0.2 to 0.4 meters in order to minimize the ISR and CSR impacts on the transverse emittance.

The Twiss parameters of the bunch compressors are designed such that the overall impact of CSR on the horizontal emittance growth is minimized. Basically, the horizontal beta function should decrease as the bunch length decreases, reaching a minimum at the end of the bunch compressor, as shown in Fig. 3. These optimization methods have been proven effective experimentally at LCLS [7]. The transverse optics design is similar for both bunch compressors.
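Returning to the FODO-cell choice above, the quoted average beta functions are consistent with the thin-lens estimate β̄ ≈ L_cell / sin ψ_c, assuming one FODO cell spans two quadrupole spacings (an assumption made for this check):

import math

# Average beta of a FODO cell, thin-lens estimate: beta_avg ~ L_cell / sin(psi_c).
# Assumes one FODO cell spans two quadrupole spacings; values from the text.
psi_c = math.radians(72.0)          # betatron phase advance per cell

for linac, quad_spacing in [("Linac2", 5.0), ("Linac3", 10.0)]:
    l_cell = 2.0 * quad_spacing     # metres
    beta_avg = l_cell / math.sin(psi_c)
    print(f"{linac}: cell {l_cell:.0f} m -> average beta ~ {beta_avg:.0f} m")
# Linac2: ~11 m, Linac3: ~21 m, consistent with the quoted 10 and 20 m.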
There is one matching section at each end of the bunch compressors, composed of several quadrupoles, which matches the Twiss parameters between the linacs and the bunch compressors. The electron bunch runs on-crest in Linac3, and the correlated energy offset established in the previous linacs is removed by the strong longitudinal wakefield of the X-band rf cavities. The length of Linac3 is 90 meters, and the beam energy is pushed up to 6 GeV at the linac end, for hard x-ray FEL radiation at 0.15 nm. The start-to-end linear optics is shown in Fig. 4, for a beam energy boosted from 54 MeV to 6 GeV. The maximum beta function occurs in the matching sections between the linac and each bunch compressor, and is roughly 45 meters.

B. X-band wakefield

An analytical formula for the longitudinal point-charge wake potential of a linac accelerating structure was given by Bane [18]. This function, when convolved with a bunch distribution, gives the wake induced by the bunch. The point-charge wake is

W_L(s) = (Z₀ c / π a²) Θ(s) exp(−√(s/s₀₀)),  with  s₀₀ = 0.41 a^1.8 g^1.6 / L^2.4,

where Z₀ denotes the vacuum impedance, c the speed of light, a the iris radius, Θ(s) a step function [Θ(s) = 1 for s > 0, and Θ(s) = 0 for s < 0], g the gap, and L the rf period length. Similarly, an asymptotic short-range solution is obtained with the same fitting method, and the single-bunch transverse point-charge wake potential is then expressed as [18]

W_⊥(s) = (4 Z₀ c s₀₁ / π a⁴) Θ(s) [1 − (1 + √(s/s₀₁)) exp(−√(s/s₀₁))],  with  s₀₁ = 0.169 a^1.79 g^0.38 / L^1.17.

Using these formulas, plus the NLC/GLC H75 X-band structure (SLAC internal reference name) parameters, both longitudinal and transverse X-band wakefield data files are generated in the ELEGANT [19] format (SDDS format). A very small longitudinal slice length of 0.01 μm is employed to improve the precision of the wakefield calculation, given an ultrashort final bunch length of around 0.3 μm.

Both the transverse and longitudinal wakefields are stronger in X-band structures, due to their smaller apertures, than in lower-frequency rf structures such as S-band or L-band. For an FEL driver, the possible advantage of the X-band longitudinal wakefield is that it is more efficient for manipulating the electron beam's longitudinal phase space, such as canceling the timing-jitter-induced bunch length (peak current) variation in Linac2, and removing the residual energy correlation in Linac3, where off-crest acceleration has a negligible effect since the final bunch length is too short. The possible disadvantage is that very precise alignment of the accelerator components and control (steering) of the electron beam trajectory are required.

For this FEL design, the linac optics is first optimized to suppress the possible transverse emittance growth from the strong X-band structure wakefield. The transverse emittance dilution is then evaluated in simulations with 100 seeds of random misalignment errors. It is also noted that a proper steering technique, such as one-to-one steering or dispersion-free steering, should be adopted to control the beam trajectory and suppress the transverse emittance growth.
FIG. 3. Beta functions and dispersion functions in the bunch compressors. Left: bunch compressor one, with a total length of 4 meters and 0.2-meter-long dipole magnets; right: bunch compressor two, with a length of 5 meters and 0.4-meter-long dipole magnets. Black curve: horizontal beta function; red curve: vertical beta function; blue curve: horizontal dispersion function; green curve: horizontal angular dispersion function. The transverse optics is quite similar in the two bunch compressors, with the beta function decreasing from 12 to 2 meters through the chicane, a design that minimizes the CSR-induced transverse emittance growth.

FIG. 4. Beta functions and dispersion functions of the overall accelerator, from the Linac0 end (see Fig. 1), at an electron beam energy of 54 MeV, to the linac end at a beam energy of 6 GeV. Black curve: horizontal beta function; red curve: vertical beta function; blue curve: horizontal dispersion function; green curve: horizontal angular dispersion function. Periodic FODO cells are employed for the X-band rf linac, with average beta functions of 10 and 20 meters in Linac2 and Linac3, respectively. Simple matching sections are employed between the linac periodic cells and the bunch compressors. The first-stage bunch compressor is located where the beam energy is 250 MeV, and the second-stage compressor at 1.5 GeV.

Good steering is also necessary to achieve the required final bunch length and peak current. The details will be discussed in Sec. III E.

C. ELEGANT simulation

The designed optics is translated from MAD8 [16] format to ELEGANT [19] format, where a true 3D simulation is performed with half a million macroparticles. As mentioned above, the initial bunch distribution is taken from the output of the ASTRA simulation. Longitudinal space charge effects are included in all the drift spaces of the linac sections, as are CSR (the 1D model in ELEGANT) and ISR effects in the bunch compressors. Both longitudinal and transverse wakefield effects are included in all the rf cavity elements, where an average real estate acceleration gradient of 60 MV/m is adopted; steering of the beam with random misalignment errors is discussed in the next subsection.

The initial bunch from the ASTRA simulation at 54 MeV has an rms bunch length of 140 fs and a bunch charge of 10 pC, with a normalized transverse emittance of 0.14 μm in both planes. With the energy modulation established in Linac1 along the longitudinal direction, the bunch is compressed (undercompressed) to an rms length of 30 fs at the exit of bunch compressor one. The bunch is further accelerated from 250 MeV to 1.5 GeV at an rf phase of −20 degrees. The electron bunch is then further compressed (undercompressed) to an rms length of 0.6 fs and a FWHM length of 1.2 fs in bunch compressor two.

The subsequent evolution of the bunch longitudinal phase space and current profile during the acceleration and compression process is shown in Figs. 5 and 6, from the Linac1 end to the Linac2 end.
One observes that in Linac1 a quasilinear energy correlation is established by running at an rf phase of −15 degrees, with a relatively short electron bunch length of 42 μm. After the electron bunch is compressed to an rms length of 10 μm in bunch compressor one, the longitudinal phase space becomes more nonlinear, partly due to the higher-order dispersion in bunch compressor one and the nonlinear chirp established in Linac1. After further acceleration in Linac2 at an rf phase of −20 degrees, these nonlinear energy chirps are largely diminished by adiabatic damping, and the longitudinal phase space is quasilinear again, as shown in the third subplot of Figs. 5 and 6. In bunch compressor two, the electron bunch is further compressed to an rms length of 0.4 μm, with a peak current above 3 kA. Meanwhile, at the end of Linac2, a quasilinear residual energy correlation remains on the electron bunch, such that the bunch head has a lower energy than its tail, as shown in the fourth subplot of Figs. 5 and 6. Coherent synchrotron radiation (CSR) changes the particle energies in the bunch compressor, which is a dispersive region; it therefore changes the slice trajectories and increases the projected horizontal emittance of the electron bunch. As mentioned above, the LCLS experience [7] is applied here to minimize the impact of the CSR effect on the horizontal emittance, which is quite effective. Another measure is to undercompress in both stages and keep the electron bunch away from the full compression state. The longitudinally sliced emittance is preserved during the acceleration and both stages of bunch compression in the core part of the electron bunch. The normalized horizontal emittance does increase to 0.25 μm in the tail of the electron bunch, although this part represents only a very small portion of the beam. Finally, as the CSR effect is proportional to bunch charge, its impact on the electron energy is relatively small here, since a very low bunch charge of 10 pC is adopted. In comparison, for a high bunch charge of 250 pC, the CSR impact on the emittance growth could be much larger and more difficult to suppress.

The main function of Linac3 is to boost the beam energy to 6 GeV and remove the correlated energy offset generated in the previous linacs for bunch compression purposes. As the X-band longitudinal wakefield is strong and its energy chirp has the opposite sign to the rf-induced energy correlation, it is possible to remove the energy correlation using only the wakefields within a reasonable length. Moreover, as the final bunch length is very short, on the femtosecond scale, it is not effective to generate an energy offset (chirp) by off-crest rf acceleration, even for an X-band rf system. Here the Linac3 length is chosen to be 90 meters, which is long enough to generate the required energy correlation from the longitudinal wakefield.

The final beam properties at the Linac3 end, which is also the undulator entrance, are shown in Fig. 7.
A flat energy profile is achieved, as can be observed in the longitudinal phase-space plot. The longitudinally sliced energy spread is below 1 × 10⁻⁴ at a beam energy of 6 GeV, while the longitudinally sliced horizontal normalized emittance is around 0.16 μm, both measured for the core part of the bunch. A peak electron bunch current of 4 kA is produced with a pulse length of 2 fs. A high peak current and a small transverse emittance are essential to achieve a short gain length, which in turn shortens the FEL saturation length. A small energy spread along the bunch helps achieve a small radiation bandwidth.

FIG. 6. Current profile evolution during the acceleration and bunch compression process. Observing locations, from left top and right top to left bottom and right bottom: Linac1 end; bunch compressor one end; Linac2 end; bunch compressor two end. The rms electron bunch length is undercompressed from 140 to 34 fs in the first bunch compression stage, then further undercompressed to 2 fs in the second stage.

D. Timing jitter

In general, FEL lasing and saturation require a very bright electron beam with a high peak current and a small 3D normalized emittance. The stability of the final beam current depends on timing and rf phase jitter, as well as on the bunch compressor parameters and electron bunch charge variations. Gun-laser-to-linac-rf timing jitter (referred to as timing jitter in the following sections) is considered correlated for all the linac sections and is the main timing error discussed in this paper, as the sum of the uncorrelated (random) rf phase jitter (different from rf source to source) tends to be small. There is also a limit on the final electron beam energy variation, as the FEL lasing wavelength is inversely proportional to the square of the final beam energy. The beam energy deviation in the linac sections and in the bunch compressors comes from rf acceleration gradient errors and rf phase errors, and it also depends on the timing jitter between the gun laser and the linac rf system. The difference between this all-X-band FEL and the S-band-based LCLS is relatively small, given similar feedback loops running. The details of beam energy and bunch charge variations are not discussed in this paper; only estimates of the deterioration in FEL performance are given.

According to LCLS experience, a tolerance of 0.1% on the relative final electron energy and a 12% peak current variation must be met in order to achieve successful FEL operation [7]. The tolerance on the initial bunch charge is −6%, given a peak current variation of −12%. Given the achieved LCLS S-band gun-laser stability, which provides a charge variation of 1.1%, a charge variation of 4% is pessimistically anticipated in this X-band system, considering the gun laser properties and cathode quantum efficiency. A charge variation of 4% would in turn increase the FEL power gain length (and saturation length) by 2.5% and decrease the output FEL saturation power by 6%, as estimated under 1D SASE theory. The achievable charge stability depends on the gun laser properties, the cathode quantum efficiency, and the chosen phase with respect to zero crossing (which differs considerably between various X-band guns, such as the 5.59-cell and the 5.5-cell versions), and will be measured experimentally on various X-band guns.
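As a cross-check of the Fig. 7 numbers just quoted, the peak current implied by the 10 pC charge and an approximately Gaussian 0.3 μm rms bunch can be computed directly:

import math

# Peak-current consistency check, assuming a roughly Gaussian current profile:
# I_pk ~ Q*c / (sqrt(2*pi) * sigma_z).
q = 10e-12                     # C, bunch charge
c = 299792458.0                # m/s
sigma_z = 0.3e-6               # m, final rms bunch length

i_pk = q * c / (math.sqrt(2.0 * math.pi) * sigma_z)
sigma_t = sigma_z / c
print(f"peak current ~ {i_pk/1e3:.1f} kA")        # ~4.0 kA
print(f"rms duration ~ {sigma_t*1e15:.1f} fs")    # ~1 fs rms, i.e. a ~2 fs pulse

The ~4 kA and ~2 fs obtained this way agree with the quoted final beam properties.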
Since the rf frequency and wavelength differ by a factor of 4 between S-band and X-band, the same timing jitter measured in absolute time has a much larger impact on this X-band-based FEL. For a low-charge operation mode, such as the 10 pC proposed here, the initial bunch peak current is lower than in a high-charge mode (such as 250 pC). This means one needs to compress the electron bunch length more in order to achieve a similar final peak current, such as 3 kA. As illustrated by the bunch length variation formula of Ref. [20], based on a linear model of a single bunch compression stage, a large bunch compression ratio implies a tighter tolerance on the timing jitter between the photoinjector laser and the linac rf system. The relative bunch length variation Δσ_zf/σ_zf can be written as a function of Δφ_rf, the timing jitter between the photoinjector laser phase and the linac rf phase; φ_rf, the design linac rf phase (with respect to crest); σ_zf, the design bunch length after one compression stage; and C₀, the design compression ratio. One also observes from this formula that choosing a large rf phase φ_rf makes the relative change in bunch length Δσ_zf/σ_zf smaller. The bunch charge error also affects the variation of the final peak bunch current, but it is not discussed here as it has little connection with the rf frequency.

FIG. 7. Electron beam properties at the end of Linac3. Left top: longitudinal phase space; left bottom: longitudinally sliced energy spread; right top: current profile; right bottom: longitudinally sliced emittance. A normalized transverse emittance of 0.14 μm is preserved in the core part of the electron bunch, with a pulse duration of 2 fs. A peak current of 4 kA and a longitudinally sliced relative energy spread below 1 × 10⁻⁴ are achieved. The residual correlated energy offset established in Linac1 and Linac2 is removed in Linac3 with the help of the strong X-band rf longitudinal wakefield, and a flat final energy profile is achieved at a beam energy of 6 GeV.

ELEGANT [19] simulations are performed with a timing jitter of up to 50 fs applied to all linac rf phases, a level successfully achieved in LCLS operation [7]. This timing jitter of 50 fs changes the Linac1 rf phase and in turn changes the electron bunch length at the end of bunch compressor one. The final relative rms bunch length variation is around 20% with the 35-meter-long Linac2, given a timing jitter of 50 fs between the laser and the linac rf. One needs either to improve the timing system and achieve a timing error roughly below 25 fs, or to find other techniques to cancel the timing jitter effects, in order to meet the 12% peak current variation upper limit. Under 1D SASE theory, given an FEL radiation wavelength of 0.15 nm and the other beam parameters discussed above, a timing jitter of 50 fs between laser and linac rf would increase the FEL power gain length (and, accordingly, the saturation length) by 7%. The output FEL saturation power is then 92.8% of the design power.
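The factor-of-4 frequency scaling at the start of this subsection can be made concrete: a fixed absolute timing jitter Δt corresponds to an rf phase error Δφ = 360° · f_rf · Δt. For the 50 fs jitter quoted above:

# Phase error produced by a fixed absolute timing jitter: dphi = 360 * f * dt.
dt = 50e-15                                    # s, laser-to-rf timing jitter
for band, f in [("S-band", 2.856e9), ("X-band", 11.424e9)]:
    dphi = 360.0 * f * dt                      # degrees of rf phase
    print(f"{band}: {dphi:.3f} deg")
# S-band: 0.051 deg; X-band: 0.206 deg -- the same jitter is 4x worse at X-band.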
For this FEL accelerator design, a rising rf slope is employed for the bunch compression systems with conventional four-dipole chicanes, so that the electron bunch head has a lower energy than its tail. In that case, the longitudinal-wakefield-induced energy chirp always has the opposite sign to the linac-rf-induced energy chirp. The longitudinal wakefield in Linac2 can therefore be used to partially or fully cancel the timing-jitter-induced bunch length variation [7,21] and generate a more constant final electron current profile. The final electron bunch length at the end of BC2 can then be expressed as in Ref. [21], with timing jitter and wakefield effects considered up to first order; the rf phases φ₁ and φ₂ denote the phases with the wakefield-induced energy chirp subtracted. In this expression, D(σ_z1, k₂) denotes the per-unit-length change of the wakefield-induced energy chirp in Linac2, which is a function of the rf structure (frequency) and the electron bunch length in Linac2; k₁ and k₂ are the rf wave numbers of Linac1 and Linac2; and L_Linac2 is the length of Linac2. In general, the timing-jitter-induced change in the bunch compression ratio (in BC1 and BC2) is compensated by the longitudinal-wakefield-induced energy correlation (chirp) in Linac2. In Linac1 the longitudinal wakefield is weak, as Linac1 is short, and one can also adopt a Linac1 rf phase with the wakefield-induced energy chirp subtracted. In Linac3, the timing jitter introduces only a slight energy variation.

Based on these arguments, one can decrease the acceleration gradient in Linac2 and increase the Linac2 length (employing more rf cavities), keeping the same beam energy at bunch compressor two, and employ a stronger longitudinal wakefield in Linac2 to cancel the timing jitter effect. However, a longer total accelerator length and a higher total cost accompany a longer Linac2 with more rf cavities. A trade-off must be made between these factors and the tolerable timing jitter. Analytical and numerical studies show that decreasing the average acceleration gradient from 80 to 65 MV/m in Linac2 would reduce the final peak current variation to 12%. The timing-jitter-induced peak current variation could be fully compensated if an average acceleration gradient of 50 MV/m were adopted in Linac2. The parameters used in this study are optimized to achieve the shortest accelerator length and lowest cost that still provide acceptable FEL performance.

E. Misalignment

In general, the misalignment of the rf structures and quadrupole magnets has two consequences. First, the misalignment of quadrupoles introduces additional dipole kicks on the electron beam, which in turn generate net dispersion. Given the relatively large energy spread in the FEL driver, quadrupole misalignment thus introduces dispersive emittance dilution. Second, misalignment of the X-band rf structures subjects the electron beam to stronger transverse wakefield kicks (than in S-band or other lower-frequency rf systems), which also introduces emittance dilution. As mentioned above, an X-band rf structure has a much smaller radius, and therefore a much stronger transverse wakefield, than L-band or S-band rf structures. A transverse wakefield imposes a longitudinal-position-dependent transverse kick on the electron beam and thus increases the projected emittance.
Here, the transverse wakefield effects are preliminarily evaluated in ELEGANT simulations employing 100k macroparticles. First, an rms value of 20 μm is adopted for the random offsets of all the quadrupoles and rf structures in both the horizontal and vertical planes. With no steering of the electron beam, the transverse emittance dilution is still negligible compared with the perfect case, where emittance grows only in the bending plane (mainly due to CSR effects). This small projected emittance growth is dominated by energy-spread-related dispersive and chromatic emittance dilution, and the transverse wakefield has a negligible effect. The final bunch length and peak current are unchanged. These simulation results show that the linac optics is weak enough in betatron focusing to suppress dispersive and chromatic emittance dilution.

Next, horizontal and vertical offsets with an rms value of 200 μm are generated randomly for all the quadrupoles and rf structures in the linac, while an rms value of 200 μm is assumed for the offset between the BPM electrical center and the quadrupole magnetic center. These 200 μm values are assumed in a reasonable manner and are at a level similar to that of future linear colliders. In the FEL accelerator optics there is one BPM and one steering corrector attached to each quadrupole, measuring and steering in both the horizontal and vertical planes. 100 random seeds are generated for the quadrupole (rf structure) offsets and the BPM-to-quadrupole offsets. For each error seed, either one-to-one steering or dispersion-free steering (DFS) [22] is employed to calculate the required corrector strengths and steer the electron beam onto a corrected trajectory. As shown in Fig. 8, the projected normalized emittance is almost preserved with either steering method applied for only one iteration. For better illustration, in the bunch compressors the dispersion-subtracted emittance is used instead of the projected emittance. Again, the projected emittance dilution from the transverse wakefield is negligible, thanks to the very low bunch charge of 10 pC. Most of the transverse emittance growth is due to CSR impacts in the bunch compression process. Note also that the longitudinally sliced emittance is not affected by the transverse wakefield.
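The one-to-one steering used above solves for corrector kicks that zero the BPM readings at the quadrupoles. The following is a generic least-squares sketch, not the ELEGANT implementation, with a random and purely hypothetical orbit response matrix:

import numpy as np

# Generic one-to-one steering sketch: find corrector kicks theta that zero the
# BPM readings, b + R @ theta = 0, in the least-squares sense. The response
# matrix R here is random and purely illustrative, not the machine's.
rng = np.random.default_rng(0)
n_bpm, n_corr = 18, 18                 # one BPM and one corrector per quadrupole
R = rng.normal(size=(n_bpm, n_corr))   # orbit response matrix, hypothetical
b = rng.normal(scale=0.2, size=n_bpm)  # initial BPM readings [mm], hypothetical

theta, *_ = np.linalg.lstsq(R, -b, rcond=None)
residual = b + R @ theta
print(f"rms orbit before: {b.std():.3f} mm, after: {residual.std():.2e} mm")

Dispersion-free steering additionally constrains the difference orbit measured at different beam energies, which suppresses the dispersive contribution more directly than simply zeroing the BPM readings.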
IV. FREE ELECTRON LASER PERFORMANCE

The high-quality electron bunch generated above is ready to drive a hard x-ray free electron laser (FEL). As mentioned above, this paper studies a low-charge operation mode (10 pC) with an ultrashort photon pulse length of 2-4 fs. The electron bunch distribution generated by the ELEGANT simulation is fed into an undulator system, and the associated FEL performance is simulated and evaluated with the code GENESIS [10].

In the following, we show a concrete example of lasing at 0.15 nm, a wavelength equivalent to that achieved at LCLS. An ideal undulator is designed with a short period of λ_w = 1.5 cm, and the undulator strength K is chosen according to the planar-undulator resonance condition, Eq. (9),

λ_r = (λ_w / 2γ²) (1 + K²/2),  (9)

where γ is the Lorentz factor of the centroid electron energy. With the centroid energy of the electron bunch chosen at 6 GeV, the resonant FEL wavelength is λ_r = 0.15 nm. No nonlinear magnetic field is included in the undulator model. In comparison, LCLS has to go to a higher beam energy of 14 GeV to reach a resonant radiation wavelength of 0.15 nm, as its undulator has a longer period of 3 cm. An external interspersed FODO focusing lattice is assumed for the undulator system, so that the average beta function is about 13 m in the undulator.

Given such an electron bunch with very small sliced transverse emittance and energy spread, the FEL radiation reaches saturation within a 20-m-long undulator, with a saturation power of P_sat ≈ 10 GW, as shown in Fig. 9. In comparison, LCLS has a similar FEL lasing power with a longer saturation length of 60 m. An overall undulator length of 30 m should be enough to achieve saturation. For such a system, the cooperation length of the FEL is about l_c ≈ 0.1 μm. As mentioned above, the final electron FWHM bunch length is about 0.3 μm, as shown in Fig. 7, so we expect the FEL pulse to approach the single-spike operation mode. The GENESIS simulation supports this expectation. The temporal and spectral profiles of the FEL pulse at 20 m into the undulator are shown in Figs. 10 and 11, respectively. One sees that the final FEL pulse reaches a single coherent mode in such a low-charge scheme.

The radiation wavelength can be tuned over the range 1.5-0.15 nm with the same undulator configuration by changing the electron beam energy between 2 and 6 GeV. For a lower beam energy of 2 GeV and a radiation wavelength of 1.5 nm, it should be easier to achieve saturation, as the gain length is shorter. As mentioned before, the radiation wavelength, the saturation power and length, and the tunable wavelength range are comparable to the low-charge mode (20 pC) of LCLS. The total number of photons from a single bunch is only half the LCLS value, as the bunch charge is half. One could then employ a higher repetition rate to increase the photon energy delivered per unit time.
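Given the resonance condition of Eq. (9), the undulator strength implied by the quoted parameters can be checked directly. The paper does not state K explicitly, so the value below is an inference from λ_w = 1.5 cm, a 6 GeV beam, and λ_r = 0.15 nm:

import math

# Planar-undulator resonance: lambda_r = (lambda_w / (2*gamma**2)) * (1 + K**2/2).
# Solve for K from the quoted 6 GeV beam energy, 1.5 cm period, 0.15 nm target.
e_beam_mev = 6000.0
gamma = e_beam_mev / 0.511                 # Lorentz factor
lam_w = 0.015                              # m, undulator period
lam_r = 0.15e-9                            # m, target radiation wavelength

k_sq = 2.0 * (2.0 * gamma**2 * lam_r / lam_w - 1.0)
print(f"required undulator K ~ {math.sqrt(k_sq):.2f}")   # ~1.9

The same formula confirms the LCLS comparison in the text: with a 3 cm period, reaching 0.15 nm requires a beam energy of roughly 14 GeV at a comparable K.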
V. CONCLUSION AND DISCUSSION

A compact hard x-ray FEL design based entirely on X-band technology has been presented in this paper. The total length of this FEL accelerator plus undulator is roughly 180 meters, from the X-band photoinjector to the undulator end, which makes it possible to build where only limited space is available. Conventional four-dipole chicanes are employed as bunch compressors in both compression stages. The optics design of the bunch compressors and linac is optimized to minimize the impacts of CSR, dispersion, and wakefield effects. This FEL design is dedicated to a low-charge mode operated at 10 pC, with a final photon pulse length of roughly 2 fs. 3D start-to-end simulations are employed to evaluate the performance of this hard x-ray FEL, using the computer codes ASTRA, ELEGANT, and GENESIS. A final beam current above 3 kA is achieved with a small transverse emittance and a small relative energy spread at a beam energy of 6 GeV. To maintain a relatively high average FEL power, a higher repetition rate could be employed in this FEL, while a final pulse length of 2 fs is generated with a bunch charge of 10 pC. In an undulator with a period of λ_w = 1.5 cm, the compressed electron beam lases successfully at a wavelength of λ_r = 0.15 nm, and the FEL power saturates at 10 GW over a length of 20 meters. For this low-charge operation mode, the FEL pulse approaches a single coherent spike. The radiation wavelength can be tuned by changing the final electron beam energy. The timing jitter effects between the photoinjector laser and the linac rf phase are evaluated analytically; their impact on the final bunch length (peak current) variation can be minimized by employing a longer Linac2 with a lower acceleration gradient. Preliminary studies show that the transverse-wakefield-induced emittance dilution is negligible, as a low charge of 10 pC is employed. The additional emittance dilution can be suppressed by one-to-one or dispersion-free steering after only one iteration, while the overall emittance growth is mainly due to CSR impacts in bunch compressor two. The performance of this X-band-driven hard x-ray FEL is equivalent to the low-charge mode of LCLS, with a much shorter accelerator-plus-undulator length of 180 meters, compared with 1230 meters for the LCLS accelerator and undulator.

FIG. 5. Longitudinal phase space evolution during the acceleration and bunch compression process. Observing locations, from left top and right top to left bottom and right bottom: Linac1 end; bunch compressor one end; Linac2 end; bunch compressor two end. The rms electron bunch length is undercompressed from 140 to 34 fs in the first bunch compression stage, then further undercompressed to 2 fs in the second stage.

FIG. 8. Left: after one iteration of one-to-one steering, the dispersive and chromatic (energy-spread-correlated) emittance dilution is suppressed. Right: after one iteration of dispersion-free steering, the dispersive and chromatic emittance dilution is suppressed. In the simulation, 200 μm (rms) random offsets are applied to all the quadrupoles and rf structures, and 200 μm is assumed for the offset between the BPM electrical center and the quadrupole magnetic center. The result is an average over 100 random error seeds.

TABLE I. The rf parameters of each linac.

TABLE II. ASTRA simulation results, optimized for 3 different initial laser profiles.
11,011
2012-02-17T00:00:00.000
[ "Physics" ]
SIRT4 silencing in tumor-associated macrophages promotes HCC development via PPARδ signalling-mediated alternative activation of macrophages

Background The activation of tumour-associated macrophages (TAMs) contributes to the progression of hepatocellular carcinoma (HCC). SIRT4 acts as a suppressor of tumour growth by regulating cell metabolism and inflammation and by exerting anti-tumourigenic effects. However, the involvement of SIRT4 in the activation of TAMs is unknown. Based on previous findings, the expression of SIRT4 in distinct groups of TAMs, as well as the effect of SIRT4 silencing on macrophage polarization, was investigated. Methods The expression of SIRT4 in HCC tissues and peritumour tissues was tested by qRT-PCR, western blotting, and histological analysis. A Kaplan-Meier survival curve was generated based on the expression of SIRT4 in the HCC samples. Next, immunofluorescence staining was used to evaluate distinct groups of TAMs in human HCC samples, and the expression of SIRT4 in M1 and M2 TAMs was examined by flow cytometry. A homograft mouse model was used to assess the effect of SIRT4 silencing in TAMs on the development of HCC cells. Results SIRT4 was significantly downregulated in HCC tumour tissues, and the expression of SIRT4 in peritumour tissues was positively associated with patient survival. We further found that downregulation of SIRT4 was associated with increased macrophage infiltration and a high M2/M1 macrophage ratio in HCC peritumour tissues. Using gene interference, we found that SIRT4 silencing in TAMs significantly modulated the alternative activation of macrophages and promoted HCC cell growth in vitro and in vivo. Mechanistically, we revealed that HCC-conditioned medium (HCM) restricted the expression of SIRT4 in macrophages and promoted alternative activation of macrophages via the FAO-PPARδ-STAT3 axis. Furthermore, we also revealed that elevated MCP-1 expression induced by SIRT4 downregulation was responsible for increased TAM infiltration in peritumour tissues. Conclusions Overall, our results demonstrate that downregulation of SIRT4 in TAMs modulates the alternative activation of macrophages and promotes HCC development via the FAO-PPARδ-STAT3 axis. These results could provide a new therapeutic target for the treatment of HCC.

Background Hepatocellular carcinoma (HCC) is among the top causes of cancer-related mortality [1]. While great strides have been made in treating HCC, the most common therapies remain surgical resection and liver transplantation. Unfortunately, the high mortality of liver cancer is related to its high recurrence and metastasis rates [2,3]. As these outcomes mostly occur in the postoperative residual liver, recent studies have highlighted the significance of the tumour microenvironment in the development, metastasis, and recurrence of HCC [4,5]. Tumour-associated macrophages (TAMs) are abundant in the tumour microenvironment and vital to tumour development and metastasis [6]. TAMs usually polarise to the M2-like phenotype [7,8] and express high levels of IL-10, CD206, and arginase (Arg)-1, while producing low levels of inducible nitric oxide synthase (iNOS), IL-12, and tumour necrosis factor-ɑ (TNF-ɑ). SIRT4 is a member of the sirtuin family (SIRT1-7), which affects cellular proliferation, stress resistance, metabolic regulation, inflammation, and cancer [9]. SIRT4 acts as an ADP-ribosyltransferase and exhibits demalonylase and deacetylase activities in certain tissues [10].
As a mitochondrial sirtuin, SIRT4 is involved in fatty acid oxidation as well as mitochondrial gene expression in the liver and muscle [11]. In addition, SIRT4 can inactivate glutamate dehydrogenase to inhibit tumour formation [12]. Recent studies have found that SIRT4 can affect the inflammatory response in several tissues. It has been reported that SIRT4 suppresses proinflammatory cytokines in human umbilical vein endothelial cells [13,14], and some studies have reported that SIRT4 plays an important role in resolving immune tolerance in monocytes [15]. However, no evidence currently addresses the effect of SIRT4 on the inflammatory response in the liver. The results of this study demonstrate that downregulation of SIRT4 in TAMs and para-cancerous hepatocytes affects the development of HCC as well as the prognosis of HCC patients. In this study, we found that downregulation of SIRT4 in TAMs modulates the alternative activation of macrophages via the FAO-PPARδ-STAT3 axis and that downregulation of SIRT4 in para-cancerous hepatocytes promotes macrophage infiltration through enhanced MCP-1 expression via the NF-κB pathway. Therefore, SIRT4 is a promising target in HCC immunotherapy for reversing macrophage-induced immunosuppression in the tumour microenvironment.

Cell lines and cell cultures: Human HCC cell lines (Huh7 and HepG2) and mouse hepatoma cell lines (H22 and Hepa1-6) were purchased from the American Type Culture Collection (ATCC, Rockville, MD, USA). These cell lines were maintained in Dulbecco's modified Eagle's medium (DMEM; HyClone, Logan, UT, USA) plus 10% foetal bovine serum (FBS) and 1% penicillin G and streptomycin at 37°C in humidified air containing 5% CO2. The human monocytic cell line THP-1 was cultured in RPMI 1640 supplemented with 10% foetal calf serum.

Human subjects: A tissue microarray that included 90 HCC tissues and matched surrounding tissues collected between 2007 and 2009 was purchased from Shanghai Outdo Biotech (Shanghai, China). The gender, age, stage, tumour size, pathological type, and clinical stage of the patients were also obtained (Additional file 1: Table S1). All patients were followed for 4-6 years. Fresh HCC tissues and matched surrounding tissues were obtained from primary surgery patients admitted to the Department of Surgery of the Second Affiliated Hospital of Chongqing Medical University. No chemotherapy or radiotherapy was allowed before surgical treatment. Pathologists evaluated all samples for histological diagnosis. All patients involved in this study provided informed consent that their tissues could be retained and analysed for research purposes only. The Human Research Ethics Committee of the Second Affiliated Hospital of Chongqing Medical University approved this study.

Immunohistochemistry: Immunohistochemical staining was carried out according to a prior protocol. The staining intensity was scored as follows: 0, no staining; 1, weak staining; 2, intermediate staining; and 3, strong staining. The positive rate score was determined as follows: 0, 0% of the cells stained positive; 1, 1-20% of the cells stained positive; 2, 21-40%; 3, 41-60%; 4, 61-80%; and 5, 81-100%. The total score was the combination of the staining intensity and positive staining rate scores. Samples with a total score < 6 and ≥ 6 were defined as the low and high expression groups, respectively.
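The scoring rule above lends itself to a small helper. This is a minimal sketch assuming "combination" means the sum of the two sub-scores (a product is the other common IHC convention, in which case the `+` would become `*`); the function names are illustrative only.

```python
def ihc_total_score(intensity: int, positive_pct: float) -> int:
    """Total IHC score = intensity score (0-3) + positive-rate score (0-5).

    The sum is an assumption; the protocol says only 'combination'.
    """
    if not 0 <= intensity <= 3:
        raise ValueError("intensity score must be between 0 and 3")
    # Positive-rate bins from the protocol: 0%, 1-20%, 21-40%, 41-60%, 61-80%, 81-100%
    thresholds = [(0, 0), (20, 1), (40, 2), (60, 3), (80, 4), (100, 5)]
    rate_score = next(score for upper, score in thresholds if positive_pct <= upper)
    return intensity + rate_score

def expression_group(total_score: int) -> str:
    """Samples scoring >= 6 are 'high' expression, < 6 are 'low'."""
    return "high" if total_score >= 6 else "low"

print(expression_group(ihc_total_score(intensity=3, positive_pct=75)))  # high (3 + 4 = 7)
```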
In vivo tumourigenicity: Our institutional ethical board for animal experiments approved all experimental procedures, which followed the Guide for the Care and Use of Laboratory Animals. Four-week-old BALB/c nude mice (n = 30) were kept in a sterile environment to serve as hosts for the HCC homografts. H22 cells together with homologous peritoneal SIRT4-knockdown macrophages (PMs), at a ratio of 6:1, were injected subcutaneously into the liver of BALB/c mice (n = 30) to prepare the homografts. Five days after inoculation, three mice were sacrificed every 3 days, and the weight and volume of the extracted tumours were calculated. The tumours were dissected approximately 2 mm from the liver tumour margins.

Lentivirus-mediated overexpression or knockdown of SIRT4: The lentivirus-based SIRT4 overexpression or knockdown vector was constructed according to a prior protocol [12]. Lentivirus packaging and cell transduction were also carried out according to a prior protocol [16].

HCC-conditioned medium: H22 cells were cultured in serum-free DMEM for 24 h. The supernatants were collected as HCC-conditioned medium (HCM). PMs were cultured with different amounts of HCM, or for different exposure times, in 6-well plates. Different amounts of HCM were mixed with complete medium to reach a volume of 2000 μl and percentages of 0% (0 μl), 5% (100 μl), 10% (200 μl), 15% (300 μl), and 20% (400 μl).

Fig. 1 SIRT4 expression is downregulated in human HCC tissues, and the expression of SIRT4 in HCC peritumour tissues was positively associated with HCC survival. a Immunohistochemical staining was utilized to examine SIRT4 expression in HCC tumour tissues and matched peritumour tissues (magnification × 40 and × 200). b Immunohistochemical scores of SIRT4 expression in HCC tumour tissues and peritumour tissues (*P < 0.05). c Western blot analysis of SIRT4 in HCC tissues and matched peritumour tissues; equal protein loading was confirmed using GAPDH as a control. d The mRNA level of SIRT4 in HCC tissues and peritumour tissues (*P < 0.05). e Kaplan-Meier survival curve showing the correlation between SIRT4 expression in tumour tissues and survival of HCC patients (p = 0.133); 1: low SIRT4 expression, 2: high SIRT4 expression. f Kaplan-Meier survival curve showing the correlation between SIRT4 expression in peritumour tissues and survival of HCC patients (p = 0.015); 1: low SIRT4 expression, 2: high SIRT4 expression.

Cell apoptosis assay and flow cytometry measurements: M1-like TAMs were harvested after co-culture with the supernatant of SIRT4-knockdown M2-like TAMs. Then, the Annexin V-FITC/PI Cell Apoptosis kit (KeyGen, Nanjing, Jiangsu, China) was used to perform an apoptosis assay. A suspension (100 μl) of 5 × 10^5 TAMs was incubated at room temperature with 5 μl of Annexin V and 1 μl of propidium iodide (PI) for 15 min. Flow cytometry (BD Pharmingen, San Diego, CA, USA) was used to measure the apoptotic rate.

Oxygen consumption: Oxygen consumption rates (OCR) were measured in XF assay media under basal conditions (unbuffered XF assay medium containing 25 mM glucose, 2 mM glutamine and 1 mM sodium pyruvate) and in response to 1.5 μM oligomycin, 1 μM FCCP, and 1 μM rotenone with 4 μM antimycin A (Rot+Ant), with the Seahorse XF-96 Extracellular Flux Analyzer (Seahorse Bioscience). Real-time OCR was recorded according to the manufacturer's manual. The XFe Wave software (Seahorse Bioscience) was utilized to examine the results.
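The dilution series above is simple enough to tabulate programmatically; this sketch just reproduces the stated volumes for a 2000 μl final mix (the function name is illustrative).

```python
def hcm_mix(percent: float, total_ul: float = 2000) -> tuple[float, float]:
    """Return (HCM volume, complete-medium volume) in microlitres."""
    hcm_ul = total_ul * percent / 100
    return hcm_ul, total_ul - hcm_ul

for pct in (0, 5, 10, 15, 20):
    hcm_ul, medium_ul = hcm_mix(pct)
    print(f"{pct:>2}% HCM: {hcm_ul:4.0f} ul HCM + {medium_ul:4.0f} ul complete medium")
# Prints 0/100/200/300/400 ul of HCM, matching the protocol above.
```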
Macrophage preparation and polarization: PMs were harvested from BALB/c mice 72 h after intraperitoneal injection of 1 mL of sterile 6% starch solution. PMs were stimulated with lipopolysaccharide (LPS; 100 ng/mL, 24 h) to induce M1 polarization or with HCM (15%) to induce M2 polarization. The control group was stimulated with phosphate-buffered saline.

Western blot analysis: The cells were lysed using a cell lysis buffer (Cell Signaling, USA). A NE-PER nuclear protein extraction kit was used for nuclear protein extraction, following the manufacturer's protocol (Thermo Scientific, USA).

Analysis of cell proliferation and invasion: Hepa1-6 cells were cultured with the supernatant from SIRT4-knockdown M2 macrophages or controls. Cell Counting Kit-8 (CCK-8, Beyotime, China) was used for cell proliferation analysis. Cell invasion assays were performed in two parts of the study: 1. In the study of SIRT4-knockdown M2 macrophages promoting invasion of HCC cells, 1 × 10^5 Hepa1-6 cells were added to the upper chamber and co-cultured with the supernatant of SIRT4-knockdown M2 macrophages. 2. In the study of elevated MCP-1 expression promoting TAM infiltration, 1 × 10^5 PMA-differentiated THP-1 cells were added to the upper chamber and co-cultured with the supernatant of SIRT4-knockdown HepG2 cells.

IL-6, IL-10, and VEGF neutralization assay: HCM was used to stimulate shSIRT4-Lv- and shCont-Lv-infected macrophages for 24 h. Then, supernatants were collected and preincubated with immunoglobulin G or the appropriate cytokine-neutralizing antibody before adding this mixture to Hepa1-6 cells. Three days later, the cells were analysed using CCK-8.

Statistical analysis: Data are reported as mean values ± SEM. All experiments were performed in triplicate. Statistical significance was determined using SPSS 21.0 software. Student's t-test was used to assess the statistical significance of the differences between experimental groups, while survival differences between groups were analysed by the log-rank test. A p-value < 0.05 was considered a significant difference.

SIRT4 levels are significantly downregulated in HCC tumour tissues: To elucidate the function of SIRT4 in HCC, tissue microarrays were used to examine the expression of SIRT4 in HCC tissues. Immunohistochemical staining showed that SIRT4 was significantly downregulated in tumour tissues compared with matched surrounding tissues (Fig. 1a and b). The expression level of the SIRT4 protein in HCC tissues was significantly lower than that in peritumour tissues (1.233 ± 0.596 vs 1.922 ± 0.396, P < 0.001). Furthermore, total protein and RNA were extracted from fresh HCC tissues and matched peritumour tissues, and western blot and qRT-PCR assays confirmed that SIRT4 was downregulated in tumour tissues compared with peritumour tissues (Fig. 1c and d). Together, these results show that SIRT4 expression is significantly downregulated in HCC tumour tissues.

Downregulation of SIRT4 in HCC peritumour tissues is associated with poor survival of HCC patients: Correlation analysis between SIRT4 expression and the clinicopathological characteristics of HCC patients revealed that SIRT4 expression in HCC peritumour tissues was negatively associated with tumour size (Table 1), but there was no correlation between SIRT4 expression in tumour tissues and the clinicopathological characteristics of HCC patients (p > 0.05). Furthermore, as shown in Fig. 1e and f, Kaplan-Meier analysis and the log-rank test showed that SIRT4 levels in HCC peritumour tissues were positively associated with the survival of HCC patients (42.9% vs 15.0%, p = 0.015), while the expression of SIRT4 in tumour tissues was not associated with the prognosis of HCC patients (p = 0.133). Therefore, these results suggest that SIRT4 may play a tumour-suppressing role in HCC peritumour tissues by inhibiting the development and migration of tumour cells, thus improving the prognosis of patients.

Downregulation of SIRT4 is associated with increased macrophage infiltration and M2 macrophages in HCC peritumour tissues: Our previous results suggest that SIRT4 expression in HCC peritumour tissues is associated with the clinicopathological characteristics and prognosis of patients. Therefore, the HCC cases were classified into two groups (SIRT4 High and SIRT4 Low) according to SIRT4 expression in HCC peritumour tissues, as described in the Methods section (Fig. 2a). It has been reported that SIRT4 may affect the inflammatory response, so its role in the tumour immune microenvironment, especially in tumour-associated macrophages (TAMs), was investigated next. As shown in Fig. 2b and c, TAMs characterized by F4/80 expression were examined by immunohistofluorescence and qRT-PCR. We found that the number of TAMs, as reflected by F4/80 expression, was much higher in the SIRT4 Low group than in the SIRT4 High group. Furthermore, the phenotype of TAMs was characterized by double immunohistofluorescence, which involved co-staining with the macrophage marker F4/80 and either inducible nitric oxide synthase (iNOS) (M1 marker) or the mannose receptor CD206 (M2 marker) (Fig. 2b). As shown in Fig. 2d, FCM analysis showed that SIRT4 expression in CD16+ (M1 marker) TAMs was significantly higher than that in CD16− TAMs, and CD206+ (M2 marker) TAMs showed low SIRT4 expression. These results indicated that SIRT4 was mainly expressed in M1 TAMs and had a low expression level in M2 TAMs. Next, we further investigated SIRT4 expression in CD68+ macrophages from tumours of different grades. As shown in Fig. 2f, we found that SIRT4 expression in CD68+ TAMs from grade III-IV tumours was much lower than that from grade I-II tumours. Moreover, Kaplan-Meier analysis showed that the downregulation of SIRT4 in CD68+ macrophages was correlated with poor survival of HCC patients (Fig. 2g).

The HCC microenvironment inhibits SIRT4 expression in macrophages, and SIRT4 silencing facilitates M2 polarization: To test our hypothesis that SIRT4 may affect TAM polarization, HCM was used in in vitro experiments. As shown in Fig. 3a-c, HCM inhibited SIRT4 expression in PMs in a concentration- and time-dependent manner, and HCM-stimulated macrophages also displayed heightened expression of M2 markers (CD206, Arg-1) and reduced expression of an M1 marker (TNF-α). Furthermore, M1-like macrophages stimulated by LPS treatment displayed enhanced expression of SIRT4 (Fig. 3d). These results are consistent with our clinical results. Given the low SIRT4 expression in the M2 phenotype, we investigated the role of SIRT4 in the alternative activation of macrophages. As shown in Fig. 3e and g, SIRT4 interference promoted M2 activation of HCM-stimulated macrophages (enhanced Arg-1, CD206, and IL-10 production but decreased TNF-α and IL-12 expression). However, the M2-like phenotype of TAMs could be reversed by overexpression of SIRT4 (Fig. 3f and h). These results indicated that SIRT4 affects the polarization of macrophages.
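The survival comparison described above (Kaplan-Meier curves compared with the log-rank test, significance at p < 0.05) can be sketched with the open-source lifelines package; the follow-up times and event indicators below are invented placeholders, since the patient-level data are not available here.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Hypothetical follow-up times (months) and death indicators (True = event observed)
t_low, t_high = rng.exponential(30, 40), rng.exponential(55, 40)
e_low, e_high = rng.random(40) < 0.8, rng.random(40) < 0.8

kmf = KaplanMeierFitter()
kmf.fit(t_low, e_low, label="SIRT4 low")    # estimate the survival function per group
print(kmf.median_survival_time_)
kmf.fit(t_high, e_high, label="SIRT4 high")
print(kmf.median_survival_time_)

# Log-rank test for the difference between the two survival curves
res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank p = {res.p_value:.4f}")
```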
SIRT4 silencing modulates TAM M2 polarization via the FAO-PPARδ-STAT3 signalling pathway: Lipid metabolism and its products play a key role in regulating macrophage functions in inflammation and its resolution. It has been reported that M2 macrophage polarization is related to an increase in fatty-acid (FA) oxidation. Recent studies have revealed that SIRT4 is involved in lipid metabolism; therefore, we examined whether SIRT4 drives TAMs to M2-like polarization via an increase in FA oxidation. As shown in Fig. 4a, TAMs with SIRT4 silencing had increased oxygen consumption under basal and FCCP-treatment conditions, whereas TAMs with SIRT4 overexpression (SIRT4OE) had decreased oxygen consumption (Fig. 4b). Next, we studied the lipid metabolic activities of SIRT4-knockdown TAMs by examining the expression of genes in the fatty-acid biosynthesis and fatty-acid oxidation (FAO) pathways. As shown in Fig. 4c, we found that SIRT4 knockdown enhanced lipid catabolic gene expression in TAMs, including MCAD (medium-chain acyl-CoA dehydrogenase), PDK4 (pyruvate dehydrogenase kinase isoenzyme 4), CPT1 (carnitine palmitoyltransferase 1), PPARδ, and PPARα. We further found that SIRT4 knockdown also increased mitochondrial gene expression in TAMs (Fig. 4d). Among these elevated lipid catabolic genes, PPARδ and the PPARδ-STAT3 axis have been reported to skew human macrophages to anti-inflammatory polarization. To investigate whether SIRT4 knockdown modulates TAM M2 polarization via the PPARδ-STAT3 axis through metabolic reprogramming, the p-STAT3 protein level was examined. As shown in Fig. 4e, p-STAT3 protein levels were enhanced by SIRT4 knockdown in TAMs. To further determine whether the PPARδ-STAT3 axis was responsible for M2 polarization of TAMs, we blocked PPARδ with its inhibitor GSK3787. As shown in Fig. 4f, PPARδ activity was successfully suppressed, and the expression of p-STAT3 was restored to its lower level in SIRT4-knockdown TAMs. Moreover, PPARδ inhibition by GSK3787 restored the phenotype markers induced by SIRT4 knockdown in TAMs (Fig. 4g and h). These results indicated that SIRT4 modulates TAMs to M2-like polarization via the FAO-PPARδ-STAT3 signalling pathway.

(See figure on previous page.) Fig. 3 The HCC microenvironment inhibits SIRT4 expression in TAMs, and SIRT4 silencing facilitates M2-like polarization. a-b Western blotting and qRT-PCR were employed to detect the protein and mRNA levels of SIRT4 in PMs treated with HCM at various concentrations and time intervals. c PMs were treated with the indicated dose of HCM for 24 h, and qRT-PCR was performed to assess the expression of Arg-1, CD206, and TNF-α. d Western blotting and qRT-PCR were utilized to detect the protein and mRNA levels of SIRT4 in PMs treated with LPS (100 ng/ml) for different time intervals. e-f PMs were transfected with shSIRT4-Lv or SIRT4-Lv with the indicated dose of HCM for 24 h, and qPCR was performed to evaluate the expression of Arg-1, CD206, and TNF-α. g-h An ELISA array was performed to evaluate the expression of IL-10 and IL-12. *p < 0.05

Silencing of SIRT4 in M2-like TAMs promotes the proliferation, migration, and invasion of HCC cells by enhancing IL-6 production: Various studies have reported that M2-like TAMs promote HCC development and metastasis. As shown in Fig. 5a and b, Transwell assays confirmed that SIRT4 silencing in M2-like TAMs promoted the migration and invasion of Hepa1-6 cells in a co-culture system; SIRT4 was successfully knocked down or overexpressed, as shown in Fig. 5h. Moreover, silencing of SIRT4 in M2-like TAMs promoted Hepa1-6 cell growth, and overexpression of SIRT4 significantly inhibited cell growth (Fig. 5c and d). ELISA results showed that SIRT4 silencing significantly enhanced the production of IL-6, IL-10 and VEGF, which are known to be major protumoural cytokines produced by TAMs. We found that neutralization of IL-6 critically decreased the proliferation of Hepa1-6 cells co-cultured with SIRT4-knockdown TAMs (Fig. 5e). However, neutralization of IL-10 or VEGF had little effect on shSIRT4-promoted HCC cell growth (Fig. 5f and g). These results indicated that SIRT4 interference promoted HCC cell growth by augmenting IL-6 production.

SIRT4 silencing in M2-like TAMs promotes M1 macrophage apoptosis by enhancing IL-10 production in HCC peritumour tissues: We previously found that there was a higher ratio of M2/M1 macrophages in HCC peritumour tissues with low SIRT4 expression. In addition, as shown in Fig. 6a, we found that the apoptotic rate of M1-like TAMs was higher in HCC peritumour tissues with low SIRT4 expression. However, as shown in Fig. 6b, the apoptotic rate of M2-like TAMs was not significantly different. We then aimed to determine the cause of the increase in M1 TAM apoptosis. As previously reported [15], IL-10 secreted by M2 Kupffer cells induces M1 apoptosis through the stimulation of arginase in high-iNOS-expressing M1 Kupffer cells. Therefore, we investigated the effect of SIRT4 silencing in M2-like TAMs on M1 TAM apoptosis via FCM. We found that the supernatant of SIRT4-knockdown M2-like TAMs significantly promoted M1 TAM apoptosis (Fig. 6c), and an anti-IL-10 neutralizing antibody weakened the effects of SIRT4-knockdown M2-like TAMs on M1 TAM apoptosis (Fig. 6d). These results clearly revealed that increased M2 polarization of TAMs and apoptosis of M1 TAMs were responsible for the higher ratio of M2/M1 TAMs in HCC peritumour tissues.

SIRT4 silencing in M2-like TAMs promotes the development of H22 homografts in BALB/c mice: To determine the effects of SIRT4 silencing in M2-like TAMs on the progression of HCC in mice, we established a subcutaneous tumour model as described in the Methods section. In comparison to the controls, SIRT4 silencing in M2-like TAMs significantly promoted the development of H22 homografts (Fig. 7a and b). Similarly, tumour weight in the control TAM group was lower than that in the SIRT4-knockdown M2-like TAM group (Fig. 7c). Ki67 immunohistochemical staining of shSIRT4-Lv-treated tumour sections was markedly increased over that of the controls (Fig. 7d). Our data indicate that SIRT4 silencing in M2-like TAMs stimulates HCC growth in vivo.

Elevated MCP-1 expression is responsible for increased TAM infiltration in HCC peritumour tissues: Previously, we found that downregulation of SIRT4 was associated with increased macrophage infiltration in HCC peritumour tissues. Therefore, we established a co-culture system of HepG2 cells and PMA-differentiated THP-1 cells in Transwell chambers to study the interactions between the two cell types. As shown in Fig. 8a, the Transwell assays confirmed that SIRT4 silencing in HepG2 cells significantly increased the migration of PMA-differentiated THP-1 cells.
As macrophage infiltration may be due to several monocyte/macrophage chemo-attracting agents, we examined the effect of SIRT4 knockdown on the expression of several monocyte/macrophage chemo-attracting agents (CCL2, CCL3, CCL4, CCL5, CXCL12, and CXCL8) in HepG2 cells. MCP-1, also called CCL2, is a key chemokine controlling macrophage accumulation and penetration. As shown in Fig. 8c, SIRT4 knockdown significantly upregulated monocyte chemotactic protein-1 (MCP-1) expression, which was further confirmed in HCC peritumour tissues (Fig. 8d). Neutralization of MCP-1 critically weakened the effects of SIRT4 silencing in HepG2 cells on the migration of PMA-differentiated THP-1 cells (Fig. 8b). As the transcription factor NF-κB is a well-known transcriptional controller of MCP-1, we explored the NF-κB-MCP-1 axis in HepG2 cells. We found that SIRT4 silencing aggravated p65 nuclear translocation in HepG2 cells (Fig. 8e), whereas inhibition of NF-κB by its inhibitor ammonium pyrrolidinedithiocarbamate (PDTC) reversed the effects of shSIRT4-Lv on MCP-1 expression (Fig. 8f), supporting our hypothesis that SIRT4 inhibits MCP-1 expression via the NF-κB pathway in HCC peritumour tissues.

(See figure on previous page.) Fig. 4 SIRT4 interference modulates TAMs to M2-like polarization via the FAO-PPARδ-STAT3 signalling pathway. a and b The oxygen flux (respiration) in control, SIRT4KD and SIRT4OE TAMs; the OCR was measured under basal conditions and after oligomycin, FCCP and rotenone/antimycin A treatment. c qRT-PCR was used to detect the mRNA levels of FAO genes. d qRT-PCR was used to detect the mRNA levels of mitochondrial genes. e Western blotting was used to detect the effect of SIRT4 silencing on p-STAT3 nuclear translocation in TAMs. f Inhibition of PPARδ by GSK3787 reversed the effects of shSIRT4-Lv on p-STAT3 nuclear translocation. g Inhibition of PPARδ reversed the effects of shSIRT4-Lv on the levels of phenotype markers in TAMs. h Inhibition of PPARδ reversed the effects of shSIRT4-Lv on IL-10 and IL-12 expression in TAMs. Statistical significance was calculated using Student's t-test: *p < 0.05

Discussion: SIRT4 is a mitochondrial sirtuin that acts as an efficient ADP-ribosyltransferase but a weak protein deacetylase with stringent substrate specificity [17,18]. It has been reported that SIRT4 regulates fatty acid oxidation and mitochondrial gene expression in liver and muscle cells [19]. In addition, previous studies have demonstrated that SIRT4 has protective, anti-apoptotic properties [20]. Importantly, SIRT4 can regulate insulin secretion and has tumour suppressor activity via modulation of glutamate dehydrogenase [16,21]. SIRT4 also plays a role in the inflammatory response in some tissues [22]. However, the role of SIRT4 in the development of HCC is still unclear. This study reports a relationship between SIRT4 expression and prognosis in hepatocellular carcinoma. We selected HCC tumour tissues and matched peritumour tissues from 90 patients who were followed for 4-7 years to create an HCC tumour tissue microarray for immunohistochemical staining analysis. The TMA and IHC results suggested that the expression of SIRT4 in HCC tumour tissue was significantly decreased compared to that in the matched peritumour tissues.
Through Kaplan-Meier survival analysis and the log-rank test for single-factor analysis of survival, we found that SIRT4 expression in the peritumour tissues was positively correlated with the survival of HCC patients, and there was no correlation between the expression of SIRT4 in HCC tumour tissues and pathological characteristics (P > 0.05). After NPar pairing analysis and Spearman's tests, we found that SIRT4 expression in peritumour tissues was negatively associated with the tumour size, pathological grade, T stage, and clinical stage of HCC patients (r = −0.313, p = 0.003; r = −0.266, p = 0.011; r = −0.370, p = 0.001; and r = −0.390, p < 0.001, respectively). It seems that SIRT4 plays a tumour-suppressing role mainly in peritumour tissues. Since metastatic recurrence after radical operation occurs mostly in the postoperative residual liver, the microenvironment of the residual liver, especially the inflammatory microenvironment, can directly affect the prognosis of postoperative HCC patients. Therefore, we classified all HCC cases into two groups (SIRT4 High and SIRT4 Low) according to SIRT4 expression in HCC peritumour tissues. Using immunohistofluorescence and FCM analysis, we found two interesting phenomena: a higher ratio of M2/M1 macrophages and increased macrophage infiltration in HCC peritumour tissues.

It has been reported that a high density of tumour-infiltrating macrophages can predict poor prognosis in post-surgical HCC patients [23]. Here, our data supported this view. In this study, we found that elevated MCP-1 expression induced by the downregulation of SIRT4 in HCC peritumour tissues was responsible for the increased TAM infiltration. A key consideration is the mechanism for the upregulation of MCP-1 in HCC peritumour tissues. In subsequent experiments, we found that downregulation of SIRT4 could activate the NF-κB pathway, resulting in the downstream upregulation of MCP-1 gene expression. It has been shown that various members of the NF-κB/IKK signalling pathway are found in mitochondria [24]. Some examples include the NF-κB subunits RelA and p50, the inhibitor IκBα and the upstream kinases IKKα, IKKβ, and IKKγ [25]. It was found that p50 NF-κB and p65 NF-κB can be poly-adenosine diphosphate (ADP) ribosylated through interaction with PAR polymerase 1 (PARP1) [26]. Our hypothesis is that SIRT4 may have the ability to catalyse the ADP ribosylation of NF-κB in mitochondria, thereby increasing p65 nuclear translocation. We will continue to explore the detailed mechanisms in future experiments.

(See figure on previous page.) Fig. 5 SIRT4 silencing in M2-like TAMs promotes the proliferation, migration, and invasion of HCC cells by enhancing IL-6 production. a-b Overexpression or silencing of SIRT4 in M2-like TAMs inhibited or promoted the migration and invasion of Hepa1-6 cells, respectively, as demonstrated by Transwell assays. c-d Overexpression or silencing of SIRT4 in M2-like TAMs inhibited or promoted the growth of co-cultured Hepa1-6 cells. e Hepa1-6 cells were treated with the supernatant (pre-incubated with IgG or anti-IL-6 neutralizing antibody) from shSIRT-Lv- or Ctr-Lv-infected M2 TAMs; cell proliferation was analysed after 3 days using the CCK-8 assay. f-g Hepa1-6 cells were treated with supernatant (pre-incubated with IgG or anti-IL-10 (VEGF) neutralizing antibody) from shSIRT-Lv- or Ctr-Lv-infected M2 TAMs; cell proliferation was analysed after 3 days using the CCK-8 assay. h Western blot analysis revealed that SIRT4 was successfully knocked down or overexpressed. *p < 0.05

The other observed phenomenon was the increased ratio of M2/M1 macrophages in HCC peritumour tissues. It is well known that TAMs are important elements of the tumour microenvironment, and their alternative activation critically affects the growth of HCC [27]. Previous studies have also reported that SIRT4 can reprogramme endotoxin tolerance and promote acute inflammation resolution in monocytes [28]. In this study, HCM inhibited SIRT4 expression, and SIRT4 silencing stimulated macrophages towards M2 polarization. Lipid metabolism and its products play an important role in modifying macrophage functions in terms of inflammation and resolution [29]. Macrophage M2 polarization is associated with the activation of oxidative metabolism, which includes a functional tricarboxylic acid (TCA) cycle fuelled by glutamine and glucose catabolism, an increase in mitochondrial oxidative phosphorylation, and the enhancement of FA oxidation (FAO) [30]. It has been demonstrated that macrophage M2 polarization induced by IL-4 requires peroxisome proliferator-activated receptor gamma (PPARγ), a nuclear receptor activated by FA derivatives [31]. Treatment of macrophages with IL-4 induces the expression of genes involved in FA uptake and increases FA oxidation [32]. Mechanistically, this process involves the phosphorylation of signal transducer and activator of transcription 6 (STAT6) and PPARγ co-activator 1 beta (PGC1-β). Similar to PPARγ, PPARδ also appears to be an important regulator of the alternative activation of resident macrophages [33,34]. Previous studies have reported that PPARδ regulates arginase I expression and the immunologic phenotype in alternatively activated Kupffer cells and that PPARδ regulates Kupffer cell alternative activation by reprogramming lipid metabolism [35]. A more recent study revealed that SIRT4 is involved in lipid metabolism. Therefore, we examined whether SIRT4 modulates TAMs to M2-like polarization via an increase in FA oxidation. We found that FAO genes including MCAD, PDK4, CPT1, PGC1-α, PPARδ, and PPARα were increased in SIRT4-knockdown TAMs, and p-STAT3, which is essential for differentiation into the M2 phenotype, was also increased. Our data indicate that SIRT4 modulates TAMs to M2-like polarization via the FAO-PPARδ-STAT3 signalling pathway. On the other hand, the energy sensor AMP-activated protein kinase (AMPK) may also be a significant mechanism of FAO regulation in macrophages [36]. AMPK is able to activate FAO at the expense of FA synthesis through at least two different mechanisms: first, phosphorylation and inactivation of acetyl-CoA carboxylase (ACC), the first step of FA synthesis; and second, activation of PGC1-α, which in turn stimulates mitochondrial biogenesis and mitochondrial function.

Fig. 7 (caption, in part): The tumour size served as the measurement of H22 homograft development. c Summary data for each group are provided (n = 5). d Immunohistochemical staining of Ki67 in tumour sections (left); summary data are shown on the right. *p < 0.05

Fig. 8 (caption, in part): b The supernatant of SIRT4-knockdown HepG2 cells with the anti-MCP-1 neutralizing antibody reversed the effects of HepG2 cells on co-cultured THP-1 cells. c The mRNA expression levels of cytokines were determined by qRT-PCR. d Elevated expression of MCP-1 was confirmed in HCC peritumour tissues with low SIRT4 expression. e Effects of SIRT4 on p65 nuclear translocation; cell nuclei were isolated for western blot analysis. f Suppression of NF-κB by its inhibitor reversed the effects of shSIRT4-Lv on MCP-1 expression. *p < 0.05
Fig. 9 The HCC microenvironment inhibits SIRT4 expression in TAMs and modulates the alternative activation of TAMs, which contributes to HCC development. The HCC microenvironment inhibits SIRT4 expression in TAMs. The downregulation of SIRT4 increases FA oxidation and the expression of lipid catabolic genes, which induces alternative activation of TAMs via the FAO-PPARδ-STAT3 signalling pathway. Alternatively activated TAMs produce IL-6 to accelerate HCC development and IL-10 to accelerate M1 macrophage apoptosis. On the other hand, downregulation of SIRT4 in hepatocytes activates MCP-1 expression to increase TAM infiltration, which accelerates HCC development via the NF-κB signalling pathway.

Because SIRT4 can also affect the AMPK pathway, we will explore the relationship between the M2 polarization of TAMs and the SIRT4-AMPK-FAO axis in future studies. In addition, SIRT4 silencing in HCM-stimulated TAMs significantly promoted the production of cytokines including IL-10, IL-6 and VEGF, which have been reported to promote tumour progression, and SIRT4 silencing inhibited the production of IL-12, a cytokine that enhances antitumour immunity and helps prevent the development of cancer. Thus, SIRT4 silencing in macrophages significantly promoted macrophage-induced tumour development both in vitro and in vivo. We also focused on how SIRT4 inhibits TAM-mediated HCC progression; this study may point to important effects of SIRT4 on the STAT3-IL-6 axis, which will be examined in our future experiments.

Studies have reported that tumour cells can "educate" macrophages in different areas [37]. Human tumour tissues can be categorized into the cancer nest area, the invading edge, and the peritumoural stroma. Depending on the local microenvironment, macrophages acquire specific phenotypes with distinct functions [38,39]. Cancer cells produce factors that corrupt the development of surrounding macrophages, briefly triggering the untimely activation of monocytes (M1-like) in the peritumoural region. This provokes the development of suppressive macrophages (M2-like) in the cancer nests, overtaking the inflammatory response [40,41]. However, the mechanism of the M1 to M2 transition is not clear. One hypothesis is that M2-like macrophages might "educate" M1-like macrophages towards apoptosis. In our study, we found that the apoptotic rate of M1-like TAMs was higher in HCC peritumour tissues with low SIRT4 expression, and that high IL-10 expression induced by SIRT4 silencing in M2-like TAMs might promote M1 macrophage apoptosis [15]. This highlights the hypothesis that SIRT4 might engage in the dynamic education of macrophages in HCC. In addition, it has been reported that a high ratio of M2/M1 macrophages in peritumour tissues is associated with a poor prognosis in patients after resection [42]. The results of this study may provide a powerful explanation for this observation.

Conclusion: This study suggested that SIRT4 is significantly downregulated in HCC tumour tissues and that the expression of SIRT4 in HCC peritumour tissues is positively associated with HCC survival. The results from clinical studies demonstrated that downregulation of SIRT4 was associated with increased macrophage infiltration and M2 macrophages in HCC peritumour tissues. The reason for this is that downregulation of SIRT4 in TAMs modulates the alternative activation of macrophages and promotes HCC development via the FAO-PPARδ-STAT3 axis. On the other hand, elevated MCP-1 expression induced by downregulation of SIRT4 in HCC peritumour tissues is responsible for increased TAM infiltration. This study indicates that SIRT4 plays a role in HCC progression and may offer an important avenue of research for a new HCC treatment strategy (Fig. 9).
8,017.8
2019-11-19T00:00:00.000
[ "Biology", "Medicine" ]
Optimization on the distribution of population densities and the arrangement of urban activities

In this paper, an approximation of the distribution of population densities and the arrangement of urban activities over a set of n locations is derived by using classical multiobjective optimization theory and the Shannon entropy.

Introduction, problem description and preliminaries

The term entropy was used for the first time in 1865, in thermodynamics, by Rudolf Clausius [7]. Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs and James Clerk Maxwell gave this concept a statistical basis. In probability theory, the degree of uncertainty of a random variable can also be evaluated using the entropy. Consequently, the entropy can be used in the study of risk assessment problems arising in different fields.

The purpose of this paper is to develop, using classical multiobjective optimization theory, a simultaneous optimization model involving the Shannon entropy and the spatial Shannon entropy, subject to appropriate and meaningful constraints. Moreover, by considering the qualitative concept of utility, we extend our model to the case of the Beliş-Guiaşu entropy and the spatial Beliş-Guiaşu entropy. Practically, in both cases, we derive an approximation of the distribution of population densities and the arrangement of urban activities over a set of n locations.

Now, let us introduce our study problem. According to Batty [4], following Batty [3], we shall represent a city as a set of locations. Also, we assume that: (1) there are n locations, identified by i, with i = 1, ..., n; (2) each location is a point or an area where urban activities can take place; (3) in each location there exists a number of units of urban activity; (4) the location identified by i has size (area) $a_i$ and, therefore, $A = \sum_{i=1}^{n} a_i$ is the total size (area) of the city. Denote by $N$ the total number of units of urban activity (the total amount of urban activity), where $N_i$ represents the number of units of urban activity associated with location i, i = 1, ..., n. If we start with $N_1$, the number of allocations of $N_1$ (i.e., the number of ways of selecting the $N_1$ units of urban activity placed at location 1) is given by $\binom{N}{N_1}$, and so on. Making the product, we find the total number of arrangements, $W = \binom{N}{N_1}\binom{N-N_1}{N_2}\cdots = \frac{N!}{N_1!\,N_2!\cdots N_n!}$, considered as a measure of the complexity of the city (W depends on the allocation).

Remark 1.1 i) If the total amount of urban activity N is allocated entirely to one $N_i$, with i ∈ {1, ..., n} fixed, then the measure of complexity W is equal to 1. ii) Otherwise, W varies with respect to the total amount of urban activity N and the number of locations n.

By maximizing the measure of complexity W (more precisely, the logarithm of W), we shall find the most enjoyable arrangement of units of urban activity, in that it provides the greatest possibility of distinct individual activities associated with the locations i. Usually, such maximizations are subject to appropriate and meaningful constraints. By a direct computation using Stirling's formula, we get $\ln W \approx N \ln N - \sum_{i=1}^{n} N_i \ln N_i$. Taking into account that $N_i$ is a frequency that can be transformed into a probability $p_i = N_i / N$, by substituting the number of units of urban activity associated with location i in the previous relation (6) and dropping the constant terms, we find that the number of arrangements W is proportional to the Shannon entropy (a measure of uncertainty, Shannon [14]), $H = -\sum_{i=1}^{n} p_i \ln p_i$. Consequently, maximizing ln W is equivalent to the well-known process of maximizing H.
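A quick numerical check of this proportionality may help: for counts $N_i$, the exact log-multiplicity $\ln W = \ln N! - \sum_i \ln N_i!$ approaches $N \cdot H$ for large counts. The counts below are invented for illustration.

```python
from math import lgamma, log

def ln_W(counts):
    """Exact ln of the multinomial coefficient N! / (N_1! ... N_n!)."""
    N = sum(counts)
    return lgamma(N + 1) - sum(lgamma(c + 1) for c in counts)

def shannon(counts):
    """H = -sum p_i ln p_i with p_i = N_i / N."""
    N = sum(counts)
    return -sum((c / N) * log(c / N) for c in counts if c > 0)

counts = [400, 250, 200, 100, 50]  # illustrative units of activity at 5 locations
print(ln_W(counts), sum(counts) * shannon(counts))  # nearly equal for large N_i
```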
Remark 1.2 i) When the total amount of urban activity N is equally distributed among the locations, that is, $N_i = N/n$ and hence $p_i = 1/n$ for i = 1, ..., n, then H = ln n is at a maximum. Also, let us remark that H varies with n. ii) If the total amount of urban activity N is allocated entirely to one $N_i$, i ∈ {1, ..., n} fixed, that is, $N = N_i$, then $p_i = 1$ and $p_j = 0$ for j ∈ {1, ..., n}, j ≠ i, and H = 0 is at a minimum.

Further, we consider the spatial entropy (for more details, the reader is directed to Batty [3], Batty et al. [5]), $S = -\sum_{i=1}^{n} p_i \ln (p_i / A_i)$, which takes into account the numbers $A_i = a_i / A$, where $a_i$ and A are introduced at the beginning of this section. Let us notice that $\sum_{i=1}^{n} A_i = 1$, and assume that $p_i / A_i$ is subunitary (otherwise, we must minimize S instead of maximizing it).

Considering the previous mathematical context, the main aim of this paper is to study the following vector (bi-objective) optimization problem: maximize simultaneously the Shannon entropy H and the spatial entropy S, subject to the constraints (10)-(13), where $p_i$ is the probability of finding a place i which has $P_i$ population residing there and $c_i$ is the travel cost from the central business district to zone i. The constraint (10) is a normalization constraint on the probabilities, $\sum_{i=1}^{n} p_i = 1$; (11) is a constraint on the mean population of places; (12) is a constraint on the average travel cost incurred by the population; and, finally, (13) is a constraint on the average "logarithmic" size of locations.

The second objective of this work is to investigate a similar problem involving the qualitative concept of utility. The models proposed here can be regarded as an approximation of (i) the distribution of population densities and (ii) the arrangement of urban activities over a set of n locations.

Next, in order to develop our theory, we state some elements of multiobjective optimization. Consider the usual componentwise ordering convention between two vectors, $u = (u_1, ..., u_s)$ and $v = (v_1, ..., v_s)$, and the following vector minimization problem (P): minimize f(x) subject to $g(x) \leq 0$, where $f : R^n \to R^s$ and $g : R^n \to R^m$ are vector-valued functions, with components $f_i : R^n \to R$, i ∈ {1, ..., s}, and $g_j : R^n \to R$, j ∈ {1, ..., m}, continuously differentiable on $R^n$. Denote by $\nabla f_i(x)$ and $\nabla g_j(x)$ the gradients of $f_i$ and $g_j$ at $x \in R^n$, respectively, and by $\langle x, y \rangle$ the usual inner product on $R^n$. Obviously, if $x_0 \in X$ (X being the feasible set of (P)) is an efficient solution to problem (P), then $x_0$ is a weak efficient solution to problem (P). However, the converse does not hold in general and, practically, the concept of efficient solution is more desirable than that of weak efficient solution.

Theorem 1.1 (Necessary efficiency conditions for (P)) Let $x_0 \in X$ be any feasible solution to (P) and suppose that the generalized Guignard constraint qualification holds at $x_0 \in X$. If $x_0 \in X$ is an efficient solution to (P), then there exist vectors $\lambda \in R^s$ and $\mu \in R^m$ satisfying the efficiency conditions (16).

Remark 1.3 If the vector minimization problem (P) contains, in addition, constraints of the type h(x) = 0, with $h : R^n \to R^l$ a continuously differentiable function, then there exists a vector $\alpha \in R^l$ entering the first condition in (16) accordingly.

Definition 1.3 Let ρ be a real number, $C \subseteq R^n$, and $b : R^n \times R^n \to [0, \infty)$. A function is said to be (ρ, b)-quasiinvex at $x_0 \in C$ with respect to η and θ if there exist vector functions $\eta, \theta : R^n \times R^n \to R^n$ for which the defining quasiinvexity inequality is satisfied. In the above definition, if we replace "≤" with "=", we obtain the definition of monotonic (ρ, b)-quasiinvexity at $x_0$ with respect to η and θ.
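The spatial entropy itself is straightforward to compute. This sketch uses the Batty form $S = -\sum_i p_i \ln(p_i/A_i)$ with $A_i = a_i/A$, which matches the definitions above; since the displayed formula did not survive extraction, the exact expression is an assumption based on the cited literature, and the numbers are invented.

```python
from math import log

def spatial_entropy(p, a):
    """S = -sum_i p_i * ln(p_i / A_i), with A_i = a_i / A (Batty's spatial entropy)."""
    A = sum(a)
    return -sum(pi * log(pi / (ai / A)) for pi, ai in zip(p, a) if pi > 0)

p = [0.4, 0.3, 0.2, 0.1]  # illustrative location probabilities (sum to 1)
a = [2.0, 1.0, 0.5, 0.5]  # illustrative zone areas, so A = 4
print(spatial_entropy(p, a))
```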
Theorem 1.2 (Sufficient efficiency conditions for (P)) Let $x_0 \in X$ be any feasible solution to (P) and let there exist vectors $\lambda \in R^s$ and $\mu \in R^m$ such that the conditions (16) are satisfied. If: (i) each function $f_i(x)$, i = 1, ..., s, is $(\rho^1_i, b)$-quasiinvex at $x_0$ with respect to η and θ, and there exists at least one index k ∈ {1, ..., s} such that $f_k(x)$ is strictly $(\rho^1_k, b)$-quasiinvex at $x_0$ with respect to η and θ; (ii) each function $g_j(x)$, j = 1, ..., m, is monotonic $(\rho^2_j, b)$-quasiinvex at $x_0$ with respect to η and θ; then $x_0 \in X$ is an efficient solution to (P).

Remark 1.4 If the vector minimization problem (P) contains, in addition, constraints of the type h(x) = 0, with $h : R^n \to R^l$ a continuously differentiable function, then the conditions (i) and (iii) from Theorem 1.2 change as follows: (i') each $f_i(x)$ is $(\rho^1_i, b)$-quasiinvex at $x_0$ with respect to η and θ; (i'') each component of h(x) is $(\rho^3_k, b)$-quasiinvex at $x_0$ with respect to η and θ; (i''') one of the functions given in (i'), (i'') is strictly (ρ, b)-quasiinvex at $x_0$ with respect to η and θ, where $\rho = \rho^1_i$ or $\rho^3_k$, respectively.

For more details, other notions and their connections, the reader is addressed to Yu [20], Treanţă and Udrişte [16], Arana et al. [2], Verma [18], Treanţă [17].

Main results

Let us observe that our bi-objective optimization problem (9), subject to (10)-(13), can be rewritten as the vector optimization problem (VOP). Taking into account the general context formulated in the previous section (see Theorem 1.1 and Remark 1.3), we are now in a position to establish and prove the first part of our main results.

Theorem 2.1 If $p = (p_i)$, i = 1, ..., n, is a normal efficient solution of (VOP), then there exist scalars $\lambda_1, \lambda_2, \alpha, \beta, \gamma, \delta$ for which $p_i$ takes the exponential form (19), where the constant of proportionality (involving ln A) ensures that the probabilities sum to 1. Moreover, the "negative" measures of complexity, given in (20), are at a minimum for the given set of constraints.

Proof. The Lagrangian associated with the scalarised problem and the given constraints simplifies accordingly and, by imposing the necessary conditions of efficiency, we get the expression (19) of $p_i$, where the constant of proportionality (involving ln A) ensures that the probabilities sum to 1. If we substitute the probability in (19) into the "negative" Shannon entropy $H_1 = \sum_{i=1}^{n} p_i \ln p_i$ and into the "negative" spatial Shannon entropy $H_2 = \sum_{i=1}^{n} p_i \ln (p_i / A_i)$, by a direct computation we obtain the "negative" measures of complexity in (20), and the proof is complete.

Over the past years, in order to correlate the quantitative concept of information with the qualitative concept of utility, many researchers (see, for instance, Beliş and Guiaşu [6], Longo [12], Kapur [9], [10]) have introduced several weighted information measures. Given the context in which we work, these weighted measures of information become very important: they take into account both the probabilities with which certain random events occur and some qualitative characteristics of these events. Thus, according to Beliş and Guiaşu [6], let $u_i$ be the weight associated with an elementary event of probability $p_i$ (in our case, an elementary event is the finding of a place i which has $P_i$ population residing there and $c_i$ the travel cost from the central business district to zone i). Consider the weight $u_i$ as a finite, positive real number representing the relevance, the significance or the utility of the occurrence of an event with probability $p_i$. If $u_i > u_j$, then the event with weight $u_i$ (and probability $p_i$) is strictly more significant, more useful or more relevant than the event with weight $u_j$ (and probability $p_j$), where i, j ∈ {1, ..., n}, i ≠ j.
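Since the closed-form solution in Theorem 2.1 did not survive extraction, a numerical companion may help: the bi-objective problem is scalarised with weights (λ₁, λ₂) and solved with SLSQP. All data (costs $c_i$, areas $a_i$, the cost target) are invented for illustration, and only the normalization and travel-cost constraints, (10) and (12), are kept; by the Lagrangian argument in the proof, the optimum should take an exponential form in $c_i$.

```python
import numpy as np
from scipy.optimize import minimize

c = np.array([1.0, 2.0, 3.0, 4.0])   # travel costs c_i (illustrative)
a = np.array([2.0, 1.0, 0.5, 0.5])   # zone areas a_i (illustrative)
A_i = a / a.sum()                     # normalised sizes A_i = a_i / A
lam1, lam2 = 0.5, 0.5                 # scalarisation weights for the two objectives

def negative_entropies(p):
    # weighted sum of the "negative" Shannon and spatial Shannon entropies
    return lam1 * np.sum(p * np.log(p)) + lam2 * np.sum(p * np.log(p / A_i))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},  # constraint (10)
    {"type": "eq", "fun": lambda p: p @ c - 2.2},    # constraint (12), mean cost 2.2
]
res = minimize(negative_entropies, np.full(4, 0.25), method="SLSQP",
               bounds=[(1e-9, 1.0)] * 4, constraints=constraints)
print(res.x)  # entropy-maximizing p_i: decays exponentially with c_i, weighted by A_i
```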
Using the previous utilities (weights), let us introduce the weighted bi-objective optimization problem (VOP)*.

Remark 2.1 i) The above minimum is computed for fixed utility distributions. ii) There are two additional constraints in (VOP)* compared to (VOP): the constraint $\sum_{i=1}^{n} u_i p_i = \bar{P}$ on the relevance of the weights $u_i$ (of course, if all the weights are equal, $u_i = u$, we get $\bar{P} = u$ and, further, if u = 1, we recover the first constraint in (VOP); therefore, for generality, we shall consider the weights $u_i$ as different, finite, positive real numbers) and the constraint $\sum_{i=1}^{n} u_i p_i \ln a_i = \tilde{A}$ on the weighted average "logarithmic" size of locations.

Now, we shall formulate and prove the second part of our main results, the weighted analogue of Theorem 2.1.

Proof. The proof follows in the same manner as in Theorem 2.1. Consider the associated Lagrangian. Applying the necessary conditions of efficiency, by a direct computation we find the expression of $p_i$ which, equivalently written, is (23). Replacing the probability in (23) into the "negative" Beliş-Guiaşu entropy $\sum_{i=1}^{n} u_i p_i \ln p_i$ and into the "negative" spatial Beliş-Guiaşu entropy $\sum_{i=1}^{n} u_i p_i \ln (p_i / A_i)$, by a direct computation we obtain the "negative" measures of complexity in (24), and the proof is complete.

Further, let us consider the notations $f_1$, $f_2$ for the two objectives and $h_1, ..., h_4$ for the constraint functions of (VOP). As can be verified, all of these functions are (ρ, 1)-quasiinvex at $p_0$, for ρ ≤ 0 and any vector function $\theta = \theta(p, p_0)$ (see Definition 1.3), with respect to suitable vector functions, where $\eta_1$ is the vector function associated with $f_1$, $\eta_2$ is the vector function associated with $f_2$, $\eta_3$ is the vector function associated with $h_1$, and so on.

The following result formulates some sufficient conditions of efficiency for our vector minimization problem (VOP).

Proof. First, we mention that the real numbers $\rho^1_i$, i = 1, 2, and $\rho^3_k$, k = 1, ..., 4, introduced in our theorem, have the same significance as in Remark 1.4 of Section 1. Further, taking into account Theorem 1.2 and Remark 1.4 of Section 1 (see conditions (iii) and (iii')), the proof is complete.

In a similar way, one can find a characterization result of sufficient efficiency conditions for the weighted bi-objective optimization problem (VOP)*. As can be verified, the corresponding functions are $f_1(p) = \sum_{i=1}^{n} u_i p_i \ln p_i$, $f_2(p) = \sum_{i=1}^{n} u_i p_i \ln p_i - \sum_{i=1}^{n} u_i p_i \ln A_i$, (27) $h_1(p) = \sum_{i=1}^{n} p_i - 1$, $h_2(p) = \sum_{i=1}^{n} u_i p_i - \bar{P}$, and so on.

Proof. Having in mind the general mathematical framework formulated in Theorem 1.1 and Remark 1.3 of Section 1, we introduce the corresponding Lagrangian.
3,327.8
2018-06-24T00:00:00.000
[ "Mathematics", "Computer Science", "Engineering" ]
Quality Characteristics and Washability Treatment of Nickeliferous Iron Ore of the Agios Athanasios Deposit (Kastoria, Greece)

The Agios Athanasios ore deposit is located within the wider area of Ieropigi in Kastoria, Greece. The specific ore deposit is developed in the form of layers between ophiolites and Tertiary molassic conglomerates. The main mineralogical components are hematite, goethite and quartz and, secondarily, garnierite, lizardite, saponite, willemseite and sepiolite, while chromite, calcite and nepouite are scarcer. Nickel is mainly found in garnierite, willemseite and nepouite, which, in coexistence with quartz, are the main components of the binder material of the ore. For the mineral processing, gravimetric and magnetic separations were applied to the size fractions −8 + 4 mm, −4 + 1 mm, −1 + 0.250 mm and −0.250 + 0.063 mm. The chemical and mineralogical analyses, in combination with microscopic examination, showed that mineral processing by gravimetric separation gave the most satisfactory results for the size fraction −1 + 0.250 mm.

Introduction

Nickel is a chemical element that is not encountered in its pure form in nature. It is found as sulfides, oxides and inorganic salts. Nickel has a great affinity for iron, cobalt and copper; therefore, they coexist in many types of deposits and may replace one another to a great extent. This is of great significance, as nickel blends easily with many metals to form alloys, increasing their strength, hardness, resistance to erosion or corrosion, and elasticity, and providing good thermal and electrical conductivity over a wide temperature range. Nickel-containing materials play a major role in our everyday lives: food preparation equipment, mobile phones, medical equipment, transport, buildings, power generation; the list is almost endless. About 90% of annual nickel production goes into alloys, two-thirds of which goes into stainless steel [1] [2].

According to their genesis, nickeliferous ores belong to the following categories: 1) sulfide deposits (pentlandites); 2) lateritic deposits (garnierites, limonites); 3) sedimentary deposits [1] [2]. Greek nickel deposits originate from the lateritization of ultrabasic rocks of the Mesozoic. Lateritization is the process by which nickel deposits are created through the weathering of ultramafic rocks in diverse geological periods and different weathering profiles, which are genetically related to the underlying rocks [3] [4]. The products of weathering underwent complex geological processes that led to the final "nickeliferous iron ore" deposits. In Greece, there were favorable conditions for the formation of laterites during the Lower Cretaceous, due to the occurrence of a tropical or subtropical climate and the extensive surface exposure of ophiolites. They are characterized as "fossilized" and are covered by limestone of the Upper Cretaceous (Cenomanian-Senonian) or sediments of the Miocene. Their average nickel content ranges from approximately 0.8% to 1.5%, and they can therefore be considered a source of nickel. Nickel chlorite constitutes the main nickel carrier, while other nickel minerals of lesser significance have been identified, such as nickel serpentine (nepouite), nickel talc (willemseite), montmorillonite and takovite. The lateritic profiles vary greatly not only in thickness and continuity between individual zones, but also in the mineralogy and chemistry of the zones, even over short distances. The following lateritic zones are mainly identified: bedrock, saprolite zone, clay zone and goethitic (oxidic) zone [4] [5].
In Greece, there are more than 110 nickeliferous iron-ore occurrences with nickel content ranging from 0.4% up to 1.2% and iron from 20% to 79%. The total reserves are estimated to be over 500 Mt, of which 200 Mt are exploitable. The mineral resources of lateritic nickeliferous iron-ore deposits are spread mostly over the areas of Euboea, Boeotia and Kastoria. The Greek laterites are exploited by the nickel-producing company LARCO GMMSA, which is the most important producer of Fe-Ni alloy in Greece (2%-3% of the total world production of nickel) [6] [7].

Geology

The Agios Athanasios ore deposit is located within the wider area of Ieropigi in Kastoria, Greece. The area under investigation belongs mainly to the zone of Eastern Greece (Sub-Pelagonian) and partly to the Pelagonian zone. The area's geological structure is represented by formations of the Sub-Pelagonian zone. Moreover, the Kastoria Fe-Ni deposit is overlain by Tertiary molassic conglomerates (Figure 1). The Sub-Pelagonian and Pelagonian geotectonic zones are mainly characterized by serpentinised ultramafic rocks (ophiolites) aged from the Upper Jurassic to the Lower Cretaceous [8]. The two zones generally lie on the western segment of the Internal Hellenides and have been investigated in depth for decades with respect to their structural geology, geochemistry and petrology. In lithological terms, granites, ortho- and paragneisses as well as ophiolites are mainly found in the area [9]. A characteristic transition from the volcano-sedimentary rock sequence to carbonates has also been described around Kastoria [10]. The nickeliferous iron ores in the area appear as a layer full of discontinuities, consisting of serpentinised ultramafic rocks with sediments on top of them. In a narrow zone extending in a NW-SE direction from Albania to the south of Kastoria, outcrops of ophiolitic rocks appear, composed largely of ultramafic rocks. These ultramafic rocks are characterized by a network of veins, which are epigenetic in origin and consist of quartz, calcite and greenish nickel-bearing silicate minerals [11]. According to Mountrakis [12], the most usual rocks are mainly serpentinised peridotites, dunites, spilitised mafic volcanics and siliceous sediments. Regarding the ophiolites in the area, they show Late Jurassic to Early Cretaceous deformation and are associated with Tertiary to recent sediments [9].

Experimental

The mineralogical examination of the nickeliferous ore was carried out using the X-ray diffraction (XRD) method, with the automated X-ray diffractometer D8 Advance of Bruker AXS. The identification of the ore minerals, the description of the fabric and the examination of the intergrowths in the different products were performed with ore microscopy; a JENA ore microscope equipped with an OLYMPUS digital camera was used. The mineralogical characteristics of the ore samples were investigated using a scanning electron microscope (SEM), type Jeol JSM 5400, in conjunction with EDS microanalyses. The chemical analysis of the nickeliferous ore was carried out using the automated X-ray fluorescence (XRF) spectrometer S2 Ranger of Bruker AXS. For the gravimetric separation of the nickeliferous ore samples, the heavy liquid tetrabromoethane was used, which has a density of 2.96 g/cm³. The magnetic separation was performed with the Perm Roll magnetic separator made by IMPROSYS and with the high-intensity Induced Roll magnetic separator MIH 111-5 made by Carpco.
Mineralogy

Ore sampling was carried out at the lateritic nickeliferous ore deposit of Agios Athanasios in the Kastoria area, Greece. The mineralogical study was performed on representative samples of the investigated ore, starting from the base and moving towards the top of the deposit. Above the underlying rock (ophiolites), there are successive layers of silicate ore, saprolitic ore, clay ore and an iron ore zone (limonitic iron ore and manganese-oxide iron ore) (Figure 2). The deposit is covered by molassic conglomerates.

The microscopic examination and X-ray diffraction analysis (Figure 3) showed the presence of quartz and lizardite in more or less all samples. Especially in the bottom section of the deposit, the silicate ore, quartz, lizardite, garnierite (nickel antigorite) and willemseite (nickel talc) are mainly found, with occasional sepiolite and nepouite (nickel serpentine), while hematite occurs in a smaller percentage. Quartz is mainly formed in compact masses. Garnierite, as well as lizardite, is observed in the binder material between the quartz masses. Occasionally, ferruginisation is identified in the form of hematite replacing the binder material.

The mineralogical analysis in the saprolite ore zone showed the presence mainly of lizardite, while hematite, goethite, quartz, sepiolite, saponite, garnierite, willemseite and chromite were observed in smaller percentages. The ferruginisation occurs to a very small degree in the form of individual grains of goethite, which alters progressively into hematite due to dehydration. The binder material (matrix) is mainly constituted of lizardite and partially of saponite and garnierite. Microanalyses of the binder material confirmed the presence of lizardite and garnierite, partially subjected to replacement by ferrous solutions (Figure 4). Furthermore, the chromite crystals are usually multiply cracked, thus showing a characteristic cataclastic texture.

The next zone is the clay ore, which mainly consists of calcite, lizardite and, to a lesser extent, saponite and willemseite. Goethite and hematite are recorded as iron mineralization. The main mass of the binder material is constituted of lizardite and, partially, of saponite and willemseite, while calcite is encountered in the form of veins within the binder material and is considered of secondary origin. Goethite is also observed in the form of thin veins scattered in the matrix.

The upper iron ore zone consists of goethite, hematite and quartz, while chromite, saponite and garnierite were recorded to a smaller extent. Quartz is found in the form of compact masses, but also in the form of individual grains. The binder material between the grains is constituted mainly of saponite and garnierite. Individual chromite grains, characterized by multiple cracks (cataclastic texture), were observed scattered in the matrix. The ferruginisation is encountered occasionally within the binder material, mainly as goethite and partially as hematite, in the form of veins, showing an epigenetic origin. Goethite progressively transforms into hematite through dehydration (Figure 5).

The chemical analyses of representative samples collected from various parts of the nickeliferous ore deposit are presented in Table 1.
SiO2 is mainly present in quartz as well as in the silicate phases (lizardite, garnierite, willemseite, sepiolite, saponite), while Fe2O3 is mainly present in the ferrous minerals (goethite and hematite) and secondarily in chromite. Nickel is mainly present in garnierite and secondarily in willemseite and nepouite. The highest NiO content is observed in the silicate ore zone (4.06%).

Washability Treatment

A jaw crusher was used to crush the collected bulk sample to a size of −8 mm. The product of the jaw crusher was sieved into the fractions −8 + 4 mm, −4 + 1 mm, −1 + 0.250 mm and −0.250 + 0.063 mm. All size fractions were used for both magnetic and gravimetric separation tests. The fine fraction −0.063 mm was not used, because it was not suitable for these mineral processing tests. Table 2 shows the results of the chemical analysis of the size fractions produced. SiO2 and Fe2O3 are the major oxides found throughout the process. The highest SiO2 content is found mainly in the coarser −8 + 4 mm fraction, while the highest Fe2O3 content occurs mainly in the −4 + 1 mm fraction. The NiO grade is highest in the −1 + 0.250 mm fraction. However, the fine fraction −0.063 mm also has a quite high NiO content (1.87%).

Gravimetric Separation

The gravimetric separation of the ore was carried out using tetrabromoethane, a heavy liquid with a density of 2.96 g/cm³. Table 3 presents the results of the gravimetric separation of the size fractions −8 + 4 mm, −4 + 1 mm, −1 + 0.250 mm and −0.250 + 0.063 mm. As is apparent from these results, the weight percentages of the floats are by far higher than those of the sinks for all size fractions. In general, the results obtained are satisfactory, as in each fraction the float has a very high nickel content. In particular, the results for the −8 + 4 mm fraction are quite satisfactory, with a NiO content of 1.45% in the floats and 0.69% in the sinks. The weight distribution in this fraction is 94.01% in the floats and 5.99% in the sinks, an indication that the greater quantity of NiO reports to the float products. These results were also confirmed by microscopic examination of this size fraction. For the −4 + 1 mm fraction (Figure 6(a)), the results are quite similar. The NiO content of 1.34% in the floats is high, but lower than that in the floats of the −8 + 4 mm fraction. The weight distribution in this fraction is 91.38% in the floats and 8.62% in the sinks. The results for the −1 + 0.250 mm fraction are very satisfactory, since the NiO content in the floats (2.58%) is the highest of the whole process (Figure 6(b), Figure 7). The weight distributions are 94.49% and 5.51% in the floats and sinks, respectively. Similar results are observed for the −0.250 + 0.063 mm fraction (2.19% and 1.27% NiO in the floats and sinks, respectively). Despite the high NiO grade observed in the floats, the NiO content in the sinks (1.27%) is an unsatisfactory result for gravimetric separation at this size fraction. However, the weight distribution in the floats (92.15%) remains high.
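The weight distributions and grades reported above fully determine how the contained NiO splits between the separation products. The short sketch below (Python, with a hypothetical helper name; the values are copied from the −8 + 4 mm gravimetric test in Table 3) makes that mass balance explicit, and the same computation applies to the magnetic-separation products discussed in the next section. Note that the NiO recovery to the floats is even higher than the 94.01% weight distribution, because the floats are also the higher-grade product.

```python
# Two-product mass-balance check for the separation results.

def nio_distribution(weights, grades):
    """Share of the total contained NiO reporting to each product,
    given product weights (wt%) and NiO grades (%)."""
    units = [w * g for w, g in zip(weights, grades)]  # NiO "units" per product
    total = sum(units)
    return [100 * u / total for u in units]

# -8 + 4 mm fraction: floats vs. sinks (Table 3).
weights = [94.01, 5.99]  # wt% in floats, sinks
grades = [1.45, 0.69]    # NiO % in floats, sinks
floats_share, sinks_share = nio_distribution(weights, grades)
print(f"NiO reporting to floats: {floats_share:.1f}%")  # ~97.1%
print(f"NiO reporting to sinks:  {sinks_share:.1f}%")   # ~2.9%
```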
Magnetic Separation

For the magnetic separation of the −8 + 4 mm and −4 + 1 mm fractions, the Perm Roll separator (equipped with permanent magnets) was used. Each size fraction was separated in two or three passes with decreasing rotation frequency, starting at 180 rpm. For the −8 + 4 mm fraction, the non-magnetic products were passed again at 140 rpm and finally at 100 rpm, while the −4 + 1 mm fraction was separated in only two passes (180 and 140 rpm) due to its insufficient quantity. In each pass the magnetics were collected, weighed and assayed, and the non-magnetic products were used to feed the next pass. For the −1 + 0.250 mm and −0.250 + 0.063 mm fractions, the Induced Roll magnetic separator was used. The −1 + 0.250 mm fraction was separated in three passes with decreasing rotation frequency, starting at 140 rpm with an electric current of I = 2.83 A, while the −0.250 + 0.063 mm fraction was separated in two passes. Table 4 presents the results of the magnetic separation of all size fractions (except the −0.063 mm fraction, which was not used). The NiO contents in the −8 + 4 mm fraction are 0.96%, 1.17% and 1.52% in magnetic products 1, 2 and 3, respectively, and 2.34% in the non-magnetic product. The non-magnetic product thus has the highest NiO grade. The weight distribution is 37.49% in magnetic product 1, 14.25% in magnetic product 2, 22.27% in magnetic product 3 and 25.98% in the non-magnetic product. In the −4 + 1 mm fraction, the NiO grade of the non-magnetic product (2.17%) is similar to that in the coarser fraction, but its weight distribution is much lower (15.81%). It is noted that the NiO content in magnetic product 1 is 1.38% with a high weight distribution (61.67%), which means that a great quantity of NiO reports to magnetic product 1. The results of the magnetic separation of the −1 + 0.250 mm and −0.250 + 0.063 mm fractions are quite satisfactory. In the −1 + 0.250 mm fraction, the NiO content in magnetic product 1 is 2.27% with a 48.03% weight distribution. It is pointed out that the NiO content in magnetic product 3 is 4.32%, the highest grade of the whole process. However, the weight distribution of this product is low (10.87%). This is explained by its microscopic investigation, in which a large proportion of binder material grains was found. Concerning the −0.250 + 0.063 mm fraction, the NiO content in magnetic product 1 is 2.61% with a very high weight distribution of 87.87% (Figure 8(a) and Figure 8(b)). These results, combined with the high NiO grade of magnetic product 2 (4.21%), lead to the conclusion that this size fraction is better separated by this method (Figure 9).
Discussion and Conclusions

The nickel iron ore of Agios Athanasios in the Kastoria area develops in the form of layers over ophiolites. Above the ophiolites, there are successive layers of silicate ore, saprolite ore, clay ore and an iron ore zone. The nickeliferous ore zone is covered by molassic conglomerates at the top. The main minerals of the nickel iron ore are hematite, goethite, quartz and lizardite; garnierite, saponite, willemseite and sepiolite occur to a lesser extent, and chromite, calcite and nepouite are scarcer. Hematite and goethite are mainly observed in the form of veins, as well as isolated crystals within the binder material, which is mainly constituted of compact masses of quartz, together with garnierite, lizardite, saponite, willemseite, sepiolite and nepouite. Furthermore, isolated clastic granules of chromite and veins of calcite of secondary origin were observed. A progressive transition of goethite into hematite occurs, due to dehydration. Nickel is mainly found in garnierite, willemseite and nepouite, which, along with quartz, are the main components of the binder material.

The results of the gravimetric separation lead to the conclusion that, in all size fractions, nickel is enriched in the light fraction and distributed mainly in the floating products. The best nickel enrichment was achieved in the −1 + 0.250 mm fraction, with a NiO content of 2.58% and a distribution of 94.49%. A high NiO content also occurs in the floating product of the −0.250 + 0.063 mm fraction (2.19%).

The results of the magnetic separation lead to the conclusion that nickel is enriched mainly in the magnetic products, while small concentrations were observed in the non-magnetic products. The exception was the coarser −8 + 4 mm fraction, where the highest NiO concentration occurred in the non-magnetic product (2.34%). This is due to insufficient liberation of the binder material. The best nickel enrichments were achieved in the −1 + 0.250 mm and −0.250 + 0.063 mm fractions. The NiO content in magnetic product 3 of the −1 + 0.250 mm fraction was 4.32%, while that in magnetic product 2 of the −0.250 + 0.063 mm fraction was 4.21%.

The relatively high nickel concentration (1.87%) in the fine −0.063 mm fraction is worth noting. This can be explained by the fact that Ni is found in the binder material; as a result, breaking and sieving direct the binder material's granules into this finer fraction. The microscopic investigation of this size fraction confirms the presence of a significant percentage of binder material granules, which are enriched in quartz, garnierite, willemseite and nepouite.

The comparison between the two washability treatment methods shows that the gravimetric separation provides a better nickel distribution in all size fractions, while the magnetic separation provides higher nickel grades. In conclusion, the satisfactory nickel content combined with the better NiO distribution makes gravimetric separation the appropriate enrichment method for this specific deposit, compared with magnetic separation.

Figure 3. X-ray diffraction patterns of representative samples from various parts of the deposit profile.

Figure 7. Chemical analysis of NiO according to the mean size of the products obtained from the gravimetric separation.

Figure 9. Chemical analysis of NiO according to the mean size of the products obtained from the magnetic separation.
Table 1. Chemical composition (wt%) of representative samples from various parts of the deposit profile.

Table 2. Chemical analysis of the size fractions from the initial sample.

Table 3. Results of the gravimetric separation.

Table 4. Results of the magnetic separation.
Novel alleles gained during the Beringian isolation period

During the Last Glacial Maximum, a small band of Siberians entered the Beringian corridor, where they persisted, isolated from gene flow, for several thousand years before expanding into the Americas. The ecological features of the Beringian environment, coupled with an extended period of isolation at small population size, would have provided evolutionary opportunity for novel genetic variation to arise as both rare standing variants and new mutations were driven to high frequency through both neutral and directed processes. Here we perform a full genome investigation of Native American populations in the Thousand Genomes Project Phase 3 to identify unique high-frequency alleles that can be dated to an origin in Beringia. Our analyses demonstrate that descendant populations of Native Americans harbor 20,424 such variants, a number comparable only to those associated with Africa and the Out of Africa bottleneck. This is consistent with simulations of a serial founder effects model. Tests for selection reveal that some of these Beringian variants were likely driven to high frequency by adaptive processes, and bioinformatic analyses suggest possible phenotypic pathways that were under selection during the Beringian Isolation Period. Specifically, pathways related to cardiac processes and melanocyte function appear to be enriched for selected Beringian variants.

The Beringian migration marks one of the most striking events in modern human history. Genetic and archaeological data confirm that a small population consisting of a few thousand people entered the Beringian corridor from Siberia at the advent of the Last Glacial Maximum (LGM), approximately 30 thousand years ago (kya) 1-5. The Beringian ecology provided a refuge for this migrant population as the LGM intensified 6. Plant macro-fossils and fossil pollen from Beringia suggest that it was a productive dry grassland ecosystem 7 inhabited by a variety of large mammals 8. However, North American glacial coverage and inhospitable Siberian environments during the LGM effectively sealed off the migrant population in the Beringian refugium, preventing either forward or backward movement until approximately 15 kya, when the surrounding glaciers receded, opening up both coastal and interior corridors of entry into the North American continent 2,4,9,10 (Fig. 1).

The demographic and paleo-ecological features of the Beringian experience have been well characterized 2,7,17,18. However, the genomic impacts of the Beringian experience are still being discovered. Several factors suggest that there was a great deal of opportunity for genetic evolution in the Beringian population. Importantly, the population originated from a small group of founders and maintained a small size for millennia 4,19-21. The combination of a founder effect and prolonged bottleneck would have greatly enhanced genetic drift 22. It is well known that genetic drift will reduce variation and provide a descendant population with only a subset of the variation that was present in its ancestors 23. The current literature documents such a reduction in variation, and the subset pattern, in Native Americans in comparison to Eurasians and Africans 24. However, in addition to the loss of alleles, genetic drift can elevate the frequencies of rare alleles and new mutations 25-27. This will occur to a much lesser extent than the loss of standing variation.
Nonetheless, full genome analyses make it possible to observe instances of such 'allele gains'. The gain of novel variation tracing back to the Beringian occupation has been less studied than the loss of variation. Allele gains are the major focus of this paper. We expect that most alleles gained through founder effects and bottlenecks will be outside of gene coding and regulatory sequences, and therefore selectively neutral. However, it is also possible that some of the alleles gained through enhanced genetic drift will affect the expression of phenotypes. A portion of the alleles gained may have health consequences. In addition to genetic drift, positive natural selection by Beringian environmental conditions may have produced some allele gains. Indeed, a strong signature of positive selection has been found in several variants contained in the fatty acid desaturase (FADS) gene cluster, potentially modulating a unique lipid profile in response to a protein-rich diet 28. Similarly, Hlusko and colleagues 29 have argued that an amino acid substitution in the ectodysplasin A receptor (EDAR) may have evolved in response to vitamin D deficiency created by the low UV at high latitude. A survey of the genome may reveal more locations of adaptive changes.

The Beringian people are deep ancestors of all contemporary Native Americans 10. They are also ancestors to populations that were formed in post-colonial times by admixture between Native Americans and people with ancestors in Europe and/or Africa 30,31. As such, we can expect that the genetic changes acquired by either genetic drift or natural selection in Beringia will be widespread throughout populations with Indigenous American ancestry, but absent in all other people of the world. This provides us with a way to identify the allele gains that were made during the Beringian Isolation Period. Here, we perform a full genome investigation to identify allele gains that were made by Native American ancestors during the Beringian Isolation Period and inherited by contemporary populations. Then, we perform a bioinformatic analysis to investigate possible functional consequences of these uniquely American alleles.

Results

Group specific polymorphisms. We found alleles gained during the Beringian Isolation Period by applying the concept of Group Specific Polymorphism (GSP). A GSP is a common allele in one group of people that is absent or nearly absent in all other groups. After a founder effect, GSPs will be present in both the ancestral and descendant populations. Ancestral and descendant populations can be distinguished after a founder effect by the mix of ancestral and derived alleles. GSPs in the ancestral population will be composed of a mix of ancestral and derived alleles. By contrast, GSPs in the descendant population will be almost entirely derived alleles that were gained from the founder effect. We analyzed whole autosome DNA sequences from the Thousand Genomes Project Phase 3 (TGPP3) sample to identify Group Specific Polymorphisms (GSPs). Figure 2 presents group specific polymorphisms for six broad groups of populations. The descendants of the Beringian migration harbor 20,424 GSPs. We found Beringian GSPs by examining the DNA sequences of people with mixed ancestry living in the Americas after controlling for European and African admixture. Only two other geographic divisions of our species showed comparable numbers of GSPs. A total of 28,460 GSPs were found in African people, represented by 5 populations living in Sub-Saharan Africa.
A total of 17,490 GSPs were found in Eurasian populations.

Figure 1. Major human migrations 11-13 (a). Additional details specific to the Beringian migration are given in panels (b-d). A migrant population entered the Beringian corridor by 30 kya, during the LGM 10,14 (b). The Laurentide and Cordilleran Ice Sheets blocked entry into the American continents, while an inhospitable mesic tundra developed in Eastern Siberia, preventing backward movement 15,16. The migrant population was thus isolated for upwards of 15 ky in a Beringian refugium (c) until glacial retreat exposed coastal and interior routes into North America (d). To create panel (a), we drew the outline of continents using the R package maptools version 1.1-2 (https://cran.r-project.org/web/packages/maptools/index.html) and then added the labels and paths of migrations as overlays in Adobe Illustrator. We generated panels (b-d) by adding shading and overlays to portions enlarged from panel (a).

Simulation results. We used coalescent simulations to verify that the observed pattern of derived GSP alleles is consistent with the reduction in heterozygosity seen in the short tandem repeat and single nucleotide polymorphism data sets that support the serial founder effects model for genetic diversity in contemporary human populations 24,32. Figure 3 shows the probability density for the age of a derived allele with frequency p ≥ 0.3 for each of four geographic regions. The allele age distribution for an African population illustrates the great antiquity of human polymorphism (Fig. 3a). The blue vertical bar marks the time window for the out-of-Africa migration, 55,000-60,000 years ago. The chance that a derived allele in this frequency range in Africa will be older than this time window is nearly 100%. As such, common derived alleles in contemporary Africans were likely common alleles at the time of the out-of-Africa migration. They would have been present in the gene pool that gave rise to the out-of-Africa migrant population. Their absence in contemporary non-Africans can be explained by genetic drift out of the original out-of-Africa migrants and their immediate descendants. Panels 3b-d show the probability density of the age of a derived allele with frequency p ≥ 0.3 for a population in Europe, East Asia, and the Americas, respectively. The blue bar in each of these graphs again shows the time window of the out-of-Africa migration. The allele age probability spikes in this interval because founder effects, such as the one that occurred with the OOA migration, allow a few new mutations to rise to high frequency. The three non-African populations share this spike because they are all descendants of the original OOA migrants. This spike corresponds to the large number of non-African GSPs. The green bar in Fig. 3b marks the time window bracketed by the OOA migration and the diversification of European populations. There is very little area under the curve during this time window, and consequently there was a very small chance that an allele would fulfill the criteria required for a European GSP. The situation in a simulated East Asian population illustrates the same phenomenon. The gold bar in Fig. 3c brackets the time window between the OOA migration and the entry into East Asia and the diversification of East Asian populations. There is very little area under the curve during this time window.
The orange bar in Fig. 3d brackets the time window between the separation of Native American ancestors from the ancestors of East Asians and the entry of Native American ancestors into the Americas. This is the Beringian Isolation Period. There is considerable area under the curve in this time window. Accordingly, this result predicts that a substantial number of Native American GSPs would arise during the Beringian Isolation Period.

The classification of Beringian GSPs by genomic category is presented in the first line of Table 1. The second line presents the same categorization of SNPs in the Random SNP Set. These data make it clear that Beringian GSPs are over-represented in protein coding sequences relative to the Random SNP Set (χ² = 212.01, d.f. = 2, p < 0.001).

Functional annotation of Beringian GSPs. Non-synonymous GSPs in protein coding sequences affected a number of different protein classes and biological pathways according to a Panther 33 analysis (Fig. 5). Eleven genes that code for metabolite proteins contained non-synonymous GSPs (Fig. 5a). Additional categories containing non-synonymous GSPs include protein modifying enzymes, transcription regulators, and regulatory proteins. Amongst the biological pathways impacted by non-synonymous GSPs, the categories with the greatest number of genes include integrin signalling, cytokine-mediated immune response, and nicotinic acetylcholine receptors (Fig. 5b). Interestingly, four different biological categories associated with the p53 tumor suppression pathway were affected by non-synonymous GSPs. Supplementary Figure S1 compares these categories to a random set of non-synonymous SNPs.

Evidence for natural selection. We examined the ratio of non-synonymous to synonymous substitutions in the Beringian GSPs to detect the impact of the Beringian Isolation Period on the efficacy of purifying selection. Table 2 tabulates the counts of non-synonymous and synonymous substitutions in the protein coding SNPs for the Beringian GSP and Random SNP sets. In protein coding sequences, the ratio of non-synonymous to synonymous GSPs is 40.9%/59.1% = 0.69. Natural selection against deleterious variation is evident because this ratio is well below unity, the expectation for selective neutrality. However, it is 3.14 times greater than the corresponding ratio in the Random SNP set (18.0%/82.0% = 0.22). This increase in non-synonymous SNPs may represent a relaxation of selection against deleterious alleles. It is also noteworthy that, given the high frequencies of these alleles, their functional consequences may affect many people.

Table 2. Non-synonymous and synonymous substitutions in protein coding SNPs.

SNP set | Non-synonymous | Synonymous
Beringian GSP | 3709 | 5359
Random set | 1397 | 6364
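The headline ratios in this paragraph follow directly from the counts in Table 2, and the over-representation of non-synonymous changes can be tested with a standard contingency-table test. Below is a minimal check in Python, assuming SciPy is available; note that the χ² quoted earlier is for the three-category genomic-location comparison in Table 1, not for this 2 × 2 table.

```python
from scipy.stats import chi2_contingency

# Substitution-type counts from Table 2.
counts = [[3709, 5359],   # Beringian GSPs: non-synonymous, synonymous
          [1397, 6364]]   # Random SNP set: non-synonymous, synonymous

ratio_gsp = counts[0][0] / counts[0][1]    # ~0.69
ratio_rand = counts[1][0] / counts[1][1]   # ~0.22
print(f"GSP ratio {ratio_gsp:.2f}, random ratio {ratio_rand:.2f}, "
      f"fold difference {ratio_gsp / ratio_rand:.2f}x")
# ~3.15x from the raw counts; the paper's 3.14 uses the rounded percentages.

# 2x2 test of independence between SNP set and substitution type.
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```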
Extreme environments, such as high latitude, provide opportunities for environmental adaptation through positive selection. We computed integrated haplotype homozygosity scores (iHS) for the 20,424 SNPs in the GSP set to test the hypothesis that some Beringian GSPs are environmental adaptations. These tests yielded 2,820 candidate loci with |iHS| scores exceeding the generally accepted threshold of 2.5. Typically, in tests for selection where many loci pass a minimum significance threshold, only the outliers in the top 1-5% are considered. Here, the top 5% of significant iHS scores across the entire genome includes 141 Beringian GSPs (Fig. 6). There is a sizeable gap between these outliers and the next greatest |iHS| (rs141503817, iHS = −4.05). We also considered genes disproportionately affected by multiple SNPs under positive selection. Table 3 reports the genes containing the most SNPs within the top 5% of iHS scores.

Figure 7 displays functional categories enriched for positively selected (|iHS| > 2.5) Beringian GSPs. By looking at multi-gene pathways impacted by positive selection, we can begin to get a sense of possible polygenic adaptation affecting complex traits. The KEGG pathway with the greatest combined enrichment score identified in our analysis is arrhythmogenic right ventricular cardiomyopathy (ARVC), with eight selected GSPs affecting genes related to this pathway. Within the top 15 categories, at least two other KEGG pathways suggest adaptive evolution in pathways related to cardiac function. Interestingly, the second most enriched category is melanogenesis, with 21 selected GSPs across 6 genes related to the production of melanin in the skin, hair, and eyes.

Discussion

The Beringian Isolation Period encompassed the last of a series of major founder effects that occurred during the peopling of the world. Heretofore, the principal evidence for these founder effects has been a decline in heterozygosity that is proportional to a population's geographic distance from Sub-Saharan Africa 24. However, reduction in heterozygosity is not the only consequence of founder effects and bottlenecks. These phenomena will occasionally elevate the frequencies of new mutations and rare alleles. Such 'allele gains' from founder effects and bottlenecks have been less well studied. We show that the Beringian founding population gained many unique alleles during its isolation, and that these alleles are shared widely among its contemporary descendants.

We found alleles gained during the Beringian Isolation Period by applying the concept of Group Specific Polymorphism (GSP). A GSP is a common allele in one group of people that is absent or nearly absent in all other groups. As such, a GSP will be diagnostic of ancestry from that group, and conversely, group membership will be a reasonable indicator that an individual will carry the allele. After a founder effect, GSPs will be present in both the ancestral and descendant populations. However, the ancestral and descendant populations can be distinguished by the mix of ancestral and derived alleles. GSPs in the ancestral population will be composed of a mix of ancestral and derived alleles, whereas GSPs in the descendant population will be almost entirely derived alleles.

Our analyses reveal GSPs in only three groups: Sub-Saharan Africans, Eurasians, and admixed Americans. Sub-Saharan Africans have the greatest number of GSPs (28,460). These alleles clearly identify Sub-Saharan Africans as the ancestral population, as 71% are in the ancestral state and 29% are in the derived allele state. By contrast, the set of Eurasian GSPs is composed almost exclusively of derived variants (99%), further documenting that, despite some archaic admixture, the Eurasian gene pool was primarily established by the Out of Africa (OOA) migration. The American GSPs are composed of 99.9% derived alleles and show the Beringian founder effect. It was surprising at first glance that GSPs specific to European, South Asian, and East Asian populations are absent or rare. However, the results are consistent with our simulations of the serial founder effects model and are easily explained by the fact that Eurasia was settled in a narrow time window after the out-of-Africa migration.
One of the most striking features of Beringian GSP architecture is the distribution of GSPs throughout the 22 autosomes. Over 90% of the GSPs are distributed evenly, with a spacing pattern approximating that seen in the set of 20,424 random SNPs. The evenness of the GSP distribution is punctuated by distinct clusters in 16 chromosomal regions (Fig. 4). There are genes associated with each cluster, but whether or not these GSPs influence the products or expression of these genes is an open question. We note that the random SNP set does not eliminate the possibility that micro-evolution in the Beringian Isolation Period would have caused clustering in common polymorphisms that are not GSPs. However, there are two important points in interpreting these features. First, Native American ancestry in the CLM, MXL, and PEL accounts for many of the non-GSP common polymorphisms in these populations. In this light, we would have expected to see clusters if they had formed, although such clusters might be somewhat attenuated because they are older. Second, the absence of similar clusters in the random SNP set suggests either that the out-of-Africa migration did not form such clusters, or that the greater antiquity of that migration has allowed enough time for recombination to randomize them. This expectation follows from the fact that the major components of ancestry in the CLM, MXL, and PEL samples are Native American and European, and both of these groups of people descended from the Out-of-Africa migrants.

Two lines of evidence indicate that purifying selection was relaxed in the alleles gained during the Beringian Isolation Period. First, the percentage of GSPs in coding sequence (44.4%) significantly exceeds the percentage of Random SNPs in coding sequence (38%). Second, among Beringian GSPs that do occur in coding sequence, the ratio of non-synonymous to synonymous nucleotide substitutions (ω = 0.69) is substantially higher than the corresponding ratio in the Random SNP set (ω = 0.22).

We used the iHS statistic to identify a set of GSPs that are candidates for positive selection. These comprise a small fraction of Beringian GSPs (141/20,424 = 0.0069). The relevant phenotypes that favored survival and reproduction of individuals cannot be directly inferred from the nucleotide sequence data alone. Therefore, we used bioinformatic analyses to gain provisional insights into potential phenotypes influenced by these alleles. Our analyses were performed by parsing the candidate loci according to three criteria: individual SNPs with outlying iHS scores, specific genes that harbor a disproportionate number of putatively selected GSPs, and gene ontology classes enriched for putatively selected GSPs. In combination, these three lines of evidence suggest that adaptations in the Beringian Isolation Period are related to cardiac function and melanogenesis. The gene EPHA3, which includes the GSP with the most extreme iHS score (rs190319719, iHS = −5.48), encodes a tyrosine kinase receptor that has been shown to be important in cardiac cell migration and differentiation, and in regulating the formation of the atrioventricular canal and septum during development. Similarly, the Arrhythmogenic Ventricular Cardiomyopathy, Viral Myocarditis, and Hypertrophic Cardiomyopathy KEGG pathways are enriched with GSPs showing evidence of positive selection. Twenty-one GSPs showing positive selection appear in the melanogenesis pathway.
TYRP-1, which contains 5 unique variants under positive selection, ranks highest amongst the individual genes targeted by selection in Beringia. It is an intriguing possibility that selection on genes involved in melanocyte function could have favored depigmentation to increase biosynthesis of vitamin D 29,35 in a low-UV environment.

Altogether, the analyses we present in this paper emphasize the importance of the Beringian Isolation Period for generating the unique genomic variation that distinguishes Native Americans from other continental groups. The magnitude of this effect relative to the effect of the Out-of-Africa migration underscores the importance of major bottleneck events for the evolution of unique group-specific allele gains in continental populations. Further, the evolutionary approach we demonstrate in this paper has a wealth of potential for insight into population differences in molecular phenotypes relevant for health and disease. Functional studies that link these variants to particular biological phenotypes stand to generate new insights into pathways underlying population disparities in health and disease, and may uncover novel candidate genes that could one day serve as potential therapeutic targets.

Methods

Genomic data. We analyzed whole autosomes from the Thousand Genomes Project Phase 3 (TGPP3) sample to identify SNPs originating in the ancient Beringians (Table S1) 36. Briefly, the total TGPP3 data consist of 84.7 million single nucleotide polymorphisms (SNPs) determined from next-generation sequencing of 2,504 individuals. Each individual's whole genome was sequenced at a mean depth of 7.4×, enhanced by sequencing targeted exomes at a mean depth of 65.7×. The TGPP3 sample includes populations from five geographic regions: Africa (five populations, total N = 504), Europe (five populations, total N = 503), South Asia (5 populations, total N = 489), East Asia (5 populations, total N = 504) and the Americas (6 populations, total N = 504). We used data from all TGPP3 populations residing outside of the Americas and three populations with substantial Native American ancestry that currently reside in the Americas. These are (N = 64) individuals with Mexican ancestry from Los Angeles, California (MXL), (N = 85) individuals from Lima, Peru (PEL), and (N = 94) individuals from Medellin, Colombia (CLM). These three populations formed through admixture among Native Americans, European colonists, and African slaves during the colonial period beginning in the 15th century, and they retain substantial Indigenous American ancestry. Martin and colleagues report the degrees of Indigenous American ancestry for each population: Peruvian (77%), Mexican American (47%), and Colombian (26%) 37. Taken together, the Indigenous American proportions are equivalent to approximately 119 unadmixed genomes. The approach outlined below extracts information about genetic variants contributed by the Indigenous American ancestors shared by all three populations.

Group specific polymorphisms (GSPs). We define a group specific polymorphism (GSP) as an allele that is at high frequency within a group of populations, private to that group of populations, and shared by all populations within the group (Fig. 8). As such, a GSP will be diagnostic of ancestry from that group, and conversely, group membership will be a reasonable indicator that an individual will carry the allele.
Operationally, we required a GSP (1) to be present in all populations belonging to the group for which it is defined, (2) to have an allele frequency greater than 30% in the focal group, and (3) to have an allele frequency less than 1% in all populations outside the focal group. The 30% and 1% thresholds were pragmatic choices for this study, guided by broad patterns in the human species. Applying the Hardy-Weinberg principle, the expected probability that a group member carries a GSP under the 30% and 1% criteria has an approximate minimum of (0.3)² + 2(0.3)(0.7) = 0.51, whereas the expected probability that a member of a different group carries the GSP has an approximate maximum of 2(0.01)(0.99) + (0.01)² ≈ 0.02. The actual probabilities will depend on the structure of mate exchanges among members of the groups.

The occurrence of GSPs depends on how groups of populations are defined. In this study, we considered groups determined by geographic location and by the descent groups formed by the well-documented out-of-Africa migration that took place approximately 60,000 years before present 11,24,38. To search for African, European, South Asian, East Asian, and non-African GSPs, we made the comparisons illustrated in Fig. 9. Notably, we did not use the TGPP3 populations that reside in the Americas for these comparisons. This omission is necessary because the American populations likely harbor GSPs from throughout the world owing to their formation by admixture in the colonial era.

To identify Indigenous American GSPs, it was necessary to control for non-Indigenous American admixture. Thus, in the Mexican American sample, the 30% GSP threshold was scaled by the 47% Native American ancestry component (p = 0.30 × 0.47 = 0.14) to yield a new GSP threshold of 14%. For the Peruvian sample, the adjusted threshold was p = 0.23, and for the Colombian sample the adjusted threshold was p = 0.08. An allele was considered a Beringian GSP if it met the modified criterion in all three American populations.
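The threshold arithmetic above is simple enough to verify in a few lines. The sketch below (Python; the helper name and the rounding are ours, not the paper's) reproduces the Hardy-Weinberg carrier probabilities and the admixture-adjusted thresholds:

```python
def carrier_prob(p):
    """Probability an individual carries at least one copy of an allele
    at frequency p, under Hardy-Weinberg: p^2 + 2p(1 - p)."""
    return p**2 + 2 * p * (1 - p)

print(f"{carrier_prob(0.30):.2f}")   # 0.51: minimum for a focal-group member
print(f"{carrier_prob(0.01):.4f}")   # 0.0199, i.e. ~0.02 for a non-member

# Admixture-adjusted GSP thresholds: the 30% criterion scaled by each
# population's Indigenous American ancestry fraction (Martin et al. 37).
ancestry = {"MXL": 0.47, "PEL": 0.77, "CLM": 0.26}
thresholds = {pop: round(0.30 * frac, 2) for pop, frac in ancestry.items()}
print(thresholds)   # {'MXL': 0.14, 'PEL': 0.23, 'CLM': 0.08}
```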
Coalescent simulations. We used coalescent simulations to verify that the observed pattern of derived GSP alleles is consistent with the loss of variation from which the serial founder effects model was inferred 24,32. Demographic parameters for these simulations were estimated by fitting a tree to data from the CEPH-HGDP short tandem repeat (STR) data set 39. We began by choosing a subset of 27 populations from the CEPH-HGDP dataset: San and Kxoe from South Africa; Mandenka, Brong, Igala, Yoruba, and Luhya from Central Africa; Russian, Tuscan, Orcadian, Basque, and French from Western Europe; Punjabi, Tamil, Bengali, Gujarati, and Telugu from South Asia; Cambodian, Dai, Han, North Chinese Han, and Japanese from East and South-East Asia; and Pima, Maya, Mixtec, Embera, and Cabecar, who are Native Americans. The San and Kxoe served to root the tree for the remaining 25 populations 32,40. The non-South African populations from Europe and Asia were chosen to match the populations in the Thousand Genomes Project data as closely as possible. Native American populations were included to estimate demographic parameters for the Beringian Isolation Period. Next, we used 619 microsatellite loci 39 to compute Nei's minimum genetic distances between all pairs of populations 41. We re-scaled these distances by multiplying by 2. Using the re-scaled genetic distances, we built a neighbor-joining tree for the 27 populations. We rooted the neighbor-joining topology on the branch between the San-Kxoe and the remaining African and non-African populations, and then fitted branch lengths using the maximum-likelihood method proposed by Cavalli-Sforza and Piazza 42-44. The branch lengths on the tree constructed in this manner measure the increase in gene identity (homozygosity) accrued between each pair of nodes moving from the root to the extant populations. The nodes on this tree were assigned chronological dates using estimates of the times at which modern humans inhabited the various regions of the globe. These dates were inferred from archaeological sources and independent genetic data 2,45,46. We estimated effective population sizes for each branch of the tree by iteratively solving for the population size that would reproduce the genetic distance on that branch while allowing step-wise mutations to occur at a rate of µ = 10⁻⁴ per locus per generation. The fitted tree, branch points, node dates (in generations), and effective population sizes are provided as Supplementary Figure S2.

With the chronological dates and effective population size estimates obtained as outlined above, we simulated single-copy DNA sequences using an infinite sites mutation model and a mutation rate of µ = 1.2 × 10⁻⁸ per base pair per generation. The simulations projected DNA sequences in existing populations backwards in time through their history of changes in effective size at population splits. We performed these simulations using an original program that implements the algorithm of Hudson (1990) 47. The time of each mutation in the simulated coalescent histories was recorded, as was the frequency of the mutant (derived) allele in the contemporary population. From these simulated data, we constructed the probability density for the age of a high-frequency derived allele found in a population inhabiting a specific geographic region of the world. Thus, we were able to assess the probability that a derived allele arose on a branch, at a time, that would render it exclusive to a particular set of populations or geographic region.
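The paper's simulator is an original program with piecewise-constant population sizes fitted to the tree above; the minimal constant-size sketch below (Python with NumPy; all parameter values are illustrative) shows the core bookkeeping, i.e. recording each simulated mutation's age together with its derived-allele frequency in the sample:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_allele_ages(n=50, N=10_000, mu_total=1e-3, reps=2_000):
    """Ages (in generations) and sample frequencies of derived mutations
    under a constant-size Kingman coalescent. mu_total is the mutation
    rate per lineage per generation (per-bp rate times sequence length)."""
    records = []
    for _ in range(reps):
        # Active lineages: (descendant sample count, time lineage began).
        lineages = [(1, 0.0)] * n
        t = 0.0
        while len(lineages) > 1:
            k = len(lineages)
            # Time to next coalescence: rate k(k-1)/2 per 2N generations.
            t += rng.exponential(2 * N / (k * (k - 1) / 2))
            i, j = rng.choice(k, size=2, replace=False)
            for desc, birth in (lineages[i], lineages[j]):
                # Mutations fall on each branch as a Poisson process.
                for _ in range(rng.poisson(mu_total * (t - birth))):
                    age = rng.uniform(birth, t)
                    records.append((age, desc / n))  # (age, derived freq.)
            merged = (lineages[i][0] + lineages[j][0], t)
            lineages = [x for q, x in enumerate(lineages) if q not in (i, j)]
            lineages.append(merged)
    return records

records = simulate_allele_ages()
high_freq_ages = [age for age, freq in records if freq >= 0.3]
print(f"{len(high_freq_ages)} mutations at frequency >= 0.3, "
      f"median age {np.median(high_freq_ages):.0f} generations")
```

With serial bottlenecks layered on top of this skeleton, as in the paper's fitted tree, the age density for high-frequency derived alleles develops the spikes described in the Results.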
Genome architecture and functional annotation. For comparative purposes, we constructed a random sample of 20,424 SNPs selected from across the genome. The allele frequency distribution and the proportion of SNPs per chromosome in the random sample were matched to the set of Beringian GSPs that we discovered (see Results). We searched for spatial clustering within the Beringian GSPs and random SNPs by tabulating the median distance from each SNP to its nearest neighbor within a window of ±10 SNPs. We used the ANNOVAR annotation suite 48 (https://rdocumentation.org/packages/annovarR/versions/1.0.0) to categorize each variant from the Beringian GSP and Random SNP sets according to a variety of genomic properties. All SNPs were annotated as either intergenic, non-coding RNA (ncRNA), or genic (including introns, exons, and UTRs). Exonic variants were further annotated to reflect synonymous and non-synonymous substitutions. We further annotated variants from the Beringian and Random SNP sets with known gene associations as reported by NCBI's gene database. The gene associations included intergenic SNPs that fell within known regulatory regions of specific genes.

Genes impacted by Beringian GSPs and Random SNPs were then grouped according to similar functional properties, defined by the Kyoto Encyclopedia of Genes and Genomes (KEGG) categories 49 and gene ontology (GO) categories, using the Enrichr (https://cran.r-project.org/web/packages/enrichR/vignettes/enrichR.html) and Panther version 16.0 (http://www.pantherdb.org/pathway/) software packages, respectively 33,50.

Detection of natural selection. To measure purifying selection, we calculated the Ka/Ks ratio for genes with exonic GSPs following the method of Li et al. 51 and compared it to the same measure for a random set of SNPs. Genic SNPs in both the Beringian and random sets were annotated as synonymous or non-synonymous substitutions using the ANNOVAR suite. To calculate Ka (the number of non-synonymous substitutions per non-synonymous site), we divided the total number of non-synonymous substitutions by the number of non-synonymous nucleotide sites for each gene with an exonic variant. Similarly, Ks (the number of synonymous substitutions per synonymous site) was calculated as the ratio of synonymous substitutions to synonymous sites in the same gene set. The numbers of synonymous and non-synonymous sites were calculated as the weighted sums of the probabilities that each site could experience a non-synonymous or a synonymous change. Finally, combining the data for all GSPs in exons, we calculated the ratio ω = Ka/Ks to determine whether the amino acid substitutions resulting from GSPs departed from the expectation for neutral evolution, ω = 1.0 51. To determine whether the GSPs displayed an atypical pattern of natural selection in comparison to SNPs chosen from the genome at random, we applied the above steps to the set of 20,424 random SNPs. Ka/Ks ratios were computed in R 4.0.2 using the seqinr package version 4.2-5 52 (http://seqinr.r-forge.r-project.org/) on gene sequences downloaded from the NCBI database.

To identify signals of positive selection, we calculated integrated haplotype homozygosity scores (iHS) 53 using the rehh package in R 54. We calculated an iHS score for each GSP in each of the three American populations (MXL, CLM, PEL). According to standard practice, an |iHS| score greater than 2.5 standard deviations from the mean is considered a candidate for positive selection; positive scores indicate selection favoring the ancestral allele, whereas negative scores indicate selection favoring the derived allele. The iHS statistic is useful for detecting selective sweeps that have not reached fixation and allows for prioritizing candidate SNPs or genomic regions, but it does not provide a formal test of significance. In order to link natural selection to phenotypic targets of adaptation, we examined Beringian GSPs with high |iHS| scores in three ways. First, we present the GSPs with the most extreme outlier |iHS| values. Next, we identify the genes containing the greatest number of GSPs within the top 5% of iHS scores. Finally, to characterize biological systems and pathways affected by selected GSPs, we compiled a list of genes with Beringian GSPs within the top 5% of iHS scores and used the Enrichr tool 50 to assess gene set enrichment in pathways described by the Kyoto Encyclopedia of Genes and Genomes (KEGG) 55. Results from this analysis were rank-ordered using the Enrichr combined score metric, which captures the deviation between each category's observed rank and the rank expected for that category by chance.
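The paper computed these statistics with ANNOVAR, seqinr, and rehh in R; the short Python sketch below is only meant to make the two decision rules concrete. The ω example uses made-up counts, and the iHS dictionary mixes the two outlier SNPs reported in the Results with a hypothetical non-candidate:

```python
def omega(nonsyn_subs, syn_subs, nonsyn_sites, syn_sites):
    """Ka/Ks: substitutions per site, non-synonymous over synonymous."""
    ka = nonsyn_subs / nonsyn_sites
    ks = syn_subs / syn_sites
    return ka / ks

# Hypothetical gene: 12 non-synonymous changes over 900 non-synonymous
# sites vs. 10 synonymous changes over 300 synonymous sites.
print(f"omega = {omega(12, 10, 900, 300):.2f}")  # 0.40 < 1.0: purifying

# Candidate screen for positive selection: |iHS| > 2.5.
ihs = {"rs190319719": -5.48, "rs141503817": -4.05, "rs_example": 1.20}
candidates = {snp: s for snp, s in ihs.items() if abs(s) > 2.5}
print(candidates)  # the two outliers; negative iHS favors the derived allele
```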
Green Networking for Major Components of Information Communication Technology Systems

Green Networking can be a way to help reduce carbon emissions by the Information and Communications Technology (ICT) industry. This paper presents some of the major components of Green Networking and discusses how the carbon footprint of these components can be reduced.

Introduction

The late David Brower [1], a noted environmentalist, stated: "We don't inherit the environment from our ancestors, we borrow it from our children". This is a very sobering comment. If the definition of sustainability is that we leave this planet to our children in a better state than we found it, then according to the Intergovernmental Panel on Climate Change (IPCC) [2] we are failing dismally. The major contributor to global warming and climate change is the dramatic increase in human greenhouse gas emissions into the atmosphere; the main greenhouse gas is carbon dioxide (CO2).

Green Networking

Green Networking covers all aspects of the network (personal computers, peripherals, switches, routers, and communication media). The energy efficiency of all network components must be optimized to have a significant impact on the overall energy consumption of these components. The efficiencies gained by having a Green Network will reduce CO2 emissions and thus help mitigate global warming. The Life Cycle Assessment (LCA) [3] of the components must also be considered; LCA is the evaluation of the environmental impacts of a product from cradle to grave. New ICT technologies must be explored, and their benefits must be assessed in terms of energy efficiencies and the associated gains in minimizing the environmental impact of ICT. Some of the goals of Green Networking include (i) reduction of energy consumption, (ii) improvement of energy efficiency, (iii) consideration of the environmental impact of network components from design to end of use, (iv) integration of network infrastructure and network services, consolidating traditionally separate networks into one network, (v) making the network more intelligent, so that it is more responsive and requires less power to operate, (vi) compliance with regulatory reporting requirements, for example the National Greenhouse and Energy Reporting System (NGERS) and the proposed Carbon Pollution Reduction Scheme (CPRS), and (vii) promotion of a cultural shift in thinking about how we can reduce carbon emissions. Figure 1 shows the relative power use of the devices used in the ICT industry [4].

Network Components

According to Gartner [4], desktop computers and monitors consume 39% of all electrical power used in ICT. In 2002, this equated to 220 Mt (million tonnes) of CO2 emissions. To reduce the carbon footprint of desktop PCs, their usage must be efficiently managed. Old cathode ray tube monitors should be replaced with liquid crystal display screens, which reduce monitor energy consumption by as much as 80% [5]. Replacing all desktop PCs with laptops would achieve a 90% decrease in power consumption [5]. Energy can also be saved by using power saving software installed on desktops and running all the time; such software forces PCs into standby when not in use. Another option is to use solid state hard drives, which use 50% less power than mechanical hard drives [6]. When considering the Local Area Network (LAN) infrastructure, probably the most power-hungry device is the network switch.
Modern network switches perform various network infrastructure tasks and as a result use considerable power. PoE (Power over Ethernet) is a relatively new technology introduced into modern network switches. PoE switch ports provide power for network devices as well as transmit data. They are used by IP phones, wireless LAN access points, and other network-attached equipment. A PoE switch port can provide power to a connected device and can scale back the power when it is not required. Several techniques are available to reduce the power consumption, and the equivalent CO2 emissions, of a network switch.

One solution is to use a highly efficient power supply within the network switch. A typical PoE network switch has a large number of IEEE Class 3 devices (e.g., IP phones) attached, with each device consuming up to 15.4 W of power. A typical high-end switch has about 384 ports and will therefore require about 5.9 kW of power. An 80% efficient power supply would draw 7.3 kW, whereas a 90% efficient power supply would draw 6.5 kW; using the more efficient supply saves up to 800 W. Assuming that the devices connected to the network switch are turned on all the time for a year, a 90% efficient power supply could save about 7200 kilowatt-hours per year per network switch. Assuming that electricity is generated from a coal-fired power station, one kilowatt-hour of electricity is equivalent to 0.537 kg of CO2 [7]. Therefore, increasing the efficiency of the network switch's power supply from 80% to 90% will result in a saving of 3866 kg of CO2 emissions per network switch per year. Assuming electricity costs $0.15/kilowatt-hour, this also saves about $1080 per network switch per year in electricity costs alone.

Another solution is to use power management software built into the network switch. With power management software, we can instruct the network switch to turn off ports when they are not in use. For example, an attached device such as an IP phone that is only used during office hours (9 am till 5 pm) does not need its port powered for the remaining 16 hours of the day, so scheduled port shutdown can cut that port's energy use by roughly two thirds.
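The savings quoted above follow from a few lines of arithmetic. The sketch below (Python; the constant names are ours) reproduces the figures for the 384-port example; the small differences from the rounded 7.3 kW and 6.5 kW values in the text are rounding effects.

```python
# Back-of-envelope check of the PoE power-supply savings quoted above.
PORTS = 384
WATTS_PER_PORT = 15.4        # IEEE Class 3 device, worst case
HOURS_PER_YEAR = 24 * 365
CO2_KG_PER_KWH = 0.537       # coal-fired generation [7]
COST_PER_KWH = 0.15          # dollars

load_kw = PORTS * WATTS_PER_PORT / 1000   # ~5.9 kW of PoE load
draw_80 = load_kw / 0.80                  # ~7.4 kW input at 80% efficiency
draw_90 = load_kw / 0.90                  # ~6.6 kW input at 90% efficiency
saved_kwh = (draw_80 - draw_90) * HOURS_PER_YEAR

print(f"Energy saved: {saved_kwh:,.0f} kWh/year")             # ~7,200
print(f"CO2 avoided:  {saved_kwh * CO2_KG_PER_KWH:,.0f} kg")  # ~3,860
print(f"Cost saved:   ${saved_kwh * COST_PER_KWH:,.0f}")      # ~$1,080
```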
Network Integration and Network Services

Initially, the network infrastructure was only required to provide connectivity between devices on a network. In the past, data and voice traffic ran on different networks, which produced inefficiencies and required the duplication of resources. With the wide adoption of Voice over IP (VoIP), the separate infrastructures were replaced with one unified, converged network supporting both data and voice traffic. The introduction of VoIP requires the network infrastructure to provide new network services. For voice traffic, which requires low latency, QoS (Quality of Service) was introduced, and network devices were required to support it. As networks became more critical in daily business operations, additional network services were required: network infrastructure devices also had to support VPNs (Virtual Private Networks) and data encryption. The new integrated network infrastructure with its network services makes the network more energy efficient and reduces the carbon footprint of the network infrastructure.

Data Centers

The main issue with Data Centers, with respect to Green Networking, is the inefficient use of electrical power by the Data Center components; in addition, electrical power generation from coal becomes a critical issue. Data Centers store a vast amount of data used on a daily basis by users, companies, government, and academia. As the demand for data has increased, so has the size of Data Centers, and consequently the power they consume. In 2003, a typical Data Center consumed about 40 W per square foot; by 2005 this figure had risen to 120 W per square foot [8], and it is anticipated that it will continue to rise. Rack density, the number of devices per rack, within the Data Center has also increased. This increase in rack density directly increases the heat load, which must be dissipated in the form of cooling. Some Data Centers have reached the point where the local electricity supplier cannot supply any further electricity. The typical Data Center consists of blade servers, storage devices, and multiprocessor servers. These servers are housed in racks placed in rows on a raised floor; the raised floor allows for power distribution, data cable distribution, and cooling ducts. In a recent report, Gartner [4] predicts that in the future (we are already in 2009!) many organizations will spend more on their annual IT energy bills than on servers. The main components of the network infrastructure of a Data Center are the data cabling and the switches. The power consumption distribution within a typical Data Center is shown in Figure 2. Due to the high power consumption of Data Centers, several solutions have been proposed to save energy and make Data Centers more energy efficient. Some of these solutions are outlined below.

Alternate Energy Sources. Traditionally, the electrical power needed for Data Centers is supplied by the electricity grid. Using alternate energy sources at the Data Center is often impractical. The solution is to take the Data Center to the energy source, which could be solar, wind, geothermal, or some combination of these alternate forms of energy. Instead of the power traveling great distances, the data would need to travel great distances; for this to be feasible, a broadband network infrastructure is required.

Consolidation. Going through a systematic program of consolidating and optimizing machines and workloads can achieve increased efficiencies at the Data Center.

Virtualization. With the virtualization software now available, it is possible to reduce the number of physical servers required for a system; each physical server can host many virtual servers. Virtualization's efficiency gains are made possible by better utilization of the CPU potential within the server. Typically, a server running without virtualization might run at only 5% of full utilization; with virtualization the CPU can run at up to 80% of full utilization. Virtualization is one of the main technologies used to implement a Green Network. Virtualization is a technique used to run multiple virtual machines on a single physical machine, sharing the resources of that single computer across multiple environments. It allows the pooling of resources, such as computing and storage, that are normally underutilized, and it offers the following advantages: less power, less cooling, fewer facilities, and less network infrastructure. For example, assume a server room has 1000 servers and 84 network switches, consumes 400 kW of electricity for ICT equipment and 500 kW for cooling, and requires 190 square meters of floor space. With virtualization we could typically reduce the number of physical servers substantially.
The power required for the ICT equipment would then be reduced significantly, the power required for cooling would be reduced, and the floor space required would be only about 23 square meters. Note that not only is the power required for the servers reduced, but so are the cooling, network infrastructure, and floor space requirements. Virtualization can also be used to replace the desktop: with desktop virtualization we can use a thin client consuming little power (typically 4 W), with the image and all other programs required by the client downloaded from one of the virtualization servers. Virtualization can also be used successfully in educational and training environments. A student requiring a complete network of client, server, and interconnects, which would normally require a number of hardware components, can now build it on a single PC.

Improved Server and Storage Performance. New multicore processors execute more than four times faster than previous processors; combined with new high-speed, high-performance disk arrays and 144-gigabyte Fibre Channel drives, they can reduce transfer times and improve efficiencies within the Data Center [9].

Power Management. Although power management tools are available, they are not necessarily being implemented. Many new CPU chips have the capacity to scale back voltage and clock frequency on a per-core basis, and the power supplied to the memory can also be reduced. By implementing power management techniques, companies can save energy and cost.

High Efficiency Power Supplies. The use of high efficiency power supplies should be considered in all Data Center devices. Poor quality power supplies not only have low power efficiency, but their efficiency is also a function of utilization: at low utilization the power supply's efficiency is lower still. For every watt of electrical power wasted in a Data Center device, another watt is used in extra cooling; therefore, investing in highly efficient power supplies can double the power savings. Another issue is that Data Center designers quite often overestimate power supply needs; with a more accurate assessment of the power requirements of a device, we can achieve high efficiency and energy savings.

Improved Data Center Design. Improved Data Center design must consider electrical power production and distribution, cooling design, data cabling layout, UPS (Uninterruptible Power Supply) design, as well as server and data storage design. One new approach is the use of a modular, pod-based Data Center design that creates energy-efficient building blocks that can be duplicated easily in Data Centers of any size. A pod is typically a collection of up to 24 racks with a common hot or cold aisle, along with a modular set of power, cooling, and cabling components. When considering electrical power production and cooling design, one possible solution is cogeneration, the production of electricity and heat from a single process. Cogeneration is not a new technology, but it is well suited to the Data Center environment. A traditional Data Center using the electricity grid might produce about 1 tonne of CO2 per MWh, but with cogeneration this figure could be reduced to 0.45 tonne of CO2 per MWh [10].
Cloud Computing

In an ideal computing world, all we will need is an Internet connection. The access device can be a thin client consuming 4 watts or a small wireless device; we will not need hardware beyond an Internet connection device. All services could come from the "Cloud": web services, data storage, backup, and applications could be provided by service providers operating within the "Cloud". For this to happen, the Cloud must provide broadband bandwidth, offer security to users, and be reliable. From a company's point of view, many of its IT resources could be virtualized or outsourced. Virtualization reduces hardware requirements, needs less maintenance, and requires less capital outlay. Most of the company's resources would be hosted by service providers within the Cloud, including data storage and other services. From a Green Networking point of view, "Cloud Computing" offers the promise of low-power devices consuming little electricity, connected to highly efficient "Cloud" networks that have been optimized for minimal power consumption. "Cloud Computing" can be considered "Green Networking" through the efficiencies it brings. "Cloud Computing" offers the following advantages: (i) consolidation, reducing redundancy and waste; (ii) abstraction, decoupling workloads from physical infrastructure; (iii) automation, removing manual labor from runtime operations; and (iv) utility computing, enabling service providers to offer storage and virtual servers that ICT companies can access on demand.

Broadband Telecommunications and Applications

The proposed Australian NBN (National Broadband Network) offers great opportunities for the ICT industry to reduce greenhouse gas emissions. The new "Green Networking" infrastructure will be a fiber-to-the-node broadband network with high-speed connections to households and businesses alike, enabling new, improved, energy-efficient, low-carbon applications.
As highlighted by the authors of [12], a nationwide broadband network can offer the following advantages: remote appliance power management, presence-based power, decentralized business districts, personalized public transport, real-time freight management, increased renewable energy, and "on-live" high definition video conferencing.

Remote Appliance Power Management. Broadband can provide monitoring and control of electrical devices, and control can also be centralized. Smart meters will allow consumers to manage their energy usage better by providing more detailed information about their consumption, with the opportunity to save money on their power bills and reduce greenhouse gas emissions.

Presence-Based Power. With presence-based power, the supply of energy follows the user, not the appliance. For example, lighting and heating could be switched off when the last person leaves the room.

Decentralized Business District. With broadband to every house, it will be easy to work from home. This requires less travel, which saves traveling cost and also reduces the CO2 emitted by cars. Humans require interaction, but a lot of unnecessary travel can be avoided through the use of broadband, with the advantage of lower greenhouse gas emissions.

Personalized Public Transport. A personalized public transport system uses on-call public transport vehicles that act as feeders into the public transport system. Using this system, commuters can obtain accurate, up-to-date timetable information about the transport system, making it more convenient. Wireless on-call broadband can enable personalized public transport for commuters, placing less reliance on private car use while increasing flexibility for the user and reducing waiting times.

Real-Time Freight Management. Wireless broadband can be used to monitor freight vehicles in real time. Wireless sensors or RFID (Radio Frequency Identification) tags can be used to keep track of freight distribution and to estimate accurate travel times for the goods. Such a system minimizes travel time and increases overall fuel economy, thus reducing the freight industry's carbon footprint.

Increased Renewable Energy. Renewable energy sources such as wind turbines and solar panels produce constantly varying amounts of power. Broadband networks can monitor this power and better integrate the renewable energy into the electricity grid (the Smart Grid).

"On-Live" High Definition Video Conferencing. Traditionally, video conferencing has suffered from poor quality, especially when communicating over large distances. The advent of broadband networks has made high definition television and video conferencing possible and practical. The environmental benefit of high definition video conferencing is becoming clear, as companies are required to do less traveling: instead of traveling to meetings worldwide, such meetings are being conducted using high definition video conferencing technology. The quality of high definition video conferencing systems has improved significantly over the years, with good audio and video synchronization in contrast to previous systems. The Australian government has recently invested in a new high definition video conferencing system, which is expected to save an estimated AU$250 million otherwise spent on air travel and consequently further reduce the carbon footprint.
LCA: Life Cycle Assessment

Part of the "Green Network" future is to consider not only the energy efficiency of a network component during its lifetime but its complete life cycle as well. The life cycle should include the assessment of raw materials, production, manufacture, distribution, use, and disposal of the network devices. We must adopt a "life-cycle" approach to product design, manufacture, and disposal. Byteback is a free computer take-back program that helps people dispose of end-of-life equipment [13]. Responsible computing companies are allowing customers to return end-of-life products at no cost. These programs are compliant with the WEEE (Waste Electrical and Electronic Equipment) and RoHS (Restriction of Hazardous Substances) recycling laws [14].

Green Network Performance Measurements

To enable a "Green Network", we must be able to monitor and measure the savings associated with the green networking strategies in place. A network energy efficiency baseline must be established from which we can measure improvements and against which we can compare them. We must look at ways to develop meaningful measurements of such power savings. In a low-carbon "Green Networking" environment, instead of considering bits per second (bps) we might need to consider watts per bit to measure energy inefficiency, or perhaps a better indicator would be bits per unit of CO2 (b/CO2); a short sketch of the watts-per-bit idea is given below. There are several government and non-government organizations working on and producing "Green Networking" standards. Some of these standards are compulsory and some are voluntary certification programs. They include the Energy Star rating, The Green Grid, the ISO 14000 standards, EPEAT, and Climate Savers (as shown in Table 1). Climate Savers [16] is a nonprofit group of consumers, businesses, and conservation organizations dedicated to promoting smart technologies that can improve power efficiency and reduce the energy consumption of computers.

Ubiquitous Green Networking

Mark Weiser [19] introduced the concept of ubiquitous computing in the 1990s as computing anywhere at any time. In a ubiquitous networking environment, the system makes decisions based on user activity. A ubiquitous sensor network infrastructure consists of sensors that monitor and sample the environment. Ubiquitous green networking can be used to monitor and make decisions about energy use to produce highly efficient systems. Within the home, office, or public spaces, ubiquitous green networking can monitor energy consumption and make intelligent decisions based on user activity to minimize energy use. The IEEE is currently developing a Ubiquitous Green Community Control Network Protocol standard known as IEEE P1888 [20]. According to the IEEE, the P1888 protocol will be used for environmental monitoring and energy consumption management mechanisms to help address energy shortage and environmental degradation through remote surveillance, operation, management, and maintenance.
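Picking up the watts-per-bit measure proposed in the performance measurements section above: dividing a device's power draw by its throughput gives joules per bit, so lower is greener. Both device profiles below are hypothetical, chosen only for illustration.

```python
def joules_per_bit(power_watts: float, throughput_bps: float) -> float:
    """Energy consumed per bit forwarded (W / bps = J/bit)."""
    return power_watts / throughput_bps

# Hypothetical switch profiles: an older 10 Gbps device and a newer 40 Gbps one.
old_switch = joules_per_bit(power_watts=300.0, throughput_bps=10e9)
new_switch = joules_per_bit(power_watts=150.0, throughput_bps=40e9)

print(f"Old switch: {old_switch * 1e9:.1f} nJ/bit")    # 30.0 nJ/bit
print(f"New switch: {new_switch * 1e9:.2f} nJ/bit")    # 3.75 nJ/bit
print(f"Improvement: {old_switch / new_switch:.0f}x less energy per bit")
```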
Conclusion

The vision of a Green Network is one in which we all use thin clients with low energy consumption, connected wirelessly to the Internet, where all our data is securely stored in highly efficient, reliable Data Centers that typically operate at low energy per gigabit per second. This also includes access to network services from Cloud computing service providers. Whatever the future holds, Green Networking will help reduce the carbon footprint of the ICT industry and hopefully lead the way in a cultural shift that all of us need to make if we are to reverse the global warming caused by human emissions of greenhouse gases. Finally, the relationship between efficiency and consumption remains an interesting argument: efficiency drives consumption. ICT solutions can solve efficiency; it is society that must solve consumption.
5,212.2
2009-01-01T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
The time course of the onset and recovery of axial length changes in response to imposed defocus The human eye is capable of responding to the presence of blur by changing its axial length, so that the retina moves towards the defocused image plane. We measured how quickly the eye length changed in response to both myopic and hyperopic defocus and how quickly the eye length changed when the defocus was removed. Axial length was measured at baseline and every 10 minutes during 1 hour of exposure to monocular defocus (right eye) with the left eye optimally corrected for two defocus conditions (+3 D and −3 D) and a control condition. Recovery was measured for 20 minutes after blur removal. A rapid increase in axial length was observed after exposure (~2 minutes) to hyperopic defocus (+7 ± 5 μm, p < 0.001) while the reduction in axial length with myopic defocus was slower and only statistically significant after 40 minutes (−8 ± 9 μm, p = 0.017). The eye length also recovered toward baseline levels during clear vision more rapidly following hyperopic than myopic defocus (p < 0.0001). These findings provide evidence that the human eye is able to detect and respond to the presence and sign of blur within minutes. In a wide range of species, the quality of visual experience influences the axial growth of the eye during early life 1 . Depriving an eye of form vision by lid suture 2,3 or by wearing translucent goggles or diffusers 4-6 causes excessive eye growth and the development of myopia. Similarly, exposure of the eye to positive or negative optical defocus [7][8][9][10][11][12] alters eye growth in a predictable way in order to compensate for the amount of imposed retinal blur. Visually guided eye growth begins with short-term changes in choroidal thickness which are followed by longer-term changes in eye growth [13][14][15][16][17][18] . The net result of these two mechanisms is a movement of the retina towards the defocused image plane to reduce the amount of imposed blur. Animal studies investigating the time course of the eye's response to blur have shown a rapid onset of choroidal thickness change, with only a few minutes of blur exposure required for the eye to discern the sign of blur and to elicit an appropriate directional choroidal response to minimize the amount of imposed blur [13][14][15] . The temporal properties of these compensatory changes are documented to vary according to the type of defocus imposed, with changes associated with myopic defocus being more enduring, suggesting that myopic defocus may produce stronger compensatory signals than hyperopic defocus 14,16 . In human eyes, the response to short-term imposed defocus has also been investigated [17][18][19][20] . Exposing the eyes of children and young adults to short periods (1 or 2 hours) of monocular myopic and hyperopic defocus leads to small but statistically significant bi-directional changes in axial length (measured from the anterior cornea to the retinal pigment epithelium) and choroidal thickness 17,20,21 . In a recent study of presbyopic adults, significant bi-directional changes in choroidal thickness after 1 hour of imposed myopic and hyperopic defocus were also reported 22 . A 12-hour period of monocular myopic and hyperopic defocus has also been shown to alter the normal diurnal variations of both axial length and choroidal thickness in young adults 18,19 . 
Given that no significant change in anterior eye biometry has been observed during short-term imposed defocus, the changes in axial length with defocus have been primarily attributed to rapid changes in choroidal thickness 17,21,23 . Whilst these findings collectively imply that the human eye is able to discern the sign of defocus and make changes in the thickness of the choroid and hence axial length, details concerning how quickly this occurs, how the eye responds to defocus over time, and the time course of the decay of the eye's response to defocus following the cessation of blur exposure are not well understood. In this study we investigated the time course of the axial length response to 60-minute episodes of continuous myopic and hyperopic defocus, testing the hypothesis that the human eye, similar to those of other animals 17,18 , would be able to detect and respond to defocus within minutes of exposure to blur. We further assessed the persistence of axial length changes following the cessation of myopic and hyperopic defocus, during a period of clear vision, hypothesizing that the response to myopic defocus would be more enduring than the response to hyperopic defocus. Given that the visual system is also known to compensate for optical defocus through a gradual improvement in defocused visual acuity (VA) over time (blur adaptation) [24][25][26][27][28] , we also examined the association between the time course of changes in defocused VA and axial length during exposure to myopic defocus. Methods Twenty-six young adults (14 females; mean age ± SD, 23.6 ± 3.7 years) were recruited. Prior to the experiment, each subject underwent screening to ensure good ocular health, normal binocular vision and accommodation function, and to ascertain their refractive status. Refractive error was determined by non-cycloplegic subjective refraction using standard procedures. No subject exhibited anisometropia of more than 0.50 DS or astigmatism of more than −0.75 DC. All subjects had up-to-date habitual spectacle prescriptions, and none were under myopia control treatment. No soft or rigid gas-permeable contact lens wearers were included in this study, to avoid potential changes in biometry associated with contact lens wear 29 . The Queensland University of Technology human research ethics committee approved the study. Written informed consent was obtained from each subject, and the study adhered to the tenets of the Declaration of Helsinki. This study involved a protocol investigating the short-term (60 minutes) influence of three different levels of imposed monocular defocus (+3 DS, −3 DS, and 0 DS) on axial length, assessed before, during, and after exposure to each defocus condition. To ensure diurnal variations of axial length did not confound the effects of defocus 30,31 , measurements were taken at a similar time of day for each defocus condition (between 8:00 am and 2:00 pm), and at least 2 hours after each subject's reported time of waking. To prevent prior visual tasks (e.g. high accommodation demands) from confounding the measurements of axial length [32][33][34] , a "washout period" was implemented before each measurement session, during which each subject binocularly viewed a television at a distance of 6 m with their optimal sphero-cylinder distance refractive correction in a trial frame. Subsequent to the "washout period", the baseline measurement of axial length from the right eye was taken, and then a 60-minute "defocus period" commenced.
During the "defocus period", subjects were monocularly (right eye only) exposed to either a + 3 DS or a −3 DS defocus lens over their optimal distance correction, with their fellow left eye optimally corrected to maintain a relaxed state of distance accommodation. This monocular defocus paradigm has been implemented previously in several studies [17][18][19]21 . Following the "defocus period", the defocus lens before the right eye was removed, and repeated measurements of axial length were carried out for a further 20 minutes, during the "recovery period" with a controlled 6 m distance viewing task. As a control condition, all of the experimental procedures were repeated with no blur (i.e. both eyes were optimally corrected for the duration of the control condition). For each subject, each test condition (+3 DS, −3 DS, and 0 DS) was conducted on a separate day, and the order of the three defocus conditions was randomized. The Lenstar optical biometer (LS 900, Haag Streit AG, Koeniz, Switzerland) was used to measure axial length (the distance from the anterior corneal surface to the retinal pigment epithelium). On each measurement day, the axial length was measured at baseline (prior to introducing defocus), and then every 10 minutes during 60 minutes of imposed monocular defocus, with the initial measurement taking place after 2 minutes of exposure to defocus. Recovery of defocus-mediated changes in axial length was also assessed at 5 minute intervals during the 20-minute "recovery period", with the initial measurement taking place after 2 minutes of exposure to clear vision. At each measurement time point, five repeated measures of axial length were obtained from the right eye (defocused eye). To provide constant exposure to defocus and to control for accommodation during the measurements of axial length, a binocular periscope system attached to the biometer was used (Fig. 1). The periscopic view of an external target (high contrast Maltese cross displayed at 6 m distance on a TV screen) was provided for both eyes by adjusting the system, and once the centre of the Maltese cross was superimposed with the centre of the internal fixation target of the biometer (red fixation light), the subjects were asked to look at the centre of the Maltese cross and the measurements were taken. When using the periscope system, the subject's vertex distance corrected sphero-cylinder distance refraction was placed in a trial frame before each eye and the additional defocus (equivalent to +3 DS or −3 DS at the corneal plane) was placed before the tested eye (right eye). With the cold mirror placed in front of the eye, there was a reduction in the signal intensity arising from the anterior eye parameters and the Lenstar data was unable to provide anterior eye biometry in all subjects for all time points. To investigate whether the anterior eye biometry data were affected by the presence of defocus, for the subset of subjects where lens thickness (LT) and anterior chamber depth (ACD) data were available, analyses of changes in anterior eye biometry after 60 minutes of exposure to defocus were carried out. The number of subjects where LT was available during continuous myopic defocus was 10, LT during continuous hyperopic defocus was 12, LT during the control condition was 12, ACD during continuous myopic defocus was 13, ACD during continuous hyperopic defocus 13, and ACD during the control condition was 12. 
Monocular VA was obtained from the right (defocused) eye of all subjects using the Early Treatment Diabetic Retinopathy Study (ETDRS) charts (Precision Vision, Vistakon Logarithmic Visual Acuity Charts, 9 series). Visual acuity was scored as the total number of letters read correctly and recorded in logMAR. The test was terminated when three or more letters per line were read incorrectly 35 . The VA was assessed at baseline, then at 10-minute intervals during 60 minutes of myopic defocus, with the initial measurement taking place after approximately 2 minutes of exposure to myopic defocus, and then following removal of the myopic defocus at the beginning of the recovery period. The persistence of defocused VA changes was also evaluated by reintroducing an equal amount of defocus (+3 DS) for a single measurement at the end of the 20-minute recovery period. For each VA measurement, the 9 ETDRS charts were randomized to reduce the potential for learning effects. To further ensure no learning effect had taken place, the changes in VA on the "control day" with continuous optimal correction were also assessed. Repeated measurements of axial length and VA (or defocused VA) at baseline and during the "defocus period" and the "recovery period" were taken under the same experimental conditions and in a fixed ordered sequence, with axial length measured first, followed by VA. The average of the five repeated measures of axial length at each time point during each defocus condition was analysed for each subject. The Shapiro-Wilk test was used to confirm that the axial length and VA data were normally distributed. Axial length data from the defocus and the recovery periods were each analysed using a repeated measures analysis of variance (ANOVA) with two within-subjects factors of time and type of defocus (myopic defocus, hyperopic defocus, or control). Visual acuity data were analysed using a repeated measures ANOVA with two within-subjects factors of time and type of defocus (myopic defocus or control). If significant main effects or interactions were found (p < 0.05), post hoc tests with Bonferroni correction were conducted. The effects of defocus on anterior eye biometry data (LT and ACD) were assessed using Bonferroni-adjusted paired t-tests on the difference in measurements from baseline to the end of the 60 minutes of exposure to defocus. In order to assess the possible associations between the changes in axial length and changes in defocused VA during the defocus and recovery periods, an analysis of covariance (ANCOVA) was carried out, using the method of Bland and Altman for calculating the correlation coefficient with repeated observations 36 . All statistical analyses were performed using SPSS for Windows software (version 21.0, SPSS Inc.).
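For readers who want to reproduce this kind of design in open-source tooling, the sketch below sets up the same two-way repeated-measures ANOVA in Python with statsmodels; the long-format file and its column names are assumptions for illustration, and the study itself used SPSS.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per subject x condition x time point,
# with the axial length change from baseline in micrometres.
df = pd.read_csv("axial_length_long.csv")  # columns: subject, condition, time, al_change_um

# Two within-subject factors (time, condition), mirroring the paper's design.
# Note that AnovaRM requires a balanced design: every subject must contribute
# exactly one observation per time x condition cell.
aov = AnovaRM(
    data=df,
    depvar="al_change_um",
    subject="subject",
    within=["time", "condition"],
).fit()

print(aov)  # F statistics for time, condition, and their interaction
```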
Results

Axial length. A significant within-subjects effect of the type of defocus was observed for axial length measures during the defocus period (p = 0.006). The interaction between time and type of defocus was also significant (p < 0.0001). Figure 2 illustrates the group mean changes in axial length during the defocus and recovery periods across the three defocus conditions for all subjects. Approximately 2 minutes after beginning exposure to hyperopic defocus, the group mean axial length increased significantly by +7 ± 5 μm (p < 0.001). Following this initial rapid response, axial length remained relatively stable over the next hour and was significantly longer than the baseline measurement at all subsequent time points (all p < 0.001). The maximum ocular elongation was observed after 50 minutes of exposure to hyperopic defocus, with a mean axial elongation of +10 ± 8 μm (p < 0.001). The first statistically significant reduction in axial length occurred after 40 minutes of exposure to myopic defocus, with a mean reduction of −8 ± 9 μm (p = 0.017). This change peaked shortly after, reaching a maximum axial length reduction of −10 ± 8 μm at 50 minutes (p = 0.001). The eye then remained significantly shorter than the baseline axial length until the end of the myopic defocus period (mean difference of −8 ± 10 μm, p = 0.037) (Fig. 2). Axial length remained stable throughout the control condition, with no significant differences observed between the baseline axial length and any of the subsequent axial length measures (all p > 0.05) (Fig. 2). Repeated measures ANOVA revealed a significant interaction between the type of defocus and time for axial length measures during the recovery period (p = 0.003). Approximately 2 minutes after the removal of the myopic defocus, the shortened eye had elongated back by 37%. The eye then remained relatively stable over the next 20 minutes, recovering towards the baseline level by almost 50% after 20 minutes, but was still −4 ± 10 μm shorter than the baseline axial length (p > 0.05) (Fig. 2). Approximately 2 minutes after the removal of the hyperopic defocus, the elongated eye had recovered significantly, by 63% (p = 0.010). The eye then continued to shorten rapidly over the next 20 minutes, recovering by 91% by the end of the 20 minutes (mean difference of +1 ± 10 μm relative to baseline, p > 0.05). During the control condition, there was no significant difference between the baseline axial length and any subsequent axial length measures during the recovery period (all p > 0.05) (Fig. 2). To further compare the patterns of change in axial length while the eye was recovering from the myopic and hyperopic defocus conditions, a linear mixed-model analysis was used to fit a regression line to the axial length recovery data of each defocus condition, and the slopes and intercepts of the two regression lines were compared. A highly significant difference was observed between the slopes of the recovery of axial length in the myopic and hyperopic defocus conditions during the 20 minutes of clear vision (slope β: myopic = 0.109 vs. hyperopic defocus = −0.313, p < 0.001). The estimated time for complete recovery to the baseline axial length was then determined from the intercept of each regression line. For hyperopic defocus, a return to the baseline axial length was estimated to occur after 21 minutes of exposure to clear vision, compared with 35 minutes for myopic defocus. Significant differences were also observed between the myopic and hyperopic defocus conditions in these estimated times of complete recovery of axial length to baseline (p < 0.001).
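A linear mixed model of the kind described above can be sketched in Python with statsmodels as follows; the data file, column names, and random-effects structure are assumptions for illustration (the published analysis may have been specified differently).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical recovery-period data in long format: axial length change (um)
# per subject at each recovery time point, for the two defocus conditions.
rec = pd.read_csv("recovery_long.csv")  # columns: subject, condition, minutes, al_change_um

# Random-intercept model per subject; the minutes:condition interaction term
# tests whether the recovery slopes differ between myopic and hyperopic defocus.
model = smf.mixedlm(
    "al_change_um ~ minutes * condition",
    data=rec,
    groups=rec["subject"],
)
result = model.fit()

print(result.summary())  # inspect the minutes:condition coefficient and p-value
```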
Defocused visual acuity. A significant interaction was observed between the type of defocus and time for the defocused VA measures during the defocus period (p < 0.001) (Fig. 3). After the right eye was exposed to myopic defocus, the baseline VA (−0.06 ± 0.07 logMAR) decreased significantly to 1.01 ± 0.13 logMAR (defocused VA) (p < 0.001). The defocused VA then improved gradually over time, reaching 0.93 ± 0.15 logMAR after approximately 20 minutes (p < 0.001), accounting for nearly 50% of the total defocused VA improvement. From then until the end of the defocus period, the defocused VA continued to improve, but at a slower rate. The maximum blur adaptation effect was observed at the end of the defocus period, with a mean improvement of 0.16 ± 0.12 logMAR from the initial defocused VA (p < 0.001). When the defocus lens was reintroduced at the end of the 20-minute recovery period, the blur-adapted VA showed 70% persistence, since the defocused VA was still 0.11 ± 0.10 logMAR better than the initial defocused VA measured at the beginning of the defocus period (p < 0.001). During the control condition, VA remained stable (all p > 0.05 from the baseline measurement) (Fig. 3).

Association between axial length changes and defocused visual acuity. ANCOVA for repeated measures revealed a significant but weak positive association between the changes over time in axial length and defocused VA following exposure to myopic defocus (slope β = 0.015, r² = 0.055, p = 0.003). As axial length decreased over time in the presence of myopic defocus, the defocused VA value decreased (i.e. the defocused VA improved), with an improvement of 0.1 logMAR in defocused VA over time being associated with an axial length decrease of 5 μm. No significant association was found between the change in axial length and VA during the recovery period or during the control condition. There was no significant change from baseline in LT or ACD after exposure to continuous myopic and hyperopic defocus (p > 0.05). The mean change in LT was +1 ± 15 µm after exposure to continuous myopic defocus and −5 ± 12 µm after exposure to continuous hyperopic defocus. The mean change in ACD was −6 ± 9 µm after exposure to continuous myopic defocus and −3 ± 10 µm after exposure to continuous hyperopic defocus. The changes in LT and ACD during the control condition were also not significant (+4 ± 13 µm and +5 ± 11 µm, respectively; p > 0.05).

Discussion

In young adults, the eye appears capable of discerning the sign of defocus rapidly (within minutes of exposure to defocus) and making compensatory changes in its axial length, which have the effect of moving the retina towards the defocused image plane. These findings are consistent with the general findings of previous reports in humans 21,23 and animals [13][14][15] , where changes in axial length and choroidal thickness occurred shortly after exposure to blur. We demonstrated that the speed with which the eye changes its axial length varies according to the sign of defocus, with the eye elongating faster during hyperopic defocus than it shortens during myopic defocus. Our study further suggests a slightly greater persistence of the effects of myopic defocus than of hyperopic defocus on axial length during a recovery period of clear vision after defocus. Previous studies in human adults have measured short-term axial length changes in response to defocus after 30 minutes and 1 hour 20,21 . With myopic defocus, there were axial length reductions of 9 μm after 30 minutes and 13 μm after 60 minutes, and with hyperopic defocus there were axial length elongations of 5 to 9 μm after 30 minutes and 8 to 11 μm after 60 minutes 20,21 .
Our findings from the current study are consistent with these previous reports in young adults, with an average decrease of −7 µm with myopic defocus and an average increase of +9 µm with hyperopic defocus after 30 minutes, which then peaked at ±10 µm of change after 60 minutes. By using a higher sampling frequency, with measurements of axial length every 10 minutes (and the initial measurement occurring after only 2 minutes of blur exposure), we further showed that the axial length of the human eye can change in response to relatively brief periods of myopic and hyperopic defocus. Our results demonstrate that after ~2 minutes of exposure to hyperopic defocus, the human eye can discern the sign of blur and increase its axial length, which moves the retina towards the defocused focal plane. The response to myopic defocus was slower, showing a significant compensatory shortening of axial length only after 40 minutes of continuous exposure. In a recent report, Chiang et al. 23 demonstrated a temporal difference in the choroidal response to myopic and hyperopic defocus, with faster choroidal thickening in response to myopic defocus (after 10 minutes) and slower choroidal thinning in response to hyperopic defocus (after 20-30 minutes). Given that the changes in axial length with defocus are expected to be initiated by changes in choroidal thickness (i.e. as the choroid thickens, the distance to the retinal pigment epithelium, the axial length, decreases, and vice versa), a similar temporal pattern of response to defocus for axial length and choroidal thickness would be expected. The exact reason for the discrepancy between our findings and the findings of Chiang et al. is unclear, but it may be due to the ethnicity of the subjects in the Chiang et al. study, which included only South-East Asian participants, compared with our relatively ethnically diverse cohort, which consisted of 50% Caucasian, 39% East Asian, and 11% Indian participants. Chick models have also shown rapid responses to defocus, with either a similar time course of change in the rates of ocular elongation and choroidal thickness for positive and negative lenses 14 , or a faster time course of change for positive compared with negative lenses 13 . Zhu and Wallman reported that, for both positive and negative lenses, about 1 to 4 minutes of exposure to defocus was sufficient to initiate the appropriate compensatory signals required for modulation of ocular elongation and choroidal thickness 14 . Another study in chicks reported that the time required to elicit appropriate compensatory changes in the choroid was 10 minutes for myopic defocus and 60 minutes for hyperopic defocus 13 . Following the cessation of defocus, the axial shortening effects of myopic defocus were slower to trend towards baseline levels than the effects of hyperopic defocus. After 20 minutes of clear vision, on average the eye had recovered by 50% from myopic defocus, while it had recovered by 90% from hyperopic defocus. A longer-lasting effect of myopic than of hyperopic defocus has also been reported previously in chicks. For instance, 2 to 3 hours of unrestricted vision per day in eyes wearing negative lenses reduced eye elongation by almost 95%, whereas in eyes wearing positive lenses, eye shortening was reduced by only 10% after three hours of daily unrestricted vision 16 .
Also, chicks exposed to only three hours of myopic defocus per day and unrestricted vision for the remainder of the day (9 hours) still exhibited a significant hyperopic shift 16 . In another study, when various episodes of darkness were imposed between episodes of defocus, a 50% decay in the axial length effects of myopic defocus occurred after 24 hours, while for hyperopic defocus it occurred after 24 minutes 14 . The method through which the human eye is able to detect the sign of defocus is unknown; however, a trial-and-error mechanism of blur identification has previously been suggested 37 . If such a mechanism contributes to blur identification in the human eye, the magnitude of the change in refraction resulting from defocus-induced changes in axial length should be greater than the eye's depth of focus. In this study, we observed a maximum axial length change of 10 μm with myopic and hyperopic defocus, which corresponds to only a 0.03 D change in ocular refraction (a numerical sketch of this conversion is given below). This amount of change in refraction is small compared with the depth of focus of the eye, which ranges from ~0.15 D to 0.27 D (for pupil diameters of 5-6 mm) [38][39][40] . Therefore, a trial-and-error mechanism of blur identification based on axial length changes seems unlikely. Alternatively, other cues, such as contrast cues from contrast adaptation (changes in contrast sensitivity at different spatial frequencies) [41][42][43][44] , colour cues from chromatic aberration [45][46][47] , or optical vergence cues from image defocus 48 , could be used by the human eye to decode the sign of blur. The rapid changes in axial length in response to defocus observed in our study most likely occurred, at least partly, through rapid modulation of the thickness of the choroid, but the underlying mechanism is not known [17][18][19]21,23,49 . The defocus-mediated changes in the thickness of the choroid are reported to occur within minutes of exposure to myopic and hyperopic defocus in both animal 13,14 and human eyes 23 . Similar to previous reports in human eyes 18,19,21 , we found no significant effects of defocus on lens thickness or anterior chamber depth in our study. Given that our study protocol involved exposure to monocular hyperopic defocus (with the fellow eye open and optimally corrected), it was important to ensure that the subjects did not accommodate through the hyperopic defocus lens, which would have minimized its potential effects in the tested eye and induced myopic defocus in the fellow eye. The lack of a significant change in both lens thickness and anterior chamber depth with defocus suggests that, during monocular hyperopic defocus exposure, the accommodation state of the defocused eye remained relatively unchanged and the clear fellow eye guided the accommodation response in both eyes.
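The axial-length-to-refraction conversion used in the trial-and-error argument above can be sketched as follows. The conversion factor is an assumption on our part: the paper's own figures (10 μm ≈ 0.03 D here, and 5 μm ≈ 0.012 D later in the discussion) imply roughly 2.4-3.0 D of refractive change per millimetre of axial length, so a mid-range value is used.

```python
D_PER_MM = 2.7  # assumed dioptres of refractive change per mm of axial length

def refraction_change_d(axial_change_um: float) -> float:
    """Approximate change in ocular refraction (D) for an axial change in um."""
    return axial_change_um / 1000.0 * D_PER_MM

max_axial_change_um = 10.0                 # maximum change observed in the study
delta_d = refraction_change_d(max_axial_change_um)
depth_of_focus_d = (0.15, 0.27)            # reported range for 5-6 mm pupils

print(f"10 um axial change ~ {delta_d:.3f} D")                         # ~0.027 D
print(f"Depth of focus     ~ {depth_of_focus_d[0]}-{depth_of_focus_d[1]} D")
# The induced refractive change is an order of magnitude below the depth of
# focus, which is why a trial-and-error blur cue seems unlikely.
```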
Our investigation of the short-term axial length response to blur might have implications for understanding the mechanisms underlying eye growth in humans. For instance, it has been proposed that transient exposure to hyperopic defocus associated with near activities (e.g. due to a lag of accommodation) 50,51 , ocular aberrations 52 , and peripheral defocus 53 might predispose the eye to myopia. We found the axial length effects of hyperopic blur to subside quickly during short exposure to clear vision, with 2 minutes of clear vision being sufficient to significantly reduce the axial elongation effects of hyperopic defocus. If ocular changes resulting from short-term exposure to hyperopic defocus are associated with longer-term refractive error development in human eyes, then imposing brief exposures to clear vision (e.g. taking frequent breaks by looking at far objects in between near activities) may counterbalance the induced myopigenic hyperopic blur stimulus. Further, incorporating myopic defocus into bifocal or multifocal lenses as an optical intervention to retard myopia progression in schoolchildren has shown modest treatment benefits [54][55][56][57][58] . We found that continuous exposure of at least 40 minutes was required for myopic defocus to have a significant axial-length-reducing effect in human eyes. Blur adaptation effects were also measurable within minutes of beginning exposure to defocus, as reported previously 28,59 . The magnitude of improvement in the defocused VA over 60 minutes was 0.16 logMAR, which is consistent with the magnitude previously reported in young adults 28,60 . A neural sensitivity gain adjustment in different spatial frequency channels of the visual system has been proposed as a possible underlying mechanism mediating this response 24,25,61,62 . We found that nearly 60% of the VA improvement observed after one hour of myopic defocus remained after 20 minutes of clear vision. The durable nature of blur adaptation effects on defocused VA has been reported previously 63,64 . In this study we tested whether the defocus-mediated changes in VA and axial length are strongly associated, which could indicate a common underlying mechanism; that is, the level of adaptation in VA providing cues to the level (not the sign) of image defocus, which in turn guides the changes in axial length that move the retina towards the defocused image plane 15,41,44 . Both axial length and defocused VA changed rapidly following exposure to myopic defocus. We investigated the possible association between the temporal changes in defocused VA and axial length during myopic defocus and observed a weak but statistically significant positive association, with a 0.1 logMAR improvement in defocused VA over time being associated with a 5 µm change in axial length. Since a 5 μm change in axial length is approximately equivalent to a 0.012 D change in ocular refraction 65 , this amount of change is too small to influence the improvement in VA. Further, the weak association between the defocus-mediated changes in axial length and VA does not imply a causal link between the two. Our findings from these investigations are limited to the level of defocus used and to the durations of the periods of defocus and clear vision. Future studies repeating these experiments at different ages, and utilizing different levels and durations of defocus and clear vision, may expand our knowledge of the role of defocus in modulating eye growth. Further, the temporal properties of the choroidal response to defocus blur were not investigated in this study. Given the important role of the choroid in modulating eye growth 49,66 , the temporal characteristics of its response to blur need to be investigated in detail. In summary, we have shown for the first time that, in young human adults, the eye is able to discern the sign of defocus within minutes of exposure to blur and to change its axial length in a direction that reduces the amount of retinal blur.
The human eye elongated faster during hyperopic defocus than it shortened during myopic defocus and, similar to other animal models, the ocular response to myopic defocus was found to be more enduring than that to hyperopic defocus after removal of the defocus stimulus. Whilst these findings improve our understanding of the temporal properties of the eye's response to defocus blur, further research is required to understand the underlying mechanisms that mediate these responses. Data availability The datasets generated and analysed during this study are available from the corresponding author on reasonable request.
7,109.2
2020-05-20T00:00:00.000
[ "Medicine", "Physics" ]
Environment Charge and Covid The primary goal of this paper is to examine the trends of global development, and also global changes, in the context of the selective excise taxes that are part of the Slovak tax system. First, it is necessary to describe the theoretical background in order to clarify the topic. To this end, we provide an overview of developments in excise duties and environmental taxes. We also focus on key events in the global economy that have influenced the formation of the Slovak tax system and the legislative changes in the field of selective excise taxes. Next, we specify the main goal of the paper and the mathematical and statistical methods used. The results section presents the latest legislative changes that we consider relevant. In the next part of the article, we examine the financial results of a company in connection with the environmental policy it applies. In the last part of the article, we summarize the most important findings of our analysis and also point out the impacts of the environmental policy applied by the state on indicators of green growth during the Covid-19 pandemic. INTRODUCTION Manufacturing companies strive to make the highest possible profits and often do not think about the future in relation to the environment. Countries are trying to remedy this unfavorable situation through taxes. Environmental taxes are key tools for achieving sustainability in the economy by increasing the prices of environmentally harmful goods or the cost of production inputs. Developed economies, such as those of the European Union, have recently been paying more and more attention to this area of the economy. Growing concerns about climate change are putting environmental issues at the forefront of the economic agenda in many European countries. Global warming is one of the most important challenges facing the world [1]. Taxes, fees, tradable permits, and other economic instruments play an important role in achieving cost-effective control of greenhouse gas emissions. Their potential scope and the revenues they raise have much broader implications for economic and fiscal policy. European countries introduced carbon taxes in the 1990s, although the draft European energy tax was ultimately unsuccessful. More recently, the focus has been on emissions trading. Several tax measures have been introduced in the EU, in particular with environmental objectives; these included national environmental taxes alongside EU-level environmental tax measures. The increasing use of environmental taxes, emissions trading, and other economic instruments was partly driven by recognition of the limitations of conventional environmental regulation. Such instruments can initiate the necessary changes in the economy, in resource use, in behavior, and in the general approach to nature [2]. The relationship between environmental knowledge, attitudes, and behavior is complex. In addition, electricity consumption in the six Gulf Cooperation Council countries has increased rapidly in recent decades due to rapid population growth and relatively rapid economic growth [3]. Energy is an integral part of our daily lives in a market economy; we need it for heating, cooling, lighting, and transport, and it is essential for the functioning of our offices, workplaces, and the whole economy [4]. This is one of the reasons why the EU Commission proposed its Energy Union Strategy.
Data and methods The main object of our analysis is the company Mondi SCP, a. s., operating in the territory of the Slovak Republic. As the subject of the company's activity is the production of paper and pulp, it is a suitable object for our research, whose primary aim is to determine the degree of the company's involvement in environmental protection and the consequences of this activity for its financial results. The Slovak Republic, specifically the taxation of its entities, can be considered another subject of the survey, as can the European Union, which consisted of 28 member states during the analyzed period, when the United Kingdom was still a member. The Slovak Environmental Agency processes and regularly evaluates various sets of indicators [6], including sustainable development indicators and indicators of resource efficiency and the implementation of environmental measures; it is also a comprehensive source of information on the state and development of the environment and related aspects for the general public [5]. To obtain current data, we also drew on the OECD library, where several publications have been issued on the environmental policies of states, on tax policy in general, and on the effects of the coronavirus pandemic on taxation. We also drew information from articles in professional journals and from scientific papers. Data related to changes in legislation were available on the website of the Ministry of Finance of the Slovak Republic (MFSR). Quantitative data on the development of selective excise duties within the Slovak Republic and the European Union were at our disposal thanks to the EUROSTAT database, which falls under EU control. We drew the financial data of Mondi SCP, a. s., from its published financial statements. Information on implemented projects was specified on the company's website and in its published annual reports. We obtained data for the analysis of the country's green growth indicators from the DATACUBE database published by the Statistical Office of the Slovak Republic. The procedure for calculating the indicators is published by the Ministry of the Environment of the Slovak Republic. From the Ministry of Economy of the Slovak Republic, we obtained data for quantifying the efficiency of the energy consumed by Mondi SCP. In addition to the indicator analyses and regular evaluations of the European Environment Agency (EEA) [7], we also considered the evaluation frameworks of the Organisation for Economic Co-operation and Development (OECD) and of the statistical office of the European Union (Eurostat). Flexible and liquid markets can enable supply and demand to be adjusted more efficiently, reducing production costs and therefore prices; such conditions should also be reflected in bilateral contract prices in the most developed markets [8]. To interpret the obtained data, we mainly used descriptive statistics, through which we organized the large quantity of data into clear tables and graphs. We mainly used quantitative indicators in absolute terms. For a clearer illustration of the relationships between quantities, in some cases we considered it appropriate to use the share of individual parts in the whole. We also used the method of comparison; specifically, we compared data for different time periods. Within the analyzed company, we looked for a relationship between the procurement of environmentally relevant equipment and the company's results in the areas of costs, revenues, and emissions of harmful substances into the air.
Looking at selective excise taxes as part of the Slovak tax system, we also looked for a relationship between the development of this type of tax and global changes. To express the company's performance, we used relative profitability indicators, which are part of ex-post analysis. We expressed the development of the company's CO2 emissions into the air after the introduction of more environmentally friendly technology by means of a moving average; its main advantage is the exclusion of random influences from the time series, and only after this adjustment is it possible to assess the data objectively. The charts also include a forecast based on historical data. We also used time series for the chronological arrangement of sequences of comparable values, applying long-term periodicity in their monitoring. The analysis allowed us to select the most important items from the quantity of available data; through synthesis, on the other hand, we assessed their interrelationships. Environmental charge, indicators In this part of the paper we analyze the individual environmental charges from a theoretical point of view. Indicators are measurable quantities that provide information on the development and trends of phenomena and processes from a quantitative and qualitative point of view. In terms of indicators, the basic aim of the research is to identify key environmental indicators that reliably reflect environmental policy. The basic starting point for their identification was the environmental indicators obtained from the data of individual sectors (selective excise tax trends). Environmental taxes are one of the most important instruments of the European Union's environmental policy. The chart shows that almost half, namely 45.40%, of total revenues is accounted for by mineral oil tax revenues, which also dominated in the previous monitored periods. The second highest share falls on taxes levied on tobacco and tobacco products, at 27%. This item can also be considered important, as its share in total revenues has consistently exceeded 20% since 1995; its value grew constantly from 1995 to 2004, and this growth was accompanied by fluctuations in subsequent periods. The last item whose value was above 10% in 2019 (16.41%) was the tax on green energy. As we have already mentioned, this was the last tax introduced from the category of excise duties, although, as we can see, it represents a significant part of revenues: its share of excise revenue has doubled in the last 10 years. The opposite trend can be observed for the alcohol tax, which accounted for 16.25% in 1995 and currently stands at around 7.67%. The percentages of the other types of excise duty are below 3%, with the lowest share belonging to the coal tax, at 0.01%. In the Slovak Republic, these revenues reached EUR 2,245.98 million. The most recent results cannot yet be analyzed because of the Covid-19 pandemic; as economic growth slowed significantly and production even stopped altogether for some time, we expect its effects to be significant.
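The share-of-the-whole calculation behind the percentages quoted above is straightforward; the sketch below reproduces it for a set of hypothetical per-category amounts, scaled only so that the total roughly matches the reported EUR 2,245.98 million.

```python
# Percentage share of each excise duty category in total excise revenues.
# The absolute amounts below are hypothetical, chosen only so that the
# resulting shares roughly match the percentages reported in the text.
revenues_meur = {
    "mineral oils": 1019.7,
    "tobacco products": 606.4,
    "electricity (green energy)": 368.6,
    "alcohol": 172.3,
    "other (incl. coal)": 79.0,
}

total = sum(revenues_meur.values())
for category, amount in sorted(revenues_meur.items(), key=lambda kv: -kv[1]):
    print(f"{category:<28} {amount:8.1f} MEUR  {amount / total:6.2%}")
print(f"{'total':<28} {total:8.1f} MEUR")
```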
The figure shows the development of total revenues from this tax category: total EU environmental tax revenues increased between 2008 and 2019. The data confirm that total EU environmental tax revenues are rising [9], and this revenue growth stems from rising environmental tax rates. Exceptions to this rising trend are 2007 and 2009, during which Europe went through a global financial crisis [10]. Environmental taxes are closely linked to the effective tax rate. The chart also shows the effective tax rate, which is calculated as the ratio between income tax and the company's pre-tax profit. A difference between the statutory income tax and the effective tax rate arises when items such as permanent, temporary, or other differences and tax credits are taken into account; it may also be affected by changes in statutory income tax rates. The OECD uses income data provided annually by correspondents from national ministries of finance, tax administrations, or national statistical offices [11]. Although preliminary data are available for most countries with a lag of about six months, final data are available with a lag of about one and a half years; final data on income for 2017 were received in the period May-August 2019. In OECD countries, the reported year coincides with the calendar year. In tax administration, improvements are expected to be achieved through flexible recruitment procedures designed to attract quality candidates, a focus on training and development, and the development of systems that capture best practice and enable it to be repeated. Globalization is also one of the most important factors [12]. The modern trend in tax agencies is toward more highly skilled workers, supported by merit-based and performance-oriented management and development systems [13]. In the context of energy consumption, the issue of greenhouse gas emissions needs to be addressed. For the development of emissions, we compiled a graph that interprets the results of the moving average of emissions for the period from 2013 to 2019 (source: own processing based on data from https://www.mondigroup.com/en/sustainability/). After its latest investment in a recovery boiler, Mondi SCP, a. s., is 100% self-sufficient in energy production, with up to 94% of its energy coming from renewable sources. It is essential for the company to pay attention to climate change: the climate crisis affects the company's operations and supply chain through effects on water, weather, carbon regulation and taxation, and energy availability and price. Increasing self-sufficiency in energy production contributes to improving energy profitability and security. The production of pulp, paper, and packaging is energy-intensive, with energy generation being one of the main sources of greenhouse gases. The largest decrease compared with 2013 was recorded in 2017, when the value of emissions was up to 42% lower than before the introduction of the recovery boiler.
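A minimal sketch of the moving-average smoothing applied to the emission series discussed above, with hypothetical annual CO2 figures standing in for the company's actual data:

```python
import pandas as pd

# Hypothetical annual CO2 emissions (kilotonnes) standing in for the
# Mondi SCP series; the recovery boiler enters service mid-series.
emissions = pd.Series(
    [520.0, 480.0, 430.0, 360.0, 300.0, 310.0, 305.0],
    index=range(2013, 2020),
)

# A centred 3-year moving average removes random year-to-year influences,
# which is the stated reason for smoothing before interpreting the trend.
smoothed = emissions.rolling(window=3, center=True).mean()

print(pd.DataFrame({"emissions_kt": emissions, "ma3_kt": smoothed}))
```

A naive forecast, like the one included in the charts, could then extrapolate the smoothed series, for example with a simple linear fit over the last few smoothed points.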
Results and Discussion Environmental charges are used to influence the behavior of economic operators, whether producers or consumers. The EU favors these instruments because they provide flexible and cost-effective means of control. The use of economic instruments for the environment is promoted in the EU Environment Action Programme to 2020, the EU's Sustainable Development Goals, and the Europe 2020 Strategy. Economic instruments for pollution control and natural resource management are an important part of environmental policy in the EU Member States [14]. Environmental indicators, together with those supporting the Europe 2020 strategy, continue a series of important Eurostat publications that monitor progress towards the objectives set under the strategy's three mutually reinforcing priorities of smart, sustainable, and inclusive growth. The analysis is based on the Europe 2020 headline indicators selected to monitor progress towards the strategy's objectives. Other breakdowns, focusing on specific subgroups of society or the economy, are also used to deepen the analysis and provide more detailed information. The data come mainly from official statistics compiled by the European Statistical System and published by Eurostat. To support this statement, we present graph no. 7, which shows data on the year-on-year growth or decline in GDP at constant prices, in CO2 emissions, and in CO2 productivity, which we quantified as the ratio between GDP and CO2 emissions. It is evident that in 2009 there was a year-on-year decrease in both GDP and CO2 emissions, which resulted in an increase in the value of the CO2 productivity indicator. In the following year, 2010, we can already see an increase in the volume of production, and the increased activity of economic entities was also reflected in an increased volume of harmful emissions. In the following years, fluctuations can be observed; from 2014 to 2017 there was even a decline in greenhouse gas productivity, which suggests that the state's environmental policy would probably need to be reconsidered. However, in 2018, the latest year for which we had data, we can observe an increase in the value of the indicator. The improvement was mainly due to the volume of gross domestic product growing faster in absolute terms than the volume of CO2 emissions, with CO2 emissions recording a significant year-on-year decrease. Fig. 7. Total energy production in the countries of the European Union (EUR million). Source: Eurostat. A similar indicator is energy efficiency, which represents the share of energy consumption in gross domestic product; a similar development can be observed, with a year-on-year increase in energy efficiency in 2009. In the context of environmental indicators, energy also needs to be considered. Primary energy production is any extraction of energy products in a usable form from natural sources [15]. Conclusion Our claims are supported by the fact that, after each global crisis, the economy has recovered, accompanied by increased production, which has led to a renewed increase in gas emissions. Great competition is a reality: the market has changed and competitiveness is higher. Slovak manufacturing companies must carefully monitor changes in the market and respond to the requirements of domestic and international markets; in particular, they must consider meeting the conditions of foreign markets [17]. Businesses must also focus on the environmental side of production [16], and they must consider the digitization of production and logistics [18].
Bifurcation Problems for Generalized Beam Equations We investigate a class of bifurcation problems for generalized beam equations and prove that the one-parameter family of problems has exactly two bifurcation points via a unified, elementary approach. The proof of the main results relies heavily on calculus facts rather than on such complicated arguments as the Lyapunov-Schmidt reduction technique or Morse index theory from nonlinear functional analysis. Introduction and Main Results In physics, the vibration of an elastic beam, with length 1 and one endpoint hinged at $x = 0$, which is compressed at the free edge ($x = 1$) by a force of intensity proportional to $\lambda > 0$, is governed by the so-called beam equation
$$u'' + \lambda \sin u = 0 \quad \text{in } (0, 1); \tag{1}$$
see [1]. The beam maintains its shape when the "force" $\lambda$ is sufficiently small, but it will buckle once $\lambda$ exceeds a certain value. In mathematics, the set of such values can be studied by exploiting the homogeneous Neumann boundary value problem
$$u'' + \lambda \sin u = 0 \quad \text{in } (0, 1), \qquad u'(0) = u'(1) = 0. \tag{2}$$
Before stating precisely the properties which we will explore in BVP (2), we embed this problem into a family of such boundary value problems; that is, we introduce the family of problems
$$u'' + \lambda\,(h \circ u) = 0 \quad \text{in } (a, b), \qquad u'(a) = u'(b) = 0, \tag{3}$$
where $a, b \in \mathbb{R}$ with $a < b$, $\lambda$ belongs to a certain nonempty subset of $\mathbb{R}$, and $u = u(x)$ is the unknown; the function $h \in C^\infty(\mathbb{R}; \mathbb{R})$ satisfies the following: there exists a $T > 0$ such that
(H1) $h(s + T) = -h(s)$ for all $s \in \mathbb{R}$,
(H2) there exists a $t_0 \in \mathbb{R}$ for which the map $[0, T] \ni s \mapsto \sqrt{\int \cdots}$ ...
Remark 1. We call the equation occurring in BVP (3) a "generalized" beam equation; such equations are widely used to describe various physical phenomena. Trivially, BVP (3) admits the trivial solution $u = 0$ for any $\lambda \in \mathbb{R}$. Here we are focused on the bifurcation theory for BVP (3). The bifurcation points are determined by eigenvalues associated with the differential operator $u \mapsto u'' + \lambda\,(h \circ u)$. At such points, the number of solutions to (3) may change. However, very little further work has been done to determine whether the number of solutions changes at these points. In this paper, we give such a criterion for a class of nonlinear problems. Theorem 4. Let $a, b, \lambda \in \mathbb{R}$ with $a < b$. Assume (H1)-(H2). Then
$$\pm \frac{\pi^2}{(b - a)^2\, h'(0)} \tag{7}$$
are two bifurcation points for BVP (3). Besides, (3) has nonconstant solutions if and only if condition (8) holds. The proof of a bifurcation assertion for a nonlinear equation often has as ingredients such topological arguments as Krasnoselskii's and Rabinowitz's theorems on bifurcation. These arguments usually carry the assumption that the algebraic multiplicity of the associated linear eigenvalue problem is odd; see [1][2][3] and the references therein. Since then, several authors have attempted to remove this oddness assumption; see [1,2,4]. In particular, Ma and Wang [2] developed an elaborate algorithm to prove steady-state bifurcation assertions concerning nonlinear equations; this algorithm does not assume oddness of the algebraic multiplicity. See [5][6][7][8][9][10][11][12][13] for more studies on bifurcation problems. Our approach to proving Theorem 4 does not assume such a parity condition on the algebraic multiplicity.
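Where the value in (7) comes from can be seen by the standard linearization argument. The following LaTeX fragment is our own sketch under the notation reconstructed above (interval $(a,b)$, nonlinearity $h$, parameter $\lambda$); it is an illustration, not text from the paper.

```latex
% Linearize h(u) \approx h'(0)\,u near the trivial solution u = 0, so that
% (3) becomes the Neumann eigenvalue problem
%   u'' + \lambda h'(0) u = 0 on (a,b),  u'(a) = u'(b) = 0,
% whose eigenpairs are
\[
  u_k(x) = \cos\!\left(\frac{k\pi (x - a)}{b - a}\right), \qquad
  \lambda_k\, h'(0) = \left(\frac{k\pi}{b - a}\right)^{\!2}, \qquad k = 0, 1, 2, \dots
\]
% The first nontrivial mode (k = 1) yields the candidate bifurcation value
\[
  \lambda_1 = \frac{\pi^2}{(b - a)^2\, h'(0)},
\]
% which matches (7) up to sign; the pair \(\pm\lambda_1\) appears because
% \(\lambda\) is allowed to range over negative values as well.
```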
As a matter of fact, BVP (2) is a special case of the Sturm-Liouville problem, or of boundary value problems for elliptic partial differential equations. Therefore, BVP (2), possibly in disguise, has been studied extensively in the literature for the existence of solutions satisfying certain prescribed properties, for qualitative properties of solutions, and so on; see [14][15][16][17] and the profound references cited therein. The remainder of this paper is organized as follows. In Section 2 we introduce some nonlinear functional analysis and formulate the problem in a formal way, and in Section 3 we give the proof of Theorem 4. The Existing Bifurcation Results for BVP (2) In this section, we mainly give a brief review of the existing results in the literature concerning bifurcation problems for BVP (2), which can be viewed as archetypes of bifurcation problems for BVP (3). Indeed, bifurcation problems for BVP (2) have often been provided as illustrative examples to test proposed abstract bifurcation-problem-solving methods in the literature; see [1,12,13,18]. In particular, Ma and Wang [18] proposed an abstract method which generalizes slightly the previous one obtained by Nirenberg [1]. In presenting their method, the authors fixed two Banach spaces $X$ and $Y$ for which $X$ embeds continuously and densely into $Y$. The abstract problem with which they were concerned reads
$$L_\lambda u + G(u, \lambda) = 0, \tag{9}$$
where $L_\lambda : X \to Y$, $\lambda \in \mathbb{R}$, is a family of bounded linear operators and $G(\cdot, \lambda) : X \to Y$ is a family of continuous mappings. They assumed the following. (H3) $L_\lambda$ is of the form $L_\lambda = A + B_\lambda$, with $A$ a linear topological isomorphism of $X$ onto $Y$ and the $B_\lambda$ compact linear operators; hence the spectrum of $L_\lambda$ consists of exactly countably many eigenvalues $\{\beta_k(\lambda)\}$ (listed by algebraic multiplicities); there exists a $\lambda_0$ for which ... (H4) For any $\varepsilon > 0$, there exists a $\delta > 0$ such that $G$ is analytic in the sense that ... is a continuous, symmetric $k$-form on $X$. The precise problem with which they were concerned is whether there is a $\lambda_0 \in \mathbb{R}$ given in such a way that, if $u_\lambda \neq 0$ with $\lambda$ in a neighborhood of $\lambda_0$ is a collection of solutions to problem (9), then ... If there exists a $\lambda_0$ which satisfies the above requirements, then $\lambda_0$ is called a bifurcation point for nonlinear problem (9); also, problem (9) is said to bifurcate from $\lambda_0$. Concerning (9), they proved the following: assume (H3)-(H4); then $\lambda_0$ is a candidate bifurcation point of the nonlinear problem (9). The proof of the above theorem provided in [18] utilizes such complicated methods as the Lyapunov-Schmidt reduction method, Morse index theory, and so forth. Here we are tempted to use the results obtained by Ma and Wang [18] to solve the bifurcation problem for BVP (3); it is, however, obvious that the nonlinear reaction $\mathbb{R} \ni s \mapsto h(\sin s) \in \mathbb{R}$ precludes our application of such results. In the next section, we will analyze the bifurcation problem for BVP (3) in an elementary way. Proof of the Main Results In this section we propose two lemmas and then prove Theorem 4 based on them. Various calculus theorems are employed in our proofs, and an elementary equality is also used repeatedly. For the other half, we assume without loss of generality that $\lambda > 0$ and that (8) holds, and we show that (3) has a nonconstant solution.
Femtosecond X-ray coherent diffraction of aligned amyloid fibrils on low background graphene Here we present a new approach to diffraction imaging of amyloid fibrils, combining a free-standing graphene support and single nanofocused X-ray pulses of femtosecond duration from an X-ray free-electron laser. Due to the very low background scattering from the graphene support and mutual alignment of filaments, diffraction from tobacco mosaic virus (TMV) filaments and amyloid protofibrils is obtained to 2.7 Å and 2.4 Å resolution in single diffraction patterns, respectively. Some TMV diffraction patterns exhibit asymmetry that indicates the presence of a limited number of axial rotations in the XFEL focus. Signal-to-noise levels from individual diffraction patterns are enhanced using computational alignment and merging, giving patterns that are superior to those obtainable from synchrotron radiation sources. We anticipate that our approach will be a starting point for further investigations into unsolved structures of filaments and other weakly scattering objects. High-resolution X-ray fiber diffraction is a key method for determining the structures of helical filaments that resist conventional crystallization 1,2 . Helical structures consist of identical subunits, which repeat after a defined number of turns along the fiber axis. The diffraction pattern of such a helix, the Fourier transform of its electron density, is confined to layer lines 3 . The diffracted intensities on the layer lines can be used for structure determination, as demonstrated for DNA, filamentous bacteriophages, and tobacco mosaic viruses [4][5][6][7] . However, not all filamentous systems with one-dimensional order yield diffraction patterns of a quality sufficient to infer a structure. Amyloid fibers consist of multiple protofibrils, are visibly polymorphic, and exhibit comparatively weak continuous diffraction in very few layer lines [8][9][10] . The sparse features in diffraction patterns from these fibers have so far provided, at best, constraints for low-resolution models or the validation of existing structural models [11][12][13][14][15] . Consequently, over the last few decades our knowledge about the structure of native amyloid fibrils has primarily been derived from other techniques including solid-state nuclear magnetic resonance (NMR) 16,17 and cryo-electron microscopy (cryo-EM) 18,19 . However, these technologies have some limitations in dealing with these heterogeneous samples. High-resolution NMR structures depend on systems with very low polymorphism 20,21 . NMR models give a local reconstruction of a small repeating unit of the fibril, and long-range packing or twists occurring in these fibrils can only be explored by cryo-EM. However, being able to image fibers, but not individual protofilaments, cryo-EM reconstructions represent averages of multiple fibril conformations. Protofibrils are the more relevant, disease-causing species found in equilibrium with mature fibers, and coexisting with different structured and unstructured assemblies 22 . X-ray free-electron laser (XFEL)-based experiments have the potential to record diffraction from individual protofibrils and build upon existing results from solid-state NMR and cryo-EM to improve our understanding of the structures of individual protofilaments. Until now, the recording of high-resolution X-ray diffraction data from amyloid fibrils was limited by radiation damage, which destroys the specimen before meaningful diffraction can be recorded 23 .
This loss of structure strongly depends on the total X-ray energy deposited in the sample per unit mass (the dose), which is itself proportional to the total X-ray fluence of the incident X-ray beam and thus the achievable diffraction signal. To mitigate this problem and obtain measurable diffraction patterns, the X-ray energy deposited per fiber is usually reduced by preparing a fiber specimen composed of millions of fibers mutually aligned along their fiber axes, which are simultaneously exposed to the X-ray beam with a lower flux [24][25][26] . For such a specimen, the scattering from the fiber is significantly amplified above the background levels from solvent and air. However, fibers mutually aligned in oriented bundles are usually randomly rotated about their fiber axes, giving a cylindrically averaged diffraction pattern of reduced information content. Furthermore, an average of polymorphic conformations is present in each diffraction pattern. This fact, together with deviations from perfect alignment, blurs details in the diffraction pattern. XFELs extend the conventional dose limit by exposing the sample for only a few femtoseconds to intense X-ray pulses containing over 10 12 quasi-monochromatic and spatially coherent photons that can be focused to a sub-micrometer spot. This allows a 'diffraction-before-destruction' approach, which enables the recovery of structural information before the photoelectron cascade destroys the molecules. At such high X-ray fluence, the conventional damage limit is increased, resulting in 10,000 times more scattered photons than is usually possible 27,28 . Although this was originally proposed for single particle imaging 28 and first implemented in the form of serial femtosecond crystallography (SFX) in 2009 29 , it has also recently been applied to imaging amyloid fibrils 30,31 . Serial fiber diffraction at XFELs using a liquid jet delivery system has provided high-resolution data from a crystalline fiber system 31 . However, data quality for non-crystalline fibrils was poor 30 . Non-crystalline amyloid protofibrils are often only a single molecule thick and, therefore, about a thousand times smaller in width than the micrometer-thick water jet, the scattering of which, therefore, obscures their diffraction signal. To increase the achievable signal-to-noise ratio in fiber diffraction patterns, we have combined a new sample delivery medium based on free-standing graphene and the highly brilliant nanofocus XFEL beam of the Coherent X-ray Imaging (CXI) instrument 32 at the Linac Coherent Light Source (LCLS). Ultraclean graphene has recently enabled the imaging of single molecules to about 8 Å resolution by low energy electron holography 33,34 . We present diffraction patterns from a limited number of aligned filaments, which exhibit well-resolved layer lines. In some cases, the diffraction patterns show asymmetric features that indicate the presence of a limited number of molecular rotations. Weak XFEL diffraction patterns can be oriented and merged in reciprocal space to further increase signal levels 31 . The high-resolution diffraction features in these merged patterns are better resolved than in conventional X-ray diffraction patterns. More generally, XFEL serial diffraction on graphene approaches the signal-to-noise levels needed to study single particles [35][36][37] , and thus shows promise as a practical method for the general study of amyloid fibrils and other weakly scattering particles of similar size.
Results XFEL imaging of fibrils on free-standing graphene windows. We used an ultraclean graphene layer placed on a holey silicon support frame to deliver non-crystalline filaments into the XFEL beam focus. Experiments were conducted in vacuum to minimize background scattering from air, and the X-ray beam was focused to a spot size of about 150 nm full-width at half maximum (FWHM) to maximize the flux incident on individual protofibrils. To further reduce other sources of background, low scattering silicon frames were engineered, as shown in Fig. 1a-c. The silicon frame was designed with robust, efficient and simple sample scanning in mind. The 20-µm diameter holes were an optimal balance between the visibility of holes in the on-axis microscope necessary to align the frame with the X-rays, reduction of the interaction between the wings of the focused beam with the chip, and preventing window membranes from breaking during fabrication. Holey frames were covered with a layer of ultraclean graphene. A fabrication process previously described for smaller free-standing graphene windows 33 was modified as depicted in Supplementary Fig. 1 and detailed in Methods. Support frames were tested for graphene cleanliness, coverage, and stability of the graphene upon sample application using low energy electron and light microscopy ( Supplementary Fig. 2). The cleanliness of the graphene windows was comparable to that described elsewhere 33,38 . Silicon frames with freshly prepared ultraclean graphene layers (with and without samples) were glued onto an aluminum frame prior to their introduction into the CXI 32,39 vacuum chamber. Data collection was performed using XFEL pulses of 40 fs duration at 8 keV photon energy and 1.5 mJ pulse energy at the beam focus, giving a calculated peak fluence of about 7 × 10 13 photons/µm 2 . The experimental setup is depicted in Fig. 1a. The frames were scanned at 1.5 s −1 through the XFEL beam. This step scan was performed such that the XFEL pulse intersected every silicon hole, similar to previous fixed target approaches 40,41 . Diffraction patterns were collected over two 24-hour shifts.
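As a rough consistency check on the quoted beam parameters, the following Python sketch (our own, assuming an ideal two-dimensional Gaussian focus) estimates the photons per pulse and the peak fluence from the pulse energy, photon energy, and spot size given above; it reproduces the quoted fluence to within the accuracy of the focus model.

```python
import math

pulse_energy_J = 1.5e-3             # 1.5 mJ pulse energy at the beam focus
photon_energy_J = 8e3 * 1.602e-19   # 8 keV photons
fwhm_um = 0.150                     # ~150 nm focal spot (FWHM)

n_photons = pulse_energy_J / photon_energy_J   # ~1.2e12 photons per pulse

# Peak fluence of a 2D Gaussian focus: N / (2*pi*sigma^2).
sigma_um = fwhm_um / (2 * math.sqrt(2 * math.log(2)))
peak_fluence = n_photons / (2 * math.pi * sigma_um ** 2)

# ~5e13 photons/um^2, the same order of magnitude as the quoted ~7e13.
print(f"{n_photons:.2e} photons/pulse, peak fluence ~ {peak_fluence:.1e} photons/um^2")
```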
Preparation of aligned filaments on graphene. We selected Tobacco Mosaic Virus (TMV) fibers as a reference sample, and functional hormone amyloid protofilaments prepared from bombesin and β-endorphin peptides 42 . TMV has a large asymmetric unit whose 3D structure has been determined to high resolution by fiber diffraction and refined with cryo-EM 6,43 . Soluble bombesin and β-endorphin act both as neurotransmitters in the central nervous system and control a wide spectrum of activities on the cell periphery, and bombesin has putative roles in cancer growth. Both hormones are arranged as amyloid fibrils inside secretory granules of cells 42 . In contrast to disease-related amyloid fibrils, hormone amyloid fibrils can disassemble into active peptides upon pH change, and they exhibit a very low degree of polymorphism, which is essential to this experiment 44 . Amyloid fibrils form and maintain their structure under extreme conditions including acidic environments and high temperatures 45 , and so are not expected to degrade in the vacuum of the XFEL chamber. A key to obtaining useful diffraction signals from multiple filaments is their mutual alignment. Graphene provides a great benefit in this regard since it exhibits guiding forces on protein filaments, which tend to align them along their fibril axes 46,47 . This effect can be clearly observed by comparing images of TMV (Fig. 2a, b), bombesin amyloid fibrils (Fig. 2c, d) and β-endorphin amyloid fibrils (Fig. 2e, f), when they are placed either on amorphous carbon films (Fig. 2a, c, e) or graphene (Fig. 2b, d, f). In contrast to TMV, amyloid fibers are visibly polymorphic and are composed of different numbers of protofibrils (Fig. 2c, e). We observed that individual protofibrils aligned with graphene ( Fig. 2d, f), whereas the mature fibers did not. Protofibrils were the targets of this experiment, and to initiate their formation we mixed purified peptide solutions of bombesin and β-endorphin with heparin at slightly acidic pH values (pH 5.5), mimicking their native acidic environment in secretory granules 42 . Protofibril growth was imaged by negative-stain transmission electron microscopy (TEM) over four days (see Methods) and the existence of protofibrils was observed between 8-24 h after initiation of filamentation. Fibril suspensions were tested for alignment by depositing droplets on ultraclean graphene sheets dispersed on solid silicon. Atomic force microscopy (AFM) imaging showed that graphene appeared to stop assembly processes and maintain protofibril structures during the time of the AFM measurements (a few hours) (Fig. 2d, f). Protofibril dilution was calibrated to maximize the frequency of single layers. β-endorphin protofibrils have an average diameter of 3 nm, which was identified from a one-dimensional intensity profile obtained from TEM images of straight fibers 44 . To estimate the diameter of bombesin protofibrils, the signal-to-noise was increased by generating seven 2D class averages of fibers from the TEM micrographs ( Supplementary Fig. 3). Pixel intensities in columns parallel to the fiber axis were summed to a one-dimensional profile, which revealed diameters of bombesin fibers ranging from 8.8 to 11.3 nm. Modulations in these intensity profiles ( Supplementary Fig. 3b) suggest that bombesin fibers are composed of three to four protofibrils, each with a width of 2-3 nm. Imaging TMV in the same micrograph shows that its diameter is about six times larger than that of the individual bombesin protofibrils (Fig. 2c, Supplementary Fig. 3). Scattering intensities of fibril and background components. A total of 126,768 diffraction frames were acquired from four dilutions of all three fiber types, on empty holes, and on holes covered with only graphene and no sample. To compare the scattering intensity from the fibril and graphene components and to calibrate background subtraction, we characterized the X-ray scattering from graphene-covered holes (Fig. 3) and from sample-free, empty holes ( Supplementary Fig. 4). The average background from graphene-covered, sample-free holes ( Supplementary Fig. 4a) is equivalent to about 0.05 photons/pixel. The scattering from empty holes was determined from a series of 1569 frames. Empty holes give rise to measurable diffuse scattering from the silicon chip ( Supplementary Fig. 4b). The average total background per image from the series exposing only empty holes, excluding beam-off events, was about 101,345 scattered photons. We find that scattering from empty holes is similar to that of graphene-covered sample-free holes, indicating that the main component of the average background (Fig. 3a, Supplementary Fig. 4a) is due to the empty holes alone. Additional background may contain contributions from misclassified very weak TMV hits (patterns containing sample diffraction), as well as from the graphene layer. Other sources of background seen in the difference ( Supplementary Fig.
4c) between the average background ( Fig. 3, Supplementary Fig. 4a) and the background from empty holes ( Supplementary Fig. 4b) may be attributed to parasitic scattering of the XFEL and iron fluorescence in the steel vacuum chamber 48 . Hit fractions of 30-50% were achieved with samples that were diluted 20-250 times, starting with initial peptide concentrations of 1 mg/ml. Diffraction patterns from TMV exhibited layer lines in one or more orientations, indicating the presence of single or multiple layers of protofibrils in the nanofocus. An example diffraction frame from TMV with a single orientation on the graphene is shown in Fig. 3b. A pattern from bombesin is shown in Fig. 3c. Radial sections of these patterns are plotted in Fig. 3d. The signals from the amyloid and TMV are seen to extend to 2.4 Å and 2.7 Å, respectively. The background contribution from free-standing graphene is seen to be two orders of magnitude lower than the sample signal in Fig. 3d. Diffraction by TMV fibers. To demonstrate the structural integrity of the samples under our experimental conditions, we compare a single XFEL diffraction pattern from TMV exhibiting 24 layer lines (Fig. 4a) to a synchrotron diffraction pattern obtained from a specimen containing millions of TMV filaments aligned in well-oriented gels (Fig. 4b) 6 . The XFEL pattern resembles the azimuthally averaged synchrotron X-ray diffraction pattern. The qualitative agreement between the strong features suggests that the structure is not damaged in vacuum relative to the solvated form up to relatively high resolution. We selected 37 TMV XFEL frames with well-defined layer lines similar to Fig. 4a and calculated the period of the molecular structure along the c axis (fiber axis) from the layer line spacing; a small numerical sketch of this relation follows below. The average value is 68.8 Å, which agrees with the known value of 68.7 Å 43,49 , and the values from individual patterns are equal to this value within the error bars ( Supplementary Fig. 6). This suggests that the global structure of TMV is maintained during the experiment. Asymmetric features in single-shot XFEL diffraction patterns. A conventional fiber specimen contains many molecules with random axial rotations, and random directions either parallel or antiparallel to the fiber axis. A conventional fiber diffraction pattern is, therefore, cylindrically averaged, and so is symmetric about the equator (horizontal axis) and the meridian (vertical axis), as is evident in Fig. 4b. However, in some of the XFEL diffraction patterns, such as Fig. 4a, there is evidence that this symmetry is broken (Fig. 4c). Fig. 2 caption: b TMV fibrils align naturally on graphene over hundreds of nanometers. However, on the micrometer scale, aligned and randomly ordered fibrils are co-present. c Bombesin protofibrils associate laterally to form fibers, which randomly twist. A single preparation may consist of different polymorphs, e.g., twisted fibers and fibril rafts, which are depicted here with arrows and squares, respectively. Bombesin fibers were mixed with TMV to compare their thickness. d The alignment of bombesin protofibrils on graphene is shown. Mature fibers are detected at larger magnifications. e β-endorphin protofibrils associate laterally to form twisted and striated fibers. f Aligned β-endorphin protofibrils were observed on graphene supports. To confirm that the features being imaged by the AFM are from the sample and not an artifact caused by the probe, the sample was rotated by 30° with respect to the scanning direction. Dashed circles represent the XFEL focus with FWHM = 150 nm.
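Returning to the layer-line analysis above: in the small-angle approximation, layer lines from an axial repeat c are separated on the detector by Δy ≈ λD/c, where D is the sample-detector distance. The following Python sketch (our own) inverts this relation; the measured spacing used here is a hypothetical value chosen for illustration.

```python
# Estimate the axial repeat c from the layer-line spacing on the detector.
# Small-angle approximation: delta_y ~ wavelength * detector_distance / c.
wavelength_A = 12398.4 / 8000      # ~1.55 A at 8 keV (E[eV] -> lambda[A])
detector_distance_mm = 85.0        # sample-detector distance from Methods
layer_line_spacing_mm = 1.91       # hypothetical measured spacing

c_repeat_A = wavelength_A * detector_distance_mm / layer_line_spacing_mm
print(f"axial repeat c ~ {c_repeat_A:.1f} A")   # ~68.9 A, cf. TMV's 68.7 A
```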
The asymmetry indicates that the XFEL diffraction patterns from TMV are not cylindrically averaged, and that protofibrils with only one or a few axial rotations may be simultaneously exposed in the XFEL focus. Such patterns potentially contain more information than the cylindrically averaged patterns measured in conventional fiber diffraction experiments 50 . Although the number of protofibrils within the focus is limited, their exact number is difficult to determine. The number of protofibrils was estimated from examination of tapping-mode AFM images of graphene-covered silicon next to the selected windows (similar to Fig. 2a-c), as the free-standing graphene is too fragile to withstand AFM analysis. Based on this analysis and the ~150 nm XFEL focal diameter, we estimate that fewer than about 50 amyloid protofibrils and about eight TMV fibers contributed to the single diffraction patterns shown in Fig. 3. Diffraction by amyloid protofibrils. Amyloid protofibrils are about six times smaller in diameter than TMV particles ( Fig. 2b and S3), and therefore exhibit broader diffraction features. Single diffraction snapshots from amyloid protofibrils of bombesin and β-endorphin are shown in Fig. 5a, b. These patterns exhibit strong intensity on the equator and a strong meridional layer line at about 4.8 Å due to the characteristic spacing of β-strands in β-sheets typical for amyloids 10 . This preserved c-repeat indicates that there are no global structural changes due to the experimental conditions. The second layer line at ~2.4 Å (4.8 Å / 2) on the meridian is also present in single snapshots. Forty diffraction frames from each amyloid data set (bombesin and β-endorphin) with well-defined layer lines similar to Fig. 5a, b were selected manually, as existing hit-finding methods were found to be unsuitable for detecting layer lines in the somewhat diffuse patterns of this kind. Patterns were aligned and registered in reciprocal space after their rotation angle around the beam axis (φ) and the deviation of the fiber axis from the normal to the beam axis (β) ( Supplementary Fig. 7) were determined. The tilt angle between the fiber axis and the X-ray beam varied within a small range, independent of the substrate tilt, due to buckling of graphene across the holes. The oriented frames were then mapped into reciprocal space (R, Z) for subsequent analysis, with coordinates normal (R) and parallel (Z) to the fiber axis 51 . The mapped patterns were merged, symmetrized and background-subtracted to give an averaged pattern in (R, Z) space with an improved signal-to-noise ratio (Fig. 5c, d). Averaged equatorial intensity profiles shown in Fig. 5e, f were used to determine the positions of the equatorial maxima. For bombesin, three peaks including one pronounced equatorial peak at 10.6 Å are discernible (Fig. 5e). β-endorphin fibrils show five peaks, amongst which there are three pronounced maxima at 8.1 Å, 9.9 Å, and 12.3 Å, labeled 3, 4, and 5, respectively (Fig. 5f). Both equatorial and meridional peaks are summarized in Table 1. Discussion We have demonstrated a new approach to study non-crystalline amyloid fibrils combining femtosecond pulses from the LCLS XFEL and free-standing graphene windows. This approach presents two advantages: very low background scattering and mutual alignment of the particles in the beam focus.
The average background scattering obtained, 0.1 photons/pixel, approaches that obtained from aerosol injection methods for single particles using hard X-rays 48 , is significantly less than previously reported for other fixed-target samples at LCLS 41,52,53 , and is dramatically lower than that obtained with a liquid jet injector 30 . By naturally aligning TMV filaments and protofibrils composed of bombesin and β-endorphin peptides, graphene fixes the alignment of the molecules during exposure. This new mounting scheme limits the number of filaments simultaneously exposed to the XFEL focus (about eight TMV filaments and fewer than 50 amyloid protofibrils) and with further development may allow data to be collected from single fibrils. Table 1 footnote: R and Z positions were identified in averaged patterns of bombesin and β-endorphin (Fig. 5); s, strong; w, weak; vw, very weak. High-quality single-shot diffraction patterns were obtained from TMV. In some cases, asymmetry in the single-shot patterns indicates the presence of a limited number of axial rotations of the exposed particles. There are two implications of this observation. First, it suggests the possibility that the graphene may lock the TMV molecules into a specific rotation. Second, if the range of rotations present is small, then the data may represent a single, or a narrow, section through reciprocal space, rather than the cylindrical average. This is the case even if there is more than one molecule in the beam, as long as the molecules are mutually aligned. A full 3D data set could then be obtained from a range of fiber rotations in the X-ray beam. For molecules of high-order helix symmetry, the range of rotations required is small. For example, with TMV, a rotation range of 22 degrees would be sufficient. Such data could potentially be collected by tilting the support frame in the beam. With a full 3D data set from non-crystalline fibrils (i.e., 1D crystals), the information content is much higher than in a conventional cylindrically averaged fiber diffraction pattern, and direct, model-free phasing is feasible 54 . Snapshots from bombesin and β-endorphin protofibrils are of limited resolution, but they could be oriented and merged in reciprocal space to produce merged patterns with better signal-to-noise ratios and with extended resolution. These merged patterns show reduced disorientation and background, and are of overall better clarity and higher resolution than those obtained from similar amyloids using synchrotron sources (Fig. 6). The quality of the XFEL diffraction data obtained from the amyloid fibrils is limited due to the limited number of images and the averaging of multiple protofibrils co-present in the focus. However, the strong equatorial peak at ~10 Å resolution in the XFEL diffraction pattern of bombesin (Fig. 5a, c, e) is consistent with amyloid models with two β-sheets laterally placed 10 Å apart 55 . The β-endorphin structure, which has three peaks at 8.1, 9.9, and 12.3 Å, is known from solid-state NMR data to be in a β-helix conformation 56,57 . The distances of 8.1, 9.9, and 12.3 Å are in agreement with distances between opposing β-sheets in the fibril core. A similar equatorial intensity profile was published for the β-solenoid structure of Het-s(218-289), with peaks at about 17 and 11 Å (and 8 Å, not highlighted in that paper) 58 . In the study reported here, a limited number of diffraction frames were collected as a result of the scanning system, which could only move the fixed target through the XFEL focus at 1.5 s −1 .
Newly available scanning hardware will be capable of keeping up with the 120 Hz repetition rate of the XFEL 59 . This will increase the data collection rate and with it the number of collected patterns and ultimately the quality of the merged data set. The signal level obtained in single snapshots indicates that collection of data from single fibrils may be possible. In order to achieve this, biochemical methods must be developed to segregate single fibrils for exposure to the XFEL. For data collected from single fibrils, fixed rotations of the fibrils on the graphene are not necessary, as they are for the case of multiple fibrils in the beam described above. In fact, a variety of rotations will aid in filling out 3D reciprocal space. A small range of fibril tilts (i.e., rotations about an axis normal to the fibril axis), which could be obtained by tilting the support frame, would also be required to complete coverage of reciprocal space 31 . Such an approach will require development of computational techniques for auto-orienting the diffraction patterns ( Supplementary Fig. 7). Data from single protofibrils should, therefore, allow reconstruction of the full 3D intensity distribution of a protofibril, similar to that recently demonstrated for crystalline fibrils 31 . Such 3D datasets potentially allow model-free structure determination as described above. Our results indicate that serial fibril diffraction on graphene may become a practical method for the study of very weakly scattering particles using XFEL diffraction. Protofibrils are not yet accessible in images from a cryogenic specimen, and high-resolution cryo-EM reconstructions from fibers, therefore, represent averages of multiple protofilament conformations 18,19 . With this averaging, both the high-resolution information of individual protofilaments and the conformational variability in flexible regions, which are likely to be of biological importance, are lost. In fact, there is a common consensus that it is not the amyloid fiber alone, but rather the protofilaments composing the fiber, and the process of fibril formation, that are toxic to the cell 22,60 . XFEL-based experiments have the potential to overcome the challenges that come with a heterogeneous specimen, such as amyloid fibrils only a few nanometers thick. This represents a complementary tool to solid-state NMR and cryo-EM that has the potential to improve our understanding of individual protofilaments. Methods Design of silicon frames. Silicon frames with a size of 2.54 × 2.54 cm 2 were commercially obtained from Norcada. Each frame contained 12,636 holes of either 20 or 30 μm in diameter arranged in an array of 12 × 13 square windows, each 0.9 × 0.9 mm 2 , inside of which the silicon was thinned down to 2 μm. Individual windows contained 81 holes arrayed in a hexagonal layout with a pitch of 60 μm, necessary for stability and for withstanding XFEL shock waves. The large hole sizes were necessary to reduce scattering by the edge of the hole from photons in the tails of the focused LCLS beam. Application of trivial transfer graphene on holey silicon. Graphene-PMMA in the form of trivial transfer graphene (TTG) was purchased from Advanced Chemicals Supplier (ACS). The graphene-PMMA was wetted well with water before being cut with a scalpel to 2.54 × 2.54 cm 2 and transferred to a deionized distilled water bath. The graphene-PMMA was transferred onto the silicon chip and allowed to dry on silicon at room temperature for 20-30 min.
Polymethylmethacrylate (PMMA) was removed by immersing the silicon frame in an acetone bath. The silicon frame was placed on a hot plate at 300°C for 2-4 h and covered with a silicon chip with a 50 nm thick metallic palladium layer, to catalyze the evaporation of residual PMMA 33 ( Supplementary Fig. 1). Preparation of protofilaments. To initiate protofibril formation, peptide solutions of bombesin and β-endorphin were mixed with heparin in water (pH 5.5) and allowed to assemble and grow under continuous stirring for four days. At different time points after initiation, between 5 min and 4 days, suspension droplets were deposited on carbon films and imaged by negative-stain transmission electron microscopy. Protofibrils were observed between 8-24 h after initiation of filamentation. Sample deposition on silicon frames. Immediately after palladium catalysis 33 , the fiber samples were applied to the frame. To ensure optimal coverage, single droplets of about 0.4 μl were applied to all windows. To increase the ratio of single layers of fibrils over many square micrometers, protofilaments were applied at five dilutions (1×, 20×, 50×, 250× and 1000×). After all drops were dried, the silicon frames covered with graphene and protofibrils were imaged by AFM or exposed to the XFEL beam. Atomic force microscopy imaging of fibrils. Images of aligned single fibril layers were acquired on graphene prepared on a silicon wafer, and on graphene placed on a holey silicon support. As free-standing graphene breaks upon contact with the cantilever when there is no support underneath, images were acquired next to the graphene windows (on the silicon frame). AFM images were obtained with a Veeco XX in tapping mode. The cantilevers used (MPP-21120-10) were purchased from Bruker, with a resonance frequency of 75 kHz and a spring constant of 3 N·m −1 . Mounting silicon chips for XFEL experiments. Silicon chips were mounted on aluminum frames supplied by LCLS using small slices of Kapton tape. All work was conducted in a clean-room environment and frames were prepared immediately before the beamtime to allow for the cleanest graphene surface achievable. X-ray data collection. Each window was shot once with a 40 fs XFEL pulse with a calculated focus of 150 nm at FWHM. The sample-detector distance was set to 85 mm, which gave a resolution of 1.6 Å in the outer corners of the detector with a photon energy of 8 keV (a numerical check of this geometry is sketched below). A total of 126,768 frames were recorded on the 2D Cornell-SLAC pixel array detector (CSPAD) in high gain mode 61 . Parasitic scattering was reduced using a post-sample tantalum aperture matching the silicon chip dimensions, containing a 4.8 mm hole located immediately after the sample plane. The experiment was performed under proposal number LM27. The data is publicly available on the CXIDB 62 (ID 75). Calculation of the average background. Frames containing diluted TMV sample were also collected and filtered for beam-off events ( Supplementary Fig. 5). After events with no background (beam-off) were discarded, frames were classified into three groups based on the number of photons in the frame, the number of photons in a selected area of the frame expected to contain signal, and the number of photons in a selected area of the frame expected to have no signal. From this histogram, a total of 1607 frames were assigned to be sample-free and of signal level higher than that of empty holes.
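As a check of the stated geometry, this short Python sketch (our own) reproduces the quoted corner resolution; the detector half-diagonal is an assumed value based on the approximate physical size of the CSPAD, not a number given in the text.

```python
import math

wavelength_A = 12398.4 / 8000   # ~1.55 A at 8 keV
distance_mm = 85.0              # sample-detector distance from the text
corner_radius_mm = 131.0        # assumed CSPAD half-diagonal (approximate)

# Bragg's law: d = lambda / (2 sin(theta)), with 2*theta the scattering angle
# subtended by the detector corner.
two_theta = math.atan(corner_radius_mm / distance_mm)
d_A = wavelength_A / (2 * math.sin(two_theta / 2))
print(f"resolution at detector corner ~ {d_A:.2f} A")  # ~1.6 A, as quoted
```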
Transmission electron microscopy imaging of fibrils. Samples were adsorbed at 1×, 2×, or 5× dilutions as above for 40 s to thin carbon films that span a thick fenestrated film on 300-mesh copper grids. The grids were blotted, washed on two drops of deionized distilled water and negatively stained with 2% (w/v) uranyl formate (UF). The grids were imaged with a JEOL transmission electron microscope operating at 200 kV. Electron micrographs were recorded on a TVIPS TemCam F216 digital camera at a nominal ×50,000 magnification. Transmission electron microscopy and size determination. For TEM, stock solutions were diluted 2× with D-PBS (Gibco Life Technologies). A volume of 4 μl of diluted sample was adsorbed for 60 s to glow-discharged parlodion carbon-coated copper grids. The grids were then blotted, washed on three drops of double-distilled water, incubated with 2 µl of Tobacco Mosaic Virus solution (TMV; kindly supplied by Ruben Diaz-Avalos, Janelia Research Campus, Ashburn, VA, USA), further washed with two drops of water and negatively stained with two drops of 2% uranyl acetate (pH 4.3) solution. Samples were imaged at a nominal magnification of ×52,000 using a Tecnai12 transmission electron microscope (FEI, Eindhoven, The Netherlands) operating at 120 kV. Electron micrographs were recorded on a 4000 × 4000 pixel charge-coupled device camera (F416, Tietz Video and Image Processing System, Gauting, Germany). Reference-free alignment was performed on manually selected fibril segments from recorded images using the EMAN2 63 image processing package. A total of 488 segments of 128 × 128 pixels were extracted from the micrographs, aligned, and classified by multivariate statistical analysis, yielding eight class averages: one of TMV and seven of the amyloid fibers. The TMV class average was aligned horizontally and the amyloid fiber class averages vertically by rotating the corresponding images. Density profiles were plotted using the Plot Profile tool from ImageJ 64 and the apparent diameters of the fibrils were measured manually on the plots between the minima or deepest points. The estimated diameter of TMV was used to redetermine the specimen area recorded by each pixel (0.25 nm). Hit finding. Frames containing defined layer lines instead of arcs from the XFEL were manually selected using the Cheetah Software Suite 65 . Manual frame selection from these (comparatively) small datasets was used as existing hit-finding methods are not suitable for these kinds of patterns. Merging. Since single frames had a low degree of misorientation, the alignment and averaging were done manually. A custom graphical utility ( Supplementary Fig. 7) was used in which the in-plane rotation angle φ and the out-of-plane tilt β are determined. The tilt was determined with the aid of horizontal and vertical guides, which were used to manually check that the layer lines are horizontal and that the non-equatorial peaks on opposite sides of the equator had the same radial coordinate. Four-quadrant averaging improved the signal-to-noise ratio and filled in the panel gaps in the detector. All selected frames were scaled and averaged. The background on the detector was assumed to be circularly symmetric except for the polarization effect. Since the signal was concentrated in layer lines, pixels from between the layer lines were used to calculate this symmetric background, which was then subtracted from the whole frame. The program used is available at https://github.com/kartikayyer/RZ-Gui. Data availability.
Other data are available from the corresponding author upon reasonable request.
Entrainment and Synchronization to Auditory Stimuli During Walking in Healthy and Neurological Populations: A Methodological Systematic Review Background: Interdisciplinary work is needed for scientific progress, and with this review our interest is in the scientific progress toward understanding the underlying mechanisms of auditory-motor coupling, and how this can be applied to gait rehabilitation. Specifically, we look into the processes of entrainment and synchronization, where entrainment is the process that governs the dynamic alignment of the auditory and motor domains based on error-prediction correction, whereas synchronization is the stable maintenance of timing during auditory-motor alignment. Methodology: The databases PubMed and Web of Science were searched up to the 9th of August 2017. The selection criteria for the included studies were adult populations, with a minimum of five participants, investigating walking to an auditory stimulus, with an outcome measure of entrainment and synchronization. The review was registered in PROSPERO as CRD42017080325. Objectives: The objective of the review is to systematically describe the metrics which measure entrainment and synchronization to auditory stimuli during walking in healthy and neurological populations. Results: Sixteen articles were included. Fifty percent of the included articles had healthy controls as participants (N = 167), 19% had participants with neurological diseases such as Huntington's disease and stroke (N = 76), and 31% included both healthy and neurological [Parkinson's disease (PD) and stroke] participants (N = 101). In the included studies, six parameters were found to capture the interaction between the human movement and the auditory stimuli; these were: cadence, relative phase angle, resultant vector length, interval between the beat and the foot contact, period matching performance, and detrended fluctuation analysis. Conclusion: In this systematic review, several metrics have been identified which measure the timing aspect of auditory-motor coupling and synchronization to auditory stimuli in healthy and neurological populations during walking. The application of these metrics may enhance the current state of the art and practice across neurological gait rehabilitation. These metrics also have current shortcomings; of particular pertinence is our recommendation to consider variability in data from a time-series rather than a time-windowed viewpoint. This is needed in view of the promising practical applications from which the studied populations may highly benefit through personalized medical care.
INTRODUCTION Research on music and the brain typically draws on a cognitive science perspective, in which brain science, psychology, musicology, engineering, and neuroscience (Levitin and Tirovolas, 2009) form the interdisciplinary core for acquiring new insights. While auditory-motor couplings have been studied from a cognitive science perspective, their full potential for clinical applications is not yet fully understood. Yet, evidence-based research related to auditory-motor coupling does hold the prospect of new therapeutic applications in the clinical domain, for example in persons with neurological cognitive or motor impairments. Therefore, in this review, our interest is in the scientific progress toward understanding auditory-motor coupling in rehabilitation and facilitation of walking. Specifically, we look into the processes of entrainment and synchronization, where entrainment is defined as the process that governs the dynamic alignment of the auditory and motor domains, whereas synchronization is defined as the stable maintenance of timing during auditory-motor alignment. (In-depth explanations of these concepts follow below.) Meanwhile, the use of music and auditory stimuli for different populations has been studied in people with traumatic brain injury (Bradt et al., 2010), neurological diseases (Moumdjian et al., 2017; Sihvonen et al., 2017), cognitive function in the elderly (Li et al., 2015), and dementia (Fusar-Poli et al., 2017; van der Steen et al., 2017). For example, as mobility impairments are prominent in persons with Parkinson's disease (PD) (Marras et al., 2002), evidence has accumulated that the use of auditory stimuli in rehabilitation for PD patients could improve gait and facilitate walking. At a mechanistic level, several facilitation mechanisms have been suggested, such as the activation of auditory-motor pathways (Thaut, 2015), or an activation effect on the motor system due to the firing rates of auditory neurons entraining the firing rates of motor neurons (Rossignol and Jones, 1976). Clinically, this has led to the development of a technique called Rhythmic Auditory Stimulation (RAS), which generalizes the idea of using auditory stimuli (mainly metronome ticks, but also music) for gait rehabilitation in pathologies of PD (Wittwer et al., 2013), stroke (Yoo and Kim, 2016), and multiple sclerosis (Shahraki et al., 2017). The quality of evidence for using RAS to enhance gait is established by systematic reviews and meta-analyses on persons with stroke (Nascimento et al., 2015; Yoo and Kim, 2016), cerebral palsy (Ghai et al., 2017), PD (Spaulding et al., 2013; Ghai et al., 2018), and the aging population (Ghai et al., 2017). It is likely that these studies provide the foundation for future applications in the respective domains. However, a closer look at the studies reveals that different gait-related outcomes (e.g., velocity, step length, cadence, etc.) have been used to map out the positive benefits of RAS on gait (Nascimento et al., 2015; Ghai et al., 2017). One may question whether the use of these gait-related outcomes provides enough detailed information about the effects of using RAS or other types of auditory stimuli on gait, specifically within neurological populations with impairments and often asymmetries. Consequently, at the conceptual level, convergence is needed. A major problem is related to the concept of entrainment, that is, a process that governs the alignment of the auditory and motor domains.
This alignment can be understood in terms of coupled oscillators that achieve synchronization by locking into each other's period and/or phase (Bennett et al., 2002; Leman, 2016), or, alternatively, as the effect of minimizing prediction errors (Clayton et al., 2005; Repp and Su, 2013; Leman, 2016). The first is based more on mechanical pull and push forces, while the second is based more on principles of anticipation, involving the concept of an internal model in the brain (Wolpert et al., 1995). For our purpose, it is straightforward to conceive the interaction between music (or repetitive auditory stimuli) and a person (doing repetitive movements) as a coupled oscillatory system. The beats found in the music (or auditory stimuli) and the footfalls generated by a gait cycle thereby mark the cycles of the two different oscillatory systems. Through entrainment, the beats and the footfalls get aligned in time. That is, the beat and the footfall are constantly pulled and pushed toward one another until the time difference between the beat and the footfall becomes (more or less) stable. From that moment on, the interaction reaches a state of synchronization. This state can be conceived as a dynamic attraction point where the timing differences between music and person are stabilized. Rather than pull and push forces, it is also straightforward to assume error-prediction minimization as a mechanism for entrainment (Repp and Su, 2013). For an in-depth explanation of the factors that determine the strength of the coupling and entrainment, the reader is referred to Leman (2016).
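To make the coupled-oscillator picture concrete, the following Python sketch simulates two phase oscillators, one for the beat and one for the gait cycle, whose phase difference is pulled toward a stable value in the way described above. It is our own minimal Kuramoto-style illustration, not a model taken from any of the reviewed studies, and all parameter values are arbitrary.

```python
import math

# Two coupled phase oscillators: one for the beat, one for the gait cycle.
# The gait phase is pulled toward the beat until their difference stabilizes.
w_beat, w_gait = 2 * math.pi * 2.0, 2 * math.pi * 1.9  # natural rates (rad/s)
K, dt = 1.5, 0.01                                      # coupling strength, time step
phi_beat = phi_gait = 0.0

for _ in range(2000):  # 20 s of simulated interaction (Euler integration)
    phi_beat += w_beat * dt
    phi_gait += (w_gait + K * math.sin(phi_beat - phi_gait)) * dt

# With |w_beat - w_gait| <= K the oscillators phase-lock; here the stable
# difference is arcsin((w_beat - w_gait)/K), about 25 degrees.
diff = math.remainder(phi_beat - phi_gait, 2 * math.pi)
print(f"stable phase difference ~ {math.degrees(diff):.1f} deg")
```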
Importantly, the synchronization of the oscillators, which one can view as an outcome of entrainment involving a stable maintenance of timing, can be quantified. In this review, we focus on the outcome measures that have been used in studies that use entrainment and synchronization as a factor in walking rehabilitation and facilitation. So far, there is evidence that entrainment can be quantified by measuring timing (Repp and Su, 2013). We hypothesize that the use of metrics that measure timing during entrainment and synchronization is beneficial, as it would facilitate our understanding of the mechanisms of the coupling between human gait and auditory stimuli. This understanding can be beneficial as well as enriching for the discipline of rehabilitation. Having access to and understanding of novel assessment measures may contribute toward the development of tailored clinical interventions that use auditory stimuli in neurological gait rehabilitation. To our knowledge, this work is the first to systematically review the literature in view of the metrics that measure entrainment and synchronization responses to auditory stimuli during walking in healthy and pathological populations. The goal of the review is to describe (i) the types of auditory stimuli, the conditions, and the rationale for why they were applied, (ii) the metrics which measure entrainment and synchronization to auditory stimuli, (iii) the methods of walking and how they were measured, (iv) the populations of participants included in the studies and their motor and/or cognitive characteristics, and finally (v) recommendations for the use of metrics in future research activities. METHODOLOGY This review is registered in PROSPERO (registration number: CRD42017080325). We included cross-sectional studies (e.g., observational studies or controlled trials) that consisted of at least a one-session intervention. The selection criteria for the included studies were adult populations, with a minimum of five participants, investigating walking to an auditory stimulus, with an outcome measure of entrainment and synchronization. Additionally, articles on cyclic activities other than walking were excluded, as were animal studies, conference proceedings, reviews, and non-English publications. Two electronic databases (PubMed and Web of Science) were searched up to the 9th of August 2017. The following search terms were used: Synchronization AND (Rhythm OR Pulse OR Music OR Metronome OR Melody OR Beat OR auditory stimuli) AND (Gait OR Walking OR Treadmill Walking OR Indoor Walking OR Outdoor Walking). Appendix 1 in Supplementary Tables shows the flow of the search strategy. Furthermore, the reference lists of the selected articles were scanned for relevant additional literature. Two independent reviewers (I.W., J.J.) screened the articles systematically. A third reviewer (L.M.) was contacted in case of disagreements or doubts about whether to include a study, and a final decision was made. In total, 16 of the 249 screened articles are included in this review. Figure 1 shows the PRISMA (Liberati et al., 2009) flow diagram summarizing the selection process and reasons for exclusion of the studies. The following data were extracted from the selected studies: participant population (healthy or neurological disease), descriptive characteristics of the participants (age, gender, weight, height), type of pathology in the neurological population (motor and/or cognitive characteristics), number of participants, auditory condition, methods and equipment used to apply the auditory conditions, experimental and control groups, methods and equipment measuring the walking, outcome measures of entrainment and synchronization, and spatiotemporal parameters. In order to assess the risk of bias in individual studies, we employed the STROBE checklist (von Elm et al., 2007). To minimize publication biases, the key words from the titles and the last authors of the included studies were checked for presence on the EU clinical trial register. The planned method of analysis for this review is a descriptive synthesis. RESULTS Below, we present the results of this review in five sections in order to provide a comprehensive methodological overview. The order is as follows: (i) the risk of bias, (ii) descriptive characteristics of study participants, (iii) the auditory stimuli and/or interactive auditory software, (iv) the parameters and measures of entrainment and synchronization, and finally (v) the sensor equipment and the spatiotemporal parameters of walking across the studies. Quality Assessment The quality assessment of the included articles is based on the STROBE checklist. The Supplementary Table shows the results of the STROBE checklist across the included studies. Overall, the quality of the studies was acceptable. All articles had a clear explanation of their scientific background and provided clear explanations of the aims, hypotheses, and experimental design of their study. However, none of the studies provided a sample size analysis. In four of 16 studies, missing data were not addressed. Strengths and limitations of the studies are addressed in the discussion. The Auditory Stimuli Used Across the Studies The types of auditory stimuli used and the methods of administration were heterogeneous across the studies. Table 2 provides a detailed overview of the applications of the auditory stimuli.
The Custom-Made Interactive Auditory Software Used Across the Studies Two custom-made software systems were found in the included studies: the D-Jogger and the Walk-Mate. The D-Jogger (Moens et al., 2014) is a music player system that adapts the period and phase of a musical playback in response to human interaction. First, the music player identifies the period and phase of a walking or running person, using the footfall instant as a salient measurement moment. Based on the selected alignment strategy, music is provided, and adapted if needed, using the musical beat as a salient measurement moment. The system consists of a laptop, sensors, headphones, a wifi connection and transmitter (to transmit the sensor data to the laptop), and an annotated music library (Buhmann et al., 2016b). The reader is referred to the paper of Moens et al. (2014) for a detailed explanation of the components and functioning of the system. The Walk-Mate system (Miyake, 2009) is a human-robot interaction system based on mutual entrainment of walking rhythms. It was developed to investigate the mechanism of interpersonal synchronization, and its potential applications to provide walking support for patients with gait disturbance. The system consists of the following equipment: a laptop, headphones, pressure sensors, and a radio transmitter and receiver (to transmit the sensor data to the laptop). The reader is referred to the paper of Miyake (2009) for a detailed explanation of the components and functioning of the system. Table 3 and Figure 2 provide the definitions, formulas, and interpretations of the below measures of auditory-motor coupling and synchronization during walking. Please note, in our results we did not find metrics that measure entrainment specifically; rather, they measure auditory-motor coupling and/or synchronization. For correct use of terminology, from this point forward in the text we use the term auditory-motor coupling and synchronization metrics instead of entrainment and synchronization metrics. Measures of Entrainment and Synchronization In the included studies, six parameters were found to capture the interaction between the human movement and the auditory stimuli. Cadence (measured in steps per minute) Tempo is a term that refers to the basic tempo of audio or movement and is typically expressed in number of steps or beats per minute (SPM/BPM). SPM is calculated as the total number of steps divided by the duration expressed in minutes:

SPM = total number of steps / duration (min)

Relative phase angle (measured in degrees) This is a measure of the timing of the footfall relative to the closest beat. The relative phase angle can be expressed as either a positive (footfall after the beat) or a negative (footfall before the beat) angle in degrees. With the formula below, the relative phase angle for one step is calculated; S_t represents the time point where the step investigated takes place, B_n is the beat at the time prior to S_t, and B_(n+1) is the following beat:

relative phase angle = 360° × (S_t − B_n) / (B_(n+1) − B_n)

To calculate the average relative phase angle, circular statistics (Berens, 2009) is then applied. Resultant vector length (expressed as a value from 0 to 1) This measure expresses the coherence or stability of the relative phase angles over time. If the distribution of the relative phase angles over time is steep, it results in a high resultant vector length (maximum value 1). If the distribution of the relative phase angles over time is not steep but broad or multimodal, it results in a low resultant vector length (minimum value 0). Consider S_n as the nth of N steps, with φ_n its relative phase angle, in the following formula:

R = (1/N) × | Σ_(n=1..N) exp(i φ_n) |
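To illustrate how the relative phase angle and resultant vector length defined above can be computed from footfall and beat times, here is a minimal Python sketch; it is our own illustration, not code from the reviewed studies, and the timing values are invented.

```python
import cmath
import math

# Hypothetical footfall and beat times in seconds.
steps = [0.52, 1.03, 1.55, 2.06, 2.58]
beats = [0.50, 1.00, 1.50, 2.00, 2.50, 3.00]

phase_angles = []
for s in steps:
    # B_n: last beat at or before the step; the next beat closes the cycle.
    n = max(i for i, b in enumerate(beats) if b <= s)
    phi = 360.0 * (s - beats[n]) / (beats[n + 1] - beats[n])
    phase_angles.append(phi)

# Resultant vector length via circular statistics
# (0 = no stable coupling, 1 = perfectly stable phase relation).
R = abs(sum(cmath.exp(1j * math.radians(phi)) for phi in phase_angles)) / len(phase_angles)
print([round(p, 1) for p in phase_angles], round(R, 3))
```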
Consider S as a step and n as the nth step; the resultant vector length is then obtained as the length of the circular mean of the relative phase angles:

R = (1/N) |Σ_{n=1}^{N} exp(i φ_n)|

where φ_n is the relative phase angle of the nth step.

Asynchrony (measured in ms) This parameter is a measure of the timing expressed in milliseconds (ms) between the footfall and beat instants, i.e., the asynchrony between the beat and the footfall. While the phase angles express the relative differences between the steps and beats, the intervals between the steps and beats are absolute differences. In the formula below, S_t represents the time point where the step investigated takes place, and B_n is the beat at the time closest to S_t:

asynchrony = S_t − B_n

Tempo matching accuracy (measured in ms) This parameter indicates the extent to which the overall tempo of the footfalls matches the overall tempo of the beats. Inter-beat deviation (IBD) was defined as a parameter that measures the tempo-matching accuracy, as expressed by the formula below, where n represents the nth step or beat:

IBD_n = (S_{n+1} − S_n) − (B_{n+1} − B_n)

The standard deviation of the IBD can also be calculated as a unit of variability of the tempo matching.

Detrended fluctuation analysis (DFA) (measured by the scaling exponent alpha) The DFA is a common mathematical method to analyse the dynamics of non-stationary time series. More specifically, it characterizes the fluctuation dynamics of the time series by looking into its scaling exponent alpha (Chen et al., 2002). It has been shown that in other physiological time series the current value possesses the memory of preceding values. This phenomenon is known as long-range correlations, long-term memory, or a fractal 1/f-noise process. A healthy gait time series pattern consists of a fractal, statistically persistent structure equivalent to a pure 1/f noise (Goldberger et al., 2002). Authors suggest that the analysis of this gives an insight into the neuro-physiological organization of neuro-muscular control and the entire locomotion system (Hausdorff, 2007). The 1/f noise is correlated with a scaling exponent alpha value between 0.5 and 1.0 (indicative of a walking pattern found in healthy gait time series). If alpha is ≤0.5, it signifies randomness or anti-correlation, and is associated with an unhealthy walking pattern. For details of calculating the scaling exponent alpha, the reader is referred to Chen et al. (2002) and Terrier et al. (2005). The underlying rationale of using this analysis method in gait is addressed in the discussion section of our review. The integrated time series of length N is divided into boxes of equal length. Each box has a length "n", and in each box of length n a least-squares line is fit to the data. The y-coordinate of the straight line segments is denoted by y_n(k). The integrated time series y(k) is detrended by subtracting the local trend y_n(k) in each box. The root mean square fluctuation of this integrated and detrended time series is calculated by:

F(n) = sqrt( (1/N) Σ_{k=1}^{N} [y(k) − y_n(k)]² )

Thus, the fluctuations can be characterized by the scaling exponent alpha, which is the slope of the line relating log F(n) to log(n):

F(n) ∝ n^alpha

Seven of the 16 included articles used the tempo parameter as a measure of tempo matching (McIntosh et al., 1997; Thaut et al., 1999; Roerdink et al., 2009, 2011; Cha et al., 2014; Mendonça et al., 2014; Dotov et al., 2017).
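The metrics defined above are straightforward to compute once footfall and beat instants are available as time stamps. The sketch below is our own illustration of these definitions, not code from any of the reviewed studies; the function names and the toy 120-BPM scenario are assumptions made for the example.

```python
import numpy as np

def relative_phase_angles(steps, beats):
    """Relative phase angle (deg) of each footfall w.r.t. the beat grid.

    steps, beats: 1-D arrays of event times in seconds (beats assumed sorted).
    Angles are wrapped to (-180, 180]: negative = footfall before the beat.
    """
    angles = []
    for s in steps:
        i = np.searchsorted(beats, s) - 1            # beat B_n just before S_t
        if 0 <= i < len(beats) - 1:
            frac = (s - beats[i]) / (beats[i + 1] - beats[i])
            angles.append(360.0 * frac)
    angles = np.asarray(angles)
    return (angles + 180.0) % 360.0 - 180.0

def resultant_vector_length(angles_deg):
    """Circular coherence of the phase angles, between 0 and 1."""
    phi = np.deg2rad(angles_deg)
    return np.abs(np.mean(np.exp(1j * phi)))

def asynchronies(steps, beats):
    """Signed interval (s) between each footfall and its nearest beat."""
    idx = np.abs(steps[:, None] - beats[None, :]).argmin(axis=1)
    return steps - beats[idx]

def dfa_alpha(x):
    """Scaling exponent alpha of a (e.g. stride-interval) time series."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                      # integrated series y(k)
    scales = np.unique(np.floor(np.logspace(
        np.log10(4), np.log10(len(x) // 4), 12)).astype(int))
    F = []
    for n in scales:
        rms = []
        for b in range(len(y) // n):                 # boxes of equal length n
            seg = y[b * n:(b + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local least-squares line
            rms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(rms)))              # root mean square fluctuation
    return np.polyfit(np.log(scales), np.log(F), 1)[0]  # slope of log F(n) vs log n

# toy usage: a walker stepping ~50 ms after the beats of a 120-BPM metronome
rng = np.random.default_rng(0)
beats = np.arange(0.0, 60.0, 0.5)
steps = beats[:-1] + 0.05 + 0.01 * rng.standard_normal(len(beats) - 1)
phi = relative_phase_angles(steps, beats)
print(resultant_vector_length(phi), np.mean(asynchronies(steps, beats)))
```

The DFA routine follows the box-wise least-squares detrending described above; in practice, the usable box sizes are limited by the trial length (cf. the 3-min trials discussed in the limitations below).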
Relative Phase Angle (Measured in Degrees) Five of 16 studies measured the relative phase angle (McIntosh et al., 1997; Roerdink et al., 2009, 2011; Nomura et al., 2012; Buhmann et al., 2016a), while two of these studies reported the variance expressed by a standard deviation value as well (Roerdink et al., 2009; Nomura et al., 2012). Additionally, one study used the term phase coupling (McIntosh et al., 1997) to refer to this parameter.

Resultant Vector Length (Expressed as a Value From 0 to 1) Four of 16 articles used this parameter in their study (Hove et al., 2012; Nomura et al., 2012; Buhmann et al., 2016a; Dotov et al., 2017). Of the four studies, one used this parameter in order to group their study population into two categories of phase coherence and incoherence (Buhmann et al., 2016a). A second study used this parameter, yet under the term synchronization consistency (Dotov et al., 2017).

Asynchrony (Measured in ms) Three of 16 studies used this parameter (Pelton et al., 2010; Roerdink et al., 2011; Dickstein and Plax, 2012). Of the three studies, one calculated the variability of the timing as well (Roerdink et al., 2011).

Period (or Tempo) Matching Performance (Measured in ms) Four of 16 studies measured this parameter. Two studies from the same research group (Leow et al., 2014, 2015) calculated the period matching performance through the inter-beat interval deviation. They defined the inter-beat deviation as a parameter that measures the tempo-matching accuracy. They also calculated the standard deviation of the inter-beat deviation, and named this parameter the tempo-matching variability (Leow et al., 2015). The third paper measuring the period matching performance does so by calculating the proportional asynchrony error relative to the target pulse (also described as period control; Pelton et al., 2010). A fourth paper does this by calculating an error measure E to evaluate the frequency error of the synchronization. When E = 0, there was perfect frequency synchronization. When E was negative, the participants needed to take more strides before synchronizing (Roerdink et al., 2009). This parameter was also referred to as period control.

Detrended Fluctuation Analysis (DFA) (Measured by the Scaling Exponent Alpha) Three (Hove et al., 2012; Terrier and Deriaz, 2012; Marmelat et al., 2014) of the 16 studies included in this review used this analysis, for slightly different purposes. One study investigated whether the use of an interactive music player (one that adjusts the timing of its beats to the timing of the footfalls) retains the most healthy gait dynamics, i.e., long-range correlations equal to those of healthy gait, in Parkinson patients (Hove et al., 2012). The remaining two studies used this parameter to measure how long-term correlations of gait in a healthy population were influenced when changes of the experimental conditions were imposed on healthy controls. Their main research questions were how long-term correlations change when using isochronous, non-isochronous, and fractal cues (Marmelat et al., 2014), and how these correlations change when imposing simultaneous variations of speed and rhythm (using a treadmill and metronomes; Terrier and Deriaz, 2012).

The Sensor Equipment Used Across the Studies Various sensors were used to capture the movement parameters. The spatiotemporal parameters of gait that were calculated using the described sensor equipment are summarized in Table 4.
We Grouped the Sensor Technology Into Four Categories: (b) Instrumented treadmills Two different types of sensors were used across three studies: a single large force platform mounted on the treadmill (ForceLink, Culemborg, The Netherlands) (Roerdink et al., 2011; Marmelat et al., 2014) and the FDM-TDL (Scheinworks/Zebris, Schein, Germany). The latter is a treadmill with 7,168 pressure sensors (1.4 sensors per cm²) embedded in its surface (Terrier and Deriaz, 2012). (c) Sensored walkways Three types of walkways were used across four studies: a 16-foot Zeno pressure-sensor walkway with a sampling rate of 120 Hz (Leow et al., 2014, 2015); a GAITRite system (GAITRite, CIR Systems Inc., USA, 2008), with an active area 366 cm long containing 16,128 pressure sensors (Cha et al., 2014); and a computerized foot-switch system consisting of four separate sensors which measure the surface contact of the heel, toe, and 1st and 5th metatarsals (McIntosh et al., 1997). (d) 3D motion capture Three different systems were used across three studies: a six-camera motion capture system (Vicon MX3+, MXF20 at 120 Hz; Mendonça et al., 2014); an Oxford Metrics Vicon tracking system with six infrared cameras which captured the motion of ankle markers at 120 Hz (Pelton et al., 2010); and a 3D passive-marker motion registration system (SIMI Motion; Roerdink et al., 2009). In the latter, markers were attached to the heels of the participants' shoes.

DISCUSSION In this systematic literature review, we included 16 articles that measured timing components of auditory-motor coupling and synchronization while walking to auditory stimuli (metronomes and music). Half of the studies were in healthy subjects exclusively, while the other half also included persons with neurological conditions. Six outcome measures were found: steps per minute, resultant vector length, relative phase angle, relative timing, period matching performance, and DFA. All the metrics we identified, with the exception of the DFA, provide general information about the synchronization of the motor system and the auditory stimuli. Typically, the metrics point to average timing relationships between footfalls and beats, which are the salient markers of the cycles that characterize the essential features of the motor system and the auditory stimuli. In other words, the metrics (with the exception of DFA) can be used to quantify synchronization in an environment where auditory stimuli and gait are coupled. The metrics assume that the best synchronization state is the state where the audio-motor error is minimal, preferably zero. However, it is important to note that the underlying entrainment, which we defined as an alignment dynamics understood in terms of pull and push forces (mechanics) or audio-motor prediction-error correction (internal models), is not captured by these metrics.

Did Synchronization Influence Walking? Seven of the 16 included studies analyzed the effects of different auditory stimuli on gait parameters. That being said, heterogeneity is present in the included studies in terms of the investigations on the different aspects of synchronization, to accommodate the different hypotheses and the variety of study aims. Examples of heterogeneity in the methodological applications are the different methods to produce the auditory stimuli, the different stimuli and characteristics within the stimuli, the use of tempo or phase shifts at different tempi in the experimental conditions, and lastly, the different participant populations in the studies.
Similarly, heterogeneity is also found in the reported spatio-temporal parameters of gait. Because these heterogeneities are present in the study designs, direct comparison of the results reported in the studies is hampered. Accordingly, we are not able to estimate the overall effect of synchronization on gait. Instead, below, we provide a short overview of the effects of synchronization on gait per study, without any direct comparisons between studies. Buhmann et al. (2016a) showed that in healthy participants, synchronization was not specifically necessary to obtain changes in spatiotemporal parameters of gait. However, the authors speculated that the auditory-motor coupling in the process of entrainment was still the main source that brought about the changes. In the study of Roerdink et al. (2011), the cadence of healthy participants increased with the imposed pacing frequencies. When the pacing was close to the preferred cadence, the variability of relative timing was diminished. This in turn resulted in higher phase coherence, i.e., a higher stability of synchronization.

In Neurological Participants In the study of Cha et al. (2014) (with stroke patients), higher RAS tempi led to a faster gait velocity, a higher cadence, a longer stride length, and a reduced double limb support duration on both the affected and unaffected lower extremities. The authors reasoned that with a reduced double limb support, the walking pattern became more stable and therefore balance improved. Moreover, they concluded that higher RAS tempi allowed the stroke patients to achieve a larger stride length on their affected side, compared to the non-RAS conditions. In the study of Roerdink et al. (2009), both stroke patients and healthy participants were included, and the results indicated that pacing with a double metronome (a tone pacing each step) was better than pacing with a single metronome (a tone pacing every other step) for both patients and healthy controls in terms of decreasing step asymmetry. Lastly, the study of Pelton et al. (2010) with stroke patients (comparing paretic and non-paretic lower extremities) concluded that neither the accuracy nor the variability was altered when walking on a treadmill in the presence of the metronome compared to walking without a metronome. However, the authors noted that these results could be explained by the high level of symmetry that was already observed during treadmill walking; thus, little room for improvement was available, if any was to be seen. Other neurological pathologies were also found in this review. Thaut et al. (1999) compared walking to music and metronomes for patients with Huntington's disease with three levels of disability. Self-paced, slow, and fast metronomes and music were used. The study demonstrated that, of 27 patients, 19, 23, and 17 were able to increase their velocity from their baseline during the self-paced (no audio), accelerated metronome, and music conditions, respectively. The participants that did have a change in velocity were more disabled. Moreover, the authors also assumed that the difference in numbers between metronome and music might have been caused by the complexity of the music compared to the simple ticking of the metronomes. A crucial finding which the authors comment on is that impaired performance in a sensorimotor synchronization task might be a predictor of the neurological disease prior to the evidence of the first symptom in persons with Huntington's disease.
Yet, they do emphasize that the general mechanisms for rhythmic entrainment are intact at earlier stages of the disease. Finally, in PD patients, McIntosh et al. (1997) concluded that a faster RAS condition led to an increase in velocity, cadence, and stride length. The results were also similar to those of healthy controls. However, they also report that a higher severity of the disease led to worse synchronization. In summary, an assumption can be made that, overall, synchronization to auditory stimuli had a positive effect on the gait of different patient populations. However, several considerations must be taken into account in order to answer that assumption robustly. These are discussed below.

Limitations of the Included Studies One of the major limitations of the studies reviewed here was the small sample size in some of the studies (minimum 9, maximum 12; Pelton et al., 2010; Dickstein and Plax, 2012; Marmelat et al., 2014; Mendonça et al., 2014; Leow et al., 2015). Furthermore, in the studies conducting DFA analysis (Hove et al., 2012; Marmelat et al., 2014), the trials lasted 3 min, which is a relatively short time for DFA, and longer trials are warranted for future studies (Pierrynowski et al., 2005). Other limitations of studies were due to missing data as a result of technological errors (Leow et al., 2014). In addition, some studies included patients who did not exhibit motor dysfunctions; for example, the PD patients included in Dotov et al. (2017) did not exhibit clinical dysfunctions such as freezing of gait. In one study, the cognitive impairment of patients was not taken into account, and therefore patients who did not synchronize were excluded without discussing the underlying causes (Roerdink et al., 2009). Moreover, the methodological design can be questioned, as patients could rest as long as they wished while listening to the next auditory condition, making the exposure time to sounds variable across participants (Roerdink et al., 2009).

Critical Appraisal of the Identified Metrics The DFA measure differs from the remaining metrics identified in this review, because the DFA is not a pure measurement of entrainment and/or synchronization per se. Rather than saying something about the audio-motor relationship, the DFA provides information about the variability of movement during the process of entrainment and/or synchronization. As is known, this variability can be related to the quality of movement (Dotov et al., 2017). A healthy gait time series pattern consists of a fractal, statistically persistent structure equivalent to a pure 1/f noise (Goldberger et al., 2002). Authors suggest that the analysis of this gives an insight into the neuro-physiological organization of neuromuscular control and the entire locomotion system (Hausdorff, 2007). The 1/f noise structure demonstrates a non-random predictability of the steps in a gait cycle. Conversely, altered gait time series have been associated with various diseases (Goldberger et al., 2002; Hausdorff, 2007). It has been claimed that the loss of the non-randomness (the statistical persistence) could be related to the decreased adaptability of neural structures and looser cortical control (Goldberger et al., 2002; Hausdorff, 2007). The DFA could potentially be a valuable measure to help explain entrainment and/or synchronization in terms of variability rather than in terms of prediction-error minimization.
Such an approach would comport with the theory of active inference (Brown et al., 2013; Friston et al., 2017), in which a subject's motor variability is understood as a way to sample audio-motor errors so that statistical inferences about those errors can be more accurate. In other words, according to the active inference concept, subjects (due to neuromuscular variability) generate small variations in footstep timing, but this variability helps them to better estimate the audio-motor error, so that smooth entrainment and/or constancy in synchronization, despite some variability, can be maintained. According to this theory, the differences seen during movement between healthy subjects and subjects with neurological diseases may point to the ability of handling variability, rather than of minimizing prediction error. The theory at least assumes that variability measures may be a crucial factor to be taken into consideration, in addition to the synchronization metrics. It is also very important to note that the DFA considers variability from the viewpoint of continuity in time, that is, as a time series. The other metrics identified in this review are based on average values across different time points. Hence, their variability measure is based on time windows, rather than time series. This difference between time series and time windows becomes crucial when considering the attraction dynamics that underlie synchronization. As we work with cyclic or oscillatory phenomena in both the motor and audio domains, the state of synchronization can typically be reached through two attractor points, one at in-phase (i.e., footfall and beat occur together) and the other at anti-phase (i.e., footfall and beat occur at a 180° difference; Haken et al., 1985; Leman, 2016). To better understand this phenomenon, the reader is directed to Figure 3. In this illustration, we present a scenario of a walker entraining to the beats in the auditory stimuli. This walker reaches an in-phase (attractor point at 0°) synchronization, but changes to an anti-phase (attractor point at 180°) synchronization during the course of the trial. When we calculate the resultant vector length according to the metric described in this review, we end up with a very small value, which seems to indicate that the walker, overall, did not have a cohesive phase synchronization. However, in reality, this is not true; the walker maintained synchronization overall, but suddenly changed and followed a different attractor across time. To capture this phenomenon, we need methods that describe synchronization as time-varying.

Practical Considerations for the Application of Auditory Stimuli Choosing the Auditory Stimuli; Is It Crucial? Another crucial aspect in designing studies that use music as auditory stimuli may be the choice of music. In our review, we found two studies that discuss the importance of music selection. The study of Buhmann et al. (2016a) distinguished between the activating and relaxing characteristics of the music. They concluded that the stride length was significantly larger when walking to activating music compared to relaxing music. Activating music has been defined as music that has an increasing effect on walking velocity and/or stride length, whereas relaxing music typically decreases these parameters.
The acoustical analysis revealed that the activating music has a more binary emphasis pattern (actually matching the alternating footstep pattern), whereas the relaxing music has a more ternary emphasis pattern, where emphasis is present or absent in three- or six-beat periods. Similarly, high-groove music has also been found to have a positive effect on the stride length, stride time, and velocity compared to low-groove music (Leow et al., 2014). Groove is a musical characteristic associated with the clarity of the beat in the music (i.e., beat salience). Groove is defined as the desire to move: the higher the groove, the higher the desire to move, and vice versa (Madison, 2006).

Synchronization; Are Instructions Needed? Familiarization sessions have been shown to be important in order to get reliable task performances during experiments. In that context, the task may involve an explicit instruction to synchronize to the music, or, alternatively, the task may involve spontaneous (non-instructed) synchronization. In our review, two studies focused on spontaneous synchronization. Mendonça et al. (2014) showed that spontaneous synchronization did not occur without instructions at tempi higher than the participants' natural cadence. Buhmann et al. (2016a) also focused on spontaneous synchronization by providing music with a tempo as close as possible to the walking cadence, in order to induce a spontaneous optimal level of synchronization. To achieve that goal, they used the D-Jogger technology to automatically match the tempo of the music to the walking cadence. Participants were not instructed to synchronize, and the results showed that approximately half of them walked in optimal synchrony with the musical stimulus whereas the other half lost synchrony to some degree. Instructing participants to synchronize might have resulted in more synchronized trials. However, the disadvantage of imposing synchronization as a task is that it augments the cognitive demand, as synchronization can be seen as a supplementary task to the walking task. Such a dual task might be problematic for certain pathologies with cognitive impairments.

The Derivate: An Interdisciplinary Viewpoint Given the interdisciplinary nature of the study topic, we believe that a systematic review can provide a helicopter view on methods and data that is beneficial for continued empirical research. Such a viewpoint is beneficial for pinpointing general weaknesses in the overall scientific approach (cf. our discussion about variability). In addition, such a viewpoint may suggest new interdisciplinary research lines, such as in the domain of neurological rehabilitation. For example, the measures of auditory-motor coupling and synchronization can be used to guide and prompt clinicians and researchers to include the assessment methods and measures of auditory-motor coupling and synchronization in studies investigating deviant gait patterns in neurological populations and the impact of auditory stimuli on them. In turn, experimental data in patient groups with disordered neural control will contribute to a deeper understanding of the different dynamics of the auditory-motor coupling. We foresee a complementary viewpoint: these auditory-motor coupling and synchronization metrics can perhaps function as a diagnostic tool to assess certain coordination qualities in the movements of neurological patients with cerebellar dysfunctions such as ataxia.
All the above may result in the development of innovative and promising clinical interventions using tailored auditory conditions for neurological gait rehabilitation. Yet, the inconsistent terminology found in the identified metrics of auditory-motor coupling and synchronization over different studies is problematic. Parameters have many synonyms, and these synonyms may hamper the fluent understanding of some studies in this domain. For example, relative timing, asynchrony, and phase control all refer to the interval between the beat and the foot contact, which is measured in milliseconds. Yet different terms are used, and different formulas are used to calculate the measured outcomes. An explanation for the non-unanimous terminology could be the lack of standardized equipment. In our review, we traced two hardware-software systems, the D-Jogger (Moens et al., 2014) and the Walk-Mate (Miyake, 2009), but these systems are custom made. The remaining studies used commercially available or lab-constructed technology, with proper commercial standards. We believe that the differences in terminology for the metrics that address similar outcome measures may be a consequence of following commercial standards. It is likely that this terminological confusion may hamper interdisciplinary progress, in particular the translation of empirical findings across disciplines. The confusion can be narrowed down by coming to a consensus on terminology, as well as by a willingness to understand each other's disciplinary terminology, adopting an interdisciplinary viewpoint at the cost of a small (time) effort to learn the terminology of other disciplines. Overall, we believe that the interdisciplinary viewpoint provides a powerful potential to level up the scientific achievements within the individual disciplines. For example, for the discipline of neurological rehabilitation, understanding the complex parameters of neurodegenerative diseases, as well as having access to calculate and measure these parameters in an efficient way, using a cross-disciplinary consensus on terminology and metrics, will be an enrichment. Coupled with appropriate interpretations, the parameters will provide novel paths to understand clinical dysfunctions from different perspectives and, in turn, advance current clinical practice. In a similar scenario, the interdisciplinary viewpoint offers disciplines such as musicology opportunities to study the underlying mechanisms of movement-music entrainment from a neuro-socio-scientific viewpoint of brain, agency, motivation, and expression. This interdisciplinary viewpoint is applicable to many other disciplines as well (for example, the cognitive sciences and engineering).

CONCLUSION In this systematic review, several metrics have been identified which measure the timing aspect of auditory-motor coupling and synchronization with auditory stimuli in healthy and neurological populations during walking. The application of these metrics may enhance the current state of the art and practice across neurological gait rehabilitation. These metrics also have current shortcomings. Of particular pertinence is our recommendation to consider variability in data from a time-series rather than a time-windowed viewpoint. A robust community of researchers across different disciplines may be a way to achieve genuine interdisciplinarity. We need it in view of the promising practical applications from which the studied populations may highly benefit in terms of personalized medical care, assistance, and therapy.
AUTHOR CONTRIBUTIONS LM, IW, and PF were involved in the search strategy establishment and the application of the methodology of the systematic review. LM and IW extracted the data from the included articles. LM, IW, JB, PF, and ML were involved in formulating the tables, formulas, and figures created in this paper. LM, JB, PF, and ML had direct intellectual contributions to the written text. FUNDING This review was conducted within the context of the first author's Ph.D. project.
9,594.2
2018-06-26T00:00:00.000
[ "Medicine", "Engineering" ]
Scattering of universal fermionic clusters in the resonating group method Mixtures of polarised fermions of two different masses can form weakly-bound clusters, such as dimers and trimers, that are universally described by the scattering length between the heavy and light fermions. We use the resonating group method to investigate the low-energy scattering processes involving dimers or trimers. The method reproduces approximately the known particle-dimer and dimer-dimer scattering lengths. We use it to estimate the trimer-trimer scattering length, which is presently unknown, and find it to be positive. I. INTRODUCTION In the last decade, the use of controlled Feshbach resonances in ultra-cold atom experiments have enabled the study of low-energy quantum systems of particles interacting with large scattering lengths. Close to a Feshbach resonance, the interparticle scattering length is much larger than the range of interparticle forces. As a result, the low-energy properties of these systems are universal, in the sense that they depend only upon a few parameters, such as the scattering length [1], and the three-body parameter [2,3] in systems exhibiting the Efimov effect [4][5][6]. Moreover, close to Feshbach resonances, atoms can be associated into clusters of universal character: diatomic molecules called Feshbach molecules that are a realisation of universal dimers [7][8][9][10][11][12][13][14], triatomic molecules that are a realisation of Efimov states [15][16][17]. Theory predicts the existence of a variety of other universal clusters of larger number of particles [18][19][20] that are expected to be observed experimentally in the future [21]. The few-body properties, in particular the scattering properties of clusters, can play a crucial role in the identification and stability of the many-body ground states of these systems. For instance, the stability of a gas of universal dimers made of fermions was observed [8,[10][11][12][13][14] and explained theoretically [22,23] by exact four-body calculations for two scattering dimers. Although it is sometimes feasible to calculate exactly the wave function of an N -body cluster [4,24], the exact computation of the scattering properties of two clusters is generally out of reach for N ≥ 3. In the context of nuclear and sub-nuclear physics, a broad array of approximation schemes have been successfully developed to address similar problems. One of the leading techniques is the so-called Resonating Group Method (RGM), introduced by Wheeler [25], to study light nuclei, such as 16 O and 8 Be, modelled as clusters of α particles. Since then, it has been employed in a variety problems including the scattering of light nuclei, the stability of light nuclei to external nucleon scattering and nuclear particles [26,27]. More recently, it has been used [28][29][30] to study low-energy scattering, and bound states, of baryon-baryon and other multi-quark cluster configurations. In the single-channel approximation, the RGM constructs the low-energy scattering wave function of two or more scattering clusters from the wave functions of the individual clusters, while preserving the full antisymmetrization of wave functions. This gives an effective potential between the clusters that can be used to treat scattering as well as bound states. It is especially accurate in situations in which single clusters are not strongly altered by the scattering process. 
Here, we propose to apply this method to the low-energy scattering of universal fermionic clusters that are relevant to ultra-cold atoms close to Feshbach resonances. The paper is organised as follows. In section II, we review the essence of the RGM. In section III, we apply it to universal clusters whose scattering properties are known. In section IV, we apply the RGM to the yet unknown scattering of universal trimers.

A. General formalism Let us consider the scattering between a cluster A of n particles and a cluster B of N − n particles. It is assumed that the wave functions φ_A(1, 2, . . . , n) and φ_B(n + 1, n + 2, . . . , N) of these clusters are known. In the single-channel RGM, the N-body wave function Ψ describing the scattering process is constructed as the antisymmetrised product of the cluster wave functions and a wave function ψ(R) for the relative motion between the two clusters:

Ψ = S [ φ_A(1, . . . , n) φ_B(n + 1, . . . , N) ψ(R) ].    (1)

Here, S denotes the symmetrisation (or antisymmetrisation) operator that symmetrises (or antisymmetrises) the wave function under the exchange of identical particles. Symmetrisation is performed for bosonic particles, whereas antisymmetrisation is performed for fermionic particles. The vector R describes the relative position between the centres of mass of the two clusters. The idea behind this approximation is that the structure of the two clusters is not much altered during the collision, and the two clusters mix only through the exchange of identical particles. The purpose of the RGM is to determine the wave function ψ(R) for the relative motion of the clusters. This is done by applying the variational principle to the average quantity

⟨Ψ| (H − E) |Ψ⟩,    (2)

where H is the total hamiltonian and E is the total energy of the system. Requiring ψ to extremise the above quantity implies that for an infinitesimal variation δψ around ψ we have

δ⟨Ψ| (H − E) |Ψ⟩ = 0.    (3)

The variations δψ and its complex conjugate δψ* can be formally taken as independent variations, resulting in the following Euler-Lagrange equation of motion, ⟨S[φ_A φ_B δψ]|(H − E)|S[φ_A φ_B ψ]⟩ = 0, which can be simplified as

⟨φ_A φ_B δψ| (H − E) S |φ_A φ_B ψ⟩ = 0,    (4)

since the total hamiltonian H is invariant under the exchange of identical particles. The hamiltonian H consists of kinetic operators t_i for each particle and pairwise interaction terms V_ij for each pair of particles,

H = Σ_{i=1}^{N} t_i − t_c + Σ_{i<j} V_ij.    (5)

We have subtracted the kinetic operator t_c for the centre of mass, since it can be eliminated from the problem. The hamiltonian can be rewritten as

H = H_A + H_B + T_R + V_AB,    (6)

where H_A and H_B denote the internal hamiltonians of the clusters A and B, T_R denotes the kinetic operator for the relative motion of the two clusters, and V_AB is the sum of interactions between the two clusters. The wave functions φ_A and φ_B are eigenstates of H_A and H_B with eigenvalues E_A and E_B, i.e.

H_A φ_A = E_A φ_A,  H_B φ_B = E_B φ_B.    (7)

There are two ways this can be used to simplify the equation of motion Eq. (4). Either one applies the hamiltonian to the wave functions φ_A and φ_B on the right-hand side, or to the wave functions φ_A and φ_B on the left-hand side. We refer to these two equivalent procedures as the RGM1 and RGM2. Although they result in formally different equations, their solutions are the same. In the RGM1, one writes

(H − E) S |φ_A φ_B ψ⟩ = S (T_R + V_AB − ℰ) |φ_A φ_B ψ⟩,    (8)

and using Eqs. (6) and (7), Eq. (4) becomes

⟨φ_A φ_B| (T_R + V_AB − ℰ) S |φ_A φ_B ψ⟩ = 0,    (9)

where ℰ = E − E_A − E_B is the scattering energy between the two clusters. The symmetrisation operator S can be written as S = 1 + S′, i.e. the action of S gives one term leaving the wave function unchanged, and other terms where particles are exchanged. Thus, Eq. (9) can be written as

[T_R + V_D(R) − ℰ] ψ(R) + ∫ dR′ [V_EX1(R, R′) − ℰ K(R, R′)] ψ(R′) = 0,    (10)
where we have introduced a local potential V_D called the direct potential, a non-local potential V_EX1 called the exchange potential, and a non-local operator K called the exchange kernel,

V_D(R) = ⟨φ_A φ_B| V_AB |φ_A φ_B⟩_R,    (11)
V_EX1(R, R′) = ⟨φ_A φ_B| (T_R + V_AB) S′ |φ_A φ_B⟩_{R,R′},    (12)
K(R, R′) = ⟨φ_A φ_B| S′ |φ_A φ_B⟩_{R,R′},    (13)

where ⟨· · ·⟩ denotes integration over the internal coordinates of the clusters at fixed relative coordinates R and R′. In the RGM2, one applies the hamiltonian Eq. (6) to the wave functions φ_A φ_B on the left-hand side of Eq. (4), using Eq. (7). This gives

⟨(T_R + V_AB − ℰ) φ_A φ_B| S |φ_A φ_B ψ⟩ = 0,    (14)

which can be written as

[T_R + V_D(R) − ℰ] ψ(R) + ∫ dR′ [V_EX2(R, R′) − ℰ K(R, R′)] ψ(R′) = 0,    (15)

where the exchange potential V_EX2 is defined by

V_EX2(R, R′) = ⟨(T_R + V_AB) φ_A φ_B| S′ |φ_A φ_B⟩_{R,R′}.    (16)

Hence, the RGM consists in calculating the potentials V_D, V_EX and the kernel K, and solving the equation for the relative motion between the two clusters, either Eq. (10) or (15). This is of course a great simplification over solving the full N-body equation. Nonetheless, the determination of V_D, V_EX and K involves 3(n − 1) + 3(N − n − 1) = 3(N − 2)-dimensional integrals whose computation may be costly for large N.

B. RGM with contact interactions In the following, we apply the RGM to the scattering of universal clusters. Their universal character is described by the zero-range theory, which corresponds to the limit of the range of interaction being much smaller than the s-wave scattering length a. In this limit, the interaction potential V_ij between two particles appearing in Eq. (5) and included in the term V_AB in Eq. (6) can be approximated by a contact potential,

V_ij(r) = g δ³(r) (∂/∂r) r ·,    (17)

with the coupling constant

g = 2πℏ²a/µ.    (18)

Here, µ is the reduced mass of the two interacting particles, and (∂/∂r) r· is an operator regularising the 1/r divergence of the wave function when particles come into contact (r = 0). This potential binds two particles only for a > 0, and we restrict our consideration to this case throughout this paper. The presence of the three-dimensional Dirac delta function in the potential Eq. (17) reduces by three the dimensionality of the integrals. The dimensionality of the integrals Eqs. (11), (12), and (16), for V_D, V_EX1 and V_EX2, is thus reduced to 3(N − 3).

C. Partial-wave expansion To proceed further, one can perform a partial-wave expansion in spherical harmonics Y_ℓ^m in the RGM1 and RGM2 equations. The relative wave function is expanded as

ψ(R) = Σ_{ℓ,m} ψ_ℓm(R) Y_ℓ^m(R̂),    (19)

where R̂ denotes the orientation of R. Then, the RGM1 equation, Eq. (10), becomes the following set of coupled equations:

[T_ℓ − ℰ] ψ_ℓm(R) + Σ_{ℓ′,m′} { V_D^{ℓm,ℓ′m′}(R) ψ_ℓ′m′(R) + ∫₀^∞ dR′ R′² [V_EX1^{ℓm,ℓ′m′}(R, R′) − ℰ K^{ℓm,ℓ′m′}(R, R′)] ψ_ℓ′m′(R′) } = 0,    (20)

and the RGM2 equation, Eq. (15), becomes the set of coupled equations

[T_ℓ − ℰ] ψ_ℓm(R) + Σ_{ℓ′,m′} { V_D^{ℓm,ℓ′m′}(R) ψ_ℓ′m′(R) + ∫₀^∞ dR′ R′² [V_EX2^{ℓm,ℓ′m′}(R, R′) − ℰ K^{ℓm,ℓ′m′}(R, R′)] ψ_ℓ′m′(R′) } = 0,    (21)

with the kinetic energy operator

T_ℓ = −(ℏ²/2µ_N) [ d²/dR² + (2/R) d/dR − ℓ(ℓ + 1)/R² ],    (22)

where µ_N is the reduced mass of the two clusters, and the projected potentials and kernel

V_D^{ℓm,ℓ′m′}(R) = ∫ dR̂ Y_ℓ^m(R̂)* V_D(R) Y_ℓ′^m′(R̂),    (23)
V_EX^{ℓm,ℓ′m′}(R, R′) = ∫ dR̂ dR̂′ Y_ℓ^m(R̂)* V_EX(R, R′) Y_ℓ′^m′(R̂′),    (24)
K^{ℓm,ℓ′m′}(R, R′) = ∫ dR̂ dR̂′ Y_ℓ^m(R̂)* K(R, R′) Y_ℓ′^m′(R̂′).    (25)

The dimensionality of integration in Eqs. (23), (24) and (25) is correspondingly reduced by the angular projections.

D. Local approximation It turns out, as we shall see in the cases treated below, that the contribution from the non-local kernel K is often small and may be neglected. Moreover, in some cases, the exchange potentials V_EX1^{ℓm,ℓ′m′} and V_EX2^{ℓm,ℓ′m′} are nearly local and may be approximated by the local potentials

V_EX^{ℓm,ℓ′m′,local}(R) = ∫₀^∞ dR′ R′² V_EX^{ℓm,ℓ′m′}(R, R′).    (26)

Neglecting K and using the local form Eq. (26) of the exchange potentials constitutes the local RGM approximation. In this approximation, the RGM1 and RGM2 equations have the form of conventional Schrödinger equations:

[T_ℓ + V_D(R) + V_EX1^local(R) − ℰ] ψ_ℓm(R) = 0,    (27)
[T_ℓ + V_D(R) + V_EX2^local(R) − ℰ] ψ_ℓm(R) = 0.    (28)

Unlike the RGM1 and RGM2 equations, Eqs. (20) and (21), the local RGM1 and RGM2 equations, Eqs. (27) and (28), are not equivalent. Nevertheless, they often yield similar results, as we shall see in the following sections.

E. Scattering length and scattering volume After solving the RGM equations in partial waves, Eq. (20) or (21), or their local-potential approximation, Eq. (27) or (28), one obtains the partial-wave components ψ_ℓm(R) of the relative wave function ψ.
For zero scattering energy (ℰ = 0), one can extract the partial-wave scattering lengths from these components. From the s-wave component ψ00 one obtains the scattering length a through the asymptotic behaviour

ψ00(R) ∝ 1 − a/R,    (29)

and from the p-wave components ψ1m one obtains the scattering volume v through the asymptotic behaviour

ψ1m(R) ∝ R − 3v/R².    (30)

These formulas follow from the standard definition of the scattering phase shifts [31].

A. Universal dimers We consider universal dimers made of a polarised fermion of mass M and a polarised fermion of mass m. These dimers are two-body s-wave weakly-bound states. The normalised wave function ϕ(r) for the relative motion of the two particles inside the dimer is given by

ϕ(r) = e^(−r/a) / (r √(2πa)).    (31)

B. Scattering of a dimer and a particle First, we consider the scattering of a universal dimer with a fermionic particle of mass M. To apply the RGM to this case, we set φ_A = ϕ given by Eq. (31), φ_B = 1, and the interaction potential given by Eq. (17), assuming that there is no interaction between identical fermions. The antisymmetrisation operator in the calculation of the exchange potentials and kernel is obtained by considering all possible permutations of identical fermions. In this case, there are two possibilities, as shown in Fig. 1: no permutation and the exchange of the two fermions of mass M. It follows that the direct and exchange potentials of the RGM equations, Eqs. (10) and (15), are given by integrals of the contact interaction over the dimer wave function, Eqs. (32)-(34), where κ = M/m is the mass ratio. The exchange kernel, Eqs. (35)-(36), is given by a similar overlap integral. The kinetic operator in Eqs. (10) and (15) is given by T_R = −(ℏ²/2µ_N)∆_R, with µ_N = M(M + m)/(2M + m) the fermion-dimer reduced mass. The RGM1 and RGM2 equations can be solved by performing the partial-wave expansion of section II C. Here, the potentials Eqs. (32)-(34) do not couple partial waves, and for a given partial wave (ℓ, m) we obtain from Eq. (25) the projected exchange potentials and kernel, Eqs. (39) and (40). The factor (−1)^ℓ in these expressions comes from the minus sign in the argument of ψ in Eqs. (33) and (34). Due to this factor, the exchange potential is repulsive for even partial waves, and attractive for odd partial waves. Moreover, Eqs. (39) and (40) show that the exchange potentials have an increasingly local character as the mass ratio κ increases. Their local approximation, given by Eq. (26), leads to the local potentials of Eqs. (41) and (42). We solve the resulting RGM and local RGM equations numerically by discretising the coordinate R.

s-wave scattering We first consider fermion-dimer scattering in the s wave, for which the effective potential is purely repulsive. The fermion-dimer s-wave scattering length a_fd is therefore always positive. It is shown in Fig. 2 as a function of the mass ratio κ. For the equal mass case (M = m), we obtain a value consistent with the exact result ≈ 1.17907a [32,33]. All RGM results are within 2% of the exact results, indicating that there is little excitation during the collision of a dimer and a fermion, the dimer remaining bound during the collision. Nonetheless, the exchange of particles is crucial. The dotted curve in Fig. 2 shows that including only the direct potential (neglecting the exchange kernel and potential) yields a much smaller scattering length. On the other hand, the exchange kernel K brings a significant difference only for mass ratios smaller than one, and may be neglected otherwise, as shown by the dashed red and blue curves in Fig. 2. As to the local approximation, it leads to results which are close to those of the RGM for sufficiently large mass ratios, as seen from the red and blue curves in Fig. 2.

p-wave scattering In the p-wave channel, the fermion and the dimer attract each other. This is due to the Efimov attraction [4,6] that results from the effective interaction between the two heavy fermions mediated by the light fermion.
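As a concrete illustration of the numerical procedure of sections II C-II E, the sketch below integrates a local-RGM-type radial equation at zero energy with the Numerov method and reads off the s-wave scattering length from the asymptote u(R) ∝ R − a, with u = Rψ00. The exponential model potential is only a stand-in for the actual V_D + V_EX,local of Eqs. (41) and (42), and the units ℏ = µ_N = a = 1 are assumptions of the example.

```python
import numpy as np

# Zero-energy s-wave radial equation u''(R) = (2*mu/hbar^2) V(R) u(R),
# with u = R*psi_00; the scattering length follows from u(R) -> C*(R - a).
# Units hbar = mu_N = 1; V0*exp(-R) is a stand-in for the RGM potential.
def scattering_length(V, R_max=40.0, h=1e-3):
    R = np.arange(h, R_max, h)
    u = np.zeros_like(R)
    u[0], u[1] = R[0], R[1]                 # regular solution, u ~ R near origin
    f = 2.0 * V(R)                          # u'' = f(R) u at zero energy
    g = 1.0 - (h**2 / 12.0) * f             # Numerov weights
    for i in range(1, len(R) - 1):          # Numerov recursion
        u[i + 1] = ((12.0 - 10.0 * g[i]) * u[i] - g[i - 1] * u[i - 1]) / g[i + 1]
    up = (u[-1] - u[-3]) / (2.0 * h)        # u'(R) by central difference
    return R[-2] - u[-2] / up               # a = R - u/u' from u = C*(R - a)

a_fd = scattering_length(lambda R: 5.0 * np.exp(-R))   # toy repulsive potential
print(a_fd)   # positive, as for the repulsive fermion-dimer s-wave channel
```

The same outward integration with the centrifugal term ℓ(ℓ + 1)/R² added to f would give the p-wave solution, from which the scattering volume follows by matching to Eq. (30).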
Although the Efimov attraction wins over the centrifugal barrier only for mass ratios M/m > κ_c ≈ 13.6069657 [6,24,34], resulting in an infinite discrete-scale-invariant tower of three-body bound states, it also makes the system attractive for lower mass ratios, resulting in an overall negative p-wave scattering length. As the mass ratio increases, the Efimov attraction strengthens, and two universal three-body bound states appear at mass ratios κ₁ = 8.17260 and κ₂ = 12.91743 [24]. At these mass ratios, fermion-dimer p-wave scattering is resonant and the p-wave scattering volume v_fd diverges, as shown in Fig. 3. In the RGM, the effective potential between the fermion and the dimer scattering in the p wave is also attractive, due to the factor (−1)^ℓ of Eqs. (39) and (40). The scattering volume calculated in the RGM is thus negative and very close to the exact one up to the mass ratio M/m ≈ 6. For the equal mass case (M = m), the RGM gives v_fd = −0.98a³, which is consistent with the exact result ≈ −0.96a³ [35]. Beyond the mass ratio ∼6, the RGM results deviate strongly from the exact results. This is explained by the fact that the resonance and the three-body bound state at M/m = κ₁ imply three-body correlations that are not fully captured by the RGM. Nevertheless, the RGM exhibits a similar resonance, but at a shifted mass ratio κ₁^(RGM) ≈ 9.5. This shows that the Efimov attraction, physically due to the exchange of the light fermion between the two heavy fermions, is partially captured by the mere antisymmetrisation of the wave function in the RGM, as suggested by Fig. 1. The local RGM equations reproduce approximately the RGM results for M/m < 6, as shown by the blue and red curves in Fig. 3. For larger mass ratios, the difference between the local RGM and full RGM results is substantial, and it is mainly due to the absence of the exchange kernel K in the local RGM equations, as shown by the dashed red and blue curves in Fig. 3.

C. Scattering of two dimers Now, we consider the scattering of two universal dimers. We thus apply the RGM equations for the two cluster wave functions φ_A = φ_B = ϕ given by Eq. (31) and the interaction potential given by Eq. (17), assuming again that there is no interaction between identical fermions. The antisymmetrisation operator in the calculation of the exchange potentials and kernel is obtained by considering all possible permutations of identical fermions. In this case, there are two possibilities, as shown in Fig. 4: no permutation and the exchange of the two fermions of mass M (which is equivalent to exchanging the two fermions of mass m). After some straightforward calculations, the direct and exchange potentials, as well as the exchange kernel of the RGM equations, Eqs. (10) and (15), are given by expressions in which we have set κ = M/m and λ = (κ + 1)⁶/(2κ)³. The kinetic operator in Eqs. (10), (15), (27) and (28) is given by T_R = −(ℏ²/2µ_N)∆_R, with µ_N = (M + m)/2 the dimer-dimer reduced mass. We solve the RGM1 and RGM2 equations, as well as their local approximation, by performing the partial-wave expansion of section II C and discretising the coordinate R. The potentials are repulsive for all partial waves. The resulting dimer-dimer s-wave scattering length is shown in Fig. 5 as a function of the mass ratio κ. For the equal mass case (M = m), we obtain a_dd = 0.752a, which is close to, although significantly different from, the exact result ≈ 0.6a [22,23,36-41].
This means that, compared to dimer-particle scattering, there is a bit more excitation during the collision of two dimers, although it remains small. On the other hand, exchange is less important than in the case of fermion-dimer scattering, as the major contribution to the scattering length comes from the direct potential, as seen from the dotted curve in Fig. 5. The exchange potentials have an increasingly local character as the mass ratio increases. We have calculated the scattering length with the RGM up to mass ratio 20. Beyond this mass ratio, the local character of the potential makes it difficult to solve the problem as a non-local one, since a high degree of discretisation is needed. The local RGM equations, on the other hand, are easier to solve. They give results which are very close to the RGM, as can be seen from the blue and red curves of Fig. 5, and can easily be extended to larger mass ratios.

Figure 6: Jacobi coordinates r and R describing a trimer made of two heavy fermions and a light fermion. The vector r = r₂ − r₁ is the relative position between particles 2 and 1, and the vector R = r₃ − (M r₂ + m r₁)/(M + m) is the relative position between particle 3 and the centre of mass of particles 1 and 2.

A. Universal trimers We now consider universal trimers made of two polarised fermions of mass M and a polarised fermion of mass m. Such trimers exist for a mass ratio M/m > κ₁ ≈ 8.17260. They rotate with one quantum unit of angular momentum, and can therefore be in three possible internal quantum states of rotation, labelled by the quantum number m ∈ {−1, 0, 1}. For a mass ratio M/m > κ_c ≈ 13.6069657 [6,24,34], the trimers are Efimov states [1,4], characterised by the scattering length a between the two different kinds of fermions and a three-body parameter. For a mass ratio M/m < κ_c, the trimers are Kartavtsev-Malykh states [24], characterised only by the scattering length a. We restrict our consideration to these states, and therefore to the range κ₁ < M/m < κ_c where a ground-state trimer exists. The trimer wave function is expressed as a function of the Jacobi vectors r and R shown in Fig. 6. To a good accuracy, the trimer wave function φ_m(r, R) is well approximated by the adiabatic hyperspherical form [24]: the product of a hyper-radial function f(R), a component that incorporates the angular momentum of the trimer through the spherical harmonic Y₁^m, and a hyperangular component ψ_ang given in hyperspherical coordinates in terms of a function s(R) determined by a transcendental equation [24]. The function f(R) is the solution associated with the lowest eigenvalue ε_trimer of the hyper-radial equation, and is normalised as ∫₀^∞ dR |f(R)|² = 1. The function C(R) is determined by the normalisation condition of the hyperangular component, which guarantees that the total wave function φ_m is normalised to unity.

B. Scattering of two trimers Trimers in the same rotational state are identical fermions and therefore scatter only in the p-wave channel at low energy. At sufficiently low energy, this p-wave scattering is negligible with respect to the s-wave scattering between trimers in different rotational states. For this reason, we focus on the latter in this paper. There are three possible pairs of different rotational states, {−1, 0}, {0, 1}, and {1, −1}, and they all lead to the same scattering length, because of the SU(3) symmetry of this system. However, this symmetry is artificially broken by the single-channel RGM if the rotational states are given by the usual spherical harmonics.
The different values of the scattering lengths for the different pairs of states would thus give an indication of the error of the single-channel RGM approximation. However, a more serious issue is that spherical harmonics are complex-valued and the RGM does not ensure the scattering length to be real. To circumvent this problem, we consider an alternative basis for the rotational states, which is the xyz basis formed by the rotational states with angular momentum projection zero on the three axes of space. Namely,

φ_x = (φ_{−1} − φ_{+1})/√2,  φ_y = i(φ_{−1} + φ_{+1})/√2,  φ_z = φ_0.

In an exact calculation, it makes no difference whether one uses the usual spherical harmonics or the xyz basis, but in the case of the RGM, the xyz basis ensures the results to be real, since the wave functions in this basis are all real, and restores the SU(3) symmetry as well. This is evident if one observes that the three pairs {xy}, {yz}, and {zx} can be transformed into each other by a rotation in space. To apply the RGM to this scattering problem, we set φ_A = φ_x and φ_B = φ_y (i.e., the two clusters are two trimers in the rotational states x and y). There are twelve possible permutations of identical fermions between the two trimers, as shown in Fig. 7. From this we obtain the expressions for the direct potential, the exchange potentials and the exchange kernel, which are given respectively by nine-, six-, and nine-dimensional integrals, Eqs. (49)-(51). In these expressions, R_i stands for (r_i, R_i); these variables are given explicitly in terms of r and R in the Appendix. The sign ∓ in Eqs. (49) and (50) is − for even scattering waves, and + for odd scattering waves. Since we are interested in s-wave scattering, only even waves are involved due to the conservation of parity, and thus ∓ = − in this case. Note that the asterisk in Eqs. (49)-(51) still denotes the complex conjugate, although in our calculations all wave functions are real. To compute these high-dimensional integrals, we resort to Monte Carlo integration using importance sampling. The total potential (the sum of the direct and exchange potentials) is anisotropic, due to the anisotropy of the trimers. To visualise this anisotropy, we plot in Fig. 8 the integrated potential as a function of the distance s and angle θ of the spherical coordinates (s, θ, ϕ); note that the potential does not depend on ϕ, by rotational symmetry along the z axis. Fig. 8 shows that the anisotropy of the potential is moderate. As a result, we only need to consider the partial waves ℓ = 0 and ℓ = 2 to get converged results. Fig. 8 also indicates that the potential is repulsive. This fact is confirmed by the numerical calculation of the potential in each partial wave given by Eqs. (23)-(25). As a result, the trimer-trimer s-wave scattering length is positive. We have found that the exchange potentials are to a good approximation local potentials. In view of the previous results for dimers, we substitute the exchange potentials by their local approximation given by Eqs. (25) and (26) and neglect the exchange kernel, which is costly to evaluate. We therefore use the local RGM1 and RGM2 equations, Eqs. (27) and (28). The resulting trimer-trimer s-wave scattering length is plotted in Fig. 9 as a function of the mass ratio κ. The results are similar to the dimer-dimer case. As in the dimer-dimer case, the local RGM1 and RGM2 results are very close, suggesting that the local approximation is enough to reproduce the RGM, and the contribution from the exchange of particles is small compared to the direct contribution.
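To give an idea of the importance-sampling step, the sketch below estimates a generic nine-dimensional integral with a Gaussian sampling density. The toy integrand merely stands in for the direct-potential integrand of Eq. (49), and the sampling width is an assumption of the example.

```python
import numpy as np

# Importance-sampled Monte Carlo estimate of a high-dimensional integral
# I = \int d^9x g(x), with g(x) peaked like the product of cluster profiles.
rng = np.random.default_rng(0)

def mc_integral(g, dim=9, sigma=1.0, n_samples=200_000):
    x = rng.normal(0.0, sigma, size=(n_samples, dim))     # draws from p(x)
    log_p = (-0.5 * np.sum(x**2, axis=1) / sigma**2
             - 0.5 * dim * np.log(2.0 * np.pi * sigma**2))
    weights = g(x) / np.exp(log_p)                        # g(x) / p(x)
    est = weights.mean()                                  # E_p[g/p] = I
    err = weights.std(ddof=1) / np.sqrt(n_samples)        # statistical error bar
    return est, err

# toy integrand: short-range "potential" times a squared cluster profile
g = lambda x: np.exp(-np.sum(x**2, axis=1)) * np.exp(-np.linalg.norm(x[:, :3], axis=1))
print(mc_integral(g))
```

Choosing the sampling density to mimic the trimer profiles keeps the weights g/p nearly flat, which is what makes the nine-dimensional integrals tractable.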
However, unlike the dimer-dimer case, the scattering length decreases with the mass ratio. This is due to the fact that the binding energy of the trimers increases, and thus their size reduces, as the mass ratio increases. The decrease of the scattering length is therefore a consequence of the decrease of the scattering cross section, due to the decreased size of the trimers.

V. CONCLUSION We have applied the resonating group method to the scattering of universal clusters which are described by zero-range interactions. We have found that the single-channel RGM is relevant to clusters made of fermions. It reproduces qualitatively, and in some limits quantitatively, the exact results for scattering involving universal dimers. We have also applied the single-channel RGM to the scattering of universal trimers. It is found to be similar to the scattering of dimers: there is little contribution from the exchange of particles and the effective interaction is repulsive, unlike the scattering of a fermion and a dimer, where exchange is dominant and produces an attraction related to the Efimov effect. As a consequence, we obtain a positive trimer-trimer s-wave scattering length. This result has implications for the nature and stability of the ground state of a mixture of heavy and light fermions, which are to be discussed in a separate work. The validity and accuracy of the present RGM calculations are limited by the single-channel approximation. In particular, it is likely that trimers excite into the nearby dimer-particle continuum during their collision, by analogy with nuclear systems where excited channels play an important role [42]. Including these extra channels, i.e., states of the form Eq. (1) constructed with other eigenstates of the n-body and (N − n)-body subsystems, should converge to the exact results. It remains however numerically challenging to go beyond the single-channel approximation for clusters of more than two particles. As it stands, the single-channel RGM can already give useful insights on the interactions between universal clusters. It could be used to further investigate similar problems, such as the scattering of dimers and trimers, involving unpolarised fermions or three-component fermions.
6,301.4
2015-07-23T00:00:00.000
[ "Physics" ]
Blockchain-Based Healthcare Workflow for Tele-Medical Laboratory in Federated Hospital IoT Clouds In a pandemic situation such as that we are living at the time of writing of this paper due to the Covid-19 virus, the need of tele-healthcare service becomes dramatically fundamental to reduce the movement of patients, thence reducing the risk of infection. Leveraging the recent Cloud computing and Internet of Things (IoT) technologies, this paper aims at proposing a tele-medical laboratory service where clinical exams are performed on patients directly in a hospital by technicians through IoT medical devices and results are automatically sent via the hospital Cloud to doctors of federated hospitals for validation and/or consultation. In particular, we discuss a distributed scenario where nurses, technicians and medical doctors belonging to different hospitals cooperate through their federated hospital Clouds to form a virtual health team able to carry out a healthcare workflow in secure fashion leveraging the intrinsic security features of the Blockchain technology. In particular, both public and hybrid Blockchain scenarios are discussed and assessed using the Ethereum platform. Introduction Recent advancements in Information and Communication Technology (ICT) have paved the way toward new innovative tele-healthcare services able to face the growing demand of even more accessible medical treatments [1,2]. Moreover, in the pandemic condition such as that we are living at the time of writing of this paper due to the Covid-19 virus, the need of tele-healthcare service becomes dramatically fundamental to reduce the movement of patients, thence reducing the risk of infection. However, the recent innovation brought by Cloud computing and Internet of Things (IoT) paradigms have been only partially taken into consideration by hospitals and more in general by medical centers so far. In this regard, a crucial aspect that has slowed down the wide adoption of such ICT paradigms in hospitals has regarded integrity, security and privacy of exchanged data. Considering the healthcare domain, it is fundamental that shared pieces of clinical data must be certified and not corrupted to prevent intentional or accidental illegal data manipulation. Furthermore, patients' privacy must be guaranteed. In recent years, Cloud computing and IoT paradigms along with the concept of the federation have been combined so that different variants born. The first paradigm variation regarded Cloud federation that was defined as a mesh of Cloud providers that are interconnected to provide a universal decentralized computing environment where everything is driven by constraints and agreements in a ubiquitous, multi-provider infrastructure [3]. With the advent of IoT, the IoT Cloud paradigm raised. It was defined as a distributed system consisting of a set of smart embedded devices interconnected with a remote Cloud infrastructure, platform or software through the Internet able to provide IoT as a Service (IoTaaS). Furthermore, the natural evolution of the latter brought to the concept of IoT Cloud federation referred as an ecosystem composed of small, medium and large IoT Cloud providers able to federate themselves to gain economies of scale and to enlarge their processing, storage, network, sensing and actuating capabilities to arrange more flexible IoTaaS [4]. 
The healthcare domain can benefit from these paradigms to improve clinical services and push down management costs through the creation of Hospital IoT Clouds [5] able to federate themselves. In this paper, we focus on the medical laboratory as a case study. It is an applied-science laboratory, typically located in a hospital or a clinical centre, where clinical pathology exams are carried out on clinical samples to obtain information about the health of a patient for the diagnosis, treatment and prevention of diseases. Blood tests (e.g., complete blood count (CBC), glycaemia and so on) are performed by biomedical laboratory health technicians directly in the clinical laboratory, and results are validated and analyzed by doctors to define therapies. Specifically, leveraging the IoT Cloud federation paradigm, we propose the emerging concept of the tele-medical laboratory. It is a medical laboratory where clinical exams are performed on patients directly in a hospital by technicians through IoT medical devices interconnected with a Hospital Cloud system, and results are automatically sent through the hospital Cloud to doctors of federated hospitals for validation and/or consultation. Biomedical laboratory health technicians, nurses, doctors and other clinical personnel belonging to different Federated Hospital IoT Clouds (FHCs) cooperate to form a virtual healthcare team able to carry out a healthcare workflow. However, one of the major concerns about the accomplishment of such a workflow regards how to guarantee the non-repudiation and immutability of all health decisions [6]. In recent years different solutions have been proposed to solve this issue: among these, the Blockchain technology, thanks to its intrinsic features of data non-repudiation and immutability, has aroused great interest in both the scientific and industrial communities. One of the major applications of Blockchain is the smart contract, i.e., a computer protocol aimed at digitally facilitating, verifying and enforcing the negotiation of an agreement between subjects without the need for a certifying third party. Blockchain has been increasingly recognized as a technology able to address existing information access problems in different application domains, including healthcare. In fact, it can potentially enhance the perception of safety around medical operators, improving access to hospital Cloud services through greater transparency, security, privacy, traceability and efficiency. Considering the tele-medical laboratory scenario, smart contracts can make the transactions related to the healthcare workflow trackable and irreversible. Specifically, an architecture blueprint and a system prototype of an FHC service addressing the healthcare workflow of a tele-medical laboratory scenario are proposed. Special emphasis is given to Blockchain, comparing both public and hybrid (private/public) network scenarios using the Ethereum platform to assess both processing time and economic cost. The latter is necessary because the Ethereum public network available over the Internet requires that users (in our case, federated hospitals) pay a fee to perform each transaction. The remainder of this paper is organized as follows. A brief overview of the most recent initiatives about the adoption of Blockchain in healthcare is provided in Section 2. Motivations are discussed in Section 3. 
A blueprint of the FHC architecture is presented in Section 4, whereas one of its possible implementations is described in Section 5. Experiments specifically focusing on Blockchain, comparing public and hybrid network scenarios, are discussed in Section 6. Section 7 concludes the paper, also outlining future directions. Related Work Recently, the role of Blockchain technology in the healthcare domain has been surveyed in several scientific works [7][8][9][10][11]. Blockchain can drastically improve the security of hospital information systems, as discussed in [12][13][14][15]. However, up to now, most of the scientific initiatives are either theoretical or at an early stage, and it is not always clear which protocols and framework components should be used to carry out system implementations that can be deployed in real healthcare environments. Blockchain has been increasingly recognized as a tool able to address existing open information access issues [16]. In fact, it is possible to improve access to health services by using the Blockchain technology to achieve greater transparency, security, privacy, traceability and efficiency. In this regard, a solution adopting Blockchain with the purpose of guaranteeing authorized access to patients' medical information is discussed in [17]. In particular, mechanisms able to preserve both a patient's identity and the integrity of his/her clinical history are proposed. Another application of Blockchain regards the supply chain in the pharmaceutical sector and the development of measures against counterfeit drugs. While the development of new drugs involves substantial costs related to studies evaluating the safety and updating of the drug, the use of smart contracts guarantees informed consent procedures and allows certifying the quality of data [18]. An efficient data-sharing scheme, called MedChain, is proposed in [19]. It combines Blockchain, digest chain and structured P2P network techniques to overcome the efficiency issues in healthcare data sharing. Experiments show that such a system can achieve higher efficiency and satisfy the security requirements of data sharing. As discussed in [20], different medical workflows have been designed and implemented using the Ethereum Blockchain platform, involving complex medical procedures like surgery and clinical trials. In particular, a smart contract system for healthcare management has been studied, also estimating the associated costs in terms of feasibility. A framework that integrates IoT networks with a Blockchain to address potential privacy and security risks for data integrity in healthcare is discussed in [21]. A medical IoT device, represented by a Raspberry Pi 3 Model B+, is attached to the patient's body to monitor his/her vital parameters, which are stored in an off-chain database that is accessed by doctor, pharmacy and insurance company via a DApp. All transactions take place through smart contracts in a permissioned Blockchain system implemented with Ethereum. Differently from the aforementioned scientific initiatives, which are mainly based on a public Blockchain network approach, in this paper we focus on how a hybrid Blockchain network approach (mixing private and public ones) can be used to carry out the healthcare workflow of a tele-medical laboratory running in an FHC environment. 
Moreover, we demonstrate that our hybrid Blockchain network approach allows reducing the number of required transactions, hence improving processing time and reducing economic cost. Motivation In this Section, after a brief overview of recent advances in medical laboratory devices, we discuss the advantages of the tele-medical laboratory service. Recent Advancements in Medical Laboratory Devices A medical laboratory device is equipment able to perform several blood tests, including CBC, Basic Metabolic Panel, Complete Metabolic Panel, Lipid Panel, Thyroid Panel, Enzyme Markers, Sexually Transmitted Disease Tests, Coagulation Panel and DHEA-Sulfate Serum Test. Currently, there are many medical laboratory devices available on the market. A classification can be made considering "connected" and "not connected" devices. By connected devices, we mean medical laboratory equipment including USB and network (wired and/or wireless) interfaces and able to export and send results to other devices, whereas by not connected devices, we mean medical laboratory devices without any interface for data transmission. In the following, we provide an overview of the major connected medical laboratory devices on which future tele-medical laboratory services can be based. The Telemedcare Clinical Monitoring Unit (CMU) [22] is a medical device able to perform blood pressure, pulse oximetry and blood glucose exams. Enverse [23] is a device able to perform continuous glucose monitoring. It consists of a chip that is installed subcutaneously on the patient and connected with a mobile app. Med-Care [24] is an integrated solution for the auto-monitoring of glycemia that works with both web and mobile systems and can send alerts via email or SMS. HemoScreen [25] is a low-cost portable haematology analyzer which performs a complete blood count at the point of care, including a local web interface. Samsung Labgeo PT10S [26] is a portable clinical chemistry analyzer that improves efficiency by saving time for clinicians and patients through fast, easy and accurate blood analysis. It includes an Ethernet interface to export exam results to an external Personal Computer (PC). All the aforementioned devices require a blood sample in order to perform exams. Currently, alternative non-invasive experimental devices able to perform blood tests are the subject of study in both the academic and industrial healthcare communities. Towards the Tele-Medical Laboratory The tele-medical laboratory allows performing blood exams, result validation, diagnosis and therapy assignment tasks in departments located in different hospitals. This is possible through the creation of a virtual healthcare team composed of biomedical laboratory health technicians and doctors belonging to different federated hospitals. Cooperation is possible through a federation of FHCs. A hospital federation can involve several satellite clinics belonging either to the same healthcare organization or to different ones. An example of a healthcare organization including different hospitals is the provincial healthcare organization of Messina (Italy), also referred to as ASP Messina. As shown in Figure 1, it includes eight health districts, including Messina, Taormina, Milazzo, Lipari, Barcellona Pozzo di Gotto, Mistretta and Sant'Agata di Militello. The health district of Lipari is located on the island of Lipari and provides a limited number of health services. It offers first aid to patients through an emergency room and a medical laboratory of clinical pathology. 
Due to the limited number of health departments, patients with particular diseases are typically transferred to the nearby health districts of Milazzo or Barcellona Pozzo di Gotto (by helicopter in urgent cases) if required. In this scenario, a tele-medical laboratory service could help the accomplishment of a clinical workflow involving a virtual healthcare team including technicians and doctors belonging, for example, to the Lipari, Milazzo and Barcellona Pozzo di Gotto districts. In particular, blood tests could be performed in the medical laboratory of Lipari by biomedical laboratory health technicians, and results transmitted through the FHC ecosystem to a doctor of the Barcellona Pozzo di Gotto district for validation. Furthermore, leveraging the FHC environment, an additional consultation could be held with a doctor of the Milazzo district. Defining as "home hospital" the hospital that is physically reached by the patient, the generic healthcare workflow accomplishing the aforementioned scenario implies a number of phases. Non-repudiation and immutability of all health decisions is a fundamental concern that must be addressed for the accomplishment of such a healthcare workflow. In this regard, the Blockchain technology, through smart contracts, can make all transactions related to the healthcare workflow trackable and irreversible. In the remainder of this paper, we will focus on this aspect. System Design It is important to guarantee that only authorized members of the virtual healthcare team are allowed to take actions, because a wrong decision can lead to a worsening of the clinical condition or the death of a patient. Therefore, the FHC system has to guarantee that all actions performed by virtual healthcare team members are trackable and irreversible. To achieve this goal, the following technologies are fundamental: • Blockchain engine: to use the features of a decentralized and distributed certification system with the technology offered by the development and coding of smart contracts. • NoSQL database: to exploit the potential of a document-oriented distributed database able to store and manage patients' data and diseases through tags for fast and efficient search, and to store Blockchain transaction hashes and links to files stored in Cloud Storage. Figure 2 describes the FHC architecture. The system entry point of each hospital Cloud is a dedicated Web Server Gateway Interface (WSGI), where pieces of electronic healthcare record data are created manually by the clinical personnel or automatically collected by IoT devices that can be spread over different federated hospitals. A Data Anonymization Module (DAM) is responsible for hiding the patient's personal data in electronic health records. This is achieved by decoupling the patient's personal data from the electronic health records, storing the related pieces of information in different databases. To bind the electronic health record to the patient, a patient's anonymized identification number (patient_id) is generated by the DAM and stored within the electronic health record. Thus, only electronic health records are shared between FHCs, whereas patients' personal data are never shared, to preserve their privacy. All the produced clinical documentation, including electronic health records, is uploaded to a Cloud Storage containing a patient_id to hide the patient's identity. 
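A minimal sketch of how such a Data Anonymization Module might decouple personal data from health records is given below. The database names, collection names, and function are illustrative assumptions for this paper's architecture, not the authors' actual implementation.

```python
import uuid
from pymongo import MongoClient

# Illustrative setup: the local instance (never shared) holds personal data,
# while the federated instance holds only anonymized health records.
local_db = MongoClient("mongodb://localhost:27017")["hospital_local"]    # hypothetical
shared_db = MongoClient("mongodb://fhc-shared:27017")["fhc_federation"]  # hypothetical

def anonymize_record(personal_data: dict, health_record: dict) -> str:
    """Decouple personal data from a health record; return the patient_id."""
    patient_id = uuid.uuid4().hex  # anonymized identification number
    # Personal data never leaves the local database.
    local_db.patients.insert_one({"patient_id": patient_id, **personal_data})
    # Only the anonymized record, bound to the patient via patient_id, is shared.
    shared_db.records.insert_one({"patient_id": patient_id, **health_record})
    return patient_id
```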
A Blockchain engine is responsible for storing information on the Blockchain to certify data non-repudiation and the immutability of treatment details, guaranteeing accountability and authenticity. The transaction hash resulting from the mining process is stored in the NoSQL document-oriented database as an attribute of the treatment. A local database instance containing both patient and hospital personnel data is isolated from other federated hospitals, because these data never need to be shared. Treatment details are anonymized and stored in a database shared with the other participants in the FHC environment. The whole architecture is deployed using container virtualization to simplify installation, configuration and maintenance in each FHC. Hospitals belonging to a federation cooperate to ensure that appropriate therapies and procedures are carried out. Figure 3 shows the sequence diagram describing an example of a healthcare workflow including three federated hospitals, where a tele-medical laboratory service is provided. A generic patient who requires a medical visit reaches the emergency room of a home hospital which, after evaluating the urgency of the case, identifies an available doctor. In his/her turn, the patient is visited by the assigned doctor, who prescribes some clinical analyses such as blood tests, which can be done in the home hospital medical laboratory by a biomedical laboratory health technician. If the doctor responsible for the medical laboratory is not available (as in the case of small hospital districts), a doctor belonging to federated hospital (1) validates the results shared via the federated Cloud storage. Then, such a doctor joins the virtual healthcare team along with the involved medical personnel of the home hospital. Since the doctor of the home hospital has doubts about the therapy to prescribe, he/she consults a doctor of federated hospital (2) via teleconference. After this consultation, a therapy is assigned to the patient. System Prototype The FHC architecture was designed to enable a virtual healthcare team to carry out any healthcare workflow such as that described in the previous Section. Figure 4 shows the main software components of a possible system prototype implementation. All requests coming from patients, nurses, technicians and doctors flow through the WSGI interface, developed with the Python web application framework Flask and deployed on the Gunicorn Python WSGI HTTP server. All the components are configured as Docker containers to take advantage of virtualization technology, allowing service portability, resiliency and automatic updates that are typical of a Cloud Infrastructure as a Service (IaaS). The WSGI provides a front-end that allows retrieving all existing patient information (such as personal details, disease and pharmaceutic codes, links to clinical documentation and Blockchain hash verification); adding new patients; and submitting new treatments specifying all the required pieces of information. Specifically, a web page is dedicated to the registration of a new patient, saving his/her primary personal information, and another web page is dedicated to the registration of a new treatment. It is possible to select the medical examination date, the patient and the doctor who performs the registration. All the produced clinical documentation is uploaded to a local instance of NextCloud storage, using a folder for each treatment which does not contain any patient personal data, but only a patient's anonymized identification number. 
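The treatment-registration flow just described could look like the following sketch using web3.py and pymongo; the contract address, ABI file, and function name addTreatment are hypothetical placeholders, since the paper does not publish its Solidity interface.

```python
import json
from web3 import Web3
from pymongo import MongoClient

w3 = Web3(Web3.HTTPProvider("http://ethereum-node:8545"))  # private or public node
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # hypothetical
CONTRACT_ABI = json.load(open("treatment_contract_abi.json"))    # hypothetical ABI
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)
treatments = MongoClient("mongodb://localhost:27017")["fhc_federation"]["treatments"]

def register_treatment(patient_id, doctor_id, disease_code, pharma_code):
    # Record the anonymized treatment on-chain through the smart contract.
    tx = contract.functions.addTreatment(
        patient_id, doctor_id, disease_code, pharma_code
    ).transact({"from": w3.eth.accounts[0]})
    receipt = w3.eth.wait_for_transaction_receipt(tx)
    # Store the mined transaction hash as an attribute of the treatment, so it
    # can later be verified (e.g., on etherscan.io for the public network).
    treatments.insert_one({
        "patient_id": patient_id,
        "doctor_id": doctor_id,
        "disease_code": disease_code,
        "pharma_code": pharma_code,
        "tx_hash": receipt.transactionHash.hex(),
    })
```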
Every change in the files or content of the folder is tracked, making it possible to keep a history of the documentation and its modifications. Recently, a few related works were published on data anonymization in a multi-Cloud storage environment considering a healthcare scenario; however, these do not guarantee that pieces of data are trackable and irreversible [27,28]. Since patients' sensitive data must be anonymized and health records and treatments must be trackable and irreversible, the related pieces of information were stored by combining a MongoDB NoSQL DataBase Management System (DBMS) with the Ethereum Blockchain platform. Therefore, all pieces of information are stored both in MongoDB and in the Ethereum network through smart contracts developed in Solidity. For the experimental purposes that will be discussed in Section 6, the Ethereum network was implemented in both public and hybrid configurations. In the first case, all FHCs share the public Ethereum network available over the Internet, whereas in the second case each FHC hosts a private Ethereum network to store local transactions and uses the public Ethereum network to synchronize the local transactions performed in each FHC. The smart contract accepts input parameters such as the anonymized patient id and doctor id, and the disease and pharmaceutic codes, and stores these pieces of information in a simple data structure. The hash code resulting from the mining of each transaction is stored in the MongoDB database and can be used for verification using services like etherscan.io. The NextCloud storage service is capable of detecting any modification that occurs to files or a folder using a listener called External script. It is then possible to store the fingerprint and timestamp of each modification in the database, thus making it possible to track the history of each treatment. This feature is important to guarantee data integrity. Experiments Currently, the most adopted Blockchain configuration is based on a public network approach. In this regard, Ethereum is one of the major Blockchain platforms. A public Ethereum instance is available over the Internet and requires the payment of a fee for the execution of each transaction. However, the Ethereum platform can also be downloaded and installed in a private network. In this paper, using Ethereum, the objective of our experiments was to verify whether the Blockchain system of an FHC ecosystem including a tele-medical laboratory service can be optimized in terms of both processing time and economic cost using the proposed hybrid network approach. Apart from processing time, it is important to highlight that, considering the Ether (ETH) cryptocurrency used in Ethereum, even a small saving of ETH can result in putting aside a relevant amount of money (e.g., USD or EUR) in just a few months. In the following, we provide a description of both considered approaches. • Ethereum public network: each healthcare treatment is recorded in the public Ethereum Blockchain network, where time-to-mine and cost are subject to Ethereum network traffic (the larger the queue, the more time is required to mine transaction data and the higher the cost for transaction management). • Ethereum hybrid network: each healthcare treatment is recorded in a private instance of the Ethereum Blockchain network consisting of at least one node for each FHC, and only one hash code, calculated as the MD5 of the last one hundred concatenated treatment transaction hashes, is written in the public Ethereum Blockchain network. 
In case the number of daily treatments is less than one hundred, the MD5 hash code is calculated over the concatenation of the transaction hashes of the last 24 hours' treatments. The result is a negligible waiting queue and ETH cost; on the other hand, there is a reduction of the mining power, as a reduced number of miners are present in the private network compared to the public one. The system assessment was conducted by analysing the total execution time required to perform a varying number of transactions in healthcare workflows. Each FHC was simulated considering a server with the following hardware/software configuration: Intel Xeon E3-12xx v2 @ 2.7 GHz, 4-core CPU, 4 GB RAM, running Ubuntu Server 18.04. Each test was repeated 30 times considering 95% confidence intervals, and the average results are plotted. Table 1 summarizes the experiment setup and average outcomes. Figure 5 shows the time-to-mine difference, expressed in seconds, between the two approaches considering new treatment registration requests. On the x-axis we report the number of treatment registration requests, whereas on the y-axis we report the processing time expressed in seconds. Looking at the graph, we can observe that up to roughly 80 treatment registration requests both configurations present a similar trend, whereas as the number of requests increases the hybrid Ethereum network shows better performance than the public one, thanks to the reduced waiting time in the mining queue. Figure 6 describes a cost comparison in Ether (ETH) for the two approaches. On the x-axis we report the number of treatment registration requests, whereas on the y-axis we report the cost expressed in Ether (ETH). From our tests, we observed that the average cost of a single transaction (i.e., a simple smart contract representing a new treatment memorization or modification) that has to be written in the public Ethereum Blockchain is roughly 0.0002 ETH. It can be noted that for a small number of transactions there is no perceptible advantage in preferring the proposed hybrid approach. This is because at least one transaction per day is written in the public Ethereum Blockchain. However, it is clear that the cost saving grows rapidly as the number of transactions increases, because only one public transaction every one hundred private treatments is paid with ETH cryptocurrency, resulting in an important cost saving for the FHC ecosystem. The test results demonstrate how the Ethereum hybrid network approach can be adopted to improve both processing time and cost saving, while maintaining the same level of accountability and data certification as the public approach through certification on the Blockchain. Conclusions and Future Work This paper demonstrated how a tele-medical laboratory service can be developed through a healthcare workflow running in an FHC environment leveraging Blockchain. Experimental results highlight that the performance of the Ethereum hybrid network certification system is improved in terms of cost and response time compared to the alternative public approach. The Blockchain technology is destined to evolve in the near future, improving system capabilities and robustness, and public test instances with different consensus protocols will be made available, with benefits for performance and scalability. 
In the pandemic condition that the authors are living through at the time of writing due to the Covid-19 virus, the need for tele-healthcare services becomes critical to reduce the movement of patients and thereby their risk of infection. We hope that with this paper we have succeeded in stimulating the attention of both the academic and industrial communities toward the adoption of Blockchain in the healthcare context, to speed up the development of innovative tele-healthcare services. In future developments, this work can be extended by integrating a comprehensive healthcare scenario with different involved organizations, such as pharmaceutical companies registering in the Blockchain all the phases of drug production until the sealing of the final package and shipment. Thus, when a patient buys a prescribed medicine it would be possible to link the patient with the medicine box, which would mean an important step towards the end of drug falsification and an important assurance for the end user, who can be identified in case a specific drug package is recalled. Funding: This research was funded by the Italian PON project "TALISMAN", grant ARS01_01116, and by the Italian Healthcare Ministry Young Researcher project "Do Severe acquired brain injury patients benefit from Telerehabilitation? A Cost-effectiveness analysis study", grant GR-2016-02361306.
6,029.8
2020-05-01T00:00:00.000
[ "Computer Science", "Medicine", "Political Science" ]
The UV sensitivity of the Higgs potential in Gauge-Higgs Unification In this paper, we discuss the UV sensitivity of the Higgs effective potential in a Gauge-Higgs Unification (GHU) model. We consider an $SU(\mathcal N)$ GHU on $\mathbf M^4\times S^1$ spacetime with a massless Dirac fermion. In this model, we evaluate the four-Fermi diagrams at the two-loop level and find them to be logarithmically divergent in the dimensional regularization scheme. Moreover, we confirm that their counter terms contribute to the Higgs effective potential at the four-loop level. This result means that the Higgs effective potential in the GHU depends on UV theories, just as in other non-renormalizable theories. I. INTRODUCTION The standard model (SM) of particle physics is a Yang-Mills theory symmetric under $SU(3)_c \times SU(2)_L \times U(1)_Y$ gauge transformations. In the SM, the gauge symmetry is spontaneously broken via the Higgs mechanism, which is caused by the nonzero vacuum expectation value (VEV) of the Higgs boson. As a result, physical quantities such as particle masses involve the Higgs VEV. By measuring physical parameters involving the Higgs VEV, the SM has been confirmed to be consistent with the phenomena observed at the Large Hadron Collider [1,2]. While the phenomenology below the electroweak (EW) scale is understandable within the SM, there is concern that the SM has difficulty explaining the scale hierarchy between the EW scale and UV theories such as the grand unified theory (GUT) or quantum gravity, because of the dangerous quadratic divergences associated with the Higgs boson. In a model with supersymmetry (SUSY) [3][4][5][6][7], there is a superpartner for each particle, which consequently cancels the quadratic divergences. Instead of the SUSY scenarios, we can invoke gauge symmetry in a non-SUSY theory defined on an extra-dimensional spacetime to protect the Higgs mass term. In GHU models, the Higgs bosons are identified with the Yang-Mills Aharonov-Bohm (AB) phases. Therefore, the Higgs boson has no potential at the tree level. Meanwhile, at the loop level, the Higgs potential is generated by the AB effect due to the non-simply connected topology of spacetime. This symmetry-breaking mechanism is called the Hosotani mechanism [11,13]. In the previous work [18], we confirmed that the Higgs potential does not suffer from divergences at the two-loop level in a non-Abelian gauge theory defined on $M^4 \times S^1$ spacetime, where $M^n$ is the $n$-dimensional Minkowski spacetime with $n \geq 1$ and $S^1$ is a circle. We also proceeded to discuss the finiteness of the Higgs potential on $M^5 \times S^1$ spacetime, which is related to this paper. Evaluating the four-Fermi diagrams at the one-loop level, we obtained logarithmic divergences contributing to the Higgs potential. Hence, the Higgs potential is UV sensitive on the six-dimensional spacetime. This is consistent with the non-renormalizability of higher-dimensional gauge theory. Based on this result, we concluded that the Higgs potential would also suffer from divergences in the five-dimensional spacetime. In this paper, we go back to $M^4 \times S^1$ spacetime and explicitly show that the Higgs potential depends on UV theories in an $SU(N)$ GHU model. Since there are no logarithmic divergences at the one-loop level in odd-dimensional theories, we consider divergences at the two-loop level in the dimensional regularization scheme. The four-Fermi diagrams, in practice, are evaluated at the two-loop level. 
We find that they are indeed logarithmically divergent and that their counter terms contribute to the Higgs potential at the four-loop level. This fact means that the Higgs potential is UV sensitive in this GHU model. Since we use a simple setup, the Higgs potential would generically be UV sensitive in other GHU models as well. The remainder of this paper is organized as follows. In Sect. II, we briefly describe the Hosotani mechanism along with the theoretical setup. In Sect. III, we explain how to evaluate the divergence of the Higgs potential and show that the Higgs potential receives a contribution from the counter terms to the divergences that cannot be subtracted by the renormalization of the gauge coupling. Finally, we summarize the results obtained in this paper in Sect. IV. II. HOSOTANI MECHANISM In this section, we describe the theoretical setup used in this paper and review the Hosotani mechanism. Since the Hosotani mechanism is a quantum effect related to the global structure of spacetime, let us consider $M^4 \times S^1$ as an example of a non-simply connected spacetime, where $M^4$ and $S^1$ are the four-dimensional Minkowski spacetime and a circle with radius $R$, respectively. We use coordinates $x^\mu$ with $\mu \in \{0,1,2,3\}$ for $M^4$ and $y \in [0, 2\pi R)$ for $S^1$. As mentioned above, on a spacetime with a hole, the fifth component of the gauge boson, $A^a_5$, has a VEV, expressed in terms of the Yang-Mills AB phases $\theta^a$ around $S^1$ and the coupling constant $g$ as $\langle A^a_5 \rangle = \theta^a/(2\pi g R)$. Here, $a$ denotes the group index. Throughout this paper, we use the background field method [23] for calculating the contributions to the Higgs effective potential, and $A^a_5$ is shifted by its VEV. Due to the compactified extra dimension, boundary conditions on the field functions are introduced. We consider a massless Dirac fermion, $\psi$, as the only type of matter field, and suppose that $A^a_M$ and $\psi$ satisfy $A^a_M(x^\mu, y + 2\pi R) = A^a_M(x^\mu, y)$ and $\psi(x^\mu, y + 2\pi R) = e^{i\beta}\psi(x^\mu, y)$, where $M \in \{0,1,2,3,5\}$ and $\beta \in [0, 2\pi)$. The Lagrangian we consider, with an $SU(N)$ gauge symmetry, consists of the Yang-Mills and fermion kinetic terms together with the gauge fixing terms, $\mathcal{L}_{\rm GF}$, and the Faddeev-Popov ghost terms, $\mathcal{L}_{\rm ghost}$; the fields are expanded in Kaluza-Klein (KK) modes labeled by $n \in \mathbb{Z}$. To shift the fifth component of a momentum, we consider gauge transformations of the form $\psi(x^\mu, y) \to e^{-i\theta^a \tau^a y/(2\pi R)}\,\psi(x^\mu, y)$, with the corresponding transformation of $A_M = A^a_M T^a$. Without any boundary conditions, arbitrary $\theta^a$ could be gauged away by the above gauge transformations. Under the boundary conditions, however, only transformations that are single-valued on $S^1$, i.e., whose transformation matrix equals the identity matrix $I$ after a full turn around the circle, are allowed. The remaining $\theta^a$ become physical degrees of freedom. Due to the gauge symmetry, $\theta^a$ has no potential at the tree level. Under the boundary conditions which characterize the non-simple connectedness of spacetime, however, its potential is generated by loop corrections, as shown in [11,[13][14][15][16][17][18][19]]. When the minimum point of the potential is nonzero, the gauge symmetry is dynamically broken and the gauge bosons become massive. This is how the Hosotani mechanism works. III. HIGGS POTENTIAL DIVERGENCE AT THE FOUR-LOOP LEVEL Up to the two-loop level, no divergence of the Higgs potential was found in the previous works [11,[13][14][15][16][17][18][19]]. However, since the higher-dimensional gauge theory is non-renormalizable, we evaluate the four-Fermi diagrams at the two-loop level, which turn out to be logarithmically divergent. To obtain the divergent part, we concentrate on loop momenta in the UV region. Ignoring the momenta lying on external lines, the divergent part carries an overall factor $1/(2\pi R)$, where $\alpha$ and $\gamma$ are spin indices of $\bar\psi$, and $\beta$ and $\delta$ are those of $\psi$; $i, j, k, l$ represent indices of the $\tau^a$. Note that all summation indices in this paper run over all integers. 
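Although the explicit loop expressions are not reproduced here, the structure of the one-loop potential generated by the Hosotani mechanism on \(M^4 \times S^1\) is well known; schematically, a degree of freedom with AB phase \(\theta\) and boundary phase \(\beta\) contributes, up to an overall constant,

\[ V_{\rm eff}(\theta) \propto \pm \frac{1}{R^5} \sum_{n=1}^{\infty} \frac{\cos\left[n(\theta - \beta)\right]}{n^5}, \]

with the sign set by the statistics of the field. The \(1/n^5\) weight comes from the five-dimensional one-loop momentum integral, and the manifest periodicity in \(\theta\) reflects why this leading contribution is finite.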
In the previous work [18], we derived a formula, Eq. (15), for traces of the form $S(\Theta)$, where $\Theta$ is an arbitrary Hermitian matrix and $S(\cdot)$ denotes an analytic function and its generalization to a matrix-valued one. Using Eq. (15) for $\Theta = 0$, the divergent part is proportional to $-ig^6$ times a two-loop momentum integral. Let us define $I$ as this two-loop integral, where $x$ and $y$ are space-like vectors. For the spacetime dimension $D = 5 - 2\epsilon$ with $\epsilon > 0$, $I$ is calculated using a formula deduced in Appendix A. The integrals in Eq. (16) can be rewritten as derivatives of $I$. The behavior of the integrand in the UV region is shown in Eq. (A18); after integration over $k$ and the angular variables, the remaining (radial) integral involves constants $a$, $b$, $r$ and $\beta$ independent of $x$ and $y$. Here, $K_r(z)$ is the modified Bessel function of the second kind and ${}_0F_1(a; z)$ is a generalized hypergeometric function. Note that ${}_0F_1(a; z)$ is expressed in terms of the Bessel function of the first kind via $J_\alpha(z) = \frac{(z/2)^\alpha}{\Gamma(\alpha+1)}\,{}_0F_1(\alpha+1; -z^2/4)$. Plugging Eq. (20) into Eq. (19), we see that the integrand is a product of $J_\alpha$, $K_r$, and a power of $|p_E|$. Therefore, the UV divergence of $I$ is suppressed by $K_r$ through its exponential damping when the masses are nonvanishing. To evaluate the UV divergence of $I$, we set $m_1 = m_2 = 0$. Substituting Eq. (A20) into Eq. (18), we obtain Eq. (21). The derivatives are evaluated using a formula shown in Appendix A. Using this formula, we obtain the $\epsilon$-pole of the diagram. Repeating the above procedure for the other two-loop four-Fermi diagrams, we find the $\epsilon$-poles at the two-loop level. They are shown explicitly in Appendix B. The divergence has the form of a sum over diagrams $X$ with four fermion legs, with constants $C_X$, where $G^{(1,2)}_X$ and $T^{(1,2)}_X$ are products of $\gamma^M$'s and $\tau^a$'s, respectively. The above example corresponds to $T^{(1)}$. The following counter term is introduced to cancel the above divergence: $\mathcal{L}_{\rm CT}$ with coefficient $\delta_{4F} = \delta^{\rm div}_{4F} + \delta^{\rm fin}_{4F}$. Here, $\delta^{\rm fin}_{4F}$ is an arbitrary constant and $\delta^{\rm div}_{4F}$ is defined so as to subtract the $\epsilon$-pole. Closing the fermion lines, we obtain a contribution to the Higgs potential from $\mathcal{L}_{\rm CT}$ with a nontrivial $\theta$-dependence, where we have traced out the matrix indices of both the $\gamma^M$'s and the $\tau^a$'s. Using Eq. (15) together with a formula derived in the previous work [18], we obtain $V_{\rm CT}(\theta)$. The contributions from each diagram are explicitly written down in Appendix B. To obtain $V_{\rm CT}(\theta)$ in an Abelian gauge theory, we replace the $\tau^a$'s with $Q$, the $U(1)$-charge of $\psi$. We have computed $V_{\rm CT}(\theta)$ in an $SU(2)$ gauge theory with a fermion in the fundamental representation and in an Abelian case with a fermion having the $U(1)$-charge $Q = 1$, which is shown in Fig. 1 [figure: $V_{\rm CT}(\theta)$ in an Abelian gauge theory with a fermion of unit $U(1)$-charge; $\beta$ is set to zero]. Therefore, it is concluded that the $\theta$-dependent part of the Higgs potential is UV sensitive. In the previous work [18], it was shown that the contributions to the Higgs potential from the one-loop four-Fermi diagrams vanish in the Abelian gauge theory. Based on the present numerical calculation, we reject the all-order finiteness of the Higgs potential in an Abelian gauge theory. IV. SUMMARY In this paper, we investigate the finiteness of the Higgs potential beyond the two-loop level in the GHU by evaluating the loop corrections explicitly. While the Higgs potential was found to be finite at the one- or two-loop levels on many non-simply connected manifolds, its finiteness at higher orders had been unclear. 
As suggested in the previous work [18], it is shown that the Higgs potential receives nontrivial $\theta$-dependent contributions from the counter terms for the four-Fermi diagrams on $M^4 \times S^1$ at the four-loop level and, thus, it is UV sensitive. For the logarithmic divergences found in this paper, when we impose a UV cutoff on the GHU so that it makes sense as an effective field theory, the maximum value of the cutoff, denoted as $\Lambda_{\max}$, satisfies $\ln(R\Lambda_{\max}) \sim g^2 \Lambda_{\max}$. Hence, in the GHU, perturbation theory is valid at most up to around the compactification scale. ACKNOWLEDGMENTS The author A.Y. thanks Junji Hisano for his tremendous support and meaningful discussions, and also thanks Yutaro Shoji for his advice and unconditional dedication to improving this paper. Appendix A: Two-loop integrals This appendix is dedicated to evaluating the following integral, $I$, where $s$, $t$, $u$ are positive constants satisfying $s + t + u > D$ and $x$, $y$ are space-like vectors independent of $p$ and $k$. Introducing the Feynman parameters, we rewrite the integral. In the previous work [18], we showed a formula valid for ${\rm Re}(s) > 0$, away from the pole at $p^2 + m^2 = 0$, where $K_r(z)$ is the modified Bessel function of the second kind. Using this formula, and scaling $\beta$ to $(1-\alpha)\beta$, we apply the Wick rotation. Carrying out the integral over all angles except $\theta$, the angle between $p_E = (p^0_E, \mathbf{p}_E)$ and the remaining external vector, introduces $\Omega_D$, the area of the unit sphere in $D$-dimensional space. The integral over $\theta$ is then evaluated, with $x^0_E \equiv x^0$ and $y^0_E \equiv y^0$. We evaluate the angle integrals in a way similar to Eqs. (A10) and (A12), for ${\rm Re}(a) > 0$ and $b \in \mathbb{R}$. Applying the above formulae to $F$, we obtain an expression valid regardless of whether $x - \beta y$ is space-like or time-like. Therefore, after integration over $k$ and $p$, we obtain the result for $I$. By expanding the last factor, finding the $\epsilon$-pole of $I$ comes down to calculating an integral with arbitrary constants $\sigma$, $\tau$, $\kappa$, and $\lambda$. Choosing $\gamma$ appropriately, the result is expressed through the beta function $B(x, y)$. Appendix B: Table of divergences and their contributions to the Higgs potential As shown in Sect. III, at the two-loop level, the divergences of the four-Fermi diagrams have a common form in which $G^{(1,2)}_X$ and $T^{(1,2)}_X$ are products of $\gamma^M$'s and $\tau^a$'s, respectively. Here, $C_X$ is a constant and $X$ denotes divergent diagrams without crossing of the fermion lines. The second term represents the same diagram with the fermion lines crossed.
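To make the extraction of the \(\epsilon\)-poles above concrete, recall how a beta function produces a simple pole in the dimensional-regularization scheme with \(D = 5 - 2\epsilon\):

\[ B(x, y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}, \qquad \Gamma(\epsilon) = \frac{1}{\epsilon} - \gamma_E + \mathcal{O}(\epsilon), \]

so any integral that reduces to a beta function with an argument proportional to \(\epsilon\) exhibits a \(1/\epsilon\) pole, whose residue fixes the divergent part \(\delta^{\rm div}_{4F}\) of the counter term.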
3,203.2
2021-03-10T00:00:00.000
[ "Physics" ]
XA4C: eXplainable representation learning via Autoencoders revealing Critical genes Machine Learning models have been frequently used in transcriptome analyses. In particular, Representation Learning (RL) methods, e.g., autoencoders, are effective in learning critical representations in noisy data. However, learned representations, e.g., the "latent variables" in an autoencoder, are difficult to interpret, not to mention prioritizing essential genes for functional follow-up. In contrast, in traditional analyses, one may identify important genes such as Differentially Expressed (DiffEx), Differentially Co-Expressed (DiffCoEx), and Hub genes. Intuitively, the complex gene-gene interactions may be beyond the capture of marginal effects (DiffEx) or correlations (DiffCoEx and Hub), indicating the need for powerful RL models. However, the lack of interpretability and of individual target genes is an obstacle for RL's broad use in practice. To facilitate interpretable analysis and gene identification using RL, we propose "Critical genes", defined as genes that contribute highly to learned representations (e.g., latent variables in an autoencoder). As a proof of concept, supported by eXplainable Artificial Intelligence (XAI), we implemented the eXplainable Autoencoder for Critical genes (XA4C), which quantifies each gene's contribution to latent variables, based on which Critical genes are prioritized. Applying XA4C to gene expression data in six cancers showed that Critical genes capture essential pathways underlying cancers. Remarkably, Critical genes have little overlap with Hub or DiffEx genes; however, they show a higher enrichment in a comprehensive disease gene database (DisGeNET) and a cancer-specific database (COSMIC), evidencing their potential to disclose massive unknown biology. As an example, we discovered five Critical genes sitting in the center of the Lysine degradation (hsa00310) pathway, displaying distinct interaction patterns in tumor and normal tissues. In conclusion, XA4C facilitates explainable analysis using RL, and Critical genes discovered by explainable RL empower the study of complex interactions. Introduction Overall, the introduction section provides a clear background on the significance of ML models in gene expression analysis, addresses the limitations of existing approaches, and introduces the XA4C tool as a solution for explainable analysis and prioritization of critical genes, which is good. To highlight the novelty of XA4C, it would be helpful to explicitly state how it differs from existing interpretable ML tools and what unique features or capabilities it brings to the field. This will help readers understand the specific contributions of XA4C. Thanks for the overall positive evaluation. In our initial submission, such comparisons were presented in Discussion. In the revised manuscript, based on your comment, we have integrated them into Introduction. Result When conducting pathway over-representation analysis, it would be valuable to include statistical significance measures, such as p-values or false discovery rates, to determine the significance of pathway enrichment. This would provide more robust evidence for the involvement of specific pathways in cancer. 
Thank you for the constructive comments; we agree that p-values are robust evidence to show the involvement of particular pathways in cancer. Actually, we had presented adjusted p-values at a false discovery rate of 0.05 using the colors of the dots in the enrichment analysis in Figure 3A (red for small values and blue for large ones), although the actual values were not included. In the revised manuscript, we included the detailed adjusted p-values at a false discovery rate of 0.05 in Supplementary S2 Table. General In order to adhere to the standard practice of writing abbreviations, the author should provide the full name or description of an abbreviation the first time it is mentioned, followed by the abbreviation in parentheses. However, for subsequent mentions of the same abbreviation within the same section or context, it is generally not necessary to repeat the full name or description. Instead, the abbreviation can be used directly. To address the issue in the provided lines, the author should modify the text as follows: Line 120: variables. To quantify each gene's contribution to the latent variables, XA4C employs eXtreme Gradient Boosting (XGBoost) Line 339: eXtreme Gradient Boosting 19 (XGBoost) Regressor Line 377: pathway representations and their corresponding inputs were passed through the eXtreme Gradient Boosting (XGBoost) Thank you for pointing this out. We have removed the duplicated definition of XGBoost and also proofread other places to fix similar problems. Reviewer #2: The manuscript proposed "Critical genes", defined as genes that contribute highly to learned representations. Then, it applied an eXplainable Autoencoder on the genes to find networks of genes and highly contributing genes (discriminative genes) for each type of studied cancer (BRCA, etc.). The manuscript is well-presented and the methods are properly applied. However, I have some minor concerns: - The literature lacks the recent applications of explainable AI in cancer and health outcomes. I suggest that the authors highlight studies such as (PMID: 36738712 and/or PMID: 37233630). Thanks for providing missing essential literature. We have cited these in the revised manuscript (in Introduction, the end of page 4). - The results do not show any performance measurements such as accuracy, sensitivity, etc. 
- An AUROC curve of the prediction model may show the performance of the model. Thank you for your valuable comments. In the revised manuscript, we have added a section presenting performance measurements to evaluate the performance of our XA4C model in comparison to hub genes and DiffEx genes. The outcomes show that Critical genes have better performance than the alternatives (Supplementary S6 and S7 Tables). We have added the description of this analysis in Results (Page 15, Lines 266-270). The related method is also detailed in Materials & Methods, subsection "Performance measurements of accuracy and sensitivity" (Pages 22-23, Lines 466-480). For the reviewer's convenience, we outline the methods and results below: In this evaluation, we first constructed confusion matrices, considering DisGeNET-reported genes as the gold standard. For a particular tool (Critical genes, hub genes, or DiffEx genes), we defined true positives (TP) as genes identified by the tool and reported by DisGeNET, true negatives (TN) as genes neither identified nor reported in DisGeNET, false positives (FP) as genes identified but not reported in DisGeNET, and false negatives (FN) as genes not identified by the tool but reported in DisGeNET. Based on the confusion matrices (Supplementary S6 Table), we calculated precision, recall, F1 score and accuracy (detailed formulations defined in Materials & Methods). We compared the performance of XA4C Critical genes to hub genes and DiffEx genes. The results, presented in Supplementary S7 Table, demonstrate that XA4C outperforms the other methods in terms of the F1-score in all six cancers, and that the three methods have similar performance in terms of accuracy. It should be noted that, in practice, when people use hub gene or DiffEx gene analyses, they usually use fixed parameters, i.e., the default settings, without optimizing outcomes using a tuning parameter. In our study, as stated in Materials & Methods, we utilized the "chooseTopHubInEachModule" function from the Weighted Correlation Network Analysis (WGCNA) package. We applied this function to the gene expression matrix obtained from pathways, while maintaining the default settings for the other parameters. Specifically, we set the power parameter to 2 and the type parameter to "signed". As for DiffEx genes, DESeq2 employs a generalized linear model framework with a negative binomial distribution to assess differential expression between two groups. Initially, it estimates the fold change for each gene between the groups, and subsequently calculates the Wald test statistics and corresponding p-values. These p-values reflect the level of evidence contradicting the null hypothesis that there is no disparity in gene expression between the conditions. The p-values are further adjusted for multiple-test correction, and the significance level (alpha) utilized is 0.05, a conventional threshold for statistical tests. Analogously, we defined Critical genes generated by XA4C as the genes in the top 1% among all genes contributing to the autoencoder analysis. Therefore, none of these tools (hub genes, DiffEx genes and Critical genes) faces the trade-offs of adjusting tuning parameters in practice. As such, the AUROC curve may not be the most suitable measure for evaluating their relative performance. The above analysis of F1-score etc. with default parameters reflects the quantitative performance measurement of accuracy suggested by the reviewer. - Also, how did the model avoid over-fitting? 
Thanks for reminding us of the issue of over-fitting. We have added a paragraph addressing all issues related to overfitting in Discussion in the revised manuscript (Page 17, Lines 312-322): Machine learning algorithms may run into overfitting. In XA4C, two models are used: an autoencoder and TreeSHAP. The autoencoder by itself is unsupervised; therefore, it may not run into overfitting [1,2]. More importantly, a sparsity penalty with L1 regularization is applied to the XA4C autoencoder loss function, which penalizes non-zero activations. This sparsity penalty can prevent overfitting to some extent because it makes the autoencoder prefer to activate only a subset of its nodes. It also helps generalization by preventing the model from remembering noisy or irrelevant patterns in the training data [3,4]. It is important to note that TreeSHAP itself does not introduce overfitting if the underlying tree model is not overfitting. In our study, we employed the XGBoost regression model as the tree model. XGBoost models also incorporate regularization techniques to prevent overfitting [5,6]. With the regularization penalties in both the autoencoder and TreeSHAP, we believe overfitting is under control in our XA4C model. Identifying critical genes is crucial for understanding the mechanisms underlying the disease and developing effective therapeutic strategies. The authors have combined autoencoders and the SHAP framework to develop a new computational model, XA4C, which is aimed at extracting hidden features from transcriptome data and determining the contribution of each gene. Integrating these advanced machine learning techniques allows for a more comprehensive analysis and understanding of high-dimensional gene expression data. The ability of XA4C to uncover novel critical genes could potentially contribute to early detection, personalised treatment strategies, and new insights into the biology of cancer. Thank you for the thorough summary and positive evaluation. To enhance the value of this work, the manuscript should incorporate comparisons with existing models, integrate a thorough methodology for gene identification by considering multiple genetic alterations, and validate the results with established databases. These additions would enrich the manuscript's quality and fortify its relevance in cancer research. Thank you for the constructive comments. We have thoroughly revised the manuscript based on your input. Please see our item-by-item response below. Comments 1. Consider Multiple Criteria for Identifying Cancer-Related Genes: The manuscript emphasises the use of differential expression in identifying critical genes. However, it is important to acknowledge that differential expression and hub genes are not the only criteria for determining cancer genes. There are various factors, such as changes in DNA methylation, gain-of-function mutations in oncogenes, loss-of-function mutations in tumour suppressor genes, copy number alterations, chromatin accessibility, and changes in protein expression, that also play a role in cancer progression. The authors could enhance the manuscript by analysing whether the critical genes identified through the XA4C model are associated with some or any of these changes in the studied cancer types. This broader approach can provide a more comprehensive understanding of the genes' roles in cancer. 
Thank you for the comments. In order to analyze whether the Critical genes identified through the XA4C model are associated with some or any of these changes in the studied cancer types, we resorted to the COSMIC database to check whether the XA4C-identified genes indeed overlap with genes carrying the genetic (or epigenetic) mutations mentioned by the reviewer. We first obtained information from the COSMIC database: genetic mutations, including missense mutations and copy number variations, and epigenetic mutations, including differential methylation. Based on the available mutation information, we observed that a significant proportion of Critical genes (70%, averaged over six cancers) exhibited gained or lost copy number variations. Additionally, approximately 25% of the Critical genes showed differential methylation, characterized by a beta-value difference larger than 0.5 compared to the average beta-value across the normal population. Furthermore, around 12% of the Critical genes displayed missense mutations, which have the potential to alter the function of the encoded proteins. In the revised manuscript, the detailed results have been added as Supplementary S4 Table and are presented in Results (Page 13, Lines 224-234). 2. Validate Findings with Known Cancer Genes from COSMIC Database: To increase the robustness and credibility of the findings, the authors should consider validating the critical genes identified by the XA4C model against known cancer genes listed in the COSMIC (Catalogue of Somatic Mutations in Cancer) database. By evaluating which among the identified critical genes are classified as Tier 1 and Tier 2 cancer genes in relation to the differentially expressed genes and the hub genes, the authors can provide additional evidence that supports the utility and accuracy of the XA4C model in identifying relevant cancer genes. This validation with a reputable external database would add significant value and trustworthiness to the results presented in the manuscript. Thanks for the suggestion. In order to validate the Critical genes identified by the XA4C model, as well as the hub genes and DiffEx genes, we compared them against the COSMIC database's census genes (both Tier 1 and Tier 2) associated with specific cancers. There are 738 genes present in the COSMIC cancer census; however, only 200 genes are specific to the six cancers analyzed in our study (Supplementary S5 Table). Although we observed only a small overlap between the Critical genes and these census genes, the overlap ratios are comparatively higher than the overlaps observed between the census genes and the hub genes or DiffEx genes (Supplementary S5 Table). This demonstrates consistency with the results obtained from analyzing the enrichment of genes using the DisGeNET database. We have added this outcome to the revised manuscript (Page 15, Lines 261-265). 3. Incorporate Comparisons with Existing Models: The manuscript presents the XA4C model, which combines autoencoders and SHAP values to interpret the contributions of individual genes in the context of cancer transcriptome data. It might be beneficial for the authors to include a comparison section where the performance and interpretability of XA4C are rigorously compared to other existing models and techniques in the same field. This would help in validating the robustness and utility of the XA4C model. This could include approaches such as XGBoost or Random Forests. 
Thanks for the comments. We performed a comparison between the Critical genes identified by XA4C and the genes prioritized by Random Forest and XGBoost, using the "feature importance values" generated by the Random Forest and XGBoost classifiers, respectively. The detailed procedure is in Materials & Methods (Page 23, Lines 485-499), and the outcome is presented in Discussion (Pages 17-18, Lines 326-334). For the reviewer's convenience, we also outline the procedure and outcomes here: The Random Forest and XGBoost classifiers were trained on gene expressions from 335 pathways. The classifiers were trained using default parameter settings with 500 estimators (the number of trees in the forest). To ensure a balanced representation of tumor and normal samples, [...] Table. A sentence has been added to the caption of Fig 3 to refer readers to the table for detailed values.
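To make the pipeline concrete, the sketch below shows an autoencoder with the L1 activation penalty described earlier, followed by TreeSHAP attribution of genes to one latent variable; the layer sizes, penalty weight, and training settings are illustrative assumptions, not the published XA4C configuration.

```python
import numpy as np
import torch
import torch.nn as nn
import xgboost
import shap

class SparseAE(nn.Module):
    def __init__(self, n_genes: int, n_latent: int):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_genes, n_latent), nn.ReLU())
        self.dec = nn.Linear(n_latent, n_genes)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def train(model, X, l1=1e-4, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, z = model(X)
        # Reconstruction loss plus an L1 sparsity penalty on the activations,
        # which keeps only a subset of nodes active and curbs over-fitting.
        loss = nn.functional.mse_loss(recon, X) + l1 * z.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

def gene_contributions(X_np, Z_np, k):
    """Per-gene mean |SHAP| contribution to the k-th latent variable."""
    reg = xgboost.XGBRegressor(n_estimators=500).fit(X_np, Z_np[:, k])
    shap_values = shap.TreeExplainer(reg).shap_values(X_np)
    return np.abs(shap_values).mean(axis=0)  # rank genes by this score
```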
3,350.2
2023-07-17T00:00:00.000
[ "Computer Science", "Biology" ]
Polarized vector meson production in semi-inclusive DIS We make a systematic calculation for polarized vector meson production in semi-inclusive lepton-nucleon deep inelastic scattering $e^-N\to e^-VX$. We consider the general case of neutral current electroweak interactions at high energies which give rise to parity-violating effects. We present a general kinematic analysis for the process and show that the cross sections are expressed by 81 structure functions. We further give a parton model calculation for the process and show the results for the structure functions in terms of the transverse momentum dependent parton distribution functions and fragmentation functions at the leading order and leading twist of perturbative quantum chromodynamics. The results show that there are 27 nonzero structure functions at this order, among which 15 are related to the tensor polarization of the vector meson. Thirteen structure functions are generated by parity-violating effects. We also present the result and a rough numerical estimate for the spin alignment of the vector meson. I. INTRODUCTION Parton distribution functions (PDFs) and fragmentation functions (FFs) are related to the hadron structure and hadronization mechanism, respectively. They are two important quantities in describing high energy reactions (see e.g., [1][2][3][4][5][6][7] for recent reviews). In semi-inclusive reaction processes, e.g., lepton-nucleon semi-inclusive deep inelastic scattering (SIDIS) with hadron production at small transverse momentum, the transverse momentum dependent (TMD) factorization applies [8][9][10][11][12]. The sensitive observables in experiments are often different azimuthal asymmetries that are theoretically expressed by the convolutions of TMD PDFs and FFs in general. Therefore, processes such as SIDIS, Drell-Yan, or semi-inclusive hadron production in electron-positron annihilation give us opportunities for accessing the three-dimensional hadron structure and hadronization mechanism. One can access the spin-dependent TMD FFs in semi-inclusive processes by measuring the polarizations or spin-dependent azimuthal asymmetries of the produced hadron. For example, for Λ hyperon production, the polarizations of Λ can be measured by its self-analyzing weak decay. Very similar to hyperon polarizations, the polarizations of vector mesons can also be determined by the angular distribution of their decay products through the strong decay into two spin-zero hadrons. Since the vector mesons are spin-1, there will be both vector and tensor polarizations for them. Measurements have been carried out, e.g., for the e + e − annihilation process at the Large Electron-Positron Collider more than two decades ago for the spin alignment of vector mesons [40][41][42], which has attracted much attention (see, e.g., [43] for a recent phenomenological study and references therein). One can also study hadron polarizations in the SIDIS process; e.g., the Λ hyperon polarizations can be studied in e − N → e − ΛX. In the QCD parton model and TMD factorization, the Λ polarizations in SIDIS are expressed by the convolution of the TMD PDFs with the corresponding spin-dependent TMD FFs (see [44] for a recent phenomenological study). When we consider the final state hadron to be a vector meson V with spin-1, i.e., e − N → e − V X, we can access not only the vector polarization dependent FFs but also the tensor polarization dependent FFs [36]. 
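For orientation, the spin alignment mentioned above is extracted from the polar-angle distribution of the two-body strong decay \(V \to h_1 h_2\); in the helicity frame it takes the standard form

\[ \frac{dN}{d\cos\theta} \propto (1 - \rho_{00}) + (3\rho_{00} - 1)\cos^2\theta, \]

where \(\rho_{00}\) is the 00 element of the vector meson's spin density matrix; \(\rho_{00} = 1/3\) corresponds to no alignment, and any deviation signals a nontrivial tensor polarization.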
The Electron-Ion Collider (EIC) has been proposed as the next generation collider for deep inelastic scattering with high energy and high luminosity [45,46]. It gives us new opportunities for exploring the physics of quantum chromodynamics (QCD) and nucleon structure. Since the beam energy of the EIC is relatively high, it has a wider kinematic coverage. The momentum transfer $Q^2$ between the incident lepton and the nucleon has a chance to be comparable with $M_Z^2$, i.e., the mass squared of the Z boson. Therefore, parity-violating effects can arise through the interference of the electromagnetic (EM) and weak interactions [47]. This will also generate new structure functions that are complementary to those from the pure EM interaction. Experimentally, parity-violating asymmetries in deep inelastic scattering (DIS) experiments were first observed at SLAC [48,49] and have since been measured widely [50][51][52][53][54][55][56][57][58][59][60]. Proposals for precise measurements are also available [61][62][63]. On the theoretical side, electroweak inclusive and semi-inclusive DIS processes have been studied extensively [23,39,47,[64][65][66][67][68][69]. However, systematic studies are still lacking; these include a full kinematic analysis and QCD parton model calculations for the cross section, and a systematic treatment of the hadron polarization effects.

The rest of the paper is organized as follows. In Sec. II, we present the general form of the cross section for $e^-N \to e^-VX$ in terms of structure functions by carrying out a kinematic analysis. In Sec. III, we calculate the hadronic tensor in the QCD parton model and give the results expressed by the convolution of the TMD PDFs and FFs. In Sec. IV, we give the results for the structure functions as well as the spin alignment of the vector meson in terms of TMD PDFs and FFs. Section V is a summary.

A. The process and notations

We consider the SIDIS process at high energies with unpolarized electron and nucleon beams, i.e., $e^-(l) + N(p_N) \to e^-(l') + V(p_h, S) + X$, where V is a vector meson with spin-1. The momenta of the incident and outgoing particles are shown in the brackets, and S denotes the polarization of the vector meson. The coordinate system for $e^-N \to e^-VX$ in the photon-nucleon collinear frame is shown in Fig. 1, where the incoming proton and the virtual photon move along the ±z axis and the x axis is determined by the transverse momenta of the leptons. The azimuthal angle φ is spanned by the transverse momentum of the vector meson with respect to the transverse momentum of the outgoing electron. The standard variables for SIDIS are defined as $Q^2 = -q^2$, $x = Q^2/(2 p_N\cdot q)$, $y = (p_N\cdot q)/(p_N\cdot l)$, $z_h = (p_N\cdot p_h)/(p_N\cdot q)$, and $s = (p_N + l)^2$. We will use the light-cone coordinate system, in which a four-vector $a^\mu$ is expressed as $a^\mu = (a^+, a^-, \vec a_\perp)$ with $a^\pm = (a^0 \pm a^3)/\sqrt{2}$ and $\vec a_\perp = (a^1, a^2)$. The momenta of the particles take the corresponding light-cone forms. We also define two unit light-cone vectors $n^\mu = (0, 1, \vec 0_\perp)$ and $\bar n^\mu = (1, 0, \vec 0_\perp)$. With these notations, the transverse metric is given by $g_\perp^{\mu\nu} = g^{\mu\nu} - \bar n^\mu n^\nu - n^\mu \bar n^\nu$. We will also use the transverse antisymmetric tensor defined as $\varepsilon_\perp^{\mu\nu} = \varepsilon^{\mu\nu\alpha\beta}\bar n_\alpha n_\beta$ with $\varepsilon_\perp^{12} = 1$. We consider the leading order approximation for QED with the exchange of a single virtual photon γ* or a Z boson with momentum $q = l - l'$ between the electron and the nucleon. The differential cross section is given in Eq. (7), where the symbol r can be γγ, ZZ, or γZ.
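The displayed expression for the differential cross section did not survive extraction. Schematically, and only as a sketch of its structure (normalization and phase-space factors are kept implicit; the exact form is Eq. (7) of the original), it contracts a leptonic tensor with a hadronic tensor for each exchange channel:

$$d\sigma \;\propto\; \sum_{r}\, A_r(Q^2)\, L^{(r)}_{\mu\nu}(l, l')\, W^{\mu\nu}_{(r)}(q, p_N, p_h, S), \qquad r \in \{\gamma\gamma,\, ZZ,\, \gamma Z\}.$$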
They correspond to the electromagnetic, the weak, and the interference contributions, respectively. A summation over r is implicit in Eq. (7); the summed form is written out in Eq. (8). The $A_r$ factors and the leptonic tensors are determined in perturbation theory. The $A_r$ factors can be calculated in terms of the width $\Gamma_Z$ of the Z boson and the Weinberg angle $\theta_W$. The leptonic tensors are given in terms of $c_V^e$ and $c_A^e$ from the weak interaction vertex defined by $\Gamma_\mu^e = \gamma_\mu (c_V^e - c_A^e \gamma^5)$, with $c_1^e = (c_V^e)^2 + (c_A^e)^2$ and $c_3^e = 2 c_V^e c_A^e$. We see that $L^{\mu\nu}_{\gamma\gamma}$ and $L^{\mu\nu}_{\gamma Z}$ can be obtained from $L^{\mu\nu}_{ZZ}$ by the replacements $(c_1^e, c_3^e) \to (1, 0)$ and $(c_1^e, c_3^e) \to (c_V^e, c_A^e)$, respectively. The corresponding hadronic tensors are defined in terms of the current operators $J^\mu$. For the phase space factors, in terms of the standard SIDIS variables, we use the azimuthal angle ψ of l′ around l; therefore, the cross section in Eq. (7) can be written in the form of Eq. (19).

B. Cross section in terms of structure functions

In this subsection, we make a kinematic analysis of the process. There is no restriction from parity, since we consider both the EM and the weak interaction contributions. The most general form of the cross section is obtained by adding the contributions from the EM, the weak, and the interference terms together. Formally, the pure weak interaction contribution will generate all the possible structure functions; the EM and the interference contributions will not introduce new types of structure functions, so we first concentrate on the analysis of the pure weak interaction contribution.

We give the decomposition of the hadronic tensors in terms of basic Lorentz tensors (BLTs). The general form of the hadronic tensor satisfies the constraints of Hermiticity and current conservation. The hadronic tensor is divided into a symmetric part and an antisymmetric part and expanded as $W^{\mu\nu} = \sum_{\sigma,i} \big( W_{\sigma i}\, h^{\mu\nu}_{\sigma i} + \tilde W_{\sigma i}\, \tilde h^{\mu\nu}_{\sigma i} \big)$, where the $h^{\mu\nu}_{\sigma i}$'s and $\tilde h^{\mu\nu}_{\sigma i}$'s represent the space reflection even and odd BLTs, respectively. They are constructed from the available kinematical variables in the process, e.g., $p_N$, $p_h$, $q$, and the polarization vector or tensor. The $W_{\sigma i}$'s are scalar coefficients, and the subscript σ specifies the polarization of the vector meson. There are nine BLTs for the unpolarized part [36], where the subscript U denotes the unpolarized part. We have defined four-vectors such as $p^{\mu}_{Nq} \equiv p^{\mu}_N - q^{\mu}\,(p_N \cdot q)/q^2$, satisfying $q \cdot p_{Nq} = 0$, and similarly for $p^{\mu}_{hq}$, together with further shorthand notations.

The vector polarization of the vector meson, similar to that of spin-1/2 hadrons, is described by the helicity λ and the transverse polarization vector $S_T^{\mu}$. It has been shown in [36] that the polarization dependent BLTs can be constructed from the unpolarized BLTs by multiplying by the corresponding polarization dependent scalars or pseudoscalars. Therefore, there are in total 27 BLTs for the vector polarization dependent part. The tensor polarization of the vector meson is described by five independent components: a Lorentz scalar $S_{LL}$, a transverse Lorentz vector $S^{\mu}_{LT}$ with two independent components $S^{x}_{LT}$ and $S^{y}_{LT}$, and a transverse Lorentz tensor $S^{\mu\nu}_{TT}$ with two independent components $S^{xx}_{TT} = -S^{yy}_{TT}$ and $S^{xy}_{TT} = S^{yx}_{TT}$ [22,36]. Since $S_{LL}$ is a Lorentz scalar, there are nine $S_{LL}$-dependent BLTs, in analogy with the unpolarized part; the $S^{\mu}_{LT}$- and $S^{\mu\nu}_{TT}$-dependent BLTs are constructed in the same manner. Substituting the hadronic tensors in Eqs. (20) and (21) into Eq.
(19), and after making Lorentz contractions with the leptonic tensor, we obtain the general form of the cross section. To be explicit, we first parametrize the components of the transverse polarization vector and tensor in terms of their magnitudes and azimuthal angles. The cross section can then be divided into six parts according to the polarization states of the vector meson, where the subscripts of W denote the polarization states of the vector meson. The explicit expressions for each part are given in Eqs. (47)-(53). We have defined functions of y for simplicity, such as $A(y) = y^2 - 2y + 2$.

There are 81 structure functions in total, which is exactly the number of the corresponding 81 independent BLTs for the hadronic tensor. Among all the structure functions, 39 of them, i.e., the $\tilde W$'s, correspond to the space reflection odd part of the cross section, and the remaining 42 correspond to the space reflection even part. We also note that there are 45 structure functions that depend on the tensor polarizations of the vector meson. It can be checked that the cross section for $e^-N \to e^-VX$ given in Eqs. (47)-(53) has the same form in terms of the azimuthal angle dependence as that for $e^+e^- \to Z^0 \to V\pi X$ given in [36]. This is because these two processes share the same set of BLTs when constructing the general form of the hadronic tensors.

III. PARTON MODEL CALCULATIONS

A. The hadronic tensor results in terms of TMD PDFs and FFs

We now calculate the hadronic tensor using the QCD parton model. At the leading order of perturbative quantum chromodynamics and at leading twist, the hadronic tensor is given in terms of Φ and Ξ, the parton correlators related to the parton distribution and the fragmentation process, respectively. They are defined as in [25]; we have suppressed the gauge links for short notation. Taking approximations for the momenta in the δ function of the hadronic tensor, i.e., $k_i^- \approx 0$ and $k_f^+ \approx 0$, and integrating, we obtain the TMD form of the hadronic tensor with the TMD parton correlators $\hat\Phi$ and $\hat\Xi$.

The parton correlators are 4 × 4 matrices in Dirac space. They can be expanded in the basis of the gamma matrices, where terms irrelevant at leading twist are dropped. The $\Phi^{\alpha}$'s, etc., are the expansion coefficients, also called correlation functions. We have omitted the arguments of the correlators and correlation functions for short notation. For an unpolarized nucleon, the Lorentz decomposition of the correlation functions generates only two TMD PDFs at leading twist [70]: the first term defines the number density distribution, and $h_1^{\perp}$ is the Boer-Mulders function, which is chiral-odd. Similarly, for the fragmentation part, the correlation functions are decomposed as in [36]; we have omitted the arguments $z_h$ and $\vec k_{fT}$ of the TMD FFs for short notation. There are 18 TMD FFs at leading twist, ten of which are tensor polarization dependent.

Substituting the decompositions of the parton correlators, Eqs. (61)-(67), into the hadronic tensor in Eq. (58) and carrying out the traces, we obtain the results for the hadronic tensor. In the relevant traces, $c_2^q = (c_V^q)^2 - (c_A^q)^2$, and the indices α and β are both transverse. To further simplify the expressions, we define a symmetric tensor $\alpha^{\mu\nu}$ and use shorthand notations for the combinations of FFs. With these notations we obtain the hadronic tensor, Eq. (74), in which the convolution $\mathcal{C}[\cdots]$ appears. We see that the hadronic tensor is given by the convolution of the TMD PDFs and FFs.
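The displayed definition of the convolution was lost in extraction. For orientation only, a common leading-twist form under typical conventions is sketched below; the weight function w, the flavor factor $c_q$, and the argument of the δ function are assumptions here and may differ from the paper's exact definition:

$$\mathcal{C}\big[w\, f\, D\big] \;=\; x \sum_q c_q \int d^2 \vec k_{iT}\, d^2 \vec k_{fT}\;\delta^{2}\!\big(\vec k_{iT} - \vec k_{fT} - \vec p_{h\perp}/z_h\big)\; w(\vec k_{iT}, \vec k_{fT})\, f^{q}(x, \vec k_{iT})\, D^{q}(z_h, \vec k_{fT}).$$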
There are 18 different convolution modules, half of which involve the chiral-odd TMD PDFs and FFs.

B. The cross section in the parton model

Substituting the hadronic tensor, Eq. (74), into Eq. (19) and making Lorentz contractions with the leptonic tensor, we obtain the cross section expressed by the TMD PDFs and FFs. However, we notice that the indices in the hadronic tensor are carried by the convolution variables $\vec k_{iT}$ and $\vec k_{fT}$. In order to compare with the structure function results, we need a transformation to rewrite these terms using physical observables, so that the azimuthal angle modulations become explicit. To this end, we define a unit perpendicular vector $\hat h^{\mu} = p^{\mu}_{h\perp}/|\vec p_{h\perp}|$ representing the direction of the transverse momentum of the vector meson. Under the convolution, Lorentz covariance implies that a term such as $\mathcal{C}[k^{\mu}_{iT}\,\mathcal{F}]$ must be proportional to $\hat h^{\mu}$, where $\mathcal{F}(\vec k_{iT}, \vec k_{fT}, \hat h)$ is an arbitrary scalar function of $\vec k_{iT}$, $\vec k_{fT}$, and $\hat h$. The proportionality coefficient can be obtained by contracting both sides with the unit vector $\hat h_{\mu}$. This means that, under the convolution, one can equivalently replace $k^{\mu}_{iT}$ with $-(\hat h \cdot k_{iT})\, \hat h^{\mu}$. For terms with two or more indices, we give the detailed derivations and expressions in Appendix A. These algebras can also be found in a compact form in Ref. [16].

When contracting the hadronic tensor with the leptonic tensor, we need basic contraction results, in which we have defined $T_1^q(y) = c_1^e c_3^q A(y) + c_3^e c_1^q C(y)$. We also define and use dimensionless coefficients to simplify the results. We divide the cross section into two parts according to the chirality of the TMD PDFs or FFs involved, and calculate the chiral-even and the chiral-odd parts in turn. It is straightforward to obtain the EM and the interference contributions by making replacements for the electroweak coefficients, e.g., $c_2^q \to 1$ and $c_2^q \to c_V^q$ for the EM and the interference parts, respectively. To further unify the notation, we define $T^q_{0,r}(y)$'s and $\tilde T^q_{1,r}(y)$'s with r = ZZ, γZ, and γγ. For the weak interaction part, we have $T^q_{0,ZZ}(y) = T^q_0(y)$ and $\tilde T^q_{1,ZZ}(y) = \tilde T^q_1(y)$; for the γZ and γγ parts the definitions are analogous, with $\tilde T^q_{1,\gamma\gamma}(y) = 0$. For simplicity, we will not show the total cross section explicitly. Instead, we give the explicit parton model results for the structure functions in the next section.

We include all the contributions from the γγ, ZZ, and γZ channels, i.e., Eq. (8), and define electroweak coefficients $c^{ew}_{ij}$ to simplify the expressions. From the structure function results, we see that there are 27 nonzero structure functions at the leading twist. Among these structure functions, 15 are related to the tensor polarizations of the vector meson. The structure functions denoted by $\tilde W$'s are related to $c^{ew}_{13}$ and $c^{ew}_{31}$, which label the parity-odd structures. If we only consider the EM interaction, the 14 terms that are associated with $c^{ew}_{11}$ and $c^{ew}_{12}$ will survive. After reducing to the EM interaction, it can be checked that the unpolarized and vector polarization dependent parts are consistent with the results given in, e.g., [25,28]. The other 13 structure functions (those related to $c^{ew}_{13}$, $c^{ew}_{31}$, and $c^{ew}_{33}$) will vanish; these 13 structure functions are generated by the weak interaction and the interference between the EM and weak interactions.

B. The spin alignment of the vector meson

Compared with hyperon production, the tensor polarizations are unique to polarized vector meson production.
The tensor polarizations of the vector meson can be measured through the angular distribution of its decay products [41]. Among the different components of the tensor polarizations, the spin alignment is perhaps the most interesting one and has been studied extensively. The spin alignment $\rho^V_{00}$ is defined as the 00 component of the spin density matrix in the helicity basis; in terms of the differential cross section, it is given by Eq. (135) [34]. For the helicity λ = ±1 states, $S_{LL} = 1/2$, while $S_{LL} = -1$ for the λ = 0 state, and all the other polarization components are zero. Therefore, from the general form of the cross section in Eq. (47), we see that the spin alignment depends on both the chiral-even FF $D_{1LL}$ and the chiral-odd FF $H^{\perp}_{1LL}$. It also depends on the azimuthal angle φ in general. However, experimentally it is much easier to measure the φ-integrated spin alignment $\rho^V_{00}$. If we only consider the EM interaction, we can easily get the expression reduced from Eq. (135), given in Eq. (136). Note that a summation over different quark flavors is implicit in both Eqs. (135) and (136). We see that $\rho^{V}_{00,\mathrm{em}}$ is much simpler than $\rho^V_{00}$ and is independent of y. It is clear that the spin alignment of the vector meson is independent of the quark polarization and will deviate from 1/3 if the $D_{1LL}$ FF is nonzero.

We take the production of the $K^{*0}$ vector meson as an example to give a rough numerical estimate of the spin alignment. For the corresponding TMD PDFs and FFs, we consider only light flavors and take the Gaussian ansatz, Eqs. (137)-(139): the TMDs are factorized into a collinear distribution part and a Gaussian distribution part for the transverse momentum dependence. It should be noted that the Gaussian widths, i.e., $\Delta_f$, $\Delta_D$, and $\Delta_{LL}$, can in principle depend on flavor. However, if we substitute Eqs. (137)-(139) into Eq. (135), carry out the convolution integrals, and further integrate over the transverse momentum $\vec p_{h\perp}$ of the produced vector meson, the Gaussian widths cancel. More explicitly, we get the $p_{h\perp}$-integrated (or averaged) spin alignment, Eq. (140), given by the collinear parts of the TMD PDFs and FFs, where a summation over quark flavors is implicit in both the numerator and the denominator. We choose x = 0.2 and y = 0.5 as typical values and show the rough numerical estimate of the spin alignment as a function of $z_h$ in Fig. 2 for Q = 10 GeV and 100 GeV, respectively. In the numerical calculation, we have taken the CT14 next-to-leading order PDFs [71] for $f_{1q}(x)$. For the FFs $D^{K^{*0}}_{1q}(z_h)$ and $D^{K^{*0}}_{1LLq}(z_h)$, we use the parametrization results given in [72]. The factorization scales for the PDFs and FFs are set to $\mu_f = Q$. It is clear that the spin alignment deviates from 1/3 at both low and high Q values. We also note that the spin alignment increases monotonically with $z_h$. These properties may be checked in SIDIS experiments such as at JLab or the EIC in the future.

V. SUMMARY

Semi-inclusive deep inelastic scattering is an important process for accessing the three-dimensional partonic structure of the nucleon and the hadronization mechanism. We present a systematic calculation for $e^-N \to e^-VX$ with unpolarized electron and nucleon beams and polarized vector meson production at high energies. We give a full kinematic analysis for this process by considering both the electromagnetic and the weak interactions, the latter introducing the parity-violating effects. The results show that the cross sections are expressed by 81 structure functions.
Among all the structure functions, 39 correspond to the space reflection odd part of the cross section and 42 correspond to the space reflection even part. We also carry out a parton model calculation for the process and show that there are 27 nonzero structure functions at the leading twist, of which 15 are related to the tensor polarizations of the vector meson and 13 are generated by the parity-violating effects. The structure functions are given by the convolutions of the corresponding TMD PDFs and FFs. We also present the result for the spin alignment of the vector meson; a rough numerical estimate is made, as an example, for the $K^{*0}$ spin alignment. This provides ways of accessing the tensor polarization dependent FFs by measuring the polarization of the vector meson. Future experimental facilities such as the EIC will provide us with better opportunities to study the nucleon structure and the hadronization mechanism, as well as the polarization effects, in detail.
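As an arithmetic cross-check of the counting quoted in this paper (nine unpolarized BLTs, 27 vector polarization dependent ones, nine per tensor polarization component; the factors of two for $S_{LT}$ and $S_{TT}$ reflect their two independent components each):

$$81 \;=\; \underbrace{9}_{U} + \underbrace{3\times 9}_{\lambda,\, S^x_T,\, S^y_T} + \underbrace{9}_{S_{LL}} + \underbrace{2\times 9}_{S_{LT}} + \underbrace{2\times 9}_{S_{TT}}, \qquad 9 + 18 + 18 = 45 \ \text{(tensor)}, \qquad 39\,(\tilde W) + 42\,(W) = 81.$$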
5,524.4
2022-01-07T00:00:00.000
[ "Physics" ]
Infrared imaging with nonlinear silicon resonator governed by high-Q quasi-BIC states

Nonlinear light-matter interactions have emerged as a promising platform for various applications, including imaging, nanolasing, background-free sensing, etc. Subwavelength dielectric resonators offer unique opportunities for manipulating light at the nanoscale and miniaturising optical elements. Here, we explore the resonantly enhanced four-wave mixing (FWM) process from individual silicon resonators and propose an innovative FWM-enabled infrared imaging technique that leverages the capabilities of these subwavelength resonators. Specifically, we designed high-Q silicon resonators hosting dual quasi-bound states in the continuum at both the input pump and signal beams, enabling efficient conversion of infrared light to visible radiation. Moreover, by employing a point-scanning imaging technique, we achieve infrared imaging conversion while minimising the dependence on high-power input sources. This combination of resonant enhancement and point-scanning imaging opens up new possibilities for nonlinear imaging using individual resonators and shows potential in advancing infrared imaging techniques for high-resolution imaging, sensing, and optical communications.

Introduction

Frequency conversion in nonlinear optics, emanating from the nonlinear response of matter to light, has been extensively applied in different fields including imaging, sensing, holography, and quantum optics [1][2][3][4]. Recently, dielectric nanoresonators, possessing the ability to support strong light-matter interactions utilising multipolar resonances, have been recognised as a super-compact and flexible platform for carrying out frequency conversion [5][6][7][8][9][10]. Via strong field enhancement, frequency conversion can be boosted by several orders of magnitude compared to unpatterned films of equal thickness [6,[11][12][13][14]. Additionally, the amplitude, phase change, and directionality of converted frequency emissions can be precisely controlled through the careful design of resonator geometry and the manipulation of resonant responses [5,9,10]. This ability to manipulate light at the sub-wavelength scale, combined with minimal absorption, allows dielectric resonators to overcome inherently low material nonlinear susceptibilities and thus realise practical nonlinear applications [15][16][17][18][19].

To unlock the potential of nano-resonators, several innovative methods of boosting light-matter interactions have emerged, one of which is the use of bound states in the continuum (BICs). BICs, originally predicted by von Neumann and Wigner in 1929 [20], have recently attracted attention in nanophotonics as a novel approach for enhancing light-matter interactions [21][22][23][24][25]. To date, BICs have been widely demonstrated and exploited in photonic structures to procure controllable light confinement within nano-resonators [26][27][28][29][30][31][32][33][34]. A widely explored type of BIC is the symmetry-protected BIC existing at the Γ point of a subdiffractive periodic structure. Due to the symmetry mismatch between the mode profile and external propagating modes, non-local resonant modes are formed and can be completely decoupled from the radiating waves [23,35]. For the case of a single-particle structure, constructing an anti-crossing of paired modes can be used to form quasi-BICs based on the destructive interference of several far-field radiation channels [10,27,[36][37][38][39].
Infrared nonlinear imaging, utilising frequency conversion to capture images from the infrared in the visible region, has gained growing attention due to its numerous applications in sensing, night vision, and spectroscopy [40]. Relevant works on nonlinear imaging conversion have been demonstrated utilising arrays of nano-resonators, termed metasurfaces [4,18,40,41]. These structures harbour non-local resonances demanding excitation over large irradiance areas to satisfy a periodic boundary condition [42]. This mandates the use of high input powers for achieving sufficient nonlinear emission output. Moreover, because the resonators can be modelled as discretised point sources that combine to form the upconverted wavefront, the resolution of the upconverted image is limited by the periodicity of these arrays. By adopting a point-scanning imaging system, nonlinear imaging based on single particles can crucially reduce the dependence on high power inputs and improve attainable resolutions, expanding the application range for nonlinear imaging further. Considering that identifying boundaries and extracting structural features from infrared images is becoming increasingly important for many applications, ranging from security to medical diagnostics, improving these imaging parameters is sought after.

In this paper, we explore resonantly enhanced nonlinear processes by employing an individual high-Q silicon resonator hosting quasi-BIC modes. The silicon resonator supports a quasi-BIC magnetic quadrupole (MQ) mode around the signal beam frequency, which can be excited using a linearly-polarised (LP) beam, and a quasi-BIC magnetic hexadecapole (MH) mode around the pump beam frequency, which is excitable by an azimuthally-polarised (AP) beam. Using a point-scanning imaging system, we obtain an optical image by converting an infrared signal to the visible via the four-wave mixing (FWM) process enhanced by a single silicon resonator, as illustrated in figure 1.
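Energy conservation fixes the output wavelengths of the third-order products discussed in this work. A minimal sketch (the 920 nm / 1500 nm values are the experimental settings used later in the paper):

```python
# Output wavelength of a mixing product n_p*w_p + n_s*w_s,
# using frequency ~ 1/wavelength.
def mixed_wavelength(n_p, n_s, lam_p_nm, lam_s_nm):
    return 1.0 / (n_p / lam_p_nm + n_s / lam_s_nm)

lam_p, lam_s = 920.0, 1500.0  # pump (NIR) and signal (SWIR) wavelengths, nm
print(mixed_wavelength(2, -1, lam_p, lam_s))  # FWM idler 2wp - ws -> ~664 nm (red)
print(mixed_wavelength(0,  3, lam_p, lam_s))  # THG 3ws           -> 500 nm (green)
print(mixed_wavelength(1,  2, lam_p, lam_s))  # SFG 2ws + wp      -> ~413 nm (blue)
```

These three visible components are the red, green, and blue contributions whose combination is later reported to appear white on a handheld camera.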
Results and discussion

We first consider a silicon disk-type resonator capable of supporting resonant leaky modes that exhibit strong interactions with the surrounding environment due to its open boundaries. The aspect ratio is defined as the ratio of the radius to the height of the disk, r/h. The size parameter is defined as the ratio of the disk radius to the mode wavelength. For computational simplicity, we assume the silicon resonator is surrounded by a homogeneous air medium; based on our previous work [8], the presence of a silica substrate does not lead to qualitative changes in the mode structures. Our focus is on the formation of quasi-BICs at the coupling regions of two distinct mode pairs within the NIR and SWIR spectral ranges for our FWM enhancement and imaging application. Specifically, we focus on a pump wavelength in the region of 800-1200 nm, and a signal beam wavelength in the region of 1400-1800 nm. In the designed coupling region A, as shown in figure 2(a), we have modes A1 and A2, which are rotationally symmetric TE-polarised modes, meaning their electric field does not depend on the azimuthal angle ϕ. The designed coupling region B, figure 2(b), is formed by modes B1 and B2, existing as TE-polarised modes whose field is mirror symmetric with respect to the xz and yz planes. Based on the mode properties, modes A1 and A2 can be excited under AP light, and modes B1 and B2 can be excited using conventional LP light. Figure 2(a) gives the calculated dispersion of the A-modes. Two dispersion curves, depicted as blue-coloured and green-coloured dots, exhibit a characteristic anti-crossing around r/h = 0.55. The size of the dots indicates the inverse of the mode's Q-factor at that point. As anticipated, in the vicinity of the anti-crossing regime, mode A1's Q-factor is maximised and mode A2's Q-factor is minimised, leading to the formation of a quasi-BIC state. Similarly, for modes B1 and B2, we observe the same trend at r/h = 0.55. This formation of quasi-BICs can be explained through each mode's multipolar transformation, presented in figure 3; a toy model of the underlying anti-crossing is also sketched below.
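To make the linewidth redistribution at the anti-crossing concrete, the following is a minimal Friedrich-Wintgen-type two-mode sketch (all parameters are arbitrary illustrative numbers, not fitted to the silicon disk): two leaky modes coupled both internally and through a common radiation channel hybridise so that, near the crossing, one hybrid mode's radiative loss is suppressed (high Q, the quasi-BIC) while the other becomes leakier.

```python
# Toy Friedrich-Wintgen two-mode model of the anti-crossing: two leaky
# modes coupled internally (kappa) and via a shared radiation channel.
# Near the crossing, one hybrid mode's loss collapses (quasi-BIC).
import numpy as np

g1, g2 = 0.05, 0.03   # radiative decay rates (arbitrary units)
kappa = 0.10          # internal coupling (arbitrary units)
for d in np.linspace(-0.5, 0.5, 5):   # frequency detuning between the modes
    H = np.array([[1.0 + d - 1j * g1, kappa - 1j * np.sqrt(g1 * g2)],
                  [kappa - 1j * np.sqrt(g1 * g2), 1.0 - d - 1j * g2]])
    ev = np.linalg.eigvals(H)
    q = np.abs(ev.real / (2 * ev.imag))   # Q-factors of the two hybrid modes
    print(f"detuning {d:+.2f}: Q = {np.round(np.sort(q), 1)}")
```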
The scattering properties of the resonator can be described by the supported leaky eigenmodes and their interference in the system. Each mode can be viewed as a superposition of spherical multipoles with different orbital indices n and azimuthal indices m. Spherical multipolar analysis was performed for the modes of our nanodisks. For the multipolar analysis of the resonant modes, the electromagnetic field distributions of each resonant mode were first calculated using the finite element method solver through eigenfrequency analysis in COMSOL Multiphysics. We then employed the polarisation currents induced inside the nanostructure to obtain the contributions associated with multipoles of different orbital index n and azimuthal index m [43]. The calculated Q-factor trends are plotted for these four modes at various aspect ratio values in figures 3(a) and (b). To gain a better understanding of the multipolar transformation, we have performed the multipolar analysis of the eigenmodes in the spherical coordinate system, as presented in figures 3(c)-(f). For multipoles, it is important to note that with increasing orbital index n, higher-order multipoles become less leaky than lower-order multipoles. As can be seen from figures 3(c) and (e), modes A1 and A2 are constructed from an MQ with n = 2, m = 0 and an MH with n = 4, m = 0. When the frequencies of these two modes approach each other at r/h = 0.55, this leads to a suppression of the MQ (n = 2, m = 0) and an enhancement of the MH (n = 4, m = 0) for mode A1. Conversely, for mode A2, when the aspect ratio approaches the anti-crossing region r/h = 0.55, the MH (n = 4, m = 0) is suppressed and, as a result, the leakier multipole MQ (n = 2, m = 0) dominates the multipolar structure of mode A2. Consequently, the radiative leakage of mode A1 is significantly suppressed, with the presence of the less leaky multipole MH and the absence of the more radiative multipole MQ in the structure, while mode A2 becomes leakier due to the absence of the less radiative MH and the presence of the more radiative MQ. Importantly, both modes A1 and A2 can be efficiently excited under the AP vector beam, since the AP beam can be decomposed solely into magnetic multipoles with m = 0, which matches the multipolar composition of these two modes. For details about the multipolar content of an AP beam, see section I in the supplementary material. Modes B1 and B2 are primarily dominated by an electric dipole (ED) with n = 1, m = ±1 and an MQ with n = 2, m = ±1. Both of these modes can be excited directly under LP beam incidence as they share the same multipole content with m = ±1 (see section I in the supplementary material). Due to the less radiative nature of the MQ compared to the ED, and similar to modes A1 and A2, we observe an increase of the Q-factor for mode B1 when the two modes' frequencies are close to each other at the aspect ratio r/h = 0.55, where the ED is gradually suppressed in the multipole content of mode B1. Meanwhile, the Q-factor of mode B2 decreases in the vicinity of r/h = 0.55, where the MQ is suppressed in its multipole content. Although modes B1 and B2 have relatively low Q-factors, the formation mechanism, i.e., the multipolar transformation model for quasi-BIC formation in an individual nanostructure [10,38], is the same, so we still apply the term quasi-BIC to this region.
We expect that by exciting the modes in such an anti-crossing region, the electromagnetic field can be strongly enhanced and the nonlinear light-matter interactions can be boosted significantly. In the following, we simulate the third-order nonlinear processes, third-harmonic generation (THG) and FWM, for different aspect ratios with the input pump and signal light at the corresponding frequencies of coupling regions A and B for each respective r/h, as shown in figure 4. To ensure the efficient coupling of our light into the designed modes, at region A we model our simulation source as an AP vector beam focused by an objective with numerical aperture NA = 0.7, and at region B as an LP plane-wave input. The power-independent FWM conversion efficiency is defined as η_FWM = P_FWM/(P_s·P_p²), where P_FWM is the total generated nonlinear emission power, and P_s and P_p are the total input powers of the fundamental signal beam and pump beam, respectively. For the special case when the pump and signal beams are the same, this gives the power-independent THG conversion efficiency η_THG = P_THG/P_FW³, with P_THG being the total generated THG power and P_FW the total input power of the fundamental wave; both definitions are restated in executable form below. The linear and nonlinear optical responses of our silicon resonators are simulated using the finite element method solver in COMSOL Multiphysics in the frequency domain [44][45][46][47][48]. For details of the calculation, see section II in the supplementary materials.

Notably, we observe strong THG emission when exciting the quasi-BIC mode B1 around r/h = 0.55, as shown in figure 4(a). As expected, when pumping light at the frequency of the leakier mode B2 near the anti-crossing region, the THG signal generally remains significantly lower compared to the case with mode B1 excitation. This also agrees well with the calculated Q-factors of modes B1 and B2 for different aspect ratios. Interestingly, when the silicon resonator is illuminated in the vicinity of the higher-Q mode B1, the generated far-field pattern of the TH emission exhibits directionality in the forward and backward directions across different aspect ratios. When the silicon resonator is illuminated in the vicinity of the lower-Q mode B2, the generated far-field pattern of the TH emission shows transverse directionality for different aspect ratios. This nonlinear-type transverse Kerker effect can be interpreted by the nonlinearly generated multipolar interference effect [6,15], similar to the linear case of the generalised Kerker effect, which has been widely studied in the past [49][50][51][52][53]. For the A-mode region, when the silicon resonator is pumped at the resonance position using an AP beam, a substantial enhancement of the electric near field is observed numerically. Consequently, this enhancement leads to intense nonlinear light-matter interactions, resulting in a sizeable nonlinear electric field and strong conversion in the short wavelength region of 250 nm to 400 nm (see figure S3i in the supplementary material). This numerical prediction suggests that using a quasi-BIC-type silicon resonator offers potential for the development of nanoscale ultraviolet light sources. For details of THG from the A-mode region, see sections II and III in the supplementary material.
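For later numerical use, the two conversion-efficiency definitions above can be restated in executable form; a minimal sketch (function names are illustrative):

```python
# Power-independent conversion efficiencies as defined in the text.
def eta_fwm(p_fwm, p_s, p_p):
    """FWM efficiency eta_FWM = P_FWM / (P_s * P_p^2); powers in W, result in W^-2."""
    return p_fwm / (p_s * p_p**2)

def eta_thg(p_thg, p_fw):
    """Degenerate case (pump = signal): eta_THG = P_THG / P_FW^3, in W^-2."""
    return p_thg / p_fw**3
```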
Next, we consider inputting two beams to investigate the FWM process. Excitation conditions were chosen fixing the signal to excite mode B1 and calculating the FWM efficiency when the pump beam was coupled to mode A1 and A2, respectively. Figure 4(b) presents the calculated FWM efficiency for different mode combinations (A1B1 and A2B1) at various aspect ratios. A profound enhancement of FWM emission is numerically predicted when the pump frequency approaches the quasi-BIC point r/h = 0.55. Here, in contrast to the THG case, we observe a strong normal emission of our FWM signal. In the case of A2B1, it is noteworthy that, by varying the aspect ratio, not only does the multipolar structure of the pump and signal modes change (as shown in figure 3), but the nonlinear emission also transitions from forward emission (r/h = 0.45) to backward emission (r/h = 0.55) and even to entirely transverse emission (r/h = 0.65). This intriguing behaviour can be elucidated by considering the effects of nonlinear multipolar interference [6,15]. More details about the evolution of the nonlinear far-field pattern with different aspect ratios r/h can be found in section III of the supplementary material.

In our experiments, we fabricated a set of individual silicon nanodisks with radii ranging from 205 nm to 500 nm and a constant height of 550 nm (r/h = 0.37 : 0.01 : 0.91) on a quartz substrate. Neighbouring nanodisks were positioned 10 µm apart to isolate local resonances, as shown in figure 5(a). To investigate the designed resonant features, we focused exclusively on the nonlinear response of each nanodisk under different excitation conditions. To elicit nonlinear responses, we employed two femtosecond laser sources independently tuned from 1425 to 1575 nm and 900 to 1000 nm for the signal, λs, and pump, λp, beams respectively. The pump wavefront was structured into an AP beam using a vortex half-wave plate, VWP in figure 5(e). Figures 5(b)-(d) show the beam intensity profile with and without subsequent linear polarisation, confirming AP beam generation. The signal beam was left unaltered, maintaining an LP beam profile. To maximise particle irradiance, both beams were tightly focused through the substrate using a 100x objective with N.A. = 0.7. Any resulting nonlinear emission was collected in the forward direction by a 50x objective with N.A. = 0.42. For FWM processes, to ensure optimal λs-λp pulse overlap, the pump beam path length was carefully adjusted using precise on-axis movement of prism P1, constituting an optical delay line, as shown in figure 5(e).
Initially, nanodisks were excited solely by λs with a fixed time-averaged input power of 10 mW to generate TH radiation. Note that any reference to average input power is measured preceding objective L1 in figure 5(e). Broadly scanning both the aspect ratio and λs exposed two resonant modes: one centred at r/h = 0.61 providing high THG enhancement and another offering low THG enhancement at r/h = 0.49 (see figure S5i in the supplementary material). For both modes, peak enhancement occurred when excited close to λs = 1500 nm, and the measured spectra are shown in figure 6(b). Each peak's relative position and degree of enhancement agrees with the numerically simulated Q-factor trend in figure 3(b) and suggests the existence of, and successful coupling to, the designed modes B1 and B2. Mode B1 was further experimentally explored by precisely scanning the aspect ratio and λs, confirming the peak Q-factor at 1500 nm, which corroborates the size parameter r/λ0 ≈ 0.2 in figure 2(b). In direct comparison with figure 2(b) at λs ≈ 1500 nm, the simulated maximum Q-factors appear for modes B1 and B2 at r/h = 0.55 and r/h = 0.45 respectively, indicating potential nanodisk fabrication errors such as a consistent geometric shift and/or a difference between the experimental and simulated refractive indices. The inset in figure 6(a) demonstrates THG emission enhancement noticeable enough for capture with a handheld VIS camera under ambient lighting.

With the goal of achieving strong FWM emission, pump resonances were experimentally probed by evaluating the FWM emission enhancement for nanodisks excited simultaneously by λs and λp with optimised spatial and temporal coincidence. The highest Q-factors for both modes A1 and B1 are numerically shown to converge at the same aspect ratio, r/h = 0.55, in figures 3(a) and (b). Assuming equivalent experimental behaviour, λs was fixed at 1500 nm to ensure that any maximum FWM enhancement that emerges should correspond to λp and r/h values approaching the experimental ideal. Measurements were performed varying λp and r/h, with each nanodisk positioned at the point of maximum FWM (2ωp − ωs) enhancement, implying central AP excitation for aspect ratios possessing predominantly AP resonant modes. Results in figure 7(a) reveal significant FWM enhancement at r/h = 0.61 for λp = 920 nm. For both the THG and FWM emission cases the r/h value for maximum enhancement is identical, indicating successful overlap of the high-Q regions of modes A1 and B1 at this aspect ratio. The two peaks observed at the anticipated position of mode A1 near r/h = 0.61 were interpreted not as two separate modes but instead as the consequence of the pump laser linewidth exceeding the linewidth of the designed high-Q mode A1, preventing efficient coupling in this r/h region [4,54,55]. Both peaks are therefore attributed to mode A1. Spectra in figure 7(b) provide an unobstructed view of the FWM enhancement at λp = 920 nm, highlighting this behaviour. Distinguishing mode A2 amongst non-designed neighbouring low-Q resonances with different multipolar origins was unfeasible with our setup; characterisation of mode A2 was excluded for this reason. As with figure 6(a), the inset in figure 7(a) shows that the nonlinear emission is strong enough to be imaged by a handheld camera, producing a white colour due to a combination of RGB components: 2ωp − ωs, 3ωs, and 2ωs + ωp, quantified in figure 7(b).

The experimental conversion efficiency was estimated by measuring the FWM power in the forward direction at 10 mW input power from both the signal and pump, giving a value of 0.5 nW. This represents the portion of nonlinear emission within 0.67/4π of the total spherical emission area. Based on the far-field emission pattern simulated for excitation parameters at the high-Q positions of modes B1 and A1, the collected area is assumed to contain 7% of the nonlinear emission power, estimated to be 7 nW in total. We define the FWM conversion efficiency as the ratio of FWM power to input signal power, P_FWM/P_s, which equates to 7.14 × 10⁻⁷, and the normalised FWM efficiency as P_FWM/(P_s·P_p²), which gives 0.7% W⁻² for our input laser specifications.
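The efficiency estimate above can be reproduced step by step from the quoted numbers; a minimal sketch (the 7% collection fraction is the simulated value cited in the text):

```python
# Reconstruct the FWM conversion-efficiency estimate from the measured values.
p_collected = 0.5e-9        # W, FWM power measured in the forward direction
collected_fraction = 0.07   # simulated share of emission within the collection cone
p_fwm_total = p_collected / collected_fraction   # ~7e-9 W total FWM power

p_s = p_p = 10e-3           # W, signal and pump input powers

print(p_fwm_total / p_s)             # conversion efficiency ~7.1e-7
print(p_fwm_total / (p_s * p_p**2))  # normalised efficiency ~7.1e-3 W^-2 ~ 0.7% W^-2
```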
To demonstrate the dependency of THG and FWM strength on the input power of the pump and signal, nonlinear emission strengths were measured over increasing average input power. Figures 8(a) and (b) demonstrate the expected linear dependency of FWM (2ωp − ωs) with respect to signal power, quadratic dependency of FWM (2ωp − ωs) with respect to pump power, and cubic dependency of THG with respect to signal power. Deviations from nonlinear theory, noticed especially for the quadratic FWM pump dependence, arise from changes in refractive index caused by the Kerr effect and thermal effects, which shift the resonance position, in this case slightly in favour of FWM enhancement [44]. Notably, as demonstrated in figure 8(b), owing to the quadratic dependence of the FWM emission intensity on the pump power, even with comparatively low signal power where the THG emission is undetectable with the hardware configuration, sizeable FWM emission intensity can still be observed. This capability to efficiently convert low-power infrared light into visible radiation with the help of a pump beam suggests a promising route to realise FWM-based infrared imaging techniques.

Utilising automated stage movement and high-speed spectral data acquisition to realise a point-scanning system, third-order nonlinear imaging of a chrome-on-fused-silica imaging target was achieved via frequency conversion enhancement within a single nanodisk. λp, λs, and r/h were chosen based on the optimal values established during nonlinear characterisation: 920 nm, 1500 nm, and 0.61 respectively. The target was positioned between two lenses to obstruct the signal beam, AT in figure 5(e), and the delay line was tuned to overlap the λs-λp pulses when the signal passed through the fused silica substrate. A custom .NET application was programmed to automate motorised stage movement and trigger spectrometer captures. Images were formed by raster scanning the glass slide 3 mm at 1 mm s⁻¹ horizontally whilst sequentially shifting downwards by 10 µm after each scan. Spectra were captured at 100 Hz, requiring high FWM/THG strength for adequate emission detection. Prompting both the stage and spectrometer concurrently, the spatially encoded nonlinear spectral data were reconstructed to produce infrared images of an arbitrary imaging target with 10 micron resolution. The resolution is dictated by the ratio of scanning speed to spectral acquisition rate (1 mm s⁻¹ ÷ 100 Hz = 10 µm per pixel) and limited by the beam waist size incident on the target. As shown in figure 9, the images obtained show clear features of the imaging target, even resolving intricate details. To emphasise the significance of utilising a FWM process, figure 9(b) shows images captured at a low signal input power of 1 mW. Consequently, an undetectable THG signal is unable to resolve the encoded image data, but by boosting the nonlinear conversion via the designed high-Q quasi-BIC resonances, the pump-degenerate FWM process nonetheless produces an image with good clarity. This convincingly demonstrates the potential of FWM to provide an additional degree of freedom for improving nonlinear conversion efficiency compared to harmonic nonlinear processes, such as SHG and THG, where emission strengths rely exclusively on the light intensity from an imaging target. This is crucial in situations where the use of low signal intensities is necessary, such as when avoiding photodamage in bioimaging and sensing applications or for achieving nonlinear imaging in ambient environments. The point-scanning technique evidently sacrifices temporal resolution for improvements in
sensitivity and spatial resolution, and in this respect metasurfaces retain an advantage for dynamic imaging situations. The horizontal artefacts in the FWM images appear due to the highly sensitive positional dependency of the nanodisk with respect to the AP beam centre, as well as the signal and pump overlap. Hence, gradual deviation in the nanodisk position over the scan duration causes fluctuations in the FWM emission intensity. Interestingly, additional images show that, taking into account the stronger light scattering at target edges, we can highlight the edges of a target by mapping the ratio of FWM strength to THG strength (see figure S5iii in the supplementary material).

Conclusion

In conclusion, we have studied the resonantly enhanced FWM process from individual nonlinear silicon resonators featuring double BICs. Our designed silicon resonator exhibits strong quasi-BICs around both the signal and pump beams, enabling efficient FWM and THG processes. Based on these third-order nonlinear processes, we develop a novel point-scanning imaging platform for infrared-to-visible image conversion. Notably, the quadratic dependence of the FWM emission power on the pump beam power opens up possibilities for efficient infrared-to-visible conversion through pump power control, significantly reducing the need for high input signal beam power. The ability to control and manipulate nonlinear optical interactions in silicon resonators with double BICs not only enhances nonlinear optical interactions but also paves the way for the development of ultra-thin nonlinear photonic chips for advanced infrared imaging technologies and signal processing applications.

Figure 1. Schematic representation of the degenerate four-wave mixing process from our proposed silicon resonator: a signal beam at shortwave infrared (SWIR) frequency ωs is mixed with a structured pump beam at near-infrared (NIR) frequency ωp on the silicon resonator, to excite quasi-BICs and generate an idler output at visible ωi = 2ωp − ωs, converting the optical image from infrared to visible.

Figure 2. Dispersion of the eigenmodes A1 & A2, and B1 & B2. (a) Dispersion of the A-modes. (b) Dispersion of the B-modes. The size of the circles indicates the inverse of the quality factor of the mode at each aspect ratio. For both figures, the inset gives the xz-view of the near-field distributions of the electric field magnitude: the top row corresponds to the higher-Q modes A1/B1 at r/h = 0.45, 0.55, 0.65; the bottom row corresponds to the lower-Q modes A2/B2 at r/h = 0.45, 0.55, 0.65.

Figure 4. (a) Normalised THG efficiency when exciting modes B1 and B2 at different aspect ratios. (b) Normalised FWM efficiency when exciting different combinations of modes A and B at different aspect ratios. For both figures, the inset gives the far-field patterns of the THG/FWM emissions: the top row corresponds to the high-quality modes at r/h = 0.45, 0.55, 0.65; the bottom row corresponds to the low-quality modes at r/h = 0.45, 0.55, 0.65.
Figure 6. (a) Normalised THG emission for varying nanodisk aspect ratio and λs wavelength with a time-averaged input power of 10 mW. The inset shows a photographed image of the THG emission with a handheld VIS camera under ambient lighting. (b) Experimentally measured spectra showing THG emission for different nanodisks excited by a 1500 nm signal with a time-averaged input power of 10 mW. The extrusion magnifies the THG enhancement arising from mode B2.

Figure 7. (a) Normalised FWM emission for fixed λs = 1500 nm with varying nanodisk aspect ratio and λp wavelength. Both λs and λp have a time-averaged power of 10 mW. The inset shows a photographic image of the FWM emission with a handheld VIS camera under ambient lighting. (b) Experimentally measured visible spectrum of the nanodisk emission when different nanodisks are excited by λs = 1500 nm and λp = 920 nm, both at a time-averaged power of 10 mW.

Figure 8. (a) Experimentally measured nonlinear emission with increasing pump and signal input powers. Points denote individual data points with the corresponding linear regression line. (b) THG/FWM emission spectra with different time-averaged signal input powers. The pump power here is fixed at 10 mW.

Figure 9. Experimentally measured and reconstructed nonlinear images using a point-scanning system at (a) 10 mW and (b) 1 mW signal powers. The target pattern depicts George Green's Windmill, a historical scientific landmark in Nottingham, UK. First column: the images of the targets under white light illumination. Second column: reconstructed image from the FWM process. Third column: reconstructed images from the THG process. Fourth column: averaged spectra over the entire image acquisition duration, quantifying the THG and FWM signals for each signal power case.
6,214.6
2024-05-13T00:00:00.000
[ "Physics", "Engineering" ]
Recent Advances and Trends in Lightweight Cryptography for IoT Security

Lightweight cryptography is a novel diversion from conventional cryptography intended to minimise its high resource requirements, so that it fits the internet-of-things (IoT) environment. The IoT platform is constrained in terms of physical size, internal capacity, other storage allocations such as RAM/ROM, and data rates. The devices are often battery powered, hence maintaining the charged energy for at least a few years is essential. However, providing sufficient security is challenging because the existing cryptographic methods are too heavy to adopt in the IoT. Consequently, interest arose in the recent past in constructing new cryptographic algorithms on a lightweight scale, but the attempts are still struggling to gain robustness against improved IoT threats and hazards. There is a lack of literature studies offering overall and up-to-date knowledge of lightweight cryptography. Therefore, this effort bridges the areas of the subject by summarising the content we explored during our recent complete survey. This work covers the development of lightweight cryptographic algorithms, their current advancements and futuristic enhancements. It also covers the history, the parametric limitations of the invented methods, and research progress in cryptology as well as cryptanalysis.

I. INTRODUCTION

In modern cryptography, AES (Advanced Encryption Standard), DES (Data Encryption Standard) and RSA (Rivest-Shamir-Adleman) are effective in general purpose computing due to their compatibility with the resource requirements, e.g., high-end processors, large internal capacities in Giga/TeraByte, etc. The nature of the internet-of-things (IoT) is quite distinct because of its constrained resource management, e.g., low-end processors, small data rates in kbps, etc. Therefore, execution of the conventional methods on IoT devices would degrade device performance and/or cause malfunction across the overall application deliverables, e.g., fast battery drainage, high latency, etc. Thus, a whole new perspective of cryptographic vision towards lightweight inventions for IoT security is crucial.

The interest in lightweight cryptography has existed in research for about ten years now. Nevertheless, conventional cryptography also initially began on a lightweight scale a few decades back, compatible with the very first microprocessor, which was 4b, e.g., A5/1, CMEA, DSC, etc. [1]. Each of those methods was eventually either broken or reverse engineered, due to the simplicity of its operations.

IoT threats and hazards are probably much more advanced and sophisticated, hence the aim must be increased security for decreased resource requirements. In addition, safety assurance over IoT transmission technologies/protocols, e.g., ZigBee, BLE, LoRaWAN, etc., is an unavoidable necessity for accurate encryption/decryption and encoding/decoding.
Lightweight cryptography is categorised as symmetric, asymmetric and hash. At present, many symmetric and hash implementations are available to try in practical systems, e.g., PRESENT, KLEIN, PHOTON, etc., whereas comparatively few asymmetric algorithms are accessible, e.g., elliptic light (ELLI) derived from elliptic curve cryptography (ECC). Because of the difficulties associated with traditional public key methods on such a constrained platform, it is extremely challenging to innovate ways to gain asymmetric adaptability. Even so, researchers continue to pursue asymmetric approaches in order to provide a better quality-of-service (QoS) via post-quantum as well as lattice-based cryptography, e.g., cryptoGPS, ALIKE, etc.

The predictions in the 2000s were that it would be problematic to implement lightweight hash functions, but that hybrid techniques combining conventional hash methods and lightweight block ciphers would be a solution [2]. However, several lightweight hash inventions have since been introduced theoretically, though their performance is yet to be verified practically. Immense attention has been given to block ciphers from the beginning, and stream ciphers became trending after a while. Moreover, sponge-based (SP) hash/message authentication code (MAC), individual authenticated ciphers (authenticated encryption - AE), SP based AE and block cipher (BC) based AE are available in academic and industrial research [3]. Fig. 1 illustrates the scale of the lightweight algorithms published from 1994-2019.

Lightweight cryptography is subdivided considering its applications/limitations as follows [4]:

• Ultra-lightweight: Tailored to specific areas of the algorithm, e.g., selected microcontrollers (µC)/cipher sections/operations - PRESENT, Grain (low gate count in hardware), QARMA (low latency in hardware) and Chaskey (high speed on µCs)

• Ubiquitous lightweight: Compatible with a wide variety of platforms, e.g., 8b to 32b µCs - Ascon, GIMLI, etc.

Inventions, observations and adoption of lightweight cryptography are still emerging, so the outcomes are rapidly being updated over vastly distributed areas. Therefore, literature studies are very useful references for researchers to acquire up-to-date information. Recent survey publications mainly concern a narrowed-down subject area (a specific algorithmic group/experimental type). Thus, our effort is to bridge all areas associated with lightweight cryptography to offer a comprehensive overview.

This complete survey summarises the history and development of all available algorithm types, followed by the standardisation process, benchmarking and, finally, security analysis including side-channel leakage. This work also mentions the identified research gaps to be improved in the future.

II. LIGHTWEIGHT CRYPTOGRAPHY

A. History

The preliminary applications of lightweight algorithms go back to the late 1980s. Many of those were broken just after they were published. Their upgraded versions continued in use, but eventually many were replaced by AES due to its superior strength and flexibility. Table 3 of [1] includes some ciphers used in history that were on a lightweight scale.

B. Development

The trends in cryptography contain both linear and non-linear operations. Non-linearity offers unpredictability to cryptographic outputs, whereas linearity provides diffusion, i.e., absolute dependability in round-based functionalities. In lightweight primitives, the impending trends are as in Table I, along with some examples.
The gain of small hardware footprints depends on the programming language too. Consequently, attention has been refocused on the use of assembly language in implementations. In fact, the ultimate level of lightweight-ness would be possible if security functions were executed by lightweight scripting languages, e.g., lo, wren, squirrel, etc. There is no evidence of any initial attempt regarding the matter.

III. SYMMETRIC LIGHTWEIGHT CRYPTO

A. Block ciphers

These take the highest contribution. The most common block ciphers, along with their ordinary parameters, are in Table II; additional ones may be found in [1], [2], [9]-[11]. Among all, KLEIN, Lilliput, PRESENT, Rectangle and Skinny are known as ultra-lightweight because their key sizes, block sizes and computational rounds are in the least range. Also, XTEA, an extended version of TEA, is considered super-fast. The Simon and Speck families [12] used to be very promising due to their satisfying scalability, but dissatisfaction with their security followed later.

B. Stream ciphers

The current implementations are as in Table III. Enocoro-80, Grain and Trivium [2] are known to be well suited in terms of lightweight primitives. Even though A2U2 has the smallest key size, it would probably be insecure at this stage, as sufficient robustness is benchmarked above 72-bit key size in cryptography.

C. Dedicated AE

Available AE methods are as in Table IV. A greater interest can currently be seen in ACORN, Ascon and Hummingbird-2 because of their promising functionalities towards adequate security measures [14]. Nonetheless, Hummingbird-2 is still vulnerable to differential attacks in a related-key setting. Nonce misuses could be identified in Helix, and FIDES was broken shortly after its publication. Full-round NORX v2 could be affected by forgery and key recovery attacks; thus, a later version was introduced to prevent those [15], [16].

D. MAC

These are the least contributors. However, the widely accepted one here is Chaskey, which has 128b of IS (internal state), key and block sizes. It is an ARX based method which requires 3334.33 GE (gate equivalents) plus an operating clock frequency of 1 MHz for signing. The other one is SipHash, which has 64b key and block sizes along with a 256b IS. The latest report of NIST [3] approves TuLP and LightMAC as well.

IV. ASYMMETRIC LIGHTWEIGHT CRYPTO

Research outcomes of asymmetric implementations are still at a preliminary stage. Satisfactory theoretical impacts can be seen in ECC [9], [17]-[19], ELLI [11] and hyper-elliptic curve cryptography (HECC) [20], which are based on mathematical elliptic curves. Those are approved by both ISO/IEC and NIST standards. Alternative efforts are seen in ALIKE and cryptoGPS recommended by ISO/IEC, post-quantum-basis multivariate quadratic (MQ) algorithmic attempts by the NIST, and the N-th degree truncated polynomial ring (NTRU), which is a lattice crypto technique.

Among those, ECC is known to have short key lengths, low processing time on 8-bit µCs and small signatures [19]. NTRU is more efficient at 3000 GE while maintaining short signatures in general, but flexibility is highly required due to its instability [21]. On the other hand, MQ algorithms are still struggling with robustness, enormous key lengths and unaffordability.

V. HASH FUNCTIONS
V. HASH FUNCTIONS
Numerous lightweight hashing solutions exist; the Keccak, Quark and SPONGENT families [22] keep enhancing their versions to improve performance. Keccak is in high demand due to its small digest and code size. Although PHOTON [23] is considered comparable, its code is slightly longer. Table V lists typical parametric values of these. Other methods include Armadillo, QUARK, Lesamnta-LW, GLUON and SPN-Hash [1], [3], [14]. The step-by-step internal mathematical process of lightweight hashing is described in [11].

VI. STANDARDISATION
NIST documents its lightweight-cryptography standardisation work in the reports NISTIR 8268 and NISTIR 8114, and is conducting a global lightweight cryptography competition to verify performance [14]; the winners will be finalised before the end of this year. In addition, its post-quantum cryptography standardisation competition will probably provide useful insights into asymmetric lightweight cryptography.

VII. BENCHMARKING
Although there are no defined thresholds for lightweight-ness, the following are generally considered by the standardisation bodies [24]:
• 80b is the minimum security strength, whereas 112b is advocated for long-term security requirements;
• a minimum security margin of 25%-30% should be adopted;
• hardware implementations should meet standardised levels, i.e., chip area, etc.;
• software execution should be verified through standardised benchmarking tools, i.e., FELICS;
• clear licensing and liability where necessary;
• maturity of the cryptographic mechanism, i.e., entropy.

Fair Evaluation of Lightweight Cryptographic Systems (FELICS) [25] is the foremost software benchmarking tool and is upgraded regularly. It compares code size, RAM consumption and throughput across algorithms over a variety of scenarios, then summarises them into a single parameter called the figure of merit (FoM), where lower is better (an illustrative aggregation is sketched at the end of this section). Table 1 of [11] is an example for counter-mode encryption of 128b. In addition, the eXternal Benchmarking eXtension (XBX), the BLOC project and CRYPTREC contribute to the field [1]. In hardware benchmarking, the metrics depend on the exact technological platform; the ATHENa (Automated Tool for Hardware EvaluatioN) project and CRYPTREC are the main contributors in this arena.

VIII. SECURITY ANALYSIS
A. Cryptological Approaches
A survey [10] reports that a 12% reduction in area and a 20% increase in speed are possible via AES optimisation. Another study [6] describes an AES-128 modification on LoRaWAN that reduces the rounds from 10 to 5, cutting encryption power consumption by 26.2%; it further argues, theoretically, for resistance to known-key, replay and eavesdropping attacks. The studies [5] and [26] propose trustworthy-neighbourhood mechanisms that strengthen security schemes based on the connection history.
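The text states only that FELICS folds code size, RAM and throughput into one FoM where lower is better; the weighting below is therefore an illustrative assumption, not the tool's actual formula.

```python
# Hypothetical sketch of a FELICS-style figure of merit (FoM):
# each metric is normalised to the best candidate, then averaged.
def figure_of_merit(candidates):
    """candidates: {name: (code_size_B, ram_B, cycles)}; lower FoM is better."""
    best = [min(m[i] for m in candidates.values()) for i in range(3)]
    return {
        name: sum(metric[i] / best[i] for i in range(3)) / 3.0
        for name, metric in candidates.items()
    }

ciphers = {"A": (1200, 160, 9000), "B": (2400, 96, 7000)}
for name, fom in sorted(figure_of_merit(ciphers).items(), key=lambda kv: kv[1]):
    print(f"{name}: FoM = {fom:.2f}")
```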
Successful trials can be seen in cryptographic key-management methodologies that give each node on the network a different key [27], [28]; once a key is leaked, only that particular node is at risk, without compromising the entire network (a toy sketch of per-node key derivation follows this section). Updatable keys offer a better quality of service (QoS), which was impossible for some time in the past. In addition, a reduced gate-equivalent (GE) count enhances energy efficiency. The studies [29] and [27] demonstrate that a battery life of 5 to 10 years can be maintained via their lightweight scheduling mechanisms. The study [29] incurred overheads when the security was upgraded, but further optimisation removed 43% of the overheads from the end devices and 48% from the network-server edge.

B. Cryptanalysis Approaches
A study [30] presents the first third-party cryptanalysis of the BORON block cipher against differential and linear criteria. The studies [31], [32] and [33] analyse the robustness of Ascon v1.2, COMET and ESTATE, respectively. The works [34] and [13] observe that KLEIN is an ultra-lightweight, side-channel-resistant cipher because of its Substitution-Permutation Network (SPN) structure. The analysis [34] validates its results up to first-order attacks, while noting that KLEIN may still be vulnerable to higher-order attacks due to the exponential growth in data complexity. An AI-based study of AES and PRESENT [35] concludes that there is no significant difference in side-channel vulnerability between AES and PRESENT for either 4b or 8b S-box constructions. Another study [36] derives optimal leakage models for ciphertext-only fault attacks (CFA) on SIMON, PRINCE and AES. A correlation power analysis (CPA) on PRESENT [37] was able to recover the first 8B of the encryption key. The largest share of this work involves either CPA or differential power analysis (DPA); only a few studies on electromagnetic (EM) analysis are available. One successful experiment is a differential EM analysis (DEMA) of PRESENT [38], which verifies tamper resistance using several selection functions. Work based on other important vectors, such as optical, clock and cache attacks, is still unavailable.

IX. CONCLUSIONS
Adequate IoT security still struggles to provide cryptographic primitives lightweight enough to cope with present and future IoT hazards and threats; the concept of lightweight cryptography was introduced to overcome this challenge. Lightweight cryptographic functions are still emerging to deliver precise privacy and data protection via accurate encryption and decryption models. Numerous lightweight ciphers have been proposed in all forms (symmetric, asymmetric and hash), though many are still under verification and not commercially available, e.g., PRESENT, KLEIN, Grain v2 and ECC. This work particularly identifies a current lack of attention to physical leakage analysis. Government agencies, regional organisations and international associations are involved in the standardisation process, with ISO/IEC and NIST the leading contributors. FELICS is the predominant benchmarking tool for software implementations, whereas hardware benchmarking is case dependent. Improvements in lightweight scripting languages may eventually enable the ultimate level of lightweight-ness.

Fig. 1. Published lightweight algorithms from 1994-2019.
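As promised above, the following sketch illustrates the per-node key idea of [27], [28] (it is not their actual scheme): one master secret yields a distinct key per node, so leaking a node key exposes nothing else. The labels and parameters are hypothetical; a real deployment would follow a vetted KDF specification such as HKDF (RFC 5869).

```python
# Illustrative per-node, per-epoch key derivation using Python's stdlib HMAC.
import hmac, hashlib

def node_key(master_secret: bytes, node_id: bytes, epoch: int) -> bytes:
    """Distinct key per node; rotating `epoch` rekeys the whole network."""
    info = b"node-key|" + node_id + b"|" + epoch.to_bytes(4, "big")
    return hmac.new(master_secret, info, hashlib.sha256).digest()

k_a = node_key(b"\x00" * 32, b"sensor-A", epoch=1)
k_b = node_key(b"\x00" * 32, b"sensor-B", epoch=1)
assert k_a != k_b  # compromise of one node's key reveals nothing about the other
```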
III. SYMMETRIC LIGHTWEIGHT CRYPTO
These are usually adapted from a conventional algorithm, and the improved lightweight architecture is introduced either as a new version or under a new name, e.g., AES-based lightweight techniques [5]-[7], or Prince and PRESENT, which derive from the AES S-box [8]. The majority are still in their trial phases because of deficiencies, poor adaptability to IoT devices and inaccuracies in decryption results.

TABLE I. LIGHTWEIGHT CIPHERS BASED ON TRENDING METHOD (LUT: Look-Up Table, ARX: Addition-Rotation-XOR, MDS: Maximum Distance Separable)
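To make the LUT trend of Table I concrete, the sketch below applies the published 4-bit S-box of PRESENT: a single 16-entry table is the cipher's only non-linear component, which is what keeps its gate count so low.

```python
# The 4-bit S-box of PRESENT (Bogdanov et al.), applied nibble-wise.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state64: int) -> int:
    """Apply the 4-bit S-box to each of the 16 nibbles of a 64-bit state."""
    out = 0
    for i in range(16):
        nibble = (state64 >> (4 * i)) & 0xF
        out |= PRESENT_SBOX[nibble] << (4 * i)
    return out

print(hex(sbox_layer(0x0123456789ABCDEF)))
```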
3,255
2020-11-02T00:00:00.000
[ "Computer Science", "Engineering" ]
Crystal structure of 1,2,3,4-tetrahydroisoquinolin-2-ium (2S,3S)-3-carboxy-2,3-dihydroxypropanoate monohydrate

The crystal structure of 1,2,3,4-tetrahydroisoquinolin-2-ium (2S,3S)-3-carboxy-2,3-dihydroxypropanoate monohydrate (orthorhombic crystal system, space group P2(1)2(1)2(1), Z = 4) features an intricate two-dimensional hydrogen-bond network.

Supramolecular features
The solid-state supramolecular structure features an intricate network of N-H...O and O-H...O hydrogen bonds (Fig. 2). Table 1 lists the corresponding geometric parameters, which are within expected ranges (Thakuria et al., 2017); a short computational sketch of such geometry entries is given after the refinement details. The hydrogen tartrate anions form hydrogen-bonded chains by translational symmetry in the b-axis direction through hydrogen bonding between the carboxy group and the carboxylate group of an adjacent molecule (O5-H5A...O1^iii). In the a-axis direction, the hydrogen tartrate ions are connected along a 2(1) screw axis via two hydrogen bonds, with the two hydroxy groups as donors and a hydroxy group (O3-H3...O4^ii) and the carboxylate group (O4-H4...O2^ii) of a neighbouring molecule as acceptors. These O-H...O hydrogen-bonding interactions, extending in the a- and b-axis directions, result in diperiodic hydrogen-bonded sheets parallel to (001). The protonated amino group of the tetrahydroisoquinolinium cation forms a bifurcated hydrogen bond to the carboxy groups of two adjacent hydrogen tartrate anions (N2-H2B...O5 and N2-H2B...O6^i) and another hydrogen bond to the solvent water molecule (N2-H2A...O7). The water molecule in turn acts as a hydrogen-bond donor towards the carboxylate group (O7-HA...O2) and a hydroxy group (O7-HB...O3^iv) of two hydrogen tartrate anions. The hydrocarbon parts of the tetrahydroisoquinolinium cations are oriented approximately perpendicular to the diperiodic hydrogen-bonded sheets formed by the hydrogen tartrate anions. The crystal packing in the third dimension is achieved by stacking in the c-axis direction, with interlocking of the hydrocarbon tails through van der Waals packing (Fig. 3). This affords hydrophobic and hydrophilic regions in the crystal structure.

Table 1. Hydrogen-bond geometry (Å, °).
Figure 1. The asymmetric unit of the title compound with displacement ellipsoids at the 50% probability level. Hydrogen atoms are represented by small spheres of arbitrary radius. Dashed lines illustrate hydrogen bonds.
Figure 3. Space-filling representation of the crystal structure, viewed along the a-axis direction. Colour scheme: C, grey; H, white; N, blue; O, red.

Synthesis and crystallization
Starting materials were obtained from commercial sources and used as received. A mixture of 1,2,3,4-tetrahydroisoquinoline (266 mg, 2 mmol) and excess (2S,3S)-tartaric acid (1.50 g, 10 mmol) in 60 mL of deionized water was stirred for 4 h at room temperature. Subsequently, the salt was isolated by filtration. Colourless crystals suitable for single-crystal X-ray diffraction were obtained from a water/methanol (3:1) solution of the salt after the solvents were allowed to evaporate slowly at ambient conditions.

Refinement
Crystal data, data collection and structure refinement details are summarized in Table 2.
Carbon-bound hydrogen atoms were placed in geometrically calculated positions and refined using the appropriate riding model, with C-H(aromatic) = 0.95 Å, C-H(methylene) = 0.99 Å, C-H(methine) = 1.00 Å and Uiso(H) = 1.2 Ueq(C). Nitrogen- and oxygen-bound hydrogen atoms were located in difference-Fourier maps and subsequently refined semi-freely, with the N-H and O-H distances restrained to target values of 0.88 (2) Å and 0.84 (2) Å, respectively. The absolute structure was inferred from the known absolute configuration of the starting material.

Special details
Geometry. All e.s.d.s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.s are taken into account individually in the estimation of e.s.d.s in distances, angles and torsion angles; correlations between e.s.d.s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.s is used for estimating e.s.d.s involving l.s. planes.

Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å²).
Table 2. Experimental details.
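As noted above, a Table 1-style hydrogen-bond entry can be computed directly from atomic positions. The sketch below does this from Cartesian coordinates; the three positions are hypothetical placeholders, not the refined coordinates of the title compound.

```python
# Compute D-H...A hydrogen-bond geometry from Cartesian coordinates (in Å).
import numpy as np

def hbond_geometry(D, H, A):
    """Return (H...A distance, D...A distance, D-H...A angle in degrees)."""
    D, H, A = map(np.asarray, (D, H, A))
    hd, ha = D - H, A - H  # vectors from H towards donor and acceptor
    cos_t = np.dot(hd, ha) / (np.linalg.norm(hd) * np.linalg.norm(ha))
    return np.linalg.norm(A - H), np.linalg.norm(A - D), np.degrees(np.arccos(cos_t))

h_a, d_a, ang = hbond_geometry(D=(0.0, 0.0, 0.0), H=(0.88, 0.0, 0.0), A=(2.60, 0.40, 0.0))
print(f"H...A = {h_a:.2f} Å, D...A = {d_a:.2f} Å, D-H...A = {ang:.1f} deg")
```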
872.2
2024-06-01T00:00:00.000
[ "Chemistry" ]
Higgs boson gluon-fusion production at threshold in N3LO QCD

We present the cross-section for the threshold production of the Higgs boson at hadron colliders at next-to-next-to-next-to-leading order (N3LO) in perturbative QCD. We present an analytic expression for the partonic cross-section at threshold and the impact of these corrections on the numerical estimates for the hadronic cross-section at the LHC. With this result we achieve a major milestone towards a complete evaluation of the cross-section at N3LO, which will reduce the theoretical uncertainty in the determination of the strengths of the Higgs boson interactions.

High-precision theoretical predictions for the production rate of the Higgs boson are crucial in the study of the recently discovered particle by the ATLAS and CMS Collaborations [1] and for inferring the existence of phenomena beyond the Standard Model. With the collection of further data at the upgraded LHC, the theoretical uncertainty of the gluon-fusion cross-section will soon become dominant. It is thus highly timely to improve the theoretical accuracy of the cross-section predictions.

The quest for accurate Higgs boson cross-sections is long-standing and has been paralleled by major advances in perturbative QCD. State-of-the-art calculations of the gluon-fusion cross-section (for a review, see Ref. [2] and references therein) comprise next-to-leading-order (NLO) QCD corrections in the full Standard Model theory, next-to-next-to-leading-order (NNLO) QCD corrections as an expansion in inverse powers of the top-quark mass 1/m_t, two-loop electroweak corrections, and mixed QCD/electroweak corrections. To improve upon the present accuracy, the most significant correction is expected from the N3LO QCD contribution at leading order in the 1/m_t expansion. Universal factorization of radiative corrections due to soft emissions, as well as knowledge of the three-loop splitting functions [3], have made possible the derivation of logarithmic contributions to the cross-section beyond NNLO [4]. However, further progress in determining the N3LO correction can only be achieved by direct evaluation of the Feynman diagrams at this order.

Recently, there has been rapid progress in this direction. The required three-loop matrix elements have been computed in Ref. [5]. The partonic cross-sections for the production of a Higgs boson in association with three partons were computed in Ref. [6], while the two-loop matrix elements for the production of a Higgs boson in association with a single parton and the corresponding two-loop soft current were computed in Ref. [7] and Ref. [8]. Corrections due to one-loop amplitudes for a Higgs boson in association with a single parton were evaluated in Refs. [9], and counter-terms due to ultraviolet [11,12] and initial-state collinear divergences were computed in Refs. [10].
The N3LO Wilson coefficient and the renormalization constants of the operator in the effective theory, where the top quark is integrated out, have been computed in Refs. [11]. Although all these contributions are separately divergent in four dimensions, a finite cross-section can be obtained by combining them with the remaining one-loop matrix elements for the production of the Higgs boson in association with two partons.

The purpose of this Letter is to complete the computation of all matrix elements, integrated over loop momenta and phase-space, which are required at N3LO in the limit of Higgs production at threshold. We present the fully analytic result for the first term in the threshold expansion of the gluon-fusion cross-section at N3LO, and we use this result to estimate the impact of N3LO corrections on the inclusive Higgs production cross-section at threshold. Our result is the first calculation of a hadron-collider observable at this order in perturbative QCD.

The Higgs production cross-section takes the factorized form

  σ = Σ_{ij} ∫ dx₁ dx₂ f_i(x₁) f_j(x₂) σ̂_ij(m_H², x₁x₂s),   (1)

where σ̂_ij are the partonic cross-sections for producing a Higgs boson from partons i and j, f_i(x₁) and f_j(x₂) are the corresponding parton distribution functions, and m_H² and s denote the mass of the Higgs boson and the hadronic center-of-mass energy, respectively. We work in an effective theory where the top quark has been integrated out and the Higgs boson couples directly to the gluons via an effective operator proportional to C(μ²) H G_{μν}^a G^{a,μν}, where v ≈ 246 GeV is the vacuum expectation value of the Higgs field and C(μ²) is the Wilson coefficient, given as a perturbative expansion in the MS-bar renormalized strong coupling constant α_s ≡ α_s(μ²) evaluated at the scale μ². The expansion of C(μ²) is known up to three loops [11], with N_F denoting the number of active light flavours.

The partonic cross-section itself admits a perturbative expansion in α_s with coefficients η_ij^(k)(z), where z ≡ m_H²/ŝ and V = N² − 1, with N the number of colours. The coefficients η_ij^(k)(z) are known explicitly through NNLO in perturbative QCD [13]. If all the partons emitted in the final state are soft, we can approximate the partonic cross-sections by their threshold expansion. Note that the first term in the threshold expansion, the so-called soft-virtual term, only receives contributions from the gluon-gluon initial state. Soft-virtual terms are linear combinations of a δ-function and plus-distributions, the latter defined by their action on a test function g(z),

  ∫₀¹ dz g(z) [lnᵏ(1−z)/(1−z)]₊ = ∫₀¹ dz (g(z) − g(1)) lnᵏ(1−z)/(1−z).

Through NNLO, the soft-virtual terms are given in [13,14]. Eq. (9), the corresponding N3LO expression, is the main result of this Letter (its generic structure is sketched after this passage). While the terms proportional to plus-distributions were previously known [4], we complete the computation of η^(3)(z) by the term proportional to δ(1−z), which includes in particular all the three-loop virtual corrections.

Before discussing some of the numerical implications of Eq. (9), we have to make a comment about the validity of the threshold approximation. As we will see shortly, the plus-distribution terms show a complicated pattern of strong cancellations at LHC energies; the formally most singular terms cancel against sums of less singular ones. Therefore, exploiting the formal singularity hierarchy of the terms in the partonic cross-section does not guarantee a fast-converging expansion for the hadronic cross-section. Furthermore, the definition of threshold corrections in the integral of Eq. (1) is ambiguous, because the limit of the partonic cross-section at threshold is not affected if we multiply the integrand by a function g(z) such that lim_{z→1} g(z) = 1,

  σ → Σ_{ij} ∫ dx₁ dx₂ f_i(x₁) f_j(x₂) σ̂_ij(z) g(z).   (10)
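Eq. (9) itself is not reproduced in this excerpt. As a reading aid, the LaTeX sketch below shows only the generic structure of an N3LO soft-virtual term, a δ-term plus plus-distributions up to ln⁵(1−z)/(1−z); the coefficients c_δ and c_k are left symbolic since they are not given here.

```latex
% Generic structure of the N3LO soft-virtual coefficient; the exact
% coefficient values are those of Eq. (9) of the Letter.
\eta^{(3)}(z) \;=\; c_\delta\,\delta(1-z)
    \;+\; \sum_{k=0}^{5} c_k \left[ \frac{\ln^k(1-z)}{1-z} \right]_+
```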
It is obvious that Eq. (10) has the same formal accuracy in the threshold expansion, provided that lim_{z→1} g(z) = 1. As we will see in the following, this ambiguity has a substantial numerical implication and thus presents an obstacle for obtaining precise predictions. We note, however, that including further corrections in the threshold expansion in the future will reduce this ambiguity.

Bearing this warning in mind, we present some of the numerical results. In parentheses we indicate the correction that each term induces in the hadronic cross-section, normalized to the leading-order cross-section at a center-of-mass energy of 14 TeV. The ratio is evaluated with the MSTW NNLO [15] parton densities and α_s at scales μ_R = μ_F = m_H in both numerator and denominator. We also factorize the Wilson coefficient at all orders, as in Eq. (3), in both numerator and denominator, so that it cancels in the ratio. We find that the pure N3LO threshold correction is approximately −2.27% of the leading order. We observe that the δ-term, which we compute for the first time in this publication, is as large as the sum of the plus-distribution terms that were already known in the literature, and cancels almost completely against them for μ_R = μ_F = m_H. We note, however, that choosing a different functional form for the function g(z) in Eq. (10) can change this conclusion substantially. For example, choosing g(z) = 1, z, z², 1/z, we find that the threshold correction to the hadronic cross-section at N3LO, normalized to the leading-order cross-section, is −2.27%, 8.19%, 30.16% and 7.73%, respectively.

In Fig. 1 we present the percentage change induced by the N3LO threshold corrections on an existing Higgs cross-section estimate based on previously known corrections (NNLO, electroweak, quark-mass effects) in ihixs [2], with the settings of Ref. [16]. The new N3LO correction displayed in this plot includes the full logarithmic dependence on the renormalization and factorization scales, as predicted from renormalization-group and DGLAP evolution, the Wilson coefficient at N3LO, and the threshold limit of Eq. (9). The function g(z) of Eq. (10) is fixed to unity. σ_NNLO and δσ_N3LO are defined after expanding the product of the Wilson coefficient and the partonic cross-sections in α_s. We conclude that N3LO corrections are important for a high-precision estimate of the Higgs cross-section.

Our result for the N3LO cross-section at threshold demonstrates that it is, in principle, possible to calculate all loop and phase-space integrals required for N3LO QCD corrections to hadron-collider processes, albeit in a kinematic limit. With this publication, we open up a new era in precision phenomenology, which promises the computation of full N3LO corrections for Higgs production and other processes in the future.
2,235.8
2014-03-18T00:00:00.000
[ "Physics" ]
Implementation of Microstrip Patch Antenna for Wi-Fi Applications

In recent years, developments in communication systems have required the design of low-cost, lightweight, compact and low-profile antennas that are capable of maintaining high performance. This research covers the basics and fundamentals of the microstrip patch antenna. The aim of this work is to design a microstrip patch antenna for Wi-Fi applications operating at 2.4 GHz. The simulation of the proposed antenna was done with the aid of the Computer Simulation Technology (CST) Microwave Studio student version 2017. The substrate used for the proposed antenna is flame-resistant four (FR-4) with a dielectric constant of 4.4 and a loss tangent of 0.025. The proposed MSA is fed by a coaxial probe. The proposed antenna may find applications in wireless local area networks (Wi-Fi) and Bluetooth technology. The work presents the design of a hexagonal-shaped microstrip patch antenna for wireless communication applications such as Wi-Fi in the S-band. The designed microstrip patch antenna consists of a hexagonal patch found to be resonant at 2.397 GHz, with a return loss of −31.2118 dB and satisfactory radiation properties. The proposed antenna has a compact footprint of 28.2842 mm × 48.2842 mm on an FR4-epoxy substrate with a dielectric constant of 4.4 and a thickness of 1.6 mm. The designed antenna has a realized gain of 3.42 dB at the resonant frequency of 2.397 GHz. After simulation with the CST software, the patch antenna was fabricated on the FR-4 substrate using the MITS milling machine in YTU's communication lab. The fabricated antenna was measured with a Vector Network Analyzer, and the simulation and measurement results were compared. The designed antenna structure is planar, simple and compact, so it can easily be embedded for Wi-Fi applications, cellular phones and wireless communications at low manufacturing cost.

Introduction
Wireless systems encompass a large variety of applications, such as radar, navigation, landing systems, direct-broadcast TV, satellite communications and mobile communications. In wireless systems, the antenna is one of the critical components: a good antenna design can relax system requirements and improve overall performance. An antenna can be classified on the basis of its direction of radiation as isotropic or anisotropic. There is no difference in selection factors between transmitting and receiving antennas, because the same antenna may be used for both transmission and reception, or separate antennas can be used for each. For wireless personal communications (WPC), antennas act as communication devices that a person can carry or move easily from place to place. Antennas in, or protruding from, a wireless terminal are needed to support several wireless services. Example applications are cellular telephone communications; Wi-Fi, Bluetooth and ultra-wideband (UWB) communications; radio frequency identification (RFID); position location (such as GPS) and asset tracking; and body area networks (BAN) [1-5]. The antenna family includes different types, such as patch antennas, point-source antennas, monopole or dipole antennas, wire antennas, loop antennas, slot antennas, horn antennas, reflector antennas, lens antennas, helical antennas and wideband antennas.
Patch antennas come in various kinds, such as shorted patch antennas, printed antennas and microstrip patch antennas. In wireless communications it is desirable for antennas to have a low-profile configuration. A low profile implies low cost, light weight, low volume, small physical thickness, ease of integration and conformability. Microstrip patch antennas are low-profile antennas and have been widely used in recent years because of these good characteristics. The microstrip patch antenna is a special type of printed antenna, consisting of a metallic patch printed on top of a thin substrate with a ground plane on the bottom of the substrate. These low-profile antennas are conformable to planar and non-planar surfaces, and are simple and inexpensive to manufacture. Microstrip antenna shapes may be square, rectangular, circular or elliptical, but any other shape is also possible. Some patch antennas do not use a dielectric substrate and are instead made of a metal patch mounted above a ground plane using dielectric spacers; the resulting structure has a wider bandwidth. Such antennas can be shaped to the curve of a vehicle and mounted on the exterior of satellites, missiles, aircraft and spacecraft. Since patch antennas can be printed directly onto a circuit board, they are becoming popular in the mobile-phone market.

Microstrip patch antennas can be fed by a variety of techniques. The four most popular feeding methods are microstrip line feeding, coaxial probe feeding, aperture coupling and proximity coupling. Microstrip line feeding is easy to fabricate, simple to match by controlling the inset position, and simple to model; however, as the substrate thickness increases, surface waves and spurious feed radiation increase and limit the practical design bandwidth. Coaxial-line feeds, where the inner conductor of the coax is attached to the radiating patch while the outer conductor is connected to the ground, are also widely used. The coaxial probe feed is likewise easy to fabricate and match, and it has low spurious radiation. The aperture-coupled feeding technique consists of two parallel substrates separated by a ground plane; on the bottom side of the lower substrate there is a microstrip feed line whose energy is coupled to the patch through a slot in the ground plane separating the two substrates. The proximity-coupled feeding technique is quite similar to aperture coupling, except that the ground plane is removed. Among the four feeds described, proximity coupling has the largest bandwidth (as high as 13 percent) and low spurious radiation. The choice of feeding technique is governed by many factors, such as efficient power transfer between the radiating structure and the feeding structure, and their impedance matching. The feeding technique influences the resonant frequency, return loss, bandwidth, VSWR, impedance matching and polarization characteristics of the antenna.

In all modes of communication, whether civilian or military, there is a need for an antenna that is easy to manufacture and compatible enough to fit into anything; that antenna is the patch antenna. Microstrip patch antennas have a number of advantages compared with conventional microwave antennas, and their applications cover the broad frequency range from 100 MHz to 100 GHz. Some of these advantages are studied in [6-10].
Since they are low-profile antennas, they offer light weight, low volume and thin-profile configurations. Further advantages include:
1. Low fabrication cost, and they can be made conformal;
2. Linear and circular polarizations are possible with a simple feed;
3. Dual-frequency and dual-polarization antennas can easily be made;
4. They can easily be integrated with microwave integrated circuits, and no cavity backing is required;
5. Feed lines and matching networks can be fabricated simultaneously with the antenna structure.

However, microstrip patch antennas also have some drawbacks compared with conventional microwave antennas [11-18]:
1. Narrow bandwidth and associated tolerance problems;
2. Somewhat lower gain (about 6 dB);
3. Low efficiency;
4. Complex feed structures are required for higher performance;
5. Poor polarization purity;
6. Lower power-handling capability (about 100 mW);
7. Excitation of surface waves.

With progress in both theory and technology, some of these drawbacks have been overcome, or at least alleviated to some extent. The rapidly developing markets, especially in wireless personal communication systems (WPCS), mobile satellite communications, direct broadcast (DBS), wireless local area networks (WLAN) and intelligent vehicle highway systems (IVHS), suggest that the demand for microstrip patch antennas will increase even further. Modern communication systems, such as those for satellite links (GPS, vehicular, etc.), mobile communication and emerging applications such as wireless local-area networks (WLANs), often require compact antennas at low cost. Further, owing to their lightness, microstrip antennas are suited to airborne applications, such as synthetic aperture radar (SAR) systems and scatterometers. Because of their low power-handling capability, these antennas can be used in low-power transmitting and receiving applications. The range of applications of microstrip antennas and their performance can be improved by adopting different shapes and designs. Different shapes of microstrip patch antennas have been studied in various papers to obtain linear or circular polarization, frequency tuning, broadbanding, impedance matching, higher gain, size reduction and so on. Microstrip antennas are also widely used on base stations as well as handsets. All the important wireless applications lie in the band from 900 MHz to 5.8 GHz.

Antennas play a critical role in the field of wireless communications. As nanotechnology is being introduced across all sectors of technology, wireless communication (Wi-Fi) attracts particular interest. Nowadays Wi-Fi is widely used in many electronic gadgets, in industry, in academic institutes and even at home; it can be used to link gadgets and hence remove the need to run cables everywhere. Wi-Fi is the technology that uses radio waves for local-area networking of devices based on the IEEE 802.11 standards. Devices that can use Wi-Fi technology include desktops and laptops, video game consoles, smartphones, tablets, smart TVs, digital audio players and modern printers. Wi-Fi networks generally use two different frequencies, 2.4 GHz and 5 GHz; the primary differences between the two are the range (coverage) and bandwidth (speed) that the bands provide. The 2.4 GHz band provides coverage at a longer range (20-30 ft) but transmits data at slower speeds.
The 5 GHz band (10-15 ft) provides less coverage but transmits data at faster speeds. Devices using the 2.4 GHz band are cheaper to manufacture, and the band has a much better range than a 5 GHz wireless network; this is because its radio waves penetrate solid objects (such as walls and floors) much better than 5 GHz radio waves. The 5 GHz band has a much lower range than the 2.4 GHz network: being the higher frequency of the two, it cannot penetrate solid objects as well as 2.4 GHz radio waves. Moreover, devices for it are more expensive to manufacture, so fewer wireless devices use this band. As 5 GHz is a newer standard and more expensive to implement, fewer devices support this frequency. As a result, the 2.4 GHz band has become standard and all Wi-Fi-enabled devices can use it; therefore, the 2.4 GHz band is used in this research [19-28].

Research Problem
There has been no research work on microstrip patch antennas for Wi-Fi applications in Myanmar. The research problem is therefore to analyze the microstrip patch antenna for Wi-Fi applications based on antenna theory. The designed and fabricated antenna will be useful mainly in the fundamental fields of Wi-Fi applications.

Contribution of Research
In this research, two proposed antennas, a rectangular and a hexagonal patch antenna, are presented. The length and width of the substrate and patch are calculated using antenna-theory equations, as sketched in the example below. After obtaining the dimensions of the two proposed antennas, they are simulated with the aid of the Computer Simulation Technology (CST) Microwave Studio student version 2017. The substrate used for the proposed antennas is flame-resistant four (FR-4) with a dielectric constant of 4.4 and a loss tangent of 0.025. The proposed microstrip patch antennas are fed by a coaxial probe. The simulation results are then compared, and the antenna with the better performance is fabricated. The block diagram for the contribution of this research is shown in Figure 1.

Proposed Antenna Design I
The top and front views of the proposed antenna are shown in Figure 2. The rectangular patch is printed on the FR-4 substrate, which is inexpensive and easily available, with a dielectric constant of 4.4. The antenna was designed and simulated using the CST Microwave Studio student version 2017. The proposed rectangular microstrip patch antenna is fed by coaxial probe feeding: a pin of 1 mm diameter is inserted at (6.8, 0) between the patch and the ground. The frequency resonance is obtained by properly designing the length and width of the substrate and patch, and also the feed point of the probe. The different parameters of the proposed antenna, with their optimized values, are listed in Table 1.

Proposed Antenna Design II
The simulation of the proposed antenna was done with the aid of the CST Microwave Studio student version 2017. The radiating patch of hexagonal shape is printed on an FR4-epoxy substrate of dielectric constant 4.4. The proposed antenna is fed by the coaxial probe feeding technique, with the feeding point taken at coordinate (0, −7). The design consists of a hexagonal patch which is extended by an extra length. The various parameters considered are indicated in Table 2. The proposed antenna geometry is shown in Figure 3.
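The paper states that the patch dimensions are calculated from antenna-theory equations but does not reproduce them. The sketch below applies the standard transmission-line-model formulas (textbook results, e.g., Balanis, not taken from this paper) to the stated design point of 2.4 GHz on FR-4.

```python
# Standard transmission-line-model design equations for a rectangular patch,
# applied to the paper's design point: f0 = 2.4 GHz, er = 4.4, h = 1.6 mm.
from math import sqrt

c = 3e8                        # speed of light, m/s
f0, er, h = 2.4e9, 4.4, 1.6e-3

W = c / (2 * f0) * sqrt(2 / (er + 1))                       # patch width
e_eff = (er + 1) / 2 + (er - 1) / 2 / sqrt(1 + 12 * h / W)  # effective permittivity
dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)) / \
     ((e_eff - 0.258) * (W / h + 0.8))                      # fringing-field extension
L = c / (2 * f0 * sqrt(e_eff)) - 2 * dL                     # physical patch length

print(f"W = {W*1e3:.2f} mm, L = {L*1e3:.2f} mm, e_eff = {e_eff:.2f}")
# prints roughly W = 38.04 mm, L = 29.44 mm for this design point
```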
The target resonant frequency was obtained by properly designing the length and width of the substrate and patch, and also the feed point of the probe. The simulation results show that the antenna fulfils the requirements of Wi-Fi applications in the S-band.

Test and Results
Antennas play a pivotal role in daily life, which is the reason for so much research in this field. A number of ongoing studies aim at efficient and compact antennas. With the same aim, rectangular and hexagonal shaped antennas were designed here. The performance analysis was based in particular on radiation pattern, bandwidth and return loss. In this work, rectangular and hexagonal microstrip patch antennas are presented, and the patch antenna with the better performance is fabricated.

Simulation Results of the Rectangular Microstrip Antenna
This section presents the simulation results of the rectangular microstrip patch antenna: antenna gain, voltage standing wave ratio (VSWR), return loss, directivity and radiation pattern. (The relation between return loss and VSWR used throughout is illustrated in the sketch after this section.)

Antenna Gain: Antenna gain describes how much power is radiated in a given direction. The designed antenna has a realized gain of 3.38 dB at the resonant frequency of 2.4 GHz, which means the antenna is most efficient at this frequency.

VSWR: The VSWR of the proposed antenna is 1.1253055 at the resonance frequency of 2.4 GHz. This value implies that the impedance matching between the source and the feed is good, which is an essential requirement for the proper working of the antenna. The VSWR graph of the proposed antenna is shown in Figure 5.

Return Loss: The designed antenna has a good return-loss characteristic of −24.589026 dB at the resonant frequency of 2.4 GHz. An increasingly negative return loss implies good impedance matching with respect to the reference impedance of 50 Ω. The return loss could be further improved by using different feeding techniques. The return loss of the rectangular microstrip antenna is shown in Figure 6.

Directivity: Directivity is the ability of the antenna to focus energy in a particular direction when transmitting, or to receive energy preferentially from a particular direction. The maximum radiation intensity equals 6.37 dBi at the resonant frequency of 2.4 GHz. The directivity of the simulated patch antenna is shown in Figure 7.

Radiation Pattern: The radiation pattern represents the radiation properties of the antenna as a function of space, describing how energy is radiated into, or received from, space. At the resonant frequency of 2.4 GHz, the radiation pattern is nearly linear-directional in the azimuthal and elevation planes. The 2-D radiation patterns (Phi = 0° and Phi = 180°) for the elevation and azimuthal planes of the proposed rectangular microstrip patch antenna are shown in Figures 8 and 9.

In summary, the rectangular antenna shows a return loss of −24.589026 dB and a VSWR of 1.1253055 at 2.4 GHz, implying good impedance matching to the 50 Ω reference, with a realized gain of 3.38 dB and a directivity of 6.37 dBi; the 2-D patterns are given in Figures 8 and 9.
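As noted above, the reported return-loss and VSWR pairs are linked by the standard reflection-coefficient relations. The short check below uses those textbook formulas (not from this paper) and reproduces both reported pairs.

```python
# Cross-check of the reported return-loss / VSWR pairs using
# |Gamma| = 10**(-RL/20) and VSWR = (1 + |Gamma|) / (1 - |Gamma|).
def vswr_from_return_loss(rl_db: float) -> float:
    gamma = 10 ** (-rl_db / 20)   # magnitude of the reflection coefficient
    return (1 + gamma) / (1 - gamma)

for name, rl in [("rectangular", 24.589026), ("hexagonal", 31.211818)]:
    print(f"{name}: RL = {rl:.2f} dB -> VSWR = {vswr_from_return_loss(rl):.5f}")
# rectangular -> 1.12531 (paper: 1.1253055); hexagonal -> 1.05657 (paper: 1.0565655)
```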
Simulation Results of the Hexagonal Microstrip Antenna
This section presents the simulation results of the hexagonal microstrip patch antenna: antenna gain, voltage standing wave ratio (VSWR), return loss, directivity and radiation pattern.

Antenna Gain: Antenna gain is the measure of how much power is radiated in a given direction. The designed antenna has a good realized gain of 3.42 dB at the resonant frequency, which means the antenna is most efficient there. The realized gain of the hexagonal microstrip patch antenna is shown in Figure 10.

VSWR: This is the ratio of the maximum value of the standing-wave voltage to its minimum value; the ideal VSWR for an antenna would be 1, and an antenna with lower VSWR has better return loss. The VSWR graph of the proposed antenna is shown in Figure 11. The VSWR is 1.0565655 at the resonance frequency, implying that the impedance matching between source and feed is good, an essential requirement for the proper working of the antenna.

Directivity: The directivity graph shows a maximum radiation intensity of 6.81 dBi at the resonant frequency of 2.397 GHz. Directivity is the ability of the antenna to radiate energy in a particular direction when transmitting or, when receiving, to capture energy from a particular direction. The directivity of the simulated patch antenna is shown in Figure 12.

Radiation Pattern: The 2-D radiation patterns (Phi = 0° and Phi = 180°) for the elevation and azimuthal planes of the proposed hexagonal microstrip patch antenna are shown in Figures 13 and 14. The radiation pattern represents the radiation of the antenna as a function of space, describing how energy is radiated into, or received from, space. At the resonant frequency of 2.397 GHz, the radiation pattern is linear-directional in the elevation plane and azimuthal angle.

In summary, the hexagonal antenna shows a return loss of −31.211818 dB at 2.397 GHz; the increasingly negative return loss implies good impedance matching with respect to the 50 Ω reference impedance. The VSWR is 1.0565655 at the resonance frequency, confirming good source-to-feed matching. The realized gain is 3.42 dB and the directivity 6.81 dBi; the 2-D patterns are given in Figures 13 and 14.
The radiation pattern is the graphical representation of the radiation properties of the antenna as a function of space, describing how energy is radiated into space by the antenna and how it is received. For the 2.397 GHz band, the radiation pattern is linear-directional in the azimuthal and elevation planes. Moreover, the hexagonal antenna has a bandwidth of 75.2 MHz, making it applicable to Wi-Fi applications in the S-band. As shown in Table 3, the hexagonal microstrip patch antenna performs slightly better in gain, directivity, VSWR and return loss than the rectangular one. Therefore, the hexagonal patch antenna was fabricated, and the fabricated antenna was measured with the Vector Network Analyzer.

Fabrication and Measurement Results of the Hexagonal Patch
Microstrip patch antennas are recommended because they are mostly easy to fabricate. With the help of the CST studio, one can design any type of antenna and characterize it; all materials necessary for modelling an antenna are available in the CST studio. After the antenna was simulated in the CST studio, it was fabricated using the MITS milling machine. Based on the results obtained, the fabricated patch antenna worked well and did not deviate much from the simulated design. The patch antenna shown in Figure 15, which is basically intended for Wi-Fi applications, is the one designed and characterized here.

During fabrication, the MITS milling machine had a slight error, and difficulty was encountered with the P1, P2 positioning on the FR-4 substrate. Moreover, while the machine was performing the routing operation, it stopped suddenly; the computer also hung and no other operation could be carried out. The machine could not be serviced before the research deadline, so the routing operation was done manually with a saw in the machine workshop. The substrate area was reduced by 3 mm × 3 mm due to the manual routing, so the fabricated substrate is slightly smaller than the designed antenna. Moreover, the substrate's thickness is uneven due to the unstable hatching of the milling machine. The fabricated antenna was then measured with the Vector Network Analyzer, but it could not be calibrated against the analyzer because no calibration head was available to mount on the instrument. The measured return loss is −16.13 dB at 2.625 GHz, although a better return loss could be expected with calibration. The frequency is shifted slightly due to the fabrication error of the milling machine. Since 2.625 GHz is within the Wi-Fi operating frequency range of 900 MHz to 5.8 GHz, the antenna is applicable to Wi-Fi applications in the S-band. The measurement result is shown in Figure 16.

Comparison of Simulation and Measurement Results of the Hexagonal Patch Antenna
The return loss of the designed antenna is −31.211818 dB (simulation) at the resonant frequency of 2.397 GHz, and that of the fabricated antenna is −16.13 dB (measurement) at the resonant frequency of 2.625 GHz. The required bandwidth is 75.2 MHz while the obtained bandwidth is 175 MHz, which means that the bandwidth of the fabricated antenna is much wider than that required in the simulation.
Although the frequency is slightly shifted due to the milling-machine error during fabrication, the fabricated antenna has a wider bandwidth than the designed antenna at the resonant frequency of 2.625 GHz. Moreover, 2.625 GHz is within the Wi-Fi operating frequency range, so the fabricated antenna is applicable to Wi-Fi applications in the S-band.

Discussions
The return loss of the designed antenna is −31.211818 dB from the simulation results at the resonant frequency of 2.397 GHz. During fabrication, the MITS milling machine had a slight error, and difficulty was encountered with the P1, P2 positioning on the FR-4 substrate. While the machine was performing the routing operation, it stopped suddenly, the computer hung, and no other operation could be carried out; the routing was therefore done manually with a saw in the machine workshop. The substrate area was reduced by 3 mm × 3 mm due to the manual routing, so the fabricated substrate is slightly smaller than the designed antenna, and the substrate's thickness is uneven due to the unstable hatching of the milling machine. The fabricated antenna was then measured with the Vector Network Analyzer, but it could not be calibrated because no calibration head, which must be mounted on the network analyzer, was available. The measured return loss is −16.13 dB at 2.625 GHz, although a better value could be obtained with calibration.

After measurement with the Vector Network Analyzer, the antenna's return loss is −16.13 dB and the bandwidth is 175 MHz at 2.625 GHz; the antenna therefore has good impedance matching. The required bandwidth is 75.2 MHz while the obtained bandwidth is 175 MHz, meaning that the bandwidth of the fabricated antenna is much wider than that required in the simulation, in line with comparisons to other studies [7,8,17-19]. Although the frequency is slightly shifted due to the milling-machine error, the fabricated antenna has a wider bandwidth than the designed one at the resonant frequency of 2.625 GHz. Since 2.625 GHz lies within the Wi-Fi operating frequency range (900 MHz-5.8 GHz), it is applicable to Wi-Fi applications in the S-band. (A short numerical comparison of the simulated and measured results is sketched after the conclusion.)

Conclusion
The microstrip patch antenna is a small radiating structure consisting of a dielectric substrate between a metallic conducting patch and a ground plane. The rapid development of modern communication systems, such as mobile, wireless and satellite communication and radar applications, requires portable devices with features including easy design, light weight, reduced size, compatibility with microwave and millimetre-wave integrated circuits, low production cost and easy fabrication, all of which microstrip antennas offer. However, a single antenna has the limitation that it serves a single application. Although various aspects of the patch antenna have been discussed at length in the chapters above, the entire investigation is summarized here for convenience. From the simulation, the return loss of the hexagonal antenna is −31.211818 dB at the resonant frequency of 2.397 GHz; the increasingly negative value of return loss implies near-perfect impedance matching to the reference characteristic impedance of 50 Ω.
The designed antenna has a good realized gain of 3.42 dB at the resonant frequency of 2.397 GHz, which means the antenna is most efficient at this frequency. The VSWR of the antenna is 1.0565655 at 2.397 GHz. The directivity, i.e., the maximum radiation intensity, equals 6.81 dBi at the resonant frequency of 2.397 GHz. The radiation pattern of the hexagonal antenna is linearly polarized in the elevation plane and azimuthal angle at the resonant frequency of 2.397 GHz. After measuring the fabricated antenna with the Vector Network Analyzer, the antenna's return loss is −16.13 dB and the bandwidth is 175 MHz at 2.625 GHz. The presented simulation and measurement results show the usefulness of the proposed antenna structure for Wi-Fi applications.
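As promised in the discussion, a small numeric comparison of the simulated and measured hexagonal-antenna results quoted in the text follows; the formulas are the standard definitions of frequency shift and fractional bandwidth, and the values are the paper's.

```python
# Frequency shift and fractional bandwidth from the quoted results.
f_sim, f_meas = 2.397e9, 2.625e9     # resonant frequencies, Hz
bw_sim, bw_meas = 75.2e6, 175e6      # bandwidths, Hz

shift_pct = (f_meas - f_sim) / f_sim * 100
fbw_sim = bw_sim / f_sim * 100       # fractional bandwidth, %
fbw_meas = bw_meas / f_meas * 100

print(f"frequency shift: {shift_pct:.1f}%")        # ~9.5%
print(f"fractional BW (sim):  {fbw_sim:.2f}%")     # ~3.14%
print(f"fractional BW (meas): {fbw_meas:.2f}%")    # ~6.67%
```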
6,388.2
2018-12-27T00:00:00.000
[ "Engineering", "Computer Science" ]
Rapid removal of Pb2+ from aqueous solution by phosphate-modified baker's yeast

Phosphate-modified baker's yeast (PMBY) was prepared and used as a novel bio-sorbent for the adsorption of Pb2+ from aqueous solution. The influencing factors, adsorption isotherms, kinetics and mechanism were investigated. Scanning electron microscopy (SEM), Fourier-transform infrared spectroscopy (FTIR) characterization and elemental analysis of PMBY showed that phosphate groups were successfully grafted onto the surface of the yeast. The kinetic studies suggested that the adsorption process followed pseudo-second-order chemisorption. The adsorption of Pb2+ on PMBY was spontaneous and endothermic. Furthermore, the adsorption of Pb2+ on PMBY rapidly achieved equilibrium (in just 3 min), and the maximum adsorption capacity of Pb2+ on PMBY was found to be 92 mg g−1 at 30 °C, about 3 times that of pristine baker's yeast. The suggested mechanism for Pb2+ adsorption on PMBY is based on ion exchange, electrostatic interaction and chelation between the phosphate groups and Pb2+. Compared with the pristine baker's yeast, the higher capacity and rapid adsorption of PMBY for Pb2+ was mainly due to the chelation and electrostatic interactions between the phosphate groups and Pb2+. In addition, the regeneration experiments indicated that PMBY was easily recovered through desorption in 0.01 M HCl, and that it still exhibited 90.77% of its original adsorption capacity for Pb2+ after five regeneration cycles, demonstrating excellent regeneration capability. PMBY has significant potential for the removal of heavy metals from aqueous solution owing to its rapid adsorption, high capacity and facile preparation.

Introduction
Lead is widely used in various fields, such as lead-acid batteries, construction materials, printing, pigments, fossil fuels, photographic materials and the manufacturing of explosives [1,2]. However, excessive discharge of lead into the environment can damage the ecosystem due to its highly poisonous nature towards living organisms. Lead is non-biodegradable and accumulates easily in the human body through the food chain, particularly when discharged into aquatic environments [3]. It is well known that lead exposure can cause severe health problems, such as physiological and neurological disorders, especially in children and even at low lead concentrations [4-6]. Lead is classified as a priority pollutant by the US Environmental Protection Agency (EPA). In addition, the permissible levels of Pb2+ in drinking water and wastewater are 0.05 mg L−1 and 0.005 mg L−1, respectively [7]. Considering the hazards associated with lead, a method for the highly efficient separation and recovery of lead from contaminated water is of great significance, not only for the full utilization of lead resources but also for the protection of human health and the ecological environment. Many methods have been used to treat wastewater containing lead, including chemical precipitation, electrochemical treatment, reduction, ion exchange, solvent extraction, adsorption and flotation [8-10]. There are disadvantages associated with most of these methods that restrict their application, including low efficiency, high energy consumption, the large quantities of toxic and expensive materials used, and the production of large amounts of sludge, which in some methods needs secondary treatment
[8,11]. Nevertheless, bio-adsorption has attracted considerable attention due to its environmentally friendly nature and low cost. Additionally, bio-adsorption can effectively remove soluble and insoluble pollutants without generating hazardous by-products [12]. Various microorganisms, such as bacteria, fungi and algae, are bio-sorption materials that can adsorb heavy metal ions [13-15]. For bio-adsorption technology, the selection of an appropriate biomaterial for the removal of hazardous heavy metals from aqueous solutions is a key process step [8]; the source, safety, cost and adsorption capacity should all be considered when selecting a suitable biomaterial. Among the aforementioned biomaterials, yeast cells are frequently used fungi that often serve as suitable sources of bio-sorbent materials due to their easy cultivation, inexpensive large-scale growth media, wide availability and safety [16,17]. Previous researchers have demonstrated that the surface of yeast cells contains abundant functional groups that can adsorb heavy metals, such as hydroxyl, carbonyl and amide groups. However, the sorption capacities of yeast cells are still unsatisfactory due to the limited number of surface functional groups [18]. Therefore, it is necessary to improve the adsorption performance of yeast cells, especially with regard to the adsorption of lead.

A number of modification strategies, such as the formation of nano-MnO2/nano-ZnO and hydroxyapatite on the yeast surface [19-21], modification with EDTAD/ethylenediamine/polymers [22-24], and pretreatment using ethanol/caustic [25], have been proposed to improve the adsorption capacity of yeast. Surface modifications of yeast with organic and inorganic materials provide a hybrid material with higher efficiency and capacity for the removal of heavy metals, by either introducing or exposing more functional groups on the surface of the raw material [26]. Although the aforementioned modifications of yeast improved the adsorption capacity for heavy metals, their relatively complicated synthesis and the difficult procurement of preparation materials led to high costs. Therefore, synthesizing new bio-sorbents capable of sequestering heavy metal ions from aquatic environments in a more competitive and practical way is desirable. To achieve this, it is necessary to fabricate low-cost, reliable, rapidly adsorbing, durable and efficient materials. Among these properties, rapid adsorption is one of the most serious problems hindering the commercial application of bio-sorbents: many bio-sorbents need a long time to reach adsorption equilibrium, which results in a significant waste of energy and reduces treatment efficiency. Therefore, considering the adsorption rate while synthesizing a novel bio-sorbent is highly important for the overall efficiency of the adsorption process.

Phosphate is an inorganic material that is non-toxic and inexpensive, and phosphate groups are known to have excellent chelating properties for metal ions. Thus, many phosphorylated materials have been applied to remove metal ions; for example, phosphorylated cellulose microspheres [27], phosphorylated chitosan [28] and phosphorylated starch [29] have been used as adsorbents for metal-ion removal. To the best of our knowledge, phosphate-modified baker's yeast has not been investigated in detail for the removal of lead from aqueous solutions.
When hydroxyapatite is formed on the surface of yeast, the functional groups of the pristine yeast do not participate in the synthesis reaction. In other words, it is worth studying whether phosphate-modified baker's yeast, obtained via the interaction between phosphate and the surface functional groups of baker's yeast, is a feasible and effective route to an efficient and cheap bio-sorbent for Pb2+. Herein, a phosphate-modified baker's yeast (PMBY) was prepared using a simple pathway involving phosphate treatment of baker's yeast and dry-heating. The adsorption characteristics, kinetics and isothermal behavior of PMBY for Pb2+ adsorption from aqueous solution were then explored. Subsequently, a comparative analysis, along with scanning electron microscopy (SEM), Fourier-transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS) analyses, was conducted to further explore the adsorption performance and mechanism of PMBY.

Materials
Commercial fresh baker's yeast was supplied by Angel Yeast Co., Ltd., China, and was repeatedly washed with deionized water to remove adhering dirt and soluble impurities. The resulting yeast was dried at 80 °C for 24 h, then crushed and sieved to a particle size of less than 100 mesh. The resulting purified yeast was named the pristine baker's yeast. Various chemicals and reagents, including sodium dihydrogen phosphate (NaH2PO4·2H2O), sodium hydrogen phosphate (Na2HPO4·12H2O), sodium hydroxide (NaOH), nitric acid (HNO3), lead nitrate (Pb(NO3)2) and ammonium molybdate ((NH4)6Mo7O24·4H2O), were purchased from Aladdin Biochemical Technology Co., Ltd., China. All chemicals were of analytical reagent grade and were used without further purification. Lead nitrate was employed as the Pb2+ source. The stock standard solution of Pb(NO3)2 was obtained from the National Analysis Center for Iron and Steel (Beijing, China), and working solutions were obtained by diluting the stock solution. Furthermore, 1 M NaOH and 1 M HNO3 were used to adjust the pH values. All solutions were prepared using deionized water.

2.2 Preparation of phosphate-modified baker's yeast
7.5 g of phosphates, comprising 3.49 g NaH2PO4 and 4.01 g Na2HPO4 (a NaH2PO4·2H2O : Na2HPO4·12H2O mass ratio of 0.87 : 1 (ref. 29)), was dissolved in 100 mL of deionized water. Then 5.0 g of baker's yeast and 0.01 g of urea were added to the above solution. The pH of the mixture was adjusted to 6 using a pH meter (PHSJ-4F, China), and the mixture was stirred continuously (200 rpm, 4 h) at room temperature. It was then centrifuged at 4 °C and 1000 rpm for 10 min in a high-speed refrigerated centrifuge (GL-21M, China). The solid was dried at 50 °C under 0.7 MPa in a vacuum drying oven (DZF-6050, China) until the moisture content was less than 15 wt%. The dried product was incubated at 140 °C for 4 h in a vacuum drying oven, after which it was washed with deionized water and centrifuged until no further change in the colour of the supernatant was observed. (NH4)6Mo7O24·4H2O was added and the mixture was heated at around 60-70 °C in a thermostatic water bath (HJ-M6, China). Finally, the product was ground in an agate mortar (YXY-A01, China) and sieved to a particle size of less than 100 mesh using a standard sieve. The product was dried in a vacuum drying oven at 50 °C under 0.7 MPa for 10 h before further use. The detailed synthesis process is shown in Fig. 1.
Characterization

The X-ray powder diffraction (XRD) patterns were recorded on an X'Pert3 Powder diffractometer (PANalytical B.V., The Netherlands) using Cu Kα radiation (λ = 1.54 Å, 40 kV, 40 mA) over a 2θ range of 5-90° with a resolution of 0.026°. The scanning speed was 8.0° min−1, and the measurements were conducted at ambient temperature. The morphology and elemental composition of the samples were studied using tungsten-filament scanning electron microscopy (SEM) with an energy-dispersive spectrometer (EDS) (JSM-7500F, Japan), operated at an acceleration voltage of 20 kV. Fourier-transform infrared (FTIR) spectra were recorded on a PerkinElmer spectrometer (L1600400 Spectrum Two DTGS, USA) using potassium bromide (KBr) pellets with a KBr-to-sample mass ratio of 700 : 1. The FTIR spectra were obtained within the range of 400-4000 cm−1. 30,31 The elemental analyses (C, H, O and N) were performed on an elemental analyzer (Elementar Vario Micro Cube, Germany). Moreover, the phosphorus content was assayed following the Chinese National Standard (GB 5009.268-2016) and analyzed using a UV/Vis spectrophotometer (UV-VIS752, China) at a wavelength of 660 nm. X-ray photoelectron spectroscopy (XPS) was used to analyze the surface elemental composition of the samples. The measurements were carried out on a Kratos Axis Ultra DLD instrument (SHIMADZU, Japan) at room temperature. The photoelectrons were excited by a monochromatic beam of Al Kα X-rays (hν = 1486.6 eV), and the resulting binding energy peaks were referenced to the C 1s peak at 284.8 eV. N2 adsorption-desorption isotherms were measured using a surface area analyzer (JW-BK132F, China). The specific surface area and pore size distribution of the samples were determined using the Brunauer-Emmett-Teller (BET) method and the Barrett-Joyner-Halenda (BJH) model.

Batch adsorption studies

Adsorption experiments were conducted under various conditions of pH, PMBY dosage, initial Pb2+ concentration, contact time, and temperature. For the sorption process, 100 mL of simulated Pb2+ solution with different initial concentrations (ranging between 25-250 mg L−1) was added to a series of 250 mL conical flasks. After a certain amount of PMBY was added to the Pb2+ solutions and the pH was adjusted to a specified value, the mixture was agitated on a rotary shaker (150 rpm) for a specified time (t, min) at a specified temperature (T, °C). After reaching equilibrium, the mixtures were filtered through a 0.45 µm filter membrane, and the filtrate was used to determine the Pb2+ concentration using an atomic absorption spectrophotometer (AAS, Hitachi Z-5000, Japan). In this work, all adsorption experiments were performed in triplicate, and the average values were used to report the results. The removal efficiency and the adsorption capacity of PMBY for Pb2+ are represented by R (%) and qe (mg g−1), respectively, and were calculated using eqn (1) and (2), respectively:

R (%) = (C0 − Ce)/C0 × 100   (1)

qe = (C0 − Ce)V/m   (2)

where C0 and Ce are the initial and equilibrium concentrations of Pb2+ in the solution (mg L−1), respectively, V is the volume of the testing solution (L), and m is the amount of the bio-sorbent PMBY (g).

Regeneration of PMBY

To evaluate the regeneration of the as-obtained PMBY, the cycle-number-dependent adsorption capacities were analyzed for 100 mg L−1 Pb2+. The saturated PMBY loaded with Pb2+ was dispersed in various eluents (0.01 M HCl, HNO3 and H2SO4).
Aerwards, the solid materials were collected by centrifuging at 10 000 rpm for 20 min, washed thoroughly with deionized water, and then, reused in the next run of adsorption experiments. Characterization of PMBY SEM analysis is a useful tool for characterizing the surface morphology of biosorbents. The PMBY exhibited clear differences in morphology relative to the pristine baker's yeast, as can be seen from Fig. 2a and c. The pristine baker's yeast was approximately spherical or ellipsoidal with the diameter of around 3-4 mm, while the surface was smooth and regular. Aer the phosphate modication, the PMBY displayed irregular shape and a large volume of pores was formed due to the aggregation of cells, which could prove benecial to the adsorption of lead ions from aqueous solution. In addition, the corresponding Energy Dispersive Spectrometer (EDS) patterns ( Fig. 2b and d) were used to characterize the basic elements on the surface of pristine baker's yeast and PMBY. As can be seen from Fig. 2b and d, the new peaks of P and Na appeared on PMBY except for the peaks of C, N and O, which were also present in the original yeast. The present form of phosphorus and the introducing mechanism were further studied using the FTIR spectra. The existence of gold elements was attributed to the samples, which were gold-coated with a thin layer of gold before the SEM analysis. Fig. 3a shows the FTIR spectra of baker's yeast and PMBY. The FTIR spectra of pristine baker's yeast consisted of typical peaks of hydroxyl (3298.15 cm À1 ), 20 carboxyl (1384.29 cm À1 ), 24 amine-I (1654.54 cm À1 ), amide-II (1541.63 cm À1 ), amide-III (1239.31 cm À1 ), and phosphate groups (1048.02 cm À1 ). [32][33][34] Compared with the pristine baker's yeast (shown in Fig. 3a), some changes were observed in the FTIR spectra of PMBY. The peaks at 828.09 and 615.76 cm À1 represented the P-O-C aliphatic bonds and symmetric stretching vibration of PO 4 , respectively. 27,35 The new peaks at 828.09 and 615.76 cm À1 coincided with the phosphate group, 36 and the two peaks at 1048.02 and 1076.32 cm À1 presented in the pristine baker's yeast merged into one peak at 1071.86 cm À1 , which was assigned to P-O vibration, while its intensity increased remarkably. 37 These changes indicated that the phosphate groups were successfully graed on the surface of yeast. Besides, the peak height and peak band of hydroxyl, carboxyl and amine groups of pristine baker's yeast changed aer the phosphate modication, which indicated that these groups had participated in the reaction. The phosphate groups, which were linked to the yeast, may have appeared due to either the substitution reaction or the ligand exchange process between the O-H group of hydroxyl groups and carboxylic acids, and phosphate. This can be represented using reaction eqn (3)-(6). where^R represents the surface. Additionally, the amine groups and phosphate groups could react through electrostatic attraction and hydrogen bonding. The XRD patterns of pristine baker's yeast and PMBY composites are shown in Fig. 3b. Pristine baker's yeast presented a broad strong peak at about 2q of 20 . In contrast to the pristine baker's yeast, the PMBY composites not only showed stronger diffraction pattern at about 2q of 20 , but also exhibited few well-dened peaks involving crystal phosphate. These results suggested that the phosphate in PMBY composites may be in a non-stoichiometric and amorphous phase. 
20 This was attributed to the content of crystalline phosphate in PMBY not reaching the detection limit of XRD (about 5 wt%) and to its poor crystallinity, which placed it outside the detectable range. 38

Adsorption behavior of PMBY for Pb2+

3.2.1 Effect of pH. Solution pH is one of the most important environmental factors affecting the sorption of metallic ions. To observe the influence of pH on Pb2+ adsorption, adsorption experiments at various pH values were conducted (C0 = 100 mg L−1, PMBY dosage = 0.08 g, V = 100 mL, t = 30 min, T = 30 °C and pH = 2.0-7.0), and the results are shown in Fig. 4a. The adsorption of Pb2+ increased rapidly from 5.39 to 83.14 mg g−1 as the pH increased from 2.0 to 5.0. This pH dependence indicated that the bio-sorption capacity of PMBY for Pb2+ was governed by surface complexation. When the solution pH was within the range of 2.0-3.0, a relatively low adsorption capacity was observed, which could be attributed to the protonation of the active sites and the competition between H+ and Pb2+ for binding sites. 6 As the pH increased from 3.0 to 5.0, H+ ions left the surface of the bio-sorbent PMBY, decreasing the protonation of the functional groups and improving the adsorption capacity. In addition, the optimum uptake was observed at pH 5.0 due to the presence of ligands (such as carboxyl, amide and phosphate groups) on the surface of the sorbent, which have pKa values within the range of 3-5 (ref. 39). However, at higher pH values (pH > 6.0), Pb2+ will precipitate out of the solution, and it is therefore difficult to judge whether adsorption or precipitation has taken place. Hence, the optimum initial pH value of 5.0 was used in all further experiments.

3.2.2 Effect of dosage of PMBY bio-sorbent. The removal of Pb2+ using PMBY at various dosages was investigated (C0 = 50 mg L−1, pH = 5.0, T = 30 °C, PMBY dosage = 0.02-0.20 g, V = 100 mL and t = 30 min), and the results are shown in Fig. 4b. The adsorption efficiency increased sharply from 45.15% to 88.16% as the PMBY dosage increased from 0.02 to 0.08 g, because the surface area and the number of binding sites of PMBY available to Pb2+ increased accordingly with the sorbent dosage. When the PMBY dosage increased further from 0.08 to 0.2 g, the adsorption efficiency for Pb2+ increased by only 4.8%. Given this small increase, a PMBY dosage of 0.08 g was chosen for the further experiments.

3.2.3 Adsorption kinetics. The effect of contact time on the adsorption capacity of PMBY for Pb2+ was investigated (C0 = 50, 100, 150 mg L−1, pH = 5.0, T = 30 °C, PMBY dosage = 0.08 g, V = 100 mL), and the results are presented in Fig. 5a. The results show that the rate of adsorption of Pb2+ on PMBY was high, and only around 3 min was required to reach equilibrium. The rapid interaction of a sorbent with the targeted metallic ions is desirable and beneficial for practical adsorption applications. The rapid uptake indicated that the surface of PMBY had plenty of vacant active sites for the sorption of lead ions. After the first 3 minutes, further adsorption became difficult due to repulsive forces between the lead ions adsorbed on the PMBY surface and the lead ions in the bulk solution. 40 Considering practical operation, the optimal time was selected as 15 min for the further analyses in this work.
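For a sense of scale before the model fits, the dosage-study numbers above can be converted into a capacity with eqn (2); the input values are quoted from the text, and the arithmetic is only an illustrative check:

\[
C_e = C_0\left(1-\frac{R}{100}\right) = 50\times(1-0.8816) = 5.92\ \mathrm{mg\ L^{-1}},
\]
\[
q_e = \frac{(C_0-C_e)\,V}{m} = \frac{(50-5.92)\times 0.1}{0.08} \approx 55.1\ \mathrm{mg\ g^{-1}}.
\]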
The pseudo-rst-order (eqn (7)) and pseudo-second-order (eqn (8)) kinetic models were introduced to determine the adsorption kinetics of Pb 2+ . 39 where q t is the amount adsorbed at time t (min) in mg g À1 , and k 1 (min À1 ) and k 2 (g mg À1 min À1 ) represent the adsorption rate constants for pseudo-rst-order and pseudo-second-order, respectively. The tting results are presented in Fig. 5a and Table 1. The calculated correlation coefficient values (r 2 ) for pseudo-rst-order and pseudo-second-order kinetics were found to be higher than 0.97, which show that both kinetic models can be used to predict the adsorption behavior of Pb 2+ using PMBY for the entire contact time ( Table 1). The predicted q e values at different Pb 2+ concentrations using pseudo-second-order model were in a better agreement with the experimental values than the pseudo-rst-order, which indicated that the adsorption process could be explained using pseudo-second-order model, while the adsorption rate was controlled by chemisorption. [41][42][43] In addition, the pseudo-second-order rate constant (k 2 ) decreased as the Pb 2+ concentration increased from 50 to 150 mg L À1 , suggesting that it took longer to achieve the adsorption equilibrium at higher Pb 2+ concentrations, which may have been due to the limited number of available active sites on PMBY. It is interesting to observe that, PMBY not only efficiently removed Pb 2+ from the aqueous solution, but it also resulted in a better and faster removal rate than some other bio-sorbents. In order to display the advantage of PMBY, the maximum adsorption capacity of PMBY at 30 C and the equilibrium time were compared with various yeast-based bio-sorbents used for Pb 2+ adsorption ( Table 2). The results indicated that the PMBY had relatively better adsorption capacity than the most of Paper reported yeast-based bio-sorbents. Although the adsorption capacity of PMBY is lower than some bio-sorbents reported in literature (Table 2), the adsorption equilibrium time was very short compared with other reports. The rapid adsorption of PMBY makes it competitive to various other bio-sorbents. 3.2.4 Isothermal study. Fig. 5b shows the sorption isotherms for Pb 2+ adsorbed on PMBY under the conditions of: pH ¼ 5.0; PMBY dosage ¼ 0.08 g; V ¼ 100 mL, t ¼ 15 min; T ¼ 25 C, 30 C, 35 C and 40 C, and C 0 ranging between 25-250 mg L À1 . The results indicated that the sorption capacity of PMBY increased both with temperature and initial Pb 2+ concentration. The q e increased signicantly at low Pb 2+ concentrations, which indicated that the initial Pb 2+ concentration played a critical role, which could produce a key driving force among lead ions to reduce the mass transfer resistance of lead between the liquid and solid phases, and hence, can enhance the effective collision probability between the lead ions and PMBY. The equilibrium adsorption capacity remained nearly constant even when the initial Pb 2+ concentrations went past a certain value (100 mg L À1 ; in this work), which could be explained by the saturation of active sites on PMBY surface. These results suggest that the available active sites on PMBY were the limiting factor for the adsorption of lead ions. Meanwhile, the adsorption capacity of PMBY for Pb 2+ increased from 84.26 to 98.77 mg g À1 with the increase in temperature from 25 to 40 C, which indicated that the adsorption process was endothermic in nature. To describe the sorption characteristics of PMBY more adequately, the equilibrium data from Fig. 
5b were modeled using the Langmuir and Freundlich isotherm models. 48 The Langmuir isotherm model assumes homogeneous adsorption during the adsorption process and can be expressed by eqn (9):

qe = qm KL Ce/(1 + KL Ce)   (9)

where qm is the maximum amount of Pb2+ adsorbed by PMBY (mg g−1) and KL is the Langmuir constant related to the sorption energy (L mg−1). The Freundlich isotherm model assumes heterogeneous adsorption and infers that the heavy-metal ions bound to surface sites may affect the adjacent sites. The Freundlich isotherm is represented by eqn (10):

qe = KF Ce^(1/n)   (10)

where KF is the Freundlich constant related to the strength of the interactions between Pb2+ and PMBY [(mg g−1)(L mg−1)^(1/n)], and 1/n is an empirical parameter related to the adsorption intensity, which varies with the heterogeneity of the sorbent. Fig. 5b and Table 3 display the fitting results for the Langmuir and Freundlich models and show that the Langmuir isotherm fits the equilibrium data better than the Freundlich isotherm. Firstly, the Langmuir isotherm gave a higher correlation coefficient (r² > 0.98) than the Freundlich isotherm (r² < 0.81). Secondly, the qm values (87.39, 91.53, 96.06 and 99.56 mg g−1 at 25, 30, 35 and 40 °C, respectively) obtained from the Langmuir isotherm coincided well with the experimental values. Therefore, the sorption process was mainly monolayer sorption of Pb2+ onto the homogeneous surface of PMBY. Consequently, the Langmuir isotherm was further analyzed using a dimensionless constant, known as the equilibrium parameter or separation factor, RL, which can be calculated using eqn (11): 6,8

RL = 1/(1 + KL C0)   (11)

Different RL values represent four kinds of adsorption characteristics: unfavorable (RL > 1), linear (RL = 1), favorable (0 < RL < 1) and irreversible (RL = 0). Based on the temperatures and initial lead ion concentrations used in this work, the RL values were calculated, and all of them ranged between 0 and 1 (Fig. 6), confirming that the sorption of Pb2+ by PMBY was favorable.

The thermodynamic parameters were obtained from the standard relations ΔG = −RT ln K and ΔG = ΔH − TΔS, where R is the universal gas constant (8.314 J mol−1 K−1), T is the temperature (K), and K is the equilibrium constant at temperature T. The ΔS and ΔH values can be obtained from the slope (equal to −ΔS) and the intercept, respectively, of the plot of ΔG against T, which is shown in Fig. 7. The values of the thermodynamic parameters are presented in Table 4. At the different temperatures studied, the negative values of ΔG demonstrate that the adsorption of Pb2+ on PMBY was spontaneous, while the decrease of ΔG with increasing temperature (from 25 to 40 °C) reveals that elevated temperature promotes the binding of Pb2+ onto the surface of the PMBY sorbent. The positive value of ΔH confirms that the adsorption process was endothermic and involved chemisorption, as higher temperatures can promote the dissolution of lead ions and reduce the protonation of the surface functional groups of the adsorbent, facilitating the chelation between Pb2+ and PMBY. 8 The positive value of ΔS shows that the randomness increased during the reaction, owing to the destruction of the hydration shell formed by water molecules on the surface of PMBY: as Pb2+ was bound on PMBY, a number of water molecules entered the solution. All the thermodynamic parameters reflect that the bio-sorbent PMBY has an excellent affinity for Pb2+.
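As a numerical illustration of eqn (11), take a hypothetical Langmuir constant of KL = 0.1 L mg−1 (chosen only for the arithmetic; the fitted constants are listed in Table 3) at C0 = 100 mg L−1:

\[
R_L = \frac{1}{1 + K_L C_0} = \frac{1}{1 + 0.1\times 100} = \frac{1}{11} \approx 0.09,
\]

which lies in the favorable window 0 < RL < 1; larger C0 pushes RL towards 0, consistent with the trend shown in Fig. 6.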
Adsorption mechanism

Nitrogen adsorption-desorption isotherms were measured at −196.15 °C and used to calculate the specific surface area by the multipoint BET method. The nitrogen isotherms of the adsorbent PMBY before and after adsorption (PMBY-Pb) are shown in Fig. 8a. The isotherms of PMBY and PMBY-Pb can be described as Type IV isotherms, indicating that PMBY and PMBY-Pb are mesoporous materials. The BET surface areas of PMBY and PMBY-Pb were calculated to be 6.140 and 40.686 m² g−1, respectively. The BJH average pore sizes of PMBY and PMBY-Pb were estimated from the desorption data to be 7.586 and 11.216 nm, respectively. After adsorption, the surface area and pore size of PMBY-Pb were substantially larger than those of PMBY before adsorption, indicating that PMBY has a great swelling capacity when dispersed in water. This swelling capacity can be attributed to the phosphate groups in PMBY, which confer a higher water-holding capacity and led to the higher adsorption performance of PMBY for Pb2+. This result is in accordance with the findings reported by Qintie Lin et al. 49

After the adsorption of lead ions, a large number of bright precipitates appeared on the surface of PMBY, and the composites displayed a dense and compact structure (Fig. 2c and e). The EDS patterns (Fig. 2d and f) showed that a new peak of Pb appeared while that of Na disappeared for PMBY-Pb compared with PMBY. These changes illustrate that the lead ions were indeed adsorbed on the surface of PMBY through an ion-exchange mechanism. Furthermore, comparing the FTIR spectra of PMBY and PMBY-Pb (shown in Fig. 3a), two new peaks at 1010.17 and 657.69 cm−1 were assigned to P-O-Pb and metal-oxygen (metal-hydroxide) vibrations, respectively. 27,50 The characteristic peaks of the phosphate groups obviously shifted or became weaker, which demonstrates that the removal of Pb2+ was mainly due to the phosphate groups.

The adsorption mechanism was further investigated using XPS analysis. The XPS spectra of the pristine baker's yeast, PMBY and PMBY-Pb are displayed in Fig. 8b. Both phosphorus and lead were clearly observed (Fig. 8b), indicating that the phosphorylation reaction had occurred and that the lead ions were adsorbed on the surface of PMBY. The high-resolution spectra of O 1s, P 2p, N 1s and Pb 4f are shown in Fig. 9, and the proposed components and their binding energies are presented in Table 5. Comparing the O 1s, P 2p and N 1s spectra of the pristine baker's yeast and PMBY (Fig. 9a, b and c), several new peaks emerged beside the original peaks of the O-, P- and N-containing functional groups of the pristine baker's yeast. The new peaks confirm that phosphate groups were introduced on the surface of the pristine baker's yeast. The different binding energies of C-O, O=C-O and -NH2 in PMBY relative to the pristine baker's yeast illustrate that the hydroxyl, carboxyl and amino groups reacted with the phosphate. These results are consistent with the FTIR characterization. After adsorption, the peaks of the O-, P- and N-containing functional groups of PMBY showed shifts in binding energy; the reductions in the binding energies of P=O and P-O were the most obvious, revealing that the phosphate groups were mainly involved in the adsorption of lead. The Pb 4f spectrum of PMBY-Pb is depicted in Fig. 9d. The peaks at around 140 eV were assigned to Pb 4f, arising from the adsorption of Pb2+.
The peaks at 143.19 and 138.33 eV could be assigned to Pb2+, indicating that lead was loaded on the surface of PMBY through chelation. Moreover, the Pb 4f peaks centered at 142.66 and 137.8 eV suggested that Pb2+ may also have been bound in PMBY in the form of Pb-O-P through an ion-exchange process. According to the XPS spectra of PMBY and PMBY-Pb, the Na peak disappeared in the spectrum of PMBY-Pb, indicating that the adsorption of Pb2+ on PMBY involved ion exchange. This result was also confirmed by the SEM-EDS results. In addition, it is well known that metal cations are typical Lewis acids and that phosphate groups, with their low acid-base ionization equilibrium constants (pKa = 1-2), show typical Lewis base properties over a wide range of pH values. 27 Therefore, based on Lewis acid-base theory, lead ions can interact with the phosphate groups through chelation and electrostatic interaction. Owing to the successful introduction of phosphate groups and their interactions (ion exchange, chelation and electrostatic attraction) with Pb2+, the adsorption performance of PMBY for Pb2+ was significantly improved.

Fig. 9 High-resolution spectra of O 1s (a), P 2p (b) and N 1s (c) for the pristine baker's yeast, PMBY and PMBY-Pb, and the Pb 4f XPS spectra of PMBY-Pb (d).

Fig. 10 shows the reaction scheme and the proposed schematic of the adsorption mechanism of PMBY for Pb2+. Firstly, the surface functional groups of the baker's yeast cell walls, such as hydroxyl, carboxyl and amine groups, reacted with NaH2PO4/Na2HPO4. The detailed synthesis is shown in Fig. 1. The phosphate groups were linked to the yeast through a substitution reaction or a ligand-exchange process between the O-H groups of the hydroxyls and carboxylic acids and the phosphate. Additionally, the amine groups and phosphate groups could react through electrostatic attraction and hydrogen bonding. After this reaction, the novel PMBY bio-sorbent was obtained and used to remove Pb2+ from aqueous solution. The phosphate groups grafted onto the surface of the pristine baker's yeast played a significant role during the adsorption process. As shown in Fig. 10, PMBY efficiently removed Pb2+ from aqueous solution. The process depended mainly upon the interactions (ion exchange, chelation and electrostatic attraction) between the phosphate groups and Pb2+. The adsorption mechanism was confirmed by the SEM, FTIR and XPS analyses.

Regeneration of PMBY

A good adsorbent should not only possess a high adsorption affinity but also show excellent regeneration properties; these characteristics are of great importance for decreasing its production and application costs. The adsorption-desorption study was carried out using different acid eluents (0.01 M HCl, HNO3 and H2SO4). 8 For this process, 0.08 g of PMBY was added to 100 mL of a 100 mg L−1 Pb2+ solution in conical flasks, and the pH was adjusted to 5.0. The mixture was then shaken on a rotary shaker (150 rpm) for 15 min at 30 °C. Subsequently, the Pb-loaded PMBY (PMBY-Pb) was treated with 100 mL of the abovementioned acid eluents under the same conditions for 120 min. The mixtures were then filtered, and the filtrate was used to determine the Pb2+ concentration by AAS. The results are shown in Fig. 11. The order of desorption of Pb2+ was found to be HCl (89.85%) > HNO3 (77.42%) > H2SO4 (69.06%) (Fig. 11a).
The better recovery of Pb2+ in 0.01 M HCl was due to the smaller size of the Cl− ions in comparison with the NO3− and SO4²− ions. 8 Hence, the recyclability of PMBY for the adsorption of Pb2+ was evaluated using 0.01 M HCl solution. As can be seen from Fig. 11b, after five regeneration cycles, PMBY still exhibited 90.77% of its original adsorption capacity. Therefore, it can safely be said that the adsorption efficiency of PMBY towards Pb2+ remained satisfactory after several regeneration cycles with HCl as the eluent. All these results suggest that PMBY can act as a renewable and efficient adsorbent for the remediation of wastewater containing Pb2+.

Fig. 10 Reaction scheme and schematic of the adsorption mechanism of Pb2+ by PMBY.

Conclusions

In this work, a phosphate-modified baker's yeast (PMBY) was successfully synthesized by phosphate treatment of baker's yeast combined with dry-heating. The surface morphology of PMBY exhibited an irregular shape and a large volume of pores, which were beneficial for the adsorption of Pb2+. The results of FTIR, elemental analysis and XPS showed that phosphate groups were indeed introduced onto the yeast, and that the hydroxyl, carboxyl and amine groups of the pristine baker's yeast participated in the phosphorylation process. The efficient adsorption of Pb2+ by PMBY depended mainly on the additional phosphate groups, which fixed the Pb2+ ions through ion exchange, electrostatic attraction and chelation. The adsorption capacity of PMBY was superior to that of the pristine baker's yeast, and the adsorption process was very rapid, attaining equilibrium in around 3 min. The adsorption kinetic and isotherm analyses revealed that the Pb2+ adsorption process is well described by pseudo-second-order kinetics and the Langmuir isotherm model, respectively. Furthermore, the adsorption of Pb2+ on the surface of PMBY was spontaneous and endothermic. The main Pb2+ adsorption mechanism of PMBY was based on ion exchange, electrostatic interaction and chelation between the phosphate groups and Pb2+. In addition, the bio-sorbent PMBY showed excellent regeneration performance, with 0.01 M HCl used as the eluent in the regeneration experiments. Finally, the results of this study show that PMBY has significant potential as an efficient and useful adsorbent for the removal of heavy-metal ions from industrial wastewater.

Conflicts of interest

There are no conflicts to declare.
HydrothermalFoam v1.0: a 3-D hydro-thermo-transport model for natural submarine hydrothermal systems

Herein, we introduce HydrothermalFoam, a three-dimensional hydro-thermo-transport model designed to resolve fluid flow within submarine hydrothermal circulation systems. HydrothermalFoam has been developed on the OpenFOAM platform, a finite-volume-based C++ toolbox for fluid-dynamic simulations and for developing customized numerical models that provides access to state-of-the-art parallelized solvers and to a wide range of pre- and post-processing tools. We have implemented a porous-media Darcy-flow model with associated boundary conditions designed to facilitate numerical simulations of submarine hydrothermal systems. The current implementation is valid for single-phase fluid states and uses a pure-water equation-of-state (IAPWS-IF97). We here present the model formulation, OpenFOAM implementation details, and a sequence of 1-D, 2-D and 3-D benchmark tests. The source code repository further includes a number of tutorials that can be used as starting points for building specialized hydrothermal flow models. The model is published under the GNU General Public License v3.0.

Introduction

High-temperature hydrothermal circulation through the ocean floor plays a key role in the exchange of mass and energy between the solid earth and the global ocean (German and Seyfried, 2014; Elderfield and Schultz, 1996). It influences the thermal evolution of young oceanic plates (Stein and Stein, 1994; Theissen-Krah et al., 2016), modulates global ocean biogeochemical cycles (German et al., 2016; Tagliabue et al., 2010), and is associated with the massive sulfide ore deposits that form around vent sites (Hannington et al., 2011). Hydrothermal convection occurs over large spatial and temporal scales. At fast-spreading ridges, convection cells may either be confined to the upper extrusive crust above the axial melt lens (Faak et al., 2015; Coumou et al., 2008; Fontaine et al., 2009) or extend all the way down to the crust-mantle boundary at approx. 6 km depth (Hasenclever et al., 2014; Dunn et al., 2000; Cathles, 1993). At slow-spreading ridges, fluid circulation may extend much deeper (up to 35 km) into the ultramafic mantle (Schlindwein and Schmid, 2016), although the maximum extent of the brittle layer remains debated and may be confined to the upper 15 km (Grevemeyer et al., 2019). Such deep-reaching fluid flow can sometimes be channelized along deep detachments at fault-controlled systems such as the Trans-Atlantic Geotraverse (TAG) and Longqi (Tao et al., 2020; deMartin et al., 2007) and/or may propagate to greater depths via thermal cracking (Lister, 1974). Temperatures can also vary over large ranges, with high-temperature systems typically being driven by a magmatic heat source of 1000 °C or more and porosity and permeability staying open up to 600-800 °C (Lister, 1974). Finally, hydrothermal systems can evolve over long timescales of up to 50-100 kyr (Jamieson et al., 2014) but also respond to shorter events like glacial sea-level changes (Middleton et al., 2016), magmatic and seismic events (Germanovich et al., 2000; Wilcock, 2004; Singh and Lowell, 2015), and even tidal pressure changes (Crone et al., 2011; Barreyre et al., 2018).
These spatial and temporal scales, in combination with the extreme pressure (P) and temperature (T) conditions (up to 300 MPa and 1000 °C) of submarine hydrothermal systems, make direct and long-term observations challenging and pose a problem for laboratory work (Ingebritsen et al., 2010). Hence, numerical simulations have become indispensable tools for understanding and characterizing fluid flow and for relating seafloor observations to physicochemical processes at depth. In the last few decades, significant progress has been made in hydrothermal flow modeling, both theoretically and numerically (Lowell, 1991; Ingebritsen et al., 2010). Due to the high complexity of the heterogeneous sub-seafloor, a continuum porous-medium approach is typically used, in which the conservation equations are written for control volumes with effective properties such as Darcy velocity, permeability, and porosity. Such approaches have been used successfully to make fundamental progress in our understanding of the nature and mechanisms of hydrothermal transport, including a thermodynamic explanation of black smoker temperatures (Jupp and Schultz, 2000), the three-dimensional structure of hydrothermal circulation cells at mid-ocean ridges (Coumou et al., 2009; Hasenclever et al., 2014), and phase-separation phenomena and salinity variations of hydrothermal fluids (Lewis and Lowell, 2009a, b; Coumou et al., 2009; Weis et al., 2014). Current numerical simulators of hydrothermal flow can be divided into two families: (1) multiphase codes that strive to resolve saltwater convection and the associated phase-separation phenomena and (2) single-phase hydrothermal codes that focus on sub-critical low-temperature fluid flow and/or super-critical high-temperature flow of pure water, i.e., codes that only "work" within single-phase fluid states. Multiphase saltwater codes are at the forefront of what is currently feasible in numerical simulations, as accounting for the complexity of the equation-of-state (EOS) of seawater (Driesner and Heinrich, 2007; Driesner, 2007) in combination with multiphase transport is a challenge (Ingebritsen et al., 2010). Existing codes of this type include CSMP++, which is capable of treating salt water up to magmatic temperatures on unstructured finite-element-finite-volume (FEFV) meshes (Weis et al., 2014); the hydrothermal multiphase version of CSMP++ is currently 2-D, and it is a closed-source project. FISHES is a 2-D open-source academic code that uses the finite-volume method to solve thermohaline convection on structured meshes (Lewis and Lowell, 2009a, b) but has some restrictions on the phase states that can be resolved. Currently, there is no 3-D model that can resolve multiphase saltwater convection, but some development efforts are under way. In addition, there are a number of geothermal modeling codes that can handle two-phase behavior but have not (yet) been adapted to handle the complex EOS of salt water over sufficiently large pressure and temperature ranges; HYDROTHERM (Kipp et al., 2008), FEHM (Zyvoloski et al., 1997), HT2_NR (Vehling et al., 2018), and TOUGH2 (Pruess et al., 1999) are examples of such codes. The second code family refers to somewhat simpler models that circumvent the numerical challenges of multiphase phenomena by staying in P-T regions where the simulated fluid is in a single phase. A popular approach is to use a pure-water instead of a saltwater EOS at pressures beyond the critical end point (22 MPa).
These models, despite making simplifying assumptions, continue to be widely used in the submarine hydrothermal system community and have been applied successfully to a wide range of problems. Examples include Jupp and Schultz (2000, 2004), who showed that hydrothermal systems operate close to optimal efficiency with their maximum vent temperatures set by the thermodynamic properties of water, studies that revealed the complex 3-D structure of recharge and discharge flow in mid-ocean-ridge hydrothermal systems (Hasenclever et al., 2014; Coumou et al., 2008; Fontaine et al., 2014), dedicated case studies of individual vent systems (Tao et al., 2020; Andersen et al., 2015; Lowell et al., 2012), and models exploring tidal forcing of hydrothermal circulation (Crone and Wilcock, 2005; Barreyre et al., 2018). This list is nowhere near complete, and there are many more examples. The bottom line is that single-phase circulation models continue to be widely used and highly useful "workhorses" in the hydrothermal community. Somewhere in the hopefully not-so-distant future, 2-D and 3-D multiphase models will be the new standard, but for now robust and tested single-phase codes continue to be useful tools for a variety of applications. Interestingly, even single-phase models are not that easily accessible to the hydrothermal community. Many research groups maintain 2-D research codes that resolve hydrothermal flow, but single-phase 3-D models continue to be rare. To our knowledge, there are basically three single-phase code families that are routinely used in 3-D studies (Coumou et al., 2008; Hasenclever et al., 2014; Fontaine et al., 2014), and none of them are open source. There are some major open-source initiatives that provide 3-D porous-flow simulators or libraries, such as RichardsFoam2 (Orgogozo, 2015), porousMultiphaseFoam (Horgue et al., 2015), DuMux (Flemisch et al., 2011), MRST (Lie, 2019), and OpenGeoSys (Kolditz et al., 2012), that can be used to simulate hydrothermal flow, but none of them has been adapted and documented for simulating submarine hydrothermal systems. In this paper, we present a toolbox, named HydrothermalFoam, to simulate 2-D and 3-D hydrothermal circulation in a single-phase regime for seafloor hydrothermal systems. The toolbox is built upon the open-source platform OpenFOAM (Jasak, 1996; Weller et al., 1998), which is not only a widely used simulator for solving Navier-Stokes-type problems but also a general toolbox for solving partial differential equations. OpenFOAM is based on the cell-centroid finite-volume method (FVM) and is written in C++. It provides high-level interfaces to field operations and includes features such as support for flexible meshes (e.g., structured, unstructured, and mixed meshes), pre- and post-processing utilities, and parallel computing in 2-D and 3-D (Moukalled et al., 2016). Based on this established framework, we present a toolbox to simulate flow in submarine hydrothermal systems. We solve the porous-flow problem using a continuum porous-medium approach in which the fluid velocity is expressed by Darcy's law and the pressure equation is constructed from Darcy's law and the mass conservation equation. All partial differential equations are solved implicitly in the framework of OpenFOAM and in a sequential scheme, and the thermophysical models are built on a pure-water EOS. HydrothermalFoam inherits all the basic features of OpenFOAM, including its boundary conditions.
In addition, we have also customized several special boundary conditions for seafloor hydrothermal system modeling. The purpose of this toolbox is to provide the hydrothermal community with a state-of-the-art, yet easy-to-use and well-documented, simulator for resolving hydrothermal flow in submarine hydrothermal systems. The paper is organized as follows. In Sect. 2, we present the mathematical model, its implementation in OpenFOAM, information on initial and boundary conditions, and the thermophysical model selection. In Sect. 3 we describe the different toolbox components, in Sect. 4 we describe the installation options and procedures, and in Sect. 5 we validate the toolbox using several published benchmark tests.

Mathematical model

We use a continuum porous-media approach and describe creeping flow in hydrothermal circulation systems using Darcy's law, where the Darcy velocity U of the fluid is given by

U = −(k/μf)(∇p − ρf g),   (1)

in which k denotes permeability, μf the fluid's dynamic viscosity, p total fluid pressure, and g gravitational acceleration. All variables and symbols are listed in Table 1. Considering a compressible fluid in a porous medium with a given porosity structure, the mass balance is expressed by Eq. (2) (Theissen-Krah et al., 2011),

ε ∂ρf/∂t + ∇·(ρf U) = 0,   (2)

where ε is the porosity of the rock. Note that we assume the matrix to be incompressible, and thus the porosity is outside the time derivative. The equation for pressure can be derived by substituting Darcy's law (Eq. 1) into the continuity equation (Eq. 2) and treating the fluid's density as a function of temperature T and pressure p:

ε ρf (βf ∂p/∂t − αf ∂T/∂t) = ∇·[(ρf k/μf)(∇p − ρf g)],   (3)

with αf and βf being the fluid's thermal expansivity and compressibility, respectively. Again, there is no rock compressibility, as we consider the incompressible-matrix case. Energy conservation of a single-phase fluid can be expressed using a temperature formulation (Hasenclever et al., 2014),

(ε ρf Cpf + (1 − ε) ρr Cpr) ∂T/∂t = ∇·(kr ∇T) − ρf Cpf U·∇T + (μf/k) U·U + αf T (∂p/∂t + U·∇p),   (4)

where Cp is heat capacity and kr is the bulk thermal conductivity of the porous rock; the subscripts r and f refer to rock and fluid, respectively. As the matrix is incompressible, ρr and Cpr are constant in time. Fluid and rock are assumed to be in local thermal equilibrium (T = Tr = Tf), so that the mixture heat capacity appears on the left-hand side of Eq. (4). Note that the assumption of thermal equilibrium is valid for most practical applications in submarine hydrothermal system modeling, as the equilibration timescale, which is related to the grain size of the porous medium, is short. However, this assumption should be carefully reviewed when simulating special geometries like fluid-filled cracks (Schmeling et al., 2018) or very short timescales like the response of a hydrothermal circulation cell to a seismic event (Wilcock, 2004). For such specialized cases, OpenFOAM offers support for multiphysics models that can resolve different physics in different parts of the modeling domain, so that heat transfer between solid and fluid can be explicitly resolved. Under the equilibrium assumption, changes in temperature depend on conductive heat transport, advective heat transport by fluid flow, heat generation by internal friction of the fluid, and pressure-volume work. Note that the third term on the right-hand side of Eq. (4) describes viscous dissipation, while the last describes pressure-volume work (and the pressure dependence of enthalpy, which shows up in the temperature formulation of the energy equation).
Whether these terms are important depends on the application (Garg and Pritchett, 1977); our tests have shown that they matter, for example, in the pure-vapor state, when simulating large vertical extents, and in supercritical states close to the critical end point of water. All fluid properties are functions of both pressure and temperature and are calculated using the IAPWS-IF97 formulation of water and steam properties as implemented in the freesteam project (Pye, 2010). Further details on the derivation of the governing equations can be found in the appendix of Hasenclever et al. (2014).

Implemented formulation

We solve for pressure (Eq. 3), velocity (Eq. 1), and temperature (Eq. 4) separately. Based on the finite-volume method implemented in OpenFOAM, the equations for the primary variables (p and T) are discretized on a cell-centroid computational grid. Moving the transient temperature term of Eq. (3) to the right-hand side, the pressure equation becomes

ε ρf βf ∂p/∂t − ∇·[(ρf k/μf) ∇p] = ε ρf αf ∂T/∂t − ∇·φg,   (5)

where the left-hand-side terms are the pressure transient term and the Laplacian (diffusion) term, respectively. The first term on the right-hand side is evaluated explicitly using a known temperature field, and the second term is the divergence of the gravity-related flux (φg = ρf² (k/μf) g), which is defined on each face of the computational grid. To apply the finite-volume method, the advection term (the second term on the right-hand side) of the temperature equation (Eq. 4) should be reformulated as a divergence term,

ρf Cpf U·∇T = ∇·(ρf Cpf U T) − T ∇·(ρf Cpf U).   (6)

Then, substituting Eq. (6) into Eq. (4), the temperature equation can be rearranged as

(ε ρf Cpf + (1 − ε) ρr Cpr) ∂T/∂t + ∇·(ρf Cpf U T) = ∇·(kr ∇T) + T ∇·(ρf Cpf U) + (μf/k) U·U + αf T (∂p/∂t + U·∇p),   (7)

where, on the left-hand side, the first term is the temperature transient term and the second is the advection term. On the right-hand side, the first two terms represent temperature diffusion and the source term resulting from the reformulation given in Eq. (6), respectively. The last two source terms are calculated explicitly.

Time step limitations

To determine the time step, we adopt a limitation related to the Courant number Co, which is defined for a compressible fluid as

Co = (Δt / (2 ρf Vcell)) Σ_{F=1..m} |φF|,   (8)

with m being the number of faces neighboring a specific cell, φF = ρf U·SF being the mass flow through cell face F, Vcell being the volume of the cell, and SF being the area vector of face F. The coefficient for the time step change is then written as

λ = Comax / Co,   (9)

and, to avoid changes of the time step that are too large, which could lead to numerical instabilities, the time step is limited following OpenFOAM's standard time-step control (Jasak, 1996),

Δt = min(min(λ, 1 + 0.1 λ, 1.2) Δtlast, Δtmax),   (10)

where Δtlast is the length of the previous time step. Implementation details can be found in the OpenFOAM documentation and in the OpenFOAM source files included by the main source code file HydrothermalSinglePhaseDarcyFoam.C.

Boundary conditions

To solve the pressure and temperature equations, we have to impose suitable boundary conditions for T and p. The "typical" boundary conditions, e.g., fixed value, fixed gradient, and mixes of both, are directly inherited from the basic boundary conditions of OpenFOAM. In submarine hydrothermal system modeling, some special adaptations of these basic boundary conditions are also useful. The hydrothermal heat flux (qh) boundary condition is a fixed-gradient boundary condition that is often used to approximate heat input from a crustal magma chamber and is commonly used in simulations of mid-ocean-ridge hydrothermal systems (e.g., Coumou et al., 2009; Weis et al., 2014). This Neumann boundary condition is called hydrothermalHeatFlux in the toolbox and can be used for the temperature field.
Using it, the imposed gradient of temperature can be expressed as

∇T·n = qh/kr,   (11)

where n denotes the normal vector of the boundary face. In addition, we implement two options for the heat flux distribution. The keyword shape allows for modifying the functional form of the heat flux boundary; the available options are fixed, gaussian2d, and gaussian3d. The default option is fixed, and if gaussian2d or gaussian3d is specified, the parameters of the Gaussian shape (Eq. 12), i.e., qmin, qmax, c, x0 and/or z0, have to be specified (see Sect. 5.3). A similar boundary condition, called hydrothermalMassFluxPressure, is defined for the pressure field to prescribe a mass influx into the modeling domain (φm = ρf U). The corresponding gradient of the pressure field can be derived from Darcy's law (Eq. 1),

∇p·n = (μf/(ρf k)) (φg − φm)·n,   (13)

where φm and φg denote the mass flux and the gravity-related flux, respectively. Further, we define another Neumann boundary condition (named noFlux) for the pressure field for impermeable boundaries, which is the special case of hydrothermalMassFluxPressure with φm = 0. In addition, a Dirichlet boundary condition (named submarinePressure) for the pressure field is defined to describe the hydrostatic pressure at the seafloor boundary due to bathymetric relief. Another boundary condition commonly used on hydrothermal venting boundaries (e.g., the seafloor) is OpenFOAM's inletOutlet boundary condition, which sets a constant temperature on inflow nodes and zero heat flux on outflow nodes; this type of boundary condition is often used to mimic free venting at the seafloor.

Fluid properties and equation-of-state

Numerical solutions of hydrothermal flow are known to depend strongly on the thermodynamic properties of the simulated fluid. A series of studies using realistic thermodynamic properties of pure and salt water, rather than making a Boussinesq approximation or using linearized properties, have shown that realistic results depend critically on using a realistic EOS (Jupp and Schultz, 2000; Hasenclever et al., 2014; Driesner, 2010; Carpio and Braack, 2012). Note that we do not address here any issues related to using a pure-water versus a saltwater EOS, as outlined in the introduction. We use an EOS for pure water based on the IAPWS-IF97 parameterization and have created a corresponding OpenFOAM thermophysical model for the single-phase regime. The phase diagram is shown in Fig. 1.

Solution algorithm

The governing equations for pressure and temperature are solved in a sequential approach. The primary variables (pressure and temperature) and the transport properties (such as permeability and porosity) have to be initialized before the time loop; the initial Darcy velocity and the thermodynamic properties of the fluid are then updated according to the temperature and pressure fields. The main computational sequence for a single time step is described below and sketched in Fig. 2.

1. The time step size Δt(n+1) is calculated from the condition related to the Courant number (Eq. 8).

2. The temperature field T(n+1) is computed implicitly by solving the energy conservation equation (Eq. 7). The syntax of a partial differential equation (PDE) in OpenFOAM is very close to its mathematical formulation, and a code snippet of the implementation of the temperature equation (Eq. 7) is shown in Listing 1 (in EEqn.H). All variable symbols, names, and OpenFOAM types (classes) are given in Table 1.
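Listing 1 itself ships with the source code (EEqn.H) and is not reproduced in this extract; the following is only a schematic sketch of how Eq. (7) maps onto OpenFOAM's operators. The helper names rhoCpEff, phiCp, mu, permeability, alphaF, and Qexplicit are our assumptions and may differ from the actual implementation:

// Schematic sketch of the temperature equation (Eq. 7) in OpenFOAM syntax.
// Assumed precomputed fields:
//   rhoCpEff = eps*rho_f*Cp_f + (1 - eps)*rho_r*Cp_r   (volScalarField)
//   phiCp    = Cp_f*(rho_f*U) & mesh.Sf()              (surfaceScalarField)
// Explicit sources: viscous dissipation and pressure-volume work.
volScalarField Qexplicit
(
    mu/permeability*magSqr(U)
  + alphaF*T*(fvc::ddt(p) + (U & fvc::grad(p)))
);

fvScalarMatrix TEqn
(
    fvm::ddt(rhoCpEff, T)              // transient term
  + fvm::div(phiCp, T)                 // advection in divergence form (Eq. 6)
 ==
    fvm::laplacian(kr, T)              // temperature diffusion
  + fvm::Sp(fvc::div(phiCp), T)        // source from the reformulation in Eq. (6)
  + Qexplicit                          // explicit sources, evaluated with known fields
);
TEqn.solve();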
The transient term ∂T/∂t can be discretized implicitly using the OpenFOAM operator fvm::ddt(T), with the discretization scheme (e.g., the Euler scheme) specified under the keyword ddtSchemes in the system/fvSchemes dictionary file (see Listing A1). The divergence term, Laplacian term, and source term are discretized implicitly using the fvm::div, fvm::laplacian and fvm::Sp operators. The last term on the right-hand side is calculated explicitly using known field values from the current or previous time step; the corresponding time derivative and gradient can be programmed using fvc::ddt and fvc::grad, respectively.

3. The pressure field p(n+1) is computed implicitly by solving the pressure equation (Eq. 5); the code snippet is shown in Listing 2. The temperature temporal term and the divergence of φg on the right-hand side are evaluated explicitly using fvc::ddt(T) and fvc::div(phig) (see line 7 in Listing 2). Although the pressure boundary conditions are customized directly via fluxes (see Sect. 2.4), in order to specify pressure boundary conditions through velocity boundary conditions, e.g., OpenFOAM's fixedFluxPressure boundary condition, the OpenFOAM function constrainPressure has to be called before solving the pressure equation (see line 3 in Listing 2). For non-orthogonal meshes, a non-orthogonal correction algorithm (line 4 in Listing 2) is commonly adopted to improve the accuracy of the gradient computation. The number of non-orthogonal corrections is specified by the nNonOrthogonalCorrectors key in the PIMPLE sub-dictionary of the system/fvSolution file.

4. The velocity field is calculated explicitly from the latest pressure field using Darcy's law (Eq. 1). Instead of calculating the velocity directly, we implement an indirect approach based on OpenFOAM's function fvc::reconstruct, which reconstructs the velocity field from the computed mass flux (see Listing 3); this offers higher numerical stability and benefits from the flux-conservation characteristics of the finite-volume method. In addition, the boundary conditions of the velocity field have to be updated (line 3 in Listing 3) if OpenFOAM's fixedFluxPressure boundary condition is applied to the pressure field.

5. The thermodynamic properties of the fluid are updated by the thermophysical model after solving for the temperature and pressure fields. The implementing code snippet is shown in Listing 4, in which thermo.correct() is used to update the temperature and pressure values at all computational nodes. The thermodynamic properties of the fluid, for example the density (ρ) at each node, are then calculated based on IAPWS-IF97 (see lines 2-6 in Listing 4).

Numerical schemes

Since the numerical evaluation of the divergence and gradient terms in the governing equations has a great influence on heat and mass transfer, a suitable solution strategy regarding discretization and linear-solver schemes needs to be chosen to ensure accuracy, robustness, and stability. In the presented solver, HydrothermalSinglePhaseDarcyFoam, the discretization and interpolation schemes of the primary fields (T, p) can be defined in the simulation configuration files. In the following benchmark tests (Sect. 5), the advective discretization scheme is set to upwind to ensure consistency with HYDROTHERM. It should be noted that all of the basic numerical schemes of OpenFOAM are also valid for the HydrothermalSinglePhaseDarcyFoam solver.
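The scheme selection just described is driven entirely by dictionary entries; a minimal system/fvSchemes sketch consistent with the text (the entry names are standard OpenFOAM keywords, while the concrete file shipped with the benchmarks is Listing A1 and may contain more entries) could read:

ddtSchemes
{
    default         Euler;              // implicit first-order transient scheme
}
gradSchemes
{
    default         Gauss linear;
}
divSchemes
{
    default         none;
    div(phi,T)      Gauss upwind;       // upwind advection, as used in the benchmarks
}
laplacianSchemes
{
    default         Gauss linear corrected;
}
interpolationSchemes
{
    default         linear;
}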
Description of toolbox components

The organization of the HydrothermalFoam toolbox is shown in Fig. 3. The toolbox contains five parts: the HydrothermalFoam solver, thermophysical models, boundary conditions, cookbooks, and a manual.

- HydrothermalSinglePhaseDarcyFoam. This block compiles the solver (an executable file) that solves the seafloor hydrothermal convection equations described in Sect. 2.1. It can be used to simulate single-phase hydrothermal circulation in an isotropic porous medium.

- ThermoModels. This block compiles the libHydroThermoPhysicalModels library, containing the EOS of pure water, which is used to formulate the thermophysical model; see Sect. 2.5.

- BoundaryConditions. This block compiles the libHydrothermalBoundaryConditions library, containing the four customized boundary conditions explained in Sect. 2.4. Example usage of each boundary condition can be found in the cookbooks and the manual in the GitLab repository (https://gitlab.com/gmdpapers/hydrothermalfoam, last access: 21 December 2020).

- benchmarks. This block contains the input files of all the benchmark tests (see Sect. 5) presented in this paper.

- cookbooks. This block contains example cases of parallel computing, user-defined boundary conditions, and post-processing.

Installation

We provide two options for installation: one is building from source, and the other is using a pre-compiled Docker image.

HydrothermalFoam

Once OpenFOAM is built successfully, the source code of HydrothermalFoam can be downloaded from https://doi.org/10.5281/zenodo.3755647 (Guo and Rüpke, 2020). The directory structure and components of HydrothermalFoam are shown in Fig. 3, and the components can be built following the three steps given below.

1. Build the freesteam-2.1 library. The freesteam project is built with SCons (https://scons.org, last access: 21 December 2020), an open-source software construction tool that depends on Python 2 (https://www.python.org/downloads/release/python-272/, last access: 21 December 2020) and on the GNU Scientific Library (GSL, https://www.gnu.org/software/gsl/doc/html/, last access: 21 December 2020). Therefore, Python 2, SCons, and GSL have to be installed first; then change directory to freesteam-2.1 in the HydrothermalFoam source code and type the command scons install to compile the freesteam library.

2. Build the libraries of the customized boundary conditions and the thermophysical model. Change directory to libraries and type the command ./Allmake to compile the libraries.

3. Build the solver. Change directory to HydrothermalSinglePhaseDarcyFoam and type the command wmake to compile the solver.

All the library files and the executable application (solver) file will be generated in the directories defined by OpenFOAM's path variables FOAM_USER_LIBBIN and FOAM_USER_APPBIN, respectively.

Pre-compiled Docker image

In order to use all the tools directly, without any compilation or development skills, we have published a pre-compiled Docker image in a repository at https://hub.docker.com/repository/docker/zguo/hydrothermalfoam (last access: 21 December 2020). The Docker image can be used on any operating system (e.g., Windows, macOS, and Linux) to run HydrothermalFoam cases following the five steps below.

1. Install Docker, then open Docker and keep it running.

2. Pull the Docker image with the command docker pull zguo/hydrothermalfoam in a shell terminal, e.g., the bash shell on macOS or PowerShell on Windows.
3. Install a container from the Docker image by running a shell script, e.g., the Unix shell script shown in Listing 5. The directory named HydrothermalFoam_runs is a shared folder between the container and the host machine.

4. Start the container by running the command docker start hydrothermalfoam.

5. Attach to the container by running the command docker attach hydrothermalfoam. The user is now in a Ubuntu Linux environment with the pre-compiled HydrothermalFoam tools located in the directory /HydrothermalFoam. We recommend that users run HydrothermalFoam cases in the directory HydrothermalFoam_runs in the container; the results are then synchronized to the shared directory on the host and can thus be visualized with ParaView, Tecplot, or other software.

Run the first case of HydrothermalFoam

The basic directory structure of a HydrothermalFoam case, containing the mandatory files to run an application, is shown in Fig. 4. There is a bash script file named run.sh in every HydrothermalFoam case provided with this paper; a HydrothermalFoam case can be run by executing ./run.sh. In addition, we provide a 5 min quick-start tutorial video (https://youtu.be/6czcxC90gp0, last access: 21 December 2020) on running the first case of HydrothermalFoam in Docker.

Mesh generation

The mesh information, containing the boundary patch definitions, cell face indices, and connectivity, is located in the polyMesh subdirectory of the constant directory in a specific case folder (Fig. 4). All OpenFOAM mesh generation approaches can be applied to HydrothermalFoam as well. For example, blockMesh generates a simple mesh defined by the blockMeshDict dictionary file in the system directory, and gmshToFoam transforms a mesh file generated by Gmsh (https://gmsh.info, last access: 21 December 2020) (Geuzaine and Remacle, 2009) to the polyMesh format.

Input field data

Much of the input-output data in HydrothermalFoam are fields, e.g., temperature or pressure data, that are read from and written to the time directories; the initial time directory is commonly named 0 (see Fig. 4). HydrothermalFoam writes field data as dictionary files using the keyword entries described in Table 2. The required input field data of HydrothermalFoam are temperature, pressure, and permeability. Each input field data file begins with an entry for its dimensions, expressed as a vector of the seven basic SI units (International System of Units) in the following order: kg, m, s, K, mol, A, and cd. The file names are the same as the variable names shown in Table 1. Following this is the internalField, described in one of the following ways.

1. Uniform field. A single value is assigned to all elements within the field, taking the form internalField uniform <entry>;

2. Nonuniform field. Each field element is assigned a unique value from a list, taking the form internalField nonuniform <List>;

The nonuniform list of an internal field, e.g., permeability, can be assigned different values in different mesh regions by using the setFields utility together with the setFieldsDict dictionary in the system directory (see the heterogeneous benchmark example, https://gitlab.com/gmdpapers/hydrothermalfoam/-/tree/master/benchmarks/HydrothermalFoam/3d/Heterogeneous, last access: 21 December 2020, described in Sect. 5.3.2). The boundaryField is a dictionary containing a set of entries whose names are listed in the polyMesh/boundary file for each boundary patch. The compulsory entry, type, specifies the boundary condition imposed on the field.
Aside from the OpenFOAM internal basic boundary condition types of fixedValue, zeroGradient, codedFixedValue, etc., the customized boundary condition types, e.g., noFlux for pressure and HydrothermalHeatFlux for temperature, are described in Sect. 2.4. An example set of field dictionary entries for temperature T is shown in Listing 6. While a boundary condition for permeability is not mathematically required, its internal OpenFOAM data type requires a corresponding boundaryField dictionary with specified boundary conditions. Therefore, we suggest that the type entry of all boundary patches for the permeability field always be set to zeroGradient. Thermo-physical model The thermophysicalProperties file is a compulsory file in the constant directory; it contains the keywords (Listing 7) of the newly defined thermo-physical model for water, which is described in Sect. 2.5, as well as constant properties of rock, e.g., porosity and density. It should be noted that porosity is implemented as a volScalarField type, just like permeability. This means that the user can define a porosity field in the modeling domain. If the porosity file does not exist in the start time folder, e.g., the 0 folder, the solver will initialize the porosity field using the constant porosity value specified in the porousMedia sub-dictionary in the constant/thermophysicalProperties file. Discretized schemes and solution control The discretization schemes for the primary variables in the PDEs (partial differential equations) and the solvers for the linear equations are specified in the fvSchemes and fvSolution files in the system directory, respectively. According to the implementation of the temperature equation (Listing 1) and the pressure equation (Listing 2), we have to specify discretization schemes for the transient terms, Laplacian terms, and gradient and divergence terms, which are shown in Listing A1. An example of solver, preconditioner, and tolerance settings for the linear equations of the temperature and pressure fields is shown in Listing A2. We recommend keeping these two files the same for different cases unless one wants to try different options available in OpenFOAM. Benchmark tests We have conducted a number of one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) benchmark tests and compared the results to other established software packages in order to validate HydrothermalFoam and to highlight some of its advantages. The reference software we used is version 3.1 of HYDROTHERM, a simulation tool developed and maintained by the US Geological Survey (USGS), which can be downloaded for free from the internet (https://volcanoes.usgs.gov/software/hydrotherm/index.html, last access: 21 December 2020). All the parameters used in the 1-D and 2-D examples are taken from Weis et al. (2014), who presented a sequence of well-defined and highly useful benchmarks designed to test code performance within different key thermodynamic fluid states. In those benchmarks the transport properties and rock properties are constant and uniform (values are also listed in Table 1); an isotropic permeability of k = 10⁻¹⁵ m², a porosity of ε = 0.1, a heat capacity of C_pr = 880 J/(kg·°C), a thermal conductivity of k_r = 2 W/(m·°C), and a rock density of ρ_r = 2700 kg/m³ are used in all simulations below. One-dimensional simulations We conducted six 1-D simulations to test the code performance along the three p-T paths in the phase diagram of pure water shown in Fig. 1.
These runs are designed with constant pressure and temperature conditions on both ends of a domain with a length of 2 km and a grid spacing of 10 m. The boundary conditions and initial conditions of each 1-D test are listed in Table 2. For comparison, we use the same parameters for the HYDROTHERM simulations. The computational domain is oriented horizontally (model indices A, C, E) without gravity and vertically (model indices B, D, F) with gravity to evaluate gravitational effects on fluid flow. All input files can be found in the https://gitlab.com/gmdpapers/hydrothermalfoam/-/tree/master/benchmarks/HydrothermalFoam/1d (last access: 21 December 2020) and https://gitlab.com/gmdpapers/hydrothermalfoam/-/tree/master/benchmarks/USGS_HYDROTHERMAL/1d (last access: 21 December 2020) directories. Simulation results of the six 1-D examples are shown in Fig. 5. The example runs A and B describe the invasion of a hot fluid into an initially colder domain, and the fluid stays in a single-phase liquid state. The thermal front moves from the start point (left and bottom for the horizontal and vertical example, respectively) towards the end point. Because the vertical flow opposes gravity, it is about 3 times slower. The fluids remain at pressures beyond the critical end point of pure water and therefore in the single-phase regime. Results calculated by HYDROTHERM and HydrothermalFoam are almost identical. In examples A (horizontal) and B (vertical), the fluid remains in a liquid-like state, while in examples C (horizontal) and D (vertical) the fluid flows along the pressure gradient from a cold liquid-like state to a hot vapor-like state. In C and D the fluid moves about 2 times faster than in A and B, resulting in a sharper thermal front. The sharpness of this front is, unfortunately, often affected by the numerical scheme (e.g., mesh geometry, upwinding scheme, and advection scheme). In fact, OpenFOAM seems to cope a bit better with resolving the sharpness of the front despite also using an upwind advection scheme. Benchmarks E and F explore sub-critical vapor flow. The results of horizontal and vertical flow look very similar because the density of a single-phase vapor fluid is very low, and thus the gravitational effects are relatively small with respect to the liquid cases. The results of HydrothermalFoam agree well with those of HYDROTHERM for single-phase vapor flow. Two-dimensional simulations The two-dimensional models are performed on a rectangular domain with a length of 9 km in the x direction and 3 km in the y direction (Fig. 6), loosely representative of a vertical section through the upper oceanic crust with uniform permeability. The top boundary represents the seafloor and is kept at a constant pressure of 30 MPa, which is equivalent to about 3 km water depth, and a constant temperature of 5 °C. At the bottom, a constant heat flux of Q_b = 0.05 W/m² is applied. Further, we assume a magmatic heat source with a constant heat flux of Q_m = 5 W/m² extending 1 km along the x direction, located at the center of the bottom boundary (shown as a red line in Fig. 6). Homogeneous model Similar to the two-dimensional model in Sect. 5.2, the three-dimensional models are performed on a box-shaped domain with a length of 9 km in both the x direction and the z direction and 3 km in the y direction (the vertical direction). Note that the vertical coordinate in HydrothermalFoam or OpenFOAM is y rather than z.
It can be imagined as representing a three-dimensional section of oceanic crust with uniform permeability. The top boundary is the seafloor and is kept at a constant pressure of 30 MPa and a constant temperature of 5 °C. At the bottom boundary, a zero mass flux is applied for pressure, and a constant Gaussian-shaped heat flux (see Eq. 12) is applied for temperature. The corresponding parameters in Eq. (12) are x₀ = 0, z₀ = 0, q_max = 5 W/m², q_min = 0.05 W/m², and c = 500 (an illustrative sketch of this flux profile is given after the conclusions below). All input files can be found in the 3d/Homogeneous directory in the benchmarks/HydrothermalFoam and benchmarks/USGS_HYDROTHERMAL directories, respectively. The simulation results at 50 kyr are shown in Fig. 8. Vertical slices at x = 0 km and x = 0.5 km, and horizontal slices at z = 0.5 km and z = 2.5 km, are shown in Fig. 8a-d. The fluid pressure (blue contours) and the temperature field calculated by HydrothermalFoam and HYDROTHERM (dashed contours) agree very well, and the results of the central vertical slice are very close to the two-dimensional model results shown in Fig. 7c. Three-dimensional flow paths and isothermal surfaces of 300, 200, and 100 °C are shown in Fig. 8e. Heterogeneous model The heterogeneous model with a two-layer permeability structure is modified from the homogeneous model described in Sect. 5.3.1. The permeability values of the two layers are k₁ = 10⁻¹⁴ m² and k₂ = 10⁻¹⁵ m², respectively (see Fig. 9a). The thickness of the first layer is 1.1 km, and the other parameters are the same as in the homogeneous model. All input files can be found in the 3d/Heterogeneous directory in the benchmarks/HydrothermalFoam and benchmarks/USGS_HYDROTHERMAL directories, respectively. The simulation results at 50 kyr are shown in Fig. 9. Vertical slices at x = 0 km and x = 0.5 km and horizontal slices at z = 0.5 km and z = 2.5 km are shown in Fig. 9a-d. The higher permeability in the upper layer results in mixing with colder ambient fluids and focusing of the upflow zone. Three-dimensional isothermal surfaces of 300, 200, and 100 °C are shown in Fig. 9e. Cookbooks In addition to the presented benchmarks, we have added a number of cookbooks to the code repositories that can be used as starting points for more complex models. These include simple 2-D and 3-D box models, 2-D single-pass loop models, and time-dependent permeability models. They also include examples of how to use more complex meshes generated by Gmsh (Geuzaine and Remacle, 2009). We intend to add additional cookbooks in the future and hope to receive contributions from users of HydrothermalFoam. Conclusions We have presented a toolbox for simulating flow in submarine hydrothermal circulation systems. Being based on the widely used fluid-dynamic simulation platform OpenFOAM, the toolbox provides the user with robust parallelized 3-D solvers and a whole suite of pre- and post-processing tools. The toolbox is meant to provide the interdisciplinary submarine hydrothermal systems community with an accessible and easy-to-use open-source platform for testing ideas on how hydrothermal systems "work". The benchmark tests have shown that the model matches previously published models, and the cookbooks provide the user with starting points for building more sophisticated models. By following an open-source approach and by providing extensive code documentation, we hope that the presented model will facilitate integrative studies that combine models with data to better assess the role of submarine hydrothermalism in the Earth system.
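As a small numerical supplement to the 3-D benchmark setup above: Eq. (12) itself is not reproduced in this excerpt, so the sketch below assumes a standard two-dimensional Gaussian built from the quoted parameters (q_max, q_min, x₀, z₀, c, with c read as a width in metres); it illustrates the basal heat flux profile and is not the solver's actual implementation.

import math

def gaussian_heat_flux(x, z, q_max=5.0, q_min=0.05, x0=0.0, z0=0.0, c=500.0):
    """Assumed form of the Gaussian-shaped basal heat flux (W/m^2): a peak of
    q_max above the background q_min, centred at (x0, z0), width parameter c (m)."""
    r2 = (x - x0) ** 2 + (z - z0) ** 2
    return q_min + (q_max - q_min) * math.exp(-r2 / (2.0 * c ** 2))

# Sample the flux along the bottom boundary (z = 0)
for x_km in (0.0, 0.5, 1.0, 2.0, 4.5):
    q = gaussian_heat_flux(x_km * 1000.0, 0.0)
    print(f"x = {x_km:3.1f} km: q = {q:6.3f} W/m^2")

Under this assumed form, the flux decays from 5 W/m² at the centre to essentially the background value of 0.05 W/m² within a few kilometres, consistent with a localized magmatic heat source at the centre of the bottom boundary.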
-Source code repository on GitLab: https://gitlab.com/gmdpapers/hydrothermalfoam (Guo and Rüpke, 2020). The latest and development versions are maintained in this repository. -Nature of problem: seafloor hydrothermal circulation. -Solution method: the numerical approach is based on the finite-volume method (FVM). Author contributions. ZG, LR, and CT designed the project. ZG developed the source code, ran simulations, and wrote the paper. LR provided suggestions for the benchmarks and co-wrote the paper and manual. CT provided further suggestions for the manuscript and the model. All authors discussed and contributed to the final paper.
8,950.2
2020-01-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Direct and Alternate Current Conductivity and Magnetoconductivity of Nanocrystalline Cadmium-Zinc Ferrite below Room Temperature Nanocrystalline cadmium-zinc ferrite samples were prepared by the ball milling method, and their electrical transport properties were investigated within a temperature range 77 K ≤ T ≤ 300 K, in the presence of a magnetic field up to 1 T and in a frequency range of 20 Hz to 1 MHz. The investigated samples follow a simple hopping type charge transport. The dc magnetoconductivity has been explained in terms of the orbital magnetoconductivity theory. The alternating current conductivity follows the universal dielectric response σ′(f) ∝ T^n f^s. The values of 's' have a decreasing trend with temperature. The temperature exponent 'n' depends on frequency. The dielectric permittivity of the samples depends on the grain resistance and the interfacial grain boundary resistance. The ac magnetoconductivity is positive, which can be explained in terms of the impedance of the sample. Introduction The potential applications and unusual properties of nanocrystalline materials have made them an object of interest for many researchers. The unique properties of this class of materials arise from the large fraction of atoms at the grain boundaries or interfacial boundaries in comparison to coarse-grained polycrystalline counterparts. Nanocrystalline spinel ferrites are a group of technologically important nanomaterials with potential applications in magnetic, electronic and microwave fields [1-5]. Due to their relatively insulating behaviour, they are used as high-frequency magnetic materials [6]. The use of nanoferrites in various fields like magnetic recording media, information storage, colour imaging, bio-processing, magnetic refrigeration and magneto-optical devices [7-9] has made them very important materials for industrial applications, whereas the new mechanisms that appear when the particle size is reduced to the nanometre scale, like superparamagnetism, quantum mechanical tunnelling and spin canting, make their transport properties interesting to scientists [10,11]. The general formula of a spinel ferrite is MFe₂O₄, where M is a divalent metallic ion. The spinel structure has a unit cell that consists of a cubic close-packed array of 32 oxygen ions with 64 tetrahedral sites (T sites) and 32 octahedral sites (O sites), but only eight of the T sites and 16 of the O sites are filled. A large number of investigations of structural and magnetic properties, like magnetisation measurements, Mossbauer spectroscopy, neutron scattering etc., have been carried out on spinel oxide nanoparticles over the last few years [10-23]. The dielectric behaviour of high-energy ball-milled ultrafine zinc ferrite above room temperature was reported by Shenoy et al. [17]. They suggested that the defects caused by the milling produce traps in the surface layer, which contribute to the dielectric permittivity via spin-polarised electron tunnelling between grains. Ravinder et al.
[24][25][26] studied the electrical conductivity of cadmium-substituted manganese ferrites, cadmium-substituted copper ferrites and nickel ferrites above room temperature. The electrical conduction in these ferrites was explained on the basis of the hopping mechanism. These ferrites show a transition near the Curie temperature in the conductivity-versus-temperature curve, and the authors also suggested that the activation energy in the ferromagnetic region is in general less than that in the paramagnetic region. However, a systematic analysis of the electron transport mechanism of cadmium-substituted zinc ferrite below room temperature is still lacking. Thus, the detailed electrical transport properties of nanocrystalline Cd-Zn ferrites, like ac and dc conductivity, ac and dc magnetoconductivity and dielectric properties, are reported here within a temperature range of 77 to 300 K and a frequency range of 20 Hz to 1 MHz, in the presence as well as the absence of a magnetic field up to 1 T. Experimental Accurately weighed starting powders of CdO (M/S Merck, 98% purity), ZnO (M/S Merck, 99% purity) and α-Fe₂O₃ (M/S Glaxo, 99% purity), taken in a 0.5:0.5:1 mol% ratio, were hand-ground in an agate mortar and pestle in a doubly distilled acetone medium for more than 5 h. The dried homogeneous powder mixture was then termed the unmilled (0 h) stoichiometric homogeneous powder mixture. A part of this mixture was ball milled for 3 h, 8 h, 20 h and 25 h durations at room temperature in air in a planetary ball mill (Model P5, M/S Fritsch, GmbH, Germany), keeping the disk rotation speed at 300 rpm and that of the vials at ~450 rpm. Milling was done in a hardened chrome steel vial of volume 80 ml using 30 hardened chrome steel balls of 10 mm diameter, at a ball-to-powder mass ratio of 40:1. The X-ray powder diffraction profiles of the unmilled and all ball-milled samples were recorded (step size = 0.02° (2θ), counting time = 5 s, angular range = 15°-80° 2θ) using Ni-filtered CuKα radiation from a highly stabilized and automated Philips generator (PW1830) operated at 40 kV and 20 mA. The generator is coupled with a Philips X-ray powder diffractometer consisting of a PW 3710 mpd controller, a PW1050/37 goniometer and a proportional counter. Rietveld analysis based on the structure and microstructure refinement method [27-31] has been employed for both the unmilled and ball-milled powder samples; it is considered the best method for microstructure characterization and quantitative estimation of multiphase nanocrystalline materials containing a number of overlapping reflections of the ferrite phase.
The electrical conductivity of the samples was measured by a standard four-probe method using an 8½-digit Agilent 3458 multimeter and a Keithley 6514 electrometer. The ac measurements were carried out with an Agilent 4284A impedance analyzer up to a frequency of 1 MHz at different temperatures. A liquid nitrogen cryostat was used to study the temperature-dependent conductivity, with the temperature regulated by an Oxford ITC 502S temperature controller. To measure the ac response, samples were prepared as 1 cm diameter pellets by pressing the powder under a hydraulic pressure of 500 MPa. Fine copper wires were used as the connecting wires and silver paint was used as the coating material. The experimental density of the pressed pellets has been calculated from the relation ρ_exp = m/(πr²h), where m, r and h are the mass, radius and thickness of the pellet respectively. The measured density lies in the range 5.12 g/cc to 6.38 g/cc. The percentage of error in determining the density is very small (0.21%). On the other hand, the density has been calculated from the X-ray data by using the relation ρ_theo = 8M/(N_A V_cell), where M is the molar mass of the sample, N_A is Avogadro's number and V_cell is the unit cell volume. It is observed that ρ_exp is 80% to 84% of ρ_theo for the investigated samples. The capacitance (C_P) and the dissipation factor (D) were measured at various frequencies and temperatures. The real part of the ac conductivity and the real and imaginary parts of the dielectric permittivity have been calculated using the relations σ′(f) = 2πf ε₀ε″(f), ε′(f) = C_P d/(ε₀A) and ε″(f) = ε′(f)D respectively, where ε₀ = 8.854 × 10⁻¹² F/m, A and d are the area and thickness of the sample respectively, C_P is the capacitance measured in farads, and f is the frequency in Hz (a short numerical sketch of these relations follows below). The magnetoconductivity was measured in the same manner, varying the transverse magnetic field B ≤ 1 T by using an electromagnet. Results and Discussion The XRD powder patterns recorded from the unmilled and ball-milled powder mixtures of CdO, ZnO and α-Fe₂O₃ are shown in Figure 1. The powder pattern of the unmilled mixture contains only the individual reflections of the ZnO, CdO and α-Fe₂O₃ phases. The intensity ratios of the individual reflections are in accordance with the stoichiometric composition of the mixture. It is evident from the figure that the particle size of the starting materials reduces very fast, as their peaks broaden rapidly in the course of milling. The ferrite reflections appeared clearly in the 3 h milled sample, and the content of the ferrite phase increases continuously with milling time, as noticed up to 25 h of milling. The CdO phase was not used up completely in the process even after 25 h of milling, whereas the other two starting phases vanish completely in the course of milling. This indicates that the formed ferrite phase is non-stoichiometric in composition. There must be a number of vacancies in the tetrahedral sites of the spinel ferrite lattice due to these unreacted Cd²⁺ ions. Figure 2 shows the Rietveld fitting outputs of the unmilled and all ball-milled powder patterns. The fits indicate that there are tetrahedral vacancies in the normal spinel structure of the (Zn,Cd) ferrite, which are expected to be filled with Cd²⁺ ions.
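For concreteness, the measurement relations above can be worked through in a short sketch (the pellet mass and the LCR readings below are hypothetical placeholder values for illustration, not data from the paper):

import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dielectric_from_lcr(C_p, D, area, thickness, freq):
    """Convert a measured parallel capacitance C_p (F) and dissipation factor D
    into eps'(f), eps''(f) and the real ac conductivity sigma'(f), using
    eps' = C_p d/(eps0 A), eps'' = eps' D and sigma' = 2 pi f eps0 eps''."""
    eps_real = C_p * thickness / (EPS0 * area)
    eps_imag = eps_real * D
    sigma_ac = 2 * math.pi * freq * EPS0 * eps_imag
    return eps_real, eps_imag, sigma_ac

# Hypothetical pellet: 1 cm diameter, 2 mm thick, 1.0 g mass
r, h, m = 0.5e-2, 2e-3, 1.0e-3        # radius (m), thickness (m), mass (kg)
area = math.pi * r ** 2
rho_exp = m / (math.pi * r ** 2 * h)  # density from rho_exp = m/(pi r^2 h)

# Hypothetical LCR readings at 1 kHz
eps_r, eps_i, sigma = dielectric_from_lcr(C_p=0.5e-9, D=0.05,
                                          area=area, thickness=h, freq=1e3)
print(f"rho_exp = {rho_exp / 1000:.2f} g/cc")  # ~6.4 g/cc, of the same order
                                               # as the reported 5.12-6.38 g/cc
print(f"eps' = {eps_r:.0f}, eps'' = {eps_i:.1f}, sigma' = {sigma:.2e} S/m")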
Figure 3 shows the variation of the relative phase abundances of the different phases with increasing milling time. The content (mole fraction) of the ZnO phase decreases very rapidly, and after 3 h of milling it becomes almost nil, whereas the variation of the CdO content shows that initially the phase was incorporated rapidly into the ferrite phase, but at higher milling times the rate of inclusion becomes very slow, and even at 25 h of milling the phase was not completely incorporated in the ferrite matrix, with ~0.04 mole fraction of the phase remaining unreacted. The α-Fe₂O₃ phase content decreases at a moderate rate, and after 8 h of milling it almost vanishes. The ferrite phase content increases sharply up to 8 h of milling and then approaches saturation at ~0.96 mole fraction (Table 1). It may be noticed that the ferrite content increases considerably until the α-Fe₂O₃ phase is used up completely, and after that a slight increment up to 25 h of milling is due to a very slow diffusion of Cd²⁺ ions into the ferrite matrix. All these variations in content indicate that Zn²⁺ ions occupied the tetrahedral positions quite rapidly, but the Cd²⁺ ions took a longer time, and even after 25 h of milling some tetrahedral positions remained vacant due to the insolubility of CdO in the ferrite matrix. It is therefore obvious that the prepared ferrite phase is a Zn-rich non-stoichiometric (Zn,Cd)Fe₂O₄ normal spinel with tetrahedral vacancies. The nature of the variation of the lattice parameter of the cubic (Zn,Cd)Fe₂O₄ phase with increasing milling time is shown in Figure 4. It can be seen from the plot that the lattice parameter of the ferrite phase formed after 1 h of milling reduces rapidly within 3 h of milling from the value 0.859 nm to ~0.849 nm (Table 1) and then remains almost invariant up to 25 h of milling. In the course of milling, the ZnO phase was utilized completely in the tetrahedral sites, but the CdO phase was not utilized completely, and some tetrahedral sites remained unoccupied even after 25 h of milling. The scanning electron micrographs of the samples CZF3h and CZF20h are shown in Figures 5(a) and 5(b). The pictures show that the samples are not closely packed and consist of several grains. The grains are well resolved and have an almost circular (spherical) shape. It is clearly seen in the micrographs that the grains are at the nanoscale. The average grain size determined from SEM was 35 nm-40 nm. The dc conductivity of the Cd-Zn ferrite samples has been measured in the temperature range 77 K ≤ T ≤ 300 K. It is observed that the incorporation of cadmium atoms within zinc ferrite reduces the dc conductivity in comparison to our previous study [32]. The room temperature conductivity σ(300 K) and the conductivity ratio σ_r [= σ(300 K)/σ(77 K)] of the investigated samples increase with increasing milling time. The value of the conductivity ratio lies between 1.02 × 10² and 2.48 × 10³, which is smaller than that of Zn ferrite (σ_r = 1.12 × 10³ to 1.21 × 10⁶) [32]. The contacts between the nanoparticles are the reason behind this large conductivity ratio. The conductivity variation of all the samples has the characteristic of a semiconductor, i.e., their conductivity increases with increasing temperature. The variation of conductivity with temperature is shown in Figure 6. Lu et al.
[18] reported a similar type of variation of conductivity in the case of nanocrystalline zinc ferrite samples. Figure 6 shows the linear variation of ln[σ(T)] with 1/T, indicating a simple hopping type charge transport in all the investigated samples [33]. The values of the activation energy of the different samples have been obtained from the slopes of the different straight lines. The values of E_a with milling time are shown in the inset of Figure 6. The values of E_a increase with increasing milling time and hence with decreasing particle size of the samples. The increasing milling time may decrease the size of the metal core, which in turn increases the activation energy. The influence of a magnetic field on the dc conductivity of the samples has been observed under a magnetic field of strength < 1 T. The magnetoconductivity ratio of the milled samples increases with increasing milling time, which may be explained in terms of a simple phenomenological model, the orbital magnetoconductivity theory (forward interference model) [34,35]. Again, for the unmilled sample, there is a decrease in the conductivity ratio with increasing magnetic field strength, which may be explained by the wave function shrinkage model [36]. The variation of the magnetoconductivity ratio with magnetic field intensity is shown in Figure 7. The different points in Figure 7 are the experimental data for the different samples, and the solid lines are the theoretical best fits obtained from Equation (1) for the milled samples and from Equation (2) for the unmilled sample. For the milled samples, the values of C_sat and B_sat can be obtained as fitting parameters of Equation (1). The orbital magnetoconductivity theory predicts forward interference among the random paths in the hopping process between two sites spaced at a distance equal to the optimum hopping distance, resulting in positive magnetoconduction, and can be expressed as Equation (1), where C_sat is a temperature-independent parameter and B_sat [= 0.7(h/e)(8/3)^{3/2}(1/L²_loc)(T/T_Mott)^{3/8}] is the magnetic field at which the magnetoconductivity saturates; L_loc is the localization length and T_Mott is the Mott characteristic temperature. In the wave function shrinkage model, the average hopping length reduces due to the contraction of the wave function of the electrons under the influence of a magnetic field. As a result, the conductivity decreases with increasing magnetic field. Under a small magnetic field, the magnetoconductivity ratio can be expressed as Equation (2) [36]. The ac conductivity of the Cd-Zn ferrite samples was investigated in the frequency range 20 Hz to 1 MHz and in the temperature range 77 K ≤ T ≤ 300 K. A general feature of amorphous semiconductors or disordered systems is that, in addition to the dc conductivity contribution σ_dc, the real part of the complex ac conductivity σ′(f) is found to follow the so-called universal dielectric response behavior, which can be expressed as σ′(f) = σ_dc + αf^s [37-39], where σ_dc is the dc conductivity, α is a temperature-dependent constant and the frequency exponent s ≤ 1. The value of σ_ac(f) (the frequency-dependent part of the conductivity) has been determined by subtracting the dc contribution from the total frequency-dependent conductivity σ′(f).
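To illustrate this subtraction-and-fit procedure, a minimal sketch with synthetic data (the chosen σ_dc, α and s are placeholders, not the measured values):

import numpy as np

# Universal dielectric response: sigma'(f) = sigma_dc + alpha * f**s
f = np.logspace(np.log10(20), 6, 50)          # 20 Hz .. 1 MHz
sigma_dc, alpha, s_true = 1e-7, 1e-12, 0.8    # placeholder parameters
sigma_total = sigma_dc + alpha * f ** s_true  # stand-in for a measured spectrum

# Subtract the dc contribution (in practice the separately measured dc value);
# the slope of ln(sigma_ac) versus ln(f) then gives the frequency exponent s
sigma_ac = sigma_total - sigma_dc
s_fit, ln_alpha = np.polyfit(np.log(f), np.log(sigma_ac), 1)
print(f"fitted s = {s_fit:.3f}, alpha = {np.exp(ln_alpha):.2e}")

Repeating the fit at each measurement temperature yields the temperature dependence of 's' discussed next.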
Figure 8 shows the linear variation of ln[σ_ac(f)] with ln(f) at different temperatures for the 25 h sample. All the other samples behave in a similar manner. The value of 's' has been calculated from the slope of this linear variation. Figure 9 shows the variation of 's' with temperature. A weak variation of 's' with temperature is observed up to 200 K, but at higher temperatures (T > 200 K) the value of 's' decreases with increasing temperature. In general, the conduction process of disordered systems is governed by two physical processes, namely correlated barrier hopping (CBH) [39] and quantum mechanical tunneling (electron tunneling [40], small polaron tunneling [39] and large polaron tunneling [38]). For different conduction processes, the nature of the temperature dependence of 's' is different, so the exact nature of the charge transport may be obtained experimentally from the temperature dependence of 's'. According to the electron tunneling theory, 's' is independent of temperature, whereas for small polaron tunneling 's' increases with increasing temperature. But in the case of the correlated barrier hopping model, 's' decreases with increasing temperature, in accordance with the observed trend. According to this model, the charge carrier hops between the sites over the potential barrier separating them, and the frequency exponent 's' can be written as s = 1 − 6k_BT/[W_H − k_BT ln(1/(ωτ₀))] (Equation (4)), where W_H is the effective barrier height and τ₀ is the characteristic relaxation time. For large values of W_H/k_BT, the value of 's' can be considered independent of frequency, as there is a very small variation of 's' with frequency [41]. Again, for the investigated samples, the linear variation of ln[σ_ac(f)] with ln(f) indicates that 's' is frequency independent. Therefore the experimental data have been fitted with Equation (4), with W_H and τ₀ as the fitting parameters. The trend of the variation of 's' with temperature therefore indicates that the ac conductivity of the investigated samples can be explained by the CBH model. The temperature dependence of the ac conductivity is shown in Figure 10 for the sample CZF25h at some different frequencies. A weak variation is observed at lower temperatures (T < 200 K) in comparison to high temperatures (T > 200 K). At a particular frequency, the real part of the complex conductivity increases with temperature and is found to follow a power law σ′(f) ∝ T^n, which is shown as the solid lines in Figure 10. The values of n have been calculated from the power law fitting and are found to be strongly frequency dependent for all samples. For different frequencies ranging from 1 kHz to 1 MHz, the values of 'n' vary between 15.4 and 10.6 for CZF25h. According to the CBH model [37-40], the ac conductivity σ′(f) is expressed as σ′(f) ∝ T²R_ω⁶ ∝ T^n, with n = 2 + (1 − s)ln(1/(ωτ₀)), for the broad-band limit, and σ′(f) ∝ R_ω⁶ ∝ T^n, with n = (1 − s)ln(1/(ωτ₀)), for the narrow-band limit, where R_ω = e²/{πεε₀[W_H − k_BT ln(1/(ωτ₀))]}. The theoretical values of 'n' have been calculated taking s = 0.39 at 300 K and 0.88 at 77 K for different frequencies, with τ₀ = 3.26 × 10⁻¹⁴ s. The calculated values of 'n' lie in the range 15.61 to 11.40 at 300 K and 4.68 to 3.84 at 77 K for the broad-band limit, and 13.61 to 9.40 at 300 K and 2.68 to 1.84 at 77 K for the narrow-band limit, for the frequency variation from 1 kHz to 1 MHz. The experimental values are close to the theoretical values of the broad-band limit in the higher temperature range, but at lower temperatures there is a discrepancy between the theoretical and experimental results.
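A quick numerical check of these exponent expressions against the quoted ranges (note that the forms of n used here are our reconstruction of the garbled original):

import math

tau0 = 3.26e-14  # characteristic relaxation time, s (value from the text)

def n_cbh(s, freq, broad=True):
    """CBH temperature exponent: n = 2 + (1-s) ln(1/(omega tau0)) in the
    broad-band limit, n = (1-s) ln(1/(omega tau0)) in the narrow-band limit."""
    omega = 2 * math.pi * freq
    base = (1 - s) * math.log(1.0 / (omega * tau0))
    return 2 + base if broad else base

for T, s in [(300, 0.39), (77, 0.88)]:
    for broad in (True, False):
        lo, hi = n_cbh(s, 1e3, broad), n_cbh(s, 1e6, broad)
        label = "broad " if broad else "narrow"
        print(f"T = {T:3d} K, {label} band: n = {lo:5.2f} (1 kHz) .. {hi:5.2f} (1 MHz)")

# The output reproduces the quoted ranges: 15.61..11.40 (broad) and 13.61..9.40
# (narrow) at 300 K, and 4.68..3.84 and 2.68..1.84 at 77 K, which supports the
# reconstructed form of n.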
In general, interfacial polarization is exhibited by ferrites due to structural inhomogeneities and the existence of free charges [42]. The hopping electrons at low frequencies may be trapped by the inhomogeneities. At a particular frequency, the increase in ε′(f) with temperature is due to the drop in the resistance of the ferrite with increasing temperature. Electron hopping is promoted by the low resistance, resulting in a larger polarizability and hence a larger ε′(f). The variation of the real part of the dielectric permittivity ε′(f) with frequency for the different samples is shown in Figure 11 at T = 300 K. The dielectric permittivity increases sharply with decreasing frequency for all the samples, and such behaviour can be attributed to the presence of a large degree of dispersion due to charge transfer within the interfacial diffusion layer present between the electrodes. The magnitude of the dielectric dispersion depends on the temperature. At lower temperatures the relaxation process becomes easier due to the freezing of the electric dipoles, and thus there is a decay in the polarization with respect to the applied electric field. So a sharp increase in ε′(f) is observed in the lower frequency region. Therefore the inhomogeneous nature of the samples, containing regions of different permittivity and conductivity, governs the frequency behaviour of ε′(f), where the charge carriers are blocked by the poorly conducting regions. The effective dielectric permittivity of this type of inhomogeneous system is explained in terms of the Maxwell-Wagner capacitor model [43,44], in which the complex impedance is modelled by an equivalent circuit consisting of a resistance and capacitance for the grain and for the interfacial grain boundary contribution, and can be expressed as Equation (6), where the subindices 'g' and 'gb' refer to the grain and the interfacial grain boundary respectively, R is the resistance, C is the capacitance, ω is 2πf and C₀ is the free space capacitance. The real part of the complex impedance for the different samples has been calculated from the experimental data for the real (ε′) and imaginary (ε″) parts of the dielectric permittivity. Figure 12 shows the variation of the real part of the complex impedance of the different samples with frequency at room temperature, and Figure 13 shows the same variation for the 20 h and 25 h samples in the presence of a magnetic field. The points in both figures are the experimental data and the solid lines are the theoretical best fits obtained from Equation (6). It is observed from both figures that the experimental data are reasonably well fitted by the theory. The grain and grain boundary resistances and capacitances have been extracted from the analysis at room temperature (their values are listed after Table 1 below). The variation of the real part of the ac conductivity for the different samples at room temperature and f = 1 MHz under the influence of a magnetic field is shown in Figure 14. With increasing magnetic field, there is an increase in conductivity for the different milled samples, but for the unmilled sample the opposite behaviour has been observed. At present, no theoretical model is found in the literature which can directly explain the behaviour of the ac conductivity in the presence of a magnetic field. The SEM micrographs reveal that the investigated samples are heterogeneous in nature with spherical grains. Thus the materials consist of grain and interfacial grain boundary regions. For such heterogeneous samples, it has already been discussed that the dielectric property and impedance depend on the grain and grain boundary resistances and capacitances. The real part of the ac
conductivity is related to the dielectric response by the relation σ′(B,f) = ωε₀ε″(B,f). As the value of ε″(B,f) depends on the grain and grain boundary resistances and capacitances, the ac conductivity can be written in terms of these circuit parameters [45] (a numerical sketch of this two-RC impedance follows the conclusions below). A change in the value of any of these resistances by the magnetic field will affect the value of the conductivity. From the analysis of the real part of the complex impedance in the presence of a constant magnetic field of 0.76 T, it has been found that the values of the grain and grain boundary resistances increase upon the application of a magnetic field for the unmilled sample, whereas they decrease for the milled samples. Hence the total contribution due to the grain and grain boundary resistances (R = R_g + R_gb) decreases with increasing magnetic field for the milled samples. Thus the influence of the magnetic field on the ac conductivity is due to the change in the grain and grain boundary resistances by the applied magnetic field. However, in the absence of a tractable analytical expression, the measured data cannot be compared quantitatively with the theory. Thus a more explicit theoretical and experimental study is required to reveal the true mechanism of the magnetic field dependent ac conductivity. Conclusions The different Cd-Zn ferrite samples were prepared by the high-energy ball milling method. The samples were characterized by XRD, which confirms the formation of a normal spinel structure with tetrahedral vacancies, with a particle size of 7 nm. The SEM pictures reveal that the different samples consist of grains of almost spherical shape. The dc conductivity of the different Cd-Zn ferrites follows a simple hopping type charge conduction mechanism. The magnetic-field-dependent conductivity of the investigated samples increases with increasing magnetic field for the milled samples, whereas it decreases with increasing magnetic field for the unmilled sample; these behaviours can be explained in terms of the orbital magnetoconductivity theory and the wave function shrinkage model, respectively. The real part of the ac conductivity follows the power law σ′(f) ∝ f^s. The temperature dependence of the universal dielectric response parameter 's' was found to follow the correlated barrier hopping charge transfer mechanism. At a particular frequency, the conductivity of the investigated samples follows σ′(f) ∝ T^n, where the values of n strongly depend on frequency. The frequency-dependent real part of the complex permittivity shows a large degree of dispersion at low frequencies, but rapid polarization at high frequencies, which can be interpreted by the Maxwell-Wagner capacitor model. The grain resistance and capacitance were found to be smaller than the grain boundary resistance and capacitance, and the total resistance due to the grain and grain boundary decreases upon the application of a magnetic field. The ac conductivity of the milled samples was found to show a positive variation (increasing with increasing magnetic field) in the presence of a magnetic field, which may be due to the variation of the grain and grain boundary resistances by the application of the magnetic field.
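To make the equivalent-circuit analysis concrete: Equation (6) is not reproduced in this extraction, so the sketch below assumes the standard Maxwell-Wagner form, two parallel RC elements (grain and grain boundary) in series, and uses parameter values chosen from within the ranges quoted after Table 1 (illustrative values, not the fitted ones):

import numpy as np

def z_real(freq, Rg, Cg, Rgb, Cgb):
    """Real part of the impedance of two parallel RC elements in series
    (grain + grain boundary), the standard Maxwell-Wagner equivalent circuit:
    Z' = Rg/(1 + (w Rg Cg)^2) + Rgb/(1 + (w Rgb Cgb)^2), with w = 2 pi f."""
    w = 2 * np.pi * freq
    return Rg / (1 + (w * Rg * Cg) ** 2) + Rgb / (1 + (w * Rgb * Cgb) ** 2)

f = np.logspace(np.log10(20), 6, 7)  # 20 Hz .. 1 MHz
# Illustrative values within the quoted ranges (Ohm, F, Ohm, F):
Rg, Cg, Rgb, Cgb = 1e5, 0.5e-9, 1e6, 0.8e-9
for fi, zi in zip(f, z_real(f, Rg, Cg, Rgb, Cgb)):
    print(f"f = {fi:10.1f} Hz: Z' = {zi:.3e} Ohm")

# At low frequency Z' approaches Rg + Rgb and is dominated by the grain
# boundary; at high frequency both RC elements are shorted by their
# capacitances and Z' falls off, mirroring the dispersion in Figures 12-13.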
Figure 1. X-ray diffraction patterns of unmilled (0 h) and ball-milled CdO + ZnO + α-Fe₂O₃ powder mixture for different durations of ball-milling. Peak positions of all reflections of all four phases are marked and shown at the bottom of the plot. In the present analysis, all recorded XRD patterns of ball-milled samples were fitted well only with the normal spinel ferrite phase. Figure 2. Observed (•) and calculated (-) XRD patterns of unmilled (0 h) and different ball-milled samples revealed from Rietveld's powder structure refinement analysis. Residuals of fitting (I₀ - I_c) between observed (I₀) and calculated (I_c) intensities are plotted under the respective patterns. Peak positions of the phases are shown at the base line. Figure 3. Variations of mole fraction of different phases in ball-milled CdO + ZnO + α-Fe₂O₃ powder mixture with increasing milling time. Figure 6. Variation of dc conductivity of the different samples with temperature. Inset shows the variation of activation energy of different samples with milling time. Figure 7. Variation of the magnetoconductivity ratio with magnetic field intensity for the different samples. Figure 8. Variation of ac conductivity of the CZF25h sample with frequency at different constant temperatures. Figure 9. Variation of the frequency exponent 's' with temperature for different samples. The different solid lines are fitted with Equation (4). In Figure 9, the points are the experimental data and the solid lines are the theoretical best-fit values obtained from Equation (4). The best-fitted values of the parameters W_H and τ₀ (at a fixed frequency of f = 1 MHz) are in the range 0.82 to 1.34 eV and 1.36 × 10⁻¹⁴ to 7.63 × 10⁻¹³ s, respectively, for the different samples. W_H has a greater value than the activation energy measured from the grain and grain boundary contributions, and the values of the characteristic relaxation time τ₀ are similar to a typical inverse phonon frequency, as expected. Figure 10. Variation of ac conductivity of the CZF25h sample with temperature at different constant frequencies. The lines are fitted with the equation σ′(f) ∝ T^n. Figure 11. Variation of the real part of the dielectric permittivity of different samples with frequency at T = 300 K. Figure 12. Variation of the real part of the impedance of different samples with frequency at room temperature. The solid lines are fitted with Equation (6). Figure 13. Variation of the real part of the impedance of the CZF20h and CZF25h samples with frequency at room temperature in the presence of a magnetic field of 0.76 T. The solid lines are fitted with Equation (6). Table 1. Microstructure parameters of unmilled and ball-milled CdO, ZnO and α-Fe₂O₃ powder mixture as revealed from Rietveld's analysis of XRD data. *Error limits. The extracted values lie within the range 19.8 kΩ to 0.32 MΩ for R_g, 0.33 MΩ to 2.79 MΩ for R_gb, 0.16 to 1.94 nF for C_g and 0.20 to 1.62 nF for C_gb for the different samples without a magnetic field, and 22.03 kΩ to 0.24 MΩ for R_g, 0.35 MΩ to 2.05 MΩ for R_gb, 0.18 to 1.99 nF for C_g and 0.21 to 1.69 nF for C_gb for the different samples in the presence of a magnetic field. As the resistance due to the interfacial grain boundary is much larger than the grain resistance, it may be concluded that the grain boundary contribution dominates over the grain contribution.
6,634.6
2011-03-22T00:00:00.000
[ "Physics", "Materials Science" ]
MY ENGLISH ONLINE AND THE DEVELOPMENT OF COMMUNICATIVE COMPETENCE The aim of this paper is to analyze the online English course My English Online (MEO) in relation to its efficiency in developing communicative competence. In order to do so, the study offers a brief review of literature of concepts related to the potential of English to promote access to information (Finardi, Prebianca & Momm, 2013) and online education (Finardi & Tyler, 2015); to its potential for expanding this access through virtual learning environments (Leffa, 2016); to computer assisted language learning (Cardoso, 2012); and to hybrid approaches (Graham, 2006) such as the Inverted Classroom approach (Lage, Platt & Treglia, 2000) to foster additional language teaching and learning (Finardi, 2012; Silveira & Finardi, 2015). Results of Finardi et al. (2014) and Finardi, Prebianca and Schmitt (2016) were contrasted with the analysis of MEO taking into consideration Bachman's description of Communicative Language Ability and Communicative Competence (Bachman, 1990). Results of the study corroborate previous ones (Finardi et al., 2014; Finardi, Prebianca & Schmitt, 2016) with regard to the lack of opportunities for real interaction in the MEO course and the insufficient feedback for oral production tasks. The study thus concludes that the MEO course falls short in recognizing the importance of pragmatic aspects in the development of Communicative Competence, and as such it should be used in the hybrid format as a way of compensating for this shortcoming. INTRODUCTION The rapid dissemination of digital resources significantly impacts the access to and production of knowledge as well as the use, teaching and learning of additional languages (L2) in the 21st century (Menezes, 2012; Finardi & Porcino, 2014). According to Menezes (2012), modern artifacts such as smartphones, laptops, tablets and digital books provide new contexts and registers for teaching/learning L2. Some of these learning contexts include, but are not limited to: Computer Assisted Language Learning (CALL) (e.g.: Cardoso, 2012), Mobile Assisted Language Learning (MALL) (e.g.: Finardi, Leão & Amorim, 2016), Massive Online Open Courses (MOOCs) (e.g.: Finardi & Tyler, 2015), online courses (e.g.: Prebianca, Santos Junior) and hybrid approaches such as the inverted classroom (e.g.: Finardi, Prebianca & Schmitt, 2016) that may be used in tandem with the aforementioned contexts. Finardi et al. (2014) and Finardi, Prebianca and Schmitt (2016) analyzed the potential and limitations of an English (L2) distance learning course offered for free to university students in Brazil with the support of the Brazilian government funded program Languages without Borders (LwB): the My English Online (MEO) course. These studies analyzed the potential of the MEO course for English (L2) teaching/learning in a hybrid format, concluding that the limitations found in MEO can be somehow circumvented or minimized if the course is used in the hybrid format, combining face-to-face classes and online instruction. Building on the results of the aforementioned works, this study intends to advance the discussion about the potential of online courses for the teaching and learning of L2, so as to contribute to the in(formation) of L2 teachers through the analysis of the MEO course under the light of the concept of pragmatic competence (Bachman, 1990), problematizing the possibility of learning L2 in online environments from the perspective of pragmatics.
With that in view, the study offers a brief review of literature on online courses for the teaching and learning of L2 and on the concept of pragmatic competence, as well as its implications for the development of communicative competence. Unlike other applications that are free only in the beginning, Duolingo is totally free and offered in almost 20 languages. Leffa (2016) claims that a distinctive feature of Duolingo is the merging of characteristics of a social network, a virtual game and sometimes an online course. However, Finardi, Leão and Amorim (2016) concluded that Duolingo cannot be understood mainly as a game or as a social network, since its main features are those of an online L2 course. Finardi, Prebianca and Momm (2013) suggest that both English and digital literacy are needed to increase access to online information, and Finardi and Tyler (2015), based on this assumption, analyzed the role of English in accessing online education through the analysis of MOOCs available in different languages. Results of this study showed that most (almost 90%) MOOCs are available only in English, corroborating Finardi, Prebianca and Momm's (2013) hypothesis that some knowledge of English is necessary to increase access to online information. Finardi and Tyler (2015) added that some knowledge of English is also important to expand access to online education in the form of MOOCs. It is important to note that MOOCs were not created for the purpose of teaching/learning L2, although some MOOCs may be found for this purpose. However, Finardi (2015) believes that MOOCs can be used for teaching-learning L2 within a hybrid approach that combines face-to-face classes with distance learning activities, using MOOCs to teach diverse contents in/through English. To that end, the MOOC would have to be adapted for a bi/multilingual approach known as Content and Language Integrated Learning (CLIL) (Alencar, 2016), in a format coined by Finardi (2015) as the Inverted CLIL Approach. According to Finardi's (2015) proposal for the use of MOOCs in the Inverted CLIL approach, students would read and watch the texts and videos available in MOOCs in English for the teaching of diverse contents, after being adapted and mediated by the English teacher, who would dedicate the time in the classroom to clarifying doubts and practicing oral production, thus reversing the order of activities usually performed in the classroom and at home by students. The MEO was chosen as the object of this study for having already been studied previously, though without relating it to pragmatic competence (Bachman, 1990) as proposed in our analysis. As suggested by Finardi et al. (2014) and Finardi, Prebianca and Schmitt (2016), the limitations identified in the MEO course could be overcome if it were used in the inverted classroom format, which is a form of hybrid approach. Thus, before we analyze MEO taking pragmatic competence into consideration, we will make a brief review of the hybrid approaches that inform this study. Finardi, Prebianca and Schmitt (2016) analyzed the potential of the MEO course to be used in a hybrid approach known as the Inverted Classroom approach. Hybrid approaches combine classroom instruction with online instruction and are also known by the term Blended Learning (BL).
The Flipped Classroom approach was proposed by Lage, Platt and Treglia (2000) for the teaching of several contents, but not specifically for L2 learning, although Finardi, Prebianca and Schmitt (2016), in their analysis of MEO in this approach, think this is not only possible but also desirable. In the Inverted Classroom approach, activities that usually take place in the classroom, such as the presentation of content via the teacher's lecture, are carried out at home through readings and videos, while problem solving and activities that are usually done at home are carried out in the classroom, hence the name of the approach. In the Inverted CLIL approach proposed by Finardi (2015), the only difference is that the focus is on the teaching of both contents and L2s, and L2 learning happens unconsciously and indirectly through the learning of different contents in L2, using, for example, MOOCs. MEO The MEO offers leveling tests, interactive books, reading activities, grammar exercises with correction, dictionaries, oral and written activities, and tests for each of its five levels. Each level contains three parts with activities in an e-book. At the end of each stage, MEO users take a progress test in preparation for the final test of each level. As mentioned earlier, MEO is one of the actions of the Languages without Borders (LwB) program launched by the Ministry of Education in partnership with SESU and CAPES in 2012 in Brazil, in an attempt to improve the overall English proficiency of Brazilian university students. MEO offers authentic materials from the National Geographic database for the development of written comprehension, and videos for the development of oral comprehension. At the first level, students are expected to be able, at the end of the unit, to perform the following actions: greet colleagues, say and write phone numbers, follow classroom instructions, identify colleagues, talk about nationality and marital status, say and write addresses, and say and write dates. In the second level, students are expected to be able to perform a further set of such actions at the end of the unit. According to previous studies on MEO (e.g.: Finardi et al., 2014; Finardi, Prebianca & Schmitt, 2016), this course has a grammatical-lexical content approach, mainly aimed at developing oral and written comprehension skills, at the expense of oral and written production skills. However, when analyzing the potential of this course for teaching-learning L2 in the inverted classroom format, Finardi, Prebianca and Schmitt (2016) suggest that these gaps could be somewhat compensated for if the course is used in the inverted format. In order to further this discussion, this study proposes an analysis of the potential of this platform for the teaching and learning of L2 under the light of pragmatic competence (Bachman, 1990), and for that reason this construct is described in the following section. COMMUNICATIVE COMPETENCE AND PRAGMATIC COMPETENCE Based on current understanding of what it means to speak a language, we can state that any language course that seeks to develop oral skills successfully must necessarily strive to develop, or at least create the necessary conditions for the development of, Communicative Competence (CC). Ellis (2008) postulates that, in a broad sense, CC means having knowledge of the grammar of the L2 and also knowledge of how this system is put into practice in real communication.
The term Communicative Competence was coined by Hymes (1972), who extended the concept of linguistic competence proposed by Chomsky (1965). Hymes defines CC as the aspect of our competence that allows us to produce and interpret messages and negotiate meanings in an interpersonal way within specific contexts. Ellis (2008), in defining CC, states that it is the knowledge that the users of a language have internalized so that they are able to understand and produce messages in the language. The author adds that many models of communicative competence have been proposed, and that most of them recognize that the concept of CC encompasses both Linguistic Competence (e.g. knowledge of grammatical rules) and Pragmatic Competence (e.g. knowledge of what constitutes appropriate behavior in a given situation). In fact, years after the coinage of the term CC, several scholars and researchers contributed to the understanding of the relationship between communicative competence and pragmatic competence, among whom we can cite the works of Canale and Swain (1980), considered by Brown (2000) a reference in the discussions on Communicative Competence. The model of CC originally proposed by Canale and Swain in 1980 underwent some alterations, and in 1983 Canale defined four different subcategories that further define communicative competence. Bachman (1990) introduced a theoretical framework for measuring language proficiency, describing the concept of Communicative Language Ability (CLA). Bachman (1990), consistent with earlier works on Communicative Competence, cites previous works, among which are Hymes' (1972), Canale and Swain's (1980) and Canale's (1983). In this respect, Thomas observes: While, however, a speaker who is not operating according to the standard grammatical code is at worst condemned as 'speaking badly', the person who operates according to differently formulated pragmatic principles may well be censured as behaving badly; as being an untruthful, deceitful, or insincere person. (THOMAS, 1983, p. 107). Kasper (1997), in reference to Bachman's model, states that it clearly shows that pragmatic competence is neither extra nor ornamental, and that it is not subordinated either to grammatical knowledge or to textual knowledge but, in fact, interacts in a complex way with organizational competence. In fact, communicative competence has a deep interactional aspect that allows us to claim that being communicatively competent means being pragmatically competent. Put differently, there is no communicative competence without pragmatic competence. METHOD The research methodology is qualitative and aims to analyze the potential of MEO for teaching-learning L2 in light of Bachman's model of communicative competence. In order to do so, the study analyzes MEO in relation to potential failures concerning the development of pragmatic competence. In accordance with the construct presented by Bachman (1990), we consider the predisposition, the involvement, the commitment and the collaborative work of the interlocutors to be of great relevance for the maintenance and progress of conversational exchanges. Thus, we understand that the aspects of speech and also writing in the interactions between platform users are the linguistic abilities that demand greater dependence on the contribution of the other parties involved in dialogical relations.
Finardi et al. (2014) report data, reproduced here in Table 1, of the twenty-five MEO students interviewed regarding their satisfaction with the course and about the potential and limitations of this course for L2 learning. (2) Yes (22) A little (3) No (19) Yes (6) Speaking (11) Other activities/Did not answer (14). Source: Finardi et al. (2014). ANALYSIS As we can see in Table 1, concerning the activities of greatest difficulty to be developed in the MEO course, almost half of the respondents (11) point to oral production as the most complex ability to be developed, that is, communicative competence, possibly due to the lack of opportunity for real interaction in oral activities and the consequent lack of feedback, according to the reports in Table 2 from another study (Finardi, Prebianca & Schmitt, 2016) on the same course but with a different group of MEO users. Finardi, Prebianca and Schmitt (2016) sent a questionnaire to 280 MEO users in order to analyze this course in terms of its potential for L2 teaching/learning in the hybrid format. As can be seen in Table 2 above, the assessment of MEO users is that this course only gives feedback on reading, vocabulary, listening, and grammar activities. Therefore, the failure to address the elements that compose pragmatic competence calls into question the efficiency and the purpose of the MEO course: considering that the objective of the course is to prepare students to take part in academic routines in English, this virtual learning environment should provide for the development of oral production skills through real interactions. Previous research from Finardi et al. (2014) and Finardi, Prebianca and Schmitt (2016) suggests that language learning does not occur satisfactorily in purely virtual learning environments. Given that the MEO course, in terms of oral comprehension, does not provide real opportunities for interaction in the target language, this study corroborates the suggestion made by Finardi, Prebianca and Schmitt (2016) that if MEO were used in the inverted format some of these limitations could be overcome; in this case, it is important to note that the teacher, as well as other MEO users, have an important role in mediating and acting as interlocutors in this process. In a hybrid approach, classroom time could be used for conversational exchanges, in accordance with the model proposed by Bachman (1990), leaving the contents of MEO to be used as homework, in the inverted format suggested by Finardi, Prebianca and Schmitt (2016). CONCLUSION We can see the relevance of the new digital resources and the internet for the educational process in general, and particularly for L2 teaching and learning. Pereira and dos Santos (2015) affirm that one of the challenges of incorporating technologies in education is to ensure that they are associated with pedagogical practices that effectively promote learning. In this sense, several studies reviewed here have testified to the need to evaluate the potential of specific technologies for teaching-learning L2, and, as suggested by Finardi, Prebianca and Schmitt (2016), these caveats can be reduced and even overcome through the mediation of the teacher and other interlocutors.
The man with the phrase book in his head: On the literariness of the illiterate Homer

Introduction

Ong's tongue-in-cheek characterisation of Homer as the man with "some kind of phrase book in his head" (Ong, 1982:18) invites us to confront our generally comfortable preconceptions regarding so-called "great" (or "timeless") art and literature. This article will in no way contradict the notion that the Homeric poems are indeed great literature, yet it will not take particular umbrage at Ong's rather reductive view of (the great) Homer either. In seeking what exactly it is in the Iliad and Odyssey that makes these poems so "prestigious", so "literary", we will come to the conclusion that the supposed "genius" (or, for that matter, phrase book!) of their presumed author is really of no consequence. The key element in the greatness of the Iliad and Odyssey is their reception. This is true not only of the Iliad and Odyssey, but in fact of all "literature". Could this also be true of oral literature? And of what significance then is the orality (the presumed fact that they were not composed in writing) of the Homeric poems? In order to answer these questions, we need to begin with the theory that has irrevocably linked the Iliad and Odyssey to oral tradition: the oral-formulaic theory.

The oral-formulaic theory: background and criticism

In the early 1930s Milman Parry's research on the structure of the hexameter verse of the Iliad and the Odyssey convincingly concluded that the poems were orally composed. Working in conjunction with Albert Lord, who was to carry on his work after his death, Parry found comparative verification for the theory behind his findings in a subsequent study of the oral poetry of the guslari of Serbo-Croatia. Popularized in Lord's immensely influential The Singer of Tales, the oral-formulaic theory is also frequently referred to as the Parry-Lord thesis.

At the top end of the scale John Miles Foley (1986, 1988) and Walter Ong (1982, 1987) have lauded Parry and Lord as providing the theoretical framework for nothing less than a new (albeit strongly interdisciplinary) field called, among others, "Oral Theory" (Foley) and "Orality-Literacy Studies" (Ong). Other appraisals have been less enthusiastic, or at least more careful. At the core of the divergent appraisals of Parry and Lord's work lies the vexed issue of the originality of the oral. On the one hand, every commentator of oral tradition is in agreement as to the need for the oral text (a "real, objective and tangible score, an entity that exists both as a thing in itself and as a directive for its perceivers" (Foley, 1990:5); "text" in this sense includes "performance") to be studied, appreciated and interpreted "on its own terms", and it is on this basis that the notion of a "great divide" between literacy and orality has to some extent been justified - at least as a starting point - as a model for studies in oral tradition (Foley, 1994:169). On the other hand, this generally welcome development has had uncomfortable by-products. Cautioning against an over-zealous application of the oral-formulaic theory's "more ambitious claims" (Finnegan, 1977:72), Ruth Finnegan offers observations from a wide range of oral traditions that serve to contradict the basic tenets of
the theory (Finnegan, 1977:69-87). (Finnegan's critique centres on the following questions: To what extent is an oral-formulaic style indeed indicative of oral composition? Can the formula be defined with enough rigour for it to constitute a distinctive feature of the oral text? Must the text of necessity be composed in performance?) Furthermore, the literacy vs orality model, with its emphasis on the differences between written and oral textuality, has tended to obscure what more recently has become an important area of research in its own right, namely the issue of the "overlapping" of the literate and the oral. Werner Kelber, for one, has warned against the tendency - promoted by the oral-formulaic theory - to see in the (supposed) fact that "the Homeric poems were composed without the aid of writing" some kind of "essentialist (oral) purity" (Kelber, 1994:199).

The type of criticism of the Parry-Lord thesis voiced here by Finnegan and Kelber can be regarded as theoretical (or scholarly) in the ordinary sense: a theory is evaluated and critiqued for its import to a specific field of study. With Leroy Vail and Landeg White, however, this criticism becomes distinctly ideological. According to them, the oral-formulaic theory offers the proverbial (theoretical) backdoor for nothing less than racism. A "psychologizing interpretation" of the notion of the formula (in which they see Ong as one of the main culprits) has led to the racist elaboration of "oral man" (see Vail & White, 1991:1-39). No longer is the question of literacy/orality merely a theory to be supported, modified or set aside according to the evidence of a field of scholarship. Just as in dichotomies like "civilised vs savage" and "scientific vs primitive", the difference implied by orality takes on the entire political burden of the historical discrimination against people on the basis of predefined categories.

Ong shows an awareness of the ideological dimension of orality when he describes the usefulness of the Parry-Lord thesis for addressing the Homeric poetry "on this poetry's own terms" as an "undercutting ... (of) chauvinism" (Ong, 1982:18; my italics). In a way, then, Ong can be said to present the Parry-Lord thesis at the kind of ideological level at which detractors like Vail and White pitch the essence of their criticism. Ong's awareness of the implications of the Parry-Lord thesis for questions of social power and prestige is important; there is, of course, a similar awareness in the remark by Ong from which I derive the title of this article.

There is one more point of criticism of the oral-formulaic theory that is of importance to us. White (1989:34) takes issue with Parry and Lord's overarching concern with oral textuality or form, and accuses them of "(breaking) the link between performance and history". On a theoretical level this criticism, stressing the need for an interpretation of the oral text "beyond the confines of its textuality" (Barber & De Moraes Farias, 1989:3), has given impetus to the so-called "performance-centred" approach in oral tradition.
What we are left with is an impasse between structure on the one hand - the oral-formulaic theory - and, on the other, "the processes of performance and audience reception as they actually take place in space and time" (Finnegan, 1986:74) - the performance-centred approach. But as Foley (1992:280) reminds us, Parry's original research was first and foremost concerned with proving the traditional character of the Homeric poems. That the peculiar structures indicative of tradition implied orality was only a subsequent revelation (see Parry, 1971:439). Surely, if the oral-formulaic theory's indebtedness to tradition is taken seriously, it should no longer be seen as exclusive of meaning? There is, moreover, no reason why tradition cannot indicate extratextuality. "What if it [tradition] came to refer to a reality larger even than the entire individual performance, or group of performances?" asks Foley (1992:281).

Clearly, the performance-centred approach needs to look no further than tradition for the meaning it aspires to. Both the Parry-Lord thesis and the performance-centred approach lay claim to verbal art as "a situated, experienced event in traditional context" (Foley, 1992:277). On this basis Foley is able to proceed to an integration of the two positions around the seductive matrix: "performance [text] as the enabling event and tradition as the enabling referent" (Foley, 1992:294). (Foley strongly argues that the expressive qualities attributable to "experienced events" (performances) on the basis of the multifaceted reality (tradition) underlying them can, albeit in reduced fashion, also be applicable to the "oral-derived" text.)

From the traditional to the literary

Performance is an enabling event (or sign, signifier?), tradition is an enabling referent (or meaning, signified?). Foley's proposed method will, in fact, be one way of reconceptualising the significance of Homer's orality as creative of literature. Reconnected with meaning (which is embedded in the tradition he is actualising), the virtuoso technician can fully claim to be a poet. In this article I am suggesting another way. Where Foley departs from the notion of tradition, I am taking as point of departure the notion of literature. Where Foley reminds us of the importance of Parry's original conception of the traditional in order to establish (a framework for) meaning, I wish to reflect more particularly on the implications of Parry's breakthrough concerning the orality of the Iliad and the Odyssey for what we regard as literature.

What is "art" in oral art, or "literature" in oral literature?

Literature as prestige/influence

It should be clear from the preceding that, for the purposes of this article, I am setting aside the post-modernist/deconstructionist argument which seeks to refute the distinction between "high" and "popular" culture - hence between the literary and the non-literary - through its assignment of all aesthetic discourse to the all-encompassing sea of (inter)textuality (Easthope, 1990). Literature is at the very least something relatively specific. As Tony Bennett (1990:273) puts it:

… whilst its [the concept of literature's] conventional understanding as a uniquely privileged kind of writing cannot be sustained, the term does cogently designate a specific, but non-unitary, field of institutionally organised practices - of writing, reading, commentary and pedagogy.

The oral literary text is therefore by extension regarded as at least in principle distinguishable from the oral text as such. It is also in this sense that I take the concept of oral art (aesthetic) to be relatively distinct from the concept of oral culture. Not all culture is necessarily artistic.
The most obvious category of literature in this sense is what is frequently referred to as the "canonical": texts that over periods of prescribed institutionalised study and critical attention acquire the kind of prestige that makes them "classics". Canonical literature has been characterised in various ways, most often in relation to its supposed ability to retain interest and relevance over extended periods of time: "(T)he work is assumed to transcend history because it encompasses the totality of its tensions within itself", comments Paul de Man (in Jauss, 1982:xi). A different type of characterisation, within the context of what is considered a "high" literary work, is given by Anthony Easthope (1990:90). In relation to the popular, the literary work has a relative plurality of meaning; its meaning is "deferred", the text means "more than it says".

In developing his aesthetic of reception, Jauss is critical of "essentialist" conceptions in terms of which the meaning of a work is characterised as "representational or expressive function" (Jauss, 1982:15). He is equally critical of Hans-Georg Gadamer's idea (already expressed in De Man's observation above) that the classical text "signifies itself and interprets itself" (Gadamer, in Jauss, 1982:30). In Gadamer's view the classical work, addressing itself to a kind of eternal present - a "timeless ideality" (Jauss, 1982:13) - achieves its own historical mediation without the interference of a reader/audience. Yet to Jauss the true meaning ("historical essence") of a work lies strictly speaking outside the work - in its influence. "The work [of literature] lives to the extent that it has influence. Included within the influence of a work is that which is accomplished in the consumption of a work as well as in the work itself" (Karel Kosík, in Jauss, 1982:15; my italics). "Influence" in this sense involves a multifaceted dialectic of author, work and public, in which "the perpetual inversion occurs from simple reception to critical understanding, from passive to active reception, from recognized aesthetic norms to a new production that surpasses them" (Jauss, 1982:19). If the key concept in all of this can be traced to the idea of productivity of meaning, it is the reader - the addressee (who includes both the critic and the reflective consciousness of the author) - that takes on the prime role in its development. And the basis for this productivity is the reader's horizon of expectations, which describes the criteria readers use to judge literary texts in any given period. These criteria will help the reader decide how to judge a poem as, for example, an epic, or a tragedy or a pastoral; it will also, in a more general way, cover what is to be regarded as poetic or literary as opposed to unpoetic or non-literary uses of language (Selden, 1985:14).

Jauss's aesthetic of reception generally situates the greatness of the classic work in the addressee's reaction to it. What causes a work of literature to be a classic, a "masterwork"? At issue is the degree to which a work demands of its addressee a change towards as yet "unknown experience". This crucial change is the "aesthetic distance" the addressee is required to cover between his own horizon of expectations (which involves his "familiarity of previous aesthetic experience") and that required for the reception of the work. Moreover, this change reveals itself, at least at first, as negativity. The masterwork is distinguished from "entertainment art" (the popular?)
to the extent that this aesthetic distance is relatively great. By contrast, in the latter (also called "culinary" art), the reception of the work requires little or no change on the part of the addressee, and blithely fulfils expectations, satisfies desires, "solves" problems (Jauss, 1982:25). (It is an interesting feature of the aesthetics of reception that the initial aesthetic distance covered by the readers of a work "can disappear for later readers, to the extent that the original negativity of the work has become self-evident and has itself entered into the horizon of future aesthetic experience, as a henceforth familiar expectation". This, according to Jauss, may bring the classic work close to being mere entertainment ("culinary art"), requiring "a special effort to read ... [it] 'against the grain' of the accustomed experience" (Jauss, 1982:25-26).)

Our brief incursion into the workings of the canonical can be justified, of course, by the enormous prestige enjoyed by the Homeric poems. The following description is by no means uncommon: "The Iliad is the first substantial work of European literature, and has fair claim to be the greatest ... (I)t may fairly be described as the cornerstone of Western civilisation" (Hammond, 1987:7). This virtually universal prestige of the Iliad and the Odyssey is, of course, in stark contrast to the still prevailing dismissiveness - at least at the level of genuine aesthetic appreciation - which characterises the reception of oral texts generally. The crux of the matter is that the oral text, though quite commonly called "literature", has tended, by and large, to be merely "collected" as "evidence" of a particular type of culture. This point is well made by Karin Barber. Reflecting specifically on African oral literature, she decries the lack of any "developed criticism" in regard to the latter, as a result of which "scholars [who have trained in the tradition of 'mainstream criticism'] ... have tended to abandon the attempt to criticise oral literature and have fallen back instead on the mere collection and annotation of texts". The reason for this, she advances, "is to be found in the political situation of oral literature in general ... Oral literature everywhere has been or is being marginalized with the displacement and impoverishment of its bearers, the illiterate peasantry" (Barber, 1984:497).
A further point can be made here, concerning the prestige accorded to particular oral texts within the community in which they have been produced. On one level it should be obvious that, within a given culture, some texts (for example praise poems) should be considered as more expressive of power, more "serious" than others (riddles, for example). Yet such distinctions are frequently ignored by a scholarly treatment intent on seeing oral texts as "ethnographic documents". While oral societies are commonly said to have "literature" by analogy with literate societies - "just like us" - oral texts are generally only differentiated within a functionalist perspective, with comparatively little attention given to questions of status or prestige. This attitude can at least partly be attributed to two inter-related notions: the notion that oral societies tend towards "cultural homeostasis" (Goody, 1977:14) and the more Romantic notion of oral societies being "egalitarian" (Finnegan, 1977:34). Slowness of change and perpetual equality do not really promote a perspective on texts as indicative of social power. Yet, as Barber and De Moraes Farias (1989:2) argue, the idea of a "society" or "people" having a "monolithic and homogeneous culture, equally shared by all its members" no longer holds. Many societies are, in fact, characterised by "extraordinarily complex internal cultural differentiation", making it impossible to assign "a single determinate 'translation' to any ideological phenomenon in any society". I see this as implying a generally more "conflictual" model for the oral culture, with that culture's texts being subject to horizons of expectations variously addressing issues of power, tension and conflict. This is a manifestly different perspective to the one that has generally been put forward regarding the oral text in relation to conflict, namely that the text (most famously the folktale) addresses potential conflict, exerting "a stabilizing influence" through its fulfilment of some predetermined societal (pedagogical) function.

Of course, the audience of an oral performance participates in its creation and in its production of meaning in ways unthinkable for the reader of the written text (Finnegan, 1977:122). But in the final analysis the kind of explanation Jauss provides for the most influential kind of literature does not really apply to the oral. Not, I would hold, because oral societies somehow lack the conception or possibility of "aesthetic distance" (there is ample evidence to the contrary), but simply because research into oral tradition has failed to develop its own horizon of expectation with regard to the oral text. The following observation by Olabiyi Yai (1989:59) still holds true:

No communication seems to exist between the production/consumption of oral poetry and its criticism. More precisely, communication is unidimensional. When the creator of oral poetry and his academic critics are contemporaries the terms of the critical exchange are unilaterally set by the critic. The poet is thus degraded from his status of creator to that of an informant. He can only make such contributions as required by the initiatives of the critic.

We need not give up here, fortunately, for we still possess the example of those ultimately prestigious oral texts, the Iliad and Odyssey. The degree to which their reception counters the diminution of the ultimate poet to the "man with the phrase book in his head" may yet yield possibilities for the recognition of a genuine oral literature.
Literature as creativity

In the article already referred to, Foley (1992:275) draws attention to "customary organizing principles" which "delineate" the texts of oral cultures. Of these he mentions "author" ("artist"), "text", "genre" and "tradition" (given this article's interest in reception aesthetics, let us also add "addressee" or "audience"). As discussed in the introduction, Foley embarks on a kind of theoretical merger of the notions of text (as performance) and tradition: performance as enabling event, tradition as enabling referent. But the "grid" he provides us with is particularly useful for another reason. It defines conceptually separable categories (in the otherwise all-encompassing and amorphous "orality") against which and in which we can attempt our location of the crucial literary concept of creativity.

Creativity as a quality of the artist

Let us start with the artist, the poet - or the man with the phrase book in his head, provocatively characterised by Ong (1982:22):

Homer, by the consensus of centuries, was no beginner poet, nor was he a poor poet ... Yet it now began to appear that he had had some kind of phrase book in his head ... Homer stitched together prefabricated parts. Instead of a creator, you had an assembly-line worker.

Ong here catapults us headfirst into the major controversy of the Parry-Lord thesis, what Foley (1988:58) describes as "utility versus context-sensitivity ... convention versus originality". In fact, the man with the phrase book in his head represents the ultimate caricature of the oral-formulaic theory. Faced with the overriding necessity to "keep going" (i.e. save face) in front of his all-too-easily distracted audience (the need for "compositional fluency"), Homer quite literally pulls out every trick in the bag, drawing on the "multiform" of expressions of the oral tradition. Rather than the literary criteria of "intellectual probing", "self-awareness" and "detachment" (Finnegan, 1973), his sole occupation is utility.

Of course, there is (still) the more meliorative view of Homer, presented here by a recent translator of the Iliad, Martin Hammond (1987:11):

Homer was a poetic genius of quite exceptional power and range, who far excelled his predecessors (and his few successors) in technical skill, breadth of vision, quality of imagination, and sheer ambition.

Lest one is tempted to think that Hammond is somehow unaware of the oral traditional theory underpinning Homer's performance, this is decidedly not the case. He gives a succinct but precise account of the various multiforms (both prosodic and thematic) the oral poet employs, and is under no illusion as to the traditional elaboration of these forms: "The Homeric poems are in one sense the creation and final flowering of a long and distinguished tradition" (Hammond, 1987:11). Yet individual creativity has its definite - and decisive - place: "... (T)he pre-existing epic tradition was a necessary cause of the phenomenon of Homer, but not a sufficient cause" (Hammond, 1987:11; my italics).
In the passages quoted here, both Ong and Hammond, although they adopt virtually opposite standpoints, address the issue of the creativity of the oral at the level of the individual, the "author". Of course, the one advance of the Parry-Lord thesis that seems to be genuinely beyond dispute is the fact that it liberated us (and well before post-modernism got around to it, we might add) from the hackneyed Romantic model of creativity: the solitary tortured poet pondering the Muse: creation ex nihilo (Ong, 1982:21-22). (This particular liberation coincides, of course, with Parry's ultimate demonstration, namely that the oral poet did not have to rely upon the obviously uncreative mechanism of rote memorisation.) The oral artist is not isolated; he is part of a society, a tradition, face to face with an audience. He does not create out of nothing, he improvises with the forms at his disposal. In the case of Homer, Hammond would no doubt say "improvises brilliantly" - he talks of Homer's "extraordinarily skilful control" (Hammond, 1987:14). As for the man with the phrase book in his head, Ong might well use the word "improvise" - he uses the word "rework" in a related context (Ong, 1982:23) - but then no doubt in the less flattering sense of "rehash with some alterations". In his assessment of the significance of a particular song undergoing changes in its transmission from one oral performer to the next, Lord, for his part, situates the creativity of the individual artist in the idea of the "preservation of tradition by the constant re-creation of it" (Lord, 1960:28). But, however tempting this type of characterisation (improvising, skilfully controlling, re-creating - and even conceding Hammond's point about tradition not being a sufficient cause for the phenomenon of Homer), we have to concede that the creativity of the individual remains submerged in the tradition of which it is the instrument - rather than the other way round. In Parry's own words: "there (are) ... certain established limits of form to which the play of genius must confine itself" (Parry, 1971:421).
Creativity as a quality of the text

The individual artist is, however, not only submerged in the tradition, he is equally submerged in the text. This is perhaps most obvious in the case of oral-derived texts: "works of verbal art that took shape in or under the influence of oral tradition, but that now survive - for historical reasons - only as [written] texts" (Foley, 1992:290). (At a public lecture given by Foley in 1995, at the University of Natal, Durban, he in fact expressed doubts about the validity of this concept, in light of research that had brought to the fore oral texts - in the Parry-Lord sense of having been integrated into an oral tradition - deriving from texts originally composed in writing.) The Iliad and the Odyssey being the definitive example of this type of (oral) text, let us remind ourselves that the very existence of Homer (quite apart from who he actually was) has over the years been a favourite subject of debate. But whereas in the case of the oral-derived text the individual artist is unknown to the point of simply being "not there", the oral performance shows perhaps a more theoretically compelling way of submerging the individuality of the artist. Of course, the oral artist is in all probability well known to the audience he performs to, perhaps even better known, in fact, than the author of a published book to its readers. In the case of the written text, however, the author's act of composition is clearly separate in time and space from its resultant object (the text) and can, for that reason, be studied in its own right. It is fashionable to ask authors questions about how they think about what they write. The same is not true of the oral text - at least the quintessential oral text, as intended by Parry and Lord - the text that is composed in performance. This text sees the conflation of the act of creation and the object of creation, of the composition of the text and the existence of the text. The entire process of creation (revolving, as composition of an individual text, around an individual artist) is "collapsed" into the text, disappears into it. Once composition has merged into text through the event of the performance, the text alone remains as instance of creativity.

An alternative way of describing the pre-eminence of oral creativity as being that of text rather than of artist can be formulated on the basis of Jean-Jacques Nattiez's (1990) model of the "symbolic phenomenon". The symbolic phenomenon (for our purposes, the text) is conceptualised as having three "dimensions", namely the poietic (relating to a process of creation that may be described or reconstituted), the esthesic (relating to the construction of meaning on the part of a receiver) and finally the trace or "neutral level" (relating to the physical and material embodiment of the symbolic form). Following our reasoning above, the oral text merges poietic (creative process) and trace (object of creation) into a single dimension alongside the esthesic, which is the appreciation of the oral performance by the audience.
Creativity as a quality of the tradition

We shall presently return to the question of the text as "instance" of creativity (within the grid of organising principles proposed by Foley). At this point, let us consider tradition. Perhaps an initial connection between tradition and creativity (which also relates to the artist) can be traced to Parry's idea of tradition enabling the artist, in Foley's words, to "(fill) his work with the spirit of a whole race" (Foley, 1988:21). Tradition makes it possible for the work of art to transcend the limits of the specific and aspire to the truly collective - to universality (which would naturally be a prime requirement for the timelessness of the classic we discussed earlier). Having developed this argument through an analogy with the "perfection" achieved by the Greek sculptor Phidias, Parry concludes:

We realize that the traditional, the formulaic quality of the diction was not a device for mere convenience, but the highest possible development of the hexameter medium to tell a race's heroic tales. The poetry was not one in which a poet must use his own words and try as best he might to use possibilities of metre. It was a poetry which for centuries had accumulated all such possibilities ... (Parry, 1971:425; my italics).

Directly addressing the notion of literariness in relation to tradition, Lord (1960:141) writes the following:

We realize ... [now] that what is called oral tradition is as intricate and meaningful an art form as its derivative 'literary tradition'. In the extended sense of the word, oral tradition is as 'literary' as literary tradition. It is not simply a less polished, more haphazard, or cruder second cousin twice removed, to literature. By the time the written techniques come onto the stage, the art forms have been long set and are already highly developed and ancient.

What enables Lord to conceive of tradition here as something "literary" in its own right stems in part, one feels, from a similar conception to that of Parry concerning tradition's inherent collectivity, but also (particularly in light of his view of the artist recreating the tradition) from a specific conception as to how tradition operates, which stresses, in the most general terms, fluidity over rigidity. It has moreover become increasingly common (under the influence, perhaps, of contemporary theories of intertextuality) to see the oral tradition as, rather than a "long chain of interlocking conversations between members of the group" (Goody & Watt, 1968:29), something inherently multifaceted, multidirectional rather than simply linear. Foley (1992:276) expresses this well:

I have assumed tradition to be a dynamic, multivalent body of meaning that preserves much that a group has invented and transmitted but that also includes as necessary defining features both an inherent indeterminacy and a predisposition to various kinds of changes or modifications. I assume, in short, a living and vital entity with synchronic and diachronic aspects that, over time and space, will experience (and partially constitute) a unified variety of receptions.
It is a fair argument that oral tradition presented in this way makes nonsense of the idea of the oral text "shackled by convention", an idea mostly advanced by detractors of the Parry-Lord thesis (like Vail and White), but also by its followers, at least to the extent that the followers choose to insist on the "formulaic" part of the theory. Had Ong been less under the impression of the formula, Homer might yet have escaped being an assembly-line worker with a phrase book in his head. Motivated by a similarly flexible view of tradition, Hammond (1987:11) tells us that the "success and quality of a singer's creation [and we have seen how highly he thinks of Homer] will depend on the richness of the tradition within which he works". (Hammond then proceeds to set off the richness of Homer's tradition against - interestingly - the "impoverished tradition" of the guslari, "much studied in recent years for the light they might shed on the Greek tradition, though the illumination is generally oblique and remote".)

For all this talk of a "creative" tradition, however, the fact remains, as I have argued elsewhere (Alant, 1996), that the notion of the traditional has never been properly integrated into contemporary literary theory. "Traditional" remains by and large at a counterpoint to the "literary" that the critic sees as his field of endeavour. (It is also, of course, the most obvious counterpoint to the [post-]"modern".) If, as I argued earlier, research into oral tradition has failed to develop a horizon of expectations with regard to the oral text, it is precisely because researchers have been unable to conceive of the text beyond "its" tradition. There is a further point to be made here. We need to remain sensitive to the fact that the notion of tradition is in any event an "organising principle" of a field of knowledge. No matter how well-grounded it may appear in terms of observed reality, the concept of tradition remains, as Michel Foucault (1976:31) has argued, an intellectual tool designed to serve particular interests (those of the researcher). Yai's pessimistic observation (quoted earlier) regarding the lack of communication between the production/reception of the oral text on the one hand and its criticism on the other can be understood in this perspective (Yai, 1989:59). "Tradition" may well be a tool, but it is especially a barrier.

Creativity as a quality of the receiver/audience

We have more or less set aside the categories of artist and tradition as instances of orality within which to situate the notion of creativity. I shall not here say anything about genre; it relates more directly to the question of prestige or influence dealt with earlier. We have also talked about the text, and have seen it take on a particularly privileged position. The process of creation (the poietic) "collapses" into the text (Nattiez's "trace" level). But we cannot define the text thus isolated as the "location" of creativity, without committing ourselves to the kind of essentialist definition of literature criticized by Jauss, or without falling into the kind of formalism which has been the basis of the most general criticism of the Parry-Lord thesis. What we can do, however, is to consider what any text received by an addressee (an audience) implies, namely a horizon of expectations. This brings us to Nattiez's esthesic function (the construction of meaning on the part of a receiver), placing us firmly within an aesthetics of reception.
It can easily be forgotten, considering the pains both Parry and Lord took in order to define the artistry of the oral Homer (to convince their readers that the oral-traditional mode of production, irrespective of how different it is to the literate, can indeed be literary), that Parry did not discover the Homeric texts to be oral and then proclaim them literature. No, the Iliad and the Odyssey had already been lauded as literature - great literature - by successions of generations right up to the modern. Of course, we now know that at least part of this literary appreciation was founded on the kind of "chauvinism" (in Ong's words) Parry managed to "undercut", and we may even agree with the irony in the view expressed by Pierre Macherey, namely that the Iliad appears so different to us compared to what it must have been like for its contemporary public that "it was as if we ourselves had written it" (Macherey, 1977:45). This raises the obvious possibility of Homer being "misread". One conception of the role of the reader mentioned by Jauss (attributed to R.G. Collingwood) claims that a text is only understood "if one has understood the question to which it is an answer" (Jauss, 1982:29). One might rightfully ask: what if the reader finds a different question? Jauss mentions Gadamer's concept of the "fusion of horizons" - the awareness of a work's "successive unfolding of the potential for meaning" - as a way of limiting a more or less arbitrary reading. At the same time he concedes, following Gadamer, that the reconstructed question need not stand within the text's original horizon of expectation; "the historical question cannot exist for itself" (Jauss, 1982:30).

There exists an enormous historical and cultural gulf between Homer and his reader, yet the fact remains that even contemporary readers have been able - and Jauss recognises their freedom in this respect - to somehow negotiate the aesthetic distance that is part of that separation, to forge a horizon of expectations in the light of the Homeric text. We also have to concede the following point. It is not impossible that Homer's orality has at times been overemphasised as the "standard" by which his poetry should be interpreted. Hammond invites the prospective reader of Homer to "understand something of the tradition within which the poet worked, and the techniques of composition which that tradition had evolved", but he concludes:

Awareness of the poem's oral composition may rightly affect some points of detailed interpretation: but generally the Iliad deserves, and will repay, the approach that would be natural to any great work of literature (Hammond, 1987:14).
We may reflect on exactly what it is that has made orality's most famous text so amenable to horizons of expectations over the ages. Hammond (1987:12) refers to the "tragic quality" underlying the contrast between the traditional poetic style on the one hand, "expressive of an ordered and stable world in which all things have their own excellence and beauty", and the narrative line of pain, destruction and defeat on the other. But this tells us, I would argue, less about the actual text than it tells us about the possibility of the reader to respond on the basis of the text. This notion of response is crucial, for not all texts that become integrated into a horizon of expectation necessarily elicit response, or at least the same type of response. According to Jauss, the effect (influence) of a text is measured by the extent to which "those who come after it" respond to the text, a response he characterises as the desire to "appropriate", "imitate", "outdo" or "refute" (Jauss, 1982:22). In other words, the response can be more or less creative. (Jauss could in fact quite easily have used these terms in relation to the reception of oral texts. Yai (1989:63-65) describes the critical practices in the Gèlèdé society of Western Yorubaland which allow, amongst others, for a special kind of "dialogic mode" between performer and audience, as well as "poetic contests".)

Our search for a location of creativity among the different organising principles of orality can therefore bring us to the following. Whether an oral text is literature is determined by its reader/audience or, more precisely, by the latter's response on the basis of his expectation of the text. In terms of the aesthetic distance between the text and the reader's own experience, what kind of a response is the reader/audience inclined to make? It is only at this point that the notion of creativity becomes at all relevant. How creative is the reader/audience's response going to be? The oral text is literature if it allows for a relatively creative response, a response that is not perceived by the reader/audience as, in a sense, pre-determined by his expectation, a response, in other words, that orientates the reader/audience towards as yet "unknown experience". This brings us back to the title of this article. The determining factor in distinguishing the literary from the non-literary is decidedly not whether or not the composer or poet has a phrase book in his head (or is an assembly-line worker). Ong should not have aimed his witticisms at Homer, but rather at the responses of those who have read (or listened to) him.

Conclusion

The Iliad and Odyssey are particularly privileged oral texts. While their particular position with regard to a powerful Western cultural heritage is an obvious factor in this privilege, there is also a much more mundane reason. They had been written down long before their centuries-long reception. The same privilege would apply, though to a lesser degree, to oral-derived texts in general, particularly those that were written down a long time ago and have therefore been received as part of a large corpus of written literature (the case, generally, of "literate" societies). It is this factor more than anything else which accounts for the greater prestige enjoyed, within research in oral tradition, by the "dead-language" traditions in comparison to the living oral traditions (Foley, 1988:110).
The texts produced in a contemporary oral tradition are invariably received as written (transcribed) texts by the researcher or field-worker. The latter may of course qualify the text thus received as "oral literature" - and generally does - by mere analogy with the written literary text. A more in-depth appraisal of the text's possible literariness remains more or less impossible, largely due, as I have attempted to show, to the exclusion of the notion of tradition from contemporary literary theoretical perspectives.

The Iliad and Odyssey may well be ultimately privileged. At the same time, however, they have been instrumental, as the original motivation for a particularly powerful theory (in which many have acknowledged the dawn of a new discipline), in spawning an interest in oral texts - an interest that indeed extends way beyond those oral texts that have been "derived" into a written form. In this way the Homeric poems have raised the very real possibility of an oral literature - recognised and received as such - which would be much larger than the convention of written literature would allow for.

I have attempted to consider the importance of the Iliad and Odyssey from a literary, rather than "oral-traditional", point of view. This amounts, in fact, to a reconceptualisation of the origins (within the Parry-Lord framework proposed by Ong and Foley) of the "discipline" of Oral Theory. Ideally this literary conception should provide us with a theoretical standard against which all oral texts could be defined, as literature, in the same terms. But our literary horizons of expectations do not extend far enough. To what extent, then, does this conception of the theoretical origins of Oral Theory constitute an advantage over the oral-formulaic theory? The attraction of the oral-formulaic theory lies in the concrete criterion it addresses itself to: stylistic form of expression. In so far as it is able to provide a rigorous definition of the latter (but this is becoming increasingly difficult), hypotheses that are made on the basis of the oral-formulaic theory can be objectively "tested". One can count the formulas, hence "quantifiable formulaic analysis" (Foley, 1992:279).

Beauty lies in the eye of the beholder. No matter how rigorously we may try to demarcate its area of influence within a larger process of production, the quality of the postulated response will always defy measurement. But if we take seriously - as much research from various parts of the world has shown - the almost unfathomable variety of oral texts, this very vagueness may, in a sense, be an asset. We cannot say, as a justification for what we are doing, that all oral texts are artful and literary and therefore at least as worthy of contemplation and study, indeed as prestigious, as the canonical literature of Western society has been held to be. But the image of the ultimately privileged oral text Milman Parry presents us with opens, at least, the possibility that other oral texts, from other, less prestigious oral traditions could, given different circumstances of reception, also be literature.
There is, perhaps, a further advantage to this view, relating to what we may call the theoretical coherence of research done in the field of oral tradition. An exclusive adherence to the oral-formulaic theory as a conceptual basis for Oral Theory would mean that a large part of research into oral tradition may well fall outside the scope of Oral Theory - notably in Africa (see Finnegan, 1977; Foley, 1988). African oral traditions - we may think of Zulu and Xhosa praise poetry - certainly produce formulaic texts, but the repetition typically found in the African work song - to quote but one example - certainly owes little to the "metrical conditions" that provide the basis of the Homeric formula (Vail & White, 1991:28). Yet if we see the Homeric poems as essentially providing a kind of theoretical space for the notion of oral literature, then all research that deals with oral texts can be brought into line with Oral Theory, irrespective of the texts' formal properties or the researcher's (lack of) concern with the latter.

In this sense the Iliad and the Odyssey, texts of received excellence in Western society even though not literate, serve as a kind of master metaphor for all oral texts, whether oral-derived or not. The comparison they invite does not confer certainty. But at least we are left with a potential oral literature, which is a definite theoretical improvement on the common reflex to call oral texts "literature" simply by analogy with the written - because there is such a category of written text. For - to return to Ong at the end of this article - is this analogy not the very chauvinism Parry was supposed to have undercut?
A COMPARISON OF TURKEY-MALAYSIA LADM COUNTRY PROFILE FOR 3D CADASTRE PURPOSES

This paper summarises the comparison of the Turkish and Malaysian cadastral registration systems based on the Land Administration Domain Model (LADM, ISO 2012), associated with 2D and 3D cadastral situations. The literature review shows that many countries have proposed profiles based on the LADM, such as The Netherlands, Australia/Queensland, China, Greece and others. Turkey and Malaysia are among the potential candidates for an LADM-based country profile, as described in this paper. The study presents a detailed overview of the Turkish and Malaysian cadastral systems, and the LADM-based country profiles developed by the two countries are compared thanks to the common ontology offered by LADM.

INTRODUCTION

Cadastral maps should provide complete information regarding recorded rights, restrictions and responsibilities (RRR) on the cadastral parcel (Kaufmann and Steudler, 1998; Stoter and Oosterom, 2007). However, although cadastral objects are three- and four-dimensional in most countries, current Land Administration Systems (LASs) still produce cadastral maps that mainly rely on 2D-based cadastral systems (Ho et al., 2015; Atazadeh et al., 2016; Kalogianni et al., 2017; Rajabifard et al., 2018). Thus, RRR on the land cannot be adequately represented. Research and development in 3D cadastres has hence gained momentum over the last years. A very detailed and advanced analysis of 3D cadastre studies in a wide range of countries worldwide can be found in the International Federation of Surveyors (FIG) 3D cadastres best practices book (Oosterom, 2018). The most efficient standardised model in the land administration system (LAS) field is the Land Administration Domain Model (LADM). It became an ISO standard in 2012, ISO 19152:2012 (ISO, 2012), which aims to establish a common ontology for RRR affecting land administration and its geometric components, thus enabling communication between related parties within a country or between different countries (Atazadeh et al., 2017; Sürmeneli and Alkan, 2018). Although LADM's current version provides an international framework for LAS, its support for 3D cadastre is limited by the lack of geometry/topology and time profiles (Kalogianni et al., 2020). Having a well-created and maintained 3D cadastre will benefit many other applications, such as urban planning and land and property management (Rajabifard et al., 2018). Legal and physical standards have been developed for these applications (LADM, CityGML, IFC/BIM, LandInfra, etc.). While these standards can be used alone for each application, they can also be integrated with each other (Sun et al., 2019). Thanks to advances in geographic information science, 3D cadastral developments have matured with the storage, analysis and visualisation of 3D objects (Kitsakis et al., 2016; Dimopoulou et al., 2018; Su et al., 2019; Kalogianni et al., 2020). Within the scope of these developments, most countries have improved their cadastral systems according to the Cadastre 2014 (Steudler, 2014) and Cadastre 2034 (ICSM, 2015) visions. While the first 3D cadastral registration was made in the Netherlands (Stoter et al., 2017), a prototype for a 3D cadastre has been developed in Shenzhen, China, to support urban planning and management (Guo et al., 2013; Ying et al., 2015).
Also, many academic studies have been carried out in the Queensland and Victoria states of Australia, and physical models for 3D cadastre have been developed (Aien, 2013; Aien et al., 2015; Rajabifard et al., 2018), whereas research on the environmental impact of 3D public-law restrictions in Greece has also been described (Kitsakis and Dimopoulou, 2020). The transformation of 2D analogue cadastral boundary plans into 3D digital information and visualisation in Stockholm, Sweden, is described in an academic study (Larsson et al., 2020). The main motivation of this study is to show that the similar and different aspects of the cadastral systems of the two countries can easily be determined thanks to the LADM, which creates a common ontology. In the study, the LADM-based cadastral models developed by Turkey and Malaysia were examined; it thus becomes more understandable how the solutions developed for different registration systems are modelled. The remainder of this paper describes the Turkish and Malaysian cadastral systems and LADM in section 2. Section 3 discusses the comparisons, and finally the discussion and conclusion are given in section 4.

THE TURKISH-MALAYSIA CADASTRAL SYSTEM AND LADM

The Turkish cadastre consists of two parts: the land registry, which represents the legal relationship between people and real properties, and the cadastral maps, which, besides geometry data and annotations, contain the land use. There is a title system, and it is under state guarantee in Turkey. The land registry records are officially created by the title and registry directorates, and the maps by the cadastral branches. A 2D graphic representation of most rights, restrictions and responsibilities is possible; rights, restrictions and responsibilities regarding the third dimension are recorded textually. Land registration and surveying are managed by the General Directorate of Land Registry and Cadastre (GDLRC). Also, the cadastral system and cadastral data have been improved through several digitisation projects, known by their Turkish acronyms TUCBS (Turkey National Spatial Data Infrastructure) and TAKBIS (Land Registry and Cadastre Information System). The TAKBIS project aims to create the Turkish Land Registry and Cadastre Information System across the whole country. TUCBS is an e-government project aiming at establishing the infrastructure for a Geographical Information System following technological developments at the national level (Turkish National Geographic Information System, TUCBS) (Alkan et al., 2019). Two different organisations manage the Malaysian cadastral system: the Department of Director General of Lands and Mines (DDGLM) and the Department of Survey and Mapping Malaysia (DSMM), both within the Ministry of Water, Land, and Natural Resources. One of the main aims of DSMM is serving cadastral survey information, which includes the dimension, size, and location of parcels. Thus, DSMM produces a Certified Plan (CP) with spatial components, including the surveying and mapping of cadastre parcels. On the other hand, the DDGLM, which deals with property ownership registration, is tasked with overseeing the legal aspects of land administration. The Land Registration System, which concerns the administrative (legal) data, is the responsibility of the Land Offices. The Land Office deals with ownership registration, namely, who owns the RRRs (Rights, Restrictions, Responsibilities).
Each of these organisations has its own information management system, namely eTanah for the DDGLM and eKadaster in DSMM, which are two independent systems maintained in 2D. The Unique Parcel Identifier (UPI) was introduced to link the Land Office and DSMM, where every cadastral object has a unique identity number to differentiate it from other cadastral objects. The Malaysian cadastral system deals with properties located on the surface level as well as above and below the surface level (Choon and Kam Seng, 2013; Zulkifli et al., 2015; WBG, 2017). The Land Administration Domain Model was developed to contribute to Land Administration Systems (LAS). It was established by ISO/TC211 with the aim of standardising geographic information and geographic characteristics (Oosterom et al., 2006). The main objective of the LADM is to establish an ontology and facilitate the exchange of cadastral data within a shared land administration system. The main starting point of the LADM is to establish a common ontology for rights, restrictions and responsibilities affecting land administration and its geometric components, thus enabling communication between related parties within a country or between different countries (Oosterom et al., 2006; Lemmen et al., 2015). The LADM was developed in line with the Cadastre 2014 vision and complies with international ISO and OGC standards (Lemmen et al., 2009; Lemmen et al., 2011; Tjia and Coetzee, 2013). Besides, studies have been conducted showing the compatibility of LADM with INSPIRE (Alkan and Polat, 2017). LADM has three main packages and one sub-package: LA_Party (Party package), LA_AdministrativePackage (Administrative package), LA_SpatialUnitPackage (Spatial Unit package), and the LA_SurveyingAndRepresentation sub-package (Figure 1).

Turkish Cadastral Profile based on LADM

The researchers developed an LADM-based 3D cadastral data model (Alkan and Polat, 2017; Sürmeneli and Alkan, 2018; Alkan et al., 2019; Alkan et al., 2020) based on an analysis of the Turkish cadastral system. The model consists of three basic classes (TR_Party, TR_RRR and TR_RegistrationObject). The TR_Right class is divided into two sub-classes: property rights and limited real rights. The property right is the right of the owner (natural or legal persons) to carry out all kinds of operations on the property, such as use, purchase, sale, rent, etc. The limited real rights class is divided into two sub-classes: mortgage and easement. The restriction class covers the information in the title registration that restricts the use of rights; in the land register these restrictions are subdivided into representations, rights and liabilities, annotations and mortgages. The responsibility class represents a person's obligations, which may include paying taxes on the real estate, maintenance, repair, or easement, according to the type of real estate; there may be one or more types of obligation. The SpatialUnit class is the parent class in which all cadastral objects are represented and associated with the other classes. The SpatialUnit class comprises the parcel, building and independent section (condominium) sub-classes. The parcel class is obligatory for the cadastral system. The building class has a composition relationship with the parcel class, so every building must be on a parcel. The condominium is considered a spatial unit, and a building can have zero or more independent sections, as sketched below.
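As a rough illustration of this part of the Turkish profile, the following Python sketch encodes the parcel-building-condominium hierarchy and its multiplicities as plain data classes. The class names follow the TR_ prefix used in the paper, but the attributes are hypothetical placeholders, not the model's actual fields.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TR_Condominium:
    """Independent section of a building (itself a spatial unit)."""
    unit_no: str

@dataclass
class TR_Building:
    """A building; 0..* independent sections (a building may have none)."""
    building_no: str
    sections: List[TR_Condominium] = field(default_factory=list)

@dataclass
class TR_Parcel:
    """Cadastral parcel; composition with buildings means every
    building must lie on exactly one parcel."""
    parcel_no: str
    buildings: List[TR_Building] = field(default_factory=list)

# Example: one parcel carrying a building with two independent sections.
parcel = TR_Parcel(
    parcel_no="101/5",
    buildings=[TR_Building(building_no="A", sections=[
        TR_Condominium(unit_no="1"), TR_Condominium(unit_no="2")])],
)
print(len(parcel.buildings[0].sections))  # -> 2

Representing the sections as a possibly empty list mirrors the 0..* multiplicity, while the containment of buildings inside the parcel object mirrors the composition relationship described above.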
According to the Property Law, an annex lies outside the condominium (independent section) but is directly allocated to that section; the annex cannot be registered alone in the land register. Therefore, a 0..* (zero-to-many) relationship is selected between the condominium and annex classes. Utility networks (electricity, telephone, drinking water, sewerage and natural gas facilities) are called technical infrastructure facilities. In the existing cadastral system in Turkey, utility networks are not registered in the land registry; the existence of a utility network is associated with the parcel or building, and there may not be any utility network equipment under or above a given parcel or building. Therefore, the UtilityNetwork class has a 0..* (zero-to-many) relationship with the building and parcel classes. Since utility networks are not registered in the land registry, data related to a utility network facility cannot be kept directly in the system; instead, it can be provided through the external class TR_ExternalUtilityNetwork. Finally, the surveying and representation sub-package is the package in which the spatial objects and their geometric status are represented together with the rights, restrictions and responsibility processes. The package represents geographic points, 2D and 3D borders, titles and other sources. The attributes of the classes in the package have been created following the INSPIRE and LADM standards. Figure 2 shows the corresponding classes in the Turkish conceptual model and the LADM data model.

Malaysian Cadastral Profile based on LADM

The Malaysian country profile (Zulkifli et al., 2013; Zulkifli et al., 2014) is based on inheritance from the LADM classes. The Malaysian LADM country profile contains a legal part and a spatial part. The legal part has a party and an administrative package (Figure 4). The main class of the party package is the MY_Party class with its specialisation MY_GroupParty. There is an optional association class called MY_PartyMember. A party is a person or organisation that plays a role in a rights transaction; the organisation can be a company, a municipality or the state. A group party is any number of parties forming together a distinct entity. A party member is a party registered and identified as a constituent of a group party. The administrative package concerns the abstract class MY_RRR (MY_Right, MY_Restriction and MY_Responsibility), MY_Mortgage, MY_BAUnit (Basic Administrative Unit) and MY_AdministrativeSource. A right is an action or activity that a system participant may perform on or using an associated resource, such as ownership, customary, easement and tenancy rights. A restriction is a formal or informal entitlement to refrain from doing something. A responsibility is a formal or informal obligation to do something, such as maintaining a monument or a building. MY_Mortgage is a subclass of MY_Restriction; MY_Mortgage is also associated with the class MY_Right, and a mortgage can be associated with (0..*) rights. A BAUnit is an administrative entity consisting of zero or more spatial units (parcels) against which one or more unique and homogeneous rights, responsibilities or restrictions are associated with the whole entity, as included in the Land Administration System (Zulkifli et al., 2015). A sketch of this legal part is given below.
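To make the inheritance structure of the legal part concrete, here is a minimal Python sketch, assuming simplified attributes: the class names follow the profile as described above (MY_RRR as the abstract parent, MY_Mortgage as a subclass of MY_Restriction associated with zero or more rights), while the individual fields are illustrative assumptions rather than the profile's actual attribute set.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MY_Party:
    """A person or organisation playing a role in a rights transaction."""
    party_id: str
    name: str

@dataclass
class MY_RRR:
    """Abstract parent of rights, restrictions and responsibilities."""
    holder: MY_Party

@dataclass
class MY_Right(MY_RRR):
    kind: str  # e.g. "ownership", "customary", "easement", "tenancy"

@dataclass
class MY_Restriction(MY_RRR):
    description: str  # an entitlement to refrain from doing something

@dataclass
class MY_Responsibility(MY_RRR):
    obligation: str  # e.g. "maintain a monument or a building"

@dataclass
class MY_Mortgage(MY_Restriction):
    # A mortgage can be associated with zero or more rights (0..*).
    rights: List[MY_Right] = field(default_factory=list)

@dataclass
class MY_BAUnit:
    """Basic administrative unit: one or more unique and homogeneous
    RRRs attached to the entity as a whole."""
    unit_id: str
    rrrs: List[MY_RRR] = field(default_factory=list)

The subclassing of MY_Mortgage from MY_Restriction, rather than from MY_RRR directly, reflects the design choice stated in the text: a mortgage restricts the owner's rights and is therefore treated as a specialised restriction.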
The lots are 2D, but subsurface lots exist with 3D volumetric descriptions without 3D topology. MY_GenericLot holds the attributes of a lot, and it has two specialisations, MY_Lot2D and MY_Lot3D, with their own attributes and structure. MY_Lot2D is based on 2D topology with shared boundaries (MY_BoundaryFaceString). In 3D, topology is not available for lots (MY_Lot3D) and strata objects. In the model, one strata object type remains represented in 2D, namely MY_LandParcel (with buildings of no more than four storeys). The other strata objects are proposed to be 3D and inherit from an abstract class MY_Shared3DInfo, with strata specialisations (and mutual aggregation relationships): MY_BuildingUnit, MY_ParcelUnit, MY_AccessoryUnit, MY_CommonPropertyUnit and MY_LimitedCommonPropertyUnit. The limited common property is modelled as a part-of relationship to MY_CommonProperty (the aggregation class). The MY_Level class is used to organise the various types of spatial units; a type attribute in MY_Level describes the level type of the spatial unit, such that MY_Level is a collection of spatial units with a geometric or thematic coherence. MY_SpatialSource has an association with MY_SpatialUnit and MY_Point (figure 5). Figure 5. Overview of the spatial part of the Malaysian LADM country profile (Zulkifli et al., 2015).

Comparison of LADM Country Models Between Turkey and Malaysia Although there are differences in the cadastral systems of the two countries, they have many similarities in general. Both countries carry out projects for the improvement of their cadastral systems, and academic and institutional studies are carried out for the transition to 3D cadastre. There are two essential differences between the cadastral systems of the two countries. First, while it is possible to register temporal information about real estate in the Turkish cadastral system, there is no temporal registration in Malaysia; however, the developed LADM model provides this innovation for the Malaysian cadastral system. Secondly, Malaysia allows the registration of 3D real estate underground and above ground, whereas the Turkish cadastral system records this information in textual form (mainly for underground structures). Thanks to the common ontology provided by LADM, the differences and similarities between the cadastral systems of the two countries can be compared and understood more clearly. Table 1 shows the comparison of the LADM classes and the country profiles based on LADM developed by the two countries. In general the developed packages are the same; only some classes in the packages differ. In particular, the classes defined as spatial units are different. Each class is explained in detail in sections 3.1 and 3.2. The packages are the Party, Administrative, Spatial Unit, and Surveying and Representation packages, respectively. The LA_Party package describes the owners and right holders. The LA_SpatialUnit package describes spatial units (e.g. parcel, building, apartment) and was mapped to the abstract class RealEstate. LA_RRR describes the rights, restrictions, and responsibilities over the property. The LA_BAUnit class represents a basic administrative unit, a set of rights, restrictions, and responsibilities of one or more real properties. The surveying and representation classes are TR_Point, TR_SpatialSource and TR_BoundaryFaceString. A sketch of the class correspondence follows below.
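To summarise the comparison in Table 1 programmatically, the following minimal sketch maps LADM core classes to the corresponding country-profile classes named in the text. The dictionary layout and the pairings not explicitly stated in the text (marked in comments) are our assumptions.

```python
# Illustrative mapping of LADM core classes to the Turkish (TR_) and
# Malaysian (MY_) country-profile classes named in the text.
LADM_PROFILE_MAP = {
    "LA_Party":              {"TR": "TR_Party",              "MY": "MY_Party"},
    "LA_RRR":                {"TR": "TR_RRR",                "MY": "MY_RRR"},
    "LA_BAUnit":             {"TR": None,                    "MY": "MY_BAUnit"},  # assumption: no direct TR counterpart named
    "LA_SpatialUnit":        {"TR": "TR_RegistrationObject", "MY": "MY_GenericLot"},  # assumed pairing
    "LA_Point":              {"TR": "TR_Point",              "MY": "MY_Point"},
    "LA_SpatialSource":      {"TR": "TR_SpatialSource",      "MY": "MY_SpatialSource"},
    "LA_BoundaryFaceString": {"TR": "TR_BoundaryFaceString", "MY": "MY_BoundaryFaceString"},
}

def profile_class(ladm_class: str, country: str):
    """Return the country-profile class corresponding to an LADM class."""
    return LADM_PROFILE_MAP.get(ladm_class, {}).get(country)

print(profile_class("LA_Party", "MY"))  # -> MY_Party
```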
DISCUSSIONS AND CONCLUSION In this study, the cadastral systems of Turkey and Malaysia were evaluated, and LADM-based cadastral data models were examined for the transition to 3D cadastre. Thanks to the LADM-based cadastral data models developed by Turkey and Malaysia, the country profiles can be explained more clearly. As an international standard, LADM provides an opportunity to improve Malaysia's and Turkey's land administration systems, and it can bring many improvements and benefits to both countries. Significantly, the internal operations of land offices in Malaysia vary; as such, the ways land administration data are recorded and used differ, which has resulted in database systems that differ in structure and schema. While it may not be practical or viable to make the databases consistent at the national level, LADM provides an opportunity for harmonising the data when there is a need to integrate the databases for national interests. The database systems of the land offices can be mapped to LADM to provide a common ground for understanding how different data elements are described and used. For Turkey, the LADM is used to create 3D terminology and establish a common ontology: the objects registered in the cadastral system are defined, and the relations between the owners are represented. The LADM standard facilitates the sharing of 3D digital land administration data with other jurisdictions, since it provides a standardised approach for exchanging land administration data among all jurisdictions. Thus, it enables the introduction of the Turkish cadastral system, with its legal aspects, on national and international platforms during the transition to 3D cadastre. One of the development purposes of LADM is to create a common ontology for different countries and cadastral systems. In this study, thanks to the common ontology presented by LADM, the cadastral systems of the two countries could be easily compared, and the differences and similarities between the systems could be examined. Both countries are improving their cadastral systems and developing models based on LADM for 3D cadastral studies, promoting their countries on international platforms. In addition, they will be able to set an example for other countries that plan to develop a cadastral model using LADM.
4,076.8
2022-01-11T00:00:00.000
[ "Geography", "Engineering" ]
CryoMAE: Few-Shot Cryo-EM Particle Picking with Masked Autoencoders Cryo-electron microscopy (cryo-EM) emerges as a pivotal technology for determining the architecture of cells, viruses, and protein assemblies at near-atomic resolution. Traditional particle picking, a key step in cryo-EM, struggles with manual effort and automated methods' sensitivity to low signal-to-noise ratio (SNR) and varied particle orientations. Furthermore, existing neural network (NN)-based approaches often require extensive labeled datasets, limiting their practicality. To overcome these obstacles, we introduce cryoMAE, a novel approach based on few-shot learning that harnesses the capabilities of Masked Autoencoders (MAE) to enable efficient selection of single particles in cryo-EM images. Contrary to conventional NN-based techniques, cryoMAE requires only a minimal set of positive particle images for training yet demonstrates high performance in particle detection. Furthermore, the implementation of a self-cross similarity loss ensures distinct features for particle and background regions, thereby enhancing the discrimination capability of cryoMAE. Experiments on large-scale cryo-EM datasets show that cryoMAE outperforms existing state-of-the-art (SOTA) methods, improving 3D reconstruction resolution by up to 22.4%.

Introduction Cryo-EM is vital for obtaining high-resolution images of biological entities, such as cells, viruses, and proteins, at cryogenic temperatures, significantly minimizing radiation damage. It has revolutionized structural biology, especially through single-particle analysis (SPA), allowing for the detailed examination of molecular structures in their near-native state [12]. The process starts with sample preparation, where specimens are vitrified in a thin ice layer to maintain their native state. Researchers then use a transmission electron microscope to gather multiple 2D projection images from different angles. Image processing includes denoising and identifying particles for 3D reconstruction. Fig. 1 presents a simplified workflow of SPA using cryo-EM [25].

Particle picking is a pivotal step in cryo-EM for isolating individual protein particles from micrographs for further analysis. The quality of particle picking significantly influences the accuracy and resolution of the reconstructed particle structure in the following steps. Challenges in particle picking include the low SNR and varied particle orientations in cryo-EM micrographs, necessitating a large sample size for accurate 3D reconstructions [1]. Moreover, manual picking is inefficient, time-consuming, labor-intensive, error-prone, and introduces dataset inconsistencies [4]. Mis-identifications, or false positives, further compromise reconstruction quality. These issues highlight the need for improved particle selection techniques to enhance both the efficiency of particle identification and the overall quality of cryo-EM reconstructions, emphasizing the reduction of false positives and the increase of true positives [11].
Various semi-automated and automated cryo-EM particle picking methods have been developed in response to this need. Traditional methods are categorized into template-free [13] and template-based methods [14,16,17,19]. Template-free methods like the Difference of Gaussians (DoG) [21] are noise-sensitive and less effective for irregular particles. Template-based approaches struggle with particle variability and are ill-suited for novel structures, limiting their efficacy in complex cryo-EM analysis. With the advent of deep learning, NN-based particle picking methods [1,22,23,26] have been proposed, marking a significant evolution in the field. These advanced techniques leverage the powerful pattern recognition capabilities of deep learning models to enhance the accuracy and efficiency of particle picking. Among these methods, crYOLO [22] and Topaz [1] are notable for their widespread application. While crYOLO is recognized for its efficiency in particle detection, it occasionally misses real particles. Topaz, though capable of identifying particles with limited labeled data, is susceptible to false positives and duplicates. Despite claims of minimal data requirements, these methods still often require large-scale labeled datasets for improved performance. Moreover, they exhibit limited generalization to unseen data, restricting their applicability in diverse cryo-EM research settings.

In this study, we present cryoMAE, a cutting-edge cryo-EM particle picking approach drawing inspiration from MAE [7]. Leveraging the few-shot learning paradigm, cryoMAE is meticulously designed to first learn representative particle features efficiently from a limited set of cryo-EM particle regions; it then detects and extracts particles from query micrographs by comparing the latent features generated for exemplars against those from regions within the query micrographs. The operation of cryoMAE unfolds in two distinct stages. Initially, it trains on a curated set of particle regions and a broader selection of unlabeled regions from a reference micrograph, utilizing a self-supervised approach. We introduce a unique self-cross similarity loss, ensuring the cryoMAE encoder generates distinct latent features for particle and non-particle areas. Subsequently, the trained encoder analyzes query micrographs, extracting and comparing latent features to exemplar features to ascertain particle locations through similarity scoring.

The performance of cryoMAE was rigorously evaluated using the CryoPPP cryo-EM particle picking dataset [4], showcasing significant enhancements in 3D particle reconstruction resolution. Particles selected by our model from this dataset exhibit up to 22.4% (average 11.1%) improvement in resolution compared to those picked using current SOTA models. Remarkably, these results were achieved using just a few labeled exemplars (e.g., 15) per protein type, highlighting cryoMAE's efficient use of limited data.
Our contributions are summarized as follows. Unlike traditional methods, NN-based approaches learn particle features and orientations directly from the training data, making them more adaptable to different datasets without the need for specific templates. CrYOLO [22] and Topaz [1] are distinguished for their advanced particle picking capabilities in cryo-EM. CrYOLO leverages the You Only Look Once framework [15] for particle detection, and Topaz employs convolutional neural networks (CNNs) with positive-unlabeled (PU) learning. Despite their strengths, crYOLO may overlook true particles, while Topaz is prone to recognizing numerous false positives and duplicates [5]. They require extensive labeled datasets, demanding significant time and resources. Our cryoMAE, utilizing few-shot learning, offers high efficiency using a minimal number of exemplars. It effectively reduces false negatives and positives, and minimizes reliance on large labeled datasets, representing a significant leap in cryo-EM particle picking technology.

MAEs were initially introduced by He et al. [7], drawing inspiration from the BERT model [3], a transformative approach in natural language processing. MAEs bring the innovative concept of masking into the realm of computer vision, a technique where random sections of an image are obscured (masked) before being processed by an encoder. Subsequently, a decoder attempts to reconstruct these masked sections. He et al. [7] demonstrated that masking a substantial portion of the image (up to 75%) compels the model to learn deeper and more comprehensive representations of the data. In our study, we harness the exceptional feature extraction capabilities of MAEs to discern unique features of particles, thereby enhancing the efficiency and accuracy of particle picking in cryo-EM.

Contrastive Learning. Contrastive learning has been a transformative force in unsupervised learning, concentrating on increasing the similarity between representations of positive pairs while simultaneously differentiating those of negative pairs. Pioneering this approach, the concept of contrastive loss was introduced for dimensionality reduction and embedding learning, aiming to preserve semantic similarity [6]. Further advances have been made with the development of SimCLR [2], which utilizes data augmentation techniques to enhance the robustness of visual representations. Moreover, He et al. [8] introduced Momentum Contrast, a methodology for building dynamic dictionaries in contrastive learning, which refines the application of contrastive loss. This refinement ensures the consistency of the representations for negative samples across the learning process. In our research, we leverage the principles of contrastive learning to develop a unique contrastive loss mechanism called the self-cross similarity loss. This innovation enables our model to effectively discriminate between regions containing particles and background regions.

Methodology In this section, we detail cryoMAE, starting with a definition of the few-shot cryo-EM particle picking problem, followed by our two-stage framework. We first randomly select a reference micrograph $R$ containing the target particles for analysis and manually label $m$ ($m$ is a small number, e.g., 15) particle regions $x_i^l$ as exemplars ($X_L = \{x_i^l\}_{i=1}^{m}$), and randomly crop an additional $n$ regions $x_j^u$ from the same cryo-EM micrograph as unlabeled regions ($X_U = \{x_j^u\}_{j=1}^{n}$). The remaining micrographs containing the same particle form the query micrograph set $Q$.
Our goal is to leverage the limited set of exemplars $X_L$ and the unlabeled regions extracted from $R$ to detect the particles within $R$ and $Q$.

Overview As depicted in Fig. 2, our framework unfolds in two distinct stages. In stage 1, cryoMAE is trained using a mixture of labeled exemplars $X_L$ and unlabeled regions $X_U$ from $R$. This training process is guided by both a mean squared error reconstruction loss and a novel self-cross similarity loss, which helps the model distinguish between regions with and without particles. In stage 2, the trained MAE encoder scans query micrographs to identify particles, comparing latent features of regions against those of exemplars to determine similarity scores. Regions with higher similarity scores are identified as more likely to contain particles, facilitating accurate particle picking. For each protein type represented by multiple micrographs, we select a reference micrograph $R$ with manually annotated regions $X_L$ as exemplars and crop random unlabeled regions $X_U$ from the remaining parts of $R$. As discussed in [1], particle regions are sparse within micrographs, making most unlabeled regions likely non-particle areas. These images are resized to 224 × 224 and further processed into 16 × 16 patches during training, which are then subjected to random masking at a rate of 75%. This process transforms exemplar and unlabeled regions into $\tilde{x}_i^l$ for labeled exemplars and $\tilde{x}_j^u$ for unlabeled regions, respectively. The cryoMAE encoder then generates latent features for these regions, denoted as $E(\tilde{x}_i^l)$ and $E(\tilde{x}_j^u)$, respectively. Subsequently, the MAE decoder utilizes the generated latent features to reconstruct the original input images. This reconstruction is achieved through a self-supervised process, with the original images serving as the supervisory signal. This masking encourages the model to focus on global features of cryo-EM images, enhancing understanding of particle structures and generalizing across conditions. Such a focus is crucial for overcoming the limited-training-data challenge in the cryo-EM field, improving the model's performance in particle detection and generalization. A short sketch of the masking step follows below.
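As a concrete illustration of the masking step just described, the following minimal sketch randomly masks 75% of the 16 × 16 patches of a 224 × 224 region, as stated in the text. The function and variable names are ours, and zeroing masked patches (rather than dropping their tokens) is a simplification of the MAE mechanism.

```python
import numpy as np
from typing import Optional

def random_patch_mask(region: np.ndarray, patch: int = 16, ratio: float = 0.75,
                      rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """Zero out a random 75% of the 16x16 patches of a 224x224 region,
    mimicking the MAE masking step described in the text."""
    rng = rng or np.random.default_rng()
    h, w = region.shape
    gh, gw = h // patch, w // patch              # 14 x 14 patch grid for 224x224 inputs
    n_masked = int(round(gh * gw * ratio))
    idx = rng.choice(gh * gw, size=n_masked, replace=False)
    masked = region.copy()
    for k in idx:
        r, c = divmod(int(k), gw)
        masked[r*patch:(r+1)*patch, c*patch:(c+1)*patch] = 0.0
    return masked

x = np.random.rand(224, 224).astype(np.float32)  # stand-in for a cryo-EM region
x_masked = random_patch_mask(x)
```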
Training cryoMAE incorporates both particle and unlabeled regions to bolster model robustness. Exclusive training on particle images could lead the MAE to converge towards a homogeneous latent feature space for any given input, potentially escalating the false positive rate by assigning high similarity scores indiscriminately, including to background regions. By including unlabeled regions, cryoMAE learns to recognize features of non-particle spaces, avoiding overfitting to a solely particle-focused feature space. This broader training approach refines the model's ability to distinguish between particle and background regions, markedly lowering false positive rates by assigning more accurate similarity scores to non-particle areas. However, adding unlabeled regions faces some challenges: 1) the diverse background noise in cryo-EM, ranging from crystalline ice contamination and malformed particles to grayscale background regions, demands a nuanced approach for accurate differentiation; 2) merely incorporating unlabeled data might not prompt the model to learn features unique to particles against complex backgrounds. To optimize the training efficiency of cryoMAE on few-shot particle datasets and reduce overfitting risks, while also accounting for a wide range of background noise, we introduced a pre-training phase. Pre-training cryoMAE on a broader set of unlabeled regions better represents background variability. Further, introducing a self-cross similarity loss specifically addresses these noise issues, enhancing the model's ability to discern particles from backgrounds.

Self-cross similarity. Drawing from the self-similarity concept [18], we develop a self-cross similarity loss to foster distinct latent features for particles and background within cryo-EM images, enhancing the model's ability to differentiate between these regions. This approach aims to increase the disparity in the feature space, thereby improving the precision of particle identification. As illustrated in Fig. 2, the MAE encoder's latent features are utilized not only for image reconstruction by the decoder but are also evaluated using the self-cross similarity loss, further detailed in Fig. 3. The self-similarity $S_{self}$ is calculated as the mean cosine similarity among the features of positive regions, formalized as

$S_{self} = \frac{1}{m(m-1)} \sum_{i=1}^{m} \sum_{j \neq i} \cos\left(E(\tilde{x}_i^l), E(\tilde{x}_j^l)\right).$

Similarly, the cross similarity $S_{cross}$ is the mean cosine similarity between features of positive and unlabeled regions:

$S_{cross} = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \cos\left(E(\tilde{x}_i^l), E(\tilde{x}_j^u)\right).$

$S_{self}$ measures the similarity among latent features of exemplars, reflecting the internal consistency of particle features. This is crucial for the model to identify and enhance particle-specific patterns, facilitating better distinction from background noise. Ideally, $S_{self}$ should increase, indicating stronger similarity within particle groups. Conversely, $S_{cross}$ assesses the similarity between exemplar features and those of unlabeled (negative) regions, aiming to capture the distinctiveness between particles and background. The objective is for $S_{cross}$ to decrease, signifying reduced feature similarity between particles and background. The self-cross similarity loss $L_{SCS}$ is designed to optimize these dynamics, thereby improving the model's ability to differentiate between particles and backgrounds: $\alpha$ balances the self- and cross-similarity contributions, and $\tau$ sets a minimum difference threshold between them, limiting further distinction efforts beyond it. A sketch of this loss is given below.
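A minimal sketch of the self-cross similarity loss follows. The mean-cosine-similarity terms match the formulas above; the hinge form combining them with α and τ is our assumption, since the text only states that α balances the two contributions and τ caps the enforced gap.

```python
import torch
import torch.nn.functional as F

def self_cross_similarity_loss(z_pos: torch.Tensor, z_unl: torch.Tensor,
                               alpha: float = 0.5, tau: float = 1.0) -> torch.Tensor:
    """z_pos: (m, d) latent features of exemplar (particle) regions.
    z_unl: (n, d) latent features of unlabeled regions.
    Returns a hinge loss that rewards high self-similarity and low
    cross-similarity, saturating once their gap exceeds tau (assumed form)."""
    zp = F.normalize(z_pos, dim=1)
    zu = F.normalize(z_unl, dim=1)
    m = zp.shape[0]
    sim_pp = zp @ zp.T                              # (m, m) cosine similarities
    s_self = (sim_pp.sum() - m) / (m * (m - 1))     # mean over i != j (diagonal is 1)
    s_cross = (zp @ zu.T).mean()                    # mean over all exemplar/unlabeled pairs
    # alpha balances the two terms; tau caps the enforced separation.
    return torch.clamp(tau - (alpha * s_self - (1 - alpha) * s_cross), min=0.0)
```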
PU learning. Inspired by [1], we identify a limitation in our previous loss function design, which treats unlabeled data as negative. Randomly cropped training regions may unintentionally include particles, potentially confusing the model's distinction between labeled particles and background noise. This overlap complicates training, as the model could wrongly link particle features with the background, undermining our strategy to reduce background similarity scores and challenging the model's ability to learn discriminatively. To enhance the loss formulation, we accommodate the potential inclusion of particles in unlabeled regions. Acknowledging that a certain proportion $\pi$ of these samples may harbor particles, we modify the representation of features for these unlabeled samples. We adjust the feature representation by implementing a weighting scheme grounded in the estimated probability $\pi$ that an unlabeled region harbors a particle, alongside a complementary weight $1-\pi$ for regions likely devoid of particles. This probabilistic approach enhances the model's capacity to differentiate between particle-laden regions and pure background, optimizing the use of unlabeled data in training and improving particle identification accuracy. The presence of particles in unlabeled regions necessitates a recalibration of the similarity calculations, introducing a deeper analysis of self-similarity among potential positives and their cross-similarity with potential negatives within the unlabeled data: $S_{ll}$, $S_{lu}$ and $S_{uu}$ measure the sums of cosine similarities among exemplars, between exemplars and unlabeled regions, and among unlabeled regions, respectively. In the formulas, we decide not to adjust $n$ because we treat each latent feature adjustment as a weighted process. Under this logic, we view it as having $n$ latent features adjusted by $\pi$ and $1-\pi$, rather than having a total of $\pi n$ particle regions or $(1-\pi)n$ background regions within all unlabeled regions. This enhances the clarity of our methodology and ensures its alignment with Fig. 3, thereby preserving logical coherence. The refined self-cross similarity loss, $\hat{L}_{SCS}(\hat{S}_{cross}, \hat{S}_{self})$, adeptly captures the complexity of similarity within data subsets. By refining these calculations, we account for the intricate characteristics of unlabeled data, facilitating a more discerning and efficacious training regimen. The total loss of cryoMAE combines this term with the reconstruction loss, $L_{total} = L_{rec} + \beta \hat{L}_{SCS}$; here $\beta$ adjusts the weight of the self-cross similarity loss in the overall loss function, balancing reconstruction accuracy with discriminative learning. One consistent way of forming the adjusted similarities is sketched below.
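The adjusted similarities can be sketched as follows. The text tells us that each unlabeled feature is reweighted by the prior π (particle) and 1 − π (background) and that the adjusted terms are built from the pairwise sums S_ll, S_lu and S_uu; the exact combination below is our assumption of one consistent way to realise this.

```python
import torch
import torch.nn.functional as F

def adjusted_similarities(z_pos, z_unl, pi: float = 0.1):
    """Assumed PU-style adjustment: every unlabeled feature counts as a
    particle with weight pi and as background with weight (1 - pi)."""
    zp, zu = F.normalize(z_pos, dim=1), F.normalize(z_unl, dim=1)
    m, n = zp.shape[0], zu.shape[0]
    sum_ll = (zp @ zp.T).sum() - m           # exemplar-exemplar, excluding self-pairs
    sum_lu = (zp @ zu.T).sum()               # exemplar-unlabeled
    sum_uu = (zu @ zu.T).sum() - n           # unlabeled-unlabeled, excluding self-pairs
    # Potential positives: exemplars plus pi-weighted unlabeled features.
    s_self = (sum_ll + 2 * pi * sum_lu + pi**2 * sum_uu) / \
             ((m + pi * n) * (m + pi * n - 1))
    # Cross term between potential positives and (1 - pi)-weighted negatives.
    s_cross = ((1 - pi) * sum_lu + pi * (1 - pi) * sum_uu) / \
              ((m + pi * n) * (1 - pi) * n)
    return s_self, s_cross
```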
Stage 2: Particle Picking on Query Micrographs In stage 2, our model undertakes particle picking by utilizing the MAE encoder to scan query micrographs and extract features from each sliding-window region, as detailed in stage 2 of Fig. 2. This stage does not employ masking for the input regions. The extracted latent features are then matched against those of exemplars through cosine similarity, assigning similarity scores to each region based on the highest similarity. Following the completion of the sliding process on a micrograph, these similarity scores are ranked. It is crucial to recognize the variability in the imaging states of different micrographs, where a single threshold does not work well. Therefore, we adopt a density-based method to determine the most suitable cutoff threshold for each micrograph automatically. This process involves calculating the average distance of each score to its k nearest neighbors, and finding the score where the rate of change in these average distances is maximized as the cutoff threshold. Coordinates of all regions with similarity scores exceeding this threshold, along with the micrograph filenames, are recorded in a .star file. The .star format is widely used in cryo-EM to document particle coordinates, aiding in subsequent steps like 3D reconstruction using CryoSPARC.

Experiments This section evaluates cryoMAE against SOTA particle picking methods using the CryoPPP dataset, including ablation studies, sensitivity analysis, and qualitative visualizations to demonstrate its effectiveness. We evaluated cryoMAE using five distinct particle datasets from CryoPPP [4], which were obtained from the Electron Microscopy Public Image Archive (EMPIAR) database [10]. EMPIAR is a publicly accessible resource that offers raw, high-resolution cryo-EM images for research and benchmarking in the field of electron microscopy. The datasets used in our experiments, identified by EMPIAR IDs 10081, 10093, 10345, 10532, and 11056, comprise 300, 300, 300, 300, and 361 micrographs, respectively, each accompanied by particle coordinate information. Each EMPIAR ID corresponds to a unique protein type, facilitating targeted analysis within our SPA framework.

Baselines. In this study, we utilized crYOLO [22] and Topaz [1], introduced in Section 2, as our baselines. For crYOLO, we employed the general model pre-trained on more than 40 datasets, which can select particles of previously unseen macromolecular species as claimed in [22]. For Topaz, we used a pre-trained model based on ResNet [9] (16 layers, each layer has 64 units) trained on large-scale cryo-EM datasets.

Evaluation metrics. Our evaluation metrics include precision, recall, and F1 scores. A true positive occurs when a picked particle region overlaps with a ground truth region, achieving an intersection over union (IoU) of 0.5 or higher, with each ground truth accounted for only once. False positives include picked regions that either have an IoU less than 0.5 with any ground truth region or represent multiple detections for a single ground truth. False negatives are ground truth regions that remain undetected.

Particle picking. The cryoMAE encoder slides over and processes query images in stage 2 with a stride of 28, extracting features for each sub-region. These features are matched against exemplar features, assigning the highest similarity score to each region. Following the sliding process, the scores are ordered, and a density-based approach determines the cut-off threshold by identifying a sharp change in the 5-nearest-neighbor average distance list. Coordinates from regions above this threshold are pinpointed as particle locations; a sketch of this cutoff rule follows below.
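The density-based cutoff can be sketched as follows: for each similarity score we compute the average distance to its k nearest neighbours among all scores (k = 5 in the text) and place the threshold where the change in this average distance along the ranked list is largest. The implementation details below, such as using consecutive differences over the sorted scores, are our assumptions.

```python
import numpy as np

def density_cutoff(scores: np.ndarray, k: int = 5) -> float:
    """Pick a per-micrograph threshold: the score whose k-NN average
    distance changes most sharply along the sorted score list."""
    s = np.sort(scores)[::-1]                  # scores in descending order
    n = len(s)
    avg_knn = np.empty(n)
    for i in range(n):
        d = np.abs(s - s[i])
        avg_knn[i] = np.sort(d)[1:k + 1].mean()  # skip the zero self-distance
    jumps = np.abs(np.diff(avg_knn))           # rate of change along the ranking
    return float(s[int(np.argmax(jumps))])

scores = np.random.rand(1000)                  # stand-in similarity scores
thr = density_cutoff(scores)
keep = scores >= thr                           # regions treated as particles
```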
3D reconstruction. We utilized CryoSPARC [14] to conduct 3D reconstructions on particles selected by the various methods and compared the resolutions of the reconstructed particles. The workflow, from particle picking to reconstructed structure, encompasses essential steps: contrast transfer function (CTF) estimation, 2D classification, 2D class selection, ab initio reconstruction, and homogeneous refinement. CTF estimation corrects for the microscope's phase contrast, crucial for high-resolution reconstructions. 2D classification sorts particles into classes, removing aberrant particles to improve data quality. 2D class selection further ensures only high-quality particles are used, followed by ab initio reconstruction for an initial 3D model creation without prior knowledge.

Overall Performance The performance comparison of crYOLO, Topaz, and cryoMAE in particle picking is detailed in Tables 1 and 2, with 3D reconstruction outcomes visualized in Fig. 4. Table 1 reveals the high precision of crYOLO but also its tendency to overlook true particles.

Ablation Studies Ablation studies validate the contributions of key cryoMAE components: the self-cross similarity loss, pre-training, and exemplar similarity matching. We assessed the performance of cryoMAE across different configurations of the self-cross similarity loss (without the self-cross similarity loss, with the unadjusted self-cross similarity loss, and with the adjusted self-cross similarity loss) in Table 3, revealing optimal performance with the adjusted loss. This finding highlights the crucial impact of the self-cross similarity loss in enhancing feature extraction, making cryoMAE more discerning in particle selection and greatly lowering the chance of incorrect region identification. CryoMAE without the self-cross similarity loss incorrectly scores many non-particle regions highly, evident from widespread white areas in Fig. 5(b)(e). In contrast, with this loss, cryoMAE's specificity improves, accurately identifying particle regions, as shown in Fig. 5(c)(f), reducing false scores for background areas. Further insights are shown in Fig. 6, displaying a cosine similarity matrix for 12 regions, including 4 exemplars (1-4) and 8 unlabeled areas (5-12), with region 10 being a particle region. The matrix demonstrates high similarity among particle regions and lower similarity between particle and background regions, highlighting the model's ability to group particle regions closely in the feature space and distinguish them from the background. This is key to the success of the self-cross similarity loss, enabling the model to significantly reduce similarity scores for non-target areas and concentrate high scores on central particle regions, thus reducing false positives. Conversely, models trained without this loss struggle to separate particle regions from backgrounds, leading to increased false positives. We also conduct 2D t-SNE visualizations to analyze the latent features of cryoMAE under varying conditions: trained on a dataset without unlabeled regions, trained on a dataset with unlabeled regions without the adjusted self-cross similarity loss, and trained on a dataset with unlabeled regions with the adjusted self-cross similarity loss. For each visualization, we randomly select a consistent set of 60 exemplars and 360 unlabeled regions from EMPIAR-10081 to ensure comparability across the three scenarios. The visualizations are in Fig. 7a, Fig. 7b and Fig. 7c, respectively; a sketch of such a projection is given below.
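Such 2D projections of the encoder's latent features can be produced with an off-the-shelf t-SNE implementation, as sketched below. Scikit-learn is our assumed tool (the text does not name the implementation), and the random feature matrix stands in for real encoder outputs.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Stand-ins for encoder latent features: 60 exemplars + 360 unlabeled regions.
z = np.random.rand(420, 768).astype(np.float32)
labels = np.array([1] * 60 + [0] * 360)        # 1 = exemplar, 0 = unlabeled

emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(z)

plt.scatter(emb[labels == 0, 0], emb[labels == 0, 1], s=5, label="unlabeled")
plt.scatter(emb[labels == 1, 0], emb[labels == 1, 1], s=5, label="exemplar")
plt.legend()
plt.show()
```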
As demonstrated in Fig. 7a, training exclusively on particle regions leads cryoMAE to generate homogeneous latent features for any input. This approach risks elevating the false positive rate by indiscriminately assigning high similarity scores, including to background regions. Fig. 7b illustrates that incorporating unlabeled regions enables cryoMAE to discern features of non-particle regions, thus mitigating over-fitting to a particle-exclusive feature space. Consequently, the model acquires a preliminary capability to differentiate between particle and background regions, although with limited clarity (as observed in the 2D visualization of the latent feature space, where the blue and yellow clusters are approximately but not distinctly separated). Further advancements are evident in Fig. 7c, where the introduction of the adjusted self-cross similarity loss significantly enhances the model's ability to distinguish between background regions and particles. This improvement is illustrated by the distinct separation between the two clusters in the figure, despite the presence of some yellow points within the blue cluster. These exceptions, representing particle-containing regions within unlabeled areas, are considered reasonable.

Max and mean matching strategies. Table 5 presents a comparative study of two similarity score calculation methods for matching sliding regions against exemplar latent features: maximum vs. average cosine similarity. Table 5 reveals that maximum cosine similarity outperforms average cosine similarity. This advantage is linked to the varied orientation distributions among particle exemplars. Maximum cosine similarity effectively matches regions to their closest exemplar across different orientations, ensuring optimal scores. Conversely, average cosine similarity dilutes scores for particles with diverse orientations, as it averages across all exemplars, including those with markedly different particle orientations from the target region. This dilution lowers similarity scores for such particles, reducing their distinctiveness from the background and making accurate particle identification more challenging amidst noise. The two strategies are sketched below.
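The two matching strategies compared in Table 5 reduce to a one-line difference, sketched below; the function and argument names are ours.

```python
import torch
import torch.nn.functional as F

def similarity_scores(z_regions: torch.Tensor, z_exemplars: torch.Tensor,
                      strategy: str = "max") -> torch.Tensor:
    """Score each sliding-window region against the exemplar features.
    'max' keeps the best-matching exemplar (robust to orientation spread);
    'mean' averages over all exemplars, diluting scores for particles
    whose orientation matches only a few exemplars."""
    zr = F.normalize(z_regions, dim=1)
    ze = F.normalize(z_exemplars, dim=1)
    sim = zr @ ze.T                            # (num_regions, num_exemplars)
    return sim.max(dim=1).values if strategy == "max" else sim.mean(dim=1)
```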
Sensitivity Analysis In this section, we conduct a sensitivity analysis to examine the impact of varying the number of exemplars and the sliding stride on model performance. Table 6 shows how the performance of cryoMAE varies with the number of exemplars used. As expected, adding more exemplars generally improves performance, owing to a more comprehensive representation of particle orientations in the similarity scoring process. This is particularly beneficial for particles with diverse orientations, as more exemplars increase the chance of capturing regions across different orientation states, improving recall. However, the performance improvement plateaus after a certain number of exemplars, with precision potentially decreasing. This is because particle orientations are limited, and once the diversity of these states is adequately covered, additional exemplars offer little benefit and may even raise false positives by increasing the likelihood of background regions being mistakenly scored highly. Thus, considering the diminishing returns beyond 15 exemplars, we identify this count as the optimal number for our few-shot learning approach. Table 7 outlines the performance of cryoMAE across various sliding strides, noting that decreasing the stride from 56 to 14 typically boosts recall but diminishes precision. This trend can be attributed to the fact that larger strides cause a given particle to be present in fewer windows, minimizing duplicate detections and enhancing precision. However, this can result in lower similarity scores for many particles, as they are more likely to be close to window edges, which can reduce their likelihood of being selected and decrease recall. The F1 score, a precision-recall harmony measure, tends to improve with smaller strides. Yet, reducing the stride size significantly lengthens the processing time per query image. Considering the trade-off between time efficiency and model accuracy, a 28-pixel stride is identified as the optimal balanced approach.

Conclusion We introduce cryoMAE, a pioneering approach in few-shot learning tailored specifically for the cryo-EM field, significantly reducing the dependence on extensive labeled datasets for accurate particle picking. By harnessing the power of MAE and integrating a novel self-cross similarity loss, cryoMAE achieves superior performance in identifying particle-containing regions amidst the challenges posed by low SNR and diverse particle orientations. Validations on the CryoPPP dataset demonstrate cryoMAE's superiority over existing NN-based methods, marking a significant advancement in the cryo-EM analysis pipeline. This innovation not only streamlines the process of high-resolution protein structure determination but also makes it more accessible to a wider scientific audience, promising to accelerate discoveries in structural biology.

Figure 1: In cryo-EM with SPA, electron beams capture numerous 2D images of proteins within a cryogenically preserved sample. These images are subsequently denoised and subjected to particle picking, facilitating the reconstruction of the 3D structure of the protein.

Figure 2: Overview of the two-stage cryoMAE framework: stage 1 illustrates the training phase with a mix of labeled particle and unlabeled regions, employing reconstruction loss and self-cross similarity loss. Stage 2 depicts the particle picking process, where the trained MAE encoder assesses query micrographs, leveraging latent feature comparisons to identify particle positions accurately.
Figure 7: (a) trained on a dataset w/o unlabeled regions; (b) trained w/o the adjusted self-cross similarity loss; (c) trained w/ the adjusted self-cross similarity loss.

Table 1: Performance comparison of cryoMAE, crYOLO, and Topaz on CryoPPP.

Table 2: Ab initio reconstruction resolution comparison of cryoMAE, crYOLO, and Topaz across EMPIAR datasets from CryoPPP.

Table 3: Comparison of cryoMAE w/o the self-cross similarity loss, w/ the unadjusted self-cross similarity loss $L_{SCS}$, and w/ the adjusted self-cross similarity loss $\hat{L}_{SCS}$.

Table 4: W/ and w/o pre-training.

Table 5: Max and mean matching.
5,940.2
2024-04-15T00:00:00.000
[ "Computer Science", "Materials Science" ]
GPU-Accelerated Population Annealing Algorithm: Frustrated Ising Antiferromagnet on the Stacked Triangular Lattice The population annealing algorithm is a novel approach to study systems with rough free-energy landscapes, such as spin glasses. It combines the power of simulated annealing, Boltzmann-weighted differential reproduction and a sequential Monte Carlo process to bring the population of replicas to equilibrium even in the low-temperature region. Moreover, it provides a very good estimate of the free energy. The fact that the population annealing algorithm is performed over a large number of replicas with many spin updates makes it a good candidate for massive parallelism. We chose GPU programming using a CUDA implementation to create a highly optimized simulation. It has been previously shown for the frustrated Ising antiferromagnet on the stacked triangular lattice with a ferromagnetic interlayer coupling that standard Markov Chain Monte Carlo simulations fail to equilibrate at low temperatures due to the effect of kinetic freezing of the ferromagnetically ordered chains. We applied population annealing to study the case with isotropic intra- and interlayer antiferromagnetic coupling ($J_2/|J_1| = -1$). The reached ground states correspond to non-magnetic degenerate states, where chains are antiferromagnetically ordered but there is no long-range ordering between them, which is analogous to the Wannier phase of the 2D triangular Ising antiferromagnet.

Introduction The use of graphics processing units (GPU) for general-purpose computing (GPGPU) is motivated by the fact that the theoretical peak performance of the parallel GPU architecture significantly exceeds the performance of the currently available CPU processors, which can be exploited to effectively reduce the computational time for a suitable task that can be parallelized. This performance disproportion arises from the fact that the increase in CPU clock rates has slowed down considerably in the last decade due to the limitations of the used semiconductor technology. Therefore, the focus has been redirected to multicore solutions. However, even with this approach the performance of CPUs has been increased by a factor of 16. On the other hand, in the same time frame the single-precision performance of NVIDIA GPUs has grown by two orders of magnitude. Of course, the efficiency of completion of a parallel task depends not only on the sheer performance of the computational unit but also on the way the individual types of GPU memories are handled, which differ in size, bandwidth, functionality and locality. The speedup that can be gained compared to sequential CPU computing highly depends on our knowledge of the GPU CUDA architecture and the way it executes kernels. It takes a lot of thought and caution to incorporate all of this to create a highly optimized CUDA program. The use of GPGPU for scientific applications is of interest, for instance, in the stochastic simulations of spin models [1-4]. Our main goal is to incorporate GPU-accelerated computing in the population annealing (PA) method proposed by Hukushima and Iba [5]. In section 2, the principles of the PA algorithm are reviewed. Section 3 describes the techniques implemented in the creation and optimization of a GPU code for this algorithm. Our second goal is to study the highly frustrated Ising antiferromagnet on the stacked triangular lattice, which suffers from a slow spin dynamics in the low-temperature region [6], where standard Markov Chain Monte Carlo (MCMC) simulations fail.
This problem is briefly discussed in section 4. To deal with it, we applied the PA algorithm to this system, and the results are presented in section 5.

Population annealing algorithm The PA algorithm was developed to study systems with rough free-energy landscapes. The conventional approaches of stochastic statistical physics, such as the MCMC algorithm, fail for such systems due to their inability to overcome large energy barriers in the low-temperature region. The main strength of the population annealing algorithm lies in the use of a large number $R$ of replicas that undergo the annealing process, starting at a sufficiently large temperature $T$ (usually at $\beta = (k_B T)^{-1} = 0$), where the energy landscape is quite smooth. The population of replicas helps us to cover a large portion of this surface, so when we reach low temperatures by gradually cooling, there is a higher probability that some replicas end up in the global minimum, which corresponds to the searched ground state (GS). However, the effectiveness of this search strongly depends on the ability of these replicas to explore their local neighborhood, which brings the entire population to equilibrium. The population annealing algorithm uses two conceptually different approaches, performed at each step $\Delta\beta > 0$, to achieve this. The first of them is the resampling process. Resampling is based on a reweighting technique which moves the population closer to the Gibbs distribution. The chance of a replica to survive and reproduce is proportional to the reweighting factor $e^{-\Delta\beta E_i}$, where $E_i$ is the energy of the $i$-th replica. Since $\Delta\beta > 0$, the replicas with low energies are more likely to produce copies than high-energy ones, which will probably die out in this process. To keep the population size $R$ more or less unchanged we have to apply a proper normalization factor $Q$ to the reweighting factors, which is in fact the partition function ratio

$Q = \frac{Z(\beta + \Delta\beta)}{Z(\beta)} \approx \frac{1}{R} \sum_{i=1}^{R} e^{-\Delta\beta E_i}.$

Then the expected number of copies of the $i$-th replica is

$\tau_i = \frac{e^{-\Delta\beta E_i}}{Q}.$

In the ideal case with a statistical ensemble of $R \to \infty$ samples, the resampling process should be enough to obtain the Gibbs distribution. However, in practice we have only a finite population, so the resampling only redistributes the population among the already occupied energy levels, thus leaving the lower energy states poorly sampled. Also, making many copies of replicas leaves the population correlated. These two obstacles can be overcome by applying $\theta$ MC equilibration sweeps on all replicas. After that we are able to average the observables over replicas, which now should sample the Gibbs distribution more correctly. The free energy estimates can be evaluated from the partition function ratios. A detailed discussion of the PA algorithm is given in [7]. A sketch of the resampling step is given below.
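A minimal sketch of the resampling step follows. It uses Poisson-distributed numbers of copies with mean τ_i, one common choice in population annealing implementations; the text does not specify the copy-number scheme, so this is an assumption.

```python
import numpy as np

def pa_resample(energies: np.ndarray, d_beta: float, rng: np.random.Generator):
    """One population-annealing resampling step at inverse-temperature
    increment d_beta. Returns indices of surviving/copied replicas and the
    normalization Q (the partition-function ratio estimate, up to the
    constant energy shift used for numerical stability)."""
    w = np.exp(-d_beta * (energies - energies.min()))  # shift avoids overflow
    Q = w.mean()                                       # ~ Z(b + db) / Z(b), up to the shift
    tau = w / Q                                        # expected copy numbers, mean 1
    copies = rng.poisson(tau)                          # assumed: Poisson resampling
    return np.repeat(np.arange(len(energies)), copies), Q

rng = np.random.default_rng(0)
E = rng.normal(size=10_000)                            # stand-in replica energies
survivors, Q = pa_resample(E, d_beta=0.01, rng=rng)
```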
GPU realization Considering the fact that parts of the algorithm are performed over a large number of replicas ($R \gtrsim 10^4$), it is rather straightforward to assume that the PA algorithm is suitable for massive parallelism. Since modern GPU architectures contain a few thousand CUDA cores, we decided to use GPGPU to create an optimized CUDA program for the PA algorithm. We implemented two levels of parallelism. The first level is over replicas, where each replica is manipulated by a single thread. Such parallelism is used when we calculate the partition function ratio $Q$ and the normalized weights $\tau_i$. The second and much deeper level is performed over the spins of each replica, such as the internal energy $E$ calculation, the magnetization $M$ calculation and the parallel checkerboard MC update. In this case one block of threads operates on the arrays of spins from one replica, with one block associated to one replica. To achieve the maximum occupancy of the streaming multiprocessors (SMs), we must consider a block size large enough to be partitioned into several warps, but still within the thread-per-block limit, which is specific to each GPU architecture. For instance, we can choose for a 3D system 8 × 8 × 8 = 512 threads per block. However, we have to be very careful not to exceed the register and shared memory limitations of the SMs. The Boltzmann factors in the Metropolis algorithm are tabulated and are implemented on the GPU as fetches from a texture. We also used the optimized parallel reduction algorithm presented in [8] for summing $Q$, $M$ and $E$. Moreover, we chose the spin arrays to have a block-wise coalescent data pattern to improve the global memory bandwidth. Another issue of our simulation is to generate long parallel sequences of pseudo-random numbers (PRN) for the Metropolis checkerboard spin update; the PRN buffer has to fit into the global memory and we also have to consider the performance of the generator. For starters we are using Philox_4x32_10 from the cuRAND library, which meets these criteria quite satisfactorily.

Stacked triangular Ising antiferromagnet The stacked triangular Ising antiferromagnet is described by the Hamiltonian

$H = -J_1 \sum_{\langle i,j \rangle_{xy}} S_i S_j - J_2 \sum_{\langle i,j \rangle_{z}} S_i S_j,$

where $S_i = \pm 1$ are Ising spin variables, the first term is summed over all intralayer (interchain) couplings with antiferromagnetic interaction $J_1 < 0$ and the second term represents the sum over all interlayer (intrachain) couplings with antiferromagnetic interaction $J_2 < 0$. For simplicity, we will consider the isotropic case with $J_1 = J_2$. Figure 1 describes the topology of this system with a checkerboard decomposition into six sublattices for the parallel spin update; a simplified sketch of one such sublattice update is given below.
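The checkerboard idea can be illustrated with the following sketch: all sites within one sublattice are mutually non-interacting, so they can be flipped in parallel. For brevity the intralayer neighbour sum below uses only axis-aligned neighbours on a cubic-like grid; the actual stacked triangular lattice has six intralayer neighbours and six sublattices, and the production code runs in CUDA rather than NumPy.

```python
import numpy as np

def metropolis_sublattice_update(spins, mask, beta, J1, J2, rng):
    """Illustrative parallel Metropolis update of one checkerboard
    sublattice (boolean `mask`) of an Ising system H = -J1*sum(SS)_xy
    - J2*sum(SS)_z on a periodic L x L x Lz array of +/-1 spins."""
    nbr_xy = sum(np.roll(spins, s, axis=a) for a in (0, 1) for s in (1, -1))  # simplified intralayer sum
    nbr_z = np.roll(spins, 1, axis=2) + np.roll(spins, -1, axis=2)            # interlayer chain neighbours
    dE = 2.0 * spins * (J1 * nbr_xy + J2 * nbr_z)       # energy change of flipping each spin
    accept = rng.random(spins.shape) < np.exp(-beta * np.maximum(dE, 0.0))
    flip = mask & accept                                # only the chosen sublattice is updated
    spins[flip] *= -1
    return spins

rng = np.random.default_rng(1)
spins = rng.choice(np.array([-1, 1], dtype=np.int8), size=(24, 24, 32))
mask = np.indices(spins.shape).sum(axis=0) % 2 == 0     # toy two-sublattice checkerboard
spins = metropolis_sublattice_update(spins, mask, beta=1.0, J1=-1.0, J2=-1.0, rng=rng)
```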
The main issue with this system is that when standard MCMC methods are applied, the system is unable to reach the GS even for a large number of MC sweeps. Such behavior happens due to the so-called kinetic freezing effect [6]. We performed a MCMC simulation for a system of 24 × 24 × 32 spins and 10⁵ MC sweeps (+20% for equilibration) to demonstrate it. Figure 2(a) depicts the temperature dependence of the internal energy and the heat capacity, which are consistent with the previous results [9]. The primary heat capacity peak shows the transition from the paramagnetic to the partially ordered phase. The round low-temperature peak represents a structural change which is accompanied by the dominance of correlations in the chains, leading to the gradual antiferromagnetic aligning of the spin chains. However, the inset clearly demonstrates that the system froze in a metastable state, with the energy slightly above the GS value $E/N|J_1| = -2$. To observe what happened, we plotted in figure 2(b) the snapshot at $k_B T/|J_1| = 0.01$ of the intrachain staggered magnetization $o_z$, where $L_z = 32$ is the number of layers. This parameter reaches saturated values of ±32 for fully antiferromagnetically ordered chains. As we can see, most of the spin chains are fully ordered with no long-range order between them, which matches a 3D analogue of the Wannier phase [10]. Only one chain, highlighted with a circle, has an unsaturated value. Its spin configuration is illustrated in figure 2(c). The energy difference between the GS and this metastable state lies in the presence of the two ferromagnetic couplings, which delimit the chain fragment that has to be flipped entirely in order to reach the desired GS. However, the slow spin dynamics at such low temperatures prevents that from happening. Of course, one can attempt to use multiple-spin-flip updates to resolve this problem [11], in which one of the $2^n$ states of an $n$-spin cluster is chosen at each MC trial. In our case, we instead emphasize the strengths of the PA algorithm in finding the GS of our system.

Results We ran three simulations with different sets of parameters on the same lattice size as used in the MCMC simulation. The first simulation (simulation A) ran on a population of $R = 10^4$ replicas, with $\theta = 100$ MC sweeps and $\Delta\beta = 0.01$ in the range from $\beta = 0$ to 10. In the second one (simulation B) we used a twice finer step $\Delta\beta = 0.005$, and the third one (simulation C) has ten times the population size of simulation B. The obtained results for the internal energy are plotted in figure 3(a). As we can see, all simulations successfully converged to the GS configuration, but there is a small difference in the slope of the energy curves. The case with a larger $R$ samples the Gibbs distribution better. Also, the smaller temperature step reduces the bias, because the energy histograms have a larger overlap.
To quantify the equilibration of the population annealing we follow the procedure presented in [12], where Wang et al. calculated the family entropy

$S_f = -\sum_i \nu_i \ln \nu_i,$

where $\nu_i$ is the fraction of the population that originates from the $i$-th replica of the initial population. Then $e^{S_f}$ represents the effective number of surviving families. Figure 3(b) shows the family entropy as a function of the temperature for our PA simulations. The first thing we can observe is that $S_f$ of a population with larger $R$ is larger, which is obvious, because a larger $R$ means a reduction of the statistical errors. The figure also shows that $S_f$ drops substantially at the positions of the heat capacity peaks. The effect of the kinetic freezing at the secondary peak drastically diminishes the diversity of the families in the population, in contrast to the high-$T$ phase transition. However, this drop is more prominent for the case of a larger population, which does not make much sense, because we were expecting more families to survive in this case. Also, the number of different GS observed at the lowest $T$ was 32, 23 and 171 for the simulations A, B and C, respectively. The effective number of surviving families at the lowest temperature was $e^{S_f} = 1.5857$ for simulation A and $e^{S_f} = 2.1845$ for simulation B. The best performance of the PA code we have achieved so far was in simulation C, with 0.208 ns per spin-flip on an NVIDIA GTX Titan, a speedup of up to 443 times compared to the sequential MCMC code (92.274 ns per spin-flip), which ran on a single core of an Intel i7-4790K processor at 4.4 GHz.

Conclusions We created a GPU-accelerated population annealing algorithm for the stacked triangular Ising antiferromagnet in order to study its ground states. The algorithm converged for all tested sets of simulation parameters. However, the numerical accuracy of the averaging was insufficient due to the small number of different states in the population and the rather small family entropy below the secondary heat capacity peak. There are many possible ways to improve the performance, such as the choice of a more efficient high-quality PRN generator, the parallel resampling of replicas in the GPU global memory, the use of asynchronous multispin coding, and the application of an adaptive inverse-temperature step based on a fixed overlap of the reweighted energy histograms. We would like to explore these possibilities in the future.

Figure 1. Checkerboard decomposition of the stacked triangular lattice.

Figure 2. (a) The temperature dependence of the internal energy per spin and the heat capacity per spin. The inset depicts a detail of the internal energy in the low-temperature region. The solid line refers to the GS energy. (b) The snapshot of the intrachain staggered magnetization $o_z$ at $k_B T/|J_1| = 0.01$. The unsaturated chain is marked with a circle. (c) Spin configuration of the selected unsaturated chain.

Figure 3. (a) The internal energy comparison of the MCMC (see Fig. 2(a) inset) and PA results in the low-temperature region for a system of size 24 × 24 × 32. (b) The family entropy as a function of the temperature for different simulation setups.
3,427.2
2016-02-01T00:00:00.000
[ "Physics", "Computer Science" ]
Effects of annealing atmosphere on the performance of Cu(InGa)Se2 films sputtered from quaternary targets Quaternary sputtering without additional selenization is a low-cost alternative method for the preparation of Cu(InGa)Se2 (CIGS) thin films for photovoltaics. However, without selenization, the device efficiency is much lower than that with selenization. To comprehensively examine this problem, we compared the morphologies, depth profiles, compositions, electrical properties and recombination mechanisms of absorbers fabricated with and without additional selenization. The results revealed that the amount of surface Se on CIGS films annealed in a Se-free atmosphere is less than that on CIGS films annealed in a Se-containing atmosphere. Additionally, the lower amount of surface Se reduced the carrier concentration, enhanced the resistivity of the CIGS film and allowed CIGS/CdS interface recombination to become the dominant recombination mechanism of the CIGS device. The increase of interface recombination reduced the efficiency of the device annealed in a Se-free atmosphere.

Introduction In the industrial production of Cu(InGa)Se2 (CIGS) photovoltaic devices, absorbers are usually produced by a two-step process comprising sputter deposition of a Cu-In-Ga alloy precursor, followed by post-selenization and sulfurization [1,2]. Sputtering has great advantages when the technology is transferred from laboratory-scale solar cells to production-scale panels, because it produces large-area film homogeneity [3-5]. Another promising method is based on the sputtering of CIGS quaternary targets and post-annealing; it involves high materials usage and less reliance on toxic selenium powder or H2Se [6,7]. The post-annealing in this method involves treatment in either a Se-containing or a Se-free atmosphere. Se-free atmosphere annealing has more potential because it completely avoids toxic selenium powder or H2Se. Various Se-free fabrication routines have been reported [8-10]. For example, Frantz et al. [5] and Chen et al. [6] obtained CIGS device conversion efficiencies of 8%-10% by using one-step sputtering at high substrate temperatures with no post-treatment. However, the efficiency of a solar cell prepared in a Se-free atmosphere is far less than that of cells prepared in a Se-containing annealing atmosphere. It is unclear why CIGS device efficiency is lower when prepared by Se-free annealing. Frantz et al. found that the upper limit of the device efficiency is most likely the result of impurities in the sputtering target, which lead to electronic defects in the absorber [5]. Park et al. reported that the formation of Se vacancies (VSe) on a CIGS film surface during Se-free treatment is the main limiting factor of the device efficiency [9]. The quality of the absorber, the contact between the absorber and the back electrode, and the interfacial matching between the buffer and absorber layers are three important factors that determine the conversion efficiency of the CIGS solar cell [10]. Previously, we reported that for Mo-CIGS interfaces, the contact is ohmic when the devices are annealed in either a Se-free atmosphere or a Se-containing atmosphere [11]. Furthermore, the phase structures of CIGS films annealed in Se-free and Se-containing atmospheres are both chalcopyrite with nearly identical diffraction peaks [11]. Thus, the limiting factor for the low efficiency in devices made in a Se-free atmosphere is attributed to the quality of the absorber and the CIGS-CdS interfacial matching.
To explore the effect of the annealing atmosphere on device performance, the morphologies, elemental depth profiles, quantitative compositions, electrical properties and recombination mechanisms of the devices are analysed.

Experimental details The base pressure and the working argon pressure for CIGS deposition were 2.0 × 10⁻³ Pa and 0.7 Pa, respectively. The CIGS films were deposited by sputtering from a quaternary CIGS target at room temperature in a pure argon discharge atmosphere at a middle frequency with a power density of approximately 30 W cm⁻². The CIGS precursors were then annealed in either a Se-free or a Se-containing atmosphere. Some of the fabricated glass/Mo/CIGS samples were placed in a quartz tube furnace that was pumped to a base pressure of 2.0 × 10⁻³ Pa and filled with nitrogen at 0.5 atm. The annealing was performed at 550°C for 40 min. The heating rate was 15°C min⁻¹ and the samples were allowed to cool naturally. For comparison, some of the fabricated glass/Mo/CIGS samples were placed in a furnace for selenization at 550°C for 40 min in an atmosphere of hydrogen selenide and argon (H2Se/Ar). The pressure was approximately 10 000 Pa with a H2Se/Ar flux ratio of 1:100. The morphologies of the films were imaged with a field-emission scanning electron microscope (SEM, Zeiss Sigma) and the elemental composition was analysed with the energy-dispersive X-ray spectroscopy (EDS) system in the SEM. The accelerating voltage used to collect EDS was 15 kV. The depth distributions were measured by secondary ion mass spectroscopy (SIMS, ION-TOF GmbH instrument, TOF.SIMS 5-100), with sputtering provided by bismuth primary ion bombardment at 30 keV. The electrical properties were characterized by Hall measurement (Hall, HL5500PC, Nanometric). A device structure of Mo/CIGS/CdS/i-ZnO/AZO/Ni-Al was used in this research. The fabrication details of the CIGS solar cell have been described in our previous literature [11]. Temperature-dependent current-voltage (J-V) measurements were conducted under AM1.5 (100 mW cm⁻²) illumination using a solar simulator to examine the recombination mechanisms.

Results The surface and cross-sectional morphologies of CIGS films annealed in Se-free and Se-containing atmospheres are shown in figure 1a and b, respectively. In figure 1a, the surface grains are distributed in a cauliflower shape, and the grain boundaries are round. Additionally, the film has a flat and compact morphology, but the arrangement of grains is not dense because there are many holes between the grains. In figure 1b, the grains in the cross-sectional view are approximately 1 μm, with close contact among grains. Furthermore, the grain boundaries in the surface region in figure 1b are straight and the arrangement of the grains is denser, with fewer holes between grains than in figure 1a. According to the principle of chemical equilibrium movement, gaseous H2Se can facilitate the transition of the film from an amorphous state to a crystalline state. Overall, Se in the atmosphere accelerates the chemical reactions and promotes atomic diffusion. It has been reported that the grain size has little effect on the properties of CIGS solar cells when the conversion efficiency is below 16% [12-14]. Therefore, the small grain size of the CIGS solar cells made in a Se-free atmosphere may not be the key factor in the poor device performance.
SIMS measurements are used to analyse the elemental depth profiles of the CIGS films annealed in Se-free and Se-containing atmospheres; typical results are shown in figure 2. The thicknesses of the two films are almost the same, but the etching rates are slightly different; therefore, the sputtering times differ slightly. Overall, the elemental depth profiles of the two samples are similar, but there are slight differences, as discussed below. The indium and gallium depth profiles of the CIGS film annealed in a Se-free atmosphere are almost parallel to the horizontal axis, which indicates that the indium and gallium levels, and the quantity Ga/(Ga+In) (GGI), are almost constant with depth. Thus, the CIGS band gap is almost constant with depth because it correlates with GGI [7]. However, the gallium concentration at the surface of the CIGS film annealed in a Se-containing atmosphere is lower than that of the film annealed in a Se-free atmosphere. The gallium proportion increases gradually with depth for the film annealed in a Se-containing atmosphere. At the junction of the Mo and CIGS layers, the gallium concentration increases, which indicates that gallium aggregates in this region. This is because of the lower reaction rate between gallium and selenium relative to that between indium and selenium. At the CIGS film surface, the reaction rate of indium, copper and hydrogen selenide is faster than that of gallium, copper and hydrogen selenide, which 'drives' gallium to the bottom of the CIGS film. The increase in GGI with depth has positive and negative consequences [15]. On the one hand, in the CIGS solar cell band gap diagram, when GGI increases, the conduction band bends upward and electrons in the neutral zone are subjected to an additional electric field, which is beneficial to electron transport from the neutral zone. Combining these two aspects of the solar cell, one can conclude that the variation of GGI with depth in the CIGS film is not the key factor limiting the performance of CIGS solar cells prepared by annealing in a Se-free atmosphere. Furthermore, the selenium depth profile stays almost parallel to the horizontal axis for the Se-free annealed sample, while it bends upward (circled in red) at the surface of the CIGS film prepared via Se-containing annealing. A more quantitative composition analysis was conducted with EDS in the SEM. By adjusting the acceleration voltage of the electrons, the measurement depth can be as much as 400 nm. Therefore, the EDS results reflect the composition of the CIGS films in the upper 400 nm and can be used to compare different samples. Typical compositions of the CIGS films are listed in table 1. For the film annealed in the Se-containing atmosphere, the GGI is approximately 0.22, while it is approximately 0.24 for the film annealed in the Se-free atmosphere, which is in accordance with the SIMS results. However, the Se/metal ratio is approximately 0.89 for the film annealed in the Se-containing atmosphere, while that for the film annealed in the Se-free atmosphere is approximately 0.86. This indicates that the Se-containing atmosphere adds selenium to the CIGS film, which could suppress the formation of V_Se donor defects. On the other hand, for the Se-free atmosphere, the Se deficiency at the film surface creates V_Se donor defects, which may compensate the hole concentration and increase the defect state density at the interface.
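As a small illustration of how the composition ratios above are derived from EDS data, the sketch below computes GGI and Se/metal from atomic percentages. The compositions are illustrative placeholders chosen only to be consistent with the reported ratios (GGI ≈ 0.22-0.24, Se/metal ≈ 0.86-0.89); they are not the actual measured values from table 1.

```python
# Sketch: GGI and Se/metal ratios from EDS atomic percentages (at.%).
def ratios(at):
    """at: dict of atomic percentages for Cu, In, Ga, Se."""
    ggi = at["Ga"] / (at["Ga"] + at["In"])                    # Ga/(Ga+In)
    se_metal = at["Se"] / (at["Cu"] + at["In"] + at["Ga"])    # Se/(Cu+In+Ga)
    return ggi, se_metal

samples = {  # placeholder compositions, consistent with the reported ratios
    "Se-containing anneal": {"Cu": 23.0, "In": 20.6, "Ga": 5.8, "Se": 44.0},
    "Se-free anneal":       {"Cu": 23.5, "In": 20.5, "Ga": 6.5, "Se": 43.4},
}

for name, at in samples.items():
    ggi, sm = ratios(at)
    print(f"{name}: GGI = {ggi:.2f}, Se/metal = {sm:.2f}")
```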
Hall measurements were used to analyse the electrical properties of the CIGS films prepared in Se-free and Se-containing annealing atmospheres. The results are listed in table 2. The conductivity type, carrier mobility and carrier concentration of the two films basically meet the electrical performance requirements. Generally, the resistivity decreases with increasing doping concentration. The CIGS film annealed in the Se-containing atmosphere has lower resistivity because its doping concentration is higher. In the Se-free condition, there are many Se vacancies in the film, especially at the surface, which compensate the holes produced by the copper vacancies. Therefore, the carrier concentration in the CIGS film annealed in the Se-free atmosphere is lower. The device performance, especially the V OC , is affected by recombination mechanisms. To discuss the mechanisms, temperature-dependent J-V curves are shown in figure 3a. The V OC of the device is around 530 mV at 298 K. If the V OC at 0 K, obtained by extrapolating the linear fit in figure 3b, is less than E g /q, where E g is the band gap of the CIGS absorption layer, the loss of conversion efficiency is attributed to interface recombination [16]. As shown in figure 3b, the V OC of the solar cell at 0 K is estimated to be approximately 1.03 V. With an E g estimated to be 1.15 eV from our previous research, this suggests that interface recombination is the dominant recombination mechanism. Previously, we found that the V OC at 0 K of the solar cell annealed in the Se-containing atmosphere was higher than that annealed in the Se-free atmosphere [17]. Because the concentration of Se at the top surface of the absorber annealed in the Se-containing atmosphere is higher than that of the absorber annealed in the Se-free atmosphere, it is likely that the reduced performance of the CIGS solar cell annealed in the Se-free atmosphere is attributable to the decreased selenium in the surface layer. Hence, the efficiency of the device annealed in a Se-free atmosphere is reduced. Conclusion The effects of annealing atmosphere on the performance of CIGS films were comprehensively investigated. In summary, the Se-free annealing procedure created CIGS thin films that exhibit uniform elemental distributions without Ga segregation at the bottom of the film. The Se-free annealing creates Se vacancies, which lead to a dominant recombination mechanism at the absorber-buffer interface. From the depth profiles, the composition of the CIGS film surface and the temperature-dependent J-V curves, it is likely that the key limiting factor for the efficiency of the device annealed in a Se-free atmosphere is the low concentration of selenium in the film surface. Sufficient selenium in the CIGS target is thus expected to improve the device efficiency for CIGS films annealed in a Se-free atmosphere. Data accessibility. All raw data, code, analysis files and materials associated with this study are deposited at Dryad:
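The V OC extrapolation used in the temperature-dependent J-V analysis above amounts to a linear fit of V OC versus T and a comparison of the 0 K intercept with E g /q. The sketch below illustrates the procedure; the V OC values are synthetic, generated to reproduce the reported intercept of about 1.03 V and the 530 mV reading at 298 K, and are not the measured data.

```python
# Sketch: extrapolating Voc(T) to 0 K to identify the dominant
# recombination mechanism.
import numpy as np

T = np.array([220.0, 240.0, 260.0, 280.0, 298.0])   # K
voc = 1.03 - 1.68e-3 * T                            # V (synthetic data)

slope, voc_0K = np.polyfit(T, voc, 1)               # linear fit, intercept at 0 K
Eg_over_q = 1.15                                    # V, from Eg = 1.15 eV

print(f"Voc extrapolated to 0 K: {voc_0K:.2f} V")
if voc_0K < Eg_over_q:
    print("Voc(0 K) < Eg/q -> interface recombination dominates")
else:
    print("Voc(0 K) ~ Eg/q -> bulk recombination dominates")
```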
2,923.2
2020-10-01T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Design of a snubber circuit for low voltage DC solid-state circuit breakers Solid-state circuit breakers (SSCBs) are designed to interrupt fault currents typically several orders of magnitude faster than their electromechanical counterparts. However, such an ultrafast switching operation would produce a dangerous overvoltage which might cause damage to SSCBs and other circuit elements in the system. This paper proposes a novel snubber circuit for suppressing the overvoltage. It takes advantage of both resistor-capacitor-diode (RCD) snubbers and metal oxide varistors (MOVs). Its operating process is analysed before the proposed snubber circuit for 400 V DC SSCBs is designed. The Pspice simulator is employed for simulating the operating process, and a prototype SSCB with the proposed snubber is built and tested in a lab-scale DC system. The results of simulation and experiment validate the effectiveness of the proposed snubber. INTRODUCTION DC distribution networks are gaining popularity in data centres, commercial buildings and transport power systems [1][2][3][4] because, in comparison to traditional AC systems, they demonstrate higher efficiency and more readiness for integrating with various local renewable power sources and ever-increasing DC electronic loads. However, one of the major issues hindering this trend is the lack of effective DC short-circuit fault protection devices. Though working well in AC power networks, conventional electromechanical circuit breakers are not suitable for DC systems because their response time is typically in the range from tens of milliseconds to hundreds of milliseconds, which is far longer than the survival time of most power electronic devices (a few tens of microseconds) in DC systems. In recent years, solid-state circuit breakers (SSCBs) have been intensely researched as promising candidates to replace mechanical circuit breakers for DC protection due to their ultrafast switching speeds [5][6][7][8]. However, such a fast switching operation would produce an unacceptably high voltage across SSCBs because of the rapid fall of fault current and the small system inductance [9]. Furthermore, the large magnetic energy stored in the system inductance must be dissipated by energy absorption elements, since such a huge amount of burst energy during short-circuit faults is usually far higher than what SSCBs can contain. Therefore, some effective methods must be in place to suppress the overvoltage and meanwhile absorb the energy stored in the system inductance during turn-off of SSCBs. Several approaches have been reported and discussed for SSCB applications [10][11][12][13]. Generally, two topologies are commonly adopted, alone or combined, to serve this purpose: resistor-capacitor-diode (RCD) snubbers [14,15] and metal oxide varistors (MOVs) [16,17]. In this paper, to start with, the operating processes of both conventional RCD snubber circuits and MOVs are reviewed and their pros and cons are discussed. In the following, a novel snubber circuit combining an RCD with a MOV is proposed and analysed before the proposed snubber for 400 V DC SSCBs is designed and its components are selected. Both simulation and experimental results validate the effectiveness of the proposed snubber design. Finally, the impact factors on the response time of SSCBs are investigated, and conventional RCDs and MOVs are compared with the proposed snubber.
The main contributions of this paper are: • Proposal of a novel hybrid snubber configuration which takes into account the advantages offered by both conventional RCD snubbers and MOVs. • Analytical expressions describing each stage of the operating process, which provide guidance for the snubber design for SSCB applications. • Identification of the impact factors of the snubber on the response time of SSCBs, allowing the snubber design to be optimised to meet different application requirements. FIGURE 1 RCD snubber circuit REVIEW OF SNUBBER CIRCUITS FOR SSCBS Snubber circuits in the form of capacitor (C), resistor-capacitor (RC) or resistor-capacitor-diode (RCD) have been discussed in [10,18]. The C type is the simplest. However, a high discharge current will flow through the main semiconductor switch of SSCBs during the turn-on operation, which tends to cause nuisance tripping of SSCBs. To address this issue, a current-limiting resistor is added in series to the capacitor, forming RC snubbers. However, a high voltage drop across the resistor during high fault current interruption would damage semiconductor components of SSCBs. To solve this issue, a diode is added in parallel with the resistor to form an RCD snubber, as shown in Figure 1. The use of RCD snubbers has been very common for suppressing overvoltage. The operating process is simply divided into four stages as below: Stage 1 starts when a short-circuit event occurs; the fault current ramps up until reaching the trip current level of the SSCB. Stage 2 starts when the SSCB turns off and the diode D S turns on, and lasts until the fault current completely commutates from the SSCB to the branch of the snubber capacitor C S and the diode D S . Stage 3 starts when C S is charged and lasts until the energy stored in the system inductance L DC is completely transferred to C S . Stage 4 starts when C S discharges through the resistor R S and lasts until its stored energy is fully exhausted and the fault current is dampened to zero. The main advantage of the RCD snubber is that it is very effective at slowing down the rising speed of the overvoltage and reducing the oscillations during turn-off. However, this solution requires a very high power resistor to exhaust the stored energy in a very short period. For example, a system with L DC = 100 µH, trip current 100 A and response time 100 µs would require a resistor with a peak power as high as 5 kW, making the whole snubber circuit bulky and expensive. MOVs are another common type of voltage clamping component which are widely used for protecting devices against overvoltage caused by either lightning surges or switching operations, thanks to their highly nonlinear voltage-current characteristics, similar to back-to-back Zener diodes. Figure 2 shows a MOV for SSCB applications. Its operating process is divided into two stages: Stage 1 starts when a short-circuit event occurs; the fault current rapidly ramps up to the trip current level before the SSCB turns off. Once the voltage across the SSCB exceeds the reference voltage of the MOV, the fault current starts to commutate from the SSCB to the MOV. Stage 2 starts when the SSCB turns off and the fault current fully commutates to the MOV, where the voltage across the SSCB is clamped to the protection level of the MOV and the energy stored in the system inductance L DC is dissipated until the fault current is dampened to zero. The main advantages of MOVs are their simplicity and high energy absorption capability, with typical values in the range of hundreds of joules per cubic centimetre [19].
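The 5 kW figure quoted for the RCD resistor follows directly from the stored inductive energy and the required response time; a minimal back-of-envelope check, using only the values given in the text:

```python
# Sketch: resistor peak-power requirement of a plain RCD snubber for the
# worked example in the text (L_DC = 100 uH, trip 100 A, response 100 us).
L_dc   = 100e-6   # H, system inductance
I_trip = 100.0    # A, trip current
t_resp = 100e-6   # s, required response time

E_stored = 0.5 * L_dc * I_trip**2   # J, magnetic energy to dissipate
P_peak   = E_stored / t_resp        # W, rough resistor power requirement

print(f"Stored energy: {E_stored:.2f} J")            # 0.50 J
print(f"Required peak power: {P_peak/1e3:.1f} kW")   # 5.0 kW
```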
However, it suffers from deterioration over time when frequently exposed to surges and overvoltage transients [20]. Furthermore, compared to the RCD snubber, it has no dv/dt control and displays larger transient oscillations during turn-off of SSCBs [21,22]. To take advantage of both RCD snubbers and MOVs, a novel snubber circuit is proposed herein by combining a MOV with an RCD snubber, as shown in Figure 3. This approach exploits both the effective overvoltage suppression of RCD snubbers and the high energy absorption capability of MOVs. Meanwhile, it eliminates the high-power resistor of RCDs and mitigates the transient oscillations of MOVs. ANALYSIS OF OPERATING PROCESS OF THE PROPOSED SNUBBER CIRCUIT Under normal operating conditions, the SSCB stays on and the snubber capacitor is pre-charged to the supply voltage. When a short-circuit fault occurs, the operating process is divided into four stages, shown in Figure 4(a)-(d) respectively. The equivalent circuit includes an SSCB, a DC supply voltage source V DC , an equivalent system inductor L DC , an equivalent short-circuit resistor R SC and the proposed snubber circuit constructed from C S , D S and the MOV. To serve the main purpose of analysing the operating principle and meanwhile reduce the complexity, several assumptions are made below: 1. Ideal SSCB: turns off instantly and has zero on-resistance. 2. Ideal diode: the reverse recovery characteristic is neglected. 3. MOV: the leakage current is neglected. Stage 1: Fault current ramps up (Figure 4a) When a short-circuit fault occurs, the fault current ramps up until it reaches the trip current I trip of the SSCB. At this stage, the snubber is inactive and no currents flow through C S , D S and the MOV. By applying Kirchhoff's voltage law (KVL) to the main power circuit loop, expression (1) is obtained:

$$V_{DC} = L_{DC}\frac{di_f}{dt} + i_f R_{SC} \qquad (1)$$

Integrating Equation (1) and rewriting it, the fault current i f at this stage can be derived as

$$i_f(t) = \frac{V_{DC}}{R_{SC}} + \left(I_r - \frac{V_{DC}}{R_{SC}}\right)e^{-R_{SC}t/L_{DC}} \qquad (2)$$

Hence, the time period T 1 during which the fault current rises from the rated load current I r to the trip current I trip at this stage can be calculated as:

$$T_1 = \frac{L_{DC}}{R_{SC}}\,\ln\frac{V_{DC} - I_r R_{SC}}{V_{DC} - I_{trip} R_{SC}} \qquad (3)$$

Due to the assumption of an ideal SSCB, the on-state voltage across the SSCB is zero, thus V SSCB = 0. Stage 2: Fault current commutates from the SSCB to C S and D S (Figure 4b) When the SSCB starts turning off and the snubber diode D S then turns on, the fault current commutates from the SSCB to the branch of the snubber capacitor C S and the diode D S . Again, due to the assumption of an ideal SSCB, the fault current and the voltage across the SSCB, V SSCB , at this stage are considered constant: i f = I trip and V SSCB equals the pre-charged capacitor voltage V DC . Stage 3: C S is charged until the MOV is activated (Figure 4c) The snubber capacitor C S is charged until the voltage across the MOV reaches its activation level (reference voltage V ref ). The fault current i f and V SSCB at this stage follow from the resonant charging of C S by the current in L DC , and the time period T 3 is obtained as the instant at which V SSCB reaches V ref . Stage 4: Fault current commutates from the branch of C S and D S to the MOV (Figure 4d) The MOV is activated and the fault current is redirected from C S and D S to the MOV, where the stored energy in L DC and C S is dissipated. For simplicity, the V-I characteristic of the MOV in its active region is assumed to be linear. Thus, the V-I relationship of the MOV can be simply expressed as:

$$V_{MOV} = V_A + R_B\, i_f \qquad (4)$$

where V A and R B are constants. The initial activation current of the MOV, I o , can be estimated from (4) as I o = (V ref − V A )/R B . Hence, the fault current i f and V SSCB at this stage can be obtained from the first-order dynamics of the clamped loop, with i f decaying toward zero with the time constant L DC /(R SC + R B ); the time period T 4 is estimated as the instant at which i f reaches zero. Table 1 lists the main technical specifications of the targeted low voltage DC SSCB for a 400 V DC system (among them, a system inductance L DC in the range 1-100 µH).
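The Stage-1 expressions reconstructed above can be evaluated numerically. The sketch below computes T 1 under the worst-case scenario used later in the paper; the rated load current I r = 10 A is an assumed value for illustration, not a parameter stated in the text.

```python
# Sketch: Stage-1 fault-current rise time T1 from the L-R loop solution,
# Equations (1)-(3) as reconstructed above.
import math

V_dc   = 400.0    # V, supply voltage
L_dc   = 100e-6   # H, system inductance (worst case)
R_sc   = 0.4      # ohm, short-circuit resistance
I_r    = 10.0     # A, rated load current (assumed value)
I_trip = 100.0    # A, trip current

tau = L_dc / R_sc
T1 = tau * math.log((V_dc - I_r * R_sc) / (V_dc - I_trip * R_sc))
print(f"T1 = {T1*1e6:.1f} us")  # time for i_f to rise from I_r to I_trip
```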
4.1.1 Selection of capacitor C S First condition: the energy stored in C S must be greater than the energy stored in the system inductance L DC . Thus:

$$\frac{1}{2}C_S V_{max}^2 \ge \frac{1}{2}L_{DC} I_{trip}^2$$

where V max is the maximum blocking voltage across the SSCB. Second condition: the rated voltage of C S must be higher than the maximum blocking voltage across the SSCB (1000 V). 4.1.2 Selection of diode D S First of all, a soft and fast recovery power diode is expected. Secondly, the pulse current of D S must be higher than the maximum trip current (100 A). Hence, the 650 V diode IDP40E65D2 from Infineon [24], with a 120 A pulse current rating, is selected. 4.1.3 Selection of MOV First condition: the energy absorption capability of the MOV must be higher than the energy stored in the system inductance (L DC = 100 µH). Second condition: the protection level of the MOV must be lower than a certain level to ensure that the voltage across the SSCB stays below the allowed maximum value (1000 V). Hence, MOV B72220S0171K101 from TDK [25] is selected. Figure 5 illustrates the selected MOV voltage-current characteristic against its linear fitted curve in the active current region (10-100 A). Theoretic calculations in each stage for the proposed snubber Substituting the parameters of the selected components into the corresponding equations derived in Section 3, and assuming the worst-case scenario L DC = 100 µH and short-circuit resistance R SC = 0.4 Ω, the fault current i f , the voltage across the SSCB V SSCB and the time period T in each stage can be calculated, as listed in Table 2. SIMULATION VALIDATION Pspice is employed for simulating the snubber operating process. All parameters used for the simulation are identical to the aforementioned theoretic calculations, and an ideal semiconductor switch model is selected as the SSCB. Figure 6 shows the simulation waveforms, including the fault current (red line), capacitor current (green line), MOV current (blue line) and voltage across the SSCB (black line). As can be seen, the SSCB turns off right after the fault current reaches 100 A. In the following, the fault current is redirected to the snubber capacitor C S and then to the MOV, where it eventually damps to zero. Meanwhile, the voltage across the SSCB starts rising after turn-off of the SSCB until it reaches a peak value of around 870 V at the moment the MOV is activated. In the end, the voltage converges to the steady supply voltage V DC (440 V) when the fault current is cleared at around 53 µs. The simulation results confirm that the proposed snubber can effectively suppress the overvoltage. Furthermore, the analytical results for the fault currents in each stage obtained from Table 2 are compared with the simulation. As demonstrated in Figure 7, the analytical results match the simulation very well. Analytical results for the voltage across the SSCB are also compared with simulation results in Figure 8. As can be seen, the simulation results show reasonable matching with the calculated results, except for some discrepancies during the transient period between each stage due to the assumptions of an ideal SSCB and a linear V-I relationship of the MOV in the calculations. The simulation results verify the correctness of the theoretic analysis. FIGURE 9 Schematic of the snubber test bench EXPERIMENT VALIDATION The experiment on the proposed snubber circuit is conducted in a lab-scale DC system. Table 3 lists the parameters of the experimental set-up. A test bench is built as sketched in Figure 9, where a power IGBT switch, IRG4PSH71UD from Infineon [26], is selected as the main switch, controlled by a gate driver that sets the pulse duration of the short-circuit current.
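The component selection conditions above reduce to simple inequalities. The sketch below checks them numerically; the chosen C S value of 2 µF and the MOV energy rating of 40 J are assumed example numbers for illustration, not the paper's actual component data.

```python
# Sketch: capacitor and MOV selection checks from Section 4, assuming the
# simple energy-balance conditions stated in the text.
L_dc   = 100e-6   # H, worst-case system inductance
I_trip = 100.0    # A, maximum trip current
V_max  = 1000.0   # V, maximum allowed SSCB blocking voltage

E_L = 0.5 * L_dc * I_trip**2          # J, inductive energy (0.5 J)

# Condition 1: capacitor must be able to hold the inductive energy
C_min = 2.0 * E_L / V_max**2          # F -> 1.0 uF minimum
C_s = 2e-6                            # F, assumed example choice
assert 0.5 * C_s * V_max**2 >= E_L

# MOV condition 1: energy absorption capability above E_L
E_mov_rating = 40.0                   # J, placeholder datasheet value
assert E_mov_rating > E_L

print(f"Minimum snubber capacitance: {C_min*1e6:.1f} uF")
```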
Figure 10 shows the experimental results of the SSCB without the snubber and with the proposed snubber under the same test condition: L DC = 100 µH and V DC = 100 V. As observed, the peak voltage across the SSCB is as high as 974 V without the snubber in Figure 10(a), compared to only 212 V with the proposed snubber in Figure 10(b). Figure 11 presents the waveforms under the test conditions of L DC = 180 µH subjected to various supply voltages of 150, 200 and 250 V, respectively. The results demonstrate that the overvoltage across the SSCB can be effectively suppressed to less than twice the supply voltage with the proposed snubber. Meanwhile, it is worth noticing that in Figure 11(a),(b) voltage ringing appears at the end of the process, leading to a longer recovery time of the SSCB. The reason is that the MOV in the lower supply voltage system has not been fully activated, resulting in a weaker damping effect on the ringing. Figure 11(c) shows no ringing due to the effectively activated MOV under the higher supply voltage. Figure 12 compares the waveforms of the fault currents and voltages across the SSCB from the experiment against the simulation results under the same condition: L DC = 100 µH and V DC = 135 V. It demonstrates a reasonable match between them, though there are noticeable discrepancies mainly attributed to the parasitic impedance of the wires and PCB traces, which are not accounted for in the simulation. In summary, the experimental results validate the effectiveness of the proposed snubber circuit. Discussions of impact factors on the response time of SSCBs It is well known that the adoption of snubbers can prolong the response time of SSCBs. For this reason, it is essential to investigate which factors influence the response time and in what way. The response time was therefore examined as a function of the MOV clamping voltage, snubber capacitance, system inductance and trip current, respectively. As indicated, an increase in the MOV clamping voltage can reduce the response time, whereas the response time increases in concert with the rising of the snubber capacitance, system inductance and trip current level. Therefore, designers can manipulate these factors to meet their own design objectives. Discussions of the impact of the assumptions on the snubber performance Although the assumptions made to simplify the theoretic analysis have only a limited impact on the snubber performance, they are discussed here for completeness. First and foremost, the assumption of instant turn-off of the SSCB tends to reduce the total response time. However, the turn-off time of semiconductor devices is generally on the order of several hundreds of nanoseconds, almost two orders of magnitude lower than the total response time of SSCBs (tens of microseconds). Therefore, the influence is insignificant. Secondly, neglecting the on-state voltage of SSCBs would increase the rising speed of the fault current and tends to reduce the time period T1 in Stage 1, as defined by Equations (1) and (3). However, compared to the power supply voltage V DC , the on-state voltage drop of SSCBs is negligible and hence its influence is very limited. The assumption of no reverse current for diode D S would have an impact on the snubber performance in the final stage, where the diode changes from forward mode to reverse mode. Since a diode with a slow and hard recovery characteristic would cause transient oscillations or high voltage spikes during this stage, a soft and fast recovery diode with a recovery time below 100 ns is expected. Undoubtedly, the selected diode should be verified in the actual circuit to ensure that the snubber performs as expected.
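The staged operating process lends itself to a simple time-stepped simulation, which is one way to explore how the parameters above shape the response time. The sketch below integrates the equivalent circuit after SSCB turn-off with an explicit Euler method; the MOV fit values V_A and R_B and the C_S value are assumptions chosen so that the clamp peaks near 870 V at 100 A (matching the simulated peak in the text), and the model is an idealized illustration, not the paper's Pspice setup.

```python
# Sketch: Euler transient of the equivalent circuit after SSCB turn-off.
V_dc, L_dc, R_sc = 400.0, 100e-6, 0.4     # supply, inductance, fault resistance
C_s, V_A, R_B = 2e-6, 620.0, 2.5          # assumed snubber cap and MOV fit
i, v_c, t, dt = 100.0, V_dc, 0.0, 1e-8    # start at trip; cap pre-charged

phase = "cap"                             # fault current flows into C_S first
while i > 0.0 and t < 1e-3:               # integrate until current clears
    if phase == "cap":
        v_dev = v_c                       # device voltage = capacitor voltage
        v_c += dt * i / C_s               # capacitor charges
        if v_c >= V_A:                    # MOV knee reached -> commutate
            phase = "mov"
    else:
        v_dev = V_A + R_B * i             # linearized MOV clamp
    i += dt * (V_dc - R_sc * i - v_dev) / L_dc
    t += dt

print(f"fault cleared after ~{t*1e6:.0f} us")
```

Raising V_A in this toy model shortens the clearing time while C_s lengthens the charging phase, consistent with the trends reported above.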
Lastly, the assumption of no leakage current of the MOV has no influence on the snubber performance other than on the MOV itself, as a larger leakage current tends to lead to faster deterioration of the MOV in the long run. In this scheme, the leakage current of the MOV is negligible because no voltage is applied across the MOV under normal operating conditions. To conclude, if designed properly, these assumptions have little impact on the total performance of the snubber. Comparison with conventional RCD snubbers and MOVs For comparison, a conventional RCD circuit is constructed by simply replacing the MOV of the proposed snubber with a 20 Ω snubber resistor R S while keeping all other parameters of the system and the other components identical to the proposed snubber. As shown in Figure 14, the simulated fault current waveforms of both solutions are almost identical. In the meantime, the peak voltage across the SSCB with the proposed snubber is at the same level as that of the conventional RCD snubber. Figure 15 compares the currents and powers through the resistor R S of the RCD snubber and the MOV of the proposed snubber. As observed, both R S and the MOV experience very high peak powers, 10 and 20 kW respectively. Furthermore, it is noticed that as long as 300 µs is needed to damp the RCD snubber current to zero through the resistor R S , whereas the proposed snubber with the MOV can do so in around 55 µs. In addition, Table 4 roughly compares the performances of the conventional RCD, the MOV and the proposed snubber used for the 400 V DC SSCBs defined in Table 1. It shows that the MOV stands out for a shorter response time and a much lower cost, while the conventional RCD and the proposed snubber share better overvoltage suppression and lower transient oscillations. However, RCD snubbers are much more expensive than the proposed snubber for the same peak current and clamping voltage requirements. To conclude, the comparison demonstrates that the proposed snubber can not only suppress the overvoltage as effectively as the conventional RCD snubber but also has a relatively low cost after replacing the bulky and expensive resistor with a simple and low-cost MOV. CONCLUSION In this paper, a novel snubber circuit has been proposed for low voltage DC solid-state circuit breakers. It exploits the advantages of the effective overvoltage suppression of RCD snubbers and the high energy absorption capability of MOVs, while it eliminates the requirement of the high-power resistor of RCD snubbers and mitigates the transient fluctuations of MOVs. Its operating principle has been analysed, and then a snubber design for 400 V DC SSCBs is presented. Simulation results compared against the analytical results validate the correctness of the snubber design. Meanwhile, the impact factors on the response time of SSCBs have been investigated by simulation. Finally, a prototype lab-scale SSCB with the proposed snubber circuit has been constructed and tested. The experimental results further confirm the effectiveness of the proposed snubber circuit design.
4,656.4
2021-03-12T00:00:00.000
[ "Engineering", "Physics" ]
Glucosyl hesperidin exhibits more potent anxiolytic activity than hesperidin accompanied by the attenuation of noradrenaline induction in a zebrafish model Anxiety is a symptom of various mental disorders, including depression. Severe anxiety can significantly affect the quality of life. Hesperidin (Hes), a flavonoid found in the peel of citrus fruits, reportedly has various functional properties, one of which is its ability to relieve acute and chronic stress. However, Hes is insoluble in water, resulting in a low absorption rate in the body and low bioavailability. Glucosyl hesperidin (GHes) is produced by adding one glucose molecule to hesperidin. Its water solubility is significantly higher than that of Hes, which is expected to improve its absorption into the body and enhance its effects. However, its efficacy in alleviating anxiety has not yet been investigated. Therefore, in this study, the anxiolytic effects of GHes were examined in a zebrafish model of anxiety. Long-term administration of diets supplemented with GHes did not cause any toxicity in the zebrafish. In the novel tank test, zebrafish in the control condition exhibited an anxious behavior called freezing, which was significantly suppressed in GHes-fed zebrafish. In the black-white preference test, which also induces visual stress, GHes-fed zebrafish showed significantly increased swimming time in the white side area. Furthermore, in tactile (low water-level stress) and olfactory-mediated stress (alarm substance administration) tests, GHes suppressed anxious behavior, and these effects were stronger than those of Hes. Increased noradrenaline levels in the brain generally cause freezing; however, in zebrafish treated with GHes, the amount of noradrenaline after stress was lower than that in the control group. Activation of c-fos/ERK/Th, which is upstream of the noradrenaline synthesis pathway, was also suppressed, while activation of the CREB/BDNF system, which is vital for neuroprotective effects, was significantly increased. These results indicate that GHes has a more potent anxiolytic effect than Hes in vivo, which may have potential applications in drug discovery and functional food development. Introduction Flavonoids are secondary metabolites that are abundant in plants, fruits, and seeds, and are responsible for color, fragrance, and flavor characteristics (Dias et al., 2021). Flavonoids possess various physiological functions, including anti-oxidant activity, regulation of cell growth and differentiation, inhibition of inflammation, suppression of bacterial infection, and reduced risk of human diseases (Yoshinaga et al., 2016; Shinyoshi et al., 2017; Dias et al., 2021; Domaszewska-Szostek et al., 2021; Silva et al., 2021; Deng et al., 2022). The health benefits of Hes have been discussed for various diseases. For example, Hes suppresses cancer cell growth by inducing apoptosis through the PI3K/AKT pathway (Aggarwal et al., 2020). Hes attenuates nitric oxide deficiency-induced cardiovascular remodeling by suppressing the expression of TGF-β1 and the matrix metalloproteinase proteins MMP-2 and MMP-9 (Maneesai et al., 2018). Hes decreases diabetic nephropathy induction by modulating TGF-β1 and oxidative DNA damage (Kandemir et al., 2018).
The anxiolytic and antidepressant-like activities of Hes have been recently reported. Anxiety is a typical symptom of depression and other psychiatric disorders that affect many patients worldwide. Hes suppressed anxious behavior in Parkinson's disease model mice in the Elevated Plus-Maze Test (EPMT) and splash test (Antunes et al., 2020). Hes exhibited antidepressant-like effects in the EPMT, forced swimming test, and open field test in streptozotocin-induced diabetic rats (Zhu et al., 2020). Hes improved depression-like behaviors in rats after exposure to a single prolonged stress (post-traumatic stress model), accompanied by a decrease in freezing behavior (Lee et al., 2021). Hes is also the main component of Chin-pi, a Chinese medicinal herb originating from citrus peels that shows anxiolytic activity in rodents (Ito et al., 2013). Although numerous physiological functions have been reported for Hes, significant metabolic problems are associated with its poor bioavailability, similar to many other flavonoids. In general, Hes is hydrolyzed to the aglycone hesperetin by the intestinal microbiota. The absorbed hesperetin is metabolized by UDP-glucuronosyl transferases and sulfotransferases in the colon, small intestine, and liver at the 3′- and 7-positions (Boonpawa et al., 2017). Hesperetin and hesperetin glucuronide can traverse the blood-brain barrier (BBB) in vitro (Youdim et al., 2003). The expression of the biological functions of orally administered Hes relies on this metabolic pathway. However, the water solubility of Hes is low (0.002 g/100 g water) and its absorption efficiency in the intestine is low, resulting in insufficient bioactivity of Hes metabolites. Glucosyl hesperidin (GHes) is a conjugate of monoglucose with Hes and is produced using cyclodextrin glucanotransferase (CGTase) originating from Bacillus species, which can conjugate monoglucose to Hes (Chen et al., 2022). The water solubility of GHes is approximately 10,000 times higher than that of Hes (Yamada et al., 2006). As expected, the serum hesperetin concentration increased more rapidly in rats administered GHes than in those administered Hes (Yamada et al., 2006). The area under the concentration-time curve for hesperetin in the sera of rats administered GHes was approximately 3.7-fold greater than that in rats administered Hes (Yamada et al., 2006). The reported physiological functions of GHes include the inhibition of influenza viral sialidase activity (Saha et al., 2009), prevention of obesity in clinical trials (Yoshitomi et al., 2021), inhibition of selenite-induced cataract formation (Nakazawa et al., 2020), and inhibition of gravity-induced lower-leg swelling (Nishimura et al., 2021). These reports suggest that the anxiolytic activity of GHes may be greater than that of Hes. However, despite the high water solubility of GHes, no differences in blood pressure reduction or other effects in hypertensive rats have been reported (Ohtsuki et al., 2002; Ikemura et al., 2012). Furthermore, the effects of GHes on anxiety behaviors have not yet been investigated.
To investigate the effectiveness of GHes in human health research, the current study aimed to evaluate the anxiolytic activity of GHes compared with Hes. In this study, we used zebrafish as a model of anxiety. Zebrafish are small fish belonging to the Cypriniformes family and are officially recognized by the NIH as the third most commonly used laboratory animal after mice and rats (Zhao et al., 2018). Zebrafish are relatively inexpensive to maintain and require less space than rodents. We also have access to the whole genome information of zebrafish, and approximately 70% of human genes are conserved in zebrafish. Methods to study anxiety, such as open field tanks, black-and-white preference, and T-maze tests, have been developed in zebrafish as well as in mice (Shiozaki et al., 2020). Zebrafish are suitable for evaluating antidepressant drugs or natural compounds. In this study, we performed a novel tank test, a black-white preference test, and acute stress induction by low water level and alarm substance to determine the effects of GHes on anxiety behavior in zebrafish. We also investigated the underlying mechanism by which GHes suppressed anxiety. Materials and methods 2.1 Zebrafish and diets The experimental diets were prepared as follows: GHes or Hes was mixed with a pulverized commercial diet to a 1% concentration, which was then freeze-dried and pelletized (0.6-1.0 mm). Under the conditions of this study, feeding 1% Hes to zebrafish is roughly equivalent to 200 mg Hes/day/kg body weight. Similar Hes concentrations have been used in several studies (Ahmadi et al., 2008; Fu et al., 2019). A control diet was prepared using the above methods without GHes or Hes. The diets were preserved at −20 °C during the administration period. The administration was conducted in 2-L tanks at 28 °C. The zebrafish were separated into 2-L tanks and fed a commercial diet until the experimental diet was administered. Fish were fed to apparent satiation twice a day. Feeding was carried out by repeatedly offering the zebrafish a small amount of experimental diet until they stopped eating. Food intake was expressed as the total diet ingested in each tank divided by the number of fish. Evaluation of behavior 2.3.1 Motility test A motility test was conducted using zebrafish fed the control, Hes, or GHes diet for 52 days. A fish was introduced into a white tank (21.6 cm wide, 22.8 cm long, 12 cm deep), habituated for 15 min, and the swimming behavior of the zebrafish was recorded using a video camera (HDR-CX430, Sony, Tokyo, Japan) for 5 min. Swimming was automatically recorded and tracked using Move-tr/2D software (Library, Tokyo, Japan). The total distance traveled and swimming velocity were analyzed. Novel tank test The novel tank test was carried out using zebrafish that were fed the control, Hes, or GHes diet for 31 days. Fish were introduced into the center of a bright white tank (21.6 cm wide, 22.8 cm long, 12 cm deep), and their swimming behavior was recorded using a video camera for 5 min. Swimming was automatically tracked using Move-tr/2D software. The total freezing time, freezing frequency, and distance traveled were analyzed.
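The stated equivalence between a 1% (w/w) diet and roughly 200 mg Hes/day/kg body weight can be checked with simple arithmetic, as sketched below. The fish body mass and daily feed rate are assumed typical values for adult zebrafish, not measurements from this study.

```python
# Sketch: back-of-envelope check of the stated dose equivalence.
diet_fraction = 0.01      # 1% Hes in the diet (w/w)
body_mass_g   = 0.4       # g, assumed adult zebrafish mass
feed_g_day    = 0.008     # g diet/day, assumed (~2% of body weight)

hes_mg_day = diet_fraction * feed_g_day * 1000.0   # mg Hes ingested per day
dose = hes_mg_day / (body_mass_g / 1000.0)         # mg/day/kg body weight

print(f"~{dose:.0f} mg Hes/day/kg body weight")    # ~200
```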
Black-white preference test The black-white preference test was performed, with slight modifications, using zebrafish that were fed the control, Hes, or GHes diet for 21 days (Ikeda et al., 2021). A fish was introduced into the black section of a tank (23 cm wide, 13 cm long, and 6 cm high) that was divided into half-black and half-white sections, and the swimming behavior of the zebrafish was recorded using a video camera for 20 min. Swimming was automatically tracked using Move-tr/2D software. The total swimming time in the black area, the total time in the white area, and the total frequency of invasion into the white area were analyzed. Low water level stress test The low water level test was conducted, with slight modifications, using zebrafish that were fed the control, Hes, or GHes diet for 14 days (Piato et al., 2011). Low water level stress was defined as the zebrafish's dorsal side being out of the water surface. The fish were exposed to low water level stress for 2 min, then introduced into a white tank (21.6 cm wide, 22.8 cm long, 12 cm deep), and their swimming behavior was recorded using a video camera for 10 min. Five minutes of swimming were automatically tracked using Move-tr/2D. The total freezing time and frequency over 10 min were analyzed. Alarm substance exposure test The alarm substance exposure test was conducted using zebrafish that were fed the control, Hes, or GHes diet for 7 days. The alarm substance was prepared as described previously (Speedie and Gerlai, 2008). Briefly, zebrafish scales were peeled with a scalpel and finely crushed in cold PBS. The homogenate was centrifuged, and its supernatant was stored at −80 °C until use. A fish was introduced into a transparent tank (18.5 cm wide, 10.9 cm long, 11 cm high) with the alarm substance (equivalent to the amount derived from 0.03 fish/L), and the swimming behavior of the zebrafish was recorded using a video camera for 10 min. Three minutes of swimming were automatically tracked using Move-tr/2D. The total freezing time was analyzed over 10 min. Real-time PCR The mRNA expression levels of each gene were analyzed using cDNAs from the zebrafish brain with a StepOne Real-Time System (Thermo Fisher Scientific, MA). Zebrafish brains were removed after euthanasia with 0.1% tricaine. Tricaine has been used in many studies on anxiety in fish and has been reported to have no effect on anxiety- or stress-related behaviors (Nordgreen et al., 2014). Total RNA was extracted from the zebrafish brains using Sepasol-RNA I Super G solution (Nacalai Tesque, Kyoto, Japan), and cDNA synthesis was performed using ReverTra Ace qPCR RT Master Mix with gDNA Remover (TOYOBO, Osaka, Japan). Real-time PCR was conducted using KOD SYBR qPCR Mix or THUNDERBIRD qPCR Mix (TOYOBO). The specific primers used for PCR are listed in Supplementary Table S1. The expression level of actb mRNA was used as an internal standard to compensate for the quality and quantity of mRNA in each sample. Primers were designed using NCBI Primer-BLAST (https://www.ncbi.nlm.nih.gov/tools/primer-blast/).
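The real-time PCR analysis above normalizes each target gene to actb and expresses levels relative to the control group. A common way to do this is the 2^−ΔΔCt method, sketched below; the ΔΔCt form is an assumption (the text does not name the quantification model), and the Ct values are made-up illustrations.

```python
# Sketch: relative mRNA quantification normalized to actb, relative to the
# control group, using the (assumed) 2^-ddCt method.
import numpy as np

def rel_expression(ct_target, ct_actb, ct_target_ctrl, ct_actb_ctrl):
    d_ct      = np.asarray(ct_target) - np.asarray(ct_actb)               # per sample
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_actb_ctrl))
    return 2.0 ** -(d_ct - d_ct_ctrl)       # fold change vs control mean

# Illustrative Ct values (e.g., bdnf in GHes-fed fish vs control)
fold = rel_expression(ct_target=[22.3, 22.4, 22.2], ct_actb=[16.0, 16.1, 15.9],
                      ct_target_ctrl=[22.9, 23.0, 22.7], ct_actb_ctrl=[16.1, 16.0, 15.9])
print(fold.round(2))   # ~1.5-fold increase, as reported for bdnf
```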
Determination of monoamines Noradrenaline, serotonin, and dopamine levels were determined as described previously, with slight modifications (Kawano et al., 2020). Briefly, fresh zebrafish brains were homogenized in 0.2 M perchloric acid containing 0.1 mM EDTA. After centrifugation at 12,000 × g, the supernatant was mixed with 0.2 M sodium acetate and analyzed using HPLC. The HPLC system consisted of a pump (JASCO PU-4180, JASCO, Tokyo, Japan), an autosampler (JASCO AS-4550), a column oven (JASCO CO-4061), and an electrochemical detector (ECD-700, EiCOM, Kyoto, Japan) with a graphite carbon working electrode and an Ag/AgCl reference electrode. The ECD potential was set at +750 mV for the working electrode. The mobile phase was an acetate-citrate buffer (pH 3.5) containing 0.053 M citric acid, 0.047 M sodium acetate, 5 mg/L EDTA, 195 mg/L sodium octyl sulfonate, and 17% methanol (v/v). The mobile phase was delivered at a flow rate of 0.5 mL/min to a stainless steel column (Eicompack SC-5ODS, 3 mm φ × 150 mm; EiCOM). Data analysis Results are presented as mean ± standard deviation. Normality tests and the group sizes used in this study were evaluated using IBM SPSS Statistics software (Armonk, NY). For two groups, data were compared using a t-test. For three groups, data were compared using a one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test. GHes attenuated the anxiety induced via visual stress Before evaluating the anxiolytic activity of GHes, the effects of GHes on non-stressed zebrafish were compared with those of the control and Hes. Zebrafish that were fed GHes or Hes did not differ from the controls in swimming trajectory, swimming speed, or swimming distance (Figures 1A-C). GHes and Hes slightly but significantly enhanced the daily food intake in zebrafish compared to the control (p < 0.01 for GHes and Hes vs control) (F = 11.609, p < 0.0001 in one-way ANOVA, Figure 1D). To evaluate the effect of GHes on anxiety behavior in zebrafish, the novel tank test, a visual stress test, was carried out. Under the novel conditions, control-fed zebrafish exhibited freezing behavior accompanied by a drastic change in the swimming track (Figure 2A). However, GHes-fed zebrafish showed drastically suppressed freezing time (p < 0.05 vs control) (F = 4.489, p = 0.024 in one-way ANOVA, Figure 2B) and frequency of freezing (p < 0.01 vs control) (F = 7.899, p = 0.0026 in one-way ANOVA, Figure 2C), and their swimming track was similar to that of non-stressed zebrafish, as shown in Figure 1A. In accordance with the decreased anxiety behavior, the total distance traveled in the GHes group was significantly elevated due to the increase in normal swimming (p < 0.01 vs control) (F = 11.691, p = 0.00039 in one-way ANOVA, Figures 2A,D). In contrast, Hes did not exhibit anxiolytic activity compared to the control (Figure 2).
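The three-group comparisons above (one-way ANOVA followed by Tukey's test) can be reproduced with standard statistics libraries, as sketched below. The freezing-time values are random placeholder data, not the study's measurements.

```python
# Sketch: one-way ANOVA followed by Tukey's multiple comparison test.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control = rng.normal(120, 25, 7)   # freezing time (s), placeholder data
ghes    = rng.normal(60, 25, 7)
hes     = rng.normal(100, 25, 7)

F, p = f_oneway(control, ghes, hes)            # omnibus test across groups
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

values = np.concatenate([control, ghes, hes])  # pairwise post-hoc comparisons
groups = ["control"] * 7 + ["GHes"] * 7 + ["Hes"] * 7
print(pairwise_tukeyhsd(values, groups))
```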
A black-white preference test was carried out to confirm the anxiety suppression by GHes under visually induced stress. Fish tend to prefer black areas because of their instinct to hide, and they feel insecure in white areas. However, when anxiety decreases, they swim to the white side in interest-seeking behaviors (Ikeda et al., 2021). As expected, control and Hes diet-fed zebrafish swam mainly in the black areas (Figure 3A). Control fish swam for over 800 s in the black area and 260 s in the white area (Figures 3B,C). In contrast, GHes-fed zebrafish exhibited different swimming patterns compared to the control fish (Figure 3A). GHes-fed fish showed significantly increased swimming time in the white area (p < 0.01 vs control) (F = 6.751, p = 0.0045 in one-way ANOVA, Figure 3B) and decreased swimming time in the black area (p < 0.01 vs control) (F = 19.709, p < 0.001 in one-way ANOVA, Figure 3C). The frequency of invasion into the white area did not differ among the control, GHes, and Hes groups, indicating that the swimming time per invasion of the white side was increased by GHes treatment. These results suggest that GHes attenuates the anxiety caused by visual stress. GHes decreased anxiety behavior induced by low water level stress As this study revealed that GHes reduced anxiety behaviors induced by visual stress, we evaluated the effects of GHes on other stress-induced anxiety behaviors. Low water levels are stimuli recognized by the zebrafish body and are known to induce anxiety (Piato et al., 2011). This test examined two time periods: 0-5 min and 5-10 min. Control-fed fish exhibited drastic freezing behavior during both periods (Figures 4A,B). GHes significantly decreased the freezing time in both periods compared to the control (p < 0.01 and p < 0.05 vs control at 0-5 and 5-10 min, respectively) (F = 6.61 and 6.208, p = 0.007 and p = 0.0089 at 0-5 and 5-10 min, respectively, in one-way ANOVA, Figures 4B,C). Hes did not reduce the freezing time at 0-5 min (Figures 4A,B) but showed a significant decrease at 5-10 min (p < 0.05; Figures 4A,C), indicating that Hes allowed zebrafish to recover from freezing faster than the control, but not as quickly as GHes. In accordance with the reduction in freezing time, the total distance traveled increased in GHes-fed zebrafish at 0-5 min (p < 0.05 vs control) (F = 4.100, p = 0.034 in one-way ANOVA, Figure 4D) and in GHes- and Hes-fed fish at 5-10 min (p < 0.01 vs control) (F = 10.167, p = 0.0011 in one-way ANOVA, Figure 4D). These results suggest that GHes and Hes attenuate the acute stress induced by low water stimulation and that GHes is more anxiolytic than Hes. FIGURE 6 Alteration of the noradrenaline pathway in GHes-fed zebrafish. Control- or GHes-fed zebrafish (7 days) were exposed to alarm substance stress. (A-C) The fish brains were excised 5 min after stress, and the contents of noradrenaline (A), dopamine (B), and serotonin (C) in the brain were estimated using HPLC. (D and F) Fish brains were excised 15 min after stress. Brain lysates were subjected to Western blotting with anti-Th (D) and anti-phospho-ERK (F) antibodies. Actin and ERK were used as internal controls. Quantitative analyses of the intensities of the protein bands were conducted, and the results are presented as Th/β-actin and p-ERK/ERK. n = 8.
(E) Fish brains were excised 15 min after stress induction. c-fos mRNA levels in the zebrafish brains were assessed using real-time PCR. Each level of gene expression in GHes-fed zebrafish is relative to that in the control. n = 10. n.s., not significant. Results are shown as means ± standard deviation. GHes decreased anxiety behavior induced by alarm substance exposure Alarm substances are secreted from damaged zebrafish skin to alert other fish to danger; their primary component is hypoxanthine 3-N-oxide (Parra et al., 2009). Fish recognize these compounds through their sense of smell and thereby sense danger. Exposure to alarm substances induces anxiety in zebrafish. Control-fed fish exhibited drastic freezing and tended to swim in one corner (Figure 5). GHes treatment drastically suppressed the freezing time (p < 0.01 vs control) (F = 11.937, p = 0.0022 in one-way ANOVA, Figures 5A,B) and the frequency of freezing (p < 0.05 vs control) (F = 15.308, p = 0.0013, Figure 5C) induced by the alarm substance, similar to the other acute stress experiments in this study. Hes did not decrease freezing behavior (Figure 5). As noradrenaline (NA) is involved in the induction of freezing via the ERK/AP-1 pathway (Miller et al., 2010), the NA content was expected to be altered in GHes-fed fish under alarm substance stimulation. As expected, the NA content in the whole brain was significantly lower in GHes-fed zebrafish than in the control (p < 0.01, Figure 6A). Dopamine, a precursor of NA, was also significantly decreased in GHes-treated fish (p < 0.01, Figure 6B), while the level of serotonin, a regulator of NA neurons, was not altered (Figure 6C). Tyrosine hydroxylase 1 (Th1) catalyzes the conversion of the amino acid L-tyrosine to L-3,4-dihydroxyphenylalanine (L-DOPA) and is responsible for catecholamine synthesis, including noradrenaline. To understand the mechanism by which GHes reduces freezing behavior in zebrafish, we analyzed alterations in the ERK/AP-1/Th1 pathway. As expected, GHes suppressed the expression of the Th1 polypeptide compared to the control (p < 0.01, Figure 6D and Supplementary Figure S1), which coincided with the decrease in NA content in GHes-fed zebrafish. The gene expression of c-fos, a marker of activated neurons and a component of AP-1, was also suppressed in GHes-fed zebrafish (p < 0.05, Figure 6E and Supplementary Figure S1). The phosphorylation of ERK, upstream of AP-1, was also downregulated by GHes (p < 0.05, Figure 6F and Supplementary Figure S1). Brain-derived neurotrophic factor (BDNF) is well known to regulate mental behavior in vertebrates. Upregulation of the BDNF/CREB pathway reportedly suppresses stress-induced anxiety behaviors (Jiang et al., 2020). In GHes-fed zebrafish, the protein level of CREB was significantly upregulated compared to the control (3.0-fold increase, p < 0.05, Figure 7A and Supplementary Figure S2), accompanied by an increase in the bdnf mRNA level (1.5-fold increase, p < 0.05, Figure 7B). The mRNA expression of tropomyosin receptor kinase B (TrkB), a BDNF receptor, did not differ between the control and GHes-treated groups (Figure 7C). These results suggest that downregulation of ERK/AP-1/Th1 and upregulation of the BDNF/CREB pathway may be involved in the suppression of anxiety-like behavior by GHes. Although the hypothalamic-pituitary-adrenal (HPA) axis, serotonin, and γ-aminobutyric acid (GABA) pathways are also reportedly involved in anxiety behavior, GHes did not affect the expression of genes related to these pathways (Supplementary Figure S3).
Oxidative stress is induced by acute stimulation, resulting in neuronal damage and anxiety. Nrf2 is a transcription factor that regulates the upregulation of anti-oxidant genes, and Keap1 interacts with Nrf2 to inactivate it. During stress induction, reactive oxygen species (ROS) are produced, and Nrf2 is activated to eliminate ROS (Motohashi and Yamamoto, 2004). In the present study, GHes suppressed the mRNA level of nrf2, but not keap1, in zebrafish brains (Figures 7D,E), confirming the reduction in stress in GHes-fed zebrafish. Discussion Several studies have revealed the anxiolytic and antidepressant activities of hesperidin (Hes). It is also known that hesperetin and hesperetin glucuronide, metabolites of hesperidin, can penetrate the brain through the BBB (Youdim et al., 2003). However, its low water solubility attenuates the bioavailability of Hes and its metabolites. The present study evaluated the effects of glucosyl hesperidin (GHes), which improves the water solubility of hesperidin by the conjugation of monoglucose, on anxiety behavior in zebrafish induced by various stresses. GHes significantly suppressed freezing behavior in the novel tank test, the low water level-induced stress test, and the alarm substance test, and increased explorer activity in the white area in the black-white preference test. The anxiolytic activity of GHes was more potent than that of Hes. Furthermore, we suggest that GHes suppressed anxiety in zebrafish by attenuating the ERK/AP-1/Th1 pathway and activating the BDNF/CREB pathway (Figure 8). GHes and Hes enhanced food intake in zebrafish. Hes is a major component of the herbal medicine "Chin-pi" contained in the Kampo medicine Ninjinyoeito, which is used to support the treatment of various diseases by energizing patients through improved mental health (Kawabe et al., 2021). Hesperidin may enhance feeding, and ghrelin in the gastrointestinal tract may stimulate NPY in the central nervous system, resulting in enhanced feeding (Fujitsuka et al., 2011). This is likely because feeding Ninjinyoeito to zebrafish lacking NPY did not change their food intake (Kawabe et al., 2022). Increased noradrenaline levels are the leading cause of freezing in zebrafish under acute stress. Ninjinyoeito suppresses zebrafish freezing via the inhibition of NA neurons, and Chin-pi has been shown to be one of its anxiolytic constituents (Kawabe et al., 2021). In addition, Hes suppresses the induction of freezing in an animal model of post-traumatic stress disorder, accompanied by a decrease in noradrenaline (Lee et al., 2021). Although the inhibitory effect of Hes on freezing was also observed in several acute stress tests in the present study, the inhibitory effect of GHes on freezing was greater than that of Hes.
The BDNF/CREB pathway is deeply involved in the development of psychiatric disorders such as depression, Alzheimer's disease, Parkinson's disease, bipolar disorder, and memory disorders (Nagahara and Tuszynski, 2011). The CREB/BDNF pathway is involved in the mental dysregulation induced by environmental endocrine disruptors (Tang et al., 2022). Thus, this pathway has been considered a target for drug development. Several compounds or extracts from natural products, such as dammarane sapogenins originating from ginseng (Jiang et al., 2020), tannins from Terminalia chebula fruits (Chandrasekhar et al., 2018), and the diterpene quinone tanshinone IIA isolated from the roots of Salvia miltiorrhiza Bunge (Jiang et al., 2022), attenuate anxiety by activating the CREB/BDNF pathway. Flavonoids such as rutin (Moghbelinejad et al., 2014), baicalin (Jia et al., 2021), and naringin (Gao et al., 2022) are also reported to attenuate mental disorders through the CREB/BDNF pathway. Several studies have reported the involvement of Hes in the CREB/BDNF pathway in improving memory function (Lee et al., 2022), anxiety in diabetes (Zhu et al., 2023), and pentylenetetrazole-induced convulsions (Sharma et al., 2021). The present study demonstrated the involvement of the ERK/AP-1/Th1 and BDNF/CREB pathways in the action of GHes; thus, glycosylation enhanced the anxiolytic activity of hesperidin itself. The present study demonstrates the potentiation of the anxiolytic activity of hesperidin by glycosylation. Glycosylation of flavonoids is an effective method for enhancing their biological activity. In general, flavonoids have low solubility in water, resulting in low absorption efficiency in the small intestine. Flavonoid glycosylation improves absorption (Yamada et al., 2006). In addition, the low water solubility of hesperidin makes it unsuitable for food processing as a food ingredient. Thus, this study aimed to clarify the anxiolytic effects of GHes. As anxiety is one of the leading causes of depression, its alleviation is crucial for reducing the risk of depression. The pathway through which GHes acts in zebrafish is similar to that of Hes in mammals. This indicates that GHes is a functional ingredient that enhances the action of Hes. However, this study has several limitations. First, even though zebrafish and mice have the same metabolic system for GHes, their cranial nerves and gastrointestinal tract structures are different; therefore, the appropriate dosage and bioavailability of GHes need to be considered. Second, we examined the effects of GHes on acute stress, but the effects of GHes on chronic stress were not tested. Third, the molecular mechanisms of the action of GHes remain unclear. Recent reports have shown that Hes acts on astrocytes (Nones et al., 2012), and GHes could potentiate this effect. Future clarification of these limitations will lead to the practical application of the anxiolytic effects of GHes. FIGURE 1 Effect of GHes and Hes on non-stressed zebrafish behavior. Zebrafish were fed a control diet, GHes diet, or Hes diet for 52 days. (A) Tracking in a familiar tank for 5 min. (B) Swimming velocity. (C) Swimming distance. (D) Average food intake per day and fish. Results are shown as means ± standard deviation. n = 6. n.s., not significant. Columns with the same letter are not statistically different, and vice versa.
FIGURE 2 Effect of GHes and Hes on zebrafish behavior in the novel tank test. Zebrafish were fed the control, GHes, or Hes diet for 31 days and subsequently used for the novel tank test. (A) Tracking of zebrafish swimming with the control, GHes, or Hes diet. (B) Freezing time during the 10 min observation. (C) Frequency of freezing. (D) Total distance traveled. n = 7. Results are shown as means ± standard deviation. n.s., not significant. tr, trace. Columns with the same letter are not statistically different, and vice versa. FIGURE 3 Effect of GHes and Hes on zebrafish behavior in the black-white preference test. Zebrafish were fed the control, GHes, or Hes diet for 21 days and subjected to the black-white preference test. (A) Tracking of zebrafish behavior. The gray and white colors in this picture indicate the black and white areas in the test tank, respectively. (B) Total swimming time in the white area. (C) Total swimming time in the black area. (D) Frequency of invasion of the white area. Results are shown as means ± standard deviation. n = 10. n.s., not significant. Columns with the same letter are not statistically different, and vice versa. FIGURE 4 Effects of GHes and Hes on zebrafish behavior under low water level-induced stress. Zebrafish were fed the control, GHes, or Hes diet for 14 days and subjected to a low water level stress test. (A) Zebrafish were tracked after stress induction. Upper panel: 0-5 min after stress induction. Lower panel: 5-10 min after stress induction. (B) Total freezing time. (C) Freezing frequency. (D) Total distance traveled for 0-5 and 5-10 min. Results are shown as means ± standard deviation. n = 5. n.s., not significant. Columns with the same letter are not statistically different, and vice versa. FIGURE 5 Effects of GHes and Hes on zebrafish behavior under alarm substance-induced stress. Zebrafish were fed the control, GHes, or Hes diet for 7 days and subjected to the alarm substance-induced stress test. (A) Zebrafish were tracked after stress induction. (B) Total freezing time and (C) freezing frequency. Results are shown as means ± standard deviation. n = 5. n.s., not significant. Columns with the same letter are not statistically different, and vice versa. FIGURE 7 Alteration of the CREB/BDNF pathway in GHes-fed zebrafish. Control- or GHes-fed zebrafish (7 days) were exposed to alarm substance stress. The brains were excised 15 min after the stress exposure. (A) Brain lysates were subjected to Western blotting with an anti-Creb antibody. β-actin was used as an internal control. Quantitative analyses of the intensities of the protein bands were conducted, and the results are presented as Creb/β-actin. n = 5. (B-E) Fish brains were excised 15 min after exposure to stress. (B) bdnf, (C) trkb, (D) nrf2, and (E) keap1 mRNA levels in zebrafish brains were assessed using real-time PCR. Each level of gene expression in GHes-fed zebrafish is relative to that in the control. n = 8. n.s., not significant. Results are shown as means ± standard deviation. FIGURE 8 Hypothetical mechanism of the anxiolytic activity of GHes in zebrafish.
6,577.2
2023-08-17T00:00:00.000
[ "Biology", "Psychology" ]
Self-consistent approach to magnetic ordering and excited site occupation processes in a two-level system

Ferromagnetic ordering in a two-level, partially excited system is studied in detail. The magnitudes of the magnetization (magnetic order parameter) and of the lattice ordering (excited-level occupation number) are calculated self-consistently. The influence of an external magnetic field and of the excited-level gap on the ferromagnetic phase transition is discussed.

Introduction

In numerous real physical systems, the problem of ferromagnetic (or ferroelectric) ordering strongly depends on the density of magnetic (or dipole) particles in the medium. Such a situation is observed in different types of materials, namely in solids and liquids. In solids in particular, the irregularity in the positions of magnetic particles is determined by the type of sample preparation (slow or rapid cooling) and requires equilibrium or nonequilibrium methods of statistics for its investigation. On the other hand, the irregularity of a system can be created by exciting radiation or by heating, in such a way that only a part of the total number of particles moves to the overlying energy levels. Under constant external radiation (temperature), such a system is in equilibrium and stable. The degree of occupation of the ground and excited states depends on their mutual energy distance and on the intensity of the radiation (thermostat temperature). The exchange interaction between the magnetic particles (in the ground or excited states) enables magnetic ordering processes in the system; their intensity essentially depends on the correlations between ground-ground, ground-excited and excited-excited states of particles. Both an external magnetic field and radiation (thermostat temperature) form self-consistent thermodynamic states of the system with a certain magnetization and excited-state occupation. In practical applications, the intensity of the external radiation (pump) or the thermostat temperature may be considered constant. A two-level model system can be used to investigate such a system and to calculate its characteristics. Two-level systems are relatively simple and generally well studied (see [1-4]); they form the basis for the microscopic description of many physical phenomena and processes, such as non-linear optics, lasers, Josephson junctions, Kondo scattering, glasses, etc. [5-9]. We consider spin particles capable of occupying one of two levels, the ground and the excited one, with probabilities that depend on the temperature and on the interaction between particles. Special attention is paid herein to the analysis of the correlation between spin orientation and occupation processes, as well as to the role of the external magnetic field and the interlevel distance in the formation of the stable state of the system.

Hamiltonian and free energy

We consider an N-site lattice system with one spin-like particle at each site, each capable of occupying one of two quantum states: the ground state and the excited one. The Hamiltonian of such a system in the external field $\gamma$, taking into account the pair exchange interactions between the particles, can be presented in the form

$$\hat{H} = \sum_{i\lambda} \varepsilon_\lambda \hat{C}^{\lambda}_i - \frac{1}{2} \sum_{ij} \sum_{\lambda\nu} J^{\lambda\nu}_{ij}\, \hat{S}_i \hat{C}^{\lambda}_i\, \hat{S}_j \hat{C}^{\nu}_j - \gamma \sum_i \hat{S}_i \,, \qquad (2.1)$$

where $\hat{C}^{\lambda}_i$ is the operator of the number of particles at the $i$-th site in the quantum state $\lambda$, $J^{\lambda\nu}_{ij}$ is the integral of the exchange interaction between spin particles at the $i$-th and $j$-th sites of the crystalline lattice, and $\varepsilon_\lambda$ is the configurational part of the energy per particle in the state $\lambda$.
Since the total number of particles coincides with N, we can pass from the two operators $\hat{C}^{(1)}_i$ and $\hat{C}^{(2)}_i$ to the single operator $\hat{C}_i = \hat{C}^{(2)}_i$, which coincides with the excited-particle number operator. The eigenvalues of the operator $\hat{C}_i$ are

$$C_i = \begin{cases} 0, & \text{the particle at the } i\text{-th site is in the ground state}, \\ 1, & \text{the particle at the } i\text{-th site is in the excited state}. \end{cases} \qquad (2.3)$$

The orthonormal set of wave functions for each site of the lattice has the form of a product of configurational and spin components; thus, in reality we have a four-state situation. It is well known [10-12] that for atoms the wave functions of excited electron states are much broader than the others. As a result, the exchange integrals $J^{11}_{ij}$ between a pair of nearest electrons both in the ground state (and similarly $J^{12}_{ij}$ between a particle in the ground state and one in the excited state) can be regarded as small compared with the exchange integral $J^{22}_{ij}$ between two particles in excited states. As a zeroth approximation for the exchange interaction in expression (2.1), we take $J^{11}_{ij}$ and $J^{12}_{ij}$ negligibly small compared with $J^{22}_{ij}$. This is not a principal limitation, but it makes it possible to use only one interaction parameter $J^{22}_{ij}$. Such an approach is very close to Vonsovskii's s-d exchange model in the theory of magnetism [13,14], according to which s-electrons are responsible for electric conductivity while d-electrons form the magnetic properties of solids. A comparison of exchange integrals for ground and excited states of rare-earth metals can also be found in [15]. Following the above restrictions and setting the configurational energy of particles in the ground state equal to zero, the model Hamiltonian (2.1) takes the form

$$\hat{H} = \varepsilon_0 \sum_i \hat{C}_i - \frac{1}{2} \sum_{ij} J^{22}_{ij}\, \hat{S}_i \hat{C}_i\, \hat{S}_j \hat{C}_j - \gamma \sum_i \hat{S}_i \,. \qquad (2.5)$$

Here, $\varepsilon_0 \equiv \varepsilon_2 - \varepsilon_1$ plays the role of the energy gap between the excited and ground states of a particle at each site, and $J^{22}_{ij} > 0$ corresponds to ferromagnetic exchange coupling. In certain features, the model described by the Hamiltonian (2.5) is close to the well-known Blume-Emery-Griffiths (BEG) model [16], introduced to simulate the thermodynamic behaviour of 3He-4He mixtures, and to numerous modifications of the BEG model used to interpret the properties of diluted magnets [17-23]. The papers in this field are generally based on the Ising Hamiltonian with spin equal to unity, together with some assumption about the character of the randomness of the magnetic particles in the lattice; briefly speaking, those models are three-state site models, owing to the three possible projections of the $S^z$ operator. A quite different physical situation is realised in our investigation: the physical interpretation of expression (2.5) leads to a two-level, four-state site model (two spin orientations for a particle in the ground state and two spin orientations for the same particle in the excited state). Temperature helps to increase the excited-state occupation but destroys the spontaneous magnetization caused by the external field and by the exchange interaction between spin particles in the excited states. Such an approach, in our opinion, is new and quite interesting. Moreover, the two quantities c and s, though physically different, are calculated here self-consistently. The method of the self-consistent field approximation is used to calculate the statistical and thermodynamical properties of the system described by the Hamiltonian (2.5).
From the physical point of view, this approximation corresponds to neglecting the quadratic fluctuations of identical physical quantities at a pair of different sites of the crystalline lattice, and likewise of different physical quantities at a common site. It is well known that this approximation is valid over a wide range of temperatures and external fields, except in the immediate neighbourhood of the phase transition point [24,25]. Thus, we accept the following expression for the products of operators:

$$\hat{A}_i \hat{B}_j \simeq \langle \hat{A} \rangle\, \hat{B}_j + \langle \hat{B} \rangle\, \hat{A}_i - \langle \hat{A} \rangle \langle \hat{B} \rangle \,, \qquad (2.6)$$

where $\langle \hat{C}_i \rangle \equiv c$ and $\langle \hat{S}_i \rangle \equiv s$ are the mean values of the operators $\hat{C}_i$ and $\hat{S}_i$, respectively, according to the Gibbs distribution based on the Hamiltonian (2.5) and expressions (2.6). Restricting the summation over the sites i, j to nearest neighbours ($\sum_j J^{22}_{ij} = X_v J$, where $X_v$ is the number of nearest neighbours and J is the interaction constant), we obtain for $\hat{H}$ the mean-field form (2.7). Thus, the Hamiltonian (2.7) describes an N-particle system with one spin particle at each site; at the same time, each particle, staying at one of the two energy levels, can take up one of four states. The total number of particles in the system is constant, so the canonical Gibbs ensemble may be used. Contrary to the well-known situations of independent subsystems, namely 1) a pure or diluted magnetic system (c = const) or 2) a simple lattice-gas system (s = const), the Hamiltonian (2.7) describes both subsystems bound together, and the values of c and s are closely connected, depending on the temperature T, the external field $\gamma$ and the energy gap $\varepsilon_0$. The partition function of the system described by the Hamiltonian (2.7) is evaluated in the standard way, and after performing this operation we obtain the expression (2.9) for the free energy F per site.

Order parameters c and s

The free energy (2.9) is a function of three independent parameters: the temperature T, the external field $\gamma$ and the excited-level gap $\varepsilon_0$. Using the self-consistent field approximation, we introduced two order parameters c and s which, as functions of T, $\gamma$, $\varepsilon_0$, must satisfy the conditions of thermodynamic stability in the form [26,27]

$$\frac{\partial F}{\partial a_i} = 0 \,, \quad i = 1, 2 \,, \qquad (3.1)$$

where $a_1 = c$ and $a_2 = s$ in our instance. On the other hand, the system entropy S and the magnetization per site s can be calculated as the partial derivatives $S = -\partial F / \partial T$ and $s = -\partial F / \partial \gamma$. Based on equations (3.1), we obtain a pair of coupled equations for c and s; combining them, we arrive at the system of equations (3.5) in a more convenient form. It can be shown that for arbitrary values of $X_v J$, the relationship (3.6) between c and s holds, which at $\gamma = 0$ and $\varepsilon_0 = 0$ transforms into a simpler form (3.7). Equations (3.6) and (3.7) show an unambiguous relation between c and s for different values of the external parameters T, $\gamma$, $\varepsilon_0$. We start the analysis of equations (3.5) from the situation where the occupation process and magnetic ordering are completely independent; in other words, the occupation c of the excited level is determined by temperature only, the effect of magnetic ordering being absent (s = 0). In this instance, according to (3.5), we obtain a purely thermal occupation law and, correspondingly, the limiting case of a fixed concentration $c_0$ of particles in the excited states, independent of temperature and set, for example, by external radiation. The dependencies of c on the reduced temperature $t = T (X_v J)^{-1}$ illustrate this behaviour. A special role of the reduced gap $\varepsilon \equiv \varepsilon_0 (X_v J)^{-1}$ is connected with a strong correlation between the processes of excited-level occupation and the spontaneous ordering of the magnetic particles at this level.
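The self-consistent structure of this problem can be illustrated numerically. The following sketch iterates a single-site mean field over the four states (level C in {0, 1}, spin sigma = +/-1); the assumed site energies E(C, sigma) = eps*C - sigma*(h + XvJ*c*s*C) are a hypothetical stand-in for the paper's elided equations (3.5), with temperature and field measured in units of X_v J.

```python
import numpy as np

def solve_cs(t, h, eps, XvJ=1.0, iters=4000, mix=0.1):
    """Damped fixed-point iteration for the occupation c and magnetization s.

    Hypothetical single-site mean-field energies (an illustration, not the
    paper's elided equations (3.5)):
        E(C, sigma) = eps*C - sigma*(h + XvJ*c*s*C),  C in {0,1}, sigma = +/-1.
    Temperature t and field h are measured in units of XvJ.
    """
    beta = 1.0 / max(t, 1e-12)
    c, s = 0.5, 0.9                      # s > 0 selects the ordered branch
    for _ in range(iters):
        Z, c_num, s_num = 0.0, 0.0, 0.0
        for C in (0, 1):
            for sig in (-1, 1):
                w = np.exp(-beta * (eps * C - sig * (h + XvJ * c * s * C)))
                Z += w
                c_num += C * w
                s_num += sig * w
        c = (1 - mix) * c + mix * c_num / Z   # damped update for stability
        s = (1 - mix) * s + mix * s_num / Z
    return c, s

# Example sweep over the reduced temperature at small field and eps = 0.3:
for t in (0.1, 0.2, 0.3, 0.4):
    c, s = solve_cs(t, h=0.01, eps=0.3)
    print(f"t = {t:.2f}:  c = {c:.3f}, s = {s:.3f}")
```

The damped update mirrors the self-consistency discussed in the text: the exchange field felt by a spin depends on the product c s, which in turn depends on the thermal weights it generates.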
Heating facilitates the filling of the excited level, while the ordered magnetic state weakens. The measure of these processes is the energy of the system, i.e., the ratio between its growth upon excitation and its decrease due to magnetic ordering. We observe a situation close to the phenomenon of percolation in systems with a constant concentration of magnetic particles [28]. However, the percolation phenomenon in the system studied occurs self-consistently: the occupation of the excited level by magnetic particles depends on the degree of magnetization. That is why, for the first-order magnetic phase transition, a sharp decrease (dip) of the excited-level occupation is observed above the phase transition temperature. Taking into account that the magnetization of the system is

$$M = \mu N s \,, \qquad (3.10)$$

where $\mu$ is the magnetic moment of an individual particle, we have obtained from equations (3.5) the expression for the generalized magnetic susceptibility as a function of temperature and external magnetic field. Spontaneous magnetization is then impossible at any finite temperature, but the system is sensitive to an infinitesimally small external field at T → 0 K.

Thermodynamic functions

Using expressions (3.5), we can rewrite equation (2.9) for the free energy in the form (4.1). This representation of F takes into account the dependencies of c and s on t and h and is useful for calculating various thermodynamic functions. Differentiating the free energy (4.1) with respect to temperature, we obtain the expression for the entropy [29,30]. A specific role of zero external field in the $\varepsilon > 0.5$ instance is demonstrated in figure 4 (d): at T → 0 K, the total entropy of the system in zero field (h = 0) does not vanish, because magnetic ordering is not realised, while all particles remain at the ground level (c = 0) and the configurational part of the entropy equals zero. Comparison of the numerical values of the entropy to the right and to the left of the transition point makes it possible to find the latent heat of the first-order magnetic phase transition:

$$q = T_c (S_r - S_l) \,, \qquad (4.4)$$

where $S_r = S_+$ is the limit value of S as the temperature falls to the phase transition point $T_c$, and $S_l = S_-$ is the limit value of S as the temperature rises to this point. The behaviour of the latent heat (in arbitrary units) under the external field h is presented in figure 5: q is nonzero for $h < h_c$ and vanishes at $h = h_c$ for all $\varepsilon$. Every curve possesses a slight maximum connected, in our opinion, with the different temperature behaviours of the parameters c and s. The heat capacity follows from the well-known thermodynamic relation $C_V = T\, \partial S / \partial T$. In figures 2 (d) and 2 (h), the right-hand maximum in curve 2 and the left-hand maximum in curve 6 correspond to a rapid growth of the occupation of the excited level (c), while both supplementary maxima correspond to a rapid growth of the magnetization (s). Thus, for $\varepsilon > 0.5$, the temperature behaviours of c and s are noticeably independent; such a tendency is also visible for $\varepsilon = 0.45$ [see figure 6 (c)]. It should be underlined that the heat capacity calculated here corresponds only to the site-excitation and spin-orientation degrees of freedom; other important degrees of freedom, for example vibrational or rotational ones, are not taken into account.
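With the same hypothetical solver, the generalized (field-dependent) susceptibility can be probed by a central finite difference in the field, mirroring how $\chi$ tracks the temperature dependencies of c and s discussed above. This reuses solve_cs from the previous sketch and is an illustration, not the paper's closed-form expression.

```python
def susceptibility(t, h, eps, dh=1e-4):
    """Generalized susceptibility chi = ds/dh via a central difference,
    using the hypothetical solve_cs sketch above."""
    _, s_plus = solve_cs(t, h + dh, eps)
    _, s_minus = solve_cs(t, h - dh, eps)
    return (s_plus - s_minus) / (2.0 * dh)

for t in (0.10, 0.25, 0.40):
    print(f"t = {t:.2f}:  chi = {susceptibility(t, 0.05, 0.3):.2f}")
```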
The points of visible change in the behaviour of the thermodynamic functions (jumps of the order parameters c and s) are connected with a transition of the free energy (4.1) from one branch to another. The numerical analysis shows the existence of one to three different branches of F in the whole temperature range from 0 to ∞; two of them realise a minimum of F and the third realises its maximum. Naturally, the real behaviour of the investigated system is determined by the branches with minimal F. The temperature points of critical behaviour are found from the continuity condition of the free energy (4.1). Consequently, the excitation energy gap $\varepsilon = 0.5$ is the upper limit above which all changes in the system have a continuous character. It is hard to compare the theoretical results obtained here with others, owing to the lack of an exact counterpart of our model in the literature. However, experimental investigations of magnetic phenomena induced by thermally or photo-excited electrons in complex metal or metal-organic species are now intensively carried out [31-34]. Photo-induced magnetic structures based on an organic matrix intercalated with Rb and Cs atoms demonstrate thermodynamic properties above the Curie temperature close to those presented in figures 3 and 6. Those structures are in general stable at low temperatures, and only in the instance of the V-Cr Prussian Blue analogue compounds [34] is the critical temperature of the magnetized state about 350 K. Molecular-based magnets of this type are a relatively new class of materials with good prospects for use as spin-electronics memory cells, and the physical mechanisms of their operation can be quite fully explained within the framework of the two-level, four-state model discussed here.

Conclusions

A self-consistent theory for the description of excited-level occupation and spin-orientation ordering in a crystalline lattice is proposed. The Ising-like magnetic interaction between particles in the excited states is taken into account. When this interaction is omitted, the occupation number c and the spin order parameter s become continuous functions of temperature and external field, which is characteristic of two independent subsystems: a noninteracting two-level site system and a diluted magnetic system. Taking the exchange interaction into account significantly changes the situation. Depending on the excited-level gap and on the magnitude of the external magnetic field, the transition to a spontaneously magnetically ordered state of the system is possible only for fields less than the critical one. Moreover, for fields below the critical value there holds a first-order phase transition, and when the field reaches the critical value, the transition becomes second order. For fields larger than the critical one, the spontaneous magnetization of the system is suppressed by the external field (the induced magnetization is proportional to the field magnitude). These effects take place for a reduced excited-level gap $\varepsilon \equiv \varepsilon_0 (X_v J)^{-1}$ less than 0.5. For $\varepsilon \geq 0.5$, no spontaneous magnetic ordering is possible, and the total magnetization is completely determined by the external magnetic field. The behaviour of the generalized (field-dependent) magnetic susceptibility $\chi$ is in accordance with the temperature dependencies of the microscopic parameters c and s.
Its infinite peak at the phase transition points for fields below and equal to the critical one turns into a finite but pronounced peak at higher fields. The width of the $\chi$ curve near $T_c$ is much smaller in the first-order phase transition instance ($h < h_c$) than in the second-order instance ($h = h_c$). The latent heat q of the first-order phase transition in the investigated system is strongly field dependent, vanishing for $h \geq h_c$ in accordance with the change of the order of the phase transition. The heat capacity (specific heat) behaves in the usual manner and for $h \geq h_c$ remains finite at all temperatures. All these peculiarities of $\chi$, q and $C_V$ depend on the value of $\varepsilon$ in accordance with the observations above. A specific place of the excited-level gap $\varepsilon = 0.5$ is established as the limit value between the jump-like and continuous behaviour of the thermodynamic functions of the investigated system. Based on all of this research, we may conclude that in the investigated system the correlation between the excited-level occupation and the magnetic ordering plays a decisive role. Specific percolation effects concerning the magnetic properties of the system take place, and they manifest themselves in a self-consistent way: the disappearance of magnetic ordering entails a sharp decrease in the probability of occupation of the excited levels by particles. Additional stimuli for particle excitation (optical radiation, for example) are quite promising in the study of the mechanisms of photo-induced magnetic phenomena, which are candidates for future applications in spin-electronics. The proposed model also possesses other peculiarities based on the complex interplay between the excited-level gap ($\varepsilon$) and the magnitude of the external field (h); a detailed study of these peculiarities will be a matter of future investigations.
4,417
2015-10-23T00:00:00.000
[ "Materials Science", "Physics" ]
ASM-VoFDehaze: a real-time defogging method of zinc froth image

When the ambient temperature is low, a large amount of water mist and dust inevitably appears around the zinc flotation cell, forming haze that seriously affects the extraction of flotation froth image features. General defogging methods for natural images have difficulty obtaining satisfactory results on such industrial haze images. Therefore, we propose a real-time defogging method based on ASM-VoFD (Atmospheric Scattering Model and Variable-order Fractional Differential). First, the dark pixel ratio is used to detect fog in the froth image, which avoids the redundant computation caused by unnecessary defogging operations. Second, a linear transformation of the atmospheric scattering model is used to calculate the initial transmission map, a Gaussian filter is used to optimise the initial transmittance, and the haze-free image is restored with the estimated atmospheric light. Finally, a variable-order fractional differential operator is used to enhance the edges and texture details of the restored froth image, which solves the problems of blurred edges and low contrast. The experiments show that the algorithm has a good defogging effect on industrial images, enhances the edges of the image, and can be implemented in O(N) time to meet the application requirements of real-time flotation monitoring.

Introduction

Froth flotation is a beneficiation method that separates minerals according to the physical and chemical properties of the mineral surface. In froth flotation, bubbles rise in the pulp with selectively adhered mineral particles at the gas-liquid interface, and the froth formed on the pulp surface is then scraped off to achieve beneficiation. The surface visual characteristics of these froths (such as froth size, colour, texture, and flow rate) are closely related to process indicators, working conditions, and operating variables, and can serve as an important basis for judging the effect of mineral separation operations. The accurate extraction of froth image features is the premise of machine-vision monitoring of the flotation production process (Aldrich et al., 2010). However, the flotation industry operates in a harsh environment with a large amount of dust and fog as well as uneven illumination, resulting in serious pollution of the images collected by surveillance video (Jinping et al., 2010). In particular, when the temperature in the plant is low (below 5 °C), the water mist produced by the mineral particles and the water droplets splashed during froth defoaming is substantial, forming a thick mist that covers the entire flotation cell and seriously affects the accurate extraction of bubble characteristics. Therefore, dehazing preprocessing of the froth image is an important task in obtaining froth parameters. The machine-vision flotation state monitoring system (shown in Figure 1(a)) not only obtains traditional process parameters but also collects a large amount of visual image information related to working conditions. These parameters are then fused with characteristic information to realise intelligent identification and accurate trend forecasting of multi-modal conditions in the flotation process. The collection and processing of data is very important to the analysis results (Liang et al., 2019).
Long-term observation in practice shows that the flotation plant suffers from insufficient light at night and a large amount of floating dust; especially when the temperature is low, haze appears, as shown in Figure 1(b). Different flotation tanks (zinc-roughing, -refining, and -scavenging) carry froth of different zinc content, and the froth shows different shades of grey-brown. Figures 1(c), 1(d) and 1(e) show foggy froth images from the three flotation tanks, respectively. In particular, the quality of foggy froth images taken in dark weather or in low light at night (as shown in Figure 1(e)) is very low, which urgently requires a real-time and effective defogging algorithm to improve the clarity of froth images. Figure 3 shows clear images, hazy images and the corresponding histograms for the zinc-roughing tank: the grey-value range of the foggy froth image is greatly reduced and shows a single-peak state, because a layer of haze is added to the bubble map and the image is off-white as a whole. Compared with natural images, froth images have the following characteristics. (1) Only a small number of black and dark patches appear in a froth image; in particular, froth in the zinc-refining tank carries a high mineral content and does not break easily, so pixels with a grey value below 50 are hard to find. Defogging algorithms based on the dark channel prior (He et al., 2010; C. Li et al., 2020; J. B. Wang et al., 2015; Wei et al., 2020) work very well on natural images but are not suitable for flotation froth defogging: the estimated transmittance would be too high, and the recovered image could not truly reflect the froth characteristics. (2) Because the flotation froth moves during shooting, the clear image corresponding to a collected blurred image cannot be found. This rules out some advanced and effective defogging algorithms, for example, methods that estimate scene depth from multiple images of the same scene (Narasimhan & Nayar, 2000, 2002) and supervised deep-learning network methods (Cai et al., 2016; B. Li et al., 2017). (3) The distance between the froth surface and the industrial camera is relatively stable; that is, the depth of field of the froth image is basically uniform. Except for the reflected bright spots on the froth tops, there are no large colour jumps at the edges between bubbles, which favours restoration of hazy images based on a linear transformation of the atmospheric scattering model. Such linear-transformation methods achieve a good defogging effect when the depth of field is basically uniform (Alajarmeh et al., 2018; Ge et al., 2015; Ju et al., 2017; W. Wang et al., 2017); crucially, they have a great advantage in time efficiency. (4) The main light source when collecting froth images is the high-brightness incandescent lamp set in the installation box of the industrial camera, with atmospheric light and factory lighting as auxiliary sources. However, froth images taken in dark weather and at night are low-light images with unclear edges, so many otherwise good defogging algorithms cannot achieve the ideal defogging effect.
The purpose of this study is to develop a suitable defogging method for flotation froth images, to improve the accuracy of froth-image feature extraction, and to meet the application requirements of real-time machine-vision monitoring. To achieve this goal, we first analyse the differing characteristics of clear and foggy froth images and identify foggy froth images using the average brightness ratio and the dark pixel ratio. Second, we estimate the transmission map from the prior information of the local image energy of gradient (EOG) and the linear transformation of the atmospheric scattering model (ASM), optimise the initial transmittance by Gaussian filtering, and reconstruct the clear image with the estimated atmospheric light value. Finally, for the blurred edges of restored low-light images at the froth highlights, we use the variable-order fractional differential operator to enhance the edges, texture details and brightness so as to obtain the optimal defogging performance. The main contributions of this research are as follows. (1) Before defogging, we detect the froth image by extracting the dark pixel ratio, which effectively identifies foggy images and distinguishes them from other blurred froth images (motion blur and defocus blur); this provides a reference for the recognition of blurred images in the flotation industry. (2) In the defogging algorithm, the maximisation of the local energy of gradient function is adopted as prior information; it is easy to realise and independent of colour information, and is therefore well suited to the defogging of the zinc flotation froth image (monochromatic, greyish brown). (3) Qualitative and quantitative experiments show that this method is superior to five advanced defogging methods in image-recovery performance and time efficiency; in addition, edge enhancement is performed on the defogging results of low-light foggy images, which improves the applicability of the algorithm.

Related work

At present, dehazing algorithms fall mainly into two categories: image enhancement and image restoration. Image enhancement does not consider the causes of image degradation and is widely applicable: it can effectively improve the contrast of a foggy image, enhance image details, and improve the visual effect, but it may lose some information in prominent regions. Typical methods include histogram equalisation (Stark, 2000), retinex (Zeng et al., 2014; Zhou & Zhou, 2013), the wavelet transform (Khmag et al., 2018), and contrast-enhancement algorithms (Kim et al., 2011). Defogging methods based on a physical model establish a degradation model by analysing the principle of atmospheric light scattering and use the model to obtain a clear fog-free image. Such methods are well targeted, dehaze naturally, and generally involve no information loss; the key point is the estimation of the model parameters. McCartney (1976) first used this principle (Mie scattering theory) to establish an atmospheric scattering model. Oakley and Satherley (1998) then began to use Mie scattering theory in restoration studies of images taken in severe weather. Model-based defogging methods have since attracted increasing attention and have become a research hotspot in the field of image processing.
Fattal (2008) estimated the reflectivity of the scene and the transmittance of the medium under the assumption that the transmittance and the surface shading are locally uncorrelated. This method works well on lightly hazed images but may fail for images with heavy haze. Tarel and Hautiere (2009) introduced a contrast-enhancement-based method to eliminate haze; the algorithm is simple and efficient, but its parameters cannot be adjusted adaptively. The best-known algorithm is single-image dehazing based on the dark channel prior theory proposed by He et al. (2010), although its optimisation step takes considerable time when estimating the transmittance map; extensive follow-up studies have built on this theory (Fujita & Fukushima, 2017; Khmag et al., 2018). Kim et al. (2013) enhanced the image by maximising the contrast block by block, minimising the information loss caused by pixel truncation, and reducing the colour distortion of the restored image as far as possible. W. Wang et al. (2017) proposed a fast single-image haze-removal method based on a linear transformation; the algorithm removes fog effectively in images with little change in depth of field, but in low-light fog images the edges of the restored image are blurred. Wu et al. (2021) proposed an atmospheric-scattering linear-transformation method based on a local-image-entropy-maximisation prior to estimate the transmittance of image blocks, optimising the transmittance by guided filtering; the algorithm has good defogging performance and high efficiency and can meet the needs of industrial applications. These algorithms (Kim et al., 2013; Tarel & Hautiere, 2009; W. Wang et al., 2017; Wu et al., 2021) use prior information to transform the atmospheric scattering model and so obtain a transmittance estimate; they are suitable for foggy images with little change in depth of field and are highly efficient. With the development of neural networks and deep learning, researchers have begun to replace traditional image-dehazing methods with learning-based methods. Tang et al. (2014) proposed a novel learning-based transmission estimation method, analysing the relevant characteristics of multi-scale haze in a random-forest regression framework, but this feature-fusion method relies heavily on dark channel features. Cai et al. (2016) proposed an end-to-end training model that uses a neural network to estimate t(x) in the atmospheric degradation model: the model takes the hazy image as input, outputs the transmission map t(x), and then uses the atmospheric degradation model to restore the haze-free image. B. Li et al. (2017) proposed an image-defogging model built on a convolutional neural network (CNN), called the all-in-one dehazing network (AOD-Net); AOD-Net does not estimate the transmission rate and atmospheric light as previous models do but directly generates the defogged image. These learning-based algorithms require many haze-free and hazy image pairs as training data; if the training sample is not large enough, they cannot capture the true depth information of the image, especially in edge areas. In the process of capturing flotation froth images, a foggy image and the corresponding fog-free image cannot both be obtained, owing to froth movement and the fog mask; at the same time, changes in illumination affect the extracted fog characteristics and hence the recovery result.
Dehazing algorithm

The method of removing haze from the flotation froth monitoring video consists of the following steps: (1) foggy image recognition (dark pixel ratio feature); (2) defogging based on a linear transformation of the atmospheric scattering model; (3) edge enhancement of the restored image. First, the foggy image recognition algorithm is described in detail.

Foggy image recognition

In a haze image, the human eye's recognition ability is lower than in normal weather, and it is more sensitive to the lightness information than to the colour information of the image. Therefore, the dark pixel ratio is selected as the identifying feature of the foggy image. Figure 2 shows a flowchart of froth foggy image recognition. Under the same illumination conditions, the lightness of a clear froth image and a foggy image in the same flotation cell is very different, which is reflected mainly in the histogram of the lightness component: the histogram of the clear image is evenly distributed and smooth, while the histogram of the foggy froth image becomes narrower and thinner and moves to the right as a whole, as shown in Figure 3. Compared with clear images, the lightness of dark pixels therefore increases in foggy images, resulting in a significant reduction of the dark pixel ratio. Performing histogram statistics on the lightness component, the dark pixel ratio D is defined as

$$D = \sum_{k=0}^{100} H_k \Big/ \sum_{k=0}^{255} H_k \,, \qquad (1)$$

where $H_k$ is the histogram count at grey level k; the dark pixels $l_{dark}$ are those whose grey values lie in the range [0, 100], and the denominator $\sum_{k=0}^{255} H_k$ counts all pixels. We chose froth images of the zinc-roughing flotation tank for comparative analysis; experimental results are shown in Figure 3. The dark pixel ratio of clear images is larger than that of motion-blurred and defocus-blurred images, while the dark pixel ratio of foggy images is much smaller than that of the first three, by almost a factor of ten. Thus, the dark pixel ratio can be used to classify foggy froth images.

Atmospheric scattering model of the flotation industry site

The atmospheric scattering model is as follows:

$$I(x) = J(x)\, t(x) + A\, (1 - t(x)) \,, \qquad (2)$$

where I is the intensity of the haze image observed by the imaging device, J is the intensity of the haze-free image to be restored, A is the atmospheric light, and t is the transmittance, which reflects the ability of light to penetrate the haze. In Equation (2), J(x)t(x) represents the part of the light reflected from the target that enters the camera after atmospheric scattering and attenuation; this part decays exponentially as the scene depth increases. A(1 − t(x)) is the part of the atmospheric light that enters the camera after scattering; this part increases with the scene depth, causing blurring and colour-shift distortion. The parameters t(x) and A have to be estimated to restore the haze-free image, and this inverse problem is ill-posed. The atmospheric scattering model of the flotation froth foggy image is presented in Figure 4. Clear processing of the flotation froth foggy image requires strong prior information or a reasonable structure to solve the model. The algorithm proposed in this section performs a linear transformation of the scattering model and uses the local image EOG together with a fidelity constraint to restore the haze-free froth image. This method not only achieves a good dehazing effect but also retains the textural details of the image.
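A minimal sketch of the fog-detection step described above: the dark pixel ratio D of Equation (1), computed from the 256-bin histogram of the lightness component. The decision threshold is an assumption for illustration; the paper only reports that D is roughly ten times smaller for foggy frames than for clear ones.

```python
import numpy as np

def dark_pixel_ratio(gray):
    """Dark pixel ratio D of Equation (1): pixels with grey value <= 100
    over all pixels, from the 256-bin histogram of an 8-bit image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    return hist[:101].sum() / hist.sum()

def is_foggy(gray, threshold=0.02):
    """Flag a frame as foggy when D drops below a cut-off. The value 0.02
    is illustrative; in practice it would be calibrated from clear and
    foggy reference frames of the same flotation cell."""
    return dark_pixel_ratio(gray) < threshold

# Usage with a synthetic bright (fog-like) frame:
frame = np.clip(np.random.normal(180, 20, (240, 320)), 0, 255).astype(np.uint8)
print(dark_pixel_ratio(frame), is_foggy(frame))
```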
Furthermore, the transmittance is quickly optimised, thereby meeting real-time industry requirements.

Estimation of atmospheric light

The accurate estimation of the atmospheric light A in the atmospheric scattering model is directly related to the accurate calculation of the transmittance and ultimately affects the colour and brightness of the restored image. Generally, in the dehazing of natural images, if the atmospheric light A is overestimated, the overall image is darker and visibility is low; if A is underestimated, some parts of the image (such as bright areas, light edges, and white objects) show obvious colour distortion. As illustrated in Figure 1, the main light source for the froth video is the light-emitting diode installed at the top of the camera box. Owing to the physical characteristics of the froth, bubbles with adhered mineral particles burst automatically after a certain period under the action of gravity: the mineral content on the top of a bubble slowly decreases until only a water film remains, which bursts immediately. The reflection of the light source at the water-air interface is total reflection, so bright spots appear on the froth tops in the images we obtain. In the fog-free state, the froth highlights are extremely bright and basically pure white; in the foggy state, they are shrouded in haze and appear grey. In a foggy froth image, the bright spots on the froth therefore reflect the atmospheric light of the corresponding image region. In this section, the froth image is first converted to greyscale, a threshold is then set to extract the highlight pixels, and the average value of these pixels is taken as the estimate of the atmospheric light A. Many experiments have shown that the estimated A differs between flotation cells: approximately 220 for the zinc-roughing tank, 175 for the zinc-refining tank, and 190 for the zinc-scavenging tank. These estimates effectively reflect the real atmospheric light conditions and allow the texture details of the fog-free image to be restored.

Estimation of the transmission map

When the atmospheric light A is known, the accurate estimation of the transmittance t(x) determines the effect of image dehazing. This study proposes real-time dehazing through a linear transformation of the atmospheric scattering model, using the maximisation of the local image EOG and fidelity as the prior for estimating the transmission map.

(a) Linear transformation of the atmospheric scattering model. According to Equation (2), the dehazed image J(x) can be derived as

$$J(x) = \frac{I(x) - A}{t(x)} + A \,. \qquad (3)$$

Under the assumption that the depth is the same within a small local image block (in fact, the froth layer formed by bubbles floating up to the pulp surface lies at a basically uniform depth below the camera above the flotation tank), the transmission rate t of each small block is a constant. Then, according to Equation (2), each pixel p in the image block Ω(y) can be expressed as

$$J(p) = a\, I(p) + b \,, \qquad (4)$$

where the slope and intercept follow from Equation (3) as $a = 1/t$ and $b = A(1 - 1/t)$. Equation (4) shows that within a local image block, a linear relationship exists between the hazy image I(p) and the clear haze-free image J(p).
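A small sketch of the atmospheric light estimation just described: greyscale conversion, highlight extraction by threshold, and averaging. The threshold value of 230 is an assumption; the paper does not state the exact cut-off.

```python
import numpy as np

def estimate_atmospheric_light(img, highlight_thresh=230):
    """Average the froth-highlight pixels: convert to grey, keep pixels
    above a brightness threshold (the reflective froth tops), and take
    their mean as A. The threshold 230 is an illustrative assumption."""
    gray = img.mean(axis=2) if img.ndim == 3 else img.astype(float)
    highlights = gray[gray >= highlight_thresh]
    if highlights.size == 0:          # fallback: brightest 0.1% of pixels
        k = max(1, gray.size // 1000)
        highlights = np.sort(gray.ravel())[-k:]
    return float(highlights.mean())
```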
The slope a and intercept b change with the transmission rate t.

(b) The maximisation prior of the EOG-fidelity objective function. The clarity of the flotation froth foggy image is evaluated by the energy of gradient (EOG) function. This function takes the sum of the squared differences between the grey values of adjacent pixels in the x and y directions as the gradient value of each pixel and then accumulates the gradient values of all pixels as an evaluation of image sharpness. For an image f(x, y), the energy of gradient is

$$F_{\mathrm{eog}} = \sum_x \sum_y \left[ \big(f(x+1, y) - f(x, y)\big)^2 + \big(f(x, y+1) - f(x, y)\big)^2 \right] ,$$

which can equivalently be computed by spatial filtering of the image (denoted ⊗) with the difference mask templates $T_x$ and $T_y$ applied over the pixel neighbourhood corresponding to each mask. Equation (2) shows that if the transmittance t(x) is small, J(x) may exceed the upper grey limit of 255, and if t(x) is large, J(x) may fall below the lower limit of 0. Grey values outside the range [0, 255] are automatically cropped, which causes colour distortion in the restored fog-free image. Therefore, this study uses the maximisation of the local image EOG as the prior constraint for estimating the optimal transmittance t(x) and also designs a subfunction (the fidelity function), an image colour metric based on information fidelity, built from

$$S_c(t) = \frac{1}{N} \sum_p \mathbf{1}\{0 \le J_c(p) \le 255\} \,,$$

where p denotes a pixel of the restored image, N is the total number of pixels, and $S_c(t)$ is the ratio of pixels of colour channel c of the restored haze-free image J that remain within [0, 255] for a given transmittance t. The smaller the ratio of pixels to be cropped, the larger the information fidelity $F_{\mathrm{fidelity}}$. Maximising this function controls the ratio of pixels that are cut off, so that the local image EOG is maximised while the restored image stays clear and retains its original colour. The proposed algorithm combines the two terms into an EOG-fidelity objective function $F_{\mathrm{obj}}$ that constrains the estimated transmittance, and the optimal estimate is

$$t^{*} = \arg\max_t F_{\mathrm{obj}}(t) \,,$$

where $F_{\mathrm{obj}}(t)$, $F_{\mathrm{eog}}(t)$, and $F_{\mathrm{fidelity}}(t)$ denote, for a given transmittance t, the EOG-fidelity objective, the EOG, and the fidelity of the restored haze-free image J. As shown in Figure 5, the lines with the highest and lowest peak values represent the changes of the local energy of gradient and the fidelity, respectively, and the middle line represents the change of the objective function $F_{\mathrm{obj}}$. In this example, when the transmittance t is less than 0.3, the objective function value is relatively low and the fidelity is much less than 1, indicating that within this range few pixel values of the image lie between 0 and 255, which does not meet our needs. When t exceeds 0.6, the fidelity is close to 1, which means that almost no pixels are cut off; at the same time, as t approaches 1, $F_{\mathrm{eog}}$ gradually decreases.
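The block-wise search implied by Equations (3)-(4) and the EOG-fidelity prior can be sketched as follows. Combining the two terms as a product in F_obj is an assumption (the paper's exact combination is not reproduced in the text above), as is the discrete grid of candidate transmittances.

```python
import numpy as np

def eog(img):
    """Energy of gradient: summed squared forward differences in x and y."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return float((gx ** 2).sum() + (gy ** 2).sum())

def fidelity(J):
    """Fraction of restored pixel values that survive inside [0, 255]."""
    return float(((J >= 0) & (J <= 255)).mean())

def estimate_block_t(I_block, A, ts=np.linspace(0.05, 1.0, 20)):
    """Grid search of the block transmittance t maximising the
    EOG-fidelity objective; the product form of F_obj and the candidate
    grid are assumptions made for this sketch."""
    best_t, best_obj = float(ts[0]), -np.inf
    for t in ts:
        J = (I_block.astype(float) - A) / t + A    # Equation (3) per block
        obj = eog(np.clip(J, 0, 255)) * fidelity(J)
        if obj > best_obj:
            best_t, best_obj = float(t), obj
    return best_t
```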
Therefore, the optimal transmittance should be sought between 0.5 and 0.6 to ensure that the EOG-fidelity objective function reaches its maximum. Through the aforementioned method, the transmittance of a small image block can be estimated. To obtain the best transmittance, we divide the image into 8 × 8, 16 × 16, 24 × 24, and other small blocks, and estimate with a sliding window whose step is half the block size, which reduces block artefacts in the restored image to a certain extent. Repeating these steps yields the initial estimated transmission map.

Refining the transmission map

The transmission map obtained by the method in Section 3.2.3 is block-based, and the transmission rate is the same within each small area. In theory, the transmission rate of each pixel is different, especially at image edges or where the depth of field jumps; the restored image may therefore show halos or block artefacts. To solve this problem, we use Gaussian blur filtering to refine the block-based transmission map, reduce block artefacts, and enhance image details. The Gaussian blur uses a weighted average: for an image of scale M × N, the Gaussian function at pixel (x, y) is

$$G(x, y) = \frac{1}{2\pi\delta^2} \exp\left( -\frac{x^2 + y^2}{2\delta^2} \right) .$$

The larger the standard deviation δ of the normal distribution, the more blurred the image. The refined map is obtained by convolving the initial transmission map with this Gaussian kernel. Comparing the results of Gaussian, average, anisotropic, median, and guided filtering on the image, we find that Gaussian filtering performs better than mean and median filtering, while guided filtering performs best; considering processing time, Gaussian filtering and average filtering take the least time. Figure 6 shows the process of estimating the transmission rate, refining the transmission map, and generating a fog-free image. The impact of the various filtering algorithms on the results is analysed in Section 4.2.

Recovered image enhancement based on the variable-order fractional differential operator

To solve the problems of unclear edges and low contrast in the recovered image, the variable-order fractional differential operator (Xu et al., 2015) is used to enhance the image; the flow chart of this method is shown in Figure 7. The Riemann-Liouville (R-L) fractional derivative is adopted, in whose definition Γ(·) denotes Euler's gamma function; the right and left R-L fractional derivatives are defined for order v ∈ (n − 1, n), n ∈ N. The R-L fractional derivative is discretised at the point x = x_j with step size h, where $C^v_j$ is the p-order discrete coefficient of the R-L fractional derivative. We use the fractional backward difference formula of order p (FBDF-p) to generate the discretisation coefficients $C^v_j$ through recurrence formulas, where the discrete order satisfies p ∈ {1, 2, 3}. In this paper, a template structure based on eight symmetric directions is used to construct the variable-order fractional differential mask, as shown in Figure 8.
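A compact sketch of the refinement and restoration steps: the block-wise transmittance is upsampled to full resolution, smoothed with a Gaussian filter (per the kernel above), and used to invert Equation (2). The sigma value and the lower clip on t are illustrative choices, and the sketch assumes a single-channel image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refine_transmission(t_blocks, block_size, shape, sigma=15.0):
    """Upsample the block-wise transmittance to pixel resolution and
    smooth it with a Gaussian kernel to suppress block artefacts;
    sigma and the lower clip on t are illustrative choices."""
    t_full = np.kron(t_blocks, np.ones((block_size, block_size)))
    t_full = t_full[:shape[0], :shape[1]]
    return np.clip(gaussian_filter(t_full, sigma=sigma), 0.05, 1.0)

def dehaze(I, t_map, A):
    """Invert Equation (2): J = (I - A) / t + A, clipped to 8-bit range."""
    J = (I.astype(float) - A) / t_map + A
    return np.clip(J, 0, 255).astype(np.uint8)
```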
The final enhanced froth image is then obtained by convolving the froth image with the variable-order fractional template M, sliding the template window from left to right and from top to bottom.

Experimental analysis

Experiments were carried out on flotation froth images with the proposed defogging and enhancement algorithm, demonstrating the effectiveness of the proposed method for clearing foggy flotation froth images.

Data sets

The visual data of the froth layer on the pulp in the flotation tank were collected by an industrial camera installed at the flotation site, and a foggy froth image data set was built for the defogging pretreatment. The camera is installed above the flotation tank, as shown in Figure 1(a). In the zinc flotation process, the life of a bubble from generation through deformation to rupture is a complex physical and chemical process, and the whole froth layer on the pulp undergoes dynamic changes such as displacement and collapse due to the surge of pulp and bubble scraping. Therefore, even for flotation froth produced in the laboratory, the clear image corresponding to a foggy image cannot be obtained by artificially adding fog. In the experiment, we captured froth images under different illumination conditions, from different flotation tanks, and at different haze levels at the flotation site, and established two image data sets named Image_nor (normal illumination) and Image_low (low illumination), each containing 90 images of 320 × 240 pixels. Each data set contains three groups of foggy froth images (30 images per tank) from the different flotation tanks (zinc-refining, zinc-roughing and zinc-scavenging), and each group contains images at light, medium and heavy fog levels (10 images per level).

Qualitative validation experiments

In this part, we verify the performance of the proposed algorithm qualitatively, mainly with respect to fog concentration, lighting conditions, transmission-map optimisation, and the restoration effect of the defogging algorithm.

Influence of various filtering methods on the dehazing effect

In refining the transmission map, different filtering methods have different effects. We apply Gaussian, average, guided, bilateral and median filtering, respectively, to the initial transmission map of a hazy image of the zinc flotation cell. The processing effects are shown in Figure 9, where the VER value is a no-reference image quality metric (see Section 4.3 for the calculation) and the TIME value is the execution time of optimising the transmission map. In the recovered images, the guided, bilateral and Gaussian filters optimise the transmission map better than the average and median filters. However, because the refinement processes of the guided and bilateral filters are computationally complex and time-consuming, they are not suitable for real-time industrial scenes. The median and average filters have similar effects, while the result of Gaussian filtering is closer to the true haze-free image than both, with clearer details and less time consumption.
The reason is that the simple average and median filters do not weight adjacent pixels in the smoothing operation, which leads to blurred edges, whereas in the Gaussian filter, the closer an adjacent pixel, the higher its weight. The bilateral filter is more effective but also more time-consuming. Therefore, on balance, we select the Gaussian filter as the method to refine the transmission map.

The enhancement effect on the initial recovered image

A hazy froth image from the zinc-roughing tank was selected as the experimental object; the initial restored image was obtained by the linear-transformation defogging algorithm of Section 3.2, and the final defogging result was then obtained by edge enhancement of the restored image. According to the definition of the visibility edge ratio in Section 4.3, we obtain the visible-edge images of the foggy image, the initial recovered image, and the final defogged image, as shown in Figure 10. The visible edges of the enhanced image are significantly more numerous than those of the original restored image.

Dehazing effect under different lighting conditions

At the zinc flotation site, changes in ambient light affect the brightness of the froth image. The method proposed in this paper for low-light foggy froth images enhances the edges and texture of the restored image after defogging and obtains a good defogging effect. As shown in Figure 11, the defogging effects of other classical methods (Berman et al., 2016; Cai et al., 2016; He et al., 2010; Petro et al., 2014; Tarel & Hautiere, 2009; W. Wang et al., 2017) are compared. This paper uses prior information and a linear transformation of the atmospheric scattering model to obtain the transmission rate, and the dehazing result after optimising the initial transmittance is satisfactory. However, in the sharpened results for low-light dense-fog images, the edges and the froth highlights are blurred; the variable-order fractional differential operator is therefore used to enhance the edges and texture structure of the recovered image, improving its energy of gradient (as shown in Table 1). Figure 11 shows that under normal or low light, the method proposed in this paper obtains restored images most consistent with the colour and texture characteristics of the froth itself.

The defogging effect of each algorithm under different fog densities

To test the dehazing performance of the algorithm under different fog densities, we select foggy froth images of low, medium, and high density from the zinc-roughing, zinc-refining and zinc-scavenging tanks. As shown in Figure 12, the proposed algorithm can effectively dehaze images of all densities; the only disadvantage is some distortion when dealing with images of high fog density. The DCP algorithm (He et al., 2010) is one of the most effective current dehazing algorithms, but it has a prerequisite: statistical analysis shows that natural images contain many rich colours or shadows, so the intensity of most pixels in the dark channel image is extremely low or close to zero. The flotation froth image is different from a natural image: it has a single colour, and most pixel values of the corresponding dark channel image are greater than 50.
If the DCP algorithm is used to remove the fog, the restored fog-free image cannot truly describe the colour and brightness of the froth. The Retinex algorithm (Petro et al., 2014) is an enhancement-based dehazing method with a certain visual defogging effect; it is efficient, but image distortion appears in some places. The NLDe algorithm (Berman et al., 2016) is effective in removing haze, but the result is often over-enhanced with colour distortion. The Visibresto2 algorithm (Tarel & Hautiere, 2009) has a certain effect on outdoor images but is of little use for industrial foggy images such as flotation froth. The DehazeNet algorithm (Cai et al., 2016) can remove haze at slight and moderate levels, but as the haze becomes denser its defogging ability is very limited. The algorithm proposed in this paper performs better than the other methods, especially in low light. The dehazing method based on the linear transformation of the atmospheric scattering model with Gaussian-filter optimisation has a short processing time and high efficiency, satisfying real-time industrial dehazing needs, and the enhancement of the restored image solves the edge-blur problem.

Quantitative validation experiments

In the process of acquiring flotation froth images, froth deformation, collapse, merging, rupture and irregular motion occur at any time and are unpredictable; a foggy froth image and the corresponding clear image therefore cannot be obtained simultaneously. Hence, we choose two no-reference criteria (the EOG described in Section 3.2.3 and the visibility edge ratio (Hautiere et al., 2008; Sun et al., 2021)) to evaluate the performance of the proposed algorithm. The difference in image pixel brightness is a direct reflection of spatial contrast. Hautiere et al. (2008) introduced a contrast definition well suited to digital images between two pixels x and y of an image f, in which M denotes the maximum brightness of the image. Let F(s) be the set of all pixel pairs (x, y) separated by distance s. An optimum threshold S_0 is set, and if C(S_0) > 0.05, F(S_0) is considered a visible edge. More detail about this method can be found in the references (Hautiere et al., 2008; Yu et al., 2011). According to the contrast value and threshold of each pixel, we obtain the visible-edge image. Let the set of points on the visible edges be ϕ, the number of visible-edge pixels be N_view-edge, and the number of all edge pixels be N_all-edge. The visibility edge ratio of the image is then defined in terms of λ = N_view-edge / N_all-edge. To verify the effectiveness of the proposed algorithm, we conducted a quantitative comparison of the algorithms on froth images of different fog levels in the three flotation tanks of Figure 12, as shown in Figure 13. The VER and EOG results show that the proposed algorithm (marked by the lines with the highest VER and EOG values), Retinex and NLDe obtain better visibility-edge-ratio values than the other algorithms, and the proposed algorithm obtains higher energy-of-gradient values than all the others. Combined with the dehazing effects in Figures 11 and 12, the Retinex algorithm has a certain defogging effect on light-fog images, but the restored images suffer serious colour distortion.
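A rough sketch of a VER-style metric as used for the no-reference evaluation above. The per-pixel contrast |Δf| / M is a simplified stand-in for the contrast definition of Hautiere et al. (2008), and the edge-set threshold is an assumption; only the 0.05 visibility threshold follows the text.

```python
import numpy as np

def visible_edge_ratio(gray, c_thresh=0.05, edge_thresh=2.0, M=255.0):
    """lambda = N_view-edge / N_all-edge with a simplified neighbour
    contrast |df| / M; the edge set (gradient > edge_thresh) stands in
    for the paper's edge detector, and edge_thresh is an assumption."""
    f = gray.astype(float)
    gx = np.abs(np.diff(f, axis=1, prepend=f[:, :1]))
    gy = np.abs(np.diff(f, axis=0, prepend=f[:1, :]))
    grad = np.maximum(gx, gy)
    edges = grad > edge_thresh            # all edge pixels
    visible = (grad / M) > c_thresh       # contrast-visible pixels
    n_all = edges.sum()
    return float((visible & edges).sum() / n_all) if n_all else 0.0
```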
The NLDe algorithm has a strong defogging ability, but no matter how the parameters are adjusted, the restored image is over-enhanced, the colour is dark, and the greyish brown of the froth cannot be restored. In the process of froth image defogging, the first step is fog detection. When the ambient temperature is low, the air near the flotation cell is humid and saturates easily, forming haze. This situation is relatively infrequent and depends on the season of the year. Therefore, it is necessary to detect haze before defogging the froth image. In order to improve detection efficiency, we set a temperature parameter: when the temperature is less than 5 °C, the system checks every hour whether the froth image is foggy, and the computational complexity of the detection process is O(MN) (where M × N is the image size). The second step is the dehazing algorithm based on the linear transformation of the atmospheric scattering model. Estimating the transmission map also runs in O(MN). In refining the transmission map, Gaussian filtering takes less time than the other filters, as shown in Figure 10. Thirdly, when edge enhancement is performed on the initial restored image with a 7 × 7 differential-operator mask and a fractional order of 0.5, the details of the froth image are enhanced without introducing redundant noise. We convert the RGB image into HSI space and enhance only the edges of the brightness component, which improves the efficiency of the operation without causing colour loss in the image. We compared the running times of different algorithms, as shown in Table 1. The sizes of the test froth images are 320 × 240 and 640 × 480. The computing platform is a 3.5 GHz 64-bit Intel(R) Xeon(R) CPU with 32 GB RAM, running Matlab 2018a. The classical DCP dehazing algorithm is time-consuming, although its time efficiency has been improved by many subsequent refinements. The Retinex algorithm approaches defogging from the image-enhancement side and has a low time cost, but its deficiency is that the colour of the restored image seriously deviates from the true colour of the froth. The Visibresto2 defogging algorithm takes about 0.5 s; it is not suitable for industrial image defogging, and its effect is very poor. The DehazeNet algorithm can recover high-quality froth images under normal light with no colour distortion; however, its defogging performance is greatly reduced for low-light foggy images. The algorithm proposed in this paper has great advantages in time efficiency, the restored image can truly describe the characteristics of the froth itself, and the image texture is rich in detail. Conclusions In the intelligent flotation automation system based on machine vision, defogging is only a pretreatment operation that provides clear froth images for subsequent feature extraction, and it must meet the real-time requirements of industry. Therefore, this paper proposes a fast and effective method to remove haze from flotation industrial froth images. The energy gradient of an image reflects its sharpness. In the experiment, the maximum local image energy gradient is used as a prior constraint to obtain the initial transmittance. The proposed method is simple to operate and computationally efficient, and does not need large amounts of data or lengthy network training.
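The energy-of-gradient (EOG) sharpness measure used in Table 1 and throughout the evaluation can be sketched as follows; this assumes the standard definition of EOG as the sum of squared first differences in both directions, since the paper does not spell out its exact normalisation.

```python
import numpy as np

def energy_of_gradient(img):
    """EOG sharpness metric: sum of squared forward differences.

    Larger values indicate sharper edges and richer texture, which is
    why edge enhancement raises the EOG of the restored froth image.
    """
    img = np.asarray(img, dtype=np.float64)
    dx = np.diff(img, axis=1)  # horizontal gradient
    dy = np.diff(img, axis=0)  # vertical gradient
    return float((dx ** 2).sum() + (dy ** 2).sum())
```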
However, in the low-light dehazing results, blur remains at the edges and highlights of the bubbles. Therefore, the method based on the variational fractional differential is adopted to enhance image edges and enrich image details. The quantitative and qualitative analysis shows that the defogging performance and computational efficiency of this method are better than those of the other five methods, and it can meet the demand for high-quality samples for flotation froth feature extraction. However, captured froth images can contain various kinds of mixed blur. In the future, our goal is to study an algorithm that can remove multiple or mixed blurs, and to consider applying deep neural network methods (Sun et al., 2021; Xi et al., 2021) and image-enhancement algorithms (Y. Li et al., 2018; Zhang et al., 2021) in the new computing environment of Internet of Things smart terminal devices (such as edge computing, fog computing, etc.) to meet the requirements of high efficiency and low power consumption in the flotation industry. We will extract froth image features and industrial process data for information integration, fusion, and other processing (Sandor et al., 2019; Yu et al., 2021), and finally realise the intelligent automatic control of flotation. Disclosure statement No potential conflict of interest was reported by the author(s). Funding This work was supported by the National Natural Science Foundation of China [61771492, 62171476] and the National Natural Science Foundation of China-Guangdong Joint Fund [U1701261].
9,696.4
2022-02-17T00:00:00.000
[ "Materials Science" ]
Cardiac Arrest during Gamete Release in Chum Salmon Regulated by the Parasympathetic Nerve System Cardiac arrest caused by startling stimuli, such as visual and vibration stimuli, has been reported in some animals and could be considered as an extraordinary case of bradycardia, defined as reversible missed heart beats. In teleosts, variability of the heart rate is established as a balance within the autonomic system, namely between cholinergic vagal inhibition and excitatory adrenergic stimulation of neural and hormonal origin. However, the cardiac arrest and its regulating nervous mechanism remain poorly understood. We show, by using electrocardiogram (ECG) data loggers, that cardiac arrest occurs in chum salmon (Oncorhynchus keta) at the moment of gamete release for 7.39±1.61 s in females and for 5.20±0.97 s in males. The increase in heart rate during spawning behavior relative to the background rate during the resting period suggests that cardiac arrest is a characteristic physiological phenomenon set against the extraordinarily high heart rate during spawning behavior. The ECG morphological analysis showed a peaked and tall T-wave adjacent to the cardiac arrest, indicating an increase in potassium permeability in cardiac muscle cells, which would function to retard the cardiac action potential. Pharmacological studies showed that the cardiac arrest was abolished by injection of atropine, a muscarinic receptor antagonist, revealing that the cardiac arrest is a reflex response of the parasympathetic nerve system, whereas injection of sotalol, a β-adrenergic antagonist, did not affect the cardiac arrest. We conclude that cardiac arrest during gamete release in spawning chum salmon is a physiological reflex response controlled by the parasympathetic nervous system. This cardiac arrest represents a response to the gaping behavior that occurs at the moment of gamete release. Introduction Animals have a sophisticated cardiovascular system, which is regulated by the central nervous system, to optimize their aerobic metabolism in response to internal and external changes [1]. Previous studies have reported that startling stimuli, such as visual and vibration stimuli, temporarily decrease ventilation and heart rate (bradycardia) and can lead to cardiac arrest in some animals including molluscs [2], crustaceans [3], fish [4], amphibians [5], and mammals [6]. This cardiac arrest could be considered as an extraordinary case of bradycardia and defined as reversible missed heart beats. Some researchers have interpreted the cardiac arrest as an adaptation for predator avoidance that reduces movement and noise from the animal [4,7,8]. In addition, variability of the heart rate is controlled by a balance between cholinergic vagal inhibition and excitatory adrenergic stimulation of neural and hormonal action [9], suggesting that regulation of the temporary cardiac arrest may be under the control of the autonomic system. Furthermore, cardiac arrest has been reported to occur for several seconds at the moment when the female releases eggs and the male ejaculates sperm in the teleost chum salmon Oncorhynchus keta [10], in which the heart rate around the cardiac arrest was elevated above the usual rate.
The authors observed electrocardiograms of chum salmon during spawning behavior by using a radio telemetry system in combination with a wired system on a pair of fish, and reported that the cardiac arrest might be a reflex response of the cardiovascular system to the elevated blood pressure at the moment of gamete release in chum salmon. However, the cardiac arrest and the mechanism that regulates it remain poorly understood. Moreover, at the moment of gamete release in spawning chum salmon, females and males fully gape for several seconds. However, the physiological relationship between this gaping behavior and the cardiac arrest at the moment of gamete release is also unclear. Here we have monitored the cardiac arrest in spawning chum salmon (Fig. 1) with electrocardiogram (ECG) data loggers, and we show that this cardiac arrest is regulated by the parasympathetic nerve system. Results All tagged fish (eight females and five males) spawned once or twice each. Fifteen instances of egg release in females and ten instances of sperm ejaculation in males were observed, and twenty-five ECG signals during spawning behavior were recorded in total. Cardiac arrest occurred at the moment of gamete release in all fish and lasted for 4.93-10.02 s (6.81±0.54 s, n = 8) in females and for 3.41-6.51 s (4.81±0.60 s, n = 5) in males in the first spawning, and for 5.87-10.88 s (7.80±0.72 s, n = 7) in females and for 4.97-6.45 s (5.49±0.30 s, n = 5) in males in the second spawning (Fig. 2A). The difference between males and females in the duration of the cardiac arrest was significant for both the first and second spawning (Student's t-test, P<0.01 for each). The beginning of the cardiac arrest was synchronized with opening of the mouth (gaping) at the moment of gamete release. Furthermore, such a long duration of cardiac arrest was observed only at the moment of gamete release (Fig. 2B). Throughout the spawning behavior, the heart rate was relatively higher in females than in males. The heart rate of the fish increased from an hour before the spawning behavior started until the fish finished releasing gametes (from resting-period rates of 73.4±2.9 b.p.m. in females, n = 8, and 65.0±5.6 b.p.m. in males, n = 5). The fish showed an escalated heart rate just prior to spawning (86.2±1.5 b.p.m. in females, n = 8, and 76.6±4.2 b.p.m. in males, n = 5), but the heart rate decreased by 10.6±5.6% (to 77.1±2.5 b.p.m., n = 8) in females and by 9.7±4.8% (to 69.2±4.6 b.p.m., n = 5) in males at the moment of gamete release (Fig. 2C). The heart rate calculated from beat counts in every 5-second period clearly showed the sharp decrease in beats at the moment of gamete release for both sexes (Fig. 2D). The heart rate remained high after spawning only in females, demonstrating a clear sex difference in the spawning behavior of salmonids (Fig. 2C). Females built the nest using the caudal fin ("nest digging") [11], a behavior that requires more energy in females than in males during spawning [12]. ECG morphology for the T-wave amplitude was calculated as the average of ten consecutive T-wave amplitudes normalized by the baseline T-wave amplitude (from ECG signals measured approximately 6 hours before spawning). The ECG morphological analysis showed that the T-wave amplitude gradually increased as spawning behavior became more advanced; it peaked at the moment of gamete release and returned to baseline levels approximately 6 hours after spawning (Fig. 3A), and this trend was found in both sexes (Fig. 3B).
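The T-wave normalization described above (ten consecutive amplitudes averaged and divided by the baseline measured roughly 6 hours before spawning) reduces to a few lines of code; the function below is a sketch under that reading of the text.

```python
import numpy as np

def normalized_t_wave(t_amplitudes, baseline_amplitude, n=10):
    """Average of n consecutive T-wave amplitudes divided by baseline.

    baseline_amplitude: mean T-wave amplitude from ECG recorded about
    6 hours before spawning, as described in the text; n=10 follows the
    ten-consecutive-wave averaging used in the analysis.
    """
    t = np.asarray(t_amplitudes, dtype=float)[:n]
    return float(t.mean() / baseline_amplitude)
```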
A significant elevation in the normalized T-wave at the moment of gamete release was observed at the first (3.03±0.41 in females, n = 6, and 2.29±0.86 in males, n = 4) and second (4.17±1.46 in females, n = 5, and 2.49±0.92 in males, n = 4) spawning in both sexes (Welch's t-test, P<0.05 for both sexes), and the T-wave amplitude tended to be higher in females than in males (Welch's t-test, P = 0.21). All females that were injected with pharmacological autonomic antagonists and monitored with ECG data loggers spawned between one and three times each, and the ECG signals during eighteen instances of egg release were recorded in total (Movie S1 for the fish injected with Salmon Ringer solution and Movie S2 for the fish injected with atropine). Because each fish spawned from one to three times, we pooled the data from each individual in each group. The effects of sotalol on the heart rate (the R-R intervals) were apparent, with a reduction in the heart rate of approximately 29.6% (to 66.9±5.5 b.p.m.) as compared with control fish (95.1±0.7 b.p.m.) an hour after the injection. The heart rate was unaffected by atropine treatment (94.2±3.4 b.p.m.) as compared with control fish. After the spawning behavior had finished, the heart rate of fish injected with sotalol was reduced by 2.0% (89.8±1.3 b.p.m.) as compared with the heart rate of control fish (91.1±3.1 b.p.m.), and the heart rate of fish injected with atropine (92.1±2.1 b.p.m.) was similar to that of control fish. However, atropine treatment abolished the variability of the R-R intervals after the spawning behavior had finished (compared with the heart rate in control fish, F-test, P<0.01, Fig. 4A). Therefore, we assumed that the effects of atropine injection on heart rate were maintained consistently until the spawning behavior finished, whereas the effects of sotalol injection might be attenuated. The elapsed time between data-logger attachment and the spawning episodes was 28.9±10.0 hours (16.4-58.9 hours, n = 4) in fish injected with sotalol. Cardiac arrest occurred at the moment of egg release in all fish injected with sotalol (4.7±1.2 s, n = 4) and in the control fish (5.6±1.1 s, n = 8, Fig. 4B). However, cardiac arrest was not observed in any of the 3 fish injected with atropine despite confirmation of egg release during the spawning behavior; thus, atropine injection abolished the cardiac arrest while the females released eggs. From the ECG morphological analysis, a significant increase in T-wave amplitude at the moment of egg release was found in fish injected with sotalol (n = 7) and in control fish (n = 8; Welch's t-test, P<0.05 for both groups). By contrast, this prominent T-wave was not observed in fish injected with atropine at the moment of egg release (Welch's t-test, P = 0.153; Fig. 4B). Discussion This study revealed that a cardiac arrest lasting approximately 7 s in females and 5 s in males occurred at the climactic moment when females released eggs and males ejaculated sperm, indicating that cardiac arrest is a characteristic physiological phenomenon in spawning chum salmon, with a significant difference in its duration between the sexes. Contrary to the cardiac arrest previously reported in some animals that results from an external (startling) stimulation, the cardiac arrest that occurred during gamete release in chum salmon was the result of an internal stimulation. A cardiac arrest lasting a few seconds during sperm ejaculation has also been reported in the male octopus Octopus vulgaris [13].
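The operational definition of cardiac arrest used throughout these results (a run of reversible missed beats in the R-R series) can be sketched as follows; the factor-of-3 threshold is a hypothetical choice for illustration, not a value taken from the study, while the 5-second window mirrors the beat counts behind Fig. 2D.

```python
import numpy as np

def detect_cardiac_arrest(r_times_s, factor=3.0):
    """Flag pauses in an R-peak time series as candidate cardiac arrests.

    An R-R interval much longer than the median interval is treated as a
    run of missed beats; returns (pause_start_s, pause_duration_s) pairs.
    """
    rr = np.diff(r_times_s)
    long_gaps = np.where(rr > factor * np.median(rr))[0]
    return [(float(r_times_s[i]), float(rr[i])) for i in long_gaps]

def heart_rate_bpm(r_times_s, window_s=5.0):
    """Beats per minute from beat counts in consecutive fixed windows."""
    edges = np.arange(0.0, r_times_s[-1] + window_s, window_s)
    counts, _ = np.histogram(r_times_s, bins=edges)
    return counts * (60.0 / window_s)
```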
Although the biological meaning of the cardiac arrest in some animals remains unclear, cardiac arrest may not be an unusual phenomenon during gamete release in aquatic animals. The ECG morphological analysis revealed that peaked and tall T-waves occurred adjacent to gamete release. A T-wave represents the period of ventricular repolarization. A prominent T-wave is an abnormal T-wave morphology encountered during acute myocardial infarction in humans, and an increase in serum potassium level frequently causes the T-wave to become tall and peaked [14]. Furthermore, this study showed that cardiac arrest did not occur during egg release in fish injected with atropine, a muscarinic receptor antagonist, indicating that this cardiac arrest is mediated by the parasympathetic nerve system. Activated parasympathetic neurons release the neurotransmitter acetylcholine (ACh), which increases potassium permeability in cardiac muscle cells; the higher potassium efflux retards the cardiac action potential from reaching the threshold for triggering a beat, resulting in a lengthening of the interval between heart beats [15]. Vagus stimulation causes an increase in T-wave amplitude [16], and injection of ACh causes an increase in T-wave amplitude, a decrease in heart rate, and missed beats (cardiac arrest) in dogs [17,18]. Thus, we speculate that the cardiac arrest that occurs during gamete release is a reflex response to vagal cholinergic drive (parasympathetic activation). In addition, this study showed that cardiac arrest at the moment of gamete release was still observed in fish injected with sotalol. Regulation of heart rate and its variability in the short-horned sculpin Myoxocephalus scorpius is under parasympathetic, cholinergic control [19]. Thus, we speculated that chum salmon during spawning behavior might have a dominant cholinergic tone, although the effects of sotalol injection might have attenuated by the moment of gamete release.

(Figure 3. (A) The amplitude of T-waves gradually increases as the release of eggs approaches: 6 hours before spawning (a) and a half-hour before spawning (b); a peaked and tall T-wave is found immediately after spawning (c); the amplitude then returns to pre-spawning levels 6 hours after spawning (d). (B) A histogram of normalized T-wave amplitude of females (red bars, n = 6) and males (blue bars, n = 4) at 6 hours before spawning (a), a half-hour before spawning (b), immediately after spawning (c), a half-hour after spawning (d), and 6 hours after spawning (e). Statistical analysis was performed by ANOVA with Dunnett's multiple comparison of means test. The asterisk shows a significant difference compared with the normalized T-wave amplitude at 6 hours before spawning. doi:10.1371/journal.pone.0005993.g003)

Here, we propose the hypothesis that the cardiac arrest at the moment of gamete release is a physiological response to the behavioral response of gaping, which may cause a reduction in water flow over the gills. For teleost fish, the initial cardiac response to aquatic hypoxia is reflex bradycardia [20,21], which is mediated by vagal cardio-inhibitory fibres [22]. The increased systemic blood pressure accompanying the hypoxic bradycardia serves to open perfused vascular spaces in the gill lamellae, creating a more even blood flow within them and recruiting unperfused lamellae to increase the effective area for gas exchange [23,24]. In addition, the vasoactive mechanism also greatly affects the gill lamellar perfusion patterns [25].
Reflex cholinergic vasoconstriction in the vicinity of the gill filament arteries [26] is thought to enhance lamellar perfusion and oxygen uptake across the gills [27]. The fish showed an escalated heart rate during the spawning behavior as compared with the resting period in both sexes, although the resting heart rate might have been relatively high because of the handling stress of the attachment surgery. Energy expenditure during spawning behavior in salmon is relatively high compared with standard metabolism [12]; as a result, spawning behavior represents relatively severe exercise. Therefore, we speculated that chum salmon greatly increase cardiac output to support the increased metabolism during spawning behavior, because the fish heart has a remarkable ability to produce large increases in cardiac stroke volume [22]. In vertebrates, the baroreflex is essential in arterial pressure homeostasis, and fish have baroreceptor sites in the gills [28,29]. Atropine administration abolishes the baroreflex response in fish, indicating that the origin of the reflex response that mediates modulation of heart rate is cholinergic [30]. In contrast to hypotension with tachycardia, teleost fish respond rapidly to increases in arterial blood pressure with vagus-mediated bradycardia [9]. Furthermore, salmon show a strong burst of activity of the trunk musculature at the moment of gamete release [31]. Taking all these data into consideration, the highest blood pressure, resulting from transient hypoxia caused by gaping and from the pressure of pushing out gametes, might occur in the blood vessels at the moment of egg or sperm release, and the cardiac arrest could be considered as an extraordinary case of bradycardia. In addition, cholinergic nerves directly innervate systemic blood vessels in the gill and the chromaffin cells, which are also localized in the heart and along the cardinal vein, and which produce catecholamines [9,32]. In conclusion, we speculate that the cardiac arrest that occurs in spawning chum salmon when females release eggs and males ejaculate sperm represents a response to the remarkable gaping behavior, under vagal cholinergic regulation. Attachment procedure of ECG data logger This study (No. 18-3) was carried out under the control of the committee and in accordance with the "Guide for the Care and Use of Laboratory Animals in Field Science Center for Northern Biosphere, Hokkaido University" and Japanese Governmental Law (No. 105) and Notification (No. 6). Eight female (61.5±7.7 cm fork length (LF), 2.6±0.08 kg mass) and five male (65.5±1.1 cm LF, 3.0±0.1 kg mass) chum salmon were tagged with an ECG logger (W400L-ECG, 21 mm in diameter, 110 mm in length, 57 g in air; Little Leonardo Co., Tokyo, Japan) to record the heart rate as previously described [33] during 11-29 November 2007. In brief, chum salmon captured in the Shibetsu River estuary were anaesthetized using FA 100 (eugenol; Tanabe Seiyaku Co. Ltd, Osaka, Japan) at a concentration of 0.5 mL L⁻¹. A bipolar electrode made of a copper disc (approximately 1.5 cm in diameter) was surgically attached on the ventral side using sutures. The ECG loggers, secured using nylon ties and instant glue (α-cyanoacrylate; Fujiwara Sangyo Co. Ltd, Hyogo, Japan), were attached to the back of the fish, anterior to the dorsal fin, through small holes made using two stainless-steel needles. During the tagging procedure, which took approximately 20 min, the gills of the fish were irrigated with water containing diluted FA 100 to maintain sedation.
The sampling rate of the ECG loggers was set at 200 Hz. After 24 hours to allow for recovery from the tagging, the spawning behavior of the fish was monitored with a digital video camera, to synchronize recordings of behavior with the ECG signals, in a spawning channel (3.8 × 2.9 × 1.1 m) connected to the Shibetsu River and supplied with spring water (16.8 °C) from beneath a gravel bottom free of silt [34], in the Shibetsu Salmon Museum, Hokkaido, Japan. Pharmacological study For the injection of pharmacological autonomic antagonists, only females were tagged with an ECG logger during 3-29 November 2008. After the tagging procedure described above, the dorsal aorta was temporarily cannulated using polyethylene tubing with a diameter of 1.3 mm via the upper jaw [34]. The fish were injected with atropine (a muscarinic antagonist; atropine sulfate, 1.2 mg/kg, SIGMA, Missouri, USA; 66.5±2.9 cm LF, 3.2±0.5 kg mass, n = 3), sotalol (a β-adrenergic antagonist; sotalol hydrochloride, 2.7 mg/kg, SIGMA; 62.4±0.6 cm LF, 2.8±0.1 kg mass, n = 4), or 1 mL of Salmon Ringer solution (150 mM NaCl, 310 mM KCl, 0.40 mM HEPES (4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid), 0.34 mM CaCl2, 0.10 mM MgCl2, 0.03 mM MgSO4, made up to 1 L with distilled water) as a sham control (64.9±1.9 cm LF, 3.0±0.4 kg mass, n = 3). Atropine and sotalol were dissolved in 1 mL of Salmon Ringer solution. After the injections, the spawning behavior of the fish was monitored as described above without any recovery period. Data analysis Igor Pro (WaveMetrics Inc., Lake Oswego, OR, USA) and Fluclet WT (Dainippon Sumitomo Pharmacy Co., Ltd., Osaka, Japan) were used to determine the ECG intervals (R-R intervals) and morphology. Statistical significance was set at P<0.05. Values are presented as mean±standard error of the mean (s.e.m.). Heart rate is presented as beats per minute (b.p.m.). Supporting Information Movie S1 This movie shows the spawning behavior and gamete release of chum salmon. The female was fitted with an electrocardiogram (ECG) data logger and injected with Salmon Ringer solution (QuickTime; 1.2 MB).
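The R-R interval analysis done here with Igor Pro and Fluclet WT amounts to R-peak detection followed by differencing of peak times; a simple threshold-based sketch is given below, with the 200 Hz sampling rate taken from the logger settings and the amplitude threshold and refractory period as illustrative assumptions rather than the software's actual algorithm.

```python
import numpy as np

def r_peak_times(ecg, fs=200.0, thresh_sd=3.0, refractory_s=0.25):
    """Locate R peaks in a single-lead ECG sampled at fs Hz.

    A sample is a candidate peak if it exceeds mean + thresh_sd * std
    and is a local maximum; peaks closer together than refractory_s
    are merged so each heartbeat is counted once.
    """
    x = np.asarray(ecg, dtype=float)
    thr = x.mean() + thresh_sd * x.std()
    cand = np.where((x[1:-1] > thr) &
                    (x[1:-1] >= x[:-2]) &
                    (x[1:-1] > x[2:]))[0] + 1
    peaks, last_t = [], -np.inf
    for i in cand:
        t = i / fs
        if t - last_t >= refractory_s:
            peaks.append(t)
            last_t = t
    return np.array(peaks)

# R-R intervals in seconds would then simply be np.diff(r_peak_times(ecg)).
```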
4,270
2009-06-19T00:00:00.000
[ "Biology", "Medicine" ]
Leaching Characteristics of Heavy Metals and Plant Nutrients in the Sewage Sludge Immobilized by Composite Phosphorus-Bearing Materials In order to evaluate the environmental risk caused by land application of sewage sludge, leaching characteristics of heavy metals and plant nutrients in sewage sludge immobilized by composite phosphorus-bearing materials were investigated. Their cumulative release characteristics were confirmed. Furthermore, the first-order kinetic equation, modified Elovich equation, double-constant equation, and parabolic equation were used to explore dynamic models of release. Results showed that sewage sludge addition significantly increased electrical conductivity (EC) in leachates, and the concentrations of heavy metals (Cu, Cr, Zn) and plant nutrients (N, P, K) were also obviously increased. The highest concentrations of Cu, Cr, and Zn in the leachates were all below the limit values of the fourth level in the Chinese national standard for groundwater quality (GB/T14848-2017). Immobilization by the composite phosphorus-bearing materials reduced the release of Cu and Cr, while increasing that of Zn. The fitting results of the modified Elovich model and double-constant model were in good agreement with the leaching process of heavy metals and plant nutrients, indicating that their release process in soil under simulated leaching conditions was not a simple first-order reaction, but a complex heterogeneous diffusion process controlled by multiple factors. Introduction Sewage sludge is a residue produced during the biological wastewater treatment process. In China, its output has been increasing with the increase in wastewater volume and treatment ratio. Sewage sludge is rich in organic matter (OM), nitrogen (N), phosphorus (P), and other trace elements such as Ca, Mg, Fe, Mo, B, etc. [1,2]. Its land application can effectively utilize these useful resources and provides an important and low-cost alternative for sewage sludge disposal [3,4]. Properly treated sewage sludge is commonly used to improve soil quality [5,6]. Hamdi et al. conducted a field study over a three-year period under a semi-arid climate and found that repetitive sludge addition consistently improved total organic carbon (TOC), N, P, and K content in soils treated with up to 120 t·ha−1·year−1, and impacted positively on biological properties, including microbial biomass and soil enzyme activities [7]. Tejada and Gonzalez indicated that land use of sewage sludge effectively reduced bulk density, aggregate instability, and soil loss under simulated rain at 140 mm·h−1 [8]. Cheng et al. suggested that sludge application increased soil cation exchange capacity (CEC), enhanced aggregate stability, and improved the ability of water and fertilizer conservation [9]. Sewage sludge amendment could result in robust plants with fast development and greater biomass production, shortening their cultivation period [10][11][12]. However, sewage sludge can contain toxic heavy metals, such as Cu, Pb, Zn, Cd, and Cr. Its long-term land application would inevitably lead to accumulation of heavy metals in soil, posing a serious risk to surface water, groundwater, and even human health [11,13,14]. Therefore, further studies are required to refine understanding of the migration and transformation of heavy metals before land use of sewage sludge [15].
Previous studies indicated that the mobility of heavy metals depends on the properties of soil [16,17], the total concentrations of heavy metals and their speciation in sewage sludge [18,19], the interaction of heavy metals with soil, such as adsorption reactions [20], and the complexation of heavy metals with organic or inorganic species [21,22]. Many kinds of chemical passivating agents have been used to immobilize heavy metals in sewage sludge and reduce the environmental risk of its land application [23]. The commonly used additives include basic compounds [24], aluminosilicates [25][26][27], phosphorus-bearing materials [28,29], and sulfides [30]. The long-term stability of the immobilized heavy metals under natural conditions has attracted wide attention [14]. Leaching in soil columns is commonly used to simulate the migration and transformation of heavy metals in soil and groundwater. Gu et al. indicated that there were metal enrichments (Cd, Cu, Pb, and Zn) in the lower profiles of sludge-amended soil columns. Even under the greatest sludge application rate (150 g·kg−1), the proportions of the four heavy metals in the leachate over the experimental period were only 2.35%, 0.0453%, 0.244%, and 0.00889%, respectively [11]. Fang et al. carried out a leaching assessment to evaluate the potential leaching of heavy metals during composted sewage sludge application to soils. The authors pointed out that repetitive additions of compost favored the formation of reducing conditions due to the significantly increasing content of OM, causing accumulation in the total contents of Cd, Cr, Cu, and Pb, but no enhancement in leaching concentrations [31]. Mortula indicated that both alum and lime treatment were capable of reducing leaching of heavy metals, while aluminum concentrations in the leachate increased with the increase in alum concentration [32]. In addition to heavy metals, leaching of plant nutrients from sewage sludge might lead to pollution of groundwater. Therefore, in the process of sludge land use, we should pay attention not only to the migration of heavy metals but also to that of plant nutrients. Li et al. found that the phosphorus content in soil after sludge application increased significantly in the range of 0-20 cm, with no significant difference in the soil below 40 cm, which was related to the capacity of the surface soil to accommodate phosphorus and the weak mobility of phosphorus in soil. Nitrogen in the form of nitrate had a higher risk of leaching than phosphorus because it is poorly adsorbed by soil particles [33]. Oladeji et al. suggested that the nitrate nitrogen content in groundwater increased at a mine remediation site where sludge was applied at a cumulative rate of 801 to 1815 Mg·ha−1 between 1972 and 2004 [34]. Mortula et al. found that alum treatment was capable of reducing phosphorus, copper, and ammonia leaching from sewage sludge, but increased aluminum and chromium leaching [35]. Therefore, leaching experiments can also be used to analyze the migration of plant nutrients after sludge land use, so as to comprehensively evaluate the environmental risks of sludge application. In this paper, leaching characteristics of heavy metals and plant nutrients in sewage sludge passivated by composite phosphorus-bearing materials were studied in order to investigate the possible environmental impact of sludge land use.
The contents were as follows: (1) to study the trends of pH and electrical conductivity (EC) in the leachate, (2) to analyze the cumulative release of heavy metals and plant nutrients, and (3) to fit the cumulative release of heavy metals and plant nutrients with four models in order to choose the most suitable kinetic model of cumulative release. From the above aspects, the environmental behaviour of the immobilized sewage sludge was evaluated, which could provide a scientific basis for the land application of sewage sludge. Collection and Preparation of Materials The dewatered sludge was obtained from Changqing Municipal Wastewater Treatment Plant in the west of Ji'nan, in which an anaerobic/anoxic/aerobic process is adopted to treat urban sewage. The sludge was a mixture from the primary sedimentation tank and the secondary sedimentation tank. 0.3% polyacrylamide (PAM) was added for centrifugal dehydration, and the solid content could reach 15-20%. The sludge was air-dried, homogenized, crushed with a grinder, and passed through a 60 mesh sieve for further analysis. The background soil was taken from mature soil within 20 cm of the surface on the campus of Qilu University of Technology. Surface weeds, plant residues, and gravel were removed, and the soil was then air-dried, homogenized, crushed with a grinder, and passed through a 60 mesh sieve for property analysis and a 2 mm mesh for the column experiment. Rock phosphate, whose main component was fluorapatite (Ca5(PO4)3F), was purchased from Taizhou Changpu Chemical Reagent Co., Ltd. Superphosphate, a gray-white powder mainly composed of Ca(H2PO4)2·H2O with small amounts of CaSO4 and free phosphoric acid, was obtained from Sinopharm Chemical Reagent Co., Ltd.; both were passed through a 60 mesh sieve for further analysis. The properties of the sewage sludge, background soil, and rock phosphate are shown in Table 1. Leaching Column Experiment The test device was a customized leaching device consisting of a water pump, a flow meter, a leaching column, and a polyethylene collecting bottle. The leaching column was made of a plexiglass cylinder with an inner diameter of 10 cm and an inner height of 100 cm. The bottom end was a porous flange connection, and the flange plate was layered, from bottom to top, with quantitative filter paper, a 100 mesh nylon filter, and 2 cm of quartz sand. The soil column was composed of two parts. The lower part was hand-packed with immature soil to a height of 40 cm at a bulk density of 1.3 g/cm3. The immature soil, taken from more than 20 cm below the surface, was air-dried and sieved with a 2 mm mesh. It was filled into the column according to the method of Hou et al. [36]. The upper part was hand-packed with mixed matrix to a height of 20 cm. In the mixed matrix, the prepared sludge was mixed uniformly with background soil to obtain three treatments: control treatment (control), unstabilized sewage sludge treatment (USS), and stabilized sewage sludge treatment (SSS), whose compositions are shown in Table 2. In the SSS treatment, sewage sludge was mixed with rock phosphate and superphosphate in the proportions shown in Table 2. The mixture was maintained at 50% water content for 7 days at room temperature to ensure complete immobilization of heavy metals in the sewage sludge, and was then air-dried, crushed with a grinder, and passed through a 60 mesh sieve. The three kinds of mixed matrix were filled into the column according to the method of Hou et al. [36].
In order to ensure uniform distribution of the deionized water added to the soil column, its top was covered with 2 cm of quartz sand and a 100 mesh nylon filter. The leachate was filtered through the quartz sand, nylon filter, and filter paper, and then flowed out from the bottom water-collecting pipe into a polyethylene plastic bottle. After the filling of the soil column was finished, the valve at the bottom of the soil column was opened and the column was placed in a bucket filled with deionized water, so that the soil was wetted and saturated by rising capillary water. The saturated condition was kept for 24 h, and the column was then freely drained to reach field capacity (30% on a weight basis). Thereafter, the column was irrigated with 400 mL of deionized water, and the leachate was collected into a polyethylene plastic bottle. When the same volume of leachate had been collected, a single elution process was considered complete. A 100 mL aliquot of leachate was then taken, 5 mL of (1+1) nitric acid was added, and the sample was stored at <4 °C before the determination of Cu, Cr, and Zn. pH, EC, TN, TP, and TK of the leachate were measured after the elution process ended. The elution process was carried out 12 times successively, and the interval between two leaches was 12 h. The total volume of deionized water used was 4.8 L, which is equivalent to the annual precipitation (rain) in the experimental area (Ji'nan, North China). Ambient temperature throughout the experimental period was 12-16 °C. Analytical Methods pH value and EC were determined in 1:5 (w/v) suspensions of solid sample and distilled water using a pH meter and a conductivity meter, respectively. OM, CEC, and moisture content were determined by the K2Cr2O7 volumetric method, the EDTA-ammonium acetate exchange method, and the gravimetric method, respectively. Total nitrogen (TN), total phosphorus (TP), and total potassium (TK) were measured by the semi-micro Kjeldahl method, the Mo-Sb colorimetric method, and the alkaline fusion-flame photometric method, respectively [37]. Soil and sludge samples were first digested with HNO3-H2O2 [38], and the contents of heavy metals (Cu, Cr, and Zn) were then measured by inductively coupled plasma optical emission spectroscopy (ICP-OES, Optima 2000DV, Perkin Elmer, Waltham, MA, USA). All containers used were soaked overnight in 20% (v/v) HNO3 in advance and then rinsed with ultrapure water. HNO3 and H2O2 were of guarantee grade, and the remaining reagents were of analytical grade. The ultrapure water was from a Millipore Milli-Q system. pH and EC in the Leachate In the three treatments, pH values increased with the increase of leachate volume and achieved their maximum values, which were 7.35, 7.57, and 7.60 in the control treatment, SSS treatment, and USS treatment, respectively, when the collected leachate reached 1.2 L (Figure 1). The increase of pH value was mainly due to rapid exchange between exchangeable base ions and H+. The addition of sludge increased exchangeable base ions, which was beneficial to the increase of pH. At the same time, ammonia could be produced by the deamination of OM in sludge. Thus, the pH values of leachate in the SSS and USS treatments were higher than those in the control treatment, which was inconsistent with the result of Gu et al. [11]. They indicated that application of sewage sludge induced a small temporary decrease in leachate pH and attributed the decrease to production of organic acids from decomposition of OM.
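Given the fixed 400 mL elution volume described above, the cumulative release quantities reported in the following sections reduce to a running sum of concentration times collected volume; a minimal sketch under that assumption:

```python
import numpy as np

def cumulative_release(concentrations_mg_L, volume_per_leach_L=0.4):
    """Cumulative mass released (mg) after each elution.

    Release per elution = measured concentration x collected volume.
    The default 0.4 L per elution and the 12 elutions (4.8 L total)
    follow the column procedure described in the text.
    """
    masses = np.asarray(concentrations_mg_L, dtype=float) * volume_per_leach_L
    return np.cumsum(masses)
```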
Increased pH values could result in an increase in the number of negatively charged surface sites in the soil, increasing the adsorption capacity of the soil for cationic metals and decreasing their mobility. When the volume of leachate exceeded 2.8 L, pH values in the three treatments decreased to stable values. The final pH values were 7.31, 7.35, and 7.37, and no obvious difference was observed between them, because soil is a heterogeneous body with complex components and a large buffer capacity [39]. EC values in leachates reflect the total electrolyte activity in solution. EC values in the three treatments decreased rapidly when the leachate volume was less than 1.2-1.6 L. There was no significant difference among the three treatments, suggesting that at this stage it was the exchangeable salt-based ions in soil, rather than those in sewage sludge, that entered the leachate; the adsorption of salt-based ions by the lower soil column prevented the leaching of salt-based ions from sewage sludge (Figure 2). In the control treatment, with the increase of leachate volume, EC values tended toward a quasi-equilibrium state, indicating that the exchangeable base ions were depleted and other salt-based ions were gradually released into the leachate. In the USS and SSS treatments, EC values increased significantly with application of sewage sludge due to the enhanced leaching of ions into the soil solution. The highest EC values reached 2210 µs/cm at a collected volume of 2.0 L in the SSS treatment and 2818 µs/cm at 1.6 L in the USS treatment. The results were similar to those of Penido et al.
who demonstrated that the addition of sewage sludge increased EC values [40]. In the SSS treatment, the formation of phosphate precipitates with passivating agents such as rock phosphate and superphosphate made EC values lower than those in the USS treatment. Leaching Characteristics of Heavy Metals The leaching process of heavy metals is actually the migration of metal ions between water, soil, and sewage sludge particles, which is mainly related to adsorption-desorption, complexation-dissociation, and precipitation-dissolution reactions. The exchangeable metal ions adsorbed on solid media are displaced or desorbed into aqueous solution and become free ions, which eventually follow the leaching solution out of the system. The concentration variations of Cu, Cr, and Zn with the volume of leachate are shown in Figure 3. The release process of Cu in the three treatments could be divided into two phases: a rapid release phase and a slow release phase. During the first phase, the Cu concentration in the leachates decreased rapidly with increasing volume of leachate. The released Cu increased in the following order: control treatment < SSS < USS, indicating that sludge addition increased the Cu concentration in the leachate and the addition of composite phosphorus-bearing materials decreased the leached Cu concentration. The immobilization of Cu was mainly due to the formation of metal phosphate precipitates, surface complexation, and adsorption. The released Cu might be exchangeable Cu and ionic Cu. Its release rate was mainly determined by the migration rate of the leaching solution in the soil column, which was related to the height of the soil column, the soil bulk density, and copper speciation in the sewage sludge [41]. During the slow release phase, various kinds of stable Cu, such as carbonate-bound, organic-bound, sulfide-bound, and lattice copper, could be released slowly under the continuous elution of water [42,43]. The final Cu concentrations in the three treatments remained at 0 mg/L, 0.05 mg/L, and 0.05 mg/L, respectively. The release process of Cr in the control treatment was similar to that of Cu, while in the SSS and USS treatments Cr leached out at a high rate, so that the Cr concentration did not change significantly at the beginning of the leaching process. The concentrations of Cr decreased from 0.05-0.07 mg/L at the beginning to 0-0.01 mg/L in the end. The land use of sewage sludge significantly increased the concentrations of Zn in the leachate. However, the Zn concentrations in the SSS treatment were higher than those in the USS treatment, indicating that the phosphate-bearing materials could not effectively reduce the mobility of Zn. Cao et al. pointed out that phosphorus-bearing materials show a limited effect on Zn immobilization and even have the potential to activate Zn [42]. Surface adsorption or complexation was primarily responsible for Cu and Zn immobilization. Flow calorimetry indicated that Cu adsorption onto rock phosphate was exothermic, while Zn sorption was endothermic [44,45]. In the SSS and USS treatments, there was more exchangeable Zn, so the slow release phase arrived later than in the control treatment. The highest concentrations of Cu, Cr, and Zn in the leachates were 1.47 mg/L, 0.07 mg/L, and 1.49 mg/L, respectively, as shown in Table 3.
These values were all below the limit values of the fourth level in the Chinese national standard for groundwater quality (GB/T14848-2017), indicating that no heavy metal pollution of groundwater would be caused when the added sewage sludge was less than 10%. Figure 4 shows the accumulative release of heavy metals with increasing volume of collected leachate. The cumulative release processes of heavy metals could be divided into two stages. In the first stage, the cumulative release increased rapidly, which was due to the desorption of heavy metal ions from the surface of soil particles, with the more active forms of heavy metals entering the leaching solution at a faster speed. In the second stage, the cumulative release increased slowly and gradually reached a state of equilibrium. During this process, the heavy metals adsorbed on the surface of soil particles decreased, and those in the micropores within the particles diffused slowly into the solution. The proportion of active heavy metals in this stage also decreased. Land use of sewage sludge significantly increased the accumulative release of Cu, Cr, and Zn. The released Cu and Cr in the SSS treatment were less than those in the USS treatment, while the opposite was true for the released Zn. The results showed that the use of rock phosphate and superphosphate decreased the migration of Cu and Cr, while increasing that of Zn. Release Kinetics of Heavy Metals The immobilized sewage sludge enters an open environmental system through land use. In an open system, all chemical reactions occur in a dynamic state, and studying chemical kinetics is helpful for understanding the transformation and migration of elements in soil. Their release process in soils is affected not only by the physical and chemical properties of the soil, but also by interaction with other substances present in the soil. Mathematical models used to analyze the geochemical behavior of elements in soils have become a focus of research. The commonly used dynamic models include the first-order kinetic equation, modified Elovich equation, double-constant rate equation, parabolic diffusion equation, and so on [46]. Their kinetic equations are as follows:

First-order kinetic equation: y = a(1 − e^(−bx)) (1)
Modified Elovich equation: y = a + b ln x (2)
Double-constant rate equation: ln y = a + b ln x (3)
Parabolic diffusion equation: y = a + bx^0.5 (4)

where y is the cumulative release of heavy metals when the cumulative leachate volume is x, and a and b are constants. The releases of heavy metals in the leaching process were fitted by the first-order kinetic model, modified Elovich model, double-constant model, and parabolic diffusion model, as shown in Table 4. For the first-order kinetic equation, the squared regression coefficients (R²) in the three treatments were 0.66-0.70 for Cu, 0.47-0.68 for Cr, and 0.55-0.74 for Zn, respectively. The three heavy metals could not be well fitted in the three treatments, which indicated that the leaching process of heavy metals is not fully explained by a diffusion mechanism. The modified Elovich equation is one of the most commonly used equations for describing the kinetics of heterogeneous chemisorption on solid surfaces.
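A sketch of fitting the four candidate models by nonlinear least squares and comparing R² values, as done for Table 4, is shown below; the functional forms follow the standard equations of the named models given above (with the double-constant model written equivalently as a power law), and the initial parameter guesses are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate release-kinetics models; y = cumulative release, x = cumulative
# leachate volume (x > 0 so that ln(x) and x**b are defined).
MODELS = {
    "first_order":     lambda x, a, b: a * (1.0 - np.exp(-b * x)),
    "mod_elovich":     lambda x, a, b: a + b * np.log(x),
    "double_constant": lambda x, a, b: a * np.power(x, b),  # ln y = ln a + b ln x
    "parabolic":       lambda x, a, b: a + b * np.sqrt(x),
}

def fit_release_models(x, y):
    """Fit each model by least squares and report (params, R^2) per model."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ss_tot = ((y - y.mean()) ** 2).sum()
    results = {}
    for name, f in MODELS.items():
        try:
            params, _ = curve_fit(f, x, y, p0=(1.0, 0.5), maxfev=10000)
            ss_res = ((y - f(x, *params)) ** 2).sum()
            results[name] = (params, 1.0 - ss_res / ss_tot)
        except RuntimeError:  # fit failed to converge
            results[name] = (None, float("nan"))
    return results
```

Ranking the returned R² values reproduces the kind of comparison reported in Table 4 (modified Elovich > double-constant > parabolic > first-order).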
It is not suitable for a process with a single reaction mechanism, but is very suitable for processes with large changes in activation energy during the reaction [47]. The kinetic data in the three treatments fitted well with the modified Elovich equation, with R² values of 0.96-0.98 for Cu, 0.85-0.99 for Cr, and 0.93-0.97 for Zn, showing that the migration of heavy metals was a complex heterogeneous dispersion process. The fitting correlation coefficient was relatively high for Cu and Zn, while it was lower for Cr. The double-constant equation is actually a modified Freundlich equation, which is used to describe the heterogeneity of energy distribution and the different affinities of adsorption sites for heavy metals on the surface of soil particles. It is as applicable to complex systems as the modified Elovich equation. For the double-constant equation, the R² ranges in the three treatments were 0.93-0.94 for Cu, 0.80-0.94 for Cr, and 0.86-0.96 for Zn, respectively. For the parabolic diffusion model, the R² ranges in the three treatments were 0.88-0.92 for Cu, 0.69-0.95 for Cr, and 0.81-0.96 for Zn, respectively. The parabolic diffusion model is most suitable for describing the diffusion of substances within particles; the poorer fitting results showed that the internal diffusion of heavy metals was not the only limiting factor in the leaching process. Comparing the fitting results of the four models, the order of fitting quality was as follows: modified Elovich model > double-constant model > parabolic diffusion model > first-order kinetic model. This indicated that the release kinetics of heavy metals was not a simple first-order reaction, but a complex heterogeneous diffusion process controlled by precipitation and dissolution, adsorption and desorption, complexation and dissociation, etc. This result is consistent with Zheng et al. [41] and Zhang et al. [48], who both concluded that the release processes of heavy metals in soil under simulated leaching conditions were not a simple first-order reaction, but a process controlled by multiple factors. Leaching Characteristics of Plant Nutrients Sewage sludge is rich in plant nutrients such as N, P, and K. Its land application would inevitably allow these plant nutrients to enter the subsoil and groundwater, which could lead to pollution of the groundwater. Thus, the leaching characteristics of N, P, and K should also be studied in order to obtain the overall environmental influence of sludge land use. Concentrations of TN, TP, and TK in Leachate The concentration variations of TN, TP, and TK with the volume of collected leachate are shown in Figure 5. Sludge addition enhanced the leaching concentrations of TN, TP, and TK. The concentrations of TN, TP, and TK in the control treatment decreased gradually and tended to be stable when the collected volume was more than 2.4 L, indicating that when the soluble plant nutrients finished leaching, the insoluble parts began to be released into the leachate. When the volume of collected leachate reached 4.0 L, the concentrations of TN, TP, and TK in the leachate were 1.00 mg/L, 0 mg/L, and 0 mg/L, respectively. In the SSS and USS treatments, the leaching TN and TK decreased, and then increased to their highest concentrations when the collected volume reached 1.6-2.4 L.
In the next stage, the concentrations decreased rapidly and then reached stable levels when the collected volume reached 3.6 L. The stable TN and TK concentrations in the USS treatment were almost the same as those in the SSS treatment, and both were higher than those in the control treatment. Unlike TN and TK, the concentration of leaching TP increased at the beginning of the leaching process. The concentrations of TP reached the highest values of 3.37 mg/L at 2.0 L in the SSS treatment and 3.99 mg/L at 1.2 L in the USS treatment, respectively. Lei et al. suggested that the higher the EC, the more easily phosphorus is desorbed from the soil [49]. The higher EC in the USS treatment promoted TP leaching, so the TP maximum was reached earlier. The TP concentration then began to decrease and reached a stable state at 4.0 L. The stable concentration in the SSS treatment was about 1.0 mg/L higher than that in the USS treatment because of the addition of composite phosphorus-bearing materials. Accumulative Release Characteristics of Plant Nutrients Figure 6 shows the accumulative release of TN, TP, and TK in the leachate. The land use of sewage sludge significantly increased the cumulative release of TN, TP, and TK in the USS and SSS treatments. TN in the three treatments increased with the increasing volume of collected leachate. There was no obvious difference among the three treatments when the collected volume was less than 1.6 L, suggesting that TN from sewage sludge was intercepted by the lower soil column. When the collected volume exceeded 1.6 L, the rates of increase of the accumulative TN release in the USS and SSS treatments were much higher than that in the control treatment. The accumulative release of TP in the control treatment did not increase significantly once the collected volume exceeded 1.6-2.0 L. The accumulative release of TP in the SSS treatment increased with the increasing collected volume, while in the USS treatment the increase slowed down when the collected volume exceeded 3.6 L. The accumulative releases of TK in the USS and SSS treatments were similar to that of TP in the USS treatment. Accumulative Release Model of Plant Nutrients The release processes of the three nutrient elements during leaching were fitted by the four kinetic models. The equations and R² values are shown in Table 5. Analyzing the fitting results of the four models in the three treatments, the best fits were the modified Elovich model and double-constant model for N, the modified Elovich model for P, and the modified Elovich model and double-constant model for K, while the fit of the first-order kinetic model was the worst. The fits for the USS and SSS treatments were better, and the fit for the control treatment was worse. The leaching of plant nutrients during the land use of sewage sludge was a complex heterogeneous diffusion process, similar to the leaching of heavy metals. Conclusions The addition of sewage sludge obviously changed the pH and EC of the leachate. pH values in the SSS and USS treatments were higher than those in the control treatment. The addition of sewage sludge increased the cumulative release of heavy metals. The highest concentrations of Cu, Cr, and Zn in the leachates were all below the limit values of the fourth level in the Chinese national standard for groundwater quality (GB/T14848-2017). The cumulative releases of heavy metals increased rapidly at first and then slowly. The cumulative releases of Cu and Cr in the SSS treatment were lower than those in the USS treatment, while the opposite was true for Zn. The best fitting equation for the cumulative leaching release of heavy metals was the modified Elovich equation, with R² values of 0.96-0.98 for Cu, 0.85-0.99 for Cr, and 0.93-0.97 for Zn. Sludge addition also enhanced the leaching concentrations of TN, TP, and TK. The best fits were the modified Elovich model and double-constant model for N and K, and the modified Elovich model for P.
The leaching of heavy metals and plant nutrients was thus a process under integrated multifactor control.
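As an illustration of the model comparison described above, the short sketch below fits three of the kinetics models to a cumulative-release curve with SciPy's curve_fit. The functional forms (first-order, double-constant power function, and modified Elovich) are the standard ones from the leaching-kinetics literature and are assumptions here, since the equations of Table 5 are not reproduced; the volume and release values are likewise invented for demonstration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative cumulative-release data: leachate volume V (L) and
# cumulative TN release S (mg). These values are made up for demonstration.
V = np.array([0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6, 4.0])
S = np.array([1.2, 2.6, 3.9, 5.0, 6.4, 7.5, 8.3, 9.0, 9.5, 9.8])

# Assumed standard forms of the three named leaching-kinetics models.
models = {
    "first-order":      lambda V, a, b: a * (1.0 - np.exp(-b * V)),
    "double-constant":  lambda V, a, b: a * V**b,        # power function
    "modified Elovich": lambda V, a, b: a + b * np.log(V),
}

for name, f in models.items():
    popt, _ = curve_fit(f, V, S, p0=(10.0, 1.0), maxfev=10000)
    residuals = S - f(V, *popt)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((S - S.mean())**2)
    print(f"{name}: a={popt[0]:.3f}, b={popt[1]:.3f}, R^2={r2:.3f}")
```

Comparing the R^2 values printed for each treatment's curve is exactly the kind of ranking summarized in the conclusions above.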
7,767.2
2019-12-01T00:00:00.000
[ "Materials Science" ]
Durable response to EGFR tyrosine kinase inhibitors in a patient with non-small cell lung cancer harboring an EGFR kinase domain duplication Abstract Epidermal growth factor receptor (EGFR) kinase domain duplication (KDD) has been identified as an oncogenic driver in 0.05% to 0.14% of non-small cell lung cancer (NSCLC) patients. However, little is known of the efficacy of EGFR tyrosine kinase inhibitors (TKIs) for such patients. Here, we report the case of a 45-year-old Japanese woman with NSCLC positive for EGFR-KDD (duplication of exons 18-25) who developed carcinomatous meningitis and showed a marked response to the EGFR-TKIs erlotinib and osimertinib. As far as we are aware, this is the first report of EGFR-TKI efficacy for carcinomatous meningitis in an NSCLC patient harboring EGFR-KDD. INTRODUCTION Tyrosine kinase inhibitors (TKIs) are established as standard therapy for non-small cell lung cancer (NSCLC) with sensitizing mutations of the epidermal growth factor receptor (EGFR).1,2 EGFR kinase domain duplication (EGFR-KDD) is the result of rare genomic alterations that activate EGFR signaling and confer sensitivity to EGFR-TKIs.3-11 Most instances of EGFR-KDD are the result of duplication of exons 18 to 25 of EGFR, although rare cases due to duplication of exons 17 to 25 or exons 14 to 26 have been described.4 Such genomic alterations in NSCLC are thought to occur at a frequency of 0.05% to 0.14%.11 Here, we report a rare case of a patient with NSCLC harboring duplication of exons 18 to 25 of EGFR who experienced benefit from treatment with the EGFR-TKIs erlotinib and osimertinib. Carcinomatous meningitis in this patient showed marked resolution during osimertinib treatment, which represents the first such successful treatment to be reported in an individual with EGFR-KDD. CASE REPORT A 45-year-old Japanese woman without a history of smoking was referred to our hospital for the treatment of recurrent NSCLC with multiple lung and mediastinal lymph node metastases. A supraclavicular lymph node biopsy revealed adenocarcinoma. Routine screening for EGFR mutations (cobas EGFR mutation test v2, Roche Molecular Diagnostics), ALK fusion genes, and ROS1 fusion genes was negative. The tumor proportion score for programmed cell death-ligand 1 (PD-L1, 22C3) was 20%. Next-generation sequencing (NGS) with an Ion AmpliSeq Custom DNA Panel (Genomedia) identified EGFR amplification (copy number of 5.85), which was confirmed by fluorescence in situ hybridization (FISH) (Figure 1). To further investigate the gene alteration, we performed EGFR sequencing using the Sanger method, and duplication of introns 17 to 25 was detected. After the patient had received carboplatin-pemetrexed combination therapy, pembrolizumab monotherapy, gamma knife stereotactic radiosurgery for asymptomatic multiple brain metastases, and docetaxel-ramucirumab combination therapy, she was started on erlotinib (150 mg/day) in the fourth-line setting. Fourteen days after the onset of erlotinib treatment, chest x-rays revealed a pronounced reduction in the size of the lung metastases (Figure 2(a),(b)). At 70 days after treatment onset, computed tomography (CT) also showed a reduction in the size of multiple lung metastases (Figure 2(c),(d)), which was categorized as a partial response (PR) according to Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1.
The patient experienced progressive disease of the lung metastases at 133 days after erlotinib initiation, although no progression was apparent for the brain metastases during the treatment period. CT-guided percutaneous lung puncture was performed, and NGS with an Ion AmpliSeq Cancer Hotspot Panel v2 (ThermoFisher Scientific) found no gene alterations of note, while EGFR amplification (copy number of 4.3) remained evident by FISH. After she had experienced progression on an investigational therapy and S-1 monotherapy, the patient complained of numbness in her extremities. Contrast-enhanced, T1-weighted magnetic resonance imaging (MRI) of the head revealed diffuse leptomeningeal contrast enhancement following the contours of the gyri and sulci of the cerebellar folia and multiple deposits in the subarachnoid space, whereas MRI of the spine revealed nodular enhancement along the cord surface (Figure 3(a)). The patient was therefore diagnosed with carcinomatous meningitis and was started on osimertinib (80 mg/day) in the seventh-line setting. Her weight and body surface area on day 1 were 46 kg and 1.423 m2, respectively. Her Eastern Cooperative Oncology Group performance status (PS) was 2, and physical examination showed no abnormalities. Laboratory data were almost normal, other than grade 1 hyponatremia (Common Terminology Criteria for Adverse Events [CTCAE] version 4.0). Her symptoms of numbness were rapidly relieved, and her PS had improved to 0 by day 14. She achieved a PR at extracranial sites (Figure 3(c),(d)). MRI on day 29 revealed a complete response of the central nervous system metastases (Figure 3(a),(b)). Rebiopsy of lung metastatic tissue was performed during osimertinib treatment, and genomic profiling of the specimen with the FoundationOne CDx Panel (Foundation Medicine) showed the EGFR-KDD of introns 17 to 25, as well as TP53 R65fs*58, FGFR4 R248Q, ATRX R498G, and VEGFA S186F mutations. Osimertinib was effective overall for 14.5 months, after which follow-up MRI of the head showed progression of the brain metastases. Treatment was changed to afatinib, and the patient was transferred to another hospital, where she continued afatinib therapy for one month but experienced clinical progression with deterioration of performance status. She then received best supportive care and died two months after afatinib initiation. Her overall survival was 44 months from the start of first-line therapy with the carboplatin-pemetrexed regimen. DISCUSSION Here, we describe the case of a patient with EGFR-KDD who responded to erlotinib and subsequently to osimertinib. NGS of a tumor specimen obtained after erlotinib therapy did not reveal a potential resistance mechanism, including the EGFR T790M mutation. The limited data available for the treatment of NSCLC patients with EGFR-KDD have shown that it is sensitive to several EGFR-TKIs.3-11 Preclinical studies showed that afatinib inhibited the growth of cells expressing an EGFR-KDD form of the receptor and that cetuximab further enhanced the activity of afatinib.6,11 An in silico study suggested that osimertinib occupies the ATP binding site of EGFR-KDD more stably than gefitinib and afatinib do.12 In the case reported here, osimertinib was effective after the development of resistance to erlotinib, possibly reflecting the high binding affinity of osimertinib for the kinase domain of EGFR. Osimertinib also achieved marked disease control of the patient's carcinomatous meningitis.
A recent report described the efficacy of osimertinib in an EGFR-KDD-positive NSCLC patient with brain metastasis.10 However, as far as we are aware, no study has previously reported the efficacy of osimertinib for carcinomatous meningitis in a patient with EGFR-KDD. In the current case, EGFR-KDD was detected by Sanger sequencing and the FoundationOne CDx Panel but not by polymerase chain reaction (PCR) analysis or NGS with the Ion AmpliSeq Cancer Hotspot Panel v2. Given that most conventional sequencing platforms and PCR-based methods do not routinely detect EGFR alterations affecting introns, such screening procedures may miss some patients who would benefit from targeted therapy. We reviewed 19 EGFR-KDD patients from seven studies together with the case reported here (n = 20 in total) in order to investigate who should be further examined for EGFR-KDD.3-7,9,10 Patient ages ranged from 33 to 87 years (median, 59.5), and 12/21 (57.1%) patients were male. Among cases with known smoking history, all the patients were non- or light smokers; five out of six (83.3%) were non-smokers, and one out of six (16.7%) was a light smoker (one pack-year). Five out of 14 patients (35.7%) with EGFR-KDD simultaneously harbored EGFR amplification.4 Given those patient characteristics, non- or light smokers without any known driver gene alteration might be candidates for further examination targeting EGFR-KDD. EGFR amplification without a known EGFR mutation might also suggest the existence of EGFR-KDD. In such cases, use of an NGS panel designed to detect EGFR rearrangements occurring in introns, such as FoundationOne CDx or MSK-IMPACT (Memorial Sloan Kettering Cancer Center), should be considered. In conclusion, we report a case of NSCLC, positive for EGFR-KDD, that showed a notable response to two EGFR-TKIs. A complete response of carcinomatous meningitis was achieved with osimertinib treatment. Further studies are warranted to clarify the best treatment strategy for patients with EGFR-KDD. Figure 1. Fluorescence in situ hybridization (FISH) analysis showed amplification of EGFR (copy number of 5.85) in a lymph node biopsy specimen obtained before EGFR-TKI treatment; red and green signals indicate the EGFR gene and centromere 7 (CEP7), respectively. Figure 2. Chest x-rays revealed a marked reduction in the size of lung metastases at two weeks after initiation of erlotinib treatment (a, b), whereas chest CT scans at 70 days showed a partial response with a reduction in the size of multiple lung metastases (c, d). Figure 3. Brain and spinal cord MRIs (contrast-enhanced, T1-weighted imaging) revealed carcinomatous meningitis (arrows) before osimertinib therapy (a) and pronounced amelioration of this condition at 29 days after therapy onset (b), whereas CT showed a reduction in the size of pulmonary lesions at 55 days after the onset of osimertinib treatment (d) compared with baseline (c).
2,069.2
2021-07-09T00:00:00.000
[ "Medicine", "Biology" ]
Nanoparticle-Based RNAi Therapeutics Targeting Cancer Stem Cells: Update and Prospective Cancer stem cells (CSCs) are characterized by intrinsic self-renewal and tumorigenic properties, and play important roles in tumor initiation, progression, and resistance to diverse forms of anticancer therapy. Accordingly, targeting signaling pathways that are critical for CSC maintenance and biofunctions, including the Wnt, Notch, Hippo, and Hedgehog signaling cascades, remains a promising therapeutic strategy in multiple cancer types. Furthermore, advances in various cancer omics approaches have largely increased our knowledge of the molecular basis of CSCs, and provided numerous novel targets for anticancer therapy. However, the majority of recently identified targets remain 'undruggable' through small-molecule agents, whereas the implications of exogenous RNA interference (RNAi, including siRNA and miRNA) may make it possible to translate our knowledge into therapeutics in a timely manner. With the recent advances of nanomedicine, in vivo delivery of RNAi using elaborate nanoparticles can potently overcome the intrinsic limitations of RNAi alone, as it is rapidly degraded and has unpredictable off-target side effects. Herein, we present an update on the development of RNAi-delivering nanoplatforms in CSC-targeted anticancer therapy and discuss their potential implications in clinical trials. Introduction Cancer is one of the leading causes of poor quality of life and mortality worldwide [1,2]. Currently, most cancer patients have micro- or macroscopic systemic metastases when they are initially diagnosed [1]. As such, systemic therapies, including chemotherapy, targeted therapy, and immunotherapy, continue to be the main lines of treatment in antitumor strategies. Although numerous new drugs have emerged, drug resistance frequently occurs and remains a dominant obstacle to cancer treatment [3]. Initially, the combined administration of agents with distinct mechanisms of action was employed to overcome the resistance seen with single-agent therapy. This approach, named polychemotherapy, is effective in the early stage of chemotherapy, but its efficacy plateaus over the following period. Multiple mechanisms of drug resistance to structurally and mechanistically distinct antitumor agents have emerged as a new challenge [4]. Most patients who die from cancer eventually develop resistance to multiple therapeutic modalities [5]. Although new therapeutic strategies, including targeted therapies and immunotherapy, have been proposed [6,7], cancer resistance continues to emerge through similar mechanisms [8-10]. As a result, several approaches aimed at combating specific pathways of drug resistance emerged quickly and contributed significantly to improved prognosis [11]. However, the extensive crosstalk and compensatory mechanisms among these signaling pathways greatly limit therapeutic effectiveness. In contrast, gene therapy using RNAi greatly expands the number of candidates that can be targeted, enabling highly specific, widespread and possibly curative therapeutic effects [38]. Targeting Wnt Pathway with RNAi Therapeutics The canonical Wnt pathway is mediated by the activation of a β-catenin (encoded by CTNNB1)-centered transcriptional complex with the assistance of several transcriptional cofactors [39]. To date, at least 19 members have been identified in the WNT family, a series of secreted glycoproteins that function as ligands.
On the other hand, at least 10 isoforms of Frizzled family proteins act as surface receptors together with their various coreceptors, such as low-density lipoprotein receptor-related protein 5 (LRP5) and LRP6. This ligand-receptor interaction releases β-catenin from its degradation complex, leading to β-catenin accumulation. Subsequently, β-catenin translocates into the nucleus, causing activation of the T cell factor/lymphoid enhancer-binding factor (TCF/LEF) transcriptional complex. The activated TCF/LEF complex regulates the expression of diverse genes, especially genes supporting CSC properties, such as MYCN, Cyclin D1 (CCND1), and CD44 [13] (Figure 1). Cancer outcomes, including postoperative local relapse and metastasis, are closely related to the regulation of the Wnt pathway. For example, Wnt signaling is frequently upregulated and tightly linked with poor prognosis in cancers, commonly due to inactivating mutations of the adenomatous polyposis coli protein (APC) that mediates the ubiquitination and degradation of β-catenin [40,41]. In addition, induction of Dickkopf-related protein 1 (DKK1), an endogenous inhibitor of LRP5 and LRP6, delays cancer progression [42,43]. As a result, small-molecule inhibitors and antibodies targeting these critical components of the Wnt pathway were rapidly developed, but they have not yet demonstrated sufficient effectiveness or are still being investigated in small-scale clinical trials. In a phase I clinical study, for example, PRI-724, an inhibitor of the β-catenin interaction with the transcriptional coactivator cyclic AMP response element-binding protein (CBP), achieved stable disease in eight patients (40%), with a median progression-free survival (PFS) of two months [44]. Another antagonist of the β-catenin-CBP complex, E7386, significantly attenuated Wnt signaling in patient-derived hepatocellular carcinoma (HCC) xenograft models, and is being tested in early phase clinical studies [45]. In addition, antagonistic antibodies against Frizzled receptors, such as Vantictumab and Ipafricept, and the anti-ROR1 antibody Cirmtuzumab are still in phase I trials without convincing outcomes [46-51]. To date, many attempts have been made to interfere with Wnt signaling using RNAi in vitro and in vivo (Table 1). Treatment with nanoparticle-delivered siWNT1, as a single therapy or as part of combinatorial immunotherapies, acted to halt tumor growth in a lung adenocarcinoma model [52]. However, in this study, DOPC liposomes loaded with siWNT1 reduced the WNT1 mRNA amount by no more than half, which suggests that both the siWNT1 sequences and their nanocarriers could be modified to improve intracellular accumulation and the efficacy of gene silencing. Several miRNAs have been identified as Wnt signaling inhibitors in various cancer types, mainly directed against β-catenin, WNTs and the WNT ligand secretion mediator (WLS) (Figure 1). For example, miR-34 was identified to directly target multiple genes involved in the Wnt pathway, including WNT1, WNT3, LRP6, CTNNB1, and LEF1, and has a variety of functions in tumor suppression [53]. In the same way, another study found that miR-145 suppresses Wnt signaling by targeting CTNNB1 and significantly inhibits colon cancer cell growth [54].
In addition, some upstream activators of the Wnt pathway have also been demonstrated to be modulated by miRNAs with roles in antitumor effects, such as miR-8, which directly targets WLS [55], and miR-9, which modulates the translation of C-X-C motif chemokine receptor 4 (CXCR4) [56]. Accordingly, RNAi therapeutics targeting Wnt signaling might provide promising approaches for CSC therapy. These Wnt-interfering miRNAs can be delivered by specifically designed nanoparticles, but their application in vivo requires more evidence. Moreover, a number of genes in the downstream Wnt pathway can be targeted through RNAi delivery, especially CD44, which is also known as an identity biomarker of CSCs across multiple cancer types [57]. CD44, a cell-surface glycoprotein, is overexpressed in several types of CSCs and is frequently characterized by alternatively spliced variants [58]. CD44 is primarily known as a receptor of hyaluronic acid (HA), and is also reported to bind other extracellular matrix (ECM) ligands, including matrix metalloproteinases (MMPs), osteopontin and collagens, which are deemed to mediate intercellular interactions, ECM adhesion and migration [59,60]. The pre-mRNAs of this gene undergo complex alternative splicing to produce variants of various lengths, resulting in a range of protein isoforms with distinct biofunctions. The functional roles of these CD44 isoforms are not fully understood; however, isoform 4, with all the variable exons spliced, was indicated to have the strongest correlation with CSC properties in various cancer types, such as breast cancer [61,62], colorectal cancer [63-65], liver cancer [66], and bladder cancer [67]. HA binds to and activates CD44 signaling pathways that induce enhanced cell proliferation and survival, and modulates the cytoskeleton to promote cellular motility. Mounting evidence has demonstrated that a subpopulation of cancer cells positive for CD44 and negative for or low in CD24 (CD44+/CD24−/low) is characterized by high tumorigenicity, as a few hundred of these cells were able to form solid tumors in NOD/SCID mice that regained the heterogeneity of the parental tumor [68]. Moreover, CD44 is also well known to be critical for stemness maintenance in various cancers, including breast cancer [61,69,70], liver cancer [71], pancreatic cancer [72] and bladder cancer [73]; decreased CSC phenotypes were observed upon interfering with CD44 expression in these cancer types. As a result, several RNAi-delivering nanoparticles have been designed for cancer therapy through silencing CD44, individually or in combination with other antitumor drugs [74,75]. These separate studies indicated that the use of nanoparticles significantly increases the efficacy of RNAi-mediated CD44 knockdown in vivo, which may be further improved through modification of these nanoparticles. [Table 1 abbreviations: 1, Wnt ligand secretion mediator; 2, C-X-C motif chemokine receptor 4; 3, Avanti polar lipids liposome; 4, hyaluronic acid-chitosan nanoparticles; 5, hyaluronic acid/protamine sulfate interpolyelectrolyte complex; 6, poly(lactic-co-glycolic acid) nanoparticles; 7, mixed nanosized polymeric micelles; 8, NCK-associated protein 1; 9, phosphatidylinositol-4-phosphate 5-kinase type 1 γ; 10, ATPase family AAA domain-containing 2.]
Targeting Notch Pathway with RNAi Therapeutics Similar to the Wnt pathway, the Notch signaling pathway is another developmental pathway that mediates intercellular communication, and it is strongly correlated with multiple aspects of cancer biology, especially CSC properties and tumor immunity [95,96]. This pathway functions through the interaction of transmembrane ligands and receptors, comprising Delta-like ligand 1 (DLL1), DLL3 and DLL4, Jagged 1 (JAG1) and JAG2 as canonical Notch ligands, and the Notch 1-4 paralogues as Notch receptors [97]. This interaction between neighboring cells induces a two-step proteolytic cleavage of the Notch receptor, with the first-step cleavage performed by disintegrin and metalloproteinase domain-containing protein (ADAM) enzymes, either ADAM10 or ADAM17, and the second-step cleavage mediated by γ-secretase. The cleaved Notch intracellular domain (NICD) is then released and translocates into the nucleus, where, in combination with several other transcriptional cofactors such as Mastermind-like 1 (MAML1), it regulates the expression of a range of genes, especially CSC-correlated genes such as MYC, CCND3 and ERBB2 (Figure 2) [98-100]. Notably, the significance of Notch signaling outputs in the context of CSCs is highlighted by the findings that Notch signaling interference has the potential to simultaneously repress tumorigenesis and drug resistance. Several Notch-pathway inhibitors with distinct targets and mechanisms have been developed or are now under clinical investigation. γ-Secretase was the first target used for designing inhibitors of Notch signaling. Inhibition of γ-secretase halts NICD release by blocking the second cleavage of Notch receptors, which was shown to have strong antitumor activity in various preclinical cancer models, such as pancreatic adenocarcinoma (PDAC) and T cell acute lymphoblastic leukemia (T-ALL) [101,102]. However, the majority of these γ-secretase inhibitors have been discontinued, most commonly owing to unfavorable outcomes in phase I/II clinical studies [13]. In addition, their off-tumor side effects are another frequent problem, typically involving the gastrointestinal system and electrolyte balance. Beyond small-molecule agents, several antagonistic monoclonal antibodies (mAbs) have been developed to target distinct domains of Notch ligands and receptors, which is another strategy to inhibit aberrant Notch signaling. In the most investigated example, Demcizumab is a humanized anti-DLL4 IgG2 mAb whose antitumor efficacy in combination with specific first-line antitumor drugs has been tested in PDAC and non-small cell lung cancer (NSCLC) in various phase I/II trials [103,104]. Brontictuzumab is an antagonistic mAb against NOTCH1; however, it was revealed to have limited antitumor activity in several clinical studies focusing on hematological malignancies and solid tumors [105,106]. Similar to the γ-secretase inhibitors, most clinical studies of these mAbs were suspended due to unfavorable results in early phase studies.
As such, targeting Notch components with RNAi therapeutics modified with nanomedicine might be another promising strategy to inhibit aberrant Notch signaling in cancer treatment (Table 1). CSCs are considered to contribute greatly to tumor relapse in hepatocellular carcinoma (HCC), accounting for poor survival, while a micellar nanoparticle that delivers siNOTCH1 was able to efficiently suppress NOTCH1 expression in HCC cells, leading to increased sensitivity to platinum and a decreased CSC percentage in a xenograft model [54]. Several other studies demonstrated the feasibility of inhibiting Notch signaling by delivering siRNA targeting Notch ligands or receptors in vivo [107-110]. For example, siNOTCH1-loaded nanoparticles significantly inhibited Notch signaling, thereby attenuating rheumatoid arthritis in mouse models [107,110]. Thus, nanoparticle-aided, highly effective siRNA shows promising implications in Notch-directed cancer therapy. In addition to siRNA, several miRNAs involved in the regulation of Notch signaling have been used as monotherapy or codelivered with chemical drugs in various preclinical cancer models (Figure 2), and mounting numbers of miRNAs are implicated in the regulation of Notch signaling and cancer stemness [111]. Most notably, miR-34a is well documented to target and attenuate the expression of NOTCH1, leading to progression arrest in multiple cancer types [47,112]. Several subsequent studies showed that nanoparticle-carried miR-34a potently decreased the expression of NOTCH1, resulting in inhibition of cell proliferation and migration in breast cancer [48,49,51] and reduced viability in fibrosarcoma [51]. In CSC-enriched glioma, exogenous miR-10b exposure led to suppression of NOTCH1, diminishing invasiveness, angiogenesis and tumor growth in the brain, and significantly prolonging the survival of tumor-bearing mice [46]. Thus, with the assistance of nanoparticles, these siRNAs and miRNAs could be potent nucleic acid therapeutics against CSCs by interfering with Notch signaling.
Hippo Pathway and Potential RNAi Targets The highly conserved Hippo signaling pathway acts to regulate the balance of cell proliferation and apoptosis [113]. The functions of the canonical Hippo signaling pathway are mediated by a transcriptional complex with the coactivators Yes-associated protein 1 (YAP1) and WW domain-containing transcription regulator 1 (WWTR1, usually known as TAZ), which promote the transcription of target genes involved in CSC properties, such as epithelial-to-mesenchymal transition (EMT), anti-apoptosis, and self-renewal [113]. Indeed, increased activity of YAP1 and/or TAZ leads to the expansion of CSC populations and cancer progression [114]. On the other hand, the Hippo pathway is regulated by the successive activation of two kinase complexes, the first comprising macrophage stimulating 1 (MST1) and MST2, and the second comprising large tumor suppressor kinase 1 (LATS1) and LATS2, together with the adaptors salvador family WW domain-containing protein 1 (SAV1) and MOB kinase activator 1 (MOB1), respectively. In this context, YAP1 and TAZ can be phosphorylated and driven into degradation in response to upstream signals, intercellular contact, G protein-coupled receptors and cell adhesion (Figure 3). Targeting CSCs by blocking Hippo signaling has been well documented and has shown promising results (Table 1) [115,116]. For example, several small-molecule inhibitors of the YAP1 transcriptional complex, including Verteporfin, CA3 and a vestigial-like protein 4 (VGLL4)-mimicking peptide, were shown to have potent antitumor activity in various cancer types, in particular inhibiting tumorigenesis, CSC enrichment and resistance to radiation [117-119]. In addition, treatment with a NEDD8-activating enzyme (NAE) inhibitor leads to rapid degradation of the YAP1/TAZ complex via suppression of the cullin-RING subtype of ubiquitin ligases that stabilize the LATS kinase complex [120,121]. Accordingly, Hippo cascades also have great potential as RNAi targets, and some attempts have been made in this area. Nanoparticle-delivered siRNAs for MST1/2 were shown to effectively suppress the expression of MST1 and MST2 and to enhance Hippo signaling, thereby leading to hepatocyte proliferation [122]. In addition, several miRNAs have been found to be involved in Hippo signaling regulation, and some have shown great antitumor activity as therapeutics (Figure 3). miR-195 was identified to suppress Hippo signaling by binding to the 3′-untranslated region (3′-UTR) of the human YAP1 mRNA; its expression was validated in a separate cohort of colorectal carcinoma (CRC) and was significantly associated with poor patient survival [55]. Subsequent experiments indicated that overexpression of miR-195-5p in CRC cell lines repressed cell growth, colony formation, invasion, and migration [55]. Another study revealed that the expression of miR-582 decreased the proportion of phosphorylated YAP1/TAZ in NSCLC cells, potentially by targeting actin regulators [56]. As such, these miRNAs are potential candidates as CSC-targeting therapeutics, although this requires further investigation. However, Hippo-directed RNAi therapeutics have so far been investigated only in vitro and still require more preclinical evidence before entry into clinical trials.
Hedgehog Pathway and Potential RNAi Targets The Hedgehog signaling pathway has an important role in embryonic development, and its aberrant activity has been linked to a variety of tumor types [123]. Hedgehog signaling is mediated by three mature Hedgehog ligands: Sonic hedgehog (SHH), Indian hedgehog (IHH) and Desert hedgehog (DHH). The binding of Hedgehog ligands to Patched (PTCH) transmembrane receptors relieves their inhibitory effect on Smoothened (SMO), thereby leading to nuclear localization and activation of the GLI transcription factors. The activated GLIs drive the expression of genes with roles in cell self-renewal, proliferation, and survival (Figure 4) [123]. This pathway provides a novel target for cancer therapy because the modulation of Hedgehog signaling is tightly correlated with CSC properties [124]. The investigation of small-molecule agents targeting the Hedgehog pathway in cancer continues to be an active research area, mainly directed against Hedgehog ligands, SMO or the GLIs. Several SMO inhibitors, including Vismodegib, Sonidegib and Glasdegib, have been approved in succession owing to their potent activity in repressing Hedgehog signaling and cancer progression [125-127]. Accordingly, these targets can also be silenced with siRNA, resulting in suppression of Hedgehog signaling, which has been tested in a range of preclinical studies (Table 1) [128-130]. Additionally, several miRNAs have been identified as being involved in the regulation of Hedgehog signaling (Figure 4), some of which might be used as antitumor therapeutics with the assistance of nanomedicine. Upregulated miR-326 was revealed to decrease SMO expression, resulting in an elevated rate of apoptosis in chronic myeloid leukemia (CML) cells, which could be beneficial in eradicating CD34+ CML stem cells [78]. Another SMO-directed miRNA, miR-14, was found to suppress Hedgehog signaling activity in a screen of the 3′-untranslated regions (3′-UTRs) of Hedgehog pathway genes against a genome-wide miRNA library; it functions by cotargeting PTCH and SMO [131]. Various GLI-directed miRNAs have also been identified in different studies. For example, separate studies found that miR-378a-3p directly targets Gli3 in activated hepatic stellate cells and leads to reduced expression of Gli3 [132,133]. Results from another group indicated that upregulated miR-324-5p significantly inhibited GLI1 expression, resulting in a reduced stem cell compartment and decreased cell growth and survival in multiple myeloma [77]. In lung adenocarcinoma cells, interference with miR-182-5p mimicked GLI2 silencing and resulted in the suppression of tumorigenesis and cisplatin resistance [80]. In addition, several miRNAs, such as miR-186 and miR-338-5p, have been indicated to inhibit Hedgehog signaling through as yet unclear mechanisms [79,81].
The identification of these Hedgehog pathway-directed miRNAs opens possibilities for their application in cancer therapy, delivered alone or together with antitumor drugs using nanoplatforms, although their in vivo efficacy in targeting Hedgehog signaling requires additional research. Other CSC Targets for RNAi Therapy In addition to CD44, another well-documented CSC marker is CD133, also known as prominin 1 (PROM1), which functions to suppress stem cell differentiation. CD133 was first identified in tumor-initiating cells (TICs) of glioma, since injection of as few as one hundred CD133+ glioma cells produced a new mass with phenotypes similar to the original tumor, whereas injection of one hundred thousand CD133− glioma cells could not produce a tumor. CD133 was subsequently validated as a CSC marker in HCC, colorectal carcinoma, and ovarian cancer. However, the pathophysiological mechanisms of CD133 in the maintenance of cancer stemness remain unknown. Findings in CSC lines showed that CSCs in the G1/G0 phase have reduced CD133 activity compared with those in the G2/M phase, suggesting a tight link between CD133 and the cell cycle [134]. In addition, CD133 has been suggested to play a role in cellular glucose metabolism; in this context, high glucose stimulation induced the upregulation of CD133 with concomitant downregulation of its phosphorylation [135]. As a result, silencing CD133 through nanoparticle-delivered RNAi is deemed a promising method for CSC-targeted cancer therapy. Several other CSC biomarkers and effectors, such as TWIST1, ALDH, EpCAM, and glucose transporters (GLUTs), have also been used as CSC targets for cancer therapy, largely depending on the specific cancer type. Moreover, the number of ABC transporters was found to correlate with maturation state, with the most primitive cells exhibiting the greatest efflux activity. For example, ABC subfamily B member 1 (ABCB1), also known as MDR-1 or P-glycoprotein (P-gp), was the first to be identified and cloned, and was subsequently shown to be responsible for clinical multidrug resistance (MDR) in many cancers, such as colorectal, breast, and lung cancer. Subsequently, C subfamily member 1 (ABCC1) and G subfamily member 2 (ABCG2) were identified and found to mediate clinical MDR across cancer types. There are 48 members of the human ABC family, some of which exhibit exceptional pharmacological specificity.
The most well-known reactive oxygen species (ROS) scavenger, NRF2, has been shown to be highly expressed in CSCs, and NRF2 silencing restores high ROS levels and sensitivity to chemotherapy. A wide spectrum of agents exert antitumor activity through the production of excessive ROS, but CSCs possess an enhanced ROS-elimination system that reduces ROS-mediated DNA damage and cell apoptosis. Considerable exploratory work has been carried out, but no highly specific and efficient compounds targeting these CSC factors have been found or synthesized. As such, numerous studies have been performed to verify the feasibility of RNAi delivery for the abrogation of CSC-associated factors, an area that should progress rapidly with the advance of nanomaterials. Nanoplatforms for RNAi Delivery As discussed above, RNAi-based therapy has recently come to be utilized as a novel and attractive strategy for cancer treatment. However, RNAi technology presents many limitations for potential clinical application, including rapid clearance by the renal system, limited target-tissue uptake selectivity, low efficiency of cellular uptake, and short-lived efficacy. To overcome these obstacles, researchers have introduced the use of nonviral carriers for the delivery of RNAi molecules [136,137]. Nanocarriers, a type of nonviral carrier, have attracted considerable attention, as they are capable of promoting drug administration and drug accumulation in tumor tissues through elaborate drug encapsulation, thereby maximizing therapeutic efficacy and minimizing undesirable side effects. Thus, nanocarriers are emerging as an outstanding delivery system for RNAi molecules [137-139]. The extensively investigated nanocarriers applied to RNAi molecule delivery can be generally classified into four major groups: polymer-based nanoparticles, lipid-based nanoparticles, inorganic nanoparticles, and bio-inspired nanoparticles (Figure 5) [140-143]. To deepen our understanding of the potential of these various nanoparticles for the delivery of RNAi molecules in cancer therapy, the following sections briefly review the different nanocarriers for RNAi delivery and the recent progress of nanocarrier-based RNAi therapy targeting cancer stem cells. Lipid-Based Nanoparticles Generally, lipid-based nanoparticles are artificially manufactured drug delivery vehicles in which the inner core is completely covered by an outer lipid bilayer coating. Lipid-based nanoparticles are widely used owing to their satisfying biocompatibility, good stability, controlled drug release, and targeting properties. Furthermore, for successful drug delivery, the physicochemical parameters of lipid-based nanoparticles can be modified by changing the lipid components, drug-lipid ratio, and fabrication process.
In recent decades, a wide variety of lipid-based nanoparticles have been reported, including solid lipid nanoparticles, liposomes, micelles, and emulsions [144-146]. Among these various lipid-based nanoparticles, liposomes are the most commonly used because of their excellent performance characteristics, such as high stability, good bioavailability, controlled release, low toxicity, long-term circulation, and tumor-targeting specificity. Liposomes have been reported as drug delivery vehicles and have attracted increasing interest in both the basic and clinical biomedical sciences. According to their surface charge distribution, liposomes are divided into three types: cationic, neutral, and anionic liposomes. Cationic liposomes are the most broadly used as RNAi delivery carriers because of their high affinity for negatively charged nucleic acids. The lipids of cationic liposomes are made up of cationic lipids and neutral auxiliary lipids; the cationic lipids include DOTMA, DOTAP, DOSPA, DMRIE, and DC-Chol, and the neutral auxiliary lipids include DOPE, DOPC, PE, phosphatidylcholine, and cholesterol. With the rapid progress of liposome-based technology, liposomes have evolved into multifunctional pharmaceutical nanocarriers combining several specific properties, such as long-circulating liposomes, pH-sensitive liposomes, and targeted liposomes [147-150]. Currently, solid lipid nanoparticles have also been applied for the systemic delivery of RNAi because they can be sterilized and lyophilized owing to their exceptional stability [151]. These recent advances have improved the use of lipid-based nanoparticles for gene-based therapy targeting cancer stem cells. For example, Li and colleagues developed novel GLI1-targeted siRNA nanoparticles functionalized with a DSPE-HA conjugate as a specific ligand of the CD44 receptor. In this study, the GLI1-targeted siRNA nanoparticles selectively eliminated gastric CSCs by dual-targeting CD44 and GLI1, and consequently exhibited impressive therapeutic efficacy in gastric cancer [74]. A separate study demonstrated that the drug resistance of hepatocellular carcinoma can be overcome by eliminating HCC CSCs through codelivery of Bmi1 siRNA with cisplatin in cationic nanoparticles [152]. Polymer-Based Nanoparticles Polymer-based nanoparticles are well-exploited carriers for RNAi delivery. In terms of origin, they are classified into two major groups: natural and synthetic polymer-based nanoparticles [153,154]. The natural polymers used for gene-based therapy, which include chitosan, atelocollagen, folate (FA), HA, and gelatin, are biocompatible, biodegradable, and generally nontoxic, even at high concentrations. Chitosan has been successfully used in gene delivery systems [155-157]. Novel chitosan nanoparticles were developed to deliver functional miRNA mimics to macrophages, regulating ABCA1 expression and cholesterol efflux to target atherosclerotic lesions [158]. Apart from natural polymer-based nanoparticles, synthetic polymer-based nanoparticles have also been used for the delivery of RNAi molecules. Synthetic polymer-based nanoparticles dominate the majority of gene delivery systems and mainly consist of chitosan derivatives, PLGA, PEI, PVA, PLA, PEG, and PAMAM. Similar to natural polymers, synthetic polymers are characterized by good stability, high drug-loading capacity, and biodegradability [159-161].
Notably, synthetic polymers are relatively easy to modify with targeting ligands and stimuli-responsive units for controlled release and targeted delivery. However, some synthetic polymers cannot be directly utilized for RNAi molecule delivery owing to a lack of cationic motifs, which leads to weak electrostatic interactions between the polymers and RNAi molecules. To resolve this issue, such nanoparticles need to be modified with various cationic motifs or cationic polymers [162]. A generation 6 (G6) TEA-core PAMAM dendrimer formed stable dendriplexes with a p70S6K siRNA and showed significant tumor suppression by inhibiting the stemness and metastasis of ovarian cancer [163]. To effectively enhance the therapy of ovarian cancer, an impressive delivery system was designed that combines a PPI dendrimer, a synthetic analog of the LHRH peptide, paclitaxel, and siRNA molecules targeted to CD44 mRNA, which together act as a specific inducer of CD44+ ovarian cancer cell death. Consequently, treatment with the designed nanoparticles led to efficient ovarian cancer suppression [75]. In addition, a novel aptamer-PEI-siRNA nanoparticle was utilized for targeting the putative cancer stem cell marker EpCAM, leading to inhibition of cancer cell proliferation. Another group demonstrated that NPsiPLK1 with LY364947 pretreatment cooperatively promotes remarkable antitumor effects in breast cancer [164]. In particular, a novel synthetic siRNA nanoparticle composed of a cationic oligomer (PEI1200), a hydrophilic polymer (polyethylene glycol) and a biodegradable lipid-based crosslinking moiety was developed. This nanoparticle, carrying siMDR1, could significantly downregulate the expression of MDR1 in human colon CSCs, effectively increasing their chemosensitivity to paclitaxel [165]. Likewise, the reduction of MALAT1 by delivering targeted nanoparticles carrying MALAT1 siRNA improved the sensitivity of glioblastoma to temozolomide [166]. It is also worth noting that targeting glucose uptake by systemic delivery of NPsiGLUT3, a cationic lipid-assisted PEG-PLA nanoparticle that can efficiently deliver specific siRNA targeting GLUT3, is a successful strategy for inhibiting the growth of glioma cells [167]. Taken together, although substantial progress has been achieved in the field of polymer-based nanoparticles over the past decades, many concerns remain about the ultimate fate of synthetic polymers and their degradation products. Inorganic Nanoparticles In the past few years, inorganic nanoparticles have attracted increasing attention for potential diagnostic and therapeutic applications due to their nanoscale size and unique physicochemical characteristics compared with lipid- and polymer-based nanoparticles. In particular, inorganic nanoparticles possess excellent electrical, optical and magnetic properties, making them applicable for the imaging and ablation of malignant tissue. Numerous inorganic nanoparticles have been reported, including mesoporous silica nanomaterials (MSNs), carbon nanotubes (CNTs), quantum dots (QDs), and metal nanoparticles (e.g., iron oxide and gold nanoparticles) [168-170]. Among inorganic nanoparticles, MSNs are the most commonly applied due to the following critical physicochemical properties: ordered porous structure, large surface area and pore volume, highly tunable particle size, two functional surfaces, and good biocompatibility.
Because unmodified MSNs often exhibit negative surface charges that reduce interactions with negatively charged nucleic acids, MSNs are usually transformed into positively charged, functionalized MSNs by appropriate approaches, including amination, codelivery with metal cations, and coassembly with cationic polymers. Therefore, in addition to surface charge modification to enhance gene-loading capacity, MSNs have been modified with multiple targeting agents to achieve better applications [171-173]. For example, codelivery of siTWIST-MSN-HA and cisplatin showed significant advantages in targeting specificity and targeting efficacy; these nanoparticles have potential applications for overcoming clinical challenges in ovarian and other TWIST-overexpressing cancers [174]. Despite the exciting progress in the development of MSN-based nanoparticles for gene delivery, many challenges still need to be addressed to facilitate their further development. In particular, the benefits and disadvantages of MSN-based carriers in vivo should be systematically investigated. Carbon nanotubes exhibit specific physical properties (structural, electronic, optical, and magnetic) that render them innovative materials for the delivery of therapeutic molecules. They can be composed of either single-walled (SWNT) or multi-walled carbon nanotubes (MWNT). Although SWNT and MWNT have been used to form stable complexes with siRNA to silence tumor-related gene expression in tumor cells [175,176], the application of siRNA delivery with functionalized carbon nanotubes in the targeted treatment of CSCs has not yet been demonstrated. Therefore, this attractive approach based on carbon nanotubes presents a potential therapeutic strategy for targeting CSCs using RNAi delivery across multiple tumor types. Quantum dots are fluorescent semiconductor materials. Recent advances in new approaches to QD synthesis and coating enable quantum dots to be used as ideal candidates for imaging, diagnostics, and therapeutic delivery. In the field of therapeutic delivery, QDs have been used to promote gene therapy through the delivery and imaging of RNAi treatments [177-181]. However, little is known about the effect of QD/RNAi complexes in CSCs, and the application of QD/RNAi nanoparticles for gene silencing in CSCs needs to be further explored. Metal nanoparticles are another highly exploited material for inorganic nanoparticle synthesis. Gold nanoparticles (AuNPs), a group of metal nanoparticles, have been widely used in biomedical imaging, diagnostic and therapeutic applications. In particular, the design of AuNP-based covalent and noncovalent RNAi nanoparticles provides a promising therapeutic option for cancer and a number of other human diseases [182,183]. For instance, a glucose-installed, sub-50-nm unimer polyion complex-assembled gold nanoparticle (Glu-NP) was developed for the systemic delivery of siRNA to GLUT1-overexpressing breast cancer stem-like cells; the results suggested that multifunctional modified gold nanoparticles could be promising for CSC-targeted cancer treatment [184]. It is worth noting that the potential toxicity of metal nanoparticles needs to be carefully and precisely studied in gene therapy applications.
Bio-Inspired Nanoparticles In addition to the nanoparticles briefly mentioned above, researchers have extensively exploited new bio-inspired nanoparticles for gene delivery, such as exosome-mimetic nanoparticles. Exosomes are nanosized extracellular vesicles naturally secreted by cells, whose function is to trigger intercellular communication by transferring biological information between cells. However, cell-derived exosomes are available in relatively limited quantities, and their purification is difficult. Thus, the generation of exosome-mimetic nanoparticles based on knowledge of exosome surface structure and physiology is an attractive concept for the development of favorable future nanoparticles for the delivery of RNAi therapeutics. Exosome-mimetic nanoparticles display superior physicochemical properties compared with cell-derived exosomes. For example, exosome-delivered siRNA against RAD51 and RAD52 could decrease fibrosarcoma cell viability and proliferation [185]. In a similar attempt, exosome-mimetic nanoplatforms were designed for targeted cancer drug delivery. Fuente et al. designed a multifunctional exosome-mimicking nanoplatform, F-EMNs loaded with therapeutic RNAs (miR-145), that could efficiently transport therapeutic RNAs to targeted cells. In another study, bioengineered exosome-mimetic nanoparticles were designed to deliver chemotherapeutic drugs; the results suggested that the antitumor effect of the exosome-mimetic nanoparticles Raw264.7NVDox was significantly greater than that of conventional chemotherapeutic-loaded nanoparticles [186,187]. Research on exosome-based cancer therapies is not limited to experimental models, and several clinical studies have been completed or remain ongoing. In a phase I study, autologous dendritic cell (DC)-derived exosomes (Dex) were directly loaded with MAGE 3 antigens and tested against metastatic melanoma [188]. All of these interesting studies suggest that exosome-based RNAi delivery systems may have advantages in anti-CSC targeted cancer therapy. Moreover, some studies have indicated that DNA/RNA-based nanoparticles, as bio-inspired nanoparticles, are suitable for drug delivery and tissue engineering [141-143]. RNA nanotechnology was applied to design RNA nanoparticles containing anti-miR-21 and CD133 aptamer payloads for targeting triple-negative breast cancer (TNBC). These RNA nanoparticles displayed not only high tumor-targeting specificity but also high efficacy for tumor growth inhibition in TNBC, further highlighting the potential application of DNA/RNA-based nanoparticles in cancer therapy [189]. Similarly, hTERT promoter-driven, VISA nanoparticle-delivered miR-34a (TV-miR-34a) was utilized in breast cancer stem cells (BCSCs) and showed a great therapeutic effect; in this context, the VISA vector is essentially a VP16-GAL4-WPRE integrated systemic amplifier. In brief, TV-miR-34a can significantly inhibit breast cancer cell growth and has great application potential in breast cancer therapy [190]. Meanwhile, bio-inspired functional lipoprotein-like nanoparticles have been studied for gene delivery [191]. For example, CXCR4 receptor-stimulated lipoprotein-like nanoparticles carrying miR-34a achieved efficient accumulation in glioma-initiating cells and subsequently effectively restrained glioma-initiating cell stemness and chemoresistance [192]. Accordingly, there are still many challenges and opportunities for bio-inspired nanoparticles, and they will certainly play a critical role in the realization of multifunctional nanoparticles for RNAi delivery.
Conclusions The CSC hypothesis posits that CSCs are largely responsible for tumor heterogeneity, tumorigenesis, and therapy resistance, having an important role in cancer initiation and progression [14,25]. In particular, CSCs and the heterogeneous tumor mass they fuel are widely recognized to facilitate cancer resistance to various therapeutic approaches, which is directly correlated with poor clinical outcomes [193,194]. In this context, CSCs are characterized by low proliferative activity but a high rate of asymmetric division that produces two cell populations, one retaining stemness and the other acquiring high proliferative capacity [195]. Therefore, effective treatment strategies must also address slowly proliferating CSCs, whereas most traditional therapeutics are directed at rapidly growing non-CSCs [3,14]. In recent years, substantial advances have been achieved in various areas of cancer gene therapy, especially with the assistance of rapidly developing delivery materials that greatly improve the stability and targeting capacity of nucleic acids in vivo. Moreover, achievements in genomics research have largely increased our understanding of the genetic basis of cancers and provided a range of new targets for therapy [196]. However, the majority of recently identified targets remain 'undruggable' by chemical agents. As such, the potential of exogenous RNAi may make it possible to translate our knowledge into therapeutics in a timely manner. More than a decade after the initial application of RNAi in cancer treatment, several RNAi-based therapeutics have acquired regulatory approvals to be tested in early phase clinical trials [32]. In addition, a range of new miRNAs with potential roles in CSC regulation have been revealed with progress in epigenomics studies, providing emerging candidates as CSC-directed RNAi therapeutics. With the recent advances of nanomedicine, in vivo delivery of RNAi using elaborate nanoparticles can potently overcome the intrinsic limitations of RNAi alone, namely rapid degradation and unpredictable off-target effects; however, their broad application will require continued efforts, especially regarding RNAi stability, interference efficiency and targeting ability. It is important to strictly audit the performance of these agents as they advance into later-stage trials; encouragingly, a group of cancer-directed RNAi therapeutics has already shown promising clinical efficacy through subcutaneous administration [32]. This highlights the possibility of siRNA therapeutics in clinical applications, and suggests that RNAi has broad potential in cancer therapy in humans. As discussed above, mounting numbers of siRNAs and miRNAs are involved in CSC suppression, and many of them could plausibly be delivered in vivo for CSC-directed therapy. In addition, a large number of nanoparticles endowed with stability and targeting capacity have shown promising results as RNAi cargoes. Therefore, their various combinations offer growing possibilities for in vivo investigation.
9,245.6
2021-12-01T00:00:00.000
[ "Medicine", "Materials Science" ]
REGIONAL GEOLOGICAL MAPPING IN THE GRAHAM LAND OF ANTARCTIC PENINSULA USING LANDSAT-8 REMOTE SENSING DATA

Geological investigations in Antarctica confront many difficulties due to its remoteness and extreme environmental conditions. In this study, the applications of Landsat-8 data were investigated to extract geological information for lithological and alteration mineral mapping of poorly exposed lithologies in inaccessible domains such as Antarctica. The north-eastern Graham Land, Antarctic Peninsula (AP) was selected in this study to conduct a satellite-based remote sensing mapping technique. The Continuum Removal (CR) spectral mapping tool and Independent Components Analysis (ICA) were applied to Landsat-8 spectral bands to map poorly exposed lithologies at regional scale. Pixels composed of distinctive absorption features of alteration mineral assemblages associated with poorly exposed lithological units were detected by applying the CR mapping tool to the VNIR and SWIR bands of Landsat-8. Pixels related to Si-O bond emission minima features were identified by applying the CR mapping tool to the TIR bands in poorly mapped and unmapped zones of north-eastern Graham Land at regional scale. Anomaly pixels in the ICA image maps related to spectral features of the Al-O-H, Fe, Mg-O-H and CO3 groups, along with well-constrained lithological attributions from felsic to mafic rocks, were detected using the VNIR, SWIR and TIR datasets of Landsat-8. The approach used in this study performed very well for lithological and alteration mineral mapping with little available geological data or without prior information of the study region.

INTRODUCTION

Remote sensing satellite imagery has high potential to provide a solution to overcome the difficulties and limitations associated with geological field mapping and mineral exploration in Antarctic environments. To date, only a few studies have used remote sensing satellite data for lithological and alteration mineral mapping in Antarctica. Landsat-8 was launched on 4 February 2013, carrying the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). These two instruments collect image data for nine visible, near-infrared and shortwave infrared bands and two longwave thermal bands (Roy et al., 2014). They have high signal-to-noise ratio (SNR) radiometer performance, enabling 12-bit quantization of the data and thus more bits for better land-cover characterization. Landsat-8 provides moderate-resolution imagery, from 15 meters to 100 meters, of the Earth's surface and Polar Regions. Landsat-8 data have been used for geological mapping and mineral exploration around the world (Ali and Pour, 2014; Pour and Hashim, 2015a,b; Han and Nelson, 2015; Mwaniki et al., 2015). However, Landsat-8 imagery has not yet been evaluated for lithological mapping in Polar Regions. The high radiometric sensitivity of the Landsat-8 TIRS bands has high potential for mapping exposed lithological units in Polar Regions through variation in temperature, as felsic to mafic rocks show a differing response to solar heating due to their different mineral compositions (Roy et al., 2014).
In this research, the north-eastern Graham Land, Antarctic Peninsula (AP) was selected to conduct a remote sensing satellite-based mapping approach to detect poorly exposed lithological units and alteration mineral assemblages in Antarctic environments. The main objective of this study is to introduce and test the most suitable image processing techniques for detecting poorly exposed lithologies and alteration mineral assemblages in inaccessible regions with little or no prior geological information of the study area (such as Antarctic environments), using the reflective and thermal bands of Landsat-8.

Geology of the study area

The Antarctic Peninsula (AP) is the most accessible region and largest tectonic block of West Antarctica and consists of a number of large domains (Fig. 1). The geology of the Antarctic Peninsula is divided into six broad lithological groups (Fig. 1): (1) the metamorphic basement; (2) Palaeozoic to Triassic sedimentary rocks; (3) Jurassic to Cenozoic sedimentary rocks; (4) non-metamorphosed intrusive rocks; (5) Jurassic to Palaeogene volcanic rocks; and (6) Neogene to Recent alkaline volcanic rocks. The metamorphic basement is dominantly composed of orthogneisses and metabasites. The Palaeozoic-Triassic sedimentary rocks are the oldest sedimentary sequences on the Antarctic Peninsula, recording continental extension. The Jurassic-Cenozoic sedimentary rocks are Lower Jurassic to Lower Cretaceous turbidite sandstones and conglomerates with minor volcanic rocks, chert and siliceous mudstones that form the LeMay Group, the lowest stratigraphic strata on Alexander Island. The non-metamorphosed intrusive rocks are mafic to felsic plutonic rocks with dominantly calc-alkaline continental-margin affinities and are prevalent on the Antarctic Peninsula. Volcanic rocks on the Antarctic Peninsula were assigned to the Antarctic Peninsula Jurassic-Palaeogene Volcanic Group. Neogene-Recent alkaline volcanic rocks are distributed along the Antarctic Peninsula; they record a change in eruptive setting from subduction to extensional regimes (Burton-Johnson and Riley, 2015).

Remote sensing data

A low cloud coverage (2.86%) level 1T (terrain corrected) Landsat-8 image LC82181062014272LGN00 (Path/Row 218/106) was obtained through the U.S. Geological Survey Earth Resources Observation and Science Center (EROS) (http://earthexplorer.usgs.gov). It was acquired on September 29, 2014 for the northern part of Graham Land, Antarctic Peninsula (AP). The image map projection is Polar Stereographic for Antarctica using the WGS-84 datum. Figure 2 shows the Landsat-8 image of the northern part of Graham Land as a red-green-blue (RGB) color combination of bands 5, 7 and 10, respectively. The data were processed using the ENVI (Environment for Visualizing Images) version 5.2 and ArcGIS version 10.3 software packages.

Data analysis

The Continuum Removal (CR) spectral mapping tool and Independent Components Analysis (ICA) were applied to Landsat-8. We applied the CR spectral mapping tool to the Landsat-8 (VNIR+SWIR+TIR) bands to normalize reflectance spectra and isolate the absorption bands for detecting pixels related to exposed lithological units and associated alteration mineral assemblages in the study area at regional scale. The continuum is a convex hull fit over the top of a spectrum using straight-line segments that connect local spectral maxima. The first and last spectral data values are on the hull; therefore, the first and last bands in the output continuum-removed data file are equal to 1.0 (Research System, Inc., 2008). Independent component analysis (ICA) was used in this study for detailed mapping of poorly exposed lithologies and alteration mineral zones (anomaly and target detection) in the context of polar environments, where little prior information is available. ICA is used on multispectral or hyperspectral datasets to transform a set of mixed, random signals into components that are mutually independent (Research System, Inc., 2008). It is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible (Hyvarinen and Oja, 2000).
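To make the continuum-removal normalization concrete, the following is a minimal sketch of how a single pixel's spectrum can be divided by its convex-hull continuum. The function name and the NumPy-based upper-hull construction are illustrative assumptions; the study itself used ENVI's implementation.

```python
import numpy as np

def continuum_removed(wavelengths, reflectance):
    """Divide a spectrum by its convex-hull continuum.

    The hull is built from straight-line segments over local spectral
    maxima, so the first and last output values equal 1.0, and absorption
    features appear as dips below 1.0.
    """
    x = np.asarray(wavelengths, dtype=float)
    y = np.asarray(reflectance, dtype=float)
    hull = []  # indices of upper-hull vertices, scanned left to right
    for i in range(len(x)):
        while len(hull) >= 2:
            x1, y1 = x[hull[-2]], y[hull[-2]]
            x2, y2 = x[hull[-1]], y[hull[-1]]
            # Drop the last vertex if it falls below the chord to point i.
            if (x2 - x1) * (y[i] - y1) - (y2 - y1) * (x[i] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    continuum = np.interp(x, x[hull], y[hull])  # straight-line continuum
    return y / continuum
```

Applied pixel by pixel to the co-registered VNIR+SWIR+TIR band stack, this kind of normalization yields CR bands comparable to those composited in the image maps discussed below.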
RESULTS AND DISCUSSION

The visual separation of diagnostic absorption features and thermal emissivity was achieved using a pseudo-color ramp image derived from CR bands 7, 5 and 10 of Landsat-8 at regional scale. Figure 3 shows the image map covering the northern part of Graham Land, Antarctic Peninsula (AP). This image map is effective for identifying poorly exposed rock units and minerals based on the diagnostic Al-OH absorption feature at 2.2 μm and the Fe, Mg-O-H and CO3 absorption features at 2.31-2.33 μm (centered on Landsat-8 band 7), the Fe3+ absorption features at 0.83-0.97 μm (centered on Landsat-8 band 5), and the Si-O bond emission minima features at 10.30-11.70 μm (centered on Landsat-8 band 10). Landsat-8 band 7 was therefore chosen to identify both the Al-OH absorption of muscovite and kaolinite and the Fe, Mg-O-H and CO3 absorption of epidote, calcite and chlorite. Landsat-8 band 5 was selected as a representative of ferric oxides such as hematite and goethite and of the charge transfer of Fe ions in amphiboles, pyroxenes and phyllosilicates. Landsat-8 band 10 records the variation of silicate content, which is generally associated with the transition from felsic to mafic lithologies. The image map shows a variety of absorption-feature and thermal-emissivity values and discriminates different geological features such as rock exposures, sea water, glaciers and ice shelves at regional scale. The highest absorption-feature and thermal-emissivity values (dark brown to orange/yellow colors) are associated with rock exposures (Fig. 3), owing to the iron oxide/clay/carbonate minerals in their compositions, their warmer constituents, and their high emissivity arising from silicate content. Hence, brown/orange/yellow colors are indicative of poorly exposed rock areas, especially in the eastern segment of Figure 3. For detailed interpretation of Figure 3, a spatial subset scene consisting of rock exposures covering the Oscar II Coast area was selected (Fig. 4). It is evident that the brown color in this image map reflects the presence of a high content of muscovite/chlorite and quartz, which is normally associated with quartz-rich and felsic to highly felsic lithological units. The orange/yellow regions match intermediate/basic to quartz-rich units that contain a variable content of hematite/goethite or mafic phyllosilicates (Fe, Mg-O-H and CO3) and pyroxenes. The image map derived from applying the CR mapping tool to Landsat-8 imagery (VNIR+SWIR+TIR) could generally show poorly exposed lithologies and the major lithological units in the study area at regional scale.

For detailed discrimination of alteration mineral assemblages and lithological units at regional scale, ICA was applied to the Landsat-8 (VNIR+SWIR+TIR) bands. IC bands containing anomaly pixels attributed to distinctive spectral features were assigned to an RGB color combination to generate an image map of poorly exposed lithologies and alteration mineral assemblages in the study area. Figure 5 shows the FCC image map of a selected spatial subset scene of Landsat-8 covering the north-eastern segment of Graham Land (Trinity Peninsula), derived from the RGB color combination of IC5 (band 6 of Landsat-8), IC6 (band 7 of Landsat-8) and IC7 (band 10 of Landsat-8), respectively. These IC bands were useful for detecting clay minerals and silicate rocks due to the presence of anomaly pixels in the corresponding bands of Landsat-8. Clay and carbonate minerals have reflectance features at 1.550-1.750 μm (the equivalent of Landsat-8 band 6: 1.560-1.660 μm) and absorption features at 2.10-2.40 μm (the equivalent of Landsat-8 band 7: 2.100-2.300 μm) (Pour and Hashim, 2014). Silicate minerals show significant variation from 8.50 to 11.70 μm in the TIR portion (the equivalent of Landsat-8 band 10: 10.30-11.30 μm).

In the FCC image map of IC5, IC6 and IC7, exposed lithologies are outlined by red, magenta, pink and yellow colors corresponding to the variable content of clay and silicate minerals (Fig. 5). Exposed rocks with high albedo and a high content of clay minerals appear red and magenta in color. Pink and yellow anomaly pixels are considered to indicate rock exposures with moderate albedo and a normal content of clay minerals in their composition. With reference to the geological map of the Oscar II Coast area, the high-albedo units shown as red and magenta areas match muscovite-rich regions and highly felsic trend rocks, while the yellow and pink zones match chlorite-muscovite assemblages associated with quartz-rich to intermediate trend lithological units (Fig. 5). Thus, exposed lithological units located in unmapped regions might be generally classified. For instance, the south-western part of the image (inside the black rectangle, Fig. 5) is expected to contain a high abundance of clay minerals (especially muscovite) and highly felsic to felsic trend lithological units (red and magenta pixels). However, pink and yellow pixels are less distributed in this part of the study area (inside the black rectangle, Fig. 5). Glaciers and ice shelves appear as a green to dark blue background in Figure 5 due to their different emissivity and temperature compared with the rock exposures in the IC7 band (TIR band of Landsat-8: band 10).
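For the ICA step, a rough equivalent of the transform described in the Data analysis section can be sketched with scikit-learn's FastICA, treating each pixel as one observation of the mixed band signals. The band-stacking convention and function names here are assumptions for illustration; the study used ENVI's ICA implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_components(band_stack, n_components=None, seed=0):
    """band_stack: array of shape (n_bands, rows, cols) holding the
    co-registered Landsat-8 VNIR+SWIR+TIR bands.

    Returns IC 'images' of shape (n_components, rows, cols) that can be
    assigned to RGB combinations (e.g., IC5, IC6, IC7 in Figure 5).
    """
    n_bands, rows, cols = band_stack.shape
    pixels = band_stack.reshape(n_bands, -1).T   # one row per pixel
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    sources = ica.fit_transform(pixels)          # mutually independent signals
    return sources.T.reshape(-1, rows, cols)
```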
CONCLUSIONS

This investigation has demonstrated the application of Landsat-8 datasets for extrapolating satellite-based imagery from a relatively well-mapped area, such as the Oscar II Coast area in north-eastern Graham Land, Antarctic Peninsula (AP), into poorly mapped or unmapped domains (further north and south). Image processing algorithms such as the CR spectral mapping tool and ICA mapped pixels related to the Al-O-H, Fe, Mg-O-H and CO3 groups and to silica content using the Landsat-8 datasets at regional scale. The approach used in this study performed very well for lithological and alteration mineral mapping with little available geological data or without prior information of the study region.

Figure 1. Geological map of the Antarctic Peninsula. The study area is demarcated by a black rectangle.

Figure 2. Landsat-8 image of the northern part of Graham Land as an RGB color combination of bands 5, 7 and 10.

Figure 3. Pseudo-color ramp image map (high values as brown/orange and low values as purple/pink) of Landsat-8 CR bands 7, 5 and 10 covering the northern part of Graham Land, Antarctic Peninsula (AP) at regional scale.

Figure 4. A spatial subset scene consisting of rock exposures covering the Oscar II Coast area.
2,742.6
2017-10-16T00:00:00.000
[ "Geology", "Environmental Science" ]
The +4G Site in Kozak Consensus Is Not Related to the Efficiency of Translation Initiation

The optimal context for translation initiation in mammalian species is GCCRCCaugG (where R = purine and "aug" is the initiation codon), with the -3R and +4G being particularly important. The presence of +4G has been interpreted as necessary for efficient translation initiation. Accumulated experimental and bioinformatic evidence has suggested an alternative explanation based on an amino acid constraint on the second codon, i.e., the amino acid Ala or Gly is needed as the second amino acid in the nascent peptide for the cleavage of the initiator Met, and the consequent overuse of Ala and Gly codons (GCN and GGN) leads to the +4G consensus. I performed a critical test of these alternative hypotheses on +4G based on 34169 human protein-coding genes and published gene expression data. The result shows that the prevalence of +4G is not related to translation initiation. Among the five G-starting codons, only alanine codons (GCN), and glycine codons (GGN) to a much smaller extent, are overrepresented at the second codon, whereas the other three codons are not overrepresented. While highly expressed genes have more +4G than lowly expressed genes, the difference is caused by GCN and GGN codons at the second codon. These results are inconsistent with +4G being needed for efficient translation initiation, but consistent with the amino acid constraint hypothesis.

INTRODUCTION

While translation initiation in prokaryotes is mediated by base-pairing between the Shine-Dalgarno sequence in the 5'-UTR of the mRNA and the anti-Shine-Dalgarno sequence at the 3'-end of the 16S rRNA [1,2], translation initiation in eukaryotes is mediated by the Kozak consensus [3][4][5][6]. The optimal context for translation initiation in mammalian species is GCCRCCaugG (where R = purine), with the -3R and +4G being particularly important [3,[6][7][8]. Molecular biology textbooks abound with the implication that the -3R and +4G should be salient features of mRNAs for highly expressed proteins. The interpretation of +4G has been controversial. It has been suggested that +4G may have little to do with initiation site recognition, but is instead constrained by the requirement for a particular type of amino acid residue at the N-terminus of the protein [9]. One piece of supporting evidence came from a detailed study of an influenza virus NS cDNA derivative [10], which showed that both the +4 and +5 sites were important and changes at these sites reduced protein production. In contrast, the +6 site (the third position of the second codon) is less important. A simple explanation of this result is that changes at the +4 and +5 sites alter the amino acid, whereas those at the +6 site may not. Recent studies, especially those involving the removal of the initiator methionine (Met) and myristoylation, revived the alternative explanation of amino acid constraint for the presence of +4G in protein-coding genes. First, amino-terminal modifications of nascent peptides occur in nearly all proteins in both prokaryotes and eukaryotes, and the removal of the initiator Met, which occurs soon after the amino terminus of the growing polypeptide chain emerges from the ribosome, is not only an important amino-terminal modification in itself, but also required for further amino-terminal modifications.
The efficiency of removing the initiator Met depends heavily on the penultimate (the second) amino acid, with the cleavage occurring most efficiently when the penultimate amino acid is small [11]. Alanine (Ala) and glycine (Gly) happen to be the two smallest amino acids, and both are coded by G-starting codons, i.e., Ala by the GCN codons (where N stands for any nucleotide) and Gly by the GGN codons. The need for removing the initiator Met in proteins implies the presence of many Ala and Gly residues at the penultimate amino acid position, and consequently many +4G sites due to the GCN and GGN codons coding for Ala and Gly, respectively. Another factor contributing to the prevalence of +4G, but independent of the efficiency of translation initiation, is the myristoylation process. For example, in Coxsackievirus B3, the initiation codon is flanked by both -3R and +4G, and viral mutants with a mutation from +4G to +4C are not viable [12]. This may seem to confirm what one would expect based on the necessity of the Kozak consensus for efficient translation initiation in highly expressed genes. However, it turns out that the +4G is required in Coxsackievirus B3 not because it is essential for translation initiation, but because it is needed for coding Gly (coded by GGN). The Gly at the amino terminus, after the removal of the initiator methionine, is needed to attach to a myristoyl (C14H28O2) fatty acid side chain, and myristoylation occurs only on a Gly residue [13]. Myristoylation may involve many proteins and is implicated in protein subcellular relocalization [13], apoptosis [14,15], signal transduction [16,17], and the virulence and colonization of pathogens [12,[18][19][20][21]. The need for myristoylation in proteins would contribute to the presence of +4G in CDSs. We thus have two alternative hypotheses for the presence of +4G in protein-coding genes. The conventional translation initiation hypothesis argues that the presence of +4G is necessary for highly expressed proteins, with two predictions. First, the selection favoring +4G should drive increased usage of amino acids coded by GNN codons (e.g., Ala coded by GCN, Asp by GAY, Glu by GAR, Gly by GGN, and Val by GUN) at the penultimate amino acid site. Second, +4G should be more prevalent in highly expressed than in lowly expressed genes. In contrast, the amino acid constraint hypothesis, based on the amino-terminal modifications involving the removal of the initiator Met and myristoylation, has two different predictions. First, not all GNN codons should have increased usage: only GCN coding Ala and GGN coding Gly should have increased usage. Second, highly expressed genes may need more efficient N-terminal processing and may consequently need more GCN and GGN codons. This may increase the frequency of +4G in highly expressed genes relative to lowly expressed genes.

Differential use of GNN codons at penultimate site

Results from 34169 human coding sequences (CDSs) do not support the translation initiation hypothesis for the presence of +4G. While the five amino acids coded by GNN codons (Ala, Asp, Gly, Glu, Val) account for a majority (64.24%) of the amino acids at the penultimate site (which implies that nucleotide G is the consensus nucleotide at the +4 site), there is no consistent overuse of amino acids coded by GNN codons at the second amino acid site relative to other sites (Fig. 1). This pattern also holds for mouse genes (data not shown). The expected number of codons (Fig. 1) at the penultimate site is calculated as follows.
The total number of codons at non-penultimate sites is 16347992 (excluding the initiation and termination codons). Designate the number of codon XYZ at non-penultimate sites as N_XYZ. If codon usage at penultimate sites is the same as in the rest of the genes, then the expected number of codon XYZ at the penultimate site is simply N_XYZ × 34169/16347992. Only alanine (GCN) codons deviate dramatically from the expected value (Fig. 1). GUN codons (coding for valine) are in fact underused at the penultimate site relative to other sites (Fig. 1). Thus, there is no general increase in GNN codon usage at the penultimate site.

Differences in +4G frequencies in highly and lowly expressed genes

The translation initiation hypothesis also predicts that highly expressed genes should be more likely to have +4G than lowly expressed genes. Ideally, we would have genes with measured protein expression for testing this prediction. However, there is now substantial evidence suggesting a strong correlation between mRNA level and protein production, not only in Saccharomyces cerevisiae [22][23][24][25], but also in mammalian species [26]. We used published SAGE (serial analysis of gene expression) data to characterize gene expression because comparative studies have demonstrated a much higher reproducibility of SAGE experiments than of microarray experiments in characterizing mRNA levels [27]. To check any possible differences in amino acid usage at the penultimate site and at other sites between highly and lowly expressed genes, I used the 987 unique SAGE tags that were found ubiquitously in human tissues [28]. The main reason for using ubiquitously expressed genes is that the relationship between mRNA level and protein abundance is generally weak in cell-specific genes [29]. These 987 unique tags were matched against the 34169 human CDSs. One gene (ASNA1) matched 3 tags, 16 genes matched 2 tags, and 987 genes matched exactly 1 tag (that this number happens to equal the number of unique tags is accidental). For the 17 multiple-match genes (MMGs), it is difficult to assign gene expression values. For example, if a gene matches two tags, one with 10 copies/cell and another with 100 copies/cell, there is no unequivocal way of assigning an expression value to the gene. For this reason, only the 987 single-match genes (SMGs) are used, and this data set will be referred to as the SMG data set. The SMG data set still has a problem involving multiple-match tags. For example, if a tag has n copies per cell and matches two genes, say SMG1 and SMG2, it is impossible to know whether the n copies/cell of the tag are contributed by SMG1 only, by SMG2 only, or by both. In order to assign expression values unequivocally to genes, we also compiled a more limited data set, with only 168 genes that match single-match tags (SMTs), i.e., each gene matches only one tag, which in turn matches only one gene. These genes are designated as SSGs (to reflect the fact that they are from SMG-SMT gene-tag pairs), and their expression values range from 11 copies/cell (gene LRFN4 matching GGGGGGCUGC, excluding the leading 4-bp NlaIII anchoring enzyme site) to 4374 copies/cell (gene GRIN2C matching GGUGACCACG). This small data set will be referred to as the SSG data set. The 168 genes in the SSG data set were divided into a high-expression (HE) group, including 83 SSGs with an expression level of at least 50 copies/cell, and a low-expression (LE) group, including 85 SSGs with an expression level of less than 50 copies/cell.
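The null-model calculation above amounts to a one-line scaling; the sketch below restates it, with the constants taken from the text and the function name as an illustrative assumption.

```python
N_GENES = 34169          # one penultimate codon per CDS
N_NON_PENULT = 16347992  # codons at non-penultimate sites

def expected_penultimate(n_xyz):
    """Expected count of codon XYZ at the penultimate site if usage there
    matched the rest of the CDSs (initiation/termination codons excluded)."""
    return n_xyz * N_GENES / N_NON_PENULT

# Example: a codon observed 300000 times at non-penultimate sites is
# expected about 627 times at the penultimate site.
print(round(expected_penultimate(300000), 1))  # 627.0
```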
The proportion of +4G is 43.37% in the HE group and 49.40% in the LE group. We also contrasted the 30 most highly expressed SSGs (expression level of at least 114 copies/cell) with the 30 least expressed SSGs (expression level of 27 copies/cell or less). The proportion of +4G is 43.33% for the former and 46.67% for the latter. Thus, there is no indication that highly expressed genes are more likely to have +4G than lowly expressed genes; the difference is in fact in the opposite direction. This result does not support the prediction of the translation initiation hypothesis. We then categorized the 168 SSGs into five groups according to the codon at the penultimate site (GCN for alanine, GGN for glycine, GAN for aspartate and glutamate, GUN for valine, and genes without +4G designated as NonG), and compared their expression values by one-way analysis of variance (ANOVA). Gene expression differs significantly among the five groups (F = 3.07, DF1 = 4, DF2 = 163, p = 0.0180), with the average gene expression being 92.5 for GCN genes, 554.89 for GGN genes, 116.74 for GAN genes, 89.40 for GUN genes and 123.91 for NonG genes. Multiple comparisons using the LSD (least significant difference) test [30, pp. 208-209] showed that only genes with GGN (glycine) codons at their penultimate site have significantly higher expression than the other groups (p < 0.05). One of the genes with a glycine codon (GGU) at its penultimate site (GRIN2C matching GGUGACCACG) has a very high expression value (4374 copies/cell); excluding this gene results in no significant difference among the five groups. We performed a similar ANOVA for the SMG data set and found the same result, i.e., genes with GGN codons at their penultimate sites have higher expression values than the other four groups, with the average gene expression being 178.51 for GCN genes, 263.21 for GGN genes, 163.79 for GAN genes, 175.52 for GUN genes and 145.14 for NonG genes. There is no other significant difference among the five groups. Thus, we may conclude from analyzing the two SAGE data sets that (1) there is no consistent pattern of GNN codons being overused in highly expressed genes, and (2) genes with GGN (glycine) codons at their penultimate site tend to be more highly expressed than other genes. The result is inconsistent with the translation initiation hypothesis but not incompatible with the amino acid constraint hypothesis. An alternative index of gene expression is the codon adaptation index, or CAI [31], which has been shown to correlate well with published gene expression in terms of mRNA level and protein abundance [22,23,32]. Note that CAI is computed with a codon usage table from a set of reference genes known to be highly expressed. I used the reference set in the Ehum.cut file that is distributed with EMBOSS [33]. However, the cai program in EMBOSS is biased because it does not exclude codon families with a single codon, e.g., AUG coding methionine and UGG coding tryptophan in the standard genetic code (see Materials and Methods for details). I used DAMBE [34,35, version 4.5.10] to calculate CAI values. We focus on two groups of genes, the high-CAI group with CAI > 0.8 and the low-CAI group with CAI < 0.7. Overall, high-CAI genes tend to include more genes with +4G than low-CAI genes (chi-square test, X2 = 25.36, DF = 1, p < 0.0001, based on the data in Table 1). This might seem to support the translation initiation hypothesis. However, genes with no +4G may include more false CDSs, which would tend to have smaller CAI values.
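The one-way ANOVA on the five penultimate-codon groups can be reproduced along these lines with SciPy; the expression values below are placeholders rather than the actual SSG data, and the original analysis presumably used a conventional statistics package.

```python
from scipy import stats

# Placeholder copies/cell values for the five groups (GCN, GGN, GAN, GUN,
# NonG); the real values come from the SAGE-based SSG data set.
gcn  = [60, 85, 120, 95, 70]
ggn  = [300, 800, 450, 600, 500]
gan  = [90, 130, 110, 140, 100]
gun  = [80, 95, 70, 110, 90]
nong = [120, 100, 150, 130, 110]

f_stat, p_value = stats.f_oneway(gcn, ggn, gan, gun, nong)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```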
This would result in an association between low CAI and the absence of +4G. It is important to note that, among genes with +4G, there is little difference in codon frequencies between the high-CAI and low-CAI groups (Table 1, last column). The largest difference between the high-CAI and low-CAI groups involves genes with GCN and GGN codons (coding for alanine and glycine, respectively) at their penultimate site (Table 1, last column), but the difference is not significant (p > 0.05). Thus, we may conclude that only GCN and GGN codons exhibit minor differences in their frequencies at the penultimate site between high-CAI and low-CAI genes. This is again inconsistent with the translation initiation hypothesis, but somewhat compatible with the amino acid constraint hypothesis. The use of the CAI value [31] as an index of gene expression in humans has been controversial [36,37]. While it is well established that codon usage in bacterial species and vertebrate mitochondria is strongly constrained by relative tRNA abundance and that there is selection pressure favoring codon-anticodon adaptation [38][39][40][41], there is only limited evidence for eukaryotes [42]. One additional piece of evidence supporting codon-anticodon adaptation is that the codon frequencies of the 34169 annotated human CDSs are positively correlated with the copy numbers of their cognate tRNA genes found at The Genomic tRNA Database (http://lowelab.ucsc.edu/GtRNAdb/), compiled with the tRNAscan-SE program [43]. For example, from the 505 human tRNA genes decoding the regular set of 20 amino acids, one can obtain their cognate codon frequencies from their anticodons. These tRNA-derived cognate codon frequencies correlate positively with the codon frequencies of the 168 genes in the SSG data set (Pearson r = 0.5731, p < 0.0001, after grouping all C-ending and U-ending codons into Y-ending codons, because these codons are typically translated by a tRNA with nucleotide G at its wobble site). This significant positive correlation is more simply explained by codon-anticodon adaptation than by random mutation, and it suggests the utility of CAI as a measure of gene expression in human genes.

DISCUSSION

The original study documenting the importance of +4G [44] does not constitute sufficient proof that +4G is important in translation initiation. The study was based on the production of proinsulin from preproinsulin. The latter has a signal peptide at its amino terminus; the signal peptide is removed during translation, generating proinsulin. When +4G is mutated to another nucleotide, the production of proinsulin is reduced. This reduced proinsulin production was assumed to be caused by reduced efficiency of translation initiation due to the mutation of +4G to other nucleotides. However, one should note that altering +4G also alters the amino acid sequence of the signal peptide and may consequently affect the removal of the signal peptide, leading to reduced production of proinsulin. Thus, the result is compatible with the amino acid constraint hypothesis for the presence of +4G. One may think that the translation initiation hypothesis is partially correct because, after all, two of the five amino acids with G-starting codons (especially alanine) show increased usage. This is wrong. The increased usage of alanine exists not only in eukaryotes, but also in prokaryotes [45], which do not use the scanning mechanism for translation initiation and consequently do not need the +4G.
The overuse of small amino acids at the penultimate amino acid site in both prokaryotes and eukaryotes is better explained by the necessity of removing the initiator Met. While the results above are inconsistent with the predictions of the translation initiation hypothesis, they generally appear to support the amino acid constraint hypothesis. First, the latter predicted the overuse of Ala and Gly at the second codon position (to facilitate the removal of the initiator Met and myristoylation), and Ala and Gly (especially Ala) are indeed overused (Table 1). Second, all differences between highly expressed and lowly expressed genes involve GCN and GGN codons (coding for alanine and glycine, respectively). Both the amino acid constraint hypothesis and the translation initiation hypothesis have difficulty explaining certain observations. For example, Kozak [6] found that +4G generally enhances translation initiation, but does not when it occurs in a GUN codon (coding for valine). An associated finding in this paper is that GUN is underused at the penultimate site (Fig. 1). While such findings are difficult to explain by the translation initiation hypothesis, they are also difficult for the amino acid constraint hypothesis unless valine at the penultimate site reduces the efficiency of initiator Met cleavage. Previous studies on prokaryotes and eukaryotes [46] and on the yeast Saccharomyces cerevisiae [11] suggest that cleavage of the initiator Met occurs with valine at the penultimate site, but a recent study on Escherichia coli [47] demonstrates that Val at the penultimate site dramatically reduces the efficiency of initiator Met cleavage relative to other amino acids such as Ala, Cys, Gly, Pro, or Ser in this position. Further studies of initiator Met cleavage in mammalian species are needed before one can reach a solid conclusion. In summary, we conclude that the presence of +4G is poorly explained by the translation initiation hypothesis, which claims that +4G is necessary for efficient translation initiation, but well explained by the alternative amino acid constraint hypothesis, which attributes the prevalence of +4G to the necessity of Ala and Gly at the second amino acid position in many proteins (for the removal of the initiator Met or for myristoylation), because Ala and Gly happen to be coded by GCN and GGN codons. The necessity of +4G for efficient translation initiation appears to be a misconception that has existed in molecular biology textbooks for too long.

MATERIALS AND METHODS

I retrieved the rna.gbk.gz file at ftp://ftp.ncbi.nih.gov/genomes/H_sapiens/RNA/, dated Sept. 3, 2006, and extracted all 34169 annotated coding sequences (CDSs) for evaluating the translation initiation hypothesis and the amino acid constraint hypothesis. CDS extraction, computation of the codon adaptation index [31], and the analysis of codon usage at the second codon were carried out using DAMBE (Xia 2001; Xia and Xie 2001). CDSs that are not multiples of three or are not terminated with a stop codon were excluded. Because the translation initiation hypothesis predicts that highly expressed genes should be more likely to have +4G than lowly expressed genes, we used genes of different expression levels to check this prediction. Gene expression level is measured in two ways in this study: by SAGE (serial analysis of gene expression) data and by the codon adaptation index.
The use of SAGE data instead of available microarray data is mainly because of the much higher reproducibility of SAGE experiments relative to microarray experiments [27]. SAGE data were retrieved from http://www.nature.com/ng/journal/v23/n4/extref/ng1299-387b-S1.pdf, which listed, among others, 987 unique tags that are ubiquitously expressed in different human tissues, together with their abundance in copies/cell. We searched these tags against the 34169 human CDSs for exact matches. To facilitate presentation, we define multiple-match genes (MMGs) as those CDSs each matching multiple tags, single-match genes (SMGs) as those each matching a single tag, multiple-match tags (MMTs) as those tags each matching multiple CDSs, and single-match tags (SMTs) as those each matching a single CDS. It is difficult to assess the expression level of an MMG because the different tags it matches have different copies/cell values. The presence of MMTs causes an even more serious problem. For example, when a tag with 100 copies/cell matches two genes, it is impossible to know whether the 100 copies are contributed by only one gene or by both; it would be methodologically wrong to assign both genes an expression value of 100 copies/cell. For this reason, we compiled two data sets, one including all SMGs and the other including only SMGs that match SMTs (i.e., an SMG that matches an MMT is not included). CAI for a gene is computed from (1) the codon frequencies of the gene and (2) a codon usage table from a set of reference genes known to be highly expressed, according to the following equation [31]:

CAI = exp[ (Σ f_i × ln w_i) / (Σ f_i) ],  (1)

where the sums run over the n sense codons, f_i is the frequency of codon i in the gene, w_i is computed from the Ehum.cut file distributed with EMBOSS [33], and n is the number of sense codons (excluding codon families with a single codon, e.g., AUG for methionine and UGG for tryptophan in the standard genetic code). Note that the exponent in equation (1) is simply a weighted average of ln(w). Because the maximum of w is 1, ln(w) will never be greater than 0; consequently, the exponent will never be greater than 0, and the maximum CAI value is 1. It is important to exclude codon families with a single codon. Note that for such codons (e.g., AUG and UGG in the standard genetic code), the corresponding w_i value will always be 1 regardless of the codon usage bias of the gene. If a gene happens to use a high proportion of methionine and tryptophan, it will then have a high CAI value even if its codon usage is not at all biased. The cai program in EMBOSS [33] does not exclude codon families with a single codon: the CAI values from that program are the same as those I computed without excluding the AUG and UGG codons. I used DAMBE [34,35, version 4.5.10], which excludes AUG and UGG in computing CAI. The reason for using CAI as an index of gene expression, instead of taking advantage of the available gene expression data, is that in higher eukaryotes such as humans, many genes are highly expressed only at specific times and in specific tissues. For this reason, a gene with no detectable expression in a specific study, which typically involves few time points and few tissues, should not be taken as a lowly expressed gene. However, the available gene expression data do vindicate the use of CAI as a general measure of gene expression [32]. Because CAI is based on the codon frequencies of a gene with respect to the codon usage of a reference set of genes known to be highly expressed, short sequences with few codons may produce unreliable CAI values.
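A minimal implementation of equation (1), excluding the single-codon families in the way DAMBE does, might look as follows; the function and variable names are illustrative assumptions.

```python
import math
from collections import Counter

SINGLE_CODON_FAMILIES = {"AUG", "UGG"}  # w is always 1; excluded as in DAMBE

def cai(codons, w):
    """Equation (1): the exponential of the frequency-weighted mean of ln(w_i).

    codons -- list of sense codons of one gene (initiation/stop excluded)
    w      -- dict of relative adaptiveness values from the reference set
              (e.g., derived from Ehum.cut); all values must be > 0
    """
    counts = Counter(c for c in codons if c not in SINGLE_CODON_FAMILIES)
    total = sum(counts.values())
    weighted_log = sum(n * math.log(w[c]) for c, n in counts.items())
    return math.exp(weighted_log / total)  # never exceeds 1, since ln(w) <= 0
```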
For this reason, two separate analyses were performed, one with all CDSs and the other excluding CDSs shorter than 300 bp. The two sets of results are almost identical because short CDSs constitute only a small fraction.
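The CDS screening rules scattered through the Methods (multiples of three, a terminal stop codon, and the optional 300-bp length cut-off) can be collected into one small filter; the function name and RNA-alphabet assumption are illustrative.

```python
STOP_CODONS = {"UAA", "UAG", "UGA"}  # standard genetic code, RNA alphabet

def usable_cds(seq, min_len=None):
    """Keep a CDS only if it is a multiple of three and ends in a stop codon;
    min_len=300 reproduces the second, length-restricted analysis."""
    if len(seq) % 3 != 0 or seq[-3:] not in STOP_CODONS:
        return False
    return min_len is None or len(seq) >= min_len
```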
5,569.2
2007-02-07T00:00:00.000
[ "Biology" ]
Diagnostic Value Investigation and Bioinformatics Analysis of miR-31 in Patients with Lymph Node Metastasis of Colorectal Cancer

Colorectal cancer (CRC) is one of the most frequent cancers occurring in developed countries. Distant CRC metastasis causes more than 90% of CRC-associated mortality. MicroRNAs (miRNAs) play a key role in regulating tumor metastasis and could be potential diagnostic biomarkers in CRC patients. This study is aimed at identifying miRNAs that can be used as diagnostic biomarkers for CRC metastasis. Towards this goal, we compared the expression of five miRNAs commonly associated with metastasis (i.e., miR-10b, miR-200c, miR-155, miR-21, and miR-31) between primary CRC (pCRC) tissues and corresponding metastatic lymph nodes (mCRC). Further, bioinformatics analysis of miR-31 was performed to predict target genes and related signaling pathways. Results showed that miR-31, miR-21, miR-10b, and miR-155 expression was increased to different extents, while miR-200c expression was lower in mCRC than in pCRC. Moreover, we found that the levels of both miR-31 and miR-21 were notably increased in pCRC when lymph node metastasis (LNM) was present, and the increase in miR-31 expression was more profound. Hence, upregulated miR-31 and miR-21 expression might be a miRNA signature of CRC metastasis. Moreover, we detected a higher miR-31 level in the plasma of CRC patients with LNM compared to patients without LNM or healthy individuals. Bioinformatics analysis of miR-31 identified 121 putative target genes and suggested that the transition of the mitotic cell cycle and the Wnt signaling pathway may play a role in CRC progression. We next identified seven hub genes via module analysis; of these, TNS1 was most likely to be the target of miR-31 and had significant prognostic value for CRC patients. In conclusion, miR-31 is significantly increased in the cancer tissues and plasma of CRC patients with LNM; thus, a high level of miR-31 in the plasma is a potential biomarker for the diagnosis of LNM of CRC.

Introduction

Colorectal cancer (CRC) is one of the most common malignancies and the fourth leading cause of cancer-related death worldwide [1]. Although the development of early screening modalities and adjuvant chemotherapies has improved the prognosis of CRC, advanced CRC with lymph node metastasis (LNM) still carries an extremely poor prognosis. LNM is common among patients with advanced CRC, and given the poor prognosis, a more sensitive and specific method for LNM diagnosis could significantly benefit therapeutic planning and clinical follow-up for CRC patients [2]. Although several gene products seem to contribute to the malignancy of CRC, accurate predictive factors of the prognosis and recurrence of CRC are yet to be identified. Among the gene products and signaling molecules involved in the development of CRC, microRNAs (miRNAs), which are short noncoding RNAs measuring 18-25 nucleotides in length, might serve as molecular targets for both diagnosis and therapy. miRNAs constitute a distinct mechanism of regulating gene expression and are involved in various biological processes of human cancers [3]. They regulate gene expression posttranscriptionally, and bioinformatics analysis has suggested that miRNAs are capable of regulating the expression of numerous mammalian genes, among which are both tumor-promoting genes and tumor suppressor genes [4].
miRNAs were reported to be active in cancer development, acting as oncogenes (e.g., miR-155 and miR-21), tumor suppressors (e.g., miR-15a and miR-16-1), or metastasis promoters (e.g., miR-10b, miR-182, and miR-29a) [5]. Aberrant miRNA expression is associated with human cancers, and thus miRNA profiling, as one of the most modern modalities for the molecular characterization of tumors, is used for cancer diagnosis and prognostic prediction [6,7]. Studies have suggested miRNA profiles capable of distinguishing CRC tissues from normal colorectal mucosa. Luo et al. observed that 164 miRNAs were aberrantly expressed in CRC [8]. Further, miR-31 and miR-20a were reported to be significantly elevated, whereas miR-145 and miR-143 were significantly downregulated in CRC tissues [9][10][11]. In addition, several studies have further described the correlations between dysregulated miRNA expression levels and tumor features. For example, the expression of miR-21, miR-31, and miR-20a was positively correlated with the expression levels of histological markers (Ki-67 and CD34) in CRC tissues [12]. miR-145 was negatively correlated, while miR-21 was positively correlated, with the expression of the K-ras gene in CRC [13]. The results of these studies indicated that dysregulated miRNAs are involved in cell proliferation and angiogenesis in CRC development. However, few studies have focused on the relationship between miRNAs and tumor stage. miRNAs might influence the migration, invasion, and intravasation of tumor cells and act as metastasis promoters in breast cancer [14,15], urothelial carcinomas, melanoma, and CRC [16]. Interestingly, upregulation of miR-21 and miR-10b correlates with LNM of breast cancer, while downregulation of miR-31, miR-335, and miR-126 is associated with tumor recurrence [16]. Eslamizadeh et al. investigated the diagnostic value of miRNA profiles in CRC. According to their study, the plasma levels of miR-21, miR-31, miR-20a, and miR-135b increased with higher stages of CRC, whereas miR-145, miR-let-7g, and miR-200c decreased with higher stages. Moreover, the expression levels of plasma miR-21, miR-31, and miR-135b were significantly different between patients with stage II and stage III CRC [17]. miRNAs might promote or inhibit tumor metastasis by regulating the expression of target genes [18]. miR-31 was found to target NF-κB-inducing kinase (NIK) to negatively regulate the noncanonical NF-κB pathway, and loss of miR-31 therefore triggers oncogenic signaling in adult T cell leukemia [19]. Another study, on head and neck carcinoma, suggested that miR-31 could activate hypoxia-inducible factor to promote cancer progression by targeting factor-inhibiting hypoxia-inducible factor [20]. In addition, the tumor suppressor gene RhoBTB1 was also suggested to be regulated by miR-31, which was associated with the progression of human colon cancer [21]. However, conclusive studies on the target genes and pathways in CRC are relatively rare. To further explore the possible mechanism by which miR-31 regulates CRC, we performed bioinformatics analysis of miR-31 to predict target genes and signaling pathways. A comprehensive approach was used to screen for the most likely target genes, including GO and KEGG enrichment analysis, PPI network construction, validation of expression levels, correlation analysis, and survival analysis of the target genes. The aims of this study were to (1) identify miRNAs that can serve as biomarkers for CRC metastasis.
Towards this goal, we compared the expression of five miRNAs commonly associated with metastasis (i.e., miR-10b, miR-200c, miR-155, miR-21, and miR-31) between primary CRC (pCRC) tissues and corresponding metastatic lymph nodes (mCRC). Further, we aimed to (2) determine the mechanism by which miR-31 regulates CRC metastasis, and thus adopted bioinformatics analysis to explore putative target genes and related pathways involved in CRC. Finally, we sought to (3) identify the most likely key target genes of miR-31 in CRC using comprehensive bioinformatics methods.

Materials and Methods

2.1. Patients and Samples. All experimental procedures and pathological classification were approved by the Zhongnan Hospital Ethics Committee of Wuhan University and were performed following the International Union Against Cancer and American Joint Committee on Cancer TNM staging system for colon cancer established in 2003. Informed consent was obtained from each patient prior to sampling. Tumor tissues were collected from CRC patients after tumor resection at Zhongnan Hospital of Wuhan University. None of the patients received any preoperative treatment. We designed three sets of experiments. In set 1, 9 primary CRC (pCRC) tissue samples and the corresponding mCRC of the same patients (stage III/IV CRC) were paired to identify metastasis-related miRNAs. Each lymph node was examined via hematoxylin and eosin staining to confirm LNM. In set 2, pCRC tissue samples were collected from matched CRC patients with or without LNM to validate the candidate miRNAs identified in set 1. In set 3, before surgery or preoperative treatments, blood plasma samples were collected from 28 CRC patients (stage III/IV) with LNM, 28 patients (stage I/II) without LNM, and 28 age- and sex-matched healthy volunteers. All tissue and plasma samples were preserved immediately after collection and stored until RNA extraction.

2.2. RNA Extraction. Small RNAs from the plasma were extracted with the miRVana PARIS RNA isolation kit (Qiagen, Germany). Briefly, 250 μl of plasma was thawed on ice, followed by centrifugation at 14,000 rpm for 5 minutes to remove cell debris and organelles. We then lysed 150 μl of supernatant with an equal volume of 2× denaturing solution (Qiagen, Germany). To normalize variations among the samples, 25 fmol of synthetic Caenorhabditis elegans miRNA cel-miR-39 (BioVendor, Europe) was added to each denatured sample during RNA extraction [22]. Small RNAs were then purified following the manufacturer's protocol, except that 45 μl of nuclease-free water was used to elute the small RNAs. Total RNAs from the tumor tissues were extracted using TRIzol reagent (TaKaRa, Tokyo, Japan) following the manufacturer's protocol. The concentration and purity of total RNAs were measured using the Smart Spec Plus spectrophotometer (Thermo Scientific Inc., USA).

2.3. Real-Time PCR and Expression Analysis. Plasma RNA or 100 ng of tissue RNA was polyadenylated and reverse transcribed into cDNA using the miScript Reverse Transcription kit (Qiagen, Germany). RT-PCR for each sample was performed in duplicate using the miScript SYBR Green PCR kit (Qiagen, Hilden, Germany) on a DNA Engine Opticon II system (Bio-Rad Laboratories, Inc., Hercules, CA, USA). The miRNA-specific primers were designed based on the miRNA sequences obtained from the miRBase database. Each amplification reaction was performed in a final volume of 20 μl containing 1 μl of cDNA, 0.25 mM of each primer, and 1× SYBR Green PCR Master mix.
At the end of the PCR, melting curve analysis and electrophoresis of the PCR products on 3.0% agarose gels were performed to determine the specificity of amplification. U6 snRNA was used as the internal control. The quantification of mature miRNAs was performed using Bio-Rad CFX Manager (Bio-Rad Laboratories, Inc., Hercules, CA, USA). The cycle threshold (Ct) was defined as the cycle number at which the fluorescent signal crossed the threshold in PCR. The relative miRNA expression was normalized to U6 levels and calculated through the 2^-ΔΔCt method [23]. The data were presented as fold changes of expression relative to normal tissues.

2.4. Identification of Metastasis-Related miRNAs in pCRC and mCRC. To identify miRNAs specific for metastatic CRC, miRNAs in nine matched pCRC tissues and corresponding mCRC were analyzed via RT-PCR. We first tested the expression of five miRNAs, i.e., miR-10b, miR-200c, miR-155, miR-21, and miR-31, because they have been suggested to be closely associated with metastasis in multiple tumor types.

2.5. Prediction of Putative Target Genes of miR-31. Target genes of miR-31 were predicted using three online databases frequently used for miRNA target prediction: TargetScan Release 7.2, miRDB, and the DIANA-microT web server v5.0 [24][25][26]. Genes predicted by all three databases were considered putative target genes.

2.6. GO and KEGG Clustering Analysis of Putative Target Genes. Putative target genes were uploaded to the function annotation portal of The Database for Annotation, Visualization, and Integrated Discovery (DAVID), an online bioinformatics resource for investigating the biological meaning of large gene lists [27]. Gene ontology (GO) analysis was utilized to investigate the possible roles of the target genes in biological processes (BP), cellular components (CC), and molecular functions (MF). Kyoto Encyclopedia of Genes and Genomes (KEGG) clustering analysis was adopted to map genes to related pathways. A p value of <0.05 and a false discovery rate of <0.05 were applied to identify significant GO and KEGG items.

2.7. Construction of PPI Network and Module Analysis. To study the interaction network among functional proteins, we constructed the protein-protein interaction (PPI) network with the stringApp plugin in Cytoscape, based on the STRING database [28]. Cytoscape 3.5.1 is a public platform for establishing biomolecular interaction networks [29]. Further, cytoHubba, containing 12 prediction algorithms (Stress, Betweenness, DMNC, Degree, MNC, MCC, BottleNeck, EPC, Closeness, Radiality, EcCentricity, and ClusteringCoefficient), was used to identify hub genes that were most likely the key target genes of miR-31.

2.8. Validation of the Expression of miR-31 and Hub Genes in Colorectal Cancer. To validate the expression of miR-31 and the hub genes in CRC, we downloaded RNA expression data of GDC TCGA Colon Cancer and Rectal Cancer and controls from the UCSC Xena platform [30]. All expression data had been normalized with log2 transformation. The RNA counts of miR-31 and the hub genes were then extracted. Expression levels of miR-31 and the hub genes were compared between colon cancer, rectal cancer, and controls with Student's t-test using GraphPad Prism 6.0 (GraphPad Software, Inc., La Jolla, CA, USA). The differences in miR-31 and hub gene expression were shown as box plots. In addition, to reveal the relationship between miR-31 and the hub genes, Spearman's correlation analysis and linear regression plot construction were performed with GraphPad Prism 6.0.
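As an aside on Section 2.3, the 2^-ΔΔCt calculation can be spelled out in a few lines; the sketch assumes mean Ct values are already in hand, and the names are illustrative.

```python
def fold_change(ct_mirna, ct_u6, ct_mirna_ref, ct_u6_ref):
    """Relative expression by the 2^-ddCt method [23].

    ct_mirna, ct_u6:         mean Ct of the target miRNA and of U6 in a sample
    ct_mirna_ref, ct_u6_ref: the same assays in the calibrator (normal tissue)
    """
    delta_ct = ct_mirna - ct_u6              # normalize to the U6 control
    delta_ct_ref = ct_mirna_ref - ct_u6_ref
    return 2.0 ** -(delta_ct - delta_ct_ref)

# Example: fold_change(24.0, 20.0, 26.5, 20.0) -> 2**2.5, about a
# 5.7-fold upregulation relative to the calibrator.
```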
2.9. Prognostic Value Evaluation of Hub Genes. To evaluate the value of these hub genes for CRC patients' prognosis, we used the survival data of colon and rectal cancer patients with the GEPIA online tool [31]. Overall survival (OS) and disease-free survival (DFS) of colon and rectal cancer patients were estimated by plotting Kaplan-Meier survival curves. The hazard ratio (HR) was calculated, and the log-rank test was used to assess the effects of hub gene expression on patients' survival.

2.10. Statistical Analysis. All data were expressed as the mean ± SEM. Student's t-test and the Mann-Whitney U test were applied to assess statistical significance. To determine the extent to which the obtained -ΔCt values efficiently distinguish different clinical subsets, receiver operating characteristic (ROC) analysis was performed, and the area under the curve (AUC) was used as an indicator of discriminating capability. All statistical analyses were performed using SPSS 13.0 software, and p values below 0.05 were defined as statistically significant.

Results

3.1. Metastasis-Related miRNAs in pCRC and mCRC. The expression of miR-10b, miR-155, miR-31, and miR-21 was higher, and the expression of miR-200c was lower, in mCRC than in the corresponding pCRC tissues. These changes were observed in all nine cases tested. In particular, the expression of miR-21 and miR-31 in mCRC was more than two-fold higher than in pCRC tissues (Table 1). Therefore, we focused on miR-21 and miR-31 in further investigations to disclose their significance.

3.2. miR-21 and miR-31 Levels in pCRC with or without LNM Determined via RT-PCR. Each group comprised 33 patients. The miR-21 level was significantly higher, by 1.62-fold on average, in pCRC tissues when LNM was present, as compared with pCRC tissues without LNM (p < 0.05) (Figure 1). Similarly, the miR-31 level was also considerably higher in the pCRC tissues of patients with LNM (average, 2.36-fold, p < 0.01).

3.3. Plasma miR-31 Distinguishes CRC Metastasis. To assess whether the upregulation of miR-31 can identify CRC or, more specifically, CRC metastasis, we determined the miR-31 level in the plasma of CRC patients and healthy donors. In total, 84 plasma samples were obtained from 28 stage I/II patients without LNM, 28 stage III/IV patients with LNM, and 28 healthy donors. The -ΔCt value was used to indicate miR-31 expression in the plasma. Compared with healthy donors, CRC patients had significantly elevated miR-31 in the plasma regardless of LNM (p < 0.001). Furthermore, patients with LNM had even higher plasma miR-31 expression than patients without LNM (p < 0.001) (Figure 2(a)). The ROC curve revealed that miR-31 was potentially a valuable biomarker for discriminating CRC patients with LNM from CRC patients without LNM, as indicated by an AUC of 0.89 (95% CI: 0.81-0.97, p < 0.001) (Figure 2(b)). In the ROC assay, a -ΔCt value of -8.6 (normalized) in patients with LNM was identified as a cut-off to discriminate metastatic CRC from nonmetastatic CRC. The optimal specificity and sensitivity were 86.2% and 78.5%, respectively (Figure 2(b)).

3.4. Putative Target Genes of miR-31. There were 477, 613, and 595 target genes of miR-31 predicted using TargetScan, miRDB, and the DIANA-microT web server, respectively. The 121 genes overlapping across the 3 databases were considered target genes of miR-31 (Figure 3).

3.5. GO and KEGG Enrichment Analysis. There were four, three, and six terms significantly clustered for BP, CC, and MF through GO analysis, respectively (Figure 4).
The top three GO terms were cerebral cortex development, transition of mitotic cell cycle, and locomotion involved in locomotory behavior in BP; cytoplasm, focal adhesion, and cAMP-dependent protein kinase complex in CC; and protein heterodimerization activity, SH3 domain binding, and ion channel binding in MF. Seven pathways were identified via KEGG enrichment analysis (Figure 4), and the top three pathways were melanogenesis, ubiquitin-mediated proteolysis, and the Wnt signaling pathway.

3.6. PPI Network and Hub Genes. The PPI network was constructed with the 121 putative target genes using stringApp (Figure 5(a)). Each of the 121 target genes may associate with others, and these associations may constitute interactive modules involved in the progression of CRC. After module analysis using the 12 algorithms, the results were sorted in descending order, and seven hub genes (ELAVL1, PPP3CA, DICER1, CBL, GNA13, SSH1, and TNS1) were identified, which are probably the key target genes of miR-31 (Figure 5(b)).

3.7. Validation of the Expression of miR-31 and Hub Genes. To compare the expression of miR-31 between CRC tissues and controls, miRNA expression data of 615 tumor tissues and 11 controls were downloaded. The miR-31 level was significantly higher in CRC tissues than in controls (Figure 6(a)). As for the hub genes, mRNA expression data of 638 GDC TCGA tumor tissues (including colon cancer and rectal cancer) and 51 controls were downloaded (Figure 6(a)). Except for ELAVL1, all of the other six hub genes were significantly downregulated in colon and rectal tumor tissues compared with controls. Considering that miR-31 was significantly upregulated in CRC, these six hub genes are possibly regulated by miR-31 in CRC. Further, correlation analysis showed that the TNS1 level was negatively correlated with miR-31 (Figure 6(b)); meanwhile, no statistically significant relationships between the other six hub genes and miR-31 were found.

3.8. Prognostic Value Evaluation of Hub Genes. Data from 362 CRC patients were used in Kaplan-Meier survival analysis of the seven hub genes. The results indicated that low TNS1 expression was significantly associated with improved OS (HR = 1.7, p = 0.012) and DFS (HR = 1.7, p = 0.012) (Figure 7). However, for the other six hub genes, including ELAVL1, PPP3CA, DICER1, CBL, GNA13, and SSH1, no statistical significance was found in the survival analysis (Supplementary Figures 1-6). Further studies with larger cohorts need to be carried out.

Discussion

Amplification, deletion, and rearrangement of miRNAs are frequently present in human cancers. Some altered miRNA expression can promote tumorigenesis, and some miRNAs act as tumor suppressors [32]. Recently, the term "metastamiR" was proposed to describe miRNAs that are associated with tumor metastasis. MetastamiRs can be prometastatic or antimetastatic [16]. For example, miR-10b was first reported to contribute to breast cancer metastasis [33]. Later, it was reported that miR-335 could inhibit the invasion of metastatic breast cancer cells. Increasing evidence has demonstrated altered miRNA expression between primary and metastatic tumors, implying an important role of miRNAs in tumor metastasis. Previous studies have reported that 5 upregulated miRNAs and 14 downregulated miRNAs are involved in the metastasis of CRC, but the association between miRNA levels and CRC metastasis remains unclear [34,35]. Thus, it is necessary to investigate the differential miRNA levels between primary tumors and metastatic tissues.
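Looking back at the ROC analysis of Sections 2.10 and 3.3, an analogous computation can be sketched with scikit-learn (the study itself used SPSS). The labels and -ΔCt scores below are placeholders, and selecting the cut-off by Youden's J statistic is an assumption about how the optimal sensitivity/specificity pair was chosen.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Placeholder data: 1 = CRC with LNM, 0 = CRC without LNM; scores are -dCt.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
scores = np.array([-7.8, -8.2, -8.5, -7.5, -9.4, -9.0, -10.1, -8.9])

fpr, tpr, thresholds = roc_curve(y_true, scores)
print(f"AUC = {auc(fpr, tpr):.2f}")

best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"cut-off -dCt = {thresholds[best]:.1f}, "
      f"sensitivity = {tpr[best]:.0%}, specificity = {1 - fpr[best]:.0%}")
```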
Discussion. Amplification, deletion, and rearrangement of miRNAs are frequently present in human cancers. Some altered miRNA expression can promote tumorigenesis, and some miRNAs act as tumor suppressors [32]. Recently, the term "metastamiR" was proposed to describe miRNAs that are associated with tumor metastasis. MetastamiRs can be prometastatic or antimetastatic [16]. For example, miR-10b was first reported to contribute to breast cancer metastasis [33]. Later, it was reported that miR-335 could inhibit the invasion of metastatic breast cancer cells. Increasing evidence has demonstrated altered miRNA expression between primary and metastatic tumors, implying an important role of miRNAs in tumor metastasis. Previous studies have reported that 5 upregulated miRNAs and 14 downregulated miRNAs are involved in the metastasis of CRC, but the association between miRNA levels and CRC metastasis remains unclear [34,35]. Thus, it is necessary to investigate the differential miRNA levels between primary tumors and metastatic tissues. A matched comparison is optimal because it can exclude endogenous differences in miRNA expression. In our research, the levels of five metastasis-related miRNAs were detected, and most of them were increased in mCRC. The trend of miRNA expression in our results was similar to previous investigations [33,36,37]. In our study, the higher expression of miR-31, miR-21, miR-10b, and miR-155 in mCRC indicated that a miRNA signature might predict CRC metastasis. Because miR-31 and miR-21 were the most elevated of the five selected miRNAs, we then asked whether they could be potential metastatic biomarkers for pCRC. As expected, their expression was significantly elevated in pCRC with LNM. Interestingly, the increase in miR-31 was more profound, suggesting that miR-31 might be a more sensitive biomarker for predicting CRC metastasis. Although the alterations in miRNA expression in pCRC with LNM might be helpful for diagnosis, it is difficult for clinical practitioners to collect CRC tissues from patients. A more convenient and less invasive detection approach, such as a blood test, would be substantially beneficial for the prediction or diagnosis of CRC metastasis. Nucleic acid levels in the circulation can be used for the diagnosis of CRC [7,38]. Previously, plasma miRNA levels were reported to be highly correlated with miRNA expression in tumor tissue from breast cancer patients [39]. miRNAs have been detected in the serum and plasma of CRC, ovarian cancer, and prostate cancer patients. Plasma miRNAs are more stable and consistent than other circulating nucleic acids; hence, they could be optimal biomarkers for cancer diagnosis. For example, increased plasma miR-92 levels can accurately discriminate CRC from gastric cancer and benign disease [40]. Plasma miR-141 has been proposed for diagnosing metastatic colon cancer [41]. We also evaluated plasma miR-31 levels in CRC patients with or without LNM. Compared with CRC patients without LNM, patients with LNM had significantly higher plasma miR-31 levels. miR-31 yielded a ROC curve area of 0.89, with a sensitivity of 78.5% and a specificity of 86.2%, in distinguishing CRC with LNM from CRC without LNM, using a cut-off value of -8.6 (normalized). As far as we know, plasma miR-31 has previously been investigated as a biomarker for oral cancer [42], and Eslamizadeh et al. reported that the plasma miR-31 level rises with higher CRC stage [17]. Here, we conclude that plasma miR-31 is of diagnostic value for CRC with LNM. However, to further validate our findings, future investigations should recruit larger cohorts. To explore the associations of miR-31 with CRC metastasis, we predicted putative target genes of miR-31 and then carried out enrichment analysis of the functions and signaling pathways of the target genes using bioinformatics methods. The results of the enrichment analysis show that the target genes of miR-31 are possibly involved in cancer-related biological processes and signaling pathways such as the G1/S transition of the mitotic cell cycle, ATP binding, and the Wnt signaling pathway. PPI network construction and module analysis were further conducted, and seven hub genes were identified that are more likely to be target genes of miR-31. Through validation of expression and correlation analysis, among the seven hub genes, the TNS1 level was found to be lower in CRC tissues than in controls and was negatively correlated with the miR-31 level. Thus, TNS1 is the gene most likely to be regulated by miR-31.
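The correlation step could be reproduced along the following lines. The source does not state which correlation coefficient was used, so Spearman's rank correlation is shown as one common choice, with placeholder expression values.

```python
# Testing whether TNS1 mRNA is negatively correlated with miR-31 across
# tumour samples. Arrays are illustrative per-sample expression values.
from scipy.stats import spearmanr

mir31 = [5.1, 6.3, 7.0, 4.8, 6.9, 7.4, 5.5]   # e.g. log2-normalised expression
tns1  = [9.2, 8.1, 7.3, 9.6, 7.5, 7.0, 8.8]

rho, p = spearmanr(mir31, tns1)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")  # rho < 0 => negative correlation
```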
Through survival analysis, we found that the TNS1 level was significantly associated with the OS and DFS of CRC patients; specifically, low expression of TNS1 was predictive of improved OS and DFS. TNS1 is a 220 kDa protein localized to focal adhesions and regions of the plasma membrane where the cell attaches to the extracellular matrix. The TNS1 protein plays a role in regulating cell motility and is suggested to be involved in tumorigenesis [43]. Elevated TNS1 levels were associated with poor overall survival in CRC patients. Therefore, we suspect that miR-31 may target TNS1, contributing to improved outcomes for CRC patients. Further experimental studies need to be performed to validate whether TNS1 is actually targeted by miR-31 in CRC.

Conclusion. In summary, miR-31 is significantly elevated in the tumor tissues and plasma of CRC patients with LNM. Plasma miR-31 may be utilized as a biomarker for CRC with LNM. In addition, elevated miR-31 may contribute to improved outcomes for CRC patients by targeting TNS1.

Data Availability. The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest. No potential conflicts of interest were disclosed.
Abstract. The purposes of this study are: first, to analyze the indications of moral hazard and adverse selection in Indonesian Islamic commercial banks; second, to analyze the influence of moral hazard and adverse selection on the non-performing financing of Indonesian Islamic banks. This study uses the Error Correction Model (ECM) as the tool of analysis. The results show that indications of moral hazard have a positive effect on non-performing financing (NPF) in the short run. In the long run, an indication of moral hazard appears in the GDP variable and in the allocation of murabaha financing (RM) relative to mudharaba financing (FM). The test results also show that adverse selection, represented by the profit sharing rate (PSR), has a positive effect on non-performing financing (NPF) in the long run.

Introduction. The development of Islamic banking worldwide, and especially in Indonesia, has experienced significant growth. An Ernst & Young report (2012) on research in the MENA region (the Middle East and North Africa) concluded that Islamic banking grows on average by more than 20 percent per year, while conventional banking grows by less than 9 percent. Islamic banks are increasing not only in number but also in coverage area. This fact shows that the market share of Islamic banks is growing, and the social demand for Islamic banking services has increased accordingly. This positive trend is not without obstacles; micro- and macro-level issues have become inseparable from this growth. On the other hand, the consequences have included a negative trend in the form of high levels of non-performing financing (hereafter NPF). The Islamic banking system uses the term non-performing financing (NPF), defined as the ratio of defaulting financing to total financing; in the conventional banking system, the corresponding term is the non-performing loan (NPL). Belek (2013) states that there is asymmetric risk upstream in Islamic banking. The NPF is one of the key indicators used to assess the performance of a bank: the lower the NPF, the better the bank's performance, and conversely, the higher the NPF, the poorer the performance (Haneef et al., 2012). The NPF is therefore a threat to the liquidity, solvency, and profitability of a bank. To overcome this, efforts are needed to ensure that the existence of Islamic banks in Indonesia is not threatened by bankruptcy. Kennedy (1973) argues that the main cause of the NPL is neglect of the precautionary principle and acts of moral hazard, that is, an unhealthy lending/borrowing process.
Violation of the principle of prudence in granting credit, or loan engineering, becomes systemic in nature when done in a highly sophisticated way. According to Scott (2003), banking crises essentially begin with reckless loans, even to businesses that are not viable or are speculative. This abandonment of prudence eventually creates conditions that make things difficult for the bank itself. Snyder (2012) stated that asymmetric information between lenders and borrowers distorts capital flows and causes a problem called moral hazard. Karayalcin et al. (2002) stated that when asymmetric information and uncertainty are introduced, and attention focuses only on the centralized investment level, capital flows go astray (moral hazard). There are several causes of NPL (and of the emergence of moral hazard), such as a suboptimal credit distribution process in which credit is disbursed in circumvention of the internal rules prevailing in the bank, a deviation that may be intentional or not (Herijanto, 2011). NPL/NPF can also occur due to the presence of collusion; crime theory holds that more than 90% of crimes in the banking sector are committed in cooperation with people within the sector itself. Credit crimes are generally committed through fictitious credit allocation by people who already understand the intricacies of banking and its operational systems. There is also collusive behavior between the bank as creditor and the debtor that is committed from the beginning (Mohammed, 2005). Non-performing loans caused by conspiracy factors (self-dealing), which are collusive and influenced by parties connected with the credit applicant, may lead to an assessment that is not objective, unfair, inaccurate, and not thorough (Siahaan, 2010). Loans may also be granted on the instruction of superiors; in other words, the loan process can be influenced by parties with an interest in the credit disbursement. Sutojo (2008) argues that some causes originate from the internal management of the bank and can lead to NPL, such as a low ability or sharpness of the bank in analyzing creditworthiness, and so forth. Thus it can be said that non-performing loans result from the failure of the bank's internal management; all the above conclusions ultimately point to internal bank management that is not professionally conducted (Supramono, 2009). Sapuan (2016) noted that profit-sharing financing has become less preferred compared with Islamic debt financing instruments, owing to the existence of asymmetric information in profit-sharing financing. Departing from these facts, we considered it necessary to conduct research on asymmetric information (moral hazard and adverse selection) in the financing mechanism of Islamic banking, especially under the mudharaba financing scheme. The purpose of this study is, first, to analyze the indications of moral hazard and adverse selection in the mudharaba financing of several Indonesian Islamic banks (Bank of BRI Shariah, Bank of Shariah Bukopin, and Bank of BJB Shariah), and second, to analyze the effect of moral hazard and adverse selection on the non-performing financing of those Islamic banks.
Literature Review. The research conducted by Noorsy on moral hazard aimed to analyze the effect of moral hazard on macroeconomic stability. The study population comprised state-owned banks, national private banks, joint venture banks, and foreign banks, a total of 10 banks: Mandiri Bank, BCA, BNI, BRI, Danamon Bank, BII, Permata Bank, Panin Bank, Bukopin Bank, and CitiBank. The study used quantitative analysis in the form of Structural Equation Modeling with the Partial Least Squares (PLS) method. The quantitative test results show that moral hazard in banking does not have a significant impact on macroeconomic stability. The object of that research was conventional banks; the area of Islamic banks was not studied, as it is in the present paper. The research conducted by Djohanputro and Kountur (2007) and Herijanto (2011) examined the effects of personal qualifications, the institutional environment, and environmental processes, as well as lending/financing controls, on the emergence of NPL/NPF, through a comparative study of conventional and Islamic banks in Indonesia. These two studies share a common objective, namely to determine the factors that may affect the NPL. The difference is that the first was a case study of rural banks (BPR), while the second was conducted to determine the effects of internal management factors on the emergence of NPL/NPF, with conventional and Islamic banks as the objects of research. The research conducted by Rayner (1991) focused on the issue of contracts in Islamic banks. Rayner also investigated actions related to fraud and concluded that financing problems intentionally caused by a customer can be referred to as acts of fraud that cannot be justified by norms and laws; such acts of fraud are a form of moral hazard. Much further research related to non-performing loans (NPL) has been conducted, including the study by Shrestha (2011), which sought to ascertain the determining factors of non-performing loans in 18 commercial banks in Nepal using descriptive statistics, trend analysis, and econometric factor models. Shrestha concluded from comparative measurements across those commercial banks that the direction of financing was positive; this result indicated a declining value of NPL, which had an impact on the performance of the banking industry in Nepal. By contrast, Hou and Dickinson (2010) stated that there is no substantial evidence that moral hazard is among the reasons why NPLs increase in the banking industry; on the contrary, they interpreted the increasing NPL as a positive correlation between NPL performance and loan growth. Among all of these studies related to financing, those addressing asymmetric information issues (moral hazard and adverse selection) and their effects on financing performance have not yet been carried out. For that reason, we consider the present research essential and urgent, to bridge this gap and find out whether or not asymmetric information (moral hazard and adverse selection) is at work in the mudharaba financing scheme of the Islamic banking sector.

Method. The study includes a variety of elements that are complex regarding the indications of asymmetric information (moral hazard and adverse selection) by customers.
To test the influences on non-performing financing, we used several variables: gross domestic product (GDP); inflation; banking policy, represented by the return generated by the ratio of the murabaha margin (MM) to mudharaba profit-loss sharing (MPLS); the allocation of financing, represented by the ratio of murabaha financing (RM) to mudharaba financing (FM); and the level of revenue sharing (the profit sharing rate, PSR). Other data were collected from agency officers (especially marketing officers) and debtors. The sampling technique used in this research is probability sampling, with a proportionate stratified random sampling technique. The samples in this study are Bank of BRI Shariah, Bank of Shariah Bukopin, and Bank of BJB Shariah. This is a descriptive study with both qualitative and quantitative approaches, based on secondary data on the factors that appear as variables, which were analyzed using the Error Correction Model (ECM). The ECM used in this study was freed from non-stationarity issues through stationarity tests, tests of the degree of integration, cointegration tests, and classical assumption tests, so the ECM used here is viable. Below is the general form of the ECM used. Long run: NPF_t = β0 + β1 GDP_t + β2 INF_t + β3 MM_MPLS_t + β4 RM_FM_t + β5 PSR_t + ε_t. Short run: ΔNPF_t = α0 + α1 ΔGDP_t + α2 ΔINF_t + α3 ΔMM_MPLS_t + α4 ΔRM_FM_t + α5 ΔPSR_t + λ ECT_{t-1} + u_t, where ECT_{t-1} is the lagged residual from the long-run equation.

Result and Discussions. Asymmetric Information (Moral Hazard). Asymmetric information in financing is one aspect that can give rise to the risk of moral hazard. As described by Murray (2011), deviations in the data are evidence of the existence of asymmetric information. At Bank of BRI Shariah, interviews with the account officer indicated that asymmetric information can occur because the customer has more information on the financial data than the bank officers themselves. Customers exploit this informational advantage, for instance through dishonesty in reporting the development of the customer's business. These actions, consisting of not providing correct reports or keeping some aspects of the operation secret from the bank, are a form of moral hazard. Another factor that causes asymmetric information is that banks do not perform oversight and monitoring of the development of the customer's business. Although in principle the bank cannot interfere with the customer's business, the bank is allowed to conduct supervision and guidance of the customer's business to make sure that the mudharaba financing is not in a loss trend. The cause of financing risk at Bank of BRI Shariah is repayment default by the bank's cooperative customers. The customers of Bank of BRI Shariah's mudharaba financing product are corporate employees, so failure to repay the loan may result from employee layoffs. Members of the cooperative, as the final users, become unable to repay their loans to the cooperative, which has an enormous impact on it; that impact is the reason why the cooperative's installment payments to Bank of BRI Shariah become impaired. From interviews with Bank of BJB Shariah we obtained explanations of the kinds of moral hazard caused by customers: some customers duplicate or falsify securities used as a guarantee and then use those securities as collateral to obtain loans from other banks. By this move, the customer obtains two loans from two different banks against the same assurance. When a customer doubles collateral in the form of securities, he can sell those guarantees while, on the other hand, those guarantees are being utilized by another bank as a loan guarantee.
Based on these interviews with Bank of BJB Shariah, we can conclude that this kind of moral hazard is also caused by negligence on the part of bank management in checking customers' financing application documents. Some customers exploit this negligence to commit fraud, which can harm the entire banking industry. To minimize the occurrence of acts of moral hazard by customers, Bank of BJB Shariah performs periodic monitoring once a month. This monitoring is also conducted so that the bank keeps an eye on its customers' business development in each period. Monitoring is a form of risk mitigation for Bank of BJB Shariah, conducted to avoid abuse or misuse. Bank of Shariah Bukopin grants loans to its customers after conducting strict client selection. This is to avoid the risk of moral hazard in which customers misuse funds in ways not in accordance with the contract. Financing risk also emerges when customers provide no self-financing, so that funding comes entirely from the bank, which can make customers negligent in running the business. Asymmetric information in customer financing occurs when customers deliberately commit fraud in reporting their business development. The financial statements of the client's business are a guideline for the bank in determining the customer's business development. A customer may intentionally manipulate the financial statements, for example by marking up expenses so that the reported income appears small. This action will certainly affect the calculation of profit sharing between the bank and the customer, and between the bank and its depositors. From these three banks, the objects of this study, we can conclude that the risk of moral hazard can be caused by asymmetric information. Asymmetric information is caused by two factors: internal and external. Internal factors derive from bank employees involved in the financing procedure; external factors derive from the customers receiving financing. Internal factors arise from the bank's lack of proficiency in analyzing customers' financing applications. Financing analysis is among the factors that influence the committee's decision on a customer's financing request, and errors in the analysis will lead to losses for banks.

Adverse Selection. Adverse selection occurs before a contract, when the customer knowingly provides false data and information in order to have his financing application accepted by the bank. When banks do not check customers' documents, they provide financing to customers who are not eligible. At Bank of Shariah Bukopin, the customer feasibility study is performed by an internal account officer, so there is a concern that personal interests may arise between that account officer and customers. Such personal interest sparks adverse selection: because of the account officer's interest in those customers, the financing applications of customers who are not eligible are accepted and then realized.
The adverse selection faced by Islamic banks involves information and data provided to them in the form of false data, ranging from identity cards and office addresses to false types of collateral. Some customers also manipulate their bank accounts for the sake of obtaining financing. In one example of bank account forgery, a few months before the submission of a financing application the customer's savings balance increases; after the financing is realized and the bank checks those savings, it discovers that the balances were fictitious. Some other customers portray themselves as having good character and therefore worthy of obtaining financing. Banks make the mistake of providing financing to customers with false data owing to negligence in customer selection. Banks are overconfident in the documents provided by customers and do not follow the 5C principle in the analysis of financing. Adverse selection occurs because banks do not have enough information about the reputation and the real financial condition of customers. This limitation is due to the lack of transparency among banks. In our interview, Bank of BRI Shariah explained that the absence of transparency of information among banks is also one of the causes of adverse selection. Adverse selection may also occur due to the behavior of employees of the bank itself. Bank employees may knowingly approve the financing requests of customers whom the administration and analysis would show to be ineligible for financing; because of the relationship between the bank employees and the customers, the financing can be easily approved. Internal factors within the banks can thus also be a source of adverse selection. Adverse selection negatively affects a bank through the financing problems it creates, and it can also be a cause of moral hazard.

The Result of the Error Correction Model. Before conducting the ECM test, we ran several preliminary tests to assure data quality, namely tests of normality, linearity, stationarity, degree of integration, cointegration, and the classical assumptions (multicollinearity, heteroscedasticity, and autocorrelation). The results showed no data quality violations, so we proceeded with the ECM test. The Error Correction Model is an approach used to analyze time series and to study the consistency of the relationships between the short run and the long run of the tested variables. The long run is a period long enough to allow full adjustment to any changes that arise, indicating the extent to which the dependent variable conforms fully to changes in the independent variables. The ECM result can be seen in Table 1. Table 1 shows that, in the long run, all the independent variables (GDP, inflation, MM_MPLS, RM_FM, and PSR) simultaneously have a significant impact on NPF. This is shown by the F-statistic probability of 0.000000, meaning that the independent variables are jointly significant at the 1% significance level. The results of the partial tests show that GDP and PSR (profit sharing rate) have significant positive effects on NPF, and that RM_FM (murabaha financing allocation toward mudharaba financing) has a significant negative effect on NPF at the 1% level of significance, while inflation and MM_MPLS (murabaha margin toward mudharaba profit-loss sharing) have no significant impact on NPF.
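A hedged sketch of how a two-step error-correction model of this kind can be estimated: a long-run OLS of NPF on the regressors, whose lagged residual (ECT) then enters the short-run equation in first differences, as described next. The file name, column names, and Engle-Granger-style procedure are assumptions consistent with the variables described above, not the authors' actual code.

```python
# Two-step ECM sketch with statsmodels; placeholder data and columns.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("npf_quarterly.csv")          # hypothetical time-series file
X_cols = ["GDP", "INF", "MM_MPLS", "RM_FM", "PSR"]

# Step 1: long-run (levels) equation
long_run = sm.OLS(df["NPF"], sm.add_constant(df[X_cols])).fit()
df["ECT"] = long_run.resid                      # deviation from long-run equilibrium

# Step 2: short-run equation in first differences with the lagged ECT
d = df[["NPF"] + X_cols].diff().add_prefix("d_")
d["ECT_lag"] = df["ECT"].shift(1)
d = d.dropna()
short_run = sm.OLS(d["d_NPF"], sm.add_constant(d.drop(columns="d_NPF"))).fit()

print(long_run.summary())
print(short_run.summary())
# A negative, significant ECT_lag coefficient (such as the -0.318 reported
# below) means roughly 32% of any disequilibrium is corrected each period.
```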
Furthermore, the long-run residual obtained from the above results is used to estimate the short-run equation. The short-run equation results obtained using the Error Correction Model (ECM) approach are shown in Table 2. These short-run estimates indicate that, simultaneously, all independent variables have a significant effect on NPF at the 5% significance level, but partially no independent variable has a substantial impact on NPF. The ECT coefficient shown in the analysis above is -0.318089, meaning that 31.8089% of the disequilibrium in NPF from the previous period is corrected in the current period, with a probability of 0.0013; this confirms the validity of the model specification, which can indeed explain the variation in the dependent variable. Interpretations of the ECM test results in the long and the short run are as follows. The GDP variable (gross domestic product) has a significant positive effect on NPF in the long run, while in the short run it has no significant impact. This indicates, in the long run, the occurrence of moral hazard represented by GDP in Islamic banking. The variables inflation and MM_MPLS (murabaha margin toward mudharaba profit-loss sharing) have no significant effect on NPF in either the long or the short run, which means no moral hazard is identified. The variable of murabaha financing allocation (RM) toward mudharaba financing (FM) has a significant negative effect on NPF in the long run, while in the short run it does not affect NPF significantly. This means that in the long run RM_FM identifies the presence of moral hazard, while in the short run there is no indication of moral hazard. The PSR variable (profit sharing rate) has a significant positive effect on NPF in the long run, while in the short run it has no significant impact on NPF. The PSR variable identifies the presence of adverse selection in the Islamic banking industry in Indonesia. Ameer et al. (2012) noted that full-fledged Islamic banks do not provide comprehensive disclosure related to profit-sharing investment accounts. Asymmetric information on mudharaba financing is one cause of the risk of moral hazard, as described by Murray (2011), who states that deviations in the data are evidence of asymmetric information. At Bank of BRI Syariah, we discovered that customers have more information on the financial data of the corporate business than the bank officers themselves. At Bank of BJB Shariah, we found a kind of moral hazard committed by customers: some customers duplicate or falsify securities so that they can use those securities as collateral to obtain financing from other banks. Regarding Bank of BJB Shariah, we also found that the moral hazard that occurs is caused by negligence on the bank management side in verifying customers' financing application documents. Sukmana and Suryaningtyas (2016) conclude that capital and bank performance play an essential part in banking liquidity. Adverse selection occurs because banks generally do not have all the information about the reputation and financial situation of customers. This limitation is due to the lack of openness between the bank and its customers. At BRI Syariah, we found that the lack of transparency of information among banks is also one of the causes of adverse selection.
Adverse selection may also occur due to the behavior of employees of the bank itself. Bank employees knowingly approved the financing requests of some customers who typically were not eligible; this occurred because of the relationship between bank employees and those customers.

Conclusion. The Error Correction Model (ECM) test results show indications of moral hazard represented by gross domestic product (GDP), which has a positive effect on non-performing financing (NPF). The variable RM_FM (murabaha financing allocation toward mudharaba financing) has a significant negative impact on NPF in the long run, while the other variables (inflation and the ratio of the murabaha margin (MM) to mudharaba profit and loss sharing (MPLS)) did not significantly influence the NPF. This means that these two variables do not indicate the existence of moral hazard in the long run in the Islamic banking sector. From the test,
Crystal and volatile controls on the mixing and mingling of magmas. The mixing and mingling of magmas of different compositions are important geological processes. They produce various distinctive textures and geochemical signals in both plutonic and volcanic rocks and have implications for eruption triggering. Both processes are widely studied, with prior work focusing on field and textural observations, geochemical analysis of samples, theoretical and numerical modelling, and experiments. However, despite the vast amount of existing literature, there remain numerous unresolved questions. In particular, how does the presence of crystals and exsolved volatiles control the dynamics of mixing and mingling? Furthermore, to what extent can this dependence be parameterised through the effect of crystallinity and vesicularity on bulk magma properties such as viscosity and density? In this contribution, we review the state of the art for models of mixing and mingling processes and how they have been informed by field, analytical, experimental and numerical investigations. We then show how analytical observations of mixed and mingled lavas from four volcanoes (Chaos Crags, Lassen Peak, Mt. Unzen and Soufrière Hills) have been used to infer a conceptual model for mixing and mingling dynamics in magma storage regions. Finally, we review recent advances in incorporating multi-phase effects in numerical modelling of mixing and mingling, and highlight the challenges associated with bringing together empirical conceptual models and theoretically-based numerical simulations.

Introduction: Magma Mixing and Mingling and Volcanic Plumbing Systems. It is now widely accepted that magmas of different compositions can mix and mingle together (Blake et al., 1965; Eichelberger, 1980; Morgavi et al., 2019; Perugini & Poli, 2012; Snyder, 1997; Sparks & Marshall, 1986; Wiebe, 1987; Wilcox, 1999). Textural consequences of mingling have long been observed (Judd, 1893; Phillips, 1880), although the earliest observations were not necessarily interpreted correctly (Wilcox, 1999), with heterogeneities interpreted as originating from metasomatism (Fenner, 1926) or solid-state diffusion (Nockolds, 1933). Advancements in geochemical analysis combined with an understanding of phase equilibria led to acknowledgment of mixing and mingling as key processes, alongside crystal fractionation, in producing the compositional diversity of igneous rocks (Vogel et al., 2008). In addition, interaction between magmas became recognized as a potential trigger for volcanic eruptions (Sparks et al., 1977). Evidently, understanding mixing and mingling processes is crucial for deciphering the evolution of igneous rocks and the eruptive dynamics of volcanoes. Previous work has sometimes been flexible with regard to precise definitions of the terms "mixing" and "mingling." We here define mixing to be chemical interaction between two magmas that produces a composition intermediate between the original end-members (Bunsen, 1851). Chemical mixing proceeds by chemical diffusion (Lesher, 1994; Watson, 1982) and, if allowed to complete, leads to hybridization and homogeneous products.
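The diffusive timescale t ~ L^2/D makes the scale dependence of this process concrete. In the minimal sketch below, the diffusivity is an assumed, order-of-magnitude value for a major element in silicate melt; the lengthscales are illustrative.

```python
# Back-of-envelope check on diffusive homogenisation timescales, t ~ L^2 / D.
D = 1e-12            # m^2/s, illustrative chemical diffusivity in silicate melt
year = 3.15e7        # seconds per year

for L in (1e-3, 1e-1, 10.0):          # 1 mm, 10 cm, 10 m
    t = L**2 / D
    print(f"L = {L:g} m  ->  t ~ {t/year:.3g} yr")
# Millimetre scales equilibrate in weeks; metre scales take far longer than
# typical magma residence times, so physical mingling must precede mixing.
```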
By contrast, mingling is the physical interaction of the two magmas, such as through convective stirring (e.g., Oldenburg et al., 1989) or chaotic advection. One scenario is that in which the bulk viscosities of the two magmas become closer, thereby facilitating mingling and mixing before continued crystallization of the mafic magma increases its viscosity. Another scenario is mixing and mingling between partially molten silicic rocks and a hot, rhyolitic injection (Bindeman & Simakin, 2014), which is important for the formation of large, eruptible magma bodies containing crystals mixed from different portions of the same magma storage system (antecrysts; Bindeman & Melnik, 2016; Francalanci et al., 2011; Ubide et al., 2014a; Seitz et al., 2018; Stelten et al., 2015). In all cases, the physico-chemical changes and their associated timescales govern the style of mixing, the resultant textures, and the eruptive potential. Evidence of mixing is preserved primarily at the microscale because the relatively slow rate of diffusion alone (Acosta-Vigil et al., 2012; Morgan et al., 2008) cannot redistribute chemical components over large spatial scales (Bindeman & Davis, 1999). Crystals, in particular, can preserve chemical records of changing storage conditions that can be associated with mixing. For instance, resorption zones and reverse zoning in plagioclase might indicate changes to more mafic melt compositions, possibly due to multiple mixing events (Hibbard, 1981; Lipman et al., 1997; Tsuchiyama, 1985). The mixing history can be determined by combining these observations with methodologies such as major-element (Rossi et al., 2019), trace-element, and isotopic analyses (Davidson et al., 2007), along with measurements from the bulk rock or other minerals. This can include timescales of mixing (Chamberlain et al., 2014; Rossi et al., 2019) and ascent, temperatures and pressures of mixing (Samaniego et al., 2011), and the relative contribution of processes such as fractional crystallization (Foley et al., 2012; Ruprecht et al., 2012; Scott et al., 2013). Despite this, many studies continue to model mingling as taking place between two crystal-free fluids in a vat (Montagna et al., 2015). Such a picture is hard to reconcile with evidence from petrological analysis (Cooper, 2017; Druitt et al., 2012; Turner & Costa, 2007) and the lack of geophysical evidence for large extended bodies of melt (Farrell et al., 2014; Miller & Smith, 1999; Pritchard et al., 2018; Sinton & Detrick, 1992). It is therefore clear that the presence of crystals and volatiles, and their effect on magma rheology (Caricchi et al., 2007; Mader et al., 2013; Mueller et al., 2010; Pistone et al., 2012), must be accounted for when modeling mingling (Andrews & Manga, 2014; Laumonier et al., 2014).

Analogue Experiments. Early analogue experiments used non-magmatic fluids and particles to model magma mingling by injecting one viscous fluid into another (Campbell & Turner, 1986; Huppert et al., 1984, 1986). These studies considered magmas as pure melts and demonstrated that large viscosity contrasts prohibit efficient mingling. Field observations that some mafic magmas became vesiculated in response to undercooling by the host magma (Bacon, 1986; Bacon & Metz, 1984; Eichelberger, 1980) motivated experiments focused on bubble transfer from one viscous layer into another, which demonstrated that the rise of bubble plumes could cause mingling (Phillips & Woods, 2001; Thomas et al., 1993). Recent experiments have examined the effect of crystals on intrusion break-up.
For example, in one set of experiments, a particle-rich corn syrup (high density and viscosity) was injected into a large, horizontally sheared body of particle-free corn syrup (low density and viscosity) to model the injection of cooling (partially crystallized) mafic magma into a convecting magma chamber. These experiments showed that low particle concentrations caused the injection to fragment and form "enclaves," whereas at high particle concentrations it remained intact and formed a coherent layer. They further suggest that, in the presence of a yield stress in the injected magma, the greater the bulk viscosity contrast the smaller the lengthscale of intrusion fragmentation, thus enhancing homogeneity at the macroscopic scale (Hodge & Jellinek, 2020). Although no analogue experiments have considered liquid injection into variably crystalline suspensions, experiments with gas injection into particle-liquid suspensions show a strong control of particle concentration on injection style, with a threshold between ductile and brittle behavior at random close packing (Oppenheimer et al., 2015; Spina et al., 2016).

High-Temperature and/or High-Pressure Experiments. Investigations of magma interactions in high-temperature and/or high-pressure experiments can be broadly divided into two categories. Static experiments consider the juxtaposition of heated magmas and study mixing resulting from the diffusion of different melt components (Van der Laan & Wyllie, 1993; Watson & Jurewicz, 1984; Wylie et al., 1989). Fluid motion can still occur in these static experiments, as variable diffusion rates between elements can create density gradients that drive compositional convection (Bindeman & Davis, 1999). Additionally, because water diffuses much more rapidly than other components (Ni & Zhang, 2008), transfer of water from hydrous mafic magmas to silicic bodies lowers the liquidus temperature of the latter, leading to undercooling and the production of quenched margins in the mafic member, even without a temperature contrast (Pistone et al., 2016a). Bubbles that exsolve in a lower, mafic layer can also rise buoyantly into the upper layer, entraining a filament of mafic melt behind them (Wiesmaier et al., 2015). Such bubble-induced mingling can be highly efficient and has also been documented in natural samples (Wiesmaier et al., 2015). It has been proposed that a similar style of mingling can occur through crystal settling (Jarvis et al., 2019; Renggli et al., 2016). Dynamic experiments apply shear across the interface between two magmas and reproduce mingling behavior. The shear can be applied in various ways: with a rotating parallel-plate geometry (Kouchi & Sunagawa, 1982, 1985; Laumonier et al., 2014, 2015), a Taylor-Couette configuration (De Campos et al., 2004, 2008; Perugini et al., 2008; Zimanowski et al., 2004), a journal bearing system (De Campos et al., 2011), or a centrifuge. These experiments have produced a variety of textures, from homogeneous mixed zones to banding. When pure melts are used, the combination of diffusional fractionation and chaotic advection can produce phenomena such as double-diffusive convection and reproduce nonlinear mixing trends for various major and trace elements (De Campos et al., 2011; Perugini et al., 2008). Experimental results also suggest new quantities to describe the completeness of mixing, such as the concentration variance and the Shannon entropy.
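The sketch below illustrates one common formulation of such a mixing entropy, computed from tracer positions binned onto a grid; published definitions vary in detail, and the grid size and synthetic particle distributions here are illustrative assumptions.

```python
# Shannon entropy as a mixing metric: S -> 1 as tracer spreads uniformly.
import numpy as np

def mixing_entropy(x, y, nx=8, ny=8):
    """Normalised Shannon entropy of tracer positions over an nx-by-ny grid."""
    H, _, _ = np.histogram2d(x, y, bins=(nx, ny), range=((0, 1), (0, 1)))
    p = (H / H.sum()).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(nx * ny)   # in [0, 1]

rng = np.random.default_rng(0)
# tracer initially confined to the left half (unmixed), then spread everywhere
x0, y0 = rng.uniform(0, 0.5, 2000), rng.uniform(0, 1, 2000)
x1, y1 = rng.uniform(0, 1, 2000), rng.uniform(0, 1, 2000)
print("unmixed S =", round(mixing_entropy(x0, y0), 3))   # ~0.83 (half the cells)
print("mixed   S =", round(mixing_entropy(x1, y1), 3))   # ~1.0  (uniform)
```

The concentration variance plays a complementary role: it decays toward zero as mixing completes, while the entropy grows toward its maximum.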
Where crystals are considered, the presence of phenocrysts can enhance mingling by creating local velocity gradients and disturbing the melt interface (De Campos et al., 2004; Kouchi & Sunagawa, 1982, 1985). In contrast, other studies (Laumonier et al., 2014, 2015) have shown that the presence of a crystal framework in the mafic member prevents mingling, whereas the presence of water can enhance mingling by lowering the liquidus temperature, and thus the crystallinity, of the magma (Laumonier et al., 2015). Sparks and Marshall (1986) developed the first simple model to describe viscosity changes caused by thermal equilibration of a hot mafic magma and a cooler silicic magma, and the resulting (limited) time window in which mingling/mixing can occur. More sophisticated models have simulated mingling between melts driven by double-diffusive convection (Oldenburg et al., 1989), compositional melting (Cardoso & Woods, 1996; Kerr, 1994), and the Rayleigh-Taylor instability (Semenov & Polyansky, 2017). Another group of studies has used single-phase models to simulate elemental diffusion and advection in a chaotic flow field (Perugini & Poli, 2004; Petrelli et al., 2006). These models reproduce naturally observed geochemical mixing relationships, including linear mixing trends between elements with similar diffusion coefficients and large degrees of scatter when diffusion coefficients differ (Nakamura & Kushiro, 1998; Perugini & Poli, 2004). Interestingly, the simulations produce both regular and chaotic regions, which are unmixed and well mixed, respectively, and have been interpreted to correspond to enclaves and host rock (Petrelli et al., 2006). This framework has been extended to account for a solid crystal phase (Petrelli et al., 2016) by including a Herschel-Bulkley, particle-shape-dependent rheology (Mader et al., 2013) and a parameterization of the relationship between temperature and crystallinity (Nandedkar et al., 2014). This body of work has demonstrated that chaotic advection can speed up homogenization.

Numerical Models. Models of mixing and mingling that consider two-phase magmas containing either solid crystals or exsolved volatiles often assume coupling between the phases. In this way, the solid or volatile phase can be represented as a continuous scalar field, and the resultant effect on rheology is accounted for through a constitutive relationship. For example, Thomas and Tait (1997) used such a framework to show that volatile exsolution in an underplating mafic magma could create a foam at the interface with an overlying silicic magma. Depending on the exsolved gas volume fraction and the melt viscosity ratio, mixing and mingling could then proceed through foam destabilization, enclave formation, or a total overturn of the system. Folch and Martí (1998) showed analytically that such exsolution could lead to overpressures capable of causing volcanic eruptions. Recent finite-element models show that injection of a volatile-rich mafic magma into a silicic host can cause intimate mingling when viscosities and viscosity contrasts are low (Montagna et al., 2015; Morgavi et al., 2019). The combination of reduced density in the chamber and the compressibility of volatiles can (non-intuitively) lead to depressurization in the chamber (Papale et al., 2017), which is important for the interpretation of ground deformation signals (McCormick Kilbride et al., 2016). The effect of crystals on mixing and mingling has also been modeled by treating the crystals as a continuous scalar field.
Examples include simulations of mixing across a vertical interface between a crystal suspension (30% volume fraction) and a lighter, crystal-free magma (Bergantz, 2000), and injection of a mafic magma into a silicic host with associated melting and crystallization (Schubert et al., 2013). The role of crystal frameworks in both the intruding and host magma is addressed by Andrews and Manga (2014), who model the role of thermal convection in the host, and the associated shear stress on the intruding dike. If convection occurs while the dike is still ductile, then mingling will produce banding. Otherwise, the dike will fracture to form enclaves. Woods and Stock (2019) have also coupled thermodynamic and fluid modeling to simulate injection, melting, and crystallization in a sill-like geometry. Finally, isothermal computational fluid dynamic simulations have been used to examine the case of aphyric magma injecting into a basaltic mush. For sufficiently slow injection rates, the new melt percolates through the porous mush framework, whereas for faster injections, fault-like surfaces delimit a "mixing bowl" within which the crystals fluidize and energetic mixing takes place (Bergantz et al., 2015; Carrara et al., 2020; McIntire et al., 2019; Schleicher et al., 2016). By explicitly modeling the particles with a Lagrangian scheme, it is possible to account for particle-scale effects, including lubrication forces (Carrara et al., 2019), that are neglected when using constitutive relations from suspension rheology. These simulations suggest that mushes with ≤60% crystals can be mobilized by injection, but neglect welded crystals or recrystallization of crystal contacts. Furthermore, geophysical observations suggest that mushes spend the majority of their lifetimes with much higher crystallinities (80%-90%; Farrell et al., 2014; Pritchard et al., 2018; Sinton & Detrick, 1992). Despite these limitations, recent simulations using the model have shown that the contrast between the intruding and resident melt densities, rather than bulk densities, controls the morphology of intrusion (Carrara et al., 2020).

Chaos Crags. Chaos Crags comprises a series of enclave-bearing rhyodacite lava domes that erupted between 1125 and 1060 years ago (Clynne, 1990). The host lavas are crystal-rich, containing phenocrysts of plagioclase, hornblende, biotite, and quartz, whereas the enclaves are basaltic andesite to andesite with occasional olivine, clinopyroxene, and plagioclase phenocrysts in a groundmass of amphibole and plagioclase microphenocrysts (Heiken & Eichelberger, 1980). Many, but not all, enclaves have fine-grained and crenulated margins, and all contain resorbed phenocrysts captured from the host (Figure 4a). Some phenocrysts in the host also show resorption textures (Tepley et al., 1999).

Enclave Groundmass Textures. The enclaves from all four volcanoes show both similar and contrasting textural features. At Chaos Crags, most enclaves have fine-grained and crenulate margins (Figure 4a; Tepley et al., 1999), although those erupted in later domes are more angular and lack fine-grained margins. Enclaves in Lassen Peak samples are subrounded to subangular with an equigranular texture (Figure 4b; Clynne, 1999). Many enclaves from the 1991-1995 eruption at Mt. Unzen have crenulate and fine-grained margins (Browne et al., 2006a), although some have angular edges and a uniform crystal size (Figure 4c; Fomin & Plechov, 2012).
Similar features are observed at Soufrière Hills, with many inclusions being ellipsoidal (Figure 4d) and some angular; most, but not all, have fine-grained, crenulate margins (Murphy et al., 2000). Both the size and volume fraction of enclaves increased during the eruption (Plail et al., 2014, 2018). In all localities, fine-grained margins and crenulate contacts are attributed to undercooling of the mafic magma due to juxtaposition against the much cooler felsic host (Eichelberger, 1980) and associated rapid crystallization of the mafic melt near the contact with the felsic host. These crystalline rims have a greater rigidity than the lower-crystallinity enclave interiors, so that as the enclave continues to cool and contract, the rims deform to a crenulate shape that preserves the original surface area (Blundy & Sparks, 1992). Enclaves not exhibiting such quench textures are also found at all localities.

Plagioclase. The composition and texture of plagioclase crystals are extremely good recorders of magmatic processes because (a) their stability field in pressure-temperature-composition (P-T-X) space is very large in volcanic systems, and (b) compositional zoning modulated by changes in the P-T-X space is well preserved due to the relatively slow diffusion in the coupled substitution between Na-Si and Ca-Al (Berlo et al., 2007; Grove et al., 1984; Morse, 1984). Texturally, plagioclase phenocrysts in the host lavas at all four localities comprise a population of unreacted, oscillatory zoned crystals with a smaller amount of reacted crystals that have sieved cores and/or resorption rims (Figure 5a; Browne et al., 2006b; Clynne, 1999; Murphy et al., 2000; Tepley et al., 1999). Associated enclaves contain plagioclase xenocrysts incorporated from the host with sieved-texture resorption zones that consist of patchy anorthite-rich plagioclase and inclusions of glass (quenched melt). These reacted zones can penetrate to the cores of smaller crystals (Figures 5b, c), but in larger xenocrysts appear as a resorption mantle surrounding a preserved oscillatory zoned core (Figure 5d). All xenocrysts are surrounded by a clean rim that is of the same composition as the plagioclase microphenocrysts in the enclave groundmass.

Interpretation of Textures and Chemistries. The common textural and chemical features of these volcanic systems suggest commonalities in the mixing and mingling processes. First, because enclaves from all volcanoes contain xenocrysts that originated in the host magmas, the mafic component must have been sufficiently ductile to incorporate these crystals during mixing. Plagioclase xenocrysts contain rounded, patchy zones with a sieved texture showing that both partial and simple dissolution occurred (Cashman & Blundy, 2013; Nakamura & Shimakita, 1998; Tsuchiyama, 1985), suggesting that the enclave magmas were undersaturated in plagioclase at the time of incorporation. Because up to 70% of the enclave groundmass consists of plagioclase microphenocrysts, this implies the mafic magmas were crystal-poor at the time of xenocryst incorporation. Compositional variations of FeO and An in the plagioclase crystals provide further information on the relative compositions of the host and enclave melt at Soufrière Hills (Ruprecht & Wörner, 2007). At Mt. Unzen, enclave microphenocryst and xenocryst rims show a strong positive correlation over the whole An range, whereas these phases at Soufrière Hills show a negative correlation for An > 75 mol% (Figure 6).
This difference is attributed to the absence of Fe-Ti oxide as an early crystallizing phase in the Soufrière Hills mafic end-member, which would cause FeO to increase in the residual melt as other phases precipitated, up until the point of oxide saturation. The lack of this inflection in the Mt. Unzen sample suggests that Fe-Ti oxides were present in the mafic magma prior to mixing, as suggested for the 1991-1995 eruption (Botcharnikov et al., 2008; Holtz et al., 2005). Whereas the observed enrichment in FeO in enclave microphenocrysts, sieved zones in phenocrysts and xenocrysts, and xenocryst rims is likely due to crystallization from a more mafic melt, it is also possible that growth of these regions was sufficiently fast for kinetic effects to play a role; if growth is faster than diffusion of FeO in the melt, then an FeO-rich boundary layer may develop around the crystals (Bacon, 1989; Bottinga et al., 1966; Mollo et al., 2011) that could also explain the enrichment. However, such a process would generate a negative correlation between FeO and An, not the positive correlation observed at Unzen and Soufrière Hills. The contrasting textures of quartz in the host and enclaves also provide insight into the mingling/mixing process. Rounding of quartz xenocrysts, together with glass-filled embayments, suggests dissolution of quartz in the host. Conversely, quartz reaction rims comprising hornblende microphenocrysts, glass, and vesicles in the enclaves (Figures 3d, 7b) suggest that the dissolution-induced increase in the silica content (and H2O solubility) of the surrounding melt caused diffusion of H2O toward the quartz (Pistone et al., 2016a). Whereas the presence of resorbed xenocrysts in enclaves suggests that there was time for crystals to be incorporated, and to react, before the enclave started to crystallize, the presence of fine-grained rims on some enclaves (Browne et al., 2006a; Murphy et al., 2000; Plail et al., 2014; Tepley et al., 1999) implies rapid cooling and crystallization (chilling) of the mafic magma against the cooler silicic host (Bacon, 1986). Xenocrysts must therefore have been incorporated prior to the formation of the chilled margin, providing a limited temporal window for crystal transfer. A comparison of the thickness of xenocryst resorption zones at Mt. Unzen (Browne et al., 2006a) with those produced experimentally (Nakamura & Shimakita, 1998; Tsuchiyama & Takahashi, 1983; Tsuchiyama, 1985) suggests resorption on a timescale of days; this contrasts with thermal modeling (Carslaw & Jaeger, 1959) suggesting that enclaves should thermally equilibrate on a timescale of hours. Again, this requires incorporation of xenocrysts prior to intrusion disaggregation and enclave formation (Browne et al., 2006a). As all the considered volcanic lavas contain similarly resorbed plagioclase xenocrysts within enclaves of comparable sizes, it seems likely that this temporal constraint on the sequence of crystal transfer prior to enclave formation holds generally for the systems presented here.
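The scaling behind this timescale comparison can be made explicit with a conductive timescale t ~ r^2/kappa; the thermal diffusivity below is an assumed typical value for silicate magma, and the radii are illustrative.

```python
# Order-of-magnitude conductive equilibration time for an enclave of radius r,
# following Carslaw & Jaeger-type scaling, versus days-long resorption.
kappa = 5e-7          # m^2/s, illustrative thermal diffusivity
hour = 3600.0

for r in (0.01, 0.05, 0.10):                  # enclave radii in metres
    t = r**2 / kappa
    print(f"r = {r*100:.0f} cm  ->  t ~ {t/hour:.1f} h")
# Centimetre- to decimetre-scale enclaves equilibrate in hours, far faster than
# the ~days needed to grow the observed resorption zones, so xenocrysts must be
# incorporated before the chilled margin forms.
```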
Importantly, all locations also contain enclaves with unquenched margins (Tepley et al., 1999) and equigranular textures (Browne et al., 2006a; Heiken & Eichelberger, 1980). Equigranular enclaves at Mt. Unzen have been interpreted as originating from disaggregation of the interior of the intruding magma, which cooled more slowly than the intrusion margin where porphyritic (xenocryst-bearing, chilled-margin) enclaves formed. Similarly, at Soufrière Hills, the quenched enclaves may form from an injected plume of mafic magma, whereas unquenched and more hybridized enclaves form from disturbance of a hybrid layer at the felsic-mafic interface. Angular enclaves with unquenched margins may record the break-up of larger enclaves (Clynne, 1999; Fomin & Plechov, 2012; Murphy et al., 2000; Plail et al., 2014), which can return resorbed host-derived crystals to the host; this explains the presence of resorption zones in crystals in the host lavas. Further support for enclave fragmentation comes from microlites that are chemically indistinguishable from enclave phases at Soufrière Hills. A possible method to determine whether equigranular enclaves form from a hybrid layer or from disaggregation of larger enclaves is to examine the mineralogy of the crystals in the enclave. The two different mechanisms will produce different degrees of undercooling within the enclave magma, which, in the hybrid-layer model, will depend on the relative proportions of the end-member magmas, and thus can produce different crystal assemblages/textures.

Conceptual Model of Magma Mixing and Mingling. The common features of the eruptive products described above suggest common aspects of mixing and mingling. Xenocrystic mafic enclaves with chilled margins, in particular, require that magma injection be accompanied by crystal incorporation from the host magma, as also suggested by the comparison of thermal timescales with the times needed to generate the observed thicknesses of resorption zones (Browne et al., 2006a). These constraints on the sequence of mixing processes have led to a similar conceptual model of mixing and mingling (Figure 8; Browne et al., 2006a; Clynne, 1999; Murphy et al., 2000; Plail et al., 2014; Tepley et al., 1999) in which the mafic magma is injected as a fountain (Clynne, 1999) or collapsing plume before ponding at the base of the silicic host (Figure 8a). Shear caused by the injection disrupts the interface between the two magmas, leading to the formation of blobs of hybridized magma with incorporated host crystals that then rapidly chill against the silicic host, preventing further hybridization (Tepley et al., 1999). Heating of the host, in turn, causes partial melting, reducing the crystallinity and causing convective motions that disperse the enclaves. Meanwhile, at the mafic-silicic contact, a hybrid interface layer forms (Figure 8b). As this layer crystallizes, second boiling drives fluid saturation; exsolved buoyant fluids produce a low-density, gravitationally unstable interface layer that breaks up to form further enclaves (Figure 8c; Browne et al., 2006a; Clynne, 1999). As cooling propagates downward through the mafic body, enclaves can come from deeper portions, resulting in more equigranular enclaves that lack chilled margins or xenocrysts (Browne et al., 2006a; Plail et al., 2014). Enclaves, once formed, can disaggregate. Disaggregation is evidenced by the presence of broken enclaves (Clynne, 1999; Fomin & Plechov, 2012; Tepley et al., 1999), host phenocrysts with resorption zones and Fe enrichment caused by previous engulfment in mafic magma (Browne et al., 2006b; Clynne, 1999; Humphreys et al., 2009; Tepley et al., 1999), and small clusters of enclave-derived microlite material within the host lavas. Disaggregation allows for subsequent mixing of a type precluded during initial enclave formation, but the timing of disaggregation is poorly constrained.
It could occur during high-shear conditions in the conduit; alternatively, disaggregation may be part of a continuous cycle of injection, enclave formation, and fragmentation (Figure 8d) that gives rise to a continuously convecting magma storage region, which is sometimes sampled during a volcanic eruption (Browne et al., 2006a). Regardless, the dispersion of mafic groundmass into the host has implications for interpreting end-member compositions from petrologic studies (Martel et al., 2006). Importantly, neglecting such transfer can lead to an underestimate of the initial silica content of the felsic member.

Quantitative Modeling of Crystal and Volatile Controls on Mixing and Mingling

Many conceptual models of magma mixing (e.g., Figure 8) have been produced based on petrologic evidence. However, quantitative models of magma mixing are limited. As described in Section 2.4, Sparks and Marshall (1986) first developed a simple model describing how thermal equilibration of juxtaposed mafic and silicic magmas leads to rapid viscosity changes that inhibit mixing after a short time. Since then, models developed to account for the role of either crystals or exsolved volatiles have produced significant insights into mingling and mixing dynamics, but have failed to incorporate petrological data within quantitative frameworks. Here, we examine three models: Andrews and Manga (2014), who use continuum modeling and suspension rheology to model mingling resulting from dike injection into a silicic host; Bergantz et al. (2015), who model the injection of melt into a basaltic mush, resolving both fluid and granular behavior; and Montagna et al. (2015), who simulate the effect of exsolved volatiles on mafic injection. We compare the model assumptions and results, as well as their implications for interpreting petrological data.

The Model of Andrews and Manga (2014)

The model considers the instantaneous injection of a mafic dike into a silicic host, with a prescribed initial composition and temperature, and numerically solves the 1D heat equation. Changes in the crystallinity and bulk viscosity of the magmas with time are calculated using MELTS simulations (Asimow & Ghiorso, 1998; Ghiorso & Sack, 1995) and viscosity models for melt (Giordano et al., 2008) and crystal-bearing suspensions (Einstein, 1906; Roscoe, 1952). If the viscosity of the host immediately juxtaposed with the dike decreases sufficiently, then the host starts to convect (as determined by a Rayleigh number criterion), which exerts a shear stress on the dike. If this shear stress exceeds the yield stress of the dike (which depends on its crystal content), the dike deforms in a ductile fashion and the model predicts banded products. Alternatively, if the yield stress exceeds the shear stress, then the dike fractures in a brittle fashion and enclaves form. In this model context, the principal control on mingling dynamics is the development of crystal frameworks within the dike. Dike crystallization, in turn, is controlled by composition and temperature contrasts. For example, injection of hot, large, and wet dikes causes the silicic host to convect before a crystal framework forms in the dike. The resultant shear causes ductile disruption of the dike and intimate mingling of the two magmas, producing banding and, with time, homogenization. Small and dry dikes, by contrast, experience extensive crystallization before the host starts to convect, and thus fracture to form enclaves.
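The decision logic just described can be sketched in a few lines. The following is a schematic illustration of the Andrews and Manga (2014) criterion, not their implementation: all parameter values, the convective-velocity scaling, and the crystal-suspension correction (Roscoe, 1952) are our own simplifying assumptions.

```python
# Schematic sketch of the banding-vs-enclave criterion: the host convects
# if a Rayleigh number exceeds a critical value; the dike then deforms
# ductilely (banding) or fractures (enclaves) depending on whether the
# convective shear stress exceeds the dike yield stress. Illustrative only.

g, alpha, kappa = 9.81, 5e-5, 1e-6   # gravity, thermal expansivity, diffusivity

def suspension_viscosity(mu_melt, phi, phi_max=0.6):
    """Roscoe (1952) viscosity correction for crystal volume fraction phi."""
    return mu_melt * (1.0 - phi / phi_max) ** -2.5

def mingling_style(mu_host_melt, phi_host, dT, L, tau_yield_dike,
                   rho=2400.0, Ra_crit=1e3):
    mu_host = suspension_viscosity(mu_host_melt, phi_host)
    Ra = rho * g * alpha * dT * L**3 / (kappa * mu_host)   # Rayleigh number
    if Ra < Ra_crit:
        return "no convection: dike preserved"
    u = kappa / L * Ra ** (2.0 / 3.0)     # assumed convective velocity scaling
    tau_shear = mu_host * u / L           # shear stress on the dike margin
    return "banding (ductile)" if tau_shear > tau_yield_dike else "enclaves (brittle)"

# Weak dike (no crystal framework) -> banding; strong framework -> enclaves.
print(mingling_style(1e4, 0.30, dT=100.0, L=100.0, tau_yield_dike=1.0))
print(mingling_style(1e4, 0.30, dT=100.0, L=100.0, tau_yield_dike=1e6))
```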
The precise initial conditions (temperature, dike size, and water content) that determine mingling style are sensitive to the parameterizations used (e.g., the critical Rayleigh number for convection), but the qualitative results are useful. The principal limitation of the model of Andrews and Manga (2014) is that it assumes an instantaneous injection of the mafic dike and therefore neglects any mixing/mingling that occurs during injection itself. Instead, the dike is disrupted only by shear due to convection in the host. Indeed, the relative importance of shear due to injection versus shear due to convection remains a considerable unknown. The assumption that brittle fragmentation of the dike produces enclaves is supported by three-dimensional tomographic observations of enclaves from Chaos Crags, which have crystal frameworks that are lacking in banded pumices from Lassen Peak (Andrews & Manga, 2014). The inference is that these crystal frameworks created a yield stress such that the enclaves formed by solid-like fracturing and the banded pumice by ductile deformation. However, this is in direct contradiction with the conceptual model presented above (Figure 8), which is based on field and petrographic observations suggesting that enclaves form from fluid-like deformation of the mafic magma. This contradiction highlights the extent to which the conditions of enclave formation are unknown.

The Model of Bergantz et al. (2015)

The discrete-element model, which resolves both fluid and granular physics, considers the injection of a crystal-free magma into the base of a crystal mush at random loose packing (approximately 60% crystallinity). The response of the mush is governed by stress chains formed by crystal-crystal contacts. For sufficiently slow injections, the new melt permeates through the mush, which behaves as a porous medium. Once the injection speed is large enough to disrupt the stress chains, however, part of the mush can become fluidized to form a mixing cavity, an isolated region where the host melt, crystals, and new melt undergo overturning. The new melt then escapes from the cavity through porous flow into the rest of the mush. For still faster flow speeds, the stress chains orient to create two fault-like surfaces at angles of about 60° to the horizontal that bound a fluidized region of the mush, within which extensive circulation occurs. Recently, this model has been extended to investigate the effect of a density contrast between the intruding and resident melts on the style of mingling (Carrara et al., 2020), showing that the intrusion geometry is controlled to first order by the contrast between the melt densities rather than the bulk densities. Although this model captures granular and fluid dynamics on the crystal scale and demonstrates the impact of varying the injection velocity, numerous questions remain outstanding. First, varying the crystallinity of the mush has not been addressed and will presumably affect the values of the injection velocity at which transitions between mingling styles occur. Furthermore, temporal and spatial variations in temperature (due to heat transfer or latent heat release), and therefore in viscosity and crystallinity, have not been considered. Cooling and crystallization of the new melt should control the dynamics of the system, as will the associated latent heat release.
Finally, the geometry of the modeled magma reservoir (laterally homogeneous layers) will affect the specifics of the mixing process, such as the orientation of the bounding faults, and it is not yet clear whether the model scales to natural systems.

The Model of Montagna et al. (2015)

The two-dimensional finite-element model considers two vertically separated magma chambers that are superliquidus and connected by a narrow conduit. The upper chamber initially contains a felsic phonolite, and the lower chamber and conduit are filled with a mafic shoshonite, compositions chosen to represent eruptions from Campi Flegrei. H2O and CO2 exsolve as functions of temperature and pressure (Papale et al., 2006), whereas the transport of exsolved volatiles is modeled as a continuum scalar field satisfying a transport equation. Bubbles are assumed to be sufficiently small that they are undeformable, and an empirical law is used to parameterize their effect on bulk viscosity (Ishii & Zuber, 1979). The shoshonite initially contains exsolved volatiles and so is lighter than the phonolite, creating an unstable density interface at the inlet to the upper chamber. Upon initiation, a Rayleigh-Taylor instability develops at the inlet to the upper chamber, and a plume of light material rises into the chamber while the conduit is filled with a mixed, hybrid magma. Intimate mingling within the chamber is reminiscent of that created by chaotic advection (Perugini & Poli, 2004). The magma entering the upper chamber is a partial hybrid, and the pure parent shoshonite never enters the upper conduit. Intensive mingling occurs on a timescale of hours, promoted by a large initial density contrast and horizontally elongated chambers. Importantly, the reduction in density of the upper chamber can cause depressurization, which has implications for interpreting ground deformation signals (Papale et al., 2017). Although an obvious limitation of the model is the two-dimensional domain, it seems reasonable that the results can be extrapolated to three-dimensional systems. A greater limitation is the restricted range of compositions and temperatures for which the model is valid. The end-member compositions are similar and superliquidus, so both the absolute bulk viscosities (<3500 Pa s) and their contrast (a factor of 7) are relatively low. This allows rapid mingling and entirely ignores the effect of crystals on the flow dynamics.
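The buoyancy reversal that drives the Montagna et al. (2015) instability can be illustrated with a simple bulk-density estimate. This is a minimal sketch under assumed densities and gas fractions, not values from the paper.

```python
# Why exsolved volatiles make the deep shoshonite buoyant: a small gas
# volume fraction lowers the bulk density of the denser mafic melt below
# that of the resident phonolite, priming a Rayleigh-Taylor instability.
# All densities (kg/m^3) and gas fractions below are illustrative.

rho_shoshonite, rho_phonolite, rho_gas = 2550.0, 2350.0, 150.0

def bulk_density(rho_melt, phi_gas):
    """Volume-weighted bulk density of a melt-gas mixture."""
    return (1.0 - phi_gas) * rho_melt + phi_gas * rho_gas

for phi in (0.0, 0.05, 0.10, 0.15):
    rho_mix = bulk_density(rho_shoshonite, phi)
    status = "buoyant -> plume rises" if rho_mix < rho_phonolite else "stable"
    print(f"phi_gas = {phi:4.2f}: rho = {rho_mix:7.1f} kg/m^3  ({status})")
```

With these numbers the mafic magma becomes buoyant once the exsolved gas fraction approaches about 10 vol%, consistent with the idea that volatile exsolution, not composition alone, controls whether an injected plume rises.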
Comparison and Common Limitations

Both Andrews and Manga (2014) and Bergantz et al. (2015) focused on the effect of crystals, but a key difference between the two models is the initial condition. Andrews and Manga (2014) assume the instantaneous injection of a dike into an initially rheologically locked host, whereas Bergantz et al. (2015) simulate the flow of new melt into a melt-crystal mixture; they show that new melt flows permeably through a rheologically locked mush. The conditions that spatially constrain a mafic injection (e.g., as a dike) have not been defined. The two models also simulate the role of crystals differently. Andrews and Manga (2014) calculate the crystallinity of a magma at a given temperature and assume the presence of a crystal framework (and yield stress) above a threshold value. Bergantz et al. (2015) allow the crystals to form force chains through which stresses are transmitted, but they consider the system to be isothermal, such that no crystallization occurs, whereas crystallization is a key feature of Andrews and Manga (2014). Both models are limited in addressing the role of volatiles. Diffusion of volatiles from the mafic to the felsic member can strongly influence the crystal composition and textures of the silicic member (Pistone et al., 2016a), whereas exsolution of volatiles leads to a reduction in bulk density that can drive convective motions in the mixing dynamics (Eichelberger, 1980; Montagna et al., 2015; Phillips & Woods, 2001; Thomas et al., 1993; Wiesmaier et al., 2015). The presence of exsolved volatiles also affects the magma rheology and requires the use of three-phase rheological models (Mader et al., 2013; Pistone et al., 2016b). One strategy is to treat the exsolved phase as a continuum scalar field and use a suspension model for bulk rheology (Montagna et al., 2015). However, as has been shown for solid phases (Carrara et al., 2019), small-scale effects can be overlooked by this approach, and explicit modeling of such phases may be needed to accurately constrain mixing/mingling processes. Additional complications arise in the number of parameters required for a given model. For example, the Andrews and Manga (2014) model requires values for a maximum crystal packing fraction and a critical Rayleigh number for convection in the host. Constraining these parameters will require extensive experimental efforts involving both high-temperature/high-pressure and analogue experiments.

Conclusions and Outlook for Future Research

We have reviewed progress in understanding magma mixing and mingling, focusing on volatile and crystal controls on mingling processes. Although field and petrologic observations of mixed and mingled products are numerous, models of these processes do not yet include the full range of observed complexities. In particular, conceptual models derived from observations (Browne et al., 2006a; Clynne, 1999; Plail et al., 2014; Tepley et al., 1999) suggest very different dynamics to those from numerical models (Andrews & Manga, 2014; Bergantz et al., 2015; Montagna et al., 2015). To resolve this discrepancy, several key questions need to be addressed:

1. How do mixing and mingling occur within the framework of crystal mushes, and how does the volume fraction of crystals control the interaction dynamics?
2. How do volatiles, both exsolved and dissolved, affect mixing and mingling? What is the relative importance of chemical quenching (due to volatile diffusion) versus thermal quenching (due to heat diffusion)?
3. How much mingling/mixing takes place during intrusion of the mafic magma compared to that driven by later processes such as convection in the host or the buoyant rise of vesicular mafic/hybrid magma?

Only by combining field and analytical observations with experimental (analogue and natural materials) and numerical modeling can we start to address these challenges.
Annexin A2 Acts as an Adhesion Molecule on the Endometrial Epithelium during Implantation in Mice To determine the function of Annexin A2 (Axna2) in mouse embryo implantation in vivo, experimental manipulation of Axna2 activities was performed in mouse endometrial tissue in vivo and in vitro. Histological examination of endometrial tissues was performed throughout the reproduction cycle and after steroid treatment. Embryo implantation was determined after blockage of the Axna2 activities by siRNA or anti-Axna2 antibody. The expression of Axna2 immunoreactivities in the endometrial luminal epithelium changed cyclically in the estrus cycle and was upregulated by estrogen. After the nidatory estrogen surge, there was a concentration of Axna2 immunoreactivities at the interface between the implanting embryo and the luminal epithelium. The phenomenon was likely to be induced by the implanting embryos, as no such concentration of signal was observed at the inter-implantation sites and in pseudopregnancy. Knockdown of Axna2 by siRNA reduced attachment of mouse blastocysts onto endometrial tissues in vitro. Consistently, the number of implantation sites was significantly reduced after infusion of anti-Axna2 antibody into the uterine cavity. Steroids and embryos modulate the expression of Axna2 in the endometrial epithelium. Axna2 may function as an adhesion molecule during embryo implantation in mice.

Introduction

Embryo implantation consists of 3 tightly regulated events, namely apposition, adhesion and penetration. In mice, apposition occurs in the afternoon of Day 4 of pregnancy, attachment begins at around 10-11:00 pm of the same day, and invasion starts in the morning of the next day [1][2][3]. Although implantation is crucial to reproduction, the molecules involved in the process are not fully known and remain an active research area. To address the question, we determined the differentially regulated surface proteins on receptive (4 pm of Day 4 of pregnancy) endometrial luminal epithelium (LE) relative to nonreceptive (Day 1 of pregnancy) LE in mice using biotin labeling followed by 2-dimensional gel electrophoresis and tandem mass spectrometry. Our unpublished results showed a 2-fold increase in the expression of annexin A2 (Axna2) in the mouse receptive LE, a result similar to that in human endometrium [4][5][6]. ANXA2 was recently identified as one of the apical surface molecules in a receptive human endometrial cell line [7]. Axna2 is a member of the annexin family. Annexins are proteins that bind to anionic phospholipids in a calcium-dependent manner. At least 12 annexins are found in higher vertebrates [8]. Membrane-bound Axna2 is a heterotetramer consisting of two Axna2 and two S100A10 (S100 calcium binding protein A10) molecules. Axna2 is a substrate of Src (Rous sarcoma oncogene) protein kinase [9], and is involved in cellular transformation, differentiation [10], regulation of secretory processes, prolactin release and prostaglandin formation [8]. Axna2 was studied recently in the field of reproduction. In human endometrium, the expression of Axna2 is high in the receptive phase [6] and low in the pre-receptive phase [4,5], consistent with a role of Axna2 in implantation. Dysregulation of interleukin 11 expression in endometrium is associated with infertility, probably via action of the cytokine on blastocyst-endometrial epithelium adhesion [11]. The cytokine upregulates the expression of Axna2 in primary human endometrial epithelial cells and an endometrial cell line [12].
Using in vitro human models, ANXA2 is linked to the RhoA/ROCK pathway, which in turn affects trophoblast adhesiveness on endometrial cells, migration of endometrial epithelial cells and outgrowth of trophoblast [13]. The relevance of these studies to implantation in vivo is not known. We hypothesized that Axna2 was involved in implantation. The present study intended to explore the function of Axna2 in mouse embryo implantation in vivo. The results show that estradiol (E2) controls Axna2 expression in the LE of mice, and that the implanting embryos modulate the expression of Axna2 at the implantation sites. Our data also provide the first direct evidence on the involvement of Axna2 in implantation in vivo.

Animals

All procedures for handling of animals were approved by the Committee on the Use of Live Animals in Teaching and Research, the University of Hong Kong. Bilateral ovariectomy was performed on 6-week-old mice under anesthesia. The success of the operation was confirmed by daily vaginal smears for 4 days after a 4-week clearing period. The ovariectomized mice were injected subcutaneously with either an equal volume of vehicle (sesame oil), 100 ng/mouse of E2, 1 mg/mouse of P4, or a combination of 100 ng/mouse of E2 and 1 mg/mouse of P4 as described [14] to produce circulating steroid levels similar to those in the estrus cycle. The mouse uteri were collected 24 hours later for immunohistochemical staining. Euthanasia was performed by overdose administration of pentobarbital. Pseudopregnant mice were prepared by allowing mating of nulliparous mature female ICR mice with vasectomized males. The day of the presence of a vaginal plug was defined as Day 1 of pseudopregnancy. Mated female mice were housed individually before experimentation.

In vivo knockdown of Axna2

Fresh siRNA-liposome complexes were prepared by mixing 20 μl of siRNA solution (Thermo Scientific, NY, USA) containing 80 or 160 pmol siRNA with 20 μl of Lipofectamine 2000 solution (Invitrogen, Carlsbad, USA) immediately before each experiment. After incubation for 15 minutes at room temperature, 20 μl of the preparation was injected into the lumen of the uterine horns of Day 3 pseudopregnant mice. On Day 4 of pseudopregnancy, the mice were euthanized and the uterine horns were collected. The siRNA was applied to the Day 3 pseudopregnant uterus to allow time for the siRNA to exert its knockdown action.

Infusion of anti-Axna2 antibody into uterus

Antibody was prepared by mixing 50 μl of anti-Axna2 antibody (1 mg/ml, ab41803, Abcam, Cambridge, UK) or normal rabbit IgG with 1 ml of PBS and spinning the mixture through an Amicon Ultra-15 Centrifugal Filter Unit with Ultracel-10 membrane (Millipore, MA, USA) at 4°C to remove preservative in the antibody/IgG solution. Ten microliters of the buffer-exchanged anti-Axna2 antibody (10 μg) per uterine horn was infused into pregnant mice on Day 4 of pregnancy between 4:00 and 6:00 pm. The same volume of buffer-exchanged IgG with the same protein concentration was infused into the contralateral horn as control. Mice were killed 2 days later and the number of implantation sites was counted.

Endometrial tissue culture model for implantation

The endometrial tissue implantation model and the antibody treatment procedure were reported elsewhere [15]. In brief, Day 4 blastocysts were placed on isolated endometrial tissue of Day 4 pregnant mice and cocultured for 28 hours. The number of blastocysts that remained attached after a brief washing was counted.
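Since each seeded blastocyst in this model either remains attached or not, attachment rates between groups can be compared with a chi-square test on the counts (as described under Statistical analysis below). The sketch below uses hypothetical counts, chosen only to match the reported percentages; the paper itself reports percentages, not raw numbers.

```python
# Hedged sketch of the attachment-rate comparison: a 2x2 contingency table
# of attached vs non-attached blastocysts. Counts are hypothetical.

from scipy.stats import chi2_contingency

attached_kd, seeded_kd = 16, 56     # hypothetical 160 pmol Axna2 siRNA group
attached_ctrl, seeded_ctrl = 27, 55 # hypothetical scramble-control group

table = [
    [attached_kd, seeded_kd - attached_kd],
    [attached_ctrl, seeded_ctrl - attached_ctrl],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"attachment {attached_kd / seeded_kd:.1%} vs {attached_ctrl / seeded_ctrl:.1%}; "
      f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p falls below 0.05 with these counts
```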
Immunostaining of Axna2

Uteri from pseudopregnant and pregnant mice at 4:00 pm of Day 1, 10:00 am, 4:00 pm and 11:00 pm of Day 4, 10:00 am, 4:00 pm and 11:00 pm of Day 5, and 10:00 am of Day 6 of pregnancy were collected, fixed, embedded in paraffin wax and sectioned. For collection of uteri at different estrous stages, vaginal smears were checked before tissue collection. For immunohistochemical staining, slides were blocked in 10% goat serum for 1 hour at room temperature before incubation with 0.5 μg/ml anti-Axna2 antibody overnight at 4°C. The slides were then successively incubated with 0.5 μg/ml of biotinylated goat anti-rabbit secondary antibody (Dako, Glostrup, Denmark) for 40 minutes, Vectastain Elite ABC reagents (Vector Laboratories, Burlingame, CA, USA) for 30 minutes, and 3,3'-diaminobenzidine tetrahydrochloride (DAB, Dako) for 1-3 minutes. To better observe the expression of Axna2 on the plasma membrane, the immunofluorescence technique was used. For immunofluorescence staining, sections were incubated with 1:1000 Alexa Fluor 488-labeled secondary antibody (Invitrogen, Carlsbad, US) for 1 hour. All incubations with the primary antibody were done at room temperature. Each batch of samples was processed and developed under exactly the same conditions. Images of the stained sections were captured under a fluorescence microscope (Eclipse Ti, Nikon, Tokyo, Japan). The IgG control and the anti-Axna2 antibody preabsorbed control showed no signal in the staining. In addition, mouse colon and liver were used as positive and negative controls, respectively, in the optimization experiments.

H-scoring

For each slide, one image at 200× magnification was taken to show the general annexin A2 intensity in the uterine LE, glandular epithelium (GE) and stroma. Another 10-15 images at 1000× magnification were taken randomly for H-scoring (histological scoring) of the intensities of Axna2 immunoreactivities in the LE, GE and stroma. All the images were ranked by a colleague of the laboratory with no involvement in the project. The scores were calculated according to the equation H-SCORE = Σ Pi (i + 1), where i = intensity, ranging from 0 (no signal) to 3 (strong signal), and Pi = percentage of cells at that intensity.

Western blot analysis

Endometrial tissues were frozen in liquid nitrogen and ground into powder. The proteins in the powder were dissolved in RIPA lysis buffer (RIPA: 1X PBS, 1% Nonidet P-40, 0.5% sodium deoxycholate, 0.1% SDS), resolved by 10% SDS gel electrophoresis, and transferred to PVDF membrane (Millipore, Temecula, MA, USA). The membrane was blocked with 5% skimmed milk before incubation with primary anti-Axna2 antibody overnight at 4°C. The membrane was then washed in phosphate-buffered saline containing Tween 20 (PBST) five times for 5 minutes each, and incubated with 1:5000 horseradish peroxidase-conjugated secondary antibody (Sigma, Castle Hill, NSW, Australia) with shaking for 1 hour at room temperature. After washing five times with PBST for 5 minutes each, specific signal on the membrane was detected on X-ray film (Galen, UK) by WESTSAVE Up TM enhanced chemiluminescent solution (Abfrontier Co. Ltd., Seoul, Korea) according to the manufacturer's protocol. The membrane was stripped with mild reblot solution (Millipore) and reprobed with an anti-alpha-tubulin antibody (Abcam), which served as an internal control. The gel image was scanned and analyzed by Image Pro Plus (Media Cybernetics, MD, US).
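The H-scoring equation above is simple to implement directly. The following is a minimal sketch of the stated formula, H-SCORE = Σ Pi (i + 1); the function name and input format are our own.

```python
# Minimal implementation of the H-scoring equation H-SCORE = sum_i Pi * (i + 1),
# with i in {0,...,3} the staining intensity and Pi the percentage of cells
# scored at that intensity.

def h_score(percent_by_intensity):
    """percent_by_intensity: {intensity i: percentage of cells Pi}, summing to 100."""
    assert abs(sum(percent_by_intensity.values()) - 100.0) < 1e-6
    return sum(p * (i + 1) for i, p in percent_by_intensity.items())

# Example: 20% unstained, 30% weak, 40% moderate, 10% strong staining.
print(h_score({0: 20.0, 1: 30.0, 2: 40.0, 3: 10.0}))  # -> 240.0
```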
Statistical analysis

One-way analysis of variance on ranks was used to compare the quantitative data, which were presented as medians and ranges. The data on attachment rates in the in vitro model were expressed as the percentage of attached blastocysts relative to the number of blastocysts seeded for co-culture. Statistical comparisons were performed using the chi-square test. A difference with P<0.05 was considered to be statistically significant.

Expression of Axna2 in estrus cycle and early pregnancy

There was an increase in Axna2 immunoreactivities on the apical surface of the LE from proestrus to estrus stage (Fig 1A). The intensity of the staining decreased at metestrus stage. No Axna2 immunoreactivities were detected at diestrus stage. H-scoring confirmed the observed expression pattern semi-quantitatively (Fig 1B). A significantly lower score (P<0.05) was found in the diestrus stage when compared with the other stages. Proestrus stage had the highest signal intensities. The immunohistochemical study of Axna2 was also conducted in mice from Day 1 to 7 of pregnancy. Axna2 immunoreactivities were expressed on the apical surface of the LE on Day 1 of pregnancy (Fig 1C). The signal was hardly detected in the LE at 10:00 am on Day 4 of pregnancy. The expression of Axna2 on the apical membrane of the LE was greatly increased at 4:00 pm after the nidatory estrogen surge, and the intensities of the staining were comparable to those in the LE at 4:00 pm on Day 4 of pseudopregnancy. The signal intensity in the LE decreased greatly after Day 5 and was undetectable by Day 6 of pseudopregnancy.

Expression of Axna2 at the implantation sites

To study the possible action of embryos on the expression of Axna2, immunostaining was performed at the implantation site. The Axna2 immunoreactivities of the LE next to the implanting embryos were weak at 10:00 am on Day 4 of pregnancy (Fig 2A). The signal in the LE increased at 4:00 pm (Fig 2B). Interestingly, the immunoreactivities of the implanting embryos also increased. The signal became strong and concentrated at the embryo-LE interface at 11:00 pm on the same day (Fig 2D and 2E) and not between the LE (Fig 2F). No such concentration of signal was observed at the inter-implantation site (data not shown). The strong signal at the implantation site lasted throughout Day 5 and decreased by Day 6 of pregnancy. Fig 2G-2I show the expression of Axna2 in the endometrial stroma at the implantation site using immunofluorescence staining. Axna2 was strongly expressed on the membrane of the subluminal stromal cells closely beneath the implanting embryo from 4:00 pm of Day 4 (Fig 2C and 2G) and Day 5 of pregnancy (Fig 2H), but was relocated to the cytosol on Day 6 of pregnancy (Fig 2I).

Steroids regulate expression of Axna2 in the mouse endometrial LE

To study steroid regulation of Axna2, ovariectomized mice were treated with vehicle, estradiol (E2), progesterone (P4), or combined estradiol and progesterone (E2P4). The expression of Axna2 was low in the ovariectomized mice. E2, P4, and E2P4 treatment increased the expression of Axna2 (Fig 3A). Strong Axna2 immunoreactivities were observed mainly in the cytoplasm of the LE after E2 treatment. Treatment with P4 and E2P4 also increased the Axna2 immunoreactivities, with a significant portion of the signal concentrated on the apical surface of the LE. Fig 3B shows the H-scoring of Axna2 intensities on the LE upon steroid treatments. All three treatments significantly increased the scores when compared with those in the ovariectomized mice.
E2 treatment showed the strongest effect and had a score significantly higher than that after P4 treatment.

Knockdown of Axna2 suppresses blastocyst attachment in endometrial tissue implantation model

To study the role of Axna2 in implantation, an in vivo knockdown approach by siRNA was used. Two different doses of siRNA were tested. Western blotting revealed that the higher dose (160 pmol) of siRNA reduced the Axna2 expression level by about 43% in the transfected endometrium when compared with the siRNA control and the untreated Day 4 endometrium of pseudopregnant mice (S1 Fig). Immunostaining showed that the epithelium and stroma of the knockdown tissue had reduced expression of Axna2 (S2 Fig). Blastocyst attachment assays on the endometrial tissues transfected with Axna2 siRNA or scramble control were performed. The blastocyst attachment rates on the endometrium transfected with 80 pmol and 160 pmol of Axna2 siRNA were 38.2% and 28.6%, respectively (Fig 4A). The attachment rate on the endometrium transfected with control siRNA was 49.1%, while that on untreated endometrium (Day 4 pseudopregnant mice) was 55.6%. There was a significant difference (p<0.05) in the attachment rate between the 160 pmol Axna2 siRNA transfected group and the control groups. Although the attachment rate of the 80 pmol Axna2 siRNA group was reduced when compared to the two control groups, the difference did not reach statistical significance (P>0.05).

Surface membrane-associated Axna2 is involved in the attachment of embryo to endometrium

For the in vivo functional study, 10 μl of buffer-exchanged anti-Axna2 antibody was infused into one uterine horn of Day 4 pregnant mice at 4:00-6:00 pm. The same volume and concentration of buffer-exchanged IgG was infused into the contralateral horn as control. As shown in Fig 4B, the number of implantation sites was reduced approximately 3-fold (P = 0.008) after anti-Axna2 antibody infusion.

Discussion

While there are some in vitro data, using cell lines to represent the embryo and LE during implantation, supporting a role of Axna2 in human implantation, the regulation of Axna2 during implantation in vivo is not fully known. This study showed that steroids regulate the expression of Axna2 in the mouse endometrial LE. More importantly, the results showed for the first time that the implanting embryos modulate the expression of Axna2 and that the outer surface membrane-bound Axna2 was involved in the attachment of embryos onto the endometrium in vivo.

Steroids regulate mouse endometrial luminal epithelial expression of Axna2

In the mouse estrous cycle, the cyclical pattern of Axna2 in the endometrial LE coincided with that of E2. The Axna2 immunoreactivities were high on the apical surface of the LE from proestrus to estrus stage and after the nidatory E2 surge. The high LE expression of Axna2 immunoreactivities the next morning (Day 1 of pregnancy) probably represented the time lag required for turnover of the Axna2 protein produced at estrus. The close link between E2 and Axna2 was likely due to an action of E2 on Axna2 expression, as the phenomenon also occurred in pseudopregnancy, which has the same hormonal profile but lacks implanting embryos. After implantation, the expression of Axna2 in the LE of Day 6 pseudopregnant mice was reduced, consistent with the low E2 level at the time. The action of E2 on the expression of Axna2 was confirmed in the ovariectomized mouse model.
It was noted that Axna2 immunoreactivities were uniformly distributed in the cytosol of the LE in the E2-treated group, but were mainly on the apical membrane of the LE in the P4- and E2P4-treated groups. The difference was likely due to the action of P4 on the secretory activities of the LE; Axna2 is known to be involved in cellular exocytosis [16][17][18] and can be a secretory protein [19]. Bioinformatics analysis of the mouse Axna2 gene supports a direct action of E2 on Axna2 expression. Although no consensus estrogen responsive element (ERE) (GGTCAnnnTGACC) is found in the promoter of mouse Axna2, there is a cluster of 5 closely located half-ERE binding sites (TGACC), separated by 18-174 base pairs, from -4117 bp to -3870 bp, and a number of other variations of the consensus ERE (see S1 Table) within -4000 bp. Half binding sites and ERE variations have been shown to be responsive to the estrogen-estrogen receptor complex (reviewed by ref [20]). It has to be confirmed by gene reporter assays whether the half-ERE sites in the mouse Axna2 promoter are functional. The mouse Axna2 promoter region also contains a glucocorticoid-responsive element (GRE) [21]. Given the highly conserved structure of the GRE and the progesterone responsive element, the mouse Axna2 gene is likely responding directly to P4 treatment as well.

Implanting embryos modulate mouse endometrial luminal epithelial expression of Axna2

The relationship between Axna2 and implanting embryos was investigated by comparing the LE expression of Axna2 at the implantation sites and at the inter-implantation sites. Axna2 expression increased sharply upon embryo attachment and became concentrated at the interface between the embryo and the LE from 4:00 pm of Day 4 until 11:00 pm of Day 5. No such concentration of Axna2 signal was found at the inter-implantation sites, at the implantation sites away from the implanting embryos, or after penetration. The observations indicated a modulatory action of the embryo on Axna2 expression in the early phase of implantation, and suggested that Axna2 was mobilized to the plasma membrane upon intimate contact of the embryo with the LE. It is not known how this is accomplished and can only be speculated upon. Calcium is implicated in embryo implantation. Infusion of a calcium channel blocker (diltiazem) into the mouse uterine cavity on Day 4 of pregnancy causes complete implantation failure [22,23]. Upon attachment, intracellular calcium of the embryo and the LE may increase through calcium influx or intracellular calcium pool release. There was an increased expression of Axna2 mRNA from the morula to the blastocyst stage in mice and pigs [24], and Axna2 protein was also expressed in a bovine trophectoderm cell line [25] and a human trophoblast cell line (Wang B, unpublished data). It has been demonstrated that ErbB4 on mouse embryos mediates endometrial heparin-binding EGF-like growth factor (HB-EGF)-induced calcium influx of mouse trophoblast [26], and that ligation of integrin elevates mouse trophoblast intracellular calcium levels through calcium influx [27,28]. Although there is no evidence demonstrating that the same happens in the LE, an in vitro implantation model using an endometrial epithelial cell line (RL95-2) and trophoblast (JAr) spheroids as embryo surrogates demonstrates an increase in intracellular calcium of the RL95-2 cells after attachment of the spheroids onto the cells [29].
Given that the membrane association and translocation of Axna2 are calcium dependent, the cell contact may induce Axna2 relocation through the action of calcium elevation. Another phenomenon was noted at the implantation site: Axna2 was strongly expressed and adopted a membrane-bound form in the stromal cells during invasion at the implantation site. The phenomenon may be related to calcium elevation during decidualization, because deciduogenic concanavalin A-loaded beads can induce a higher level of calcium in the decidualized regions than in the non-decidualized regions [30]. After implantation, not only did the intensity of Axna2 in the stroma decrease, but its distribution also changed from membrane-bound back to cytosolic. The observed expression pattern of Axna2 was similar to that of COX2 [3,31], suggesting a possible role of Axna2 in decidualization.

Axna2 and embryo implantation

Four observations suggested that Axna2 functioned as an adhesion molecule during implantation. First, Axna2 is located on the apical surface of the mouse endometrium. A similar observation is also found in humans [7]. Second, there was a strong expression of Axna2 at the embryo-LE interface. Third, intrauterine knockdown of Axna2 in pseudopregnant mice decreased the attachment of blastocysts onto the treated endometrial tissue in vitro. Fourth, infusion of anti-Axna2 antibody into the uterine horn led to a 3-fold decrease in the number of implantation sites in vivo. This is the first report on impairment of embryo implantation by blockage of Axna2 in the mouse LE during the implantation window. The results are consistent with a previous report showing that knockdown of S100A10, the binding partner of Axna2 heterotetramers, caused implantation failure in mice [32]. Axna2 is involved in cell adhesion in other systems. Axna2-S100A11 serves as a molecular bridge for cell-cell adhesion between breast cancer cells and microvascular endothelial cells [33]. Apart from adhesion, invasion of the trophoblast through the endometrial epithelium is an integral process of implantation, involving migration of the trophoblast cells. Tyrosine phosphorylation of Axna2 regulates morphological changes associated with cell motility via Rho-mediated actin rearrangement [10]. In sum, this study revealed a novel function of Axna2 during implantation in mice. It is likely that Axna2 is involved in human implantation as well. ANXA2 is expressed mainly in the luminal epithelium of human mid- and late-secretory endometria [13]. The expression of the molecule is dramatically diminished in the endometrial epithelial cells in the presence of intrauterine devices [4]. An in vitro study also suggests a role of Axna2 in embryo adhesiveness [13]. Whether its abnormal expression on the LE is related to subfertility needs further investigation.
ON A SPIKE TRAIN PROBABILITY MODEL WITH INTERACTING NEURAL UNITS. We investigate an extension of the spike train stochastic model based on the conditional intensity, in which the recovery function includes an interaction between several excitatory neural units. Such a function is proposed as depending both on the time elapsed since the last spike and on the last spiking unit. Our approach, being somewhat related to the competing risks model, allows us to obtain the general form of the interspike distribution and of the probability of consecutive spikes from the same unit. Various results are finally presented in the two cases when the free firing rate function (i) is constant, and (ii) has a sinusoidal form.

1. Introduction. Since the seminal papers by Gerstein and Mandelbrot [19] and Stein [32], many efforts have been directed to the formulation of stochastic models for single neuron activity aimed at describing the relevant features of the behaviour exhibited by neural cells. We mention the contributions by Ricciardi [27] and Ricciardi et al. [30], and the bibliography therein, as a reference to mathematical models and methods on this subject. Various researches have been carried out by the authors of this paper on the construction and analysis of models, based on stochastic processes and aimed at describing dynamic systems of interest in different fields. Their research activity has been performed continuously thanks to the precious guidance and support of Professor Luigi M. Ricciardi, to whose unforgettable memory this paper is gratefully dedicated. Among the numerous investigations performed in biomathematics under his advice and supervision (mainly in neuronal modeling, population dynamics, and subcellular stochastic modeling) we recall the following themes:

- the characterization of the time course of the neuronal membrane potential as an instantaneous return process (Ricciardi et al. [29]),
- the description of neuronal units subject to time-dependent inputs via Gauss-Markov processes (Di Crescenzo et al. [12]),
- analysis of the interaction between neuronal units of Stein type based on Monte Carlo simulations (Di Crescenzo et al. [18]),
- stochastic modeling of the evolution of a multi-species population, where competition is regulated by colonization, death and replacement of individuals (Di Crescenzo et al. [13]),
- analysis of birth-death processes and time-non-homogeneous Markov processes in the presence of catastrophes (Di Crescenzo et al. [14], [15]),
- the study of stochastic processes suitable to describe the displacements performed by single myosin heads along actin filaments during the rising phases (Buonocore et al. [4], [5]).

Along the lines traced by some of the above contributions, in this paper we discuss a suitable extension of a spike train stochastic model to neuronal networks with interacting units. In several investigations the synaptic inputs that carry the stochastic component of the neuronal activity are modeled by Poisson processes with a fixed spike rate (see Amit and Brunel [1], Bernander et al. [3], Softky and Koch [31], for instance).
We recall that the customary assumption based on Poisson processes allows the approximation of the synaptic input of a typical neuron by a stationary uncorrelated Gaussian process, due to the superposition of a large number of incoming spikes (hence a sum of many Poisson processes) of both excitatory and inhibitory type (see Ricciardi [27]). However, models based on homogeneous Poisson processes fail to capture a relevant feature of the neural activity, namely the refractory period. See, for instance, Hampel and Lansky [20] for an investigation on parametric and nonparametric refractory period estimation methods. The refractory period is sometimes modeled by means of a dead time, i.e., the time interval following every firing during which the neuron cannot fire again. This leads to a delayed Poisson process, obtained by a step change to the rate of a Poisson process (see Deger et al. [11], Johnson [21], Ricciardi [28]).

Aiming to include the neuronal refractory period and to describe properties of spike trains, another approach has been adopted recently by various authors. It is based on the assumption that the inhomogeneous Poisson process describing the number of neuronal firings has a conditional intensity function expressed as the product of the free firing rate function and a suitable recovery function.

We propose to investigate the spike train model based on the conditional intensity, where the recovery function is aimed not only to include the refractory period, but also to describe the interaction between several excitatory neural units. This is performed via a suitable choice of the monotone recovery function, which is increasing when it describes the effect of excitatory neurons and decreasing when it models the refractory period. This scheme allows studying various statistics related to the firing activity, by following an approach analogous to the competing risks model (see Di Crescenzo and Longobardi [16]). In the homogeneous case it is shown that the overall activity of the network exhibits exponentially distributed interspike intervals. In addition, it seems that other suitable choices of the recovery function yield further dynamics, such as the bi-exponential and periodic behaviors investigated by Mazzoni et al. [24].

This is the plan of the paper: In Section 2 we describe the background on the conditional intensity function model. Section 3 presents a suitable extension of this model to the case of a network formed by a fixed number of units, in which the recovery function depends both on the time elapsed since the last spike and on the last spiking unit. A comprehensive discussion on this model is also given, with attention to the conditional random variables describing the time length between consecutive spikes. A connection with the competing risks model is also pinpointed. Section 4 is devoted to investigating the model in detail. We determine the general form of the interspike distribution and of the probability of consecutive spikes from the same unit. Explicit expressions are then obtained in the special case of a constant free firing rate function, when the interspike distribution is shown to be exponential. We also consider the case when the free firing rate is of sinusoidal type. The spike intertimes density is then given in closed form, whereas the mean and the variance are obtained and shown for some suitable instances by means of numerical computations.
2. A spike train probability model. A customary belief in neuroscience is based on the hypothesis that the neural coding adopted by the brain to handle information is based on the neuronal spike rate (the number of spikes per time unit), or on the temporal occurrence of spikes (the sequence of spikes). Within both paradigms, since spikes have very short duration, point processes or counting processes are commonly used as probability models of spike trains.

The occurrence of neuronal spikes is often described by the inhomogeneous Poisson process. It is a continuous-time stochastic process {N(t); t ≥ 0}, with state space the set of non-negative integers, where N(t) denotes the number of spikes of a single neural unit occurring in [0, t] (see, for instance, Burkitt [7] and [8] for comprehensive reviews of the integrate-and-fire neuron model, where the stochastic synaptic inputs are described as temporally homogeneous and inhomogeneous Poisson processes). The intensity function of the inhomogeneous Poisson process is defined as follows:

λ(t) = lim_{h↓0} P(N(t + h) − N(t) = 1)/h.    (1)

It represents the intensity of occurrence of a spike at time t in a single neural unit. Various choices of λ(t) have been proposed in the past. In the simplest case it is constant in t, this leading to a homogeneous Poisson process. Function (1) is useful to describe various quantities of interest. For instance, let τ_j be the j-th spike time (j = 1, 2, ...) of a single unit; denote by Λ(t) = ∫_0^t λ(s) ds the mean function of N(t), and assume that Λ(t) < +∞ for any finite t ≥ 0, with lim_{t→+∞} Λ(t) = +∞; then the probability density function of τ_j is

f_{τ_j}(t) = λ(t) [Λ(t)]^{j−1} e^{−Λ(t)}/(j − 1)!,  t ≥ 0.

A customary extension of definition (1) is based on the assumption that the following conditional intensity function exists:

λ(t | τ_1, τ_2, ..., τ_{N(t)}) = lim_{h↓0} P(N(t + h) − N(t) = 1 | τ_1, τ_2, ..., τ_{N(t)})/h,    (2)

where 0 < τ_1 < τ_2 < ... < τ_{N(t)} is the sequence of spike times occurring in [0, t]. Function (2) thus describes the intensity of occurrence of a new spike at time t conditional on the spike times occurred in [0, t].

In order to describe specific properties of spike trains, such as the neuronal refractory period, various authors follow an approach based on the assumption that λ(t | τ_1, τ_2, ..., τ_{N(t)}) is expressed as the product of two suitable functions (see, for instance, Berry and Meister [6], Johnson and Swami [22], Kass and Ventura [23], Miller [25]), i.e.,

λ(t | τ_1, τ_2, ..., τ_{N(t)}) = s(t) r(t − τ_{N(t)}).    (3)

In Eq. (3), s(·) and r(·) are suitable non-negative functions, s being known as the free firing rate function and r as the recovery function. Recently, Chan and Loh [9] investigated this model with reference to template matching of multiple spike trains, and to maximum likelihood estimators of the free firing rate and recovery functions. We notice that model (3) is Markovian, because the conditional intensity of spikes is assumed to depend only on the present time t and on the duration t − τ_{N(t)} since the last spike.
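To make the construction in Eq. (3) concrete, here is a small illustrative simulation, not taken from the paper, that generates a spike train with conditional intensity s(t) r(t − τ_{N(t)}) by Ogata-style thinning; the particular choices of s, r and the dominating rate are our own assumptions.

```python
# Illustrative simulation of a spike train with conditional intensity
# lambda(t) = s(t) * r(t - tau_last), via thinning. Example choices: a
# constant free firing rate and an exponential recovery function that
# models the refractory period (r(0) = 0, r -> 1 as the delay grows).

import math, random

def simulate(T, s=lambda t: 1.0, r=lambda d: 1.0 - math.exp(-5.0 * d),
             lam_max=1.0, rng=random.Random(0)):
    spikes, t, tau_last = [], 0.0, -math.inf   # no previous spike -> r = 1
    while True:
        t += rng.expovariate(lam_max)          # candidate from dominating rate
        if t > T:
            return spikes
        if rng.random() < s(t) * r(t - tau_last) / lam_max:  # accept/reject
            spikes.append(t)
            tau_last = t

train = simulate(T=100.0)
isis = [b - a for a, b in zip(train, train[1:])]
print(len(train), "spikes; mean ISI =", sum(isis) / len(isis))
```

The dominating rate lam_max must bound s(t) r(·) from above for the thinning step to be valid; here s ≤ 1 and r ≤ 1, so lam_max = 1 suffices.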
3. A model for interacting neural units. We aim to study the model described by Eq. (3) in a more general case that includes interaction among units. Indeed, we consider a network of d excitatory neural units, say U_1, U_2, ..., U_d. Let N_1(t), N_2(t), ..., N_d(t) be counting processes, where N_i(t) describes the number of spikes of unit U_i in [0, t], for 1 ≤ i ≤ d. Moreover, we denote by τ_{i,k} the k-th spike time, k = 1, 2, ..., of unit U_i, for 1 ≤ i ≤ d. The sequence of overall spike times of the network occurring in [0, t] will be denoted as

0 < τ_{•1} < τ_{•2} < ... < τ_{•N(t)},    (4)

where the counting process

N(t) = Σ_{i=1}^d N_i(t)    (5)

counts the total number of spikes occurring in [0, t]. For k = 1, 2, ... and 1 ≤ i ≤ d, we set

Z_k = i if the k-th spike of the network is generated by unit U_i.    (6)

In analogy with the model expressed by (3), the conditional intensity function of unit U_i, for 1 ≤ i ≤ d, is assumed to have the following form, for t ≥ 0:

λ_i(t | G_t) = s(t) r_i(t − τ_{•N(t)}; Z_{N(t)}),  N(t) ≥ 1,    (7)

where G_t collects all the information related to the activity up to time t, i.e., the spike times and the corresponding spiking units. Function s(t) is non-negative and such that ∫_τ^{+∞} s(t) dt = +∞ for any τ > 0. As in model (3), it is named the free firing rate function, since it describes the spiking intensity of the network's units due to external inputs, in the absence of firing activity. From Eq. (7) and the convention λ_i(t | G_t) = s(t)/d when N(t) = 0, we note that the occurrence of the first spike is uniform over the d units. In the general setting s(t) is a time-varying function, which allows for the description of stimuli with varying amplitudes, such as modulated inputs. Again, the function r_i(·; ·) is non-negative, and is called the recovery function of unit U_i. Its main role is the inclusion in the model of the refractory period of U_i, and also of the effect of the spiking activity of the other network units.

Remark 1. Due to Eq. (7), the intensity function of N_i(t) does not depend on i when N(t) = 0, whereas it depends on the counting process (5) through τ_{•N(t)} and Z_{N(t)} when N(t) ≥ 1. The firing activity of the i-th neural unit is thus governed by the last spiking time, τ_{•N(t)}, and by the last spiking unit of the network, Z_{N(t)}. Moreover, N_1(t), N_2(t), ..., N_d(t) are conditionally independent processes, in the sense that the distribution of each of such counting processes depends on the remaining d − 1 processes only through the sum (5).

From now on we suppose that the recovery function appearing in the right-hand side of (7) is given by

r_i(t − τ_{•N(t)}; Z_{N(t)} = j) = 1 + c_{i,j} u(t − τ_{•N(t)}),    (9)

for all 1 ≤ i ≤ d and 1 ≤ j ≤ d, where: (i) the coefficients c_{i,j} satisfy

Σ_{i=1}^d c_{i,j} = 0,  1 ≤ j ≤ d,    (10)

with c_{j,j} = −1 and c_{i,j} > 0 for i ≠ j; (ii) u(t) is a non-negative continuous function, decreasing for all t ∈ [0, +∞), with u(0) = 1 and lim_{t→+∞} u(t) = 0.

We point out that the above assumptions concerning Eq. (9) yield the following features of the model:

• The coefficients c_{i,j} measure the strength of the spiking activity of U_j on the network units. Conditioning on Z_{N(t)} = j, so that U_j is the last spiking unit before t, we have: (a) if i = j, then c_{j,j} = −1; this describes the auto-inhibition due to a neuron spike, i.e., the effect of the refractory period. (b) If i ≠ j, then the coefficients c_{i,j} are strictly positive, this yielding a full interaction (of excitatory type) among the network's units. In some sense, they give a measure of the synaptic strength from U_j (the presynaptic neuron) to U_i (the postsynaptic neuron).
• Function u(·) describes the effect over time of the spiking activity on the network units. When t is close to the last spiking time τ_{•N(t)}, the last spiking neuron, U_j, is less likely to process the stimuli arriving according to the free firing rate function s(·), in agreement with the effect of the refractory period. Moreover, for all t and 1 ≤ j ≤ d we have

r_j(t − τ_{•N(t)}; Z_{N(t)} = j) = 1 − u(t − τ_{•N(t)}) ≤ 1.

• All other units U_i, i ≠ j, receive a stimulus from the last spiking neuron U_j, the strength of the stimulus being regulated by c_{i,j}. In this case, for all t and i ≠ j, it is

r_i(t − τ_{•N(t)}; Z_{N(t)} = j) = 1 + c_{i,j} u(t − τ_{•N(t)}) ≥ 1.

• The effect of the last spike tends to vanish as time proceeds; indeed, for all 1 ≤ i ≤ d and 1 ≤ j ≤ d, r_i(t − τ_{•N(t)}; Z_{N(t)} = j) → 1 as t → +∞.

Note that an accurate choice of the recovery function r_i(t − τ_{•N(t)}; Z_{N(t)} = j) should treat the cases i = j and i ≠ j as different, since they arise from distinct physical situations. When i = j we deal with the auto-inhibition of a neuron due to spikes, and then the modeling of the refractory period should include time-delay effects in the function u(·). On the contrary, when i ≠ j we deal with the interaction between different neurons, and thus such delay is not required. Nevertheless, in order to make the model mathematically tractable, the cases i = j and i ≠ j have been unified in the right-hand side of Eq. (9). On the other hand, condition (11) implies that, within the present model, spikes close in time from the same neuron are very unlikely.

Remark 2. In order to assess the plausibility of the above assumptions in a model of neural spike trains, we point out that the mean interspike intervals (of the superposed spike trains) should be larger than the characteristic time scale of the recovery function. In a broad sense, the model is physiologically plausible when the recovery function (9) decreases rapidly as t increases.

Recalling Remark 1, the first spike occurs according to the free firing rate s(t) (see Eq. (8)), so that τ_{•1} has distribution function

P(τ_{•1} ≤ t) = 1 − exp{−∫_0^t s(v) dv},  t ≥ 0.

Moreover, the probability that the first spike is generated by unit U_i is uniform, since P(Z_1 = i) = 1/d, 1 ≤ i ≤ d, due to (6) and (7). We now introduce the random vectors

(X^{(τ_{•k})}_{1,j}, X^{(τ_{•k})}_{2,j}, ..., X^{(τ_{•k})}_{d,j}),  k = 1, 2, ...,    (12)

where, in agreement with (7), X^{(τ)}_{i,j} is a non-negative random variable having hazard rate s(t) r_i(t − τ_{•N(t)}; Z_{N(t)} = j). Assuming that the k-th spike of the network was generated by unit U_j at time τ_{•k}, X^{(τ_{•k})}_{i,j} describes the time length between τ_{•k} and the next spike, conditional on the event that the latter spike is generated by unit U_i, 1 ≤ i ≤ d. From the above assumptions it follows that the spiking process is regenerative, in the sense that the distribution of X^{(τ)}_{i,j} does not depend on k. Hence, we shall write X^{(τ)}_{i,j} when it is not necessary to specify the index k. Moreover, as soon as a spike occurs, the firing activity restarts afresh according to the scheme described by Eqs. (7) and (9). We notice that the components of vector (12) are not observable, whereas the following random variables are observable:

T^{(τ)}_j = min_{1 ≤ i ≤ d} X^{(τ)}_{i,j},  δ^{(τ)}_j = arg min_{1 ≤ i ≤ d} X^{(τ)}_{i,j},    (13)

for 1 ≤ j ≤ d. Clearly, T^{(τ)}_j denotes the time length between a spike discharged at time τ by unit U_j and the next spike produced in the network, the unit producing that spike being described by δ^{(τ)}_j. On the grounds of Eqs. (7) and (9), the distribution function of X^{(τ)}_{i,j} is given by

F_{i,j}(t | τ) = 1 − exp{−∫_τ^{τ+t} s(v) r_i(v − τ; j) dv},  t ≥ 0.    (14)

In the following we shall denote by q^{(τ)}_j the probability that a spike of unit U_j, occurred at time τ, is followed by a spike of the same unit.
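As an illustration of the quantities just defined, the following Monte Carlo sketch (ours, not the authors') estimates q, the probability that consecutive spikes come from the same unit, for d = 2 with a constant free firing rate (anticipating Section 4.1) and u(t) = e^{−t}. Because the coefficients c_{i,j} sum to zero over i, as assumed above, the total intensity is 2λ, so interspike times are exponential and the spiking unit can be drawn with probabilities proportional to the two recovery rates.

```python
# Monte Carlo estimate of q for the d = 2 model with s(t) = lam and
# r_same = 1 - u, r_other = 1 + u, u(t) = exp(-t). The superposed train has
# constant total rate 2*lam, so the next spike time is Exp(2*lam) and the
# spiking unit is chosen with probability proportional to the two rates.

import math, random

lam, rng = 1.0, random.Random(7)
u = lambda t: math.exp(-t)

same, n = 0, 50000
j = rng.randrange(2)                      # first spiking unit is uniform
for _ in range(n):
    delta = rng.expovariate(2.0 * lam)    # time to next network spike
    w_same, w_other = 1.0 - u(delta), 1.0 + u(delta)
    k = j if rng.random() < w_same / (w_same + w_other) else 1 - j
    same += (k == j)
    j = k
print("estimated q =", same / n)          # stays below 1/2, as the text claims
```

With these choices the competing-risks integral gives q = ∫_0^∞ (1 − e^{−t}) e^{−2t} dt = 1/6 ≈ 0.167, which the simulation should reproduce.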
We remark that the above framework can be viewed as referring to the classical "competing risks model". The latter deals with failure times subject to multiple causes of failure, and is of interest in various fields such as survival analysis and reliability theory. In the present case the roles of the failures and of the failure causes are played, respectively, by the observed spikes and by the firing network units. General properties of the competing risks model can be found, for instance, in Crowder [10], whereas recent results on such a model, related to ageing notions and shock models, are given in Di Crescenzo and Longobardi [16] and [17], respectively.

4. Analysis of the model. Aiming to give a deeper description of the model introduced in the previous section, we first consider the simple case where the network is composed of d = 2 units. Due to (10), for d = 2 and i, j = 1, 2 we have c_{1,1} = c_{2,2} = −1 and c_{1,2} = c_{2,1} = 1, so that Eq. (9) becomes

r_j(t − τ; j) = 1 − u(t − τ),  r_i(t − τ; j) = 1 + u(t − τ) for i ≠ j,

for i, j = 1, 2. Recalling (12) and (13), we now deal with the random vectors (X^{(τ)}_{1,j}, X^{(τ)}_{2,j}), j = 1, 2, whose components are not observable. On the contrary, the random variables T^{(τ)}_j and δ^{(τ)}_j, j = 1, 2, defined in (13), are observable. Since the matrix ||c_{i,j}|| in this case is symmetric (cf. (16)), we can introduce two random variables X^{(τ)}_− and X^{(τ)}_+ by renaming the components of the random vector (17): X^{(τ)}_− (resp., X^{(τ)}_+) is the time length between a spike occurring at time τ and the next spike, conditional on the event that the latter spike is due to the same unit (resp., the other unit). Hence, from the given assumptions it is not hard to prove that X^{(τ)}_− and X^{(τ)}_+ are non-negative independent random variables. An example of activity of a network with d = 2 units is shown in Figure 1 (a sample of activity of a network with d = 2 units).

Recalling (14), the complementary distribution functions and the probability density functions of the variables (18) can be expressed respectively as

F̄_∓(t | τ) = exp{−∫_τ^{τ+t} s(v) [1 ∓ u(v − τ)] dv},  f_∓(t | τ) = s(τ + t) [1 ∓ u(t)] F̄_∓(t | τ),

for t ≥ 0. Moreover, due to (15), and since X_− and X_+ are independent, when d = 2 the probability that a spike of a generic unit, occurred at time τ, is followed by a spike of the same unit is given by

q^{(τ)} = ∫_0^{+∞} f_−(t | τ) F̄_+(t | τ) dt = ∫_0^{+∞} s(τ + t) [1 − u(t)] e^{−2φ_τ(t)} dt.

We are now able to provide the expression of (20) and of the distribution function of the observable random variable

T^{(τ)} = min{X^{(τ)}_−, X^{(τ)}_+},  F_T(t | τ) = 1 − e^{−2φ_τ(t)},  t ≥ 0.

Note that T^{(τ)} describes the intertime between a spike occurring at time τ and the subsequent spike. A relevant role is played by the free firing rate function s(·) and by the auxiliary function u(·) appearing in the recovery function (9). Since u(·) is a non-negative function, from (22) we have q^{(τ)} ≤ 1/2. Thus it is more likely that consecutive spikes are displayed by different units rather than by the same unit. The function

φ_τ(t) = ∫_τ^{τ+t} s(v) dv,

defined in Eq. (24), is named the cumulative firing rate.

The analysis of the model in the case of a network of d units can be performed by taking into account that, similarly to (20), the probability (15) is given by

q^{(τ)} = ∫_0^{+∞} f_{j,j}(t | τ) Π_{i≠j} F̄_{i,j}(t | τ) dt,    (25)

where f_{i,j}(t | τ) and F̄_{i,j}(t | τ) denote respectively the probability density and the complementary distribution function of X^{(τ)}_{i,j}, for i, j = 1, 2, ..., d. Due to (10), the terms in the right-hand side of (25) do not depend on j, and thus we are able to give the following extension of Proposition 1. Hereafter, in Sections 4.1 and 4.2, we consider the two special cases in which s(t) is constant and of sinusoidal type.
4.1. Constant free firing rate. In this section we discuss the homogeneous case, in which the external inputs arrive at the network's units with constant intensity. We thus assume that the free firing rate is constant, i.e.,

s(t) = λ for all t ≥ 0,    (28)

with λ > 0. We point out that in this case the distribution functions given in (14) do not depend on τ, and thus can be expressed in the simpler forms (29). If r = 1, a closed-form expression of q is available, involving the constant c defined in (32) and the upper incomplete gamma function Γ(·, ·). For both cases treated above, Figure 2 shows some plots of q as a function of c, for various choices of r and for d = 2. We point out that Proposition 3 states that the interspike intervals described by T are exponentially distributed. This is significantly different from the distribution functions specified in (29).

4.2. Sinusoidal free firing rate. Several papers on neuronal activity focus on modulated stimuli described by periodic inputs. For instance, we recall Tateno et al. [33], where the problem of finding the period of the oscillation in an oscillator driven by a periodic input is studied by means of a first-passage-time approach, and Yoshino et al. [34], where the effect of periodic pulse trains on oscillatory regimes of neuronal membranes is investigated. More recent researches studied the behaviour of the leaky integrate-and-fire model driven by a sinusoidal current or a slowly fluctuating signal (see, for instance, Barbi et al. [2], Picchini et al. [26]).

Aiming to include the presence of periodic external stimuli in model (7), in this section we consider the inhomogeneous case in which the time-varying free firing rate is given by

s(t) = λ + A sin(2πt/P),  t ≥ 0,    (33)

where |A| ≤ λ and P > 0. Hence, due to (27), the density of the spike intertimes T^{(τ)} for a network of d units is

f_T(t | τ) = d s(τ + t) e^{−d φ_τ(t)},  t ≥ 0,    (34)

where, due to (24), the cumulative firing rate is

φ_τ(t) = λt + (AP/2π) [cos(2πτ/P) − cos(2π(τ + t)/P)].

Figure 3 displays some plots of density (34) for some choices of the involved parameters. It shows that the multimodality of such density reflects the periodicity of the free firing rate (33). Figure 4 gives the mean M = E[T^{(τ)}] and the variance V = Var[T^{(τ)}] for suitable choices of the parameters. In this case a closed-form expression of the probability q^{(τ)} seems not to be available. However, it can be numerically evaluated by making use of Proposition 1. See Figure 5 for some plots of q^{(τ)} when u(t) = e^{−t}, t ≥ 0. In particular, the oscillating behaviour of q^{(τ)} with respect to τ is evident for large values of A (see the right panel of Figure 5).

5. Conclusions. We have investigated the spike train model in which the conditional intensity function is expressed as the product of the free firing rate function and a suitable recovery function. We have proposed an extension dealing with a neural network composed of d excitatory units, in which the recovery function of each unit depends both on the time elapsed since the last spike and on the last spiking unit. Our approach, which is somewhat related to the competing risks model, leads to the general form of the interspike distribution and of the probability of consecutive spikes from the same unit. Explicit results have been found when the free firing rate function is constant. We also considered the case when the free firing rate is sinusoidal, for which the density, the mean and the variance of the spike intertimes are investigated by means of numerical evaluations. In both cases we studied the probability that a spike of a generic unit, occurred at a fixed time, is followed by a spike of the same unit.

Figure 5. Plots of q^{(τ)} in the sinusoidal free firing rate case as a function of A (left panel) and of τ (right panel), for u(t) = e^{−t}, t ≥ 0, with d = 2, λ = 1 and P = 2.
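The numerical evaluation of q^{(τ)} mentioned above can be sketched directly from the competing-risks integral written in Section 4, using the sinusoidal rate (33) and u(t) = e^{−t}; the parameter values below echo the Figure 5 setup (d = 2, λ = 1, P = 2), while the value of A is our own choice.

```python
# Numerical evaluation of q^(tau) for the sinusoidal free firing rate
# s(t) = lam + A*sin(2*pi*t/P), with u(t) = exp(-t) and d = 2, via the
# competing-risks integral q = int_0^inf s(tau+t)(1-u(t)) exp(-d*phi) dt.

import math
from scipy.integrate import quad

lam, A, P, d = 1.0, 0.8, 2.0, 2

s = lambda t: lam + A * math.sin(2.0 * math.pi * t / P)

def phi(tau, t):
    """Cumulative firing rate: integral of s from tau to tau + t."""
    return lam * t + A * P / (2.0 * math.pi) * (
        math.cos(2.0 * math.pi * tau / P) - math.cos(2.0 * math.pi * (tau + t) / P))

def q(tau):
    integrand = lambda t: s(tau + t) * (1.0 - math.exp(-t)) * math.exp(-d * phi(tau, t))
    val, _ = quad(integrand, 0.0, 50.0)   # upper limit truncates a decaying tail
    return val

for tau in (0.0, 0.5, 1.0, 1.5):
    print(f"tau = {tau:3.1f}: q = {q(tau):.4f}")  # oscillates in tau, always < 1/2
```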
ON THE ABSENCE OF ANTIMATTER FROM THE UNIVERSE, DARK MATTER, THE FINE STRUCTURE PARAMETER, THE FLUCTUATIONS OF THE COSMIC MICROWAVE BACKGROUND RADIATION AND THE TEMPERATURE DIFFERENCE BETWEEN THE NORTHERN AND SOUTHERN HEMISPHERE OF THE UNIVERSE

The theory of selfvariations correlates five cosmological observations considered to be unrelated by the physical theories of the previous century. The absence of antimatter from the Universe, Dark Matter, the slight variation of the fine structure parameter, the temperature fluctuations of the cosmic microwave background radiation and the temperature difference between the northern and southern hemisphere of the Universe can be justified by a common cause. This cause is the selfvariation of the electric charge of material particles. The antimatter particles of the very early Universe lose their electric charge with the passage of time and end up electrically neutral. These electrically neutral particles constitute a significant part of Dark Matter. The cosmological model of the selfvariations predicts another possible mechanism for the creation of Dark Matter particles. Thus, we can justify the fact that the amount of Dark Matter is greater than the amount of ordinary, luminous matter. A fluctuation of the electric charge at cosmological distances is predicted in the region of the Universe that we observe. This fluctuation is recorded in the cosmological data in the value of the fine structure parameter measured at cosmological distances and in the temperature of the cosmic microwave background radiation, and it is responsible for the temperature difference between the two hemispheres of the Universe. The study we present proves in detail that the law of selfvariations contains enough information to justify the totality of the cosmological data that cannot be justified by the standard cosmological model. These data have been observed by ultrasensitive modern observation instruments. The high sensitivity of the instruments is necessary to record the effects of the extremely small variation of the electric charge. We regard as necessary a re-evaluation of the cosmological data based on the law of selfvariations.

INTRODUCTION

The law of selfvariations expresses quantitatively a slight continuous increase of the rest mass and the electric charge of material particles. In the macrocosm, the law of selfvariations is expressed by a simple differential equation for the rest mass and by a similar one for the electric charge of material particles. The solutions resulting from these differential equations justify the totality of the cosmological data. Some cosmological data, like the redshift of distant astronomical objects, the Cosmic Microwave Background Radiation (CMBR), the nucleosynthesis of the chemical elements and the increased luminosity distances of Type Ia supernovae, result mainly as a consequence of the selfvariation of the rest mass. The selfvariation of the electric charge is responsible for a large part of the Dark Matter. The absence of antimatter from the Universe, the fluctuation of the fine structure parameter observed at cosmological distances, the temperature fluctuation of the CMBR and the temperature difference between the two hemispheres of the Universe are exclusively due to the selfvariation of the electric charge. In the microcosm, the law of selfvariations predicts that the rest mass and the electric charge of material particles spread, are distributed, within spacetime.
When we try to define this distribution, the Schrödinger equation, as well as the relevant equations, appear and play a fundamental role. These equations, for the microcosm, replace the simple differential equation given by the law of selfvariations for the macrocosm. The selfvariation of the rest mass evolves only in the direction of increase of the rest masses of material particles. On the contrary, the selfvariation of the magnitude of the electric charge can evolve in two different directions. The electric charge of material particles can either increase or decrease in absolute value. This difference arises from the fact that the electric charge exists in the Universe as pairs of opposite quantities. In this article we examine in detail the evolution of the selfvariation of the electric charge in the two directions and its consequences.

FUNDAMENTAL EQUATIONS OF THE COSMOLOGICAL MODEL OF THE SELFVARIATIONS

The law of selfvariations in the macrocosm predicts (Manousos, 2013a, Equations 270 and 292) Equation 1 for the rest mass m0 of material particles and the corresponding Equation 2 for the magnitude q (q > 0) of the electric charge of material particles. In Equation 2 the electromagnetic potential V0 is independent of the selfvariations (Manousos, 2013b). With (•) we denote the derivative with respect to time t. Equations 1 and 2 are solved in a flat and static Universe (Manousos, 2013a). Solving Equation 1 we find Equation 3, a relation between the rest mass m0(r) of a material particle in a distant astronomical object at distance r from Earth and the laboratory value of the rest mass m0 of the same particle on Earth. Between parameters k and A, relation 4 holds, where H is Hubble's parameter. Parameter A increases very slightly with the passage of time t according to Equation 5, while it obeys inequality 6 for every value of the redshift z. Solving Equation 2 we similarly obtain Equation 7. Given the fact that we know the value of Hubble's parameter H, Equation 4 provides a relation between parameters k and A. Furthermore, Equation 6 confines to a satisfactory degree the values parameter A can take. Thus, we were able to derive a large amount of information about the consequences of the selfvariation of the rest mass at cosmological scales (Manousos, 2013a; 2013b). Regarding the selfvariation of the electric charge, we know that B > 0 and that it evolves at an extremely slow rate (Manousos, 2013b). We shall now repeat the proof of Equation 7 from Equation 2 in order to highlight the fundamental parameters defining the selfvariation of the electric charge.

From Equation 2 we obtain Equation 9, where σ1 is the integration constant, measured in units of electric charge. With the notation of Equation 10, Equation 9 can be written as Equation 11. We integrate Equation 11 between moment t0, when the electric charge has value q0 and x = q0/σ1, and moment t, "now", when the electric charge has value q and x = q/σ1, and after performing the calculations we get Equation 12. In order to find the value of the electric charge q(r) at a distant astronomical object located at distance r from Earth, we replace t in Equation 12 with t − r/c and get Equation 13. With the notation of Equation 14, Equations 12 and 13 can be written correspondingly as Equation 15, from which we obtain Equation 16. In the law of selfvariations (Manousos, 2013a, Equations 265 and 266), the imaginary unit i has been introduced in order to incorporate into the statement of the law the consequences stemming from the internality of the Universe in the process of measurement (Manousos, 2013b).
The final Equations 3 and 7, i.e., the solutions of Equations 1 and 2, do not change if we replace the imaginary unit with any constant b ≠ 0 in Equations 1 and 2 (Manousos, 2013a). In the macrocosm we measure the consequences of a real variation of the rest masses and the electric charges of material particles, something that cannot be done in the microcosm (Manousos, 2013b). We could reformulate the law of selfvariations by initially assigning any arbitrary parameter b ≠ 0 in the place of i. In order to avoid the confusion that might arise from the presence of the imaginary unit i, we write Equation 15 in the form of Equation 16. As we shall see, the arbitrary parameter b ≠ 0 does not play any role in the resulting conclusions, since they are determined by the value of parameter k1. This is the parameter we can measure on the basis of the cosmological data. The variation of the electric charge q results in the variation of the fine structure parameter α (Equation 17). From the very slight variation of parameter α at cosmological distances (King et al., 2011; Molaro et al., 2008; Murphy et al., 2008; 2007; Tzanavaris et al., 2005; Chand et al., 2004; Murphy et al., 2003; Webb et al., 2001; Dzuba et al., 1999; Webb et al., 1999), we conclude that the selfvariation of the electric charge evolves at an extremely slow rate.

THE PARTICLES OF THE ELECTRICALLY CHARGED ANTIMATTER OF THE EARLY UNIVERSE ARE CONVERTED, WITH THE PASSAGE OF TIME, INTO DARK MATTER PARTICLES

By comparing Equations 1 and 2, we find that in place of the electromagnetic potential V0 in Equation 2, the factor c² > 0 appears in Equation 1. This is a general characteristic of the equations resulting from the law of selfvariations and appears from the beginning in the equations of the theory of selfvariations (Manousos, 2013a; see the energy-momentum tensors 254 and 259 and the remark in paragraph 4.8). The fact that c² > 0 has as a consequence that the selfvariation of the rest mass occurs in the direction of increase of the rest mass of material particles. There are also other arguments that strengthen this conclusion, which we will not mention in the present article. The electromagnetic potential V0 can be either positive (V0 > 0) or negative (V0 < 0). According to Equation 16, a change of sign of the electromagnetic potential V0 brings about a change of sign of parameter k1. This causes parameter k1 to be either positive (k1 > 0) or negative (k1 < 0). But, according to Equation 7, for k1 > 0 the selfvariation of the electric charge evolves in the direction of increase of the electric charge in absolute value, whereas for k1 < 0 the selfvariation evolves in the direction of decrease of the electric charge in absolute value. Consequently, the possibility for the electromagnetic potential V0 to be either positive or negative is the reason why the selfvariation of the electric charge can evolve in two directions. The electric charge has an initial value q0 at moment t0 in the distant past, in the very early Universe. In the case where k1 > 0 the electric charge increases in absolute value, at an extremely slow rate, and reaches the value we measure in the laboratory today. But in the case where k1 < 0, the electric charge decreases in absolute value. If this happens for a long enough time, the electric charge tends to vanish and the initially charged particles are electrically neutral today.
We will now determine a difference between the atoms of matter and the atoms of antimatter which could justify the change in the sign of the electromagnetic potential V0 of Equation 16 between the two kinds of atoms. In the case of matter, in the hydrogen atom the negative electric charge of the electron overlaps the positive electric charge of the proton. In the case of antimatter, the positive electric charge of the positron overlaps the negative electric charge of the antiproton. This reversal of the sign of the electric charge could justify the change in sign of the electromagnetic potential V0 at the moment when the opposite electric charges appear. In the macrocosm we know that the electromagnetic potential created by two opposite electric charges changes sign, at every point in space, if we reverse the sign of the two electric charges. Of course, in the case we are studying, further investigation is required, which is natural since the investigation of the law of selfvariations is at its initial stage. Nevertheless, the possibility of conversion of the antimatter particles of the very early Universe into electrically neutral particles is a clear prediction of the theory of selfvariations. The change of sign of potential V0 between matter and antimatter can justify in a unified way, with a common cause, the absence of antimatter in the Universe today and the origin of a large number of Dark Matter particles. For the potential V0 for which parameter k1 in Equation 16 is positive (k1 > 0), the electric charge of particles increases with the passage of time. This leads to the hydrogen atom as we observe it today. For the potential V0 for which parameter k1 in Equation 16 is negative (k1 < 0), the initially electrically charged particles lose their electric charge with the passage of time. According to the difference we specified between matter and antimatter regarding the sign of potential V0, the antimatter particles lose their electric charge with the passage of time and end up electrically neutral. These particles, without electric charge, behave like Dark Matter particles. If initially there were equal quantities of matter and antimatter particles in the Universe, 50% of the particles lose their electric charge and, with the passage of time, are converted into Dark Matter particles. The cosmological model of the selfvariations predicts further reasons favoring the creation of Dark Matter particles (Manousos, 2013a; 2013b). Thus, the large amount of Dark Matter recorded in the cosmological data can be justified. A resulting indirect conclusion is that the antimatter particles lose their electric charge before the accumulation of matter for the formation of the large structures in the Universe. This conclusion arises from the fact that antimatter is absent from the large-scale structures of the Universe we observe today. Due to the extremely slow rate of evolution of the selfvariation of the electric charge, a very long time is required for the antimatter particles to lose their charge. Therefore, a very long time is required for the Universe to evolve from its initial state into the state we observe today. This is consistent with the prediction of the model of selfvariations about the age and size of the Universe (Manousos, 2013a; 2013b).
We note that the cosmological model of the selfvariations is self-consistent, and its predictions should be correlated, where necessary, with the initial form of the Universe predicted by the model itself and not with the initial form of the Universe predicted by other models.

ON THE VARIATION OF THE FINE STRUCTURE PARAMETER α

Due to the very large age of the Universe, every material particle has its own past history, and it is possible for external factors to act additively and bring about a slight fluctuation in the value of the electromagnetic potential V0. In such a case, a slight fluctuation of parameter k1 will be observed, according to Equation 16. This fluctuation will be manifested as a fluctuation of the electric charge, according to Equation 7, and it may be observed for distances r of cosmological scale. For smaller-scale distances, the fluctuation of the electric charge cannot be observed, due to the extremely slow rate with which the selfvariation of the electric charge evolves. We note that a corresponding fluctuation cannot occur for the selfvariation of the rest mass: where the potential V0 appears in Equation 2, the constant factor c² appears in Equation 1. From the very slight variation of the fine structure parameter we conclude that parameter k1 has an extremely small value; applying the corresponding approximation in Equation 7 and introducing the notation of Equations 18 to 20, we arrive at the working expression for the charge fluctuation. The very small value of parameter k1 in Equation 19 means a very small value of the corresponding quantity. According to Equation 17, the fine structure parameter α(r) at a distant astronomical object is given by Equation 23. Combining Equations 17 and 23 we get Equation 24, and combining Equations 22 and 24 we get Equation 25. From Equation 25 we obtain a relation that holds for every specific distance r. In Equation 28 we have the distance r and not the redshift z. If we use the relativistic distance-redshift equation, the resulting value is of the order of magnitude at which the quantity Δα/α is measured. But in the model of the selfvariations (Manousos, 2013a) the distance r is given as a function of the redshift z by a different equation, and the value given by Equation 32 is required. The ratio Δα/α depends on the distance r of the astronomical object, but this cannot be expressed as long as we use the erroneous, smaller-than-actual distances of the standard cosmological model. We note that an analogous problem has arisen with the large luminosity distances of Type Ia supernovae (Riess et al., 1998; Perlmutter et al., 1999), a problem that goes away if we take into account the cosmological model of the selfvariations (Manousos, 2013a; 2013b). In order to calculate the exact value of parameter W, as well as the arithmetic values of the preceding equations, a re-evaluation of the observational data on the basis of the model of the selfvariations is required. Of course, we expect the next generation of observational instruments to contribute to the accurate measurement of the fundamental parameter W. Taking into account the dependence of the fine structure parameter on the angle Θ, according to Equation 35, Equation 22 is rewritten accordingly. The fluctuation of parameter k1 in Equation 19 implies the fluctuation of parameter W as well. Furthermore, the Milky Way is located at a random position in the Universe and, therefore, there are regions at which we observe a smaller value of parameter α(r). To be more precise, what must be done is a detailed study of the anisotropies predicted by the cosmological model of the selfvariations.
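The chain from charge fluctuation to the observable Δα/α ultimately rests on the textbook definition of the fine structure parameter, α = q²/(4πε0ħc), so that α scales as the square of the electric charge. The model's Equations 18 to 25 are not reproduced above, so the sketch below encodes only that square-law relation; the fractional charge changes fed into it are hypothetical placeholders, not values derived from the selfvariation model.

```python
def delta_alpha_over_alpha(dq_over_q: float) -> float:
    """Exact fractional change of alpha for a fractional change of the
    charge, using only alpha ∝ q^2 (for small changes, ~ 2*dq/q)."""
    return (1.0 + dq_over_q) ** 2 - 1.0

# hypothetical fractional charge changes, not model predictions
for dq in (1e-6, 1e-5, 5e-5):
    print(f"dq/q = {dq:.0e} -> dalpha/alpha = {delta_alpha_over_alpha(dq):.3e}")
```

A fractional charge change of order 10^-5 would thus appear as a Δα/α of the same order (roughly twice as large), the order of magnitude reported in the quasar absorption measurements cited above.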
We stress again that the main reason for the anisotropies recorded by the observational instruments is the fact that we only observe a small part of the Universe (Manousos, 2013a; 2013b). The isotropy of the Universe is expected at much larger scales, at much greater distances than the ones we observe today.

THE CONTRIBUTION OF THE SELFVARIATION OF THE ELECTRIC CHARGE TO THE REDSHIFT

The contribution of the selfvariation of the electric charge to the redshift of distant astronomical objects is small, because of the slow rate of its evolution. Nevertheless, this contribution could be detected in high-accuracy cosmological measurements. In this paragraph we calculate this contribution. Taking into account relation (6), it easily follows from Equation 43 that a limiting expression holds for A → 1⁻. The atomic excitation energy X_n is proportional to the factor m0q⁴, where m0 is the rest mass and q the electric charge of the electron; therefore, Equation 46 can be written in the form of Equations 47 and 48, which hold during the conversion of rest mass Δm0 into energy Δm0c².

ON THE TEMPERATURE FLUCTUATION OF THE COSMIC MICROWAVE BACKGROUND RADIATION

The selfvariations affect almost all astrophysical parameters. In this and the next two paragraphs we will see how the selfvariations affect the temperature of distant astronomical objects. As an aside, a temperature fluctuation of the CMBR, of the order of the fourth or fifth decimal place, emerges. We consider a system of N particles which is provided with energy through the process of conversion of rest mass into energy. For the laboratory we get the corresponding relation, and Equation 51 gives the temperature T(z) of a distant astronomical object compared to the expected temperature T, in the case of an object powered by the conversion of rest mass into energy. The very early Universe predicted by the law of selfvariations differs only slightly from the vacuum at a temperature close to 0 K. Starting from this initial state of the Universe, the first energy conversions came from changes that occurred at the level of particles, long before the gravitational accumulation of matter began. Therefore, the energy of material particles originated from the conversion of rest mass into energy during the formation and evolution of the primordial particles. Thus, we conclude that between the real temperature T(z) of the CMBR and the measured temperature T, Equation 51 holds. Considering the effect of the redshift z, we correct the energy of the CMBR photons by removing the consequences of the redshift. This correction in Planck's law for a black body is equivalent to the corresponding equation. Taking T0 = 2.726 K, the value of the redshift z at the boundaries of the observable Universe, where the CMBR originated, and −1 ≤ cosΘ ≤ 1, a fluctuation of the temperature of the CMBR results at the fourth or fifth decimal place (Hinshaw et al., 2009).

THE RELATION BETWEEN TEMPERATURES T(z) AND T IN THE CASE OF THE GRAVITATIONAL ACCUMULATION OF MATTER

In the case when a system of N particles of total mass M acquires its energy by the gravitational accumulation of matter, we have one relation for the laboratory and another for the distant astronomical object. From the previous equations we obtain Equation 58, which gives the real temperature T(z) in relation to the expected temperature T for a distant astronomical object powered by the gravitational collapse of matter. In applying this relation, Equation 57 should also be taken into account.
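The paper's correction equation is not reproduced above. As background for the operation it describes, the following sketch illustrates the standard result that a Planck spectrum observed through a redshift z keeps the blackbody shape with temperature divided by (1 + z); removing the redshift from the photon energies therefore recovers the emission-epoch temperature. The value z = 1089 for the last-scattering surface is the conventional one and is used here only as an assumption.

```python
import numpy as np

h, k, c = 6.62607015e-34, 1.380649e-23, 2.99792458e8  # SI constants

def planck(nu, T):
    """Blackbody spectral radiance at frequency nu (Hz), temperature T (K)."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

T0 = 2.726                 # K, measured CMBR temperature today
z = 1089.0                 # assumed redshift of the last-scattering surface
T_emit = T0 * (1.0 + z)    # temperature recovered after de-redshifting

nu_obs = np.logspace(9, 12, 5)       # observed frequencies, Hz
nu_emit = nu_obs * (1.0 + z)         # de-redshifted photon frequencies
ratio = planck(nu_emit, T_emit) / planck(nu_obs, T0)
print(ratio)               # constant (1+z)^3: the Planck shape is preserved
```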
Furthermore, one should consider that the redshift affects the degree of atomic ionization and, therefore, the multitude N of particles appearing in Equation 57, and also the opacity of stellar surfaces (Manousos, 2013b).

THE SELFVARIATIONS DO NOT SIGNIFICANTLY AFFECT THE FUSION TEMPERATURE OF HYDROGEN

The fusion temperature T(z) of hydrogen at distant astronomical objects is practically equal to the laboratory value T, because of the extremely slow rate of evolution of the selfvariation of the electric charge. During fusion, the thermal energy of the nuclei must overcome their Coulomb repulsion at a separation d. If we assume that the distance d is not affected by the selfvariations, something very likely, then from the above equations, combining Equations 41 and 62, we get Equation 63; combining Equations 63 and 48 we get, after the calculations, the same result that is obtained by combining Equations 62 and 7. For the Northern hemisphere of the Universe the relevant measurement is known (Webb et al., 2011).

ON THE TEMPERATURE DIFFERENCE BETWEEN THE NORTHERN AND SOUTHERN HEMISPHERE OF THE UNIVERSE

The temperature difference between the Northern and the Southern hemisphere of the Universe is a consequence of the selfvariation of the electric charge. The slight fluctuations of the electric charge in the various regions of the observable Universe have as a consequence a corresponding slight fluctuation of the temperature, with a smaller electric charge in the past corresponding to lower temperatures. We also come to the same conclusion from the dependence of the Thomson and Klein-Nishina scattering coefficients, as well as of the degree of atomic ionization, on the electric charge of the electron. In the regions of the Universe with a slightly smaller electric charge in the past, slightly lower temperatures are predicted compared to regions of the Universe where the electric charge had a slightly larger value. We shall not present these arguments in the current article, but all analyses lead to the same conclusion about how the selfvariation of the electric charge affects the temperature of the Universe.

ON THE OKLO NATURAL NUCLEAR REACTOR

In Equation 41, the factor cosΘ expresses the fluctuation of the constant k1 and the random position of the Milky Way in the Universe. There are regions of the Universe where parameter α is slightly smaller than the laboratory value (cosΘ > 0) and regions where it is slightly larger (cosΘ < 0). For phenomena that occur on Earth, like the Oklo natural nuclear reactor, the consequences of the increase of the electric charge with the passage of time dominate. For the description of such phenomena we use Equation 26, where the time interval t is measured in years. For t = 2×10⁹ yr, a time interval of the order of magnitude of the operation of the Oklo natural nuclear reactor, we get the value predicted by Equation 72. This variation is extremely small and difficult to measure (Petrov et al., 2006; Meshik et al., 2004; Gauthier-Lafaye, 2002; De Laeter et al., 1980). We expect that the processing of the cosmological data we possess, as well as the improvement of the observational instruments, will give us a more accurate value for the fundamental parameter W. However, this more accurate measurement cannot considerably affect the theoretical prediction about the Oklo reactor, since Equation 72 gives an extremely small value for the ratio Δα/α.

RESULTS

We summarize our obtained results.
We predict the absence of antimatter in the Universe as a consequence of the ability of the electric potential V0 to be either positive or negative in Equation 16. We calculate the contribution of the selfvariation of the electric charge to the redshift z of astronomical objects through Equations 47 and 48.

DISCUSSION

The potential for the electric charge to evolve in two directions constitutes a pronounced anisotropy in the macrocosm. Among the first consequences is the absence of antimatter from the Universe today. Since the electric charge appears in the Universe as pairs of opposite physical quantities, and does not have to start from a zero initial value, the potential for the evolution of the electric charge in two directions exists, even if we could not determine the exact cause, i.e., the change in sign of the electromagnetic potential V0. The law of selfvariations, which quantitatively determines this phenomenon, is compatible with special relativity and the Lorentz-Einstein transformations. The anisotropy in the macrocosm, due to the absence of antimatter, is apparent and not real. During the evaluation of the observational data on the fluctuation of the fine structure parameter, the same problem appeared as in the evaluation of the luminosity distances of Type Ia supernovae. The distances of astronomical objects, especially for large values of the redshift, are much greater than those predicted by the standard cosmological model. If we take into account the predictions of the model of the selfvariations, the luminosity distances of Type Ia supernovae are completely justified. Furthermore, it emerges that the fluctuation of the fine structure parameter depends on the distance at which we measure it. This information is lost from the evaluation of the observational data if we rely on the predictions of the standard cosmological model. The fluctuation of the electric charge justifies both the temperature fluctuation of the CMBR and the temperature difference between the two hemispheres of the Universe. The selfvariations affect almost the totality of parameters in astrophysics. This fact requires an overall re-evaluation of the observational data we possess. This article is part of a general study which provides a unified cause for the quantum phenomena and the cosmological data. The general study converges to the law of selfvariations. The law of the selfvariations contains enough information to justify the totality of the current cosmological data. The selfvariation of the rest mass is realized at an extremely slow pace. For this reason, the direct consequences of the selfvariation of the rest mass are recorded at cosmological distances. The selfvariation of the electric charge is realized at an even slower pace, and its consequences are only recorded in high-precision measurements. The present-day ultrasensitive observation instruments have the required precision, and therefore the available cosmological data are also affected by the consequences of the selfvariation of the electric charge. This is exactly the reason why the standard cosmological model cannot justify these particular cosmological data.
CONCLUSION

The conclusion we arrive at is that the selfvariation of the electric charge can justify, as a common unifying cause, the absence of antimatter from the Universe (the antimatter being converted, with the passage of time, into Dark Matter), the fluctuation of the fine structure parameter, the temperature fluctuation of the cosmic microwave background radiation and the temperature difference between the Northern and Southern hemisphere of the Universe. The research presented in this article concerns the observable Universe. This is due to the approximation we made in Equation 7 in order to obtain Equation 22. For observations concerning much larger distances, Equations 3 and 7 should be used. This article, together with the already published articles referenced in the previous sections, gives a large number of equations which arise from the law of selfvariations. These equations are sufficient to carry out a computer simulation of the Universe as predicted by the law of selfvariations. We suggest the realization of this study by colleagues who are experts in the subject.
Scaling Up: Molecular to Meteorological via Symmetry Breaking and Statistical Multifractality : The path from molecular to meteorological scales is traced and reviewed, beginning with how the persistence of molecular velocity after collision induces symmetry breaking, from continuous translational to scale invariant, associated with the emergence of hydrodynamic behaviour in a Maxwellian (randomised) population undergoing an anisotropic flux. An empirically based formulation of entropy and Gibbs free energy is proposed and tested with observations of temperature, wind speed and ozone. These theoretical behaviours are then succeeded upscale by key results of statistical multifractal analysis of airborne observations on horizontal scales from 40 m to an Earth radius, and on vertical scales from the surface to 13 km. Radiative, photochemical and dynamical processes are then examined, with the intermittency of temperature implying significant consequences. Implications for the vertical scaling of the horizontal wind are examined via the thermal wind and barometric equations. Experimental and observational tests are suggested for free-running general circulation models, with the possibility of addressing the cold bias they still exhibit. The causal sequence underlying atmospheric turbulence is proposed.

Introduction

The central importance of symmetry for the conservation laws of physics was pointed out by Noether [1]; no scale was specified for the operation of the principle. The possibility of breaking such symmetries in a solid-state physics context was stated by Landau [2]. Anderson [3] extended the approach into mathematical forms that proved eventually to be ground-breaking in condensed matter and particle physics [4]. Systems with continuous symmetries need not have conservation laws; Hamiltonian treatments with Lagrangians, such as chaotic attractors in phase space, are undermined by dissipation, a situation characteristic of the atmosphere. Although mathematically dissipation in chaos theory is represented by shrinking of the phase space, such a process is irrelevant in the atmosphere, where multiplicative interaction on all scales between absorption of solar photons and emission of infrared ones to the coldness of space is operative. The possibility of the continuous translational symmetry, shown by Maxwellian samples of gas molecules treated as hard spheres, being applicable to air is confounded, that is broken, by the persistence of velocity after collision [5] and by the molecular dynamics result of the emergence of fluid flow under anisotropic flux [6], a ubiquitous atmospheric condition. These considerations were brought into sharp relief by the application of the theory of statistical multifractality to atmospheric observations of sufficient quality by Schertzer and Lovejoy [7-9] and later, with a molecular emphasis, by Tuck and co-authors [10-14]. The implications of these analyses are dealt with in the succeeding sections of this paper. Arguments are provided as to how the scaling-based Gibbs free energy provides the work needed to drive the atmospheric circulation, having been deduced from the molecular collisions involved in producing vorticity on short time and space scales by a nonlinear process. This process shows how temperature remains defined operationally and acts as the integrator in a fluctuation-dissipation theorem, being influenced differently by gravity than the other variables.
The observational data used to successfully test the scaling theory and the values of its exponents were obtained by the NASA ER-2 (Lockheed, Burbank, CA, USA) and DC-8 aircraft. We shall see that correlation of the three scaling exponents H, C1 and α with conventional variables provides insight into the operation of some physical processes, such as temperature, sources and sinks, and jet streams. The procedure will be to review the work undertaken since 1998 via a selection of key diagrams, accessing more detailed formulations and discussions by references.

Microscopic and Macroscopic Processes

The coupling of molecular and meteorological processes is shown schematically in Figure 1. It is through the central variable of temperature.

Figure 1. The determination of measured temperatures by molecular velocities is how microscopic processes couple to meteorological ones. The observed correlation between the ozone photodissociation rate and the intermittency of temperature was an indication that local thermodynamic equilibrium in lower stratospheric air was not a valid assumption. Note that aerosols, not shown, play a significant role in chemical composition, cloud physics and radiative transfer. The Gibbs free energy resulting from the difference between the incoming low entropy beam of solar flux (high energy UV and visible photons) and the outgoing higher entropy terrestrial flux (low energy infrared photons) over the entire 4π solid angle provides the work necessary to drive the circulation.

The scales can be specified approximately as micro: 10^-10 to 10^-8 m, meso: 10^-8 m to 10^-6 m, macro: >10^-6 m. One micron, 10^-6 m, is approximately the largest size of aerosol which can remain suspended in the troposphere and be transported significant distances by the winds.

Persistence of Molecular Velocity after Collision

The persistence of molecular velocity after collision [5] breaks the continuous translational symmetry (randomness) assumed in speed and direction that underlies the Maxwell-Boltzmann probability distribution function (PDF) and hence the Lorentzian and Doppler spectral line shapes used in radiative transfer in the atmosphere. Einstein-Smoluchowski diffusion is also negated in the atmosphere, and with it the definition of temperature used in laboratories. Maxwell-Boltzmann PDFs have continuous translational symmetry, which is broken by the persistence of velocity after collision. The persistence ratio ϖ12 is the ratio of the mean velocity after collision to that before collision between molecules of masses m1 and m2. If m1 = m2 then ϖ12 = 0.406, but in general for m1 ≠ m2 the heavier molecule will be slowed less than the lighter one. This breaking of continuous translational symmetry will be discussed further later in the light of observational analyses, but it immediately leads to the interpretation of molecular dynamics calculations that showed the emergence of hydrodynamic behaviour in a population of randomised molecules subject to an anisotropic molecular flux [6].
This phenomenon is shown in Figure 2: here 'molecules' are represented as hard spheres ('billiard balls'), and ring currents, i.e. vortices, emerged on very short time and space scales.

Figure 2. The original simulation of the emergence of 'ring currents' (vortices) in a population of Maxwellian atoms subject to an anisotropic flux [6]. The thicker arrows represent averages over the molecular velocity vectors after 9.9 collision times, whereas the thinner arrows represent simulation by the Navier-Stokes equation. Later simulations showed disagreements between the two approaches.

It is suggested from the outset that a vorticity approach be adopted from this smallest, molecular scale. Let ω = ∇ × v be the vorticity of the field of molecular velocity v [14]. This enables calculation of twice the enstrophy directly by taking the curl of the molecular velocity field. Because enstrophy propagates downscale, in 2D theory at least, it may be less suited to describing the behaviour of energy deposited on the smallest scales, that of photons and molecules, as it is in the atmosphere. The vorticity correlation function can lead to the vorticity form of the Navier-Stokes equation; see Section 3.3 of [15]. Nevertheless, the vorticity of 'air' in a molecular dynamical calculation is obtainable from the molecular momenta p and the molecular positions q. Such formulations are necessitated by the non-spherical symmetries of real molecules and the omnipresent anisotropies of gravity, planetary rotation and the solar beam. The nth moment of the molecular speed [14-16] permits n to be used in calculating moments in the course of statistical multifractal analysis of molecular dynamics calculations.

Figure 2 illustrates the discovery of this phenomenon by Alder and Wainwright [6]. It was appealed to in a meteorological context as the result of an unexpected observational correlation between the ozone photodissociation rate and the intermittency of temperature [14], about which more later. Physically, the mechanism consists of the faster molecules pushing up higher number density ahead of themselves, leaving lower number densities in their rear. The resulting number density gradient results in a flux of the more numerous, nearly average molecules that produces the ring current. The numerous, near-average molecules exchange collisional energy easily, and in so doing maintain an operational temperature, which is not, however, described by a Boltzmann PDF.
Note that the interactions are non-linear and result in the vorticity structure being self-sustaining, offering the ability to propagate upscale.

Emergence of Fluid Flow from a Molecular Population

Extant statistical multifractal analyses expected temperature to scale like a passive scalar (a tracer), but the earliest such treatments of NASA ER-2 data in the lower stratosphere at 17-20 km altitude showed that temperature scaled differently than known tracers [10-12], at least under rescaled range analysis. With the availability of high-resolution GPS dropsonde data from the NOAA Gulfstream G4-SP, it became clear from statistical multifractal analyses that temperature was affected differently by gravity than other variables [15,17-19]. The scaling of temperature spans the range from micro (molecular) through meso (nanometres to micrometres) to macro (greater than micrometres). That accounts for temperature acting as a kind of integrator in the way that other variables do not. A further implication of the Alder-Wainwright mechanism lies in the basis of the fluctuation-dissipation theorem as embodied in the Langevin equation. Langevin treated the mean as organised behaviour and fluctuations as dissipation. The Alder-Wainwright mechanism implies the reverse: the hydrodynamic fluctuation carried by the fastest molecules represents emergent organisation, and the mean, represented by the near-average molecules, represents dissipation that defines an effective temperature. Further discussion may be found in Sections 3.1-3.3, 5.2 and 8.1 of [15]. We cannot expect to view atmospheric vorticity in two dimensions and remain quantitative, because the dimensionality of atmospheric flow is 2 + H(s) [8,9,15], where H(s) is the vertical scaling exponent of the horizontal wind speed s. The vorticity form of the Navier-Stokes equation is

∂ω/∂t = -(v·∇)ω + (ω·∇)v + κ∇²ω,

where κ is the kinematic viscosity. The first term on the right says that vorticity, ω, advects itself: nonlinearity is inherent. This term alone is responsible for much of the complexity and difficulty associated with understanding, describing and computing atmospheric flow. Using the definition of ω, we can relate the autocorrelation function for vorticity, A(t), to the enstrophy ε. Enstrophy is governed by a balance equation, Equation (8), in which Str_ij, the straining rate on a fluid element, appears. Equation (8) expresses the generation of vorticity by stretching, or its destruction by compression, via the first term, balanced by viscous dissipation in the second term. The third term is the divergence, often assumed to be locally zero; this cannot be strictly true, for example, if ozone photodissociation is leading to the generation of vorticity. Considering the theoretical case of 2D turbulence, there are cascades of energy and enstrophy, upscale for the former and downscale for the latter. If photon energy is directly converted to vorticity, as implied by the Alder-Wainwright mechanism, it would imply energy transfer upscale from the smallest, molecular scales, in the range of 10-100 nanometres at tropospheric temperatures and pressures. However, the point is moot because the scale invariant structure of the fluctuating abundances of the absorbers and emitters of radiation, including ozone, carbon dioxide, methane, nitrous oxide, halocarbons, aerosols and water in all its phases, means that energy is input and lost to air on all scales, eliminating the possibility of conservative energy or enstrophy cascades either upscale or downscale.
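A concrete, if entirely synthetic, illustration of the vorticity diagnostic ω = ∇ × v: the sketch below builds a solid-body 'ring current' on a 2D grid, takes its curl with central differences, and evaluates the resulting enstrophy. Grid size, domain and velocity field are all invented for the example; nothing is taken from the molecular dynamics calculations of [6].

```python
import numpy as np

# Synthetic coarse-grained velocity field on a regular 2D grid.
n, L = 128, 1.0
x = np.linspace(0.0, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
dx = x[1] - x[0]

# Solid-body-like vortex centred in the domain: a stand-in 'ring current'.
vx = -(Y - L / 2)
vy = X - L / 2

# Vertical vorticity component: omega_z = d(vy)/dx - d(vx)/dy
dvy_dx = np.gradient(vy, dx, axis=0)
dvx_dy = np.gradient(vx, dx, axis=1)
omega_z = dvy_dx - dvx_dy              # = 2 everywhere for solid-body rotation

enstrophy = 0.5 * np.mean(omega_z**2)  # half the mean squared vorticity
print(f"mean omega_z = {omega_z.mean():.3f}, enstrophy = {enstrophy:.3f}")
```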
The observed scale invariant 23/9 dimensionality of air also eliminates 2D theories from relevance.

Statistical Multifractality

The variability in air is defined by three multifractal scaling exponents, which have been evaluated from observations of adequate quality [20-23]. The mathematical structure underlying the definition of H, C1 and α may be found in these references; the notation is that in [21,22]. Table 1 shows equivalences between the variables of equilibrium statistical thermodynamics and the scaling variables from statistical multifractal analysis as applied to open, non-equilibrium systems, which the atmosphere is. The equivalences in Table 1 result from mappings [20] via Legendre transforms; they are not merely formal similarities and lead to the thermodynamic form of multifractality [22].

Table 1. Equivalence between statistical thermodynamic and scaling variables (columns: Variable, Statistical Thermodynamics, Scaling Equivalent).

The variables are obtained as follows. q defines the qth order structure function of the observed quantity. The scaling exponent K(q) is derived from the slope of a log-log plot of the signal fluctuations versus their range, and H is obtained from the corresponding structure-function relation; examination of the energy E in terms of a scale ratio produces an expression for the fractal co-dimension c(γ). C1 is the co-dimension of the mean, characterising the intensity of the intermittency. The Lévy exponent α characterises the generator of the intermittency, which is the logarithm of the turbulent flux; for a real system, its value may not be confined to the theoretical range. In practice here we find H = 0.56, C1 = 0.05 and α = 1.60. The theoretical ranges are 0 < H < 1, 0 < C1 < 1 and 0 < α < 2. A Gaussian has H = 0.50, C1 = 0 and α = 2. See [15,21,22] for further discussion, and below for analysis of observations. The actual values of the three scaling exponents for wind, temperature, ozone and other molecules may provide information about the variables and how they interact.

Vertical Scaling: Horizontal Wind, Temperature and Humidity

The vertical scaling of the horizontal wind, temperature and humidity was examined by statistical multifractal analysis of GPS dropsonde observations from the January-March NOAA Winter Storms missions of 2004, 2005 and 2006 [17-19,24-26]. The flight tracks were over the eastern Pacific Ocean of the Northern Hemisphere, from 15° to 60° N, and from 13 km altitude to the surface. Figure 3 summarises the 2006 data [19].
The different scaling for temperature has been attributed to the effect of gravity on air density, upon which it acts directly, unlike wind speed and relative humidity [26]. It will be seen later that the effects of scaling in jet streams are also significant. Meteorology 2022, 2, FOR PEER REVIEW 7 respectively. The different scaling for temperature has been attributed to the effect of gravity on air density, upon which it acts directly, unlike wind speed and relative humidity [26]. It will be seen later that the effects of scaling in jet streams are also significant. The two frames in each of (a-c) show the profile and variogram for temperature, wind speed and relative humidity respectively. H is calculated from the slope of the variogram. Note that temperature scales differently than wind speed and humidity [19,26]; the temperature profile is smoother than those for wind speed and relative humidity, reflected in the value of H approaching 1. The dropsonde results are also apparent in stability analyses of all drops treated fractally [18]. The results were valid for dry adiabatic, dynamic (Richardson number) and moist adiabatic approaches. The correlation co-dimensions were respectively 0.36, 0.22 and 0.15, a demonstration of the importance of wind shear and moisture compared to a static dry adiabatic analysis. The corresponding fractal dimensions of the Cantoresque set are 0.64, 0.78 and 0.85. In Figure 5, for one descent at 500 and 50 m resolutions, a broadly unstable lower troposphere and stable upper troposphere is seen, whereas at 15 m there are embedded unstable layers within stable layers at all scales, forming a 'Russian doll' structure characterised by a fractal dimension of 0.65. There are no unstable layers below about 50 m vertical dimension. This sonde was one of a pair dropped simultaneously; the high correlation of the structures seen by the two sondes eliminates the possibility that the signal is generated by noise [18]. The two frames in each of (a-c) show the profile and variogram for temperature, wind speed and relative humidity respectively. H is calculated from the slope of the variogram. Note that temperature scales differently than wind speed and humidity [19,26]; the temperature profile is smoother than those for wind speed and relative humidity, reflected in the value of H approaching 1. The dropsonde results are also apparent in stability analyses of all drops treated fractally [18]. The results were valid for dry adiabatic, dynamic (Richardson number) and moist adiabatic approaches. The correlation co-dimensions were respectively 0.36, 0.22 and 0.15, a demonstration of the importance of wind shear and moisture compared to a static dry adiabatic analysis. The corresponding fractal dimensions of the Cantoresque set are 0.64, 0.78 and 0.85. In Figure 5, for one descent at 500 and 50 m resolutions, a broadly unstable lower troposphere and stable upper troposphere is seen, whereas at 15 m there are embedded unstable layers within stable layers at all scales, forming a 'Russian doll' structure characterised by a fractal dimension of 0.65. There are no unstable layers below about 50 m vertical dimension. This sonde was one of a pair dropped simultaneously; the high correlation of the structures seen by the two sondes eliminates the possibility that the signal is generated by noise [18]. 
Composite variograms from all 246 useable dropsondes during the Winter Storms 2004 mission, involving 10 flights over a wide area of the eastern Pacific Ocean of the Northern Hemisphere [19], are shown in Figure 6. See [15,18,19] for further description and analysis.

The 'horizontal' scaling of the aircraft observations has been calculated separately for the NASA ER-2 and WB57F, mainly in the lower stratosphere, and for the NOAA G4-SP and NASA DC-8, mainly in the upper troposphere [15]. We note that there is an element of vertical velocity in such flight segments [27], and that during their 'vertical' segments the results for temperature, wind speed and humidity for the scaling exponent H agreed within 10% with the dropsonde results in the previous section [15,19]. The missions concerned ranged from pole to pole [15,19,28] when the DC-8 is included. Figure 7 shows the observations of the longest ER-2 flight available, just over an Earth radius long, on 19890220. It was one of the few on Arctic missions AASE, AASE-II and SOLVE that was along rather than across the lower stratospheric polar night jet stream (SPNJ). There were none along the Antarctic SPNJ during AAOE and ASHOE-MAESA [15,28]. There was a greater incidence of more variable encounters with jet streams by the WB57F during WAM, ACCENT, pre-AVE and CRYSTAL-FACE, and by the G4-SP during Winter Storms 2004-2006, with the subtropical jet stream (STJ) and the polar front jet stream (PFJ); see [29].

The presence of an element of vertical velocity in the response of an aeroplane to the air in 'horizontal' flight leads to a prediction that the relevant dimensionality under statistical multifractal analysis should be 23/9, i.e. 2 + H(s), see [27], and this indeed proves to be the case [15,21]. The average values of H for wind speed and temperature over all qualifying flight legs for all aeroplanes are in the range 0.51 < H < 0.62 with a standard deviation of ±0.01 and an average of 0.55 overall, corresponding to the predicted 5/9, which arises from 1/3 ÷ 3/5.
The values for water, ozone and nitrous oxide are discussed below in Section 3.3, on scaling in molecular species.

Figure 8 shows the scaling exponent H for wind speed during ER-2 flight segments along the Arctic SPNJ and across it for the Antarctic SPNJ. The result that the along-jet flight segment has the lowest value of H for wind speed suggests that this most highly anticorrelated value is caused by the speed shear being more effective at producing less organised, more random flow than is directional shear. The directional shear then probably accounts for the higher value in the across-jet direction, corresponding to stronger, more organised flow. The results are consistent with the shears in Figure 7c and are responsible for the exchange of air and its chemical content between the vortex and its surroundings [15,21,30].

Scaling in Jet Streams

Similar results, although less accurate, were seen by the WB57F [29] in the subtropical jet stream (STJ). The speed shear's effectiveness at producing anticorrelation, on all scales, may be why clear air turbulence is frequently experienced by aeroplanes at jet stream entrances and exits.
When all values of H for wind speed and temperature are plotted against traditional measures of jet stream strength as measured from the aeroplane, a correlation is seen in Figure 9: the stronger the jet stream as observed from horizontal gradients, the higher the value of H. A similar result may be obtained in the vertical from the GPS dropsonde data discussed in the previous section. The vertical scaling exponent for the horizontal wind is positively correlated with both jet stream depth and jet stream wind maximum, as seen in Figure 10.

Molecular and Photochemical Effects

The intimate connection between molecular behaviour and temperature seen earlier necessitates further examination of the behaviour of temperature in terms of its PDF, its intermittency and the correlation of the latter with the ozone photodissociation rate. The observations were taken in the lower stratosphere during Arctic summer (April-September) in 1997 (POLARIS) and Arctic winter (January-March) in 2000 (SOLVE). Figure 11 shows the highly non-Gaussian PDFs. These missions had measurements of the ozone photodissociation rate [31] and are discussed in [14,15] and in the next section. The relation between temperature and molecular velocity is given by equations 3.1 and 5.1 of [15].

Figure 12 shows the scaling exponent H for ozone for all ER-2 polar ozone missions: AAOE, AASE, AASE-II, ASHOE-MAESA and SOLVE. The mean value of 0.47 is less than the passive scalar (tracer) value of 0.55 (5/9), a consistent result for all molecular species under conditions where they are known to have a sink operating. The sequences of points with values of H for ozone in the range of 1/4 to 1/3 are in the polar vortices in the classic 'ozone hole' regime; the scaling is good in the presence of active photochemical loss [15,21,22,32]. The mean value of H for ozone is 0.47, reflecting ozone loss outside and inside the polar vortices in both hemispheres. Regarding water as a molecule, we can see in Figure 13 that it too can have a scaling exponent H less than the theoretical tracer value of 5/9. Here the observations are for total water (vapour plus ice measured as vapour) [15,32-34]. The sink is the falling of ice crystals under gravity, a process also seen for reactive nitrogen, NOy, in nitric acid-water particles large enough to sediment [15,35,36]. The ozone instrument [37] was the only one with data characteristics of sufficient signal-to-noise ratio and continuity to obtain the three scaling exponents, the intermittency C1 and the Lévy exponent α, in addition to the more robust H. The calculation of intermittency requires a good signal-to-noise ratio, a very low incidence of data dropouts and the use of double precision in computation.
The assumption that molecules with a sink process operative display values of H less than the 5/9 expected theoretically of a tracer (passive scalar) can be verified by inspection of Figure 14. During the ASHOE-MAESA mission of 1994, the observations of nitrous oxide, N2O, were good enough to provide an experimental test of the assumption, because N2O is a known tracer in the lower stratosphere [38]. Tracers do scale with the value of H predicted by the generalised scale invariance theory of statistical multifractals. The data extend from 59°N to 68°S in the lower stratosphere and show an ozone sink relative to the known tracer, nitrous oxide [12,13,15,28,38,39]. This result should provide a test and a diagnostic for global models of the general circulation that run chemical mechanisms, whether for weather and air quality forecasting or attempts at climate prediction.

The observed value of the scaling exponent H for observations of chlorine monoxide [40,41], the chain-carrying free radical of the ozone catalytic destruction cycle, during the SOLVE mission in the Arctic in January-March 2000 is shown in Figure 15. The progression of the ClO traces and the associated scaling from late January to mid-March shows values characteristic of production in the early phase and of a sink in the late phase. These changes were accompanied by values characteristic of an ozone sink on all three days, indicating ozone loss prior to 20000123. Those results are consistent with the scaling behaviour of NOy and its role in the processing of unreactive forms of chlorine into reactive ones by polar stratospheric clouds. A longer discussion with more figures can be found in Chapters 4.3 and 6.2 of [15].
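For reference, the framework behind this tracer test can be stated compactly; these are the standard universal multifractal relations of generalised scale invariance [20,23], quoted here rather than derived from the flight data:

```latex
% q-th order structure function over a separation \Delta x:
\langle |\Delta v(\Delta x)|^{q} \rangle \;\propto\; \Delta x^{\,\xi(q)},
\qquad \xi(q) = qH - K(q),
% with the universal multifractal moment-scaling function
K(q) = \frac{C_{1}}{\alpha - 1}\left(q^{\alpha} - q\right).
% A conserved tracer in 23/9-dimensional flow should show H = 5/9;
% an operating sink drags the fitted H below 5/9, a source raises it.
```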
Empirically, the molecular species with H values greater than 5/9 are those with a source, usually in the free atmosphere, arising either from photochemical production or conceivably from gravitational settling of crystals from above, which then evaporate. Those observations with H less than 5/9 are of species with a thermodynamically or photochemically favoured chemical sink, resulting in the dilution of energy density. During AAOE and SOLVE it was observed that the scaling exponents H and α for ozone showed correlated changes in the vortex, whereas the intermittency C1 stayed constant at about 0.05. All three exponents for wind speed and temperature were unchanged throughout the missions. Atmospheric chemistry is inherently nonlinear [12-15,26,32,42], set in an inherently nonlinear fluid medium [7-9,20,22,23]. Note, in Figure 15, the smoother, larger trace and scaling exponent in source conditions, and the rougher, lower scaling exponent in sink conditions; see [15,41,42] and, for further discussion, [13].

A diagrammatic summary of AAOE and AASE vertical and latitudinal profiles can be found in [43]. Comprehensive accounts of all the ER-2 and DC-8 missions can be found in the relevant special issues of Journal of Geophysical Research D: Atmospheres and Geophysical Research Letters: volumes 94(D9,14) and 17(4), respectively.

A further aspect of chemical kinetics lies in the application of the law of mass action in air [13]. Because in most atmospheric volumes the reactant molecules do not have random access to the entire 3D volume on a relevant time scale, their existence in 23/9-dimensional space must be accounted for, which results in the acceleration of reaction rates through the reduction of dimensionality [13].
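A crude back-of-envelope version of the dimensionality argument (an illustration under stated assumptions, not the treatment in [13]): take the reactants as confined to a support of dimension D = 23/9 between an inner scale l and an outer scale L. The support then occupies less than the full volume, so effective concentrations, and with them bimolecular encounter rates, are enhanced:

```latex
V_{D} \sim L^{D}\, l^{\,3-D}
\quad\Rightarrow\quad
\frac{n_{\mathrm{eff}}}{n_{3D}} = \frac{L^{3}}{V_{D}}
= \left(\frac{L}{l}\right)^{3-D}
= \left(\frac{L}{l}\right)^{4/9} > 1
\quad\text{for } D = \tfrac{23}{9},
% i.e. rates computed with bulk 3D concentrations underestimate the true
% encounter rates, in the direction claimed above.
```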
The Intermittency of Temperature and Its Correlation with Ozone Photodissociation Rate

An unexpected result from the measurement [31] of the ozone photodissociation rate, J[O3], was a positive correlation with the intermittency of temperature, C1(T). That triggered a search for an explanation, with the causal attribution being the production of translationally hot O and O2 photofragments recoiling into and acting in the vortices produced by the mechanisms seen in Figure 1 and justified in references [14,15,42]. The results from POLARIS in the Arctic summer of 1997 and from SOLVE in the Arctic winter of 2000 are displayed in Figure 16. C1(T) is positively correlated both with the ozone photodissociation rate and with temperature itself. By viewing Figure 11 in conjunction with Figures 16 and 17, we can conclude that atmospheric temperature is not that of a gas in local thermodynamic equilibrium [26,32]. Account must be taken of molecular behaviour from the smallest scales up to the gravest; it will mean acting on the persistence of molecular velocity after collision [5] and its breaking of the continuous translational symmetry of a thermalised gas via the Alder-Wainwright mechanism [6]. Further discussion occurs in Chapter 5.2 of reference [15] and in references [22,26,32,42]. Note that even when J[O3] is zero, in the dark, the value of C1(T) is not also zero, although it is lower than in sunlight.

Figure 16 (POLARIS, Arctic summer 1997, and SOLVE, Arctic winter, January-March 2000): the ozone photodissociation rate J(O3)[O3] is averaged over the flight segment in the left diagram, with vertical bars indicating the standard deviation and the intermittency of temperature on both abscissae. In the right diagram, the temperature on the ordinate is averaged over the flight segment, with the vertical bars indicating the standard deviation. In both diagrams, the intermittency exponent C1 for temperature T is obtained from the slope of the curve, as shown in the lower right diagram of Figure 17. Both the ozone photodissociation rate and temperature itself show positive correlation with the intermittency of temperature as measured from the aeroplane; these are respectively cause and effect. (See also Table 1 and Figure 1 of [22].)

Figure 18 shows the scaling equivalent partition function K(q), left ordinate, and the scaling equivalent Gibbs free energy, −K(q)/q, right ordinate, for the data in Figure 17. q = 1 is an indication of an approximate steady state, when both K(q) and K(q)/q are near zero. Heating will drive the air to higher values along the black curve, whereas cooling will drive it to lower values.
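A hedged note on the analogy behind Figure 18, following the thermodynamic reading of statistical multifractality in [20,22]; what is sketched here is the identification of roles, using the universal form of K(q), not the flight data themselves:

```latex
% Moment order q plays the role of an inverse temperature; K(q), defined by
\langle \varepsilon_{\lambda}^{\,q} \rangle \;\propto\; \lambda^{K(q)},
% plays the role of a free energy, with K(1) = 0 expressing flux
% conservation (the steady state at q = 1). For universal multifractals,
K(q) = \frac{C_{1}}{\alpha - 1}\left(q^{\alpha} - q\right)
\;\Rightarrow\;
-\frac{K(q)}{q} = \frac{C_{1}}{\alpha - 1}\left(1 - q^{\,\alpha - 1}\right),
% which vanishes at q = 1, is positive for q < 1 and negative for q > 1,
% matching the sign change of the scaling Gibbs free energy across the
% steady state in Figure 18.
```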
Scaling Based Entropy and Gibbs Free Energy

Given that the airborne in situ measurements of molecular species and photodissociation rates are unlikely to become routine, and are presently unattainable from satellite remote sounding, what can be done? Coverage to the extent necessary is not foreseeable, either locally by airborne methods or globally by satellites. One alternative approach is outlined in [22], where the thermodynamic form of statistical multifractality [20] is adapted to produce Table 1. Those results enable diagnosis of steady states and system directionality from winds and temperatures, which are observed globally in a routine manner. The results in [22] vindicate the 'bare' cascade models of Schertzer and Lovejoy [9] and Lovejoy and Schertzer [20] by producing results from 'dressed' curves such as that in Figure 18, based on the data in Figure 17. Similar results were obtained for wind speed and ozone and are representative of all 140 suitable ER-2 flight segments between 1987 and 2000. Here, it is suggested that these diagnostics should prove applicable to global atmospheric models, particularly those dealing with air pollution and climate, wherein molecular species have to be simulated. Gibbs free energy does the work that drives the circulation to a different steady state under a perturbation, either on cooling or heating, after the entropic effects of dissipation have been accounted for.

What molecular behaviour should be expected in the non-equilibrium conditions displayed in Figures 17 and 18? How will translationally hot and rotationally hot air molecules be manifest, whether observed by, for example, direct molecular beam sampling instruments or calculated by molecular dynamics methods? The red curve in Figure 19 is a hypothetical curve of the PDF of such molecular velocities, whereas the black curve represents an equilibrium Maxwell-Boltzmann state. The difference of the integrals beneath them via Equation (10) provides the Gibbs free energy. The atomic and molecular fragments from ozone photodissociation, which happens from the ultraviolet Hartley band through the Huggins, Chappuis and Wulf bands that stretch across the visible to the near infrared, can have up to an order of magnitude more energy than the average molecules.
The symmetry breaking causing non-Maxwellian velocity distributions, and hence non-equilibrium temperatures, will also limit theoretical and analytical techniques that impose a symmetry on the air that it does not possess; Fourier analysis for wave-based formalisms is a candidate for such limitations.

Aerosols and Scaling

The importance of aerosols for calculating the atmospheric state under global heating from fossil fuel combustion has been recognised as central for at least two decades [44], via their effect on radiative transfer and hence temperature. Recently, the COVID-19 pandemic virus, with a diameter of about 80 nm depending on water content, has been observed to be of a size that enables it to be transported long distances by winds [45,46]. The dimensions of atmospheric aerosols span the scales from micro through meso to macro, and so are candidates for the application of scale-invariant analysis [22,32,47]. Symmetry breaking is an important process in aerosols that have organic coatings, with division being asymmetric into a virally sized and a bacterially sized pair [48,49]. It is also important in the chemistry associated with microdroplets, in that reactions can differ between the different environments: the containing bulk medium, the interior of the particle and, importantly, its surface, where the free energy is concentrated in the form of surface tension [48-51].

From a climate heating prediction and action point of view, there are many uncertainties associated with aerosols in addition to those mentioned above and referenced in [42,44]. References [42,52] discuss them, and make it clear that geoengineering actions such as 'solar radiation management' are high-risk options with uncertain outcomes. One perspective that can be offered on the role of airborne transmission of bacteria and viruses is that atmospheric oxidant molecules, such as OH, HO2, RO2 and NOx, will act as agents of evolution by natural selection ([32] and references therein), resulting in coatings that are more resistant to oxidative attack in air. This effect will vary with the fluctuating abundances of these molecules by geographic location, abundances which, like all atmospheric constituents and variables, are influenced by scale-invariant turbulent transfer. Aerosol sizes have power-law distributions in the atmosphere, characteristic of scale invariance.
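Such power-law size distributions are classically written in the Junge form; the exponent below is the textbook value, quoted as context rather than taken from [22,32,47]:

```latex
\frac{dN}{d\log r} = c\, r^{-\nu}, \qquad \nu \approx 3
% over much of the sub-micron range, i.e. dN/dr ~ r^{-(\nu+1)} ~ r^{-4}:
% a power law with no preferred size, the signature of scale invariance.
```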
Models and Scaling

Scaling analyses, with the associated scale invariance and statistical multifractality, provide a way of testing numerical models of the atmosphere that is an alternative to comparisons with observations localised geographically and temporally, such as zonal and monthly means [15,20,53-57]. Such models have, by necessity, to parameterise the lowest 8 decades of scale down from the gravest of the 15 that span the molecular mean free path to a great circle. The issues focused on here are tests, cold biases and maximum wind speeds. A start has been made in [55,56].

Scaling Issues and Tests for Models

An early attempt at comparing a scaling analysis of an aircraft's observations of temperature and wind speed with a simulation revealed that the MM5 model, inset into an ECMWF analysis for the flight on 19980411, did an inadequate job of simulating the mountain waves and the associated severe turbulence in the lee of the Wyoming Rocky Mountains [15,57]. Although the model simulation did scale, it was with different scaling exponents, as seen in Figures 4.13-4.15 and Table 4.2 of [15] and in [57]. More general results were obtained in [55], where global analyses from operational forecasting suites did scale in a manner similar to the predictions of the theories expounded in [20,23], but in better agreement along latitude circles than along meridians. We return to the disagreements seen with dropsonde data by [19] in Section 7.3 below. Detailed comparison of the structure of vortex species along ER-2 flight paths with potential vorticity (PV) fields derived from operational assimilations also revealed a lack of correlation on any but the coarsest scale [21,58,59]. That was not a surprise, given the resolution of the models: T63 in 1989 and T799 in 2010.

It is apparent that the observations needed to implement scaling tests for numerical model simulations are in rather short supply. In particular, there are few meteorological research or operational aircraft flights with the necessary flight paths and quality. Commercial aviation data are available but, although they are invaluable for improving routine requirements such as wind speeds and temperatures by assimilation, they are not yet suitable for scaling analysis. One observational experiment that could be performed [26,32] would be the deployment of GPS dropsondes from a large constant-level balloon in the upper stratosphere. It would examine the hypothesis that the cold bias in free-running models is caused by ozone photodissociation inducing a non-equilibrium state via the persistence of molecular velocity breaking the continuous translational symmetry of equilibrated air, with associated induction of molecular-scale vorticity via the Alder-Wainwright mechanism. Observation of J(O3)[O3] from the large balloon or even the dropsondes would add support.

Scaling and the Cold Bias in Models

There has been a persistent cold bias in free-running numerical models of the global atmosphere, confirmed recently by comparison of upper stratospheric analyses with observations from lidar temperature profilers [60]. Suggestions have been made, based on scaling approaches, that these cold biases may be the result of inadequate accounting for molecular effects [15,26,32,42] arising from results reported in [19]. The intermittency of temperature and its correlation with the ozone photodissociation rate evident in Figure 16 provided the clue to the non-equilibrium state of the air in the lower stratosphere, discussed at length in [15,32]. The posited cause is the recoil of translationally and rotationally 'hot' fragments from ozone photodissociation. Molecular effects causing departures from equilibrium will increase with altitude, as pressure quenching of 'hot' atoms and molecules and the effect of gravity decrease [26,32]; the ozone photodissociation rate increases with height above the maximum in ozone density at 24-30 km. Those effects will have the largest influence on temperature in the upper stratosphere.
Scaling, Molecules and Maximum Observed Wind Speeds

The wind speeds that have been recorded and analysed in the atmosphere have maxima, in locations such as the subtropical jet stream and the polar night stratospheric jet stream, that exceed one quarter and even one half the speed of sound, particularly in the upper stratosphere [21,61] (for example, 364 knots (180 m s−1) at 47 km altitude above South Uist in December 1967), and also in more local phenomena such as hurricanes and tornadoes. These velocities of order 100 m s−1 call into question the assumptions made in numerical models of the atmosphere in integrating the Navier-Stokes dynamical equations, the most significant of which is that the maximum fluid velocity should be much less than the average molecular velocity. Wind shears produce gradients of a steepness that is also beyond the capacity of current models to simulate accurately, particularly in the presence of intense turbulence [57]. The result that speed shear in the along-jet direction is most effective at producing anticorrelation in jet streams, as shown in Figure 8 and discussed in Section 3.3, may account for the fact that turbulence in clear air is experienced by aircraft most frequently near jet stream entrances and exits; see Section 6.1.6 of [61].

Some Examples

Scaling analysis has been applied to ground-based total ozone [62-64] and the incident solar flux [65], illustrating long memory in total (overhead column) ozone and the fact that incident solar radiation also shows scale invariance, with considerable intermittency. The reference list contains many papers in which statistical multifractals have been used to analyse a wide range of geophysical processes, most notably climate [20,23].

Why Turbulence?

This question, asked by Heisenberg three decades after his formulation of isotropic turbulence [66], had provoked earlier comment by the fluid dynamicists Lamb and von Kármán and by the theoretical physicist Feynman. Eady [67] argued that the turbulent transfer of heat was the fundamental driver of the atmospheric circulation, with such phenomena as jet streams and the Hadley, Walker and Ferrel cells being secondary. This idea was examined more mathematically by Eady and Sawyer [68]. It is consistent with the view argued here and in [26,32] to the effect that energy is deposited in the air on the smallest scale, that of molecules and photons, and propagates upscale initially. The mechanism is the persistence of molecular velocity after collision, which breaks the continuous translational symmetry of an equilibrated gas, with concomitant generation of vorticity and with the most energetic molecules carrying the Gibbs free energy that provides the work driving the general circulation. The Gibbs energy is paid for by the dissipation and entropy production arising from outgoing infrared radiation at an average of 255 K to the 2.7 K sink of space, compared to the relatively organised, low-entropy incoming solar beam from the solar source at 5800 K, given that S = Q/T, where S is the entropy of the black-body radiation source, Q is the emitted radiative energy and T is the black-body temperature of the source (S = 4Q/3T if radiation pressure is accounted for [69]). All scales interact and so maintain the observed scale invariance. Turbulent flow is the emergent, organised component of the fluctuation-dissipation theorem, and radiation to space is the entropic dissipation.
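Two quick checks of the figures quoted in this section, using only standard constants; the stratopause temperature of roughly 270 K is an assumed typical value, not an observation cited here:

```latex
% Speed of sound near the stratopause:
c = \sqrt{\gamma R_{d} T} \approx \sqrt{1.4 \times 287 \times 270}
\approx 330\ \mathrm{m\,s^{-1}},
% so the 180 m/s South Uist wind is ~0.55 c, i.e. "one half the speed of sound".
% Entropy exchange per unit radiated energy Q:
\Delta S \approx \frac{Q}{255\ \mathrm{K}} - \frac{Q}{5800\ \mathrm{K}} > 0,
% an export of roughly 5800/255 ~ 23 times more entropy than is imported,
% paying for the Gibbs free energy that drives the circulation.
```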
Conclusions

The theory of statistical multifractals [7-9,20,23] has been vindicated by analysis of observational data from research aircraft, in both the horizontal and the vertical [10-14,17-19,21,22,24-27]. Atmospheric dimensionality is predicted and observed to be 23/9. When a given population of molecules occupies a 23/9-dimensional space, they will encounter each other more frequently than in a 3D space, altering the Law of Mass Action. Correlation of the ozone photodissociation rate with both temperature and its intermittency in the Arctic lower stratosphere has led to an interpretation in terms of molecular dynamics, with the persistence of molecular velocity breaking the continuous translational symmetry of a thermalised gas, resulting in the production of vorticity on the smallest time and space scales, i.e., those of photons and molecules, from which the air acquires its energy. These vorticity structures represent fluctuation, whereas the associated maintenance of an operational temperature by dissipation completes the Langevin equation, doing so nonlinearly and in reverse to the conventional view of the mean as organisation and the departures from it ('eddies') as dissipation. Temperature, through molecular velocities, is the integrator. It acts differently than other variables, because air density is acted upon directly by gravity. Scaling exponents can characterise the chemical and physical operation of sources and sinks for reactive free radicals, ozone and total water, independently of numerical modelling. The interpretation of vertical scaling, as gravity and pressure decrease and the ozone photodissociation rate increases from the tropopause to the stratopause, via the thermal wind or barometric equations, suggests that a molecular approach to the formulation of general circulation models can address the cold bias they still display in the upper stratosphere. The statistical multifractal approach to Gibbs free energy offers a new view of what drives the general circulation: it is the work enabled by the Gibbs free energy.
An improved Bergeron differential protection for half-wavelength AC transmission line

Half-wavelength AC transmission lines have long transmission distances and high voltage levels, and their fault characteristics differ significantly from those of conventional transmission lines. In order to reduce the interference of the distributed capacitive current of a half-wavelength AC transmission line with the current differential protection calculation, this paper proposes a new current differential protection scheme based on the Bergeron model. To address the problem of the small differential current when a short circuit fault occurs near the midpoint, a solution using different methods to calculate the setting value in different areas is proposed: the protection operates quickly near the terminals and with a delay in the middle area. Simulation and verification on the PSCAD experimental platform show that when a fault occurs at either terminal of the line, the protection operates in about 10 ms; when the fault occurs in the middle area, the protection operates with a delay. The experimental results show that the actions and performance of the protection device meet the requirements for safe operation of half-wavelength transmission lines.

Introduction

Half-wavelength AC transmission refers to transmission technology near a half wave (a line length of 3000 km at a transmission frequency of 50 Hz, and 2500 km at 60 Hz). Half-wavelength AC transmission technology has better economic efficiency in ultra-long-distance power transmission and great development potential. Compared with traditional AC transmission technology, it has significant economic and technical advantages: less voltage loss, good stability, no need to install reactive power compensation devices, no need to set up intermediate stations, and overall economy. With the advancement of the global energy Internet, half-wavelength transmission, as a solution suitable for large-scale intercontinental power transmission, has received widespread attention.

The transmission distance of a half-wavelength AC transmission line is a half wave. The voltage and current are functions not only of time but also of distance, so the line cannot be regarded as a lumped-parameter line. Thus, the electrical and fault characteristics of the half-wavelength transmission line also differ significantly from those of a conventional line. A number of problems arise in actual operation: the technology is only applicable to lines whose length is close to a half-wavelength; very large currents and overvoltages can occur under short circuit faults; and traditional relay protection is not applicable to half-wavelength transmission lines. Due to the superior economic efficiency of half-wavelength transmission lines, scholars at home and abroad have conducted a great deal of research on their characteristics for decades, covering line tuning [1]-[2], submersible current [3]-[4], overvoltage [5]-[7], insulation coordination [8], evaluation of economics and reliability [9], protection [10]-[13], etc. However, research on relay protection for half-wavelength transmission lines has only just started. This article focuses on relay protection for half-wavelength transmission lines.
Since the electrical and fault characteristics of half-wavelength transmission lines are significantly different from those of conventional lines, many problems arise when traditional relay protection schemes are applied to them. For example, when conventional current differential protection is applied to a half-wavelength transmission line: 1) the voltage and current characteristic curves are non-linear and non-monotonic, so the capacitive current along the line cannot be accurately calculated; 2) when the fault occurs at the midpoint of the line, the differential current is close to 0, raising the question of how to choose the action value; 3) the transmission line has a long channel delay, and the transmission time of the electromagnetic wave becomes longer.

In order to solve the above problems, this paper proposes a new differential protection scheme: 1) the line adopts the Bergeron model, which converts the distributed-parameter element into an equivalent lumped-parameter element and so eliminates the influence of the distributed capacitance current on the protection; 2) to address the small fault current for faults located at the midpoint of the line, a low setting value is used to improve the protection sensitivity; 3) to reduce the protection action time, the protection adopts two stages: quick action at the two terminals and delayed action in the middle area. On the premise that the protection operates correctly, quick action is guaranteed to the greatest extent. Experimental results show that the actions and performance of the protection device meet the requirements for safe operation of half-wavelength transmission lines.

IMPROVED BERGERON DIFFERENTIAL PROTECTION METHOD FOR HALF-WAVELENGTH AC TRANSMISSION LINE

Compared with traditional relay protection methods, Bergeron differential protection has the advantage of reducing the impact of distributed capacitive current on conventional current differential protection. Its principle is still based on Kirchhoff's law. Although the transmission line is 3000 km long, Bergeron differential protection still performs well. For the half-wavelength transmission system shown in Figure 1, Bergeron differential protection can be implemented at the M terminal. The detailed steps are as follows:

1) The transient instantaneous values of the three-phase voltage and current at the M terminal, obtained by the relay protection measuring device, are recorded as um(t) and im(t) respectively. The transient instantaneous values of the three-phase voltage and current at the N terminal, obtained by a synchronous measurement device and a low-delay optical fiber communication device, are recorded as un(t) and in(t) respectively.

2) The Karrenbauer transform is used to eliminate the electromagnetic coupling among the three phases of the line; the modal values of the voltage and current in the time domain are then converted to the frequency domain using the Fourier transform. It is then straightforward to obtain the fundamental components of voltage and current at time t, recorded as Um(t), Im(t), Un(t) and In(t) respectively.

3) The fundamental components of voltage and current at the M and N terminals at time t−τ are Um(t−τ), Im(t−τ), Un(t−τ) and In(t−τ).
For the M terminal, the current at time t can then be computed from these quantities via the Bergeron line equations (the displayed expressions did not survive extraction).

4) The specific judgment basis for the Bergeron differential protection at the M terminal is formulated from the resulting Bergeron differential current, which is directly related to the current at the fault point. The current at the fault point can be described as a function of the fault distance, and according to this expression (formula (3)), the current at the short circuit point can be calculated from the two-terminal quantities. Considering the special cases of a three-phase short circuit fault at the midpoint of the half-wavelength line and at the M terminal: for an actual half-wavelength transmission system, the system voltages at the M and N terminals are generally equal in magnitude and opposite in phase. When a short circuit fault occurs at the midpoint of the line, the Bergeron differential current is approximately equal to the short circuit fault current, which is 0; when a short circuit fault occurs at the M or N terminal, the short circuit current, and hence the Bergeron differential current, is very large. Conventional Bergeron differential protection is thus clearly effective for faults at the M and N terminals of the line. But when the fault occurs near the midpoint, its effect is no longer obvious and there may even be a dead zone in the relay protection, which may cause the protection device to refuse to act when a short circuit fault occurs.

In order to improve the reliability of the Bergeron differential protection on a half-wavelength transmission system, the sensitivity of the protection device needs to be improved. The Bergeron differential protection scheme proposed in this paper divides the half-wavelength transmission line into an intermediate section (1000-2000 km along the line) and the sections outside it. There is a known relationship between the electrical quantities at the M terminal and the distance to the fault (formula (7)). Supplementing the criterion on the basis of the previous analysis can further improve the sensitivity: on the condition that short circuit faults outside the area do not cause maloperation, short circuit faults in the area can be cleared through appropriately delayed operation. Formula (7) serves as the theoretical basis for the supplementary criterion. The internal resistance of the M-terminal power source can be ignored, but the reactance value should be retained; after this simplification, the power source impedance retains only its imaginary part. Substituting this impedance into formula (7) gives the current at the M terminal as formula (8). From formula (8), it can be seen that when the short circuit fault occurs in the middle section (the area close to the midpoint of the line), the denominator tends to infinity. This makes the current measured by the protection device at its installation point very small, possibly even less than the load current. But when the short circuit fault occurs on the line outside the middle section, the measured current is very large, possibly close to that of a terminal short circuit. By observing the value of the protection current, it is therefore possible to judge whether the short circuit fault has occurred in the middle section or outside it.
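Since the displayed equations were lost, the following is a minimal, hedged sketch of the calculation chain just described, using the standard lossless distributed-parameter (Bergeron-type) phasor relations rather than the paper's own formulas; the Karrenbauer convention and the sign reference (both terminal currents taken as flowing into the line) are assumptions.

```python
import numpy as np

# Karrenbauer phase-to-modal transform (one common convention), decoupling
# the three phases into a ground mode and two aerial modes.
T_INV = np.array([[1.0,  1.0,  1.0],
                  [1.0, -1.0,  0.0],
                  [1.0,  0.0, -1.0]]) / 3.0

def to_modes(phase_abc):
    """Convert three-phase quantities to decoupled modal quantities."""
    return T_INV @ np.asarray(phase_abc)

def current_at_x(U_term, I_term, x, beta, Zc):
    """Fundamental-frequency current phasor at distance x from a terminal,
    computed from that terminal's voltage/current on a lossless line with
    phase constant beta and surge impedance Zc."""
    return I_term * np.cos(beta * x) - 1j * (U_term / Zc) * np.sin(beta * x)

def differential_current(Um, Im, Un, In, x, length, beta, Zc):
    """Sum of the currents computed toward point x from both terminals:
    ~0 for an unfaulted point, ~the injected fault current when a fault
    sits at x."""
    return (current_at_x(Um, Im, x, beta, Zc)
            + current_at_x(Un, In, length - x, beta, Zc))
```

Sweeping x along the line, the magnitude of `differential_current` reproduces the qualitative behaviour described above: near zero for a midpoint fault of a symmetrically fed half-wavelength line, large for terminal faults.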
This paper adopts a Bergeron differential protection scheme with both high-sensitivity and low-sensitivity stages, whose criteria satisfy formulas (9)-(11), where Imdset1 is the setting value of the low-sensitivity differential protection, set slightly larger than the maximum differential current when a short circuit occurs outside the zone; Imdset2 is the setting value of the high-sensitivity differential protection, based on the minimum short circuit current; and Imdset1 must be greater than Imdset2. Imset is the setting value used to judge the fault section: the measured current at the M terminal, Im, is larger than the current for an internal short circuit fault in the middle section, but less than that for an external short circuit fault. If formula (9) is satisfied, it can be judged that a serious short circuit fault has occurred on the line and the protection device needs to act immediately; if formulas (10) and (11) are both satisfied, it can be judged that the short circuit fault has occurred in the middle section of the half-wavelength line and the protection device can act after the set delay; if formula (10) is satisfied but formula (11) is not, it can be judged that the short circuit fault has occurred outside the area and the protection device does not act.

SIMULATION VERIFICATION

A half-wavelength AC transmission system as shown in Figure 1 was built. The specific per-unit-length line parameters include: zero sequence resistance R0, positive sequence resistance R1, zero sequence inductive reactance XL0, positive sequence inductive reactance XL1, zero sequence capacitive reactance XC0 and positive sequence capacitive reactance XC1, where R0 = 0.139706 Ω/km, R1 = 0.00647 Ω/km, XL0 = 0.876470 Ω/km, XL1 = 0.25294 Ω/km, XC0 = 0.393547 MΩ·km and XC1 = 0.226413 MΩ·km. In order to reduce the influence of overvoltage and overcurrent in the actual measurement, voltage transformers (PT) and current transformers (CT) are installed at the M and N terminals, with conversion ratios of 1000 kV / 100 V and 5000 A / 1 A respectively.

When the half-wavelength AC transmission system operates normally, the voltage, current, differential current and impedance at the M terminal remain constant. However, when a short circuit fault occurs, the voltage amplitude first increases and then decreases as the short circuit distance changes. The voltage amplitude reaches its maximum at 2850 km, and the rate of change of the voltage amplitude is greater when the short circuit point is close to 3000 km. As the short circuit point approaches the midpoint of the transmission line, the current amplitude at the M terminal gradually decreases and its rate of change becomes smaller and smaller. When the short circuit point is beyond the midpoint of the line, the current amplitude first increases and then decreases, also reaching its maximum at 2850 km. In the conventional protection scheme, two maxima of the differential current appear, at 150 km and 2850 km respectively, as the short circuit point moves. When the short circuit point is close to the midpoint of the line, the amplitude of the current is small, even close to zero at the midpoint, which easily causes the relay protection device to refuse to trip.
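As a worked check, the positive sequence per-km parameters quoted above fix the surge impedance, wave speed and travel time; nothing beyond the stated values and f = 50 Hz is assumed. The roughly 10 ms end-to-end travel time τ is also the delay appearing in the t−τ terms of the Bergeron steps above, and is consistent with the ~10 ms operating times reported below.

```python
import numpy as np

f = 50.0
w = 2 * np.pi * f
L1 = 0.25294 / w            # positive sequence inductance, H/km
C1 = 1 / (w * 0.226413e6)   # positive sequence capacitance, F/km

Zc = np.sqrt(L1 / C1)       # surge impedance, ~239 ohm
v = 1 / np.sqrt(L1 * C1)    # wave speed, ~2.97e5 km/s
half_wave = v / f / 2       # ~2970 km, consistent with the ~3000 km line
tau = 3000.0 / v            # end-to-end travel time, ~10 ms

print(f"Zc = {Zc:.0f} ohm, v = {v:.3e} km/s, "
      f"half-wavelength = {half_wave:.0f} km, tau = {tau*1e3:.1f} ms")
```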
The amplitude of the impedance reaches its only maximum at the midpoint of the line and is almost symmetric about the midpoint; the phase angle of the impedance is also almost symmetric about the midpoint. When the short circuit occurs at the midpoint, the phase angle of the impedance changes abruptly to 180 degrees. From the above analysis, it can be seen that when a short circuit fault occurs in the middle section of the line, that is, between 1000 km and 2000 km, the conventional differential current will be close to zero, while when a short circuit fault occurs at the M or N terminal, the conventional differential current is very large. Conventional differential protection can achieve good results for faults at the M and N terminals, but when the fault occurs in the middle section, between 1000 km and 2000 km, its effect is no longer obvious; there may even be a dead zone in the relay protection, meaning the normal fault-clearing operation cannot be performed after the short circuit fault occurs. Therefore, traditional differential protection urgently needs improvement.

To structure the protection scheme, this paper divides faults into three types: a short circuit fault between the M and N terminals is defined as an internal fault; a short circuit fault in the middle section, between 1000 km and 2000 km, is defined as a middle-section fault; and a short circuit fault outside the line is defined as an external fault.

The simulation duration is set to 4 s, the fault is applied at 1 s and the fault duration is set to 3 s. The simulation results for the differential currents at the M and N terminals when three-phase short circuits occur at different locations are shown in Figure 3 (panel (d) shows the differential currents of the two terminals when a three-phase short circuit fault occurs outside the zone). They show that when short circuit faults occur at the M or N terminal, the amplitude of the Bergeron differential current at both terminals is very large; but when an internal fault occurs at the midpoint, or a fault occurs outside the area, the amplitude of the Bergeron differential current at the M and N terminals is very small. Other types of short circuit fault have similar characteristics. Therefore, simple Bergeron differential protection can only protect the area outside the middle section, not the entire half-wavelength transmission line.

According to the previous analysis, improving the conventional Bergeron differential protection requires choosing a proper setting value for the low-sensitivity differential protection, Imdset1 in formula (9), a proper setting value for the high-sensitivity differential protection, Imdset2 in formula (10), and the setting value for judging the fault section, Imset in formula (11). Considering only the values on the secondary side, Imdset1, Imdset2 and Imset are set to 0.4 A, 0.1 A and 0.2 A respectively. Using MATLAB, the improved Bergeron differential protection was simulated so that the high-sensitivity stage cooperates well with the low-sensitivity stage. The specific protection action times are shown in Tables 1-4.
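The two-stage decision just described can be summarised in a few lines; this is a hypothetical illustration whose inequality directions are inferred from the prose, since formulas (9)-(11) themselves did not survive extraction.

```python
# Secondary-side settings quoted in the text.
IMDSET1 = 0.4   # A, low-sensitivity differential setting
IMDSET2 = 0.1   # A, high-sensitivity differential setting
IMSET   = 0.2   # A, section-discrimination setting on the M-terminal current

def protection_decision(I_diff, I_m):
    """Decide the protection action from the Bergeron differential current
    I_diff and the measured M-terminal current I_m (inferred logic)."""
    if I_diff > IMDSET1:                    # formula (9): severe fault
        return "trip immediately"
    if I_diff > IMDSET2 and I_m < IMSET:    # formulas (10) and (11): middle section
        return "trip after set delay"
    return "no action"                      # (10) without (11): external fault
```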
From the analysis of Tables 1-4, it can be seen that the high-sensitivity protection cooperates well with the low-sensitivity protection, which achieves protection of the entire line. When a short circuit fault occurs at the M or N terminal, the Bergeron differential protection device at the M terminal acts quickly, completing within 10 ms. When a fault occurs in the middle section of the line, a certain time delay is required before the protection device acts; the protection action time at the midpoint generally exceeds 60 ms. When a fault occurs outside the line, the protection device does not operate. Generally speaking, the protection action times are within the range allowed in actual engineering. The improved Bergeron differential protection scheme in this paper has high reliability and can significantly improve the safety of half-wavelength power transmission systems.

CONCLUSION

Half-wavelength AC transmission lines have long transmission distances and high voltage levels, and their fault characteristics are significantly different from those of conventional transmission lines. Several problems arise in actual protection operation: the capacitive current along the half-wavelength transmission line cannot be accurately calculated; the differential current is close to 0 when the fault occurs at the midpoint; and the line has a long channel delay, with a longer electromagnetic wave transmission time. Therefore, conventional differential protection cannot be used on the half-wavelength transmission line. This paper uses a new differential protection scheme based on the Bergeron model, which solves these problems well: the Bergeron model converts the distributed-parameter element into an equivalent lumped-parameter element, eliminating the influence of the distributed capacitance current; to handle the small fault current for faults at the midpoint of the line, a low setting value is used to improve the protection sensitivity; and to reduce the protection action time, a two-stage scheme of quick action over the whole line and delayed action in the middle area is used. On the premise that the protection operates correctly, quick action is guaranteed to the greatest extent. Experimental results show that the actions and performance of the protection device meet the requirements for safe operation of half-wavelength transmission lines.
Influence of the geometry of nanostructured hydroxyapatite and alginate composites in the initial phase of bone repair

Abstract

Purpose: To analyze, histomorphologically, the influence of the geometry of nanostructured hydroxyapatite and alginate (HAn/Alg) composites in the initial phase of bone repair. Methods: Fifteen rats were distributed into three groups: MiHA, bone defect filled with HAn/Alg microspheres; GrHA, bone defect filled with HAn/Alg granules; and DV, empty bone defect; evaluated 15 days postoperatively. The experimental surgical model was the critical bone defect, ≅8.5 mm, in rat calvaria. After euthanasia, the specimens were embedded in paraffin and stained with hematoxylin and eosin, picrosirius and Masson-Goldner's trichrome. Results: The histomorphologic analysis showed, in MiHA, deposition of osteoid matrix within some microspheres and around the others, near the bone edges. In GrHA, the deposition of this matrix was scarce inside and adjacent to the granules. In these two groups, chronic granulomatous inflammation was noted, more evident in GrHA. In DV, bone neoformation was restricted to the bone edges, with formation of connective tissue of reduced thickness in relation to the bone edges throughout the defect. Conclusion: The geometry of the biomaterials was determinant in the tissue response, since the microspheres proved more favorable to bone regeneration than the granules.

■ Introduction

Researchers in bone tissue bioengineering have sought to develop ideal conditions for the repair and/or replacement of damaged or lost tissue through the use of cellular elements, growth factors, regenerative techniques and biomaterials, in order to provide the scaffold and essential requirements for tissue neoformation [1-3]. Biomaterials can be synthesized from different substrates and processed in different forms of presentation, namely fiber, membrane, gel, powder, among others, and in different geometries such as plates, cylinders, microspheres and granules. Microspheres have attracted great scientific interest due, in particular, to their ability to promote the formation of interstices between them, which allows cell migration, adhesion, proliferation and differentiation (mainly of mesenchymal and osteoprogenitor cells), liberation of growth factors, angiogenesis, diffusion of nutrients and new extracellular matrix (ECM) synthesis [4]. In turn, granules, in addition to having the aforementioned properties, are widely used in clinical practice for filling defects and tissue lesions with irregular shapes. Both microspheres and granules can be applied through injectable systems in minimally invasive surgical procedures [5].

Among the substrates most used in the synthesis of biomaterials with the geometry of microspheres and granules, bioceramics based on calcium phosphate (CaP) stand out, mainly HA, due to its biocompatibility, osteoconduction and bioactivity [6]. However, in spite of these fundamental properties, in the biological interaction with the host HA presents a low in vivo degradation rate which, in some applications, may limit its use [7]. In seeking to improve the physicochemical characteristics of HA, researchers have developed this material on a nanometer scale, considering that nanostructured HA crystals (HAn) have higher biodegradation due to the smaller size of their particles and larger surface area exposed to the biological environment, which accelerates the formation and growth of the biologically active apatite layer [7,8]. Another way to improve bioceramics is to associate them with natural or synthetic polymers to produce composite biomaterials [2,9]. These materials combine, in the same scaffold, physical-chemical properties of the ceramic and the polymer, which are improved in relation to the materials used individually, and mimic the inorganic and organic phases of natural bone [10-12]. In this perspective, alginate is a widely used natural polymer, since it can alter the crystallinity, solubility, network parameters, thermal stability, surface reactivity, bioactivity and adsorption properties of the HA structure [9,13,14]. Therefore, the physicochemical characteristics of the composites vary according to the polymer and the percentage used during synthesis, as well as the processing of the sample and the final geometry of the biomaterial produced. Thus, these biomaterials, especially in the form of microspheres and granules, are a promising alternative for bone substitution, particularly in situations where damage and/or trauma reach critical dimensions that preclude spontaneous bone regeneration and impair the function or aesthetics of the affected region [15,16].
Given the above, and parallel to the worldwide need to develop new biomaterials, with national technology and affordable cost, more versatile and with promising biological properties, especially for cases of extensive bone loss, the present study aims to analyze the influence of the geometry of HAn/Alg composites in the initial phase of bone repair.

■ Methods

The resulting precipitate was filtered and washed until the pH of the wash water was 7. Soon after, the solid obtained was dried by freeze-drying for 24 h and then separated using sieves with the desired mesh aperture. 15 g of the obtained solid were weighed into a beaker and added to a 1.5% w/v solution of sodium alginate, vigorously mixed until a homogeneous paste was obtained. To obtain the microspheres, this paste was extruded with a syringe into 0.15 M calcium chloride solution at room temperature; the microspheres were then washed, dried in an oven at 50 °C and sieved in the granulometric range of 250-425 μm. To obtain the granules, the paste was dried in an oven, then crushed and sieved in the granulometric range of 250-425 μm. The biomaterial samples were conditioned in Eppendorf tubes, properly identified, and sterilized by gamma rays (Fig. 1). Each aliquot was used to fill the bone defects of approximately four animals.

Superficial area

The characterization of the surface area by the BET method (ASAP 2020, Micromeritics®) showed that the HA evaluated in our study has a surface area of 35.9501 m²/g, characteristic of nanometric biomaterials.

Chemical analysis

Chemical analysis by X-ray fluorescence (XRF) (PW2400, Philips®) demonstrated that the HA evaluated in this work presents a Ca/P molar ratio of 1.67 (Table 1), consistent with stoichiometric hydroxyapatite, Ca10(PO4)6(OH)2.
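As a side note on the XRF result, a Ca/P molar ratio is typically derived from oxide mass fractions; the wt% values in this sketch are hypothetical placeholders chosen to reproduce the reported 1.67, not the data of the paper's Table 1.

```python
# Hypothetical XRF oxide mass fractions (wt%); placeholders only.
w_CaO, w_P2O5 = 55.8, 42.4

M_CaO, M_P2O5 = 56.08, 141.94        # molar masses, g/mol
mol_Ca = w_CaO / M_CaO               # 1 Ca per CaO
mol_P = 2 * w_P2O5 / M_P2O5          # 2 P per P2O5

print(f"Ca/P = {mol_Ca / mol_P:.2f}")  # stoichiometric HA gives ~1.67
```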
X-ray diffraction (XRD) analysis

The X-ray diffraction analysis was performed using the high-resolution diffractometer HZG4 (Zeiss®) with CuKα radiation (λ = 1.5418 Å) and angular scanning of 10-80° (2θ), with a step of 0.05°/s and a time of 160 seconds, with reference to the standard PCPDFWIN 09.0432 (International Centre for Diffraction Data, ICDD) [17,18]. The diffractogram evidenced peaks corresponding to the crystalline profile of a standard HA (Fig. 2).

Experimental phase

Fifteen male Wistar albino adult rats with body weight between 350 and 400 g were randomly distributed into three experimental groups, with 5 animals in each group: MiHA, bone defect filled with HAn/alginate microspheres; GrHA, bone defect filled with HAn/alginate granules; and DV, bone defect without implantation of biomaterial.

Surgical procedure

The surgical technique used was the same described by Miguel et al. [19]. However, in the present study, the 8.0 mm trephine drill used to create the critical bone defect produced a defect ≅8.5 mm in diameter and ≅0.8 mm in thickness. After removal of the bone fragment, the biomaterials were implanted according to each experimental group. In the control group (DV) the bone defect remained empty, without biomaterial. Finally, the tissue flap was repositioned and sutured with simple stitches (Fig. 4).

Histological processing and histomorphological analysis

After 15 days post surgery, the animals were euthanized and the calvariae removed. The obtained specimens were fixed in 4% formaldehyde for 48 hours, embedded in paraffin, subsequently cut into 5 μm thick sections, stained with hematoxylin-eosin (HE), picrosirius red (PIFG) and Masson-Goldner trichrome (GOLD), and analyzed under an optical microscope (DM1000, Leica®). The images were captured using a digital camera (DFC310FX, Leica®). For morphometric analyses, the Leica Application Suite (version 4.12, Leica®) and an optical microscope (DM6 B, Leica®) were used. To compare the differences between the groups, the Kruskal-Wallis test was applied using SPSS version 20.0 (IBM SPSS®), at a 5% level of significance (p≤0.05).

■ Results

The histomorphologic analysis evidenced neoformation of osteoid matrix restricted to the borders of the bone defect, with formation of fibrous connective tissue in the remaining area, in all experimental groups. When compared to the bone edge, the thickness of the tissue produced in the defect region remained proportional in the groups with implantation of microspheres and granules, and was significantly reduced in the group without implantation of biomaterials (Fig. 5).

In MiHA, the biomaterials were arranged in a monolayer, with small variation in microsphere size, throughout the extent of the bone defect. The majority remained intact and some presented partial and/or total fragmentation, with neoformation of osteoid matrix within the scaffold. In this group, the presence of mononuclear inflammatory cells and multinucleated giant cells, characteristic of a chronic granulomatous inflammatory response, was discreetly noticed around the microspheres, especially those located at the periphery of the bone defect.
In GrHA, the granules were distributed in mono- and multilayers throughout the extent of the bone defect. In this group the chronic granulomatous inflammatory reaction was more evident than in MiHA. Most of the granules remained intact, while others presented partial fragmentation, less accentuated than in the microspheres, without osteoid neoformation inside the biomaterials. In both groups with implantation of biomaterials, abundant proliferation of blood capillaries around the particles was noticed (Fig. 6).

The histomorphometric analysis measured the percentage of mineralized linear extension (Table 2). The Kruskal-Wallis test demonstrated a statistically significant difference between the 3 groups (MiHA x GrHA x DV, p=0.007) and between the experimental and control groups: MiHA x DV (p=0.010) and GrHA x DV (p=0.046). Between the groups in which the biomaterials were implanted, the Kruskal-Wallis test did not demonstrate a statistically significant difference: MiHA x GrHA (p=1.00).

■ Discussion

The bone repair mechanism is a dynamic, temporal and complex phenomenon that depends on the presence of a three-dimensional scaffold and on the manner in which the bone, undifferentiated mesenchymal and endothelial cells interact with each other and with the microenvironment around them. Under physiological conditions, this event is consolidated by regeneration. On the other hand, when the tissue loss presents critical dimensions, extension and morphology, this mechanism is impaired and tissue repair occurs by fibrosis [19,20]. Under these conditions, bone regeneration becomes limited, as observed in the control group (DV) of our study and in other studies [19-22]. These results validate the simulation of extensive bone loss as it occurs in cases of congenital pathologies, extensive surgical resections, trauma and severe inflammatory diseases. In view of these inhospitable conditions, the need for bone substitutes that make complete tissue regeneration feasible is evident.

The spatial arrangement of the biomaterials and of the neoformed tissues between and around the particles in the two implanted groups showed that the microspheres and granules acted as scaffolds suitable for application as filler materials. However, in view of the almost complete reduction of the neoformed interstitium between the granules, it is noted that the spatial arrangement of these biomaterials, similar to a mosaic, interfered with the migration of cells between the particles during bone repair. These findings are consonant with those evidenced by Ribeiro et al. [22], who evaluated granules of HA and alginate at the same biological time point. The evident presence of neoformed blood vessels surrounding the biomaterials demonstrates that the materials evaluated were biocompatible and osteoconductive, independent of the geometric shape, and provided a structure favorable to cell migration, adhesion and proliferation, the release of growth factors, angiogenesis and tissue neoformation [4,12,23]. Thus, the neoformed osteoid matrix was similarly consolidated in both groups with implantation of biomaterials (p>0.05).
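For readers wanting to reproduce this kind of comparison, the Kruskal-Wallis test reported above is available in SciPy; the per-animal values below are made-up stand-ins (n = 5 per group, as in the study), since only group-level statistics are reported.

```python
from scipy.stats import kruskal

# Hypothetical per-animal percentages of mineralized linear extension.
MiHA = [21.0, 18.5, 24.2, 19.8, 22.1]
GrHA = [17.2, 15.9, 20.4, 16.8, 18.3]
DV   = [ 9.1,  8.4, 11.0,  7.9, 10.2]

H, p = kruskal(MiHA, GrHA, DV)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")
```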
The partial biodegradation of the biomaterials evaluated in this study, most noticeable in the MiHA, demonstrated the influence of alginate on the materials: when coming into contact with fluids and living tissues, it was dissolved by local enzymes and bioresorbed. In this way, it induced the gradual release of the inorganic components of the composite, mainly Ca and PO4 ions contained in the HA crystals. These results contradict those observed by Barreto 24, in which the microspheres of HA and alginate did not undergo evident biodegradation at the same biological time point. This occurred due to the removal of the polymer from the structure of the microspheres during the calcination process. This finding can also be attributed to the size of the microspheres used by Barreto 24 in relation to those used in our experiment, 400-600 μm and 250-425 μm, respectively, because the smaller the particle of the biomaterial, the larger the surface area of the material exposed to the biological environment. In view of these findings, it is noted that the calcination and sintering processes increase crystallinity, since they favor crystal fusion, aggregation and particle growth, making the particles resistant to biodegradation 24. Corroborating this analysis, in the study by Paula et al. 25, non-calcined microspheres of 400 μm showed partial biodegradation with deposition of collagen fibers inside the particles. Conversely, in the study by Rossi et al. 26, the sintered HA microspheres, with the same size as those used in our study, did not present expressive biodegradation at the same biological time point. One of the technological innovations presented by the biomaterials evaluated in this study, in both geometric shapes, was the conception of the HA at the nanometric scale. As observed in the MiHA, and according to Valenzuela et al. 8, nanostructured HA crystals tend to dissolve (biodegrade) faster because of the smaller particle size and the larger surface area exposed to the biological environment. In our study, the chronic granulomatous inflammation noted in both groups in which biomaterials were implanted was compatible with that expected whenever a biomaterial is implanted in a living organism 27,28. This finding attests to the biocompatibility of the biomaterials, since there was no rejection by the organism, which would be characterized by exacerbated acute inflammation 27,28. This potentiality can be attributed mainly to the physical-chemical composition of the materials, which mimics the inorganic and organic phases of natural bone tissue through HAn and alginate, respectively. On the other hand, the more evident chronic granulomatous inflammation in the GrHA, in comparison to the MiHA, reveals that the irregular surface of the granules modulated the cellular response in the interstice between the particles of the biomaterial 27,28. These findings reveal that the geometry of the biomaterials interferes directly in the tissue response to the presence of the particles 5,29. In the study by Ribeiro et al. 22, the microspheres acted better as filling scaffolds and the granules presented superior osteoconductive potential, contrasting with our results. It should be emphasized that, in that study, the biomaterials had a diameter of 425-600 μm, the granules contained 1% of alginate, and the biomaterials were synthesized by a different route. In our experiment, the percentage of alginate used was 1.5% and the diameter was 250-425 μm, which may have influenced the tissue response to the granules 22.
Considering that the results of this study are related to the initial phase of the bone repair mechanism (15 days) and can significantly influence subsequent cellular events, there is a pressing need for new studies to observe this response in the long term. ■ Conclusions In the initial phase of bone repair, the geometry of the biomaterials influenced the tissue response to the implantation of HAn and alginate composites. Both biomaterials exhibited neoformation of osteoid matrix, although the microspheres exhibited histological characteristics more favorable to bone regeneration than the granules.
3,742.2
2019-02-28T00:00:00.000
[ "Medicine", "Materials Science" ]
Extending Ecological Network Analysis to Design Resilient Cyber-Physical System of Systems The design of resilient infrastructure is a critical engineering challenge for the smooth functioning of society. These networks are best described as cyber-physical systems of systems (CPSoS): the integration of independent constituent systems, connected by physical and cyber interactions, to achieve novel capabilities. Bioinspired design, using a framework called ecological network analysis (ENA), has been shown to be a promising solution for improving the resilience of engineering networks. However, the existing ENA framework can only account for one type of flow in a network. Thus, it is not yet applicable for the evaluation of CPSoS. This article addresses this limitation by proposing a novel multigraph model of CPSoS, along with guidelines and modified metrics that enable ENA evaluation of the overall (cyber and physical) network organization of the CPSoS. The application of the extended framework is demonstrated using an energy infrastructure case study. This research lays the critical groundwork for investigating the design of resilient CPSoS using biological ecosystems inspiration. I. INTRODUCTION Infrastructure networks, such as power grids, water distribution networks, and supply chains, are essential to the functioning of modern society. Resilience to catastrophic events, including extreme weather and cyberattacks, is a critical requirement for the successful operation of such networks. Infrastructure networks are made up of a set of physical systems that accomplish the sourcing, processing, and distribution of physical flows (such as energy or water). This networked integration of heterogeneous and independent constituent systems that together produce capabilities that cannot be obtained by using any of the constituent systems alone [1], [2], [3] makes them systems of systems (SoS). The constituent systems in SoS networks have operational and/or managerial independence and are usually developed independently. The behavior of the overall SoS depends largely on how the constituent systems interact with each other and cannot be determined only by knowing the behaviors of the systems in isolation, a property called emergence [2], [4]. These characteristics make design and evaluation extremely challenging. Infrastructure networks also more recently include a set of cyber systems that monitor and regulate the operations of the physical systems through "computation, communication, sensing, and actuation" [5], making them cyber-physical systems of systems (CPSoS). Recent work by Guariniello et al. [6] recognized the overlap between SoS engineering and complex cyber-physical systems, including dynamic interactions between components, the possible presence of multiple stakeholders, and emergent behavior in the operational domain. These areas of overlap are part of what makes design for SoS resilience extremely challenging. Quantifying resilience in the early design stages for complex, large-scale, and (often) geographically dispersed CPSoS with a large number of possible disruption scenarios is extremely difficult. Because of this, early-stage design decisions for resilience are based on qualitative guidelines (heuristics) such as physical and functional redundancy, localized capacity, internode communications, and human-in-the-loop [7], [8]. While such guidelines are useful, they cannot be used to assess tradeoffs with other attributes of interest because of their qualitative nature.
The inclusion of cyber elements in the CPSoS only increases the complexity of evaluating and designing for resilience. Disruptions in the cyber domain, such as false data injection or denial of service attacks, can lead to cascading failures in the physical domain. Physical disruptions, which can stop or reduce the operation of constituent systems, are typically easy to detect compared to cyber disruptions, which can negatively modify the operation of constituent systems, instead of stopping them, making timely detection difficult. For example, during a false data injection attack, all constituent systems appear to be operating normally despite potentially sending doctored inputs that would lead to inappropriate regulation decisions and subsequent failures in the physical operations [9]. Evaluating the resilience of CPSoS to such attacks also requires the ability to cosimulate the cyber and physical systems operations under disrupted conditions, which is a formidable task in the early/conceptual design stages [10]. Recent work has presented promising evidence that the architecting principles of biological ecosystems (Nature's resilient SoS) can be used to design resilient engineering SoSs. Ecologists have found that biological ecosystems achieve a simultaneously resilient and sustainable (efficient) design through a unique balance of constraints and redundancies in their network architectures. This architectural feature is evaluated using an approach called ecological network analysis (ENA, detailed in Section II-B). Investigation of the resilience versus affordability trade spaces of (>38,000) notional SoS architectures under various disruption scenarios indicated that ecologically similar SoS architectures had more desirable resilience and affordability attributes [11], [12]. A recent study found promising correlations between SoS resilience and ENA-based metrics (and other graph-theoretic metrics) [13]. Bioinspired designs of electric power grids (and microgrids), using a similar approach, were also found to have significantly fewer violations (better resilience) in various disruption scenarios compared to traditional configurations [14], [15], [16], [17]. The ENA framework as used in ecology and those studies, however, is only applicable to networks with one type of flow/interaction. In addition, ecological modeling guidelines for ENA are focused on flows of physically conserved quantities, such as energy and nutrients. The CPSoS have multiple types of interactions, physical material flows and monitoring and regulation interactions (information flows), and information flows are not bound by the same conservation laws. Because of this mismatch, the traditional ecology-based ENA framework is not suitable for CPSoS, hindering research into the application of ecological principles for designing resilient CPSoS.
This work addresses this limitation by proposing a novel multigraph model of the CPSoS, along with guidelines and modified metrics that enable ENA evaluation of the overall (cyber and physical) network organization of the CPSoS. The modeling decisions for the proposed multigraph model are discussed in detail and compared to previously studied ENA models of engineering networks and conventional topological analyses of cyber-physical systems. The application of the extended framework is shown using an eight-substation power grid case study. This lays the critical groundwork for future research investigating the design of resilient CPSoS using biological ecosystem inspiration. A preliminary version of this research was presented at IEEE SmartGridComm 2021 [18]. This work approaches the resilience of the CPSoS from a proactive standpoint: it investigates how to take better actions at the design phase, ahead of disruptive events. Hence, the proposed approach differs from the usual reactive approach of "sense-plan-act" after disruptions. The reactive approaches to resilience are outside the scope of this work. In addition, the modified ENA models and metrics presented in this work are not meant to assess the resilience of the CPSoS to specific cyber threats. Rather, this work aims to present a complementary decision-support tool that can be used in the early/conceptual stages of CPSoS architecture development, which are non-data-intensive and threat-agnostic. A. CYBER PHYSICAL SYSTEMS AND SYSTEM OF SYSTEMS MODELING AND ANALYSIS FOR RESILIENCE Resilience describes a system's ability to securely operate during and recover from adverse situations to resume normal operations. For a cyber-physical system, resilience is a multidimensional property that requires managing disturbances originating from physical component failures, cyber component malfunctions, and human attacks [19]. Modeling the cyber-physical system holistically is essential to analyzing and investigating its resilience. Conventionally, cyber-physical systems are modeled graphically by classifying the nodes (constituent systems) into cyber and physical layers: interactions between the cyber nodes form the cyber network and interactions between the physical nodes form the physical network. Interlayer links then capture the interdependence of functions, topologies, and facilities between the cyber and physical networks [20], [21]. Taking power systems as an example, resilience has been quantified through the resilience trapezoid, to capture temporal properties of the power system's performance during an extreme event [22]. The resilience trapezoid is a portrayal of the preparation, duration, and recovery from a severe disturbance in electric power systems. This portrayal can quantitatively show an aggregate resilience property of the system: for power systems, this is its ability to meet the load. As commonly used, the resilience trapezoid hence depicts a system-wide property's evolution over time, subject to disturbance.
Modeling to quantify resilience in real, complex, and nonlinear systems is more complicated than the resilience trapezoid. The resilience of a system depends on both how the network is designed and how the system is operated, recognized as infrastructural resilience and operational resilience, respectively. As discussed in [23], infrastructural resilience lays the foundation for operational resilience, which provides more resources that operators and stakeholders can utilize. Recent work has shown that more robust power networks have an improved tolerance of disturbances while maintaining systems' security and resilience against hazards [16]. Likewise, a more robust communication network exhibits more paths to deliver critical information through different routes [24]. A further limitation of the resilience trapezoid is that it is specific to each particular threat. Infrastructural resilience, the focus of this work, enables further reliable and sustainable operations. Hence, the proposed holistic design-based solution would benefit future operators under different cyber and physical threats. Power network design also involves economic aspects, such as those in [25], [26], and [27]; investment portfolios and contingency scenarios must be included, and in practice, power system constraints must be tracked using detailed models under these variable investments and events to inform network expansion for better resilience against unexpected contingencies. With the integration of cyber networks, different definitions and quantifications of cyber-physical power system resilience have been proposed. Clark and Zonouz [28] proposed a resilience metric to quantify the ability of the system to recover from a given attack using discrete stochastic models and dynamical linear system models to capture the interdependencies of the cyber network and the underlying physical processes. Venkataramanan et al. [29] proposed a framework to quantify cyber-physical transmission resiliency where a graphical analysis was applied along with a measure of critical network parameters in both the cyber and physical systems. Huang et al. [24] built the interconnections between cyber and physical networks through the amount of critical data transferred among physical and cyber networks for control and observability to capture the resilience of cyber-physical power systems. To ensure cyber-physical resiliency, a resilient communication network is essential for the smart control of the different resources against threats. Lin et al. [30] proposed a self-healing phasor measurement unit network using the software-defined networking (SDN) infrastructure to achieve resiliency against cyberattacks. A mixed-integer nonlinear optimization model was formulated to capture the self-healing process in a communication network while considering constraints on the physical network. Al et al. [31] proposed an SDN platform using Industrial Internet of Things technology to support power systems' resiliency by reacting immediately whenever a failure occurs to recover smart grid networks using real-time monitoring techniques. Jin et al. [32] presented an SDN-based communication network architecture for microgrid operations with the applications of self-healing communication network management, real-time and uncertainty-aware communication network verification, and specification-based intrusion detection for cyber-physical systems' resilience.
Existing methodologies on cyber-physical systems' resilience focus on the interactions between the cyber and physical systems as well as the functionalities of both the cyber and physical networks. With the specified threat vector and objectives, they can then optimize and analyze the system through cyber and/or physical development and actions. These methodologies, however, are not feasible in the early design stages when specific threat vectors are not yet known. B. EXISTING ENA AND CHALLENGES ENA is a tool used by ecologists to study the complex interactions among species in ecosystems. ENA provides a set of metrics to study structural and functional characteristics of ecological networks [33]. The nodes in the digraph represent the species and the directed arcs represent the transfer of energy or nutrients between them and their immediate environment. The flows between the actors (or nodes) within the system boundaries and the system inputs, outputs, and dissipation exchanged with the environment are stored in the (N + 3) × (N + 3) flow matrix T, where N is the number of actors within the network (see Fig. 1). The nodes 1 to N in the flow matrix represent the actors within the specified network boundary. The nodes 0, N + 1, and N + 2 are the imports, exports, and dissipations, respectively. Any matrix element T_ij represents the magnitude of flow from node i (producers/prey) to node j (consumers/predators). The hypothetical food web of Fig. 1, for example, shows that midges (node 1) are consumed by predators (node 2) and predators are consumed by detritivores (node 3). ENA models these food web interactions as caloric (energy) transfers between the nodes, and the flow information is saved in the elements T_12 and T_23 of the flow matrix, respectively. The entries T_03 and T_34 represent the input and output flows between the detritivores (node 3) and their environment, respectively. Readers interested in a more detailed description may refer to [34]. ENA includes multiple metrics that quantify different architectural characteristics of flow networks such as cyclicity, nestedness, and synergism. Such analyses have been applied to industrial networks showing promising improvements in resilience and sustainability [35], [36], [37], [38]. The ENA metric of interest in this work is degree of system order (DoSO), which quantifies the relative pathway constraints/organization in a flow network [39]. The level of network pathway organization or constraints is measured using the metric average mutual information [AMI; see (1)]. The upper limit of AMI is quantified by the metric Shannon Index [H; see (2)]. DoSO is evaluated as the ratio of AMI to H (3) and takes values from 0 to 1. In (1)-(3), TST_p is the sum of all flows in the network, T_i. is the sum of flows leaving node i, and T_.j is the sum of flows entering node j [see (4)]:

$$\mathrm{AMI} = \sum_{i,j} \frac{T_{ij}}{TST_p}\,\log_2\!\left(\frac{T_{ij}\,TST_p}{T_{i.}\,T_{.j}}\right) \qquad (1)$$

$$H = -\sum_{i,j} \frac{T_{ij}}{TST_p}\,\log_2\!\left(\frac{T_{ij}}{TST_p}\right) \qquad (2)$$

$$\mathrm{DoSO} = \frac{\mathrm{AMI}}{H} \qquad (3)$$

$$TST_p = \sum_{i,j} T_{ij}, \qquad T_{i.} = \sum_{j} T_{ij}, \qquad T_{.j} = \sum_{i} T_{ij} \qquad (4)$$

Highly pathway-constrained networks will have more static routes for flows between nodes to improve the efficiency of transporting material from one point to another. These networks will have DoSO values close to 1. Highly pathway-flexible networks will have multiple (but not the most efficient) options to route flows between nodes. These networks will have DoSO values close to 0.
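The single-flow DoSO evaluation of (1)-(4) is straightforward to compute. The following minimal Python sketch is not from the article; the small food-web numbers are hypothetical, laid out in the (N + 3) × (N + 3) convention of Fig. 1:

```python
import numpy as np

def doso(T):
    """Degree of system order (DoSO = AMI / H) for a single-flow network.

    T is the (N+3) x (N+3) ENA flow matrix: T[i, j] is the flow from node i
    to node j; node 0 holds imports, nodes N+1 and N+2 receive exports and
    dissipation, following the layout of Fig. 1.
    """
    T = np.asarray(T, dtype=float)
    tst = T.sum()                      # TST_p: sum of all flows, eq. (4)
    Ti_ = T.sum(axis=1)                # T_i.: flows leaving node i
    T_j = T.sum(axis=0)                # T_.j: flows entering node j
    ami = h = 0.0
    for i, j in zip(*np.nonzero(T)):   # zero flows contribute nothing
        p = T[i, j] / tst
        ami += p * np.log2(T[i, j] * tst / (Ti_[i] * T_j[j]))  # eq. (1)
        h -= p * np.log2(p)                                    # eq. (2)
    return ami / h                                             # eq. (3)

# Hypothetical three-actor food web (midges -> predators -> detritivores);
# node 0 = imports, node 4 = exports, node 5 = dissipation.
T = np.zeros((6, 6))
T[0, 1] = 10.0                              # import to midges
T[1, 2], T[2, 3] = 6.0, 3.0                 # predation transfers
T[1, 5], T[2, 5], T[3, 5] = 4.0, 3.0, 1.0   # dissipation at each actor
T[3, 4] = 2.0                               # export from detritivores
print(f"DoSO = {doso(T):.2f}")
```

For this nearly linear chain the script prints a DoSO of roughly 0.59; adding parallel routes for the same flows would push the value toward 0.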
A DoSO analysis of biological ecosystems showed that they have evolved to exist within a narrow range of DoSO ∈ [0.213, 0.589], called the window of vitality [40], [41]. This study provided evidence for the hypothesis that a balance between constraints and redundancies in network organization is crucial to ecosystems' resilience and sustainable growth [39]. The DoSO evaluation has also been applied to engineering networks such as supply chains [42], industrial water networks [43], and power grids [14], [15]. The existing ENA framework and DoSO formulation are only applicable to networks with one type of flow and are unsuitable for the evaluation of CPSoS architectures. Ulanowicz [33] provided generalizations of the AMI and H metrics across multiple dimensions (including time, flow types, and spatial location). However, the authors identified the following two issues regarding the application of this modified formulation for CPSoS analysis: 1) the formulation uses sums of the flows of different types, leading to dimensional inconsistencies in CPSoS with physical and information flows; 2) a trivial change in the unit/scale of any one flow can lead to a different DoSO evaluation of the same CPSoS. The observation is that earlier work had an undesirable sensitivity to the scale/unit for measuring flows. This is not an acceptable characteristic for a CPSoS architecture assessment technique. Therefore, we assert that a new formulation is required to evaluate DoSO of CPSoS architectures with multiple flow types. III. PROPOSED CPSOS MODELING FRAMEWORK The authors propose that CPSoS architectures should be modeled as directed multigraphs. A multigraph is a graph that is permitted to have multiple edges/links between the nodes. The nodes represent the constituent systems and the directed edges represent the different types of interactions. This section proposes a set of guidelines to model CPSoS architectures as directed multigraphs for ENA and provides a modified formulation for DoSO evaluation that addresses the issues identified in Section II-B. A. IDENTIFYING CONSTITUENT SYSTEMS AND INTERACTIONS The first step in developing the multigraph model of CPSoS is to identify the constituent systems (nodes) and distinct interactions (edges). Han et al. [44] presented a hierarchical description of SoS, as illustrated in Fig. 2.
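The unit-sensitivity issue (issue 2 above) can be demonstrated numerically. The sketch below reuses the hypothetical doso() helper from the earlier snippet and invented flow values; it folds a physical and an information flow into one matrix, as a single-flow generalization would, and shows that a pure unit change alters the result:

```python
import numpy as np

# Two flow types on the same six-node layout: a physical energy chain
# (units: MW) and an information feedback loop (units: packets/s).
phys = np.zeros((6, 6))
info = np.zeros((6, 6))
phys[0, 1] = phys[1, 2] = phys[2, 4] = 5.0   # import -> 1 -> 2 -> export
info[2, 3] = info[3, 1] = info[1, 5] = 2.0   # 2 -> 3 -> 1 -> dissipation

# Summing heterogeneous flows into one matrix makes DoSO unit-dependent:
print(doso(phys + info))            # one evaluation of the architecture...
print(doso(phys + 1000.0 * info))   # ...a different one after merely
                                    # expressing the information flow in a
                                    # unit 1000x smaller (same architecture)
```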
The SoS has a main operational objective. The main objective is met by accomplishing a set of requirement capabilities, and each requirement capability is met by the constituent systems completing fundamental tasks/functions in a meaningful order. Following this hierarchical description of SoSs, constituent systems in a CPSoS (unique nodes in the ENA model) are identified using the following rules: 1) the system operation can be changed (at least to some degree) independently; 2) the system performs one or more of the fundamental tasks for the SoS; 3) the system ownership/management/development process is different from that of other systems. Contrary to some previous applications of ENA to engineering networks (see [14] and [43]), the authors propose that systems like pipeline segments and transmission branches should be modeled as unique nodes and not simply as graph edges/interactions. This is because these systems fulfill a unique and essential role in the SoS and have a certain level of operational independence. For instance, transmission branches in power grids can be shut down to protect from power surges, and flow through pipeline segments can be controlled using valves. In addition, these systems have their own cyber interactions (for monitoring and/or regulation) with the supervisory control and data acquisition (SCADA) systems. These unique functional flows require that they be modeled as nodes, because edges in a directed graph/multigraph can only exist between two nodes and not between a node and an edge. This was not considered in prior work using ENA on engineering networks because that work considered only the physical flows. In this model, human operators are considered to be a part of the system that they work on. For example, human operators working at the physical systems (such as generators in power grids) are lumped into the physical system node. Human operators are also included in the cyber system nodes if they are involved in processing the data received to ascertain the state of the monitored physical systems and make regulatory decisions. The human operators give the physical systems their ability for independent operation and/or decision making. Physical systems' operations are measured using sensors/meters attached to them. These sensors/meters are not considered separate nodes in the proposed model because they are components built into the physical systems and are not independent constituent systems themselves. A physical system could have multiple (redundant) sensor/meter components. However, when analyzing the overall SoS, the focus is on the higher level network architecture, and not on the minute component-level details. The different types of interactions are identified based on the requirement capabilities of the SoS. Each type of interaction represents the interdependencies and task flows to achieve a specific requirement capability. The authors identify the following three types of common interactions (requirement capabilities) in CPSoS.
1) Physical interactions: The sourcing, processing, and distribution of physical flows, such as energy and water. 2) Monitoring interactions: Collecting, communicating, or processing the state information of the physical operations. 3) Regulatory interactions: Generating, communicating, or processing information for regulating physical operations. This classification does not imply that the proposed ENA modeling framework can only be used on SoS with three types of flows. Instead, this is intended to provide a detailed procedure that allows for a consistent analysis of many critical CPSoS such as energy/gas/water distribution infrastructure. B. ASSIGNING INTERACTION MAGNITUDES The next step is to identify the interactions between the constituent systems, as well as between the constituent systems and the SoS operating environment. Once all interactions (of each type identified in step 1) are known, it is required to assign a magnitude to each of these interactions for the DoSO analysis. The amount (or fraction) of a task accomplished by a system (referred to as the task load in this article) should be used to determine the strength/magnitude of interactions from a node. The interaction magnitudes assigned in this step are meant to create a generalized representation of how the architecture is designed to work during its operation period, in order to evaluate pathway constraints and redundancies. This step (and the whole framework) is not being proposed as a simulation of the CPSoS at any given time. This step is explained in more detail for physical interactions and cyber interactions as follows. 1) Physical flows: In the case of physical interactions, the strength of interactions can be assigned as equal or proportional to the amount of planned material or energy transfers between the constituent systems, and between the systems and the environment. For example, in supply chains, the magnitude of flow between a supplier and an assembler would be equal (or proportional) to the amount of material supplied by the supplier to the assembler under normal operating conditions [42]. In an energy distribution network, the flows between any two systems would be equal (or proportional) to the planned transfer of energy between the systems under normal operating conditions [16], [45]. It should be noted that the exact amounts of the flows are not required for ENA modeling. When designing an architecture, designers make decisions regarding what amount of material/energy flows will be routed through different channels in the network. These planned proportions can be used to create an ENA model instead of needing to know the exact amount of flow. This is especially important where the flows may vary over the period of operation. 2) Cyber interactions: The guidelines for assigning the cyber interaction magnitudes (monitoring and regulation) are described below for a typical SCADA-based architecture that has local cyber systems and a central terminal. In this work, local cyber systems are referred to as remote terminal units (RTUs). The RTUs receive information from physical devices, process it, and communicate it to other RTUs or the central SCADA terminal (CST). A notional CPSoS of this type is shown in Fig. 3 (left). The process to assign task-load magnitudes to the monitoring interactions is outlined as follows. 1) Physical operation systems to RTUs: Sensor or meter components on the physical systems measure the operating parameters of interest and communicate that information to the RTUs connected with those systems.
To assign magnitudes to the monitoring interactions, it first needs to be identified whether the monitoring of each system is equally important or whether there are some systems whose monitoring is more important to the SoS operation. If the monitoring of each system is equally important, then a fixed quantum of monitoring task load (say five units) is assigned to each interaction from a physical system to its RTU. However, if some systems' monitoring is more important/critical, the link between those systems and their RTUs can be assigned a proportionally higher task-load magnitude. 2) Inter-RTU interactions: If the architecture allows communication between RTUs (for example, a mesh communication topology), there are bidirectional links between each RTU. The magnitude of these interactions is equal to the amount of monitoring information that was received by the sender RTU from its associated physical systems and that is useful to the receiver RTU. 3) Export and dissipation at RTUs: If an RTU has received redundant monitoring information for one or more physical systems, the monitoring task-flow dissipation from that RTU is equal to the amount of the redundant input. In case the RTU has been given certain local regulatory authority in the architecture design, a fraction of the nonredundant input is assigned as the magnitude of the monitoring task-flow export at the RTU. This fraction depends on the level of regulatory authority granted to local systems in the architecture. 4) RTU to CST interactions: A fraction of the nonredundant input to RTUs is assigned as the magnitude for the monitoring interactions from the RTUs to the CST. This fraction depends on the level of regulatory authority granted to the CST in the architecture. 5) Export and dissipation at CST: If the CST has received redundant monitoring information streams for one or more physical systems, the monitoring task-load dissipation from the CST is equal to the amount of redundant input. The nonredundant input to the CST is assigned as the magnitude of the monitoring task-load export. A similar process is followed to assign task-load magnitudes to the regulation interactions, outlined as follows. 1) Import at CST: The import at the CST represents the transformation of monitoring information into regulation information, since the SCADA terminal uses the monitoring information to make regulatory decisions. The magnitude of the import flow depends on the number of systems being regulated by the CST and the level of regulatory authority granted to the CST in the architecture. First, a task load is assigned to the regulation task of each system, similar to the monitoring task-load assignment. If the regulation of each system is equally important, then a fixed quantum of task load (say five units) is assigned to all systems. However, if some systems' regulation is more important/critical, then the task loads for these systems are assigned a proportionately greater amount. The magnitude of the import flow of regulation interaction into the CST is set equal to the sum of the assigned regulation task loads for all systems regulated by the CST.
2) CST to RTU interactions: The CST provides input of regulation information to each RTU equal to the sum of the regulation task loads of the systems that they can communicate with directly or (indirectly) through inter-RTU communication links. 3) Inter-RTU interactions: The magnitude of the regulation interaction from RTU A to RTU B (if connected) is equal to the sum of the regulation task loads of the systems directly connected to RTU B and whose regulation information was received by RTU A from the CST. 4) Import at RTUs: The magnitude of the regulation task-load import at any RTU is set equal to the sum of the assigned regulation task loads for all systems regulated by that RTU, if the RTU has local regulatory authority. 5) Dissipation at RTUs: The magnitude of the dissipation flow of regulation interaction into any RTU is set equal to the sum of the redundant input streams of regulation information. 6) RTUs to physical systems interactions: The magnitude of the regulation interaction from an RTU to a physical system is equal to the assigned task load of that system's regulation. 7) Export and dissipation at physical systems: The magnitude of the export flow of regulation task load at a physical system is equal to the assigned task load of that system's regulation. The magnitude of the dissipation flow of regulation at any physical system is set equal to the sum of the redundant streams of regulation task load into that physical system. C. PREPARING FLOW MATRIX AND CONDUCTING DOSO ANALYSIS Once the multigraph is modeled, as described in the aforementioned steps, a 3-D flow matrix is prepared to represent the model and evaluate the DoSO. In this 3-D flow matrix T, any element T_ijl represents the interaction/transfer of type l from node i to node j. An example of the multigraph model and flow matrix, for a notional CPSoS, is shown in Fig. 3. To facilitate the DoSO evaluation of the overall network, the modified AMI and H metrics, shown in (5) and (6), are proposed. The symbols in the metrics have the same meanings as described in Section II-B, and the new subscript l represents the different flow types. The flow values required to use (5) and (6) can be obtained from the 3-D flow matrix T. In (5) and (6), T_l is the sum of all flows of type l in the network, T_i.l is the sum of flows of type l leaving node i, and T_.jl is the sum of flows of type l entering node j [see (7)]:

$$\mathrm{AMI} = \sum_{l}\sum_{i,j} \frac{T_{ijl}}{T_l}\,\log_2\!\left(\frac{T_{ijl}\,T_l}{T_{i.l}\,T_{.jl}}\right) \qquad (5)$$

$$H = -\sum_{l}\sum_{i,j} \frac{T_{ijl}}{T_l}\,\log_2\!\left(\frac{T_{ijl}}{T_l}\right) \qquad (6)$$

$$T_l = \sum_{i,j} T_{ijl}, \qquad T_{i.l} = \sum_{j} T_{ijl}, \qquad T_{.jl} = \sum_{i} T_{ijl} \qquad (7)$$

Once AMI and H are calculated using the modified metrics, DoSO can be calculated using (3). The formulation of these modified metrics is described in detail in [46]. The modified metrics do not use the sum of flows of different types and have been used to analyze supply chains with multiple physical flows [47] and surveillance networks with multiple information flows [48]. IV. CASE STUDY A. CASE STUDY DESCRIPTION The proposed ENA modeling guidelines are tested on a synthetic eight-substation cyber-physical power network (CPPN) case study from [49]. There are five generators, six loads, and 12 branches/transmission systems in this case study. The monitoring and regulation of the physical systems are accomplished using a SCADA network. Each substation has its own RTU, and every generator or load is assigned to a specific substation (see Fig. 4). The RTUs communicate with a central SCADA terminal.
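As a concrete illustration, a minimal Python sketch of the 3-D flow matrix and the modified DoSO is given below. The tiny three-system CPSoS and its task-load numbers are invented for the example, and the implementation assumes that the modified metrics (5)-(7) amount to summing the per-type AMI and H contributions (each normalized by its own T_l) before taking the ratio (3); the exact formulation is given in [46]:

```python
import numpy as np

def doso_multigraph(T3):
    """DoSO of a multigraph CPSoS model stored as a 3-D flow matrix.

    T3 has shape (N+3, N+3, L): T3[i, j, l] is the type-l interaction from
    node i to node j. Sketch assumption: AMI and H of (5)-(7) are the sums
    of per-type contributions, so flows of different types are never summed.
    """
    ami = h = 0.0
    for l in range(T3.shape[2]):
        T = T3[:, :, l]
        tl = T.sum()                  # T_l, eq. (7)
        if tl == 0:
            continue                  # flow type absent from this design
        Ti_ = T.sum(axis=1)           # T_i.l
        T_j = T.sum(axis=0)           # T_.jl
        for i, j in zip(*np.nonzero(T)):
            p = T[i, j] / tl
            ami += p * np.log2(T[i, j] * tl / (Ti_[i] * T_j[j]))  # eq. (5)
            h -= p * np.log2(p)                                   # eq. (6)
    return ami / h                                                # eq. (3)

# Invented example: node 1 = generator, 2 = RTU, 3 = CST (0 = imports,
# 4 = exports, 5 = dissipation); l: 0 = energy, 1 = monitoring,
# 2 = regulation, with arbitrary task-load units.
T3 = np.zeros((6, 6, 3))
T3[0, 1, 0] = T3[1, 4, 0] = 8.0                               # energy in/out
T3[1, 2, 1] = T3[2, 3, 1] = T3[3, 4, 1] = 5.0                 # monitoring up
T3[0, 3, 2] = T3[3, 2, 2] = T3[2, 1, 2] = T3[1, 4, 2] = 5.0   # regulation down

base = doso_multigraph(T3)
T3[:, :, 1] *= 1000.0                        # change only the monitoring unit
assert np.isclose(base, doso_multigraph(T3)) # DoSO unchanged: scale-invariant
```

Because each type is normalized by its own T_l, rescaling any one flow type leaves both AMI and H unchanged, which is the scale-invariance property discussed below.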
In this case study, the physical systems (buses, generators, loads, and branches/transmission systems) generate and distribute energy to the end users. The cyber systems include communication devices, such as routers, firewalls, etc. For simplification, an RTU system is used to model all local communication devices at a substation for ENA. The cyber systems (RTUs and the CST) communicate and process the data received from the physical systems to ensure that the system operates securely, reliably, and economically. The following interaction types are identified for the CPPN case study: energy flows, monitoring interactions, and regulatory interactions. Various architectures of the eight-substation CPPN were evaluated using the proposed ENA framework for CPSoS. The physical infrastructure was unchanged in the tested architectures. The design variations explored in the cyber infrastructure are explained as follows. 1) How is the regulatory/control authority distributed? a) Central: Only the CST has regulatory authority. The RTUs communicate data to and from the CST. b) Local: The substation RTUs make regulation decisions for the systems in their substation. 2) What is the communication network topology? a) Star topology: RTUs only communicate to the CST. b) Mesh topology: RTUs communicate to the CST and amongst themselves. B. DOSO ANALYSIS The DoSO evaluations for the three interactions (energy, monitoring, and regulation) and the overall CPPN are shown for the four architectures in Table 1. The communication topology selection was a discrete design variable: either a star or a mesh topology. However, the authority distribution is a continuous design variable. The central versus local designs described in the aforementioned list are the two extreme cases. In regular operation, the regulation authority is usually distributed between the local and central systems. For example, the primary regulatory authority may be assigned to the CST, but the local RTUs would have a certain level of decision-making authority for emergency response. The trend of DoSO across the spectrum from central to local regulation is also studied, and the results are shown in Fig. 5 for architectures with a star communication topology and in Fig. 6 for architectures with a mesh communication topology. In these figures, an authority distribution parameter value of 1 indicates completely centralized monitoring and regulation, and a value of 0 indicates completely local monitoring and regulation. C. CONVENTIONAL TOPOLOGICAL ANALYSIS A conventional topological analysis of the cyber network architectures, consisting of the RTUs and CST in the eight-substation CPPN, was also conducted. The results of this topological analysis are shown in Table 2. The following four topological metrics [50] were used in this analysis. 1) Average node degree (d): Measures the average number of links connected to each node in the network. 2) Average clustering coefficient (c): Measures the average degree to which nodes in a graph tend to cluster together. 3) Average shortest path (l): Measures the average minimum distance between any pair of nodes in the network. 4) Average betweenness centrality (b): A measure of the average centrality of nodes in a graph based on shortest paths, representing the degree to which nodes form connections between each other. A. NOTABLE FEATURES OF THE MODEL 1) UNBALANCED FLOWS ENA applied to biological ecosystems typically requires that all physical flows are balanced at all nodes: flow entering a node equals flow exiting a node. This is because the flows of interest for ecologists, such as energy and nutrients, obey the laws of conservation. Unlike physically conserved flows, information flows are not bound by conservation laws, as new information can be generated at any time and existing information can be copied to multiple receivers. For instance, a metering device outputs information about the operation of its physical system. An information import flow that would "balance" this information output is not meaningful because the information is not received from the external environment; rather, it is generated in the system. This can be seen in the examples in Figs. 3 and 7. Examples of unbalanced information flows due to the copying of information at the physical system and at the RTU can be seen in Fig. 7(a) and (b), respectively.
The DoSO evaluation does not mathematically require flow balance at all nodes. Therefore, unbalanced flows are theoretically acceptable in the model as long as they do not violate any physical laws of the network under consideration. It should be noted that physically conserved flows are still balanced in the proposed model. 2) TRANSFORMATION OF FLOWS Flows can be transformed from one type to another after processing. For example, the CST uses the state information (received through the monitoring interactions) to make decisions regarding altering the operations of physical systems (communicated using regulation interactions). This functionality is represented using export-import flow pairs in the proposed model. For example, the CST in Figs. 3 and 7 exports the useful monitoring interactions and imports an equivalent amount of regulatory interactions. Transformation can also be observed at the physical system nodes. While the monitoring interactions received by the physical systems are not converted to another type of information flow, they are transformed into productive actuation operations. This is modeled as the export of monitoring interactions from the physical system nodes (as shown in Figs. 3 and 7). Finally, redundant information streams are modeled as dissipation leaving the nodes in this model. Examples of dissipation flows to model redundancy can be seen in the two designs in Fig. 7. The first design, Fig. 7(a), employs physical redundancy by using two RTUs for one physical system. The redundancy is modeled by the dissipation flows at the CST and the physical system. The second design, shown in Fig. 7(b), adds redundancy to the CPSoS architecture using multiple communication pathways. This redundancy is modeled by the dissipation flows at the CST and the RTUs. 3) SCALE INVARIANCE The proposed model and the modified metrics presented in Section III are scale-invariant. Changing the scale/unit of any subset of flows will not affect the DoSO evaluation of the CPSoS architecture. This is an essential feature of the model because a meaningful overall evaluation of the network should not be affected by trivial matters such as the selection of measurement units/scales. In this proposed approach, modelers are free to use any unit/scale for the flows as long as the same convention is used for all other flows of the same type. This feature also makes it easier to assign magnitudes to the information flows. For example, when assigning interaction magnitudes to the cyber interactions (see Section III), a modeler can assume any arbitrary value for the monitoring or regulation task loads of each system as long as it is consistent throughout the model and proportional to the importance of each system's monitoring and regulation. B. KEY OBSERVATIONS FROM THE EIGHT-SUBSTATION POWER GRID CASE STUDY Consider architecture 1 (in Table 1), with the star communication topology and centralized regulation authority, as the base architecture. By changing the communication network from a star topology to a mesh topology (architecture 1 to 2), the DoSO evaluations shift toward a high level of pathway redundancy. This is consistent with the fact that the mesh-type communication topology provides a greater level of flexibility to maintain normal communication between the cyber systems with regulatory authority and the physical systems. This difference between the architectures is also captured by the topological analysis (as shown in Table 2).
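The four topological metrics of Section IV-C are standard graph measures, so the topological side of this comparison can be reproduced with an off-the-shelf library. A minimal networkx sketch follows; the star and mesh cyber topologies are rebuilt here as hypothetical undirected graphs of eight RTUs and one CST, not the authors' exact case-study data:

```python
import networkx as nx

def topology_metrics(G):
    """The four topological metrics of Section IV-C for an undirected graph."""
    n = G.number_of_nodes()
    d = sum(deg for _, deg in G.degree()) / n           # average node degree
    c = nx.average_clustering(G)                        # average clustering coefficient
    l = nx.average_shortest_path_length(G)              # average shortest path
    b = sum(nx.betweenness_centrality(G).values()) / n  # average betweenness centrality
    return d, c, l, b

rtus = [f"RTU{i}" for i in range(1, 9)]
star = nx.Graph(("CST", r) for r in rtus)   # star: RTUs talk only to the CST
mesh = nx.Graph(star.edges)                 # mesh: add all inter-RTU links
mesh.add_edges_from((a, b) for k, a in enumerate(rtus) for b in rtus[k + 1:])

print("star:", topology_metrics(star))
print("mesh:", topology_metrics(mesh))
```

Note that these metrics see only the connectivity: two architectures with identical links but different regulatory authority produce identical values, which is exactly the limitation the DoSO analysis is meant to overcome.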
Next, consider the effect of changing the control authority from centralized to decentralized (architecture 1 to 3). The architectures have the same nodes and use the same communication pathways, but they function in different ways because of the differences in regulatory authority. In architecture 1, all monitoring information is sent to the CST for processing and to make regulatory decisions. In architecture 3, the CST is not performing any function in the CPSoS because the regulation authority is completely localized. Architectures 1 and 3 are topologically equivalent, as shown by their identical topological analysis metric values (see Table 2). However, the proposed ENA modeling and DoSO analysis framework can capture these functional/behavioral differences, as shown by the different DoSO values of the two architectures in Table 1. Figs. 5 and 6 provide an insight into the variation of the architectures' DoSO values with the distribution of regulatory authority. These results indicate that authority decentralization does not always lead to higher pathway redundancy/flexibility. For the star topology architectures (see Fig. 5), the pathway redundancy increases (DoSO decreases) up to a certain level of authority decentralization. Beyond that, greater decentralization of regulatory authority makes the system more pathway constrained. In the case of the mesh topology architectures (see Fig. 6), the communication between the RTUs provides a high level of pathway flexibility. Decentralizing the authority distribution in architectures with the mesh communication topology is observed to have little effect on the CPSoS pathway organization, at first. However, extreme authority decentralization makes the architecture more pathway constrained. These results are surprising at first glance. However, it should be noted that decentralization of the regulatory authority has two unique (and opposing) effects on the pathway organization of the CPSoS. While regulatory authority decentralization does add flexibility by adding to the functionality of the RTUs, it also reduces the amount of information shared between RTUs and between the RTUs and the CST. The flexibility provided by the inter-RTU communications is the primary contributor to the pathway redundancy in mesh topology-based architectures. Therefore, reducing the communication between RTUs reduces the flexibility provided by the mesh communication topology, explaining why these architectures are observed to become more pathway constrained with the increase in regulation decentralization. Finally, the DoSO evaluations of architectures 3 and 4 are identical. This is surprising because the architectures are different from a topological perspective (note the different values of the topological metrics in Table 2). However, upon scrutiny, it is noted that when the regulation authority is completely decentralized/local, the CST is not functional and the RTUs are only interested in the information about the physical systems that they are connected to directly. Therefore, there is no information sharing from RTUs to the CST or between RTUs. This leads to both architectures behaving identically: as eight separate subnetworks for the two cyber interactions, connected only by the power flows between them. The fact that the DoSO evaluation can identify such subtle functional features is a promising indication of its value as a CPSoS architecture evaluation tool. C.
POTENTIAL IMPACT, CHALLENGES, AND FUTURE RESEARCH DIRECTIONS This work showed that the ENA approach can be extended for the evaluation of CPSoS. The results also indicate that the proposed framework and the DoSO analysis can capture subtle functional/behavioral characteristics of CPSoS architectures, which makes it unique compared to existing graphical analyses that only consider their topological features. Section III has detailed the procedures for applying this multigraph ENA modeling technique and DoSO evaluation to CPSoS. It is also worth pointing out that the proposed multigraph-based ENA modeling framework does not become more complex with increasing network size, and is therefore applicable to large-scale CPSoS too. The DoSO evaluation does not require knowledge of any detailed disruption scenarios or the ability to evaluate CPSoS under disruptions using complex co-simulation techniques. Therefore, the proposed framework can provide much-needed architecture evaluation feedback to engineers in the early stages of CPSoS design. CPSoS can be designed for significantly different types of operation over a range of time horizons. For example, the task loads for data collection may increase during peak operating periods, compared to regular operations. The regulatory authorities in a CPSoS could also change based on the operating condition. CPSoS stakeholders who are interested in evaluating the pathway organization state of the CPSoS (using the DoSO metric) during different operational situations can develop multiple instances of the CPSoS ENA model, one for each of those operational situations, and then use the same steps outlined in this article to compare them. The approach can be extended to include information about the ownership of CPSoS assets and data, a capability that could be useful in quantifying the impact of data corruption scenarios against normal day-to-day operations. The proposed framework is developed to assess the design of CPSoS considering the heterogeneous flows and network topologies. It has the capability to provide an early-stage assessment of the resilience capabilities of the CPSoS, given the condition that all inputs are correctly collected. This approach takes the flows into consideration, not the quality of the cyber data or information used for operation. The data flow integrity check would come from an organization's external security event monitoring and intrusion detection systems (here it is assumed that it is done externally). This work has not yet tested whether the DoSO analysis of CPSoS architectures can "predict" their ability to handle cyber disruptions. Toward prediction, this work has developed an extended ENA framework that makes such an investigation possible. In addition, recent research has shown promising indications that the DoSO analysis can guide resilience improvements in complex systems and SoSs. This motivates future research comparing the DoSO analysis of CPSoS against their resilience evaluation (to cyber threats) using state-of-the-art cyber-physical cosimulation testbeds such as those developed in [9], [51], and [52].
Past research has found that ecologically similar DoSO values can lead to desirable resilience in engineering networks with physically conserved flows (see [15], [43], and [47]). However, the cyber interactions in CPSoS have unique behavioral properties, including the ability to generate new information and copy information. Cyber-physical disruption scenarios can also involve the unique aspect of deception. Based on these considerations, it is possible that the favorable DoSO range for resilience from cyber threats may be different from the ecologically identified window of vitality (discussed in Section II-B). This is in line with prior work that suggests certain engineering networks and SoS may have specialized windows of vitality, especially in cases where the severity of potential threats is known [11], [12]. Future research should also investigate different CPSoS applications, such as oil and gas infrastructure and water distribution infrastructure, to test whether the favorable DoSO ranges vary based on the application. The approach presented here paves the way to uncover the existence and qualities of unique CPSoS windows of vitality. VI. CONCLUSION This article presented a novel multigraph model of CPSoS, along with guidelines and modified metrics that enable ENA evaluation of the overall (cyber and physical) network organization of the CPSoS. The proposed model can accommodate unbalanced flows (as long as they are consistent with the operating principles of the network), accounts for the transformation of flows, and is scale invariant. This article also demonstrated the practical application of the extended ENA framework and DoSO formulation using a realistic energy infrastructure case study. It is shown that the proposed model evaluates both topological and functional (flow-based) characteristics of SoS architectures, which makes it unique compared to existing graphical analyses that only consider the topological features of SoS architectures. The approach presented here paves the way to discover ecology-inspired design principles for resilient CPSoS. FIGURE 1. Schematic of the modeling procedure used in ENA, describing the (a) hypothetical food web as a (b) flow matrix. Figure based on [35]. FIGURE 3. Notional CPSoS modeled using the proposed multigraph approach and its corresponding 3-D flow matrix. FIGURE 5. DoSO trends with changing authority distribution for the star communication topology-based eight-substation CPPN. FIGURE 6. DoSO trends with changing authority distribution for the mesh communication topology-based eight-substation CPPN.
FIGURE 7. Examples of modeling redundancy in CPSoS architectures. (a) Physical redundancy (use of multiple RTUs). (b) Redundancy via flexible communication pathways.
9,962.4
2024-01-01T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Towards Autonomous Bridge Inspection: Sensor Mounting Using Aerial Manipulators This work uses unmanned aerial vehicles (UAVs) to attach a sensor to a bridge using a two-component adhesive in order to perform an inspection. Constant pressure must be applied for several minutes to form a bond between the two adhesive components. Therefore, one UAV sprays the colored component of the adhesive while the aerial manipulator transports the sensor, detects the contact point and attaches the sensor to it. A trajectory planning algorithm was developed around the dynamic model of the UAV and the manipulator attached to it, ensuring that the end-effector is parallel to the wall normal. Finally, the aerial manipulator achieves and maintains contact with a predefined force through an adaptive impedance control approach. Abstract: Periodic bridge inspections are required every several years to determine the state of a bridge. Most commonly, the inspection is performed using specialized trucks allowing human inspectors to review the conditions underneath the bridge, which requires a road closure. The aim of this paper was to use aerial manipulators to mount sensors on the bridge to collect the necessary data, thus eliminating the need for the road closure. To do so, a two-step approach is proposed: an unmanned aerial vehicle (UAV) equipped with a pressurized canister sprays the first glue component onto the target area; afterward, the aerial manipulator detects the precise location of the sprayed area and mounts the required sensor coated with the second glue component. The visual detection is based on a Red Green Blue-Depth (RGB-D) sensor and provides the target position and orientation. A trajectory is then planned based on the detected contact point, and it is executed through adaptive impedance control capable of achieving and maintaining a desired force reference. Such an approach allows the two glue components to form a solid bond. The described pipeline is validated in a simulation environment, while the visual detection is tested in an experimental environment. Introduction The world of unmanned aerial vehicles (UAVs) has been rapidly growing in recent years. As their design and control are perfected, these aerial vehicles have become more and more available. Nowadays, off-the-shelf ready-to-fly UAVs can be found and bought in shops, which makes them available to virtually anybody. This, in turn, has sparked a great deal of public interest in UAVs, since their potential can be found in applications such as agriculture, various inspections (bridges, buildings, wind turbines), geodetic terrain mapping, the film industry, and even hobby flying and recording videos from a first-person perspective. The vast majority of commercially available UAVs are equipped with a camera, while more specialized vehicles for terrain mapping or crop spraying offer a more diverse sensor suite. All of the aforementioned systems primarily observe and gather data about the environment, while having little to no ability to interact with and change the environment. One way to augment these vehicles for physical interaction is to attach a lightweight manipulator to their body, which is the main interest of the aerial manipulation field. Although such vehicles are more complex for both modeling and control, their benefit lies in performing versatile tasks that require interaction with the environment. In general, there are three types of bridge inspections: periodic, special and damage inspections.
Periodic bridge inspections differ from country to country according to national standards, and are usually performed at least once every two to three years. Special inspections are typically used to monitor the condition of deficient elements at specific locations based on predefined requirements. Damage inspections are usually performed after events that have occurred due to environmental impacts or human actions. The aim of a bridge inspection is to evaluate and assess structural safety and reliability. Current techniques are based on traditional visual inspection in combination with nondestructive testing methods (NDTs). Traditional visual inspection is performed by experienced (trained) engineers using specialized trucks equipped with cranes and baskets that allow inspectors to review the conditions underneath the bridge. During the inspection, the engineers are equipped with various NDT [1] tools to detect construction faults and defects such as corrosion, cracks, voids, weakening connections, and concrete delamination. Some of these NDTs require mounting small sensors to collect data, such as accelerometers, strain gauges, tilt meters and various transducers for acoustic or pressure measurements. Afterwards, the bridge is excited with vibrations, sound waves, tapping, etc., and the mounted sensors record responses to these specific excitations. Furthermore, there are usually requirements for performing measurements during the bridge inspection, such as the short- and long-term monitoring of vibrations, strains, displacements, etc. These inspections offer valuable information about the current bridge conditions, but there are a number of disadvantages. The use of trucks during inspections requires total or temporary road closures, which in turn require safety measures to keep traffic flowing as freely as possible. In addition, inspectors often encounter challenges in reaching all portions or elements in narrow areas, such as tight spaces between girders, beams and vaults. These issues significantly increase the time and overall cost of the inspection. An aerial robot, with the potential to reach these challenging locations on the bridge, could significantly reduce the time and cost of these inspections and improve worker safety. Moreover, we note that the aforementioned sensors are relatively lightweight, which makes them suitable for transportation and mounting with an aerial robot. Concept We envision a team of robots working together to attach sensors to bridges and similar grade separation infrastructure. In theory, such a task could be accomplished with a single aerial robot, at the cost of a complex mechanical design. The proposed team shown in Figure 1 consists of two drones. One drone applies the adhesive material, and the other attaches sensors. We envision a two-stage process using two-component adhesives, which form a solid bond from two separate reactive components: the "resin" and the "hardener". The first UAV applies the resin by spraying it onto the surface, while the second one attaches the sensor with the hardener already applied before the flight. It is important to follow the prescribed ratio of the resin and the hardener to achieve the desired physical properties of the adhesive. Only when mixed together do the two components form the adhesive. The reaction typically begins immediately after the two components are mixed, and the bond strength depends both on maintaining the contact and on the viscosity of the mixed adhesive during the process.
Manufacturers can control the cure rate to achieve various working times (worklife), ranging from minutes to weeks, until the final bond strength is achieved. Resin bases are usually more viscous than their respective hardener and are generally applied by brush, roller, applicator or spray. In this work, we propose attaching a canister of pressurized resin to the UAV and spraying it through a nozzle onto the infrastructure surface. In this scenario, the spray needs to be soft and non-turbulent to reduce the amount of material lost due to bouncing, and it must be colored to enable detection in the second stage. Spraying with drones is not a novel concept [2,3], so without loss of generality, we will omit the details of this design and instead focus on detecting, navigating to, and sustaining contact with the sprayed surface. In typical applications, the assemblies are usually kept in contact until sufficient bond strength is achieved. When fully cured, two-component adhesives are typically tough and rigid, with good temperature and chemical resistance. We rely on the robotic arm attached to the second aerial vehicle to apply a controlled contact force between the sensor and the surface. Maintaining this fixed assembly contact through the impedance control system enables a successful curing process and creates a permanent bond between the sensor and the infrastructure. After the first UAV sprays the resin onto the surface, the second aerial robot finds the sprayed part and brings the sensor's surface into contact with it. Before takeoff, the surface of the sensor is brushed with the hardener. Once contact is made, it is maintained for the prescribed curing time, after which the aerial robot releases the sensor and departs, leaving it attached to the surface. Figure 1. Two aerial robots working together to attach sensors to different parts of a bridge and similar grade separation infrastructure. The one on the left is used to spray the resin onto the surface, while the aerial robot on the right maintains contact with the surface through the sensor attached to its end-effector. Contributions This paper focuses on developing a method for mounting sensors on a bridge wall using an aerial manipulator. The first contribution is augmenting the model-based motion planning with the adaptive impedance controller. The motion planning method accounts for the underactuated nature of the multirotor UAV and corrects the end-effector configuration for an appropriate approach. This method also relies on a dexterity analysis which keeps the manipulator configuration within its optimal region, ensuring that the manipulator is never fully extended or contracted while mounting a sensor. The second contribution is the visual blob detection which locates and tracks the appropriate sensor mounting point. The blob detection has been experimentally verified in an indoor environment, yielding reliable and robust tracking of the mount location, as well as of the blob plane orientation. Finally, the third contribution is the simulation analysis of the system's performance, conducted on straight and inclined wall approaches. The simulation concentrates on testing the motion planning together with the impedance controller, performing a repeatability analysis and ensuring that the desired contact force is achieved. Related Work In the world of aerial inspections, a number of UAV-based solutions are being proposed by researchers. In [4], a technical survey for bridge inspections is given.
Researchers in [5] present the project AERIAL COgnitive Integrated Multi-task Robotic System with Extended Operation Range and Safety (AERIAL-CORE), which focuses on power line inspection, maintenance, and the installation of bird diverters and line spacers. Most of these approaches combine aerial platforms with new technologies to ensure faster and cheaper inspections. Nowadays, UAVs use high-resolution cameras for visual inspections and employ point cloud methods based on digital photogrammetry [6], Light Detection And Ranging (LiDAR)-based methods [7], digital image correlation [8], etc. There are also reports of visual compensation during aerial grasping [9], aerial grasping in strong winds [10], and the development of a fully actuated aerial manipulator for performing inspections underneath a bridge [11]. According to the experimental testing of contact-based bridge inspections, there is a need to develop a solution for mounting application sensors (such as accelerometers, strain gauges and tilt meters) on a bridge using a UAV. It is expected that a sophisticated system with the possibility of automatic sensor mounting will increase the frequency of measurements without interrupting traffic, ensure the safety of inspectors, and reduce inspection time and overall costs. As mentioned earlier, the second UAV needs to be aware of the position of the sprayed adhesive, which is applied in a blob-like pattern. For this purpose, a Red Green Blue-Depth (RGB-D) camera is used due to its favorable dimensions and weight. It provides image and depth information about the environment, which proves useful for object localization and UAV navigation. Such cameras were commonly found on UAVs and unmanned ground vehicles (UGVs) present at the recent MBZIRC 2020 competition. In [12,13], RGB-D information is used for color-based brick detection and localization for the wall-building challenge using UAVs and UGVs, respectively, while in [14] the authors use a Convolutional Neural Network (CNN)-based UAV detection and tracking method for the intruder UAV interception challenge. Furthermore, visual sensors proved useful in [15], where the authors performed a contact-based inspection of a flat surface with an aerial manipulator. The surface position and orientation were obtained by applying random sample consensus (RANSAC) on the RGB-D information. A thorough survey of 2D object detection methods from UAVs was given in [16]. In this paper, a modular framework for object detection is presented, in which a simple contour-based blob detector is implemented. The goal is to use RGB-D information to enable an autonomous inspection workflow. The blob position is obtained by segmenting the depth data at the points where the object is detected in the image, while RANSAC [17] is used to determine its orientation. After the successful detection of a blob-like pattern, it is necessary to attach the inspection sensor. The first phase of the sensor attachment is achieving contact, and the second is maintaining that contact to allow the two adhesive components to form a bond. Generally, the contact can be achieved with or without force measurements. In [18], contact with a wall is achieved and maintained. Researchers in [19] performed wall contact and aerial writing experiments. The work presented in [20] modeled and exploited the ceiling effect to perform an inspection underneath a bridge. The common denominator of the former approaches is maintaining the contact without any force feedback.
Although mounting a force sensor on a UAV increases both mechanical and control complexity, an immediate benefit is the ability to maintain a precise contact force regardless of the environment. In [21], the researchers used a force/torque sensor to achieve compliant control while pulling a rope and a semi-flexible bar. A fully actuated UAV with a manipulator was employed in [22] to compare force feedback control with and without the force/torque sensor. Researchers in [23] used a single-degree-of-freedom manipulator with a force sensor mounted at the end-effector to press an emergency switch. Relying on the blob-like pattern detection and the impedance control, a trajectory for achieving contact is required to steer the aerial manipulator towards the contact point. While mounting the sensor, it is essential that the approach and contact are perpendicular to the wall plane. This can be considered a task constraint imposed on the planner which the aerial manipulator has to satisfy. Researchers in [24] propose a task-constrained planner for a redundant robotic manipulator that enables them to do everyday tasks such as opening drawers or picking up objects. In [25], a task-constrained planner was developed for underactuated manipulators. Since multirotor UAVs are typically underactuated systems, it is necessary to address dynamics and kinematics while planning the end-effector trajectory. Aerial manipulator 6D end-effector trajectory tracking based on the differential flatness principle was presented in [26]. The underactuated nature of multirotor UAVs can cause unexpected deviations in the end-effector configuration. Researchers in [27] address this particular problem by including the dynamic model of the system into the planning procedure. In our previous work [28], a trajectory planning method based on the full dynamic model of an aerial manipulator was developed. In this paper, we further augmented this method to plan for the desired force required by the impedance controller. Mathematical Model In this section, the mathematical model of the aerial manipulator is presented. The coordinate systems convention is depicted in Figure 2. Furthermore, an analysis of the manipulator dexterity and reach was performed. Kinematics The inertial frame is defined as L_W. The body-fixed frame L_B is attached to the center of gravity of the UAV. The position of the UAV in the world frame is given by the vector p^B_W = [x y z]^T, and its attitude by the roll, pitch and yaw angles [φ θ ψ]^T. Combining the position and attitude vectors defines the generalized coordinates of the UAV as q_UAV = [x y z φ θ ψ]^T ∈ R^6. Written in a matrix form T^B_W, the transformation contains both the position and orientation of the UAV, obtained through on-board sensor fusion or through an external positioning system (i.e., GPS). The notation T^b_a ∈ R^(4×4) is used to denote a homogeneous transformation matrix between frames a and b. A rigid attachment between the body of the UAV and the base of the manipulator L_0 is considered, denoted with the transformation matrix T^0_B. The manipulator used in this work is an M = 3 degree-of-freedom (DoF) serial chain manipulator with the end-effector attached to the last joint. The DH parameters of the arm are given in Table 1. Using this notation, one can write a transformation matrix T^ee_0 between the manipulator base and its end-effector as a function of the joint variables q_1, q_2 and q_3; a minimal numerical sketch of this forward kinematics is given below.
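To make the kinematic chain concrete, the following minimal Python sketch builds T^ee_0 from standard DH transforms. Since the numerical DH values of the arm are not fully reproduced in the text (Table 1 omits the link sizes a_1 and d_3), the link parameters and joint layout below are hypothetical placeholders, not the actual design of the manipulator.

# A minimal sketch of the manipulator forward kinematics via standard DH
# transforms. The DH row values and link sizes are placeholders; the actual
# arm is specified in Table 1 of the paper.
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform between consecutive DH frames."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(q, a1=0.1, d3=0.05):
    """T^ee_0 for joint angles q = [q1, q2, q3] (placeholder link sizes)."""
    # (a, alpha, d, theta) rows; a virtual fourth joint would complete the
    # DH convention, as noted in Table 1.
    rows = [(a1, 0.0, 0.0, q[0]),
            (a1, 0.0, 0.0, q[1]),
            (0.0, np.pi / 2, d3, q[2])]
    T = np.eye(4)
    for a, alpha, d, theta in rows:
        T = T @ dh_transform(a, alpha, d, theta)
    return T  # end-effector pose in the manipulator base frame L_0

T_ee_0 = forward_kinematics([0.3, -0.6, 0.2])
p_ee_0, z_ee_0 = T_ee_0[:3, 3], T_ee_0[:3, 2]  # position and approach axis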
For brevity, the expression for the entire matrix T^ee_0 is left out; only the end-effector position and its approach vector equations (Equation (1)) are retained, written using the well-known abbreviation cos(q_1 + q_2) := C_12. Table 1. DH parameters of the 3-DoF manipulator attached to the UAV. A virtual joint q*_4 is added to fully comply with the DH convention. Link sizes a_1 and d_3 are omitted for clarity. Putting it all together, the full kinematic chain of the aerial manipulator can be constructed as T^ee_W = T^B_W T^0_B T^ee_0, (2) combining the fixed transformation T^0_B with T^B_W and T^ee_0, which depend on the UAV and manipulator motion, respectively. Since there is obvious coupling between the motion of the body and the manipulator arm, a parameter β ∈ [0, 1] is introduced to distribute the end-effector motion commands either to the UAV global position control or to the manipulator joint position control. To this end, the following distribution relationship is used: ∆P_UAV = β·∆P, ∆P_arm = (1 − β)·∆P, (3) where ∆P denotes the desired aerial manipulator displacement, expressed as the combination of body and arm motion ∆P = ∆P_UAV + ∆P_arm. (4) The manipulator displacement is denoted by ∆P_arm and the UAV displacement by ∆P_UAV, where both ∆P_arm and ∆P_UAV are expressed in the coordinate system L_0. With β = 1, the UAV motion is used to control the position of the end-effector. When β = 0, the situation is reversed and the manipulator motion is used to move the end-effector. For every other β, the end-effector motion is obtained in part by the UAV body and in part by the manipulator arm motion. There are obvious advantages in combining the motion of the UAV and the manipulator arm. The UAV can move in 3D space beyond the reach of the arm; however, the motion of the UAV is not as precise, nor is it dynamically decoupled. The kinematics of the arm enable the end-effector to attain the desired approach vector z^ee_0 = [cos(δ) 0 sin(δ)]^T, which, under the hovering assumption, becomes equal to the global approach vector z^ee_W pointing towards the contact point on the infrastructure. A straightforward mathematical manipulation of Equation (1) allows for writing a constraint equation on the joint angles (Equation (5)) which ensures that the manipulator points in the right direction, where δ is the desired manipulator inclination in the body x-z plane. To find the optimal manipulator pose during contact, the dexterity D and the reach R of the pose were taken into account, while considering that the joints should remain as far as possible from their physical limits, captured by a limit index L. Since the motion of the arm is constrained by its approach axis condition, a reduced form of the Jacobian matrix, J = [∂p^ee_0/∂q_1, ∂p^ee_0/∂q_2], is used to derive the pose dexterity index D = det(J^T·J) and determine how far the current pose is from the null space of the manipulator [29]. The reach of the pose, R = (p^ee_0)^T·p^ee_0, is also taken into account, since the goal is to keep the end-effector and the contact point as far away from the UAV body as possible. Finally, the joint-limit index L is defined to measure how far the given configuration (i.e., q_1, q_2) is from the joint limits Q_1max, Q_2max. Normalizing D, R and L enables combining the three conditions into a single manifold M = D·R·L and finding the optimal configuration q*_M = [q*_1 q*_2 q*_3]^T for the desired approach angle δ. The described method is depicted in Figure 3 for the specific case of the approach angle δ = 0°, but it can be extended to any value of the approach angle through Equation (5). As a side note, the manipulator attachment on top of the UAV body was chosen so as to be able to reach surfaces underneath the bridge. Although this shifts the center of gravity upwards, the stability of the system is not compromised since the manipulator is constructed of lightweight materials. A minimal numerical sketch of the manifold construction is given below.
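The following minimal sketch illustrates the manifold construction M = D·R·L on a grid of joint angles, under the simplifying assumption of a planar two-link chain for the reduced kinematics; the link lengths and joint limits are hypothetical placeholders rather than the manipulator's true parameters.

# A minimal sketch of the dexterity-reach-limit manifold search over a grid
# of joint angles, assuming a planar 2R chain for the reduced kinematics.
import numpy as np

l1, l2 = 0.2, 0.2                     # hypothetical link lengths
Q1max, Q2max = np.pi, np.pi           # hypothetical joint limits

q1, q2 = np.meshgrid(np.linspace(-Q1max, Q1max, 181),
                     np.linspace(-Q2max, Q2max, 181))

# Planar end-effector position in the body x-z plane.
x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
z = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)

# Dexterity: |det J| of the reduced 2x2 Jacobian, which for a planar 2R
# chain has the closed form l1*l2*sin(q2); it vanishes near singularities.
D = np.abs(l1 * l2 * np.sin(q2))
R = x**2 + z**2                       # reach: squared distance from the base
L = (1 - (q1 / Q1max)**2) * (1 - (q2 / Q2max)**2)  # distance to joint limits

# Normalize and combine into a single manifold M = D * R * L.
M = (D / D.max()) * (R / R.max()) * (L / L.max())
i, j = np.unravel_index(np.argmax(M), M.shape)
q_star = (q1[i, j], q2[i, j])         # optimal configuration for delta = 0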
Figure 3. This analysis is performed for δ = 0°: (a) the dexterity D surface shows the measure of how far the manipulator is from the null space; values around zero are closer to the null space; (b) the reach R surface shows how far the end-effector can move in a certain configuration; this value tends towards zero as the arm approaches a folded configuration; (c) the limit L depicts how far a certain configuration is from the physical limits of the manipulator joints; and (d) the combined manifold M of the formerly described surfaces; higher values offer a better trade-off between dexterity, reach and limit, defining the optimal manipulator configuration q*_M. Dynamics The most complicated task of the aerial manipulator is attaching the sensor to a wall and maintaining the required force reference while the two-component adhesive hardens. To successfully perform such a task, the coupled UAV-manipulator system dynamics have to be addressed for precise end-effector configuration planning. Considering the UAV dynamics only, the derivative of the generalized coordinates is q̇_UAV = [(ṗ^B_W)^T (ω^B_W)^T]^T. Here, (ṗ^B_W)^T is the linear velocity of the body of the UAV in the world frame and (ω^B_W)^T represents the angular velocity of the UAV in the world frame. The UAV's propulsion system consists of n_p propellers rigidly attached to the body. Each propeller produces force and torque along the z_B axis. The vector of the propeller rotational velocities is simply defined as Ω_UAV = [Ω_1 … Ω_np]^T. The force and torque produced by each propeller are non-linear functions of the rotational velocity Ω_UAV. Rather than using the rotational velocities as control inputs, they can be mapped to a more convenient space. Namely, the mapped control input space can be written as u_UAV = K·Ω²_UAV, where the square is taken element-wise, K ∈ R^(4×n_p) is the mapping matrix, and u_UAV = [u_1 u_2 u_3 u_4]^T, where u_4 represents the net thrust and u_1, u_2 and u_3 are the moments around the body frame axes. As stated earlier, the manipulator consists of three rotational DoFs. Therefore, the joint positions of the manipulator are defined as q_M = [q_1 q_2 q_3]^T. The rotational velocity of each joint is the time derivative of the joint positions, q̇_M = dq_M/dt. The joint torques are considered the control inputs of the manipulator, u_M = [τ_1 τ_2 τ_3]^T. The resulting generalized coordinates of the aerial manipulator can be written as q = [q_UAV^T q_M^T]^T ∈ R^9, and the velocities can be obtained in the same manner as q̇ = [q̇_UAV^T q̇_M^T]^T ∈ R^9. The resulting control inputs of the system can be expressed as u = [u_UAV^T u_M^T]^T ∈ R^7. Finally, the full system dynamics can be written as M(q)q̈ + c(q, q̇) + g(q) = B·u, where M(q) ∈ R^(9×9) is the inertia matrix, c(q, q̇) ∈ R^9 is the vector of centrifugal and Coriolis forces, g(q) ∈ R^9 is the gravitational term, and B ∈ R^(9×7) maps the control inputs onto the generalized coordinates. Control System The overall control of the aerial manipulator consists of several nested control loops. The complete controller overview, with the motion planning and blob detection blocks, is depicted in Figure 4. Aerial Manipulator Control At the innermost level, the UAV is controlled through cascaded attitude and rate controllers. The input to these controllers is the desired orientation, and based on the vehicle state, the output is the vector of the rotors' angular velocities. The second level of control, which uses the inner attitude control loop, consists of two additional cascades: the position and the velocity control.
These controllers receive a reference position and a velocity feed-forward value to generate the desired vehicle orientation and thrust. The manipulator joints are controlled through standard Proportional-Integral-Derivative (PID) controllers; however, in a real-world setting, servo motors with integrated control are typically used. As mentioned earlier, it is important to track the desired force after contact with a wall is achieved. To accomplish this, an adaptive impedance controller is employed to generate an appropriate setpoint for the position controller. This controller receives a trajectory supplied by the mission planner, which steers the aerial manipulator towards the sensor mounting target on the bridge. Adaptive Impedance Control The objective of the adaptive impedance controller is to ensure a stable physical interaction between the aerial manipulator and the environment [30]. As mentioned earlier, the standard UAV control scheme is based on position and attitude controllers. When interacting with the environment, the desired contact force must be considered. The position-controlled system can be extended to follow the desired force by introducing an impedance filter. The design of such a filter is explained here for a single DoF. The behavior of the system is defined by the target impedance m·(ẍ_c − ẍ_r) + b·(ẋ_c − ẋ_r) + k·(x_c − x_r) = e(t), (10) where m, b and k are constants, x_r(t) is the referent position, provided to the impedance filter as an input, and x_c(t) is the output of the impedance filter representing the position command. The filter is designed as a linear second-order system with a dynamic relationship between the position and the contact force tracking error e(t), so that it mimics a mass-spring-damper system. The contact force tracking error is defined as e(t) = f_r(t) − f(t), (11) where f_r(t) is the other filter input defining the referent force, and f(t) is the measured (exerted) contact force. If the environment is modeled as a first-order elastic system (equivalent spring system) with unknown stiffness k_e, the measured force can be approximated as f(t) = k_e·(x(t) − x_e(t)), (12) where x(t) is the position of the manipulator and x_e(t) is the position of the environment in an unexcited state. By substituting Equation (12) in Equation (11), the position of the aerial manipulator can be expressed as x(t) = x_e(t) + (f_r(t) − e(t))/k_e. (13) Assuming that the commanded position value can be achieved by the aerial manipulator, i.e., x = x_c, the substitution of Equation (13) in Equation (10) describes the system in the steady state as k·(x_e + (f_r − e)/k_e − x_r) = e. (14) For a contact force error of zero in the steady state, the following must hold: x_r = x_e + f_r/k_e. (15) In other words, the position setpoint has to be designed in such a way that it compensates for the displacement of the environment due to the exerted contact force. To ensure this, the value of the unknown environment stiffness k_e is needed. Furthermore, k_e plays a fundamental role in the stability of the impedance filter in Equation (10), which ultimately affects the stability of the aerial manipulator while in contact with the environment. A stable contact between the aerial manipulator and the environment can be ensured using the Hurwitz stability criterion, by designing the system with b/m > 0 and (k + k_e)/m > 0. However, since k_e is unknown, an adaptation law for the position setpoint that guarantees contact stability while compensating for this hidden, unknown parameter is proposed. A discrete-time sketch of the impedance filter in contact with a spring-like environment is given below.
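To make the filter behavior concrete, the following minimal sketch integrates the target impedance of Equation (10) against the spring-like environment of Equation (12); all numerical values are illustrative placeholders. With a setpoint x_r that does not satisfy Equation (15), the steady-state force deviates from f_r, which is precisely the deficiency that the adaptation law compensates.

# A minimal discrete-time sketch of the impedance filter (Eq. (10)) in
# contact with a spring-like environment (Eq. (12)). All values are
# illustrative placeholders.
import numpy as np

m, b, k = 1.0, 8.0, 16.0      # filter constants (hypothetical)
k_e, x_e = 200.0, 0.0         # environment stiffness and rest position
dt, T = 0.001, 5.0            # integration step and horizon

x_c = dx_c = 0.0              # commanded position and its rate
x_r, f_r = 0.05, 2.0          # constant position and force references
log = []
for _ in range(int(T / dt)):
    f = k_e * max(x_c - x_e, 0.0)        # contact force (no pulling)
    e = f_r - f                          # force tracking error, Eq. (11)
    # Target impedance, Eq. (10), with constant x_r:
    # m*ddx_c + b*dx_c + k*(x_c - x_r) = e
    ddx_c = (e - b * dx_c - k * (x_c - x_r)) / m
    dx_c += ddx_c * dt
    x_c += dx_c * dt
    log.append(f)

# The steady-state force equals f_r only if x_r satisfies Eq. (15),
# i.e., x_r = x_e + f_r / k_e; here it does not, so a residual error remains.
print("steady-state force:", log[-1])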
The adaptation law is derived starting from Equation (15). An adaptive parameter κ(t) is introduced so that x_r(t) = x_e + κ(t)·f_r(t). (16) It can be shown using Lyapunov stability analysis that the adaptation dynamics for κ(t) given in Equation (17) yields a stable system response. We refer the interested reader to the proof, which can be found in Appendix A. Motion Planning As discussed in Section 1.1, the main concept of this paper was to use a team of two UAVs, each applying one component of the adhesive. To apply the "resin" component, the UAV has to plan a collision-free trajectory and position itself in front of the target area to start spraying. This is fundamentally different from mounting a sensor coated with the "hardener". In the latter case, apart from planning a collision-free trajectory, the manipulator-endowed UAV has to apply pressure for a certain amount of time for the two components to mix. From the perspective of motion planning, the planner needs to be augmented to include a manipulator with three degrees of freedom, the contact force and the weighting parameter β. To successfully maintain the pressure, the planner relies on the impedance controller described in Section 4.2. Furthermore, one of the requirements when mounting the sensor on the wall is for the sensor to be perpendicular to the wall. Therefore, it is necessary to take the underactuated nature of the multirotor UAV into account during the motion planning. Namely, errors in the planned end-effector configuration are mainly induced by the roll and pitch angles while executing the planned motion. In our previous work [28], we developed a model-based motion planner for aerial manipulators that is capable of correcting the aforementioned end-effector deviations. In this paper, the idea from [28] is extended to consider the impedance control when obtaining the full state of the aerial manipulator. Waypoint Configuration When dealing with an aerial manipulator, exerting some contact force inevitably yields a high-dimensional waypoint configuration. We define a single waypoint as a set of UAV and joint poses, together with the force reference and the motion distribution factor β: w = [q_B^T q_M^T f_r^T β]^T ∈ R^13, (18) where q_B ∈ R^6 and q_M ∈ R^3 are the generalized coordinates of the UAV and the manipulator defined in Section 3.1. The force reference vector f_r = [f_x f_y f_z]^T ∈ R^3 and the weighting scalar parameter β are required by the impedance controller. Furthermore, the impedance controller assumes a step change of these values. Ideally, the change should occur at the moment of contact, since no force can be exerted without contact. Therefore, these values are only changed at the final waypoint. Apart from the desired force and the parameter β, the final waypoint must contain the UAV position and orientation, as well as the manipulator joint configuration. Specifying these values relies on the blob detection algorithm presented in Section 6. Namely, the algorithm outputs the position and orientation of the detected blob in the world frame. Following the manipulator dexterity and reach analysis described in Section 3.1, the optimal manipulator configuration q*_M is obtained based on the provided plane normal. The optimal manipulator configuration is then used as the desired configuration for the final waypoint. This way, during operation, the manipulator never reaches a fully extended or contracted pose, which allows the impedance controller to command both the arm and the UAV to achieve and maintain the desired force. A short sketch of the waypoint assembly is given below.
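The following short sketch assembles waypoints of the form of Equation (18), with the force reference and β changed only at the final (contact) waypoint; the numerical poses are illustrative placeholders.

# A minimal sketch of assembling the 13-dimensional waypoints of Eq. (18).
# The pose values below are illustrative placeholders.
import numpy as np

def waypoint(q_B, q_M, f_r=(0.0, 0.0, 0.0), beta=0.5):
    """w = [q_B^T q_M^T f_r^T beta]^T in R^13."""
    return np.concatenate([q_B, q_M, f_r, [beta]])

q_M_star = np.array([0.3, -0.6, 0.2])       # optimal arm configuration
approach = waypoint(np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0]), q_M_star)
# Final waypoint: step change to the desired contact force along x.
contact = waypoint(np.array([0.4, 0.0, 1.0, 0.0, 0.0, 0.0]), q_M_star,
                   f_r=(2.0, 0.0, 0.0), beta=0.5)
path = np.vstack([approach, contact])        # input to the planner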
Trajectory Planning There are three phases in the trajectory planning procedure. First, an initial trajectory is planned based on the provided waypoints. Second, the initial trajectory is sent to a simulated model in order to obtain the full state of the aerial manipulator during the trajectory execution. Third, the end-effector configuration is corrected based on the full state of the vehicle, and the final trajectory is sent to the target aerial manipulator. Initial Trajectory To execute a smooth motion towards the desired waypoint, we use the time-optimal path parameterization by reachability analysis (TOPP-RA) trajectory planner [31]. The TOPP-RA algorithm searches for the time-optimal trajectory and is based on a "bang-bang" principle on the generalized torque of each DoF. The planner is capable of receiving input waypoints of an arbitrary dimension and of outputting a smooth trajectory. Each DoF has to be provided with dynamical constraints in terms of velocity and acceleration, which are respected during the trajectory generation process. As mentioned, the input to the TOPP-RA planner is a path given by a set of n ≥ 2 waypoints: P = {w_1, w_2, …, w_n}. (19) Based on the dynamical constraints, the output of the TOPP-RA planner is a sampled trajectory T = {t_1, …, t_{n_t}}, (20) where t = [(w)^T (ẇ)^T (ẅ)^T]^T ∈ R^(3×13) is a single sampled trajectory point consisting of position, velocity and acceleration; T_s is the sampling time; and n_t is the number of points in the sampled trajectory. Note that each trajectory point contains both roll and pitch angles. Although these angles can be planned through the TOPP-RA algorithm, they are omitted at this point because of the underactuated nature of the multirotor UAV. Nevertheless, they are used later in the paper when the model corrections are applied. The impedance controller expects a step change in the force and weighting parameter β referent values. To satisfy this requirement, large velocity and acceleration constraints are imposed on these DoFs. However, because the other DoFs have constraints below their physical limits, the overall force and β trajectory has a slower, dynamically smooth profile. These profiles also exhibit overshoots and undershoots, which are not acceptable because they violate the hard constraints required for β. To tackle this problem, a simple piecewise constant velocity interpolation is applied to the force and β. This way, a large velocity constraint produces a step change, which is a suitable input to the impedance controller. A visual example of the difference between the TOPP-RA and the piecewise constant velocity interpolation is depicted in Figure 5, and a small numerical sketch contrasting the two profiles is given below. Model-Based Corrections The initial trajectory from Equation (20) is planned without any consideration of the underactuated nature of the multirotor UAV. To obtain the unknowns, namely the roll and pitch angles, the initial trajectory can be executed in a simulation environment. The chosen simulation environment is, in our case, Gazebo, because it is realistic and supports the robot operating system (ROS), which is the backbone of our implementation. The simulated aerial manipulator is based on the mathematical model described in Section 3. Standard cascaded PID controllers are employed for low-level attitude and high-level position control. The impedance controller is built on top of the position controller and provides a position reference based on the input trajectory. More details about the simulation environment are provided in Section 7.
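Returning to the interpolation of the force and β channels described above, the following small sketch contrasts a dynamically smooth reference profile with the piecewise constant velocity interpolation; the sigmoid stand-in for a smooth TOPP-RA profile and all timings are illustrative assumptions.

# A minimal sketch contrasting a dynamically smooth reference with the
# piecewise constant velocity interpolation used for the force and beta
# channels; values and timings are illustrative placeholders.
import numpy as np

t = np.linspace(0.0, 4.0, 401)
t_c = 2.0                      # planned instant of contact

# Smooth profile (stand-in for a velocity/acceleration-limited channel).
smooth = 2.0 / (1.0 + np.exp(-4.0 * (t - t_c)))

# Piecewise constant velocity interpolation with a very large velocity
# limit degenerates into a step change at the contact waypoint.
step = np.where(t < t_c, 0.0, 2.0)

# The impedance controller receives the step profile for f_r and beta,
# while the remaining DoFs follow their smooth profiles.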
The first step is executing the initial trajectory in the aforementioned simulation environment. While executing, the roll and pitch angles are recorded, as they are needed for obtaining the full state of the UAV. Rearranging Equation (2) and plugging the recorded roll and pitch angles into the full state of the UAV, the transform of the end-effector in the manipulator base frame can be obtained as T^ee_0 = (T^0_B)^(−1)·(T^B_W)^(−1)·T^ee_W. Using the inverse kinematics of the manipulator, the joint values q_M for the desired end-effector configuration are obtained. This way, the null space of the aerial manipulator is used for the end-effector correction. Note that, due to the configuration of the manipulator, an exact solution of the inverse kinematics will not always exist. In such a case, the closest approximate solution is used instead. The final trajectory is constructed by replacing the initial q_M with the corrected values. This trajectory is afterwards sent to the target aerial manipulator. The careful reader should note that the developed three-DoF manipulator operates on the x and z positions in the body frame, as well as on the pitch angle. This allows the impedance controller to maintain the orientation perpendicular to the wall, while compensating for the UAV body motion in the x and z axes. However, the system will experience disturbances and control errors acting on the roll and yaw angles, and on the lateral movement along the body y axis. We can address these issues either with mechanical dampers or by adding additional degrees of freedom to the manipulator, which will be explored in future work. Blob Detection This section presents the methods we propose to detect the position and orientation of the sprayed adhesive blob. A modular object detection framework, as shown in Figure 6, is designed to ensure a reliable blob pose detection. Since the detection is to be done on board the UAVs, RGB-D cameras are selected. Therefore, the inputs to the framework are images and organized point clouds obtained from the visual sensor. The remainder of this section introduces the individual components of the framework and adds implementation details where necessary. The sensor message synchronizer is responsible for the time-based synchronization of the given sensor message streams. In the case of blob detection, a module that synchronizes images and organized point clouds from an RGB-D camera is derived. This is necessary since the algorithm detects the blob in both the 2D image space and the 3D point clouds, which are not necessarily sampled simultaneously. The underlying implementation uses ROS libraries to synchronize messages with an approximate time policy. An object detector attempts to find a set of object poses using the synchronized sensor data. The module used in this paper detects blob poses and is implemented in the following way. First, all blob positions and radii are found in the image frame using the standard blob detection functionality found in the OpenCV libraries. Second, the depth information corresponding to the detected blobs is isolated from the organized point cloud. Finally, the blob positions are calculated as centroids of the corresponding depth positions, while the orientation is obtained through the random sample consensus (RANSAC) algorithm from the Point Cloud Library (PCL). A minimal sketch of this detection step is given below.
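The following minimal sketch illustrates the detection step with OpenCV blob detection, a depth centroid, and a plane fit. A least-squares singular value decomposition fit stands in here for the PCL RANSAC used in the paper, and the HSV color range is a hypothetical choice for a colored resin.

# A minimal sketch of the blob detection step: OpenCV blob detection in the
# image, centroid of the corresponding depth points, and a plane fit for
# orientation (an SVD least-squares fit instead of the paper's PCL RANSAC).
import cv2
import numpy as np

def detect_blob(bgr_image, cloud):
    """cloud: organized point cloud, shape (H, W, 3), NaN where invalid."""
    # Isolate the sprayed color (hypothetical HSV range for a red resin).
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))

    detector = cv2.SimpleBlobDetector_create()
    keypoints = detector.detect(255 - mask)   # default detector finds dark blobs
    if not keypoints:
        return None

    kp = max(keypoints, key=lambda k: k.size)  # keep the largest blob
    u, v, r = int(kp.pt[0]), int(kp.pt[1]), int(kp.size / 2)

    # 3D points of the blob region from the organized cloud.
    patch = cloud[max(v - r, 0):v + r, max(u - r, 0):u + r].reshape(-1, 3)
    pts = patch[~np.isnan(patch).any(axis=1)]
    centroid = pts.mean(axis=0)

    # Plane normal from the smallest singular vector of the centered points.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal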
The remaining framework components are independent of the synchronizer and detector modules. The pose tracker is used to track the obtained object poses through multiple frames based on the closest Euclidean distance criterion. This component solves the issue of multiple objects being visible, as it always outputs the pose of the currently tracked object. Moreover, it increases the robustness of the system, since it remembers the object poses for a certain number of frames, which allows some leniency with the detector. The goal of the world transformation component is to transform the tracked pose from the sensor frame to the world frame using the estimated odometry from an external source, which any UAV should have access to. Additionally, since the blob poses are to be sent as references to the trajectory planner, it is important to correctly compute the blob orientation. Since the blob lies on a flat surface, there are two equally correct possible orientations that can be detected. Therefore, the blob orientation is chosen as follows: R_blob is kept if r_1B·r_1blob < 0, and replaced by R_blob·R_180 otherwise, where r_1B is the heading component of the UAV rotation matrix expressed in world coordinates, R_B = [r_1B r_2B r_3B], r_1blob is the heading component of the blob rotation matrix expressed in world coordinates, R_blob = [r_1blob r_2blob r_3blob], and R_180 = diag(−1, −1, 1). Finally, a linear Kalman filter with a constant velocity model is used to further increase the robustness of the system and provide smoother blob position estimates. The constant velocity model for each axis is given as x_{k+1} = [1 T_s; 0 1]·x_k + w_k, where T_s is the discretization step, x_k ∈ R^2 is the state vector containing the position and velocity along the corresponding axis, and w_k ∈ R^2 is the process noise. The observation model along a single axis is given as z_k = [1 0]·x_k + v_k, where z_k ∈ R is the position observation along the corresponding axis and v_k ∈ R is the measurement noise. If the detector is unable to provide measurements and the pose tracker removes the pose from the tracking set, the linear Kalman filter is still able to provide blob position estimates; a minimal sketch of this filter is given at the end of this section. Experimental validation of the described methods is performed in an indoor Optitrack environment with an Intel Realsense D435 RGB-D camera. To ensure that ground truth is available for detection validation, reflective markers are attached to both the camera and the blob. In order to determine the transformation between the camera optical frame and the reflective markers attached to the camera, an optimization-based calibration approach is used, as described in [32]. The results are shown in Figures 7 and 8. The experiments are performed with the UAV in constant motion while looking in the general direction of the painted blob. Figure 7 shows the relative difference between the ground truth UAV motion in the world frame and the UAV motion as observed from the detected blob frame. Figure 8 presents the comparison of the ground truth and detected blob positions expressed in the world frame. It is important to note that camera calibration errors can manifest themselves as static offsets between the detected and ground truth blob positions in Figure 8. However, in this case, the visual detection provided reliable blob tracking results, which is a direct consequence of careful camera calibration.
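The following minimal sketch implements the per-axis constant velocity Kalman filter described above; the noise covariances and the sampling time are illustrative placeholders.

# A minimal sketch of the per-axis constant velocity Kalman filter used to
# smooth blob position estimates; noise covariances are placeholders.
import numpy as np

class AxisKalman:
    def __init__(self, Ts, q=1e-3, r=1e-2):
        self.A = np.array([[1.0, Ts], [0.0, 1.0]])   # constant velocity model
        self.H = np.array([[1.0, 0.0]])              # only position is measured
        self.Q = q * np.eye(2)                       # process noise covariance
        self.R = np.array([[r]])                     # measurement noise covariance
        self.x = np.zeros(2)                         # state: [position, velocity]
        self.P = np.eye(2)

    def predict(self):
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x[0]   # estimate remains available without detections

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = AxisKalman(Ts=1.0 / 30.0)
for z in [0.50, 0.52, None, 0.55]:   # None: the detector dropped a frame
    kf.predict()
    if z is not None:
        kf.update(z)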
Simulation The environment used for simulating the UAV and manipulator dynamics, as well as the contact with the environment, is the widely accepted Gazebo simulator. It is realistic and highly modular, with a large community and support for the robot operating system (ROS), which is also the primary implementation environment for the impedance control, motion planning and blob detection. Through ROS, Gazebo has a large variety of developed plugins realistically simulating various sensors and actuators. All simulations were conducted with the Linux Ubuntu 18.04 operating system and the ROS Melodic middleware installed. The UAV is modeled as a single rigid body with n_p propellers mounted at the end of each arm. As propulsion units, these propellers generate thrust along the z axis of the UAV body. To simulate the propeller dynamics, the rotors_simulator package is used. It contains a plugin that models thrust based on the user-provided propeller parameters [33]. Furthermore, to obtain the UAV attitude and position, IMU and odometry plugins are mounted on the vehicle. The manipulator is mounted on the body of the UAV and consists of three joints connected with links. A rod-type tool is mounted as the end-effector, with a force-torque sensor required by the impedance controller. Furthermore, a monocular camera with an infrared projector is also mounted for the blob detection. End-Effector Motion Distribution Analysis Given some end-effector configuration, the inverse kinematics is responsible for finding the UAV position and yaw angle, as well as the manipulator joint values, that satisfy the desired configuration. The parameter β from Equation (3) defines the ratio in which the manipulator joints and the UAV position and orientation contribute to achieving the desired end-effector configuration, as described in Section 3.1. Recalling the values, β = 1 moves only the UAV in the direction of the desired end-effector configuration, while β = 0 uses the inverse kinematics of the manipulator to achieve the desired configuration. To determine the influence of β on the overall system, an analysis was conducted with different β values. The desired end-effector configuration was chosen to be in contact with the bridge wall, i.e., a plane perpendicular to the ground, which requires the force reference along the x axis. The waypoints for the trajectory planner were kept the same across all trials, and only β was changed. The results of this analysis are depicted in Figure 9. As can be observed, all trials produced very similar results, with an oscillating force upon contact that eventually reaches the desired reference, providing no obvious conclusion regarding how to select the optimal β. However, following the dexterity analysis from Section 3.1, relying only on the manipulator motion might drive the system close to its limits due to the UAV body movement. On the other hand, the motion of the UAV induces disturbances in the end-effector pose control. The manipulator is therefore responsible for compensating the errors introduced by the motion of the UAV body. Taking all of the aforementioned into account, the value is chosen as β = 0.5, so that both the manipulator and the UAV are simultaneously used to maintain a steady contact force. A short sketch of this motion distribution is given below.
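The following short sketch illustrates the motion distribution of Equation (3): a desired end-effector displacement, expressed in L_0, is split between UAV and arm commands according to β; the displacement values are illustrative.

# A minimal sketch of the motion distribution of Eq. (3): a desired
# end-effector displacement split between UAV body motion and arm motion.
import numpy as np

def distribute(delta_P, beta):
    """Split a displacement (expressed in frame L_0) between UAV and arm."""
    delta_P = np.asarray(delta_P, dtype=float)
    delta_P_uav = beta * delta_P          # commanded to UAV position control
    delta_P_arm = (1.0 - beta) * delta_P  # commanded to joint position control
    return delta_P_uav, delta_P_arm

# beta = 0.5: the manipulator and the UAV share the correction equally.
uav_cmd, arm_cmd = distribute([0.02, 0.0, -0.01], beta=0.5)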
Bridge Sensor Mounting Since the concept of this paper was to mount inspection sensors on a bridge, the simulation trials were tailored in the same direction. After spraying the first component, it is necessary to achieve and maintain a stable contact while the second adhesive component on the sensor cures. Since the manipulator is attached above the propellers, the workspace of the manipulator is limited to contact above the UAV or on a plane perpendicular to the ground. Naturally, the first set of simulation trials was conducted by holding the desired force on a plane perpendicular to the ground. In this case, the contact force acts only along the x axis, and the response is depicted in Figure 10. The time delay between the planned and executed contact is present due to the impedance filter, which slows down the dynamics of the referent trajectory. After the initial contact, there are some oscillations and an overshoot, which diminish over time, and the desired force reference is achieved. The second set of simulation trials included an inclined contact plane. This requires the UAV to approach from below the plane and achieve contact perpendicular to the plane. Since the plane is inclined at δ = 68°, the planned force referent values have components along both the x and z axes, as shown in Figure 11. Similarly to the previous example, the force response has some oscillations around the instant of contact, but it eventually settles and reaches the desired force reference. The simulation tests for δ = 0° and δ = 68° were performed n = 10 times for each case, as depicted in Figure 12. The left portion of the figure shows the dot product between the normal of the blob r_t and the end-effector orientation vector r_ee. If the value of the dot product r_t·r_ee = 1, the two vectors are parallel, which results in a successful approach. For both angles, the dot product is very close to 1 and the orientation error is negligible. On the right, the distance between the center of the target and the contact point is shown. The error distance is in both cases less than 0.1 m, which ensures the relatively high precision of sensor mounting, well within the margins for bridge inspection. The accompanying video of the simulation tests can be found on our YouTube channel [34]. Conclusions This paper presents a step towards autonomous bridge inspection by investigating the possibility of mounting various inspection sensors using an aerial manipulator. Currently, inspectors use specialized trucks with cranes and baskets in order to access the area underneath the bridge. This inevitably leads to road closures, which pose an inconvenience for both inspectors and traffic. To alleviate this problem, the aforementioned aerial manipulators can be used to access difficult-to-reach areas of the bridge. As mounting a sensor requires forming a bond between the wall and the sensor, we envision using a two-component adhesive with a short cure time. Since the aerial manipulator has to achieve and maintain contact with the sensor mount point, short cure times are desirable because of the limited flight time of these platforms. Nevertheless, current flight times of outdoor multirotors reach up to 30 min, which ensures enough time for the two adhesive components to form the bond. Although preliminary, the results of this paper seem promising. The visual detection was extensively tested and reliably tracks the blob position. The adaptive impedance controller is capable of maintaining the required force. Even though there are some oscillations and settling times in the force response, in practical use this does not make much difference, since the curing time of the adhesive is at least several minutes. The trajectory planner was augmented to plan in the force space, which allows for setting the force reference step change before the contact. The simulation results show the high repeatability of the overall system, which gives us the confidence to perform experiments in a real-world environment. Our first step in future work will be to perform experiments in a controlled laboratory environment. The outdoor environment poses a different set of challenges, including a lower-accuracy positioning system and unpredictable disturbances, i.e., wind gusts.
Since these factors will inevitably reflect on the overall end-effector accuracy, we are looking into augmenting the manipulator to be able to compensate for lateral movements, as well as for the roll and yaw angles. To further increase the system's accuracy, the developed visual tracker will be used to improve feedback around the tracked blob on the bridge wall in real-world experiments. Conflicts of Interest: The authors declare no conflict of interest. Appendix A (fragment). After reordering, we obtain a condition involving σ(t) = p_1·e(t) + p_2·ė(t). By choosing ġ(t) according to Equation (A8), where γ_d is a positive constant, the Lyapunov condition in Equation (A7) becomes the condition in Equation (A9); i.e., for the adaptation law to be stable, g(t) should be bounded. Since x_r, ẋ_r and ẍ_r are bounded, so are e, ė and ë. Therefore, g(t) is also bounded, i.e., the condition in Equation (A9) is satisfied. The adaptation law is finally obtained by taking the derivative of Equation (A3) and substituting ġ(t) with Equation (A8), which yields Equation (17). The parameters γ and γ_d dictate the adaptation dynamics. Based on the measured contact force, the adaptation law of Equation (17) estimates the adaptation parameter κ (the reciprocal value of the environment stiffness), which is then used in Equation (16) for calculating the referent position x_r.
Chaotic turnover of rare and abundant species in a strongly interacting model community Significance A prominent feature of ecological communities is that a few species are abundant while most are rare. Using a standard community model, in which species interactions are assigned fixed random values, we show that chaos is a generic outcome if interactions are strong and immigration prevents extinction. Each species then alternates, in an effectively stochastic way, between long periods of rarity and shorter periods of high abundance; yet the overall distribution of species abundances remains conserved and qualitatively consistent with observations in marine plankton protists. Our model results contribute to a rekindled debate about the role of chaos in ecological communities. Introduction Scientists have long marvelled at the complexity of ecosystems, and wondered what ecological processes allow them to remain diverse despite species' competition for shared resources and space. The coexistence of a large number of different taxa reaches its climax in microbial communities, where several thousands of Operational Taxonomic Units (referred to as 'species' in the following) can be detected when sequencing samples of soil or water [1]. A general feature of microbial communities is that abundances vary widely among the species that co-occur at one time of observation. Generally, just a handful of abundant species make up most of the total community biomass [2][3][4], whereas the large majority of rare species are present in such low numbers as to only be detectable by sufficiently deep genomic sequencing. Multiple hypotheses have been put forward to explain the nature and origin of such a 'rare biosphere' [2,5]. One possibility is that distinct taxa belong preferentially to the abundant sub-community or to the rare one, either because of their traits, or because evolution drives new, more adapted species to dominance [6,7]. The notion that the abundant and rare sub-communities are well distinct is supported by the observation that their macroecological patterns appear to differ [8,9]. Alternatively, diversity in community composition could be maintained by variations in the environment that allow for many spatio-temporal niches [10]. The concept of a microbial seed bank encapsulates the idea that a small number of dominant taxa are maintained by environmental filtering, while most taxa remain dormant until they meet conditions favourable to their growth [11]. In this view, the abundance or rarity of a given species is a function of the extrinsically driven conditions at the time of sampling. A third hypothesis is that the cycling of species between the rare and abundant components of the community is driven by intrinsic ecological fluctuations that are self-sustained even in the absence of environmental variation. Such oscillations have been found in controlled settings [12,13], and have been argued to be relevant also in natural communities [14][15][16]. For instance, planktonic bacteria display a fast turnover of species within a season, even as abiotic conditions do not vary substantially [17][18][19]. This supports the notion that species interactions may be central in determining which species are abundant, and when.
That ecological interactions can cause instability of species' abundances and drive chaotic fluctuations is predicted by numerous theoretical models. Chaos can be found in models with just a handful of species [14,20,21], but may then require a fine-tuning of parameters [22]. In contrast, instability and chaos appear to be generic features of high-dimensional systems, such as species-rich communities [23,24]. The disordered Lotka-Volterra equations, for instance, representing intra-specific and randomly assigned inter-specific interactions, have been broadly used to explore the collective ecological dynamics of species-rich communities [25][26][27][28][29][30][31][32]. Self-sustained oscillations that resemble the turnover of natural communities have been obtained by numerical simulations of species-rich meta-communities, where local abundances are coupled via dispersal [33][34][35][36]. In this setting, species that are locally driven to extinction get replenished by the migration of individuals from neighbouring patches, where they were not outcompeted. Among the key features of natural ecosystems that such models reproduce are large numbers of coexisting taxa, large fluctuations in abundance, and skewed species abundance distributions with prominent power-law trends. However, given the complexity of the models, it seems inevitable that analytical descriptions are only possible under specific assumptions, for instance on the scaling or (anti-)symmetry of interactions [25,32,33]. On the other hand, more phenomenological models, which assume from the outset the existence of intermittent population dynamics, also predict highly diverse communities where species alternate between rarity and abundance [18,37]. Similarly, single-species stochastic equations account for many features of empirical abundance distributions [38,39]. When species are also subject to evolution, booms and busts allow a larger number of species to be maintained than expected from ecological dynamics alone [40]. Given the range of alternative theoretical descriptions, it is thus unclear what level of complexity (in terms of species richness, type and variation of interactions, and spatial structure) is necessary to explain the non-stationary patterns of highly diverse communities.
Here, we show that abundance patterns consistent with observations of plankton communities emerge generically in a disordered Lotka-Volterra model under simple assumptions: a single patch with well-mixed populations, small and constant immigration, and strong inter-species interactions. Under such conditions, we find a broad range of model parameters where species alternate over time between rarity and abundance, in such a fashion that the community is at any time dominated by just a few abundant species. The turnover in the composition of this dominant component occurs on a characteristic timescale, and resembles a succession of low-diversity equilibria. We compare the distribution of abundances observed in the community at a fixed time with the frequency with which the abundance of any given species occurs in a long time series. These distributions all have the same shape, which across many orders of magnitude in abundance values is a power-law, suggesting an emergent equivalence among different taxa [6,33,[41][42][43]. We therefore propose an approximate, effective model for the ecological dynamics of a 'typical' focal species. Guided by this model, we characterize the region where the ecological dynamics is chaotic and point to scaling relations that may explain the weak geographical variation of plankton protist communities' abundance distributions [3]. We also explore the reason why some species deviate from the typical abundance pattern. Species that have a smaller average interaction strength with all other species boom more frequently, highlighting the importance of relative interaction statistics in addition to absolute ones. Model We describe a community of species by their time-dependent absolute abundances N_i(t), with i = 1, 2, …, S the index of a species. Deterministic equations that relate the changes in abundance to competition within species and interactions between pairs of species have been argued to be relevant descriptions of diverse microbial communities [13,44,45]. According to the Lotka-Volterra equations [46], the abundance of any species in isolation grows logistically: if initially the species is rare, its abundance grows exponentially at a maximum rate r_i; later, it saturates to a carrying capacity K_i set by resources, predators, and environmental conditions, assumed constant and not modelled explicitly. For simplicity, we set r_i and K_i to unity for all species, but discuss heterogeneity in these parameters in Supplementary Note S3. The interaction coefficients α_ij (real numbers) quantify the effect of species j on the growth rate of species i. We include a small rate of immigration λ ≪ 1 into the community, constant and equal for each species. This term prevents extinctions, and reflects immigration from a regional pool or the existence of a 'seed bank' [11]. Abundances thus change in time as dN_i/dt = N_i (1 − N_i − Σ_{j≠i} α_ij N_j) + λ. (1) In species-rich communities, the number of potential interactions, S × S, is very large, and their values are hard to estimate in natural settings. Therefore, a classic approach [23,25,27,47] is to choose the set of interaction coefficients as a realization of a random interaction matrix, whose elements are Gaussian random variables, α_ij ∼ N(μ, σ²) (i ≠ j). It is customary to allow a correlation γ between diagonally opposed elements, biasing interactions toward predator-prey (γ = −1) or symmetric competition (γ = 1); here, we focus on independent interaction coefficients (γ = 0) and discuss other cases in Supplementary Figure S5.
According to Eq. (1), a positive α_ij denotes that species j reduces the growth of species i, as for instance when they compete for a common resource. A negative value indicates that species j facilitates the growth of species i. The interaction coefficients for distinct species i, j can be represented in terms of the mean and standard deviation of the interaction matrix as α_ij = μ + σ·z_ij, where the z_ij are realizations of random variables with zero mean and unit variance. We note that, by convention, we have separated the self-interaction term from the inter-specific interaction terms in Eq. (1). The diagonal element α_ii therefore does not appear in the sum, and is not defined. Equation (1) with randomly sampled interactions defines the disordered Lotka-Volterra (dLV) model. By tuning the ecological parameters μ, σ, γ and λ, it exhibits a number of distinct dynamical behaviours, which have been thoroughly explored in the weak-interaction regime, where the interaction between any particular pair of species is negligible, but a species' net competition term from all other species is comparable to its (unitary) self-interaction. If species are near their carrying capacities, the net competition is approximately Σ_{j≠i} α_ij ≈ μS + σ√S·η_i, where the net interaction bias η_i is a realization of a random variable η_i ∼ N(0, 1). To achieve a finite net competition in the limit of a large species pool requires μ = μ̃/S, σ = σ̃/√S, (5) where μ̃ and σ̃ do not grow with S. Under this scaling, methods from statistical physics (dynamical mean-field theory [25,26,28,30], random matrix theory [23,48], and replica theory [29,31]) allow exact analytical results in the limit of S → ∞, although in practice S ∼ 100 is sufficient for good agreement between theory and simulations. Sharp boundaries were shown to separate a region where species coexist at a unique equilibrium from one with multiple attractors, including chaotic steady states [25,26,28,30]. While vanishing interactions entail significant mathematical convenience, one can question how well the weak-interaction regime represents microbial communities. For instance, bacterial species engage in metabolic cross-feeding, toxin release, phagotrophy, and competition over limited nutrients, so that some species do depend substantially on another's presence [49,50]. Moreover, some empirical species abundance distributions, notably those of plankton communities [3,4], deviate qualitatively from those predicted for weak interactions [25,26]. Finally, there is increasing evidence that both absolute and relative abundances of microbial species display large and frequent variations even on time scales of days, where environmental conditions are not expected to undergo dramatic changes [16][17][18]. Weak interactions, instead, produce moderate fluctuations, so that the total abundance can be assumed to be constant [30]. For these reasons, as well as to complete the analysis of the disordered Lotka-Volterra model, we consider here the strong-interaction regime, where the statistics of the interaction matrix do not scale with species richness according to Eq. (5). For S ≫ 1, the overall competitive pressure then makes it impossible for all species to simultaneously attain abundances close to their carrying capacities, resulting in instability and complex community dynamics. A minimal numerical sketch of the model is given below.
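As a concrete illustration, the following minimal sketch integrates the dLV model of Eq. (1) with randomly sampled strong interactions. For speed, the community size is smaller than the S = 500 used in the paper, and the integrator settings are pragmatic choices rather than those of Appendix A.

# A minimal sketch of the disordered Lotka-Volterra model of Eq. (1),
# integrated with scipy; S is reduced from the paper's 500 for speed.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
S, mu, sigma, lam = 200, 0.5, 0.3, 1e-8   # strong-interaction parameters

alpha = mu + sigma * rng.standard_normal((S, S))
np.fill_diagonal(alpha, 0.0)              # alpha_ii does not enter the sum

def dlv(t, N):
    # dN_i/dt = N_i * (1 - N_i - sum_{j != i} alpha_ij N_j) + lambda
    return N * (1.0 - N - alpha @ N) + lam

N0 = rng.uniform(0.0, 1.0, S)
sol = solve_ivp(dlv, (0.0, 2000.0), N0, method="LSODA",
                rtol=1e-8, atol=1e-12)
N_final = sol.y[:, -1]                    # snapshot community state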
Results In the strong-interaction regime, numerical simulations of the disordered Lotka-Volterra model show that the community can display several different classes of dynamics, from the equilibrium coexistence of a small subset of species to different kinds of oscillations, including chaos. In Sections 3.1-3.4, we focus on a reference value of the interaction statistics (μ = 0.5, σ = 0.3) representative of chaotic dynamics, and describe its salient features. In Sections 3.5 and 3.6, we describe how the dynamics depends qualitatively on the statistical parameters μ and σ. Unless otherwise stated, simulations use S = 500 and λ = 10⁻⁸. Further details on the numerical implementation are presented in Appendix A. A chaotic turnover of rare and abundant species For a broad range of parameters in the strong-interaction regime, the community undergoes a chaotic turnover of dominant species. Figure 1. A Time series of stacked abundances of all species under steady-state conditions: there is a turnover of species such that only the dominant component is visible at any given time (each species has a distinct random colour). B Bray-Curtis index of community composition similarity between the dominant component of the community at time t, and the composition it would have if it were isolated from the rare species and allowed to reach equilibrium: the community appears to approach the composition of few-species equilibria before being destabilized by invasion from the pool of rare species. As illustrated by the time series of stacked abundances in Figure 1A, the overwhelming share of the total abundance at any given time is due to just a few species. Which species are abundant and which are rare changes on a characteristic timescale, τ_dom ≈ 30 time units, comparable to the time it would take an isolated species to attain an abundance on the order of its carrying capacity, starting from the lowest abundance set by immigration (−ln λ ≈ 18). While the total abundance fluctuates moderately around a well-defined time average, individual species follow a 'boom-bust' dynamics. If this simulation represented a natural community, only the most abundant species, which we call the dominant component of the community, would be detectable by morphological inspection or shallow sequencing. We wish to characterize the dominant component, and understand how it relates to the pool of rarer species. In order to quantify the notion of dominance, we define the effective size of the community as Simpson's (reciprocal) diversity index [51], S_e = 1/Σ_i p_i², where p_i = N_i/Σ_j N_j denote relative abundances. S_e approaches its lowest possible value of 1 when a single species is responsible for most of the total abundance, and its maximum when all species have similar abundances. Its integer approximation provides the richness, i.e. the number of distinct species, of the dominant component. The effective size S_e of the community in our reference simulation fluctuates around an average of 9 dominant species, which make up 90% of the total abundance. The relative abundance threshold for a species to be in the dominant component fluctuates around 3%, which is comparable to the arbitrary 1% threshold used in empirical studies [9]. In Supplementary Figure S4 we show that the number of dominant species grows slowly (but super-logarithmically) with S, up to about 15 for S = 10⁴. Thus, strong interactions limit the size of the dominant component, and the vast majority of species are rare at any point in time. A short sketch of this diversity computation is given below.
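The following short sketch computes the effective community size and extracts the dominant component from an abundance vector; the mock abundances are placeholders standing in for a simulation snapshot such as the one produced by the sketch above.

# A minimal sketch of the effective community size (Simpson's reciprocal
# diversity index) and of extracting the dominant component.
import numpy as np

def effective_size(N):
    p = N / N.sum()                  # relative abundances
    return 1.0 / np.sum(p**2)        # reciprocal Simpson index

def dominant_component(N):
    """Indices of the S_e most abundant species (integer approximation)."""
    k = int(round(effective_size(N)))
    return np.argsort(N)[::-1][:k]

N = np.random.default_rng(1).pareto(1.2, 500) * 1e-4   # mock abundances
print(effective_size(N), dominant_component(N)[:5])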
The turnover of dominant species is not periodic; indeed, even over a large time window, where every species is found on multiple occasions to be part of the dominant component, its composition never closely repeats (Supplementary Figure S3). This aperiodicity suggests the presence of chaotic dynamics. We give numerical evidence for sensitive dependence on initial conditions and a positive maximal Lyapunov exponent in Supplementary Figures S1 and S2. The turnover dynamics has the character of moving, chaotically, between different quasi-equilibria corresponding to different compositions of the dominant community (cf. 'chaotic itinerancy' [52]). To reveal this pattern, we measure a 'closeness-to-equilibrium', defined as the similarity in composition between the observed dominant component at a given time, and the equilibrium that this dominant component would converge to if it were isolated from the rare component and allowed to equilibrate. As a similarity metric we use the classical Bray-Curtis index (Appendix B), which has also been used to measure variations in community composition in plankton time series [17]. In Figure 1B, we see that the similarity repeatedly approaches 100% slowly and is followed by faster drops, towards about 50%, indicating the subversion of a coherent dominant community by a previously rare invader.

The fact that the community composition is not observed to closely repeat is arguably due to the vast number of possible quasi-equilibria that the chaotic dynamics can explore. In the weak-interaction regime, a number of unstable equilibria exponential in $S$ has been confirmed [53,54]. It is therefore conceivable that the number of quasi-equilibria in our case is also exponentially large. The LV equations for $\lambda = 0$ admit up to one coexistence fixed point (not necessarily stable) for every chosen subset of species [46]. Hence, we expect on the order of $\sim S^{S_e}$ quasi-equilibria, which for $S_e \approx 9$ evaluates to $10^{24}$! If the dynamics explores the astronomical diversity of such equilibria on trajectories which depend sensitively on the initial conditions, the dominant component may look as if it had been assembled 'by chance' at different points in time.

The composition of the dominant community is not entirely arbitrary, though. While the abundance time series of most pairs of species have negligible correlations, every species tends to have a few other species with a moderate degree of correlation. In particular, if $(\alpha_{ij} + \alpha_{ji})/2$ is significantly smaller than the expectation $\mu$, and hence species $i$ and $j$ are close to a commensal or mutualistic relationship, these species tend to 'boom' one after the other (Supplementary Figure S6).
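The closeness-to-equilibrium measure introduced above can be sketched as follows; `dominant_species` is the helper from the previous sketch, `alpha` the interaction matrix, and the relaxation time is an arbitrary illustrative choice.

```python
import numpy as np

def bray_curtis(x, y):
    """Bray-Curtis similarity between abundance vectors (Appendix B)."""
    return 2.0 * np.minimum(x, y).sum() / (x.sum() + y.sum())

def closeness_to_equilibrium(N, alpha, dt=0.01, t_relax=1000.0):
    """Relax the dominant component in isolation (lambda = 0) and compare."""
    idx = dominant_species(N)
    sub = N[idx].copy()
    a_sub = alpha[np.ix_(idx, idx)]
    for _ in range(int(t_relax / dt)):
        sub += dt * sub * (1.0 - sub - a_sub @ sub)
    return bray_curtis(N[idx], sub)
```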
Species' abundance fluctuations follow a power law

In a common representation of empirical observations, where relative abundances are ranked in descending order (a rank-abundance plot [55]), microbial communities display an overwhelming majority of low-abundance species [5]. Our simulated community reproduces this feature (Figure 2A). The exact shape of the plot changes in time, as does the rank of any particular species, but the overall statistical structure of the community is highly conserved. An alternative way to display the same data is to bin abundances, and count the frequency of species occurring within each bin, producing a species abundance distribution (SAD) [55]. The histogram in Figure 2B illustrates the 'snapshot' SAD for the rank-abundance plot in Figure 2A, i.e. for abundances sampled at a single time point. Whenever observations are available for multiple time points, it is also possible to plot, for a given species, the histogram of its abundance in time. As time gets large (practically, we considered 100,000 time units after the transient), the histogram converges to a smooth distribution, which we call the abundance fluctuation distribution (AFD) [38]. Its average shape across all species is also displayed in Figure 2B.

Several conclusions can be drawn by comparing SADs and AFDs. First, a snapshot SAD appears to be a subsampling of the average AFD. Therefore, SADs maintain the same statistical structure despite the continuous displacement of single species from one bin to another. Second, every species fluctuates in time between extreme rarity ($N_i \approx \lambda = 10^{-8}$) and high abundance ($N_i \gtrsim 10^{-1}$). This variation is comparable to that observed, at any given time, between the most abundant and the rarest species. Third, species are largely equivalent with respect to the spectrum of fluctuations in time, as indicated by the small variation in AFDs across species. We will evaluate the regularities and differences of single-species dynamics more thoroughly in Section 3.4.

The most striking feature of these distributions, however, is the power law $\propto x^{-\beta}$ traced for intermediate abundances. This range is bounded at low abundances by the immigration rate and at high abundances by the single-species carrying capacity. The power-law exponent is $\beta \approx 1.18$ for the simulation analysed, but it varies in general with the ecological parameters, as we discuss further in the following sections.

The regularity of the abundance distributions across species suggests that it may be possible to describe the dynamics of a 'typical' species in a compact way; this is the goal of the next section.
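The statistics discussed in this section can be extracted from a saved trajectory as sketched below, assuming `traj` is a (timepoints, S) array of steady-state abundances; the fitting window for the exponent is an illustrative choice, not the paper's.

```python
import numpy as np

bins = np.logspace(-8, 0, 60)
centers = np.sqrt(bins[:-1] * bins[1:])

def snapshot_sad(traj, t):
    """Number of species per abundance bin at one time point."""
    return np.histogram(traj[t], bins=bins)[0]

def average_afd(traj):
    """Species-averaged abundance fluctuation distribution."""
    counts = np.stack([np.histogram(traj[:, i], bins=bins, density=True)[0]
                       for i in range(traj.shape[1])])
    return counts.mean(axis=0)

def powerlaw_exponent(afd, lo=1e-6, hi=1e-2):
    """Least-squares slope of the AFD on log-log axes: AFD ~ x^(-beta)."""
    sel = (centers > lo) & (centers < hi) & (afd > 0)
    slope = np.polyfit(np.log(centers[sel]), np.log(afd[sel]), 1)[0]
    return -slope
```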
A stochastic focal-species model reproduces boom-bust dynamics

Fluctuating abundance time series are often fitted by one-dimensional stochastic models [15]; for example, stochastic logistic growth has been found to capture the statistics of fluctuations in a variety of data sets on microbial abundances [38,39]. The noise term encapsulates variations in a species' growth rate whose origin may not be known explicitly. In our virtual Lotka-Volterra community, once the interaction matrix and initial abundances have been fixed, there is no uncertainty; nonetheless, the chaotic, high-dimensional dynamics results in species' growth rates fluctuating in a seemingly random fashion. We are therefore led to formulate a model for a single, focal species, for which explicit interactions are replaced by stochastic noise. Because we have found species to be statistically similar, its parameters do not depend on any particular species, but reflect the effective dynamics of any species in the community.

Following dynamical mean-field-like arguments and approximations informed by our simulations (Appendix E), we derive the focal-species model

$\dot{x}(t) = x(t)\,[\,g(t) - x(t)\,] + \lambda$,  (7a)
$g(t) = -m + s\,\zeta(t)$,  (7b)

where $g(t)$ is a stochastic growth rate with mean $-m$, and fluctuations of magnitude $s$ and correlation time $\tau$. The process $\zeta(t)$ is a coloured Gaussian noise with zero mean and an autocorrelation that decays exponentially,

$\langle \zeta(t)\,\zeta(t') \rangle = e^{-|t - t'|/\tau}$,  (8)

where brackets denote averages over noise realizations. The connection between the ecological parameters $\mu$, $\sigma$, $\lambda$, $S$ and the resulting dynamics of the disordered Lotka-Volterra model in the chaotic phase is then broken down into two steps: how the effective parameters $m$, $s$, $\tau$ relate to the ecological parameters, and how the behaviour of the focal-species model depends on the effective parameters.

For the first step we find

$m = \mu \overline{N} - 1, \qquad s = \sigma \overline{N} / \sqrt{\overline{S_e}}, \qquad \tau = \tau_{\mathrm{dom}}$,  (9)

where $N$ is the total community abundance of the original dynamics Eq. (1), the effective community size $S_e$ is as in Eq. (6), and an overline denotes a long-time average. Equation (9) relates the focal species' growth rate to the time-averaged net competition ($\approx \mu\overline{N}$) from all other species. We find in simulations of Eq. (1) in the chaotic phase that competition is strong enough to make $m > 0$. The second relation captures the variation in the net competition that a species experiences because of the turnover of the dominant community component. Due to sampling statistics, this variation is larger when the dominant component tends to have fewer species; hence the dependence on $\overline{S_e}^{\,-1/2}$. The third effective parameter, the timescale $\tau$, controls how long the focal species stays dominant, once a fluctuation has brought it to high abundance. This timescale is essentially equal to the turnover timescale $\tau_{\mathrm{dom}}$ of the dominant component (defined more precisely by autocorrelation functions in Appendix E). In the weak-interaction regime, where any pair of species can be treated as effectively independent at all times, self-consistency relations such as $S\langle x \rangle = \overline{N}$ allow one to implicitly express the focal-species model in terms of the ecological parameters. For strong interactions, however, the disproportionate effect of the few dominant species on the whole community invalidates this approach; we therefore relate the effective parameters to the community-level observables $\overline{N}$, $\overline{S_e}$, $\tau_{\mathrm{dom}}$, which are obtained from simulation of Eq. (1) at given values of the ecological parameters.
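The focal-species model is a one-dimensional stochastic differential equation and can be simulated directly, for instance with an Euler step driven by discretized Ornstein-Uhlenbeck noise as in the sketch below; the parameter values shown are placeholders, not fitted ones.

```python
import numpy as np

def simulate_focal(m=0.3, s=0.3, tau=30.0, lam=1e-8, T=10_000.0, dt=0.01, seed=1):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    x[0], zeta = lam, 0.0
    for t in range(1, n):
        # OU noise with unit variance and correlation time tau:
        # d(zeta) = -zeta/tau dt + sqrt(2/tau) dW
        zeta += -zeta * dt / tau + np.sqrt(2.0 * dt / tau) * rng.standard_normal()
        g = -m + s * zeta                      # stochastic growth rate, Eq. (7b)
        x[t] = x[t - 1] + dt * (x[t - 1] * (g - x[t - 1]) + lam)
        x[t] = max(x[t], 0.0)                  # guard against numerical negativity
    return x
```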
For the second step, we would like to solve Eqs. (7) for general values of the effective parameters. While this is intractable due to the finite correlation time of the noise, the equations can be simulated and treated by approximate analytical techniques. In Figure 3A we compare the time series of an arbitrary species in the dLV model with a simulation of the focal-species model. By eye, the time series appear statistically similar. The typical abundance of a species can be estimated by replacing the fluctuating growth rate in Eq. (7) with its typical value (i.e., $\zeta = 0$), yielding the equilibrium $x^* \approx \lambda/m$ if $m > 0$, as indeed confirmed by the simulation. Thus the typical abundance value is on the order of the immigration threshold. Figure 3B shows that the average AFD of the dLV model agrees remarkably well with the stationary distribution of the focal-species model, in particular for the power-law section. Using the unified coloured noise approximation [56] (Appendix F), one predicts that the stationary distribution, for $\lambda \ll x \ll 1$, takes the power-law form $p^*(x) \propto x^{-\beta}$ (Eq. (10)), where the exponent $\beta$ is strictly larger than one, the value predicted for weak interactions [30] and for neutral models [57]. Even if Eq. (10) is not quantitatively precise (Figure 3B), this formula suggests a scaling with the effective parameters that we will discuss later on.

Species with lower net competition are more often dominant

The similarity of all species' abundance fluctuation distributions in Figure 2 is reflected in the focal-species model's dependence on collective properties like the total abundance. However, the logarithmic scale downplays the variance between species' AFDs, particularly at higher abundances. Indeed, while all abundances fluctuate over orders of magnitude, some species are observed to be more often dominant (or rare). Such differences are reminiscent of the distinction between 'frequent' and 'occasional' species observed in empirical time series [58,59].
In order to assess the nature of species differences in simulations of the chaotic dLV model, we rank species by the fraction of time spent as part of the dominant component. Observing the community dynamics on a very long timescale (400 times longer than in Figure 1), the first-ranked species appears to boom much more often than the last (Figure 4A). The frequency of a species is chiefly determined by the number of booms rather than by their duration, which is comparable for all species. The median dominance time decreases with the total species richness (Figure 4B): a doubling of $S$ leads to each species halving its dominance time fraction. As the community gets crowded (while its effective size hardly increases, as remarked in Section 3.1), all species become temporally more constrained in their capacity to boom. Yet some significant fraction of species is biased towards booming much more often or rarely than the median, regardless of community richness. We quantify this trend by plotting in Figure 4C the dominance bias, i.e. the dominance time fraction normalized by the median across all species, against the relative rank (i.e., rank divided by $S$). For high richness ($S \sim 10^3$), the distribution of bias converges towards a characteristic, nonlinearly decreasing shape, where the most frequent species occur more than four times as often as the median, while the bias of the last-ranked species is almost zero.

The persistence of inter-species differences at large $S$ may seem to contradict the central limit theorem, as the vectors of interaction coefficients converge towards statistics that are identical for every species. In the chaotic regime, however, even the smallest differences in growth rates get amplified during a boom. As we show in Appendix D, if Eq. (1) is rewritten in terms of the proportions $p_i$, the relative advantage of species $i$ is quantified by a selection coefficient whose time average scales as $-\sigma \eta_i S^{-1/2}$. Correspondingly, the relative, time-averaged growth rate is proportional to the net interaction bias $-\eta_i$ (defined in Eq. (4)), resulting in species with larger $-\eta_i$ having a positive dominance bias (Figure 4D). Outliers of the scatter plot, i.e. species that have particularly high or low dominance ranks, are also the species whose AFD is furthest from the average AFD of the community, as quantified by the typicality index $T_i \in [0, 1]$, defined in Appendix B.

In conclusion, the relative species-to-species variation in the total interaction strength drives the long-term differences in the dynamics of single species in the community. While the focal-species model emphasizes the similarity of species, species differences can also be taken into account by employing species-specific effective parameters. In particular, replacing $m$ with a distribution of $m_i$'s would create a dominance bias, and is in fact motivated upon closer examination of our focal-species model derivation (Figure 7D in Appendix E).
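The dominance statistics used here follow directly from the helpers sketched earlier (`dominant_species`); `traj` is again assumed to be a (timepoints, S) abundance array.

```python
import numpy as np

def dominance_bias(traj):
    """Fraction of time each species spends in the dominant component,
    normalized by the median across species (bias = 1 at the middle rank)."""
    T, S = traj.shape
    time_dominant = np.zeros(S)
    for t in range(T):
        time_dominant[dominant_species(traj[t])] += 1
    frac = time_dominant / T
    return frac / np.median(frac)
```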
Interaction statistics control different dynamical phases

Hitherto, we have focussed on reference values of the interaction statistics $\mu$ and $\sigma$ that produce chaotic turnover of species abundances. We now broaden our investigation to determine the extent of validity of our previous analysis when the interaction statistics are varied. For every pair of ($\mu$, $\sigma$) values, we run 30 independent simulations, each with a different sampling of the interaction matrix and uniformly sampled initial abundances. After a transient has elapsed, we classify the trajectory as belonging to one of four different classes: equilibrium, cycle, chaos, or divergence. Figure 5 displays the probability of observing chaos, demonstrating that it does not require fine-tuning of parameters, but rather occurs across a broad parameter range.

The parameter region where chaos is prevalent, the 'chaotic phase', borders on regions of qualitatively different community dynamics. For small variation in interaction strengths (below the line connecting $(-1/S, \sqrt{2/S})$ to $(1, 0)$), the community has a unique, global equilibrium that is fully characterized for weak interactions (cf. Fig. 2 of [25]). The transition from equilibrium to chaos has been investigated with dynamical mean-field theory [30]. For low interaction variance, but with a mean exceeding the unitary strength of intraspecific competition, a single species comes to dominate, as expected by the competitive exclusion principle [60]. Adiabatic simulations, implemented by continuously rescaling a single realization of the interaction matrix (details in Supplementary Figure S8), reveal that lines radiating from the point $(\mu, \sigma) = (1, 0)$ separate sectors where stable fixed points have different numbers of coexisting species. Traversing these sectors anti-clockwise, $S_e$ increases by near-integer steps from one (full exclusion) up to about 8. From there, a sudden transition to chaos occurs at the dashed line in Figure 5. We note, however, that the parameter region between chaos and competitive exclusion contains attractors of different types: cycles and chaos, coexisting with multiple fixed points, resulting in hysteresis (Supplementary Figure S8B). This 'multiple attractor phase' [25,30] is a complicated and mostly uncharted territory whose detailed exploration goes beyond the scope of this study. Finally, for large variation in interactions, some abundances diverge due to the positive feedback loop induced by strongly mutualistic interactions, and the model is biologically unsound.
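The scan over interaction statistics can be organized as in the sketch below. Here `run_dlv` is a hypothetical stand-in for an integrator returning the tail of a trajectory (e.g., built from the `dlv_step` sketch above), and the classifier is a deliberate simplification: the paper's heuristic additionally separates cycles from chaos by counting recurrences (Appendix A), which we do not attempt here.

```python
import numpy as np

def classify(tail, tol=1e-6):
    """Crude attractor classification from the tail of a trajectory."""
    if not np.isfinite(tail).all() or tail.max() > 1e6:
        return "divergence"
    spread = tail.max(axis=0) - tail.min(axis=0)
    return "equilibrium" if spread.max() < tol else "fluctuating"

def chaos_probability(mu, sigma, runs=30):
    """Fraction of runs with persistent fluctuations (proxy for chaos)."""
    labels = [classify(run_dlv(mu, sigma, seed=k)) for k in range(runs)]
    return labels.count("fluctuating") / runs
```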
Across the phase diagram, community-level observables such as the average total abundance $\overline{N}$ and effective community size $\overline{S_e}$ vary considerably (Supplementary Figure S9). The weak-interaction regime (whether in the equilibrium or chaotic phase) allows for high diversity, so $\overline{N}$ and $\overline{S_e}$ are of order $S$; strong interactions, on the other hand, imply low diversity, with $\overline{S_e}$ and $\overline{N}$ of order unity. An explicit expression for how these community-level observables depend on the ecological parameters ($\mu$, $\sigma$, $\lambda$, $S$) is intractable (although implicit formulas exist in the weak-interaction regime [25]). Nonetheless, an approximate formula that we derive in Appendix C allows us to relate the community-level observables to one another and to $\mu$ and $\sigma$:

$\overline{N} \approx \big[\mu + (1 - \mu)/\overline{S_e} - \sigma C\big]^{-1}$,  (11)

in which we introduce the collective correlation

$C = -\sum_{i \neq j} z_{ij}\, \overline{p_i p_j}$,  (12)

involving the time-averaged product of relative abundances weighted by their normalized interaction coefficients, cf. Eq. (4). By construction, the collective correlation is close to zero when all species abundances are uncorrelated over long times, as would follow from weak interactions. On the contrary, it is positive when pairs of species with interactions less competitive than average tend to co-occur, and/or those with more-competitive interactions tend to exclude one another.

Equation (11) is particularly useful in understanding the role of correlations in the chaotic phase. As we observed in Section 3.3, the effective parameter $m = \mu\overline{N} - 1$ is positive in the chaotic phase, implying that the growth rate of a species is typically negative, and abundances are therefore typically on the order of the small immigration rate rather than near carrying capacity. The existence of these two 'poles' of abundance values is key to boom-bust dynamics. By combining $m > 0$ with Eq. (11), we estimate a minimum, critical value of the collective correlation required for boom-bust dynamics:

$C^* = (1 - \mu) / (\sigma\,\overline{S_e})$.  (13)

Numerical simulations demonstrate that $C \gtrsim C^*$ in the chaotic phase, where the critical value is approached at the boundary with the unique-equilibrium phase (Supplementary Figure S11). With this result in hand, Eq. (11) and Eq. (13) establish that $\overline{N} \gtrsim 1/\mu$ in the chaotic phase. For strong interactions, total abundances are predicted to be of order one, and for weak interactions $\overline{N} \approx S/\tilde{\mu}$ (recall Eq. (5)), which recovers the observed scalings of these observables. As one moves deeper into the chaotic phase, the collective correlation increases continuously as the effective community size drops, suggesting a seamless transition from a weak-interaction, chaotic regime amenable to exact treatment [30] to the strongly correlated regime that we have analyzed by simulations and the approximate focal-species model.
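The collective correlation and its critical value can be estimated from a trajectory as sketched below, using the sign convention of Eq. (12) (co-occurrence of less-competitive-than-average pairs counts positively); `z` is the normalized interaction matrix and `traj` a (timepoints, S) abundance array.

```python
import numpy as np

def collective_correlation(traj, z):
    """Time-averaged C = -sum_{i != j} z_ij <p_i p_j>, Eq. (12)."""
    p = traj / traj.sum(axis=1, keepdims=True)
    pp = np.einsum('ti,tj->ij', p, p) / p.shape[0]   # time-averaged p_i p_j
    np.fill_diagonal(pp, 0.0)                        # exclude i = j
    return -np.sum(z * pp)

def critical_correlation(mu, sigma, S_e):
    """Critical value C* = (1 - mu) / (sigma * S_e), Eq. (13)."""
    return (1.0 - mu) / (sigma * S_e)
```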
Self-organization between community-level observables constrains abundance power-law variation

In Section 3.3 we established a focal-species model depending on the effective parameters $m$, $s$, and $\tau$, which were related to the ecological parameters $\mu$, $\sigma$ indirectly via the community-level observables $\overline{N}$, $\overline{S_e}$, $\tau_{\mathrm{dom}}$. Furthermore, in the previous section we studied how the latter vary in the chaotic phase. Putting these results together, we here examine the corresponding variation of the effective parameters and of the focal-species model's predictions. Because the trio $m$, $s$, $\tau$ ultimately hails from but two independent variables, $\mu$ and $\sigma$ (considering fixed $\lambda$, $S$), they must be dependent. Figure 6C demonstrates that, across the chaotic phase, an approximate linear relationship holds between $s$ and $m$, as well as between $s$ and $1/\tau$. Because $m$ and $s$ are related to the mean and the variance of abundances via Eq. (9), their proportionality is reminiscent of the empirical Taylor's law, which posits a power-law relation between abundance mean and variance as they vary across samples [61]. The slope of the relationship of $s$ to $m$ is close to one (and varies little with $\mu$ and $\sigma$; Supplementary Figure S10), which implies with Eq. (9) that

$\sigma \overline{N} / \sqrt{\overline{S_e}} \approx \mu \overline{N} - 1$.  (14)

Comparison to Eq. (11) then yields that $C - C^* \approx \overline{S_e}^{\,-1/2}$. This empirical relationship thus supports the aforementioned convergence of the collective correlation to its critical value in the limit where $S_e$ is large, as for weak interactions.

We find in Figure 6B that the exponent $\beta_{\mathrm{foc}}$ of the power-law trend obtained from simulation of the focal-species model is in good agreement with the value $\beta$ from the full dLV model. There is a narrow overall variation of the exponent, a consequence of the interdependency of the effective parameters. As can be intuited from the approximate expression Eq. (10) for the focal-species model, the exponent is strictly larger than 1, a value it approaches if the turnover timescale diverges, as indeed it does on the boundary to the unique-equilibrium phase. The exponent increases as interactions become more competitive, up to about 1.4 at $(\mu, \sigma) = (1, 0)$. However, the exponent also depends on $S$ and $\lambda$, showing a constant slope against $\log S$ or $-1/\log\lambda$ (Supplementary Figure S7).

Discussion

We have sought a possible theoretical underpinning for macroecological patterns of dominance and rarity in species-rich communities. To this end, we studied a Lotka-Volterra model with strong interactions and weak immigration using numerical simulations and approximate analytical techniques. We characterized a parameter regime where species generically turn over between a small dominant component and a large pool of temporarily rare species. In this process, each species' abundance undergoes a chaotic boom-bust dynamics, asynchronous with respect to most other species. The resulting distribution of abundances, of a single species over long times, or of the whole community at a single time, has a prominent power-law trend.

The phenomenology of the model (chaos, boom-bust dynamics, and a power-law-shaped SAD with exponent larger than one) is consistent with observations of marine plankton communities [3,12,16,18,62]. While the evidence for chaos in ecological time series has been generally ambiguous, a recent systematic assessment concludes that chaos is commonplace, especially for plankton [16]. Experiments with closed plankton microcosms have revealed chaotic, high-amplitude fluctuations sustained over many years [12,62]. Abundance fluctuations indicative of chaos were also seen in non-planktonic synthetic microbial communities [13,63], but of lower amplitude. That planktonic population sizes fluctuate over many orders of magnitude in abundance is made especially poignant by algal blooms, which can become visible even from space. The timing of blooms, and the succession of functional groups within a season, are coupled to environmental factors such as nutrient concentrations. Yet, for the non-dominant taxa, the large differences in a species' abundance between ocean samples show little environmental signature [3]. This suggests that the turnover of abundances might rather be driven by complex interactions or mixing dynamics. Empirical snapshot SADs of marine protists show a clear power-law trend for non-dominant species within the same size class, with a larger-than-one exponent varying little between samples, as in our model.
The empirical value of the exponent of the SAD's power-law trend (around 1.6 [3]) is important because it rules out particular model assumptions. The chaotic phase in the Lotka-Volterra model produces a unitary exponent in the weak-interaction limit, and seemingly also for strong interactions if the immigration rate vanishes, $\lambda \to 0$. Similarly, neutral theory predicts a power-law tail of the SAD with exponent one [57,64]. To approach the empirical value, previous studies augmented neutral theory with nonlinear growth rates or chaotic mixing [3,65] to find an exponent dependent on the model parameters. For the dLV model with strong interactions, we have shown that $\beta > 1$ for all parameter combinations within the chaotic phase. The approximate solution to the focal-species model, Eq. (10), shows that the positive deviation from $\beta = 1$ depends on three inter-related effective parameters: the mean, amplitude, and timescale of fluctuations in each species' net competition. As these fluctuations drive the turnover pattern, boom-bust dynamics comes to be associated with a larger-than-one exponent. In our model, $\beta$ approaches the empirical value in the green region of Figure 6A. There, interactions are strong (smaller than, but relatively close to, self-interactions) and vary moderately from species to species, and the turnover timescale is large (but not yet divergent). We can speculate that similar features underpin plankton ecological dynamics, and may differentiate these communities from other, more stable microbial assemblages.

The chaotic phase ends at the parameter point $(\mu, \sigma) = (1, 0)$ where all non-divergent dynamical phases meet (Figure 5). These phases appear to be qualitatively similar to those observed in an individual-based version of the dLV model, accounting for demographic stochasticity [66]. There, the phases meet at the "Hubbell point", which recovers neutral theory; every pair of individuals compete equally, and species only come to differ through randomness in birth-death events. Despite being deterministic, our model shows some similarity to neutral theory (sometimes referred to as effective neutrality [43,67,68]): species are largely equivalent in a statistical sense, and they fluctuate (pseudo-)randomly and mostly independently. It is important, however, to highlight the differences that allow distinguishing between models. As argued before [33], the vast census sizes of planktonic species would imply enormous timescales of turnover if driven by demographic noise alone, the main driver of diversity in neutral models. In contrast, deterministic species interactions can produce rapid turnover. If the turnover time is long, or if there is no turnover, then species-specific AFDs would differ substantially from one another, and not span the range of abundances observed across the whole community. It is therefore informative to measure AFDs in addition to the more commonly measured SADs, whenever this is feasible.
Fluctuations, whether due to demographic or environmental noise, or to complex interactions, can drive species extinctions. In our model, the immigration term represents some mechanism able to sustain abundances above the extinction threshold long enough for a species to rebound. Meta-community models of spatial patches connected by migration flows show how drastic levels of global extinction can be avoided through a temporal storage effect [33–36]: if the patches' dynamics desynchronize, then a species that went extinct in one patch may eventually be re-established there through immigration from another patch where it persisted. Within-patch turnover of composition was observed in the meta-community setting, but attributed to the local interactions in a single patch [33,36]. In particular, Ref. [33] focussed on disordered Lotka-Volterra dynamics where both within- and between-species interactions follow the same statistics. This required anti-symmetric, i.e. predator-prey-paired, interactions ($\gamma = -1$) for sustained fluctuations. We have instead followed a competitive paradigm (normalized self-interaction, and $\gamma = 0$), directly extending the studied range of the $\mu$-$\sigma$ phase diagram of earlier work [25,30]. Furthermore, our model assumptions have allowed the formulation of an explicit focal-species model, in terms of a stochastic logistic equation with coloured noise. It is similar to models that have been successfully fitted to microbial time series [38,39], but a notable difference lies in the negative mean growth rate we find, which together with noise correlation and immigration yields fluctuations over many orders of magnitude. Our focal-species model constitutes an ecologically motivated candidate for fitting to ecological time series with large fluctuations. Its derivation also serves an important conceptual purpose, as it illustrates in an ecological context how complex dynamics can come to resemble a simple noise process. Indeed, it is notoriously difficult to distinguish random noise from chaos in empirical time series. The prevalence of ecological chaos may have been underestimated for this reason [15,16].

Chaos is a multifaceted phenomenon, and the question of how and why it arises in a given context is of great theoretical interest. It has long been recognized that Lotka-Volterra systems can admit heteroclinic networks [69–71]: saddle points, i.e.
equilibria with stable and unstable directions, connected by orbits. For LV equations without immigration, such saddles are found on the system boundary, corresponding to some subset of species having zero abundance (i.e. being extinct). One route to chaos is via the 'deformation' of a heteroclinic network when, for instance, the introduction of an immigration term pushes saddles off the boundary. This can result in 'chaotic itinerancy', whereby trajectories traverse the vicinities of low-dimensional quasi-attractors (formerly saddles on the boundary) via higher-dimensional, chaotic phase-space regions [52,72]. This picture fits well with our description of the chaotic turnover in relation to Figure 1. A further understanding of this mechanism may come from analytical investigations of disordered models related to, but more tractable than, Lotka-Volterra [73]; or from focussing on the transition to chaos from the unique-equilibrium regime in the weak-interaction limit of the dLV model. In the strong-interaction regime, a systematic bifurcation analysis of the transition from stable fixed points to chaos under the adiabatic scheme outlined in Supplementary Figure S8 may also provide insights.

Our model can be extended in several directions. Differences in growth rates and carrying capacities between species are expected for natural communities, and are revealed when fitting time series [74]; one may also consider a less than fully connected interaction network, e.g. sparsity [75–77]. Such generalizations will, however, break assumptions underlying some of our analytical approximations, in particular that of independent Gaussian interactions. Preliminary numerical explorations suggest that the existence of chaotic turnover dynamics is robust to these features, but that they may bias some species towards abundance or rarity (Supplementary Figure S5). A systematic investigation is however warranted. Given the proposed connection of our work to plankton ecology, considering a trophically structured ecosystem might be a particularly relevant generalization, as grazers [78] and viruses [79] have been put forward as key ecological actors. On the one hand, we have shown that non-structured competition between many species can produce high-dimensional chaos; on the other hand, the dynamics between functional groups such as bacteria, phyto- and zooplankton, and detritivores could potentially also be chaotic, but in low dimension. It is therefore an interesting question how fluctuations across different levels of coarse-graining of a community might be intertwined.

To conclude, we have demonstrated the emergence of a chaotic turnover of rare and abundant species in a strongly interacting community with minimal model assumptions. By deriving an explicit focal-species model to capture this complex dynamics, we have identified community-level observables and effective parameters that constrain the variation of the power-law exponent of the species abundance distribution. These insights may prove valuable for interpreting field data [3,18], as well as for predicting dynamical features of synthetic communities [13].
Appendices

A Numerical implementation

For Lotka-Volterra simulations we used a fixed-time-step Euler scheme with $\Delta t = 0.01$, applied to the logarithm of abundances. This guarantees the positivity of all abundances at all times, regardless of the immigration rate. To automatically classify the long-time behaviour of trajectories as fixed points, cycles, or chaos, we used a heuristic method of counting abundance-vector recurrences, validated against visual inspection of trajectories and calculated maximal Lyapunov exponents for a subset of trajectories. Further details are given in Supplementary Note S2.

B Similarity metrics

The Bray-Curtis similarity index [80] is defined as

$\mathrm{BC}(x, y) = 2 \sum_i \min(\tilde{p}_i^{\,x}, \tilde{p}_i^{\,y}), \qquad \tilde{p}_i^{\,x} = x_i / \sum_j (x_j + y_j)$,  (15)

where $\tilde{p}_i^{\,x}$ is the relative abundance of species $i$ with respect to the joined abundances $x + y$. By definition, $\mathrm{BC}(x, y) = 1$ iff $x = y$, and $\mathrm{BC} \approx 0$ when, for each $i$, either $x_i \gg y_i$ or $y_i \gg x_i$; this makes it suitable for communities where abundances span orders of magnitude.

For the similarity graph in Figure 1B, we have plotted $\mathrm{BC}(N_{\mathrm{dom}}(t), N^*(t))$, where $N_{\mathrm{dom}}(t)$ is the restriction of $N(t)$ in the reference simulation to only the dominant species at time $t$, and $N^*(t)$ is the fixed point reached from $N_{\mathrm{dom}}(t)$ as initial condition, with $\lambda = 0$.

To compare the similarity of the AFD of species $i$, $P_i(x)$, to the species-averaged AFD $\bar{P} = \sum_i P_i / S$, we define the index

$T_i = 1 - \sup_x \big| F_i(x) - \bar{F}(x) \big|$,  (16)

where $F_i$ and $\bar{F}$ are the cumulative distribution functions of $P_i$ and $\bar{P}$, respectively; i.e., the index is based on the Kolmogorov-Smirnov distance [51] of the AFDs.
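Numerically, the typicality index reduces to a Kolmogorov-Smirnov comparison of binned AFDs, as in this sketch (both AFDs given as histograms over common bins):

```python
import numpy as np

def typicality(afd_i, afd_mean):
    """T_i = 1 - KS distance between species-i AFD and averaged AFD, Eq. (16)."""
    F_i = np.cumsum(afd_i) / afd_i.sum()
    F_m = np.cumsum(afd_mean) / afd_mean.sum()
    return 1.0 - np.abs(F_i - F_m).max()
```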
C Derivation of time-averaged total abundance

Direct summation of Eq. (1) over $i$ (assuming $K_i = 1$), and then division on both sides by $N(t) = \sum_i N_i(t)$, yields

$\dot{N}/N = 1 - N\big[\mu + (1 - \mu)/S_e(t) - \sigma C(t)\big] + S\lambda/N$,  (17)

with $S_e$ as in Eq. (6) and $C(t)$ as in Eq. (12) but without the time average. We denote the time-average operator by

$\mathcal{T}[f] = \lim_{T \to \infty} \frac{1}{T} \int_0^T f(t)\, \mathrm{d}t$.  (18)

Applying it to Eq. (17), the left-hand side becomes $\lim_{T\to\infty} (\ln N(T) - \ln N(0))/T$, which evaluates to zero on the assumption that no species diverges in abundance. The right-hand side contains terms such as $\mathcal{T}[N/S_e]$ and $\mathcal{T}[NC]$. If the relative fluctuations in $N$, $S_e$, $C$ are small (see Supplementary Figure S9), or these functions are at most weakly correlated to one another, then we obtain, approximately,

$0 \approx 1 - \overline{N}\big[\mu + (1 - \mu)/\overline{S_e} - \sigma C\big] + S\lambda\,\mathcal{T}[1/N]$.  (19)

As the immigration term is small compared to the other terms it can be neglected; solving for $\overline{N}$ yields Eq. (11). The relative error between Eq. (11), evaluated with simulated values of the community-level observables on the right-hand side, and the simulated value of $\overline{N}$ is typically less than a few percent (see Supplementary Figure S14).

D Selective advantage

The dynamics of the relative abundance $p_i = N_i/N$, with $N = \sum_j N_j$, is found by summing and differentiating Eq. (1) as

$\dot{p}_i = p_i\big(f_i - \sum_j p_j f_j\big) + (\lambda/N)\,(1 - S p_i)$,  (20)

where $f_i = 1 - N_i - \sum_{j \neq i} \alpha_{ij} N_j$ is the per-capita growth rate of species $i$. Using Eqs. (3), (6), and (12) in defining the selection coefficient

$s_i = N\Big[(1 - \mu)\big(1/S_e(t) - p_i\big) - \sigma\big(\sum_{j \neq i} z_{ij} p_j + C(t)\big)\Big]$,  (21)

we can write Eq. (20) as $\dot{p}_i = p_i s_i + (\lambda/N)(1 - S p_i)$. The term $s_i$ is responsible for the bias of species $i$ against the reference proportion $1/S_e$. As a heuristic means of calculating the time-averaged bias, we suppose the $p_j$'s can be treated independently of the $z_{ij}$ and can be replaced by $p_j \approx 1/S$; then we obtain, up to a positive prefactor, $\overline{s_i} \approx -\sigma \eta_i / \sqrt{S}$. On this basis, we expect $\eta_i$ to be indicative of a species' dominance bias.

E Derivation of the stochastic focal-species model from dynamical mean-field arguments

We write Eq. (1) as $\dot{N}_i = N_i (1 - N_i - h_i(t)) + \lambda$, where $h_i(t) = \sum_{j \neq i} \alpha_{ij} N_j(t)$ is the net competition experienced by species $i$. If we suppose that the abundances $\{N_j(t)\}$ (or, rather, their statistical properties) are independent of the particular realization $[z_{ij}]$ of the interaction matrix, then, for a given realization of $\{N_j(t)\}$, $h_i(t)$ is approximately Gaussian with time-varying mean $M(t)$ and variance $V(t)$, based on the properties of sums of Gaussian variables. The time-varying mean and variance imply that, averaged over time, $h_i$ does not necessarily follow a Gaussian distribution. We introduce

$M(t) = \mu \sum_j N_j(t), \qquad V(t) = \sigma^2 \sum_j N_j(t)^2$,  (22)

which are found to exhibit significant relative fluctuations, with skewed distributions (Figure 7A and B). However, once we shift and scale $h_i(t)$ into the "effective noise"

$\zeta_i(t) = \big(h_i(t) - M(t)\big) / \sqrt{V(t)}$,  (23)

we recover (closely) a $\mathcal{N}(0, 1)$ distribution, both for the set $\{\zeta_i(t)\}_{i=1,\dots,S}$ at any given time $t$, and for the stationary distribution of $\zeta_i(t)$, at least for typical species (Figure 7C and D). The empirical distribution of the $\zeta_i$ across all species and times is closely approximated by the stationary distribution $\mathcal{N}(0, 1)$ (Figure 7E). Therefore, we suppose that, despite their fluctuations, we can replace $M(t)$ and $V(t)$ with their time averages and model the growth rate as a stochastic process

$g(t) = 1 - \overline{M} - \sqrt{\overline{V}}\,\zeta(t)$,

where $\zeta(t)$ is a process with stationary distribution $\mathcal{N}(0, 1)$. The parameter correspondence in Eq. (9) follows by $m = \overline{M} - 1$, $s = \sqrt{\overline{V}} \approx \sigma \overline{N} / \sqrt{\overline{S_e}}$, and $\tau = \tau_\zeta$, the correlation time of $\zeta$. Note that, up to neglecting a diagonal term of the sum, the effective noise can be written

$\zeta_i(t) = \sum_j z_{ij}\, u_j(t)$,

with $z_{ij} \sim \mathcal{N}(0, 1)$ and $u_j(t) = N_j(t)/\|N(t)\|_2$. Given the chaotic turnover pattern, the latter is expected to perform something like a random walk on the $S$-sphere, with a decorrelation time corresponding to the turnover of dominant species. This timescale is inherited by the effective noise. More precisely, we compare autocorrelation functions (ACF). The ACF of a function $f$ is defined as

$\mathrm{ACF}_f(t_{\mathrm{lag}}) = \mathcal{T}[\delta f(t)\, \delta f(t + t_{\mathrm{lag}})] \,/\, \mathcal{T}[\delta f(t)^2]$,  (29)

with $\delta f = f - \mathcal{T}[f]$ and using the notation of Eq. (18). By definition, $\mathrm{ACF}_f(0) = 1$. For each species' effective noise we compute $\mathrm{ACF}_{\zeta_i}(t_{\mathrm{lag}})$ numerically, as shown in Figure 7F. Due to the small number of 'booms' per species, even over a large simulation time, the ACFs are slightly irregular. In order to make estimations more accurate, we consider the averaged ACF $\overline{\mathrm{ACF}}_\zeta = \frac{1}{S}\sum_i \mathrm{ACF}_{\zeta_i}$. The decay of correlation is well approximated by the exponential $\exp(-t_{\mathrm{lag}}/\tau_\zeta)$, where the parameter $\tau_\zeta$ (fitted by least squares) represents the noise correlation timescale for a 'typical' species.
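The extraction of the effective noise and of its correlation time from a simulated trajectory can be sketched as follows, under the same array conventions as before; the fitting threshold on the ACF is an illustrative choice.

```python
import numpy as np

def effective_noise(traj, alpha, mu, sigma):
    """zeta_i(t) of Eq. (23): shifted and scaled net competition."""
    h = traj @ alpha.T                                   # h_i(t) = sum_j alpha_ij N_j
    M = mu * traj.sum(axis=1, keepdims=True)             # M(t), Eq. (22)
    V = sigma ** 2 * (traj ** 2).sum(axis=1, keepdims=True)
    return (h - M) / np.sqrt(V)

def acf(x, max_lag):
    dx = x - x.mean()
    c = [np.mean(dx[: -k or None] * dx[k:]) for k in range(max_lag)]
    return np.array(c) / c[0]

def noise_correlation_time(zeta, max_lag=2000, dt=0.01):
    """Exponential fit exp(-t/tau) to the species-averaged ACF, Eq. (29)."""
    mean_acf = np.mean([acf(zeta[:, i], max_lag) for i in range(zeta.shape[1])],
                       axis=0)
    lags = np.arange(max_lag) * dt
    sel = mean_acf > 0.05                                # fit the initial decay only
    return -1.0 / np.polyfit(lags[sel], np.log(mean_acf[sel]), 1)[0]
```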
The approximately $\mathcal{N}(0, 1)$ distribution and exponential autocorrelation function of the effective noise suggest that it can be modelled as an Ornstein-Uhlenbeck process, the only Markov process with these two properties:

$\tau \dot{\zeta} = -\zeta + \sqrt{2\tau}\,\xi(t)$,

where $\xi(t)$ is a Gaussian white noise, $\langle\xi\rangle = 0$, $\langle\xi(t)\xi(t')\rangle = \delta(t - t')$. The timescale referred to as $\tau_{\mathrm{dom}}$ in the main text can be defined as $\tau_N$, the decay time of the exponential fit to the ACF of the abundance vector. For a vector-valued function, Eq. (29) gives

$\mathrm{ACF}_N(t_{\mathrm{lag}}) = \mathcal{T}[\delta N(t) \cdot \delta N(t + t_{\mathrm{lag}})] \,/\, \mathcal{T}[\|\delta N(t)\|^2]$.

Comparing $\mathrm{ACF}_N$ and $\overline{\mathrm{ACF}}_\zeta$, they match very well (inset of Figure 7F) for the reference simulation, as do the associated timescales $\tau_N$ and $\tau_\zeta$ for all $(\mu, \sigma)$ in the chaotic phase (Supplementary Figure S13). This observation motivates identifying $\tau$ of the focal-species model with the turnover timescale $\tau_{\mathrm{dom}}$. Thus, the focal-species model and its parameters have been fully specified.

The crucial difference to the dynamical mean-field theory developed in the weak-interaction limit is the fact that $M(t)$ and $V(t)$ are, for strong interactions, determined by a small number ($\sim S_e$) of dominant species, whose abundances fluctuate substantially and, during their time of co-dominance, have significant effects on each other; i.e., they are conditionally correlated. Therefore, a self-consistent determination of the effective parameters fails, because species cannot be treated as independent realizations of the focal-species model. For example, the self-consistency relation for $m$ in Eq. (7b) is $m = \mu S \langle x \rangle - 1$. In our reference simulation $m = 0.26$, whereas $\mu S \langle x \rangle - 1 = -0.31$ even has the wrong sign. This discrepancy is due to the neglected inter-species correlations needed for the collective correlation $C$, Eq. (12), to exceed the critical value of Eq. (13) associated with $m > 0$ and boom-bust dynamics.

F Steady-state solution of the focal-species model under the unified coloured noise approximation

The unified coloured noise approximation [56] assumes overdamped dynamics to replace a process $\dot{x} = f(x) + g(x)\zeta(t)$, driven by Gaussian correlated noise of correlation time $\tau$, with a process driven by white noise. The approximation is exact in the limits $\tau \to 0$ or $\tau \to \infty$. The stationary distribution of the corresponding white-noise process is $p^*(x) \propto \exp \int^x \psi(x')\, \mathrm{d}x'$, with $\psi$ a function of $f$, $g$, and $\tau$. For Eq. (7), in one dimension, the stationary probability current $J^*(x)$ must be constant. Since $x$ is a non-negative abundance in our case, we must impose a boundary at $x = 0$ through which probability cannot flow. Therefore $J^* \equiv 0$. The solution for $p^*$ is then obtained by quadrature; for $\lambda \ll x \ll 1$ it reduces to the power-law form quoted in the main text.

Supplementary material: Chaotic turnover of rare and abundant species in a strongly interacting model community

S1 Supplementary Figures
Figure 1: Turnover of the dominant component. A The stacked abundances of all species under steady-state conditions: there is a turnover of species such that only the dominant component is visible at any given time (each species has a distinct random colour). B Bray-Curtis index of community-composition similarity between the dominant component of the community at time $t$, and the composition it would reach if isolated from the rare species and allowed to reach equilibrium: the community appears to approach the composition of few-species equilibria before being destabilized by invasion from the pool of rare species.
Figure 2: Statistical features of abundance variations across species and in time. A Snapshot rank-abundance plot for the relative abundances in the reference simulation: most species have orders-of-magnitude smaller abundances than the top ranks. Different lines represent observations at well-separated time points. B Species abundance distribution (SAD, blue histogram) corresponding to the blue rank-abundance plot; overlaid, the abundance fluctuation distribution (AFD), averaged over all species (black line), with ± one standard deviation across species shaded in grey: the snapshot SAD appears to be a subsampling of the average AFD, indicating an equivalence, but de-synchronization, of species in their abundance fluctuations. The one bar missing from the SAD is the effect of finite species richness, as high-abundance bins only ever contain a couple of species for $S = 500$. The vertical dashed line indicates the immigration level $\lambda$ which determines a lower limit to abundances.

Figure 3: Comparison of the stochastic focal-species model and the chaotic dLV model. A Time series of one arbitrary species in the disordered Lotka-Volterra (dLV) model (blue), and one realization of the stochastic focal-species model (Eq. (7)) with parameters as in Eq. (9): the time series are statistically similar. B Comparison of the average abundance fluctuation distribution (AFD) from Figure 2 (black), and the AFD of the focal-species model (pink): excellent agreement is found for the power-law section. The 'unified coloured noise approximation' solution for the focal-species model's AFD (dashed, pink line) predicts the correct overall shape of the distribution, but not a quantitatively accurate value for the power-law exponent.

Figure 4: Species differences in dominance. A Example of a long abundance time series for the three species ranked first, median, and last with respect to the 'dominance bias' (fraction of time spent in the dominant component relative to the species median). Some species 'boom' more often than others. B The scaling of the median fraction of time spent in the dominant component against reciprocal species pool size: increasing $S$ results in a proportional decrease in median dominance time. C Distribution of dominance biases against relative dominance rank for a range of $S$: there appears to be convergence towards a non-constant limiting distribution, implying that net species differences are not due to small-$S$ effects. Note that, by definition, the dominance bias is 1 for the middle rank, indicated by the dashed line separating positively from negatively biased species. D Scatter of dominance bias against the normalized sum of interaction coefficients, Eq. (4): lower net competition correlates with higher dominance bias. Species in the tails of the distribution are also less 'typical', with typicality quantified by the index $T_i$, Eq. (16), representing the similarity of a species' AFD to the species-averaged AFD. Panels A and D are both for $S = 500$.
Figure 5: Dynamical phases of the disordered Lotka-Volterra model as a function of the interaction mean and standard deviation. Probability of persistent chaos in long-time simulations: for each $\mu$ and $\sigma$ (with 0.01 increments), 30 simulations were made, each with a different random initial condition $N_i(0) \sim \mathcal{U}(0, 2/S)$ and realization of the interaction matrix. Parameters yielding divergence every time are marked with grey. The boundary separating the chaotic phase from the rest of the multiple-attractor phase (in which cycles and multi-stable fixed points are common in addition to chaos) is not sharp, unless probed adiabatically in the way explained in Supplementary Figure S8. The unique fixed-point phase has been studied analytically in the weak-interaction regime ($\mu \sim 1/S$, $\sigma \sim 1/\sqrt{S}$; cf. Eq. (5)). When inter-specific competition is in general stronger than intra-specific competition, a single species (identity depending on initial condition) dominates, in line with the classical competitive exclusion principle [60].

Figure 6: Relations between effective parameters in the chaotic phase. A Colour legend of the chaotic phase (boundaries from Figure 5). Each pair of $(\mu, \sigma)$ has been mapped to a distinct colour. B The exponent $\beta$ of the power-law section of the AFD for the chaotic dLV model plotted against the analogue $\beta_{\mathrm{foc}}$ obtained for the focal-species model: generally good agreement is found, with more outliers for parameters close to phase boundaries. A few outliers lie beyond the plotted range. C Co-dependence of the effective parameters $m$, $s$, $\tau$: the amplitude $s$ of growth-rate fluctuations approximately equals the absolute value $m$ of the negative mean growth rate (only weakly depending on $\mu$ and $\sigma$; Supplementary Figure S10); $s$ is roughly proportional to the inverse turnover time, but the slope of the relationship depends on $\mu$ and $\sigma$.

Figure 7: Statistical properties of the effective noise. A, B Time series and distribution of $M_{\mathrm{rel}} = M/\overline{M} - 1$, and analogously for $V$. C, D Histograms of $\zeta_i(t)$ across all species and time (grey), over just species for one random time (green), over all time for the first/mid/last-ranked species with respect to average abundance (blue/pink/yellow), with $\mathcal{N}(0, 1)$ (black, dashed) for reference. E The empirical distribution of $\zeta$ in Eq. (23) over all species and times, compared to the distribution $\mathcal{N}(0, 1)$ assumed for $\zeta$ in the focal-species model. F Autocorrelation functions: for every species (grey), first/mid/last-rank species (blue/pink/yellow), and the average over all species (black). The left inset compares the ACFs of $N$ (green) and $\zeta$ (black), and the exponential fit to the latter (red); the right inset shows the distribution of the parameter $\tau$ in exponential fits to each species' ACF.

Figure S1: Sensitive dependence on model parameters. A chaotic system exhibits sensitive dependence on initial conditions, and hence also on any model parameters or numerical implementation details that affect the dynamic variables. A Reference simulation, showing stacked abundances, similar to Main Text Figure 1A. B A change of integration scheme, with respect to the reference; C a perturbation of the interaction coefficients by $\mathcal{O}(10^{-6})$; D a perturbation of the initial abundances by $\mathcal{O}(10^{-8})$. E Each type of perturbation leads to a completely different community composition compared to the reference (measured as Bray-Curtis similarity) after a few hundred time units.
Figure S2: Convergence to a positive maximal Lyapunov exponent (MLE). A The dominant finite-time Lyapunov exponent (FTLE) over a few integration time steps fluctuates along a trajectory, indicating the alternation of periods of phase-space expansion (boom) and contraction (bust). B The cumulative average of the FTLE converges towards a limit that is the maximal Lyapunov exponent. Its positive value (0.02) indicates that the trajectory is chaotic.

Figure S3: Decay of community similarity with time. A The temporal similarity matrix has elements given by the Bray-Curtis similarity between the abundance vectors at two time points, $\Sigma(t, t') = \mathrm{BC}(N(t), N(t'))$. Because only the diagonal elements are far from zero, and the similarity index is mostly determined by the overlap of dominant species, we conclude that the dominant component is not closely repeated (unless, perhaps, after an exceedingly long time). The aberration around $t \approx t' \approx 8000$ reflects a time when some dominant component persisted for an unusually long time. B For a few well-separated time points (one graph each), we show how $\Sigma(t, t')$ decays over time on a timescale of 200 time units (top panel), and how it fluctuates around a small value over a longer timescale of 5000 time units. Thus, community composition decorrelates quickly in time, with some residual low peaks in similarity reflecting that one or a few species will eventually reappear in a dominant community that is otherwise differently composed.

Figure S4: Scaling of effective community size with richness. The time average of the effective community size, $S_e$ (Main Text Eq. (6)), increases slowly (but super-logarithmically) with the overall richness $S$. That is, even if we add thousands of new species to the community, the dominant component at a given time would have just a species or two more than before.

Figure S5: Robustness of turnover dynamics under model variations. We here illustrate that chaotic dynamics is observed even when we relax the simplifying assumptions we made on model parameters in the Main Text; however, we leave a systematic investigation of these generalized scenarios for future work. A Non-uniform growth rates: we sample $r_i \sim \mathcal{U}(0, 1)$. B Non-uniform carrying capacities: $K_i \sim \mathrm{LogNorm}(0, 0.1)$. C Sparse interactions: each interaction has a 0.1 chance to be non-zero. D Symmetric bias: $\gamma = 0.2$ correlation between diagonally opposed interaction coefficients. E Predator-prey bias: $\gamma = -0.3$ correlation between diagonally opposed interaction coefficients.

Figure S6: Pairwise correlations in species abundances. While most of the $S(S-1)$ pairs of species do not have meaningful levels of correlation over long times (here 100,000 time units), every species has some other species with which its correlation is substantial and non-spurious. The vertical axis has the correlation coefficient $\rho_{ij}(t_{\mathrm{lag}})$ with lag time $t_{\mathrm{lag}}$, and the horizontal axis has the rescaled interaction coefficient $z_{ij} = (\alpha_{ij} - \mu)/\sigma$. The inset shows that most zero-lag correlation coefficients are close to zero; all zero-lag correlations are scattered in grey in the main plot. Blue (darker) points show the values of maximum correlations $\max_j \rho_{ij}(0)$ for every species $i$; in order to see if correlations are stronger if we optimize over the delay time, we show in light blue $\max_{j,\, t_{\mathrm{lag}} < 200} \rho_{ij}(t_{\mathrm{lag}})$. It is seen that the maximal correlations are around 0.25 in size, and clearly associated with $z_{ij} < 0$, i.e. a less-than-averagely negative (even positive) effect of species $j$ on $i$. Similarly, the extremal anti-correlations (pink for zero time lag, and light pink optimizing over time lag) are associated with $z_{ij} > 0$, i.e. a particularly negative effect of $j$ on $i$.
Figure S7: Scaling of the AFD power-law exponent with richness and immigration rate. From simulations, we have extracted the slope of the power-law section of the abundance fluctuation distribution (AFD). A For varying richness, we find empirically that the exponent depends linearly on the logarithm of species richness, with coefficients that depend on the system's other parameters. B For varying immigration rate, the exponent appears to approach one inversely in the logarithm of the immigration rate, with a coefficient that depends on the other parameters. The immigration rates used are 10^-8, 10^-12, 10^-16, 10^-20, 10^-24, 10^-28, 10^-32, and 10^-128, chosen in order to extrapolate towards zero immigration. The dashed line connects the exponent value 1 with the value at immigration rate 10^-8.

Figure S8: Phase diagram from adiabatic simulations. Adiabatic simulations allow us to track, in a numerically efficient fashion, the attractors of the dynamics as model parameters are changed slowly and continuously. To make the interaction statistics continuous parameters of the model, we use as interaction matrix A(mu, sigma) = mu + sigma Z, where Z is a single, fixed realization of a standard Gaussian random matrix. A For each value of sigma, we initialized separate simulation runs starting at mu = 1.4, and let their abundances evolve until an attractor was found. For each run, we then changed mu by small increments of -0.1, allowing enough time between each change for the abundances to relax from their previous state. This relaxation would either result in a small perturbation of the previous attractor, or instigate a jump to a different attractor. If a state diverged, the initial abundances for the next value of mu were set to the most recent non-divergent attractor. Thus, each simulation traced a sequence of attractors from mu = 1.4 down to -0.1, corresponding to a horizontal line in the phase diagram. The colour quality reflects the class of the attractor, and the colour gradation indicates the effective community size, revealing the following features. First, we find mostly fixed points in the multiple-attractor region. This is because, once a fixed point is converged to, it is "held on to" until it vanishes or changes stability. If, instead, every simulation at given parameters started from newly sampled initial abundances and interaction matrix, we would find different attractors every time, and the diagram would become more heterogeneous (compare Main Text Figure 5). Second, clear lines radiate from

Figure S9: Variation of community-level observables across the phase diagram. For the community-level observables in Main Text Eq. (11) we show: A-C their time-averaged values; D-F their relative fluctuations. The data comes from the adiabatic simulation detailed in the caption to Figure S8. An arrow on the end of the colour bar implies the range has been capped for clarity.

Figure S10: Dependence of the effective parameters on the interaction statistics. The empirical, approximate proportionality between the effective parameters, found across the chaotic phase, has a proportionality constant that depends relatively weakly on the interaction mean and standard deviation.
Figure S11: Collective correlation. Within the bounds of the chaotic phase indicated in Figure S8A, we have run a long simulation for each parameter point with a random initial condition and interaction matrix realization. Statistics were recorded for persistently chaotic trajectories; non-chaotic trajectories were discarded, and the parameter point rerun to obtain a long chaotic trajectory, up to five times, else the point was omitted (the chaos probability was shown in Main Text Figure 5). A The collective correlation. B The critical value of the collective correlation as defined by Main Text Eq. (13). C The ratio of the collective correlation to its critical value tends towards 1 at the boundary to the equilibrium phase. Note that the collective correlation changes continuously across this boundary (Figure S9C). The arrow at the upper end of the colour bar implies the range has been capped for clarity.

Figure S12: Power-law exponent in the chaotic phase. A Variation of the AFD power-law exponent across the chaotic phase. Apart from outliers, we find an exponent larger than one. B To test the accuracy of the focal-species model in predicting the exponent, we measure the relative error in the exponent minus one (since we expect an exponent larger than one) with respect to the value obtained from simulations of the disordered Lotka-Volterra model. Data from the simulations described in Figure S11. The arrow at the upper end of the colour bar implies the range has been capped for clarity.

Figure S13: Comparison of autocorrelation times. We compare the autocorrelation time of the abundance vector and the autocorrelation time of the effective noise. These two parameters are obtained by an exponential fit exp(-t/tau) applied to the respective autocorrelation functions. Across the chaotic phase, these two timescales are quantitatively close, for reasons explained in Main Text Appendix E.
16,916.2
2023-06-19T00:00:00.000
[ "Environmental Science", "Biology", "Mathematics" ]
LANGUAGE-AGNOSTIC SOURCE CODE RETRIEVAL USING KEYWORD & IDENTIFIER LEXICAL PATTERN

Despite the fact that source code retrieval is a promising mechanism to support software reuse, it suffers from an emerging issue as programming languages develop. Most retrieval systems rely on programming-language-dependent features to extract source code lexicons. Thus, each time a new programming language is developed, such a retrieval system must be updated manually to handle that language. Such an update may take a considerable amount of time, especially when the parsing mechanism of the language is uncommon (e.g., Python's parsing mechanism). To handle this issue, this paper proposes a source code retrieval approach which does not rely on programming-language-dependent features. Instead, it relies on the Keyword & Identifier lexical pattern, which is typically similar across various programming languages. This pattern is adapted into four components, namely tokenization, retrieval model, query expansion, and document enrichment. According to our evaluation, these components are effective for retrieving relevant source codes agnostically, even though the improvement for each component varies.

INTRODUCTION

Software reuse is a research area focused on optimizing development time by reusing existing software artifacts (Chavez, et al., 1998). This activity is commonly conducted when programmers must do repetitive tasks that have been done by other programmers or themselves. However, due to the rapid growth of software artifacts (Bajracharya, et al., 2014), retrieving a relevant artifact from repositories may take a considerable amount of time, especially when the targeted repositories are unstructured and contain a vast number of software artifacts. Hence, artifact retrieval should be developed as a supportive tool for software reuse. It is expected to aid programmers in finding relevant software artifacts from local or online repositories in no time. In general, software artifact retrieval focuses on two major domains: binary code and source code. In the binary code domain, artifact retrieval commonly relies on external resources such as human-defined tags since the binary code itself is not human-readable. Two example systems which apply such a retrieval mechanism are Maven Repository (http://mvnrepository.com/) and NuGet Gallery (https://www.nuget.org/). To the best of our knowledge, there is only one work which does not rely on external resources, done by Karnalim and colleagues (Karnalim & Mandala, 2014; Karnalim, 2015; Karnalim, 2016b); it relies on binary code from Java Archives (JAR) to extract related lexicons. In the source code domain, on the contrary, artifact retrieval commonly relies on more varied features due to source code's high readability. Several sample features for such retrieval systems are: structural information (https://code.google.com/), program input-output (Lemos, et al., 2007), user contribution (Vanderlei, et al., 2007), external resources (Chatterjee, et al., 2009), and modified retrieval algorithms (Puppin & Silvestri, 2006). This paper proposes a language-agnostic approach to retrieve source codes. Language-agnostic means that our approach can incorporate various programming languages automatically since it does not rely directly on programming language structure. Instead, it incorporates the Keyword & Identifier lexical pattern, which is selected as our main concern for two reasons.
First, Keyword & Identifier is the most declarative token type for representing author intention in source code. Second, Keyword & Identifier lexical patterns are typically similar in most programming languages.

RELATED WORKS

Source code retrieval is a task for retrieving, classifying, and extracting information from source code (Mishne & Rijke, 2004). There are numerous reasons why such activity is so popular nowadays; Sadowski et al. (2015) provide a good description of these reasons. However, regardless of the reasons, since the standard Information Retrieval (IR) approach frequently yields inaccurate results in the source code domain (Kim, et al., 2010; Hummel, et al., 2008), this task has become an emerging field for research, especially for enhancing its effectiveness. In order to enhance retrieval effectiveness, most source code retrieval systems rely heavily on user knowledge about the target code structure. Many large-scale source code retrieval systems such as Google Code Search (https://code.google.com/), Codase (http://www.codase.com/), Krugle (http://www.krugle.com/), and searchcode (https://searchcode.com/) encourage users to provide the fixed structural location of the given query (e.g., class, field, or method body components). In such fashion, a large number of false positives can be removed since not all indexed terms are taken into account; only terms found at the specified structural location are considered. It is important to note that such an approach is found not only in large-scale source code retrieval systems but also in various research works about source code retrieval (Sindhgatta, 2016; Keivanloo, et al., 2010). Exploiting user knowledge further, several works even expect users to provide a program pattern as a query. Such a pattern is expected to draw out various target source code characteristics. It is typically represented as a UML-like function specification (Hummel, et al., 2008), a high-level form of programming language (Paul & Prakash, 1994), a specific query language (Begel, 2007), or a raw code chunk (Mishne & Rijke, 2004). Nevertheless, even though structure-based approaches are more effective than the standard IR approach, they may be unfavorable for users who only know the target source code in a black-box manner and have no clue about its structure. Based on the fact that black-box behavior is representable through program input-output, several works incorporate program input-output as their query. Users can provide either test cases (Lemos, et al., 2007), input-output data types (Thummalapenta & Xie, 2007; Reiss, 2009), or an input-output query model (Stolee, et al., 2016) to refine their retrieval result. On the one hand, test cases are incorporated for retrieving only source codes whose output matches the given test-case output for the corresponding input. Input-output data types, on the other hand, are incorporated for retrieving only source codes whose input and output match specific object types. Even though both kinds of approaches may help users retrieve target source code in a black-box manner, they cannot help users who only know the target source code functionality in general. For example, when users only know ANTLR as a source code parser library, they cannot use an input-output pattern as a query for retrieving ANTLR. Retrieving relevant source codes for users who only know the target source code functionality in general is not a trivial task since it forces the retrieval system to work well even with a simple and limited query.
In general, there are several approaches to achieve this goal: conducting pre-processing refinement, conducting post-processing refinement, incorporating user contribution, enriching source code with API documentation, and modifying the retrieval mechanism.

Conducting pre-processing refinement refers to enriching either the user query or the indexed source codes to provide more descriptive information. On the one hand, when enriching the user query, most works focus on applying query expansion techniques adapted from the standard IR approach. One example is Lu et al.'s work: Lu et al. (2015) enrich the query with its synonyms, extracted from WordNet. On the other hand, when enriching indexed source codes, most works focus on embedding more related information. One example is Vinayakarao et al.'s work: Vinayakarao et al. (2017) enrich the indexed source codes by providing additional annotations related to syntactic representation. Nevertheless, we would argue that pre-processing refinement should be assisted with other techniques to provide more accurate results, since such refinement does not affect the retrieval process directly.

Conducting post-processing refinement refers to refining initially retrieved results with additional processes in order to yield more effective results. In such an approach, initial retrieved results are commonly extracted from a large-scale internet search engine, which can be accessed with ease through the internet. After being retrieved, initial results are then refined to accommodate a specific goal through additional processes, such as re-ranking mechanisms and information embedding. On the one hand, re-ranking has been applied in two works, Kim et al.'s and Stylos & Myers'. Kim et al. (2010) override the search results of Koders, a source code search engine that was discontinued in 2012, to retrieve source code examples by re-ranking the search results. Stylos & Myers (2006) share a similar goal with Kim et al., but they override Google search results instead of Koders and utilize API documentation in their ranking mechanism. On the other hand, information embedding has been applied in one work, by Hoffmann et al. (2007). They refine Google search results by embedding multiple resources such as Java Archives (JAR), code examples, and code-specific snippets. Nevertheless, refining the retrieved results of an existing source code retrieval system may introduce two additional drawbacks: 1) it takes longer processing time due to its two-fold processing mechanism; and 2) the applicable refinements are limited since the initial retrieval mechanism, which is commonly defined in a publicly-available retrieval system, cannot be modified directly.

Incorporating user contribution refers to embedding more information from users to enhance retrieval effectiveness. This approach typically relies on either user behavior logs (Ye & Fischer, 2002) or collaborative information (Vanderlei, et al., 2007; Gysin & Kuhn, 2010) as its supplementary information. On the one hand, user behavior logs are applied by Ye & Fischer for encouraging source code reuse (Ye & Fischer, 2002). Their retrieval mechanism is personalized using user behavior logs so that users can easily retrieve their target source code for reuse. On the other hand, collaborative information is applied by Vanderlei et al. (2007) and Gysin & Kuhn (2010).
Vanderlei et al. incorporate collaborative manual tagging of indexed source code, whereas Gysin & Kuhn incorporate user votes and developer reputation to refine their retrieval results. Nevertheless, user contribution only has a significant effect when the system is frequently used and/or involves numerous users. Thus, its impact may be insignificant for new users and unavailing in early development stages with limited users. Even though its impact may improve as the number of users and their interactions grows, user contribution still relies greatly on users, which might introduce bias due to human error.

Enriching source code with API documentation refers to utilizing API documentation to enhance retrieval effectiveness. It is inspired by the fact that source code has limited vocabulary terms, and enriching documents with external resources has proved effective in the IR domain. In general, three works fall into this category, namely Chatterjee et al.'s, Grechanik et al.'s, and Lv et al.'s work. First, Chatterjee et al. (2009) enrich Java source code by embedding the specific API documentation each time that API is used in the source code. Second, Grechanik et al. (2010) apply a similar approach to Chatterjee et al.'s work but differ in how they utilize API documentation: it is used to convert the query into API calls before the retrieval phase, and the retrieval phase is conducted based on the given API calls. Last, Lv et al. (2015) use API documentation to expand the user query; the potential APIs are defined based on an API understanding component. Despite its promising results, enriching source code with API documentation relies greatly on the completeness and quality of the given API documentation. Thus, its impact may vary per source code dataset since not all datasets feature high-quality API documentation.

Modifying the retrieval mechanism refers to designing a domain-specific retrieval mechanism for source codes. We believe that such an approach is the most promising one for enhancing retrieval effectiveness, as the retrieval mechanism is the heart of information retrieval: without a proper retrieval mechanism, even the best retrieval system may yield faulty results. The simplest implementation of such an approach is to consider source code as natural language text without relying on source-code-specific features (Girardi & Ibrahim, 1995). However, the result is not promising since natural language in the source code domain is quite different from real natural language. Hence, several works focus on probabilistic approaches instead. To the best of our knowledge, there are three works which use such a probabilistic approach. First, Puppin & Silvestri (2006) modify the Google PageRank algorithm (Page, et al., 1998) by treating class usage as a replacement for links. Second, Spars-J (Inoue, et al., 2005) applies a similar approach to Puppin & Silvestri's work but with more fine-grained entities and relations. Last, Sourcerer (Bajracharya, et al., 2014) combines three ranking mechanisms, namely graph-based, text-based, and structure-based ranking, to yield the most appropriate results. Nevertheless, since these probabilistic approaches rely, to some extent, on source code structure as retrieval features, they still require considerable effort when incorporating new programming language(s).

In this paper, a language-agnostic source code retrieval is proposed.
Language-agnostic means that our proposed source code retrieval can incorporate new programming language(s) with no effort since it ignores language-centric features such as programming language structure. To our knowledge, there are no related works that claim to be language-agnostic. Most of them are only focused on a particular programming language. Even though some of them state that their work can be applied to other programming languages, they require considerable effort to do so. Our work relies on the Keyword & Identifier lexical pattern, whose rules are similar in most programming languages. This pattern is adapted into four components, namely tokenization, retrieval model, query expansion, and document enrichment.

METHODOLOGY

Similar to the standard IR approach, our proposed source code retrieval system consists of three modules: source code tokenization, retrieval, and indexing. Source code tokenization converts a query or source code into lexicons; source code retrieval retrieves relevant source code according to a given query; and source code indexing indexes the source code dataset. Our contributing components, which are tokenization, retrieval model, query expansion, and document enrichment, are applied in either source code tokenization or retrieval: tokenization is applied in source code tokenization while the other three are applied in source code retrieval.

Source Code Tokenization

The detail of our proposed source code tokenization can be seen in Figure 1. This module converts a query or source code into lexicons in four steps, namely lexicon recognition, lexicon categorization, transition-based tokenization, and standard text preprocessing (lowercasing, stopping, and stemming).

Figure 1. The Flowchart of Source Code Tokenization

Lexicon recognition is responsible for extracting all possible lexicons from source code. However, since our proposed approach is aimed to be as language-agnostic as possible, programming language lexers and parsers are not used. Instead, we utilize a Keyword & Identifier lexical pattern that is generalized from the naming rules of popular programming languages (Cass, 2016). The regular expression of this pattern can be seen in Eq. (1). Basically, our pattern only accepts lexicons which start with a letter or underscore, followed by zero or more alphanumeric characters, underscores, hyphens, and dots. The hyphen is included in this pattern since such a character is typically used in scripting languages such as PHP. Moreover, the dot is included to differentiate keywords and identifiers: a lexicon containing a dot is considered an identifier, since most keywords consist only of alphanumeric characters. Even though the comment pattern is also quite similar in most programming languages, it is ignored in our work for the following reasons: 1) a comment delimiter in a particular programming language may represent another token type in other programming languages (for instance, the number sign ('#'), which acts as an initial mark of a comment in Python, acts as an initial mark for a macro in C++); and 2) comment lexicons are still implicitly extracted with Eq. (1), even though they are not exclusively recognized as comment lexicons. Lexicon categorization is responsible for classifying recognized lexicons based on the Keyword & Identifier lexical pattern. Each lexicon is classified either as a keyword-like or an identifier-like lexicon.
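As a rough Python sketch of the recognition step described in Eq. (1) — the regex below is our reading of that description, not the authors' exact expression:

```python
import re

# Our reading of Eq. (1): a letter or underscore, followed by zero or more
# alphanumerics, underscores, hyphens, or dots (hypothetical reconstruction).
LEXICON_PATTERN = re.compile(r"[A-Za-z_][A-Za-z0-9_\-.]*")

def recognize_lexicons(source: str) -> list:
    """Extract all candidate Keyword & Identifier lexicons from raw source,
    with no language-specific lexer or parser involved."""
    return LEXICON_PATTERN.findall(source)

# Comment lexicons are implicitly extracted too, as noted above:
print(recognize_lexicons("generateData (tokenData); // tokenizer"))
# -> ['generateData', 'tokenData', 'tokenizer']
```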
A lexicon is categorized as a keyword-like lexicon if it consists of only uppercase or only lowercase characters, with zero or more underscore(s); otherwise, it is categorized as an identifier-like lexicon. Even though this heuristic may misclassify several identifiers as keywords or vice versa, we would argue that it is the best approach so far for recognizing Keyword & Identifier lexicons in a language-agnostic manner. Transition-based tokenization is responsible for parsing all identifier-like lexicons based on their character transitions, as most identifier lexicons are built from several sub-lexicons separated by character transitions. The implementation of this phase is adapted from Karnalim & Mandala's work (Karnalim & Mandala, 2014). Standard text preprocessing is responsible for minimizing the number of lexicons, handling character-case variation, and handling affix variation. First, minimizing the number of lexicons is performed by stopping, which relies on Weka's default stop words (http://weka.sourceforge.net/doc.stable/weka/core/Stopwords.html). Second, handling character-case variation is performed by lowercasing, which converts all characters to their lowercase form. Finally, handling affix variation is performed by stemming, implemented with the English stemmer (Porter, 2001).

In order to give a broader view of our source code tokenization, a sample tokenization process is also embedded in Figure 1. First of all, a particular Java source code chunk, generateData (tokenData); // tokenizer, is fed into lexicon recognition and yields three lexicons: generateData, tokenData, and tokenizer. Based on their respective lexical characteristics, the first two lexicons are categorized as identifier-like lexicons whereas the latter is categorized as a keyword-like lexicon. Afterwards, transition-based tokenization splits generateData and tokenData into their respective sub-lexicons, and all generated lexicons from both categories are lowercased, stopped, and stemmed. As a result, these steps yield five lexicons: token, generat, data, token, and data.

Source Code Retrieval

Source code retrieval consists of four major sub-modules, namely source code tokenization, query expansion, retrieval model, and document enrichment. The retrieval flowchart involving these sub-modules can be seen in Figure 2. First of all, source code tokenization converts the query into a token stream and feeds it to query expansion. After the query has been expanded, all source codes which match the given query are returned as retrieval results. In this phase, each time a source code is considered an irrelevant document for the given query, its contents are enriched by the most similar source code and its retrieval score is recalculated.

Query Expansion

Query expansion is an IR technique to handle the term mismatch problem by expanding the query with additional terms that are most related to the initial query terms (Carpineto & Romano, 2012). In our work, among various query expansion techniques, we apply a query-specific local technique, which expands the query based on terms found in the top-K retrieved source codes. First of all, query expansion candidates are selected from the top-K retrieved documents. To limit the number of candidates, a lexicon is only considered a candidate iff it consists of 1 to 4 words and its category is similar to the category of the query term, either keyword-like or identifier-like.
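Before moving on to candidate scoring, here is a compact, hedged Python sketch of the whole tokenization pipeline above. The categorization rule and transition splitting follow the descriptions in the text; the stemmer is a toy suffix-stripper standing in for the Porter stemmer, tuned just enough to reproduce the worked example:

```python
import re

LEXICON_PATTERN = re.compile(r"[A-Za-z_][A-Za-z0-9_\-.]*")  # Eq. (1), as before
STOP_WORDS = {"the", "a", "an", "of", "and", "or"}  # stand-in for Weka's list

def categorize(lexicon: str) -> str:
    """Keyword-like iff all letters share one case (underscores allowed)."""
    letters = lexicon.replace("_", "")
    return "keyword" if (letters.islower() or letters.isupper()) else "identifier"

def transition_split(lexicon: str) -> list:
    """Split identifier-like lexicons on character transitions (camelCase,
    digits) and on separators (dots, underscores, hyphens) -- a simplified
    take on the adapted transition-based tokenization."""
    subs = []
    for part in re.split(r"[._\-]", lexicon):
        subs += re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|[0-9]+", part)
    return subs

def stem(word: str) -> str:
    """Toy stemmer (NOT Porter); strips a few suffixes for illustration."""
    for suffix in ("izer", "es", "e", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def tokenize(code: str) -> list:
    out = []
    for lex in LEXICON_PATTERN.findall(code):          # lexicon recognition
        subs = transition_split(lex) if categorize(lex) == "identifier" else [lex]
        for s in subs:
            s = s.lower()                              # lowercasing
            if s not in STOP_WORDS:                    # stopping
                out.append(stem(s))                    # stemming
    return out

print(tokenize("generateData (tokenData); // tokenizer"))
# -> ['generat', 'data', 'token', 'data', 'token'], the five lexicons above
```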
After all candidates are selected, these candidates are tokenized and merged as shortlisted query term candidates. Shortlisted candidates are then sorted in descending order by their importance score, which is defined in Eq. (2). In general, Eq. (2) considers two aspects to determine candidate importance: weighted term frequency (weighted_tf(t)) and one-to-many association (the sum of MI(t,q) over q in Q). A promising candidate should occur frequently in the top-K retrieved documents and be strongly associated with the initial query terms. Both aspects are connected in Eq. (2) through multiplication, where each factor has been additive-smoothed:

score(t) = (weighted_tf(t) + 1) * (sum over q in Q of MI(t,q) + 1) (2)

The weighted term frequency of term t is calculated using Eq. (3), which results from counting the occurrences of the given term in the top-K retrieved documents (tf(t,di)). For each retrieved document, the term frequency is multiplied by the respective document retrieval score (sd(di)) so that the impact of term frequency in high-ranked documents is strengthened. One-to-many association is calculated based on the term co-occurrence between the candidate term and the query terms. Term co-occurrence is preferred to a natural language ontology since the vocabulary used in programming may differ from standard natural language (Karnalim, 2016c) (e.g., mouse in a programming context may not be related to a kind of rodent). Term co-occurrence for each query term q and candidate term t is measured based on the position-based mutual information defined in Eq. (4): for each document in the top-K retrieved documents, term pairs are extracted and their respective inverse delta positions are calculated and summed. pos(q,i,p) and pos(t,i,p) represent term positions in document i and pair p; the former is the query term position whereas the latter is the candidate term position. Eq. (4) yields a higher score when the candidate and query terms are frequently located in adjacent positions. However, since the limited vocabulary of a source code corpus may yield biased co-occurrence, an external resource is incorporated as a replacement for the source code corpus to measure mutual information. Our work utilizes noun phrases from 27,229 software-specific HTML pages, scraped from 32,000 links at the beginning of the GitHub Java Corpus project list (Allamanis & Sutton, 2013); the remaining 4,771 links were not accessible. In order to ensure each term position is only included in one position pair, the algorithm for selecting pairwise method candidates from Karnalim's work (Karnalim, 2016a) is adapted and redirected to handle term positions. It takes the query and candidate term position lists as its input and returns selected position pairs as its result. First, all query term positions are paired with possible candidate term positions. Once paired, all pairs are sorted in ascending order by distance, and each pair whose member(s) occur in a more adjacent pair is removed. In such fashion, the remaining position pairs are the most adjacent pairs and each position is only included once in the selected pairs. After all query expansion candidates are sorted in descending order by importance, the top-N candidates are selected and merged with the initial query lexicons. In our work, N is defined manually by the user so that it may be modified according to user necessity.
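The following Python sketch puts Eqs. (2)-(4) together. The data shapes (token-list documents, a term-to-positions index) and the greedy nearest-first pairing are our assumptions about details the text leaves implicit:

```python
def weighted_tf(term, topk_docs, doc_scores):
    """Eq. (3): occurrences of 'term' in each top-K document, weighted by
    that document's retrieval score sd(di)."""
    return sum(doc.count(term) * s for doc, s in zip(topk_docs, doc_scores))

def position_mi(term, query_term, positions):
    """Eq. (4), simplified: summed inverse distances between paired
    occurrences. 'positions' maps term -> {doc_id: [positions]}."""
    total = 0.0
    for doc_id, q_pos in positions.get(query_term, {}).items():
        t_pos = positions.get(term, {}).get(doc_id, [])
        # Greedy nearest-first pairing so each position is used at most once,
        # mirroring the adapted pair-selection algorithm described above.
        pairs = sorted((abs(q - t), q, t) for q in q_pos for t in t_pos if q != t)
        used_q, used_t = set(), set()
        for dist, q, t in pairs:
            if q not in used_q and t not in used_t:
                used_q.add(q)
                used_t.add(t)
                total += 1.0 / dist
    return total

def candidate_score(term, query_terms, topk_docs, doc_scores, positions):
    """Eq. (2): additive-smoothed weighted tf times one-to-many association."""
    mi = sum(position_mi(term, q, positions) for q in query_terms)
    return (weighted_tf(term, topk_docs, doc_scores) + 1.0) * (mi + 1.0)
```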
Retrieval Model

Our proposed retrieval model is extended from Okapi BM25 (Robertson, et al., 1998), a popular retrieval model from natural language IR, by incorporating the Keyword & Identifier lexical pattern. The scoring function for this retrieval model can be seen in Eq. (5). score(Q,d) defines the relevance between query lexicons Q and an indexed source code d. It computes the score locally per lexical pattern category and sums both scores into its final score. BMK and BMI stand for BM25 for the keyword-like and identifier-like categories respectively, whose equations are similar to standard BM25 except that they only consider lexicons with the specific lexical pattern (either keyword-like or identifier-like):

score(Q,d) = sum over q in Q of ( a * BMK(q,d) + b * BMI(q,d) ) (5)

In Eq. (5), a and b are weighting constants which represent the impact of each lexical pattern category on the final score: a represents the keyword-like lexicon weighting whereas b represents the identifier-like one. This weighting mechanism is based on our informal observation that users typically expect the retrieved results to contain the given query in the same lexical pattern category. For instance, if users provide an identifier as a query, they typically expect the retrieved results to contain that query as an identifier, not a keyword. Thus, by incorporating weighting constants, our retrieval model can enhance the impact of the desired lexical pattern category. However, to provide a simple interaction, a and b are set automatically according to the given query lexicon category: if the given query is detected as a keyword-like lexicon, then a is assigned a higher value than b; otherwise, b is assigned higher than a. The constant values that are assigned to a and b are defined statically beforehand and named x and y, where x is the weighting constant for the preferred category and y for the non-preferred category. Given that programming language keywords are less discriminative than other lexicons, our scoring function minimizes their impact by computing BMI and BMK locally per file extension. In this manner, programming language keywords generate low scores since they occur frequently within a particular file extension. It is important to note that such a reduction would not be achieved if BMI and BMK were computed globally: not all programming languages share similar keywords, and some of them even incorporate unique terms as keywords. Since source codes are frequently used either as standalone files or as projects, our retrieval model generates a scoring function for each representation. For retrieving standalone files (i.e., single-file source codes), we use the scoring function defined in Eq. (5). For retrieving projects, a post-processing mechanism is applied to the retrieved results: all retrieved single-file source codes from the same project are merged and considered as one document, whose retrieval score is the sum of the scores of those codes.

Document Enrichment

Document enrichment is conducted based on the assumption that lexicons in similar documents (i.e., source codes in our case) should be related to each other. This mechanism is implemented by replacing the initial retrieval score with the weighted retrieval score of the most similar code.
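Before detailing the enrichment score, here is a hedged Python sketch of the category-split scoring in Eq. (5). The BM25 internals use common default constants (the paper does not state its values), and the per-extension IDF is represented by the caller passing only documents of the matching extension:

```python
import math

K1, B = 1.2, 0.75  # common BM25 defaults; assumed, not taken from the paper

def bm25_term(term, doc, docs, avgdl):
    """Standard Okapi BM25 contribution of one term. 'docs' should contain
    only documents sharing the query document's file extension, so the IDF
    is computed locally per extension as described above."""
    df = sum(1 for d in docs if term in d)
    idf = math.log((len(docs) - df + 0.5) / (df + 0.5) + 1.0)
    tf = doc.count(term)
    return idf * tf * (K1 + 1) / (tf + K1 * (1 - B + B * len(doc) / avgdl))

def score(query, doc, docs, x=2.0, y=1.0):
    """Eq. (5): per-term sum of a*BMK + b*BMI. Each document is a dict with
    one token list per category; 'query' is a list of (lexicon, category)
    pairs, so a and b are set from the query term's own category."""
    total = 0.0
    for cat in ("keyword", "identifier"):
        cat_docs = [d[cat] for d in docs]
        avgdl = max(1.0, sum(len(d) for d in cat_docs) / len(cat_docs))
        for term, term_cat in query:
            weight = x if term_cat == cat else y   # preferred category gets x
            total += weight * bm25_term(term, doc[cat], cat_docs, avgdl)
    return total
```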
The weighted retrieval score is calculated with Eq. (6), where d2 stands for the replacement source code of d1, score(d2) stands for d2's retrieval score for the given query, and sim(d1,d2) stands for the similarity degree between d1 and d2. In this equation, the retrieval score of the replacing source code is multiplied by its similarity degree so that the enriched source code (d1) is assured to yield a lower retrieval score than its enricher (d2), resulting in a lower rank for d1 when a query is naturally relevant to d2.

weighted_score(d1,d2) = score(d2) * sim(d1,d2) (6)

The similarity degree is calculated using the standard token-based approach for detecting source code plagiarism (Prechelt, et al., 2002). It converts source code into lexicons and calculates their similarity with the Running Karp-Rabin Greedy String Tiling algorithm (RKGST) (Wise, 1993). However, this approach is extended in our work by incorporating language-agnostic tokenization and lexicon weights. On the one hand, language-agnostic tokenization is incorporated to extract all lexicons without relying on a particular lexer; it is implemented with our proposed source code tokenization. On the other hand, lexicon weights are incorporated to heuristically enhance the impact of all identifiers. This mechanism is incorporated since our similarity measurement prefers identifier subsequences to keyword subsequences for determining source code similarity. As we know, two source codes with similar keyword subsequences may not share a similar intention. For example, assume there are two source codes: one for sorting numbers and one for accessing a matrix. Even though both source codes share a similar nested-traversal keyword subsequence (i.e., a subsequence of a two-level-nested traversal), their intentions are extremely different. Lexicon weighting is applied by assigning all identifier-like lexicons a weight of 1 and all keyword-like lexicons their respective inverse document frequency, calculated locally per file extension. In this manner, programming language keywords, which have a high frequency within a particular file extension, are assigned lower weights than identifiers. The detail of our weighted RKGST can be seen in Eq. (7), which is based on the average similarity form of RKGST. A and B are the compared lexicon sequences; Tiles is the RKGST output, representing similar lexicon subsequences from both sequences; and weight(T), weight(A), and weight(B) are the total weights of the lexicons in T, A, and B respectively. In our similarity measurement, two lexicons are only considered similar iff both the lexicon and its weight are identical. This mechanism is implemented to differentiate keywords between programming languages: as mentioned before, each keyword is assigned its respective local IDF, so even if a keyword is used in two or more programming languages, it will not be considered similar across them, since each programming language generates a different IDF for the keyword. In other words, a keyword subsequence is only considered when two source codes are written in the same programming language. Furthermore, this mechanism is also used to differentiate keywords which might be considered identifiers in other programming languages: by assigning a unique weight to each keyword per programming language, such a keyword will not be considered similar to an identifier in another programming language.
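A minimal Python sketch of the enrichment step, assuming a precomputed replacement list (built at indexing time, as described later) and treating "considered irrelevant" as scoring at or below a threshold — the text does not spell out that criterion:

```python
def weighted_score(enricher_score, similarity):
    """Eq. (6): the enriched code inherits its most similar code's score,
    discounted by the similarity degree, so the enricher always ranks higher."""
    return enricher_score * similarity

def apply_enrichment(scores, replacement, threshold=0.0):
    """scores: doc_id -> retrieval score for the current query.
    replacement: doc_id -> (most_similar_doc_id, similarity degree), where
    the similarity is the weighted RKGST value of Eq. (7) below."""
    enriched = dict(scores)
    for doc_id, s in scores.items():
        if s <= threshold and doc_id in replacement:   # deemed irrelevant
            other, sim = replacement[doc_id]
            enriched[doc_id] = weighted_score(scores.get(other, 0.0), sim)
    return enriched
```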
sim(A,B) = ( 2 * sum over T in Tiles of weight(T) ) / ( weight(A) + weight(B) ) (7)

Source Code Indexing

In general, our indexing scheme stores four major source code components: source code lexicons, the source code file extension, a replacement list, and a project member list. All components are required for retrieving source code. First, source code lexicons are used for calculating the retrieval score of each indexed source code. Second, the source code file extension is used for determining the IDF of keyword-like lexicons. Third, the replacement list is used for conducting document enrichment: it stores the most similar source code for each indexed source code, including their similarity degree. In this manner, our document enrichment does not need to recalculate similarity degrees at retrieval time; it only needs to access the stored similarity degree in the replacement list. Last, the project member list is used to accommodate project retrieval: it lists all available projects with their members so that the post-processing in our retrieval phase can be conducted just by accessing this list. Projects are recognized based on two regexes, namely a directory regex and a key file regex. On the one hand, the directory regex recognizes a project based on a directory name pattern. This mechanism is aimed at accommodating manually-created projects, such as a Java-based project developed with a standard text editor. On the other hand, the key file regex recognizes a project based on the name pattern of a key file, an artificial file that is automatically generated by an IDE to recognize its own project (e.g., the ".vsproj" file in a C++ project developed with Visual Studio). This mechanism is aimed at accommodating IDE-generated projects by considering all source codes at the same or deeper directory level as the detected key file to be project members.

EVALUATION

Due to the unique characteristics of our approach, our evaluation is conducted on a controlled dataset so that the approach's impact can be measured more precisely. Our evaluation dataset is collected from source codes given in various programming books and tutorial websites. These resources are assumed to provide well-written source codes since all of them are used to learn programming on a particular topic. The detail of our datasets can be seen in Table 1. In general, our datasets are focused on two major topics, Algorithm & Data Structure and Design Pattern, wherein each topic is taken from three data sources, implemented in one or more programming languages, and represented as standalone files or projects. Our dataset consists of 729 source code files, where 221 of them are single-file source codes and the rest (508 files) are from 30 source code projects. For each source code, queries are generated based on keyphrases found in the most descriptive paragraph of the respective source. Keyphrases are detected in two steps: first, keyphrase candidates are extracted automatically through a keyphrase candidate selection heuristic (Karnalim, 2016c); afterwards, the keyphrases are manually filtered by the author to maintain query relevance to the given source code, discarding any queries unrelated to it. Besides queries from keyphrases, our work also incorporates source code file or project names as queries since these names are frequently used for retrieving source code.
As a result, our dataset yields 1066 queries: 815 queries are extracted from 187 paragraphs; 221 queries are extracted from single-file source code filenames; and 30 queries are extracted from source code project names. In order to generate more comprehensive results, our evaluation is measured not only on the whole dataset but also on the sub-datasets given in Table 1. Therefore, seven datasets are taken into consideration: six of them are sub-datasets from Table 1 and one is the merged form of these datasets. For brevity, these datasets are referred to as S1, S2, S3, S4, S5, S6, and SM respectively, where SM stands for the merged dataset. In addition, since source codes are not always featured with comments, our evaluation also incorporates comment-excluded datasets. These datasets are generated by replicating the S1-SM datasets and removing all of their comments; they are referred to as CES1, CES2, CES3, CES4, CES5, CES6, and CESM respectively, where "CE" stands for "Comment-Excluded". As a result, our evaluation is conducted on 14 datasets in total: half of them are raw datasets whereas the others are comment-excluded datasets.

In terms of evaluation metric, our evaluation relies on F-measure since it is frequently applied as a standard IR effectiveness measurement (Croft, et al., 2010). However, standard precision in the F-measure is replaced with Mean Average Precision (MAP) so that the proposed F-measure becomes more sensitive to the rank of retrieved documents.

Evaluating the Impact of Source Code Tokenization and Retrieval Model

This evaluation is intended to measure the impact of our proposed source code tokenization and retrieval model. To do so, two evaluation scenarios are proposed, namely Retrieval-Model-Only (RMO) and Naive Approach (NA). On the one hand, RMO refers to our proposed approach without query expansion and document enrichment: it only consists of source code tokenization and the retrieval model, where the retrieval model itself is parameterized with x=2 and y=1 as weighting constants. On the other hand, NA refers to a retrieval scheme which treats source codes as natural language text and retrieves its results based on standard BM25. The F-measure of both scenarios on our evaluation datasets can be seen in Figure 3. In general, RMO outperforms NA in all cases, although its improvement varies. Thus, it is clear that RMO is more effective than NA for retrieving source code, at least on our evaluation dataset. In addition, assuming that SM and CESM represent real-world datasets since they consist of numerous source codes with various characteristics, it can also be stated that such improvement might occur in real-world cases.

Figure 3. F-measure between RMO and NA

As seen in Figure 3, RMO generates less impact on datasets with long or intermediate comments (S1, S2, and S3). After further observation, this phenomenon is natural for two reasons. First, in these datasets, most queries can be explicitly found in source code comments, and such queries can be easily extracted by standard IR tokenization. Hence, the impact of the proposed source code tokenization is unavailing when compared to the standard IR approach.
Second, since most comment terms are written in keyword-like fashion, most of them fall into the keyword-like lexicon category, resulting in an imbalanced proportion between keyword-like and identifier-like lexicons. Such an imbalance might reduce the impact of the proposed tokenization, which considers the distinction between keyword and identifier lexicons. When applied to datasets with short or no comments, our approach yields a significant improvement: 6.967% average improvement on the short-comment datasets (S4, S5, and S6) and 7.355% average improvement on the comment-excluded datasets (CES1-CESM). According to these findings, it is clear that the impact of our source code tokenization and retrieval model is inversely proportional to the number of comments: they are more effective when the number of comments in the indexed source codes is reduced.

Evaluating Retrieval Parameters

The retrieval parameters (x and y) are incorporated to reweight the retrieval score based on lexical pattern category. The larger the difference between these parameters, the higher the retrieval score for the preferred lexical category relative to the non-preferred one. This evaluation aims to find the best values so far for those parameters. For evaluation purposes, 10 constant pairs are assigned to those parameters, resulting in 10 scenarios. x is assigned an integer value from 1 to 10 for each scenario respectively, while y is assigned 1 for all scenarios. For clarity, each scenario is referred to by two integer values separated by a hyphen, where the first value refers to x and the second to y. For example, 3-1 refers to an evaluation scenario with x=3 and y=1. In order to provide more accurate results, each scenario is conducted without the query expansion and document enrichment mechanisms. Each scenario is evaluated on 4 datasets: SM, CESM, SM with only identifier-like queries (SMIQ), and CESM with only identifier-like queries (CESMIQ). The last two datasets are used to evaluate the impact of the retrieval parameters in cases where users expect the retrieved results to have a lexical pattern category similar to the given query. The identifier-like category is preferred to the keyword one since, based on our informal survey, it is more frequently used in real cases. The evaluation results for these scenarios can be seen in Figure 4. MAP is displayed instead of F-measure since modifying the retrieval parameters only affects MAP, not the whole F-measure: it only changes the position of retrieved source codes without changing the members of the retrieved results.

Figure 4. MAP Toward Various Retrieval Parameter Constants

As seen in Figure 4, the optimal scenario for each dataset varies: 1-1 for the SM dataset, 2-1 for the CESM dataset, 2-1 for the SMIQ dataset, and 4-1 for the CESMIQ dataset. Several findings can be deduced from this phenomenon. First, the weighting mechanism is unavailing for source codes with long comments. This follows from the fact that the highest MAP for SM, a dataset where half of the source codes have long comments, is generated by the 1-1 scenario, which does not favor the preferred category at all. Second, the weighting mechanism only affects the rank of retrieved results when the number of terms for each category is balanced.
This follows from the fact that the highest MAP for CESM, a dataset where the number of terms per category is more balanced than in SM, is generated by the 2-1 scenario, which weights the preferred category twice as high as the non-preferred one. Third, the impact of the weighting mechanism grows when queries are limited to a particular category. This follows from the fact that the optimal scenarios for both SMIQ and CESMIQ, which only consider identifier-like queries, yield a higher x than the optimal scenarios for their respective original datasets (SM and CESM).

Evaluating the Impact of Query Expansion and Document Enrichment

This evaluation measures the impact of the proposed query expansion and document enrichment on source code retrieval effectiveness. Four scenarios, generated from the possible combinations of query expansion and document enrichment, are proposed and evaluated on each evaluation dataset: Retrieval-Model-Only (RMO), Retrieval-And-Query-Expansion (RAQE), Retrieval-And-Document-Enrichment (RADE), and Combined-Form (CF). RMO refers to our baseline scheme, which only incorporates source code tokenization and the retrieval model with x=2 and y=1; RAQE refers to RMO with query expansion; RADE refers to RMO with document enrichment; and CF refers to RMO with both query expansion and document enrichment. For evaluation purposes, the number of candidates for query expansion is limited to 20, a number that is frequently used as a candidate threshold for query expansion (Carpineto & Romano, 2012). The impact of our proposed features can be seen in Figure 5. For a more intuitive display, Figure 5 only shows the delta F-measure between each scheme and RMO. In general, both features are quite promising since they yield positive delta improvement on most datasets. RAQE yields a 2.640% average improvement whereas RADE yields 0.538%. When combined (the CF scenario), both features enhance retrieval effectiveness by 2.649%, which is higher than the RAQE or RADE improvement alone. Thus, it can be stated that both features complement each other in terms of enhancing effectiveness. Query expansion yields a significant impact for our proposed approach since it reformulates the given query into a more detailed form by incorporating several additional query terms. These additional query terms can either strengthen the impact of already-retrieved relevant documents or retrieve more relevant documents. Based on our manual observation of our dataset, it is clear that, in our case, query expansion is more inclined toward retrieving more relevant documents. This finding is consistent with query expansion behavior in the natural language domain (Carpineto & Romano, 2012). Hence, it can be stated that query expansion in both the natural language and source code domains yields a similar impact: it tends to retrieve more relevant documents. Document enrichment only yields a low improvement since this mechanism assures that enriched source codes are always ranked lower than their respective enrichers. Thus, even though many relevant source codes are retrieved using this approach, most of them are placed at the end of the result list due to their low scores. In addition, since this mechanism might also enlarge false positives, it may lower the F-measure at some points, especially when the number of irrelevant documents is large.
An example of this phenomenon can be seen in the CES5 dataset, where RADE yields a negative result due to its large number of irrelevant documents.

Threats to Validity

In general, there are two threats to validity which should be considered regarding the results of this work. First, our evaluation datasets only represent a small number of programming languages: they involve only five programming languages, namely Java, C, C++, C#, and Python. Therefore, the results cannot be generalized to all programming languages and might change when more programming languages are incorporated. Second, the generated queries in our dataset are limited to keyphrases found in the referenced texts. Thus, the results cannot be generalized to all kinds of queries, including human-generated queries.

CONCLUSION

In this paper, a language-agnostic source code retrieval approach, which relies on the Keyword & Identifier lexical pattern, has been developed. Using this approach, new programming languages can be incorporated automatically since no programming-language-centric features are used. Four components are proposed as our major contribution: source code tokenization, retrieval model, query expansion, and document enrichment. According to our evaluation, these components are effective for retrieving relevant source codes agnostically, even though the improvement for each component varies. For future work, we intend to incorporate a large-scale dataset such as the GitHub Java Corpus (Allamanis & Sutton, 2013) for further evaluation. In addition, we also intend to measure the impact of standard IR text processing, such as stemming and stopping, on our language-agnostic source code domain.
9,779
2018-02-28T00:00:00.000
[ "Computer Science" ]
A Demonstration of Modern Bayesian Methods for Assessing System Reliability with Multilevel Data and for Allocating Resources

Good estimates of the reliability of a system make use of test data and expert knowledge at all available levels. Furthermore, by integrating all these information sources, one can determine how best to allocate scarce testing resources to reduce uncertainty. Both of these goals are facilitated by modern Bayesian computational methods. We demonstrate these tools using examples that were previously solvable only through the use of ingenious approximations, and employ genetic algorithms to guide resource allocation.

Introduction

Assessing the reliability of systems represented by reliability block diagrams remains important. Take, for example, U.S. military weapon systems and nuclear power plants. In making these assessments, often there are information and data available at all levels of these systems, whether at the component, subsystem, or system level. For example, there may be data from component and subsystem tests as well as expensive full-system tests. In this paper, we are concerned with assessing the reliability of a system by combining all available information and data at whatever level they are available; here we consider the case where we have success/failure test data. Much of the reliability literature ([1-6]) predates the advances made in Bayesian computation in the 1990s and resorts to various approximations. However, today a fully Bayesian method using the framework in [7], which simultaneously combines all available multilevel data and information, can be implemented using Markov chain Monte Carlo (MCMC). In this paper, we employ such modern Bayesian methods to make reliability assessments.

In the next section, we introduce the statistical model that combines all available multilevel data and briefly present MCMC for analyzing such data. Then, we illustrate this methodology by making reliability assessments for an air-to-air heat-seeking missile system and a low-pressure coolant injection system in a nuclear power plant, first considered by [5, 6], respectively.

Once multilevel data and information can be analyzed, the question arises of what additional tests should be done when new funding becomes available. That is, what tests will reduce the system reliability uncertainty the most? In this paper, we show how a genetic algorithm using a preposterior-based criterion can address this resource allocation question. Reference [8] considered resource allocation for a two-component series system; here, we illustrate resource allocation with a more complex series-parallel system.

A Model for Combining Multilevel Data

To combine multilevel data for system reliability assessment, we use the framework in [7]. We introduce the framework's notation and models by considering the reliability block diagram of a series-parallel system given in Figure 1. First, components, subsystems, and the system are referred to as nodes. In this example, the system is node 0, which consists of two subsystems (nodes 1 and 2) in series. The first subsystem consists of two components in parallel (nodes 3 and 4) and the second subsystem consists of three components in series (nodes 5, 6, and 7).
We begin by considering the binomial data model when data are available at a node. At the ith node, there are x_i successes in n_i trials with probability of success (reliability) pi_i. If node i is a subsystem or the full system (i.e., not a component), then pi_i is expressed in terms of the component reliabilities. For the series-parallel system, the subsystem reliabilities are expressed as pi_1 = 1 - (1 - pi_3)(1 - pi_4) and pi_2 = pi_5 pi_6 pi_7, and the system reliability is expressed as pi_0 = pi_1 pi_2. In general, let C be the subset of nodes which are components, and let pi_C = {pi_i : i in C}; then, for i not in C, pi_i = h_i(pi_C) for some function h_i.

Next, we consider prior distributions for node reliabilities. For components, we use beta prior distributions in terms of an estimated reliability p_i and a precision nu_i, which acts like an effective sample size. That is, if the ith node is a component, then pi_i ~ Beta(nu_i p_i, nu_i (1 - p_i)). We also allow the possibility that information (expert knowledge) is available on the reliabilities of subsystems and/or the full system; we assume that this information is independent of the test data and of any information used to build the prior distributions for the component reliabilities. (Frequently, we will not use any such information: in particular, expert opinion about upper-level nodes will often be based on the same information that led to the prior distributions for component reliabilities. This information should not be used twice, so a simple solution is to exclude the upper-level expert opinion.) Assume that the information takes the form of an estimated reliability p_i and a precision nu_i. We then express the information contribution, including the x_i successful tests in n_i trials, from the ith subsystem or system as a term proportional to pi_i^(nu_i p_i + x_i) (1 - pi_i)^(nu_i (1 - p_i) + n_i - x_i). As discussed above, the subsystem or system reliability pi_i is expressed in terms of the component reliabilities as h_i(pi_C). In effect, we have treated this information as if it were derived from binomial data instead of as a beta distribution; the difference involves a change in the exponents of pi_i and (1 - pi_i) by one. One effect of this treatment is to ensure that the posterior distribution of pi_C is well defined. We can define e_i to be the indicator that node i is a component (i.e., e_i = 1 if node i is a component, and 0 otherwise), in which case the information contribution from the ith node is proportional to pi_i^(nu_i p_i + x_i - e_i) (1 - pi_i)^(nu_i (1 - p_i) + n_i - x_i - e_i), regardless of whether node i is a component. If no information at the ith node is available beyond binomial tests, then nu_i = 0, although nu_i > 0 should be used for components to ensure a proper prior. In the remainder of this paper, when we refer to the prior distribution, we mean the distribution that arises from combining the component beta distributions with the upper-level expert knowledge. This is in fact a posterior distribution if there is nonzero expert knowledge, and in this case the components no longer have independent "prior" distributions.

A variety of models might be employed for the nu_i. The nu_i might be treated as constants when they are really thought to be effective sample sizes. On the other hand, they might be described by a distribution, such as nu_i ~ Gamma(a_nu, b_nu). This allows expert knowledge to be downweighted if it is inconsistent with the data. Now consider the data and prior information for the series-parallel system given in Table 1. Note that no precisions nu_i are provided, so a prior distribution needs to be specified. For illustration, we use the same precision for every node, that is, nu_i = nu, and take the prior distribution for nu to be nu ~ Gamma(a_nu = 5, b_nu = 1).
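As a small illustration of the structure functions h_i for this series-parallel system, here is a Python sketch (the component reliabilities in the example call are made-up values, not the entries of Table 1):

```python
# Structure functions for the Figure 1 system:
# node 1 = parallel(3, 4), node 2 = series(5, 6, 7), node 0 = series(1, 2).
def h1(p3, p4):
    return 1.0 - (1.0 - p3) * (1.0 - p4)   # parallel: fails only if both fail

def h2(p5, p6, p7):
    return p5 * p6 * p7                     # series: all three must work

def h0(pi_C):
    """System reliability as a function of the component reliabilities."""
    return h1(pi_C[3], pi_C[4]) * h2(pi_C[5], pi_C[6], pi_C[7])

print(h0({3: 0.90, 4: 0.90, 5: 0.95, 6: 0.95, 7: 0.95}))  # illustrative values
```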
That is, we believe that the expert information on average is worth five Bernoulli observations. To combine the data with the expert knowledge represented as above, we use Bayes' theorem,

f(θ | y) = f(y | θ) f(θ) / f(y) ∝ f(y | θ) f(θ),

where θ is the parameter vector (i.e., the component reliabilities π_C and any other unknown parameters), y is the data vector, f(θ) is the prior probability density function, and f(y | θ) is the data probability density function (i.e., the binomial probability mass function for binomial data), which, viewed as a function of the parameter vector with the data held fixed, is known as the likelihood. The result of combining the data with expert knowledge is f(θ | y), which is known as the posterior distribution. Since the 1990s, advances in Bayesian computing through Markov chain Monte Carlo or MCMC have made it possible to sample from the posterior distribution [9]. Next, we discuss how the Metropolis-Hastings algorithm [10] can be used to obtain draws or samples from the parameter posterior distribution.

A fully Bayesian analysis of the model described above, which simultaneously combines all available multilevel data and information, is nontrivial. The posterior distribution is not analytically tractable: up to a normalizing constant, it is

f(π_C, ν | y) ∝ [ ∏_i π_i^(ν_i p_i + x_i − e_i) (1 − π_i)^(ν_i (1 − p_i) + n_i − x_i − e_i) ] f(ν),

where the product runs over all nodes and f(ν) is the Gamma(a_ν, b_ν) density. This looks superficially like a beta distribution, but it is not so simple because of the functional relationships between the π_i; that is, for the subsystems and system, π_i = h_i(π_C). Consequently, a Bayesian analysis requires an implementation of an MCMC algorithm such as Metropolis-Hastings; see, for example, [10]. We use a variable-at-a-time Metropolis-Hastings algorithm as follows. The algorithm loops through all the unknown parameters π_i (i ∈ C) and ν, proposing changes to one parameter at a time and either accepting or rejecting changes according to the Metropolis-Hastings rule.
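As a sketch of what the MCMC must evaluate, the following Python function computes this unnormalized log posterior under the stated assumptions (a common precision ν wherever expert knowledge is present, and a Gamma(a_ν, b_ν) prior on ν). The `nodes` bookkeeping structure is our own invention, not the paper's or YADAS's API.

```python
import math

def log_posterior(pi_c, nu, nodes, a_nu=5.0, b_nu=1.0):
    """Unnormalized log posterior for the multilevel binomial model.

    pi_c  : dict mapping component node id -> reliability in (0, 1)
    nu    : common precision for the expert-knowledge terms
    nodes : list of dicts with keys
            'x', 'n'        -- successes / trials (0/0 if no test data)
            'p'             -- expert reliability estimate, or None
            'h'             -- function of pi_c giving this node's reliability
            'is_component'  -- bool (components should supply 'p' so that
                               their beta prior stays proper)
    """
    lp = 0.0
    for node in nodes:
        pi = node["h"](pi_c)
        if pi <= 0.0 or pi >= 1.0:
            return -math.inf
        e = 1.0 if node["is_component"] else 0.0
        nu_i = nu if node["p"] is not None else 0.0
        p = node["p"] if node["p"] is not None else 0.0
        # pi^(nu_i p + x - e) * (1-pi)^(nu_i (1-p) + n - x - e), on the log scale
        lp += (nu_i * p + node["x"] - e) * math.log(pi)
        lp += (nu_i * (1.0 - p) + node["n"] - node["x"] - e) * math.log(1.0 - pi)
    # Gamma(a_nu, b_nu) prior on nu, up to an additive constant
    lp += (a_nu - 1.0) * math.log(nu) - b_nu * nu
    return lp
```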
We update the π_i on the logit scale: suppose we are at the stage in one iteration of the algorithm where we are updating π_i (for some i ∈ C). Propose a new value π_i′ according to

logit(π_i′) = logit(π_i) + s_i ε, with ε a standard normal draw,

where the s_i > 0 are tunable constants. Accept the value π_i′ with probability

min{1, [f(π_C′, ν | y) π_i′(1 − π_i′)] / [f(π_C, ν | y) π_i(1 − π_i)]},

where π_C′ is equal to π_C except with its ith node reliability replaced by π_i′, and the factors π(1 − π) arise from the change of variables to the logit scale. If the move is accepted, change the current value of the parameter to π_i′; otherwise its value continues to be π_i. After all the π_i for i ∈ C have been updated in this way, we update ν on the log scale; this proceeds similarly except that the proposed new values satisfy

log(ν′) = log(ν) + s_ν ε,

so that these proposed new values are accepted with probability

min{1, [f(π_C, ν′ | y) ν′] / [f(π_C, ν | y) ν]}.

After a complete iteration (after attempts to move each of the π_i for i ∈ C and also ν), record the current values of all the parameters; this is treated as one sample from the posterior distribution. In practice, the first several iterations are discarded as part of a "burn-in" period. Choosing good values of the s_i is not difficult: in particular, the YADAS software system [11][12][13] has a method to tune these automatically in the burn-in period. This method consists of running an experiment with a wide range of s_i values, modeling the acceptance rates of the proposed moves using logistic regression with log(s_i) as a predictor, and choosing s_i so that the logistic regression model predicts an acceptance rate close to a target value such as 0.35.

The same MCMC algorithm just described for making draws from the joint posterior distribution can be used for making draws from the joint prior distribution, where π_i for a subsystem or system is a function of π_C. Draws for the subsystem and system reliabilities are obtained by evaluating the appropriate functions with the π_C draws. The resulting prior distributions for the node reliabilities and ν are displayed as dashed lines in Figures 2 and 3, respectively. In assessing the system reliability for the series-parallel system of Figure 1, we combine the node data with the prior distributions using MCMC as just described, which results in the posterior distributions displayed as solid lines in Figures 2 and 3. From these results, the 90% (central) credible interval for the system (node 0) reliability is calculated as (0.697, 0.861), whose length is 0.164. Note that even though there are no data for the first subsystem (node 1), the system data (node 0) and the component data (nodes 3 and 4) dramatically improve what we know about the first subsystem reliability. As shown in Figure 3, the addition of the data does not change ν much, except that ν is somewhat larger than indicated by the prior distribution; that is, the data essentially confirm the prior distribution of ν.

Reliability Assessments for Two Applications

Next, we consider two substantive applications from the literature [5, 6] to demonstrate making reliability assessments with multilevel data.

Series System Example.
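A minimal sketch of one sweep of this sampler follows, assuming standard normal random-walk innovations on the logit and log scales (the proposal form is our assumption; the paper specifies only the tunable scales s_i). It reuses the `log_posterior` function from the previous sketch, passed in as `log_post`.

```python
import math
import random

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

def mh_sweep(pi_c, nu, nodes, s, s_nu, log_post):
    """One variable-at-a-time Metropolis-Hastings sweep.

    pi_c     : dict of component reliabilities
    nu       : common precision
    s, s_nu  : random-walk scales for the logit(pi_i) and log(nu) updates
    log_post : unnormalized log posterior, e.g. log_posterior() above
    The pi*(1-pi) and nu factors are the Jacobians of the transforms.
    """
    for i in list(pi_c):
        cur = pi_c[i]
        prop = inv_logit(logit(cur) + s[i] * random.gauss(0.0, 1.0))
        if prop <= 0.0 or prop >= 1.0:
            continue  # numerically degenerate proposal; reject it
        cand = dict(pi_c)
        cand[i] = prop
        log_ratio = (log_post(cand, nu, nodes) - log_post(pi_c, nu, nodes)
                     + math.log(prop * (1.0 - prop))
                     - math.log(cur * (1.0 - cur)))
        if random.random() < math.exp(min(0.0, log_ratio)):
            pi_c = cand  # accept; otherwise keep the current value
    prop_nu = nu * math.exp(s_nu * random.gauss(0.0, 1.0))
    log_ratio = (log_post(pi_c, prop_nu, nodes) - log_post(pi_c, nu, nodes)
                 + math.log(prop_nu) - math.log(nu))
    if random.random() < math.exp(min(0.0, log_ratio)):
        nu = prop_nu
    return pi_c, nu
```

Running the sweep repeatedly, recording (pi_c, nu) after each complete pass and discarding a burn-in prefix, yields the posterior samples the paper describes.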
Reference [5] considered the reliability of a certain air-to-air heat-seeking missile system consisting of five subsystems in series, each itself consisting of multiple components combined in series, as depicted in Figure 4. The data and prior information that [5] used are presented in Table 2 as (successes/trials) and estimated reliabilities p and precisions ν. Reference [5] did not provide details on how these data were obtained or how the prior information was arrived at. To compare with [5], we treat the precisions as constants and then obtain the posterior node reliabilities using YADAS [11][12][13]. The posterior node reliabilities are displayed in Figure 5 as solid lines; the results from [5] are displayed as dashed lines. The median (0.50 quantile) and 90% credible intervals (0.05, 0.95 quantiles) for the system and subsystem posterior reliabilities from the fully Bayesian and [5] methods are given in Table 3. Note that there is quite a difference for the subsystem 1 results. The difference in location is due to the fact that the approximations used in [5] do not use higher-level information (system data) to estimate lower-level parameters (such as subsystem 1 reliability). The expert judgment estimate of system reliability, 116/267 or 0.43, is lower than the data and expert judgment at the lower levels would imply, and the fully Bayesian analysis needs to attribute this unreliability to one of the subsystems. Subsystem 1, and in particular component 19, has the sparsest information and is the natural target. For this reason, the fully Bayesian analysis is more useful than the approach of [5] in evaluating the usefulness of gathering more data at low levels. In practice, one would review the information that led to the low system reliability estimate. The fully Bayesian analysis could be rerun with random ν's, and this would presumably allocate positive probability to the event that p_0 is an underestimate.

Complex Series-Parallel System Example.

Reference [6] considered the reliability of a low-pressure coolant injection system, an important safety system in a nuclear-power boiling-water reactor. It consists of twin trains of pumps, valves, heat exchangers, and piping, whose reliability block diagram is displayed in Figure 6. The data and prior information that [6] used are presented in Table 4; they consist of component data from [14] and some subsystem prior distributions. See [6] for more details. We treat the precisions as constants as in [6], and then obtain the posterior node reliabilities using YADAS ([11][12][13]). The resulting posterior reliabilities for the subsystems and system are displayed in Figure 7. Also, the summaries of the posterior reliabilities for all nodes are given in Table 5. The results in Table 5 are similar to those given in [6] although somewhat smaller; for example, the (0.05, 0.5, 0.95) quantiles for nodes 0-2 from [6] are (0.999968, 0.9999940, 0.99999975), (0.9925, 0.9974, 0.99944), and (0.9926, 0.9974, 0.99948), respectively.
Resource Allocation

In Section 2, we showed how to analyze multilevel data to assess system reliability. In this section we address test design: when additional funding becomes available, the question arises of where tests should be done and how many should be performed to improve the system reliability assessment. We consider the optimal allocation of additional testing within a fixed budget that results in the least uncertainty about system reliability. We explore this using the series-parallel system in Figure 1. We must determine how many tests should be performed at the system, subsystem, and component levels (i.e., nodes 0-7) under a fixed budget for specified costs at each level (system, subsystem, component). In this paper, we use a genetic algorithm (GA) [16, 17] to do the optimization because it is simple to implement and generally provides good results, but other optimization methods like particle swarms [18] could easily be used instead. We assume that there is a cost for collecting additional data, with higher-level data being more costly than lower-level data. Consider the following costs as an example of the costs for testing at each node. Recall that node 0 is the system, nodes 1 and 2 are subsystems, and nodes 3-7 are components:

(0 : $5), (1 : $2), (2 : $3), (3 : $1), (4 : $1), (5 : $1), (6 : $1), (7 : $1).

We evaluate a candidate allocation (i.e., a specified number of tests for each of the eight nodes) using a preposterior-based criterion as follows. We take a draw from the current joint posterior distribution (based on the current data) of the node reliabilities and draw binomial data according to the candidate allocation. Then we combine these new data with the current data using the same prior distributions to obtain an updated posterior distribution of the node reliabilities; again we use MCMC to obtain N_p draws from this updated posterior distribution. The length of the 90% central credible interval of the system reliability posterior distribution is taken as a measure of uncertainty. This is repeated N_d times, each with a different draw from the current joint posterior distribution of the node reliabilities. The uncertainty criterion is then calculated as the 0.90 quantile of the resulting 90% credible interval lengths.
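In pseudocode terms, the criterion can be sketched as below. The `fit_posterior` argument stands in for a full MCMC run (performed with YADAS in the paper) and must be supplied by the caller; the container formats are our own for illustration.

```python
import numpy as np

def uncertainty_criterion(allocation, current_draws, fit_posterior,
                          n_d=500, n_p=2000):
    """Preposterior uncertainty criterion for one candidate allocation.

    allocation    : dict node id -> number of new tests at that node
    current_draws : (draws x nodes) array of joint posterior draws of the
                    node reliabilities under the current data
    fit_posterior : callable(new_data, n_draws) -> array of system
                    reliability draws; stands in for the MCMC analysis
    """
    lengths = []
    for _ in range(n_d):
        # Draw a "truth" from the current posterior, then simulate tests.
        truth = current_draws[np.random.randint(len(current_draws))]
        new_data = {i: (int(np.random.binomial(n, truth[i])), n)
                    for i, n in allocation.items() if n > 0}
        sys_rel = fit_posterior(new_data, n_p)
        lo, hi = np.quantile(sys_rel, [0.05, 0.95])
        lengths.append(hi - lo)
    # Criterion: 0.90 quantile of the 90% credible interval lengths.
    return float(np.quantile(lengths, 0.90))
```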
Briefly, we describe how a GA can be used to find a nearly optimal allocation. A GA operates on a "population" of candidate allocations, where a candidate allocation is a vector of node test sizes. The GA begins by constructing an initial population or generation of M allocations by randomly generating allocations that do not exceed the given fixed budget. The uncertainty criterion for each allocation in the initial population is evaluated, and the allocations are ranked from smallest to largest; that is, the best allocation has the smallest criterion in the initial population. The second (and subsequent) GA generations are then populated using two genetic operations: crossover and mutation [16, 17]. A crossover is achieved by randomly selecting two parent allocations from the current generation without replacement, with probabilities inversely proportional to their rank among the M allocations in the current generation. A new allocation is generated node by node from these two selected parent allocations by randomly picking one of the two parents each time and taking its node test size. The two parent allocations are then returned to the current population before the next crossover is performed. In this way, an additional M allocations are generated using the crossover operator. The generated allocations are checked to make sure they do not exceed the budget; new allocations are generated until there are M such allocations. The uncertainty criterion is then evaluated for each of these new allocations. A mutation of each of the current M allocations is obtained node by node by first randomly deciding whether to change the node test size and, if so, randomly perturbing the current node test size. Using mutation, M additional allocations which remain within the budget are generated, and the uncertainty criterion for each is evaluated. At this point there are 3M allocations. In the next generation, the current population consists of the M best allocations (those with the smallest uncertainty criterion) from these 3M allocations. The GA is executed for G generations.

We implemented the GA for resource allocation in R [19], which generates the candidate allocations. An allocation is evaluated in R by repeatedly building YADAS [11][12][13] input data files, running the YADAS code using the reliability package (through the R "system" call) to analyze the new and current data, and reading the resulting YADAS output files back into R to calculate the uncertainty criterion. In the implementation, there are a number of issues regarding the choice of M, G, N_p, and N_d. As the population size M and number of generations G increase, more candidate allocations (i.e., M(1 + 2 × G)) are entertained, but more calculation is required. As the number of posterior draws for each generated data set N_p and the number of generated data sets N_d increase, the uncertainty criterion is better evaluated, but the calculation needed to evaluate a single candidate allocation can dramatically increase, let alone that for M(1 + 2 × G) candidate allocations. One must also realize that the nearly optimal allocation found by the GA may not be the optimal allocation if the difference between them is less than the variability of the evaluated uncertainty criterion, that is, within the simulation error of the uncertainty criterion.
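The two genetic operators are simple to express; a hedged Python sketch follows, with the mutation probability and step size as illustrative tuning choices of ours (the paper does not report its values).

```python
import random

def within_budget(alloc, costs, budget):
    # Total cost of an allocation: tests per node times cost per node.
    return sum(a * c for a, c in zip(alloc, costs)) <= budget

def crossover(parent_a, parent_b):
    # Build a child node by node, taking each node's test size
    # from one of the two parents at random.
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(alloc, p_change=0.3, max_step=20):
    # Node by node, randomly decide whether to perturb the test size.
    # p_change and max_step are illustrative values, not the paper's.
    child = list(alloc)
    for i in range(len(child)):
        if random.random() < p_change:
            child[i] = max(0, child[i] + random.randint(-max_step, max_step))
    return child
```

In the GA loop, children from either operator are kept only if `within_budget` holds, and each surviving child is scored with the preposterior criterion sketched earlier.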
One might ask if there are any general insights regarding resource allocation with assessment of system reliability in mind. If we consider testing at the same level, the component (or subsystem) with the most uncertainty will require more testing than the others. If the subsystems are connected in series, but some subsystems have components connected in series whereas other subsystems have components connected in parallel, then in terms of component testing, the parallel-configured subsystems will require less testing. This can be explained by examining the subsystem reliability expressions: a parallel subsystem's unreliability is the product of its component unreliabilities, and so is of second order and relatively insensitive to any single component, whereas a series subsystem's unreliability is approximately the sum of its component unreliabilities and so is of first order; the sketch below makes this sensitivity argument concrete for the Figure 1 system. The allocation will also depend on the testing costs relative to the amount of uncertainty reduction provided. For a series-configured subsystem, if the subsystem cost exceeds the sum of the component costs, then performing component tests will be recommended; if the subsystem cost is less than the sum of the component costs, then performing some subsystem tests may be recommended if they provide relatively more information. For complicated systems with many subsystems and components whose costs all differ, it will be difficult to choose an optimal allocation with these rules of thumb. However, the proposed methodology balances all these costs and information across the entire system in finding a nearly optimal allocation.

Next, we illustrate the GA for the resource allocation problem described above for the series-parallel system depicted in Figure 1 for a fixed budget of $1000. The length of the 90% credible interval of system reliability based on the existing data is 0.164. We use populations of size M = 20 and G = 50 generations, so that 2020 (= 20(1 + 2 × 50)) candidate allocations were generated and evaluated. To evaluate the uncertainty criterion, we generated N_p = 2000 posterior draws per data analysis and generated N_d = 500 data sets corresponding to posterior draws based on the existing data. For this situation, what allocation yields the most reduction in the uncertainty criterion for system reliability?
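Picking up the sensitivity argument from the rules of thumb above, a short worked calculation for the Figure 1 system (our example, not reproduced from the paper) reads:

```latex
% Sensitivity of subsystem reliability to a single component,
% for the Figure 1 series-parallel system.
\[
\pi_1 = 1-(1-\pi_3)(1-\pi_4)
\;\Rightarrow\;
\frac{\partial \pi_1}{\partial \pi_3} = 1-\pi_4 ,
\qquad
\pi_2 = \pi_5\pi_6\pi_7
\;\Rightarrow\;
\frac{\partial \pi_2}{\partial \pi_5} = \pi_6\pi_7 .
\]
% When components are reliable (pi_j near 1), the parallel sensitivity
% 1 - pi_4 is near 0 while the series sensitivity pi_6 pi_7 is near 1,
% so tests on parallel-configured components reduce system uncertainty less.
```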
Based on the proposed methodology described above, the GA produced the traces presented in Figures 8 and 9, which display the best uncertainty criterion and allocation found during each generation. The uncertainty criterion drops to 0.0804 for the initial population and decreases to 0.0725 by generation 50, with an allocation of test sizes (0, 0, 175, 0, 0, 208, 137, 128) for nodes 0-7. We evaluated this allocation with N_p = 50000 and N_d = 100000 and obtained uncertainty criterion values of 0.073358 and 0.073363, so we take the uncertainty criterion for this allocation to be 0.0734. These results suggest that there is enough data for node 1 (the two-component parallel subsystem) and that the cost structure prohibits additional system tests (the system cost equals the sum of the subsystem costs, which in turn equals the sum of the component costs). Because the node 2 subsystem cost equals the sum of its component costs, we tried an allocation which proportionally allocated the subsystem tests to its components (i.e., splitting up 175 × 3 = 525 by the proportion (208/473, 137/473, 128/473) found by the GA), giving the allocation (0, 0, 0, 0, 0, 439, 289, 270). Evaluating this allocation again with N_p = 50000 and N_d = 100000 gave uncertainty criterion values of 0.071439 and 0.071426, which we round to 0.0714. Consequently, there is some improvement from doing all component tests for the node 2 subsystem.

Discussion

For relatively complex systems, we have illustrated how to respond to the challenge of integrating all information available at the various levels of a system in order to estimate its reliability. Bayesian models have always been natural for doing this integration, and the computational tools have now caught up to make this practical. Moreover, because we are able to analyze such data, we can now consider the problem of allocating additional resources that best reduce the uncertainty in the system reliability assessment.

We have discussed the case of binomial test data only, for systems represented by reliability block diagrams. Reference [20] showed how binomial data can be analyzed for problems using fault tree representations. Component and subsystem tests may generate continuous data such as lifetimes, and their distributions may depend on covariates such as different suppliers. Reference [21] presented an example of such an analysis. However, the problem of resource allocation for nonbinomial test data is a topic for future research.

Figure 6: Complex series-parallel system example reliability block diagram.
Figure 7: Plot of complex series-parallel system example reliability posteriors for subsystems and system.
Figure 9: GA evolution of resource allocation. Nodes 2 and 5-7 test sizes are identified.
Table 1: Data for series-parallel system.
Table 2: Data for series system example.
Table 4: Data for complex series-parallel system example.
Targeting local lymphatics to ameliorate heterotopic ossification via FGFR3-BMPR1a pathway

Acquired heterotopic ossification (HO) is extraskeletal bone formation after trauma. Various mesenchymal progenitors are reported to participate in ectopic bone formation. Here we induce acquired HO in mice by Achilles tenotomy and observe that conditional knockout (cKO) of fibroblast growth factor receptor 3 (FGFR3) in Col2+ cells promotes acquired HO development. Lineage tracing studies reveal that Col2+ cells adopt the fate of lymphatic endothelial cells (LECs) instead of chondrocytes or osteoblasts during HO development. FGFR3 cKO in Prox1+ LECs causes even more aggravated HO formation. We further demonstrate that FGFR3 deficiency in LECs leads to decreased local lymphatic formation in a BMPR1a-pSmad1/5-dependent manner, which exacerbates inflammatory levels in the repaired tendon. Local administration of FGF9 in Matrigel inhibits heterotopic bone formation, in a manner dependent on FGFR3 expression in LECs. Here we uncover Col2+ lineage cells as an origin of lymphatic endothelium, which regulates the local inflammatory microenvironment after trauma and thus influences HO development via the FGFR3-BMPR1a pathway. Activation of FGFR3 in LECs may be a therapeutic strategy to inhibit acquired HO formation by increasing local lymphangiogenesis.

6. The authors report that FGFR3 is involved in lymphatic migration and proliferation. However, the data do not exclude that FGFR3 is also involved in the differentiation from COL2-positive mesenchymal cells.

In supplementary Fig. 4, the authors claim that the reporter labeling indicates a high degree of lymphatic Fgfr3 deletion. However, reporter activity does not necessarily reflect gene deletion efficiency. To demonstrate this, the authors need to perform qPCR or immunoblot analyses.

Reference: Álvarez-Aznar, A et al., Tamoxifen-independent recombination of reporter genes limits lineage tracing and mosaic analysis using CreERT2 lines. Transgenic Research 2019.

Reviewer #2: Remarks to the Author:

In this study, the authors explored the role of FGF signaling in acquired HO development. Their lineage tracing experiment showed that Col2+ cells adopt the fate of lymphatic endothelial cells during HO development. FGFR3 cKO in Prox1-positive LECs increased HO formation. FGFR3 deficiency in LECs resulted in decreased local lymphatic formation with increased inflammatory levels. Local administration of FGF9 in Matrigel inhibited heterotopic bone formation. This study revealed Col2+ lineage cells as a novel origin of lymphatic endothelium in HO. This is an interesting novel finding. The experiments in general are well designed and executed. The data are convincing. I have the following comments to improve the manuscript:

1. Heterotopic ossification is a very complex process involving many factors; HO can occur only if all conditions are satisfied. This is why so many factors have been identified for the inhibition of HO. The Introduction does not describe the overall scheme of HO development or the potential role of FGF signaling in the process. For example, TGFbeta levels are significantly increased at both the initial phase and the late stage, and TGFbeta is also critical for chondrogenesis and progression of HO. This information is missing in the Introduction.

2. There is no evidence to show that the process of acquired heterotopic ossification is different from the other types of HO. AHO is already used for acute hematogenous osteomyelitis.
The use of AHO for acquired HO here causes confusion in the field and literature.

3. The authors claim "we still have limited knowledge about the cellular and molecular mechanism of AHO development", but the manuscript does not review the current understanding of the four different stages of HO development, nor does it discuss the finding on FGF signaling in HO relative to those four stages.

4. BMP signaling is known to determine cell lineage fate. What is the function of BMP signaling in the fate of lymphatic endothelial cells under normal physiology?

5. "Sustained high-level inflammation after trauma is related to impaired local lymphatic drainage in FGFR3-deficient mice, which may aggravate AHO development". Apparently, the increase in local inflammatory levels and the subsequent elevation of AHO is an indirect effect. Inflammation occurs at the early stage of HO, which promotes TGFbeta levels for chondrogenesis. The authors should examine whether TGFbeta activity increases during HO development.

6. The diagram and Discussion should include an overall outline of HO development and the relative position of FGF signaling in LECs within HO.

7. The overall writing about HO and the interpretation of the results need to be improved.

Reviewer #3: Remarks to the Author:

The manuscript by Zhang et al. introduces a compelling relationship between local lymphangiogenesis and acquired heterotopic ossification (AHO) in various sophisticated mouse models that underwent Achilles tenotomy. Specifically, the authors identified Col2+ resident progenitors of the peritendineum as a potential novel source of lymphatic endothelial cell (LEC) renewal post-tenotomy. The capacity for these progenitors to promote lymphangiogenesis post-tenotomy was directly associated with the severity of AHO development in an FGFR3-dependent manner. Conditional knockout (cKO) of FGFR3 in Col2+ progenitors and Prox1+ LECs led to increased AHO formation post-tenotomy, and this pathologic change was associated with an increase in BMPR1a and p-Smad1/5. Moreover, cKO of BMPR1a in these models reversed this phenotype. The authors propose that reduced lymphatic function promotes local inflammation that eventually dysregulates the FGFR3-BMPR1a signaling pathway, leading to AHO formation, and thus targeting FGFR3 may promote lymphangiogenesis to ameliorate disease. While the manuscript presents a convincing story with data from both mice and humans to support their claims, there are concerns about some of the data and interpretation of some results that need to be addressed. There are also some minor concerns that the authors should consider.

Major Comments:

1. The current presentation of the images makes colocalization of markers difficult to assess. For example, the authors write, "Immunostaining revealed abundant expressions of canonical LEC markers LYVE1 and VEGFR3 in tdTomato labeled Col2+ lineage cells…". However, in the associated Figures 2g,h the colocalization of immunostain (green) and lineage trace (red) is questionable. As the authors point out, lymphatic vessels (LVs) were only present within the tendon after injury, so if Col2+ cells are truly the predominant progenitor for LECs in these circumstances, one would expect all (or most, dependent on tamoxifen efficiency) to be Col2-derived. Instead, in Figure 2h the VEGFR3 immunostaining appears to be mostly independent of the lineage traced cells besides a select few colocalized (yellow) cells.
Moreover, for Figure 2g the presence of Col2-derived LYVE1+ cells is difficult to interpret as the lineage traced red fluorescence appears to only be present in the nuclei. How did the authors verify that these nuclei are specific to the LEC, and not the nuclei of the presumably directly adjacent lymphatic muscle cells? Is this lineage tracing mouse model expected to only show nuclear fluorescence, as it appears in other images (i.e. Figure 2h) to be non-specific to the nucleus and cytoplasm? An explanation for the lack of colocalization of many cells, or an alternative presentation of the fluorescence (i.e. split channels with a composite image), not just in Figure 2 but throughout the manuscript, is needed.

2. As presented, the conclusions of Figure 3 are misleading. The authors write, "Collectively, FGFR3 cKO in LECs tremendously promoted AHO development, which further supports that the aggravated AHO formation in FGFR3Col2 mice is strongly related to the disturbed LECs derived from Col2+ cells in the tendon after trauma" (Lines 293-296). While there may be a connection associated with the similar reduction in lymphangiogenesis when FGFR3 is deleted in both the proposed Col2-derived LEC progenitors and Prox1+ LECs, a direct mechanistic relationship between these two cell types cannot be made as presented. The strongest conclusion that can be made is that there appears to be a relationship between lymphangiogenesis and AHO development represented in both models. Additional studies are needed to demonstrate that the associated cellular changes following FGFR3 deletion mediate the same lymphangiogenic disruption in both affected Col2+ progenitors and Prox1+ LECs. For instance, do Col2-derived Prox1+ LECs demonstrate continued disruption in the FGFR3-BMPR1a pathway by protein expression in FGFR3f/f-Col2tomato compared to Col2tomato control mice? It may be possible that the reduced lymphangiogenesis in FGFR3Col2 animals functions by a similar but unrelated mechanism compared to the FGFR3Prox1 model, in which Col2+ progenitors are unable to proliferate and differentiate into LECs without FGFR3, and any lymphangiogenesis in FGFR3Col2 animals is due to ineffective deletion of FGFR3 in certain Col2+ progenitors. To make the claims as written, further experiments are needed to confirm that LECs derived from Col2+ progenitors in the FGFR3Col2 model indeed have disturbed FGFR3-BMPR1a signaling leading to the reduced lymphangiogenesis, via mechanisms similar to the FGFR3Prox1 construct.

3. The authors write, "Lineage tracing of FGFR3f/f; Prox1CreERT2; R26RtdTomato mice (FGFR3f/f-Prox1tomato) revealed that almost all LYVE1+ LECs in the tendon were labeled by tdTomato at 8 weeks post tenotomy, indicating a high efficiency of FGFR3 deletion in LECs in the repaired Achilles tendon" (Lines 277-280). Similar to Comment 1, the associated Supplementary Figure 4 seems to indicate very little colocalization between Prox1 lineage traced (red) and LYVE1 immunostained (green) cells. This finding brings into question the reliability of either the lineage tracing model or the immunostaining in these experiments, as Prox1 and LYVE1 should be colocalized as canonical LEC markers. Moreover, the title of Supplementary Figure 4, "Prox1+ lineage traced cells contribute to local LECs in the repaired tendon", is an inaccurate representation as LECs themselves are Prox1+.
4. LYVE1 is also a known marker of certain M2 polarized macrophages, especially those surrounding smooth muscle cells to promote regulation of collagen content (PMID: 30054204). The authors demonstrate that under conditions where LYVE1 staining increases (depicted in the manuscript as equivalent to LECs, and used as a measure of LV area and length), the number of M1 polarized macrophages decreases (Figures 4, Supp 5, 5, 6, Supp 7). Concurrent F4/80 or Prox1 staining ought to be performed with LYVE1 to confirm that LYVE1+ cells truly represent LECs and that the results are not confounded by LYVE1+ M2 polarized macrophages.

5. The near infrared indocyanine green (NIR-ICG) clearance as depicted in Figure 4j does not seem to match the quantified clearance results in Figure 4k. With the current images, it looks as if FGFR3Prox1 actually has greater or similar clearance compared to FGFR3f/f controls. A clarification of the analysis method depicted in the figures and more representative images in Figure 4j are needed.

6. As presented, the Western blot in Figure 5e is questionable. It seems as if siFGFR3 #1 had relatively ineffective knockdown of FGFR3, but the pSmad1/5 levels appear increased similar to the other siFGFR3 lanes compared to the control. An explanation for this discrepancy is warranted. In addition, the authors write, "FGFR3 knockdown in mouse LEC line led to upregulated BMPR1a…" (Lines 434-435); however, as presented, it is difficult to see a noticeable increase in BMPR1a in the siFGFR3 conditions on the blot. Relative quantification of the protein levels would be helpful for interpreting these results.

7. Essential controls for the mouse models used in this study are missing. To validate accurate representation of the lineage tracing, Cre-negative and Cre-positive without tamoxifen induction (PMID 31641921) controls are necessary.

8. Clarification on the methods for immunostaining image analysis is needed. Were exact cell counts determined using an automated process or counted manually? How were the regions of interest for analysis determined in a representative and unbiased manner? Were the observers blinded to the conditions? "Image J software was used for quantifications. 5-8 independent images of 3-5 sequential sections in each sample were used for quantitative analysis" (Lines 733-735) is not sufficient to allow repeatability of this study.

9. For the human specimen collection, the methods note that "HO specimens were collected from male patients who had previously sustained a femur or elbow fracture…" (Lines 786-787). In Supplementary Figure 5, are the human specimen data pooled to include both femur and elbow fractures? Is it expected for AHO formation in these two different conditions to behave similarly? What was the breakdown of subjects with elbow versus femur injuries that were assessed at the osteogenesis versus maturation stages?

10. Additional information on the mice used for the study is needed. In connection with Comment 9, in which it was noted that only male human subjects were used, were both male and female mice used for this study? If so, were the sexes distributed evenly between the groups? Moreover, Jackson Laboratory stock numbers ought to be provided for the animals used in this study. For instance, according to Jackson Laboratories, the only Prx1-CreERT2 animal available is also tagged with a GFP (Prx1CreER-GFP; Stock No 029211). Was a different strain used for this study?
If not, how was the Prx1-driven GFP fluorescence controlled in the analysis in Supplementary Figure 3, especially since the antibodies assessed were also labeled green? In addition, there are many Rosa26-tdTomato reporters offered by Jackson Laboratories, so which strain was used? This is also important to acknowledge given the differences in basal CreERT2 activity noted in the source for Comment 7.

Minor Comments:

1. There are potentially misleading comments that do not match the data as presented. For example, in reference to Supplementary Figure 5f, the authors write, "immunostaining revealed significantly increased numbers of F4/80+iNOS+ inflammatory macrophages in HO lesions at maturation stage relative to osteogenesis stage" (Lines 372-373). However, the figure demonstrates no significant difference in the % F4/80+iNOS+ cells relative to total F4/80+ cells between the two stages. The authors should ensure that all written explanations accurately depict the data as presented.

3. The manuscript ought to be thoroughly proofread for grammar and typos.

We would like to submit the revised manuscript entitled "Targeting local lymphatics to ameliorate heterotopic ossification via FGFR3-BMPR1a pathway" (NCOMMS-20-16420-T). We have addressed all reviewers' questions in the revised manuscript and provided point-to-point replies. All changes are marked in red in the manuscript. We appreciate the opportunity to revise our manuscript.

Reviewer 1

In the manuscript "Targeting local lymphatics to ameliorate heterotopic ossification via FGFR3-BMPR1a pathway" Zhang et al. study mechanisms that underlie acquired heterotopic ossification (AHO). As a result of their study, the authors propose FGF signaling as a promising therapeutic target to treat the disorder. Overall, the manuscript is preliminary and confusing and the presented data are not convincing. Especially, the lineage-tracing studies are of concern given the known leakiness of the genetic reporter used.

Response: Thank you very much for your valuable comments. We checked our manuscript against your suggestions one by one, carried out the related experiments, and hope our supplemental results and explanations address your concerns.

Other comments:

1. All the histology and immunofluorescence (IF) data are not properly labeled, making it difficult to evaluate the data. Also, the quality of the IF images must be improved.

3. It has been reported that the R26tdTom reporter mouse shows tamoxifen-independent recombination (Álvarez-Aznar, A et al., 2019). Consequently, the authors need to examine this possibility before concluding that LECs are of mesenchymal origin.

Response: We appreciate the reviewer's important suggestion. According to the referred paper 1, we tested tamoxifen-independent Cre recombination of the reporter using tomato and Col2tomato mice without tamoxifen induction as controls (S Fig3b). We did not find tomato-positive cells in the repaired Achilles tendon of these control mice at 4 weeks post surgery, though co-staining of LYVE1 revealed that lymphatics had already formed in these tendons. Furthermore, as mentioned in this reference 1, mTmG reporter mice are more suitable for lineage tracing as they have lower recombination susceptibility. Therefore, we also used Col2mTmG mice to further confirm the LEC identity of Col2-derived cells in the tendon after surgery in vivo and in vitro. GFP … (Fig5a, b).
These results demonstrated that Col2+ cells adopted the fate of LECs rather than macrophages in the tendon after injury. Meanwhile, we tried to sort Col2-derived cells in the repaired tendon of Col2mTmG mice. However, the amount of primary GFP-labeled Col2+ lineage cells was too low for sorting and transcriptome profiling. We also tried to amplify these primary GFP+ cells before sorting, but the GFP+ cells amplified at a much slower rate than GFP− cells in vitro, even though we used endothelial cell medium (ECM, ScienCell). Since in vitro evidence can help further confirm the LEC identity of Col2+ lineage cells in the repaired Achilles tendon, we isolated the primary cells in the repaired tendon of Col2mTmG mice and confirmed that GFP+ Col2-derived cells were immunostained by LEC markers including LYVE1, Prox1 and VEGFR3 (S Fig4a). We agree that the transcriptome profiling of Col2+ lineage cells is an important study, and we will carry it out using emerging new approaches in future research. Thank you again for your important suggestion.

6. The authors report that FGFR3 is involved in lymphatic migration and proliferation. However, the data do not exclude that FGFR3 is also involved in the differentiation from COL2-positive mesenchymal cells.

Response: Thank you for your suggestion. Previous findings reported that FGFR3 is essential for lymphangiogenesis by regulating LEC proliferation as well as migration 2,3. It was also reported that FGFR3 is an initial target of Prox1, which is known as a master regulator inducing lymphatic differentiation 2. Therefore, it was speculated that FGFR3 …

1. Heterotopic ossification is a very complex process involving many factors. HO can occur only if all conditions are satisfied. This is why so many factors have been identified for the inhibition of HO. The Introduction does not describe the overall scheme of HO development or the potential role of FGF signaling in the process. For example, TGFbeta levels are significantly increased at both the initial phase and the late stage, and TGFbeta is also critical for chondrogenesis and progression of HO. This information is missing in the Introduction.

Response: … inducing type H vessel formation coupled with osteogenesis. Locally increased blood vessels transport more oxygen, nutrients and minerals for HO development 5) and Lines 691-693 (TGF-β produced by macrophages has been found to contribute to HO development, and TGF-β signaling remains activated in the osteogenesis stage of HO before a reduction until 15 weeks after injury 5). The potential role of FGF signaling in HO is mentioned in Lines 93-104 (Fibroblast growth factor (FGF) signaling plays an essential role in skeletal development 6. Activating mutations of fibroblast growth factor receptor 3 (FGFR3) in humans cause chondrodysplasias, including achondroplasia, hypochondroplasia, and thanatophoric dysplasia, through inhibiting chondrocyte proliferation and differentiation 7. Meanwhile, FGFR3 also plays a vital role in the regulation of lymphatic formation. FGFR3 is expressed in human and mouse lymphatic endothelial cells (LECs) and is essential for LEC proliferation and migration 2. 9-cis retinoic acid (9-cisRA) promotes LEC proliferation, migration and tube formation via activating FGF signaling. 9-cisRA-induced proliferation of LECs is coupled with increased FGFR3 expression, which is suppressed by soluble FGFR3 recombinant protein that sequesters FGF ligands 3.
All these findings suggest the possible involvement of FGFR3 in acquired HO development, although the accurate role and detailed underlying mechanisms remain to be clarified).

2. There is no evidence to show that the process of acquired heterotopic ossification is different from the other types of HO. AHO is already used for acute hematogenous osteomyelitis. The use of AHO for acquired HO here causes confusion in the field and literature.

Response: Thank you for your suggestion. We replaced 'AHO' with 'acquired HO' or 'HO' to avoid confusion.

3. The authors claim "we still have limited knowledge about the cellular and molecular mechanism of AHO development", but the manuscript does not review the current understanding of the four different stages of HO development, nor does it discuss the finding on FGF signaling in HO relative to the four stages of HO development. The overall writing about HO and the interpretation of the results need to be improved.

Response: Thank you for your suggestion. We thoroughly reviewed the whole manuscript and improved the interpretation of the results; improved descriptions highlighted in red are shown in the results of our manuscript. Further discussions about the influence of FGFR3 on LEC differentiation were added in Lines 709-719.

Reviewer 3

The manuscript by Zhang et al. introduces a compelling relationship between local lymphangiogenesis and acquired heterotopic ossification (AHO) in various sophisticated mouse models that underwent Achilles tenotomy. Specifically, the authors identified Col2+ resident progenitors of the peritendineum as a potential novel source of lymphatic endothelial cell (LEC) renewal post-tenotomy. The capacity for these progenitors to promote lymphangiogenesis post-tenotomy was directly associated with the severity of AHO development in an FGFR3-dependent manner. Conditional knockout (cKO) of FGFR3 in Col2+ progenitors and Prox1+ LECs led to increased AHO formation post-tenotomy, and this pathologic change was associated with an increase in BMPR1a and p-Smad1/5. Moreover, cKO of BMPR1a in these models reversed this phenotype. The authors propose that reduced lymphatic function promotes local inflammation that eventually dysregulates the FGFR3-BMPR1a signaling pathway, leading to AHO formation, and thus targeting FGFR3 may promote lymphangiogenesis to ameliorate disease. While the manuscript presents a convincing story with data from both mice and humans to support their claims, there are concerns about some of the data and interpretation of some results that need to be addressed. There are also some minor concerns that the authors should consider.

Major Comments:

1. The current presentation of the images makes colocalization of markers difficult to assess. For example, the authors write, "Immunostaining revealed abundant expressions of canonical LEC markers LYVE1 and VEGFR3 in tdTomato labeled Col2+ lineage cells…" (Lines 200-202). However, in the associated Figures 2g,h the colocalization of immunostain (green) and lineage trace (red) is questionable. As the authors point out, lymphatic vessels (LVs) were only present within the tendon after injury, so if Col2+ cells are truly the predominant progenitor for LECs in these circumstances, one would expect all (or most, dependent on tamoxifen efficiency) to be Col2-derived. Instead, in Figure 2h the VEGFR3 immunostaining appears to be mostly independent of the lineage traced cells besides a select few colocalized (yellow) cells.
Moreover, for Figure 2g the presence of Col2-derived LYVE1+ cells is difficult to interpret as the lineage traced red fluorescence appears to only be present in the nuclei. How did the authors verify that these nuclei are specific to the LEC, and not the nuclei of the presumably directly adjacent lymphatic muscle cells? Is this lineage tracing mouse model expected to only show nuclear fluorescence, as it appears in other images (i.e. Figure 2h) to be non-specific to the nucleus and cytoplasm? An explanation for the lack of colocalization of many cells, or an alternative presentation of the fluorescence (i.e. split channels with a composite image), not just in Figure 2 but throughout the manuscript, is needed.

Response: … Moreover, we also stained the primary cells isolated from the repaired Achilles tendon of Col2mTmG mice with Prox1, LYVE1 and VEGFR3 in vitro (S Fig4a) and found that Col2+ lineage cells in the tendon after surgery were labeled by these LEC markers.

2. As presented, the conclusions of Figure 3 are misleading. The authors write, "Collectively, FGFR3 cKO in LECs tremendously promoted AHO development, which further supports that the aggravated AHO formation in FGFR3Col2 mice is strongly related to the disturbed LECs derived from Col2+ cells in the tendon after trauma" (Lines 293-296). While there may be a connection associated with the similar reduction in lymphangiogenesis when FGFR3 is deleted in both the proposed Col2-derived LEC progenitors and Prox1+ LECs, a direct mechanistic relationship between these two cell types cannot be made as presented. The strongest conclusion that can be made is that there appears to be a relationship between lymphangiogenesis and AHO development represented in both models. Additional studies are needed to demonstrate that the associated cellular changes following FGFR3 deletion mediate the same lymphangiogenic disruption in both affected Col2+ progenitors and Prox1+ LECs.

5. The near infrared indocyanine green (NIR-ICG) clearance as depicted in Figure 4j does not seem to match the quantified clearance results in Figure 4k. With the current images, it looks as if FGFR3Prox1 actually has greater or similar clearance compared to FGFR3f/f controls. A clarification of the analysis method depicted in the figures and more representative images in Figure 4j are needed.

Response: Thank you for your suggestion. We replaced the previous NIR-ICG image of FGFR3Prox1 mice with a more representative image (Fig4j). A clarification of the NIR-ICG analysis is provided in the methods and materials as follows: The signal intensity in the footpad was recorded immediately after ICG injection as the initial signal intensity. ICG imaging was collected again 24 hours after ICG injection. Conditions including exposure time, focus and position of the mouse hindlimbs need to be consistent for all imaging, and overexposure needs to be avoided. The images were analyzed using Evolution-Capt v18.02 software. In brief, a region of interest (ROI) defining the injection site of the footpad was identified. ICG clearance was quantified as the percentage of reduced ICG signal intensity in the footpad 24 hours post injection relative to the initial ROI signal intensity (Lines 843-850).

6. As presented, the Western blot in Figure 5e is questionable. It seems as if siFGFR3 #1 had relatively ineffective knockdown of FGFR3, but the pSmad1/5 levels appear increased similar to the other siFGFR3 lanes compared to the control. An explanation for this discrepancy is warranted.
In addition, the authors write, "FGFR3 knockdown in mouse LEC line led to upregulated BMPR1a…" (Lines 434-435); however, as presented, it is difficult to see a noticeable increase in BMPR1a in the siFGFR3 conditions on the blot. Relative quantification of the protein levels would be helpful for interpreting these results.

Response: Thank you for your suggestion. The seeming inconsistency of the western blot result might be due to the variable FGFR3-knockdown efficiency among these three siRNAs. To obtain a more stable and efficient knockdown of FGFR3, siFGFR3 #1, #2 and #3 were pooled together for a combined FGFR3 knockdown in the mLEC line. As shown in Fig5g, the FGFR3 level was evidently knocked down and the BMPR1a/pSmad1/5 levels were markedly upregulated.

7. Essential controls for the mouse models used in this study are missing. To validate accurate representation of the lineage tracing, Cre-negative and Cre-positive without tamoxifen induction (PMID 31641921) controls are necessary.

Response: Thank you for your suggestion. According to the referred paper 1, tamoxifen-independent Cre recombination of the reporter mice was tested using tomato and Col2tomato mice without tamoxifen induction as controls (S Fig3b). We did not find tomato-positive cells in the repaired Achilles tendon of these control mice at 4 weeks post surgery, though co-staining of LYVE1 revealed that lymphatics had already formed in these tendons. Furthermore, as mentioned in this reference
Modification of Ant Colony Optimization Algorithm to Solve the Traveling Salesman Problem

The traveling salesman problem (TSP) is an optimization problem of determining the optimal route through a number of nodes, each of which may be passed only once, with the initial node as the final destination. One method for solving the TSP is the Ant Colony Optimization (ACO) algorithm. ACO is inspired by ant behaviour in searching for food, where ants produce pheromones to find food sources and create a route from the colony to the food that other ants then follow. However, ACO has not been considered the optimal method for solving the TSP, because it has several shortcomings in the computational process: the comparison between pheromones across iterations is not clearly defined, and slow computing time causes the results of ACO to be suboptimal. To correct these deficiencies, modifications are made to the ACO by changing some of its values, such as adjusting the number of ants to the number of nodes automatically, changing the value in the pheromone renewal, and adding a term to the construction of the solution. The outcome of this research is that the modified ACO did not provide shorter computing time with a more accurate final value, and thus did not provide an optimal solution. The test results in this study found that the average computation time for the last iteration of each test was 0.54 seconds, and for 10 iterations an average computation time of 5.54 seconds was obtained over four tests. The amount of memory used in the four tests in this study was 440.11 MB for 10 iterations.

INTRODUCTION

The Traveling Salesman Problem (TSP) is one of the classic optimization problems that can be found in everyday life. The TSP is stated as an optimization problem of finding the most optimal route in which a number of nodes must be visited, each node may be visited only once, and the initial node is the final destination. Some examples of TSP cases in daily life include searching for multiple locations, delivery of goods, and others. Several methods have been developed to solve the TSP, one of which is Ant Colony Optimization (ACO) [4].

The Ant Colony Optimization (ACO) algorithm is inspired by ant foraging behavior in nature. Based on the book written by Marco Dorigo and Thomas Stutzle in 2004 entitled "Ant Colony Optimization", ACO can be applied directly and easily to the TSP case. This is because the behavior of ants when foraging has the same mechanism as the TSP case. Initially, ants walk randomly when they first leave their nest to find food, and leave pheromone trails when returning to their nests. Once ants find food, the chance of ants walking randomly diminishes, because ants tend to follow the pheromone trails left by earlier ants. When that happens, the pheromone trail gets stronger, and the ants stop walking randomly until the food runs out. Pheromones also evaporate: the longer the route, the faster the pheromone evaporation. The shorter path is traveled by ants more often, so its pheromone intensity is more concentrated and it evaporates more slowly. Overall, the optimal path ends up being the one with high pheromone intensity. This makes ACO work well on the TSP case. [1]

However, ACO has several shortcomings, so it is considered not to give truly optimal results for the TSP. The most prominent drawback is that the algorithm's computing time tends to be slow.
This is because the pheromone comparison between iterations is not clearly defined. In addition, a large number of ants greatly affects the computation time and tends to give suboptimal results. ACO also has a stagnation problem, where all ants concentrate on the single path with the most pheromone, so the probability of finding another, more optimal path becomes smaller. Based on this brief description, modifications are made to the ACO by correcting the deficiencies described above without changing the basic workings of the algorithm. The implementation of the ACO modification is done by building a simulation application that compares conventional ACO with modified ACO; this application uses the Java language with the NetBeans IDE. With this simulation application, the final results of the modified ACO can be compared with conventional ACO as a whole, from the route formed to the most optimal computing time. With this modification, it is hoped that ACO can provide more optimal results, which can later be widely implemented, as well as serve as a reference for further research.

II. LITERATURE REVIEW

In a study titled Ant Colony for the TSP conducted by Yinda Feng in 2010, the results provided by ACO had many advantages, but also many disadvantages. The drawback found in this study is that ACO requires slow computation time. This happens because the pheromones produced do not differ significantly from one iteration to the next. This study also explained that ACO has a stagnation problem, where all ants concentrate on one path with the most pheromone.

The following year, a study titled Solving The Traveling Salesman Problem Using The Ant Colony Optimization conducted by Ivan Brezina Jr. and Zuzana Cickova concluded that the quality of the final result depends on the number of ants. Fewer ants affect route changes. This causes the ACO algorithm to produce optimal results only with a small number of ants, because fewer ants mean fewer iterations are carried out. Meanwhile, solving a problem with a large number of nodes requires a greater number of ants and iterations. This problem is similar to the previous study, where ACO computation time tends to be slow.

Dedy Mulia conducted a study in the same year titled Application of the Ant System Algorithm (AS) in the Case of the Traveling Salesman Problem (TSP), which provided a detailed explanation of solving the TSP using conventional ACO. ACO implemented on the TSP is able to produce the shortest route length. However, this study only explained how ACO is applied to a simple TSP case, so it is not known how optimal ACO is in solving a TSP with a large number of nodes.

Nitun N. Poddar and Devinder Kaur further researched modifications to the ACO algorithm in their 2013 study titled Solving the Traveling Salesman Problem using Reinforced Ant Colony Optimization Techniques. In this study, modifications were made in the form of reinforcement, where several values in the original ACO algorithm were changed. This was done so that routes previously considered suboptimal could still be explored, giving better results than before. Although the resulting tour distance is better than that of the unmodified ACO, the convergence rate of this modified ACO is also higher. This causes the computation time to find another tour in the next iteration to be slower as well.
Further research was conducted in 2016 by Abdulqader Mohsen, titled Annealing Ant Colony Optimization with Mutation Operator for Solving TSP. In this study, a modification of the ACO was also carried out, as in previous studies. The modifications include reinforcement and hybridization of ACO with a mutation operator. Compared with the unmodified ACO, the last iteration of this modification gives more optimal results without stagnation or premature convergence. However, this modification is still not fully optimal.

The final research used as a benchmark in this study was conducted in 2018 by Xu et al., titled Dynamic Vehicle Routing Problems with Enhanced Ant Colony Optimization. This study made a modification in the form of ACO reinforcement, merging ACO with K-means to solve dynamic problems. The result of this modification has the advantage of finding the optimal solution. However, its biggest drawback is that the convergence time is slower than that of the unmodified ACO at later stages.

III. RESEARCH METHOD

Based on the literature review, there is a considerable research gap in the Ant Colony Optimization method in general. Chiefly, when implementing ACO on the Traveling Salesman Problem, ACO tends to be slow in the computation process. This is because an increasing number of ants affects memory usage, and insignificant changes in pheromones do not give optimal results. In addition, even though modifications have been made in the form of value changes and reinforcement by combining ACO with other optimization algorithms, the final results did not show any significant changes. Based on this analysis, the biggest drawback of this algorithm is in the comparison stage of each iteration: the updated pheromone is not necessarily compared with the pheromone value from the previous iteration. This affects the final result of the solution, because determining the optimal result requires comparisons with a definite value in each iteration. In addition, it is not explained whether, after all iterations have been executed, the resulting route is really optimal, because the agents (ants) basically generate pheromones randomly. The number of ants also affects computational performance: a large number of ants produces a more accurate final result but slows down the computation time, whereas a smaller number of ants results in a faster computation time but a less accurate final result.

A. Modified Ant Colony Optimization

The ACO stage starts when the ants walk around randomly while foraging. When the ants go back to the nest or starting point, they leave a pheromone trail for other ants to follow before it starts to evaporate. When another set of ants follows a path that contains previous pheromones, the pheromone trail gets stronger and the ants stop walking randomly. Over time, the intensity of the pheromones becomes more concentrated and an optimal path is formed. In addition to pheromone intensity, another factor in determining the optimal path is the distance between nodes.

Automatic Ant Deployment

This modification is done to fix the problem of the ACO where a large number of ants causes stagnation. Therefore, this modification is made so that the number of ants automatically adjusts to the number of nodes. The conditions can be seen in the formula as follows.
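Since the paper's modified formulas (equations (1)-(3)) are not reproduced above, the following Python sketch shows only the conventional ACO transition rule that the modifications build on, with alpha and beta playing the roles described for Table 1. The function name and default values are ours for illustration, not the paper's implementation.

```python
import random

def next_node(current, unvisited, tau, eta, alpha=1.0, beta=2.0):
    """Conventional ACO transition rule (random-proportional selection).

    tau[i][j] : pheromone intensity on edge (i, j)
    eta[i][j] : visibility, usually 1 / distance(i, j)
    alpha, beta : control parameters in the spirit of Table 1
    """
    weights = [(j, (tau[current][j] ** alpha) * (eta[current][j] ** beta))
               for j in unvisited]
    total = sum(w for _, w in weights)
    # Roulette-wheel selection proportional to the edge weights.
    r = random.uniform(0.0, total)
    acc = 0.0
    for j, w in weights:
        acc += w
        if acc >= r:
            return j
    return weights[-1][0]  # guard against floating-point round-off
```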
Pheromone Intensity Renewal

The modification at this stage adds extra terms to the pheromone renewal formula. The formula for pheromone intensity renewal follows.

Modify Main Solutions

The main ACO solution is modified by adding an extra parameter to the solution, calculated based on the following formula (3). The parameter κij is a distance combination between two nodes, while λ is the controller parameter. κij is calculated by the following formula.

B. Testing and Comparison

Comparisons were made to measure the final results of conventional ACO and modified ACO. After the final results are obtained, the next step is to verify whether modified ACO provides better, more optimal results than conventional ACO. Modified ACO is judged better and more optimal than conventional ACO only if all of the following criteria are met: 1. The route produced by modified ACO has a shorter total distance. 2. Modified ACO has a faster computing time. 3. The memory used for computing is relatively lighter. These criteria are based on estimates from the research method stage, where the automatic ant count modification and the additional variables at the solution development stage should improve the computational time and produce a more optimal final route. If any criterion is not met during testing, the modified ACO is judged not to provide optimal results.

C. Parameters of Research

Several parameters are required to set the limits of this research. The first to determine are the control parameters. They govern the main solution phase in the modified ACO, and their values do not change regardless of pheromone intensity. There are three such parameters, as shown in Table 1. As shown in Table 1, Alpha (α) is a parameter that controls the initialization value of the pheromone in each iteration, while Beta (β) controls the visibility value of the distance. Furthermore, the Lambda value (λ) is a parameter controlling the value of the combined distance between nodes. Aside from the control parameters used in the main solution, limits must also be placed on other parameters. These are determined by the user; there are three of them, as shown in Table 2. In Table 2, the minimum and maximum number of ants (k) affects the determination of the best route in terms of computation time. The pheromone evaporation rate correlates with the number of ants (k) in determining the best route found by conventional or modified ACO: faster or slower pheromone evaporation is estimated to determine the proportion of ants (k) that form a route from a node, especially in the solution development computation of modified ACO. The maximum iteration parameter sets how many iterations are executed when comparing conventional and modified ACO; without a limit, the computation would continue without stopping.

IV. RESULTS AND DISCUSSION

Testing and comparing the results of conventional ACO and modified ACO requires several research samples. These samples are needed to find out how optimal the results of modified ACO are when applied to several different cases. Four inputs determine the results: the number of ants (k), the number of nodes, the evaporation rate, and the maximum iteration.
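Before turning to the tests, the rules described above can be made concrete. The sketch below uses the textbook (Dorigo-style) transition weight and evaporation-plus-deposit pheromone renewal, which are standard ACO and not specific to this paper; the extra κij^λ factor and the particular κij definition are only one plausible reading of the modified main solution, since the paper's own equations are not reproduced here.

```java
import java.util.List;
import java.util.Random;

/**
 * Hedged sketch of one ACO construction step. Only alpha, beta, lambda and the
 * evaporation rate follow the paper's stated roles; the kappa definition is
 * an illustrative assumption, not the authors' exact equation.
 */
public final class ModifiedAcoStep {

    /** Probability weight of edge (i, j): tau^alpha * eta^beta * kappa^lambda,
     *  where eta = 1/d(i, j) is the usual visibility term. */
    static double edgeWeight(double tau, double dij, double kappa,
                             double alpha, double beta, double lambda) {
        double eta = 1.0 / dij;
        return Math.pow(tau, alpha) * Math.pow(eta, beta) * Math.pow(kappa, lambda);
    }

    /** Assumed "combined distance" of two nodes: their mean distance to all
     *  other nodes divided by the direct distance (purely illustrative). */
    static double kappa(double meanDistI, double meanDistJ, double dij) {
        return (meanDistI + meanDistJ) / (2.0 * dij);
    }

    /** Conventional pheromone renewal: tau <- (1 - rho) * tau + sum over ants
     *  of Q / L_k, summed over the tour lengths L_k of ants that used this edge. */
    static double renewPheromone(double tau, double rho, double q,
                                 List<Double> tourLengthsUsingEdge) {
        double deposit = 0.0;
        for (double length : tourLengthsUsingEdge) {
            deposit += q / length;
        }
        return (1.0 - rho) * tau + deposit;
    }

    /** Roulette-wheel selection of the next node among unvisited candidates. */
    static int chooseNext(double[] weights, Random rng) {
        double total = 0.0;
        for (double w : weights) total += w;
        double r = rng.nextDouble() * total;
        for (int j = 0; j < weights.length; j++) {
            r -= weights[j];
            if (r <= 0.0) return j;
        }
        return weights.length - 1; // guard against floating-point remainder
    }
}
```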
Once the data have been inputted, the cases are determined in the following combinations. 1. The number of ants (k) is 50% more than the number of nodes, with an evaporation rate of 1% of the number of nodes. 2. The number of ants (k) is 30% more than the number of nodes, with an evaporation rate of 4% of the number of nodes. 3. The number of ants (k) is 30% less than the number of nodes, with an evaporation rate of 1% of the number of nodes. 4. The number of ants (k) is 50% less than the number of nodes, with an evaporation rate of 4% of the number of nodes. The provisions of the cases in this study are based on the estimates stated in the previous chapter. The number of ants (k) affects the accuracy of the route determination in conventional ACO computing, with the estimate that a larger number of ants (k) gives more accurate results than a small number, at the cost of more memory usage. The number of ants (k) does not affect the modified ACO computational stage, where the number of ants (k) automatically follows the number of nodes (for example, with 50 nodes the number of ants (k) is also 50). As for the pheromone evaporation rate, its density influences route determination at the main solution stage, because ants (k) follow a route more consistently when the pheromone density is high.

Large Number of Ants with Low Evaporation Rate

In this case, each variable is entered under the condition that the number of ants (k) is greater than the number of nodes inputted, with a low evaporation rate. The ant value (k) is made 50% larger than the number of nodes, and the evaporation rate is 1% of the number of nodes. This provision is based on the estimate that more ants (k) give more accurate results than a smaller number, because a greater number of ants (k) computing between nodes deposits more pheromone. However, this does not hold if the evaporation rate inputted is not concentrated, because the route formation information laid down by the ants (k) is influenced by the pheromone density. The first variable to be determined is the number of nodes, which amounts to 25. Next is the number of ants (k); according to the provisions of the first case, k is 50% more than the number of nodes, giving 37 ants. The last variable is the evaporation rate; at 1% of the number of nodes, the value is 0.25. The test results can be seen in Figure 2 below. According to the test results, there are a number of differences between conventional ACO and modified ACO. Specifically, the shapes of the routes are very different from each other, and from a visual perspective the route formed by modified ACO appears more random than that of conventional ACO. Even so, in terms of computing time, modified ACO is more stable than conventional ACO. The time elapsed for this test is 0.257 seconds for the last iteration and 2.761 seconds for the total of 10 iterations. The memory usage for this test is 29.15 MB for the last iteration and 321.25 MB for the total of 10 iterations.
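The parameter arithmetic used in this and the following cases (for example, 25 nodes giving 37 ants and an evaporation rate of 0.25) follows directly from the stated percentage rules; the helper below reproduces the reported values, assuming fractional ant counts are truncated:

```java
/** Reproduces the case parameters of the four tests (truncation assumed). */
public final class CaseParameters {

    /** pctDelta = +50 means "50% more than the number of nodes", -30 means "30% less";
     *  integer arithmetic truncates, which reproduces the reported 37 ants for 25 nodes. */
    static int ants(int nodes, int pctDelta) {
        return nodes + nodes * pctDelta / 100;
    }

    /** Evaporation rate as a percentage of the node count, e.g. 1% of 25 nodes -> 0.25. */
    static double evaporationRate(int nodes, double pctOfNodes) {
        return nodes * pctOfNodes;
    }

    public static void main(String[] args) {
        System.out.println(ants(25, +50) + ", " + evaporationRate(25, 0.01)); // 37, 0.25
        System.out.println(ants(20, +30) + ", " + evaporationRate(20, 0.04)); // 26, 0.8
        System.out.println(ants(50, -30) + ", " + evaporationRate(50, 0.01)); // 35, 0.5
        System.out.println(ants(22, -50) + ", " + evaporationRate(22, 0.04)); // 11, 0.88
    }
}
```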
Large Number of Ants with High Evaporation Rate

In the second case, the variables follow the provision that the number of ants (k) is greater than the number of nodes inputted, but with a high (concentrated) evaporation rate, unlike the previous case. The ant variable (k) is 30% more than the number of nodes, and the evaporation rate is 4% of the number of nodes. As in the previous case, this provision is based on the estimate that a number of ants (k) greater than the number of nodes gives more accurate results than a smaller number; the difference lies in the specific number of ants (k). Because route formation is influenced by the level of pheromone density, a concentrated evaporation rate is estimated to give more optimal results, especially when combined with a large number of ants (k). The first variable to be determined is the number of nodes, which is 20. The next is the number of ants (k); with k set 30% above the number of nodes, this test uses 26 ants. The last variable is the evaporation rate; the second case calls for a high rate of 4% of the number of nodes, so the rate for this test is 0.8. With all variables determined, the test results can be seen in Figure 3 below.

Figure 3. Test results for the second case.

In the second case, the computational times and the route formations of conventional ACO and modified ACO show some differences, but not significant ones. This is caused by the high evaporation rate, which distributes the ants (k) more evenly, so the route formation is tidier than in the first case, where the evaporation rate was less concentrated. As in the first case, the routes generated in the last iteration of both conventional and modified ACO are similarly random, but the routes generated by modified ACO tend to be more random because many route segments overlap one another. This is caused by the number of ants (k) in modified ACO being greater than in conventional ACO. The time elapsed for this test is 0.108 seconds for the last iteration and 1.149 seconds for the total of 10 iterations. The memory usage for this test is 26.72 MB for the last iteration and 279.21 MB for the total of 10 iterations.

Small Number of Ants with Low Evaporation Rate

For the third of the four predetermined cases, three variables must be determined. Here the number of ants (k) is smaller than the number of nodes, with a non-concentrated evaporation rate. The ant variable (k) is 30% less than the number of nodes, and the evaporation rate is 1% of the number of nodes. These values are based on the estimate that a number of ants (k) smaller than the number of nodes gives accurate results with lower memory usage when combined with an equally small evaporation rate. For this case, several variables are determined to carry out the test. The first is the number of nodes, set to 50. The next is the number of ants (k); at 30% less than the number of nodes, the count is 35 ants.
The last variable to be determined is the evaporation rate; at 1% of the number of nodes, the rate is 0.5. Once the variables are determined, the results can be seen in Figure 4 below. The routes formed by conventional ACO and modified ACO show some differences: the routes produced by conventional ACO are more evenly distributed, while the routes produced by modified ACO look more random and overlapping, even though that is not actually the case. The computing time of conventional ACO is slower than that of modified ACO. In this test the time elapsed at the last iteration is 1.706 seconds, and over 10 iterations is 17.350 seconds. Memory usage for this test is 65.51 MB for the last iteration and 804 MB for 10 iterations.

Small Number of Ants with High Evaporation Rate

For the last case, the variables are set to achieve a different outcome from the previous three cases. Here the number of ants (k) is smaller than the number of nodes, combined with a high evaporation rate. The ant variable (k) is 50% less than the number of nodes, and the evaporation rate is 4% of the number of nodes. These provisions are based on the estimate that a number of ants (k) smaller than the number of nodes uses less memory and, when combined with a concentrated evaporation rate, will give an optimal final result. As in the previous cases, several variables are determined to carry out this test. The first is the number of nodes, set in this study to 22. Next, the number of ants (k) is determined; at 50% less than the number of nodes, the total is 11 ants. The last variable is the evaporation rate; in this case it is 4% of the number of nodes, so the rate is 0.88. With these variables determined, the results of this test can be seen in Figure 5.

The route formed by conventional ACO is neater and more structured than that of modified ACO. Visually, the route formed by conventional ACO is spread evenly between nodes. One influencing factor is the number of ants (k), which is much smaller than in modified ACO, where 50 ants are deployed (following equation 3.6 in the previous chapter). In modified ACO, the routes formed are more random and overlapping, so visually many nodes appear to be traversed many times, even though that is not actually the case. Overall, both conventional and modified ACO in this test have their advantages and disadvantages, but they are not consistent in the routes and computational times produced. Where conventional ACO gives better route formation, its computation time is slower than modified ACO; conversely, modified ACO's computational time is more optimal and stable, but its route is more random and overlapping. In this test the time elapsed at the last iteration is 0.089 seconds, and over 10 iterations is 0.900 seconds. Memory usage is 27.69 MB for the last iteration and 355.99 MB for 10 iterations.

V. CONCLUSION

Based on the results of this study, it is concluded that modified ACO does not consistently provide better results than conventional ACO in terms of computational speed and the final route formed.
Across the tests conducted, the final route produced by modified ACO is not consistent: in some samples it looks more optimal, while in others it is more random and overlapping. Modified ACO also affected memory usage in several tests. Modified ACO can be used to compute instances with a large number of nodes with optimal and stable computing times compared with conventional ACO; this is due to the modification that adjusts the number of ants to the number of nodes inputted. In terms of forming the final route, however, conventional ACO still gives better results. The tests in this study gave an average computation time of 0.693 seconds for the last iteration of each test and an average of 6.977 seconds for the full 10 iterations per test. The memory used across the 12 tests of this study averaged 428.969 MB per 10 iterations. Further research is expected to build on these results; future modifications of the ACO algorithm should focus on the number of ants and should be tested on larger samples.

Y. Feng, "Ant colony for the TSP," 2010.
6,084.4
2020-12-27T00:00:00.000
[ "Computer Science" ]
Hepatoprotective Effect of Otostegia persica Boiss. Shoot Extract on Carbon Tetrachloride-Induced Acute Liver Damage in Rats. In this study, the hepatoprotective effect of the methanol extract of the aerial parts (shoot) of Otostegia persica Boiss (Golder) was investigated against carbon tetrachloride (CCl4)-induced acute hepatotoxicity in male rats. Liver damage was induced through the oral administration of 50% CCl4 in liquid paraffin (2.5 mL/Kg bw, per os) 60 min after the administration of the methanol extract of O. persica shoot (at 200, 300 and 400 mg/Kg bw doses) and assessed using biochemical parameters (plasma and liver tissue malondialdehyde (MDA), transaminase enzyme levels in plasma [aspartate transaminase (AST), alanine aminotransferase (ALT)] and liver glutathione (GSH) levels). Results show that the methanol extract of O. persica shoot is active at 300 mg/Kg (per os) and possesses remarkable antioxidant and hepatoprotective activities. Additionally, histopathological studies verified the effectiveness of this dose of the extract in preventing acute liver damage.

Introduction

The liver, which is involved in almost all of the biochemical pathways in the body, plays a vital role in maintaining and regulating its homeostasis. A healthy liver is therefore necessary for health and wellbeing. Unfortunately, the liver is often abused by environmental toxins, poor eating habits, alcohol, and the use of medications and over-the-counter drugs, which can damage and weaken it and eventually lead to hepatitis, cirrhosis and other liver diseases (1). Modern medicine has little to offer to alleviate hepatic diseases and there are not many drugs available to treat liver disorders. Hence, many folk remedies of plant origin have been evaluated for their possible hepatoprotective effects against liver damage in experimental animals. Silymarin is a polyphenolic component isolated from the fruits and seeds of Silybum marianum (2, 3). It restores the GSH content, facilitates ATPase activity and promotes RNA polymerase I in hepatocytes (4). Flavonolignans isolated from silymarin are known to lead to regeneration of liver tissue (5) and a hepatic membrane stabilization response (6). Silymarin has great potency for hepatoprotection against toxic agents like CCl4 and was used here as a reference drug. The CCl4-induced hepatotoxicity model is frequently used to investigate the hepatoprotective effects of drugs and plant extracts; the changes associated with CCl4-induced liver damage are similar to those of acute viral hepatitis (7). The family Lamiaceae is one of the largest and most distinctive families of flowering plants, with about 220 genera and almost 4000 species worldwide (8). Many biologically active essential oils have been isolated from various members of this family so far. The genus Otostegia is a member of this family and comprises 20 species distributed over the east of Asia; among them, Otostegia persica (Burm.) Boiss (O. persica), locally called «Golder», is endemic to the south of Iran. It is a spiny shrub about 1.5 m in height with rectangular woody stems. Its leaves are opposite on the stems, with short petioles and obovate blades, and are covered with dense white hairs. The flowers have a funnel-shaped calyx with longitudinal ridges and a bilabiate white corolla with a hairy upper lip (9). The flowers of the plant are widely used as an additive to yoghurt, butter, milk and meat. The plant has also been used in Iranian traditional medicine as an analgesic in toothache and arthritis. A hydroalcoholic extract of O. persica alleviates the morphine withdrawal syndrome (10). O. persica extracts (methanolic, chloroform and hexane) showed antimicrobial activities against Gram-positive strains (11). The aqueous extract of the aerial parts of the plant has been used as an antispasmodic, antihistaminic and antiarthritic (12). Phytochemical studies on this plant resulted in the isolation and characterization of geraniol, eugenol, ceryl alcohol, hentriacontane, caffeic acid, p-hydroxybenzoic acid, β-sitosterol, β-sitosteryl acetate, β-amyrin, campesterol and stigmasterol (13). Oral administration of an ethanol extract of O. persica for 21 days showed an antidiabetic effect in rats (14). It has been reported that its ethanolic extract has anti-glycation properties attributable to the known compound 3,7-dihydroxy-4′,6,8-trimethoxyflavone (15). It has strong antioxidant properties, and our recent studies indicated that the methanolic extract of its aerial parts has an anti-diabetic effect through the stimulation of insulin release and pancreas tissue improvement (16,17). In addition, it can decrease the hepatic dysfunction originating from diabetes mellitus (18). In this study, we aimed to evaluate the hepatoprotective effect of the methanol extract of O. persica shoot in a liver injury model in rats.

Plant material extraction procedure

The aerial parts of O. persica were collected from Jiroft, Kerman, southeastern Iran, and taxonomically identified and approved by Dr. SM. Mirtadzaddini, Biology Department of Shahid Bahonar University of Kerman (voucher number: 40642, deposited in: Herbarium of Tehran University, director: Dr. F. Attar). The O. persica was powdered in an electrical grinder. The extraction was carried out through the maceration of dry plant powder in 80% methanol for 48 h at room temperature, followed by Soxhlet extraction with methanol. After the extraction, the methanol was evaporated in a rotary evaporator at 40-50 ºC and the extract was dried using a freeze-dryer at -50 ºC. The yield of extraction was 10%. The extract was prepared in distilled water before use.

Laboratory animals

Adult male Wistar rats with an average weight of 200-220 g were used in this assay.
They were purchased from the animal breeding laboratories of the Pasteur Institute (Tehran, Iran), had free access to food and water, and were maintained at a controlled temperature (24 ± 2 ºC) and light cycle (12 h light and 12 h dark).

Experimental design

The animals were divided into 6 groups of 6 rats each, orally treated with 50% CCl4 in liquid paraffin (2.5 mL/Kg bw, per os) 60 min after the administration of the O. persica methanol extract (at 200, 300 or 400 mg/Kg bw), Legalon containing 70% silymarin (420 mg/Kg bw) as a reference drug, or 0.5 mL distilled water. One group of normal (untreated) rats was also used in our study. Throughout the experiments, local ethical guidelines for the care of laboratory animals were observed. Twenty-four hours after the CCl4 administration, rats were sacrificed by an overdose of diethyl ether and blood samples were withdrawn, collected in heparinized tubes and centrifuged at 3000 × g for 10 min to obtain plasma. Plasma samples were used to determine the lipid peroxidation level as well as to test the aspartate aminotransferase (AST) and alanine transaminase (ALT) activities.
The liver of each rat was promptly removed and used to determine the tissue levels of malondialdehyde (MDA) and glutathione (GSH).

Biochemical assays

Pars Azmoon standard kits and an RA-1000 Autoanalyzer were used to measure the AST and ALT activities in plasma. The methodology described by Kurtel et al. (19) was used to determine the plasma lipid peroxidation level. In addition, rats were sacrificed using diethyl ether to determine the lipid peroxidation in liver tissue. The liver of each rat was immediately excised, chilled in ice-cold 0.9% NaCl and then perfused via the portal vein with ice-cold 0.9% NaCl. After washing with 0.9% NaCl, the method of Ohkawa et al. (20) modified by Jamall and Smith (21) was used to determine the lipid peroxidation in the tissue samples. Cellular GSH in liver tissue was evaluated by the Sedlak and Lindsay method (22).

Histopathological studies

For the histopathological study, the livers of six animals in each group were immediately removed and the tissues were fixed in 10% formalin for at least 24 h. Paraffin sections were then prepared (Automatic Tissue Processor, Lietz, 1512) and cut into 5 µm thick sections on a rotary microtome. Thereafter, the sections were stained with haematoxylin-eosin dye and mounted in Canada balsam. The histopathological slides were examined and photographs were taken with a photomicroscope. Histological damage was expressed using the following scoring system: Ø: absent; ⊥: minimal; +: mild; + +: moderate; + + +: severe.

Statistical analysis

The obtained data were analyzed by one-way ANOVA followed by Tukey's post-hoc test, and p < 0.05 was considered statistically significant.

Results and Discussion

The effects of the methanol extract of O. persica on the biochemical parameters of rats intoxicated by carbon tetrachloride (CCl4) were evaluated in this study. CCl4 was found to cause several-fold increases in plasma AST (2561.82%) and ALT (3206.53%) levels (Table 1). Moreover, the liver (117.21%) and plasma (345.77%) lipid peroxidation levels were increased significantly in the CCl4-treated group compared with the normal group, as evidenced by MDA determination. In contrast, the GSH content of the liver was decreased in the CCl4-treated group (51.5%). The plasma (26.6%) and liver (22.96%) MDA levels, as well as the plasma ALT (34.95%) and AST (31.78%), were significantly reduced in rats that received the 300 mg/Kg dose of the extract (Table 3 and Figure 1). Our results indicate that the methanol extract of O. persica exerts hepatoprotective properties against CCl4-induced liver damage. CCl4 is a well-known hepatotoxin, and exposure to this chemical is known to induce oxidative stress and cause liver injury through free radical formation (23). Table 1 notes: results are expressed as mean ± SEM; (a) (+) represents the percentage of increase and (-) the decrease in each value compared with either control or CCl4; (b) compared with control; (c) the dose unit is mL/Kg; (d) compared with CCl4 as hepatotoxin; *p < 0.05, **p < 0.01, ***p < 0.001, significant versus control or CCl4. The changes associated with CCl4-induced liver damage are similar to those of acute viral hepatitis (7).
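For reference, the percentage changes quoted above and in the tables follow the usual relative-change convention (an assumption about the authors' arithmetic, consistent with the reported values):

```latex
% Standard relative-change convention assumed for Table 1:
\[
\Delta\% = \frac{X_{\mathrm{treated}} - X_{\mathrm{control}}}{X_{\mathrm{control}}} \times 100
\]
% e.g., a plasma AST increase of 2561.82% means the CCl4-group value is
% about 26.6 times the control value.
```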
In accordance with our findings, it has been shown that the liver of CCl4-intoxicated rats exhibits massive fatty change, gross necrosis, broad infiltration of lymphocytes and Kupffer cells around the central vein, and loss of cellular boundaries (24-28). CCl4-induced hepatotoxicity is believed to include two phases. The initial phase involves the metabolism of CCl4 by cytochrome P450, which leads to the formation of free radicals (CCl3•, CCl3OO•) and lipid peroxidation (29). The second step involves the activation of Kupffer cells, probably through free radicals. The activation of Kupffer cells is accompanied by the production of proinflammatory mediators (30). As a result of the hepatic injury, the altered membrane permeability of the damaged hepatic cells causes their enzymes to be released into the circulation, as shown by the abnormally high levels of serum hepatospecific enzymes. Free radicals also affect the antioxidant defense mechanisms, reduce the intracellular concentration of GSH and decrease the activity of SOD and CAT. Lipid peroxidation is a chain reaction involving the oxidation of polyunsaturated fatty acids in membranes induced by free radicals and is an indicator of oxidative cell damage. Direct measurement of oxidative stress in humans is difficult since active oxygen species and free radicals are extremely short-lived (31). Instead, products of the oxidative process are measured. The elevation of MDA levels, one of the end products of lipid peroxidation in the liver, and the reduction of hepatic GSH levels are important indicators in CCl4-intoxicated rats (32). Glutathione exists in reduced (GSH) and oxidized (GSSG) states. GSH can be regenerated from GSSG through the enzyme glutathione reductase. In healthy cells and tissues, more than 90% of the total glutathione pool is in the reduced form (GSH) and less than 10% exists in the disulfide form (GSSG). An increased GSSG-to-GSH ratio is considered indicative of oxidative stress (33). Two compounds of the O. persica methanol extract, separated by column and paper chromatography, showed significant antioxidant activities compared with butylated hydroxyanisole (BHA) and alpha-tocopherol. These active compounds were identified as morin and quercetin (34). It is thought that antioxidants play a significant role in protecting living organisms from the toxic effects of chemical substances such as CCl4 and carcinogens (35). Morin has been shown to act as a potent antioxidant (36), a xanthine oxidase inhibitor (37) and a modulator of lipoxygenase and cyclooxygenase activities in the arachidonic acid cascade (38). Morin prevents acute liver damage by inhibiting the production of TNF-α, IL-6 and iNOS (39). Quercetin, a natural antioxidant, reveals its antioxidant properties by inhibiting lipid peroxidation via blocking the enzyme xanthine oxidase (40) and by directly scavenging hydroxyl, peroxy and superoxide radicals (41). Quercetin also potentiates the antioxidative defense mechanism by increasing the absorption of vitamin C (42) and by inhibiting structural damage to proteins (43).

Conclusion

The methanol extract of O. persica has a protective effect against acute liver damage, and the hepatoprotective mechanisms of this extract on CCl4-induced acute liver damage might be due to decreased lipid peroxidation (decreased MDA level and increased content of GSH).
More studies are needed to determine further mechanisms involved in the hepatoprotective effects of this plant.
3,040.8
2012-07-21T00:00:00.000
[ "Biology" ]
Immune genes are associated with human glioblastoma pathology and patient survival Background Glioblastoma (GBM) is the most common and lethal primary brain tumor in adults. Several recent transcriptomic studies in GBM have identified different signatures involving immune genes associated with GBM pathology, overall survival (OS) or response to treatment. Methods In order to clarify the immune signatures found in GBM, we performed a co-expression network analysis that grouped 791 immune-associated genes (IA genes) in large clusters using a combined dataset of 161 GBM specimens from published databases. We next studied IA genes associated with patient survival using 3 different statistical methods. We then developed a 6-IA gene risk predictor which stratified patients into two groups with statistically significantly different survivals. We validated this risk predictor on two other Affymetrix data series, on a local Agilent data series, and using RT-Q-PCR on a local series of GBM patients treated by standard chemo-radiation therapy. Results The co-expression network analysis of the immune genes disclosed 6 powerful modules identifying innate immune system and natural killer cells, myeloid cells and cytokine signatures. Two of these modules were significantly enriched in genes associated with OS. We also found 108 IA genes linked to the immune system significantly associated with OS in GBM patients. The 6-IA gene risk predictor successfully distinguished two groups of GBM patients with significantly different survival (OS low risk: 22.3 months versus high risk: 7.3 months; p < 0.001). Patients with significantly different OS could even be identified among those with known good prognosis (methylated MGMT promoter-bearing tumors) using Agilent (OS 25 versus 8.1 months; p < 0.01) and RT-PCR (OS 21.8 versus 13.9 months; p < 0.05) technologies. Interestingly, the 6-IA gene risk could also stratify patients within the proneural GBM subtype. Conclusions This study confirms the immune signatures found in previous GBM genomic analyses and suggests the involvement of immune cells in GBM biology. The robust 6-IA gene risk predictor should be helpful in establishing prognosis in GBM patients, in particular in those with a proneural GBM subtype, and even in the well-known good prognosis group of patients with methylated MGMT promoter-bearing tumors. Background Glioblastoma multiforme (GBM) is the most common and aggressive primary brain tumor in adults. Despite recent advances in multimodal therapy, the prognosis remains poor [1]. Conventional treatment, generally maximal safe surgical resection followed by combined radiation and chemotherapy with temozolomide, fails to prevent tumor recurrence. Recently, molecular subtypes of brain tumors have been characterized by microarray gene expression profiles [2][3][4][5][6]. These subgroups have been associated with significant differences in tumor aggressiveness, progression, and/or prognosis [7]. Gene expression analysis has been reported to be more accurate than conventional histology [8,9]. Due to this greater accuracy, expression-based classifications offer an opportunity to improve the molecular classification of gliomas [6,7] and the clinical diagnosis of glioblastomas [2]. Such advances could be helpful in designing future therapeutic trials [4,10]. Many arguments support a link between the immune system and glioma pathogenesis. In several epidemiologic studies, glioma incidence is inversely associated with allergy history [11][12][13].
T-lymphocyte infiltration has been reported in certain glioma patients, and an elevated number of intratumoral effector T cells has recently been correlated with better survival in GBM patients [14]. Interestingly, several transcriptomic studies using microarray technologies have also reported an immune signature in gene expression profiling of glioma [8,10,15,16] and GBM [17][18][19][20]. A signature associated with myeloid/macrophagic cells has been reported in most of these studies [10,15,16,18,20], a finding consistent with the known macrophage/microglia infiltration in GBM [21][22][23]. More recently, transcriptomic studies in glioma have revealed different signatures involving immune genes associated with overall survival (OS) [8,10,15,19]. Gravendeel et al. reported an immune response signature associated with poor survival in glioma (Cluster 23, the M function category) [8]. Murat et al. reported better outcome in patients with gene clusters characterizing features of innate immune response and macrophages (G24 cluster, 134 probes, among them probes for the CD11b and CD163 genes) [19]. In contrast, Ivliev et al. found an immune module (M7 module) associated with short survival that includes 449 genes, among them T-cell markers (CD4, CD8) and myeloid markers (MHC class II, TLR1 and TLR2) [15]. An NK cell signature (G12 gene cluster, including Fc gamma receptors and DAP-12) has previously been reported in one study, with higher expression in primary GBM with shorter survival compared with low-grade astrocytomas and secondary GBM [10]. In order to clarify the possible role of immune cells in GBM pathology and OS, we have performed a co-expression network analysis focusing on 791 genes linked to the immune system. Using a meta-analysis approach and independent validation cohorts, we identified an immune signature of GBM linked to innate immunity involving myeloid and NK cells, as well as a 6-immune-gene risk model stratifying patients into two groups with significantly different OS.

Immune-associated (IA) genes

Immune-associated genes were defined as genes annotated with the 'immune system process' Gene Ontology (GO) biological process term (GO:0002376) by the AmiGO annotation tool (505 genes). Important immune-associated genes not annotated with GO:0002376 in GO, such as cytokines, cell markers and immunomodulation genes (286 genes), were added to this GO gene list. The resulting IA gene list is composed of 791 genes (Figure 1) (Additional file 1: Table S1).

Patients and datasets

For the survival analysis we used four publicly available independent Affymetrix microarray datasets (Figure 1) [2,5,7,24]. Moreover, a local cohort including 41 patients with newly diagnosed grade IV glioma admitted to the neurosurgery departments of Rennes and Angers University Hospitals was analyzed using a different technology (Agilent). Finally, a local cohort of 57 newly diagnosed GBM patients, admitted to the neurosurgery department of Rennes University Hospital and homogeneously treated by surgery and radio-chemotherapy with temozolomide according to the Stupp schedule, was analyzed by reverse transcriptase quantitative polymerase chain reaction (Q-PCR). All patients of the local cohorts signed their informed consent. All cohort and patient characteristics are detailed in Table 1. The MGMT status of the local cohort was obtained by a pyrosequencing methylation assay with the threshold of CpG methylation set to ≥9% [25,26].
Local tumor subtypes were determined using the centroid-based classification algorithm described by Verhaak et al. [7].

Weighted gene co-expression network analysis (WGCNA)

Signed weighted gene co-expression network analysis was performed on the GSE13041 data set [24] (Figure 1 and Table 1). A co-expression network was constructed on the basis of the IA genes. For all possible pairs of the variable genes, Pearson correlation coefficients were calculated across all samples. The correlation matrix was raised to the power 6, thus producing a weighted network. The weighted network was transformed into a network of topological overlap (TO), an advanced co-expression measure that considers not only the correlation of 2 genes with each other, but also the extent of their shared correlations across the weighted network. Genes were hierarchically clustered on the basis of their TO. Modules were identified on the dendrogram using the Dynamic Tree Cut algorithm [27]. Each gene's connectivity was determined within its module of residence by summing up the TOs of the gene with all the other genes in the module. By definition, highly connected (hub) genes display expression profiles highly characteristic of their module of residence [28]. To define a measure of prognostic significance, a univariate Cox proportional hazards regression model was used to regress patient survival on the individual gene expression profiles. The resulting p-values were used to define a measure of prognostic significance. To obtain a condensed representative profile of each module, focus was placed on the top 20 hub genes in the module. Co-expression network analyses were performed using the WGCNA R package. Survival analyses were performed using the survival R package.

WGCNA modules functional annotation and enrichment

Functional annotation of the IA gene co-expression modules was performed on the basis of the analysis of their top 20 hub genes and survival-associated genes in each module. DAVID software (http://david.abcc.ncifcrf.gov/) was used to test each module for genome enrichment in GO process terms, PIR superfamily, Panther or KEGG pathways, InterPro or SwissProt keywords, and to test IA genes having an impact on overall survival (Fisher's exact tests with Benjamini-Hochberg correction for multiple testing).

IA genes associated with patient outcome

Molecular screening of IA genes was performed on 115 GBM patients included in a whole-genome Affymetrix meta-analysis dataset described by de Tayrac et al. [2]. Association between expression levels and patient outcome defined IA genes having an impact on overall survival (OS). Several survival analysis methods were used to identify relevant associations: (i) a Cox-step method [29], (ii) a differential analysis between the first and the fourth quartile, (iii) a classical Cox analysis (Figure 1). Adjusted p-values were calculated by controlling the false discovery rate with the Benjamini-Hochberg correction. Overall survival was estimated by the Kaplan-Meier method. Comparisons between survival groups were performed by the log-rank test. Univariate Cox analyses were performed with gene expression data as a predictor and overall survival in months as the response.

IA genes risk model

An optimal survival model was built on the IA genes associated with survival, as described in de Tayrac et al. [2]. Analyses were performed using the survival, survivalROC and rbsurv R packages. These packages selected survival-associated genes and estimated the regression coefficients of the optimal survival model after adjustment on the study factor. All analyses were stratified on age.
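In the notation of Zhang and Horvath, on which the WGCNA package is built, the network construction described above corresponds to the following; the signed-network rescaling of the correlation is the package's convention and is assumed here, since the text states only that the correlation matrix is raised to the power 6:

```latex
\[
a_{ij} = \left(\frac{1+\operatorname{cor}(x_i,x_j)}{2}\right)^{\beta}, \quad \beta = 6,
\qquad
k_i = \sum_{u \neq i} a_{iu},
\qquad
\mathrm{TO}_{ij} = \frac{\sum_{u \neq i,j} a_{iu}\,a_{uj} + a_{ij}}{\min(k_i,k_j) + 1 - a_{ij}}
\]
```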
Figure 1. Analysis workflow. 791 IA genes were studied in three analyses: weighted gene co-expression network analysis (WGCNA) and functional annotation were performed on the GSE13041 data set (blue box); 108 survival-associated IA genes were found by 3 different methods (Step-Cox, Quartile, Z-score) on the de Tayrac dataset (middle green box); the survival IA gene risk model was built on the de Tayrac dataset and validated on 5 datasets: GSE13041, TCGA, GSE2727, a local Agilent dataset and a local RT-Q-PCR dataset (right-hand-side green box).

Q-PCR procedures

Total RNA was isolated using the RNeasy Plus Mini QIAGEN kit from fresh-frozen glioblastoma samples. RNA integrity was confirmed using the Agilent Bioanalyzer (RNA 6000 Nano assay kit). cDNA synthesis was performed with a High Capacity cDNA Reverse Transcription kit with RNase inhibitor (Applied Biosystems). Q-PCR reactions were run on the 7900HT Fast Real-Time PCR System using Applied Biosystems TaqMan FAM-labeled probes for ACVR2, CD22, MNX1, ARG1, RPS19 and FGF2, and the three housekeeping genes TBP, HPRT1 and GAPDH. Liver cells, testis cells, B lymphocytes and U251 cells were used as positive controls. The relative amounts of the gene transcripts were determined using the ΔΔCt method, as described by the manufacturer.

IA genes co-expression modules

The WGCNA algorithm was applied to the Lee dataset (GSE13041) to explore transcriptional relationships between IA genes and highlight consistent patterns of gene co-expression [24]. The weighted gene co-expression network constructed on the basis of the IA genes revealed 6 modules, each containing coordinately expressed genes potentially involved in shared cellular processes. To associate putative relevant processes and structures with the observed gene co-expression, we analyzed the functional enrichment of each module. For each module, the top five hub IA genes and the first five genes associated with survival are provided in Figure 2. The modules' annotations were obtained with the top 20 hub IA genes associated with each module and all IA genes associated with survival within this module (Figure 2). The IA gene co-expression modules were thus designated as follows: NK cells and innate immunity (blue module), Cytokines and MHC class I, and the other modules shown in Figure 2.

IA genes associated with survival

Interestingly, two co-expression modules were significantly enriched in IA genes having an impact on overall survival: the NK cells and innate immunity signature module and the Cytokines and MHC class I signature module (p < 0.01). Three different methods were then applied to further analyze the IA genes associated with survival using the de Tayrac dataset. The step-Cox model identified 52 genes associated with overall survival. The quartile model found 46 genes significantly differentially expressed between the lowest and the highest survivors. The classical Cox method identified 28 genes associated with patient outcome (Additional file 1: Table S2). The overlap between the three methods is presented in Figure 3. In conclusion, 108 out of 791 IA genes were found to be associated with GBM patient survival by at least one of the three statistical methods.
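The ΔΔCt quantification cited in the Q-PCR procedures above is the standard comparative-Ct method; written out (with the housekeeping genes serving as the reference and a calibrator sample as baseline):

```latex
\[
\Delta C_t = C_t^{\text{target}} - C_t^{\text{reference}}, \qquad
\Delta\Delta C_t = \Delta C_t^{\text{sample}} - \Delta C_t^{\text{calibrator}}, \qquad
\text{relative amount} = 2^{-\Delta\Delta C_t}
\]
```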
IA genes risk-score model and MGMT methylation status

In univariate Cox analysis using the de Tayrac dataset, the only factors associated with survival were the MGMT promoter methylation status and the 6-IA gene risk category. Sex, histology, age and KPS were not statistically associated with patient outcome. In multivariate analysis, the MGMT promoter methylation status and the 6-IA gene risk category remained significant (p = 0.02 and p = 0.01, respectively). The survival difference defined by the 6-IA gene risk remained significant when considering patients bearing tumors with methylated MGMT promoters (25 versus 8.1 months, n = 8 and 16 respectively, p < 0.01; Figure 5C), as in the Lee dataset (21.2 versus 13.1 months, p < 0.05, Figure 5A). In the Q-PCR cohort, the MGMT status and the 6-IA gene risk category were also significantly associated with the OS of GBM patients, in both univariate and multivariate analysis (p = 0.045 and p = 0.036, respectively). Nineteen patients with low risk had a median survival of 21.8 months versus 13.9 months in three patients with high risk. Although the number of high-risk patients is low, the difference remains significant (p < 0.05; Figure 5D). Only in the TCGA cohort could no significant difference in survival be found among patients bearing tumors with methylated MGMT promoters (Figure 5B). This might be explained by insufficient statistical power, especially since a significant difference was found in the 122 unmethylated MGMT promoter tumors from the TCGA cohort (data not shown).

IA genes risk-score model and GBM subtypes

The 6-IA gene risk predictor was also applied to a local cohort and to the cohorts described by Lee and Verhaak [7,24], taking into account the recent GBM classification published by Phillips and Verhaak [6,7]. As only the proneural subtype is associated with survival [24], GBM specimens were divided into two sub-groups: proneural (25% in GSE13041, 38% in TCGA, 29% in the local cohort) and non-proneural (Table 1). The 6-IA gene risk predictor classified the patients with proneural GBM into two groups exhibiting significant OS differences: 11.9 versus 28.7 months (p < 0.01; [24]); 11.3 versus 3.4 months (p < 0.05, [7]); 24.8 versus 4.7 months (p < 0.02; in our local cohort) (Figure 6 A-C). Conversely, no difference was observed in the non-proneural group of GBM (Figure 6 D-F).

Discussion

In this study, we were able to link IA gene expression patterns with GBM biology and patient survival. Indeed, our co-expression network analysis highlighted clusters of IA genes and revealed related immune signatures marking innate immunity, NK and myeloid cells, and cytokine/MHC class I profiles. Furthermore, 108 IA genes were associated with OS. Among these, 6 IA genes were included in a weighted multigene risk model that can predict outcome in GBM patients. Several studies have previously reported an immune signature in GBM [8,10,[15][16][17]19,20,30]. A signature associated with myeloid/macrophagic cells was reported in most of these [10,15,16,18,20]. We also found such a signature linked to one co-expression module for which annotation enrichment found monocytes, leukocyte activation and macrophage-mediated immunity. The well-known macrophage/microglia infiltration in GBM can account for up to one-third of cells in some GBM specimens [21][22][23]. Unlike Ivliev et al. [15], we were unable to identify a T-cell signature in our analysis.
Nevertheless, the association of two gene modules with GBM patient survival suggests that innate immunity, including NK cell functions and cytokine/MHC class I profiles, might affect outcome in GBM patients. An NK cell signature has previously been reported in one study in primary GBM [10]. NK cell infiltration was described earlier in glioma [31] but was not confirmed by others [32]. It is noteworthy that in murine glioma models, various vaccine strategies using CCL2 [33], CpG [34], IL12-expressing stromal cells [35] or IL23-expressing dendritic cells [36] induced an increased recruitment of NK cells at the tumor site, associated with better overall survival. Most of the chemokines present in the cytokines/MHC class I module are involved in recruiting T cells, monocytes/macrophages and neutrophils, e.g. the CX3CR1/CX3CL1, CXCL9 and CXCR2 genes. In addition, most of the cytokines found, such as the MIF, IL5, IL12A and IL16 genes, are known to regulate macrophages/monocytes, eosinophils, NK and T cells. Lohr has also reported that intratumoral infiltration of effector T cells is associated with better survival in GBM [14]. Altogether, one could speculate that these two modules associated with overall survival reflect the recruitment and activation of immune cells such as NK cells, T cells, macrophages/monocytes, or neutrophils that would interfere with GBM patients' survival. Interestingly, several clinical trials using dendritic cells have reported that the presence of T cells and neutrophils at the tumor site is associated with longer survival of the vaccinated patients [37]. Recently, Ducray et al. reported that infiltration of both CD3+ T cells and CD68+ macrophages was observed more frequently in GBM responders than in non-responders to radiotherapy [17]. However, in the present study, we did not find any association between key regulators of T cell biology, such as GATA3, TBX21 (TBET) and RORC (RORgamma-t), and patients' survival (data not shown). The small numbers of these infiltrating cells usually reported in GBM specimens might have impaired the identification of such genes by a transcriptomic approach. In addition to the co-expression network analysis, we found 108 IA genes directly associated with OS in GBM patients using three different statistical methods. These genes are known to be involved in the biology of B cells (e.g. immunoglobulins and the BLNK, CD19, CD20 and CD22 genes), T cells (e.g. CD1E, PTCRA, CD247), NK cells (e.g. the KIR2DL1, KIR2DL4 and KIR3DL3 genes), and myeloid cells, including monocytes/macrophages (e.g. the ADAMDEC1, CD89/FCAR, CD64/FCGR1B and FCGR1C genes) and neutrophils (e.g. the CD89 and NCF1B genes). Surprisingly, other important genes expressed by glioma-infiltrating microglia/macrophages, such as CD163 and AIF1 (IBA1), were not significantly associated with patients' survival (data not shown). Komohara et al. have recently reported, using an immunohistochemistry approach, that the presence of CD163+ CD204+ M2-type macrophagic cells correlates with glioma grading and survival [38]. This discrepancy between our results and the Komohara et al. study could be explained by the different technical approaches used to detect these markers: at the mRNA level in our genomic study and at the protein level in [38]. Other chemokine and cytokine genes were also found, such as the CCL15, CCL17, IL1B and IL5 genes.
In addition, some genes are known to be involved in the modulation/suppression of the immune response, such as the APRIL, ARG1, CD70, B7-H4, ICOSLG, NOS2A, TGFB1 and TWEAK genes. Finally, we developed a 6-IA-gene risk predictor of OS in GBM patients. The genes were selected for an optimal survival model built on IA genes associated with survival, as described in de Tayrac et al. [2]. This 6-IA gene risk is able to discriminate patients treated by chemo-radiation therapy into two distinct groups with significantly different survivals. These genes (ACVR2A, ARG1, CD22, FGF2, MNX1 and RPS19) were present in all but one of the co-expression modules. The 'regulation of immune response' module, which contains no gene retained in the 6-IA-gene risk predictor, is the only one that does not include survival-associated genes. The ACVR2A, CD22 and MNX1 genes were found to be associated with GBM patient survival by all three statistical methods. Intriguingly, these 6 IA genes are not specific markers of known immune cell subpopulations. They are involved in the activation or the inhibition of the immune system and, as a result, they impact positively or negatively on the risk predictor. For example, the expression of ARG1, a gene involved in immunosuppression, contributes positively to the 6-IA-gene risk index and therefore decreases the patient's probability of survival. Although these genes are known in other cancers, they have not been described in GBM. ACVR2A is a receptor for activin-A and controls cell proliferation [39], for example the proliferation of prostate cancer cells [40]. Mutations of ACVR2A are commonly found in unstable colonic cancers [41] and, interestingly, infiltration of CD3 T cells is associated with mutated ACVR2A genes [42]. ARG1 (arginase-1) is a cytosolic enzyme that hydrolyses arginine to urea and ornithine [43]. ARG1 has recently been implicated in immunosuppressive mechanisms through the reduction of T-cell activation [44]. CD22 cannot be considered merely a B-cell receptor that mediates cell adhesion and signaling [45,46], since Mott et al. reported that neurons can secrete this molecule [47]. Neuronal secretion of CD22 inhibits microglia activation via interaction with CD45 [47]. FGF2 (fibroblast growth factor-2) stimulates GBM growth [48]. Nevertheless, the high-molecular-weight FGF2 isoform inhibits glioma proliferation [49] and is involved in a radiation therapy resistance pathway [50]. Interestingly, plasma levels of FGF are higher in GBM patients compared with controls [51]. The MNX1 gene is involved in the Currarino syndrome, a congenital malformation [52], and has also previously been reported in CD34+ cells, B cells and B lymphoid tissues [53]. The function of MNX1 in immune cells and GBM biology has not yet been demonstrated, but it has recently been described as a transcription factor implicated in the development of both solid and hematological cancers [54]. RPS19 is a protein of the 40S ribosomal subunit involved in pre-rRNA processing, but it also has extra-ribosomal functions. Indeed, RPS19 can act as a chemokine that negatively regulates macrophage migration inhibitory factor (MIF) [55]. Moreover, RPS19 can interact with FGF2 to drive differentiation or proliferation pathways of various cell types [56]. Only one statistical method, the quartile method, identified this gene as significant (Figure 3), but the co-expression analysis found it to be significantly associated with OS (Figure 2).
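The paper does not print the fitted coefficients, but a risk index of this kind is conventionally the linear predictor of the Cox model, so each patient's score is assumed to take the form:

```latex
\[
\text{risk}(p) = \sum_{g \,\in\, \{ACVR2A,\;ARG1,\;CD22,\;FGF2,\;MNX1,\;RPS19\}} \beta_g \, x_{g,p}
\]
```

where x_{g,p} is the expression of gene g in patient p and β_g its regression coefficient; a positive β_g (as described for ARG1) raises the score and lowers the predicted survival, and patients are stratified into low- and high-risk groups by thresholding the score.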
To validate the strength of our 6-IA-gene risk predictor, the expression of these genes was tested in a local cohort using RT-Q-PCR. This technique has at least two advantages: it is used routinely in most laboratories and is relatively inexpensive compared with genomic microarray technologies. The test cohort was small (57 GBM specimens) but homogeneous in terms of treatment: combined surgery and chemo-radiation therapy [1]. In addition, the MGMT methylation status, which is the best predictor of response to the current combination treatment, was determined for all GBM specimens. Applied to this small cohort, the 6-IA-gene risk predictor was even able to discriminate significantly between patients with high and low risk within the good prognosis group, defined by methylation of the MGMT promoter. Recent advances in glioma classification have been achieved using genomic analysis. It is now accepted that GBM can be categorized into four subtypes, defined as the proneural, neural, mesenchymal and classical groups [6,7,24]. The clinical outcome of patients differs according to the GBM subtype. For instance, patients with the proneural subtype live longer, and the standard treatment does not increase their overall survival [6,7]. In contrast, the overall survival of patients with the classical or mesenchymal subtype is significantly increased with the standard treatment. Interestingly, we have shown that our 6-IA-gene risk predictor was powerful in the proneural GBM subtype but not in the other subtypes. Proneural GBM is an atypical subtype associated with younger age, PDGFRA gene amplification, IDH1 mutations and TP53 mutations [7]. Because patients with proneural GBM have longer survival, one could speculate that the anti-tumor immune response has more time to occur and to slow down tumor progression in those patients with a particular immune profile, revealed by our 6-IA-gene risk predictor.

Conclusions

In conclusion, we have demonstrated that GBMs are characterized by an immune signature which could reflect the infiltration and activation of immune cells or the immunosuppression mechanisms developed by the tumor itself. Several IA genes were found to be associated with the clinical outcome of GBM patients, allowing us to describe a 6-IA-gene risk predictor. This risk model can discriminate between patients with different outcomes, even within the good prognosis group based on MGMT status and within the proneural GBM subtype group. Further studies are needed to understand how these IA genes are involved in the control of GBM progression. Overall, this study highlights the important role of the immune system in the battle against the tumor and suggests new strategies for the further development of immunotherapy for GBM patients.

Additional file

Additional file 1: Table S1. List of IA genes. Table S2. IA genes associated with survival in the 3 statistical methods.
5,702.2
2012-09-14T00:00:00.000
[ "Biology", "Medicine" ]
Radio-Absorbing Magnetic Polymer Composites Based on Spinel Ferrites: A Review Ferrite-containing polymer composites are of great interest for the development of radar-absorbing and -shielding materials (RAMs and RSMs). The main objective of RAM and RSM development is to achieve a combination of efficient electromagnetic wave (EMW) absorption methods with advantageous technological and mechanical properties as well as acceptable weight and dimensions in the final product. This work deals with composite RAMs and RSMs containing spinel-structured ferrites. These materials are chosen since they can act as efficient RAMs in the form of ceramic plates and as fillers for radar-absorbing polymer composites (RAC) for electromagnetic radiation (EMR). Combining ferrites with conducting fillers can broaden the working frequency range of composite RAMs due to the activation of various absorption mechanisms. Ferrite-containing composites are the most efficient materials that can be used as the working media of RAMs and RSMs due to a combination of excellent dielectric and magnetic properties of ferrites. This work contains a brief review of the main theoretical standpoints on EMR interaction with materials, a comparison between the radar absorption properties of ferrites and ferrite–polymer composites and an analysis of some phenomenological aspects of the radar absorption mechanisms in those composites.

Introduction

The use of electromagnetic radiation (EMR) in science and technology provided a wide range of opportunities and triggered intense technical progress in the 20th century. However, the benefits of EMR have a reverse side, electromagnetic pollution. This is commonly understood as the growing electromagnetic wave (EMW) intensity in urban spaces, accommodation areas, industrial zones and even in the whole environment within a wide EMR spectrum, from radio frequencies to microwaves, excluding ionizing EMR [1][2][3][4][5]. One of the most widely discussed problems is the long-term human body exposure to non-ionizing low-power EMR [6][7][8][9]. For example, there are indications that a number of borderline personality disorders, depression, hyperkinetic behavior syndrome, child hyperactivity and suicidal tendencies can originate from long-term EMR exposure in everyday activity [10]. Typical EMR sources surrounding humans in the 21st century are mobile phones, microwave communication devices, TV sets, Wi-Fi, computers, antennas, and satellite and mobile communication. Those EMR sources can emit in a range from extremely low frequencies to microwaves (~1 Hz to hundreds of GHz). Microwave radiation doubtlessly exerts a direct impact on the human body by interacting with the water molecules inside it. This interaction causes sleeping disorders, emotional instability and possible cumulative effects, leading to carcinoma formation. There are numerous studies demonstrating the effects of these everyday devices on human health. Furthermore, electromagnetic pollution is a serious challenge in the design of electronic devices, antennas and other EMW equipment [11,12]. Parasitic signals, noise and a high electromagnetic background can disrupt equipment operation. To avoid the development of critical equipment operation conditions, one should undertake measures providing electromagnetic compatibility between different equipment units or maximizing equipment performance under specific operation conditions. The above list of problems originating from the wide spreading of EMR is not exhaustive, but those problems are
sufficient to demonstrate the necessity of reducing the electromagnetic background in living areas. The best solution for minimizing EMW distribution in a limited space is the use of electromagnetic shields (EMSs), the simplest of which are metallic sheets (plates) or wire mesh. However, when this solution is used, multiple EMW reflection merely redistributes the electromagnetic background in the space and can even aggravate the problem [13]. Minimization of the electromagnetic background requires the use of radar-absorbing materials (RAMs), capable of complete internal absorption of EMW energy and its conversion to heat. One can thus avoid multiple EMW reflection and reduce the electromagnetic background in a large space. An advanced civilian RAM should meet the following requirements: light weight (low density), easy mechanical treatment, manufacturability (the production route should not include complex processes), a maximally wide working frequency range and atmospheric resistance [14]. Good options are polymer composites, in which organic materials act as the matrix and EMR-interacting powders as the fillers [15]. Logically, the performance of radar-absorbing composites is largely determined by the choice of fillers readily interacting with EMR. The fillers give rise to dielectric and magnetic losses, which will be dealt with below. Polymers are typically considered only as matrices binding the other components, because most of the currently available and widely used thermoplastic and thermosetting composites do not exhibit radar-absorbing properties in the MHz and GHz EMR spectrum regions that are most widely used in civilian applications [16]. One cannot, however, definitively affirm that the dielectric properties of polymers do not affect radar absorption by polymer composites [17]. Later, we will demonstrate that changes in the dielectric and magnetic permeabilities, even by a few fractions of a unit, can cause noticeable changes in the radar absorption properties of radar-absorbing composites.
EMR Interaction with Materials Before addressing the radar-absorbing properties of RAMs, one should consider the main definitions relating to the electrophysical parameters of materials and the theoretical standpoints on EMW interaction with materials. It should be stressed that EMR is defined as the propagation of interrelated electric and magnetic fields through tremendous distances in space. EMR interacts both with the electric charges and with the magnetic moments of the material through which it propagates. An electric field applied to a material shifts opposite charges or rotates electric dipoles in the material bulk. This phenomenon is referred to as polarization. One can indirectly assess the polarization capability of a material from its relative dielectric permeability ε_r, which enters one of Maxwell's constitutive equations. Since EMR propagation is described by a harmonic sine law, oscillatory movement of the charges occurs. Since harmonic processes are written in complex numbers, the dielectric permeability is written as follows [18]:

ε_r* = ε_r′ − i·ε_r″    (1)

The real part of ε_r* is related to polarization processes and characterizes the material's capability to accumulate charge, whereas the imaginary part is related to polarization (dielectric) losses. The charges can be covalently bound electrons, electrons and holes in the conduction band (or in the valence band), polarons, dipoles and defect-related electrons. High relative dielectric permeability is inherent to ferroelectrics and ionic-bond crystals. Dielectric permeability is usually measured in an AC electric field (up to 1 MHz) or using EMR in the microwave region [19]. Furthermore, application of an AC electric field produces a bias current, which also depends on polarization processes and on the DC electrical conductivity of the material.
Magnetic field application to a material produces magnetic induction through reorientation of the magnetic moments of atoms or ions. The magnitude of the magnetic induction is proportional to the magnetic permeability of the material. For EMR interaction with materials, the magnetic permeability is written in a complex form:

μ_r* = μ_r′ − i·μ_r″    (2)

By analogy with the complex dielectric permeability, the real part of the complex magnetic permeability shows the material's magnetization capability, whereas the imaginary part characterizes the quantity of energy lost to remagnetization. The complex dielectric permeability ε_r* and the complex magnetic permeability μ_r* exhibit a clear frequency dependence (frequency dispersion). In general, this originates from the fact that the electric dipoles and the magnetic moments of atoms respond with a delay to the rapidly changing electric and magnetic fields. To characterize the properties of radar-absorbing materials, one should know the frequency dependences of the complex dielectric and magnetic permeabilities. To demonstrate this, one can consider EMW attenuation in a material with non-zero loss. The radar absorption phenomenon is related to the loss of energy of a flat EMW during its propagation through the bulk of a material and its conversion to heat. This process is accompanied by a decrease in the amplitude of the electromagnetic wave, written as I(x) = I_0·e^(−α·x), where α is the attenuation coefficient. The attenuation coefficient is calculated using the following formula:

α = (√2·π·f/c)·√[ (μ_r″ε_r″ − μ_r′ε_r′) + √( (μ_r″ε_r″ − μ_r′ε_r′)² + (μ_r′ε_r″ + μ_r″ε_r′)² ) ]    (3)

One can therefore demonstrate, in a first approximation, that radar absorption in a material depends on its magnetic and dielectric permeabilities, as well as on their frequency dependences. According to the literature, EMR interaction with monolithic materials can occur through several possible scenarios: EMW reflection from the material's surface, EMW absorption by the material and propagation through the material. Three relative coefficients are often used for EMW power characterization: the reflection coefficient (R), the absorption coefficient (A) and the transmission coefficient (T), whose sum equals 1 [20]:

R + A + T = 1    (4)

It is obvious that radar-shielding materials (RSMs) should exhibit the highest reflection (or absorption) coefficient and the lowest transmission coefficient, whereas RAMs should have the highest absorption coefficient and the lowest reflection and transmission coefficients. One should note, however, that RAM applicability criteria may vary depending on the RAM testing setup. For example, test RAMs are often placed on a perfect reflector (metal) plate. Then, the measurement criterion is the reflection coefficient of the specimen on the metallic plate (reflection loss). The reflection loss is calculated using the formulas given below [19]:

Z_in = Z_0·√(μ_r*/ε_r*)·tanh( i·(2π·f·h/c)·√(μ_r*·ε_r*) ),  R_l = 20·lg|(Z_in − Z_0)/(Z_in + Z_0)|    (5)

where Z_in is the wave impedance of the specimen, Z_0 is the characteristic impedance of free space, h is the absorber thickness and c is the light velocity. These formulas describe experimental data very well and allow one to evaluate the radar absorption of composites from experimental spectra of the complex ε_r* and μ_r* without the need to produce massive and costly specimens. The lowest R_l can be achieved by matching the impedances Z_in and Z_0. All the aspects relating to the effect of RAM thickness and electrophysical parameters on interference (resonance) absorption were addressed in good detail earlier [21]. There are also methods for calculating
the matching conditions, which, depending on the EMR frequency (wavelength), can yield sets of real and imaginary parts of ε_r* and μ_r*, or thicknesses for which EMR will be attenuated to the greatest extent at the preset parameters [22,23]. When it comes to measuring the radar-shielding properties of RSMs, the experiments are conducted without a metallic plate behind the RSM. Both measurement setups can be easily implemented with a vector network analyzer, which allows for measuring A, T and R (or the S parameters) as well as the EMW phase over a wide frequency range. This set of measured parameters allows for calculating the frequency dependence of the complex dielectric and magnetic permeabilities using various methods [24,25]. The most widely used options are a coaxial waveguide, a rectangular waveguide and free-space measurements (Figure 1). Shielding effectiveness (SE) is often singled out among the parameters describing the radar-shielding properties of materials. The complete shielding effectiveness (or the transmission coefficient) can be written through the magnitudes of the electric and magnetic fields as SE_T = 20·lg(E_t/E_0) = 20·lg(H_t/H_0). At SE_T > 10 dB, the shielding effectiveness contains two terms, reflection shielding SE_R and absorption shielding SE_A [26]:

SE_T = SE_R + SE_A    (6)

which are written as

SE_R = −10·lg(1 − R)    (7)
SE_A = 8.68·α·h    (8)

where α is the attenuation coefficient and h is the sample thickness. At the end of this section, it should be noted that magnetoelectric sensors, which are very well described in a review [27], can be successfully used to detect electromagnetic pollution.
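To make Equations (3)-(8) concrete, the following minimal Python sketch (our illustration, not code from the cited works) evaluates the attenuation coefficient, the metal-backed reflection loss and the shielding-effectiveness decomposition for a hypothetical ferrite-polymer layer; the ε_r*, μ_r* and R values are assumed constants, whereas real composites show strong frequency dispersion.

```python
# A minimal sketch, not taken from the review: evaluating Eqs. (3)-(8) for a
# hypothetical ferrite-polymer layer with assumed, frequency-independent
# eps_r* and mu_r* (real materials are strongly dispersive).
import numpy as np

c = 2.998e8  # speed of light, m/s

def attenuation(f, eps, mu):
    """Attenuation coefficient alpha (1/m), Eq. (3); eps, mu given as eps' - i*eps''."""
    e1, e2 = eps.real, -eps.imag
    m1, m2 = mu.real, -mu.imag
    d = m2 * e2 - m1 * e1
    return np.sqrt(2) * np.pi * f / c * np.sqrt(d + np.sqrt(d**2 + (m1 * e2 + m2 * e1) ** 2))

def reflection_loss(f, eps, mu, h):
    """R_l (dB) of a layer of thickness h (m) on a perfect reflector, Eqs. (4)-(5)."""
    z_in = np.sqrt(mu / eps) * np.tanh(1j * 2 * np.pi * f * h / c * np.sqrt(mu * eps))
    return 20 * np.log10(np.abs((z_in - 1) / (z_in + 1)))  # z_in normalized to Z_0

f = np.linspace(0.1e9, 10e9, 2000)       # 0.1-10 GHz
eps_r, mu_r = 10.0 - 2.0j, 2.5 - 1.2j    # assumed illustrative values
h = 5e-3                                 # 5 mm layer
rl = reflection_loss(f, eps_r, mu_r, h)
i = rl.argmin()
print(f"min R_l = {rl[i]:.1f} dB at {f[i]/1e9:.2f} GHz")

# Shielding-effectiveness decomposition, Eqs. (6)-(8), for an assumed power
# reflection coefficient R at the same frequency.
R = 0.6
se_r = -10 * np.log10(1 - R)                        # reflection term, dB
se_a = 8.68 * attenuation(f[i], eps_r, mu_r) * h    # absorption term, dB
print(f"SE_R = {se_r:.1f} dB, SE_A = {se_a:.1f} dB, SE_T = {se_r + se_a:.1f} dB")
```

The dip of R_l(f) near 3 GHz for these numbers is the interference (quarter-wave) minimum discussed below; shifting ε_r* or μ_r* by even a few tenths visibly moves this minimum, which is the sensitivity the review refers to.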
Electromagnetic Properties and Synthesis Methods of Spinel Ferrites Among transition metal oxides, one can distinguish iron oxides, joint oxides of iron with other metals and solid solutions of oxides. The abovementioned joint oxides of iron with other metals are commonly referred to as ferrites [28]. Out of the variety of ferrites, one can single out spinel ferrites, which are widely used as magnetic materials in electrical engineering, electronics and microwave devices [29]. The term spinel ferrite originates from the fact that the crystallographic structure of this material is isomorphic to that of spinel crystals (MgAl2O4), having the Fd-3m space group. The crystal lattice of this ferrite is shown in Figure 2. There are normal, mixed and inverse spinel ferrites [30]. In inverse spinel ferrites, the Me2+ cations and half of the Fe3+ cations occupy the octahedral B positions, whereas the tetrahedral A positions are occupied by the remaining Fe3+ cations; in a normal spinel, Me2+ cations occupy only the A positions, whereas Fe3+ cations occupy the octahedral B positions. The example shown in Figure 2 is the normal spinel ZnFe2O4, where Zn2+ ions occupy only the tetrahedral positions (the blue polyhedrons), whereas Fe3+ ions occupy the octahedral positions (the green polyhedrons). In mixed spinel ferrites, cations of 3+ and 2+ valence occupy both sublattices. The general formula of spinel ferrites can be written as MeFe2O4, where Me is a bivalent metal. Ferrite solid solutions are used most widely, e.g., Ni-Zn, Mn-Zn, Mg-Zn and Co-Zn spinel ferrites. Detailed analysis of the crystallographic structure of ferrites is required for understanding the origin of their magnetic properties. It is considered that the magnetic properties originate mainly from the super-exchange interaction of the 3d-shell iron electron spins in the A and B sublattices [31]. As a result, the magnetic moments of the two sublattices arrange in an antiparallel manner, thus enabling ferrimagnetic ordering in spinel ferrites. The arrangement of cations in the sublattices and the configuration of their external electron shells determine the crystallographic magnetic anisotropy, the magnetic moment of the unit cell (or the saturation magnetization), the coercive force and the magnetic permeability of ferrites [32].
Changing the chemical composition of ferrites while largely avoiding distortion of the spinel crystal cell, one can significantly affect the magnetic or electric properties of ferrite ceramics or powders. Ferrites have less pronounced magnetic properties than iron alloys, but their low electrical conductivity and high chemical stability can be advantageous, e.g., in the production of RACs. Spinel ferrites are magnetically soft materials (low coercive force H_c and high initial magnetic permeability μ_i) with a moderate anisotropy constant and ferromagnetic resonance frequency [28]. One can also note their relatively high Curie temperatures (the ferrimagnetic-to-paramagnetic transition points) in the 100-300 °C range. In the absence of an external field, if the crystallographic anisotropy field acts as a magnetizing field, ferrite interaction with EMR produces natural ferromagnetic resonance (NFMR) [33]. This phenomenon occurs if the EMW frequency coincides with the magnetic moment precession frequency in the ferrite sublattices. The ferromagnetic resonance frequency as a function of an effective magnetizing field is often written in the literature as [34]:

H_eff = H_a + H_d.f. + H_g + H_σ    (9)
f_r = (γ/2π)·H_eff    (10)

where H_a is the crystallographic magnetic anisotropy field, H_d.f. is the field of demagnetizing factors, H_g is the growth anisotropy field, H_σ is the stress anisotropy field and γ is the gyromagnetic ratio. The ferromagnetic resonance frequency in magnetic polymer composites depends on intrinsic factors, e.g., magnetic particle shape, distribution pattern and concentration. Those factors can strongly affect the radar absorption properties of spinel ferrite-based RACs. As a result of NFMR processes in spinel ferrites, the cutoff frequency, at which the imaginary part of the magnetic permeability grows rapidly and the real part decreases, lies in the MHz region due to a low anisotropy field. However, efficient GHz spinel ferrite RACs can be produced, as shown below.
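To see why the cutoff frequency of soft spinel ferrites falls in the MHz region, one can evaluate Equation (10) numerically; in the sketch below the effective field is an assumed value of the order of a soft ferrite's anisotropy field, not a number from this review.

```python
# A back-of-the-envelope check of Eq. (10); the effective field is an assumed
# value typical of a soft spinel ferrite.
import numpy as np

gamma_over_2pi = 28e9            # electron gyromagnetic ratio / 2*pi, Hz/T
mu0 = 4 * np.pi * 1e-7           # magnetic constant, T*m/A

H_eff = 3e3                      # assumed effective (anisotropy) field, A/m
f_r = gamma_over_2pi * mu0 * H_eff   # Eq. (10), with H_eff converted to tesla
print(f"NFMR frequency ~ {f_r/1e6:.0f} MHz")   # lands in the MHz region
```

With an anisotropy field of a few kA/m the resonance lands near 100 MHz, whereas the high anisotropy fields of magnetically hard ferrites (discussed later) push it into the GHz range.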
One should also dwell upon the main spinel ferrite production methods used by researchers and industry engineers. The most widely used spinel ferrite production method is ceramic technology (the solid-state reaction method), in which compressed powdered oxides and carbonates, with a metal content matching the target ferrite composition, are sintered at high temperatures [31,35]. This method is used for the mass production of Ni-Zn and Mn-Zn ferrites in the form of bulk ceramics, but it has a number of disadvantages. The synthesis of high-density ceramics requires high temperatures and a long sintering time in resistive furnaces, while the magnetic parameters of the final products can vary substantially within one batch. To reduce the energy consumption of the process and increase the yield, one can use other annealing methods or sintering modifiers: radiation thermal sintering [36], reactive instantaneous sintering [37], spark plasma sintering [38], microwave sintering [39] and low-melting-point additions for sintering temperature reduction [40]. To use ferrite ceramics for RAC production, one should mechanically grind the as-sintered ceramic materials in mills to the required particle size. To directly synthesize ferrites in the form of powdered fillers, one should use chemical solution synthesis methods. In those methods, oxides are replaced with metal salts or metal-organic compounds dissolved in water or other solvents. The general principle of those methods is to mix the metal salt solution with polymers, alkali and other reactants to obtain the precursor in the form of a residue. The precursor is then heat treated to obtain the spinel ferrite phase. The simplest ferrite powder synthesis method is co-precipitation, in which the residue from the solution is dried, filtered and annealed in a furnace [41]. One can distinguish it from the hydrothermal (solvothermal) method, in which the salt solution with a mineralizer is held in an autoclave at relatively low temperatures (180-500 °C) to obtain ferrite nanoparticles [42,43]. The sol-gel method is also widely used, in which a hydrolysis reaction between salts, catalysts and gel-forming agents produces 3D structures with a homogeneous metal cation distribution. Heat treatment converts the solution from the sol state to the gel state, and a further increase in temperature produces ferrite nanoparticles [44]. A gel self-combustion agent can be added to the sol; the flash point is sufficient for the formation of the ferrite phase in the final product. The microemulsion [45], ultrasonication [46] and mechanical activation [47] methods are also used.
Radar-Absorbing Parameters of Ferrites and Ferrite-Polymer Composites Analysis of publications over the most recent 20 years suggests that the main trends in the research and production of ferrite-polymer composites are the transition to the submicron or nanometer scale and the development of and improvement in new and existing ferrite filler production methods. The nanometer-scale transition implies the use of ferrite particles smaller than 100 nm in at least one dimension, or of combinations of ferrite particles with other nanoparticles [48,49]. However, insufficient attention is paid to the problem of particle agglomeration during composite synthesis, which offsets the advantages of their small size. The main concept of ferrite-polymer composite synthesis is to fill the polymer matrix with distributed filler particles. Some materials can be chemically bound into hybrid composite fillers. A special example is the technology in which the polymer is used as a conducting shell in "magnetic core (ferrite)-conducting shell (polymer)" structures [50]. Importantly, the use of such hybrid composite fillers solves the problem of the homogeneous distribution of magnetic fillers in the RAC bulk. Detailed analysis of the RAC compositions reported so far also provides examples of ferrite-polymer composites with additional dielectric (ferroelectric) fillers [23], conductive fillers [51], combinations of conductive and dielectric fillers, and combinations of magnetically soft and magnetically hard fillers [34]. It is commonly believed that the use of fillers having different magnetic and electric properties increases the overall electromagnetic energy loss in composites due to a combination of different absorption mechanisms and broadens the working range of the RAC. First of all, one should demonstrate that spinel ferrites in the form of continuous ceramics have good radar-absorbing properties in the RF range. Good examples are Ni-Zn, Mn-Zn and Li-Mn-Zn ferrites, which are solid solutions of the simple spinels NiFe2O4, ZnFe2O4, MnFe2O4 and Li0.5Mn0.5Fe2O4. Due to the complex microstructure of ferrite ceramics (single-crystal grains and amorphous grain boundaries), ferrites exhibit intense interfacial polarization at low EMR frequencies. As a result, ferrites demonstrate tremendous ε_r′ and high dielectric losses [52,53]. The initial magnetic permeability of magnetically soft spinel ferrites (ceramics) can vary over a wide range, from several tens to 25,000. The high magnetic and dielectric permeabilities provide excellent radar absorption properties in the low-frequency EMR region. This can be traced by analyzing the formula that qualitatively gives the RAM thickness at which interference and maximum radar absorption occur, as a function of the dielectric and magnetic permeabilities:

h = n·c/(4·f·√(|μ_r*|·|ε_r*|)),  n = 1, 3, 5, ...    (11)

where c is the light velocity and f is the frequency.
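A quick evaluation of Equation (11), with assumed sub-resonance values |μ_r*| ≈ 500 and |ε_r*| ≈ 10 typical of Ni-Zn ferrite ceramics, reproduces the few-millimeter absorber thicknesses discussed below:

```python
# A minimal sketch of the quarter-wave condition of Eq. (11); the permeability
# and permittivity magnitudes are assumed typical values for a Ni-Zn ferrite
# ceramic, not data from a specific reference.
c = 2.998e8  # speed of light, m/s

def matching_thickness(f, abs_mu, abs_eps, n=1):
    """Thickness (m) of the n-th interference absorption maximum, n = 1, 3, 5..."""
    return n * c / (4.0 * f * (abs_mu * abs_eps) ** 0.5)

h = matching_thickness(f=200e6, abs_mu=500.0, abs_eps=10.0)
print(f"first matching thickness ~ {h*1e3:.1f} mm")  # a few mm, as in the text
```

At 200 MHz this gives roughly a 5 mm plate, which is why high-permeability ferrite ceramics can serve as compact low-frequency absorbers.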
It follows from Equation (11) that high absolute values of R_l in spinel ferrites will be observed at low frequencies for a given absorber thickness. A Ni0.39Zn0.61Fe2O4 ceramic with a Bi2O3 addition and a test specimen thickness h = 7 mm exhibited high EMR attenuation down to −30 dB (the R_l(f) peak is at 200 MHz) and a 980 MHz absorption bandwidth at −10 dB [54]. The authors attributed the pronounced radar-absorbing properties to magnetic loss from natural ferromagnetic resonance and domain wall resonance (DWR). Our laboratory conducted a study of the radar-absorbing properties of 400NN, 1000NN and 2000NN (Ni-Zn ferrites), 2000NM (Mn-Zn ferrite) and Li-Mn-Zn ferrites synthesized in Russia. The Ni-Zn ferrites had absorption peaks in the reflection spectrum of the sample on the metal plate in the −18 to −13 dB range. The peak positions were in the frequency range 200-700 MHz, and the absorption bandwidth was ∆f (−10 dB) = 1050-1300 MHz. The RAM thickness was 6 mm, the dielectric permeability ε_r was within 10 and the magnetic permeability μ_r in the sub-resonance region was 500-1000. It was found that the Ni-Zn ferrite also has predominantly magnetic loss, with a small fraction of dielectric loss due to hopping polarization [55]. These losses occur in defective ionic-bond crystals, which is the case for ferrites with oxygen deficiency after annealing [56]. By and large, since the Ni-Zn ferrite is quite a good dielectric, it does not exhibit large dielectric loss in the MHz and GHz regions. The same is true for the Li-Mn-Zn ferrites, but better impedance-matching conditions provide a minimum R_l of −22 dB at a higher frequency (1.34 GHz) than for the Ni-Zn ferrite, with a 2 GHz absorption bandwidth [57]. Good radar absorption properties were also observed for Li-Zn ferrites with CuO and MgO additions. For 6-10 mm thicknesses, the maximum |R_l| was within 25-48 dB, the peak position being at 0.2-0.8 GHz [58]. The situation is different for Mn-Zn ferrites synthesized by sintering in an Ar gas atmosphere: the electrical resistivity of these ferrites was close to that of semiconductors at sufficiently high magnetization. This resulted in the cutoff frequency of μ_r(f) lying in the range of several MHz, with a high (~30) dielectric permeability being retained up to the GHz EMR region. The high conductivity of the grains in the Mn-Zn spinel is caused by the presence of [Fe2+-Fe3+] ion pairs or [Me2+-Fe3+] complexes in the octahedral positions. Electrical conductivity between the ions and the ion complexes occurs via a hopping mechanism (electrons jump between localized states assisted by electron-phonon interaction) [59]. The Mn-Zn ferrites can contain Mn and Fe ions with different valences, with the oxidation degree of Mn ranging from 2+ to 4+. Thus, Mn-Zn ferrites with a spinel structure have low electrical resistivity and pronounced magnetic properties. These electrophysical properties of Mn1−xZnxFe2O4 determine the radar absorption properties of the ceramic specimens. Our experiments showed that, for the setup with a perfect reflector, an impedance mismatch causes the R_l coefficient in the 1-100 MHz range to be at least −10 dB for a thickness of 5-9 mm. Noteworthily, the peak reflection coefficients were at 1-10 MHz, making the Mn-Zn ferrite a potentially effective low-frequency radar absorbent.
Regarding the radar-shielding properties of the abovementioned ferrites, one should point out that the absence of pronounced dielectric loss in the Ni-Zn and Li-Mn-Zn ferrites precludes their use as RSMs in the MHz and GHz ranges at thicknesses of 5-10 mm. The shielding effectiveness SE_T of the Ni-Zn and Li-Mn-Zn ferrites is within 10 dB. In the Mn-Zn ferrite, eddy current loss caused by high conductivity increases the dielectric and magnetic losses, allowing one to achieve an SE_T of 10 to 20 dB in the 1-7 GHz range for 5 mm thick material. The situation for ferrite-polymer composites is completely different. Our department conducted studies of different ferrite-polymer composite compositions (both two-component and multicomponent ones) in which thermoplastic polymers, such as polyvinylidene fluoride (PVDF), polyvinyl alcohol (PVA) and polystyrene, were used as polymer matrices. The main change in the radar-absorbing properties of the RACs with spinel ferrite fillers in comparison with those of the initial ferrites was a shift of the absorption peak in the R_l(f) spectra towards higher frequencies with a decrease in the ferrite concentration. At ferrite filler weight fractions of 80-20% for the Ni-Zn and Mn-Zn ferrites, the absorption peaks shift from the MHz range to the GHz one. Furthermore, the resonance frequency in the μ_r*(f) spectrum also shifts towards higher frequencies with a decrease in the ferrite concentration. Earlier, changes in the μ_r*(f) frequency position for ferrite-polymer composites were studied by Tsutaoka [60][61][62][63], who found that the NFMR and DWR frequencies shift towards higher frequencies with a decrease in the ferrite concentration. We also observed, in our works, a similar behavior of the resonance frequencies with changes in the ferrite concentration. It was found that the NFMR frequency shift is greater than the DWR frequency shift, and the imaginary parts of the magnetic susceptibility χ″ related to NFMR are always higher at the maximum absorption frequencies [64,65]. We, therefore, assumed that NFMR makes the greatest contribution to radar absorption. This can be accounted for by the change in the effective field H_eff in the composite due to the demagnetization factor [23]. On the other hand, this can be qualitatively interpreted using Snoek's law [66]:

(μ_i − 1)·f_r = (2/3)·(γ/2π)·μ_0·M_s    (12)

where f_r is the resonance frequency (cutoff frequency), μ_i is the initial magnetic permeability, γ is the gyromagnetic ratio, M_s is the saturation magnetization of the ferrite and μ_0 is the magnetic constant. The magnetization of ferrite-polymer composites (relative to the ferrite weight) does not change, whereas the initial magnetic permeability decreases considerably due to a decrease in the magnetic flux in a non-continuous magnetic medium [66]. Then, for the equation to hold, f_r should increase, which is observed in the experimental ferrite-polymer composite spectra. We found excellent radar absorption properties of two-component RACs with Ni-Zn and Li-Mn-Zn spinel ferrites at weight fractions of 60-80% (volume concentrations of 25-61%), with R_l = −33.8 to −20 dB and an absorption bandwidth ∆f (−10 dB) of 2-4 GHz at 2-7 GHz frequencies [18,55,67]. The matrices were PVDF, PVA and polystyrene. Those parameters can be attributed to the high magnetic losses in the composite (the NFMR and DWR losses) and impedance-matching conditions. However, even better parameters were observed for composites having the same matrices with Mn-Zn ferrite inclusions. This is well illustrated in Figure 3.
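The following sketch illustrates this qualitative argument: holding the Snoek product of Equation (12) constant for an assumed bulk ferrite, a reduction of the effective initial permeability upon dilution forces the cutoff frequency from the MHz range into the GHz range (all numbers are illustrative):

```python
# A minimal sketch of the Snoek trade-off implied by Eq. (12): the product
# (mu_i - 1)*f_r is fixed by gamma, mu_0 and M_s, so diluting the ferrite in a
# polymer (lower effective mu_i) pushes the cutoff frequency up. The bulk
# values below are assumed for illustration, not taken from the review.
snoek_product = (500.0 - 1.0) * 10e6   # bulk ferrite: mu_i = 500, f_r = 10 MHz

for mu_i in (500.0, 100.0, 20.0, 5.0): # effective permeability vs. dilution
    f_r = snoek_product / (mu_i - 1.0)
    print(f"mu_i = {mu_i:5.0f}  ->  f_r ~ {f_r/1e6:7.1f} MHz")
```

An effective permeability of ~5 already corresponds to a cutoff above 1 GHz, in line with the experimentally observed MHz-to-GHz shift of the absorption peaks.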
It was stressed in earlier publications that the use of sole Ni-Zn ferrite (especially nanoparticles synthesized using the co-precipitation, hydrothermal and sol-gel methods) with a high electrical resistivity as a filler can impose restrictions on device weight and dimensions due to the absence of pronounced dielectric losses and eddy current loss in the GHz range [68][69][70][71]. However, excellent results [72,73] can be obtained for multicomponent spinel ferrite solid solutions with high ferrite filler concentrations. For example, a La-substituted Ni-Co-Zn ferrite (Ni0.35Co0.15Zn0.5LaxFe2−xO4, x = 0-0.06)/paraffin composite (in a 6:1 ratio) with a thickness of 4 mm and x = 0.02 had a peak absorption of −34 dB at 5.5 GHz (absorption bandwidth 5.5 GHz). From this viewpoint, good results were obtained for Mn-Zn ferrite fillers, whose electrical resistivity is lower than that of Ni-Zn, Mg-Zn, Co-Zn and Li ferrites. For a 20-40 wt.% filler content (9-30 vol.%), the RAC with Mn-Zn ferrite had |R_l| in the 20-44 dB range at 2-7 GHz and a bandwidth of ~2 GHz at −10 dB (for a 5-7 mm thickness). For higher filler contents, the impedance-matching condition is not met and, hence, the maximum attenuation (in dB) is lower. Similar results were obtained for a rubber/Mn-Zn ferrite composite [74]. High-concentration
thermoplastic/Mn-Zn ferrite composites (filler content, 60-80 wt.%) can be considered as radar-shielding materials with SE_T in the −33 to −15 dB range at a reflection coefficient SE_R = −3 dB. The radar-shielding properties of composites with Mn-Zn ferrites are caused by the small skin depth, which, in turn, is caused by a high attenuation coefficient (Equation (3)). The high dielectric and magnetic permeabilities of those composites originate from electrical and magnetic percolation [23,75]. One can, therefore, state that even two-component ferrite-polymer composites can be used as efficient RAMs in the GHz EMR range. The radar absorption properties can be improved by adding conductive components to ferrite-polymer RACs [76,77]. Co0.2Ni0.4Zn0.4Fe2O4/graphene composites in 5:1, 10:1, 15:1 and 20:1 ratios were obtained using joint hydrothermal synthesis (a one-pot route) [78]. The authors noted that the Co-Ni-Zn ferrite nanoparticles were wrapped in graphene sheets due to electrostatic interaction. Strong interaction between carbon derivatives and spinel ferrite nanoparticles was also observed by other researchers [79,80]. The synthesized Co0.2Ni0.4Zn0.4Fe2O4/graphene powders were then mixed with paraffin to produce composite rings for measuring the radar absorption properties. A comparison of the ε_r*(f) spectra of composites containing sole Co-Ni-Zn ferrite as a filler and the Co0.2Ni0.4Zn0.4Fe2O4/graphene ones revealed higher real and imaginary parts of the dielectric permeability, due to interphase polarization, and higher dielectric loss. It is noteworthy that, with an increase in the graphene concentration in the specimen, the maximum |R_l| increases initially and then decreases. At excessive conductive graphene concentrations, the impedances are no longer matched, and strong EMR reflection from the RAC surface is observed. For 3 mm thick composites without graphene, |R_l| was within 5.8 dB. With an increase in the graphene concentration, this parameter increased to 12, 15 and 31.3 dB and then decreased to 16 dB for the same specimen thickness at ferrite-to-graphene ratios of 20:1, 15:1, 10:1 and 5:1, respectively. In another work, MnFe2O4 and ZnFe2O4 were used as magnetic additions and multi-walled carbon nanotubes as the conductive filler [81]. The matrix was paraffin. The specimens were synthesized by mechanical mixing of all the components in the form of powders (total weight fraction of fillers, 40%). The specimens with a sole ferrite filler and multi-walled carbon nanotubes had a radar absorption coefficient of within 30 dB in the 8-12 GHz range, which is a sufficiently good result. However, the introduction of carbon nanotubes produces strong resonance peaks with the highest |R_l| = 35-58 dB. The working frequency bandwidth was 4 GHz or greater for all the synthesized specimens. There are many literature data on the use of carbon-containing materials as RAC performance-improving additives [82,83].
A very efficient materials science solution for composites is to produce magnetic core/conductive shell structures. They are produced from spinel ferrite nanoparticles, synthesized using the sol-gel or hydrothermal methods, and monomers of conductive polymers, e.g., polyaniline or polypyrrole. A solution containing those components is supplied with a polymerization agent, with the nanoparticle/solution interface acting as the polymerization center. Illustrative results were obtained for polyaniline/Ni-Zn ferrite composites [50]. Figure 4 shows a schematic of a magnetic core/conductive shell composite synthesis setup and a comparison between the radar absorption parameters of the composites. It can be seen that the best radar absorption conditions are achieved for a polyaniline/Ni-Zn ferrite ratio of 1:1; by varying the thickness from 2.25 to 3.5 mm, one can cover the entire 8-12 GHz range. Similar structures were also studied earlier [84][85][86]. Magnetic core/conductive shell structures allow for controlling the frequency behavior of the complex ε_r* and μ_r*, eventually making great changes to the radar absorption properties, e.g., the resonance peak position and the working bandwidth [87].
Excellent radar absorption properties can be obtained using ferroelectric polymers, i.e., PVDF and its copolymers [88]. For example, PVDF/nano-Mn0.8Zn0.2Cu0.2Fe1.8O4 ferrite films ~0.2 mm in thickness exhibit excellent radar absorption properties in the 12-18 GHz range, with a 6 GHz working bandwidth and −32 dB peak attenuation [89].
In order to increase the maximum operation frequencies and possibly the working bandwidth, attempts were made to synthesize RACs with fillers consisting of sintered magnetically soft and magnetically hard ferrite particles. Since magnetically hard ferrites have high magnetic crystallographic anisotropy fields, it is expected from Equation (10) that their effective anisotropy energy will grow, thus affecting the NFMR process in the composite. Furthermore, exchange coupling between the magnetically soft and magnetically hard phases is expected to improve the radar absorption properties of the RAC [90]. It was demonstrated [91] that the Ni0.5Zn0.5Fe2O4/SrFe12O19 composite filler can be considered as an efficient radar-absorbing component in a paraffin matrix (paraffin/filler ratio, 4:6). The filler was synthesized using the sol-gel method, with joint synthesis of the spinel phase and the hexaferrite phase from the same precursor (one-pot method). Strong exchange coupling between the spinel and hexaferrite phases occurs for a 1:3 ratio, and the magnetic losses grow, as can be seen in the μ″(f) spectra. The maximum EMR attenuation is 47 dB at 6.2 GHz, with a 6.4 GHz working absorption bandwidth for a 4 mm thickness. The Ba(Zr-Ni)0.6Fe10.8O19/Fe3O4 composite filler, synthesized by joint annealing in an argon gas atmosphere of separately obtained substituted hexaferrite and magnetite phases, was also studied [34]. The best radar absorption properties were obtained for an annealing temperature of 400 °C and a magnetically soft to magnetically hard phase ratio of 1:1. The authors noted that the working bandwidth of the RAC increased due to peak broadening in the μ″(f) spectrum of the composite. Another method of producing efficient RAMs is to synthesize porous composites containing conductive and magnetic fillers. Along with dielectric and magnetic losses (NFMR and DWR), loss caused by multiple internal reflection in composite cavities can also be significant in these materials [92][93][94]. The properties of a radar-absorbing composite in which the filler is in the form of hollow carbon microspheres coated with Fe3O4 magnetite nanoparticles having a hierarchical nanostructure were studied [92]. The synergy of magnetic and dielectric losses and of multiple-reflection loss caused by scattering at the hollow structures (the matrix/carbonized microsphere wall and air/carbonized microsphere wall boundaries) provided a maximum attenuation of 60.3 dB at a 10 wt.% filler content and a 3.72 mm thickness. Hierarchical structures consisting of porous CoFe2O4 and reduced graphene oxide were also synthesized [93]. The authors noted a synergetic improvement in radar absorption due to magnetic and dielectric losses (interfacial polarization), eddy current loss in a 3D network of graphene sheets and multiple reflections in nanopores. A comparison between the radar absorption properties of the RACs in question is presented in Table 1. Composite spinel ferrite RSMs should contain either a ferrite filler with a low electrical resistivity (magnetite, Mn- or Cu-containing spinel ferrites) or an additional conductive component. Then, one can achieve high attenuation coefficients (small skin depths) and reduce the amplitude of incident EMR using low-thickness layers. However, excessive contents of conductive components can lead to high reflection coefficients; then, a metal- or semiconductor-like shielding mechanism will be implemented in the composite RSM. Good radar-shielding composites are the
above-discussed magnetic core/conductive shell structures [27,[95][96][97][98]. The authors noted that magnetic core/conductive shell structures are EMW scattering centers, and the networks of interconnected particles are considered as EMW multiple-reflection regions. Obviously, to achieve the highest attenuation coefficient and radar absorption in materials, one should combine different EMR loss mechanisms. Aimed at achieving such a combination, a conductive-polymer (polypyrrole)/CoFe2O4/graphene composite was produced and shown to be an efficient RSM in the 8-12 GHz range with a low reflection coefficient [99]. Shielding in that composite occurred through radar absorption mechanisms. Porous composites with fillers of magnetic and conductive particles also perform very well as RSMs [100]. The introduction of carbon nanotubes and magnetite particles into a PVDF matrix, followed by pore formation, produces a porous composite with SE_T = 37 dB for a 2 mm thickness in the 14-20 GHz range. The use of magnetite as a filler in composite polymer RSMs is an advantageous solution, since magnetite can exhibit elevated electrical conductivity combined with magnetic properties (see above) and, hence, higher eddy current loss and interfacial polarization. For example, there are PVDF-matrix RSMs having the compositions PVDF/carbon nanotubes/Fe3O4 and PVDF/graphene sheets/Fe3O4, in which the shielding effectiveness is 35-37 dB at 18-26 GHz for a 1.1 mm thickness [101]. The composites also exhibit good heat conductivity due to the carbon material filling. Similar results were also obtained in other works [102]. A comparison between some polymer composite RSMs is presented in Table 2. Figure 5 shows a diagram illustrating the main mechanisms of electromagnetic wave energy loss in ferrite-based magnetic polymer composites. These are the following mechanisms: (1) reflection, in which part of the EMW energy is reflected; (2) dielectric losses, in which part of the EMW energy is converted into heat in the polymer matrix; and (3) magnetic losses in the ferrite filler, i.e., EMW energy losses due to domain-boundary resonance and ferromagnetic resonance. It should be noted that all the magnetic composites based on spinel ferrite fillers discussed in this review are non-flammable and solid. All of them are very technologically convenient from the point of view of application onto surfaces made of different materials. The question of the mechanical properties of these composites is very important; however, it requires special research beyond the scope of this review. The same also applies to the problem of the influence of ferrite filler particle size on the radio-absorbing characteristics of the considered composites.
Summary EMR radar absorption in spinel ferrite polymer composites was reviewed. Those composites were shown to be promising as RAMs and RSMs due to the excellent magnetic and dielectric properties of spinel ferrites. The fundamentals of EMR interaction with materials, the crystal structure and the factors influencing the high-frequency behavior of spinel ferrites, the main spinel ferrite synthesis methods, and the radar absorption and radar-shielding properties of polymer composites with spinel ferrites were discussed. Special attention was paid to the most efficient and proven methods of improving the radar absorption properties of spinel ferrite RACs. It was shown that reducing the electrical conductivity and adding iron-substituting cations can significantly improve the radar absorption properties of spinel ferrites. The electrical conductivity of spinel ferrites is also a decisive factor controlling the frequency behavior of R_l in two-component composites. Changes in the radar absorption spectra of ferrite-polymer composites in comparison with pure ferrites were explained based on theoretical standpoints on the magneto-dynamic properties of ferrites. The improvement in the radar absorption properties of composites by combining magnetic fillers with conductive additives in the form of carbon derivatives, metals and polymers (magnetic core/shell structures), or by using combinations of magnetically soft and magnetically hard fillers, was discussed. It was stressed that device weight and dimensions can be improved by using porous composites with good radar absorption properties. The abovementioned regularities also hold true for radar-shielding spinel ferrite composites, in which shielding occurs via absorption mechanisms. In their further research, the authors of this review plan to obtain and study the radio absorption characteristics of multilayer "polymer-ferrite" magnetic polymer composites, as well as magnetic polymer composites with spherical ferrite particles coated with a metal film. Figure 1. (a) Basic diagram of electromagnetic parameter measurement using a vector network analyzer and (b) typical RAM and (c) RSM measurement setups. Figure 3. Illustration of the effect of filler electrical properties on the radar absorption properties of ferrite-polymer composites with Mn-Zn and Ni-Zn ferrite fillers. The results were obtained at NUST MISiS by the Department of Electronics Materials Technology. (a) R_l(f) spectrum of a PVDF/Ni-Zn ferrite composite, (b) SE_T(f) spectrum of a PVDF/Ni-Zn ferrite composite, (c) R_l(f) spectrum of a PVDF/Mn-Zn ferrite composite, (d) SE_T(f) spectrum of a PVDF/Mn-Zn ferrite composite.
Figure 5. Scheme of the main mechanisms of absorption of electromagnetic waves in magnetic polymer composites with a ferrite filler. Table 1. Comparison between the radar absorption parameter R_l(f) frequency behavior of ferrite-polymer composites. Table 2. Comparison between the shielding efficiencies of ferrite-containing polymer composite RAMs.
10,899
2024-04-01T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
CH3NH3PbX3 (X = I, Br) encapsulated in silicon carbide/carbon nanotubes as advanced diodes We employ first-principles density functional theory (DFT) calculations to study CH3NH3PbX3 (X = I, Br) and its encapsulation into silicon carbide nanotubes and carbon nanotubes (CNTs). Our results indicate that these devices show diode behavior: they conduct under negative bias voltage but do not work under positive voltage. When they are encapsulated into a SiC nanotube or a CNT, their electronic properties change; in particular, electric currents mainly appear in the positive bias region. The corresponding transmission spectra and densities of states are provided to interpret the transport mechanism of CH3NH3PbX3 (X = I, Br) as a diode. These findings open a new door to microelectronics and integrated-circuit components, providing a theoretical foundation for the innovation of a new generation of electronic materials. Such hybrid devices can be studied bare, carbon nanotube-encapsulated or silicon carbide nanotube-encapsulated. How do these perovskite devices exhibit unidirectional conduction in their current-voltage characteristics? Can the encapsulation in carbon nanotubes and silicon carbide nanotubes strongly influence the current-voltage characteristics of these hybrid lead trihalide perovskite devices? All the questions above, which are critical for perovskite electronics, need to be studied further and thoroughly. The goal of this work is to solve these two issues. To this end, we present a first-principles DFT study of MAPbX3 (X = I, Br) and their encapsulation in silicon carbide/carbon nanotubes as a function of composition. By computing in this way, we demonstrate sensitivity to electronic-property information on the microscopic scale; hence, it can connect microscopic-scale structure to the novel electronic properties determined in this work, informing new designs for efficient and stable diodes that can be employed in security-sensitive applications. Results Our calculations assess the sensitivity of the current to the halide identity and the influence of the encapsulation in nanotubes. Figure 1 exhibits the atomic structures of CH3NH3PbI3 and CH3NH3PbBr3 designed by Feng and Xiao [32]. Analogous to CH3NH3PbI3, the perovskite CH3NH3PbBr3 has the same lattice structure, in which Br and I occupy the same positions. Although perovskites possess several advantages, they are not stable in the presence of moisture and heat. In consideration of the rapid moisture-induced degradation of the system, encapsulation sealing is important for perovskites [33]. It has been reported that nanotubes in perovskite-based solar cells are better in electron transport and recombination behavior than traditional films [34], and single-walled carbon nanotubes are found to create an additional barrier to degradation [35]. All these results suggest that the encapsulation plays a key role in determining the stability. Many experiments on carbon nanotubes have been performed [36][37][38], which make the encapsulation in nanotubes possible. Inspired by the above studies, we here insert the molecules CH3NH3PbI3 and CH3NH3PbBr3 into the silicon carbide nanotube and the carbon nanotube to explore their electronic properties and potential applications in electronic circuits, as shown in Fig. 2. Figure 3 depicts the current-voltage characteristics of the six models studied in this work. I-V characteristics have been measured by Sarswat et al. [39] and Tang et al. [40], showing the memory effect and the negative differential resistance (NDR) effect. The curves they obtained have two characteristics.
One is a local current maximum (when V = V_max), after which the NDR effect appears. Another is the dual states at V < V_max, where the on state is the situation in which the device reaches V_max for the first time. Before the current quickly returns to zero, the off state is restored by applying a voltage that exceeds the NDR region. Although the NDR effect also appears in our study, there are some differences. No voltage sweep is performed as in Sarswat et al. [39] and Tang et al. [40], and no memory effect is displayed. The I-V characteristic of each model is calculated point by point from the corresponding transmission spectra at different bias voltages. Here, for simplicity, we regard the region where no current appears (I = 0 A) as the off state and the region where the current is finite (I ≠ 0 A) as the on state. We can see from Fig. 3(a) that the MAPbI3 model has the unidirectional conduction of a diode, showing the on state at negative bias and the off state in the positive range. The reverse recovery (RR) phenomenon occurs when the negative voltage applied across the MAPbI3 device reaches a certain value. When the reverse current starts to build up, the current reaches its peak value and then drops back to the beginning state (off state). What is noteworthy is that the device does not conduct in the negative bias range from −0.5 to 0 V, but turns on in the other negative states. That is to say, for MAPbI3, −0.5 V is a threshold value, and the current can be conducted only beyond this threshold. Thus, based on this property, a threshold voltage can be set for security in the application of some electronic security locks. As shown in Fig. 3(b), the MAPbBr3 device has a diode character similar to MAPbI3, with the on state in a certain negative bias range and the off state in the whole positive range. But in the on-state region, with the increase of the negative bias, the current of the MAPbBr3 device increases continuously and the strong NDR effect disappears. When MAPbI3 and MAPbBr3 are inserted into the silicon carbide nanotube, the value of the threshold voltage shifts a bit to the right, meaning that the silicon carbide nanotube probably regulates the diode activity of MAPbX3 (X = I, Br), as shown in Fig. 3(c). The grey line represents MAPbI3 encapsulated in the silicon carbide nanotube, while the orange line represents MAPbBr3 encapsulated in the silicon carbide nanotube. It can be clearly seen that these two composite structures hold the on state in a certain negative bias range and the off state in the whole positive range, as exhibited by the individual MAPbX3 (X = I, Br) devices. Importantly, the NDR effect appears in the operating region of this MAPbI3-SiCNT device, whose maximum peak-valley ratio (PVR) is 3.03 (28.8/9.5) and whose on-off ratio is 720. The band structures of orthorhombic MAPbX3 (X = I, Br), calculated with the PBE/GGA functional, are shown in Fig. 4(a,b). Our results successfully match the previous theoretical values obtained with the same functional [31]. It can be seen that MAPbI3 and MAPbBr3 have similar band structures because they have the same lattice constants. This phenomenon also appears in the research by Mosconi et al. [41]. There is an obvious band gap between the VBM and CBM, showing a typical semiconductor nature for both MAPbI3 and MAPbBr3. This also makes their semiconductive current-voltage characteristic curves rational. We also calculate the band structures of MAPbX3 (X = I, Br) encapsulated in CNT, as shown in Fig. 4(c,d).
The band gap vanishes, and only one subband passes through E_f for MAPbI3-CNT and for MAPbBr3-CNT, respectively, showing a metallic character. These results also correspond to their I-V curves, which have both forward and reverse currents, distinguishing them from those of MAPbX3, which show semiconductor diode rectifying. Moving to the electronic structures of these devices, we show the density of states (DOS) and the projected density of states (PDOS) as well as the band structure (BS) in Fig. 5. The PDOS is the DOS projected onto the Pb and halide atoms. After encapsulation in a SiC nanotube, the bands of MAPbI3 move down and up, leading to semiconductor behavior with a very narrow gap of 16 meV. And MAPbBr3 has a further narrowed gap after encapsulation in the SiC nanotube, like MAPbI3. The Bloch states at the highest occupied valence band maximum (HOVBM) and at the lowest unoccupied conductance band minimum (LUCBM) of the MAPbI3-SiCNT device are shown in Fig. 5(a,b). Both the HOVBM and the LUCBM are located at the Γ point in the Brillouin zone, illustrating the feature of a direct-band-gap semiconductor. As shown in Fig. 5(a-d), both the HOVBM and the LUCBM of MAPbX3-SiCNT correspond to the bottom of the MAPbX3 part, localized mainly at the N atoms. The LUCBM of MAPbI3-SiCNT is also partially localized at the SiCNT, while the HOVBM is hardly localized at the SiCNT. These two corresponding Bloch states at the Γ point indicate that only the conductance band is induced by the hybridization of the SiCNT, while the valence band actually results from the states of the N atoms of the CH3NH3+ cations; this contrasts with Ref. [31], which pointed out that organic CH3NH3+ cations make little contribution to the bands of pristine CH3NH3PbI3 around the Fermi energy level. The SiCNT is a main factor contributing to these two results. However, MAPbBr3-SiCNT shows the opposite case, with only the HOVBM localized at the SiCNT, as shown in Fig. 5(c,d), indicating that its valence band is induced by the hybridization of the SiCNT and the conductance band results from both the states of the N atoms of the CH3NH3+ cations and the hybridization of the SiCNT. The BS, TDOS and PDOS of MAPbX3-SiCNT are presented in Fig. 5(e,f). The PDOS shows that the DOS originates mainly from the contribution of the halide atoms rather than the Pb atoms; more precisely, it results from their unsaturated p orbitals. That is to say, chemical modification of the I/Br atoms can tune the electronic behavior of MAPbX3-SiCNT. Furthermore, we can see that the PDOS peaks of Pb and X (X = I, Br) atoms align very well, which indicates that an orbital hybridization and a strong bonding have been established among these atoms. Interestingly, the PDOS peaks of the I atoms in the bottom conductance bands are stronger and sharper than those of the Br atoms in the top valence bands. The PDOS images of the I and Br atoms support the argument that these two kinds of halide atoms are chemically non-equivalent, although MAPbI3-SiCNT and MAPbBr3-SiCNT have the same lattice structure. The different PDOS images of the I and Br atoms indicate a stronger bonding between the Pb and I atoms in MAPbI3-SiCNT than between the Pb and Br atoms in MAPbBr3-SiCNT. This result is different from that of MAPbX3 without the SiCNT [41]. To explore the origin of the above current-voltage characteristics, we further analyze the transmission spectra of MAPbI3-CNT, as depicted in Fig. 6.
The current through the devices is inseparable from their transmission coefficients, since the current here is calculated by the Landauer-Büttiker formula. In this work, the average Fermi level, i.e., the average chemical potential of the left and right electrodes, is set to zero. We define and discuss the NDR effect following a classical model proposed by Shen et al. 42, who employed the same simulation method and software as we do. The colormap shows the current value; for example, red stands for I = −16 μA and green represents I = 6 μA. We select four typical points, marked by black circles, and analyze their transmission spectra. The shaded area in every inset transmission spectrum represents the energy interval between the chemical potentials of the left and right electrodes. The transmission peaks within the bias window are important, since they contribute most of the current. The values of all transmission peaks do not exceed 1, which shows that all transmission spectra are carried by a single channel. When the negative bias voltage rises to 2 V, the MAPbI3-CNT device reaches its maximum reverse current of almost 16 μA. At this point, the bias window of the transmission spectrum at V = −2 V contains four significant transmission peaks, evidencing transport; this is consistent with the conducting state of the MAPbI3-CNT device. Furthermore, the device is non-conducting in the region [−1, 0]. To analyze this domain, we select the central point at −0.5 V; the corresponding transmission spectrum shows no electronic transport, neither within the bias window nor over the whole energy range. When a positive voltage is applied, the current first increases, begins to decrease at V = 0.5 V, and finally increases again continuously. We identify the two inflection points (0.5 V and 1 V) as peak and valley, respectively; this deviation from Ohm's law is the NDR effect. The peaks within the 0.5 V bias window are clearly enhanced compared with those in the 1 V bias window. The other MAPbX3 devices behave analogously. Discussion In conclusion, the DFT calculations provide a new perspective: MAPbX3 (X = I, Br) can act as a diode, and its behavior can be modulated by encapsulation in a silicon carbide or carbon nanotube (SiCNT/CNT). The MAPbX3 models display an on state under negative bias and an off state over the positive range, and the diode activity of MAPbX3 changes after insertion into SiCNTs. Because they conduct only within a certain bias range, these devices could be applied in electronic security locks. In addition, when MAPbX3 is inserted into CNTs, current emerges throughout the positive voltage range, unlike in MAPbX3 and MAPbX3-SiCNT. The origin of the diode behavior is interpreted through the transmission spectra and densities of states, and the electronic structures are also discussed. This represents an important step in designing new diodes for high-efficiency electronic components. Methods In this work, we employ first-principles density functional theory (DFT) combined with the non-equilibrium Green's function (NEGF) method, as implemented in the Atomistix ToolKit (ATK) software package, to calculate transport properties. The device is divided into two parts, the electrodes and the channel. The source and drain electrodes are composed of graphene nanoribbons (GNRs), between which lies the electron transport channel (MAPbX3), as shown in Fig. 2.
The mesh cutoff for the electrostatic potentials is 75 Ha. Double-zeta singly polarized basis sets of local numerical orbitals and the generalized gradient approximation (GGA) for the exchange-correlation potential are used. The k-point samplings in the x, y, and z directions are 3, 3, and 50, respectively. The convergence criterion for the total energy is set to 10^-5 eV. The current I is calculated by the Landauer formula for coherent electron transport between the left and right electrodes, with Fermi levels μL and μR, through the central scattering region 43:

\[ I(V) = \frac{2e}{h} \int T(E,V)\left[f_{1}(E-\mu_{L}) - f_{2}(E-\mu_{R})\right] \mathrm{d}E \]

where T(E, V) is the transmission probability of electrons from the left to the right region, f1,2(E) are the Fermi-Dirac distribution functions of the source and drain, respectively, and e and h are the electron charge and the Planck constant, respectively.
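As a hedged illustration of how this formula is evaluated in practice, the sketch below integrates the Landauer expression numerically for a hypothetical single-channel transmission function (a Lorentzian resonance standing in for the bias-dependent T(E, V) that the NEGF calculation would produce); the symmetric bias window μL/R = ±V/2 around a zero average Fermi level follows the convention stated above.

```python
import numpy as np

e  = 1.602176634e-19   # elementary charge, C
h  = 6.62607015e-34    # Planck constant, J s
kB = 8.617333262e-5    # Boltzmann constant, eV/K

def fermi(E, mu, T=300.0):
    """Fermi-Dirac occupation for energies E (eV) at chemical potential mu."""
    return 1.0 / (1.0 + np.exp(np.clip((E - mu) / (kB * T), -500, 500)))

def landauer_current(transmission, V, T=300.0, n=4001):
    """I(V) = (2e/h) * integral of T(E,V) [f1(E - muL) - f2(E - muR)] dE.
    The average Fermi level is zero and the bias window is split
    symmetrically, muL/R = +/- V/2, as described in the text."""
    E = np.linspace(-3.0, 3.0, n)                  # energy grid, eV
    dE = E[1] - E[0]
    window = fermi(E, +V / 2, T) - fermi(E, -V / 2, T)
    integral_eV = np.sum(transmission(E) * window) * dE
    return (2 * e / h) * integral_eV * e           # eV -> J gives amperes

# Hypothetical single-channel transmission: a Lorentzian resonance capped
# at 1 (the text notes that all transmission peaks stay below unity).
T_res = lambda E: 0.01 / ((E - 0.4) ** 2 + 0.01)

for V in (0.5, 1.0, 2.0):
    print(f"V = {V:.1f} V -> I = {landauer_current(T_res, V) * 1e6:.2f} uA")
```

In the actual calculation T(E, V) is recomputed self-consistently at every bias point, which is what allows features such as NDR to appear.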
3,504.2
2018-10-12T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Grain boundary sliding at low temperatures. Grain boundary sliding plays a key role in the high temperature deformation of fine grained materials. This mechanism is associated with a high strain rate sensitivity of approximately 0.5 and usually gives rise to high superplastic elongations. The rate controlling equation for the mechanism of grain boundary sliding has shown good agreement with experimental data for multiple materials, with different grain sizes and tested at different strain rates. However, the predictive ability of the rate controlling equation seems to deteriorate at low temperatures. Although there is experimental evidence of high strain-rate sensitivities in ultrafine grained materials tested at low temperatures, this parameter does not reach values near 0.5, and there also seems to be disagreement in the stress level under many conditions. The present overview evaluates the occurrence of grain boundary sliding in ultrafine grained materials at low temperatures considering an adapted rate controlling equation which displays good agreement with experimental data. A gradual transition from grain refinement softening at high temperature to grain refinement hardening at low temperatures and a gradual increase in strain rate sensitivity with increasing temperature are observed. Introduction The occurrence of grain boundary sliding as a deformation mechanism at high homologous temperatures is now well established. This mechanism is observed in materials with fine grains, typically less than 10 μm, at temperatures typically above half of the material melting point. There is also a strain rate range, or stress range, in which this mechanism is rate controlling. The rate controlling equations for high temperature creep mechanisms are usually of the format in eq. 1 [1]

\[ \dot{\varepsilon} = A\,\frac{DGb}{kT}\left(\frac{b}{d}\right)^{p}\left(\frac{\sigma}{G}\right)^{n} \qquad (1) \]

in which \(\dot{\varepsilon}\) is the effective strain rate, A is a dimensionless constant, D is the diffusion coefficient, G is the shear modulus, b is the Burgers vector, k is the Boltzmann constant, T is the absolute temperature for creep, σ is the effective stress, d is the grain size, n is the stress exponent and p is the exponent of the inverse grain size. Different creep mechanisms have different parameters A, D, n and p, and so each is rate controlling under specific conditions of stress, temperature and grain size. A common way to visualize the rate controlling mechanism under different conditions is through deformation mechanism maps; an example is given in Fig. 1 for the AZ31 magnesium alloy tested at different temperatures [2]. Thus, the range for each of the deformation mechanisms is delineated, and experimental data from the literature [2][3][4][5][6][7][8][9][10][11][12][13] are also shown. A different color is used for each deformation mechanism, and the same color is used to indicate the deformation mechanism reported in each experiment. There is apparently a good correlation between the theoretical deformation mechanism maps and the experimental data. Thus, the high temperature deformation of metallic materials is fairly well predicted in terms of creep mechanisms. The occurrence of grain boundary sliding (GBS) in fine grained materials at temperatures above half the melting point is well established. However, there has been great interest in the development of ultrafine and nanocrystalline materials [14] and in the evaluation of the occurrence of GBS at lower temperatures.
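As an illustration of how eq. 1 separates rate-controlling regimes in a deformation mechanism map, the sketch below evaluates the generic rate equation for two hypothetical mechanism parameter sets and reports the faster (rate-controlling) one; all numbers are illustrative placeholders, not fitted material constants.

```python
import numpy as np

def creep_rate(sigma, d, T, A, D0, Q, n, p, G, b, k=1.380649e-23):
    """Generic creep rate of eq. 1: A * (D G b / k T) * (b/d)**p * (sigma/G)**n,
    with an Arrhenius diffusion coefficient D = D0 * exp(-Q / (R T))."""
    R = 8.314                       # gas constant, J/(mol K)
    D = D0 * np.exp(-Q / (R * T))   # diffusion coefficient, m^2/s
    return A * (D * G * b) / (k * T) * (b / d) ** p * (sigma / G) ** n

# Illustrative placeholder parameters (not fitted constants) for two
# mechanisms: grain boundary sliding (n = 2, p = 2) versus dislocation
# climb-controlled creep (n = 5, p = 0).
common = dict(T=600.0, G=17e9, b=3.2e-10, D0=1e-4, Q=92e3)
sigma, d = 20e6, 1.0e-6             # 20 MPa effective stress, 1 um grain size

gbs   = creep_rate(sigma, d, A=10.0, n=2, p=2, **common)
climb = creep_rate(sigma, d, A=1e5,  n=5, p=0, **common)
label = "GBS" if gbs > climb else "dislocation climb"
print(f"GBS: {gbs:.2e} 1/s, climb: {climb:.2e} 1/s -> rate controlling: {label}")
```

Repeating this comparison over a grid of stresses and grain sizes is exactly how the boundaries of a deformation mechanism map are traced.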
Early studies showed that some features of GBS were observed at lower temperatures in these materials, such as grain boundary offsets after deformation and an increased strain rate sensitivity. However, the experimental data of flow stress and strain rate do not agree with the predictions from the rate equation for high temperature GBS. A recent paper [15] showed that a simplification considered in the model for high temperature GBS is not valid for lower temperature deformation, and a modified version of the rate equation was suggested. The present overview evaluates this modified model and shows that the mechanism of grain boundary sliding agrees with experimental data for the lower temperature deformation of many metallic materials. The model for grain boundary sliding and data for room temperature strength The model [15] considers that grain boundary sliding takes place by the movement of dislocations in the vicinity of grain boundaries. These dislocations pile up at triple junctions, increasing the stress, and trigger dislocation motion in the neighboring grains. These dislocations then pile up at the opposite boundaries, where they undergo climb assisted by the high stresses. The equation for the deformation rate is available elsewhere [15], and an equation for the flow stress (eq. 2) is given in [15], where ds is the spatial grain size and δ is the grain boundary width, usually taken as 2b. It was shown that the predictions of this model agree with experimental data for multiple materials tested at different temperatures and strain rates [15]. For instance, Fig. 2 shows a good agreement between the flow stress determined in experiments at room temperature and the predictions from eq. 2 for 27 different materials processed by high-pressure torsion [16]. The model of grain boundary sliding only predicts the contribution of the grain size to the flow stress, and other strengthening mechanisms such as solid solution hardening must be incorporated in order to evaluate the overall strength of materials. Thus, the contribution of other strengthening mechanisms might be introduced by adding a parameter σ0 to the right side of eq. 2. Also, many studies evaluate the grain size using the mean linear intercept length method, and this affects the prediction of the GBS model. Thus, eq. 3 gives the prediction of the flow stress considering a threshold stress, σ0, and the mean linear intercept grain size, dl. It has been shown that the model of GBS can predict a Hall-Petch relationship between the flow stress and the grain size at low temperatures, and a good agreement has been reported with experimental data for multiple materials and a broad range of grain sizes [15,[17][18][19][20][21]. The transition between the high temperature and low temperature behavior is examined next.
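Before turning to that transition, a brief numerical aside: since eqs. 2 and 3 are not reproduced in this text, the sketch below uses only the Hall-Petch form that the GBS model is stated to reduce to at low temperatures, fitting a threshold stress σ0 and a slope to hypothetical room-temperature data.

```python
import numpy as np

# Hypothetical room-temperature flow stress (MPa) vs spatial grain size (m);
# a real analysis would use the measured data discussed in the text.
d     = np.array([5.0, 2.0, 1.0, 0.5, 0.2]) * 1e-6
sigma = np.array([120.0, 160.0, 210.0, 290.0, 430.0])

# At low temperatures the GBS model reduces to a Hall-Petch-type law,
# sigma = sigma_0 + k_HP * d**-0.5; fit the two constants by linear regression.
k_HP, sigma_0 = np.polyfit(d ** -0.5, sigma, 1)
print(f"sigma_0 = {sigma_0:.0f} MPa, k_HP = {k_HP:.2e} MPa m^0.5")
```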
Transition between high- and low-temperature behavior A recent review [21] evaluated the relationship between the grain size and the flow stress for different temperature ranges considering the grain boundary sliding (GBS) model. A Hall-Petch relationship is predicted at low temperatures, and superplasticity is predicted at high temperatures for fine grained materials. Low temperature is considered as T < 0.3TM and high temperature as T > 0.5TM. Figure 3 shows representative curves of the prediction of the GBS model for the relationship between flow stress and strain rate for two grain sizes in the different temperature ranges [21]. The behavior of an ultrafine grained material is depicted by the dashed lines and d1, while the behavior of a fine grained material is depicted by the continuous lines and d2. The ultrafine grained material displays a higher flow stress than the fine grained counterpart at low temperatures, and the opposite is observed at high temperature. At moderate temperatures, the ultrafine grained material might display higher strength at high strain rates and lower strength at low strain rates. The slope of the curve of the ultrafine grained material is larger than the slope of the fine grained counterpart. Figure 3 - General trends for the relationship between flow stress and strain rate for materials with different grain sizes and tested at different temperature ranges [21]. The strain rate sensitivity is estimated by the slope in the plots of flow stress vs strain rate. Thus, the predictions of the GBS model depicted in Fig. 3 suggest that the strain rate sensitivity of ultrafine grained materials increases with increasing temperature. This effect has been reported in the literature for multiple materials, and Fig. 4 shows experimental data [22][23][24] on the strain rate sensitivity of a CrMnFeCoNi multi-component alloy tested at different temperatures. The predictions from the GBS model are also shown. The experimental data show an increase in the strain rate sensitivity values with increasing temperature, in agreement with the predictions from the model [20]. The gradual transition between low temperature behavior, with low strain rate sensitivity, and high temperature behavior, with high strain rate sensitivity, is also confirmed by experimental data from ultrafine grained Al-Mg alloys tested at different temperatures. Figure 5 shows the predictions from the model of grain boundary sliding for the relationship between flow stress and strain rate at temperatures of 298 K, 403 K, 523 K and 673 K. It is important to note that the threshold stress is treated as a thermally activated process in the prediction depicted in Fig. 5 [21]. Experimental data from the literature are also shown for comparison and display good agreement with the predictions. Thus, a gradual increase in the slope of the relationship between flow stress and strain rate is observed with increasing temperature. This analysis confirms that the model of grain boundary sliding [15] displays good agreement with experimental data and predicts a gradual increase in the strain rate sensitivity of fine grained materials with increasing temperature. Summary and conclusions Grain boundary sliding is an established deformation mechanism for the high temperature deformation of fine grained materials. A recent paper [15] showed that the rate controlling equation for the mechanism of grain boundary sliding can be adapted to account for the higher stresses of low temperature deformation. The present overview shows that the adapted equation for grain boundary sliding predicts a gradual transition from the typical behavior observed at high temperature in fine grained materials to a Hall-Petch type of behavior at low temperatures. The transition from low temperature to high temperature changes the relationship between flow stress and grain size in such a way that grain refinement hardening is observed at low temperatures and grain refinement softening at high temperatures. A gradual increase in strain rate sensitivity with increasing temperature is predicted and confirmed by experiments in ultrafine grained materials [21].
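As a numerical footnote, the strain rate sensitivity m discussed throughout this overview is simply the slope of log(flow stress) against log(strain rate); the minimal sketch below extracts it from hypothetical data points.

```python
import numpy as np

# Hypothetical flow stress (MPa) at several strain rates (1/s) for one
# temperature; real values would come from tensile or strain-rate-jump tests.
strain_rate = np.array([1e-4, 1e-3, 1e-2, 1e-1])
flow_stress = np.array([180.0, 210.0, 245.0, 286.0])

# m is the slope of log(flow stress) vs log(strain rate).
m, _ = np.polyfit(np.log(strain_rate), np.log(flow_stress), 1)
print(f"m = {m:.3f}")
```

Values of m near 0.5 signal classical high-temperature GBS, while the low value returned here (about 0.07) is typical of the low-temperature regime described above.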
2,238.6
2023-07-10T00:00:00.000
[ "Materials Science" ]
Dimensional Accuracy of Different Three-Dimensional Printing Models as a Function of Varying the Printing Parameters Even in digital workflows, models are required for fitting during the fabrication of dental prostheses. This study examined the influence of different parameters on the dimensional accuracy of three-dimensionally printed models. A stereolithographic data record was generated from a master model (SOLL). With digital light processing (DLP) and stereolithography (SLA) printing systems, 126 models were produced in several printing runs: SolFlex 350 (S) (DLP, n = 24), CaraPrint 4.0 (C) (DLP, n = 48) and Form 2 (F) (SLA, n = 54). Their accuracy was compared with plaster and milled polyurethane models. In addition to the positioning on the build platform, a distinction was made between parallel and across arrangement of the models relative to the printer's front, solid and hollow models, and printing with and without support structures. For the accuracy assessment, five measurement sections were defined on the model (A-E) and measured using a calibrated digital calliper and digital scans in combination with the GOM Inspect Professional software 2021. The mean deviation between the measurement methods for all distances was 79 µm. The mean deviations of the models from the digital SOLL model were 207.1 µm for the S series, 25.1 µm for the C series and 141.8 µm for the F series. While the positioning did not have an influence, there were clinically relevant differences mainly regarding the choice of printer, but also individually in alignment, model structure and support structures. Introduction The three-dimensional (3D) printing process has become established and is frequently used, especially in the production of dental models [1][2][3][4]. Consequently, not least because of material-efficient additive production, many manufacturers have included a 3D printer in their portfolio [3]. Because they are comparatively inexpensive purchases, printers with stereolithography (SLA) or digital light processing (DLP) technology are increasingly used. In both technologies, the printers contain a vat filled with synthetic resin. During the printing process, a build platform is pulled out of the vat step by step. In this process, a laser beam (SLA principle) or a projected image (DLP principle) successively polymerises the resin layer by layer through the bottom of the printer. In SLA printers, a laser beam is directed to the bottom of the vat using mirror galvanometers in a Cartesian coordinate system to polymerise the resin point by point [5,6]. The arrangement of the laser and the mirrors is specific to each printer. In DLP technology, the illuminated mask varies depending on the build platform. Micromirror devices with a high-power light source are used [7]. The layer-building process is repeated in both methods until the object is completely formed. Freshly printed objects must be post-processed to achieve their maximum mechanical stability [8]. Several studies have shown that printing parameters, including the user-selected position of the models on the build platform, whether the models are solid or hollow and the layer thickness, influence the results [9][10][11]. Solid models have dominated most of the published investigations; in comparison, hollow shell models are used less frequently [10,11]. Depending on the model and the positioning, support structures are required for stabilisation; they have an influence on the surface roughness depending on the angle of inclination [12].
Dental models must meet different dimensional specifications. The clinically acceptable values of accuracy vary greatly in the relevant literature. For example, an accuracy of less than ±200 µm, or even less than ±500 µm, is postulated for planning and situation impressions [1,13,14], and no more than ±100 µm for master models or saw-cut models, which are used for the production of fixed partial dentures such as crowns, bridges and implant-supported dentures [10,15]. Gypsum is the conventional analogue material for the fabrication of dental models. Its accuracy can be fixed at a value of less than ±50 µm, and even ±10 µm with the use of an appropriate manufacturing process and additives (resin that is reinforced or mixed with epoxy resin). These measurements are taken at defined distances 1 day after casting [16]. The resin components, which are processed with 3D printers using SLA or DLP technology, tend to contract during the polymerisation or curing processes [3,17,18]. The available data for determining the dimensional accuracy of models are mostly limited to fully anatomic jaws that contain all teeth [5,[19][20][21][22][23][24]. In rare cases, researchers have investigated a jaw quadrant or the area that will be prosthetically restored, as is common in practice [17,[25][26][27][28]. Wide-spanning dental arches are also considered to be potentially more prone to complications, solely due to the scanning process [26,29]. In addition, the focus of comparisons is often on the scanner that is used [30,31], and less frequently on the accuracy of the printers [3,24,32,33]. The primary purpose of this in vitro study was to determine the dimensional accuracy of models for fixed dental prostheses compared with a master model as a function of printer selection, positioning and placement on the build platform, and model shape (hollow vs. solid). The following four hypotheses related to the primary purpose were tested: (1) the use of different printers has no effect on the dimensional accuracy of the printed models; (2) the positioning or placement of the models on the respective build platform does not affect the dimensional accuracy; (3) there are no dimensional differences between solid and hollow models; and (4) there are no differences between models printed with and without support. The secondary purpose was to compare different measurement methods. In the relevant literature, measurements with callipers [11,22,34,35] are represented, as are digital measurements by various software programs [5,19,22,24,35,36]. This secondary purpose tested hypothesis (5): the use of different measuring methods (calliper versus software) does not result in significantly different measurements with regard to the dimensional accuracy of the printed models. Master Model The measurement was based on a proven master model made of brass (Figure 1) [12,37]. It simulates the clinical situation of a single-span, four-unit bridge. The small stump represents a canine, and the large stump represents a molar. Figure 2 and Table 1 indicate the distances that were measured for the comparison of dimensional stability.
Table 1. Description of the measuring distances. To generate a digital data record according to the clinical situation, the master cast was coated with scan spray (3D Anti-Glare Spray, Organical CAD/CAM GmbH, Berlin, Germany) and then scanned at the highest detail level for the stump and arc scan with a scanner (D2000, 3Shape, Copenhagen, Denmark). The resulting stereolithographic (stl) data record was the basis for the additively manufactured solid models. The Meshmixer 3D modelling program (Autodesk Research, Toronto, ON, Canada) was used to create hollow models. 3D Printing of the Test Models Three dental 3D printers with two different functional mechanisms were selected for the additive manufacturing of the models. Table 2 presents an overview of the printers and the printing materials. To ensure comparability, at least one match was made for each printing parameter and for each printer (Table 2). The 15° inclination of the models is shown in Figures 3 and 4. The assembled canine stump is in an elevated position.
Altogether, 126 models were produced, of which 24 were produced by the SolFlex 350 (the S series), 48 by the CaraPrint 4.0 printer (the C series) and 54 by the Form 2 printer (the F series). The different number of models for each series is due to the morphology of each printer's build platform (Figures 5-7, Table 2). Similarly to the pilot study and the power analysis of Anadioti et al., a minimum of 20 models per test series were generated. In addition, several studies have used a minimum sample number of 10 models to test the influence of parameters [19,20,33,36,38]. Manufacturer- or printer-specific recommended resins were used for the models; their compositions are shown in Table 3. After printing, the models were post-processed according to the respective manufacturer's specifications. Until the time of the measurements, the models were stored in a climate chamber at a constant temperature of 21 °C. S-Series Models and Post-Processing After a 10 min draining period, the models were removed from the build platform and precleaned by repeated immersion in 98% pure isopropanol. Then, the models were placed in an ultrasonic bath (Sonorex RK100H, Badelin, Berlin, Germany) for 3 min. For the final cleaning, the models were placed in a fresh ultrasonic bath for 2 min. The models were dried with compressed air and placed in a xenon flashlight unit (Otoflash G171, dentona, Dortmund, Germany) for post-exposure, 15 min after the last isopropanol contact. The models were post-polymerised with two rounds of 2000 flashes (100 flashes per second), with a 2 min cooling phase between the rounds.
C-Series Models and Post-Processing After removing adherent liquid residue with compressed air, the models were removed from the build platform. The models were precleaned in an ultrasonic bath with 98% pure isopropanol for 3 min, and then post-cleaned in a fresh ultrasonic bath for 2 min. They did not spend more than 5 min in the ultrasonic bath. After cleaning, the models were dried with compressed air. The models were placed in a xenon flashlight unit (HiLite Power 3D, Kulzer GmbH, Hanau, Germany) two times for 5 min each: once on the upper side and once on the lower side. Finally, the support structures were removed with a scalpel (HS Disposable Scalpel, Henry Schein Dental Deutschland GmbH, Langen, Germany). F-Series Models and Post-Processing The models were removed from the printer, and adherent liquid residue was rinsed off with 99.9% pure isopropanol. The support structures were removed with a mill (H251E.104.040, Gebr. Brasseler GmbH & Co. KG, Lemgo, Germany). Subsequently, the models were cleaned two times (10 min each) with isopropanol on a vibration unit (KaVo EWL 5442, KaVo Elektrotechnisches Werk GmbH, Leutkirchen, Germany). Between the two cleanings, the released ingredients were removed with 3 bar of compressed air. The models were dried for 1 h at 60 °C in a drying oven (KaVo EWL TYP 5615, KaVo Elektrotechnisches Werk GmbH). They were then post-polymerised for 60 min at 350-500 nm (LUXOMAT D, UVA and blue-light tube with 350-500 nm wavelength, al dente Dentalprodukte GmbH, Horgenzell, Germany). Reference Models For reference, milled (Organical Multi Changer 20, Organical CAD/CAM GmbH) polyurethane models (Organic Model blank, Organical CAD/CAM GmbH) were fabricated using the stl dataset. For the plaster reference models, the master model was moulded with alginate (Tetrachrom, Kaniedenta GmbH & Co. AG, Herford, Germany) using an individual impression tray and, immediately after careful rinsing of the impression with water, cast with super-hard stone (Original Rocky Mountain [IV], Dental GmbH, Augsburg, Germany) according to the manufacturer's specifications. Calliper Measurements To evaluate the dimensional accuracy, the master, reference and printed models were measured under standardised conditions using a certified and calibrated calliper from Mitutoyo (ABS DIGIMATIC, Mitutoyo Europe GmbH, Neuss, Germany). The room temperature during the measurements was constant at 22 °C. The same investigator performed all of the measurements to keep the measurement error small. All distances listed in Table 1 and shown in Figure 2 were measured. The model was clamped between the straight surfaces of the calliper sectors to measure distances A, B, C and D. The distance between the stumps (distance E) was measured by opening the calliper until it was flush against the straight surfaces of the stumps. Each measurement was repeated five times. The mean value per section was calculated for each sample and transferred to an Excel file. Digital Measurement
Creation of a Test Specimen Scan Dataset For the scanning process, the models were coated with a thin layer of scan spray (3D Laser Scanning Anti-Reflection Spray MATT, HELLING GmbH, Heidgraben, Germany) from a distance of 25 cm and then scanned with a desktop scanner (D2000, 3Shape) at the highest detail level for the stump and arc. For this purpose, the models were fixed on the platform of the scanner using plasticine (Blu Tack Scan Fix, Ivoclar Vivadent GmbH, Ellwangen, Germany). The scan data were saved in the stl format. Comparative Measurements with the GOM Inspect Professional Software The matching process was performed using the GOM Inspect Professional 2021 software (Carl Zeiss GOM Metrology GmbH, Braunschweig, Germany). For this purpose, the .stl files were transferred to the software (Figure 8). In the software, the models were cut along the base plane in the X-axis to remove the plasticine base, which was necessary for scanning. The scan data record of the master model was defined as the target model (SOLL model), while the printed models represented the IST models. To compare the SOLL model to all IST models, the SOLL model was treated as a pseudo-CAD. For correct geometric positioning of the SOLL model in the 3D coordinate system of the GOM Inspect Professional software 2021, the scan data record of the SOLL model was subjected to a single-element transformation by auxiliary geometries (plane, line and point). Thus, the SOLL model could be moved in the global space and integrated into the coordinate system in an exactly defined way (Figure 9). Using the automatic initial alignment function of the GOM Inspect Professional software 2021, the printed and reference models were initially aligned to the SOLL model (Figure 10). For a more accurate match, a main alignment was performed using geometric elements (Figure 11). Thus, the models were aligned along the plane between the stumps and the two cones around the stumps (Figure 12). The deviations of the superimposed master model and printed model could be displayed in colour as an area comparison (Figure 13).
A digital measuring method was used to compare the calliper measurements with the digital matching process. The selected measurement technology is frequently used in the automotive and aerospace industries but has also been used in the dental sector [21,32]. With the GOM Inspect Professional software, the distance between opposing planes was determined, an approach that resembles the calliper principle. To measure the models according to the independence principle, symmetry planes were defined on the parallel faces, and theoretical edge points of the stumps were determined in the two-dimensional (2D) view of the software. This measurement key was transferred from the SOLL model to all IST models to determine the distances between the planes and lines for each model. Because the 3D-printed models did not have a flat surface over the entire side, the software specified a minimum and maximum value for each model (Figure 14). From this, the average value was calculated (over the largest area of triangles). The labelling on the models had no influence because all deviations > 5° from the surface were excluded from the digital measurement. The same investigator performed all of the digital and calliper measurements.
The mean values of each model were summarised in a table and transferred to Excel. The values of the SOLL model are shown in Table 4. Evaluation and Statistical Analysis Each model was coded based on its printing parameters. This allowed each distance to be rated individually for each parameter. According to ISO 5725-1:2023, accuracy is divided into trueness and precision, which are quantitative counterparts. Trueness is the degree of agreement between the arithmetic mean of a large number of test results and the true or accepted value. Precision is defined as the degree of agreement between test results and thus corresponds to the value of deviation with repeated measurement (repeatability value) [39]. To compare the measurement methods for each printer, the mean calliper and GOM Inspect Professional software measurements per distance were set in relation to the SOLL model. The statistical analysis was performed using SPSS Statistics version 21 (IBM Corp., Armonk, NY, USA). Apart from descriptive statistics, the Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check whether the data had a normal distribution. Levene's test was used to assess the homogeneity of variances. For the normally distributed data, statistical differences were assessed with Student's t-test or a univariate analysis of variance (ANOVA) followed by the post hoc Bonferroni test. The non-normally distributed data were assessed with the Mann-Whitney U test or the Kruskal-Wallis test. A p-value < 0.05 was considered to be statistically significant.
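The test-selection logic described above can be mirrored in a few lines; the sketch below (using scipy rather than SPSS, and hypothetical deviation data) applies Shapiro-Wilk, Levene and then either Student's t-test or the Mann-Whitney U test for a two-group comparison.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Decision tree from the text: Shapiro-Wilk for normality, Levene for
    variance homogeneity, then Student's t-test or Mann-Whitney U."""
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)
    _, p_lev = stats.levene(a, b)
    if p_a > alpha and p_b > alpha:                        # both groups normal
        _, p = stats.ttest_ind(a, b, equal_var=p_lev > alpha)
    else:                                                  # non-parametric fallback
        _, p = stats.mannwhitneyu(a, b)
    return p

# Hypothetical distance-B deviations from the SOLL model (um) for two series.
rng = np.random.default_rng(1)
series_c = rng.normal(loc=25, scale=8, size=24)
series_f = rng.normal(loc=142, scale=30, size=24)
print(f"p = {compare_two_groups(series_c, series_f):.2e}")  # p < 0.05 -> significant
```

For more than two groups, the same branching would lead to a one-way ANOVA with Bonferroni-corrected post hoc tests or to the Kruskal-Wallis test, as stated above.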
Reliability The accuracy of the measurement methods was investigated by assessing trueness and precision. Assessment of trueness requires the absolute value of the SOLL model; it can only be determined by using a higher-level measurement system (e.g., an ATOS scanner) under known material conditions. This is not possible in a real patient case, so the true value of the model was not used. To assess the precision of the calliper measurements, the method error was evaluated according to Dahlberg's method [40]. For this purpose, five samples from a printer were repeatedly measured with the calliper. The measurement accuracy was sufficient (0.047). To determine the precision of the GOM Inspect Professional software, 30 identical scans of the SOLL model were performed. The generated data record was matched with the SOLL model and measured. The average difference was 0.0101 mm, which is within the tolerance range. Thus, the measurement system itself did not generate fluctuations in the measurement results.
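For reference, Dahlberg's method error used above is ME = sqrt(Σ dᵢ² / 2n), where dᵢ is the difference between two repeated measurements of the same specimen; the sketch below computes it for five hypothetical measurement pairs (the 0.047 reported above comes from the real calliper data, not from these numbers).

```python
import numpy as np

# Dahlberg's method error: ME = sqrt(sum(d_i**2) / (2 n)), where d_i is the
# difference between two repeated measurements of the same specimen.
# The five paired values (mm) below are hypothetical.
first  = np.array([10.02, 14.98, 7.51, 10.01, 22.47])
second = np.array([10.05, 15.03, 7.46,  9.93, 22.52])
d = first - second
me = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
print(f"Dahlberg method error = {me:.3f} mm")
```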
Comparison of the Measurement Methods The SOLL and printed models were measured with a calliper and with the GOM Inspect Professional software. The calliper's manufacturer specifies its accuracy as ±0.02 mm. When measuring the SOLL model, there were clinically irrelevant differences of up to 10 µm for three distances (A, B and C; Table 5). To compare the measurement results between the methods, the mean value of all models of the respective measurement series was calculated for each defined distance. The basis for the comparison of the models was the measured values of the SOLL model in the GOM Inspect Professional software, and all differences were determined relative to these results. For all but 3 of the 15 possible comparison cases, the differences between the calliper measurements and the SOLL model measurements were lower (Table 5, values in italics) and corresponded more closely to the SOLL model. Overall, the mean measurement deviation between the methods was 77 µm. The Kolmogorov-Smirnov test showed that the S- and C-series results were normally distributed, while the F-series results were not. A comparison of the measurement methods showed no relevant difference for the normally distributed data (p > 0.05). Assuming that a 3D measurement by means of geometric, flat elements produces more exact values, and to exclude an improvement of the results, the statistical evaluation was based on the scanned models using the GOM Inspect Professional software (Table 5, values in bold). General Results Compared with the Reference Models As shown in Table 6, the GOM Inspect Professional software results for the printed models showed shorter distances compared with the digital SOLL model, so the real models were comparatively smaller. The exceptions were distance E in the S series (+113 µm) and distance C in the C series (+20 µm). The C-series models showed the smallest differences (a maximum of 47 µm for distance B and a minimum of 6 µm for distance D) and greater agreement with the SOLL model than the classic plaster models (a maximum of 76 µm for distance A and a minimum of 22 µm for distance D), which were slightly enlarged. The greatest differences were in the S series (a maximum of 327 µm for distance B and a minimum of 112 µm for distance E). The milled plastic models were also smaller than the SOLL model, with only distances A and B showing differences of >100 µm (Table 6). Model Structure: Hollow Versus Solid Comparison There were no uniform changes distributed over the measured distances (Figure 15). The S series showed a smaller difference for the hollow models at distance C (p < 0.002). At distance B (p < 0.001), but also minimally at distances A and D, the differences were smaller for the solid models. There was an evident extension for distance E, which was smaller for the solid models (58 µm) than for the hollow models (167 µm; p = 0.000). For the C series, the mean dimensional differences between the solid and hollow models did not exceed 20 µm and were both contractive and expansive. The differences at distance E (p < 0.000) and distance B (p < 0.02) were in favour of the hollow models. The F-series models were consistently contracted; there were no relevant differences between the solid and hollow models (a maximum of 30 µm) for distances A, C, D and E. There was a tendency for the differences to be smaller for the solid models, especially for distance B (p < 0.001). The only exception was distance E, although this was not statistically significant (p > 0.05). Model Base Orientation In the S series, there were no differences between the across and parallel printed models in directions A, C and D (a maximum difference of 16 µm, Figure 16). For distances B and E, the parallel models were closer to the SOLL model. In contrast to the other distances, distance E showed expansion. Due to the large scatter, no difference was statistically significant (p > 0.05). The C-series models showed a maximum difference of 35 µm at distance B. At distances A, C, D and E, the differences between the parallel and across orientations were <17.5 µm and both expansive and contractive. The differences for the across printed models tended to be closer to the SOLL model at distances B (p < 0.000) and E (p < 0.026). For the F series, the largest difference between the across and parallel models was for distance B (63 µm, p < 0.000). For distances B and E, the parallel models showed smaller differences relative to the SOLL model. The differences for distances A, C and D were very small, with a maximum difference of 6 µm.
Support Structure/Inclination In the S series, all models were printed without support. For the C series, the differences between the models with and without a support structure relative to the SOLL model were smaller for distances A (p < 0.001) and D (p < 0.014) but larger for distances B, E (p < 0.002), and especially C (p < 0.000). In general, the deviations of all models relative to the SOLL model were <69.2 µm. In the F series, the models printed with support were closer to the SOLL model for all distances. The mean differences between the models with and without support were between 12.5 and 86.7 µm. The largest deviation from the SOLL model was for distance B for the model without support (225.6 µm). The differences in the use of support structures were significant for distances B (p < 0.001) and C (p < 0.000). Positions on the Build Platform Due to the different dimensions of the build platforms and the resulting different arrangement of the models, it was not possible to compare the corresponding models between the different printers. For the S series, there were no deviations depending on the model's position on the build platform (Figure 5), and only the differences explained in Sections 3.4.1 and 3.4.2 were confirmed. The same could be observed for the C series (Figure 6) and the F series (Figure 7).
Discussion Dimensional behaviour is an important factor in the production of dental restorations. Additive manufacturing processes are becoming increasingly important, both in the direct production of dental restorations and in the indirect production of supporting parts. Only sufficiently high dimensional stability, in terms of precision and accuracy, can guarantee that dental restorations fit accurately and function appropriately [23,27,41,42]. Despite the digital workflow, some work processes, for example the preparation and fit control of crowns and bridges, require working or saw-cut models as a basis [43]. In individual cases, additively manufactured models (for implant or bar restorations) also form the basis for additional production processes. According to the relevant literature, digital methods that employ scanning and matching processes are used for the measurements [5,19,24,27,36], although analogue measurements with callipers have also proved successful [11,22,34,35]. In the literature, it is recommended that models for prosthetic restorations deviate by no more than ±120 µm [44]. In studies, additively manufactured models have achieved an accuracy of <100 µm [10,15]. In the authors' opinion, the upper limit of ±50 µm specified for the classic plaster models should be the aim. The mean deviations in this study were 200.9 µm for the S series, 25.2 µm for the C series, 142 µm for the F series, 72 µm for the plaster models and 74 µm for the milled models (see Tables 5 and 6). In many studies, plaster models have demonstrated superior accuracy compared with milled or printed models [22,23,27,28,36], which is congruent with the results of the present study, with the exception of the C series. The statistical analyses of this study showed that hypothesis (1) must be rejected: the choice of printer and material had an effect on the dimensional behaviour. Overall, compared with the C series (MWT: 25 µm), the dimensional deviation increased by a factor of 8 for the S series and by a factor of 5.6 for the F series. The differences between the individual test series may be due to the printers' modes of operation and/or the material properties [21,45]. Great importance is attributed to the exposure technology. A moving exposure unit and the resulting constant distance between the light source and the object result in lower light-scattering losses and thus less loss of precision due to distortion. This could not be confirmed, given the highest differences for the S-series models. The light sources also vary between the printers used in the present study. On the one hand, the materials polymerise as specified by the manufacturer when different wavelengths of light are used (the S series at 385 nm and the C and F series at 405 nm). On the other hand, different modes of operation are used. While the S- and C-series printers use the DLP method, the F-series printer is based on SLA.
The print quality is also influenced by the XY and Z resolution of the respective printers. According to the manufacturers, the greatest accuracy can be achieved with a high Z-axis resolution and the smallest possible minimum structure size (XY resolution). The Z-axis resolution, that is, the layer height, can be defined by the user; in the present study, it was set at 50 µm for better comparability among the series. The DLP printers have a defined pixel matrix in relation to the exposure area (build platform), which also depends on the projector. For each pixel, there is an actual XY value, which must be determined metrologically by the manufacturers. In the case of the S-series printer, a pixel size of 50 µm is specified, with an indication of the variation of this size with the print volume, as well as an accuracy of ±25 µm (508 dpi resolution). The C-series printer is said to have a resolution of 1920 × 1080 with a minimum structure size of 65 µm. As the C-series values show, a sufficiently high dimensional stability can be achieved with a pixel size of 65 µm. Despite the smaller minimum structure size declared by the manufacturer, the differences were larger for the S-series models. The cause could lie in the shape of the pixels: they need not be exclusively square but can also be diamond-shaped, depending on the projector. The latter would produce different sizes in the X and Y directions and, accordingly, may have had a negative effect on precision or dimension [46]. In SLA technology, the smallest structure size is ideally defined by the laser spot size; the smaller the value, the higher the level of detail. The laser spot is specified as 140 µm for the F-series printer. It describes the smallest movement that the laser is capable of making within a layer. Investigations by the manufacturer proved reliable XY structures only at a dimension of 150 µm, which was therefore set as the minimum structure size [47]. Therefore, the more than doubled minimum structure size compared with the C-series printer could be a reason for the larger differences compared with the SOLL model. In the literature, researchers have stated that precision can decrease with a higher resolution, or lower slice height, in the Z-direction [24]. Possible causes are an increase in the number of exposure layers and more frequent repositioning of the build platform in the Z direction, leading to an increased potential for errors, artefacts and failures during the course of a print [24,46,48]. In this context, overexposure of already-exposed layers can also play a role. Depending on the material composition and colour, the projected light can penetrate into deeper levels and cause distortion [9]. On the other hand, Zhang et al. [49] found that for SLA printing, accuracy increases as the layer thickness decreases. The authors stated that nonlinear edges in the printed object are not positioned directly on the Z or X/Y plane; therefore, the layer thickness determines the number of discrete points. A thinner layer creates multiple discrete dots and thus a smoother and more detailed surface, making printing more accurate. In contrast, a thicker layer has fewer discrete points with wider spacing, which leads to a staircase effect at the edge and affects accuracy [49]. Because the layer thickness in the present study was fixed at 50 µm, no statements can be made regarding possible changes within the series. Hence, this issue requires further investigation.
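As a quick plausibility check (not taken from the source), the 508 dpi quoted for the S-series printer corresponds exactly to the stated 50 µm pixel pitch:

\[ \text{pitch} = \frac{25.4\,\text{mm}}{508\,\text{px}} = 0.050\,\text{mm} = 50\,\mu\text{m}. \]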
The S- and C-series printers are semi-closed and closed systems, respectively; thus, the materials recommended by the manufacturers were used for the test specimens in this study. These materials were liquid photopolymers based on acrylates or methacrylates (Table 3). Consistent with the literature, the test specimens underwent shrinkage due to the polymerisation process [18]. In this process, weak long-range van der Waals interactions are replaced by strong, short covalent bonds between the carbon atoms of the monomer units: the ultraviolet (UV)-active monomers radicalise and convert into polymer chains [50]. As a result of the choice of printer-specific materials, the shrinkage factor may have varied independently of the printer and may have been most pronounced in the S-series models. Basically, the composition of the resin, specifically the photoinitiator concentration, in combination with the processing procedure, influences the mechanical, biocompatible and aesthetic aspects of the printed components [45,51-55].
Studies have also shown that different exposure sources during post-polymerisation can change material properties, for example the breaking load of temporary plastic materials [45]. Not all light-curing devices can activate reactive groups equally effectively, and polymerisation is affected by the irradiation intensity and duration [56]. The polymer conversion rate, that is, the true crosslink density in the polymer network, can therefore vary depending on the processing. In principle, the volumetric shrinkage of the printed components depends on the chemical reaction that occurs and, according to Schümann et al. [57], also on process-related thermal changes. In the present study, the test specimens were subjected to post-polymerisation. Depending on the manufacturer's specifications and the light-curing unit, the times varied considerably (6.67 min for the S series, 10 min for the C series and 60 min for the F series), meaning that there were variable temperature influences depending on the series. Compared with the C-series models, the F-series models were exposed for six times as long and thus to the associated heat. Depending on the coefficients of thermal expansion, it can be assumed that thermal expansion increases with temperature and that internal stresses, which could lead to geometric distortions, increase as well.
The manufacturers provided relatively little information on the materials used in the present study (i.e., on the safety data sheets); thus, no specific conclusions can be drawn about the materials. The density was only specified for the C series (1 g/cm³) and the F series (1.08 g/cm³), and the difference was relatively small. The density of the starting resins can be increased by adding particles, which affects both the liquid and the solid density. The particles can also influence the crosslinking process during printing and post-processing: the crosslinking reaction could be limited by hindered mobility of the polymer molecules [58] or accelerated by specific surface modifications or complex interactions between the matrix and the particles [59,60], and these opposing factors could offset each other. The influence of the additive process itself and of the post-curing process was not investigated in the present study; that would have required an intermediate measurement.
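The plausibility of thermally induced distortion can be estimated with the linear expansion relation ΔL = α·L·ΔT; the expansion coefficient and temperature rise below are assumed illustrative values for an acrylate resin, not measured data from this study:

```python
alpha_per_K = 100e-6   # assumed linear expansion coefficient of the resin (1/K)
length_mm = 50.0       # assumed characteristic model dimension
delta_T_K = 20.0       # assumed temperature rise in the light-curing unit

delta_L_um = alpha_per_K * length_mm * delta_T_K * 1000  # mm -> um
print(f"estimated thermal length change: {delta_L_um:.0f} um")  # 100 um
```

Even under these modest assumptions, the change is on the order of the clinical tolerance, which illustrates why the sixfold post-curing time of the F series could matter.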
In addition to the selection of the resin, it is important that the hardware is calibrated, for example, that the exposed pixels of the 3D printer match the STL file. For this purpose, the manufacturer usually provides corresponding calibration parts. The specimens were manufactured according to the company specifications or the specifications of the dental laboratory. At the time of the study, it was assumed that the printers were calibrated correctly, but no corresponding certificates were available.
Hypothesis (2) must be partially rejected. While the exact positioning on the build platform did not produce any clearly visible changes, the orientation and alignment of the models to the front side of the respective printers showed, in part (the C and F series), distance-specific differences in the dimensional behaviour. For the C series, the measurements of the models oriented across to the front were slightly closer to the SOLL model measurements. As proposed by Lederer et al. [46], a pixel geometry deviating from the square shape could be the cause of this outcome. Each underlying micromirror can change its orientation between 1° and 12° relative to the beam axis, exposing and curing only the desired areas; the XY resolution is determined by the pixel size [61]. This means that the minimum structure size is distributed differently with respect to the basic shape of the test specimens and also differs depending on the direction. There is no comparable study in the literature on dimensional stability that chooses the printer front as the reference point for alignment. Some researchers have investigated the build orientation, where the term perpendicular refers to the Z-axis, that is, the specimens were printed upright [9,62,63]. Park et al. [64] investigated the influence of the orientation on the dimensional accuracy of a bridge, where the orientation was varied by rotation angles around the XY-axis. They concluded that, depending on the orientation of the structure, the shape of the exposure surface changes and, therefore, the shape and degree of polymerisation shrinkage are affected.
The measurements of the F-series models showed the greatest differences from the SOLL model, especially for the longest distance (B). In the F-series printer, polymerisation is activated sequentially by a laser dot. Despite the monofrequency and linearly polarised laser light, the beam is subject to divergence. This is mitigated by optical systems such as mirrors or lenses; however, beam propagation, that is, the divergence of the beam, cannot be completely prevented and increases with the length of the beam path. In addition, when the laser light is generated, transverse oscillation modes in the laser resonator can change the beam (in height and width) in different spatial directions. Depending on the arrangement of the laser in the printer and the distance to the test specimens, minimal deviations from the optimum diameter (140 µm) could add up, especially over longer distances, and be reflected in the measurements. Favero et al. [24] confirmed the influence of the laser spot and of radical polymerisation kinetics on the resolution of the SLA print; poorer XY resolution could lead to unintended curing outside the object boundary.
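The path-length argument can be illustrated with textbook Gaussian-beam optics. This is a deliberately simplified sketch that ignores the printer's focusing optics, so the absolute numbers overstate the real effect; only the trend with distance is meaningful:

```python
import math

wavelength_m = 405e-9   # F-series laser wavelength quoted above
w0_m = 70e-6            # assumed beam waist radius, i.e. half the 140 um spot diameter

rayleigh_m = math.pi * w0_m**2 / wavelength_m  # range over which the beam stays tight
for z_mm in (50, 100, 150):
    w_m = w0_m * math.sqrt(1 + (z_mm / 1000 / rayleigh_m) ** 2)
    print(f"spot diameter after {z_mm} mm of free propagation: {2 * w_m * 1e6:.0f} um")
```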
Kim et al. [21] noted that the SLA technique is prone to errors due to the mirror and the comparatively slow laser motion. The selection of laser intensity and speed to avoid refraction of the light is critical to the reproducibility of SLA printing [65,66]. Shim et al. [62] stated that the refraction of light in SLA printing was lower along the vertical axis than along the XY-axis; in the present study, a different arrangement of the samples along the Z-axis might therefore have provided better dimensional stability. In contrast, the faster DLP process minimises the error probability associated with repeated exposure.
Comparison of the solid and hollow models revealed section-specific differences, so hypothesis (3) must be partially rejected. The results showed that linear shrinkage can differ from 3D shrinkage. Indeed, the S-series model measurements showed a comparatively large shrinkage for distances A-D, which resulted in a positive difference compared with the master model in distance E, amplified in the hollow models. There was a similar, but significantly less pronounced, effect in the C-series hollow models. In contrast, there were consistently negative differences for the F series; again, the hollow models showed somewhat increased shrinkage. These findings suggest that for non-uniform wall thicknesses and flat or wide parts, temperature variations are more likely to cause deviations and deformations of the desired geometry. By contrast, other researchers have reported no relevant differences and have recommended hollow structures to reduce printing time, material consumption and costs [10,11]. Chuang et al. [67] found that homogeneous shrinkage occurred on smooth, straight surfaces, which they expected based on the uniform contact with the polymer. Rungrojwittayakul et al. [10] observed asymmetric shrinkage patterns on occlusal depressions, which they attributed to a lack of direct exposure. In the present study, the hollow models had a more complex geometry, which made direct exposure of all surfaces difficult. The drain holes in the design are indispensable to avoid the accumulation of liquid resin inside the structure [68,69]. In addition to deformations due to the container effect (surface tension), discolorations are a possible consequence. The approximately 20% larger surface area of the hollow patterns retains more uncured resin and thus increases the washing effort; the larger surface area also carries the risk of greater stresses during post-curing. Because of the reduced contact areas of hollow models, adhesion problems to the build platform have been observed [68].
There were significant differences between models with and without support structures, so hypothesis (4) must be rejected. On average, the F-series models printed with a support structure showed smaller measurements compared with the SOLL model. As already mentioned, the final shrinkage of the test specimens depends on the material, the chemical setting process and the environmental influences or temperature variations during the printing process. The shrinkage process involves physical shrinkage of the polymer during curing and the change in geometry due to the specific thermal expansion coefficient during cooling of the material, both of which lead to internal stresses. These can be amplified by constraining edge conditions, in this case by the continuous adhesion of the specimens to the build platform, and lead to larger dimensional changes compared with specimens printed on a support structure. Overhangs demand the use of support structures to prevent sagging and delamination of the component [69]. Alharbi et al. [70] found that the number and geometry of support structures can affect accuracy: a high number of support structures introduces potential errors when they are separated from the part. It is postulated that ideal alignment, with maximal self-supporting surfaces, can minimise defects and the time required for finishing and polishing. These points are very important for directly printed dentures, and less so for model bases. In general, the distribution of the supports had a greater influence than their diameter [70]. Unkovskiy et al. [63] reported that for samples without support, there was a significant deviation in the Z-axis. The cause seems to be the first layer on the build platform: to ensure a secure hold of an object, the first layer is irradiated for a longer time, which can lead to compression and thus a shorter total height and an overhang in the width. Because the lower area was trimmed when measuring the models, no direct comparison can be made to evaluate this view, although the differences in direction A could be influenced by this phenomenon.
Osman et al. observed positive deviations on printed objects in their study. They suspected that these were due to the upward movement of the build platform during the fabrication process and sagging of the material under its own weight, in combination with the curing pattern of the DLP technique; over-hardening or post-hardening of the layers was indicated as the cause. For the C-series model with support, there were positive differences in distance C (Figure 17). A more detailed analysis showed that, compared with the SOLL model, there was material accumulation on the flattened side of the smaller die (Figure 18). It is therefore reasonable to assume that the inclined position of the flat surface of die C, resulting from the placement of the support structure, prevented initially uncured polymer from flowing off. Due to ambient light or the curing of subsequent layers, this residual polymer may then have hardened undesirably. Given that the inclination of the models was identical in all series, this phenomenon was likely related to the material; in addition to a lower viscosity, surface modifications (charges) may be the cause of the buildup.
In the literature, it is recommended that models for prosthetic restorations deviate by no more than ±120 µm [44]. This recommendation is justified by possible consequential errors that affect the accuracy of fit of the restorations: for example, a restoration fabricated on a shrunken, relatively too-small model would itself be produced too small. In the present study, most 3D-printed models achieved the level of accuracy found in the literature, although there were significant differences depending on the printer and the printing parameters used. Unfortunately, no generalised recommendations could be derived from the present results. From a technical point of view, it is therefore highly advisable to use test prints to determine the ideal orientation of the components to be fabricated, e.g., splints, bridges and crowns.
The comparison of the measurement methods showed smaller differences mainly for the analogue method. Considering all of the distances, the deviation was 79 µm. Given the error variables (measurement error, scan error, alignment error during superposition), this amount is within the clinical tolerance, so hypothesis (5) can be accepted. Studies have shown that printing parameters such as layer thickness and the type and number of support structures can influence the surface quality [12]. A distinction is made between the real surface of the workpiece in relation to its environment and the actual surface; the latter is defined as the surface that can be measured and mapped, and thus reflects only an approximate image of the real surface. The calliper measurements of the outer dimensions (distances A to D) were carried out with the outer measuring legs, and the inside dimension (distance E) was measured with the inside measuring legs. For the most accurate representation of each section, the largest possible inner leg area was used, although the planar contact area of the legs with the section being measured is severely limited and far from equal to the total area of the workpiece. The measurement basis for the legs is exclusively the outermost elevations of the surface, which corresponds to the maximum roughness height (Rmax) of the classic surface profile. Surfaces also form the basis for the software measurement. Here, a distinction is made between the geometrically ideal surface, which is usually given by the nominal values in design drawings or constructions, and the actual measurable surface. The present study did not use a purely virtually developed CAD body: in line with the clinical situation, and analogous to patient care, a highly simplified geometric bridge was scanned and used as the original dataset. As a result, the 3D-printed models did not have perfectly flat surfaces over the entire side, and the software reported minimum and maximum values for the actual dimensions (Figure 13), from which the mean value was calculated. These findings may explain the differences between the two measurement methods.
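The distinction between the distance-based mean deviations used here and the whole-surface figure reported by matching software (discussed next) can be made concrete with a small sketch; the deviation values are made up for illustration:

```python
import math
import statistics

deviations_um = [-30.0, -12.0, 5.0, 18.0, -25.0, 10.0]  # signed point deviations

mean_signed = statistics.mean(deviations_um)             # positives and negatives cancel
rms = math.sqrt(statistics.mean(d * d for d in deviations_um))

print(f"mean of signed deviations: {mean_signed:+.1f} um")  # -5.7 um
print(f"RMS deviation:             {rms:.1f} um")           # 18.8 um
```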
Matching displays the root mean square (RMS) deviation, which is used extensively in the literature and aggregates the positive and negative deviations of the entire surface into a single value. This value therefore shows the deviation between the SOLL model and the actual model as a whole. Because of the intended comparison of the different measurement methods, the labelling of the models shown in Figure 12 and the cutting of the samples from the basal side, this value was not used for evaluation in the present study. In the software programs, the alignments are often performed automatically; Figures 10 and 11 clearly show that this approach may be insufficient without fine adjustment. The initial alignment of the specimens serves only as a rough preliminary alignment so that the elements constructed for further alignments can be reasonably calculated. The actual main alignment corresponds to the fine adjustment, in which all mesh data are mathematically assigned to the CAD data in a deliberate and always identical way, an approach that provides reproducibility and stability in the evaluation.
Conclusions Although there were differences in accuracy between the models of the various manufacturing processes in this investigation, they were all within the range of clinical acceptance described in the literature. No general manufacturing recommendation for 3D-printed models can be derived from the present results. In general, there was no influence on accuracy due to the positioning on the build platform. In contrast, the measured values reflected printer-specific differences depending on the orientation of the samples to the respective printer front. Full models and the use of support structures tended to produce higher accuracy. When selecting support structures, however, printer- and material-specific defect areas (incorrect polymerisation of residual resin that does not flow off, overhangs) are possible. It was not possible to determine the influence of the material on accuracy due to the very limited information provided by the manufacturers. Thus, individual printer- and specimen-specific workflows are indispensable to ensure high accuracy and precision. The comparability of the overall results of the digital calliper versus the 3D measurements suggests that both approaches can be applied regularly in clinical practice.
Figure 1. The master model.
Figure 2. The measuring distances; letters are described in Table 1.
Figure 3. Support structure of the C series.
Figure 4. Support structure of the F series.
Figure 5. S-series models on the build platform. P = parallel, A = across (relative to the respective printer front).
Figure 6. C-series models on the build platform. P = parallel, A = across.
Figure 7. F-series models on the build platform. P = parallel, A = across.
Figure 8. The SOLL model (left) and test models (IST; middle and right, lateral views) in the GOM Inspect Professional software 2021 before matching.
Figure 9. Alignment of the SOLL model in the 3D coordinate system.
Figure 12. The geometric elements of the main alignment.
Figure 13. Area comparison of the master model and the printed models.
Figure 14. An example of the distance values between the master model and the test body.
Figure 15. The mean differences as a function of the distances and the model structure for each series.
Figure 16. The mean differences as a function of the distances and the placement for each series.
Figure 17. The mean differences as a function of the distances and the support for each series.
Figure 18. Colour map of the differences between the SOLL model and a C-series model.
Table 1. The measuring distances: A = short side of base; B = long side of base; C = diameter of the small stump; D = diameter of the large stump; E = distance between the stumps.
Table 2. An overview of the printers and the printing materials.
Table 3. The composition of the resins.
Table 4. The values of the master model from the GOM Inspect Professional software and the calliper measurements.
Table 5. The general results showing the differences in the deviations between the calliper and GOM Inspect Professional software measurements compared with the digital SOLL model (all measurements in mm). * The largest deviation was 128 µm. ** The smallest deviation was 20 µm.
Table 6. The reference model measurements determined with the GOM Inspect Professional software.
14,915.6
2024-07-01T00:00:00.000
[ "Engineering", "Medicine" ]
Development of agribusiness-based tourism villages in Sigi Regency, Central Sulawesi, Indonesia In order to increase economic growth and the welfare of farmers, eradicate poverty, overcome unemployment, preserve nature, the environment and resources, and promote culture, the potential of an area needs to be developed. The aim of the research is to develop an agribusiness-based tourism village development plan. Respondents were selected deliberately, based on the highest position held in their organisation. There were 72 respondents, drawn from technical regional government organisations, village heads, sub-district heads, village youth organisations and academics. The analytical methods for determining a tourism village and planning the development of an agribusiness-based tourism village were the Borda method and the Analytic Hierarchy Process (AHP). The results show the direction of priority development for each designated agribusiness-based tourism village based on accessibility, amenities, attractions, accommodation and activities, with the support of agribusiness potential, social aspects and village institutions. Introduction Along with the acceleration of regional cluster-based economic development in Central Sulawesi Province, Sigi Regency has a very large opportunity to contribute to national food procurement. There are 4 regional clusters whose development is the focus of accelerating the economic development of Central Sulawesi Province. Sigi Regency is included in 2 clusters, namely the Pasigala urban cluster and the agropolitan cluster of the archipelago food area. Several villages in Sigi Regency are included in the National Food Area (KPN). The KPN will act as a buffer for the food needs of the Archipelago Capital City (IKN) in East Kalimantan. This is an opportunity for Sigi Regency to develop agribusiness-based tourism villages within the corridor of accelerating economic development in Central Sulawesi Province [1]. Resource management capabilities include capabilities in terms of: (1) establishing development programs; (2) mobilising the community; and (3) setting priorities for budget allocation, which will later determine the success of doing agribusiness in Sigi Regency to produce quality agricultural products that go hand in hand with the development of the agribusiness-based tourism sector [2].
Research on an agribusiness-based tourism village development model is one such study and is very much in line with the Sigi District RPJMD Vision 2021-2026. Agribusiness-based tourism villages are expected to help alleviate poverty through the development of tourism that is oriented to local potential, in order to increase income and welfare. Empowerment of the poor through community approaches and awareness-raising is needed so that the poor can use, and have access to control of, the development of tourist villages. Tourism activities utilising local resources have begun to be developed around sustainable economic goals, supporting efforts to preserve the environment and increasing the welfare of the local community. Developing agribusiness-based tourism is one of the efforts to realise the Medium Term Development Vision of Sigi Regency, in part by creating sustainable competitiveness for Sigi Regency through tourism development: village-based tourism, with agribusiness as the spearhead of the community's economic development. For this reason, the aim of the research is to develop a design for the development of an agribusiness-based tourism village.
Research method A descriptive analysis approach is used to describe, demonstrate and summarise data points so that patterns can emerge that cover all data conditions. Data collection for the research on the development of tourist villages was carried out as follows. Method of collecting data 2.1.1. Questionnaire. Data were collected by distributing a closed list of questions to respondents to obtain an overview of, and information on, the assessment criteria for agribusiness-based tourism villages. 2.1.2. Interviews. In-depth interviews were conducted by posing questions directly to key informants in each village with agribusiness-based tourism potential. 2.1.3. Focus group discussions. Focus group discussions involving interested parties who understand the important role of the village as part of a tourist destination were carried out to obtain an overview of the condition of villages with potential as agribusiness-based tourism villages. 2.2. Data types and sources 2.2.1. Primary data: data and information obtained directly from the respondents, mainly through the FGDs and interviews with selected respondents. 2.2.2. Secondary data: data and information obtained from documents, publications and research reports from offices/agencies or other supporting sources, especially those related to the implementation of tourism in Sigi Regency. Implementation stages The determination of agribusiness-based tourism villages in Sigi Regency was carried out through the following stages. 2.3.1. Focus group discussion (FGD). In determining the tourism villages, an FGD was carried out with stakeholders consisting of the district, sub-district and village governments as well as village youth organisations (Karang Taruna) and academics. Through focused discussions, the criteria for an agribusiness tourism village were obtained, identifying the potential tourism villages of the future. 2.3.2. Questionnaire. Questionnaires were distributed to key informants to assess the criteria for potential villages in Sigi Regency. 2.3.3. Observation of alternative villages.
Direct observations were conducted of villages with the potential to become agribusiness tourism villages. These observations were carried out to verify and confirm the results of the FGD and of the processed questionnaire data. 2.3.4. Alternative village interviews. Structured interviews were held with stakeholders in each potential village selected through the FGD analysis and the distributed questionnaires. Determination of respondents The respondents in this study were selected purposively (deliberately), on the basis that the designated respondents were key informants for the development of agribusiness-based tourism villages. Based on the Decree of the Sigi Regent Number 556-296 of 2020 concerning the Establishment of Tourism Villages in Sigi Regency, 72 respondents were recruited: 18 village heads, 18 sub-district heads (camat), 18 youth leaders, 10 academics and 8 representatives of technical local government organisations (OPD). 2.5. Analysis 2.5.1. Borda method. To determine the degree of priority for tourism village development among the various agribusiness tourism candidates, the Borda method is used [3]. The Borda method ranks the villages and potential tourist attractions (DTWs) that could become agro-tourism villages in Sigi Regency. In the standard formulation, the Borda score of alternative j (from A to Z) is

B_j = \sum_{k=1}^{m} (m - k) \, P_{jk},

where m is the number of alternatives and P_{jk} is the number of informants who assigned rank k to alternative j.
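A minimal sketch of this Borda count in code, assuming each informant supplies a complete ranking; the village names are hypothetical placeholders:

```python
from collections import defaultdict

# One ranked list of alternatives per informant (best first).
rankings = [
    ["Village A", "Village B", "Village C"],
    ["Village B", "Village A", "Village C"],
    ["Village A", "Village C", "Village B"],
]

m = len(rankings[0])                 # number of alternatives
scores: dict[str, int] = defaultdict(int)
for ranking in rankings:
    for rank, village in enumerate(ranking, start=1):
        scores[village] += m - rank  # rank k earns (m - k) points

for village, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(village, score)            # Village A 5, Village B 3, Village C 1
```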
2.5.2. AHP method (Analytic Hierarchy Process). The AHP method is used to determine the degree of importance of the development strategies for the selected agribusiness-based tourism villages, in order to achieve the various objectives. The AHP analysis phase includes [4]: Step 1, setting up the hierarchy; Step 2, determining the comparative scale from 1 to 9; Step 3, constructing pairwise comparison matrices between criteria with respect to their level of importance, based on the criteria for agro-tourism development strategies in Sigi District, where if element i is assigned one of the scale values when compared with element j, then j takes the reciprocal value when compared with i; and Step 4, calculating the consistency index (CI).
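Step 4 can be sketched numerically: the consistency index is CI = (λmax - n)/(n - 1) and is conventionally compared against a random index RI to give the consistency ratio CR = CI/RI, with CR < 0.1 regarded as acceptable. The 3 × 3 pairwise matrix below is a made-up example on the 1-9 scale, not data from the study:

```python
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])  # reciprocal pairwise comparison matrix

n = A.shape[0]
lambda_max = max(np.linalg.eigvals(A).real)

CI = (lambda_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random indices
CR = CI / RI
print(f"lambda_max = {lambda_max:.3f}, CI = {CI:.4f}, CR = {CR:.4f}")
```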
Determination of agribusiness-based tourism villages The Regent of Sigi has designated 11 villages as tourist villages in Sigi Regency. The agribusiness potential of these tourism villages will be explored and then developed so that, in addition to natural and cultural tourism destinations, they will also be supported by the development of sustainable agricultural tourism. Table 2 illustrates the potential of the 11 Sigi Regency tourism villages that will be programmed to gain added value from the agricultural sector as agribusiness-based tourism villages (source: FGD results processed by the research team in 2022 [5]). The designation of a tourist village must be carried out carefully, taking into account various aspects: (1) The distance should remain within a half-hour to one-hour drive from the city. Tourism villages draw their market potential mainly from people in urban areas; with this distance, local tourists can visit attractions without having to stay overnight, and the short distance allows tourists to bring their families for a trip, so agro-tourism becomes a form of family and educational tourism. (2) Linkages between DTWs. Tourism villages must have more than one tourism object, which allows tourists to gain various experiences in one tourist area; thus, the potential for agribusiness development in one village can cover more than one commodity. (3) The social community. The basic strength supporting agribusiness-based tourism at the village level is the community. A community attitude of willingness to be involved in tourism, early community activity in tourism and agribusiness commodities already cultivated by the community make the development of tourist villages easier. The development of an agribusiness tourism village is thus a collaborative activity whose various facilities require different roles of the OPDs.
For this purpose, in the early stages of developing a tourism village, an FGD was conducted by inviting all related OPDs in Sigi Regency. This was intended to obtain initial information related to efforts to develop an agribusiness tourism village, to explore inter-OPD policies relevant to the agribusiness tourism villages of Sigi Regency and to reveal the various risks and development potentials. The data obtained from the FGD and the secondary data, especially on the prospective villages that could be developed into agribusiness tourism villages, were then analysed using the Borda method to produce a degree of priority for development among the existing village alternatives. As a first step, the alternative villages were those named in the Sigi District Head's Decree designating 11 tourist villages, plus an additional 7 villages in the national capital (IKN) buffer food area. Based on the results of the Borda analysis, a priority ranking for the development of agribusiness-based tourism villages in Sigi Regency was obtained (Table 4). This ranking indicates the stages of developing the agribusiness tourism villages. All 18 villages have tourism potential, having been so designated by the Sigi district head and the KPN as areas with good resource potential. Given sufficient financial capacity, the Sigi Regency government could develop all 18 villages simultaneously, which would deliver economic growth to Sigi Regency from agribusiness tourism.
Tourism destination development Tourism destination development is an effort to improve the quality of tourism by completing facilities, improving services and utilising regional potentials that support tourism development. The development of tourist destinations by utilising regional potential, especially in the agricultural sector, aims to attract tourists to visit and to improve the economy of the community in tourist destination areas in a sustainable manner. Tourist destinations are not only tourist attractions; other things are important too, especially the education that can be obtained at these sites based on the resources around the destination. A tourist location requires various types of activities or attractions that differ from other tours, which can be distinguished by how these attractions are presented. In addition, what tourists can take back with them is an important element in making the tour leave a lasting impression.
Agriculture as the core of tourism development As a follow-up to the development of agribusiness tourism villages, design plans for the development of tourism destination areas in the priority villages must be drawn up [1]. This is important in order to align programmes between the related OPDs in the development of agribusiness-based tourism villages. Agriculture is the spearhead of the economy in Sigi Regency. The Sigi District Government has required each village to prepare three hectares of agricultural land to improve and maintain food security. In addition to providing land, each village must also prepare two tons of rice for village food security. The programme is solely to secure food availability in the area: if it can be carried out in every village in Sigi Regency, food stocks as a whole are guaranteed to be safe and sufficient for the community's needs. In an effort to improve and maintain food security in Sigi Regency, the central and regional governments have provided assistance with seeds and agricultural machinery (alsintan) to farmer groups in each sub-district and village. The government continues to encourage farmers to increase the production and productivity of food and horticulture in the area by using superior seeds and adequate agricultural production inputs. Every year, farmers in Sigi receive seed, fertiliser, medicine and alsintan assistance from the government as a form of government support for the community, especially farmers. Agricultural development, in this case intensive agribusiness, is the basic capital for developing agricultural tourism. All districts in Sigi Regency have agricultural potential; in this study, however, the 11 tourism villages designated by the Regent of Sigi plus the 7 villages in the Archipelago Food Area (KPN) were the pilot locations for agribusiness-based tourism development.
The 5A formula for tourism development The 5A formula consists of attraction, activity, accessibility, accommodation and amenities. Attraction refers to the main appeal of a place that makes people want to visit it. The attractions in the 11 tourism villages designated by the Decree of the Regent of Sigi are natural and man-made attractions. These attractions can be further improved so that the destinations become more appealing: for example, at bathing places and paragliding and cultural tourism sites, playgrounds, pedestrian parks, jogging tracks and camping grounds can be built, and cafés and places to stay can be provided around the destinations. To give each destination a distinctive character, culinary offerings are very important, served in a unique place with a unique presentation. Activity The more choices of activities tourists have, the happier they will be at the tourist spot, the longer they will stay and, consequently, the more money they will spend because of the satisfaction of being in that place. A mini library is worth considering: tourists can visit it when they are tired of walking around and rest while reading books. Cupboards can even be stocked with books that tourists may take, while tourists are also allowed to donate books they own and have read. Sigi has many hillsides.
Besides enjoying the fresh air and views of agricultural villages, tourists can also picnic on mats, walk, run, or rent bicycles to get around the village; bicycle paths are therefore one of the tourist facilities that need to be prepared. Furthermore, the community needs to continue being creative and innovative in creating fun activities for tourists, paying attention to variations in tourist segments so that each age group can be served. With many activities on offer, it is hoped that tourists will not only be satisfied but will also intend to visit the destination again. A neat village arrangement, with attention to cleanliness and the availability of electricity and water, is an important requirement for tourist comfort. Fun and unique conditions need to be created in each village that is a priority tourist destination in Sigi Regency, so that the economic impact of tourism activities will be increasingly felt by the community. Accessibility covers all types of transport facilities and infrastructure that support the movement of tourists from their area of origin to the tourism destination, as well as movements within the destination area, in relation to the motivation for tourist visits. All destinations should therefore be made easy to access, so that tourists can easily come by private motorised vehicle or public transport, or cycle within the destination area. Even mountainous areas at a certain altitude can be opened up by providing a cable car, which is certainly great fun. Access to all tourism destinations in Sigi Regency should thus be made easy, cheap and clear, so that potential tourists are interested in visiting all the destinations Sigi Regency has to offer. Accommodation is an industry within tourism: it can take the form of a place or room where tourists can rest, stay or sleep, bathe, eat and drink, and relax after travelling around. Tourists need a temporary place to stay during the trip so they can rest pleasantly, and comfortable accommodation facilities will encourage tourists to visit and enjoy the tourist objects and attractions over a relatively longer time. Accommodation therefore needs attention, and the people of Sigi Regency should receive a capacity-building programme in community-based accommodation services, so that the community can independently provide accommodation that is clean, comfortable and inexpensive. Amenities are all the supporting facilities that can meet the needs and desires of tourists while at the destination, relating to the availability of various public and tourism facilities such as places of worship, health facilities, parks, restaurants, souvenir shops, and public facilities such as ATMs, pharmacies, markets, mini-markets, karaoke venues, gyms and others. The more complete and varied the amenities in a tourist destination, the more comfortable tourists will be; it is even hoped that they will recommend the destination to friends and relatives, and indeed that they will visit again themselves.
Supporting factors for the development of tourism village areas Based on the existing conditions, the analysis identified the following supporting factors for the development of tourist village areas in Sigi Regency: (1) the daily activities of the community are farming (rice fields), gardening (coconut, corn, peanuts, red beans, potatoes, candlenut, cassava), raising cattle and goats, and fish farming; (2) increasing accessibility to each tourism village; (3) providing supporting facilities and infrastructure such as public toilets and parking lots at tourist spots; (4) striving to provide souvenirs from agricultural products characteristic of each village; and (5) improving the quality of human resources (HR). Based on the in-depth study in the FGD activities, several things need to be developed as supporting factors for the development of a tourism village area, including: (1) promotional media, by creating a website about the tourism village area connected to the Sigi Regency website and by collaborating with existing promotional media to present the tourist village area to the outside community; (2) management of the local community so that it plays an active role in the development of tourist village areas; (3) the formation of tourism awareness groups (POKDARWIS); (4) government regulations or policies to govern the development of tourist village areas in Sigi Regency; and (5) the development of agribusiness-based tourist areas that also have other interesting attractions according to village characteristics.
Conclusion The development of agribusiness-based tourism villages shows the direction of priority development for each designated agribusiness-based tourism village based on accessibility, amenities, attractions, accommodation and activities, with the support of agribusiness potential, social aspects and village institutions.
Table 1. Determination of the comparative scale from 1 to 9.
Table 2. Potential of the 11 tourism villages as agribusiness-based tourism villages.
Table 4. Ranking of agribusiness tourism villages in Sigi Regency based on Borda scores.
4,396
2023-10-01T00:00:00.000
[ "Agricultural and Food Sciences", "Business", "Economics" ]
Aspects of the owning/keeping and disposal of horses, and how these relate to equine health/welfare in Ireland Background Ireland has long been renowned as a major centre for the breeding, rearing and keeping of horses. Since 2007, however, there has been increasing concern for horse health and welfare standards, and links between these concerns and the structures, governance and funding of the Irish equine industries have been reported. This paper addresses two central issues: firstly, the local governance of, trade in and disposal of unwanted horses; and secondly, the mechanisms employed to improve the standards of care given to horses owned by certain communities. Method Primary information was gathered through visits to horse pounds run by and on behalf of Local Authorities, to social horse projects, to horse dealer yards, ferry ports, horse slaughter plants and knackeries. Results The approach adopted by members of a given group, e.g. ferry ports, is described and differences are highlighted, for example in how different Local Authorities implement the Control of Horses Act of 1996, and how the choice of, for example, disposal route affects the standard of animal welfare. Conclusions There is a pressing need for a more centrally mandated and uniformly applied system of governance to safeguard the health and promote the keeping of horses to a higher welfare standard in Ireland. Fundamental to an understanding of why there is insufficient oversight of the keeping and proper disposal of horses is the lack of a comprehensive, integrated system for the registration, identification and tracing of equidae in Ireland. Background Ireland has long been a major producer of horses of all types for the domestic market and for export abroad, ranking among the largest producers of Thoroughbred horses in Europe during the recent decade [1]. With an estimated 27.5 sport horses per thousand people, it is the most densely sport-horse-populated country in Europe [2]. Links between the structures, governance and funding of the Irish equine industries and potential concerns for equine welfare have already been reported [3]. These authors also reported upon the perception of equine welfare [4,5] and on the welfare of horses on farms in Ireland [6]. The key issues to emerge from this work as drivers for poor welfare standards were problems with unwanted horses, especially the trade (most particularly via fairs and dealers) and disposal of horses by an owner/keeper when he/she no longer considered them fit for purpose. The level of production of horses in Ireland has historically exceeded the domestic need, and a variety of routes of removal of horses from the owned live Irish horse population have long existed. These include sale (including privately, via sales companies, dealers and to slaughter plants); surrender to animal welfare charities for re-homing; abandonment; burial or disposal of carcases via knackeries; and export, predominantly via ferry ports. The Control of Horses Act was enacted in 1996 in response to a perceived problem with unwanted and straying horses, especially in urban areas.
The legislation was designed to deal with horses being kept on local authority land without permission, horses being exercised in a manner which interfered with other amenity or land users (for example, on public beaches during the summer months), and the keeping of horses in inappropriate locations (for example, high-density urban housing units) by persons with insufficient resources (for example, to house and feed horses according to their needs). Powers of enforcement were vested in the Local Authorities [7]. One mechanism for addressing poor standards of care of horses owned by inner-city communities has been the 'social horse projects' created in Ireland over the past decade. In most cases, these projects developed from informal community initiatives to facilitate the keeping of horses by inner-city communities; in other cases, the prime driver was a desire (by agencies) to engage with defined communities using horses as an enabling mechanism for other social goals. The aim of this paper is twofold: firstly, to review the management structures for dealing with unwanted or stray horses and to describe the routes of horse trading and disposal; and secondly, to review mechanisms to improve responsible horse ownership amongst certain communities through schemes such as the 'social horse projects'. Trade in and disposal of horses Stray horses and the Control of Horses Act, 1996 Three horse pounds were selected for inclusion in this study, on the basis of geographical spread and significant differences in management structure: direct management by Louth County Council in the North East, sub-contracted management for Cork County Council in the South West, and private operation under the supervision of Kilkenny County Council in the South East. These pounds manage seized horses pending payment of a reclaim fee. Each facility was assessed during a site visit, including a review of the physical facilities and equipment, an examination of written records of the throughput of horses (where available) and interviews with staff members. Horse slaughter plants (Abattoirs) Until mid-2010, there were three abattoir facilities in the Republic of Ireland (ROI) licensed to slaughter horses for human consumption, and one in Northern Ireland which had suspended operations. Each of the three facilities (in Counties Kildare, Kilkenny and Limerick) actively engaged in the horse slaughter trade was visited. The physical facilities and methods of horse slaughter were reviewed, and members of staff were interviewed. Category 2 Plants (Knackeries) Operators of approved Category 2 Plants and subcontractors in ROI were contacted by telephone in September 2007. Each operator was asked to consult their records and provide details of the number of horse carcases handled at their facility during the previous twelve months. Two sample knackeries were visited in 2009 to assess the facilities and disposal procedures. Horse dealers Visits were conducted to the farms of five known horse dealers in four counties (two in the Republic of Ireland and two in Northern Ireland). Dealers were identified by horse slaughterers, transporters, portal inspectors, veterinary groups, horse sales vendors and animal welfare societies. Information was gathered during inspection of the facilities and interviews, and photographs were taken of the facilities and horses. Ferry ports Contacts were made with the portal veterinary inspector at each ferry port capable of handling the import and export of live horses from the island of Ireland.
Visits were conducted to those ports with records of horse throughput to view the facilities, interview staff and study/collect records. These ports were: • Larne and Belfast (both Co. Antrim) • Dublin and Dún Laoghaire (both Co. Dublin) • Rosslare (Co. Wexford) Social horse projects Social horse projects were investigated in the Dublin and Kilkenny areas. In Dublin, these were the Cherry Orchard Equine, Education and Training Centre, the Fettercairn Youth Horse Project and the Meakestown Equestrian Facility, each with established equestrian facilities. In Kilkenny, the Kilkenny Community Action Network (KCAN) project focuses on local horse-keeping groups through the medium of horses but without central equestrian facilities. The following protocol was adopted for all four projects: an inspection of facilities and interviews with staff and clients. Further information was elicited through a study of media reporting. In addition, visits were made to the Smithfield horse fair, a monthly equestrian event with links to the three social horse projects in the Greater Dublin area. Trade in and disposal of horses Horse pounds and the Local Authorities Each of the three horse pounds visited was in a rural setting. Each employed security such as lights, razor wire, high fences, CCTV, guard dogs, lock-down at night, security patrols and intruder alarms, and they can be differentiated as follows: • Louth. This pound was a purpose-built, managed and serviced premises with direct supervision by the Local Authority veterinarian. There were horse stables and horse transport equipment; in addition, there were kennelling facilities for impounded dogs and cats. The pound occasionally took in animals at the request of neighbouring Local Authorities in North Leinster/Ulster. • Cork. This pound was operated as a sub-contracted private business, employing private veterinary services for animal treatments. The pound was regularly inspected by the Local Authority veterinarian. Animals were collected at the request of several Local Authorities in Munster, including Cork and Limerick City and County Councils. • Kilkenny. This pound was privately owned and managed, gathering horses from a wide geographical area (predominantly Leinster and Connacht) at the behest of multiple Local Authorities. It employed private veterinary services as needed for animal care. Each pound operated under the direction of one or more Local Authorities under powers defined by the Control of Horses Act, 1996, which permits them to define (by means of bye-laws) both 'Exclusion Areas', where the presence of horses is not permitted, and the resource inputs which an owner/keeper is required to provide before a licence will be granted to keep horses in a designated 'Control Area'. Funding was provided centrally by the Department of Agriculture, Food and Fisheries (DAFF). Local Authorities varied in how they defined areas for special consideration in regard to the keeping of horses. For example, Limerick City Council designated all of the area under its control a 'Control Area', but seemingly employed its powers to authorise the seizure and impounding of horses only sporadically. Louth County Council defined 'problem' areas as Control Areas (for example, regions of commonage, public beaches or urban zones where horses might compete with other grazing species, leisure users or dwellers, respectively) and instigated a systematic and rigorous set of requirements for the licensing, exercising and keeping of horses in those areas.
Most County Councils had not sought to develop and maintain their own fully functional horse pound, instead outsourcing their collection and impounding functions. Under this template, the Local Authority authorised the seizure (by sub-contractors) of horses deemed to be in contravention of its bye-laws; seized horses were kept at the pound, microchipped for recording purposes, and released to a licensed person on production of a receipt of payment of a penalty issued by the Local Authority. Unclaimed horses, and those repeatedly seized, could be otherwise disposed of. Louth County Council had developed an alternate template. Authorised officers (the local authority veterinarian and inspectors) patrolled the 'Control Area' in a marked horse-transport vehicle, creating a visible presence and actively engaging with the horse-owning/keeping community. Staff offered a service (the identification and licensing of horses) to owners who showed a willingness to comply with local bye-laws, and otherwise impounded horses where necessary - either in the public interest and/or to show that the legislation has teeth. Louth Local Authority staff expressed the view that this interaction led to an improvement in compliance with the law, a culture change over time and a reduction in the incidence of serious problems with irresponsible horse keeping.
Horse slaughter plants (Abattoirs)
The three horse slaughter facilities active in Ireland in 2010 differed in location, supervision and species processed, as follows:
• Co. Kilkenny: a long-established business processing horses on average two days per week (with cattle and sheep on the other days), supervised by DAFF veterinary inspectors;
• Co. Limerick: a Local Authority supervised plant processing a range of animal species according to market requirements, which commenced horse slaughter in early 2009; and
• Co. Kildare: a DAFF supervised, re-commissioned, purpose-built horse slaughter plant that recommenced the slaughter of horses in late 2009 under new management.
In each facility, the slaughter process itself was considered to be carried out in a satisfactory manner with due regard to the principles of horse handling and humane slaughter [8]. Purchasing staff reported that they had no current difficulty sourcing horses for slaughter but that there were greater difficulties with sourcing 'suitable horses for the human food chain'. Horse identification, conformation/body condition and health/drug history are the main criteria for selecting horses to enter the food chain. Ineligible horses, usually procured as part of a job-lot, were disposed of through the knackery system at a loss to the plant operator and typically included:
• Foals and yearlings;
• Lightweight athletic types such as young, racing-fit Flat Thoroughbreds, which produce overly lean carcases;
• Undernourished and debilitated horses, which produce poor quality carcases at best suitable only for the low-value, processing end of the market (with poor financial returns); and
• Undocumented horses and those with documents signed as 'Excluded from the food chain' for reasons of owner choice or medication history.
The horse slaughter business was considered by staff to have changed in four significant ways in the recent past:
1) Horses have become an expensive luxury to many. Increasingly, those in the horse industries wish or need to dispose of surplus horses in a cost-efficient manner.
2) There is an increasingly anthropomorphic and moralistic depiction of unwanted horses by the media, casting the equine industries in an unfavourable light.
3) More operators have entered the horse slaughter trade, competing for limited markets.
4) There is a higher public awareness of the trade.
Category 2 plants (Knackeries)
Category 2 plants (knackeries) are licensed, in the Republic of Ireland, to collect horse carcases not intended for human consumption, and are not currently required to submit records to a central database. Horse identification documents are not sought, collected or returned to the Horse Passport Issuing Authority for recording of the death of the horse on a database. The annual throughput of horse carcases reported by plant operators is shown in Table 1. The total estimated number of horse carcases processed by this route in the period examined was 1,973. More than half (53%) of the plants processed fewer than 20 horses in that period.
Horse dealers
The facilities and resource inputs on view at horse dealer yards varied in standard. In each case there were 'front-of-house' stables for public viewing. The 'front-of-house' horses were generally kept individually stabled in circumstances considered typical of Irish equestrian facilities. Holding yards were subsequently viewed, where entry was by invitation only. Here horses were kept in groups in barns, outdoor pens and fields, and fed on large-bale hay/haylage. Horses were held here and further assessed for suitability for onward trade as riding/driving/breeding animals or for slaughter. There were horse-transport lorries on view capable of holding up to 18 horses. In some instances, these had GB license plates. There were often horses of moderate (acceptable) quality and welfare state on view in the more public facilities. However, at other holding facilities, lame, injured, ill and thin horses were viewed which were reported as being intended for slaughter. Circumstances did not allow the viewer to intervene in these instances but simply to observe and gather information. Dealers openly admitted that they did not necessarily seek horse identification documents (in contravention of the law) when sourcing horses, as they could apply to a Horse Passport Issuing Authority of their choice for a new set.
Ferry ports
In no port were horses routinely unloaded, inspected to ascertain their health and welfare status, or cross-checked with regard to their travel or identification documents. At most ports, the number of horses in the shipment was noted and referenced to the number of identification documents offered by the shipper. Ferry ports have begun to record the detail of proffered information, by means of listing document and/or microchip numbers or photocopying documents. In one port, the introduction of this practice led to the discovery that a known shipper-for-slaughter was repeatedly reusing horse identification documents for successive shipments. Larne is currently the only port on the island of Ireland with facilities for the inspection of horses in lorries by means of a gantry and viewing platform, and unloading facilities that could be used to inspect horses pre-export or import. Information was gathered regarding the throughput of horses per month where records exist and is summarised in Table 2. No such records exist for the Dublin ports.
There was no information recorded at ferry ports concerning the purpose for which horses were exported or imported, or how many individual horses travelled both in and out via any port. There was no system to trace the movement of individual horses on and off the island of Ireland.
Social horse projects
Cherry Orchard Equine, Education and Training Centre
Based in Ballyfermot, a densely-populated area of west Dublin, this project commenced approximately ten years ago as a local community initiative in response to the introduction of The Control of Horses Act, 1996. Funding (for capital and current expenditure) was secured both centrally (DAFF, the Department of Education, and the Department of Enterprise, Trade and Employment) and locally (Dublin City Council). Based on interviews with staff, it seems that initially there was a perception by local groups that the Cherry Orchard initiative would provide an equestrian facility for the local community to house their horses and use the facilities at will and under local community direction. There was a sense (on all sides) that this would lead to little or no change in the local horse culture. However, the facility has evolved otherwise: the horses are owned and managed by the Centre, which provides subsidized, structured training to local groups. At the time of inspection, there were 28 stables, 25 microchipped horses/ponies, 5 hectares of grazing, and both indoor and outdoor riding facilities. Teaching sessions were conducted in equine skills - both riding and general horse husbandry - for locals, either individually or on referral from Dublin City Council, An Garda Síochána, and Youth or Disability Groups. In 2010, approximately 600 persons attended weekly courses at the centre, raising education standards through FETAC modules or providing a path to a professional equestrian career, for example via RACE (the Racing Academy and Centre of Excellence). Thus, there were now two parallel horse cultures in Ballyfermot:
• Individuals (predominantly youths) engaged in supervised equestrian training (and related social improvement schemes) in modern, subsidized equestrian facilities at Cherry Orchard, and
• A horse community whose young owners/keepers housed, grazed, managed, rode and drove horses in the urban spaces and endured periodic raids by contractors working under the direction of the Local Authority under the terms of the Control of Horses Act, 1996.
Fettercairn Youth Horse Project
This project runs in Tallaght, a built-up area of south County Dublin with generally similar demographics to Ballyfermot. The project was established in 1995, when funding was secured from Dublin South County Council and The Ireland Funds [9], and a facility was developed which the local community felt they might use to house and keep their own horses in their own fashion. A block of 20 stables was commissioned on approximately 6 hectares of land. Over time it became apparent to project staff that the local horse culture remained largely unchanged - horses still roamed freely in the surrounding urban area - and the standards of horsemanship within the Fettercairn project did not approach equestrian norms. Despite local resistance to change, at the time of writing some three quarters of the horses were now owned by the Fettercairn project rather than directly by the community.
Consequently, the Centre's focus is now on changing the behaviour of those willing to engage with a structured programme, rather than accommodating those who wished simply to avail of a facility on their own terms. Riding lessons were provided at a subsidized rate; stable management and horse husbandry were taught and supervised; youths were accepted from such as the local drugs rehabilitation unit; and pupils have graduated to further training at RACE and the Irish Army Equitation School.
Meakestown Equestrian Facility
This project was developed as a green-field initiative in north-west Dublin during a time (the mid-2000s) when the nearby suburban areas of Finglas and Ballymun were the subject of major regeneration projects [10]. High-rise apartment blocks were being replaced by lower-density housing considered more in tune with the social needs of the community. Meakestown facility staff felt that the equestrian project might represent a solution to two local horse 'problems':
• The area 'suffered' a high number of straying and unlicensed horses (in the sense of the Control of Horses Act, 1996), and
• It was felt that many of the horses presented at the monthly 'problematic' Smithfield market (see below) came from this horse population.
The Meakestown project was developed by Dublin City Council in conjunction with Ballymun Regeneration Ltd at a €3.5 million set-up cost [11]. Architect-designed stables, meeting rooms, storage facilities, grazing and a horse exercise area were developed and a manager installed. However, members of the local community were permitted to move their own horses (and methods) onto the site, continuing to operate as before but in a subsidized facility. The Meakestown project seemed, at the time of visiting, to be experiencing some administrative difficulty.
Smithfield horse market
Smithfield market has a long-established association with horse ownership amongst the Traveller and inner city communities in Dublin, the communities which the social horse projects were largely set up to serve. The market is held in a built-up inner-city Dublin location on the first Sunday of every month. It is unregulated, and horse numbers vary unpredictably from month to month (there are no pre-market entry requirements). The market has been the subject of considerable discord between Dublin City Council and the local horse-owning community. A serious incident involving a runaway horse in 2002 led to Dublin City Council disassociating itself officially from the event (citing insurance difficulties). The market continued as before, but with complaints increasing from such as the Dublin Society for the Prevention of Cruelty to Animals (DSPCA), members of the local business community, tourists and the general public. Attempts to close Smithfield market or move it to either Meakestown or Cherry Orchard were met by heavy resistance from regular attendees, who have carried on, regardless of Dublin City Council and police stewarding, in the fair's traditional inner-city location and on its traditional calendar date. The fair has been the focus of ongoing negative media reporting of violent and unsocial behaviour such that, at the time of writing in 2011, further attempts are being made to close or relocate it.
Kilkenny Community Action Network (KCAN)
KCAN is a non-government organisation (NGO) funded by the Department of Community, Gaeltacht and Rural Affairs through the Local Development Social Inclusion Programme and managed by Pobal on behalf of government [12].
As one of its many initiatives aimed at addressing the social exclusion of disadvantaged communities, it has sought to engage with adult male members of the Traveller community through the medium of horses. Grazing land was rented locally and a training programme instigated. At its peak, approximately twenty men (with forty horses) participated with a KCAN team comprising community workers, Local Authority staff, an equestrian trainer and a veterinarian. KCAN project staff "recognized the effectiveness of using 'horse talk' as a forerunner to the introduction of other topics such as mental and physical health issues". Improvements in horse health and welfare were considered of secondary benefit. The next and seemingly natural step proposed for the project was the acquisition of a permanent home. Pledges of substantial funding were secured to develop a permanent project with purpose-built facilities and grazing land, permitting Traveller men to keep horses under supervision and engage in equestrian training. However, suitable land was not identified by the Kilkenny Local Authority at a critical stage in the project development, and the funding pledges were subsequently lost.
Discussion
The Control of Horses Act, 1996 has proven to be a seminal piece of legislation regarding the keeping of horses. The Act was not devised to address equine welfare issues, although there are limited circumstances in which authorised officers under the Act can directly insist that veterinary attention be sought and provided for equids. The Act appears as the dominant legislative instrument influencing how certain communities, such as Travellers and inner city horse owners, are expected to keep their horses. It has had a profound effect in areas and on populations where Local Authorities have chosen to implement it. This influence can be viewed in a most positive light in County Louth; however, the subcontracting model employed by most Local Authorities would appear to be a fire-fighting exercise at best. Additional concerns have arisen since the introduction on 1 July 2009 of EU Regulation 504/2008 (as implemented by SI 357 of 2011) regarding the identification of horses, because microchip devices not linked with the issuing of horse passports were being inserted at horse pounds. The routes of movement, sale and disposal of horses are not well documented or regulated in Ireland. The Tripartite Agreement permits free movement of horses between Ireland, the UK and France, without health certification, ostensibly only of non-slaughter, identified equidae accompanied by their passports. Horse dealers take advantage of the lack of oversight and operate with impunity according to free-market principles, exporting horses for slaughter without openly declaring this intention. Proper oversight of horse movement would require extensive input at ferry ports and other border crossings, with potentially major repercussions for the conduct of the normal business of trade in breeding and competition stock between the three countries concerned. Ferry port officials currently do not examine horses or check that microchip numbers and horse markings correspond with passport details; some do not record data for throughput. The gross numbers of horses moving out of Ireland (north and south) through those ports which recorded numbers can be seen to have remained relatively stable between 2006 and 2009, while the net export figure can be seen, from Table 2, to have almost doubled in the same period.
The net trend is accounted for by a significant reduction in the movement of horses into Ireland. It is not possible to determine whether individual horses being imported are horses that have previously been exported, or to reliably quantify the total import/export numbers. The Irish Thoroughbred Breeders Association (ITBA), for example, claims that 6,222 horses were exported from Ireland in 2008 (4,171 to Great Britain) [13]. The knackery system was set up to manage the disposal of fallen farm stock with due regard to concerns for animal health and welfare and for the environment. The service was subsidised by the Fallen Animal Scheme until 2009, when this support was discontinued. Though not originally intended as a service to the equine industries, the Fallen Animal Scheme covered the cost of rendering and disposal (though not collection) of horse carcases, and its withdrawal can only have had a negative effect on the numbers of horse carcases processed by this route. Knackeries can be seen from the enquiries conducted in 2007 not to deal with significant numbers of horse carcases (in comparison to production numbers [1]), although, as there has been no requirement to record actual throughput, it must be acknowledged that the figures presented are an estimate only. Statutory Instrument 612 of 2006 sets out the legislative position (as per EC Regulation 1774 of 2002) in the Republic of Ireland regarding the burial of carcases. A derogation exists permitting the disposal, under license, of pet animals, defined as 'any animal belonging to species normally nourished and kept, but not consumed, by humans for purposes other than farming'. This derogation is not normally felt to apply to horses, but there must be concern that the numbers of horses buried in remote locations will increase as the cost of legitimate routes of disposal for horses excluded from the human food chain also increases. In Northern Ireland, no co-ordinated system of knackeries for the disposal of horses exists; horses are often held to come within the definition of 'pet animal' as defined by the relevant legislation, and thus on-farm burial is considered to occur with greater frequency than in the Republic of Ireland. Disposal of horses through abattoirs for the human food trade is a comparatively more lucrative method of disposal for owners. Italy is Europe's largest market for horse meat and one where much of the lower quality product is further processed. There is major competition, however, in the marketplace from suppliers of horse carcases in North and South America and Eastern Europe, and from the live horse trade. Live transportation for slaughter is driven by a cultural desire for horse meat from horses slaughtered locally and thus perceived to be local, even if actually from horses imported live immediately prior to slaughter. France and Belgium represent added-value markets - there is a desire for higher quality unprocessed product. The major problems for Irish suppliers into the Continental market are that many Irish horses are of perceived non-meat breeds such as the Thoroughbred, Irish business operates in a high cost environment, there is a significant added cost associated with transport to the market place, and it may be difficult to secure payment for product. In Ireland, horses are not generally bred for the meat trade, and the horse slaughter business has largely been conducted in an unobtrusive fashion due to concerns that it is not a trade that the general public is likely to view in a favourable light.
A growing issue is that many horses will have received medications that preclude them from entering the human food chain. Public health considerations drive a policy of strict oversight by DAFF and Local Authorities in Ireland. Strict control over the selection of appropriate horses with "clean" passports, which are not recorded as having received prohibited medications, means that this route is not open to many horse owners in the ROI. From a welfare perspective, the humane destruction of unwanted horses at home (and subsequent disposal via knackeries) and at supervised abattoirs ought to be facilitated in preference to their movement over indeterminate time and distances via fairs, markets and dealers, the latter trade being likely to increase stress and therefore compromise horse welfare. This is, however, a complex argument and one easily misrepresented in the media. For example, the humane slaughter of horses at an approved abattoir and the subsequent supply of skin-covered carcases (improving carnivore welfare) to Dublin Zoo was described in one national newspaper as: "Slow racehorses fed to the lions in Dublin zoo" [14]. Social horse projects are a commendable attempt to engage locally with urban communities who wish to keep horses, serving the twin aims of engaging with authority-shy groups (predominantly young males) and improving the local horse culture to the benefit of all. However, those whom the projects aim to assist may themselves resist engagement, as they perceive a different need to the projects' stated aims. And these projects often suffer from the perception of low public good and therefore from resistance by such as local politicians, who can exert downward funding pressure. FAWAC is a non-statutory government advisory committee, established in 2002, which comprises representatives from stakeholder bodies such as farming and veterinary organisations, educational and scientific institutions, animal welfare charities and government departments. It issues guidance documents [8] and advisory position statements to the Minister for Agriculture on concerns relating to the welfare of farmed animals, a category which has increasingly been considered to include horses [6]. In 2007, a sub-committee (the Equine Welfare Liaison Working Group) was established in response to the perception of a growing need to address the plight of unwanted horses. Members of FAWAC expressed concern over a perceived worsening of welfare conditions for horses on farms and at fairs and the need to improve existing routes for the humane disposal of unwanted horses. Members proposed that the correct identification of equidae receive appropriate legislative attention as being fundamental to achieving improvements in equine health and welfare. Advisory documents were issued to government, which, however, is not statutorily obligated to accept them and most likely views them in the much wider context of animal health, agri-economics and political reality. Establishing a coordinated system for the registration of horses, transfer of ownership and monitoring of movement in and out of Ireland is essential, in the opinion of the authors of this paper, to safeguarding equine biosecurity and welfare in Ireland. Failure in this regard means that responsibility cannot be defined, and traceability of horse movement in the face of contagious disease is extremely difficult.
As per the European Communities (Equine) Regulations of 2011 (SI 357, enacted in July 2011), horse identification details will not, in the foreseeable future, be centrally recorded in such a fashion that each animal can be traced from birth, from one owner/keeper to the next (as persons responsible for the animals' welfare) and to a humane endpoint.
Conclusions
There is huge variance in how the Control of Horses legislation has been employed across Local Authority areas in Ireland, and there is thus a very real concern that pressure applied in one area simply leads to a movement of the problem elsewhere. Fundamental to an understanding of why there is insufficient co-ordination of routes for the proper, timely and humane disposal of horses is the lack of a comprehensive, integrated system for the registration, identification and tracing of equidae. And social horse projects, though laudable, suffer (as a means of improving horse welfare standards) from the difficulty that results (in terms of both human and horse welfare) are often intangible and long-term in nature. All of the above point to the need for a more centrally mandated and uniformly applied system of governance to promote the production, keeping and disposal of horses to a higher welfare standard [15].
7,712.6
2011-09-21T00:00:00.000
[ "Economics" ]
EMD-Based Predictive Deep Belief Network for Time Series Prediction: An Application to Drought Forecasting : Drought is a stochastic natural feature that arises due to an intense and persistent shortage of precipitation. Its impact is mostly manifested as agricultural and hydrological droughts following an initial meteorological phenomenon. Drought prediction is essential because it can aid in the preparedness and impact-related management of its effects. This study considers the drought forecasting problem by developing a hybrid predictive model using a denoised empirical mode decomposition (EMD) and a deep belief network (DBN). The proposed method first decomposes the data into several intrinsic mode functions (IMFs) using EMD, and a reconstruction of the original data is obtained by considering only relevant IMFs. Detrended fluctuation analysis (DFA) was applied to each IMF to determine the threshold for robust denoising performance. Based on their scaling exponents, irrelevant intrinsic mode functions are identified and suppressed. The proposed method was applied to predict different time scale drought indices across the Colorado River basin using a standardized streamflow index (SSI) as the drought index. The results obtained using the proposed method were compared with standard methods such as multilayer perceptron (MLP) and support vector regression (SVR). The proposed hybrid model showed improvement in prediction accuracy, especially for multi-step ahead predictions.
Introduction
Among all extreme climate events, drought is considered the most complex phenomenon [1]. This may be due to its slow development, the difficulty of detection, and the many unique facets that it exhibits in any single region [2]. It differs from other natural hazards because it has a wide spatial coverage, and it is very difficult to determine its onset, duration, and recovery [1]. Drought occurrences cause substantial damages to a wide array of sectors, including agriculture, energy generation, recreation, and ecosystems [3]. For instance, the United States witnessed a significant increase in the number and severity of drought events over the last two decades, affecting more people than any other natural phenomenon [1,4]. As reported by the US National Climatic Data Center database (2002), the United States experienced either severe or extreme drought during the last century, with nearly 10% of the total land area affected. Droughts and related heat waves accounted for 10 out of the 58 weather-related disasters recorded within the period [5]. The 2011 Texas drought [6], the 2012 central U.S. drought [7], the 2012-2014 California drought [8], and the 2010-2011 East Africa drought [9] are severe droughts that have occurred over the last decade. The success of any drought preparedness and mitigation strategy depends, to a large extent, upon timely information on drought onset, duration, and spatial extent [2]. This information may be obtained through continuous drought monitoring, which also relies on accurate predictions from models.
A plethora of drought prediction methods have been proposed in the literature, including time series models, regression models, probabilistic models, machine learning models, physical models such as the Global Integrated Drought Monitoring and Prediction System (GIDMaPS) [10], and a host of hybrid models. Regression or autoregressive models are flexible and are commonly used for drought prediction. However, these traditional methods suffer from their assumption of a linear relationship between predictand and predictors and may be insufficient for real application problems. In an effort to improve drought prediction accuracy, different models have been explored recently [11,12]. A seasonal drought prediction model based on a Bayesian framework was proposed in [11] to characterize hydrologic droughts with different severities across the Gunnison River Basin in the upper Colorado River Basin, using a standardized streamflow index (SSI) as the drought variable. A wavelet-linear genetic programming (WLGP) model was explored in [13] for long lead-time drought forecasting in the state of Texas with 3-, 6-, and 12-month lead times. The authors demonstrated that the classical linear genetic programming model is unable to learn the non-linear structure of the drought phenomenon in lead times longer than three months [13]. A linear stochastic model (ARIMA), a recursive multi-step neural network (RMSNN), and a direct multi-step neural network (DMSNN), as indicated in [12], have also been used for drought forecasting. In another study, three machine learning techniques were explored to forecast long-term drought at the Awash River Basin in Ethiopia. These techniques include artificial neural networks (ANNs), support vector regression (SVR), and coupled wavelet ANNs [14]. Although all of these methods have shown promising results in terms of improving the accuracy of drought forecasts, the impact of climate change on droughts and other climate extremes across various regions of the globe, especially in recent decades, has highlighted the need for more advanced methods for predicting these events [15]. The artificial neural network (ANN), a type of machine learning model which can be used to learn from observations to establish complicated relationships between inputs and outputs, has been explored as an alternative for modeling complex systems. Due to its advantage in modeling the complex and nonlinear relationships between variables, it has proven to be effective for drought prediction [16]. The potential disadvantages of the ANN model include its proneness to over-fitting resulting from poor weight initialization [17] and the difficulty of training multiple hidden layers for learning complex problems, among others [18]. Several studies have also used global circulation model (GCM) or regional circulation model (RCM) outputs for the assessment of drought characteristics [19,20]. Generally, drought is influenced by several factors, including large scale climate variables, and can be estimated using current climate characteristics [20]. The relation between historical drought sequences and the current climate can be used in conjunction with climate projections from global or regional circulation models to simulate future drought conditions.
The present work is interested in exploring the applicability of a deep belief network (DBN), a form of deep learning architecture, for the prediction of drought indices. The DBN is used as a pretraining step for a supervised back-propagation neural network. The idea of pretraining using a DBN is to aid in obtaining better initial weights for the network instead of random initialization. Recent studies using a DBN as a deep learning algorithm have had great successes in applications such as image classification, computer vision, and speech recognition problems [21,22]. However, the use of deep learning in time series prediction problems is relatively new and is gaining much attention. Some applications of DBNs for time series modeling can be found in [23][24][25]. Zhang et al. applied deep belief networks to forecast foreign exchange rates and found better performance with the DBN than with other classical approaches [23]. A deep belief network model optimized by particle swarm optimization (PSO) was proposed in [25] to forecast time series. The proposed model was found to outperform conventional neural network models such as the multi-layer perceptron (MLP) and self-organizing fuzzy neural networks (SOFNN), and the mathematical linear model ARIMA. Chen et al. also used a deep belief network model to predict the short-term drought index of the Huaihe River Basin in China [24]. The performance of the DBN-based model was found to be superior to that of the traditional back propagation neural network in terms of accuracy and efficiency. In recent years, hybrid models involving signal decomposition have also been shown to be effective in improving the accuracy of time series prediction methods, as indicated in [26]. Wavelet analysis is one of the most widely used signal decomposition methods for hydrological time series prediction [26] and has been employed in several hydrological time series studies, as shown in [13]. Decomposition of a time series reduces the difficulty of forecasting, thereby improving forecasting accuracy. Though wavelet analysis is widely used for hydrological time series, its efficiency is usually affected by certain factors. First, accurate wavelet decomposition of a time series is still a problem due to its heavy dependence, a priori, on the choice of wavelet basis functions [26]. Additionally, some experience is required to determine the level of decomposition needed to extract the original series.
Huang et al. proposed a signal decomposition method called empirical mode decomposition (EMD), which is suitable for both nonlinear and nonstationary time series [27]. Hybrid models using EMD as a series decomposition technique have gained great interest among time series prediction researchers. Unlike wavelet decomposition, empirical mode decomposition is an adaptive data-driven method that can extract the oscillatory mode components present in data without the need to specify a priori the basis functions or the level of decomposition [27,28]. These are generated internally by the analyzed signal, and the method therefore overcomes the intrinsic limitations present in wavelet approaches [28]. EMD can be used to decompose any complex signal into finite intrinsic mode functions and a residue, resulting in subtasks with simpler frequency components and stronger correlations that are easier to analyze and forecast. Another important feature of empirical mode decomposition is that it can be used for noise reduction of noisy time series, which can be effective in improving the accuracy of model predictions. This work presents a hybrid method involving a denoised empirical mode decomposition and a deep belief network to improve the accuracy of the single DBN-based time series prediction model. Different EMD-based denoising methods have been proposed and applied in many studies for different purposes [29,30]. A common approach in EMD-based denoising algorithms is to eliminate the noise by discarding one of the intrinsic mode function (IMF) components. However, the decision as to which IMF to eliminate is still an ongoing research problem [31]. Since the noisiest components are usually at the top, most studies also consider the first IMF as noise and eliminate it when reconstructing the original signal. This may not be an optimal way of eliminating noisy IMFs, because EMD decomposes a given signal into several IMFs with different frequency levels. As a result, other lower order IMFs may contain noise as well. In this work, a technique based on Hurst exponent thresholding is used to determine noisy IMFs. Instead of using the popular rescaled range (R/S) analysis to directly estimate the Hurst exponents of the various IMFs, detrended fluctuation analysis (DFA) was used for this purpose. Unlike R/S analysis, DFA can be used for nonstationary time series. Detrended fluctuation analysis is a technique that has proven to be useful in measuring the extent of long-range correlations in time series [32,33]. It can measure the same power law scaling observed through R/S analysis [32]. The rest of the paper is structured as follows. The next section presents the methodology, which includes a brief overview of the structure of the deep belief network and the proposed approach. In Section 3, the study area and the dataset used for evaluation of the proposed method are presented. Section 4 presents the results and discussion, and the conclusion is presented in Section 5.
Methodology
This section will first give a brief description of the general structure of the restricted Boltzmann machine (RBM), which forms the building block of the DBN: a composition of several stacked RBMs. This is followed by an overview of the EMD process and noise reduction based on detrended fluctuation analysis. Finally, the overall work flow of the proposed hybrid EMD-DBN model with series denoising is presented.
Restricted Boltzmann Machines
An RBM is a type of neural network model used for unsupervised learning. It can also be used as a feature extraction method for supervised learning algorithms [34]. A typical RBM consists of a single layer of hidden units with undirected and symmetrical connections to a layer of visible units [35]. The visible units represent the data, and the hidden units act as the outputs that are used to increase learning capacity. The configuration simply defines the state of each unit. RBMs only allow connections between a hidden unit and a visible unit - no connections between two visible units or between two hidden units. The restriction is that their units must form a bipartite graph, as depicted in Figure 1. RBMs represent a special type of generative energy-based model that is defined in terms of the energies of configurations between visible and hidden units. The energy of the joint configuration (v, h) of the visible and hidden units of an RBM is defined as [18,25,36,37]:

E(v, h) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i w_{ij} h_j (1)

where v_i, h_j are the binary states of the visible unit i and the hidden unit j, a_i, b_j are the biases, and w_{ij} is the weight between them. The configuration energy indicates the state of the network. For instance, a lower energy shows that the network is in a more desirable state and therefore has a higher probability of occurring. The energy function is used to calculate the probability that is assigned to every possible pair of visible and hidden units. The energy of a configuration determines the probability of a possible pair and is given by:

p(v, h) = \frac{1}{Z} e^{-E(v, h)} (2)

where Z is a partition function (normalization constant), which is a sum of the energies over all possible configurations of the visible and hidden units:

Z = \sum_{v,h} e^{-E(v, h)} (3)

The RBM is trained using the contrastive divergence algorithm [35] by presenting a training vector to the visible units and alternately sampling the hidden units, p(h|v), and the visible units, p(v|h). The hidden unit activations are mutually independent given the visible unit activations, and vice versa. The RBM, in this case, is called a conditional restricted Boltzmann machine (CRBM). The conditional probabilities of hidden and visible units with binary values are therefore calculated using the following equations:

p(h|v) = \prod_{j=1}^{n} p(h_j|v) (4)

p(v|h) = \prod_{i=1}^{m} p(v_i|h) (5)

where n and m are the numbers of hidden and visible units, respectively. For a single binary hidden and visible unit, the conditional probabilities are given by

p(h_j = 1|v) = \sigma(c_j + \sum_i v_i w_{ij}) (6)

p(v_i = 1|h) = \sigma(b_i + \sum_j h_j w_{ij}) (7)

where σ is the activation transfer function, c_j and b_i are the biases, v_i and h_j are the states of the visible and hidden units, and w_{ij} represents the connection weight between units i and j. In RBM training, the main objective is to obtain optimal parameters b, c, w for the network. This can be realized by optimizing the gradient function:

\frac{\partial \log p(v)}{\partial w_{ij}} = \langle v_i h_j \rangle_{data} - \langle v_i h_j \rangle_{model} (8)

where \langle v_i h_j \rangle_{data} expresses the distribution of raw data input to the RBM, and \langle v_i h_j \rangle_{model} is the distribution of data after the model has been reconstructed. The gradient function represents the derivative of the log probability of a training vector with respect to a weight. The weights and biases can be updated using contrastive divergence (Figure 2) as follows:

\Delta w_{ij} = \alpha (\langle v_i h_j \rangle_{data} - \langle v_i h_j \rangle_{model}) (9)

where α is a learning rate.
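For concreteness, the following is a minimal NumPy sketch of a binary RBM trained with one step of contrastive divergence (CD-1), following Equations (6)-(9). It is an illustration only, not the authors' code: the class name, the use of probabilities rather than samples in the negative phase (a common CD-1 shortcut), and all hyperparameters are our assumptions.

```python
import numpy as np

class RBM:
    """Minimal binary RBM trained with CD-1 (illustrative sketch)."""

    def __init__(self, n_visible, n_hidden, lr=0.05, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.w = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases (b_i)
        self.c = np.zeros(n_hidden)    # hidden biases (c_j)
        self.lr = lr                   # learning rate (alpha)

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def p_h_given_v(self, v):          # Eq. (6)
        return self._sigmoid(self.c + v @ self.w)

    def p_v_given_h(self, h):          # Eq. (7)
        return self._sigmoid(self.b + h @ self.w.T)

    def cd1_update(self, v0):
        """One CD-1 step on a mini-batch v0 with entries in [0, 1],
        interpreted as probabilities of binary visible units."""
        ph0 = self.p_h_given_v(v0)
        h0 = (self.rng.random(ph0.shape) < ph0).astype(float)  # sample hidden
        pv1 = self.p_v_given_h(h0)                             # reconstruct
        ph1 = self.p_h_given_v(pv1)
        # <v_i h_j>_data - <v_i h_j>_model, Eqs. (8)-(9)
        grad = (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.w += self.lr * grad
        self.b += self.lr * (v0 - pv1).mean(axis=0)
        self.c += self.lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)  # reconstruction error, for monitoring
```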
Deep Belief Network
A DBN is a probabilistic generative model that consists of multiple hidden layers. The multiple layers can be used to learn more complex patterns of data in a progressive manner, from low-level features to high-level features. One important feature of the learning algorithm for a DBN is its greedy layer-wise training, which can be repeated several times to efficiently learn a deep hierarchical model [38]. Other key features of DBN models include their ability to efficiently learn from large amounts of unlabeled data and to be discriminatively fine-tuned for classification and regression problems using the standard back-propagation algorithm [38]. They can also be used to make nonlinear autoencoders that work considerably better than standard feature reduction methods, such as principal component analysis (PCA) and singular value decomposition (SVD) [22,38]. A DBN is constructed by stacking multiple RBMs on top of each other [18,35]. The structure of a DBN with two RBMs is shown in Figure 3. The layers are trained efficiently by using the feature activations of one layer as the training data for the next layer. Better initial values of the weights in all layers can be obtained by this layer-wise unsupervised training, compared to random initialization [18]. A DBN is trained in two steps: pre-training and fine-tuning. First, unsupervised pre-training is performed layer by layer, from low-level to high-level RBMs, to obtain reasonable parameter values for the network. Second, the entire network is fine-tuned in a supervised manner according to the target value using back-propagation. Training a DBN is simply done by training the individually stacked RBMs constituting the network. An RBM is trained using contrastive divergence, which is an algorithmic procedure for the efficient estimation of RBM parameters. A standard way of estimating the RBM's parameters from a training set x_1, ..., x_n is to find the parameters that maximize the average log probability, \frac{1}{n} \sum_{l=1}^{n} \log p(x_l), whose gradient with respect to a weight is given by Equation (8), i.e., the difference between \langle v_i h_j \rangle_{data}, the distribution of the raw data input to the RBM, and \langle v_i h_j \rangle_{model}, the distribution of the data after the model has been reconstructed. This can be summarized in the following few steps:
1. set the initial states of the visible units to the training data;
2. sample in a back-and-forth process: update all of the hidden units in parallel starting with the visible units, reconstruct the visible units from the hidden units, and finally update the hidden units again;
3. repeat with all training examples and update the weights using Equation (9).
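As a sketch of the greedy layer-wise procedure just described, the function below stacks instances of the RBM class from the previous listing. The layer sizes, epoch count, and learning rate are illustrative assumptions; in the paper's setup, the trained RBMs would initialize a feed-forward network that is then fine-tuned with back-propagation.

```python
import numpy as np

def pretrain_dbn(x, layer_sizes, epochs=50, lr=0.05):
    """Greedy layer-wise pretraining of stacked RBMs (sketch).

    x: array of shape (n_samples, n_features), rescaled to [0, 1].
    layer_sizes: hidden layer sizes, e.g. [32, 16] for two stacked RBMs.
    Returns the list of trained RBMs, lowest layer first.
    """
    rbms, data = [], x
    for n_hidden in layer_sizes:
        rbm = RBM(data.shape[1], n_hidden, lr=lr)
        for _ in range(epochs):
            rbm.cd1_update(data)          # full-batch CD-1 for simplicity
        # the activations of this layer become training data for the next
        data = rbm.p_h_given_v(data)
        rbms.append(rbm)
    return rbms
```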
Empirical Mode Decomposition
EMD is a signal preprocessing algorithm that was introduced by Huang et al. in 1998 [27]. EMD is an adaptive data processing method that can be used for the decomposition of both nonlinear and nonstationary time series and has found applications in various domains. The method was developed based on the assumption that any time series consists of different simple intrinsic modes of oscillation, i.e., IMFs. The essence of EMD is to empirically identify these intrinsic oscillatory modes by their characteristic time scales in the data and then decompose the data accordingly. It converts an irregular signal into a stationary signal process by continuously eliminating the average envelope of the sequence, thereby making the sequence smooth. It considers oscillations of the signal at a very local level and separates the signal into locally non-overlapping, zero-mean, stationary time scale components through a sifting process. The advantage of EMD over other signal decomposition techniques is that it does not need to be constrained by conditions which often only apply approximately. Several hybrid models based on the principle of 'decomposition and ensemble' have been proposed. For instance, hybrid forecast approaches have been applied in hydrology research, as shown in [26,[39][40][41]. A wavelet transform technique with ANN was employed in [39] and [40] to predict rainfall and streamflow time series, respectively. In [41], Sang developed a method for discrete wavelet decomposition of time series and proposed an improved wavelet model for hydrologic time series forecasting. The results of these studies have proven that 'decomposition and ensemble' principle-based forecasting methods can reduce the difficulty of forecasting and can outperform single models [26]. Unlike wavelet transforms, which have been widely used as decomposition techniques, EMD is a heuristic technique that is based on the properties of the data on a local scale. It decomposes the time series without the need for an a-priori-defined basis function in which the signal is expressed [27]. The necessary conditions on the IMFs are symmetry with respect to the local zero mean and the same number of zero crossings and extrema [27]. In order for EMD to decompose a signal x(t) into the different IMFs, the following two properties must be met:
1. an IMF has only one extremum between two subsequent zero crossings - i.e., the number of local extrema and zero crossings differs at most by one;
2. the local average of the upper and lower envelopes of an IMF has to be zero.
The sifting process locally filters pure oscillations, starting with the highest frequency oscillation, in an iterative procedure [27]. Hence, the sifting algorithm decomposes a data set x(t) into n IMFs c_j, where j = 1, ..., n, and a residue r_n, as shown in the following equation [42]:

x(t) = \sum_{j=1}^{n} c_j(t) + r_n(t) (14)

where c_j represents the IMF components, and r_n is a residual component. The residual r_n could be a constant, or a function that contains only a single extremum and from which no more oscillatory IMFs can be extracted [42]. A more detailed procedure on how IMFs are calculated can be found in Wu et al. [42].
The EMD Algorithm
At the beginning of the proposed hybrid model, the EMD-based decomposition is employed to decompose the original signal into its various components. The main steps followed in the time series decomposition using EMD are as follows (a code sketch is given after the list):
1. identify all of the local extrema of x(t);
2. create the upper envelope e_up(t) and the lower envelope e_lo(t) by cubic spline interpolation, respectively;
3. compute the mean value m(t) of the upper and lower envelopes: m(t) = [e_up(t) + e_lo(t)]/2;
4. extract the detail d(t) = x(t) - m(t);
5. if d(t) satisfies the two IMF conditions above, record it as an IMF and subtract it from the signal to obtain the residue r(t) = x(t) - d(t); otherwise, replace x(t) with d(t) and continue sifting;
6. repeat Steps 1-5 until the residue r(t) becomes a monotonic function or the number of extrema is less than or equal to one, from which no further IMF can be extracted.
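The sketch below implements a simplified version of this sifting loop with SciPy's spline tools. It is a minimal illustration under our own assumptions: boundary effects of the spline envelopes are ignored, and a standard sum-of-squares stopping criterion stands in for an explicit check of the IMF conditions. Practical implementations (for example, the PyEMD package) handle these details more carefully.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def _envelope_mean(x, t):
    """Mean of the upper/lower cubic-spline envelopes (Step 3); returns
    None when there are too few extrema to build stable envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None
    e_up = CubicSpline(t[maxima], x[maxima])(t)
    e_lo = CubicSpline(t[minima], x[minima])(t)
    return (e_up + e_lo) / 2.0

def emd(x, max_imfs=10, max_sift=50, tol=0.05):
    """Simplified EMD sifting (Steps 1-6); returns (imfs, residue)."""
    t = np.arange(len(x), dtype=float)
    residue = np.asarray(x, dtype=float).copy()
    imfs = []
    for _ in range(max_imfs):
        d = residue.copy()
        for _ in range(max_sift):
            m = _envelope_mean(d, t)
            if m is None:                 # Step 6: residue has too few extrema
                return imfs, residue
            d_new = d - m                 # Step 4
            # stop sifting when the candidate barely changes (SD criterion)
            if np.sum((d - d_new) ** 2) / (np.sum(d ** 2) + 1e-12) < tol:
                d = d_new
                break
            d = d_new
        imfs.append(d)                    # Step 5: accept the IMF
        residue = residue - d
    return imfs, residue
```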
Finally, the original signal x(t) can be expressed as the sum of the IMFs and the residue r(t), as given in Equation (14). The above algorithm is summarized in Figure 4, and an example of EMD decomposition is illustrated in Figure 5.
Detrended Fluctuation Analysis
Detrended fluctuation analysis (DFA) is a method proposed by Peng et al. [33] for measuring the intensity of the long-range dependence of a signal. This dependence can be described using three different classes: long-range dependence, mild dependence, and pure randomness. It can be used to estimate the scaling exponent of a signal that describes its self-affinity, similarly to the Hurst exponent. The oldest and best-known method for estimation of the Hurst exponent is the so-called R/S analysis method, proposed by Mandelbrot and Wallis [32]. This method was first based on a previous work by Hurst on hydrological analysis that allows for the estimation of the Hurst exponent (self-similarity parameter H). However, the method is not suitable for nonstationary time series, as it can produce spurious scores [43]. In this work, detrended fluctuation analysis, which is a more suitable method for obtaining reliable scaling exponents for nonstationary time series, is employed [32,43]. The method can be summarized as follows:
1. for a given time series X_i with length L, divide it into d subseries of length n;
2. for each subseries m = 1, 2, ..., d:
(a) create the cumulative time series Y_{i,m} = \sum_{j=1}^{i} X_{j,m} for i = 1, ..., n;
(b) fit a least squares line \bar{Y}_m = a_m x + b_m to {Y_{1,m}, ..., Y_{n,m}};
(c) calculate the root mean square fluctuation (i.e., standard deviation) of the integrated and detrended time series, F_m(n) = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (Y_{i,m} - a_m i - b_m)^2};
3. finally, calculate the mean value of the root mean square fluctuation for all subseries of length n: F(n) = \frac{1}{d} \sum_{m=1}^{d} F_m(n).
Similarly to the R/S analysis, a linear relationship on a double-logarithmic plot of F(n) against the interval size n indicates the presence of a power-law scaling behavior F(n) ∝ n^H [32]. Here, H is the DFA scaling exponent, which is identical to the Hurst exponent. The Hurst exponent is related to the power spectrum exponent η and the autocorrelation exponent γ by η = 2H − 1 and γ = 2 − 2H [44]. It is considered an indicator of the roughness of the time series: the larger the value, the smoother the time series. Smaller slope values are usually associated with rapid fluctuations. If the process is white noise, then the slope is roughly 0.5. If it is persistent, the slope is >0.5. If it is anti-persistent, the slope is <0.5. In such cases, significant fluctuations are followed by small ones and vice versa. Just like the R/S analysis approach, a drawback of DFA is that no asymptotic distribution theory has been derived for the statistic. As such, no explicit hypothesis testing can be performed, and significance relies on a subjective assessment [32].
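The following sketch estimates the DFA exponent H from the slope of log F(n) versus log n. It uses the common variant that integrates the full series once before segmenting (rather than integrating each subseries separately, as in the listing above); the scale grid and function name are our assumptions.

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """Estimate the DFA scaling exponent H of a 1-D series (sketch)."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())            # integrated profile
    if scales is None:                      # log-spaced window sizes
        scales = np.unique(np.logspace(2, np.log2(len(x) // 4),
                                       num=12, base=2).astype(int))
    f = []
    for n in scales:
        d = len(y) // n
        segs = y[:d * n].reshape(d, n)
        i = np.arange(n)
        rms = []
        for seg in segs:                    # Steps 2(b)-2(c) per subseries
            a, b = np.polyfit(i, seg, 1)
            rms.append(np.sqrt(np.mean((seg - (a * i + b)) ** 2)))
        f.append(np.mean(rms))              # Step 3: F(n)
    h, _ = np.polyfit(np.log(scales), np.log(f), 1)  # slope = H
    return h
```

In the denoising step described next, each IMF would be passed through dfa_exponent and retained only when its exponent exceeds the 0.5 threshold.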
EMD-Based Denoising Using DFA
The EMD algorithm decomposes any complicated dataset into a finite number of IMFs of different dominant frequencies and amplitudes. The decomposed IMFs are usually arranged starting with the highest frequencies at the top and those with the lowest frequencies at the bottom. The original dataset can be reconstructed accurately by using all of the IMFs. However, some of the components, especially those with the highest frequencies, may contain irrelevant information about the original data (noise); therefore, using all the IMFs for reconstruction of the original dataset may affect the performance of any prediction method. A new series can be suitably reconstructed by using only a subset of the IMFs. This can be achieved by properly eliminating those IMFs that contain no relevant information about the original series. Different EMD-based denoising methods have been proposed and applied in various studies for different purposes [29,30]. An important step in EMD-based denoising is how to separate the noisy IMFs from the rest of the IMFs. A common method for EMD-based denoising algorithms is to eliminate the noise by discarding one of the IMF components. However, the decision as to which IMF to eliminate is still an ongoing research problem [31]. In this work, we propose a denoising approach that eliminates the noisy IMFs by using DFA to estimate the scaling exponents of all the IMFs and comparing them with a Hurst exponent threshold. The proposed method eliminates both Gaussian white noise and anti-persistent processes by using a Hurst exponent threshold of 0.5. A plot of the scaling exponents of the IMFs with a threshold of 0.5 is shown in Figure 6. The original dataset is therefore reconstructed by summing those IMFs with scaling exponents above the threshold. Figure 7 shows a plot of the original and the reconstructed dataset.
Study Area
The study area comprises ten gage stations across the Colorado River Basin (Figure 8): the Colorado River at Lee's Ferry, the Paria River at Lee's Ferry, the Little Colorado River at Cameron, the Virgin River at Littlefield, the Colorado River below the Hoover Dam, the Colorado River below Davies Dam, the Colorado River near Grand Canyon, the Williams River below Alamo, the Colorado River below Parker Dam, and the Colorado River above Imperial Dam. Monthly natural streamflows for the period 1906 to 2014 were used to illustrate the proposed hybrid EMD-based predictive DBN. This monthly data is from the United States Geological Survey (USGS) observed gage data that can be obtained from the website of the Upper Colorado Regional Office of the United States Bureau of Reclamation at Salt Lake City, Utah (http://www.usbr.gov/lc/region/g4000/NaturalFlow/index.html). For long-term drought analysis and prediction, the 12-month and 24-month drought index scales are normally used. As a preliminary experiment, the 12-month scale SSI was used as the drought index.
Standardized Streamflow Index
We followed the concept employed by McKee et al. for the standardized precipitation index (SPI) [45] to calculate the SSI. Generally, drought indicators that are defined like the SPI are called standardized indices (SIs). McKee et al. used the Gamma distribution for fitting monthly precipitation data series and suggested that the procedure can be applied to variables other than precipitation, provided they are relevant to drought (for instance, streamflow, snowpack, and soil moisture) [45].
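The standardized-index construction just introduced (fit a distribution to the accumulated series, then map its cumulative probabilities to a standard normal) can be sketched as follows. This is a simplification under our own assumptions: the function name is ours, a single gamma distribution is fitted to the whole accumulated series, and the per-calendar-month fitting and zero-flow handling used in practice are omitted.

```python
import numpy as np
from scipy.stats import gamma, norm

def ssi(flow, scale=12):
    """Simplified standardized streamflow index (illustrative sketch).

    flow: 1-D array of monthly streamflow.
    scale: accumulation window in months (e.g., 12 for SSI 12).
    """
    flow = np.asarray(flow, dtype=float)
    # accumulate flow over the chosen time scale
    acc = np.convolve(flow, np.ones(scale), mode="valid")
    # fit a gamma distribution to the accumulated series
    a, loc, b = gamma.fit(acc, floc=0)
    # map cumulative probabilities onto a standard normal distribution
    p = np.clip(gamma.cdf(acc, a, loc=loc, scale=b), 1e-6, 1 - 1e-6)
    out = np.full(len(flow), np.nan)     # first scale-1 months are undefined
    out[scale - 1:] = norm.ppf(p)
    return out
```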
The procedure used by Cacciamani et al. [46] was followed in order to calculate the SSI. First, we modeled the distribution frequency of the total streamflow time series cumulated over different time scales (e.g., 3 months, 6 months, and 12 months) using a probability density function. Then, the probability density function was transformed into a normal standardized distribution. The values of the resulting standardized index could then be used to classify the category of drought characterizing each place and time scale [46]. Madadgar et al. [11] used a similar procedure to characterize the hydrological droughts of the Gunnison River Basin. Since the SSIs are calculated over different streamflow accumulation periods and scales, they can be used to estimate various potential impacts of a hydrological drought. For instance, the 12-month SSI shows a comparison of the streamflow for 12 consecutive months against the same 12 consecutive months of all the available data from previous years. A drought event is said to occur when the SSI is continuously negative for a certain period of time. The event is said to end when the index becomes positive. The SSI 12 and SSI 24 monthly scale drought indices are often tied to long-term drought conditions. Longer-term drought forecasts can serve as useful information about drought conditions that affect streamflow, groundwater, or other hydrological systems within the Colorado River basin.
Feature Extraction
For time series prediction, the prediction is usually carried out using previous values of the series as features for the training model. The selection is based on their correlation with the output variable. In this work, the number of input neurons was selected using autocorrelation analysis. An example is illustrated in Figure 9, with lags of 1-7, 9, and 13 showing different high levels of significance to the output. Based on the significance levels of the individual lags and experimentation, a lag of 6 was chosen. Hence, the past five observations and the current value (S_{t-5}, S_{t-4}, S_{t-3}, S_{t-2}, S_{t-1}, S_t) were used as inputs to predict the next observation (S_{t+1}), as illustrated in Figure 10. The selected lags were somewhat different for the various stations. For long-term targets, a recursive procedure was employed: the models were used to predict one step ahead, and the outputs from these models were used as inputs for subsequent predictions.
Evaluation of Model Performances
Although the Pearson correlation coefficient (r) and the coefficient of determination (r^2) have been widely used for model evaluation, they have been identified as inappropriate performance metrics for hydrological models [47,48]. They are oversensitive to extreme values and insensitive to additive and proportional differences between model predictions and observed data [48,49]. In order to have a complete assessment of model performance, Legates et al. [49] suggested that at least one absolute error measure, such as the root mean square error (RMSE), mean absolute error (MAE) or mean absolute percentage error (MAPE), be included as a performance metric. Additionally, the Nash-Sutcliffe model efficiency coefficient (NSE) [50] is a good alternative to r or r^2 [47]. Hence, the following three metrics were used for model comparison:

RMSE = \sqrt{\frac{1}{T} \sum_{i=1}^{T} (y_i - \hat{y}_i)^2}

MAE = \frac{1}{T} \sum_{i=1}^{T} |y_i - \hat{y}_i|

NSE = 1 - \frac{\sum_{i=1}^{T} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{T} (y_i - \bar{y})^2}

where y_i is the observed data, \hat{y}_i represents the predicted values, \bar{y} is the mean of the observed data, and T is the length of the data.
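These three metrics translate directly into code; the short helpers below mirror the definitions above (the function names are ours).

```python
import numpy as np

def rmse(y, yhat):
    y, yhat = np.asarray(y), np.asarray(yhat)
    return np.sqrt(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    y, yhat = np.asarray(y), np.asarray(yhat)
    return np.mean(np.abs(y - yhat))

def nse(y, yhat):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the observed mean."""
    y, yhat = np.asarray(y), np.asarray(yhat)
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
```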
Summary of the Proposed EMD-Based Predictive Deep Belief Network
The proposed EMD-based predictive DBN consists of four main steps: (1) EMD decomposition of the data into a finite number of IMFs; (2) noise reduction based on partial reconstruction using only the relevant IMFs; (3) DBN modeling and training; and (4) prediction using the trained model. A flowchart of the proposed approach is shown in Figure 11. Two RBMs were used to construct the DBN. The DBN model is therefore made of one visible (input) layer, two hidden layers, and a final (output) layer for fine-tuning the entire network, as shown in Figure 3. Only two hidden layers (two stacked RBMs) were used to construct the DBN model because of the small data sample size. Larger numbers of hidden layers were experimented with, but they were found to over-fit. The use of other meteorological variables as features in addition to the SSI may increase the data size, and this may also necessitate the use of larger network sizes. The following steps summarize the procedure of the proposed method (Steps 1-5 are sketched in code at the end of this section):
1. obtain the different time-scale SSI (SSI 12 in this case);
2. decompose the time series data into several IMFs and a residue (r_n) using EMD;
3. reconstruct the original data using only the relevant IMF components;
4. divide the data into training and testing sets (80% for training and 20% for testing);
5. construct one training matrix as the input for the DBN;
6. select the appropriate model structure and initialize the parameters of the DBN (two hidden layers are used);
7. using the training data, pre-train the DBN through unsupervised learning;
8. fine-tune the parameters of the entire network using the back-propagation algorithm;
9. perform predictions with the trained model using the test data.
Because a typical RBM uses binary logistic units for its visible nodes, we modified the binary nodes to the continuous case in order to handle the continuous-valued SSI input data, using the technique presented in [18]. We rescaled the continuous-valued input data to the (0,1) interval and considered each continuous input value as the probability for a binary random variable to take the value 1. The transformation is given by:

X_{std} = \frac{X_{obs} - X_{min}}{X_{max} - X_{min}}

Results
The proposed denoising EMD-based predictive DBN was evaluated by applying it to predict drought indices of different lead times across the Colorado River Basin. The standardized streamflow index (SSI) was used as the drought index. The forecast errors for predicting SSI 12 one step ahead (one-month lead time) and two steps ahead (two-month lead time) for the chosen ten stations are presented in Tables 1 and 2.
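The sketch below ties Steps 1-5 together, reusing the emd() and dfa_exponent() sketches from earlier; the function name, the lag construction, and the 80/20 split mechanics are our illustrative assumptions, not the authors' code.

```python
import numpy as np

def prepare_emd_dbn_inputs(ssi_series, lags=6, train_frac=0.8):
    """Steps 1-5 of the proposed procedure: EMD + DFA denoising,
    (0, 1) rescaling, and construction of the lagged training matrix."""
    x = np.asarray(ssi_series, dtype=float)
    imfs, residue = emd(x)                                        # Step 2
    # Step 3: keep only IMFs with scaling exponent above the 0.5 threshold
    x = sum(f for f in imfs if dfa_exponent(f) > 0.5) + residue
    # rescale to (0, 1), as required by the logistic visible units
    x = (x - x.min()) / (x.max() - x.min())
    # Step 5: rows are (S_{t-lags+1}, ..., S_t); targets are S_{t+1}
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    y = x[lags:]
    split = int(train_frac * len(y))                              # Step 4
    return (X[:split], y[:split]), (X[split:], y[split:])
```

The training matrix returned here would then be fed to pretrain_dbn() and a supervised fine-tuning stage (Steps 6-9).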
Six models were compared: the MLP, the SVR, the DBN, and the hybrid versions of these three models based on EMD-DFA decomposition. Decomposition and denoising significantly improved the performance of all three base models, with both the DBN and the SVR performing far better than the MLP. Optimal parameters for both the MLP and the SVR were obtained by grid search on the training portion of the dataset. Three performance metrics were used to compare the models: the RMSE, the MAE, and the NSE. A histogram of the RMSE and MAE for the one-step-ahead predictions is shown in Figure 12. The accuracies of the DBN and the SVR are similar for most stations in the one-step-ahead prediction. In the two-step-ahead prediction, however, EMD-DBN outperforms the other models, recording the lowest prediction errors at all stations, as shown in Table 2. These results emphasize the value of the unsupervised pretraining provided by the DBN over traditional neural networks. Figure 13 shows the one-month and two-month lead-time forecasts for the Lee's Ferry station. Results for six-month and twelve-month lead-time forecasts are shown in Figure 14, where it can clearly be observed that prediction accuracy decreases as the prediction horizon increases.

Conclusions

Drought modeling and prediction have attracted considerable attention from researchers around the globe over the last two decades. In order to understand the behavior of future drought events, modeling of drought indices is very important. This study explored a DBN for drought prediction and proposed a hybrid model (EMD-DBN) for long-term drought prediction. The results were compared with those of the DBN, MLP, and SVR alone and with EMD-MLP and EMD-SVR. Overall, the DBN and the SVR, together with their hybrid versions, showed comparatively similar prediction errors for the one-step-ahead predictions, as shown in Table 1. However, the DBN and the proposed EMD-DBN outperformed all other models for the two-step-ahead predictions at almost all stations, as shown in Table 2.
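The paper does not list the searched parameter ranges; the following is a minimal sketch of the kind of grid search described above, using scikit-learn's SVR. The grid values and variable names are illustrative assumptions, not the authors' settings.

from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.svm import SVR

# Illustrative grid; the actual ranges used in the paper are not reported.
param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": [1e-3, 1e-2, 1e-1],
    "epsilon": [0.01, 0.1],
}

search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid,
    scoring="neg_root_mean_squared_error",
    cv=TimeSeriesSplit(n_splits=5),  # respects the temporal order of the SSI series
)
# search.fit(X_train, y_train)       # X_train, y_train from the lagged-feature matrix
# best_svr = search.best_estimator_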
Though the performance of the MLP improved with the decomposition and denoising of the drought indices, its performance relative to both the DBN and the SVR was poor. In all, the improvement of the hybrid models over the single models suggests that errors in time series prediction can be reduced significantly by series decomposition using EMD. Pre-processing the original input dataset with EMD decreases the complexity of the data, allowing the removal of noisy components and therefore improving prediction accuracy. For long-term predictions, it was observed that prediction accuracy decreases as the prediction horizon increases. This was not unexpected, because a recursive approach was employed for longer lead times, where previous predictions were used as inputs for lead times greater than one. The results obtained from this work are very promising and pave the way for further work in which hybrid models combining empirical mode decomposition with feature selection and other temporal models, such as recurrent neural networks and their variants, can be explored. Additionally, given the good performance of the SVR model, future work may employ the SVR as the last layer of the pretrained DBN model. The search for optimal SVR parameters could be improved by considering evolutionary optimization algorithms such as genetic algorithms (GAs) or PSO; this is worthwhile because the grid search used in this work may be sensitive to the selected SVR parameters and might have influenced the results.

Future work will also consider the use of other meteorological variables such as precipitation and temperature, and of large-scale climate variables such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO) [51]. Drought is a very complex natural phenomenon, known to be influenced by several meteorological variables; the use of only one variable may not be adequate to provide reliable forecasts. Additionally, the effects of climate change are very diverse: they vary both locally and regionally in their intensity, duration, and areal extent. Hence, in order to understand the impact of climate change on drought, GCM outputs are downscaled to model drought variables on a large scale [17]. We will therefore try to adapt the current work to GCM or RCM outputs to assess drought characteristics.

Figure 1. An example of a restricted Boltzmann machine (RBM).
Figure 2. A single step of contrastive divergence.
Figure 4. Flowchart of the sifting process for the empirical mode decomposition (EMD) algorithm.
Steps of the fluctuation analysis applied to each IMF: (a) create a cumulative time series $Y_{i,m} = \sum_{j=1}^{i} X_{j,m}$ for $i = 1, \dots, n$; (b) fit a least-squares line $\bar{Y}_m = a_m x + b_m$ to $\{Y_{1,m}, \dots, Y_{n,m}\}$; (c) calculate the root mean square fluctuation (i.e., the standard deviation) of the integrated and detrended time series.
Figure 6. Scaling exponents of all intrinsic mode functions (IMFs) with a threshold of 0.5.
Figure 7. A plot of the original and reconstructed dataset.
Figure 8. Location of gages at the Colorado River near Lee's Ferry, Paria River near Lee's Ferry, Little Colorado River near Cameron, Virgin River near Littlefield, the Colorado River below the Hoover Dam, the Colorado River below Davies Dam, and the Colorado below Parker Dam (red dots).
Figure 10. An example of a recursive network.
Figure 12. Comparative plots of root mean square error (RMSE) and mean absolute error (MAE) for the various methods: one step ahead.
Figure 13. Observed and predicted drought index using the EMD-DBN model for the Lee's Ferry station.
Figure 14. Six-month and twelve-month lead-time forecasts.
Steps 4 and 5 of the EMD sifting process (continued from Figure 4): 4. extract the mean envelope $m(t)$ from the signal $x(t)$ and define the difference $d(t) = x(t) - m(t)$; 5. check the properties of $d(t)$: (a) if $d(t)$ satisfies the requirements of IMF conditions (1) and (2), then $d(t)$ is denoted as the $i$th IMF, written $c_i(t)$ with $i$ the order number of the IMF, and $x(t)$ is replaced with the residual $r(t) = x(t) - d(t)$; (b) if $d(t)$ is not an IMF, replace $x(t)$ with $d(t)$.
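The IMF-relevance test behind the fluctuation-analysis steps (a)-(c) above, thresholded at 0.5 per Figure 6, can be sketched as follows. This is a generic detrended-fluctuation sketch that follows the listed steps, not the authors' code, and the window sizes are illustrative.

import numpy as np

def dfa_exponent(x, window_sizes=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: slope of log F(m) vs. log m.
    An IMF whose exponent falls below ~0.5 would be treated as noise."""
    x = np.asarray(x, dtype=float)
    fluctuations = []
    for m in window_sizes:
        n_win = len(x) // m
        f2 = []
        for w in range(n_win):
            seg = x[w * m:(w + 1) * m]
            y = np.cumsum(seg - seg.mean())             # (a) cumulative series
            t = np.arange(m)
            a, b = np.polyfit(t, y, 1)                  # (b) least-squares line
            f2.append(np.mean((y - (a * t + b)) ** 2))  # (c) rms fluctuation
        fluctuations.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope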
8,663.2
2018-02-27T00:00:00.000
[ "Environmental Science", "Computer Science" ]
To Live with the Monsters In this world that lost its utopias and their projection of perfect and (im)possible worlds into the space-time continuum, dystopian narratives are the representations and the key to understand contemporary time and its fears. In this age devoid of a world order, political directions and, after 1989, temporal perspective towards the future, natural and human catastrophes, "internal" and "external" monsters, as well as the thinning of the borders between natural and artificial besiege our imag... In a May 2018 interview with the Los Angeles Review of Books 1 concerning her book, Carceral Capitalism, Jackie Wang was asked to summarize the link between the phenomenon of racialized mass incarceration and the seemingly omnipresent debt economy, which takes a central role in the book. Resolutely, the performer, poet and Harvard PhD candidate replied that she "wanted to think of debt as a form of unfreedom that is unequally distributed". One can hardly oppose this definition, for, in her book, the author goes as far as to state that, in writing these essays, she wanted to show "how carceral techniques of the state are shaped by - and work in tandem with - the imperatives of global capitalism" (p. 69). Furthermore, she asserts that "Carceral Capitalism is not an attempt to posit carcerality as an effect of capitalism but to think about the carceral continuum alongside and in conjunction with the dynamics of late capitalism" (p. 85), which connects with one of Wang's main theses, i.e. that "black racialization proceeds by way of a logic of disposability and a logic of exploitability" (p. 88). Taking a page from Wolfgang Streeck, Wang writes that, in a quintessentially neoliberal context, the evolution of the tax state into a debt state generates the necessary conditions for the emergence of a predatory state. In other words, neoliberalism has led to the collapse of the tax state and inextricably tied the sustainability of government bodies to the creation of debt; whenever payments come due, revenue must be secured, this time from the very population the government itself proposed to represent. As such, debt is at the core of Wang's concerns with the contemporary racialized nature of the Prison Industrial Complex, in which mass incarceration takes a particularly heavy toll on black and Latino minorities. Time and again, the author reiterates the notion that, as a result of the increasing hegemony of the debt economy, the government and its officials have shifted their priorities from the public that elected them to the financial institutions to which they are largely indebted. The author substantiates this by referring to the analysis of, for instance, David Harvey, who, quoted in this volume, bluntly states that "[i]f there is a conflict between the well being of financial institutions and the well being of the population, the government will choose the well being of the financial institutions; to hell with the well being of the population" (p. 164). This means that if the financial sector supports public debt, then the government becomes far more accountable to its creditors than to the public. Debt thus becomes a de-democratizing agent: the higher the debt, the lower an individual's (credit)worthiness, which eventually strips them of their fundamental rights and increases the probability that they will become a target for predatory fees and, eventually, incarceration.
To make matters worse, when debt itself is turned into a highly profitable commodity, the aforementioned demographics become an actual source of revenue. This was made particularly clear in the aftermath of the 2008 subprime mortgage crisis, which saw the profitization of debt taken to never-before-seen extremes. The collapse of the housing market generated a global economic crisis, which led to the defunding and loss of revenue for municipalities that, in turn, resorted to the "creation of municipal fiscal schemes" (p. 19) to make up for their losses. The public thus becomes an alluring source of revenue, and police departments are used as a tool for extracting valuable income, a process the author does not shy away from bluntly (and aptly) denouncing as "looting" (p. 22). Through these mechanisms, police departments are used "to plunder residents" (p. 19) and, unlike other public sectors, they "continue to operate" and "are among a meager handful of unions that have actually fared well" (p. 19), mainly due to what has been designated "offender-funded policing", similarly replicated in the court systems, wherein offenders are forced to pay various fees to cover the expenditures of their constitutionally inalienable right to due process (public defender fees, arrest fees, prison housing fees, etc.). Furthermore, in the context of a global financial crisis with debt at its core, it becomes understandably more profitable to invest in prisons rather than in social programs, thus starting a vicious cycle in which, from a socioeconomic standpoint, historically disadvantaged minorities are further hindered. But this is not new information and, as of late, it has found an audience. A considerable portion of the subjects approached by the author in this collection of essays has recently found its way into the public eye, largely at the hands of widely watched shows such as HBO's Last Week Tonight, in which, weekly, comedian John Oliver approaches a problematic subject to discuss at length, always grounding his exposition in extensive and rigorous research. Interestingly, one of the examples Wang draws upon to showcase the near-Kafkaesque horror generated by the aforementioned predatory practices of both police and the court system is the exact same one Oliver used in a fairly recent show about municipal violations, 2 wherein a man was forced to sell his own blood plasma to be able to make payments on the fees imposed by the court and the police department over a minor offense. Wang has no shortage of examples and deftly navigates her material by establishing a comfortable pace, made possible by a delicate balance between academic discourse as such and human-interest stories, sometimes taken from her own biography. Indeed, quite refreshingly, Wang often resorts to illustrating her points with vivid personal anecdotes. What she tells the reader regarding both her brother's experience of imprisonment and her own interactions with him from the perspective of a free individual is more than enough to put the extensive theoretical exposition that preceded this account into much-needed context. This way, a reader is better able to witness first-hand the differences between life in prison and life on the outside. Beyond the simple physical separation between free people and convicted offenders, it could be argued that serving time in prison creates a division that resembles something along the lines of time being stopped altogether.
It can hardly be expected that someone who spends years isolated from society will be able to cope with the hectic pace of, for instance, technological development. It is in these more personal notes that some of Wang's most inspiring glimpses of formal innovation come into play. While most of the chapters in the volume read as conventional essays, Wang innovates through carefully thought-out and aptly placed inserts that, in what seem to be excerpts taken from some sort of personal journal, read very much like poetry (another medium the author explores regularly). In what is perhaps the least conventional of the essays, chapter five (titled "The Cybernetic Cop: Robocop and the Future of Policing"), Wang adapts, for example, "a multimedia performance originally conceived for the L.A. Filmforum's Cinema Cabaret" (p. 253; italics in the original) into a somewhat lighter chapter that follows a highly expositive essay on predictive and algorithmic policing. As a whole, by placing her work in a thoroughly researched cross-section of post-Marxist economic theory and analysis, prison-abolitionist thinking framed by great bastions of African-American thought such as W. E. B. Du Bois, and something approaching the realm of poetry and performance that gives the volume a more human and perhaps more real and concrete quality, without ever trivializing the appropriately serious nature of her subject matter, Jackie Wang has succeeded in crafting an inspiring set of essays that shed light on some of society's most pressing concerns, concerns that have rightly begun to seep into public consciousness and that, aside from deserving everyone's undivided attention, prove yet again that, yes, the game is rigged. To Live with the Monsters In this world that lost its utopias and their projection of perfect and (im)possible worlds into the space-time continuum, dystopian narratives are the representations and the key to understanding contemporary time and its fears. In this age devoid of a world order, political directions and, after 1989, temporal perspective towards the future, natural and human catastrophes, "internal" and "external" monsters, as well as the thinning of the borders between natural and artificial besiege our imaginary and define outlines, meanings and references. The obsession with security in a time without future and dreams, through the continuous production of threatening diversities, identifies and constrains political objectives with the sole task of protecting and preserving the human species, the nation, the community, the family, private property; protecting the circulatory system of the social structure (that is, the capitalist system of private production and consumption) from potential and imaginary pathogenic germs. A system, by its nature based on the competitive principle of inclusion/exclusion, that at the same time protects, nourishes, consumes, devours and destroys human life. In this political and social scenario, an evident shift in social attention towards the visual media sphere, due to its "ability to arouse strong emotions" (p. 3) and its "very high commercialization" (ibidem), 1 makes this dimension not only the main space of construction of collective imagination, but even the space where reality itself is produced; its hyper-realistic dimension arises from the confusion between real content, representations and media aspects, as Gaia Giuliani highlights in this book on the monstrous figures of otherness: Zombie, alieni e mutanti.
The references to gender studies, post-human feminist philosophy and the race studies of recent years, particularly whiteness studies, provide key tools and working materials for the intersectional approach chosen by Giuliani, in this and other studies, to develop her articulated and comprehensive criticism of the current system of global domination. With this critical interpretation of the present times, Giuliani's research reconstructs the visual archive of contemporaneity, accessing this extreme space of representation of nightmares and real distortions of our imaginary: an exciting journey through science fiction, horror movies and TV series produced in the Western world in recent years. It is a work of semiotic deconstruction of film language, and at the same time a philosophical-political text. It is an interpretation of the present contextualized within the history of the last two centuries. Giuliani's point of view is crucial: the ghosts summoned in our confusing time to terrify us and to close ourselves to diversity actually come, like every good nightmare, from our past. This horizon is clarified and reconstructed with great analytical ability in this book. It focuses above all on the colonial past, its semantics and image repertoire as a space where the West has built and fed from its heritage of monstrous others. Current visual literature thus draws on the cultural heritage of a system of domination, which Giuliani analyzes with special and constant attention in her philosophical investigation of the production of racisms. Despite the gigantic work of historiographical revision of the last decades, which has brought to the surface all the "barbaric" violence carried out by the conquerors in the colonial past, living dead, anthropophagous monsters and insidious aliens continue to emerge from the past, sprung from racial genealogies; from that far world they rise to come and threaten and devour the "innocent" peace of our families and society. A continuous narrative of violent scenarios emerges, and so a political and social clash, vital and relevant, between civilizations and for civilization breaks out, the same one that fueled Western expansionism in the world for two centuries. Once more the national community, as an imagined community, is rebuilt through the monstrification of the other and the confirmation of its own endeavor towards progress, peace, and civilization. But, in this visual patrimony, the neocolonial representation of monstrous otherness also takes the form of attention to physical and psychic alterations, to the point that one can identify a post-human dimension in the relationship between the human being and nature, and between the human being and technology. Once more, the political objective is to take control over mutations, diversities, racial mixings, and indistinctness. The careful analysis of film production on which this research is based shows that the visual material examined is not only the expression but rather the live food, so to speak, the active and socially performing element, of a view of the present world and the future of humanity that is clearly claustrophobic and apocalyptic, and that in the end seems to want to leave us without any chance for social change.
Giovanni Ruocco
Edited by Ricardo Cabrita and João Gabriel
3,045
2019-12-01T00:00:00.000
[ "Philosophy", "Sociology" ]
Mutagenicity in a Molecule: Identification of Core Structural Features of Mutagenicity Using a Scaffold Analysis With advances in the development and application of in silico tools for predicting Ames mutagenicity, the International Conference on Harmonisation (ICH) has amended its M7 guideline to reflect the use of such prediction models for the detection of mutagenic activity in early drug safety evaluation processes. Since current Ames mutagenicity prediction tools focus only on functional group alerts or side chain modifications of an analog series, they are unable to identify mutagenicity derived from the core structures or specific scaffolds of a compound. In this study, a large collection of 6512 compounds is used to perform a scaffold tree analysis. By relating the different scaffolds on the constructed scaffold trees to Ames mutagenicity, four major and one minor novel mutagenic scaffold groups are identified. The recognized mutagenic scaffold groups can serve as a guide for medicinal chemists to prevent the development of potentially mutagenic therapeutic agents in early drug design or development phases, by modifying the core structures of mutagenic compounds to form non-mutagenic compounds. In addition, five series of substructures are provided as recommendations for the direct modification of potentially mutagenic scaffolds to decrease their associated mutagenic activities.

Introduction

In drug discovery, mutagenicity is an issue that needs to be avoided. The detection of mutagenicity at preclinical drug discovery stages can halt the development of potentially harmful drugs and aid in the development of safe therapeutic agents. Mutagenicity is a term used broadly to describe the ability of chemical agents or drug substances to induce genetic mutation. It is sometimes used interchangeably with the term genotoxicity, especially concerning the capacity of chemical agents to deleteriously change the genetic material in a cell. However, while all mutagens are genotoxic, not all genotoxic substances are mutagenic [1]. To avoid mutagens in drug candidate screening processes, many efforts have been made to determine the mutagenicity of compounds via in vitro approaches, of which the Ames test is the most common.

The Ames test was first introduced in the early 1970s by Bruce Ames [2-4]. It is a well-established and widely accepted method to assess the mutagenic potential of compounds to cause genetic damage in bacterial cells, for example through frameshift mutation or mutation by base-pair substitution [2]. It is recognized that genetic events are central to the overall development of cancer; therefore, evidence of mutagenic activity may indicate that a chemical substance has the potential to encourage carcinogenic effects. In therapeutic agents, carcinogenicity is strongly correlated with mutagenicity [5]. A positive Ames test would indicate that the chemical is mutagenic and highly likely to be carcinogenic; however, false-positive and false-negative test results have been reported as well. Despite that, the Ames test is still preferred over standard in vivo assays because it provides a quick, convenient, and cost-effective way to estimate the mutagenicity (carcinogenicity) of a compound.
The Ames test has been in use for almost 40 years; its assayed outcome usually correlates with lifetime rodent carcinogenicity studies, which require 2 years to complete [6]. For the purpose of this study, we mainly focus on the scaffold analysis of DNA-reactive (mutagenic) chemical agents in general; the carcinogenic risks associated with these agents will therefore not be discussed. In this study, the word "scaffold" is used primarily to describe the core structure of compounds. In accordance with the International Conference on Harmonisation (ICH) M7 guideline updated in June 2014, expert rule-based and statistics-based quantitative structure-activity relationship (QSAR) models can be utilized to estimate the potential mutagenicity of impurities in pharmaceuticals [7]. These models can also be utilized to determine the mutagenicity potential of drugs in safety evaluation. The application of in silico models to predict the mutagenicity of compounds has become popular in early drug discovery and development, sometimes before the compounds are synthesized [8]. The time and cost of drug design can be considerably reduced by avoiding the synthesis and analysis of mutagenic compounds. In recent years, several commercially and publicly available in silico tools have been developed to predict the mutagenicity of compounds based on the endpoints of the Ames test.

Currently, structural alert-based [9,10] and QSAR-based [11,12] models are the two main Ames mutagenicity prediction strategies; users can derive structure-activity relationship and/or mechanistic information from their predictions. Both DEREK for Windows (DfW) [9] and Toxtree [10] are expert prediction systems that utilize structural alerts (SAs) to predict the mutagenicity of compounds. The toxicological alerts are derived from the literature, academic and industry experts, available experimental data [13-15], and the Benigni-Bossa rules [16]. The QSAR-based approaches (e.g., Leadscope Model Applier (LSMA) [11] and MultiCASE (MC4PC) [12]) use regression models to describe the relationship between molecular properties (e.g., lipophilicity, polarizability, electron density, and topology) and the mutagenicity of the compounds being studied [17]. It would be especially useful to be able to relate the different core structures of a compound to their associated Ames mutagenicity. However, neither structural alerts nor correlative QSAR-based models can directly indicate whether a scaffold is more likely to be linked to mutagenicity [18]. The structural alerts approach only evaluates functional groups, and the correlative QSAR-based approach mostly emphasizes side chain or functional group analysis of an analog series; core structures or scaffolds are not the focus of either approach. If mutagenicity arises from the scaffold (core structure) itself, these approaches will not be able to flag the scaffold as the major cause of the mutagenic potency. This presents a serious problem because drug compounds are usually constructed from one or several similar core structures with different combinations of side chains. Essentially, all of the drugs from such a series might be mutagenic.
In this study, we analyzed the relationship between the scaffolds of diverse compounds by correlating scaffolds with mutagenicity in an Ames assay dataset of 6512 compounds collected from the literature [19]. The Scaffold Hunter [20] strategy was adopted to generate hierarchical relationships among the scaffolds of these compounds. From analyzing scaffold relationships, we established a list of scaffolds with potential mutagenicity. These scaffolds can be used as a basis for drug design to prevent the development of potentially mutagenic therapeutic agents; they can also be used to suggest non-mutagenic scaffolds to replace mutagenic core structures.

Benchmark Data Set: Ames Mutagenicity

In recent years, data on Ames mutagenicity have been collected and well organized. The Ames mutagenicity benchmark data set from Hansen [19], which includes mutagenicity data collected prior to 2009, was used in our study. Several recent works [21-23] have also used Hansen's data set because of its reliability. The Hansen benchmark data set [19] was derived from CCRIS [24], Helma et al. [25], Kazius et al. [26], Feng et al. [27], VITIC [28], and GeneTox [29]. Inorganic molecules and duplicate structures were omitted. Compounds with experimental results that contradicted DEREK or MultiCASE internal data were also removed. Chemical Abstracts Service (CAS) numbers and World Drug Index (WDI) names are provided. The final data set is balanced, containing 3503 mutagens and 3009 non-mutagens (6512 compounds in total; the "3053" appearing in some copies of this passage is inconsistent with the stated total). The mean molecular weight is 248 ± 134 (median MW: 229). An overview is presented in Table 1.

Scaffold Hunter

Scaffold Hunter is an interactive tool for intuitive hierarchical structuring, visualization and analysis of complex structure and bioactivity data, as well as for the navigation and exploration of chemical space. The program extracts chemically meaningful compound scaffolds (all carbo- and heterocyclic rings, aliphatic linkers, and atoms attached via double bonds) from a data set by removing all side chains except exocyclic or linking double bonds. Scaffold Hunter then iteratively removes one ring at a time from larger "parent" scaffolds to yield smaller "child" scaffolds according to the pruning rules [20]. Hierarchical arrangements of parents and children are combined to form a tree. "Virtual scaffolds" that do not exist in the dataset are constructed in silico. Each node in the tree denotes a scaffold. A parent scaffold is a substructure of a child scaffold, and while every child scaffold links to only one parent in the scaffold tree, a parent scaffold can be the common substructure shared among many different children scaffolds. Children scaffolds that share the same parent scaffold are termed "sibling scaffolds". It is worth noting that each compound can only be assigned to one scaffold node. A compound belonging to a specific scaffold node signifies that the largest core structure of this compound matches exactly, or is identical with, the scaffold structure assigned at that node.
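Scaffold Hunter's exact pruning rules are given in [20] and are not reproduced here; as a rough, publicly reproducible approximation, the first step (stripping side chains down to the ring-and-linker framework) corresponds to Murcko scaffold extraction, sketched below with RDKit. The example SMILES (9-aminoacridine) is an illustrative choice, not taken from the paper's dataset.

from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

def core_scaffold(smiles):
    """Strip side chains, keeping rings, linkers, and exocyclic double
    bonds -- the starting point of a scaffold-tree decomposition."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return Chem.MolToSmiles(MurckoScaffold.GetScaffoldForMol(mol))

# Example: 9-aminoacridine reduces to the bare acridine ring system.
print(core_scaffold("Nc1c2ccccc2nc2ccccc12"))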
In this work, we applied Scaffold Hunter to construct scaffold trees in order to illustrate the relationships between mutagenic and non-mutagenic scaffolds. These hierarchical trees assist with the visual analysis of parent-child and sibling structural relationships.

Cutoffs for Selecting Mutagenic and Non-Mutagenic Scaffolds

We assigned a mutagenicity value to each scaffold for the recognition of representative mutagenic and non-mutagenic scaffolds in the scaffold tree. The mutagenicity of a scaffold was defined as the ratio of mutagenic compounds to total compounds categorized under that scaffold. A mutagenicity cutoff was then specified for selecting representative mutagenic and non-mutagenic scaffolds. Scaffolds whose mutagenicities are greater than or equal to the cutoff are defined as representative mutagenic scaffolds, whereas scaffolds whose mutagenicities are less than the cutoff are defined as representative non-mutagenic scaffolds. In addition, mutagenic and non-mutagenic scaffolds have to cover at least 10 compounds.

The mutagenicity cutoff was adjusted to select a minimal number of mutagenic scaffolds covering a maximal number of mutagenic compounds (mutagens), that is, a minimal set of scaffolds that represents as many mutagens as possible. Thus, we maximized the ratio (C1/S) of the number of mutagens (C1) to the number of mutagenic scaffolds (S) when adjusting the cutoff under the selection criteria above. The detailed steps for selecting the best mutagenicity cutoff are demonstrated in the Results and Discussion.

Additionally, the non-mutagenicity cutoff was adjusted to select a minimal number of non-mutagenic scaffolds covering a maximal number of non-mutagenic compounds (non-mutagens). Accordingly, we maximized the ratio (C2/S) of the number of non-mutagens (C2) to the number of non-mutagenic scaffolds (S) selected under the given cutoff criteria.

The adjustment of the mutagenicity cutoffs and the selection criteria for choosing representative mutagenic and non-mutagenic scaffolds are discussed at length under "Selection Criteria for Mutagenic and Non-Mutagenic Scaffolds" below.
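A minimal sketch of the scaffold scoring and cutoff scan just described, in pandas. The column names are illustrative assumptions, and the real analysis walks a scaffold tree rather than a flat compound-to-scaffold table.

import pandas as pd

# df: one row per compound, with its assigned scaffold and Ames label (1 = mutagen).
def scan_cutoffs(df, cutoffs, min_compounds=10):
    """For each cutoff, count the selected mutagenic scaffolds S and the
    mutagens C1 they cover; the paper maximizes C1/S under these constraints."""
    g = df.groupby("scaffold")["ames"].agg(["mean", "sum", "count"])
    g = g[g["count"] >= min_compounds]          # scaffolds covering >= 10 compounds
    rows = []
    for c in cutoffs:
        sel = g[g["mean"] >= c]                 # representative mutagenic scaffolds
        S, C1 = len(sel), int(sel["sum"].sum())
        rows.append({"cutoff": c, "S": S, "C1": C1,
                     "C1_per_S": C1 / S if S else float("nan")})
    return pd.DataFrame(rows)

# Example, mirroring the ten cutoffs used for mutagenic scaffolds:
# scan_cutoffs(df, [1.0, 0.95, 0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55])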
Results and Discussion

In this section, we first describe and discuss the selection criteria for choosing representative mutagenic and non-mutagenic scaffolds. We then discuss the scaffold-mutagenicity relationships between the major and minor mutagenic scaffolds and their "children" scaffolds. In this study, a scaffold is defined as a fixed part of a molecule on which functional groups or other side chains can be substituted or exchanged. A mutagenic scaffold is defined as a scaffold that meets the following specifications: (1) its mutagenicity (score) is greater than the pre-determined selection criterion, and (2) at least ten compounds contain this scaffold as part of their structures. The "children" scaffolds in this study refer to the variations of molecules belonging to a family sharing an identical (fixed) scaffold. When all of the children of a mutagenic scaffold are also mutagenic, we define those children scaffolds and their parent mutagenic scaffold as a group of "major mutagenic scaffolds". These children scaffolds are considered mutagenic only if at least ten compounds in the Ames dataset contain them as part of their structures. On the other hand, for scaffolds with mutagenicity (score) lower than, but close to, the selection criterion, we define those children and their parent scaffold as a group of "minor mutagenic scaffolds". Different selection criteria were applied to identify the scaffolds covering the most mutagenic compounds. If a series of compounds sharing a scaffold and its children scaffolds are all mutagenic, we may infer that those scaffolds contribute significantly to mutagenicity. Therefore, all scaffolds satisfying the selection criteria are discussed according to their scaffold structures and substructures (children scaffolds). Finally, we derived reduction rules that elucidate how to modify a mutagenic compound into a non-mutagenic molecule. To specify these rules, the substructural differences between parent and child scaffolds were compared within each group of selected scaffolds.
Selection Criteria for Mutagenic and Non-Mutagenic Scaffolds

For the selection of mutagenic scaffolds, ten different mutagenicity cutoff percentages (100%, 95%, 90%, 85%, 80%, 75%, 70%, 65%, 60%, and 55%) were applied initially to search for an appropriate cutoff point that optimally differentiates the significance of mutagenicity among all core structures in our scaffold tree. In doing so, we aimed to obtain a minimal number of scaffolds covering a maximal number of mutagens. The C1/S distribution is plotted against the ten mutagenicity cutoff percentages in Fig 2A, where S is the number of scaffolds selected and C1 is the number of mutagens categorized into the selected scaffolds. The detailed values of the C1/S distribution plot are listed in Table A in S1 File; for example, C1/S reached its maximum of 23.9 (1075/45) at the 0.60 cutoff, with nearby cutoffs giving 23.6 (1016/43) and 23.5 (1082/46). Although the 60% (0.60) cutoff yielded the highest ratio of C1 (the number of mutagens categorized into the selected scaffolds) to S (the number of selected scaffolds), a 60% cutoff is not statistically meaningful. We therefore selected a higher mutagenicity cutoff that still retains a high C1/S value. Although the difference between the C1/S ratios at 0.60 and 0.70 is not significant (Table A in S1 File), selecting a cutoff higher than 0.70 would result in a loss of more than 20% of the mutagenic compounds. Therefore, the 70% (0.70) cutoff was chosen for the evaluation of Ames mutagenicity.

For the selection of non-mutagenic scaffolds, ten mutagenicity cutoff percentages (45%, 40%, 35%, 30%, 25%, 20%, 15%, 10%, 5%, and 0%) were chosen, to select a minimal number of scaffolds covering a maximal number of non-mutagens. The C2/S distribution is plotted against these ten cutoff values in Fig 2B, with detailed values in Table B in S1 File. Similar to the rationale for selecting the best adjusted C1/S cutoff, although the 45% (0.45) cutoff yielded the highest ratio of C2 to S, we opted for a lower cutoff percentage that still retains a high C2/S value. Since selecting a cutoff of less than 0.30 would result in a loss of more than 80% of the non-mutagenic compounds covered by the selected scaffolds, the 35% (0.35) cutoff was chosen. For simplicity, the mutagenicity cutoff points are referred to as mutagenicity scores in the remainder of this discussion.

In general, the C1/S ratios were higher than the C2/S ratios; this indicates that mutagens share more common scaffolds and chemical attributes than non-mutagens do. This study therefore focuses on the mutagens to identify the common scaffolds contributing to mutagenicity. Finally, with the selection criteria determined, 37 mutagenic scaffolds (mutagenicity score >= 0.70) and 12 non-mutagenic scaffolds (mutagenicity score <= 0.35) were identified and are summarized in Table D in S1 File.
Major Mutagenic Scaffolds from the Ames Mutagenicity Scaffold Tree

A total of 6512 compounds were included in our dataset. To organize the scaffolds covered by this dataset for easier interpretation, a scaffold tree was generated using Scaffold Hunter. The scaffold tree comprises 12 layers with a total of 2456 scaffolds. On average, each scaffold covered 4 compounds. From the assessment of Ames mutagenicity, 49 of the 2456 scaffolds were recognized as representative scaffolds: each is present in at least 10 compounds and has a mutagenicity score >= 0.70 or <= 0.35. Of these representative scaffolds, 37 have mutagenicity scores >= 0.70 and 12 have mutagenicity scores <= 0.35. In other words, the 37 scaffolds were identified from 996 compounds, of which at least 70% were known mutagens (860 Ames-positive and 136 Ames-negative compounds); similarly, the 12 scaffolds were identified from 259 compounds, of which less than 35% were known mutagens (57 Ames-positive and 202 Ames-negative compounds). Scaffolds for which more than 70% of the covered compounds are mutagenic are strongly correlated with mutagenicity; in contrast, scaffolds for which less than 35% of the covered compounds are mutagenic are considered to have a low tendency toward mutagenicity (non-mutagenic).

To determine the common structural features (or major mutagenic scaffolds) that contribute significantly to mutagenicity, the structural relationships among the 49 scaffolds were examined. Scaffolds sharing a common structural feature were grouped together. Thus, a scaffold tree was built for each group of scaffolds sharing a common structural feature, with the structural relationships depicted as direct parent-child correlations. Of the 37 mutagenic scaffolds identified above, 13 shared structural similarities and were categorized into four groups, as shown in Fig 1. The four major mutagenic scaffold groups are the acridine (1), phenanthrene (4), pyrene (7), and quinoxaline (11) groups. All of the children scaffolds listed under the four parent scaffolds have mutagenicity scores >= 0.70, and every child scaffold is present as the core structure of at least 10 compounds (Fig 1). Statistics on the rate of mutagens and the number of compounds for each scaffold in the four major mutagenic groups are listed in the first four rows of Table C in S1 File. The analysis reported here demonstrates that compounds bearing any one of these four major mutagenic scaffolds are very likely to induce mutagenicity regardless of their side chain modifications. The structural characteristics of the major mutagenic scaffolds are discussed in the following sections.
Major Mutagenic Scaffolds (I): Acridine Group

In the benchmark Ames mutagenicity dataset, acridine was considered one of the major mutagenic scaffolds because more than 70% of the compounds with acridine (1) as their core structure (94%, 50/53 compounds) were mutagenic. For example, N-acridin-9-yl-N',N'-dimethylpropane-1,3-diamine and 2-[[9-[3-(dimethylamino)propylamino]-1-nitroacridin-4-yl]-(2-hydroxyethyl)amino]ethanol both contain acridine in their structures, and both tested positive in the Ames test, significantly inducing colony growth in at least one of five Salmonella strains. The acridine scaffold tree consists of six children scaffolds; however, four of the six did not meet the selection criteria. These four scaffolds (not shown) were found in only 7 (< 10) compounds in the benchmark dataset, and only 3 of the 7 compounds tested positive for Ames mutagenicity, suggesting that these four children scaffolds do not contribute significantly to mutagenicity. For this reason, they were excluded from the acridine scaffold tree and from the discussion.

The two children scaffolds shown in the parent-children scaffold tree are benzo[c]acridine (2) and N-phenylacridin-9-amine (3) (Fig 1). They were considered major mutagenic scaffolds because their mutagenicity scores are higher than 0.70. While 94% of acridine-containing compounds are mutagenic (out of 53 compounds in total), not all of the benzo[c]acridine (2)- and N-phenylacridin-9-amine (3)-containing compounds are mutagenic: of the twenty-one benzo[c]acridine (2) compounds, 86% were mutagens, and of the eighteen N-phenylacridin-9-amine (3) compounds, 94% were mutagenic (Table C in S1 File).

The differences between the children scaffolds and the parent acridine (1) scaffold are as follows: one child scaffold, benzo[c]acridine (2), has an additional benzene ring compared to the parent scaffold, and the other, N-phenylacridin-9-amine (3), has an aniline structure added to the parent scaffold. Benzo[c]acridine (2) contains an additional benzene substructure compared to acridine (1), and we observed that its mutagenicity was thus slightly decreased (Table C in S1 File). An example of a benzo[c]acridine (2)-containing compound is 7,11-dimethylbenzo[c]acridine; this compound has two methyl groups added to benzo[c]acridine, and it tested positive for mutagenicity in the Ames test. The structure of N-phenylacridin-9-amine (3) has an aniline group added to the dihydropyridine ring of acridine (1), yet the mutagenicity of N-phenylacridin-9-amine (3) is similar to that of the acridine (1) parent scaffold (Table C in S1 File). Amsacrine is a drug used clinically in the treatment of acute leukaemia; its structure is composed of a methoxy group and a methylsulfonyl group attached to the N-phenylacridin-9-amine (3) core structure, and its mutagenicity has been investigated extensively [31]. These examples demonstrate that compounds containing the acridine (1) scaffold have a high tendency to be mutagenic, and that structural modifications of acridine (1) yielding children scaffolds such as benzo[c]acridine (2) and N-phenylacridin-9-amine (3) preserve this high tendency. Therefore, from the evidence presented above, the acridine (1), benzo[c]acridine (2), and N-phenylacridin-9-amine (3) scaffolds were collectively classified into one major mutagenic group.
Major Mutagenic Scaffolds (II): Phenanthrene Group

Phenanthrene (4) was considered the second major mutagenic scaffold in our analysis of the benchmark Ames dataset because 93% of the phenanthrene-containing compounds (out of a total of 40) were mutagens. Phenanthren-1-amine is an example of a phenanthrene (4)-containing compound from the benchmark dataset with a positive Ames test result. The phenanthrene (4) parent scaffold has two mutagenic children scaffolds (Fig 1B) and eleven non-mutagenic children scaffolds (not shown). Although most of the phenanthrene (4) children scaffolds were non-mutagenic, each non-mutagenic scaffold was present in an average of only 2 compounds, while the two mutagenic children scaffolds each covered more than 10 compounds. Furthermore, the mutagenicity scores of half of the non-mutagenic children scaffolds were very low (< 0.5). We can therefore reasonably ignore the phenanthrene (4) children scaffolds that do not contribute significantly to mutagenicity.

The structural relationship between the two mutagenic children scaffolds and phenanthrene (4) is shown in Fig 1B. The first child scaffold, 15,16-dihydrocyclopenta[a]phenanthren-17-one (5), was present in 13 compounds, 77% of which were known mutagens. It has an added cyclopentanone substructure compared to the phenanthrene (4) parent scaffold; interestingly, this addition led to a 16-percentage-point decrease in mutagenicity relative to phenanthrene (4) (Table C in S1 File). For the other mutagenic child scaffold, chrysene (6), 96% of the 23 compounds with chrysene as their core structure were mutagens. This indicates that, regardless of the addition of a benzene ring to the phenanthrene (4) parent scaffold, the high mutagenicity rate found in phenanthrene-containing compounds is reflected in chrysene-containing compounds. An example of a mutagenic compound with chrysene (6) as its core structure is 2-nitrochrysene. This section demonstrates not only that phenanthrene (4), 15,16-dihydrocyclopenta[a]phenanthren-17-one (5), and chrysene (6) have a direct parent-children structural relationship, but also that all of these scaffolds contribute significantly to compound mutagenicity. Hence, these scaffolds were organized into one major mutagenic group.

Major Mutagenic Scaffolds (III): Pyrene Group

Pyrene (7) was the third major mutagenic scaffold, but it may be the most important of the four identified in this study, because all of the compounds (39 in total) with a pyrene (7) core structure were mutagenic. This suggests that a pyrene-containing compound usually has the potential to induce mutagenicity; any compound with the pyrene (7) scaffold as part of its structure should therefore be carefully avoided in drug candidate selection. It should not be surprising that pyrene (7) itself was proven mutagenic in tests on different Salmonella strains, including TA97, TA98, TA100, and TA1537, in the presence of S9 [32]. 1,8-Dinitropyrene and N-(6-hydroxypyren-1-yl)acetamide are two examples of mutagens containing the pyrene (7) scaffold as part of their structures.
The pyrene (7) scaffold tree consists of thirteen children scaffolds, including three mutagenic children scaffolds and ten non-mutagenic children scaffolds. Although ten of the pyrene (7) children scaffolds were non-mutagenic, each non-mutagenic scaffold was present in an average of only 3 compounds, while the three mutagenic children scaffolds were each represented in at least 10 compounds, with high mutagenicity (rate of mutagens), as shown in Table C in S1 File. The third child scaffold, 9,10-dihydrobenzo[a]pyrene (10), contains a cyclohexene ring attached to pyrene (7); this addition yielded an overall mutagenicity of 90% among a total of 10 selected compounds (Table C in S1 File). An example of a known mutagen with a 9,10-dihydrobenzo[a]pyrene (10) core structure is (7S,8S)-3-nitro-7,8-dihydrobenzo[a]pyrene-7,8-diol. In summary, pyrene (7), benzo[e]pyrene (8), benzo[a]pyrene (9), and 9,10-dihydrobenzo[a]pyrene (10) are four noteworthy scaffolds that can cause mutagenicity, and they were classified into our third major mutagenic group.

Minor Mutagenic Scaffolds of the Ames Mutagenicity Tree

The naphthalene group consists of minor mutagenic scaffolds, since the mutagenicity scores of the scaffolds selected for this group lie between 0.35 and 0.70. As shown in the fifth row of Table C in S1 File, 62% of the compounds containing the naphthalene (14) scaffold are mutagenic. Examples of mutagenic naphthalene (14)-containing compounds include 1-(4-methoxynaphthalen-1-yl)prop-2-enyl acetate and N-hydroxy-N-naphthalen-2-ylformamide. In the naphthalene (14) group, two children scaffolds, anthracene (15) and phenanthrene (16), were identified; their structures are shown in Fig 3A. In both children scaffolds, the fusion of naphthalene (14) with an additional benzene ring at different positions resulted in much higher mutagenicity overall than that of naphthalene (14) itself. For anthracene (15), the benzene addition yielded 87% mutagens from a total of 31 anthracene (15)-containing compounds, while for phenanthrene (16), the benzene addition at a different fusion position yielded 93% mutagens from a total of 40 phenanthrene (16)-containing compounds (Table C in S1 File). The compounds 3-methylanthracene-1,8,9-triol and 2,10-dinitrophenanthrene are known mutagens containing the anthracene (15) and phenanthrene (16) children scaffolds, respectively. Although the naphthalene (14) parent scaffold does not contribute significantly to mutagenicity, both the anthracene (15) and phenanthrene (16) children scaffolds are linked to a higher rate of mutagens; we therefore conclude that compounds containing the naphthalene (14) group should be treated as carrying warning scaffolds for mutagenicity.
In addition to the naphthalene (14) group, the children scaffolds of benzene (19) were also examined (Table C in S1 File). However, since the benzene (19) scaffold is a common structure covering most compounds across a broad range of mutagenicity, benzene (19) cannot be regarded as a mutagenic scaffold group. Quinoline (17) was the third scaffold with a mutagenicity score between 0.3 and 0.7 (54% mutagens, out of 90 compounds). However, quinoline (17) was not recognized as a minor mutagenic scaffold either, because only one of the scaffolds in the quinoline (17) group was mutagenic. The quinoline (17) group contains two children scaffolds, N-phenylquinoline-8-sulfonamide (18) and N-phenylsulfamate, each covering at least 10 compounds in the Ames dataset. As shown in Fig 3B, N-phenylquinoline-8-sulfonamide (18) was classified as a mutagenic scaffold containing 94% mutagens (out of 53 compounds), while N-phenylsulfamate was a non-mutagenic scaffold with no mutagens (out of 11 compounds) (Table C in S1 File).

Reduction of Mutagenicity via Substructure Modification on Scaffolds

Most importantly, we have recognized a series of substructures that can be used to modify mutagenic scaffolds in order to decrease their mutagenic activities. By observing the variations in mutagenicity between the children of non-mutagenic scaffolds and the scaffolds in our recognized major/minor mutagenic groups, we derived five series of reduction rules that decrease mutagenicity by modifying the substructures of mutagenic scaffolds. The structural relationships of the scaffolds in the five cases are illustrated in Fig 4. The fourth case involved four children scaffolds under the minor mutagenic scaffold naphthalene (14): anthracene (15), phenanthrene (16), N-phenylnaphthalen-2-amine (28), and N-phenylnaphthalen-1-amine (29), with mutagenicity scores of 87%, 93%, 0%, and 0%, respectively (Fig 4D). To remove the mutagenic activity of compounds containing the anthracene (15) structure, we can remove a benzene ring from either side of anthracene (15) and attach an aniline to the resulting naphthalene (14); the final compound then contains the structure of N-phenylnaphthalen-2-amine (28) and shows no mutagenic activity. Similarly, when the core structure of compounds containing phenanthrene (16) is changed to N-phenylnaphthalen-1-amine (29), the mutagenicity of the resulting compounds is reduced.
Comparison with the Structure Alert Approach (Toxtree)

To cross-check the mutagenicity analysis conducted in this study, as well as the benefits of having scaffold-mutagenicity flags, the publicly available structure alert approach (Toxtree) was compared to the results of our analysis. All of the mutagens covered by our four identified major mutagenic scaffolds were tested with Toxtree. The mutagenic compounds covered by acridine, phenanthrene, and pyrene were all correctly predicted by both our study and Toxtree. Two mutagens, 5-(bromomethyl)-2,3-dimethoxyquinoxaline (32) (quinoxaline scaffold) and acridine-1,9-diamine (33) (acridine scaffold), are taken as examples successfully predicted by both our study and Toxtree. The structural alerts predicted for these two examples by Toxtree are presented in Fig 5, with the matched alerts highlighted and labeled in red text. According to the Toxtree analysis, 5-(bromomethyl)-2,3-dimethoxyquinoxaline (32) was predicted to be a mutagen due to the presence of an aliphatic halogen substructure alert. A similar result was observed for acridine-1,9-diamine (33), which was predicted to be a mutagen by Toxtree due to a primary aromatic amine structure alert. Because the acridine scaffold is a major mutagenic scaffold, acridine-1,9-diamine (33) was also predicted to be a mutagen in this study.

Among the four major mutagenic scaffolds, the pyrene (7) group, with a mutagenicity score of 1, indicates that all pyrene-containing compounds are mutagens. All mutagenic compounds covered by pyrene were also correctly predicted by Toxtree, since those compounds matched the structural alert "SA_18: Polycyclic Aromatic Hydrocarbons (three or more fused rings)". In fact, Toxtree contains very few scaffold-like structural alerts, such as "quinones", "halogenated benzene", and "Polycyclic Aromatic Hydrocarbons" [33]. However, such general scaffold-like structural alerts could result in many false negatives. Take "Polycyclic Aromatic Hydrocarbons" as an example: many compounds share the general properties of polycyclic aromatic hydrocarbons, and such a broad alert cannot indicate which ring systems actually drive the activity. Our study can assist in predicting which core structures are mutagenic, whereas Toxtree can only predict which substructures are mutagenic. If the mutagenicity of a compound arises from its core structure rather than from its substructural features, Toxtree will fail to identify that compound as a mutagen. In fact, the mutagenicity scores of most of our identified major or minor mutagenic scaffolds were less than 1; the mutagenicity of a compound can therefore also depend on some of its functional-group modifications. Furthermore, a portion of both the mutagens and the non-mutagens can still be correctly predicted by Toxtree. We agree that analyzing both the functional groups and the scaffold of a compound can enhance the predictability of its mutagenicity.
Conclusions

The major findings and conclusions of this study are: 1) all of the children scaffolds derived from major mutagenic scaffolds were also mutagenic; 2) parent scaffolds with insignificant mutagenicity may produce mutagenic or non-mutagenic children scaffolds depending on the attached substituents; and 3) when the core scaffold rather than the side chains of a compound is responsible for its mutagenicity, modifications can be made by replacing the mutagenic core structure with a different, non-mutagenic scaffold. Detailed lists of major mutagenic scaffolds and suggestions for the modification of mutagenic scaffolds were provided.

Supporting Information

S1 File. Additional Tables A-D and Figures A-D. Table A in S1 File: the numbers of scaffolds selected, the mutagens categorized into the selected scaffolds, and the ratios of C1 to S for different mutagenicity cutoffs (C1: number of mutagenic compounds; S: number of mutagenic scaffolds). Table B in S1 File: the numbers of scaffolds selected, the non-mutagens categorized into the selected scaffolds, and the ratios of C2 to S for different mutagenicity cutoffs (C2: number of non-mutagenic compounds; S: number of non-mutagenic scaffolds). Table C in S1 File: rate of mutagens and number of compounds for each scaffold in the major mutagenic scaffold groups (acridine, phenanthrene, pyrene, quinoxaline) and the minor mutagenic scaffold group (naphthalene).

Fig 1 provides an illustration of the hierarchy and organization between the four mutagenic parent scaffolds and their children scaffolds.

Fig 1. The scaffold structures of the major mutagenic scaffold groups: (A) acridine, (B) phenanthrene, (C) pyrene, and (D) quinoxaline. Below each scaffold name are the mutagen rate and the number of mutagenic compounds/the total number of compounds for that scaffold. In the structures of child scaffolds, the differences from the parent scaffold are colored red. The mutagenicities shown in Fig 1 are presented as the percentage of mutagenic compounds for each scaffold, and the IUPAC names were generated using the ChemAxon Marvin applet [30].

Fig 2. The C/S distribution plots for different mutagenicity cutoff values. (A) Selection of mutagenic scaffolds, using the mutagens categorized into each selected scaffold as the selection criterion (C1/S); the detailed scores are listed in Table A in S1 File. (B) Selection of non-mutagenic scaffolds, using the non-mutagens categorized into each selected scaffold as the selection criterion (C2/S); the detailed scores are listed in Table B in S1 File. (C1: number of mutagenic compounds; C2: number of non-mutagenic compounds; S: number of mutagenic (for C1) or non-mutagenic (for C2) scaffolds.)
Fig 3. The scaffold structures of the minor mutagenic scaffold groups: (A) naphthalene, (B) quinoline, and (C) benzene. Below each scaffold name are the mutagen rate and the number of mutagenic compounds/the number of overall compounds for that scaffold. In the structures of the child scaffolds, the differences from the parent scaffold are colored red. Table 1. Overview of the number of compounds in our collected dataset.
7,719.6
2016-02-10T00:00:00.000
[ "Biology" ]
ODE/IM correspondence and Bethe ansatz for affine Toda field equations We study the linear problem associated with the modified affine Toda field equation for the Langlands dual $\hat{\mathfrak{g}}^\vee$, where $\hat{\mathfrak{g}}$ is an untwisted affine Lie algebra. The connection coefficients for the asymptotic solutions of the linear problem are found to correspond to the $Q$-functions for $\mathfrak{g}$-type quantum integrable models. The $\psi$-system for the solutions associated with the fundamental representations of $\mathfrak{g}$ leads to Bethe ansatz equations associated with the affine Lie algebra $\hat{\mathfrak{g}}$. We also study the $A^{(2)}_{2r}$ affine Toda field equation in the massless limit in detail and find its Bethe ansatz equations as well as T-Q relations. Introduction The ODE/IM correspondence was proposed by Dorey and Tateo in [1], where they demonstrated an interesting relationship between a Schrödinger-type ordinary differential equation with anharmonic potential and the conformal limit of a certain two-dimensional quantum integrable model. It was shown that functional relations satisfied by the Stokes multipliers and spectral determinants of this ODE agree with those of the Q-operator and transfer matrix vacuum eigenvalues for an $A_1$-type quantum integrable system in the conformal field theory limit (see also [2]). The case where the Schrödinger differential equation is modified with an additional angular momentum potential was studied in [3]. This correspondence is now just a single example of the growing number of links between classical and quantum integrable models. The generalization of this massless ODE/IM correspondence to the simple Lie algebra $A_r$ was carried out in [4,5]. The case of other simple Lie algebras was studied in [6], where it was necessary to consider in general pseudo-differential equations. The work of [7] showed that the same results could be obtained by using a first-order formulation that did not require the introduction of a formal anti-derivative. Lukyanov and Zamolodchikov [8] studied the ODE/IM correspondence for the massive sine(h)-Gordon model and found that spectral determinants of a modified form of the classical sinh-Gordon model coincide with the Q-functions of the quantum sine-Gordon model, the affine Toda field theory for the algebra $A^{(1)}_1$. This was generalized to a relation between the classical Tzitzéica-Bullough-Dodd equation (the $A^{(2)}_2$ algebra) and the quantum Izergin-Korepin model in [9], and was studied for $A^{(1)}_r$-type affine Toda theories in [10,11]. In these works it was shown that connection coefficients for subdominant solutions to the linear problem associated with the affine Toda field equation correspond to the vacuum eigenvalues of Q-operators for $\mathfrak{g}$-type quantum integrable models. The work of [11] looked at ABCDG-type affine Lie algebras and found that the (pseudo-)ordinary differential equation associated with the $\hat{\mathfrak{g}}^\vee$ affine Toda field equation was the same as that of [6] for the simple Lie algebra $\mathfrak{g}$ after taking the conformal limit. While the work of [8,9] used a functional relation on the subdominant solution to the linear problem to obtain Bethe ansatz equations satisfied by the Q-function, the connection to the previously studied $\psi$-systems was not manifest. The $\psi$-system, a set of functional relations among uniquely defined solutions $\psi^{(a)}$ to a (pseudo-)ODE for $a = 1, \ldots, \mathrm{rank}(\mathfrak{g})$, was found in [6] (see also [7]).
These $\psi$-systems are similar to Plücker-type relations, and using these relations the authors were able to derive the Bethe ansatz equations satisfied by the Q-functions, which corresponded to the Q-functions of a conformal vertex model associated to $\mathfrak{g}$. In this paper we investigate the $\psi$-system of [6,7] and show how it also holds in the massive case for subdominant solutions to the linear problem associated to a modified affine Toda field equation for the affine Lie algebra $\hat{\mathfrak{g}}^\vee$, where $\hat{\mathfrak{g}}$ is an untwisted affine algebra. The case of $A^{(2)}_{2r}$ is unique in that it is non-simply-laced yet its Langlands dual is equal to itself. Furthermore, the correspondence in [11] links massive theories associated to the Langlands dual affine algebra $\hat{\mathfrak{g}}^\vee$ to conformal quantum theories associated with $\mathfrak{g}$ in the massless limit, so it is interesting to understand in more detail the $A^{(2)}_{2r}$ case, which does not fit into this scheme. To investigate the meaning in this case we also propose a new $\psi$-system for $A^{(2)}_{2r}$ and give evidence for it by studying the spectral determinant of the ordinary differential equation associated with the linear problem, finding its T-Q relations and the Bethe ansatz equations satisfied by Q. The case of untwisted non-simply-laced affine Lie algebras remains elusive at the moment. The flow of this paper is as follows. In section 2 we introduce the modified form of the classical affine Toda field equation used in this paper and its linear form. This section's main purpose is to introduce some special solutions to the linear problem determined by their asymptotic behavior near the irregular singularity at $z = \infty$ and the regular singularity at $z = 0$. Section 3 introduces the $\psi$-system of functional relations satisfied by the uniquely determined subdominant solutions $\Psi^{(a)}$ to the linear problem. These massive $\psi$-systems serve as the fulcrum of this work, linking the classical affine Toda differential equations with Q-functions corresponding to some massive quantum integrable model. Finally, section 4 uses the special solutions of section 2 and the functional relations of section 3 to give relations satisfied by the connection coefficients Q that are the same as Bethe ansatz equations for the associated quantum integrable models. Affine Toda field equations In this section we will first summarize the Lie algebra conventions used in this paper. We then introduce the modified affine Toda field equation, including its linear form, and study special solutions defined by their asymptotic behaviors. Lie algebra preliminaries A rank-$r$ Lie algebra $\mathfrak{g}$ has generators $\{E_\alpha, H^i\}$, where $\alpha \in \Delta$ (the set of roots) and $i = 1, \ldots, r$. The commutation relations satisfied by these generators are the standard ones [12], where $\alpha^\vee = 2\alpha/\alpha^2$ is the coroot of $\alpha$ and $N_{\alpha,\beta}$ are structure constants. The Lie algebra $\mathfrak{g}$ has fundamental weights $\omega_a$ and simple roots $\alpha_a$, where $a = 1, \ldots, r$ and $\alpha^\vee_a \cdot \omega_b = \delta_{a,b}$. The Cartan matrix is defined to be $A_{ab} = \alpha_a \cdot \alpha^\vee_b$. We normalize the roots so that the long root has squared length 2. Let $\hat{\mathfrak{g}}$ denote the affine Lie algebra of $\mathfrak{g}$. Its extended Dynkin diagram is obtained from that of $\mathfrak{g}$ by adding the root $\alpha_0 = -\theta$, where $\theta$ is the highest root. The (dual) Coxeter labels $n_a$ ($n^\vee_a$) are integers satisfying $0 = \sum_{a=0}^{r} n_a \alpha_a = \sum_{a=0}^{r} n^\vee_a \alpha^\vee_a$ and $n^\vee_0 = 1$. The (dual) Coxeter number $h$ ($h^\vee$) is the sum of the (dual) Coxeter labels, and the (co-)Weyl vector $\rho$ ($\rho^\vee$) is the sum of the (co-)fundamental weights. $\hat{\mathfrak{g}}^\vee$ denotes the Langlands dual of $\hat{\mathfrak{g}}$, whose simple roots are $\alpha^\vee_a$.
The simply-laced affine Lie algebras $A^{(1)}_r$, $D^{(1)}_r$, and $E^{(1)}_r$ are self-dual, whereas the non-simply-laced cases obey $(B^{(1)}_r)^\vee = A^{(2)}_{2r-1}$, $(C^{(1)}_r)^\vee = D^{(2)}_{r+1}$, $(F^{(1)}_4)^\vee = E^{(2)}_6$, and $(G^{(1)}_2)^\vee = D^{(3)}_4$. Modified affine Toda field equation First we will define the two-dimensional affine Toda field equation associated with $\hat{\mathfrak{g}}$. The theory is defined on the complex plane using coordinates $z = \rho e^{i\theta}$ and $\bar{z} = \rho e^{-i\theta}$, where ρ and θ are polar coordinates. The equation of motion for the two-dimensional modified affine Toda equation studied here is given in (2.5). The conformal factor $p(z)$ in this equation is chosen to have the form (2.6) (see [8,9]), a polynomial behaving as $p(z) \sim z^{hM}$ at large z. Equation (2.5) can be written as a zero-curvature condition, $dA + A \wedge A = 0$, where $A = \mathcal{A}\,dz + \bar{\mathcal{A}}\,d\bar{z}$ is the one-form given in (2.8). This zero-curvature condition can equivalently be written as a first-order linear problem defined on some finite-dimensional $\mathfrak{g}$-module, $(d + A)\Psi = 0$ (2.9). Such connections can be changed through an arbitrary gauge transformation of the form $A \to \tilde{A} = U A U^{-1} + U\,dU^{-1}$ (2.10). This leaves the zero-curvature condition and linear problem unchanged, and will be used to put the connection into various convenient forms. Asymptotic behavior Now we will look at the asymptotic behavior of solutions to the modified affine Toda field equation and its linear problem. First, following [8,9,10,11], we consider a special family of solutions φ(ρ, θ) to the equation of motion (2.5) with the following properties: (i) consistent with the choice of p(z) in (2.6), φ(ρ, θ) should have the periodicity $\phi(\rho, \theta + 2\pi/hM) = \phi(\rho, \theta)$; (ii) the field φ(ρ, θ) is real-valued for real ρ and θ (i.e. when $\bar{z}$ is identified as the complex conjugate of z), and finite everywhere except at the apex ρ = 0. The periodicity condition naturally leads one to define the following transformation, under which both the equation of motion and the linear problem are unchanged for integer k (2.14). Functions that are rotated by this transformation are said to be k-Symanzik rotated, and will often be denoted with a subscript, as in $\phi_k$. The linear problem also has another symmetry, $\hat{\Pi}$; this symmetry follows naturally by noticing how the generators $E_{\alpha_i}$ transform under $\hat{\Pi}$. For the following we will consider the linear problem (2.9) in the $\mathfrak{g}$-module $V^{(a)}$, where the representation of this module has highest weight $\omega_a$ and dimension $\prod_{\alpha>0} \frac{(\omega_a+\rho)\cdot\alpha}{\rho\cdot\alpha}$ [12], where ρ is the Weyl vector, half the sum of the positive roots. The vector space $V^{(a)}$ has a basis $\{e_j\}$. In this work, we will be interested in the unique solution $\Psi^{(a)}$ in the module $V^{(a)}$ that is subdominant, that is, the solution that decays fastest along the positive real axis. To find this subdominant solution it is useful to take a gauge transformation (2.10), with U given, respectively, in (2.18), that puts either the holomorphic or the anti-holomorphic connection into a form with no exponentials. In the large-z limit, $\phi(z,\bar{z}) \sim \frac{M\rho^\vee}{\beta}\log(z\bar{z})$ and $p(z) \sim z^{hM}$, and the connections simplify accordingly. The subdominant solution is then found, through consideration of the holomorphic and anti-holomorphic linear problems separately and then shifting back to the original gauge, in terms of the eigenvalues $\mu^{(a)}_\pm$ of $\Lambda_\pm$ with the largest real part and the corresponding eigenvectors in the module $V^{(a)}$. This eigenvalue is distinct and, furthermore, since the representations can be chosen such that $E^\top_\alpha = E_{-\alpha}$, we have $\Lambda_- = (\Lambda_+)^\top$ and the two eigenvalues and eigenvectors coincide. Finally, after setting the $\Psi^{(a)}$ from (2.21) and (2.22) to be equal, f and g are fixed up to a constant, giving (2.23). Applying $\hat{\Omega}_k$ to this for any real number k gives the k-Symanzik-rotated solution (2.24). Note that a $\hat{\Pi}$ transformation applied to $\Psi^{(a)}$ gives the same large-ρ behavior, and $\Psi^{(a)}_k$ is the subdominant solution in the corresponding Stokes sector (see [8,9]).
Solutions can also be characterized by their behavior near the regular singularity at z = 0. By considering the holomorphic and anti-holomorphic linear problems, it can be shown that such a solution $X^{(a)}$ exists, where the overall constant's dependence on λ is fixed by requiring that this solution be invariant under $\hat{\Omega}_k$. Note, however, that the $\Psi^{(a)}$ solutions do not display this invariance. Since the $X^{(a)}_i$ form a basis of solutions to the linear problem, the subdominant solution $\Psi^{(a)}$ can be expanded in this basis (2.27); the coefficients are the Q-functions $Q^{(a)}_i$. We conjecture that for affine Toda field equations with algebra $\hat{\mathfrak{g}}^\vee$ these Q-functions will correspond to the vacuum eigenvalues of Q-operators for some massive integrable quantum field theory associated with $\hat{\mathfrak{g}}$. We will give evidence for this correspondence by showing that in the conformal limit these connection coefficients Q satisfy Bethe ansatz equations associated to vertex models with Langlands-dual Lie algebra symmetry. ψ-system The ψ-system [6] is a set of Plücker-type relations satisfied by auxiliary functions that are constructed from the subdominant solution to a (pseudo-)ODE. The ψ-system was proved for A-type simple Lie algebras and was conjectured for all other simple Lie algebras. In [7], the ψ-system for classical Lie algebras was derived by studying the first-order system equivalent to the (pseudo-)ODE of [6] and embeddings of $\mathfrak{g}$-modules. We will study the ψ-systems in the context of modified affine Toda field equations with algebra $\hat{\mathfrak{g}}^\vee$ and show that the same system of functional relations holds in the massive case. In particular, it will be shown that the unique subdominant solutions $\Psi^{(a)}$ to the linear problem in the $\mathfrak{g}$-module $V^{(a)}$ satisfy the same ψ-system relations as in [7] for $\hat{\mathfrak{g}}^\vee$ when $\mathfrak{g}$ is a classical Lie algebra, and as in [6] when $\mathfrak{g}$ is an exceptional Lie algebra. We also find a new ψ-system for $A^{(2)}_{2r}$ affine Toda theories. Let us consider an embedding of modules as explained in [7] (see also [13]). In the $A_r$ case there is an embedding ι whose action (3.1) is fixed by consistency: the highest weights of the modules on the left and right sides are the same, $\omega_{a-1} + \omega_{a+1}$. Next, the incidence matrix $B_{ab}$ is related to the Cartan matrix as $B_{ab} = 2\delta_{ab} - A_{ab}$ (3.3). For $A_r$ the eigenvalues are such that the ψ-system with the same large-ρ behavior on both sides is found to be of the Plücker type (3.4), where it is natural to define $\Psi^{(0)}$ and $\Psi^{(r+1)}$ to be 1; here ι is the above embedding of modules. The ψ-system can be written in a general form for any simply-laced case (3.5), where the matrix $B_{ab}$ is defined in (3.3). When considering non-simply-laced cases, difficulties arise for $\hat{\mathfrak{g}} = B^{(1)}_r$, $C^{(1)}_r$, $F^{(1)}_4$, and $G^{(1)}_2$ in deriving a Bethe ansatz equation that has only simple poles, so we will not consider these untwisted non-simply-laced cases here. Twisted cases The ψ-systems for the twisted cases can be found by computing the eigenvalue of $\Lambda_+$ with the largest real part; the resulting relations take an analogous wedge/tensor form, with fractional Symanzik shifts (e.g. ±1/4) appearing for the twisted algebras. $A^{(2)}_{2r}$-type case This case does not fall under the $\hat{\mathfrak{g}}^\vee \to \mathfrak{g}$ identification, since there is no simple Lie algebra $X_r$ such that $(X^{(1)}_r)^\vee = A^{(2)}_{2r}$. Nevertheless, a study of the eigenvalues of $\Lambda_+$ for this case shows that the ψ-system that $\Psi^{(a)}$ satisfies is (3.12). When r = 1 this is the same functional relation as equation (4.77) in [9] for the case of the Tzitzéica-Bullough-Dodd model.
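For orientation, the simply-laced relation referred to above can be rendered schematically as follows. This display is our sketch, not the paper's equation verbatim: the ±1/2 subscripts denote k-Symanzik rotations, $B_{ab} = 2\delta_{ab} - A_{ab}$ is the incidence matrix, and the precise normalization of the embedding ι follows the paper's conventions:

$$\iota\left(\Psi^{(a)}_{-1/2}\wedge\Psi^{(a)}_{+1/2}\right) \;=\; \bigotimes_{b=1}^{r}\left(\Psi^{(b)}\right)^{\otimes B_{ab}}, \qquad a = 1,\ldots,r.$$

For $A_r$, where $B_{a,a\pm1} = 1$ and all other off-diagonal entries vanish, the right-hand side reduces to $\Psi^{(a-1)}\otimes\Psi^{(a+1)}$, consistent with the highest weight $\omega_{a-1}+\omega_{a+1}$ quoted above.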
Bethe ansatz equations Using the above ψ-systems, it is now possible to derive functional relations for the Q-functions defined in equation (2.27) that correspond to Bethe ansatz equations. We will verify that, for the modified affine Toda field equation with algebra $\hat{\mathfrak{g}}^\vee$, the Q-functions in the conformal limit satisfy the Bethe ansatz equations associated with $\mathfrak{g}$ found in the context of the massless ODE/IM correspondence [6]. The conformal limit for the modified affine Toda field equation with algebra $\hat{\mathfrak{g}}$ discussed in section 2 is reached using suitable rescalings, under which $\Psi^{(a)}$ and $X^{(a)}_i$ reduce to their conformal counterparts $\psi^{(a)}$ and $\chi^{(a)}_i$. (The definition of twisting used is from [14], where the roles of α and $\alpha^\vee$ are swapped compared with [11]; here $\alpha^2_0 = \tfrac{1}{2}$, $\alpha^2_i = 1$, and $\alpha^2_r = 2$, while $n^\vee_0 = 1$ and $n^\vee_i = 2$.) The coefficients $Q^{(a)}_i$ ($i = 1, \ldots, \dim V^{(a)}$) can be determined through the linear problem and the $X^{(a)}_i$; here $\omega \equiv e^{2\pi i/h(M+1)}$, where h is the Coxeter number of the affine Lie algebra $\hat{\mathfrak{g}}$. Under a k-Symanzik rotation in the conformal limit, the $Q^{(a)}$ acquire simple rotations of their arguments, and the resulting functional relations are exactly the same as the Bethe ansatz equations for $A_r$-type conformal vertex models. The same method applied to the other algebras gives the Bethe ansatz equations for the twisted cases, such as $A^{(2)}_{2r-1}$, $E^{(2)}_6$, and $D^{(3)}_4$. Each of these Bethe ansatz equations agrees with those reported in [6] (see also [15]) under the identification $\hat{\mathfrak{g}}^\vee \to \mathfrak{g}$. For the case of $A^{(2)}_{2r}$, the same procedure gives the Bethe ansatz equations (4.13). Note that for $A^{(2)}_2$, which corresponds to the Tzitzéica-Bullough-Dodd model discussed in [9], $\alpha_r = \omega_r - \omega_{r-1}$ and (4.13) reduces to their equation (4.85). Since this case does not fall under the identification $\hat{\mathfrak{g}}^\vee \to \mathfrak{g}$, it is important to verify these equations. To this end, in appendix A we derive the T-Q relations that give rise to the Bethe ansatz equations (4.13), starting from an analysis of the ODE itself and not using the ψ-system (see [16] for the $A^{(2)}_2$ case). Furthermore, in appendix B we also show how these Bethe ansatz equations can be found in the work of [17], which looked at Bethe ansatz equations associated with twisted quantum affine Lie algebras.
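Schematically, evaluating a ψ-system of this type at the zeros $E^{(a)}_j$ of $Q^{(a)}$ produces Bethe ansatz equations of the standard ODE/IM form. The display below is a sketch for the simply-laced case; the phase $\gamma_b$, which collects the twist/monodromy data, and the exact powers of ω are algebra- and convention-dependent and are not taken from the paper:

$$\prod_{b=1}^{r}\,\omega^{\gamma_b B_{ab}}\; \frac{Q^{(b)}\big(\omega^{+B_{ab}/2}\,E^{(a)}_j\big)}{Q^{(b)}\big(\omega^{-B_{ab}/2}\,E^{(a)}_j\big)} \;=\; -1, \qquad \omega = e^{2\pi i/h(M+1)}.$$

In this form, the equations follow from demanding consistency of the two-term (Plücker-type) relation at each zero of $Q^{(a)}$, which is also why only simple poles are acceptable in the derivation.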
Discussion In this paper we studied a classical affine Toda field theory for the affine Lie algebra $\hat{\mathfrak{g}}^\vee$ that is modified by a conformal transformation. Writing this modified affine Toda field equation in the linear form $(d + A)\Psi = 0$ translates the problem into holomorphic and anti-holomorphic first-order matrix ordinary differential equations. Studying the asymptotic behavior of solutions Ψ to this linear problem, a unique subdominant solution $\Psi^{(a)}$ is found, depending on the module $V^{(a)}$ in which the vector Ψ lives. These subdominant solutions $\Psi^{(a)}$ were then found to obey a set of functional relations, the massive ψ-system (see [6,7] for the massless case). Expanding $\Psi^{(a)}$ in the basis of solutions $X^{(a)}_i$ defines the Q-functions as the coefficients in this expansion. Substituting this expansion into the ψ-system in the conformal limit then gives a set of functional relations on the Q-functions that is of the same form as the Bethe ansatz equations associated with a $\mathfrak{g}$-type conformal quantum vertex model. This was carried out for modified affine Toda field equations with algebra $\hat{\mathfrak{g}}^\vee$, where $\mathfrak{g}$ is a simple Lie algebra, and the resulting Bethe ansatz systems matched those of [6,7]. The presence of the Langlands dual affine algebra hints that the ODE/IM correspondence here could be a manifestation of Langlands duality [18]. This identification under the conformal limit gives important evidence in support of our conjecture that the proposed ψ-systems hold for massive systems and that the ODE/IM correspondence links the classical modified affine Toda equations to a massive quantum integrable model. Furthermore, previous work on the massive ODE/IM correspondence in this context, on the modified sinh-Gordon equation [8] and on $A^{(1)}_r$-type Toda theories [10], is in agreement with this work. Also, the massive ODE/IM correspondence was recently studied in the case of the classical modified sinh-Gordon equation for a choice of p(z) defined on the 3-punctured Riemann sphere, and was found to correspond to the quantum Fateev model [19]. A generalization to affine Lie superalgebras [20] would also be interesting to study, to explore the integrable structure of superstring theory in AdS space-time. Note added: during the preparation of this paper, we became aware of [21], where the conformal limit has also been studied for the simply-laced cases. A T-Q relations for $A^{(2)}_{2r}$ In this appendix we will derive the Bethe ansatz equations (4.13) for $A^{(2)}_{2r}$, starting from the ODE satisfied by the top component of $\Psi^{(1)}$ in the conformal limit. This was done for the $A^{(2)}_2$ case in [16]. For this discussion the ψ-system will not be used explicitly, but for reference we write it down here as (A.1); in the conformal limit it reduces to Wronskian relations on the top components of the vectors $\Psi^{(a)}$. In the conformal limit the top component of $\Psi^{(1)}$, $\psi^{(1)}$, satisfies the ODE (A.3) (see [11]). This equation has a subdominant solution ψ with the asymptotic behavior (A.4). A Symanzik rotation of ψ(x, E, g) is defined to be $\psi_k(x, E, g) = \omega^{-kr}\,\psi(\omega^k x, \omega^{hMk} E, g)$, with $\omega = e^{2\pi i/h(M+1)}$. This implies that the solutions $\{\psi_k, \psi_{k+1}, \ldots, \psi_{k+2r}\}$ are linearly independent. We will also make use of Wronskian notation and define the auxiliary functions (A.8) which make up the ψ-system. To show that the functions (A.8) asymptotically satisfy the ψ-system (A.1), first note that the asymptotics of the $\psi_k$ are exactly what one would get in the case of $A^{(1)}_{2r}$. The work of [6,7] then gives a ψ-system of su(h) type for the auxiliary functions. The twisting of $A^{(1)}_{2r}$ to $A^{(2)}_{2r}$ implies that we expect $\psi^{(a)} = \psi^{(2r+1-a)}$. Using trigonometric relations, one can indeed show this; notice that the coefficient in the exponential here is exactly $\mu^{(a)}/\mu^{(1)}$, as required. This demonstrates that, after making the identification $\psi^{(r+1)} \sim \psi^{(r)}$, the ψ-system of $A^{(1)}_{2r}$ reduces to (3.12). In the case of $A^{(1)}_{2r}$ one cannot truly identify $\psi^{(r+1)} \sim \psi^{(r)}$, but for $A^{(2)}_{2r}$, in addition to the large-x behavior, the small-x behavior is also in agreement. Now, using the ψ-system (A.1), the Bethe ansatz equations (4.13) can be proven to hold through the T-Q relations we will now derive. Since $\{\psi_k, \psi_{k+1}, \ldots, \psi_{k+2r}\}$ form a basis of solutions, we can expand ψ in this basis (A.12). Then, using the notation $W_{k,k+1,\ldots,k+a-1}$ and the determinant relations in [4], one obtains T-Q relations in which $T^{(1)}(E) \equiv C^{(1)}(\omega^{-hM} E)$, and the Coxeter number h is 2r + 1 in this case. We will also expand $\psi^{(a)}$ in terms of solutions defined by their small-x behavior (A.14). After considering just the most divergent first term (i = 1) in this expansion, we can then make the identification between the expansion coefficients and the Q-functions. B Bethe ansatz for twisted quantum affine algebras In this appendix we show how the Bethe ansatz equations for $A^{(2)}_{2r}$ (4.13) are in agreement with [17], which looked at Bethe ansatz equations for twisted quantum affine algebras (see also [15]). The Bethe ansatz equation associated with a solvable vertex model for the twisted quantum affine algebra $U_q(A^{(2)}_{2r})$ [17] is given in (B.8). For these Bethe ansatz equations to agree with (4.13) in the $A^{(2)}_{2r}$ case, one simultaneously replaces $E^{(a)}_j \to -E^{(a)}_j$ for odd a, sets $\tilde{\theta} = \pi - \theta$, and takes N and $N^{(1)}$ to be even.
The identification then holds for the Q-functions.
5,150.6
2015-02-03T00:00:00.000
[ "Mathematics" ]
Evaluation of expanded uncertainty at glass thermometer calibration A method for calibrating a glass thermometer is investigated, and a procedure for measurement uncertainty evaluation based on the kurtosis method is developed. The correlation between the indications of the reference and calibrated thermometers is taken into account in the uncertainty evaluation. The effectiveness of applying the reduction method in calculating the uncertainty of correlated measurements is demonstrated. Uncertainty budgets have been drawn up, which can be used as the basis for developing software tools to automate the uncertainty evaluation. A real example of measurement uncertainty evaluation at glass thermometer calibration is considered. It is shown that taking into account the correlation between the measurement results of the calibrated and reference thermometers allows the combined and expanded measurement uncertainties to be reduced by almost a factor of 1.5. Agreement between the results obtained by the proposed method and those obtained by the Monte Carlo method is demonstrated. Introduction Glass thermometers are widespread in laboratory and industrial practice due to their high accuracy, low cost, and ease of use [1]. Their measurement range, depending on the thermometric fluid used (mercury, toluene, ethyl alcohol, kerosene, petroleum ether, pentane), extends from -200 to +750 °C. Glass thermometers, like other measuring instruments, need periodic calibration. In this case, in accordance with the requirements of the standard ISO/IEC 17025 [2], it is necessary to evaluate the measurement uncertainty. The main method for calibrating these thermometers is comparison with a reference thermometer using a transfer device (thermostat). In the process of calibrating a thermometer, the difference Δ between the indications of the calibrated thermometer and the reference thermometer is estimated, thus determining the systematic error of the calibrated thermometer at the calibration point [3]. Since the measurements by both thermometers are carried out simultaneously under the same conditions, the instability of the temperature of the thermostat causes a statistical interrelation (correlation) between their indications, which must be taken into account when developing the procedure for measurement uncertainty evaluation. Analysis of literature data and problem statement Currently, the efforts of Working Group 1 (WG-1) of the Joint Committee for Guides in Metrology (JCGM) are focused on revising the Guide to the Expression of Uncertainty in Measurement (GUM) [4]. The reason for the revision is the inconsistency between the uncertainty estimates obtained by the GUM method [5] and the estimates obtained by the Monte Carlo method (MCM) in accordance with Supplement 1 to the GUM [6]. Since [6] is based on the Bayesian approach to measurement uncertainty evaluation, this approach should also be used in the revised Guide (New GUM). In this case, the issues of taking correlation into account in the measurement uncertainty evaluation must be considered [7]. The purpose and objectives of the study This article considers a procedure for measurement uncertainty evaluation based on the Bayesian approach [4] that implements the kurtosis method proposed by the authors [8]. To determine the reliability of the developed procedure, its results should be compared with the results obtained by the Monte Carlo method [9].
Measurement Uncertainty Evaluation Algorithm The error of indication EX of the calibrated thermometer is obtained from the relation

EX = T̄c − T̄s + Δs + δc + ΔT,  (1)

where T̄c is the mean temperature indicated by the calibrated thermometer; T̄s is the mean temperature indicated by the reference thermometer; Δs is the correction due to the calibration error of the reference thermometer; δc is the correction due to the finite resolution of the calibrated thermometer; and ΔT is the correction due to the temperature unevenness inside the thermostat. The mean indications are

T̄c = (1/n) Σ Tci, T̄s = (1/n) Σ Tsi,  (2)

where Tci, Tsi are the indications of the calibrated and reference thermometers and n is the number of indications. All corrections in expression (1) are centered values, so their estimates are zero. Their standard uncertainties are:
- the standard uncertainty due to the finite resolution of the calibrated thermometer, u(δc) = d/(2√3), where d is the resolution of the calibrated thermometer;
- the standard uncertainty of the reference thermometer, u(Δs) = Us/ks, where Us and ks are, respectively, the expanded uncertainty and the coverage factor taken from the reference thermometer calibration certificate;
- the standard uncertainty associated with the temperature unevenness in the thermostat, u(ΔT) = TT/√3, where TT is the limit of temperature unevenness in the thermostat.

Determination of pairwise correlation of input quantities The change in temperature in the thermostat leads to a correlation between the indications of the reference and the calibrated thermometers. The correlation coefficient is estimated by the formula

r = Σ(Tci − T̄c)(Tsi − T̄s) / √[Σ(Tci − T̄c)² · Σ(Tsi − T̄s)²].

The standard uncertainty of the measurand is calculated according to the formula

u(EX) = √[u²(T̄c) + u²(T̄s) − 2r·u(T̄c)·u(T̄s) + u²(Δs) + u²(δc) + u²(ΔT)].

If there is a correlation between the reference and calibrated thermometer indications, the reduction method and a simpler expression for u(EX) can be used:

u(EX) = √[u²(Δ̄) + u²(Δs) + u²(δc) + u²(ΔT)],

where u(Δ̄), the standard uncertainty of the observed dispersion of the differences Δi = Tci − Tsi between the thermometer indications, is determined as

u(Δ̄) = √[Σ(Δi − Δ̄)² / (n(n − 1))].

9. Calculating the expanded uncertainty: U = k·u(EX), where the coverage factor k for the 0.95 confidence level is calculated as a function of the kurtosis η by the formula given in [7], and η is the kurtosis of the distribution of the measurand. For independent input quantities the kurtosis combines as η = Σ ηi·ui⁴ / u⁴(EX) (and analogously when evaluating by the reduction method), where the kurtosis values of the input quantities are taken from Table 1 in accordance with their distribution laws.

10. Uncertainty budget All of the information obtained above on the input quantities and the measurand is summarized in Table 2, which constitutes the uncertainty budget. The uncertainty budget is convenient to use as a basis for building a software tool to automate the process of measurement uncertainty evaluation. The uncertainty budget obtained with the reduction method is shown in Table 3.

An example of measurement uncertainty evaluation The indications of the calibrated and reference glass thermometers are shown in Table 4; the limit of temperature unevenness (in °C) is taken from the passport of the thermostat. The uncertainty budget for these data is listed in Table 5, and the uncertainty budget for the reduction method is presented in Table 6. The results of the measurement uncertainty evaluation for this example, obtained by the Monte Carlo method, are presented in Table 7 and completely coincide with the results of Tables 5 and 6. For the measurand EX: value of the measurand -0.01667; combined standard uncertainty 0.005227; coverage factor 1.974; expanded uncertainty 0.01033.
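As a minimal numerical illustration of the algorithm above, the following Python sketch computes the combined standard uncertainty both with the explicit correlation term and by the reduction method. All numbers (the paired indications, resolution d, certificate values Us and ks, and unevenness limit TT) are hypothetical placeholders, not the data of Tables 4-7:

import numpy as np

# Hypothetical paired indications (degC); real data would come from Table 4.
T_c = np.array([20.10, 20.12, 20.11, 20.13, 20.12])  # calibrated thermometer
T_s = np.array([20.12, 20.14, 20.13, 20.15, 20.14])  # reference thermometer
n = len(T_c)

# Assumed: resolution d, certificate U_s and k_s, unevenness limit T_T
d, U_s, k_s, T_T = 0.01, 0.02, 2.0, 0.02

# Type B standard uncertainties (rectangular distributions for d and T_T)
u_res = d / (2 * np.sqrt(3))
u_ref = U_s / k_s
u_T = T_T / np.sqrt(3)

# Type A uncertainties of the means and the correlation coefficient
u_Tc = T_c.std(ddof=1) / np.sqrt(n)
u_Ts = T_s.std(ddof=1) / np.sqrt(n)
r = np.corrcoef(T_c, T_s)[0, 1]

E_X = T_c.mean() - T_s.mean()

# (a) explicit correlation term: variance of a difference of correlated means
u_corr = np.sqrt(u_Tc**2 + u_Ts**2 - 2*r*u_Tc*u_Ts + u_ref**2 + u_res**2 + u_T**2)

# (b) reduction method: work directly with the differences Delta_i = T_ci - T_si
delta = T_c - T_s
u_red = np.sqrt(delta.std(ddof=1)**2 / n + u_ref**2 + u_res**2 + u_T**2)

print(f"E_X = {E_X:+.5f}, u (correlation) = {u_corr:.5f}, u (reduction) = {u_red:.5f}")

Because the variance of the mean difference equals the correlated-difference variance term by term, both routes agree, and for strongly correlated indications both are smaller than the uncorrelated combination, which is the roughly 1.5-fold reduction noted in the conclusions.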
Conclusions 1. The procedure for measurement uncertainty evaluation at glass thermometer calibration, based on the Bayesian approach and the kurtosis method, has been described. 2. Applying the reduction method to process correlated measurements makes it easier to calculate the measurement uncertainty. 3. The procedure is illustrated by a specific example, whose results showed complete agreement with the results of the calculation by the Monte Carlo method. 4. Taking into account the correlation between the measurement results of the calibrated and reference thermometers allows the combined and expanded measurement uncertainties to be reduced by almost a factor of 1.5.
1,591.8
2019-12-28T00:00:00.000
[ "Physics", "Engineering" ]
First timeseries record of a large-scale silicic shallow-sea phreatomagmatic eruption Phreatomagmatic eruptions are one of the most common styles of volcanic eruptions on Earth 1,2. Recent studies have highlighted the importance of the eruption depth and magma discharge rate on the eruptive behaviour of underwater volcanoes 2-7. Even though voluminous silicic eruptions in shallow-water environments are likely to be intense and hazardous, such eruptions mostly appear in geological records 7-12, and the nature of this type of eruption is therefore poorly understood. Here, we show the first timeseries record of a large-scale silicic phreatomagmatic eruption, which occurred at the Fukutoku-Oka-no-Ba volcano, Ogasawara, Japan, on 13 August 2021. The eruption started on the seafloor at a depth of < 70 m and breached the sea surface, resulting in a 16-km-high, steam-rich sustained eruption column. The total magma volume was ~0.1 km^3, including the subaerial tuff cone and the 300-km^2 pumice raft, most of which can be explained by the effective accumulation of pyroclasts near the vent resulting from interactions between the eruption plume and the ambient water. This eruption provides a rare opportunity to investigate the process of a large-scale phreatomagmatic eruption in a shallow sea and contributes to our understanding of the nature, dynamics, and hazards of submarine volcanism. Introduction Phreatomagmatic eruptions, caused by the interaction of magma and external water, are one of the most hazardous types of volcanic eruptions on Earth 1,2. Such eruptions can significantly impact areas around volcanoes by generating high-energy pyroclastic density currents (PDCs), strong pressure waves, and tephra-laden jets 13-16. The explosivity of phreatomagmatic eruptions increases when a certain amount of external water is incorporated and a higher energy-exchange efficiency from magmatic heat to mechanical energy is achieved 17-22. Therefore, the mixing conditions and the ratio of external water to magma are important for understanding the explosivity and eruption style of subaqueous eruptions. The explosivity and eruption style are also related to the water depth of the eruption and the magma discharge rate 2-7. In general, shallow-water environments result in more explosive eruptions than deep-water or dry conditions. Even small-volume eruptions with low magma discharge rates can be explosive, as often observed in Surtseyan-type eruptions 13-16. For large-volume eruptions, the explosivity and eruption style may change dramatically if the eruptions occur in a shallow-water environment where external water is efficiently involved 7-12; conversely, if the eruptions occur in a deep-water environment, the explosivity is suppressed by the high pressure 4-6. However, direct observational data of such large-scale phreatomagmatic eruptions are limited. Only the geological record can provide information concerning the possible surface phenomena of these eruptions 7-12, although recent remote-sensing tools, including satellites and the global infrasound monitoring network, have captured some smaller cases 23-27. The explosive submarine eruption that occurred in Tonga on 15 January 2022 may be one example of a large-scale phreatomagmatic eruption in the shallow sea; however, the physical and chemical processes and related parameters of that eruption are under debate. Accordingly, how such eruptions proceed and
their impacts in real space and time are poorly constrained. The 2021 Fukutoku-Oka-no-Ba (FOB) eruption may be the first to reveal the processes involved in a large-scale eruption of this type. The FOB volcano is one of the active volcanoes in the Izu-Ogasawara arc (Fig. 1a). On 13-15 August 2021, a large explosive eruption occurred at this volcano (Fig. 1b). Prior to the eruption, the depth of the summit of the volcano was 40-50 m b.s.l. (below sea level), and a slightly deeper fissure (~70 m b.s.l.), where the 2021 eruption began, existed on its northern side (Fig. 1c). We analysed the process of this eruption using satellite imagery, aerial photos, infrasound, plume modelling, and geochemistry. The eruption began with a sustained plume, which breached the overlying sea water and reached a height of 16 km a.s.l. (above sea level) within hours 28. As a result, the eruption reclaimed the shallow sea and produced a tuff cone (Fig. 1c). The eruption also produced a 300-km^2 pumice raft (Fig. 1d), which was dispersed by ocean currents more than 1,000 km west of the volcano. The pumice raft reached the coastal areas of the Pacific Ocean along the Japanese archipelago and caused damage to coastal infrastructure. A chemical analysis of the pumice clasts indicates that the eruption consisted of trachytic-trachydacitic magma with glass compositions reaching 68 wt.% SiO2. Therefore, the eruption was a large-scale silicic explosive phreatomagmatic eruption, the first ever recorded in modern history. Timeseries data Himawari-8 satellite and infrasound remote observations suggest that the FOB eruption began at 05:55 JST on 13 August 2021 (Fig. 2a, b). The eruption occurred in four phases. Phase 1 consisted of a continuous plume phase that began at 05:55 JST on 13 August and lasted for ~14 h, with fluctuations, until ~20:00 JST on 13 August. The period 12:00-19:00 JST was more intense, with sustained plumes. Phase 2 was a 14-hour pulsating phase, characterised by frequent strong infrasound signals. The period 05:30-06:30 JST on 14 August was more intense, with a sustained plume. Phase 3 consisted of 24 h of intermittent weak explosions and sparse strong explosions until 09:00 on 15 August. In Phase 4, the activity decayed. Phase 1 was the most powerful and produced the major eruptive products. It generated a continuous, white-coloured, vigorous plume directed to the west. At approximately 08:00 JST on 13 August the pumice raft was confirmed to be spreading circularly from the source (Fig. 2c). The development of the pumice raft was observed for the first 4 h, until the source area was covered by the eruption plume in the satellite view. After the cessation of Phases 1-3, the pumice raft drifted westward, carried by ocean currents. Plume characteristics of Phase 1 In the periods 12:00-19:00 JST on 13 August and 05:30-06:30 JST on 14 August, sustained eruption columns developed (blue bars in Fig. 2b). The most vigorous eruption columns formed in the 1 h following 14:00 JST and in the 20 min from 12:45 JST on 13 August. These eruption columns formed thin, laterally spreading clouds with a 15-20-km radius at 16 km a.s.l. (Fig. 2a), which corresponds to the tropopause height (SI). The eruption column was entirely white-coloured, indicating a steam-rich eruption. The thin, laterally spreading clouds (Fig.
1b) were similar to an 'anvil cloud' or 'incus', which is often observed when the upper portion of a strong cumulonimbus spreads out in the shape of an anvil along the tropopause 29. These clouds differ from those observed in (Sub-)Plinian eruptions, in which more vigorous, thicker, pyroclast-laden 'umbrella clouds' with a grey/brown colour develop 30. The FOB eruption column may have contained fine-grained ash; however, its shape, colour, and spreading behaviour along the tropopause did not provide evidence for a large amount of pyroclasts being suspended in the column. The area covered by the entire eruption cloud reached ~10^5 km^2 at approximately 15:00 JST on 13 August. However, there is no report or evidence of ashfall on any ships or boats in the downwind area or on the neighbouring Minami Io To island, 6 km south-southwest of the source. Thermal anomaly No thermal anomaly was detected in the near-source region, even when the root of the eruption column was clearly observable from Himawari-8. An aerial observation by the Japan Coast Guard (JCG) with an infrared camera at 15:00-15:30 JST on 13 August captured hot pyroclastic material being ballistically ejected; however, the thermal anomaly was likely too small to be detected by satellites. A reasonable interpretation is that most of the heat issued from the rising magma was consumed in the rapid vaporisation of seawater before it could be detected by thermal monitoring. Tuff cone and pumice raft formation The eruption formed a new tuff cone with an ~1-km-diameter crater around the vent; however, this cone was rapidly eroded by waves and separated into western and eastern islands. The cone can be seen in a satellite image taken on 14 August. The height of the cone was ~15 m at maximum, and the cone components were massive, poorly sorted, loose pyroclastic units, suggesting multiple depositional processes. These islands had disappeared by early 2022. From 15:00-15:30 JST on 13 August, during the most intense phase, the JCG airplane repeatedly observed laterally spreading PDCs at the source. Therefore, a major component of the new islands was likely formed via near-vent depositional processes such as partial collapses of the eruption column. The volume of the tuff cone is estimated to have been 0.04-0.07 km^3. A brown-coloured pumice raft began to form at 08:00 JST on 13 August and spread against the direction of the wind and ocean currents. The edge of the pumice raft reached ~4 km southeast of the source by 12:00 JST on 13 August (Fig. 2c), suggesting an upstream spreading speed of ~1 km/h before the raft was carried away by ocean currents. The area of the pumice raft reached ~300 km^2 at 01:00 JST on 15 August. Most of the pumice raft appeared to originate from the vent location. The growth of the pumice raft cannot be explained by direct deposition of fallout from the eruption plume, because the upwind area was never covered by the plume as a result of the strong easterly wind during the observation period. In the downwind direction, formation of the pumice raft was not observed, indicating that the eruption plume did not contain a large amount of pumice clasts dispersed in the distal direction. Therefore, most of the pumice raft is thought to have been generated directly from the vent during Phase 1. The volume of the pumice raft was estimated to be 0.1-0.3 km^3. Therefore, the sum of the volumes of the tuff cone and pumice raft is 0.1-0.4 km^3 (0.04-0.1 km^3 dense rock equivalent, DRE).
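The bulk-to-DRE conversion behind these figures is a density ratio. The sketch below is a minimal illustration assuming round-number deposit and melt densities; these densities are our assumptions and are not stated in the paper:

# Convert bulk deposit volumes to dense rock equivalent (DRE) via density ratio.
RHO_MAGMA = 2400.0  # assumed dense-rock (melt) density, kg/m^3
RHO_CONE = 1500.0   # assumed bulk density of the wet tuff cone, kg/m^3
RHO_RAFT = 500.0    # assumed bulk density of the floating pumice, kg/m^3

def dre(bulk_km3: float, rho_bulk: float) -> float:
    """Bulk volume (km^3) -> DRE volume (km^3)."""
    return bulk_km3 * rho_bulk / RHO_MAGMA

cone = [dre(v, RHO_CONE) for v in (0.04, 0.07)]
raft = [dre(v, RHO_RAFT) for v in (0.1, 0.3)]
total = (cone[0] + raft[0], cone[1] + raft[1])
print(f"cone DRE {cone[0]:.3f}-{cone[1]:.3f} km^3, raft DRE {raft[0]:.3f}-{raft[1]:.3f} km^3,"
      f" total {total[0]:.2f}-{total[1]:.2f} km^3")

With these assumed densities the total lands in the 0.04-0.1 km^3 DRE range quoted above; the paper's actual density assumptions may differ.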
Modelling a steam-rich eruption plume The magma discharge rate required to form the 16-km-high eruption column in Phase 1 was estimated using a one-dimensional eruption plume model 31,32 that includes the effect of the phase change of the external water. The effect of an amount of pumice that should have provided thermal energy to the plume but did not rise in the plume was also considered in this study (SI). The results indicate that a magma discharge rate of 3-6 × 10^5 kg/s is sufficient to explain the observed plume height if only a fraction (0.3-3 × 10^5 kg/s) goes into the plume (Extended Data Fig. 6). Assuming a nine-hour sustained plume (blue bars in Fig. 2b), the erupted mass is estimated to be 1-2 × 10^10 kg, corresponding to 0.004-0.008 km^3 DRE. Because some fraction of the eruptive material is deposited in the proximal area, the contribution of the magma to the distant fallout tephra is significantly smaller than these values. SO2 emissions The SO2 emissions were observed by the TROPOMI instrument installed on the Sentinel-5 Precursor satellite. The mass of SO2 emitted during the 15 h of activity was 2.1 × 10^7 kg. We also analysed the SO2 concentrations of the silicic matrix glass of the pumice and of the melt inclusions (MIs) in plagioclase, which is a major phenocryst in the products. The degassed SO2 was estimated to be 73.3 ppm from the difference between the SO2 concentrations of the matrix glass and the MIs. Using this SO2 concentration and the observed amount of SO2, the mass of the erupted magma is estimated to be 2.9 × 10^11 kg, corresponding to 0.11 km^3 DRE. Therefore, the total erupted volume estimated from the SO2 balance can be mostly explained by the sum of the geology-based and model-based tephra volume estimates, without assuming any other source. Discussion and Summary The eruptive volume of the FOB eruption has large uncertainties; however, a comparison between the volumes estimated from the geology and from the SO2 emissions indicates that it likely reached ~0.1 km^3 DRE. This eruptive scale, the high eruption plume, and the voluminous pumice raft produced by continuous magma discharge differ from the features of well-observed phreatomagmatic explosions, such as Surtseyan eruptions, which are characterised by a series of discrete events with a relatively low magma discharge rate 16,19. Plumes from Surtseyan eruptions rarely rise to high altitudes (generally less than 10 km) because their thermal flux is generally low 2,19. Instead, this type corresponds to Phases 2 and 3 of the FOB eruption. The magma discharge rate of Phase 1 is similar to those of (Sub-)Plinian eruptions, which are characterised by sustained explosive discharge of hot pyroclast and gas mixtures with a tall eruption column, resulting in the widespread dispersion of large amounts of pyroclasts 33-35. However, the style of the FOB eruption also differs from these types, because most of the pyroclasts accumulated near the vent and were consumed in forming the tuff cone and pumice raft. Infrasound also does not indicate features of sustained explosive eruptions.
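The SO2-based magma mass is a straightforward mass balance (the petrologic method): the observed SO2 mass divided by the SO2 yield per kilogram of melt. A minimal sketch reproducing the numbers above; the melt density used for the DRE conversion is our assumption:

M_SO2 = 2.1e7      # observed SO2 mass, kg (TROPOMI, 15 h of activity)
C_SO2 = 73.3e-6    # degassed SO2 per kg of melt (matrix glass minus melt inclusions)
RHO_MELT = 2500.0  # assumed dense-rock density, kg/m^3

m_magma = M_SO2 / C_SO2              # erupted magma mass, kg
v_dre_km3 = m_magma / RHO_MELT / 1e9 # DRE volume, km^3

print(f"magma mass ~ {m_magma:.1e} kg, DRE volume ~ {v_dre_km3:.2f} km^3")
# -> magma mass ~ 2.9e+11 kg, DRE volume ~ 0.11 km^3, matching the values in the text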
The features of Phase 1 of the FOB eruption may be explained by the effective decoupling of coarse pyroclasts from the eruption plume, caused by the interaction between the eruption plume and the seawater. In this process, the gas-pyroclast mixture above the submarine vent (50-100 m b.s.l.) penetrates the seawater and atmosphere. First, the mixture ingests the ambient seawater and develops an eruption plume, possibly in a manner similar to deep submarine eruptions 4-6. Then the plume reaches the sea surface, which acts as the boundary between two ambient media with different densities, where it may split into two parts (Fig. 3): (1) a part breaching into the atmosphere, driven by the upward migration of the mixture and by heating, where the seawater flashes to steam and is rapidly accelerated; and (2) a part remaining in the water, decelerated by the increasing density resulting from mixing with liquid water. Underwater, lighter parts may form density currents composed of a slurry of hot water and pyroclasts along the sea surface, and denser parts may form submarine density currents along the seafloor 36,37. After ejection of the material into the atmosphere, there may be further decoupling of the large, poorly fragmented clasts from the wet, cold gas-pyroclast mixture, which may result in partial column collapse and the generation of PDCs. The remaining buoyant parts of the plume rise and form a steam-rich convective plume that carries highly fragmented fine material. A large amount of floating pumice might result from sedimentation from subaerial PDCs and/or directly from the slurry gushing from the submarine vent (Fig. 3). The circular spread of the pumice raft from the source indicates that the slurry discharge was sufficient to drive an upstream current. The volume of the pumice raft is estimated to have reached 50-90% of the total erupted volume. This is similar to a silicic deep submarine eruption in which ~70% of the eruptive products were dispersed as a pumice raft 4. Phase 1 was followed by the Phase 2 and 3 Surtseyan eruptions, reflecting both the upward migration of the vent position and the decrease in the magma discharge rate (Fig. 3). The transition of the eruption style was likely significantly affected by the eruption depth and magma discharge rate. Phase 1 demonstrated an eruption style that appears when certain eruption conditions (depth of < 100 m, magma discharge rate of < 10^7 kg/s) are met. While a buoyant, steam-rich plume was generated, the tephra dispersal processes might have been significantly affected by the interaction of magma and seawater in the shallow-water environment. It is difficult to categorise this eruption into previously defined eruption styles such as 'Surtseyan' or 'Plinian' 33,34. 'Phreatoplinian' may potentially describe this type of eruption; however, that style was defined based on its deposit characteristics, such as extensive dispersion of voluminous fine-grained ash, reflecting intense fragmentation and perhaps a higher magma discharge rate 8-11. Conversely, we have no evidence of a large amount of fine-grained ash being generated in the FOB eruption, and the proposed near-vent processes may be different. The large-scale silicic phreatomagmatic eruption with a steam-rich sustained eruption column observed at FOB can be called an 'Ultra-Surtseyan' eruption.
The FOB eruption provides an important opportunity to explore the processes of large-scale silicic submarine eruptions in shallow-water environments. The surface phenomena, eruptive products, and emplacement processes of such eruptions may differ significantly from those of deep submarine eruptions and dry explosive eruptions. To examine these problems, we need to comprehensively survey the submarine deposits. Such information is essential to constrain the eruption and enhance our knowledge of submarine volcanism. Declarations The plume rise height is primarily determined by the heat flux or mass discharge rate from the source, because the development of a plume is driven by the conversion of thermal energy to potential energy. Under high-humidity conditions in the atmosphere, latent heat is released during the phase change from steam to liquid water and then to ice as the eruption plume rises, causing an increase in plume height. Another essential feature of this eruption was the existence of a large amount of pumice that should have provided thermal energy to the plume but did not rise in the plume. In the plume modelling, we applied the user-friendly plume model 'Plumeria' 31,32, which includes the effect of the phase change of water, to calculate the relation between the eruption parameters and plume height (see Supplementary Information for details). To incorporate the thermal energy from the pumice that did not rise in the plume, we adjusted the input parameters of the software to represent the water-rich, high-enthalpy mixture at the vent (Extended Data Figs. 3-6). To achieve a plume height of 16 km, the necessary mass discharge rate was estimated to be 3-6 × 10^5 kg/s. The temperature dependence is small compared with that on the mass discharge rate. Chemical analysis Pumice samples from the 2021 FOB eruption were taken by the Japan Meteorological Agency at 25°30.3′ N, 138°53.3′ E on 22 August 2021 during a survey cruise and at Minami Daito on 4 October 2021; samples were also acquired by Minami Daito Village at Minami Daito on 8 October 2021 and by our research group on Okinawa Island on 20 November 2021. We performed microscope observations and conducted whole-rock major element analyses using X-ray fluorescence spectrometry (ZSX Primus II, Rigaku Co., Ltd., Tokyo, Japan), and groundmass and mineralogical analyses using an electron probe microanalyser (EPMA, JXA-8800R, JEOL Ltd., Tokyo, Japan) with an acceleration voltage of 15 kV, a beam current of 12 nA, and a beam diameter of 10 μm, at the Earthquake Research Institute, University of Tokyo. On the basis of the microscope observations, all products of the 2021 FOB eruption include phenocrysts of plagioclase, clinopyroxene, Fe-Ti oxides, olivine, and apatite. Plagioclase is the most abundant phenocryst. The whole-rock chemical compositions of the 13 samples of FOB products are 61.7-64.0 wt.% SiO2, 1.3-2.5 wt.% MgO, and 9.6-11.1 wt.% Na2O + K2O, and are classified as trachyte or trachydacite. The ranges of the chemical compositions of the 2021 products are the same as those of past eruptions. The chemical compositions of the groundmass (GM) glass vary along a linear trend ranging from 56-68 wt.% SiO2, reflecting mixing with mafic magma, even though silicic GM is the main component. The silicic melt inclusions (MIs) in plagioclase have 65-67 wt.% SiO2. We also measured SO3 in the silicic GM (n = 95) and in silicic MIs in a plagioclase phenocryst (n = 32) using the EPMA. The SO3 concentrations of the silicic GM and silicic MIs were 334 ppm (1σ = 336 ppm)
and 425 ppm (1σ = 178 ppm), respectively. The SO3 concentrations were recalculated as SO2 concentrations, and finally the degassed SO2 was estimated to be 73.3 ppm from the difference between the SO2 concentrations of the silicic GM and the silicic MIs.
4,589.2
2022-02-11T00:00:00.000
[ "Geology", "Environmental Science" ]
Revised Subunit Structure of Yeast Transcription Factor IIH (TFIIH) and Reconciliation with Human TFIIH* Tfb4 is identified as a subunit of the core complex of yeast RNA polymerase II general transcription factor IIH (TFIIH) by affinity purification, by peptide sequence analysis, and by expression of the entire complex in insect cells. Tfb3, previously identified as a component of the core complex, is shown instead to form a complex with the cdk and cyclin subunits of TFIIH. This reassignment of subunits resolves a longstanding discrepancy between the yeast and human TFIIH complexes. TFIIH¹ is remarkable among RNA polymerase II (pol II) transcription factors for its size, catalytic activities, and multiple functional roles (1). Consisting of nine subunits, with a total mass of about 500 kDa, TFIIH is comparable in size and complexity to pol II. The largest subunits of TFIIH, termed Ssl2 and Rad3 in yeast, are DNA-dependent ATPase/helicases and are essential for unwinding promoter DNA at the active center of pol II (2-9). Two smaller subunits form a cyclin-dependent protein kinase (cdk)-cyclin pair that phosphorylates the C-terminal domain of pol II during the transition from transcription initiation to elongation (10,11). Beyond its role in transcription, six TFIIH subunits, including Ssl2 and Rad3, are components of a DNA "repairosome" responsible for nucleotide excision repair of DNA damage (12,13). In human cells, the counterparts of the cdk-cyclin pair perform yet another role, activating the cdks that drive the cell cycle (14,15). All nine subunits of TFIIH have been conserved in amino acid sequence from yeast to humans (1), and structural studies have demonstrated conservation as well (16,17). An intact nine-subunit "holo" TFIIH, capable of fulfilling the requirement for transcription, has been isolated from both yeast (18) and mammalian cells (19-21). Subcomplexes, apparently related to the distinct functional roles of TFIIH, have been reported (22,23). In previous work from this laboratory, a 5-subunit "core" complex of the yeast proteins Rad3, Ssl1, Tfb1, Tfb2, and Tfb3 was described (18,24), as was a separate complex of the cdk-cyclin pair, termed TFIIK (10). While subcomplexes of TFIIH subunits were also obtained from human cells (20,21,25), with compositions similar to those from yeast, a notable discrepancy arose in regard to MAT1, the human counterpart of Tfb3 (25). MAT1 was isolated in association with the cdk-cyclin pair, rather than as part of the core complex (14,15,25). Another protein, p34, replaced MAT1 in the human core complex (23,25,26). The yeast homolog of p34, termed Tfb4, was late to be identified (24), and although it was shown to be required for both transcription and nucleotide excision repair (27), it was not definitively assigned to a subcomplex. Recent studies (28) have indicated an association of Tfb3 with the cdk-cyclin pair rather than with the yeast core complex. We now find that the component of the core complex originally identified as Tfb3 is, in fact, Tfb4.
We confirm and extend the evidence for a Tfb3-cdk-cyclin trimer. The revised molecular description of yeast TFIIH is entirely coincident with that of the human factor. Construction of Baculoviruses Containing Genes for TFIIH Subunits and Expression in Insect Cells-Open reading frames (ORFs) of the genes encoding Rad3, Tfb1, Tfb2, Ssl1, Tfb3, and Tfb4 were amplified from yeast genomic DNA by polymerase chain reaction (PCR) and cloned into the BacPAK9 baculovirus expression vector (Clontech). A hexahistidine tag was added at the C terminus of the Tfb1 ORF. Recombinant viruses were produced in monolayers of Sf21 cells as described (Clontech). For protein expression, Sf21 cells (~1.5 × 10^7) in a T75 flask were infected with various combinations of cloned virus stocks at a multiplicity of infection of 2-10. After 72 h, the cells were harvested and stored at −80 °C until use. Cells were lysed in 1 ml of buffer A (50 mM Hepes-KOH (pH 7.6), 10% glycerol, and 5 mM β-mercaptoethanol) containing 600 mM potassium acetate, 0.5% Nonidet P-40 (Calbiochem), and protease inhibitor mix (final concentrations of 6 μM leupeptin, 20 μM pepstatin A, 20 μM benzamidine, and 10 μM phenylmethylsulfonyl fluoride). The cell lysate was clarified by centrifugation at 20,000 × g for 30 min and loaded on a 0.5-ml column of Ni-NTA resin (Qiagen) equilibrated with buffer A containing 600 mM potassium acetate and 0.01% Nonidet P-40. After washing with 5 ml of buffer A containing 1.2 M potassium acetate and 0.01% Nonidet P-40, and 5 ml of buffer A containing 150 mM potassium acetate, proteins were eluted with buffer A containing 150 mM potassium acetate and 300 mM imidazole (pH 8.0). Peak fractions (10 μl) were subjected to immunoblot analysis. Construction of GST Fusion Proteins and Antibody Production-The ORFs of the yeast SSL1 and TFB4 genes were amplified by PCR and cloned between the BamHI and XhoI sites of pGEX6P-1 (Amersham Bioscience). GST-Ssl1 and GST-Tfb4 were overexpressed in Escherichia coli BL21 CodonPlus cells (Stratagene), grown in 500 ml of Luria broth at 37 °C to an A600 value of 0.6-0.8, and induced with 0.5 mM isopropyl-β-D-thiogalactopyranoside for 6 h at room temperature. The cells were harvested, frozen in liquid nitrogen, and ground with a mortar and pestle under liquid nitrogen to a fine powder (5-10 min). After the cell powder was thawed, 50 ml of lysis buffer (phosphate-buffered saline containing 1 M NaCl, 10 mM dithiothreitol, and protease inhibitor mix) was added, followed by stirring for 20 min at 4 °C, brief sonication, and centrifugation at 100,000 × g for 60 min.
The supernatant was loaded on a 3-ml column of GST-agarose (Sigma) equilibrated with lysis buffer. The column was washed with 10 volumes of lysis buffer, followed by 10 volumes of lysis buffer without NaCl. GST fusion proteins were eluted with lysis buffer containing 10 mM glutathione and no NaCl. GST-Tfb4 was dialyzed overnight against phosphate-buffered saline, followed by concentration to 2 mg/ml in a Vivaspin 6 concentrator, 30,000 molecular weight cutoff (Vivascience). About 1 mg of GST-Tfb4 was used to inoculate a rabbit (Covance, PA). GST-Ssl1 was fractionated by SDS-PAGE and visualized by staining with Coomassie Brilliant Blue R-250. The GST-Ssl1 protein band was excised and used to inoculate rabbits (Covance, PA). Affinity Purification of TFIIH Complexes-Yeast strain YT062 or YT063 was grown in 7.5 liters of 2× YPD (4% (w/v) Bacto Peptone, 2% (w/v) yeast extract, 4% (w/v) glucose) to an A600 value of 8-9. Cells were harvested and washed once with cold water, and the resulting cell pellets (~180 g) were extruded into liquid nitrogen through a 60-ml syringe. The frozen cells were broken in liquid nitrogen essentially as described previously (30), using a 2-liter Waring blender at high speed for 10 min with constant addition of liquid nitrogen. About 160 g of broken cells were thawed at 4 °C, and 230 ml of 0.27 M Tris acetate (pH 7.6), 1 M ammonium sulfate, 0.09 M potassium acetate, 1.8 mM EDTA, 18% glycerol, 10 mM β-mercaptoethanol, protease inhibitor mix was added. The mixture was stirred at 4 °C for 30 min and clarified by centrifugation in a Beckman JA14 rotor at 13,000 rpm for 20 min and then in a Beckman Ti45 rotor at 42,000 rpm for 90 min. Ammonium sulfate was added to 60% of saturation, followed by centrifugation in a Beckman JA14 rotor at 13,000 rpm for 45 min. The pellet was resuspended in 50 ml of buffer A (50 mM Hepes-KOH (pH 7.6), 10% glycerol, 5 mM β-mercaptoethanol) containing protease inhibitor mix, clarified by centrifugation in a Beckman Ti45 rotor at 40,000 rpm for 30 min, and loaded on a 0.8-ml IgG-agarose column (Sigma) equilibrated in buffer A containing 500 mM ammonium sulfate at 4 °C. The column was washed with 10 ml of buffer A containing 500 mM ammonium sulfate and with 10 ml of buffer A containing 100 mM ammonium sulfate. The column was then equilibrated with 50 mM Hepes-KOH (pH 8.0), 0.1 mM EDTA, 200 mM potassium acetate, 5 mM β-mercaptoethanol and eluted by overnight incubation in the same buffer containing TEV protease (40 μg/ml) at 4 °C. Peptide Sequence Analysis and Protein Identification-A phenyl column fraction of highly purified core TFIIH (60 μl), prepared as described previously (17), was precipitated with 20% acetone in the cold and subjected to 10% SDS-PAGE. The lowest molecular weight band, visualized with Coomassie Brilliant Blue R-250, was excised. The gel slice was dried in a SpeedVac.
The protein was digested with trypsin, peptides were fractionated on a Poros 50 R2 RP micro-tip, and the resulting peptide pools were analyzed by matrix-assisted laser-desorption/ionization reflectron time-of-flight (MALDI-reTOF) MS using a Bruker UltraFlex TOF/TOF instrument (Bruker Daltonics; Bremen, Germany), as described previously (31, 32). Selected experimental masses (m/z) were then used to search a non-redundant protein database (~1.4 × 10^6 entries; National Center for Biotechnology Information, Bethesda, MD) with the PeptideSearch algorithm (Matthias Mann, Southern Denmark University, Odense, Denmark). A molecular weight range twice the predicted weight was covered, with a mass accuracy restriction better than 40 ppm and a maximum of one missed cleavage site allowed per peptide. Mass spectrometric sequencing of selected peptides was done by MALDI-TOF/TOF (MS/MS) analysis on the same prepared samples, using the UltraFlex instrument in "LIFT" mode. Fragment ion spectra were then used to search the non-redundant protein database with the MASCOT MS/MS Ion Search program (Matrix Science Ltd., London, UK). Any identification thus obtained was verified by comparing the computer-generated fragment ion series of the predicted tryptic peptide with the experimental MS/MS data.

RESULTS AND DISCUSSION

Tfb4, but Not Tfb3, Supports Expression of Core TFIIH in Insect Cells-We set out to express yeast TFIIH in insect cells, beginning with the previously defined core complex of Rad3, Ssl1, Tfb1, Tfb2, and Tfb3. A monolayer of Sf21 cells was infected with a mixture of baculoviruses, each harboring a gene for one of the five proteins, with that for Tfb1 bearing a hexahistidine tag. Clarified cell extracts were applied to Ni-NTA resin and eluted with imidazole, and eluted proteins were detected by immunoblotting with antibodies against the five proteins. Only Rad3, Tfb1, and Ssl1 were detected in the eluate (Fig. 1, lane 3). When the experiment was repeated, substituting a virus expressing Tfb4 for that expressing Tfb3, a five-subunit complex was obtained, as shown by the detection of all expressed proteins in the eluate (Fig. 1, lane 6).

FIG. 1. Assembly of core TFIIH in insect cells. Rad3, hexahistidine-tagged Tfb1, Tfb2, and Ssl1 were expressed in insect cells along with either Tfb3 (lanes 1-3) or Tfb4 (lanes 4-6). Expressed proteins were purified on Ni-NTA columns as described. The Ni-NTA fractions were analyzed by 10% SDS-PAGE, transferred to nitrocellulose, and probed with anti-Rad3, anti-Ssl1, anti-Tfb1, anti-Tfb2, anti-Tfb3, and anti-Tfb4 antibodies on the left. Controls on the right: expression of Tfb3 alone (lanes 7-9), Tfb4 alone (lanes 10-12), no expressed proteins (uninfected cells, lanes 13-15). L, load; FT, flow-through; E, peak fraction of eluate.

FIG. 2. Sequence analysis of the smallest subunit of core TFIIH. A, SDS-PAGE of highly purified core TFIIH from yeast. A phenyl 5-WP fraction (60 µl) was precipitated, separated by 10% SDS-PAGE, and visualized by staining with Coomassie Blue. The smallest subunit, ~37 kDa, was subjected to peptide sequence analysis. Molecular masses of protein standards (Bio-Rad) are indicated in kilodaltons at the left. B, tryptic peptides from the smallest TFIIH subunit, identified as described, are underlined in the deduced amino acid sequence of the yeast Tfb4 ORF.
In control experiments, none of the antibodies showed significant cross-reactivity with proteins in an extract from uninfected cells, and neither Tfb3 nor Tfb4, expressed individually, was retained nonspecifically on Ni-NTA resin (Fig. 1, lanes 9 and 12). We conclude that Tfb4 is required for the assembly of a five-protein core complex, which does not include Tfb3.

Peptide Sequence Determination Identifies the Smallest Subunit of Highly Purified Core TFIIH as Tfb4-The assembly of a core complex in insect cells with Tfb4 but not Tfb3 led us to question the previous assignment of the lowest molecular weight band in SDS gels of yeast core TFIIH preparations to Tfb3. As this assignment was based on mass spectrometry of tryptic peptides derived from the gel band (24), we repeated the analysis. We used a more recent, improved yeast core TFIIH preparation of sufficient purity for crystallization (17). Following SDS-PAGE (Fig. 2A), the smallest band, of about 37 kDa, was excised, dried, and subjected to MALDI analysis. All peptide fragments detected had sequences derived from Tfb4 (Fig. 2B). We conclude that the smallest subunit of core TFIIH is Tfb4 and that the previous results arose from contamination by Tfb3, nearly identical in size to Tfb4.

Affinity Purification of Tfb4 from Yeast Yields Core TFIIH, whereas Affinity Purification of Tfb3 Yields a cdk-cyclin-Tfb3 Complex-Despite the requirement for Tfb4 for assembly of a core TFIIH complex in insect cells, and despite the presence of Tfb4 along with four other proteins in a core TFIIH preparation from yeast, it still remains to be shown that all components of the core preparation are physically associated with one another. To this end, we expressed Tfb4 bearing a TAP tag in yeast. A crude extract was applied to an IgG column for binding the protein A moiety of the TAP tag, and specifically bound proteins were eluted by cleavage of the tag with TEV protease. SDS-PAGE and Coomassie Blue staining showed (Fig. 3A), and immunoblotting confirmed (Fig. 3B), the enrichment of Rad3, Ssl1, Tfb1, and Tfb2, together with Tfb4 (bearing the residual CBP component of the TAP tag). After this single step of affinity purification, only a few impurities remained. When the alternative experiment was performed of expressing TAP-tagged Tfb3 rather than Tfb4, the eluate from the IgG column contained no core TFIIH subunits (Fig. 3). Rather, what emerged were apparently stoichiometric amounts of Tfb3 and the cdk-cyclin pair (Kin28 and Ccl1). Similar evidence for a Tfb3-cdk-cyclin complex has been reported elsewhere (28). The results of affinity purification therefore support those from peptide sequence analysis and expression in insect cells, showing that Tfb4 is a subunit of core TFIIH, whereas Tfb3 is not. The results further demonstrate the association of Tfb3 with the cdk-cyclin pair, previously denoted TFIIK (10). With this realignment of subunits between core TFIIH and TFIIK, the compositions of the yeast complexes now correspond perfectly with their counterparts in human cells (Table I).

FIG. 3. Affinity-purified TFIIH subcomplexes. A, TFIIH subcomplexes were purified from Tfb3-tagged or Tfb4-tagged yeast strains as described. Peak fractions of IgG eluate (20 µl) were analyzed by 10% SDS-PAGE and staining with Coomassie Blue. Tfb3 and Tfb4 in the eluate retained the CBP component of the TAP tag. Ccl1 appears in two bands, designated Ccl1a and Ccl1b, as noted previously (33). Tfb3-CBP overlaps with Ccl1b. *, contaminant from the TEV preparation.
**, unknown contaminant. B, immunoblot analysis of the IgG eluates. Each fraction (10 µl) was analyzed by 10% SDS-PAGE, transfer to nitrocellulose, and probing with anti-Rad3, anti-Ssl1, anti-Tfb1, anti-Tfb2, and anti-Tfb4 (left) or anti-Tfb3, anti-Ccl1, and anti-Kin28 antibodies (right). *, species cross-reacting with anti-Tfb1 antibodies.

Footnote: 1 The abbreviations used are: TF, transcription factor; GST, glutathione S-transferase; TAP, tandem affinity purification; TEV, tobacco etch potyvirus; Ni-NTA, Ni2+-nitrilotriacetic acid-agarose; pol II, polymerase II; cdk, cyclin-dependent protein kinase; ORF, open reading frame; MALDI-reTOF, matrix-assisted laser-desorption/ionization reflectron time-of-flight; MS, mass spectrometry; CBP, calmodulin-binding peptide.
3,837.4
2003-11-07T00:00:00.000
[ "Biology" ]
Highly Robust DNA Data Storage Based on a 64-Element Coding Table with Controllable GC Content and Homopolymer Length

In this paper, we propose a DNA storage encoding scheme based on a 64-element coding table combined with forward error correction. The method encodes data into DNA sequences by LZW-compressing the original text and adding error correction codes and scrambling codes. The encoding process accounts for the effects of GC-content limits and long homopolymers on DNA sequences. At the same time, an RS error correction code is introduced to correct the DNA sequences and improve decoding accuracy. Finally, the feasibility and effectiveness of the scheme were verified by simulation experiments on Shakespeare's sonnets. The results show that the GC content of the encoded DNA sequences is kept at 50%, the homopolymer length does not exceed 2, and the original information can be recovered without error from 10-fold sequencing-depth data carrying an error rate of 0.3%. We conducted simulation experiments of primer design, DNA sequence recombination, PCR amplification, and sequence reading on DNA sequences loaded with the designed information, which further demonstrates the practical feasibility of the scheme. This scheme provides a reliable and efficient encoding method for DNA information storage.

Introduction

Traditional physical storage media can no longer meet the exponentially growing demand for data storage. According to forecasts, the global data volume will reach 1.75 × 10^14 GB by 2025 [1,2]. Data storage problems have been addressed by storing information in organic molecules such as DNA molecules, oligopeptides, and metabolite moieties [3]. Compared with traditional storage media, these emerging media show significant advantages in storage density, especially given the unique double-helix structure of the DNA molecule: the storage capacity can reach 455 EB per gram of DNA. In addition, DNA molecules have a half-life of about 521 years under proper storage conditions, and a half-life above 2 million years if stored in silica [4]. Based on these advantages of DNA molecules, such as high storage density and long-term stability, this ancient information carrier is regarded as a storage medium with great potential. In the DNA data storage process, the raw information is first converted into a binary sequence and then into a DNA sequence according to specific coding rules. These DNA sequences can be synthesized and stored in organisms or in vitro as oligonucleotides or double-stranded DNA, facilitating subsequent retrieval and reading of the original information with the relevant technologies. Thanks to its speed and accuracy, the latest third-generation DNA sequencing technology makes it easier to read and write the information stored in DNA molecules [5]. In conjunction with third-generation sequencing technologies, several random-read strategies have been proposed to achieve selective access to stored information [6-8], further enhancing the utility and scalability of DNA data storage.
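The first step of the scheme described below is LZW compression of the source text. As a point of reference, a minimal sketch of textbook LZW in Python (our illustration, not the authors' implementation):

def lzw_compress(data: bytes) -> list:
    """Textbook LZW: emit dictionary codes for the longest known prefixes,
    growing the dictionary as the input is scanned."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code, w, out = 256, b"", []
    for b in data:
        wb = w + bytes([b])
        if wb in dictionary:
            w = wb
        else:
            out.append(dictionary[w])
            dictionary[wb] = next_code
            next_code += 1
            w = bytes([b])
    if w:
        out.append(dictionary[w])
    return out

# Example: repeated phrases compress well, as in the sonnets.
codes = lzw_compress(b"to be or not to be")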
DNA sequences must follow biosynthesis and sequencing constraints, which effectively reduces errors during subsequent reads and increases decoding efficiency [9]. Homopolymer length (in nt) and GC content (in %) are two critical metrics for assessing the performance of coding schemes. For example, single-base runs (homopolymers) longer than five may introduce higher error rates during synthesis or sequencing [10,11]. In addition, GC contents below 40% or above 60% (i.e., extreme GC contents) are usually detrimental to the synthesis of DNA molecules [11-14]. Church [15], Grass [16], Blawat [17], and Erlich [18] used a binary transcoding strategy (0 for A or G, 1 for C or T) to control the GC content and avoid long homopolymer sequences. Subsequently, it was shown [19] that LT codes can efficiently handle many input symbol segments (also known as droplets), transmitting information effectively over erasure channels. DNA Fountain uses LT codes as an inner transcoding strategy, which significantly reduces the introduced redundancy. DNA Fountain screens droplets that meet the constraints according to its own screening mechanism, avoiding extreme GC content and long homopolymers. However, successful decoding of LT codes relies on sufficient logical redundancy; if too few droplets meet the criteria, decoding fails [18,19]. Reducing logical redundancy may increase the decoding failure probability, while increasing it decreases the information density and raises the synthesis cost.

At the same time, a transcoding method that uses LT codes as the inner code is also strict about the input symbol segment length. If the input symbol segment length and the message eBCH code length are chosen arbitrarily, the rate degrades in a fading channel [20]. To ensure information integrity, Goldman used a four-fold physical redundancy approach, in which an excess number of DNA molecules is synthesized, increasing the number of copies of each sequence [21]. However, this increases the cost of synthesizing the DNA sequences. Therefore, transcoding algorithms need to combine high storage density with high robustness so that they can handle various data types [22], while keeping costs under reasonable control so that DNA information storage can be applied in practical scenarios.

To this end, this paper proposes a DNA storage transcoding scheme based on a 64-element coding mapping table. First, the text to be stored is compressed and an RS error correction code is added to ensure data integrity [18,23-25]. Then, a perturbation (scrambling) sequence is introduced to prevent consecutive occurrences of the same binary symbol. Finally, the data are transcoded into DNA base sequences according to the proposed mapping table.

Restrictions on DNA coding

2.1 Hamming distance constraints

The Hamming distance is a measure of similarity between two code words. In DNA coding, a smaller Hamming distance means that two DNA strands share more identical bases. For two DNA sequences a and b of length n, the Hamming distance is calculated as [26]:

H(a, b) = Σ_{i=1}^{n} h(a_i, b_i), with h(a_i, b_i) = 0 if a_i = b_i and 1 otherwise,  (1)

where h(a_i, b_i) compares the bases at position i: it is 0 when the corresponding bases are the same and 1 when they differ.
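Eq. (1) translates directly into a few lines of code (a minimal sketch of the definition above):

def hamming_distance(a: str, b: str) -> int:
    """Eq. (1): number of positions i at which bases a_i and b_i differ."""
    if len(a) != len(b):
        raise ValueError("sequences must have equal length")
    return sum(ai != bi for ai, bi in zip(a, b))

# Example: these two 7-mers differ at exactly one position.
assert hamming_distance("GATTACA", "GACTACA") == 1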
GC content constraints

GC content is the ratio of the number of G and C bases to the total number of bases in a DNA sequence; it is closely related to the stability and melting point of the sequence. A designed DNA sequence should satisfy 40% ≤ GC(n) ≤ 60%. GC(n) is given by [26]:

GC(n) = (|G| + |C|) / |n| × 100%,  (2)

where GC(n) denotes the GC content of the sequence, |G| and |C| are the numbers of G and C bases in sequence n, and |n| is the total number of bases in the sequence.

Homopolymer constraints

DNA sequences should not contain long runs of repeated bases (homopolymers). Long runs of consecutively repeated bases can be misread when the synthesized DNA sequence is later sequenced, reducing the robustness of decoding. For example, in the sequence TCCCCAC the run of C bases is easily read as a shorter run during synthesis and sequencing, which increases the error rate of the stored information and decreases read/write accuracy. This can be avoided by limiting the homopolymer length: short homopolymers such as AAA, kept to at most 3-5 bases, can be read accurately by third-generation sequencing instruments.

DNA encoded storage program design

An ideal transcoding scheme should satisfy the constraints of Section 2 and be able to correct insertion and deletion errors. To this end, we propose a transcoding method based on a 64-element coding table. In this scheme, the original text is first compressed with the LZW mechanism and converted into a binary stream. An RS check block is added to the binary stream to generate a binary stream with an error-correcting code, which is then scrambled; finally, the rearranged binary sequences are transcoded into DNA sequences using the 64-element coding table. The steps are shown in Figure 1.

Fig. 1 Flowchart of the coding process based on the 64-element coding table

Design of the 64-element coding table

The DNA information storage process consists of four main steps [27]: (a) encoding digital information into a DNA sequence; (b) synthesizing the DNA sequences; (c) storing the DNA sequences; (d) recovering the stored information by sequencing. However, current high-throughput sequencing technologies (e.g., Illumina sequencing) may produce complex base pairing during amplification of DNA sequences, which can trigger base mutations, produce long homopolymers that corrupt the information carried by the original DNA sequence, and cause misreads in subsequent sequencing [28]. Therefore, a good transcoding scheme should satisfy the following constraints during DNA information storage: 1) avoid long homopolymers and 2) avoid extreme GC content. Our scheme selects, from the 4^4 = 256 possible four-base combinations, 64 that meet the GC content and homopolymer constraints, forming the 64-element coding table.
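The sequence-level constraints of Secs. 2.2 and 2.3 translate directly into simple predicates (a minimal sketch; the thresholds follow the text):

def gc_content(seq: str) -> float:
    """Eq. (2): percentage of G and C bases in a nonempty sequence."""
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def max_homopolymer(seq: str) -> int:
    """Length of the longest run of identical consecutive bases."""
    longest = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def satisfies_constraints(seq: str, max_run: int = 3) -> bool:
    """40% <= GC(n) <= 60% and no homopolymer longer than max_run."""
    return 40.0 <= gc_content(seq) <= 60.0 and max_homopolymer(seq) <= max_run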
When converting binary code streams into DNA sequences, mapping an overly long binary block to a shorter DNA base sequence is irreversible, while mapping a very short binary block to a longer DNA base sequence lowers the information density and raises the synthesis cost. We therefore treat each four-base combination in the set of 4^4 = 256 codes as a mapping element and filter these elements against the homopolymer constraint. For example, if the previous mapping element is AAAA, the next mapping element cannot be AAAA again, so that longer homopolymer runs are avoided. Likewise, if G and C bases occur too often within a mapping element, extreme GC content appears in the transcoded DNA sequence. For example, an element such as GCGC has a GC content of 100%, which makes its hydrogen bonds hard to break and hinders denaturation during the synthesis of DNA sequences. We therefore removed from the coding set all mapping elements with GC content below 25% or above 75%. After screening, we obtained a coding table of 64 mapping elements that meet the requirements, shown in part in Table 1:

Table 1 (excerpt): GAAG GAAC GACA GACT GATC GATG GAGA GAGT GTAG GTAC GTCA GTCT GTTC GTTG GTGA GTGT

Adding a Perturbation Sequence

Information scrambling is a standard method in communications engineering for securing signal transmission and protecting the copyright of transmitted information against interception and theft. Adding artificial noise is a standard scrambling method in satellite signal transmission [29]. There are two common approaches to signal scrambling: superimposing (XORing) a pseudo-random sequence on the coded signal sequence, or encrypting the digitized coded signal segment by segment with a cryptographic algorithm (e.g., DES). For DNA storage, Refs. [26] and [30] note that introducing a pseudo-random sequence to randomize the input data effectively breaks up multi-bit repetitions (e.g., 011111111110).
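Returning to the coding table itself, the following sketch generates a Table-1-like element set and performs the 24-bit-to-16-base mapping used later in the pipeline. The element ordering, and hence the exact table, is an illustrative assumption; the paper prints only an excerpt.

from itertools import product

BASES = "ATCG"

def element_gc(elem: str) -> float:
    return 100.0 * (elem.count("G") + elem.count("C")) / len(elem)

# Keep 4-mers whose own GC content lies in [25%, 75%] and which are not
# homopolymers; more than 64 candidates survive, so take the first 64.
candidates = ("".join(p) for p in product(BASES, repeat=4))
TABLE = [e for e in candidates
         if 25.0 <= element_gc(e) <= 75.0 and len(set(e)) > 1][:64]
INDEX = {elem: i for i, elem in enumerate(TABLE)}

def encode_24bit(bits: str) -> str:
    """Map a 24-bit string (3 bytes) to 16 bases via four 6-bit indices."""
    assert len(bits) == 24
    return "".join(TABLE[int(bits[i:i + 6], 2)] for i in range(0, 24, 6))

def decode_16nt(dna: str) -> str:
    """Inverse mapping; a 4-mer absent from INDEX signals a read error."""
    assert len(dna) == 16
    return "".join(format(INDEX[dna[i:i + 4]], "06b") for i in range(0, 16, 4))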
Single DNA sequence design and error correction

The complete DNA sequence includes not only information-payload regions and error correction regions but also indexing regions, which allow fast access to the stored information [6]. Following the processing described above for Shakespeare's sonnets, the original text, totalling 97,343 bytes, was compressed, error-corrected, and scrambled, resulting in a 399,408-bit binary code stream and ultimately 1,841 synthesized DNA sequences. To ensure complete retrieval of each DNA sequence, we designed the information-payload region and the RS error correction region to be 129 base pairs plus 15 base pairs, totalling 144 base pairs (about 217 bits). Each DNA sequence carries a 16-base index region to enable random access or direct readout; in our tests, data sets smaller than 0.47 GB can be quickly located and accessed through the index region. In addition, primer regions were added to the DNA strand to facilitate data backup and sequencing library construction. As required by the Illumina sequencing method, 20-base primers were added to each end of the single-stranded DNA sequence for library amplification.

During encoding, we incorporate error-correcting regions to improve data stability against errors caused by base mutations. The dominant error-correcting codes are Hamming codes [31] and RS codes [16,32]. A Hamming code can correct only one error (or detect several), whereas an RS code can correct multiple symbol errors, up to a bound set by its redundancy [30,31]. In choosing the encoding scheme, we wanted to ensure the integrity of the digital information while reducing the total number of DNA bases to minimize synthesis cost. Weighing DNA sequence fidelity against synthesis cost, we chose the RS code as the error correction method of this scheme. The layout of each single-stranded DNA fragment is shown in Fig. 2.

According to a 2016 study [17], substitution errors account for the largest share of errors, with insertions and deletions being less likely. Since substitutions cause most errors, a coding scheme should at minimum handle substitution errors. In experiments, extreme GC content and long homopolymer stretches (e.g., AAAAA) can lead to substitution errors during synthesis and sequencing. Refs. [17] and [26] point out that the substitution error rate increases significantly, especially for homopolymer runs longer than six. In addition, DNA sequences with high GC content become highly variable during PCR amplification. Ref. [33] reported a coefficient of variation of 11.8% across five separate sequences in a population of 20 mimics sequenced with the Ion Torrent PGM 16S rRNA gene assay; this increased variation is due to differences in the GC content of the base sequences, and once it occurs, the numbers of insertion, substitution, and deletion errors in the base sequences all increase. Against mutations triggered by extreme GC content and substitution errors caused by long homopolymers, the 64-element coding table proposed in this paper has inherent advantages for error handling: simple substitution, deletion, and insertion errors can be detected, and substitutions corrected, with the coding table, as shown in Fig. 3.
The three error cases are briefly described below.

Case 1: Substitution error. Read a DNA sequence such as CTCTTCAGAGGATC... and traverse it in groups of 4 bp. If a group such as TTCT or CCAG is not among the mapping elements of the 64-element coding table, a substitution error has occurred, and the correct mapping element can be looked up in the table to correct it.

Case 2: Deletion error. Read the DNA sequence in groups of 4 bp. If the leading groups decode without error but the last group contains, say, only a single C base, and comparison shows that the read sequence is shorter than the designed DNA sequence, a deletion error has occurred.

Case 3: Insertion error. Read the DNA sequence in groups of 4 bp. If a group that should decode to a valid element instead yields a 4-mer not in the table, an error has occurred somewhere before that point; traversal then continues over the remaining bases to the end. If, on comparison, the read DNA sequence contains more bases than the designed one, an insertion error has occurred. With the 64-element coding table alone, however, an insertion error can only be detected and roughly located, not corrected; as with deletion errors, the table cannot determine exactly which bases of the affected group to fix.

[Fig. 3 shows original versus read sequences illustrating the three cases: substitution, deletion, and insertion errors.]

In summary, the static coding table of this scheme has some self-correcting capability for substitution-type errors, but for insertion and deletion errors it can only detect, not correct. To avoid decoding failures caused by insertions and deletions, we introduce an error correction code: each DNA base sequence is protected with the RS(255, 240) code to ensure sequence fidelity [16]. By introducing RS(255, 240) error-correcting codes, we can correct enough errors to improve the reliability of the DNA sequence; the number of correctable symbol errors is bounded by the code's 15 symbols of redundancy. Applying RS(255, 240) to each DNA base sequence corrects a certain number of insertion and deletion errors, enhancing the accuracy and stability of decoding. By combining the static coding table with RS error correction, we can both handle substitution-type errors and correct a certain number of insertion and deletion errors, thus improving the overall decoding capability and data fidelity.
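The Case 1 detection rule can be sketched as a scan over 4-base groups, flagging any group absent from the coding table (TABLE and INDEX as in the earlier sketch; picking the nearest valid element as the correction is our illustrative choice, not a rule stated in the paper):

def detect_substitutions(dna: str, index: dict) -> list:
    """Offsets of 4-base groups not found in the coding table (Case 1).
    A length not divisible by 4 hints at an insertion or deletion
    (Cases 2 and 3), which the table can detect but not correct."""
    usable = len(dna) - len(dna) % 4
    return [i for i in range(0, usable, 4) if dna[i:i + 4] not in index]

def correct_group(group: str, table: list) -> str:
    """Replace an invalid group by the table element at minimal Hamming
    distance (ties broken arbitrarily)."""
    return min(table, key=lambda e: sum(a != b for a, b in zip(e, group)))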
Results and analysis

Previous literature has proposed several computational methods and concepts for evaluating the information density of coding schemes. Among them, base coding density (bit/nt) describes the number of effective bits actually carried per base, independent of molecular copy number [15,18,34]:

base coding density (bit/nt) = number of effective bits / number of nucleotides.  (4)

Other concepts, such as coding potential and realized capacity, involve more elaborate information-theoretic derivations; in DNA storage they have not been widely adopted, because data vary widely across practical applications.

Our study used DNA segments of 129 bases as the information carrier, each corresponding to 194 bits of information, and calculated the storage density on that basis. By compressing the original text with the LZW mechanism, we improved the net coding efficiency, eventually obtaining a net information density of 1.49 bit/nt.

Taking Shakespeare's sonnets as the example, the read test proceeds as follows: a) The sonnets (97,343 bytes of original text) are LZW-compressed, yielding 24,963 bytes of compressed text. b) To guarantee recovery and fidelity of the original message, the 24,963 bytes are converted to a binary stream according to their ASCII codes and split into 240-bit segments; 15 redundant bits are then added to each segment, allowing up to 10 error bits to be corrected. c) After error correction coding, the total length of the binary code stream is 399,408 bits. d) The binary code stream is scrambled with a chaotic sequence; after scrambling, every 3 bytes (24 bits) are encoded into 16 bases according to the 64-element mapping table.

Ultimately, Shakespeare's sonnets were transformed into 266,272 bases. From these numbers, the scheme achieves a theoretical storage density of 399,408 / 266,272 = 1.5 bits/base. Together with the corresponding GC content, these metrics are summarized in Table 2.

Single DNA sequence scrambling

Scrambling during data transmission aims to randomize the original binary code stream. To achieve this, after adding the error correction code we scramble the binary code stream with a specific chaotic sequence and descramble it before decoding. Precisely, we introduce the logistic-map chaotic sequence of Eq. (7):

x_{n+1} = r x_n (1 - x_n).  (7)

For 3.569945627 < r ≤ 4 the map is chaotic. Exploiting its high sensitivity and unpredictability, we XOR the resulting chaotic bit sequence with the error-correction-coded binary code stream. This avoids long runs of identical bits in the code stream, and therefore long runs of identical bases in the subsequently synthesized DNA sequences. The scrambling step can also serve as a layer of encryption.
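A sketch of the logistic-map scrambler of Eq. (7). Thresholding X(n) at 0.5 to extract key bits, then XORing with the code stream, is one common construction; the paper does not spell out its bit-extraction rule, and we seed with x0 = 0.4 since the logistic map requires x0 ∈ (0, 1):

def logistic_keystream(n_bits: int, r: float = 3.8, x0: float = 0.4) -> list:
    """Iterate x_{n+1} = r x_n (1 - x_n) and threshold at 0.5 for key bits."""
    bits, x = [], x0
    for _ in range(n_bits):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def scramble(stream: list) -> list:
    """XOR with the chaotic keystream; applying the same function twice
    descrambles, since (b ^ k) ^ k == b."""
    key = logistic_keystream(len(stream))
    return [b ^ k for b, k in zip(stream, key)]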
Figure 4 illustrates the percentage of {0, 1} elements before and after scrambling and the degree of chaos of the sequence for different parameter values. According to Fig. 4(a), the distribution density of X(n) values produced by the logistic mapping for 3.7 < r ≤ 4 is much higher than for 2.5 < r ≤ 3.7. We therefore chose the chaotic sequence generated by r = 3.8 with initial parameter X0 = 0.4 for the XOR processing with the binary code stream. After scrambling, the numbers of 0s and 1s in the binary sequence are distributed roughly uniformly, around 50% each.

Code snippet performance analysis

4.2.1 GC content analysis

High GC levels make it hard to break hydrogen bonds, so DNA denaturation becomes difficult. Raising the temperature to reach the melting point and break the hydrogen bonds increases the activity of the DNA bases, which raises the probability of base mutation and therefore the error rate during DNA sequencing. These factors must be weighed when controlling the GC content.

We tested our scheme and several other transcoding methods on five documents and compared their GC content, using a sequencing depth of 10× for each text and computing the average GC content percentage for each transcoding scheme. The results are shown in Fig. 5(a)-(e).

The DNA Fountain scheme [9] keeps the average GC content between 40% and 60%. Church's method [15] keeps it between 25% and 80%. Grass's scheme [16] decreases the average GC content slightly relative to Church's strategy, controlling it between 25% and 78%, but still does not fully comply with the requirements of the sequencing method. Goldman's scheme [35] controls the average GC content between 15% and 60%, which still lags the DNA Fountain and YYC codes.

Considering the GC content requirements of third-generation sequencing systems and the constraints of upstream and downstream biological experiments, our scheme keeps the average GC content between 49% and 51%. This adapts well to the requirements of third-generation sequencing in biological experiments and tolerates a small number of substitution, deletion, and insertion errors of DNA bases during sequencing.

Homopolymer analysis

Even the most accurate third-generation sequencing technologies cannot read DNA homopolymer runs entirely correctly, and they find it especially difficult to discriminate between many consecutively repeated bases. With the methods of this paper, we tested the stored document Photo 1 (Cameraman 256) at 10× sequencing depth while minimizing the homopolymer length, and we compared several transcoding schemes by their homopolymer length distributions (length denoted n). The results are shown in Fig. 5(f).
For the DNA Fountain [9] and YYC [23] schemes, most homopolymer lengths cluster at n = 4, which is long compared with other transcoding methods. This is because they are dynamic codes, which cannot control the generation of the final base - a disadvantage of dynamic coding. For the Blawat [17] and Bornholt [36] methods, most homopolymer lengths are concentrated at n = 1-2. In Grass's scheme [16], most homopolymer lengths are n = 2-3 and the longest is n = 4. By comparison, our method successfully keeps homopolymer lengths in the range n = 1-2, satisfying the homopolymer constraints of the sequencing process.

Read Process Performance Analysis

In the Shakespeare-sonnet simulation experiments, we also constructed coding tables by randomly selecting 64 mapping elements from the set of 256 codes, and compared the base substitution error rates between (G, C) and between (T, G) for the 64-element coding table against the random coding table, as shown in Fig. 6(a), (b). The results show a 1.08% decrease in the substitution error rate between (G, C) and an 18.01% decrease in the substitution error rate between (T, G) when the 64-element coding table is used. The base substitution error rates between (T, C) and between (A, C) increased slightly, but by no more than 10%; details are shown in Fig. 6(c). Overall, the 64-element coding scheme proposed in this paper has a lower base substitution error probability than a random coding table.

We used the Flye [37] platform to perform de novo assembly of the DNA sequences generated from the complete Shakespearean sonnets. These sequences comprise 266,272 bases generated according to the 64-element coding table (note that we only analyzed errors in the information regions and did not apply error correction coding to regions such as the index). We split these DNA sequences into information payloads and RS error-correcting fragments (129 nt + 15 nt), yielding 1,850 DNA strands, which were then recovered. We added pseudo-random errors to the generated DNA sequences (equal numbers of insertions and deletions). We found that the case of losing 200 bases occurred less than 1% of the time, and the frequency of errors from 20 inserted bases was close to zero. Thus, the coding method based on the 64-element mapping table ensures highly robust recovery.

Monolithic Segment Read Performance Analysis

When recovering stored data, retrieving the information carried by the DNA sequences in the information payload region is crucial. As a maximum distance separable code, the Reed-Solomon block code is very effective at protecting contiguous nucleotides [17]. In this paper, we use RS coding for error correction with a block size of 240 payload bits plus 15 bits of added redundancy, i.e., RS(255, 240, 15).
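As a concrete illustration of RS(255, 240) encoding and decoding, a sketch using the Python reedsolo package (our choice of implementation; the paper does not name one):

from reedsolo import RSCodec, ReedSolomonError

rsc = RSCodec(15)                       # 15 parity symbols: RS(255, 240)

payload = b"Shall I compare thee to a summer's day?"
codeword = rsc.encode(payload)          # payload followed by 15 parity bytes

corrupted = bytearray(codeword)
corrupted[0] ^= 0xFF                    # flip two symbols; up to
corrupted[10] ^= 0xFF                   # floor(15/2) = 7 are correctable

try:
    decoded = rsc.decode(bytes(corrupted))[0]  # recent versions return a tuple
    assert decoded == payload
except ReedSolomonError:
    print("too many errors to correct")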
According to information theory, errors can be categorized into random errors and erasure errors; this paper uses forward error correction to handle both. The position of a random error is unknown, while the position of an erasure error is known. Denoting erasure errors by E_e and random errors by E_r, RS(255, 240, 15) must satisfy the condition:

n - k ≥ E_e + 2 × E_r.  (8)

When a base is inserted into or deleted from a DNA sequence, an erasure error has occurred. Since the presence of erasure errors is known, the error correction code must be designed mainly with substitution errors (random errors) in mind. Note that, because we did not add outer-code error correction to the primer and index regions, we analyzed only the information bits of the payload region for single-segment reads.

Based on the data in Table 3, we performed reading experiments on three files: Text 1 (Shakespeare's sonnets), Text 2 (the DNA Fountain abstract), and Photo 1 (Cameraman 256). For Text 1 we synthesized 266,272 nucleotides, from which we selected a fraction of the bases (129 nt × 10 entries, 1,290 bases in total) for testing. Substitution errors of 80 bases were randomly added to each DNA sequence, and automatic recovery experiments were then performed. With RS(255, 240) error correction codes added to the message payload portion, the data were fully recovered to the original text. For Text 2 and Photo 1 we carried out the same process as for Text 1; the results are shown in Table 3.

According to the results in Table 4, among all the generated DNA sequences we randomly selected 1,290 bases (129 nt × 10 entries) for insertion and deletion error simulation experiments. We added three random deletion errors to DNA sequence #8 of Text 1; only two of them were corrected in the final decoding, giving a decoding rate of 99.84% - a partial decoding failure. In Text 2 (the DNA Fountain abstract), the number of errors added in the simulation was low, and the data were fully recovered. For Photo 1 (Cameraman 256) we performed the same process as for Text 1; the results are shown in Table 4.

Informative DNA primer design

Following the preceding coding rules, Shakespeare's sonnets yield 266,272 bases (1,850 DNA sequences). Short fragments of 129 bases were first synthesized, and primers were then designed for these fragments with the Oligo 7 program to facilitate subsequent PCR. Considering the DNA sequence design requirements of the previous section and the melting-temperature requirements of actual PCR operation, we designed primer sequences of 20 ± 1 bases with Tm = 53.9 °C, finally obtaining the forward and reverse primers shown in part in Table 5. We computed GC content and melting temperature (Tm) statistics for these primer sequences; the results are shown in Fig. 6(f) and Fig. 7. According to Fig. 6(f), all designed primer sequences fall within the reasonable melting-temperature range (45 °C-65 °C), and Fig. 7 shows that no primer sequence exceeds the GC content limits (the highest GC content was 56% and the lowest 44%).
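The primer checks can be scripted as well. The Wallace rule below is a standard rough Tm estimate for short oligos, not the nearest-neighbor model a tool like Oligo 7 uses, and the 20-mer is a hypothetical example:

def primer_gc(primer: str) -> float:
    return 100.0 * (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer: str) -> float:
    """Wallace rule: Tm ~ 2(A+T) + 4(G+C) degrees C, for short oligos."""
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2.0 * at + 4.0 * gc

primer = "ATGCGTACCTGAAGTCGATC"   # hypothetical 20-mer
print(primer_gc(primer), wallace_tm(primer))   # 50.0 60.0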
Fig. 7 Distribution of GC content in primers

Genomic integration of informational DNA

The informative DNA was recombined with E. coli plasmid vectors; the recombinant vectors contain resistance genes, integrase genes, and an origin of replication alongside the informative DNA sequence [38]. E. coli was then chemically transformed. To ensure fidelity when sequencing the informative DNA sequences, we adopted a double-stranded design: the informative DNA sequence is sequenced in the forward direction, while reverse sequencing, after further transcription, must yield sequences consistent with the informative DNA. The forward message DNA sequence was digested with the EcoRI enzyme and the reverse message DNA sequence with the XbaI enzyme; the message DNA sequence was then integrated into E. coli using T7 DNA ligase, so that the message DNA replicates along with the E. coli plasmid vector [39], as shown in Figure 8.

Simulation of multiple sequence alignment

This study used excerpts from Shakespeare's sonnets to simulate PCR amplification of DNA sequences. First, following the procedure above, the original text was transcoded into DNA sequences and every 129 bases were split into short fragments. Primers for the short fragments were designed with the Oligo 7 primer design software, and the fragments were finally recombined into new DNA sequences. We used SnapGene software to simulate PCR of the recombinant DNA sequences, taking one short fragment through five PCR amplifications with the melting temperature set to Tm = 60 °C. Finally, the PCR-amplified DNA sequences were subjected to multiple sequence alignment to recover the correct original sequences; the results are shown in Figure 9. As Fig. 9 shows, the encoding and error correction scheme designed here can completely recover the original information from the DNA sequences generated by polymerase chain reaction (PCR), decoding them after multiple sequence alignment.

Conclusion and Outlook

DNA is a promising long-term storage medium for data that must be preserved for a long time. DNA molecules can remain stable in dry, cold, and dark environments for at least thousands of years, unlike standard large-scale physical media, which require expensive data migration to adapt to environmental changes. Digital information storage in DNA molecules could theoretically reach densities of 11 EB/mm³, and information stored in DNA can be read at any time without risking obsolescence of the extraction method.
This paper successfully simulates and analyzes texts such as Shakespeare's sonnets with respect to GC content, homopolymers, and error correction performance. The proposed 64-element mapping table satisfies the constraints of subsequent biological experiments: it prevents the growth of errors such as base substitutions, deletions, and insertions caused by extreme GC content, and it also avoids misreads during sequencing caused by long homopolymers. The scheme analyzes error correction and single-segment read performance and provides recovery-rate statistics for reads of text and image data. A shortcoming to be pointed out is that we added the error correction code only to the information payload of the DNA sequence, without outer-code protection for the primer region; this is what we need to improve next. In terms of coding efficiency, the storage density of this scheme is 1.49 bit/nt, which still leaves a gap to the maximum theoretical density of 2 bit/nt per nucleotide. In follow-up work, we will improve the 64-element coding table and introduce the concept of concatenated bases to further enhance the coding density.

Figure and table captions:
Fig. 2 Sequence design of the oligonucleotides designed in this paper.
Fig. 3 Self-correction of substitution, deletion, and insertion errors.
Fig. 4 Bifurcation map of the logistic mapping (a) and comparison of sequence {0, 1} content (b).
Fig. 5 Percentage of GC content (a: Text 1; b: Text 2; c: Text 3; d: Photo 1; e: Character 1) and homopolymer distribution by method (f).
Fig. 6 Error analyses - base substitution rates for the 64-element coding table (a), base substitution rates for the randomized coding table (b), rate of change of base substitutions (c), distribution of deleted lengths (d), distribution of inserted lengths (e) - and forward and reverse primer melting-temperature plots (f).
Fig. 8 Assembly of informative DNA with the E. coli plasmid vector.
Fig. 9 Alignment and decoding of 5 DNA sequences.
Table 3 Overview of the number of substitution-error nucleotides added.
Table 4 Overview of the number of insertion and deletion error nucleotides added.
Table 5 Primer design for forward and reverse DNA sequences.
8,087.8
2023-09-29T00:00:00.000
[ "Computer Science" ]
Auxiliary field Monte-Carlo simulation of strong coupling lattice QCD for the QCD phase diagram

We study the QCD phase diagram in the strong coupling limit with fluctuation effects by using the auxiliary field Monte-Carlo method. We apply the chiral angle fixing technique in order to obtain a finite chiral condensate in the chiral limit in finite volume. The behavior of the order parameters suggests that the chiral phase transition is second order or crossover at low chemical potential and first order at high chemical potential. Compared with the mean field results, the hadronic phase is suppressed at low chemical potential and extended at high chemical potential, as already suggested by the monomer-dimer-polymer simulations. We find that the sign problem originating from the bosonization procedure is weakened by a phase cancellation mechanism: the complex phase from one site tends to be canceled by the phase of the nearest neighbor site as long as low momentum auxiliary field contributions dominate.

I. INTRODUCTION

The Quantum Chromodynamics (QCD) phase diagram has attracted much attention in recent years. At high temperature (T), there is a transition from quark-gluon plasma (QGP) to hadronic matter via a crossover, which was realized in the early universe and is now extensively studied in high-energy heavy-ion collision experiments at RHIC and LHC. At high quark chemical potential (µ), we also expect a transition from baryonic to quark matter, which may be realized in cold dense matter such as the neutron star core. Provided that the high density transition is first order, the QCD critical point (CP) should exist as the end point of the first order phase boundary. Large fluctuations of the order parameters around CP may be observed in the beam energy scan program at RHIC.

The Monte-Carlo simulation of lattice QCD (MC-LQCD) is one of the first-principles non-perturbative methods to investigate the phase transition. We can obtain various properties of QCD: hadron masses and interactions, color confinement, chiral and deconfinement transitions, the equation of state, and so on. We can apply MC-LQCD to the low µ region, but not to the high µ region, because of the notorious sign problem. The fermion determinant becomes complex at finite µ, and the statistical weight is then reduced by the average phase factor ⟨e^{iθ}⟩, where θ is the complex phase of the fermion determinant. There are many attempts to avoid the sign problem, such as the reweighting method [1], the Taylor expansion method [2], the analytic continuation from imaginary chemical potential [3], the canonical ensemble method [4], the fugacity expansion [5], the histogram method [6], and the complex Langevin method [7]. Many of these methods are useful for µ/T < 1, while it is difficult to perform the Monte-Carlo simulation in the larger µ region.

Recent studies suggest that CP may not be reachable in phase quenched simulations [8]. In the phase quenched simulation for N_f = 2, the sampling weight at finite µ is given as |det D(µ)|² = det D(µ) (det D(µ))* = det D(µ) det D(-µ*), where D represents the fermion matrix for a single flavor. The phase quenched fermion determinant for real quark chemical potential µ_d = µ_u = µ ∈ R is the same as that at finite isospin and vanishing quark chemical potentials, µ_d = -µ_u = µ.
Thus the phase quenched phase diagram in the temperature-quark chemical potential (T, µ) plane would be the same as that in the temperature-isospin chemical potential (T, δµ) plane, as long as we can ignore the mixing of u and d condensates. We do not see any critical behavior in the finite δµ simulations outside of the pion condensed phase [9]. By comparison, the pion condensed phase appears at large δµ, where the above correspondence does not apply. We may have CP inside the pion condensed phase. Gauge configurations in the pion condensed phase, however, would be very different from those of the compressed baryonic matter which we aim to investigate. Therefore, we need methods other than the phase quenched simulation in order to directly sample appropriate configurations in cold dense matter for the discussion of CP and the first order transition.

Strong coupling lattice QCD (SC-LQCD) is one of the methods to study the finite µ region, based on the strong coupling expansion (1/g² expansion) of lattice QCD. There are several merits to investigating the QCD phase diagram using SC-LQCD, even though the strong coupling limit (SCL) is the opposite of the continuum limit. First, the effective action is given in terms of color singlet components, so we expect suppressed complex phases of the fermion determinant and a milder sign problem. We obtain the effective action by integrating out the spatial link variables before the fermion field integral. This point is different from the standard treatment of MC-LQCD, in which we integrate out the fermion field before the link integral. Second, we can obtain insight into the QCD phase diagram from the mean-field studies at strong coupling. The chiral transition has been studied systematically and analytically in the strong coupling expansion (1/g² expansion) under the mean-field approximation: the strong coupling limit (leading order, O(1/g⁰)) [10-17], the next-to-leading order (NLO, O(1/g²)) [12-17], and the next-to-next-to-leading order (NNLO, O(1/g⁴)) [15,17].

It is necessary to go beyond the mean-field treatment and to include the fluctuation effects of the order parameters for quantitative studies of finite density QCD. The monomer-dimer-polymer (MDP) simulation is one of the methods beyond the mean-field approximation: one obtains the effective action of quarks after the link integral and evaluates the fermion integral by summing up monomer-dimer-polymer configurations [18]. The phase diagram shape is modified to some extent compared with the mean-field results on an isotropic lattice: the chiral transition temperature is reduced by 10-20% at µ = 0, and the hadronic phase expands in the higher µ direction by 20-30% [19]. Until now, MDP simulations can be performed only in the strong coupling limit, 1/g² = 0, and the finite coupling corrections are evaluated with the reweighting method [20]. Since both finite coupling and fluctuation effects are important for discussing the QCD phase diagram, we need to develop a theoretical framework which includes both of these effects.

In this work, we study the QCD phase diagram by using the auxiliary field Monte-Carlo (AFMC) method as a tool to take account of the fluctuation effects of the auxiliary fields. AFMC is widely used in nuclear many-body problems [21,22] and in condensed matter physics, for example for ultracold atom systems [22,23].
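The auxiliary field step used below rests on Gaussian (Hubbard-Stratonovich) identities. Written out for two generic composites A and B (a sketch, with normalization constants absorbed into the measure), the two-field bosonization reads:

$$
e^{\alpha AB} \;=\; e^{\frac{\alpha}{4}\left[(A+B)^2-(A-B)^2\right]}
\;\propto\; \int d\sigma\, d\pi\;
\exp\!\left[-\alpha\left(\sigma^2+\pi^2\right)
+ \alpha\,\sigma\,(A+B) + i\,\alpha\,\pi\,(A-B)\right].
$$

Completing the square in σ reproduces the factor e^{α(A+B)²/4}, while the Gaussian π integral yields e^{-α(A-B)²/4}; the imaginary coupling iαπ(A-B), needed for the repulsive channel, is the origin of the complex phases discussed below.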
In AFMC, we introduce auxiliary fields to decompose the fermion interaction terms and carry out the Monte-Carlo integral over the auxiliary fields, which are assumed to be static and constant in the mean-field approximation. We can thus include the fluctuation effects of the auxiliary fields in AFMC beyond the mean-field approximation.

Another important aspect of this paper is how to fix the chiral angle, the angle between the scalar and pseudoscalar modes. In finite volume, the symmetry of the theory is not broken spontaneously and an order parameter, in principle, vanishes. In spin systems, a root mean square order parameter is used to obtain an appropriate order parameter [24]. We here use a similar method, chiral angle fixing (CAF). In CAF, we rotate all fields by the chiral angle and obtain quantities from the rotated fields.

This paper is organized as follows. In Sec. II, we explain the formulation of AFMC in SC-LQCD. In Sec. III, we show the numerical results on the order parameters, the phase diagram, and the average phase factor. In Sec. IV, we numerically confirm a source of the sign problem in AFMC and discuss the order of the phase transition based on the volume dependence of the chiral susceptibility. Section V is devoted to a summary and discussion.

A. Lattice action

We here consider lattice QCD with one species of unrooted staggered fermion for color SU(N_c) in anisotropic Euclidean spacetime. Throughout this paper, we work in the lattice unit a = 1, where a is the spatial lattice spacing, and in the case of color SU(N_c = 3) in 3+1 dimensional (d = 3) spacetime. Temporal and spatial lattice sizes are denoted as N_τ and L, respectively. The partition function and action are given, up to conventions for the anisotropy factors, as

Z = ∫ D[χ, χ̄, U] e^{-S_F - S_G},
S_F = (γ/2) Σ_x (V^+_x - V^-_x) + (1/2) Σ_{x,j} η_{j,x} (χ̄_x U_{j,x} χ_{x+ĵ} - χ̄_{x+ĵ} U†_{j,x} χ_x) + m_0 Σ_x M_x,

where χ_x and U_ν,x represent the quark field and the link variable, U_{Pτ} and U_{Ps} are the temporal and spatial plaquettes appearing in the plaquette action S_G, η_{j,x} = (-1)^{x_0+···+x_{j-1}} is the staggered sign factor, M_x = χ̄_x χ_x is the mesonic composite, and V^±_x are the temporal hopping composites, V^+_x = χ̄_x e^{µ} U_{0,x} χ_{x+0̂} and V^-_x = χ̄_{x+0̂} e^{-µ} U†_{0,x} χ_x. The chemical potential µ is introduced in the form of the temporal component of a vector potential. The physical lattice spacing ratio is introduced as f(γ) = a_s^{phys}/a_τ^{phys}. The lattice anisotropy parameters γ and ξ enter as modification factors of the temporal hopping term of quarks and of the temporal and spatial plaquette action terms, respectively. The temporal and spatial plaquette couplings should satisfy the hypercubic symmetry condition in the isotropic limit (ξ → 1), g_τ(g_0, 1) = g_s(g_0, 1) = g_0. In the continuum limit (a → 0 and g_0 → 0), the two anisotropy parameters should correspond to the physical lattice spacing ratio, f(γ) = γ = ξ, when the lattice QCD action is constructed requiring a_s^{phys}/a_τ^{phys} = γ in the continuum region; temperature can then be defined as T = f(γ)/N_τ = γ/N_τ. By comparison, it seems more appropriate to define temperature as T = γ²/N_τ because of quantum corrections in the strong coupling limit (SCL) [14]. For example, the critical temperature is predicted to be proportional to γ² rather than γ in the mean field treatment in SCL [14]. We follow this argument and adopt f(γ) = γ². In SCL, we can ignore the plaquette action terms S_G, which are proportional to 1/g². The above lattice QCD action has a chiral symmetry in the chiral limit m_0 → 0.

B. Effective action

In the present formulation, there are four main steps to obtain physical observables. First, we integrate the lattice partition function over the spatial link variables in the strong-coupling limit.
Second, we introduce auxiliary fields for the mesonic composites and convert the four-Fermi interaction terms to fermion bilinear form. Third, we perform the integral over the fermion fields and temporal link variables analytically and obtain the effective action of the auxiliary fields. Finally, we carry out the Monte-Carlo integral over the auxiliary fields.

In the second step, we transform the four-Fermi interactions (the second terms in Eq. (9)) into fermion-bilinear form. Using the spatial Fourier transform M_{x=(x,τ)} = Σ_k e^{ik·x} M_{k,τ}, the interaction terms can be written in momentum space (Eq. (10)), where f(k) = Σ_j cos k_j and k̄ = k + (π, π, π). For later use, we divide the momentum region into the positive (f(k) > 0) and negative (f(k) < 0) modes. In the last line of Eq. (10), we use the relation f(k̄) = -f(k).

We introduce the auxiliary fields via the extended Hubbard-Stratonovich (EHS) transformation [16]. Any kind of composite product can be bosonized by introducing two auxiliary fields simultaneously (Eq. (11)), where ψ = ϕ + iφ and dψ dψ* = dReψ dImψ = dϕ dφ. When the two composites are the same, A = B, Eq. (11) corresponds to the bosonization of attractive interaction terms. For the bosonization of interaction terms which lead to a repulsive potential in the mean-field approximation, we need to introduce complex coefficients (Eq. (12)). The bosonization of the interaction terms in Eq. (10) is carried out mode by mode in (k, τ) (Eq. (13)), with α = L³/4N_c. We introduce σ_{k,τ} and π_{k,τ} as the auxiliary fields of M_{k,τ} and iM_{-k,τ}, respectively. σ_{k,τ} (π_{k,τ}) contains the scalar (pseudoscalar) mode and parts of the higher spin modes. By construction, σ_{k,τ} and π_{k,τ} satisfy σ_{-k,τ} = σ*_{k,τ} and π_{-k,τ} = π*_{k,τ}, which means that σ_x, π_x ∈ R.

In the third step, we carry out the Grassmann and temporal link (U_0) integrals analytically [12-14]. The partition function then takes the form Z = ∫ D[σ, π] e^{-S^AF_eff}, where the effective action of the auxiliary fields, Eq. (19), contains, besides the Gaussian terms, the contribution -Σ_x log X_{N_τ}(m_x); X_{N_τ} is a known function of m_x and can be obtained with a recursion formula [12-14], as summarized in Appendix B. When m_{x=(x,τ)} is independent of τ (static), we obtain X_{N_τ} = 2 cosh(N_τ arcsinh(m_x/γ)).

In the last step, we carry out the AFMC integral [25,26]: we numerically integrate over the auxiliary fields (σ_{k,τ}, π_{k,τ}) with the Monte-Carlo method, based on the auxiliary field effective action Eq. (19), so that the auxiliary field fluctuation effects are taken into account. In performing this integration, we face a sign problem in AFMC [25,26]. The effective action S^AF_eff in Eq. (19) contains the complex terms X_{N_τ} through the spatial diagonal parts of the fermion matrix, I_x = m_x/γ. The auxiliary fields are real in the spacetime representation, σ_x, π_x ∈ R, but the negative momentum modes appear with imaginary coefficients, as iε_x π_x, which come from the EHS transformation. The imaginary part of the effective action gives rise to a complex phase in the statistical weight exp(-S^AF_eff) and leads to statistical weight cancellation. It should be noted that the weight cancellation is weakened in part by a phase cancellation mechanism among the low momentum auxiliary field modes: in AFMC, the fermion determinant is decomposed into a contribution from each spatial site, and since the negative modes π_{k,τ} involve iε_x, the phase on one site generated by low momentum π_{k,τ} modes tends to be canceled by the phase on the nearest neighbor site. We can thus expect that the statistical weight cancellation is not severe when the low momentum modes dominate.
By comparison, strong weight cancellation might arise from high momentum modes. We discuss the contributions from high momentum modes in Sec. IV B. While we have the sign problem in AFMC, we anticipate that we can study the QCD phase diagram, since the long wavelength modes are more relevant to phase transition phenomena. We show the results on the QCD phase transition phenomena based on AFMC in the next section, Sec. III.

III. QCD PHASE DIAGRAM IN AFMC

We show numerical results in the chiral limit (m_0 = 0) on 4³ × 4, 6³ × 4, 6³ × 6 and 8³ × 8 lattices. We have generated the auxiliary field configurations at several temperatures on fixed fugacity (fixed µ/T) lines. We here assume that the temperature is given as T = γ²/N_τ [14]. Statistical errors are evaluated by the jack-knife method; we take the error to be the saturated value after the autocorrelation disappears, as shown later in Fig. 2.

A. Chiral Angle Fixing

How to describe spontaneous symmetry breaking in Monte-Carlo calculations on a finite size lattice is a non-trivial problem: the expectation value of the order parameter generally vanishes, since the distribution is symmetric under the transformation. Rigorously, we need to take the thermodynamic limit with an explicit symmetry breaking term, and then take the limit of the vanishing explicit breaking term, as schematically shown in Fig. 1 for the case of chiral symmetry. This procedure is time consuming and is not easy to carry out when we have the sign problem. We here propose the chiral angle fixing (CAF) method as a prescription to calculate the chiral condensate on a finite size lattice. The effective action Eq. (9) is invariant under the chiral transformation. The chiral symmetry is kept in the bosonized effective action by introducing the chiral U(1) transformation for the auxiliary fields, where (σ_k, π_k) are the temporal Fourier transforms of (σ_{k,τ}, π_{k,τ}).

[Fig. 1: In order to obtain the chiral condensate rigorously, we need to put in a finite mass, first take the thermodynamic limit, and finally take the chiral (massless) limit, as shown in the upper panels. In CAF, we take a chiral rotation to make the π_0 field vanish and get a finite chiral condensate (center bottom panel), which would be close to the correct value.]

Because of the chiral symmetry, the chiral condensate σ_0 vanishes as long as the auxiliary field configurations are taken to be chiral symmetric, as explicitly shown in Appendix A. In order to avoid the vanishing chiral condensate, we here utilize CAF. We rotate the σ_0 and π_0 modes toward the positive σ_0 direction, as schematically shown in Fig. 1. All the other fields are rotated by the same angle, −α = − arctan(π_0/σ_0), in each Monte-Carlo configuration. We use these new fields to obtain order parameters, susceptibilities, and other quantities, and eventually obtain a finite chiral condensate. The chiral condensate obtained in CAF should mimic the spontaneously broken chiral condensate in the thermodynamic limit. Similar prescriptions are adopted in other fields of physics. For example, a root mean square order parameter is taken to obtain the appropriate value in spin systems [24].

B. Sampling and Errors

We generate auxiliary field configurations by using the Metropolis sampling method. We generate Markov chains starting from two types of initial conditions: the Wigner phase (σ_x = 0.01, π_x = 0) and the Nambu-Goldstone (NG) phase (σ_x = 2, π_x = 0) initial conditions.
For each τ, we generate a candidate auxiliary field configuration (σ′_{k,τ}, π′_{k,τ}) by adding random numbers to the current configuration (σ_{k,τ}, π_{k,τ}) for all spatial momenta k at a time, and judge whether the new configuration is accepted or not. Since it is time consuming to update each auxiliary field mode separately, we update all spatial momentum modes in one step at the cost of a lower acceptance probability. It should be noted that the acceptance probability is larger in the present (σ_{k,τ}, π_{k,τ}) sampling procedure at each τ than when updating (σ_k, π_k) in the whole momentum space at a time.

We evaluate the errors of calculated quantities by the jack-knife method. The evaluated errors of the chiral condensate φ are shown as a function of bin size in the right middle panel of Fig. 2. Since the Metropolis samples are generated sequentially in the Markov chain, subsequent events are correlated. This autocorrelation disappears when the Metropolis time difference is large enough. In the jack-knife method, we group the data into bins and regard the set of configurations except for those in a specified bin as a jack-knife sample. We find that the autocorrelation disappears for bin sizes larger than 30 in this case. The jack-knife error increases with increasing bin size, and eventually saturates. We adopt the saturated value of the jack-knife error after the autocorrelation disappears as the error of the calculated quantity, as in the standard jack-knife treatment. The errors are found to be small enough, for example ∆φ ≲ 0.01, compared with the mean value shown in Fig. 3, and small enough to discuss the phase transition.

C. Order Parameters

In Fig. 3, we show the chiral condensate, φ = σ_0, and the quark number density ρ_q after CAF, as functions of temperature (T) on an 8³ × 8 lattice. The necessary formulae to obtain these quantities are summarized in Appendix B. We also show the distribution of φ in Fig. 4. The order parameters, φ and ρ_q, clearly show phase transition behavior. With increasing T at fixed µ/T, the chiral condensate φ slowly decreases at low T, shows a rapid or discontinuous decrease around the transition temperature, and stays small at higher T. The quark number density ρ_q also shows the existence of a phase transition at finite µ. The order of the phase transition can be deduced from the behavior of φ, ρ_q and the φ distribution on a small lattice [25,26]. The chiral condensate φ and the quark number density ρ_q change smoothly around the (pseudo-)critical temperature (T_c) at small µ/T. Additionally, the φ distribution has a single peak, as shown in the top panel of Fig. 4. These observations suggest that the phase transition is a crossover or second order at small µ/T on a large size lattice. We refer to this µ/T region as the would-be second order region. By comparison, the order parameters show hysteresis behavior in the large µ/T region. As shown by the dashed lines in Fig. 3, two distinct results for φ and ρ_q are obtained depending on the initial condition, the Wigner phase or the NG phase initial condition. The temperature of the sudden φ change for the NG initial condition is larger than that for the Wigner initial condition. The distribution of φ shows a double peak, as shown in the bottom panel of Fig. 4. In terms of the effective potential, the dependence on initial conditions indicates that there exist two local minima, which are separated by a barrier.
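To make the error analysis of Sec. III B concrete, the following is a minimal sketch (Python; array names are hypothetical) of the binned jack-knife estimate described above, where the bin size is increased until the quoted error saturates:

import numpy as np

def binned_jackknife_error(samples, bin_size):
    """Jack-knife error of the mean for autocorrelated Monte-Carlo data.

    The samples are grouped into bins; each jack-knife sample is the mean
    of all data except one bin. Increasing bin_size until the error
    saturates removes the autocorrelation bias, as described in the text.
    """
    n_bins = len(samples) // bin_size
    data = np.asarray(samples[: n_bins * bin_size], dtype=float)
    bins = data.reshape(n_bins, bin_size).mean(axis=1)
    total = bins.sum()
    # Mean of all data except bin i, for each i.
    jk_means = (total - bins) / (n_bins - 1)
    center = jk_means.mean()
    var = (n_bins - 1) / n_bins * np.sum((jk_means - center) ** 2)
    return np.sqrt(var)

# Usage sketch: 'phi_series' stands in for the Metropolis time series of
# the chiral condensate; scan bin sizes and take the saturated error.
phi_series = np.random.normal(0.5, 0.01, size=3000)  # placeholder data
errors = {b: binned_jackknife_error(phi_series, b) for b in (1, 10, 30, 100)}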
In the hysteresis region, the transition between the two local minima is suppressed by the barrier, and Metropolis samples stay around the local minimum close to the initial condition. At the temperature of the sudden φ change, the barrier height becomes small enough for the Metropolis samples to overcome the barrier. These results suggest that the phase transition is first order at large µ/T. We refer to this µ/T region as the would-be first order region.

D. Phase Diagram

We shall now discuss the QCD phase diagram in AFMC. In Fig. 6, we show the QCD phase diagram for various lattice sizes. We define the (pseudo-)critical temperature T_c as the peak position of the chiral susceptibility χ_σ shown in Fig. 5 in the would-be second order region. We determine the peak position by fitting the susceptibility with a quadratic function. The errors comprise both statistical and systematic contributions. We fit χ_σ as a function of T with statistical errors obtained by the jack-knife method. In order to evaluate the systematic error, we vary the fitting range as long as the fitted quadratic function describes an appropriate peak position. Note that we do not fit χ_σ as a function of T within each jack-knife sample. In the would-be first order region of µ/T, we determine the phase boundary by comparing the expectation values of the effective action S_eff in the configurations sampled from the Wigner and NG phase initial conditions. We define T_c as the temperature where S_eff with the Wigner initial condition becomes lower than that with the NG initial condition, as shown in Fig. 5. We have adopted this prescription since it is not easy to obtain equilibrium configurations covering the two phases when the thermodynamic potential barrier is high. At large µ/T, Metropolis samples in one sequence stay in the local minimum around the initial condition, and a very large number of sampling steps would be needed to overcome the barrier.

In Fig. 6, we compare the AFMC phase boundary with that in the mean-field approximation [11,16,17] and in the monomer-dimer-polymer (MDP) simulation [11,19] in the strong coupling limit. Compared with the MF results, T_c at low µ is found to be smaller, and the NG phase is found to be extended in the finite µ region in both MDP [19] and AFMC. As found in previous works [25,26], the phase boundary is approximately independent of the lattice size in the would-be second order region. The would-be first order phase boundary is insensitive to the spatial lattice size but is found to depend on the temporal lattice size. With increasing temporal lattice size, the transition chemical potential µ_c becomes larger, which is consistent with MDP [19]. The phase boundary extrapolated to N_τ → ∞ is shown by the shaded area, and is found to be consistent with the continuous time MDP results in the same limit, N_τ → ∞ with γ²/N_τ kept finite. The spatial lattice size independence of the phase boundary may be understood as a consequence of almost decoupled pions. The zero momentum pion can be absorbed into the chiral condensate via the chiral rotation and has no effect on the transition. Finite momentum pion modes have finite excitation energies, so we have no soft modes in the would-be first order region on a small size lattice. For a more serious estimate of the size dependence, we need larger lattice calculations. We find that the would-be first order phase boundary has a positive slope, dµ/dT > 0, at low T.
The Clausius-Clapeyron relation reads dµ/dT|_{1st} = −(s^W − s^{NG})/(ρ_q^W − ρ_q^{NG}), where s^{W,NG} and ρ_q^{W,NG} are the entropy density and quark number density in the Wigner and NG phases, respectively. Since ρ_q is higher in the Wigner phase, as shown in Fig. 3, the entropy density should be smaller in the Wigner phase. This is because ρ_q is close to the saturated value, ρ_q ≃ N_c = 3, in the Wigner phase, so the entropy is carried by holes relative to the fully saturated state. Similar behavior is found in the mean-field treatment in the strong coupling limit [11]. In order to avoid the quark number density saturation, which is a lattice artifact, we may need to adopt a larger N_τ [19] or to take account of finite coupling effects [16,17].

E. Average Phase Factor

In Fig. 7, we show the average phase factor ⟨e^{iθ}⟩ as a function of T on 8³ × 8 and 4³ × 4 lattices, where θ is the complex phase of the fermion determinant in each Monte-Carlo configuration. The average phase factor shows the severity of the statistical weight cancellation; we have almost no weight cancellation when ⟨e^{iθ}⟩ ≃ 1, and the weight cancellation is severe when ⟨e^{iθ}⟩ ≃ 0. The average phase factor has a tendency to increase at large µ except in the transition region. This trend can be understood from the effective action in Eq. (19). The complex phase appears from the X_{N_τ} terms containing auxiliary fields, and their contribution generally becomes smaller than the chemical potential term, 2 cosh(3N_τ µ/γ²), at large µ. In the phase transition region, fluctuation effects of the auxiliary fields are decisive and finite momentum auxiliary fields may contribute significantly, which leads to a small average phase factor. The average phase factor on a 4³ × 4 lattice, ⟨e^{iθ}⟩ ≳ 0.9, is practically large enough to keep statistical precision. By comparison, the smallest average phase factor on an 8³ × 8 lattice is around 0.1, at low temperature on the µ/T = 2.4 line. Even with this average phase factor, the uncertainty of the phase boundary shown in Fig. 6 is found to be small enough to discuss the fluctuation effects.

We show the severity of the sign problem in AFMC in Fig. 8. The severity is characterized by the difference of the free energy density in full and phase quenched (p.q.) MC simulations, ∆f = f_full − f_p.q., which is related to the average phase factor via e^{−Ω∆f} = ⟨e^{iθ}⟩_p.q., where Ω = N_τ L³ is the spacetime volume. While ∆f takes smaller values on a 4³ × 4 lattice, it takes similar values on lattices with larger spatial size, L ≥ 6. We expect that ∆f in AFMC for larger lattices would take values similar to those on an 8³ × 8 lattice. We find that ∆f in AFMC is about twice as large as that in MDP when we compare the results at similar (µ, T) [19]. This means that the sign problem in AFMC is more severe than that in MDP. It is desirable to develop a scheme to reduce ∆f in AFMC on larger lattices. In Sec. IV B, we search for a possible way to weaken the weight cancellation by cutting off high momentum auxiliary fields.

A. Volume Dependence of Chiral Susceptibility

We investigate the volume dependence of the chiral susceptibility to discuss the order of the phase transition in the low chemical potential region. We expect the phase transition to be second order at small µ/T according to the mean-field results and O(2) symmetry arguments. The latter state that a fluctuation-induced first order phase transition is not realized for O(2) symmetry [27].
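As an informal illustration of the reweighting bookkeeping in Sec. III E, the sketch below (Python, with placeholder data and hypothetical variable names) shows how the average phase factor and the free-energy difference ∆f would be extracted from sampled phases, using the relation e^{−Ω∆f} = ⟨e^{iθ}⟩_p.q. quoted above:

import numpy as np

def average_phase_factor(thetas):
    """Phase-quenched average of exp(i*theta) over sampled configurations."""
    return np.mean(np.exp(1j * np.asarray(thetas)))

def delta_f(thetas, n_tau, L):
    """Free-energy density difference between full and phase-quenched
    ensembles, from exp(-Omega * delta_f) = <exp(i*theta)>_pq with
    Omega = n_tau * L**3, the spacetime volume."""
    omega = n_tau * L**3
    apf = np.real(average_phase_factor(thetas))
    return -np.log(apf) / omega

# Usage sketch: thetas would be the measured phases of the statistical
# weight; an average phase factor close to 1 signals a mild sign problem.
thetas = np.random.normal(0.0, 0.3, size=5000)  # placeholder phases
print(average_phase_factor(thetas), delta_f(thetas, n_tau=8, L=8))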
In Fig. 9, we show the chiral susceptibility at fixed µ/T = 0.2 on various lattice sizes. From this comparison, we find that χ_σ has a peak at the same T for different lattice sizes, and that the peak height on the 6³ × 4 and 6³ × 6 lattices is almost the same. These two findings suggest that it is reasonable to define the temperature as T = γ²/N_τ in the strong coupling limit. We also find that the peak height of the susceptibility increases with increasing spatial lattice size. The divergence of the susceptibility in the thermodynamic limit signals a first or second order phase transition. In order to find the finite size scaling of the chiral susceptibility, we plot 1/χ_σ as a function of the inverse spatial lattice volume in Fig. 10. The chiral susceptibility is proportional to the spatial volume V = L³ in the first order phase transition region and to V^{(2−η)/3} in the second order phase transition region for d = 3 O(2) spin systems, where the O(2) critical exponent is η = 0.0380(4) [28]. By comparison, χ_σ does not diverge when the transition is a crossover. The comparison with these three scaling functions, shown in Fig. 10, suggests that the chiral phase transition at low µ in AFMC is not first order, while the possibility of a crossover cannot be excluded at the present precision. The current analysis thus implies that the phase transition is second order or a crossover. In order to determine the order of the phase transition firmly, we need higher-precision and larger volume calculations.

B. High momentum mode contributions

We quantitatively examine the influence of high momentum auxiliary field modes on the average phase factor and the order parameters. For this purpose, we compare the results obtained by cutting off the high momentum auxiliary field modes having Σ_j sin² k_j > Λ, where Λ is a cutoff parameter. The parameter Λ is varied in the range 0 ≤ Λ ≤ d = 3 to examine the cutoff effects; we include all auxiliary field modes when Λ = 3, while we take account of only the lowest momentum modes when Λ = 0. The average phase factor should become large in the cases where high momentum mode contributions are negligible, as discussed in Sec. II B, so we anticipate that the weight cancellation becomes weaker for smaller Λ. In the left top panel of Fig. 11, we show the Λ dependence of the average phase factor on an 8³ × 8 lattice for µ/T = 0.6. The average phase factor takes a large value as Λ → 0, where the statistical weight cancellation is suppressed. These results are consistent with our expectation for the statistical weight cancellation with high momentum modes. We conclude that high momentum modes are closely related to severe weight cancellation. In the right bottom panel of Fig. 11, we show the chiral condensate φ on an 8³ × 8 lattice for µ/T = 0.6. We here utilize φ = Σ_τ σ_{k=0,τ}/N_τ. This expression is equivalent to Eq. (24) for full configurations. The chiral condensate does not depend on the parameter Λ, since the lowest modes of the integration variables (σ_{k,τ}, π_{k,τ}) in AFMC consist of the scalar and pseudoscalar modes. In Fig. 11, we also plot the cutoff dependence of other quantities: the quark number density (ρ_q), the chiral susceptibility (χ_σ) and the quark number susceptibility (χ_{µ,µ}). We find that these quantities do not strongly depend on the cutoff as long as Λ ≥ 2. By contrast, the quantities are affected by the cutoff parameter for Λ < 2. We have already seen that the average phase factor becomes large if we set Λ ≤ 2.5.
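A minimal sketch of the mode cutoff used in this subsection (Python; the momentum discretization k_j = 2πn_j/L is our assumption for illustration):

import numpy as np
from itertools import product

def low_momentum_modes(L, cutoff):
    """Return lattice momenta k with sum_j sin^2(k_j) <= cutoff.

    cutoff = 3 keeps every mode, cutoff = 0 keeps only the modes with
    sin(k_j) = 0 for all j, mirroring the Lambda scan described above.
    """
    ks = 2.0 * np.pi * np.arange(L) / L  # assumed momentum discretization
    kept = []
    for kx, ky, kz in product(ks, repeat=3):
        if np.sin(kx)**2 + np.sin(ky)**2 + np.sin(kz)**2 <= cutoff:
            kept.append((kx, ky, kz))
    return kept

# Usage sketch: on an 8^3 lattice, count how many auxiliary-field modes
# survive for a few cutoff values.
for lam in (0.0, 2.0, 2.5, 3.0):
    print(lam, len(low_momentum_modes(8, lam)))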
Thus, this analysis implies the probable presence of an optimal cutoff Λ_o, with which the order parameter values are almost the same as those of the full ensemble results while the reliability of the numerical simulation is improved. We conclude that there is a possible way to study the QCD phase diagram on larger lattices by cutting off or approximating the high momentum modes without changing the behavior of the order parameters.

V. SUMMARY

We have investigated the QCD phase diagram and the sign problem in the auxiliary field Monte-Carlo (AFMC) method with the chiral angle fixing (CAF) technique. In order to obtain the auxiliary field effective action, we first integrate out the spatial link variables and obtain an effective action as a function of quark fields and temporal link variables in the leading order of the 1/g² and 1/d expansion with one species of unrooted staggered fermion. By using the extended Hubbard-Stratonovich (EHS) transformation, we convert the four-Fermi interactions into a bilinear form in the quarks. The auxiliary field effective action is obtained after analytic integration over the quark and temporal link variables. We have performed the auxiliary field integral using the Monte-Carlo technique.

We have obtained auxiliary field configurations in AFMC and the order parameters: the chiral condensate and the quark number density. Both order parameters show phase transition behavior. In the low chemical potential region, the chiral condensate decreases smoothly with increasing temperature, while the quark number density increases gently. This behavior suggests that the phase transition is second order or a crossover, which is consistent with the analysis of the distribution of the chiral condensate. We call the low chemical potential region the would-be second order region. In order to deduce the phase boundary, we here define the (pseudo-)critical temperature as the peak position of the chiral susceptibility. The critical temperature is found to be suppressed compared with the mean-field results on an isotropic lattice and to be almost independent of the lattice size, as also shown in the monomer-dimer-polymer (MDP) simulations for the would-be second order phase transition [19]. We also give some finite size scaling results to infer the order of the phase transition. While one could expect a second order phase transition from the mean-field and O(2) symmetry arguments in the low chemical potential region, it is not yet possible to conclude whether the transition is second order or a crossover at the present precision.

At high chemical potential, the order parameters show a sudden jump and hysteresis, and depend on the initial conditions: the Wigner and Nambu-Goldstone initial conditions. The distribution of the chiral condensate has a double peak around the phase transition region. These results imply that the phase transition is first order, owing to the existence of two local minima with a relatively high barrier compared to the Metropolis jumping width. We call this the would-be first order phase transition in the present paper. We here regard the transition temperature as the crossing point of the expectation values of the effective action obtained with the two initial conditions. According to our analysis, the Nambu-Goldstone phase is enlarged toward the high chemical potential region compared with the mean-field results. The phase boundary depends very weakly on the spatial lattice size and more strongly on the temporal lattice size. This behavior is also found in MDP [19].
We find that we have a sign problem in AFMC. The origin of the weight cancellation is the bosonization of the negative modes in the extended Hubbard-Stratonovich (EHS) transformation; an imaginary number must be introduced in the fermion matrix. The fermion determinant becomes complex, and the statistical weight cancellation arises when we numerically integrate the auxiliary fields. In our framework, we have a phase cancellation mechanism for low momentum auxiliary fields; a phase on one site is canceled by the phase on the nearest neighbor site. We quantitatively show that the high momentum modes contribute to the statistical weight cancellation by cutting off these modes. We also examine the cutoff dependence of the order parameters and susceptibilities. We find that there is a cutoff parameter region where the behavior of these quantities is not altered from the full configurations while the statistical weight cancellation is weakened. Therefore, there is a possibility to investigate phase transition phenomena using a cutoff or approximation scheme for the high momentum modes. While we have a sign problem in AFMC, the statistical weight cancellation is not serious on the small lattices adopted here (up to 8³ × 8) because of the phase cancellation mechanism for the low momentum modes. The phase boundary in AFMC is found to be consistent with that in MDP [19].

In this paper, we utilize CAF in order to obtain the order parameters and susceptibilities in the chiral limit on a finite size lattice. The chiral condensate in finite volume should vanish in a rigorous sense due to the chiral symmetry between the scalar and pseudoscalar modes. In order to simulate the non-vanishing chiral condensate that would be obtained in the rigorous procedure of the thermodynamic limit followed by the chiral limit, the chiral transformation of the auxiliary fields is carried out in each configuration so as to fix the chiral angle in the real positive direction (positive scalar mode direction). We can thus evaluate an adequate chiral condensate and chiral susceptibility by using CAF.

The AFMC method could be straightforwardly extended to include finite coupling effects, since the bosonization technique is already applied in the mean-field analysis [16,17]. Both fluctuations and finite coupling effects are important to elucidate the features of the phase transition phenomena, so AFMC would be a possible way to include these two effects at a time. The sign problem might be more severe than in the strong coupling limit when we include finite coupling effects. One method to retain numerical reliability is to invoke the shifted-contour formulation [29]. We hope that we may apply the formulation with finite coupling effects or on a larger lattice. We also obtain appropriate order parameters with the relatively hassle-free CAF method compared to the rigorous procedure. We might use this CAF method with higher-order corrections in the strong coupling expansion to investigate the phase diagram.

Here φ_{k,ω} and α_{k,ω} are the chiral radius and chiral angle of each chiral partner. We find that the chiral condensate ideally vanishes according to Eq. (A1). In CAF, we rotate all fields by the negative chiral angle (−α) and set π_0 = 0, obtaining a finite chiral condensate in the Nambu-Goldstone (NG) phase. The resultant chiral condensate in CAF should simulate the spontaneously broken chiral condensate in the thermodynamic limit. We have some advantages in CAF.
One is that the chiral condensate is finite in the NG phase and the chiral susceptibility may have a peak. In the case where the chiral condensate vanishes (⟨σ_0⟩ = 0) because of the chiral symmetry, the chiral susceptibility, which is proportional to ∂² ln Z/∂m_0² = −∂⟨χ̄χ⟩/∂m_0 = ∂⟨σ_0⟩/∂m_0, is expressed as ∂² ln Z/∂m_0² = ⟨σ_0²⟩. We can then expect that the chiral susceptibility increases with decreasing temperature. After we utilize CAF, we obtain a chiral susceptibility with a peak, ∂² ln Z/∂m_0² = ⟨σ_0²⟩ − ⟨σ_0⟩², because of the non-vanishing chiral condensate, as shown in Fig. 9. Another merit of CAF is that, when we calculate the chiral condensate and the chiral susceptibility, we can take into account the information on the pseudoscalar mode, which is mixed with the scalar mode in the chiral limit.
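As a minimal illustration of the CAF prescription discussed in this paper, the per-configuration chiral rotation can be sketched as follows (Python; array layout and names are hypothetical):

import numpy as np

def chiral_angle_fixing(sigma0, pi0):
    """Rotate the zero-momentum (sigma0, pi0) pair by -alpha so that the
    pseudoscalar component vanishes and the condensate points in the
    positive-sigma direction, as in the CAF method described above."""
    alpha = np.arctan2(pi0, sigma0)
    c, s = np.cos(-alpha), np.sin(-alpha)
    sigma_new = c * sigma0 - s * pi0
    pi_new = s * sigma0 + c * pi0   # vanishes by construction
    return sigma_new, pi_new, alpha

# All other (sigma, pi) modes would be rotated by the same angle -alpha
# in each Monte-Carlo configuration before measuring observables.
sigma_new, pi_new, alpha = chiral_angle_fixing(sigma0=1.2, pi0=-0.7)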
Propagation of superconducting coherence via chiral quantum-Hall edge channels

Recently, there has been significant interest in superconducting coherence via chiral quantum-Hall (QH) edge channels at an interface between a two-dimensional normal conductor and a superconductor (N–S) in a strong transverse magnetic field. In the field range where the superconductivity and the QH state coexist, the coherent confinement of electron- and hole-like quasiparticles by the interplay of Andreev reflection and the QH effect leads to the formation of Andreev edge states (AES) along the N–S interface. Here, we report the electrical conductance characteristics via the AES formed in graphene–superconductor hybrid systems in a three-terminal configuration. This measurement configuration, involving the QH edge states outside a graphene–S interface, allows the detection of the longitudinal and QH conductance separately, excluding the bulk contribution. Convincing evidence for the superconducting coherence and its propagation via the chiral QH edge channels is provided by the conductance enhancement on both the upstream and the downstream sides of the superconducting electrode, as well as by bias spectroscopy results below the superconducting critical temperature. Propagation of superconducting coherence via QH edge states was more evident as more edge channels participated in the Andreev process at high filling factors with reduced valley-mixing scattering.

Good edge contact between a graphene layer and an Nb superconducting electrode with a high critical field (H_c2 ~ 3.5 T) allows the superconducting proximity effect in the QH regime for magnetic fields above ~1 T. To attain a stronger AR contribution, most of the previous experimental studies adopted Josephson junctions made of two N-S interfaces arranged sufficiently close to each other, with an overlap of the superconducting order of the two electrodes [18-20]. However, a junction device with a short channel length allows only two-terminal measurements, where the voltage drop across the junction inevitably contains a mixture of longitudinal and transverse Hall voltages. Thus, in such a device, one cannot effectively separate the edge conductance from the bulk contribution. This even distorts the QH conductance plateaus, depending on the aspect ratio of the junction 21. In sharp contrast to previous studies, in this study we adopted a three-terminal measurement configuration, involving the QH edge states at both sides of a graphene-S interface, to detect the longitudinal and Hall conductance separately. The edge channels exhibited conductance enhancement on both the upstream and the downstream sides of the superconducting electrode, providing convincing evidence for AR mediated by AES along the superconducting-proximity interface. The conductance enhancement in the QH regime was also confirmed by bias spectroscopy at a fixed magnetic field below the superconducting critical temperature, which manifested the formation of the AES along the contact edge of the Nb superconducting electrode. In our measurements, the AR-induced conductance enhancement via the QH edge states was more evident in the QH plateaus as more channels participated in the Andreev process at high filling factors (ν ≥ 24). This indicates that each mode of the edge channels participated in the AR process via the AES.
In addition, Fermi-energy-modulated background signals of the Landau-level gaps were observed in the bias spectroscopy measurements, which were superposed on the AR signal near zero bias. Compared with Josephson junctions, our device configuration allows a more precise determination of the relationship between the development of QH edge states and superconducting pair coherence. We find that a very recent work 22 also used the same measurement configuration as ours. A three-terminal configuration was adopted in that study to confirm the crossed Andreev reflection in the QH states. Unlike the AES in this study, however, their result of negative resistance in the downstream edge states, which represents the hole current, was caused by the nonlocal coherence between electrons and holes.

Figure 1(a) shows a false-coloured scanning electron microscope (SEM) image of the BLG hybrid device and the measurement configuration. The Nb superconducting electrode (green) is in contact with the BLG sheet (blue) between the two normal voltage probes (yellow) on the upper side of the BLG. A bias current I was applied through electrodes 2 and 4 (I = I_24), while the upstream voltage V_U (=V_23) and the downstream voltage V_D (=V_21) were measured. In the QH regime in a high transverse magnetic field, carriers flowed in the chiral edge channels in the counter-clockwise direction, as shown in Fig. 1(b). Carriers incident from the BLG to the superconducting contact form an AES, which consists of coherent paired states of Andreev-reflected electrons and holes flowing along the interface of the BLG-S junction. The AES maintains the coherence of hybridised electron-hole quasiparticles as long as the AR of quasiparticles occurs at the BLG-S interface. The coherence of the Andreev pairs is maintained both upstream and downstream of the AES within the range set by pair breaking due to disorder-induced scattering at the edges. From the quasiclassical point of view, the observation of the AR effect in the QH regime depends on the charge type of the outgoing quasiparticles at the exit point of the junction 8. The proximity effect along the N-S interface becomes most evident when hole-like quasiparticles are dominantly injected downstream of the N-S interface into the edge conducting channel. In this approach, the charge type of the outgoing carriers can be determined by the bouncing number (N) of quasiparticles at the interface and the probability of AR (P_AR) upon bouncing. Thus, the dominance of either charge type of carriers varies depending on the width W of the N-S interface, the magnetic field (B), and the interfacial barrier strength (Z). To interpret the results observed in this study, we adopt the quantum mechanical scheme described in Ref. 23 for AR in a graphene-S system in the QH regime, considering the charge conversion probability between the normal edge state and the AES. In this scheme, unlike in 2DEG-S systems, the measured conductance of the QH plateaus in graphene depends on the valley polarisations of quasiparticles from the upstream and downstream edge states as G = (2e²/h)(1 − cos Θ), where cos Θ = ν_1 · ν_2 is the product of the valley isospins. These valley isospins are defined in the Bloch sphere for the upstream (ν_1) and downstream (ν_2) flow along the opposite edge states, and Θ is the angle between ν_1 and ν_2. The lowest QH plateau (n = 1 for MLG; n = 0 and 1 for BLG) depends most sensitively on the valley polarisations, since it can be strongly influenced by the type of edge termination.
The zigzag edge has ν = ±z (a three-dimensional unit vector in the Bloch sphere), depending on the graphene sublattice located at the edge. Thus, for a superconductor placed between opposite sides of zigzag edges, Θ = π. The armchair edge has ν · z = 0, as the armchair edge contains sites from both sublattices, with the valley isospins lying in the x-y plane of the Bloch sphere. Because an Andreev pair of an incoming electron and an outgoing hole has opposite valley polarisations in graphene, with cos Θ = −1 24, the lowest QH conductance, corresponding to n = 1 for MLG, becomes 4e²/h, which is the same as for perfect Andreev reflection (P_AR = 1) in a system with time-reversal symmetry. In contrast, the states of n ≥ 2 for MLG are valley degenerate because they form further from the graphene edge than the n = 1 state, which leads to ν_1 · ν_2 ≠ −1. In this case, the Hall conductance of the n ≥ 2 states deviates from the doubled conductance of perfect AR. At the same time, the intervalley scattering of quasiparticles during the propagation of the edge channels reduces the AR probability. The effect of valley degeneracy for the n ≥ 2 states and of intervalley scattering at the graphene-S interface will be addressed in the discussion section. In addition to this valley-related consideration, Ref. 8 takes into account mode mixing in the edge channels near the corners of the graphene-S interface. This describes the AR-induced conductance variation for n channels with the electron-hole conversion probability based on transfer matrices in the quantum mechanical treatment. In this scheme, one considers that AR occurs for the inner edge channels for n ≥ 2 as well as for the outermost edge channels.

Bias spectroscopy of the Nb superconducting contact was performed to confirm the contact transparency for V_BG = 10 V and B = 0 T. Figure 1(c) shows the temperature dependence of the upstream differential resistances, normalised by the normal-state value. Each curve shows a dip near zero bias, which suggests that AR occurred inside the superconducting energy gap of the Nb electrode. At the base temperature of 0.16 K, the zero-bias differential resistance drops by ~30%, indicating a highly transparent proximity contact at the graphene-Nb interface. The superconducting gap energy of Nb, Δ_Nb, is estimated by choosing the bias voltage where the differential resistance starts dropping abruptly at T = 0.16 K [vertical dashed lines in Fig. 1(c)]; Δ_Nb is about 850 μV (V = Δ/e). The resistance dips are broadened and eventually disappear as T increases beyond the superconducting critical temperature of Nb (T_c = 8.1 K). Tiny zero-bias resistance peaks near the base temperature (T = 0.16 and 0.36 K) were usually found in our Nb-contacted devices (also found in MLG devices), which may suggest the presence of a small potential barrier at the interface; however, these resistance peaks appeared only at low temperatures (T < 0.6 K). Thus, we suggest that they were caused by the reentrance effect, given the high transparency between the superconductor and the neighbouring normally conducting (Au) electrodes that act as thermal reservoirs 25,26. Figure 2(a) shows the back-gate voltage (V_BG) dependence of the downstream resistance R_D (=V_D/I) and the upstream conductance G_U (=I/V_U) measured at B = 1 T.
Considering the contribution from AR, it can be noted that eV_U (= eV_23) = μ′_2 − μ_3 = μ_2 − μ_3 + μ_AR + μ_c = μ_Hall + μ_AR + μ_c and eV_D (= eV_21) = μ′_2 − μ_1 = μ_2 − μ_1 + μ_AR + μ_c = μ_AR + μ_c, respectively. Here, the chemical potential of probe 2 is μ′_2 = μ_2 + μ_AR + μ_c, where μ_AR and μ_c are the chemical potential shifts arising from the AR and from the contact resistance of the Nb junction, respectively; μ_Hall [= μ_1 − μ_3] is the Hall potential drop and μ_1 = μ_2. μ_AR gives a negative voltage drop (μ_AR < 0) to both V_U and V_D when AR occurs. R_D shows the development of Landau levels, with minima when the Fermi level of graphene lies between neighbouring Landau levels. R_D does not vanish completely at these incompressible states until V_BG reaches 4.7 V, which corresponds to a filling factor ν = 24. The residual resistance of a few ohms at each minimum for ν < 24 may result from backscattering in the edge channels and the contact resistance at the graphene-S interface. G_U shows the quantised conductance arising from the unique Landau level structure of BLG [σ_xy = ±(4e²/h)·n for n ≥ 1, where n is an integer] 27. For high filling factors, the conductance plateaus are clearly enhanced above the expected values denoted by dashed lines, with a stronger conductance deviation as more edge states participate in the AR (details are discussed below in relation to Fig. 3). Above V_BG = 7.3 V, R_D becomes negative. Even in the compressible state between the plateaus of G_U at V_BG = 9.2 V, a large number of edge channels participate in AR, resulting in a negative downstream resistance.

Andreev reflection via quantum-Hall edge states.

For better clarification, we performed bias spectroscopy for both V_D and V_U simultaneously as a function of V_BG for B = 1 T. Figure 2(b) shows the bias dependence of both the upstream and the downstream differential resistance at a filling factor of ν = 36 (V_BG = 8.6 V). The vertical dashed lines (black) indicate the value of Δ_Nb (~813 μeV) for B = 1 T, calculated using Bardeen-Cooper-Schrieffer (BCS) theory with the measured zero-field gap energy of Nb. The measured energy range of the dip structure in the upstream resistance (blue) agrees well with the calculated Δ_Nb, providing additional confirmation that it indeed arises from the superconducting proximity effect. This structure was also found in the downstream resistance (red) with almost the same energy scale as Δ_Nb, which indicates that incident electrons from the upstream side were paired coherently with outgoing holes via the AES. To clearly examine the progressive evolution of the superconducting proximity effect on the edge states at different filling factors, we subtracted the background in the differential resistance by obtaining d²V/dI², the second derivative of V_U or V_D with respect to the bias current I, as shown in the colour maps in Fig. 2(c) and (d). The red (blue) colour in the maps represents an increase (decrease) in the differential resistance with a positive increase in bias current. For ν < 24 (or V_BG < 4.7 V), alternating peak and dip structures near zero bias appear as V_BG is modulated, caused by Landau levels with inter-level gap energies exceeding the value of Δ_Nb at B = 1 T. A recent report on bias spectroscopy in suspended BLG in the QH regime reveals that these features arise as the Fermi level passes through the different Landau levels with increasing bias 28.
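For reference, the three-terminal bookkeeping used throughout Fig. 2 (a restatement of the relations at the start of this discussion, in display form) reads

\begin{align*}
 eV_U &= \mu'_2 - \mu_3 = \mu_{\mathrm{Hall}} + \mu_{\mathrm{AR}} + \mu_c,\\
 eV_D &= \mu'_2 - \mu_1 = \mu_{\mathrm{AR}} + \mu_c,\qquad
 \mu'_2 = \mu_2 + \mu_{\mathrm{AR}} + \mu_c,\quad \mu_1 = \mu_2,
\end{align*}

so a negative μ_AR pulls both V_U and V_D down, and V_D can even turn negative when μ_AR + μ_c < 0, consistent with the negative R_D reported above.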
Beyond the filling factor ν = 24 (or V_BG > 4.7 V), however, these features are gradually replaced by successive zero-bias dips in the differential resistance due to the superconducting proximity effect. Horizontal dashed lines represent the energy scale of 2Δ_Nb. The centres of the peak or dip structures of the differential resistance (dot-dashed line) deviate from zero bias at low filling factors for V_BG < 4.7 V, but gradually return to zero bias at high filling factors. As mentioned above, increasing the bias current enables the bulk conduction to contribute to the transport. The differential resistance can then become asymmetric depending on the polarity of the bias because, in this case, the current flows through different paths, which causes the alternating peak-dip centres to deviate from zero bias for V_BG < 4.7 V. However, as the AR is not affected by the bias polarity, the centres of the dip structures are located near zero bias in the region of the superconducting proximity effect for V_BG > 4.7 V. The colour map of d²V_D/dI² shown in Fig. 2(d) presents results similar to those in Fig. 2(c). Alternating peak-dip structures of the differential resistance also appear for ν < 24. The variation of dV_D/dI near the incompressible states (corresponding to the dips of R_D) is smaller than in the compressible states between adjacent Landau levels (corresponding to the peaks of R_D). This is because V_D drops to a few μV (R_D < 10 Ω) near the incompressible states, which makes the bias dependence weaker than in the compressible states. With increasing Fermi level, the superconducting proximity effect became more evident in both the upstream and the downstream edge states for ν > 24. This indicates that the same potential deviation μ_AR due to the AES was detected in both V_U and V_D. Within this range, one clearly observes both the AR-induced resistance dips for V < |Δ_Nb/e| and the alternating background of the differential resistance due to the QH effect for V > |Δ_Nb/e|. For B > 1.32 T, the dip structure from the AR is no longer present. Only the alternating peak-dip structures are visible with increasing B in the energy range beyond 2Δ_Nb. The solid black lines in Fig. 3(a) show the calculated gap energy of the Landau levels 28 (Δ_LL) for a given B, which is much larger than 2Δ_Nb in the coexistence zone. This indicates that Δ_LL is irrelevant to the conductance behaviour appearing for V < |Δ_Nb/e| in Figs 2 and 3. Focusing on the coexistence of the two effects in Fig. 3(b), one notes that the zero-bias differential conductance G_U(B) (red) is enhanced above the expected value of (4e²/h)·n for the n-th conductance plateau. This is caused by the AES, which is present in all of the plateaus in the coexistence zone in Fig. 3(b). The conductance enhancement is estimated in Fig. 3(c) as ΔG_U = (G_U,AR − G_U,N)/G_U,N (blue dots), where G_U,AR is the maximum conductance enhancement by AR and G_U,N is the normal upstream conductance for V > |Δ_Nb/e|. ΔG_U tends to decrease monotonically along with decreasing G_U as the number of participating edge channels is reduced with increasing B field. ΔG_U shows oscillating behaviour, the maxima (minima) of which correspond to the treads (risers) of the quantised steps in G_U. In Fig. 2(c), the bias-spectroscopy measurements in the QH regime reveal a peak-dip structure with the same periodic behaviour as the QH plateaus.
Because the treads and risers of the QH plateau steps are related to the peaks and dips in the differential resistance, respectively, the oscillating characteristic of ΔG_U arises from the AR signals that were superimposed, positively or negatively, on the modulated Landau-level background signals at zero bias. In addition to the quantised conductance of G_U, this behaviour reconfirms the evidence for QH edge states in the AR-QH coexistence zone.

Temperature dependence of Andreev edge states.

We also confirmed the existence of AES in an MLG device at variable temperature, up to the critical temperature of Nb, T_c = 3.3 K, for B = −1.7 T. We found that a BLG device was not suitable for such measurements because BLG shows more thermal broadening of the Landau levels than MLG at a given temperature, with a resultant smearing of the QH plateaus (see the supplementary material for the T dependence of the BLG device). Similar device and measurement configurations were adopted for the measurements using the MLG device, except for a polarity change of the applied magnetic field (B = −1.7 T). With the opposite polarity of the magnetic field, the locations of the voltage probes V_U and V_D were also switched accordingly. The MLG device gave results similar to those obtained using the BLG device. The MLG-Nb junction device revealed a zero-bias conductance enhancement of ~14% with Δ_Nb ~570 μV for B = 0 T and T = 0.16 K. Figure 4(a) shows a set of line plots of G_U at the QH plateaus of ν = 18 and 22 in the MLG device for T = 0.16, 1, 2.5, 2.9, and 3.3 K. Each QH plateau gradually recovers the normally quantised value as the temperature rises. The temperature dependence occurs across the entire V_BG range in Fig. 4(a), for both the incompressible and the compressible states of the QH regime; however, the compressible states seem to have a larger temperature dependence. We believe that, in addition to the edge channel conduction, the bulk channels in the compressible states also contributed to the AR. Bias spectroscopy was also performed at the temperatures of Fig. 4(a) for both the upstream and the downstream edge states. Here, Fig. 4(b) and (c) are the results near the incompressible state (V_BG = 8.1 V), where the development of the zero-bias resistance dips is clearly visible for V < |Δ_Nb/e|, with almost the same background shape of dV/dI for both the upstream and the downstream edge states. In this device, the incoming electrons from the upstream edge states and the outgoing quasiparticles to the downstream ones are coherently coupled by the AES along the interface of the junction for V < |Δ_Nb/e|. As backscattering is almost ruled out in the incompressible states, both V_U and V_D measure the μ_AR induced via the AES. On the other hand, Fig. 4(d) and (e) are the results close to the compressible states (V_BG = 7.4 V), where the backscattering in the bulk transport channels also contributed to the transport. The bulk transport channels of the MLG contained randomly distributed defects, which should have acted as scattering sources for the quasiparticles in the AES, resulting in decoherence between quasiparticles. Consequently, as seen in Fig. 4(d-e), the observed dV_U/dI and dV_D/dI exhibit different bias-dependent backgrounds. Their asymmetrical background and the much weaker superconducting proximity effect in the downstream edge state result from the backscattering and decoherence in the AES.
We argue that the similarity in the I-V characteristics between the upstream and downstream edge states is the important criterion for distinguishing superconducting proximity effects via QH edge states without backscattering.

Discussion

Non-ideal transparency of the graphene-S interface (ΔG_U ~30% at B = 0 T in BLG) and the valley-degenerate edge states (n ≥ 2) partly break the Andreev pairs in the two opposite edge states, which reduces the enhancement of the QH conductance from the expected doubled QH conductance (the case of P_AR = 1). These additional factors causing a deviation from perfect AR are hard to quantify because of very subtle and complicated effects such as inhomogeneous transparency at the junction interface. Thus, we introduce a simple AR conversion factor α into the Hall conductance equation from ref. 23, G = (2e²/h)(1 − α cos Θ). Setting Θ = π for the case where all incoming electrons are converted into outgoing holes with opposite valley polarisation, the factor α acts as the portion of the outgoing hole-like quasiparticles that are coherently coupled with incoming electrons via the AES. The value of this AR conversion factor α, corresponding to the accumulated conductance enhancement for the channels up to the n = 10 level, is estimated to be ~0.057 from ΔG_U taken at zero bias. In the B-field dependence of the AR effect in Fig. 3, no sign of conductance oscillation is seen as the B field varies within the QH regime. In the quasiclassical treatment, the outgoing quasiparticles can be alternately electron-like or hole-like as the bouncing number of quasiparticles at the 2DEG-S interface is varied with the B field 9. This conductance oscillation is treated quantum mechanically in terms of the phase factor acquired by the quasiparticles along the AES of the 2DEG-S interface. However, the unique valley isospin degrees of freedom of a graphene-S junction make the electron-hole mixed states degenerate, which leads to the vanishing of the phase factor for the quasiparticles in the AES 8. Therefore, the conductance is mainly determined by the electron-hole conversion probability factor α via the AES. Although the control of this probability in measurements remains uncertain, we observed only conductance enhancement in the three different graphene devices used in this study. While the conductance enhancement by AR was clearly identified at higher filling factors (n ≥ 6), the superconducting proximity effect was barely observed at lower filling factors of the QH edge states in both the BLG and the MLG devices (see supplementary materials for the MLG device). This feature contrasts with the results from graphene Josephson junctions in other works [18-20], where the AR effect was observed at low filling factors as well. Certainly, the outermost edge states of n = 0, 1 have a unique dependence on the valley polarisations of the incoming and outgoing quasiparticles, and can easily be smeared due to the strong intervalley scattering by edge disorder. However, this does not explain why the other QH edge states at the lower filling factors (1 < n < 6) in our study did not exhibit the superconducting proximity effect via the AES. In an S-N-S proximity Josephson junction, with the normal-conducting channel as a weak superconducting link, the coherence of Andreev pairs can be maintained much more strongly than in a single N-S junction.
The strong proximity effect in a Josephson junction allows a conductance enhancement even at low filling factors, with fewer edge channels than in a single N-S junction for a given temperature and B field. Therefore, the relatively weak proximity effect in our N-S junction device led to the observation of the conductance enhancement only when a sufficiently large number of QH edge channels participated in the AR process along the AES, at high filling factors. Moreover, intervalley scattering can break the coherence between the incoming electrons and the outgoing quasiparticles via the AES for smaller values of ν. The width of the junction interface W plays an important role in the quasiparticle coherence in our devices, as it determines the propagation length of each AES. Because the smaller-ν edge states are located on the outer side of the conducting channels, an AES of a lower filling factor requires a greater coherence length than that of a higher filling factor. The localisation length of an edge channel in the QH regime is represented by the cyclotron radius, of the order of the magnetic length l_B = (ħ/eB)^{1/2}. Therefore, the propagation length of the AES (l_AES) is determined by the strength of the magnetic field (B) and the width of the superconducting interface (W). The BLG device with W = 360 nm, shown in Fig. 1(a), started to show the superconducting proximity effect from ν = 24 (n = 6), which corresponds to l_AES ~187 nm with l_B ~25 nm (B = 1 T). In a control experiment with an MLG device with a shorter superconducting contact edge (W = 270 nm), the superconducting proximity effect began to appear at a lower filling factor, ν = 18 (n = 5). Here, l_AES ~120 nm for the same B field and T (see supplementary materials for the MLG experiment). It is proposed in ref. 23 that the Hall conductance in the presence of intervalley scattering at the graphene-S interface is governed by the intervalley relaxation rate Γ and the velocity v_0 at the junction interface. These results seem to support the above argument concerning the coherence of the AES. One cannot reduce W too much to obtain the proximity effect at lower filling factors, because a narrow contact edge often results in effective channel disorder, which backscatters the incoming electrons from the QH edge channels and results in a suppressed QH conductance 29.

In conclusion, we fabricated ballistic MLG-Nb and BLG-Nb hybrid devices and observed the AR effect via the QH edge states that form in a strong transverse magnetic field below the superconducting critical field and critical temperature of the Nb electrode. In contrast to the two-terminal measurements of previous studies, the three-terminal measurement configuration adopted in this study allowed us to obtain detailed evidence for AR mediated by AES, excluding the bulk contribution. From the observed negative resistance on the downstream side of the AES, the coherent conversion of electrons from the upstream side of the AES into paired holes was clearly confirmed. The AR-induced conductance enhancement was more evident as more edge modes participated in the AR process, as the AR conversion probabilities depend on the valley degeneracy and intervalley scattering. This study provides valuable detailed information on the propagation of superconducting coherence with specific chirality along edge channels in the QH regime. It also provides a new scheme for investigating the interplay between superconductivity and the chiral edge conducting states that often emerge in 2D topological materials.
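As a quick numerical cross-check of the length scales quoted above (a sketch; we identify the edge-channel localisation length with the magnetic length l_B = (ħ/eB)^{1/2}, consistent with l_B ~25 nm at 1 T):

import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant, J s
E = 1.602176634e-19     # elementary charge, C

def magnetic_length(B):
    """Magnetic length l_B = sqrt(hbar / (e B)) in metres."""
    return np.sqrt(HBAR / (E * B))

# At B = 1 T this gives ~25.7 nm, consistent with l_B ~ 25 nm in the text.
print(magnetic_length(1.0) * 1e9)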
Method

Device fabrication. Our MLG and BLG hybrid devices were fabricated in the following way. First, we encapsulated graphene between 20-30-nm-thick hBN crystals using the sequential stamping method 17. Graphene was protected from ambient conditions by the hBN layers and by the polymer residue that could not be completely removed after the lithography processes. The encapsulation enhanced the mean free path of the graphene significantly, beyond the size of the graphene layers in our devices. Then, the encapsulated graphene was placed onto a heavily electron-doped silicon substrate with a 280-nm-thick SiO_2 capping layer, which was used to apply the back-gate voltage (for both MLG and BLG devices). After choosing a defect-free surface on the encapsulated graphene layer under atomic force microscopy, standard electron beam lithography and successive plasma etching were adopted to prepare the edge contact for the metallic electrodes. A bilayer electrode of 10-nm-thick Cr and 60-nm-thick Au layers was deposited by standard electron-gun evaporation. A 100-nm-thick Nb electrode was deposited by DC magnetron sputtering after inserting a 10-nm-thick Ti buffer layer, deposited by electron-gun evaporation, between the graphene and the Nb. The Ti buffer layer enhanced the adhesion of the electrode and improved the contact characteristics, and thus prevented damage to the graphene layer during the sputtering process.

Measurements. All devices were mounted on a dilution fridge system (Kelvinox, Oxford Instruments) with a base temperature of 150 mK. Electrical connection of the measurement probe lines of the fridge system was made via two stages of low-pass RC filters. All measurement data were obtained using a current-biased standard
Economic Management Based on Hybrid MPC for Microgrids: A Brazilian Energy Market Solution

This paper proposes a microgrid central controller (MGCC) solution to the energy management problem of a renewable energy-based microgrid (MG). This MG is a case study from the Brazilian energy market context and, thus, has some operational particularities and rules to be obeyed. The MGCC development was based on a hybrid model predictive control (HMPC) strategy using the mixed logical dynamic (MLD) approach to deal with logical constraints within the HMPC structure, which results in a mixed integer programming (MIP) problem. The development of the solution is done through economic and dynamic modeling of the MG components; furthermore, it also takes into account the energy compensation rules of the Brazilian energy market and the white energy tariff. These conditions are specified through a set of MLD constraints. The effectiveness and performance of the proposed solution are evaluated through high-fidelity numerical simulation.

Introduction

One of the great advantages of some renewable energy sources is the possibility of generating energy directly in the region where it is consumed, with great emphasis on solar photovoltaic and small wind generation. This context enables the use of renewable energy in a distributed way, where each consumer unit is able to produce energy for self-sufficiency. Considering this new scenario, the concept of the microgrid (MG) appears as a key solution, ensuring the operational reliability of electrical systems while providing cost savings to consumer units [1]. Microgrids are defined as "a cluster of loads and microsources operating as a single controllable system that provides energy and heat to their local area" [2]. The energy management of microgrids is carried out by units progressively called microgrid central controllers (MGCCs), which are responsible for implementing control strategies that ensure adequate energy generation efficiency while also addressing economic concerns. Many different algorithmic approaches for MGCCs, especially those based on model predictive control (MPC) [3] and its variants, are available in the literature [4-12]. Recently, [4] surveyed the main state-of-the-art MPC techniques applied to energy management in microgrids. In [5], the development of an optimal controller for renewable energy microgrids with a hybrid energy storage system (ESS) is presented, using a hybrid MPC [6] that aims to maximize the economic benefit of the microgrid and to minimize the causes of degradation of the storage systems. On the other hand, the optimal load sharing of a renewable MG with a hybrid ESS through an advanced optimization-based control technique is the subject of [7]. A hierarchical MPC structure acting on different time scales, aiming to optimize the economic profit and the charging of electric vehicles, has also been proposed.

Spot Market Rules and Proposed Solution

This section presents the Brazilian spot market rules, as well as the main ideas of the proposed solution and a description of the µGridLab microgrid where the simulation study is developed.

Spot Market Rules

Recently, the Brazilian market rules changed and the so-called "white tariff", which varies according to the time of day and penalizes the cost at peak times, became an interesting option for users that both produce and consume energy, the prosumers.
Another important mechanism that has been created is the compensation rule, which allows the units with production capacity to inject energy into the main grid and use it with no purchase costs in other time periods when the load demand is greater than the energy generation. These conditions allow small consumers, including those operating in low-voltage networks, to generate their own energy and dispatch it into the main grid autonomously, which creates an energy credit (compensation) that can be used (compensated) in up to 36 months [16]. At this point, it is important for the reader to understand the concepts of energy injection and compensation. On one hand, the first concept is, in essence, the excess of energy generated; in other words, the amount of energy that is not instantaneously consumed but is dispatched/supplied/injected to the main grid. According to the market rules, this energy will result in a credit that can be consumed in the future. On the other hand, the energy compensation concept resides in the usage of this energy credit, for example, in periods where the consumption is greater than the energy production capacity. We also stress that for one to benefit from this energy market scheme, the price of the produced energy should be equal to or lower than the price of the credit in each tariff spot. Another characteristic of these rules is that in peak periods the energy price is higher than in the other spots, which adds a degree of freedom: one can produce more energy than needed and inject the excess into the grid in peak periods. This allows one to generate credits with high values that can be translated into a larger amount of energy in the off-peak and intermediate spots because of the credit conversion factors. However, the energy credits are computed by the electricity provider, and to take them into account in the MGCC, it is necessary to model the market rules in a form that can be handled within the optimization problem. To solve this issue, the concept of virtual stocks (VS) proposed in this work arises as a powerful tool that can accurately represent the market operation, giving the MGCC better decision possibilities. It is important to note here that, as the Brazilian market is different from other markets (e.g., European markets), the Brazilian prosumer cannot obtain economic benefits by selling energy to the DSO (there is no possibility to sell energy to the DSO); the only possibility is to convert the injected energy into credits to be used in the future. Another important difference is that energy prices do not oscillate daily and remain constant for weeks or months, since most of the electricity generated comes from large-scale hydroelectric power plants that, in most cases, have storage capacities that guarantee the fixed spot prices. The main contribution of this work is the proposal of a new MGCC that takes the compensation rules into account in its optimization problem through the virtual stocks modeling framework. This concept allows the MGCC to decide the optimal solution for the energy management over the prediction horizon using the rules of energy injection/compensation and the different conversion factors among the three tariff spots. The solution presented hereafter uses the Brazilian energy market as a use case, but it is important to bear in mind that the methodology discussed here can be applied to any energy market with similar rules.
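To make the injection/compensation mechanism concrete, the sketch below simulates a single credit cycle. It is a toy illustration, not the paper's model: the tariff prices and the rule that a credit's energy value scales with the ratio of spot prices are assumptions made only for this example.

```python
# Toy illustration of energy injection and compensation with credit
# conversion between tariff spots (hypothetical prices, not Table 1).
TARIFF = {"off_peak": 0.45, "intermediate": 0.65, "peak": 0.95}  # R$/kWh

def usable_energy(credit_kwh, earned_in, used_in):
    """Energy (kWh) obtainable from a credit earned in one spot and
    consumed in another, assuming price-ratio conversion."""
    return credit_kwh * TARIFF[earned_in] / TARIFF[used_in]

# 10 kWh injected at peak becomes a credit worth more energy off-peak:
print(usable_energy(10.0, "peak", "off_peak"))   # ~21.1 kWh
# ...and less if earned off-peak but needed at peak:
print(usable_energy(10.0, "off_peak", "peak"))   # ~4.7 kWh
```

This matches the incentive described above: injecting during peak hours yields credits that translate into a larger amount of energy in the cheaper spots.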
White Tariff According to [17], in order to use the white tariff, a consumer unit must have the sum of the nominal powers of its transformers equal to or less than 112.5 kVA, which is the case of the µGridLab MG used as a case study in this paper. The white tariff is composed of three different spots, off-peak, intermediate and peak, and its costs are presented in Table 1. These prices will be used as weights in the main grid cost function, as well as conversion factors in the virtual stocks modeling, as explained in the following sections. Compensation Rules Hereafter, the modeling of the compensation rules and their integration in the MGCC, which is the novel contribution of this paper, will be discussed. To this end, the concept of the virtual stocks (VS) is introduced. The control objectives, the dynamic models and the operational constraints are proposed under the following assumptions:
1. Since there are three spots within the white tariff, three different VSs were proposed, one for each spot.
2. The energy stored in the VSs has a minimum value of zero and no maximum value constraint.
3. The energy injected to the grid at each tariff spot is stored in its respective VS.
4. The energy compensation occurs primarily in the respective tariff spot. In cases where the VS is depleted in its respective tariff spot, it is possible to use the energy of another VS. In this case, a conversion factor γ given by the relation among the tariff spot prices is applied. This modeling approach is compatible with the energy compensation rules [19].
5. The cost function to be minimized weights the energy exchanged with the grid; this way, the main objective is to match the produced energy with the demand as much as possible using the MG renewable sources.
6. The compensated power is the sum of all powers extracted from the VSs.
7. When the MG is using the compensated power, in other words, the energy from the VSs, it is not allowed to inject energy into the main grid.
8. The MG is only allowed to buy energy from the grid if there is no more energy available in the VSs.
The following sections aim to detail how the previous assumptions were implemented in the optimization problem. Proposed Control Solution In order to solve the presented problem, a MGCC composed of two hierarchical MPC levels is proposed. The high-level MGCC is responsible for the economic management of the microgrid and is the focus of this work. It operates on a time scale of hours and determines the operation points for the lower level. The low level is responsible for ensuring the energy balance of the MG following the targets defined by the high-level MGCC and is out of the scope of this work. It operates on a time scale of minutes and guarantees the operation of the MG as close as possible to the economic optimum, but allows eventual deviations from this point when necessary. For the design and tuning of the high-level MGCC, correct operation of the low-level MGCC is assumed. A schematic of how this strategy works is shown in Figure 1. The main objective of the high-level MGCC is to operate the MG meeting the demands at the minimum operation cost. Note that the efficient operation of the microgrid has to consider not only direct costs associated with the energy consumption, but also other indirect costs related to the operation of the several pieces of equipment, as will be detailed in the formulation of the cost functions of each one of the MG components in Section 3.
The manipulated (decision) variables of the high-level MGCC are the desired values (setpoints/references), at each sample time of 15 min, of the generated power, the startup and shutdown of the dispatchable sources, the charge/discharge power of the storage systems, the power extracted/injected from/to the grid, and a set of variables related to the market rules and virtual stocks modeling, as depicted in Section 3. Moreover, some of these variables are continuous, such as the generated power of a dispatchable source or the grid power, while others are binary, for example, the startup or shutdown operations of a dispatchable energy source.
The high-level MGCC is based on a hybrid model predictive control (HMPC) framework, which is an optimization-based control technique where the main goal is to minimize an objective function through a prediction horizon subjected to dynamic models and operational constraints. The term hybrid is used because the optimization problem has real and binary decision variables; in other words, it is a mixed integer optimization problem (MIOP). As in other MPC approaches, a receding horizon strategy is used, and only the first control action of the horizon is applied to the plant, recalculating the optimal solution at the next sample time with the new information available from the process [4]. In a general form, the optimization problem proposed in this work is presented in Equation (1), where the objective function is the sum of the dispatchable sources, external grid and storage systems cost functions expressed over the prediction horizon (N) and subjected to the dynamic models of the storage systems and virtual stocks (F 3 (x, u)), to the dispatchable sources and storage systems operational constraints, and to the energy market rules and energy balance constraints (respectively, F 1 (x, u) and F 2 (x, u)). In this problem, u is the vector of all continuous and binary decision variables related to each equipment operation, x is the vector of state variables, k in Equations (1) and (2) represents the time and j represents the instant in the prediction horizon N. The detailed descriptions of all parts of the optimization problem are provided in Section 3. The general energy balance is presented in the following equation and aims to ensure that all produced energy at each time instant of the prediction horizon will be used by the loads, stored in the storage systems or injected in the grid. This is an equality constraint where the sum of the power of all network components must equal zero: where nl is the number of loads, nnd is the number of non-dispatchable energy sources, ns is the number of storage systems, and nd is the number of dispatchable energy sources. The nomenclature P sub denotes the power of the MG component indicated by the subscript sub. Remark 1. The studied system has both AC and DC loads, but here a power factor near 1 was considered, and only the active power was used in the energy balance. Although this assumption simplifies the equations, it does not conceptually affect the results of the study.
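As a rough illustration of the receding-horizon idea (not the paper's MIOP, whose full formulation appears in Section 3), the toy below optimizes a single on/off dispatchable source over a short binary horizon by exhaustive search and applies only the first decision at each step; all numbers are hypothetical placeholders.

```python
import itertools

def stage_cost(on, demand, tariff, gen_cost=0.30, gen_power=3.0):
    # Cost of one sample: fuel cost if the source is on, plus grid
    # purchases for whatever demand the source does not cover.
    grid = max(demand - on * gen_power, 0.0)
    return on * gen_cost * gen_power + tariff * grid

def solve_horizon(demands, tariffs):
    # Brute-force the binary on/off sequence over the horizon.
    best_cost, best_seq = float("inf"), None
    for seq in itertools.product([0, 1], repeat=len(demands)):
        cost = sum(stage_cost(o, d, t) for o, d, t in zip(seq, demands, tariffs))
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

demands = [2.0, 4.0, 5.0, 3.0, 2.0, 6.0]          # kW, hypothetical
tariffs = [0.45, 0.45, 0.65, 0.95, 0.95, 0.45]    # R$/kWh, hypothetical
N = 3
applied = []
for k in range(len(demands) - N + 1):
    plan = solve_horizon(demands[k:k + N], tariffs[k:k + N])
    applied.append(plan[0])   # receding horizon: apply the first action only
print(applied)
```

The real problem replaces the brute-force search with a mixed integer solver and the scalar model with the full set of MLD dynamics and constraints.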
GridLab Microgrid The microgrid (MG) studied in this work is located in the µGridLab at the Federal University of Santa Catarina (Brazil) and is composed of a battery bank, a gas microturbine emulator, photovoltaic panels, a wind turbine emulator, DC loads and a connection with the main grid, as can be seen in Figure 2. The battery bank consists of 10 lithium-ion battery modules (model Beckett 8224S) [20] with approximately 3000 life cycles. The total capacity of the bank is 10 kWh and the charge and discharge powers are 5 kW and 10 kW, respectively. In order to simulate the behavior of a gas microturbine, an emulator of the Capstone model C30 [21] was developed, based on power electronics converters, with an apparent power of 30 kVA. The photovoltaic array consists of 10 panels (2 kW each), resulting in a maximum power of 20 kW. The wind turbine is emulated through the coupling of an electric motor and a generator with a capacity of 11 kW. The motor receives a profile that simulates the mechanical torque generated by the wind, and the generator transforms the mechanical torque into an electric energy profile. The adjustable DC loads can receive a power reference and vary between 0 and 35.5 kW. The microgrid also has a connection to the main grid, which can be switched on and off, thus allowing operation in grid-connected and islanded modes. Microgrid Economic Modeling and Control This section presents the economic modeling of each MG component, as well as the inclusion of the compensation rules in the optimization problem of the MPC. The modeling is based on the operational costs and it represents (roughly) the real cost of using each component; thus, minimizing the total cost function allows us to obtain the optimal operation point of the microgrid components. Hereafter, the modeling and operational aspects of each component will be depicted. Energy Balance The energy balance ensures that all energy produced is used by the loads, stored in the batteries or injected in the grid. This is an equality constraint where the sum of the power of all network components must equal zero. It is important to note that the solar panels and wind turbines are non-dispatchable energy sources and, from the control theory point of view, they act as disturbances in the system (MG).
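The balance equation itself did not survive extraction; given the sign conventions stated later (P grid > 0 when purchasing, P bat > 0 when charging), a hedged reconstruction of Equation (3) for the µGridLab components would be:

$$P_{solar}(k) + P_{wind}(k) + P_{turb}(k) + P_{grid}(k) - P_{bat}(k) - \sum_{i=1}^{nl} P^{i}_{loads}(k) = 0$$

That is, generation plus net grid exchange covers the loads and the battery charging at every sample k; the exact sign placement is an assumption consistent with the surrounding text.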
In Equation (3), P i loads (k) is the power consumed by the loads, P solar (k) is the power produced by the photovoltaic panels, P wind (k) is the power generated by the wind turbine, P grid (k) is the power consumed/injected from/to the grid, P bat (k) is the charge/discharge battery power and P turb (k) is the microturbine generated power. Costs of Operation Points In order to model the costs of the microturbine operation points, the datasheet of the Capstone turbine model C30 was used [21]. The data presented in the datasheet can be approximated, with small error, by the following linear model, which represents the cost per hour (Cost turb ) of the microturbine as a function of the generated power (P turb ). The linear coefficient LC = 1527.1428 l/h represents the microturbine hourly fuel consumption without energy generation, while the angular coefficient AC = 311.4286 l/kWh represents the hourly fuel consumption rate during the generation process. Finally, NGT is the natural gas tariff, NGT = 0.0015719 R$/l, defined according to Resolution No. 098 of the Public Services Regulation Agency of Santa Catarina [22,23], used in Equation (4) in order to express the economic turbine operation cost in Brazilian currency. It is important to take into account that the coefficients in Equation (4) were obtained for environmental conditions of 15 °C and 1 atm [24]. Microturbine Conversion Efficiency It is well known that the pressure varies with altitude; however, the MG is located in a coastal city with an altitude close to zero, so the variation of the microturbine conversion efficiency related to the pressure will be neglected. On the other hand, considerable ambient temperature oscillations directly influence the microturbine efficiency. In order to consider the ambient temperature variations, the weight factor Eff(Temp) is introduced, and it varies according to the efficiency vs. temperature curve presented in Figure 3. The curve has been normalized so that Eff(Temp) is 1 at 15 °C. This normalization is given by multiplying the original efficiency curve by (1/0.26), where 0.26 represents the efficiency at 15 °C [21].
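Since Equation (4) itself was lost in extraction, the sketch below implements the linear cost model implied by the coefficients above; treating the temperature weight Eff(Temp) as a divisor of the fuel consumption is an assumption made only for this illustration.

```python
LC = 1527.1428    # l/h: fuel consumption with zero generated power
AC = 311.4286     # l/kWh: fuel consumption rate during generation
NGT = 0.0015719   # R$/l: natural gas tariff

def turbine_cost_per_hour(p_turb_kw, eff_temp=1.0):
    """Hourly microturbine operating cost in R$ for power p_turb_kw (kW).

    eff_temp is the normalized efficiency weight from Figure 3
    (1.0 at 15 C); its exact placement in Equation (4) is assumed here.
    """
    fuel_l_per_h = (LC + AC * p_turb_kw) / eff_temp
    return fuel_l_per_h * NGT

print(turbine_cost_per_hour(20.0))   # cost at 20 kW under 15 C conditions
```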
Startup and Shutdown Costs In this work, the microturbine startup (SUC = 0.037023 R$) and shutdown (SDC = 0.185115 R$) total operation costs were considered. To obtain these values in local currency, the maximum startup time (2 min) and shutdown time (10 min) are multiplied by NGT and LC, as during these periods the microturbine consumes gas without generating energy. Objective Function The objective function of the microturbine is a composition of the operating point, startup and shutdown costs. In this work, a sample time of 15 min was used for the high-level MGCC, which is longer than the startup and shutdown times. Thus, when the high-level MGCC decides to change the operational state of the microturbine, the low-level MGCC is responsible for implementing it, and this state can only be changed again at the next execution of the high-level MGCC. Finally, the microturbine objective function is defined in Equation (5), which considers three terms: one for the normal operation, one for the shutdown process and one for the startup process. In Equation (5), δ turb (k) is a binary variable that represents the microturbine on/off state at time instant k, P turb (k) is the power of the turbine, T s is the sampling period, and the variables SU(k) and SD(k) represent the startup and shutdown of the turbine, respectively, being defined by the constraints in Equation (6). To understand the behavior of Equation (6), let us first assume the startup scenario with δ turb (k − 1) = 0 and δ turb (k) = 1. In this case, the first and fourth constraints are active, and the startup of the turbine is performed at a cost of SUC. In the opposite scenario, δ turb (k − 1) = 1 and δ turb (k) = 0, the second and third constraints are active and the turbine shutdown is carried out with cost SDC. In other words, minimizing the variables SU(k) and SD(k) implies minimizing changes in the operational condition of the turbine, weighted by their economic costs. Note that if δ turb (k) = δ turb (k − 1), these costs are not considered in the sample time. Remark 2. Note that for the startup process, the cost will include the SUC cost and the normal operation consumption for one sample. This is not exact, but the introduced error was not significant. This occurs because the sampling time is longer than the startup period, and both terms depend on the variable δ turb (k). The way to avoid this effect is to introduce one more binary decision variable with a set of MLD constraints, but since the error is negligible and startup procedures only happen during a few specific periods of the day, it does not justify augmenting the complexity of the optimization problem. Thus, the authors understand that the current solution is the one that provides the best compromise between the modeling and computational complexity and the desired operation of the MG. The microturbine power limits are given by m turb δ turb (k) ≤ P turb (k) ≤ M turb δ turb (k), where M turb is the maximum power value (30 kVA) and m turb is the minimum power value (0 kVA). When δ turb (k) = 1, the power stays between the minimum and maximum limits, while in the case of δ turb (k) = 0, the turbine power must be zero.
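The explicit form of Equation (6) did not survive extraction; a standard MLD encoding consistent with the first/fourth and second/third constraint pattern described above (the ordering of the four inequalities is an assumption) is:

$$SU(k) \geq \delta_{turb}(k) - \delta_{turb}(k-1), \quad SD(k) \geq \delta_{turb}(k-1) - \delta_{turb}(k), \quad SD(k) \geq 0, \quad SU(k) \geq 0$$

Under minimization of SUC·SU(k) + SDC·SD(k), SU(k) is driven to 1 exactly at a startup transition and SD(k) to 1 exactly at a shutdown, so each state change is charged once.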
Virtual Stocks Dynamic Model In order to represent the behavior of the virtual stocks, the following models were used: E OP (k + 1) = E OP (k) + γ 1 T s P OP OP (k) + γ 2 T s P I OP (k) + γ 3 T s P P OP (k) + T s P injected grid OP (k); E I (k + 1) = E I (k) + γ 4 T s P OP I (k) + γ 5 T s P I I (k) + γ 6 T s P P I (k) + T s P injected grid I (k); E P (k + 1) = E P (k) + γ 7 T s P OP P (k) + γ 8 T s P I P (k) + γ 9 T s P P P (k) + T s P injected grid P (k) (8) where E sub1 (k) represents the energy of each stock related to the three white tariff spots. The subscript sub1 is used to generalize the problem, since the same logic applies to the three tariff spot periods (OP, off-peak; I, intermediate; P, peak). P sub2 sub3 (k) represents the power consumed from the virtual stock with sub-index sub3 during the tariff spot with sub-index sub2, where sub2 and sub3 run over the three spots. The power injected to the grid at each time spot is expressed by P injected grid sub1 (k). The constants γ n are the conversion factors given by the relation between the costs (C sub1 ) of the tariff spots. A simplified scheme of the virtual stocks operation is shown in Figure 4. Minimum Value Constraints To ensure that the stocks assume only non-negative values, the constraints E sub1 (k) ≥ 0 were added to the problem. Usage of the VS Corresponding to the Actual Spot Period The virtual stocks, in accordance with the energy market rules and to guarantee the compensation of the injected energy at the current tariff spot, must be subject to the following constraints according to the tariff spot: Off-peak: P I OP = P I I = P I P = P P OP = P P I = P P P = 0; Intermediate: P OP OP = P OP I = P OP P = P P OP = P P I = P P P = 0; Peak: P OP OP = P OP I = P OP P = P I OP = P I I = P I P = 0.
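The text states that the γ n factors follow from the ratios of the spot prices; the sketch below assumes the specific convention in which energy drawn from stock sub3 during spot sub2 is worth the price ratio C sub3 /C sub2 (both the convention and the prices are assumptions, since Table 1 is not reproduced here).

```python
# Hypothetical spot prices (R$/kWh); the Table 1 values were not extracted.
C = {"OP": 0.45, "I": 0.65, "P": 0.95}

def gamma(spot, stock):
    """Conversion factor applied when consuming, during tariff spot
    `spot`, energy credited in virtual stock `stock` (assumed convention:
    price ratio C[stock] / C[spot])."""
    return C[stock] / C[spot]

# One kWh of peak credit yields more than 2 kWh of off-peak energy:
print(gamma("OP", "P"))   # 0.95 / 0.45 ~ 2.11
print(gamma("P", "OP"))   # 0.45 / 0.95 ~ 0.47
```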
To understand the operation of these constraints, let us look at the scenario where the MGCC is operating in the off-peak period. In this case, if the MGCC decides to consume the energy from the off-peak VS, it will be done through the variable P OP OP (k); if it is necessary to use energy from the intermediate and peak VSs, the MGCC will use the variables P OP I (k) and P OP P (k), respectively. During this spot period, the constraint imposed by Equation (11) works to guarantee that the operation is done as expected. It is important to ensure that the right decision variable is used, since each of them is related to the conversion factor γ n that represents the energy flow among the VSs. The same reasoning holds for the other two spot periods. This modeling framework is directly connected with assumption 4 of Section 2.1.2. Priority among the VSs To represent the existing priority in energy compensation, given by the consumption of the virtual stock relative to the current spot period, the use of the following logical connectives is required: Off-peak: E OP (k) > 0 → P OP I (k) = 0 and P OP P (k) = 0; Intermediate: E I (k) > 0 → P I OP (k) = 0 and P I P (k) = 0; Peak: E P (k) > 0 → P P OP (k) = 0 and P P I (k) = 0. This logical priority defines which stock will be used and is represented by the multiplexers shown in Figure 4. These constraints only allow using the VSs of other spots if the VS of the current spot is out of energy. The presented conditions are related to assumption 4 of Section 2.1.2. In order to enable the use of logical propositions by the MPC, the MLD framework is used [25]. In addition, since some constraints must have different behaviors according to the current tariff spot, the vector of binary variables defined in Equation (17) is used, which is decided outside the optimization problem and passed as a parameter. These vectors, which have a size equal to the prediction horizon, are conveniently used in some of the constraints presented later. If at time instant k the MGCC is operating in the spot period OP, for example, vector OP (k) = 1; otherwise, vector OP (k) = 0. An auxiliary binary variable δ sub1 (k) is used, which must equal zero if and only if E sub1 (k) is greater than zero, so that consumption from the other VSs can be blocked while the current stock still holds energy. To model this logical condition in the form of a constraint, the constants M e and m e , which represent the maximum and minimum values of E sub1 (k), respectively, are used. It should be emphasized that in this particular case m e = 0, since the value of the stored energy credit may not be negative, and M e → ∞ because there is no ceiling for the value of the energy credit in the compensation system. In order to ensure the condition of equivalence, it must be verified that the logical implication holds in both directions. To guarantee that E sub1 (k) > 0 → δ sub1 (k) = 0, the following inequality is considered: E sub1 (k) − ε ≤ M e (1 − δ sub1 (k)) (19) where ε is a positive constant close to zero. Note that this constraint meets the implication by forcing δ sub1 (k) to zero when E sub1 (k) assumes positive values. There is ambiguity when E sub1 (k) = ε, but it does not interfere with the desired implication, since E sub1 (k) − ε ≤ M e is still valid. To ensure that δ sub1 (k) = 0 → E sub1 (k) > 0, the following inequality is considered: E sub1 (k) ≥ ε (1 − δ sub1 (k)) (20) Note that this constraint meets the implication by forcing E sub1 (k) to assume positive values when δ sub1 (k) is equal to zero.
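As a quick sanity check of the two inequalities above (which are reconstructions, since the extracted text lost the equations), the snippet below enumerates the feasible (E, δ) combinations; it is only an illustration of the big-M logic, with placeholder constants.

```python
M_e, eps = 1e4, 1e-6   # placeholder big-M and tolerance values

def feasible(E, delta):
    c19 = E - eps <= M_e * (1 - delta)   # E > eps forces delta = 0
    c20 = E >= eps * (1 - delta)         # delta = 0 forces E >= eps
    return c19 and c20

for E in (0.0, 1.0):
    for delta in (0, 1):
        print(f"E={E:g} delta={delta} -> feasible: {feasible(E, delta)}")
# Only (E=0, delta=1) and (E=1, delta=0) come out feasible,
# i.e. delta = 0 exactly when the stock holds energy.
```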
With the auxiliary variables z sub2 sub3 (k) introduced below, the virtual stock dynamics are rewritten as: E OP (k + 1) = E OP (k) + γ 1 T s P OP OP (k) + γ 2 T s z I OP (k) + γ 3 T s z P OP (k) + T s P injected grid OP (k); E I (k + 1) = E I (k) + γ 4 T s z OP I (k) + γ 5 T s P I I (k) + γ 6 T s z P I (k) + T s P injected grid I (k); E P (k + 1) = E P (k) + γ 7 T s z OP P (k) + γ 8 T s z I P (k) + γ 9 T s P P P (k) + T s P injected grid P (k) (21) It is important to note that both δ sub1 (k) and P sub2 sub3 (k) are decision variables, so their multiplication generates a bilinearity in the problem. To solve this issue, we adopt the approach introduced in [21], where the auxiliary variable z sub2 sub3 (k) = δ sub1 (k) P sub2 sub3 (k), subject to the set of constraints given in Equation (22), is used to represent the bilinear behavior, where M stock and m stock represent, respectively, the maximum and minimum values of P sub2 sub3 (k). When δ sub1 (k) = 0, we have z sub2 sub3 (k) = 0 and P sub2 sub3 (k) = 0, which represents that, at spot period sub2, there is no consumption from the VS sub3. The case of δ sub1 (k) = 1 implies that z sub2 sub3 (k) = P sub2 sub3 (k) and m stock ≤ P sub2 sub3 (k) ≤ M stock , which means that, at spot period sub2, consumption from the VS sub3 exists. At this point, it is important to point out that the possible combinations of the values of the variables sub1, sub2 and sub3 are depicted in Table 2. Cost Function The portion of the cost function relative to the grid can be expressed by Equation (23), where P grid (k) represents the grid power and C sub1 represents the cost of electricity at the time, which depends on the current tariff spot presented in Table 1. Positive values of P grid (k) imply purchasing energy from the grid, while negative values imply injecting power into the grid. To relate the injected power P injected grid sub1 (k) with the grid power P grid (k), a set of MLD constraints is defined, where δ aux (k) is a binary auxiliary variable, z aux (k) is a continuous auxiliary variable, and M grid and m grid represent, respectively, the maximum and minimum values of P grid (k). When δ aux (k) = 0, we have z aux (k) = 0 and P injected grid sub1 (k) = 0, which represents no power injection in the grid, but allows consumption of power from the grid. The case of δ aux (k) = 1 implies that z aux (k) = P grid (k) and P injected grid sub1 (k) = −z aux (k), which means that a power injection into the grid is happening. In order for the energy injection not to be accounted in all stocks simultaneously, the tariff spot vectors are used, so that only the VS referring to the current spot period receives the injected power. Finally, Equation (21) must be rewritten as: E OP (k + 1) = E OP (k) + γ 1 T s P OP OP (k) + γ 2 T s z I OP (k) + γ 3 T s z P OP (k) + T s P injected grid OP (k) vector OP (k); E I (k + 1) = E I (k) + γ 4 T s z OP I (k) + γ 5 T s P I I (k) + γ 6 T s z P I (k) + T s P injected grid I (k) vector I (k); E P (k + 1) = E P (k) + γ 7 T s z OP P (k) + γ 8 T s z I P (k) + γ 9 T s P P P (k) + T s P injected grid P (k) vector P (k).
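Equation (22) survived only as a fragment; the checker below verifies one consistent reconstruction of the standard big-M product linearization against the δ = 0 and δ = 1 behaviors stated in the text (the bounds are placeholders).

```python
def encodes_product(z, P, delta, M=10.0, m=0.0):
    # Candidate reconstruction of Eq. (22): enforces z = delta * P, with
    # P itself gated to zero when delta = 0, as the text describes.
    return (m * delta <= z <= M * delta
            and m * delta <= P <= M * delta
            and P - M * (1 - delta) <= z <= P - m * (1 - delta))

print(encodes_product(5.0, 5.0, 1))   # True: delta=1 forces z == P
print(encodes_product(0.0, 5.0, 0))   # False: delta=0 forbids P > 0
print(encodes_product(0.0, 0.0, 0))   # True: delta=0 forces z == P == 0
```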
Compensated Power and Variables Interlocking The variable related to the total compensated power P comp (k), which represents the total power used from the VSs, does not appear in the cost function since its cost is zero. However, it appears in the constraints as a decision variable: P comp (k) = P OP OP (k) + z OP I (k) + z OP P (k) + P I I (k) + z I OP (k) + z I P (k) + P P P (k) + z P OP (k) + z P I (k). Furthermore, to ensure the interlocking between energy compensation and injection, that is, P comp (k) > 0 → P injected grid sub1 (k) = 0 and P comp (k) = 0 → P injected grid sub1 (k) ≥ 0, a set of constraints was added, where δ comp (k) is a binary variable and M grid is the maximum value of P grid (k). When δ comp (k) = 0, there is no compensation from the VSs, but power injection is allowed; in the case δ comp (k) = 1, the opposite statement is true. In the same way, to ensure the interlocking between energy compensation and energy purchase, that is, E OP (k) + E I (k) + E P (k) > 0 → P grid (k) ≤ 0, further constraints are added: when δ e (k) = 0, there is no compensation and purchase from the grid is allowed; otherwise, the opposite situation rules. Objective Function To estimate the cost of using the batteries, two important aspects were considered: the loss of energy in the conversion and the degradation of the battery. The cost of using the battery, considering the purchase cost, the efficiency and the total number of cycles, is modeled in terms of CB total , the purchasing cost of the battery, N cicles , the number of complete cycles, η C and η D , the battery charge and discharge efficiencies, P bat (k), the battery power at instant k, and T s , the sampling period. Moreover, in order to allow the use of different values for the charge/discharge efficiencies, the auxiliary variables z bat (k) (continuous) and δ bat (k) (binary) were introduced, so that z bat (k) = δ bat (k)P bat (k). The MLD constraints that guarantee the desired operation for the battery mode switching are expressed in terms of M bat (a positive value) and m bat (a negative value), respectively the maximal and minimal values allowed for P bat . Note that z bat (k) and δ bat (k) allow us to switch the cost function according to the battery charge/discharge modes. The variable δ bat (k) is associated with the charge (δ bat (k) = 1) or discharge (δ bat (k) = 0) mode. Note that when P bat (k) > 0, we have δ bat (k) = 1 and z bat (k) = P bat (k), which means that the battery is charging, and therefore the charge efficiency is used. Otherwise, P bat (k) < 0 implies that δ bat (k) = 0 and z bat (k) = 0, and the discharge efficiency is used. The efficiency of a battery depends on the charge or discharge speed. In this work, we chose to use near-nominal efficiencies for lithium-ion batteries [26,27], considering a charge efficiency of 92% and a discharge efficiency of 95%. Constraints The optimization of the battery cost must be subject to the dynamics of the battery behavior, which is expressed through constraints. In order to model these dynamics, the MLD framework was used, since it allows representing logical conditions in a form suitable for the MPC constraints. The modeling was based on the one presented in [4]. The battery bank state of charge is represented by a model in which SOC(k) is the state of charge at time k and Cap is the battery capacity.
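The SOC equation itself was not preserved; the sketch below is one reconstruction consistent with the efficiency-switching logic just described (charge efficiency multiplies stored energy, discharge efficiency divides drawn energy). Treat it as an assumption, not the paper's exact model.

```python
ETA_C, ETA_D = 0.92, 0.95   # charge / discharge efficiencies (from the text)
CAP = 10.0                  # battery bank capacity, kWh
TS = 0.25                   # sampling period, h (15 min)

def soc_step(soc_percent, p_bat_kw):
    """Advance the state of charge (%) by one sample for power p_bat_kw
    (positive = charging, negative = discharging)."""
    if p_bat_kw >= 0:
        delta_e = ETA_C * p_bat_kw * TS      # conversion losses on charge
    else:
        delta_e = p_bat_kw * TS / ETA_D      # extra energy drawn on discharge
    return soc_percent + 100.0 * delta_e / CAP

soc = 50.0
for p in (5.0, 5.0, -10.0):                  # charge twice, then discharge
    soc = soc_step(soc, p)
print(round(soc, 2))                         # ~46.68
```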
Simulation Results The simulations were developed in MATLAB [28], where the controllers were described using the YALMIP modeling language [29] and the Gurobi solver [30] was used. Real data referring to the year 2019 were used to generate the vectors of solar incidence, temperature and wind speed. The sampling time used for the MPC was 15 min and the prediction horizon adopted was 24 samples (equivalent to 6 h). The control horizon is equal to the prediction horizon, since the optimization problem does not have severe temporal or processing constraints, considering the computation time of the solution (less than 1 s). In this section, three different simulation scenarios are evaluated. The first scenario seeks to represent the nominal operating conditions of the µGridLab microgrid, while the second scenario analyses the results if there is a 50% increase in expected demand. Finally, in the third scenario, islanded operation of the MG under nominal conditions is studied and compared to scenario 1. Scenario 1 In this scenario, a one-day period simulation is performed to observe the results graphically, and a complete month (a period of 30 days) is simulated with the intention of performing the monthly economic analysis. Figures 5-8 show all the variables of interest during one day of operation. From Figure 5, where the generated and demanded power (kW) are shown, it can be observed that in many periods the MGCC chooses to use the microturbine to generate power. This is due to the low cost of using the microturbine (compared to the grid power shown in Figure 7b); thus, the controller prioritizes energy generation through the microturbine instead of consuming power from the network. The microgrid also uses all the power generated by the wind turbine and the PV (photovoltaic) panels. It is important to point out that in some periods of the day the load demand is greater than the generated power, but in these moments, as can be seen in Figure 7b, the compensated power from the grid helps to feed the loads. Figure 6 shows the SOC (state of charge) and the power in the batteries. As can be seen, the battery power presents some oscillations near the operating point, as the battery plays the role of aiding the system in damping the effect of the renewable energy variations and load changes, so that the energy balance is guaranteed and the variation of the energy exchanged with the external grid is minimized. It is important to note that in the period between approximately 12 and 19 h these oscillations are large, in accordance with the ones observed in the solar generation due to the strong presence of clouds, which causes abrupt variations of irradiation. On sunny days, it is not possible to observe this phenomenon.
The constraints imposed in the controller ensure that the SOC stays between 20% and 100%, as recommended for the correct operation of the batteries. Figure 7a shows the energy of the virtual stocks for the three tariff spots. Initially, the stocks have zero energy; thus, it is assumed that in this scenario there is no energy credit from previous months. This would be the case for the MG's first month of operation in the compensation system. It should be noted that in this scenario there are several moments at which energy is injected for subsequent use. The energy injection at peak hours is the one that generates more offset credits. The relationship between the compensated power, the grid power and the injected power in this scenario is shown in Figure 7b. It can be seen that the controller does not choose to purchase power from the grid, but only compensates for credits obtained, since the grid power is always negative, which means that power is being injected into the grid. This fact can be explained by the high price of energy in relation to the energy generated by the microturbine and the renewable sources. Figure 8 shows the value of the objective function during the first day of scenario 1. As expected, the minimum values of the objective function occur in the period between 11 h and 19 h, when the microturbine is not operating most of the time and there is a peak in the photovoltaic generation as well as in the wind turbine. With respect to economic costs, the operating data obtained for the one-day period are presented in the second column of Table 3. It is noted that at the end of the day there was an excess balance of compensated energy. The total operating cost of the microgrid during this day was R$ 221.84, but a credit of R$ 18.78 was generated for later use; this credit is calculated as the sum of the products of the energy stored in each virtual stock by its respective tariff (see Table 1). In order to analyze the monthly economic operation of this scenario, a simulation is performed for a period of 30 days under the same conditions, using the data referring to the first 30 days of January 2019. In this case, the operation data are presented in the third column of Table 3. Note that in this period there was more energy injection than compensation; in addition, the MGCC never decided to purchase energy, since the compensated energy has zero cost. At the end of the 30 days, there was R$ 19.57 of credit to be compensated in the upcoming months. The total cost of operation during the month, considering the use of the battery and the turbine, is R$ 8116.78. For comparison purposes, the energy cost is calculated as if all demand were supplied by the grid. In this case, the total operation cost is obtained by applying the cost of the conventional tariff to the energy demand (R$ 8755.27). This corresponds to a 7.86% increase in the energy bill with respect to the distributed generation plus white-tariff compensation system. Of course, the comparison with the use of grid power only serves to give the reader a better perception of the values, since a rigorous economic analysis would have to take into account the purchase cost of the turbine, the wind generator and the PV panels. This analysis is disregarded in this work since the studied MG already has these components.
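As a small numerical illustration of how such an end-of-day credit is valued (the stock levels and prices below are placeholders, not the figures of Table 1 or Table 3):

```python
tariff = {"OP": 0.45, "I": 0.65, "P": 0.95}   # R$/kWh, hypothetical
stock  = {"OP": 12.0, "I": 4.0, "P": 8.0}     # kWh held in each virtual stock

credit = sum(stock[s] * tariff[s] for s in stock)
print(f"credit to be compensated later: R$ {credit:.2f}")   # R$ 15.60
```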
Scenario 2 As a second scenario, a 50% increase in energy demand is analyzed. The curves obtained for this case have profiles similar to those obtained in the first scenario and, for this reason, they will not be presented. Simulating again the periods of 1 (one) day and 30 days gives the operation data shown in Table 4.
In fact, the MGCC behaves in the expected way in this more severe scenario as well, using all the power of the compensation system and the virtual stocks in order to take advantage of the conversion factors among the spots and achieve good economic efficiency. For comparative purposes, the equivalent cost of purchasing grid power for the 30-day case with a conventional tariff is R$ 12845.85. Thus, in this case, the proposed control strategy gives a 3.72% reduction in the energy bill. Scenario 3 Finally, in order to validate the controller and the sizing of the microgrid, a third scenario is simulated with the MG operating in islanded mode, that is, without the connection to the power grid. The 1 (one) day and 30-day results are given in Table 5 for the same loads as in scenario 1. Here, it is possible to compare scenarios 1 and 3, and, as expected, the MG is capable of operating in both modes. Analyzing Tables 3 and 5, it is possible to conclude that the controller works more efficiently in the connected mode, with a 4.76% reduction in the energy bill compared with the islanded mode. This happens because there is an extra economic element, the virtual stocks and compensation system, allowing more freedom in the optimization. The cost of battery use was lower in the islanded mode because the MGCC opted for the use of the turbine to assist in damping short-term oscillations. Conclusions This work proposed a MGCC strategy for the simulation and control of microgrids considering the Brazilian energy compensation system. The Brazilian energy compensation rules are different from the ones used in other countries; thus, it is necessary to develop specific control strategies to take advantage of the particular processes and reduce the electricity bill. Therefore, the proposed MGCC considers in its formulation all the particularities of the local market and allows the user to extract more profit from the installed EMS. The considered microgrids include renewable energy sources as well as conventional ones. The proposed strategy consists of an MPC using mixed-integer programming, with the main contribution and innovation being the modeling and implementation of a set of virtual stocks that allow one to optimally manage the power consumption considering the Brazilian energy compensation rules. Through the analysis of the results, it is possible to conclude that the behavior of the proposed MGCC is in agreement with the expected one, taking management decisions that allow the operation of the microgrid in a more efficient way and reducing the energy bill in the studied scenarios. Although the results were obtained for a particular microgrid, the methodology is general and can be applied to other microgrids, and could also be simply adapted to other energy compensation systems. Regarding applicability, for companies selling EMS systems, the use of an advanced control system such as the one proposed here gives a competitive edge without increasing costs too much. On the other hand, users can reduce their energy bills and have a faster payback. As for future work, the authors plan to implement a demand management layer and to take stochastic energy management techniques into account. Conflicts of Interest: The authors declare no conflict of interest.
13,664
2020-07-07T00:00:00.000
[ "Engineering", "Economics", "Environmental Science" ]
$Z_c(3900)$: what has been really seen? The $Z^\pm_c(3900)/Z^\pm_c(3885)$ resonant structure has been experimentally observed in the $Y(4260) \to J/\psi \pi\pi$ and $Y(4260) \to \bar{D}^\ast D \pi$ decays. This structure is intriguing since it is a prominent candidate for an exotic hadron. Yet, its nature is unclear so far. In this work, we simultaneously describe the $\bar{D}^\ast D$ and $J/\psi \pi$ invariant mass distributions in which the $Z_c$ peak is seen using amplitudes with exact unitarity. Two different scenarios are statistically acceptable, in which the origin of the $Z_c$ state is different. They correspond to using an energy-dependent or energy-independent $\bar D^* D$ $S$-wave interaction. In the first one, the $Z_c$ peak is due to a resonance with a mass around the $D\bar D^*$ threshold. In the second one, the $Z_c$ peak is produced by a virtual state which must have a hadronic molecular nature. In both cases the two observations, $Z^\pm_c(3900)$ and $Z^\pm_c(3885)$, are shown to have the same common origin, and a $\bar D^* D$ bound state solution is not allowed. Precise measurements of the line shapes around the $D\bar D^*$ threshold are called for in order to understand the nature of this state. The resonant-like structure Z c (3900) ± was first seen simultaneously by the BESIII and Belle collaborations [1,2] in the J/ψπ spectrum produced in the e + e − → Y(4260) → J/ψπ + π − reaction. An analysis [3] based on CLEO-c data for the e + e − → ψ(4160) → J/ψπ + π − reaction confirmed the presence of this structure as well, although with a somewhat lower mass. Under a different name, Z c (3885) ± , a similar structure, with quantum numbers favored to be J P = 1 + , has also been reported by the BESIII collaboration [4,5] in the D * D spectrum of e + e − → D * Dπ at different e + e − center-of-mass (c.m.) energies [including the production of Y(4260)]. Because there is a small difference in the central values of the masses and, in particular, the widths of these two structures, whether they correspond to the same state is still unknown. As will be shown in this Letter, the two structures have indeed the same common origin. We generically denote it here as Z c . Evidence for a neutral partner of this structure was first reported in Ref. [3], and more recently in Ref. [6]. If this resonant structure happens to be a real state, as argued in Ref. [7], it is one of the most interesting hadron resonances, since it couples strongly to charmonium and yet it is charged; thus, it is something clearly distinct from a conventional cc state: its minimal constituent quark content should be four quarks, ccud (for Z + c ). A discussion of possible internal structures is given in Ref. [8]. It has been interpreted as a molecular D * D state [9][10][11], as a tetraquark of various configurations [12], or as a simple kinematical effect [13], although this last possibility has been ruled out in Ref. [7]. Distinct consequences of some of these different models have been discussed in Ref. [14]. It has also been searched for in lattice QCD, though with negative results so far [15]. Being a candidate for an explicitly exotic hadron, the Z c (3900) definitely deserves a detailed and careful study. Indeed, the last years have witnessed an intense theoretical activity aiming at understanding the actual nature of this state. What is still missing, however, is a simultaneous study of the two reactions analysed by BESIII and mentioned above in which the Z c structure has been seen.
The goal of this work is to perform such a study and, from it, to extract information about this seemingly resonant, intriguing structure. We will first set up a D * D, J/ψπ coupled-channel formalism, considering that the Z c emerges from the D * D interaction, and that its coupling to J/ψπ proceeds through the former intermediate state. The resulting T-matrix will enter the calculation of the amplitudes for the reactions Y(4260) → J/ψππ, D * Dπ. We will assume that the Y(4260) state is dominantly a D 1 (2420)D + c.c. bound state [9,16] and use the ideas of Ref. [9] to compute the relevant amplitudes. Let us denote by 1 and 2 the J/ψπ and D * D channels, respectively, with I = 1 and J PC = 1 +− (here and below, the C-parity refers to the neutral member of the isospin triplet). The coupled-channel T-matrix can be written as T = [1 − VG] −1 V, where G is the loop function diagonal matrix; the matrix elements of the potential involve mass factors, where m in is the mass of the nth particle in channel i, included to account for the non-relativistic normalization of the heavy meson fields. The J/ψπ → J/ψπ interaction strength is known to be tiny [17,18], and we neglect the direct coupling of this channel, C 11 = 0. Such a treatment was also adopted in Ref. [19] in a coupled-channel analysis of the Z b states. For the inelastic D * D → J/ψπ S-wave interaction, we make the simplest possible assumption, which amounts to taking it to be a constant, C 12 ≡ C. In a momentum expansion, the lowest order contact potential for the D * D → D * D transition is simply a constant as well, denoted by C 22 ≡ C 1Z [20]. However, it can be shown that even with two coupled channels, no resonance can be generated in the complex plane above threshold with only constant potentials. For this reason, we will also allow some energy dependence for the V 22 term, introducing a new parameter b that multiplies a term linear in the total c.m. energy E. The new term is of higher order in the low-momentum expansion in comparison with C 1Z . The interactions considered here need to be regularized in some way, and hence we employ a standard Gaussian regulator [21], where the c.m. momentum squared of channel i is denoted by q i 2 . We adopt a relativistic (non-relativistic) definition of the latter for the i = 1 (i = 2) channel, with µ the reduced mass of the D * D system. Since the interaction for this channel is derived from a non-relativistic field theory, we take cutoff values Λ 2 = 0.5 − 1 GeV [20]. At the Z c energy, the c.m. momentum of the J/ψπ channel is q 1 ≃ 0.7 GeV, and hence we use a different cutoff for it. For definiteness, we set Λ 1 = 1.5 GeV, although the specific value is not very relevant, as we have checked, since changes in the cutoff can be reabsorbed in the strength of the transition potential controlled by the undetermined C 12 low energy constant. With this convention for the regulator, the loop functions in the matrix G are computed with ω n = √(l 2 + m 2 1n ). The D * D channel loop function G 2 is computed in the non-relativistic approximation. For the e + e − annihilations at the Y(4260) mass, both BESIII and Belle have reported the Z c structure in the J/ψπ final state [1,2], but only BESIII provides data for the D * D channel [4,5]. Hence, for consistency, we will only study the BESIII data. In particular, we will consider the most recent double-D-tag data of Ref. [5], in which the D * is reconstructed from several decay modes, whereas in Ref. [4] the presence of the D * is only inferred from energy conservation.
Hence, in the former data the background in the higher-energy $\bar D^* D$ invariant mass regions is much reduced. For definiteness, we will consider the reported spectra of the $D^{*-}D^0$ and $J/\psi\pi^-$ final states, and set $m_{D^*} = m_{D^{*-}}$, $m_D = m_{D^0}$, and $m_\pi = m_{\pi^\pm}$. This implicitly assumes that isospin breaking effects are neglected. These data are taken at a c.m. energy equal to the nominal $Y(4260)$ mass, so the decays to $\pi(J/\psi\pi, \bar D^* D)$ proceed mainly through the formation of this resonance. The mechanisms for the $Y(4260)$ decays are shown in Fig. 1. The $Y D_1 \bar D$ coupling, whose value is not important here to describe the line shapes, is taken from Ref. [9], where the $Y(4260)$ is considered to be dominantly a $\bar D D_1$ + c.c. bound state. The subsequent $D_1 D^* \pi$ coupling can also be found there. We denote by $M_1$ ($M_2$) the amplitude for the $Y \to J/\psi\pi^+\pi^-$ ($Y \to D^{*-}D^0\pi^+$) decay, and by $s$ and $t$, respectively, the invariant masses squared of $J/\psi\pi^-$ and $J/\psi\pi^+$ ($D^{*-}D^0$ and $D^{*-}\pi^+$) in the first (second) decay. Up to a common irrelevant constant, both amplitudes can be written (after the appropriate sum and average over polarizations) in terms of the $T$-matrix elements introduced above, where $\theta$ denotes the relative angle between the two pions in the $Y(4260)$ rest frame, and $I_3(s)$ is the scalar three-meson non-relativistic loop function, for which details can be found in Ref. [22]. One first notes that $M_1(s, t)$ is symmetric under $s \leftrightarrow t$. The term with $\alpha$ represents diagram (1a), and it acts as a non-resonant background amplitude, added coherently to the rest of the diagrams. It has the same dependence on the external momenta and polarization vectors as that of diagrams (1b)-(1e). The first term in $M_1(s, t)$ is the amplitude of diagrams (1b)+(1c), the second term is the one from diagrams (1d)+(1e), and the last one is their interference. In $M_2$, the first summand of the first term corresponds to diagram (2a) in Fig. 1, whereas the second one, which includes the $\bar D^* D$ final state interaction (FSI), is the contribution from diagrams (2b)+(2c). Diagrams (2a)-(2c) proceed through the formation of the $D_1$, but we also consider some non-resonant $\bar D^* D\pi$ production by means of diagram (2d). The $\bar D^* D$ rescattering effects in this last diagram give rise, in turn, to diagrams (2e) and (2f). The term with $\beta$ in Eq. (8) represents these latter three diagrams. The parameters $\alpha$ and $\beta$ in Eqs. (7) and (8) are unknown. Note that the effect of the $D_1$ width, $\Gamma_{D_1} = (25 \pm 6)$ MeV, is negligible here, since $m_{D_1} + m_D - \Gamma_{D_1}/2$ is well above 4.26 GeV. The spectrum for both reactions can be obtained as a contribution from the amplitudes ($A_i$) plus a background ($B_i$): $\frac{dN_i}{d\sqrt{s}} = K_i \int_{t_{i,-}(s)}^{t_{i,+}(s)} |A_i(s,t)|^2\, dt + B_i(\sqrt{s})$, where $t_{i,\pm}(s)$ are the limits of the $t$ Mandelstam variable for the decay mode $i$. The two global constants $K_i$ could be related if the event selection efficiencies of the two spectra analyzed in this work were known. If the latter were roughly the same, then one would have $K_1 \simeq 5K_2$ (due to the different bin sizes). If both parameters are considered free, a large correlation arises between $K_1$ and $C$, since $K_1|C|^2$ basically determines the total strength of the event distribution $N_1$. This is due to the fact that the influence of $C$ on the shape of the $T$-matrix elements, and thus on the signal of the $Z_c$ in the spectrum, is small. To obtain a reasonable estimate of this coupling constant, we consider a further experimental input from Ref. 
[4], and estimate this ratio as the ratio of the background-subtracted areas of each physical spectrum around the $Z_c$ mass, namely in the range $\sqrt{s} = (3900 \pm 35)$ MeV. In principle, the double-$D$-tag technique ensures that all the $\bar D^* D$ spectrum events in Ref. [5] contain a $\bar D^* D$ pair, so there is no background due to wrong identification of the final state. There could be, however, contributions to the spectrum from partial waves higher than the $S$-wave. In any case, an inspection of Fig. 2 shows that the tail of the spectrum is small, and we set $B_2 = 0$. We shall come back to this point later on. For the $J/\psi\pi$ spectrum, $B_1$ is parameterized with a symmetric smooth threshold function, as used in the experimental work of Ref. [1], $B_1(\sqrt{s}) = B_1\,[(\sqrt{s} - m_{1-})(m_+ - \sqrt{s})]^{d_1}$, with $m_{1-} = m_{J/\psi} + m_\pi$ and $m_+ = m_Y - m_\pi$, i.e., the limits of the available phase space for the reaction. The parameters $B_1$ and $d_1$ are free. We have three free parameters directly related to our $T$-matrix ($C_{1Z}$, $C$, and $b$), and six ($B_1$, $d_1$, $\alpha$, $\beta$ and $K_{1,2}$) related to the background and the overall normalization. These nine free parameters are adjusted to reproduce the data of Refs. [1,5] (a total of 104 data points). In this work, two errors are given. The first error is statistical, and it is computed from the Hessian matrix of the $\chi^2$ merit function. The second error is systematic, and to estimate it we have considered two different uncertainty sources. First, we have varied the $J/\psi\pi$ background function [Eq. (13)] and used other smooth functions. The second source of uncertainties is related to the tail of the $\bar D^* D$ spectrum, and it is estimated as follows. The central value of the parameters is computed by fitting this spectrum up to $\sqrt{s} = 4025$ MeV. Then, we vary this limit between $\sqrt{s} = 3975$ MeV and $m_+$ (the maximum allowed invariant mass), and repeat the fit. In all cases, we find statistically acceptable fits, and the difference between the new fitted parameters and the central ones is used to determine the systematic error. The same method is applied to estimate the systematic error of our predictions for the spectra and for the mass and width of the $Z_c$ state, to be presented below. We perform four different fits, corresponding to the two cases of keeping the parameter $b$, which controls the energy dependence of the $\bar D^* D$ potential, free or set to zero, and, for each of these, choosing $\Lambda_2$ to be 0.5 or 1 GeV [20]. Results from the four fits are compiled in Table I, where only the parameters that are directly related to our $T$-matrix are shown. One first notes that the reduced $\chi^2$ is very close to unity in all four cases. Indeed, the description of the experimental spectra is very good in all cases, as can be seen in the top panels of Fig. 2, where the results from one of the fits ($b$ free and $\Lambda_2 = 0.5$ GeV) are shown and confronted with the data. In particular, the effect of the $Z_c$ is nicely reproduced in the $\bar D^* D$ spectrum above threshold and in the $J/\psi\pi$ spectrum around the $\bar D^* D$ threshold. Its reflection can also be appreciated in the $J/\psi\pi$ distribution around 3.5 GeV. The other fits lead to results similar to those shown in Fig. 2. The largest differences can be found in the $\bar D^* D$ spectrum between the $b \neq 0$ and $b = 0$ cases, which are compared for $\Lambda_2 = 0.5$ GeV in the bottom right panel of the same figure. In any case, we see that we are able to simultaneously reproduce the two available BESIII data sets related to the $Z^\pm_c(3900)/Z^\pm_c(3885)$ state with a single structure for the very first time. The numerical values obtained for the $Z_c$ pole are shown in Table II. For the $b \neq 0$ fits, this pole corresponds to a resonance with a mass around the $\bar D^* D$ threshold. 
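The fitting strategy just described (a signal shape plus a smooth threshold background, with statistical errors taken from the curvature of the $\chi^2$ function) can be illustrated with a schematic, self-contained sketch. The Breit-Wigner-like toy signal, the background form, and all numerical values below are placeholders, not the coupled-channel amplitude or the data of this analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy chi^2 fit: N(m) = signal + smooth symmetric threshold background.
def spectrum(m, K, m0, gamma, b1, d1, m_lo=3.24, m_hi=4.06):
    signal = K / ((m**2 - m0**2)**2 + (m0 * gamma)**2)
    background = b1 * ((m - m_lo) * (m_hi - m))**d1
    return signal + background

rng = np.random.default_rng(7)
m = np.linspace(3.25, 4.05, 60)                     # GeV
truth = (1.2, 3.90, 0.040, 40.0, 0.5)
sigma = np.full(m.size, 2.0)                        # per-bin uncertainty
data = spectrum(m, *truth) + rng.normal(0.0, sigma)

popt, pcov = curve_fit(spectrum, m, data, p0=(1.0, 3.89, 0.05, 35.0, 0.6),
                       sigma=sigma, absolute_sigma=True)
chi2 = np.sum(((data - spectrum(m, *popt)) / sigma) ** 2)
print(f"chi2/dof = {chi2 / (m.size - len(popt)):.2f}")
for name, val, err in zip(("K", "m0", "Gamma", "B1", "d1"),
                          popt, np.sqrt(np.diag(pcov))):
    print(f"{name:5s} = {val:8.4f} +/- {err:.4f}")  # statistical errors
```

The systematic-error recipe described above would then amount to repeating such a fit with alternative background forms or fit ranges and taking the spread of the fitted parameters.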
For the case $b = 0$, however, the situation is quite different. While the description of the experimental data is still quite good, with $\chi^2/\mathrm{d.o.f.} \in [1.3, 1.4]$, the pole in this case is located below threshold, with a small imaginary part (around 8 MeV), and on the (01) Riemann sheet. If the $J/\psi\pi$ channel were now switched off ($C = 0$), this pole would move onto the real axis in the unphysical Riemann sheet of the elastic amplitude $T_{22}$. In this sense, the obtained pole does not qualify as a resonance, and we interpret it as a virtual or anti-bound $D\bar D^*$ state. It does not correspond to a particle in the sense that its wave function, unlike that of a bound state, is not localized. However, it produces observable effects at the $D\bar D^*$ threshold similar to those produced by a near-threshold resonance or bound state. Indeed, scattering experiments alone, in principle, cannot distinguish between virtual and bound states, but the difference is not a purely academic one, since they can produce different line shapes in inelastic open channels [24]. The line shapes of a virtual state and a near-threshold resonance are different, since the former is peaked exactly at the threshold while the latter, in principle, peaks above it. This can be seen in the bottom left panel of Fig. 2, where the $J/\psi\pi^-$ spectra for the two fits, $b = 0$ and $b \neq 0$, are shown (for the case $\Lambda_2 = 0.5$ GeV). Although the two curves are different, each one would approximately lie within the error band of the other. Clearly, very precise data with a good energy resolution and small bin size are necessary to distinguish between them. Without taking sides, and given that both natures for the $Z_c$ structure (resonance or virtual state) arise in fits of good quality, it must be stated that the experimental information available at this time cannot fully discriminate between the two scenarios and, hence, claims about the $Z_c$ structure should be made with caution. Nevertheless, the resonance scenario seems to be statistically slightly preferred. It is also clear that more experimental information is needed to elaborate on the nature of the $Z_c$. In particular, a spectrum of $J/\psi\pi$ with narrower bins would be highly desirable, in order to have a good resolution on its line shape. If it is finally shown to be a virtual state, then it cannot be a tetraquark, since it does not correspond to a normal particle, and it can only have a hadronic molecular nature, in the sense that it appears only because of the $D\bar D^*$ interaction. Summarizing, we have studied the two decays ($Y(4260) \to J/\psi\pi^+\pi^-$, $Y(4260) \to D^{*-}D^0\pi^+$) in which the $Z^\pm_c$ resonant-like structure is seen. We have presented the first simultaneous study of the invariant mass distributions of the $J/\psi\pi$ and $\bar D^* D$ channels with fully unitarized amplitudes. We find that these data sets are well reproduced in two different scenarios. In the first one, in which there is an energy dependence in the $\bar D^* D \to \bar D^* D$ potential, the $Z_c$ appears as a dynamically generated $\bar D^* D$ resonance. In the second one, however, when the aforementioned energy dependence is not allowed, it appears as a virtual state, with the pole located below the $D\bar D^*$ threshold. In any case, it is demonstrated that both data sets can be reproduced with only one $Z_c$ state, so that the two experimentally observed structures, $Z^\pm_c(3900)$ and $Z^\pm_c(3885)$, seen in different channels, are shown to correspond to the same state. Moreover, neither fit allows a $\bar D^* D$ bound state solution. 
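The qualitative difference between the two line shapes can be illustrated with a self-contained toy example, using the textbook S-wave effective-range amplitude rather than the coupled-channel machinery of this analysis; the parameter values below are illustrative only.

```python
import numpy as np

# Toy illustration of the statement above: a virtual state produces a
# spectrum peaked essentially at threshold, while a near-threshold
# resonance peaks above it.  Amplitude (units GeV, GeV^-1):
#   t(E) = 1 / (-1/a + r k^2/2 - i k),  k = sqrt(2 mu (E - E_th)).

MU, E_TH = 0.967, 3.872        # approximate Dbar*D reduced mass, threshold

def rate(E, a, r):
    """Phase-space-weighted |t|^2, ~ dN/dE for the open channel."""
    k = np.sqrt(2.0 * MU * (E - E_TH))
    t = 1.0 / (-1.0 / a + 0.5 * r * k**2 - 1j * k)
    return k * abs(t)**2

E = E_TH + 1e-3 * np.arange(1.0, 31.0, 5.0)        # 1..26 MeV above threshold
virtual = rate(E, a=-10.0, r=0.0)     # a < 0, r = 0: pole at k = i/a (virtual)
resonance = rate(E, a=-1.0, r=-54.0)  # Re(denominator) = 0 near k ~ 0.19 GeV
for e, v, s in zip(E, virtual, resonance):
    print(f"E - E_th = {1e3 * (e - E_TH):4.0f} MeV   "
          f"virtual: {v:7.2f}   resonance: {s:7.2f}")
```

In this toy, the virtual-state curve peaks essentially at threshold (slightly above it once two-body phase space is included), whereas the resonance-like parameter set peaks around 20 MeV higher, mirroring the statement made above.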
Since the virtual state can only be of hadronic-molecule type, it is particularly important to discriminate between these two scenarios. For that purpose, one needs a very precise measurement of the line shapes around, and in particular slightly above, the $D\bar D^*$ threshold. Such a measurement is foreseen once more data are collected at BESIII.
4,638.2
2015-12-11T00:00:00.000
[ "Physics" ]
Triple Higgs Coupling as a Probe of the Twin-Peak Scenario In this letter, we investigate the case of a twin peak around the observed 125 GeV scalar resonance, using di-Higgs production processes at both the LHC and $e^{+}e^{-}$ Linear Colliders. We show that at both the LHC and a Linear Collider the triple Higgs couplings play an important role in identifying this scenario, and also that this scenario can be distinguished from any Standard Model extension by extra massive particles which might modify the triple Higgs coupling. We also introduce a criterion that can be used to rule out the twin-peak scenario. In July 2012, the ATLAS and CMS collaborations [1,2] showed the existence of a Higgs-like resonance around 125 GeV, confirming the cornerstone of the Higgs mechanism, which predicted such a particle long ago. All Higgs couplings measured so far seem to be consistent, to some extent, with the Standard Model (SM) predictions. Moreover, in order to establish the Higgs mechanism as responsible for the phenomenon of electroweak symmetry breaking, one still needs to measure the self-couplings of the Higgs and thereby reconstruct its scalar potential. Recent measurements at the LHC show that there is still uncertainty on the Higgs mass: $m_h = 125.3 \pm 0.4(\mathrm{stat.}) \pm 0.5(\mathrm{syst.})$ GeV for CMS [3] and $m_h = 125.0 \pm 0.5$ GeV for ATLAS [4] from the diphoton channel, and $m_h = 125.5 \pm 0.37(\mathrm{stat.}) \pm 0.18(\mathrm{syst.})$ GeV from combined channels. Despite this relatively large uncertainty, a scenario of two degenerate scalars around the 125.5 GeV resonance is neither excluded nor confirmed [5]. In the twin-peak scenario (TPS), it is assumed that there are two scalars $h_{1,2}$ with almost degenerate masses around 125 GeV. The couplings of the twin-peak Higgs bosons to SM particles, $g_{h_iXX}$, are simply scaled with respect to the SM values by $\cos\theta$ (for $h_1$) and $\sin\theta$ (for $h_2$), where $\theta$ is a mixing angle, such that we have the following approximate sum rule: $g^2_{h_1XX} + g^2_{h_2XX} \simeq (g^{SM}_{hXX})^2$, where $X$ can be any of the SM fermions or vector bosons. Consequently, single Higgs production processes, such as gluon-gluon fusion at the LHC, Higgs-strahlung, vector boson fusion, and $t\bar tH$ at the LHC and $e^+e^-$ Linear Colliders (LC), will obey the same sum rule. The summation of the event numbers (both for production and decay) of the two possible cases will be identical to the SM case, since $\cos^2\theta + \sin^2\theta = 1$. However, for processes with di-Higgs final states ($pp(e^-e^+) \to hh + X$), the triple Higgs couplings may play an important role, and therefore these processes can be useful to distinguish between the cases of one scalar or two degenerate ones around the observed 125 GeV resonance. It is well known that the triple Higgs couplings can, in principle, be measured directly at the LHC with the high-luminosity option through double Higgs production, $pp \to gg \to hh$ [6]. Such a measurement is rather challenging at the LHC, and for this purpose several parton-level analyses have been devoted to this process. It turns out that the $hh \to b\bar b\gamma\gamma$ [7], $hh \to b\bar b\tau^+\tau^-$ [7,8] and $hh \to b\bar bW^+W^-$ [8,9] final states are very promising at high luminosity. 
Recently, CMS reported a preliminary result on the search for resonant di-Higgs production in the $b\bar b\gamma\gamma$ channel [10]. The LC also has the capability of measuring, with better precision, the Higgs mass and some of the Higgs couplings, together with the Higgs self-coupling [11]. Using the recoil technique for the Higgs-strahlung process, the Higgs mass can be measured with an accuracy of about 40 MeV [11]. We note that at the LHC with high luminosity the Higgs mass can be measured with about 100 MeV uncertainty, which is quite comparable to $e^+e^-$ colliders. The triple coupling can be extracted from $e^+e^- \to Zh^* \to Zhh$ at 500 GeV, and even better from $e^+e^- \to \nu\bar\nu h^* \to \nu\bar\nu hh$ at $\sqrt{s} > 800$ GeV. In this regard, the LHC and $e^+e^-$ LC measurements are complementary [12]. In Ref. [13], the authors provided a tool to distinguish the two-degenerate-states scenario from the single-Higgs one. The approach of [13] applies only to models which feature modifications of the $h \to \gamma\gamma$ rate with respect to the SM. However, according to the latest experimental results, for both ATLAS and CMS the diphoton channel seems to be rather consistent with the SM [3,4]. In this work we propose a new approach to distinguish the TPS. This approach is based on di-Higgs production, which is sensitive to the triple Higgs coupling, a quantity that is modified in the majority of SM extensions. Here, as an example, we consider the Two-Singlets Model proposed in [14], where the SM is extended with two real scalar fields $S_0$ and $\chi_1$, each one odd under a discrete $Z_2$ symmetry. In what follows, we denote $c = \cos\theta$ and $s = \sin\theta$. The quartic and triple couplings of the physical fields $h_i$ are given in the appendices of [15]. In our analysis we require that: (i) all the dimensionless quartic couplings be $\ll 4\pi$, so that the theory remains perturbative; (ii) the two scalar eigenmasses be in agreement with recent measurements [3,4] (we have checked that, for the Two-Singlets Model, the splitting between $m_1$ and $m_2$ can be of the order of 40 MeV); (iii) the stability of the ground state be ensured; and (iv) the DM mass $m_0$ be allowed to be as large as 1 TeV. In our work, we consider di-Higgs production processes at the LHC and the $e^+e^-$ LC whose cross sections could be significant, namely $\sigma_{LHC}(hh)$ and $\sigma_{LHC}(hh + t\bar t)$ at 14 TeV, $\sigma_{LC}(hh + Z)$ at 500 GeV, and $\sigma_{LC}(hh + E_{miss})$ at 1 TeV. All these processes include at least one Feynman diagram with a triple Higgs coupling. In the TPS, the total cross section gets contributions from the final states $h_1h_1$, $h_1h_2$ and $h_2h_2$. However, each contribution should be weighted by the modified $h_{1,2}$ couplings, since the Higgs is detected through its SM final-state decays. Therefore the quantity to be compared with the standard scenario is the correspondingly weighted sum of the $h_ih_j$ cross sections, which can be parameterized as $\sigma_{TPS}(hh+X) = r_1\,\sigma_{aa} + r_2\,\sigma_{ab} + r_3\,\sigma_{bb}$, with $\sigma_{aa} + \sigma_{ab} + \sigma_{bb} = \sigma_{SM}(hh + X)$, where $\sigma_{aa}$, $\sigma_{bb}$ and $\sigma_{ab}$ correspond to the cross section contributions coming from triple-Higgs diagrams (a), non-triple-Higgs diagrams (b), and the interference term in the amplitude, respectively. The coefficients $r_i$ are dimensionless parameters that receive contributions from the final states $h_ih_j$ and depend on the mixing angle $\theta$ and the triple Higgs couplings $\lambda^{(3)}_{ijk}$. The SM case is recovered by taking $s = 0$ and $r_i = 1$. In the TPS, the amplitudes for the di-Higgs production processes have the SM Feynman diagrams with the Higgs field $h$ replaced by $h_i$. 
To compute the parameters $r_i$, we first estimate how each amplitude gets modified with respect to the corresponding SM one for each case $h_ih_j$. For example, in the case of $h_1h_1$ production, there are two types of diagrams: (1) those that involve the triple scalar interactions $h_1h_1h_1$ and $h_2h_1h_1$, with couplings equal to the SM one times factors of $c\,\lambda^{(3)}_{111}/\lambda^{SM}_{hhh}$ and $s\,\lambda^{(3)}_{112}/\lambda^{SM}_{hhh}$, respectively; we denote the total amplitude of these two contributions by $M^{(a)}$; and (2) those with no triple Higgs coupling, whose amplitude, denoted by $M^{(b)}$, is given by the SM one scaled by a factor of $c^2$. Therefore, the amplitudes $M^{(a,b)}$ (where $a$ ($b$) stands for triple-Higgs (non-triple-Higgs) Feynman diagrams) for di-Higgs production can be written in terms of their corresponding SM values as $M^{(a)} = [c\,\lambda^{(3)}_{111} + s\,\lambda^{(3)}_{112}]\,M^{(a)}_{SM}/\lambda^{SM}_{hhh}$ and $M^{(b)} = c^2\,M^{(b)}_{SM}$ for the $h_1h_1$ case, where $\lambda^{SM}_{hhh}$ is the SM triple Higgs coupling calculated at one loop. The parameters $r_i$ are then given by quadratic combinations of these rescaling factors, summed over the final states, with terms of the form $c^4\,[c\,\lambda^{(3)}_{111} + s\,\lambda^{(3)}_{112}]^2 + s^4\,[c\,\lambda^{(3)}_{122} + s\,\lambda^{(3)}_{222}]^2$ (plus the mixed $h_1h_2$ contribution), normalized to $(\lambda^{SM}_{hhh})^2$. Thus, the values of $r_i$ quantify by how much each di-Higgs process deviates from the SM case. In Fig. 1, the $r_i$ are shown as functions of the mixing angle: for small mixing angle the $r_i$'s are approximately equal to unity, whereas for $|s| > 0.8$ the parameter $r_1$ ($r_2$) becomes larger than unity (negative). This behavior could lead to an enhancement/reduction of the cross section, depending on the sign of the interference contribution, $\sigma_{ab}$, to the total cross section. This means that the measurement of the ratio $\xi(hh + X) = \sigma_{TPS}(hh + X)/\sigma_{SM}(hh + X)$ could be very useful to confirm or exclude this scenario, based on the deviation of any of the parameters $r_i$ from unity. For instance, the ratio $\xi(hh + X)$ can also deviate from unity if the SM is extended with massive particles (SM+MP) that couple to the Higgs doublet and contribute to the triple Higgs coupling as well as to the Higgs mass. In this case, $r_1 = (1 + \Delta)^2$, $r_2 = 1 + \Delta$ and $r_3 = 1$, where $\Delta$ represents the relative enhancement of the triple Higgs coupling due to SM+MP. As we will show later, our scenario can be clearly distinguished from the case of SM+MP by combining the ratio (5) for different processes. In Table I, we give the values of $\sigma_{aa}$, $\sigma_{ab}$ and $\sigma_{bb}$ for the corresponding di-Higgs production processes. We note that their contributions to the LHC process $pp \to hh$ and to the LC one $e^+e^- \to Zhh$ seem to be uncorrelated, which makes the Higgs triple coupling useful to probe this scenario and distinguish it from SM+MP. For the benchmarks considered previously in Fig. 1, we illustrate in Fig. 2 the production cross sections of di-Higgs at the $e^+e^-$ LC and the LHC, and in Fig. 3 the ratio $\xi$. As can be seen, in the TPS the cross sections of the processes $pp \to hh + t\bar t$ and $e^-e^+ \to hh + Z$ are always reduced, while for $pp \to hh$ and $e^-e^+ \to hh + E_{miss}$ they can be enhanced or reduced, depending on the mixing angle. Now let us discuss the possibility of disentangling the TPS from the SM+MP. According to the ratios $\xi(hh)$ and $\xi(hh + t\bar t)$ (Fig. 3, left), the TPS coincides with the SM+MP only in narrow regions of the triple-Higgs-coupling relative enhancement, $\Delta \sim -0.5$, $-0.7$ and $\Delta \sim -1.7$, while for the ratios $\xi(hh + Z)$ and $\xi(hh + E_{miss})$ (Fig. 3, right) the TPS coincides with the SM+MP only for $\Delta \sim -2.2$. Therefore, by measuring these quantities at both the LHC and the $e^+e^-$ LC, it is possible to confirm/exclude the TPS, since a coincidence between the TPS and the SM+MP cannot take place in both measurements simultaneously. 
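As a small numerical sketch of how the ratio $\xi$ can be combined across processes, the following Python fragment evaluates $\xi = r_1 f_{aa} + r_2 f_{ab} + r_3 f_{bb}$ for the SM+MP case. Only the relations $r_1 = (1+\Delta)^2$, $r_2 = 1+\Delta$, $r_3 = 1$ are taken from the text; the fractional decompositions $f_{aa}$, $f_{ab}$, $f_{bb}$ are hypothetical placeholders standing in for the Table I values, which are not reproduced here.

```python
# xi(hh+X) = sigma_TPS / sigma_SM, written with fractional contributions
# f_aa + f_ab + f_bb = 1 of triple-Higgs diagrams, non-triple-Higgs
# diagrams and their interference (the interference may be negative).

def xi(r1, r2, r3, f_aa, f_ab, f_bb):
    return r1 * f_aa + r2 * f_ab + r3 * f_bb

def xi_sm_plus_mp(delta, f_aa, f_ab, f_bb):
    """SM + massive particles rescaling the triple coupling by (1 + delta)."""
    return xi((1.0 + delta) ** 2, 1.0 + delta, 1.0, f_aa, f_ab, f_bb)

# hypothetical decompositions for two different di-Higgs processes
processes = {"pp -> hh":    (0.3, -0.2, 0.9),
             "e+e- -> Zhh": (0.1,  0.2, 0.7)}

for delta in (-1.7, -0.5, 0.0, 0.5):
    values = [xi_sm_plus_mp(delta, *f) for f in processes.values()]
    print(f"Delta = {delta:+.1f}: " +
          ", ".join(f"{p}: xi = {v:.2f}" for p, v in zip(processes, values)))
```

A genuine coincidence with the TPS would require the same $\xi$ in every process at a single value of $\Delta$, which is exactly the criterion exploited above.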
Moreover, if the observed 125 GeV scalar resonance is a twin peak, then one needs to measure (5) for three well-chosen di-Higgs production processes (either at the LHC, the $e^+e^-$ LC, or both) in order to deduce the values of the three parameters $r_i$, while any remaining di-Higgs production processes (at both the LHC and the $e^+e^-$ LC) could be used to confirm/exclude this scenario. In fact, by studying all the di-Higgs production channels at both the LHC and the $e^+e^-$ LC, one can not only confirm/exclude this scenario, but also distinguish it from models in which only one type of process is modified by new physics, such as new sources of missing energy in $e^-e^+ \to hh + E_{miss}$ [17], new colored scalar singlet contributions to $pp \to hh$ (or $hh + t\bar t$) [18], or the presence of a heavy resonant Higgs [19]. In order to show whether this scenario can be tested at colliders, we consider three benchmarks and compare the di-Higgs distributions (of the di-Higgs invariant mass, as an example) with the SM ones. The corresponding values of the ratios $r_i$ and $\xi_i$ are given in Table II, and in Table III we present the expected numbers of events at both the LHC and the LC. We see that for benchmark $B_2$ the number of events is significantly larger than in the SM for the channels $pp \to 2b2\tau$ at the LHC and $e^-e^+ \to 4b + E_{miss}$ at the LC, while it is reduced for the processes $pp \to 4b + t\bar t$ and $e^-e^+ \to 4b + Z$. For benchmark $B_1$, the number of events in the processes $pp \to 2b2\tau$ and $e^-e^+ \to 4b + E_{miss}$ is SM-like, but it is reduced for the processes $pp \to 4b + t\bar t$ and $e^-e^+ \to 4b + Z$. For benchmark $B_3$, the number of events is reduced for all the considered channels. In Fig. 4, we illustrate the di-Higgs invariant mass distribution ($M_{h,h}$) for the process $e^-e^+ \to hh + E_{miss}$. Clearly, the TPS can be easily distinguished from the SM, especially in the case where $|\sin\theta| > 0.2$, i.e., far from the decoupling limit. However, the full confirmation of the TPS requires extending the investigation to other di-Higgs production channels, such as $hhjj$, $hhW^\pm$, $hhZ$ and $hhtj$ at the LHC [20] and the $e^+e^-$ LC [11]. In conclusion, we have investigated the case of a twin peak at the observed 125 GeV scalar resonance, and we have shown that, by considering different di-Higgs production processes at both the LHC and the $e^+e^-$ LC, this scenario can be clearly distinguished from the SM and from the SM extended by massive fields. It has also been shown that, in the case where the mixing between singlet and doublet is small, the di-Higgs production processes mimic the SM predictions, and the scenario is therefore not distinguishable from the SM. Last but not least, we should note that this scenario could be realized within the SM plus a (real/complex) singlet scalar, or within any model with a larger scalar content in which two degenerate scalar eigenstates $h_{1,2}$ at 125 GeV together account for more than $\sim$90% of the couplings to the SM gauge fields and fermions, i.e., the sum rule (1) is fulfilled. If the measurements of di-Higgs processes at the LHC and/or the $e^+e^-$ LC turn out to be consistent with the SM predictions, then it will be very challenging to distinguish the TPS.
3,616.2
2014-07-20T00:00:00.000
[ "Physics" ]
T Cells in Gastric Cancer: Friends or Foes Gastric cancer is the second leading cause of cancer-related deaths worldwide. Helicobacter pylori is the major risk factor for gastric cancer. As for any type of cancer, T cells are crucial for the recognition and elimination of gastric tumor cells. Unfortunately, T cells, instead of protecting from the onset of cancer, can contribute to oncogenesis. Herein we review the different types, “friend or foe”, of T-cell response in gastric cancer. Introduction Gastric cancer (GC) is the second leading cause of cancer-related deaths worldwide [1]. The regional variations mainly reflect differences in the prevalence of Helicobacter pylori infection, which accounts for more than 60% of GC worldwide [2]. Helicobacter pylori infection is very common in human populations, but only 1% of infected individuals develop gastric cancer in response to persistent infection [2][3][4]. Certainly, Helicobacter pylori plays a crucial role [3][4][5], but host factors are also relevant for the outcome of the infection [6,7]: indeed, many studies have shown that the subset of patients who progress to gastric cancer appears to have an increased incidence of certain polymorphisms in proinflammatory cytokine genes, particularly IL-1β. The response of the body to a cancer is not a unique mechanism but has many similarities with inflammation and wound healing. Virchow's observation, in the nineteenth century, of a close association between cancer and inflammation [8] anticipated the current interest in the role of immunity in tumor pathogenesis. Recent insights into the dynamics of the tumor microenvironment have begun to clarify the mechanisms underlying tumor-promoting inflammation, which bears striking similarities to wounds that fail to heal [9,10]. Approximately 20% of cancer deaths worldwide are currently linked to unresolved infection or inflammation, with gastrointestinal malignancies representing a significant proportion of this disease burden: the most frequent associations are gastric carcinoma and Helicobacter pylori infection [3][4][5][6][7][8][9][10], colorectal carcinoma and inflammatory bowel disease [11], and pancreatic carcinoma and chronic pancreatitis [12]. Unresolved inflammation generates a microenvironment that facilitates cellular transformation and the propagation of invasive disease. Chronic tissue damage triggers a repair response, including the production of growth and survival factors, proangiogenic cytokines, and immune regulatory networks [7,8]. The release of inflammatory cell-derived reactive oxygen species, coupled with stimulated epithelial cell proliferation, creates an elevated risk of mutagenesis. In addition, crosstalk between neoplastic cells and immune elements throughout the smoldering inflammation perpetuates the transforming environment, which provides the evolving tumor cells with sufficient opportunity to acquire the mutations and epigenetic alterations that are necessary for cell autonomy. Inflammatory circuits can differ considerably in different tumors in terms of cellular and cytokine networks and molecular drivers. However, macrophages are a common and fundamental component of cancer-promoting inflammation. The drivers of macrophage functional orientation include tumor cells, cancer-associated fibroblasts, B cells, and T cells. It is not infrequent that gastric cancer patients with the same TNM stage follow different clinical courses. 
Histopathologic classifications, including the WHO classification [13], and molecular classifications [14] have also been applied for the prediction of patient survival, but their prognostic accuracies are controversial [13]. In addition, many attempts have been made to link molecular events in cancer cells with patient outcome, but none of these have proved to be clinically meaningful. As a consequence, new prognostic determinants, in conjunction with the TNM stage, are required to predict patients' clinical courses more reliably and precisely. Ever since the cancer immunosurveillance hypothesis was first proposed, the concept that the immune system can recognise and eliminate tumor cells has been energetically debated. Many experimental studies in rodents have shown that the immune system indeed functions to protect murine hosts against the development of both chemically induced and spontaneous tumors [15]. Furthermore, in humans, epidemiologic investigations indicate that immunocompromised patients have a higher probability of developing cancers of both viral and nonviral origin, which supports the cancer immunosurveillance concept [15]. In addition, current evidence indicates a positive correlation between the presence of lymphocytes in tumor tissue and increased patient survival. Recent studies have highlighted that several types of tumor-infiltrating lymphocytes (TIL) are associated with a better disease outcome in various human cancers [16][17][18], demonstrating that higher numbers of CD3+, CD8+, or CD45RO+ T cells in tumor tissue are significantly correlated with lower frequencies of lymph node metastasis, less disease recurrence, or longer patient survival. Wang et al. advocated that the type, density, and location of immune cells in colorectal cancer have prognostic values that are superior to and independent of those of the TNM classification [16]. However, tumors have developed a number of different strategies to escape immune surveillance, such as the loss of tumor antigen expression, the expression of Fas ligand (Fas-L) or CD200, which can induce apoptosis in activated T cells, the secretion of immunosuppressive cytokines such as IL-10 or TGF-β, the generation of regulatory T cells, and MHC downregulation or loss [19]. An alteration in HLA class I expression occurs in many cancers, such as gastric cancer [20], and potentially plays a role in the clinical course of the disease by enabling tumor cells to escape T-cell-mediated immune responses [21]. Recent observations suggest that the induction of T-cell apoptosis, coexisting with a downregulation of TCR-ζ molecules, may be responsible for T-cell dysfunction in patients with gastric cancer [22]. Within the TIL population there are also T regulatory cells (Tregs), which are able to inhibit the immune responses mediated by CD4+ and CD8+ T cells, preventing allograft rejection, graft-versus-host disease, and autoimmune disease [23,24]. In cancer individuals, Tregs have been found to downregulate effector functions against tumors, resulting in T-cell dysfunction in cancer-bearing hosts [25,26]. High numbers of Tregs have indeed been reported in patients with different types of cancer [27][28][29], including gastric and esophageal cancer [30]. These observations lead to the hypothesis that tumor-bearing hosts with advanced cancers have an increased population of Tregs, which might inhibit the tumor-specific T-cell response. 
The aim of this paper is to highlight the role of the different T-cell populations involved in the gastric cancer immune response and to evaluate their impact in blocking/promoting the development of gastric cancer. Protective Role of Cytotoxic T Cells A large body of evidence indicates that, in gastrointestinal malignancies, endogenous responses may inhibit tumor growth and perhaps modulate the clinical course of the disease. Many reports concern colorectal carcinoma: the type, density, and intratumoral location of the lymphocyte infiltrate have been shown to be more informative biomarkers than the TNM or Duke's classification [16]. In this context, dense infiltrates composed of cytotoxic memory T cells are strongly associated with a reduced risk of recurrence after surgery and increased overall survival. In particular, patients with early-stage cancers but an absence of T-cell infiltrates display poor outcomes, whereas subjects with significant tumor burdens but robust T-cell infiltrates show improved outcomes [23,31]. The prognostic role of tumor-infiltrating immune cells in patients with gastric cancer is largely unknown. Only a few reports have been published on the association between tumor-infiltrating immune cells and the clinical outcome in GC: Ishigami et al. [32] reported that patients showing a high level of natural killer cell infiltration in tumor tissues have a better prognosis, and Maehara et al. [33] showed that a high density of dendritic cell infiltration is associated with the absence of lymph node metastasis. On the other hand, the group of Fukuda [34] found no significant difference in survival between patients with marked or slight TIL infiltration. However, they detected TILs by immunostaining in GC patients, classified cases into groups with marked or slight TIL infiltration, and did not determine TIL numbers. T-cell-mediated adaptive immunity is considered to play a major role in antitumor immunity. In mouse models, it has been demonstrated that adaptive immunity prevents the development of tumors and inhibits tumor progression [35]. Accordingly, recent data [36] showed that in GC high densities of immune cells related to adaptive immunity (especially cytotoxic T cells and memory T cells) are associated with favorable survival, indicating that adaptive immunity plays a role in the prevention of tumor progression. TIL density is also correlated with the presence of lymph node metastasis, but not with the depth of tumor invasion. On the basis of this finding, the authors suspect that the prognostic role of TILs is mainly due to decreased metastatic potential, and they suggest the following possible mechanisms. First, expanding clones with metastatic potential usually contain larger amounts of aberrantly expressed proteins, including proteins that contribute to metastasis, which may act as tumor-associated antigens; as a result, these clones are more likely to be destroyed by in situ immune reactions. Second, a high density of TILs reflects a healthy immune system, and therefore, in patients with high TIL densities, the immune reactions occurring in lymph nodes may also act effectively against tumor cells that have drained into the lymph nodes. Third, the tumor burden of metastatic foci in lymph nodes is less bulky than that of primary foci, and thus metastatic foci are more likely to be susceptible to complete destruction by the immune reaction. 
Many experimental and clinical observations suggest that metastatic growth in mice and humans is more difficult to control through vaccination and T-cell responses, and new observations indicate that immunity against early and even preneoplastic lesions is stronger than against advanced tumors [37][38][39]. In some cases, enhanced anticancer T-cell activity may thus prevent metastasis rather than eliminate established metastatic nodules. Recently, Kim and colleagues [40] evaluated the antitumor activity of ex vivo expanded T cells against human GC. For this purpose, human peripheral blood mononuclear cells were cultured in IL-2-containing medium in anti-CD3 antibody-coated flasks for 5 days, followed by incubation in IL-2-containing medium for 9 days. The resulting populations were mostly CD3+ T cells (97%): 11% CD4+ and 80% CD8+. This heterogeneous cell population is also called cytokine-induced killer (CIK) cells. CIK cells produced high levels of IFN-γ and moderate levels of TNF-α, but no IL-2 or IL-4. At an effector-to-target cell ratio of 30:1, CIK cells destroyed 58% of MKN74 human GC cells. In addition, CIK cells at doses of 3 and 10 million cells per mouse inhibited MKN74 tumor growth in nude mouse xenograft assays by 58% and 78%, respectively. This study suggests that CIK cells may be used as an adoptive immunotherapy for gastric cancer patients. The adoptive immunotherapy of GC with CIK cells has also been reported in preclinical and clinical studies [41]. MHC-I-restricted CTLs from GC patients recognize tumor-associated antigens and react specifically against self tumor cells [41][42][43]. One tumor-specific antigen, the MG7 antigen, has shown great potential for predicting early cancer as well as for inducing immune responses to GC [44,45]. Using HLA-A-matched allogeneic GC cells to induce tumor-specific CTLs appears to be an alternative immunotherapy option for gastric cancer [38]. Also, CIK cells in combination with chemotherapy showed benefits for patients suffering from advanced gastric cancers [46,47]. The serum levels of tumor markers were significantly decreased, host immune function was increased, and the short-term curative effect, as well as the quality of life, was improved in patients treated with chemotherapy plus CIK cells compared with patients treated with chemotherapy alone [48]. Most studies analyzing the T-cell response to tumor-associated antigens (TAAs) have thus far emphasized CD8+ T cells. However, CD4+ T cells may play a crucial role in both the induction and the activation of TAA-specific memory CD8+ T cells toward cytotoxic effector T cells [49,50]. Recently, Amedei et al. [51] analyzed the functional properties of the T-cell response to different antigen peptides related to GC in patients with gastric adenocarcinoma. A T-cell response specific to different peptides of the GC antigens tested was documented in 17 out of 20 patients. Most of the cancer peptide-specific TILs expressed a T helper 1 (Th1)/T cytotoxic 1 (Tc1) profile and cytotoxic activity against target cells. The effector functions of cancer peptide-specific T cells obtained from the peripheral blood of the same patients were also studied, and the majority of peripheral blood peptide-specific T cells also expressed the Th1/Tc1 functional profile. In conclusion, in most patients with gastric adenocarcinoma, a specific type 1 T-cell response to GC antigens was detectable and would have the potential to hamper tumor cell growth. 
T Regulatory Cells in Cancer The physiological role of Tregs is protection against autoimmune diseases through the direct suppression of effector T cells reacting against "self", although they can also be involved in the control of immune responses against exogenous antigens [52]. Since most antigens expressed by neoplastic cells are "self" antigens [53], it is commonly considered that Tregs are also involved in the suppression of the immune response against tumors, favoring tumor escape from the immune response [54]. TILs consist of various antitumor effector and regulatory subsets. T-cell infiltration is associated with a good tumor prognosis in many types of cancer. CD8+ and CD4+ T lymphocytes are effector cells thought to be associated with a favorable prognosis [55]. While CD8+ T cells are the main effectors of antitumor immunity, CD4+ T cells induce and maintain the CD8+ response [56]. On the other hand, regulatory lymphocytes, a subset of T cells which inhibit the antitumor immune reaction, have been described as being associated with an unfavorable prognosis [57][58][59][60][61][62]. Treg cells are known to attenuate host antitumor immunity by suppressing T-cell proliferation, antigen presentation, and cytokine production [24]. As the tumor progresses and becomes established in the host, the population of TILs is skewed to favor regulatory T cells over helper CD4+ T cells [56]. Studies of regulatory T cells in GC are very few and have yielded conflicting results. Haas et al. [63] reported that stromal, but not intraepithelial, regulatory T cells are associated with a favorable prognosis. Mizukami et al. [64] reported that the localization pattern, but not the absolute number, of regulatory T cells was associated with the prognosis. In breast cancer, it has been demonstrated that a pathologic complete response to neoadjuvant chemotherapy is associated with the disappearance of tumor-infiltrating Foxp3+ regulatory T cells [65]. It has also been demonstrated that in some kidney tumors the Treg frequency is significantly higher in patients with a worse prognosis [66]. Conversely, several recent reports have highlighted a protective role of Tregs in cancer [67]. In renal cancer, Siddiqui et al. showed no correlation between tumor-infiltrating Treg frequency and disease progression [67]. The significance of regulatory T cells in GC as a poor prognostic factor has also been investigated. Perrone et al. [62] and Shen et al. [68] reported an unfavorable prognosis with increased intratumoral regulatory T cells. Two different studies [69,70] confirmed that GC cells can induce Treg development via TGF-β1 production; in particular, the level of serum TGF-β1 in GC patients (15.1 ± 5.5 ng/mL) was significantly higher than that of gender- and age-matched healthy controls (10.3 ± 3.4 ng/mL). Furthermore, the higher TGF-β1 level correlated with the increased population of CD4+Foxp3+ Tregs in advanced GC. A significantly higher frequency of CD4+Foxp3+ Tregs was observed in PBMCs cultured with the supernatant of MGC cells than with that of GES-1 cells (10.6% ± 0.6% versus 8.7% ± 0.7%). Moreover, using purified CD4+CD25− T cells, the authors confirmed that the increased Tregs were mainly induced through the conversion of CD4+CD25− naive T cells, and that the induced Tregs were functional and able to suppress the proliferation of effector T cells. Finally, they demonstrated that GC cells induced the increase in CD4+Foxp3+ Tregs via TGF-β1 production. 
Gastric cancer cells upregulated the production of TGF-β1, and the blockade of TGF-β1 partly abrogated the Treg phenotype. The second study [70] investigated the frequency of Foxp3+ Tregs within CD4+ cells in TILs, regional lymph nodes, and the PBL of GC patients. Furthermore, to elucidate the mechanisms behind Treg accumulation within tumors, the authors evaluated the relationship between CCL17 or CCL22 expression and the frequency of Foxp3+ Tregs in GC. CD4+CD25+Foxp3+ Tregs were counted by flow cytometry and evaluated by immunohistochemistry. Moreover, an in vitro migration assay using Tregs derived from GC was performed in the presence of CCL17 or CCL22. As a result, the frequency of Foxp3+ Tregs in TILs was significantly higher than that in normal gastric mucosa (12.4% ± 7.5% versus 4.1% ± 5.3%). Importantly, the increase in Tregs in TILs occurred to the same extent in early and advanced disease. Furthermore, the frequency of CCL17+ or CCL22+ cells among CD14+ cells within tumors was significantly higher than that in normal gastric mucosa, and there was a significant correlation between the frequency of CCL17+ or CCL22+ cells and Foxp3+ Tregs in TILs. In addition, the in vitro migration assay indicated that Tregs were significantly induced to migrate by CCL17 or CCL22. In conclusion, CCL17 and CCL22 within the tumor are related to the increased population of Foxp3+ Tregs, with this observation occurring already in early GC. Since Tregs may restrain the antitumor activity of cytotoxic T cells, the balance of effector and suppressor cells may also prove to be a decisive factor in patient outcome, and a recent study [30] contains the first evidence related to the prevalence of Tregs in gastric and esophageal cancer. The authors showed increased populations of CD4+/CD25+ cells among peripheral blood T cells from patients with gastric and esophageal cancers in comparison with healthy donors. Moreover, the population of CD4+/CD25+ cells in the TILs of GC was higher than that in normal gastric mucosa. The authors also confirmed that CD4+/CD25+ cells isolated from patient peripheral blood had a regulatory function, by evaluating cytokine production and suppressive activity. Moreover, the population of CD4+/CD25+ cells in the TILs of GC patients with advanced disease was significantly larger than that in the TILs of patients with early-stage disease or that in the intraepithelial lymphocytes of normal gastric mucosa. As a functional consequence, CD4+/CD25+ cells did not produce IFN-γ but produced large amounts of IL-10. Also, the proliferation of CD4+/CD25− cells was inhibited in the presence of CD4+/CD25+ cells in a dose-dependent manner, thus confirming that the CD4+/CD25+ cells have an inhibitory activity corresponding to Tregs. Similar results were obtained by Shen and colleagues [71], who demonstrated that increased CD4(+)CD25(+)CD127(low/−) regulatory T cells were also present in the tumor microenvironment, such as in the ascites fluid, tumor tissue, or adjacent lymph nodes. In addition, they found that CD4(+)CD25(+)CD127(low/−) Tregs suppressed effector T-cell proliferation and also correlated with advanced stages of GC, suggesting that CD4(+)CD25(+)CD127(low/−) can be used as a selective biomarker to enrich human Treg cells and also to perform functional in vitro assays in GC. Th17 in Cancer A new subset of Th cells, named Th17 cells, producing IL-17 alone or in combination with IFN-γ, has been identified [72]. 
Th17 cells may also secrete IL-6, IL-22, and TNF-α, and they play a critical role in protection against microbial challenges, particularly extracellular bacteria and fungi [73]. The role of Th17 cells in tumor immunology can be dichotomous: Th17 cells indeed seem to play a role both in tumorigenesis and in the eradication of an established tumor. Many laboratories have studied Th17 populations in the blood, and occasionally the tissues, of patients with various cancers. A potential protective effect of Th17 cells has been reported in cancers affecting mucosal tissues, such as the gut, lung, and skin [74,75]. An increase in Th17 cells has been detected in the peripheral blood, tumor microenvironment, and tumor-draining lymph nodes of several different human and mouse tumor types [76], such as ovarian cancer [77]. A recent study has shown that the number of Th17 cells is increased in the TILs from melanoma, breast, and colon cancers [78]; Th17 cells have also been suggested as a prognostic marker in hepatocellular carcinoma [79]. In contrast to the data on solid tumors, little is known about Th17 cells in hematological malignancies. Serum IL-17 levels were recently shown to be elevated in patients with multiple myeloma, especially in stages II and III of the disease. Thus, current data support a role for IL-17 in the promotion of angiogenesis and in the progression of multiple myeloma [80]. Th17 cell frequencies and IL-17 concentrations were significantly higher in peripheral blood samples from untreated patients with acute myeloid leukemia than in those from healthy volunteers, and were reduced in the former after chemotherapy [81]. On the other hand, some studies have found that the number of Th17 cells is decreased in several types of tumor. The levels of tumor-infiltrating Th17 cells and of IL-17 in ascites were reduced in a group of ovarian cancer patients with more advanced disease and seemed to positively predict outcome [82]. A low number of Th17 cells is present in the tumor microenvironment of non-Hodgkin's lymphoma, because malignant B cells may upregulate Treg cells and inhibit Th17 cells [83]. Th17 cells are present in much lower numbers in HER2-positive breast cancer patients than in either healthy controls or HER2-negative patients [84]. One study in prostate cancer demonstrated that Th17 cells infiltrating the tumor correlated inversely with the Gleason score [85], implying that Th17 cells mediate an antitumor effect in the development of prostate cancer. One group found that IL-17 promoted the tumorigenicity of human cervical tumors in nude mice but inhibited the growth of hematopoietic tumors, mastocytoma P815, and plasmocytoma in immunocompetent mice [86,87]. It is clear that Th17 cells have an ambiguous role in cancers: they can both encourage and inhibit cancer progression. It is well established that IL-17 acts as an angiogenic factor that stimulates the migration and cord formation of vascular endothelial cells in vitro and elicits vessel formation in vivo [88,89]. The mechanism of Th17 cell upregulation in tumors is not clear. Charles et al. found that TNF-α enhanced tumor growth via the inflammatory cytokine IL-17 in a mouse model of ovarian cancer and in patients with advanced cancer [90]. Su et al. demonstrated that tumor cells and tumor-derived fibroblasts secrete monocyte chemotactic protein 1 (MCP-1) and RANTES, which mediate the recruitment of Th17 cells [78]. More recently, Kuang et al. 
showed that tumor-activated monocytes promote the expansion of Th17 cells by secreting a set of key proinflammatory cytokines in the peritumoral stroma of hepatocarcinoma tissues [91]. It is clear that Treg cells efficiently suppress the function of antitumor CD8+ T cells [92,93]. A recent study reported that IL-2 regulates the balance between tumor Treg and Th17 cells by stimulating the differentiation of the former and inhibiting that of the latter in the tumor microenvironment [76]. The mechanism of the antitumor activity of Th17 cells remains largely unknown. One recent work has reported antitumor activity of IL-17 by means of a T-cell-dependent mechanism [87]. Two studies by Benatar et al. demonstrated that IL-17E, a cytokine with significant homology to IL-17, has antitumor activity in multiple tumor models, and that eosinophils and B cells are involved in the antitumor mechanism of action of IL-17E [94,95]. Th17 cells may contribute to protective human tumor immunity by inducing Th1-type chemokines and stimulating CXCL9 and CXCL10 production to recruit effector cells to the tumor microenvironment. A recent study has also demonstrated that almost half of the IL-17-producing CD4+ T cells isolated from hepatocarcinoma tissues simultaneously produced IFN-γ [91]. An interesting work by the Gaudernack group demonstrated that IL-17-secreting T-cell clones obtained from long-term survivors after immunotherapy also secreted IFN-γ, IL-4, IL-5, and IL-13 [96]. More recently, it was shown that Th17 cells and IL-17 participate in antitumor immunity by facilitating dendritic cell recruitment into tumor tissues and promoting the activation of tumor-specific CD8+ T cells [97]. Even more intriguingly, Th17 frequencies increased during treatment with trastuzumab in patients with breast cancer [82], or with the anti-cytotoxic T-lymphocyte-associated antigen 4 (CTLA4) antibody tremelimumab in metastatic melanoma [10]. Alvarez et al. demonstrated that dendritic and tumor cell fusions transduced with an adenovirus encoding CD40L eradicate B-cell lymphoma and induce a Th17-type response in a murine lymphoma model [98]. Moreover, Derhovanessian et al. observed a highly significant correlation between a higher frequency of IL-17-producing T cells before vaccination and a shorter time to metastatic progression after immunotherapy [99]. These data imply an important involvement of Th17 cells in the response to cancer immunotherapy (Table 1). Zhang et al. [100] preliminarily reported that, compared with healthy volunteers, patients with GC had a higher proportion of Th17 cells in peripheral blood. The increased prevalence of Th17 cells was associated with clinical stage, and in advanced disease increased populations of Th17 cells were present also in tumor-draining lymph nodes. Furthermore, the mRNA expression levels of Th17-related factors (IL-17 and IL-23p19) in tumor tissues and the serum concentrations of the IL-17 and IL-23 cytokines were significantly increased in patients with advanced GC. These results indicate that Th17 cells may contribute to GC pathogenesis. Concluding Remarks This paper has highlighted the key roles that T-cell populations play in promoting and/or protecting against gastric cancer. 
In summary, high densities of cytotoxic T cells and memory T cells are usually associated with favorable survival, indicating the importance of adaptive immunity in the prevention of gastric cancer [41]; as a matter of fact, the adoptive immunotherapy of GC with T cells has been reported in different preclinical and clinical studies [41]. MHC-I-restricted CTLs from GC patients recognize tumor-associated antigens and react specifically against self tumor cells [42,43]; one such antigen, the MG7 antigen, shows great potential for predicting early cancer as well as for inducing immune responses to GC [44,45]. Different studies have sometimes reported controversial results: for example, some studies showed that Tregs are protective, while others showed that the Tregs present in the TILs or in the peripheral blood of GC patients are able to suppress effector T cells, thus promoting tumor progression [30,68] (see also Ye et al. [104]). In addition, although not conclusively, recent data suggest that Th17 cells might somehow contribute to GC pathogenesis [96]. On the basis of the clinical and experimental evidence, it is reasonable to conclude that the T-cell immune response in GC has two faces, like Janus, one friend and one foe, and that successful immunotherapy might require a combined approach that intensifies the effector functions of cytotoxic T cells while reducing the suppressive T-cell compartment.
6,125
2012-05-31T00:00:00.000
[ "Biology", "Medicine" ]
DINC: A new AutoDock-based protocol for docking large ligands Background Using the popular program AutoDock, computer-aided docking of small ligands with 6 or fewer rotatable bonds is reasonably fast and accurate. However, docking large ligands using AutoDock's recommended standard docking protocol is less accurate and computationally slow. Results In our earlier work, we presented a novel AutoDock-based incremental protocol (DINC) that addresses the limitations of AutoDock's standard protocol by enabling improved docking of large ligands. Instead of docking a large ligand to a target protein in one single step, as done in the standard protocol, our protocol docks the large ligand in increments. In this paper, we present three detailed examples of docking using DINC and compare the docking results with those obtained using AutoDock's standard protocol. We summarize the docking results from an extended docking study that was performed on 73 protein-ligand complexes comprising large ligands. We demonstrate not only that DINC is up to 2 orders of magnitude faster than AutoDock's standard protocol, but also that it achieves this speed-up without sacrificing docking accuracy. We also show that positional restraints can be applied to the large ligand using DINC: this is useful when computing a docked conformation of the ligand. Finally, we introduce a webserver for docking large ligands using DINC. Conclusions Docking large ligands using DINC is significantly faster than AutoDock's standard protocol, without any loss of accuracy. Therefore, DINC could be used as an alternative protocol for docking large ligands. DINC has been implemented as a webserver and is available at http://dinc.kavrakilab.org. Applications such as therapeutic drug design, rational vaccine design, and others involving large ligands could benefit from DINC and its webserver implementation. Background Modeling the structure of a protein-ligand complex is important for understanding the binding interactions between a potential medicinal compound (the ligand) and its therapeutic target (the protein). Moreover, such modeling aids in evaluating the thermodynamic stability of the complex. Computer-aided docking [1][2][3][4] is a technique that explores the motion space of the protein-ligand complex in order to compute energetically stable conformation(s) that model(s) the structure of the complex. In general, the exploration of the motion space is done by a sampling algorithm, and the stability of a conformation of the complex is evaluated using a scoring or energy function that estimates the binding affinity of the complex. Several methods/programs have been developed for computer-aided docking (for example, [5][6][7][8][9][10][11][12][13]). Most docking programs treat the protein as a rigid structure and explore only the motion space of the ligand, which is composed of the rotational degrees of freedom (DoFs) of the ligand, together with the translational and orientational DoFs. Docking small ligands with 6 or fewer rotatable bonds is in general very fast and accurate [14,15]. However, as the dimensionality of the motion space increases with large ligands, fast and accurate docking becomes very challenging. Tackling the challenge of docking large ligands is important for designing putative drug compounds that have many rotatable bonds. Peptides and peptidomimetics [16,17], which are essentially small chains of natural or modified amino acids connected together by peptide bonds, are one such class of compounds. 
Drug design based on peptides or peptidomimetics is rapidly gaining traction in the pharmaceutical industry [18]. These compounds are becoming popular because of their low toxicity and high specificity. Interest in these compounds has also increased with the development of sophisticated manufacturing techniques. The number of peptides authorized by the United States Food and Drug Administration is increasing at an annual rate of 8%, and the market for peptide-based drugs is projected to be very large [19]. Clearly, accurate and fast docking of peptides and peptidomimetic compounds would be very useful. A method for accurate and fast docking of large ligands could also be useful for rational vaccine design. Recognition of epitopes, or peptide fragments (from antigenic proteins), bound to Major Histocompatibility Complex (MHC) molecules triggers a T-cell-mediated immune response. Predicting the peptide fragments that bind to MHC molecules is crucial for developing antigen-specific vaccines [20,21]. Computational prediction of the peptide fragments that bind to MHC molecules is thus an active area of research [22,23]. Since a large number of peptide sequences and MHC molecules can potentially interact and form complexes leading to an immune response, there exists a pressing need for a computationally fast and accurate method for docking large ligands such as these peptide fragments. Docking of large ligands such as peptides has been the focus of several methods (e.g., [22][23][24][25]). Tong et al.'s method [22] first docks two anchor residues corresponding to each end of the peptide and then uses loop closure [26] to compute the positions of the remaining residues. The pDock method [23] uses the ICM docking program [6] to dock the peptide and a Monte Carlo procedure to refine the docked conformation of the peptide. Computational methods such as those by Sood et al. [27] and Raveh et al. [28] are aimed at de novo design and docking of peptides, and use peptide fragments from the Protein Data Bank (PDB) [29] to build the novel peptide. The Viterbi algorithm for de novo peptide design [30] places residue pairs on a pre-determined path in the binding cavity of the target protein and then docks the residue pairs using AutoDock [9]. Molecular Dynamics-based approaches for protein-peptide docking have also been proposed [31,32]. Although the methods described above have proven successful, they do not provide a general framework for docking large ligands, as they make use of specific assumptions. For example, in the method by Tong et al. [22], it is assumed that the binding sub-pockets where the anchor residues will bind are approximately known. Other methods, such as those based on Molecular Dynamics, are computationally slow. Our strategy for docking large ligands does not require us to make any assumptions about specific binding interactions (although such assumptions can be incorporated), and we are able to reduce the computation time. We rely on the general docking framework of AutoDock [9,33], which is an excellent, widely used noncommercial docking program. AutoDock typically performs a genetic-algorithm-based stochastic exploration of the motion space of a ligand while simultaneously minimizing an empirical scoring function. AutoDock docks small ligands, with 6 or fewer rotatable bonds, in an accurate and fast manner [15]. However, as a ligand becomes larger, the exploration of the motion space becomes more challenging and the accuracy deteriorates [14]. 
To improve accuracy, AutoDock's standard protocol for docking large ligands recommends a more exhaustive exploration of the motion space. This exhaustive exploration results in improved accuracy, but also a significant increase in the computational time. In our earlier work [34], we described an incremental docking protocol, henceforth called DINC, which was designed to address the limitations of AutoDock when docking large ligands. The incremental strategy adopted by DINC is similar in spirit to that used by several previously published docking methods [8,10,[35][36][37][38][39][40][41]. DINC performs docking using AutoDock incrementally instead of in one single step. First, a fragment of the ligand is selected. It is then repeatedly docked and extended until all of the atoms comprising the ligand are docked. At each incremental step, AutoDock is used to dock a small subset of the bonds (and associated atoms) of the ligand and, thus, instead of exploring the full motion space of the ligand in one single step, DINC explores, at each increment, only a low-dimensional subspace of the full motion space. Since AutoDock is fast and accurate when docking a small ligand with a small number of rotatable bonds, DINC results in computationally fast docking of a large ligand by dividing the docking problem into smaller sub-problems. This paper presents a detailed analysis of the docking performance of DINC and compares it with the docking performance of AutoDock's standard protocol. Three specific docking examples involving large ligands are presented which showcase different aspects of DINC. The results from an extended docking study are also presented. We also show that, when computing a docked conformation, DINC can be used to restrain any part of the ligand to a specific binding sub-pocket based on either biological evidence or hypotheses related to specific binding interactions. We also show that in a docking application involving a large ligand, DINC can be used to quickly compute a docked conformation of the ligand which can then be refined. We finally introduce a webserver that is designed for docking large ligands with more than 6 rotatable bonds. The webserver uses DINC for docking and also includes the extension for setting up positional restraints. The analysis of the docking performance presented in this paper shows that docking of large ligands using DINC is significantly faster than AutoDock's standard protocol. Moreover, this computational speed-up is achieved without sacrificing the docking accuracy that is obtained using the standard protocol. Results and Discussion In our earlier work [34], we presented an AutoDock-based [9,33] incremental protocol (DINC) for docking large ligands. The central idea of DINC is to use AutoDock, in each incremental step, for exploring a maximum of 6 rotatable bonds of a large ligand. This is done because AutoDock is fast and accurate when exploring motion spaces that are low-dimensional. DINC proceeds in multiple steps until all the rotatable bonds of a large ligand are explored. In the first step, a fragment of the ligand comprising 6 rotatable bonds and the atoms directly moved by rotations around those bonds is picked. The fragment is docked using AutoDock, and a few conformations of the docked fragment are selected and then extended by adding a small number of rotatable bonds and atoms. The extended fragments are docked, a few conformations are selected and extended, and the process is repeated until no unexplored bonds remain.
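The incremental strategy just described can be summarized in pseudocode. The sketch below is a schematic reconstruction from this paper's description, not DINC's actual implementation; the dock and extend_fragment callables, and the ligand's initial_fragment method, are hypothetical placeholders supplied by the caller.

```python
def dinc_incremental_dock(ligand, protein, grid, dock, extend_fragment, keep=5):
    """Schematic reconstruction of DINC's incremental loop (not the actual code).
    `dock(fragment, protein, grid)` returns scored conformations; `extend_fragment(conf)`
    adds the next few rotatable bonds and returns None once the ligand is complete.
    Both callables, and ligand.initial_fragment, are hypothetical placeholders."""
    # Start from a fragment containing the root atoms and at most 6 rotatable bonds.
    candidates = [ligand.initial_fragment(max_bonds=6)]
    while candidates:
        docked = []
        for fragment in candidates:            # independent dockings: easy to parallelize
            docked.extend(dock(fragment, protein, grid))
        # Keep only a few of the lowest-scoring conformations for the next round.
        best = sorted(docked, key=lambda conf: conf.score)[:keep]
        grown = [extend_fragment(conf) for conf in best]
        if all(conf is None for conf in grown):
            return best                        # every rotatable bond has been explored
        candidates = [conf for conf in grown if conf is not None]
```

Because each selected fragment is docked independently, the inner loop parallelizes naturally, which is the source of the wall-clock speed-up mentioned in the examples below.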
Here we present a detailed analysis of the docking performance of DINC, show how DINC can be used to apply restraints on the ligand while docking, and introduce a webserver. Three representative examples We first present results obtained from the docking of three large ligands to their respective target proteins. Each ligand was docked to the target protein using both DINC and AutoDock's standard docking protocol. The docking results illustrate the strengths and weaknesses of both docking protocols. Note that in this paper we focus on protein-ligand complexes for which experimentally derived structures are available in the PDB. This allows us to evaluate docking accuracy by computing the Root Mean Square Deviation (RMSD) between the conformation of the ligand computed by DINC and the conformation of the ligand from the PDB structure of the complex. Each docking protocol was given an unbound conformation of the ligand, the experimentally derived conformation of the target protein from the PDB structure of the protein-ligand complex, and the approximate location of the binding pocket. The binding pocket was defined by a three-dimensional rectangular box encompassing the binding pocket. Details are available in the Methods Section. Each docking produced multiple docked conformations of the ligand as well as corresponding binding energy scores, which were computed using AutoDock's scoring function. The conformations were ranked based on the scores; a lower-scoring conformation was ranked higher. Since an experimentally derived conformation of the bound ligand (true conformation) is available, for each docked conformation of the ligand, a RMSD value was also computed. The RMSD value measures the distance between the docked conformation and the true conformation. The conformations were also ranked based on the RMSD values; a conformation with a lower RMSD value was ranked higher. We will denote the highest-ranked conformations based on the scores and the RMSD values as Top-scoring and Top-RMSD conformations, respectively. PDB ID 2FDP This example illustrates the main strength of DINC: its ability to compute docked conformations in significantly shorter time with accuracy comparable to that achieved using AutoDock's standard protocol. The structure of the protein-ligand complex deposited in the PDB with ID 2FDP [42] contains a potential inhibitor of BACE-1 (beta amyloid precursor protein cleaving enzyme). BACE-1 is a beta-secretase implicated in Alzheimer's disease, which is associated with deposition of amyloid-β peptide in the brain and leads to the loss of brain function in Alzheimer's patients [43]. Inhibition of BACE-1 is therefore an important goal of the drug discovery community [44]. The potential inhibitor is a large ligand with 14 rotatable bonds. Docking of the ligand using AutoDock's standard docking protocol resulted in a Top-scoring conformation of the ligand that is at a RMSD distance of 1.43Å from the true conformation (see Figure 1A). The Top-scoring conformation of the ligand obtained using DINC is at a RMSD distance of 1.16Å (see Figure 1B). Thus, both protocols computed very accurate conformations, with DINC performing slightly better. However, the strength of DINC is that it is significantly faster than the standard protocol: while the standard protocol used 9.77h to perform the docking, the docking time used by DINC was 0.45h. Note that all docking times are total CPU times, unless otherwise stated. Due to parallel implementation, the actual time used by DINC was 0.09h.
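Since the Top-scoring and Top-RMSD rankings recur throughout this analysis, the following minimal sketch shows how the two conformations could be selected, assuming conformations are given as NumPy coordinate arrays with identical atom ordering; the helper names are ours, not part of AutoDock or DINC.

```python
import numpy as np

def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """RMSD between two conformations given as (n_atoms, 3) arrays with
    identical atom ordering (no superposition, as in docking evaluation)."""
    return float(np.sqrt(np.mean(np.sum((coords_a - coords_b) ** 2, axis=1))))

def top_scoring_and_top_rmsd(confs, scores, true_coords):
    """confs: list of (n_atoms, 3) arrays; scores: AutoDock energies (kcal/mol).
    Returns the lowest-scoring pose and the pose closest to the true one."""
    rmsds = [rmsd(c, true_coords) for c in confs]
    top_scoring = confs[int(np.argmin(scores))]   # ranked first by energy score
    top_rmsd = confs[int(np.argmin(rmsds))]       # ranked first by RMSD
    return top_scoring, top_rmsd
```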
PDB ID 2ER9 This example illustrates limitations of the scoring function and how these limitations affect docking. The scoring function is a major component of any computational prediction method, as it provides a measure of the quality of the prediction. Docking is no exception [45]. The sampling algorithm of a docking method explores the motion space of the ligand and computes many conformations. If a computed conformation is close to an experimentally observed one, then such a conformation can be identified only if the scoring function ranks it higher than the rest of the computed conformations. The structure deposited in the PDB with ID 2ER9 [46] contains a statin-based inhibitor complexed with an aspartic proteinase. The inhibitor was designed to study the binding of such statin-based inhibitors to the aspartic proteinases, with the larger goal of achieving inhibition of the plasma proteinase renin for the purpose of lowering blood pressure in humans [47], thus leading to the treatment of hypertension. The inhibitor is a very large ligand with 25 rotatable bonds. Docking of the ligand using AutoDock's standard docking protocol resulted in a Top-scoring conformation of the ligand that is at a RMSD distance of 6.57Å from the true conformation (see Figure 2A). The Top-scoring conformation of the ligand obtained using DINC is at a RMSD distance of 6.59Å (see Figure 2B). Figures 2A and 2B show that although the RMSD distances are similar, the Top-scoring conformation computed by DINC is qualitatively more accurate. The conformation computed by DINC overlaps well with the true conformation, except that they are slightly offset from each other in the rigid body translation space. As expected, the docking time used by DINC (1.32h) was significantly lower than that used by the standard protocol (23.35h). It is interesting to note that a comparison of the Top-RMSD conformations computed by DINC and the standard protocol shows that DINC computed a conformation that was very close (RMSD = 1.87Å) to the true conformation (see Figure 2D). On the other hand, the Top-RMSD conformation computed by the standard protocol was at a RMSD distance of 5.52Å from the true conformation (see Figure 2C). Thus, both protocols computed more accurate (RMSD-wise) Top-RMSD conformations as compared to the Top-scoring conformations. However, AutoDock's scoring function did not rank the Top-RMSD conformations higher than the Top-scoring conformations. Comparison of the RMSD values corresponding to the Top-RMSD conformations computed by DINC and the standard protocol shows that DINC clearly performed much better in this example. PDB ID 1NDZ This example represents a very challenging docking problem. The structure of the complex deposited in the PDB with ID 1NDZ [48] contains a ligand with 10 rotatable bonds in complex with adenosine deaminase, an enzyme that is found in almost all human tissues. It is involved in purine metabolism [49] and is implicated in various immune system related diseases, including psoriasis, rheumatoid arthritis, and others [50]. The large ligand in this example is a highly potent inhibitor of adenosine deaminase. Docking of the ligand using AutoDock's standard docking protocol resulted in a Top-scoring conformation of the ligand that is at a RMSD distance of 9.62Å from the conformation deposited in the PDB (see Figure 3A). The Top-scoring conformation of the ligand obtained using DINC is at a RMSD distance of 9.71Å (see Figure 3B).
The docking times required by DINC and the standard protocol were 0.29h and 8.10h respectively. Even though DINC was significantly faster, the conformations obtained using both protocols were not accurate. The inaccuracy is a direct consequence of a well-known limitation of rigid docking programs [45,51,52]. Such programs do not account for protein flexibility and treat the protein as a rigid molecule. When the binding site of the protein deforms, docking with the rigid docking programs becomes challenging. This is the case with the protein-ligand complex 1NDZ. Docking accuracy suffers because the ligand is deeply buried in the binding site (see Figures 3C, D), which undergoes a conformational change upon binding that DINC, AutoDock, as well as other docking studies [48] are not able to predict. Thus, even though docking was not successful using DINC, its computational efficiency ensures that such difficult docking scenarios could be quickly identified. On the other hand, docking using the standard protocol, in this particular example, wastes the computational resources on a problem that is not tractable using rigid body docking. Extended docking study To more comprehensively evaluate the docking performance of DINC, we conducted an extended docking study. Five repeated docking experiments were performed on a dataset of 73 protein-ligand complexes compiled from the core set of the PDBbind database [53]. The 73 selected complexes have ligands with more than 6 rotatable bonds. In each docking experiment, 73 ligands were docked to their respective proteins using DINC as well as AutoDock's standard protocol. The details of the docking experiment are presented in the Methods Section as well as in our earlier work [34]. Here we present the results from the docking performance evaluation and compare DINC with AutoDock's standard protocol. The following docking performance metrics were evaluated based on the docking results from each experiment: DT, which represents the total CPU time, averaged over the 5 experiments; R_CS, the RMSD between the Top-scoring docked conformation of a ligand and its true conformation; and R_CR, the RMSD between the Top-RMSD docked conformation and the true conformation, along with the corresponding averages over the 73 complexes, R_CS^a and R_CR^a. In terms of DT, DINC was approximately 23 times faster than the standard protocol. As described later in the Methods Section, DINC is easily parallelized. With a parallelized implementation, DINC is up to 2 orders of magnitude faster. Thus, use of DINC results in a massive increase of computational speed. Although computational methods usually entail a trade-off between computational speed and accuracy, our results show that in the case of DINC the increase in computational speed is obtained without sacrificing accuracy. The accuracy of a docking program is measured by its ability to sample a docked conformation of the ligand that is spatially close to the true conformation of the ligand from the experimentally derived structure of the protein-ligand complex, and by its ability to assign a low score to the docked conformation, ideally the lowest score among all the sampled conformations. Figure 4 compares the docking accuracy of DINC and AutoDock's standard protocol. The figure shows the distribution of 73 R_CS values corresponding to the lowest scoring docked conformations of the 73 ligands. The overall distributions are similar for both protocols, which shows that the docking accuracy of the two protocols is similar. This is also reflected by the R_CS^a values (5.06Å for DINC and 5.17Å for AutoDock's standard protocol).
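As a concrete illustration of how these metrics could be computed from raw results, consider the sketch below; the data layout and field names are assumptions made for illustration, not DINC's actual output format.

```python
import numpy as np

def summarize_experiments(results):
    """results[e][c]: dict for experiment e and complex c with keys
    'cpu_hours', 'r_cs' (RMSD of the Top-scoring pose) and 'r_cr'
    (RMSD of the Top-RMSD pose). Layout and keys are illustrative."""
    # DT: total CPU time per experiment, averaged over the 5 experiments
    dt = np.mean([sum(c["cpu_hours"] for c in exp) for exp in results])
    # R_CS^a and R_CR^a: averages over all complexes (and experiments)
    r_cs_a = np.mean([c["r_cs"] for exp in results for c in exp])
    r_cr_a = np.mean([c["r_cr"] for exp in results for c in exp])
    return dt, r_cs_a, r_cr_a
```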
In the case of the docking of a small ligand, a docked conformation that is within 2Å RMSD of the true conformation is considered very accurate. In the case of docking a large ligand, the accuracy criterion is sometimes relaxed [54,55]. However, it is clear from Figure 4 that the number of Top-scoring docked conformations that are acceptably accurate (RMSD ≤ 4Å) is low. The low number of accurate conformations can be due to two reasons: (a) an accurate docked conformation is not sampled due to insufficient exploration of the motion space, or (b) an accurate docked conformation, although sampled, is not assigned the lowest score due to limitations of the scoring function. To further investigate the reasons for the few acceptably accurate conformations, we analyzed the R_CR values. These values correspond to the most accurate docked conformation of the ligand, which might or might not be the lowest scoring conformation. A distribution plot of the R_CR values computed from the docking experiments using DINC and AutoDock's standard protocol is shown in Figure 5. The distribution of the R_CR values is similar for both protocols, as also reflected by the R_CR^a values (3.01Å for DINC and 2.92Å for AutoDock's standard protocol). However, a comparison of the distributions shown in Figures 4 and 5 illustrates that the number of docked conformations with low R_CR values is higher than the number of docked conformations with low R_CS values. In half of the cases for which an acceptably accurate docked conformation was sampled, it was not identified by AutoDock's scoring function as the lowest scoring conformation. The limitation of AutoDock's scoring function when estimating the binding affinity of complexes involving large ligands is, thus, evident. But the limitation of the scoring function is not the only reason for the low number of accurate conformations. Insufficient exploration of the motion space of the ligand is the other reason. Figure 5 also shows the distribution of the best results obtained after combining the results from two docking experiments that were done using DINC. Note that repeated experiments using DINC produced different results because of the stochasticity inherent in DINC. The distribution clearly illustrates the noticeable increase in the number of ligands for which acceptably accurate docked conformations were sampled. The two docking experiments combined still took an order of magnitude less computational time than the docking experiment done using AutoDock's standard protocol. Thus, DINC's advantage is that it can more exhaustively explore the motion space of the ligand, and it does so in significantly less time than AutoDock's standard protocol. Restraints and molecular dynamics A useful feature of DINC is that a positional restraint can be enforced on a part of the ligand. For example, we recently applied DINC to a modeling problem [56] involving large peptidomimetic compounds targeting the SH2 (src homology 2) domain of STAT3 (signal transducer and activator of transcription 3) [57], a protein that is implicated in a variety of human cancers [58,59]. The peptidomimetic compounds contain a pTyr-Xaa-Yaa-Gln motif, and the approximate location of the phosphorus atom (or phosphate group) contained in the phosphotyrosine (pTyr) residue is known.
In such cases, where there is experimental evidence or a hypothesis regarding the approximate location of an atom of the ligand, DINC can exploit the positional restraint on the atom, thereby leading to a more accurate docking performance. As described in the Methods Section, DINC docks a large ligand incrementally, where at each increment, it docks a fragment and then selects a few docked conformations for further docking. Thus, we can enforce a positional restraint on an atom of the ligand by first picking it as a root atom (see Methods Section) and then, at each increment, selecting docked conformations of the fragments based on the following modified scoring function: S = S_AD + w · D_a (1), where D_a is the square of the Euclidean distance of the atom from its desired location, S_AD is the score computed by AutoDock's scoring function, and w is a constant weight. The weight for D_a has been assigned such that a large distance (D_a > 10 Å²) between the atom and its desired location is penalized by 2.5 kcal/mol (the standard error in AutoDock's scoring function).
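To make equation 1 concrete, here is a minimal sketch of the restrained selection score. The weight value of 0.25 kcal/(mol·Å²) is our inference from the stated 2.5 kcal/mol penalty at D_a = 10 Å², not a value quoted in the paper.

```python
def restrained_score(s_ad, atom_xyz, target_xyz, weight=0.25):
    """Equation 1: S = S_AD + w * D_a, used to select fragment conformations.
    s_ad is the AutoDock score (kcal/mol); D_a is the squared distance (in A^2)
    between the restrained atom and its desired location. The default weight,
    0.25 kcal/(mol*A^2), is inferred from the stated penalty of 2.5 kcal/mol
    at D_a = 10 A^2; it is not a value quoted by the paper."""
    d_a = sum((a - t) ** 2 for a, t in zip(atom_xyz, target_xyz))
    return s_ad + weight * d_a

# A fragment pose scoring -8.0 kcal/mol whose restrained atom lies 4 A from
# its target (D_a = 16 A^2) is penalized to -8.0 + 0.25 * 16 = -4.0:
print(restrained_score(-8.0, (0.0, 0.0, 0.0), (4.0, 0.0, 0.0)))  # -4.0
```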
To refine the structure of a protein-ligand complex obtained through docking, Molecular Dynamics simulation [60] of the protein-ligand complex is often performed. DINC provides a computationally fast way of obtaining the starting conformation for refinement using Molecular Dynamics. In the context of the modeling problem involving the peptidomimetic compounds and STAT3, we recently described a modeling strategy [56] which uses DINC for computing docked conformations of the complexes, selects the best docked conformation using the scoring function described by equation 1, and then performs the molecular dynamics simulations. Through rigorous experiments, we showed [56] that the modeling strategy was able to model accurate binding modes, thus demonstrating a very useful application of DINC. Webserver A webserver implementation of the DINC protocol is freely available at http://dinc.kavrakilab.org (see Figure 6). Although the webserver can be used for docking ligands both small and large, it is mainly aimed at users who are interested in docking large ligands with more than 6 rotatable bonds. The webserver can be used to quickly compute reasonably accurate docked conformations of such large ligands. The docked conformations can be used for further refinement with Molecular Dynamics [60] or can be used in a consensus docking scheme [12,61] which combines the docking results from several methods to compute a consensus docking result. The webserver can optionally be used such that DINC restrains an atom of the ligand to a desired location during the incremental docking process. The input to the webserver consists of a ligand structure in mol2 format and a protein structure in pdb format. To specify the binding site of the protein, the center and the dimensions of the AutoDock grid are also given as the input. The center can be the geometric center of either the ligand or the protein. It can also be specified in absolute terms. The dimensions of the grid are either specified in absolute terms or can be determined based on the ligand as described in the Methods Section. Restraints on an atom can be set up by specifying the name of the atom (as contained in the input ligand structure file) and the coordinates of the desired location of the atom. The webserver outputs six docked conformations and the corresponding AutoDock scores. Three conformations out of the six are the ones with the lowest AutoDock scores. The other three conformations represent the three largest clusters of all the docked conformations; each of the three conformations is the lowest scoring conformation in its respective cluster. Conclusions Computer-aided docking of large ligands, i.e., ligands with more than 5 or 6 rotatable bonds, is challenging [15]. Docking of any ligand requires the exploration of the motion space of the ligand. When the ligand has fewer than 5 or 6 rotatable bonds, most of the existing programs are able to dock the ligand in a fast and accurate manner. However, for a large ligand, the increased dimensionality of the motion space makes exploration for the docked conformation challenging and computationally slow. Like any other computational method, a docking program suffers from the trade-off between computational speed and accuracy. In this paper, we showed that improvements in the computer-aided docking of large ligands can be achieved by using our AutoDock-based incremental docking protocol, DINC, and we introduced a webserver implementation of DINC (Figure 6). The DINC webserver is available at http://dinc.kavrakilab.org. We presented a detailed analysis of the strengths and weaknesses of DINC as compared to the standard protocol recommended for docking large ligands using AutoDock. We compared the docking performance of DINC and AutoDock's standard protocol using three representative docking examples which involved large ligands with 10, 14, and 25 rotatable bonds. We also presented the results from an extended docking study. Analysis of the docking results from the three specific examples, as well as the extended study, shows that DINC is on par with AutoDock regarding the extent of the ligand's motion space exploration. Both protocols are able to sample acceptably accurate conformations. However, DINC achieves the exploration of the motion space in 2 orders of magnitude less computational time compared to the standard protocol. Figure 6. DINC webserver. A webserver implementation of DINC is available at http://dinc.kavrakilab.org. The webserver takes as input a protein structure in pdb format and a ligand structure in mol2 format. The center and dimensions of the AutoDock grid, which encompasses the binding site, are specified. Both can either be specified in absolute terms or by using other options; for example, the grid dimensions can be specified based on the ligand as described in the Webserver Section. An atom of the ligand can also be restrained to a desired location as explained in the Restraints and Molecular Dynamics Section. The webserver outputs six docked conformations and corresponding AutoDock scores. Three of the six conformations are the conformations with the lowest AutoDock scores. The other three conformations are the representatives of the three largest clusters of the docked conformations. Another important conclusion drawn from the docking results is that, even when acceptably accurate conformations are sampled, AutoDock's scoring function does not always rank them favorably. DINC's accuracy saw improvement when positional restraints were imposed on an atom whose approximate location is known [56]. The improvement occurs because imposition of the positional restraints reduces the volume of the motion space that is explored by DINC. DINC easily incorporates the restraint information by selecting the initial fragment such that it contains the atom to be restrained, and by modifying the conformation selection criterion.
At each increment, the conformations that have low AutoDock scores and have the atom close to its desired location are preferentially selected. The option of restraining the ligand is available through the DINC webserver. There are several applications for which DINC and the webserver can be used. Therapeutic drug design based on large compounds [62,63] such as peptides, peptidomimetics, and others could benefit from the use of DINC. DINC can be used to quickly model the protein-ligand complex and to provide an understanding of potential binding interactions that can be exploited for improving the design of drug compounds. One such application of DINC was demonstrated in our recent work [56] on predicting binding modes of peptidomimetics in complex with a cancer target. Vaccine design is another application for which DINC could be used. Predicting the fragments of an antigenic peptide which can bind to the MHC molecules is of importance to the vaccine design process [20,21]. DINC can be used for docking the fragments of the antigenic peptide in complex with the MHC molecule. The computed structures of the peptide-MHC complexes can then be evaluated using a scoring function that is specifically designed for estimating the binding affinities of such complexes. The lowest scoring candidate fragments can then potentially be used for further development in the vaccine design process. DINC can also be used within the framework of a consensus docking [12,61] scheme, which has been shown to improve docking performance. The consensus scheme combines docked conformations computed by multiple docking methods and evaluates them based on a scoring criterion that reflects the consensus between the scores generated by these methods. Quickly computed docking results using DINC could, therefore, be used in such a consensus scoring scheme along with the results from existing docking programs. To further improve DINC, two major improvements are needed. As shown in Figures 4 and 5, for a large majority of protein-ligand complexes, acceptably accurate (RMSD ≤ 4.0Å) docked conformations were computed, but the scoring function did not rank them as the Top-scoring conformations. Since most of the docking programs are geared toward small-molecule drug discovery, a scoring function that is designed specifically for predicting the binding affinities of complexes involving large ligands is needed. Such a scoring function can be developed using empirical energy terms [13] and statistical-regression based function approximation methods [64][65][66]. A major improvement is also needed to model the flexibility of the target binding site; this is of critical importance when the holo and apo conformations of the binding site are significantly different [67]. In such a scenario, a rigid docking program is bound to fail, as illustrated earlier in one of the representative examples (Figure 3). Accounting for protein flexibility is, thus, a major focus of our research efforts. Although there are some limitations to DINC, we have shown that it can be used in various applications to quickly explore the motion space of a large ligand and compute docked conformations efficiently. Methods The AutoDock-based incremental docking protocol DINC was introduced in our earlier work [34]. Here, we first present a brief overview of DINC for the sake of completeness and then, using one of the three representative docking examples discussed earlier in the Results and Discussion Section, we show how the DINC webserver docks a large ligand.
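As a bridge to that walkthrough, the inputs that a webserver submission gathers, as described in the Webserver paragraphs above, can be summarized in a small sketch. The dictionary below is purely illustrative: the field names, the atom name, and the restraint coordinates are placeholders of ours, not the server's actual form fields.

```python
# Hypothetical summary of a DINC webserver submission; every field name below,
# the atom name, and the restraint coordinates are illustrative placeholders,
# not the server's actual form fields.
dinc_job = {
    "ligand_file": "ligand.mol2",           # ligand structure in mol2 format
    "protein_file": "receptor.pdb",         # protein structure in pdb format
    "grid_center": (9.67, -2.94, 48.36),    # absolute, or ligand/protein geometric center
    "grid_dimensions": (76, 80, 60),        # absolute, or derived from the ligand
    "restraint": {                          # optional positional restraint
        "atom_name": "P1",                  # name as it appears in the mol2 file (made up)
        "target_xyz": (10.0, -1.5, 47.0),   # desired location of the atom (made up)
    },
}
```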
Given a ligand, a protein, and the specifications of a bounding box that encompasses the binding site, DINC first processes the ligand and the protein, which primarily includes assigning bonds of the ligand as rotatable or non-rotatable, and assigning atom types and charges. DINC then computes a torsion tree in which each edge represents a rotatable bond; if an edge connects node A to node B, then node B contains the set of atoms associated with the bond (i.e., the atoms that are directly moved by the rotation around the bond). The edges of the tree are ranked by the visit order in a breadth-first traversal of the tree. The root node of the tree contains a selected root atom and the atoms connected to the root atom by a sequence of non-rotatable bonds. A fragment of the ligand is selected, which comprises the atoms in the root node as well as the atoms associated with a small number of top-ranked rotatable bonds. The fragment is docked using AutoDock with parameter ga_num_evals set to 250000. A few lowest scoring docked conformations of the fragment are selected and are extended by adding the next few top-ranked rotatable bonds and the associated atoms. The extended conformations are docked again. In these dockings, only the rotational DoFs corresponding to the newly added bonds and some of the bonds that existed prior to the fragment extension are explored. A few of the lowest scoring docked conformations are selected, extended, and docked again. This is repeated until all of the rotatable bonds are explored and the associated atoms are docked. Figure 7 explains how the docking of a large ligand proceeds after a conformation of the ligand and the protein are submitted to the DINC webserver. The conformations of the ligand and the protein are derived from the structure of the protein-ligand complex deposited in the PDB with ID 2FDP. The binding site is approximated by a three-dimensional rectangular box (also known as the AutoDock grid) and is determined based on the true conformation of the ligand from the complex. The grid is created such that it encompasses the true conformation and is then extended along each dimension [13]. Figure 7. Docking a large ligand using DINC. Given a protein (in yellow), a ligand (in purple), and the approximate location of the binding site (encompassed by the box), DINC docks the ligand incrementally. The protein and the ligand shown in this figure are derived from the structure of the protein-ligand complex deposited in the PDB with ID 2FDP. An initial fragment of the ligand is selected such that it has 6 rotatable bonds. The fragment is docked and 5 docked conformations are selected. These conformations are extended by adding 3 more bonds and the atoms that are directly rotated by the 3 bonds. The extended conformations are docked, 5 of the docked conformations are selected and are then extended. This is repeated until the ligand is fully docked. As also shown in Figure 1, the docked conformation of the ligand is spatially close to the true conformation. For clarity, only one of the 5 selected conformations is shown at each step. The grid is centered at the geometric center of the ligand (9.67, -2.94, 48.36) and the x, y, and z dimensions of the grid are 76, 80, and 60, respectively. Note that when the true conformation of the ligand is unknown, the binding site can be either specified in absolute terms or based on the input conformation of the protein (as described in the Webserver Section).
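The grid construction just described, centering the box at the ligand's geometric center and extending the bounding box along each dimension, can be sketched as follows; the padding parameter is an assumed stand-in for the extension amount, which is not specified here.

```python
import numpy as np

def grid_from_ligand(coords, padding):
    """Sketch of the grid setup described above: center the box at the ligand's
    geometric center and extend the true-conformation bounding box along each
    dimension. `padding` is an assumed per-side extension, not DINC's value."""
    coords = np.asarray(coords)              # (n_atoms, 3) ligand coordinates
    center = coords.mean(axis=0)             # e.g. (9.67, -2.94, 48.36) for 2FDP
    extents = coords.max(axis=0) - coords.min(axis=0) + 2 * padding
    return center, extents
```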
After processing of the ligand and the protein, the root atom is selected such that the first fragment of the ligand contains the highest number of hydrogen bond donors and acceptors combined. A random conformation of the ligand is generated so that the docking result is not influenced by the true conformation of the ligand that was input. From the ligand, a torsion tree is generated, and as there are 14 rotatable bonds, they are assigned ranks 1 to 14. Atoms in the root node, plus the atoms associated with the bonds ranked from 1 to 6, are selected as the first fragment of the ligand. The first fragment is docked using AutoDock, which produces 50 conformations of the first fragment in complex with the protein. Out of these 50 conformations, the 5 lowest scoring conformations are selected. Each of the selected conformations is then extended by adding the atoms that are associated with the bonds ranked from 7 to 9. Now we freeze the rotational DoFs corresponding to the bonds ranked from 1 to 3, and dock the 5 extended conformations while exploring the rotational DoFs corresponding to the bonds ranked from 4 to 9. The docking of the 5 extended conformations is done in parallel for computational speed-up. Thus, we explore three newly added DoFs and re-explore three of the previously explored DoFs. The docking of each extended conformation produces 20 conformations, and out of the 100 total conformations produced, the 5 lowest scoring conformations are selected. The selected conformations of the fragment are extended and docked (in parallel) repeatedly. Thus, in two more iterations, rotational DoFs are explored for the bonds ranked from 7 to 12, and then for the bonds ranked from 10 to 14. After the DoFs corresponding to the 14 rotatable bonds are explored, we obtain 100 docked conformations of the full ligand as well as the corresponding AutoDock scores. Each ligand in this work was docked using DINC as well as AutoDock's standard protocol. In the standard protocol, the AutoDock parameters ga_num_evals and ga_run were set to 25000000 and 50 respectively, as is recommended for docking large ligands. The ligand and the protein were processed identically in dockings done using both protocols. Version 4.2 of AutoDock was used, and the experiments were done on a computing cluster (2304 total processor cores, each running at 2.83 GHz) at Rice University.
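The window schedule in this walkthrough (bonds 1-6, then 4-9, 7-12, and finally 10-14) follows a simple pattern: each increment adds up to 3 new bonds and re-explores the 3 most recently placed ones. The following is our own small reconstruction of that schedule, not code from DINC:

```python
def dof_windows(n_bonds, first=6, new_per_step=3, reexplore=3):
    """Reconstruction of the exploration schedule described above: dock bonds
    1-6 first, then per increment add up to `new_per_step` new bonds while
    re-exploring the `reexplore` most recently placed ones."""
    hi = min(first, n_bonds)
    windows = [list(range(1, hi + 1))]
    while hi < n_bonds:
        lo = hi - reexplore + 1                   # re-explore the last 3 placed bonds
        hi = min(hi + new_per_step, n_bonds)      # add up to 3 new bonds
        windows.append(list(range(lo, hi + 1)))
    return windows

print(dof_windows(14))  # windows over bonds 1-6, 4-9, 7-12, 10-14
```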
8,946
2013-11-08T00:00:00.000
[ "Chemistry", "Computer Science" ]
Investigation and analysis of aging behavior and tensile fracture study on precipitation hardened Al7075-white cast iron particulate reinforced composites Abstract In this research work, machining-waste white cast iron (WCI) powder is introduced into the age-hardenable Al7075 alloy matrix with the objective of enhancing the tensile properties by the well-known age hardening treatment. This material is well suited to light-duty dies, where improvement of strength-related characteristics is essential. In view of this, the effect of aging kinetics on the hardness pattern and tensile strength of stir-cast Al7075-white cast iron (WCI) particulate reinforced composites in as-cast and peak-aged conditions is investigated. The composite is cast by the two-step liquid stir casting route. During the precipitation hardening heat treatment, reinforcement weight percentage (internal variable) and aging temperature (external variable) are considered to be the major strengthening variables. To understand the nature of failure in the tensile study, fracture surface analysis is carried out using SEM. The experimental values revealed that there is a substantial enhancement in the desired properties during the precipitation hardening treatment if the strengthening variables are properly controlled. Hardness and tensile strength of the composite increased with increase in weight percentage of reinforcement and reduction in aging temperature. Maximum hardness and tensile strength are observed when Al7075 alloy is reinforced with 6 wt.% of WCI, aged at 100 °C. SEM images of the tensile fracture surfaces exhibit abundant cup-like depressions or dimples as well as river patterns, indicating a mixed mode of failure. The equally aligned facets (dendrites) present in the peak-aged fracture surfaces are evidence for the attainment of the optimum (peak) aging condition. The aim of the present research work is to improve the hardness-related properties by the combined effect of precipitation of secondary phases (intermetallics) through aging treatment and dispersion strengthening by the introduction of the hard reinforcement WCI. ABOUT THE AUTHOR Our group's research area is the mechanical characterization and microstructure-related study of Al 7075 alloy-based composites in as-cast and precipitation-hardened conditions. Al 7075-based composites are manufactured by the two-step stir casting method.
Precipitation hardening treatment is imparted to the alloy and composites to enhance the mechanical properties. Mechanical and microstructural characteristics are explored in both as-cast and age-hardened conditions. Analysis of the microstructure is carried out to correlate it with hardness-related properties. Precipitation hardening treatment has a positive impact on the mechanical behavior of the alloy/composites, since there is a considerable improvement in hardness and strength. Aging kinetics is accelerated by the increase in wt.% of reinforcement. This can be attributed to the enhancement in the dislocation density and the higher precipitation rate of intermetallic phases. Enhancement in strength can be attributed to the combined effect of intermetallic precipitates and reinforcement particulates on the hindrance to dislocation movement. Introduction The past few decades have witnessed a tremendous increase in the use of lightweight and high-strength materials to fabricate components, chiefly for the automotive and aerospace domains. This is the motivation for research into the development of new materials. Aluminium (Al) is the second most widely employed metal in the world due to its favorable properties. Al and its alloys are characterized by high specific strength and enhanced corrosion resistance. Among the commercially available Al alloys, the Al7xxx series offers high specific strength and hardness, and is hence widely employed in aerospace and automotive structural applications (Deaquino et al., 2014). The strength and hardness of Al7xxx alloys can be enhanced by reinforcement with harder and refractory particulates followed by heat treatment, known as precipitation hardening.
This heat treatment can be effectively employed for the enhancement of tensile properties in Al7xxx alloys and composites. Al7xxx alloys display an enhancement in partial solid solubility as the temperature is increased, due to the presence of a solvus line in their phase diagram. This is the prime reason why Al7xxx alloys respond positively to precipitation hardening (Mandal, 2016). Particulate reinforced metal matrix composites (PMCs) have attracted the attention of many researchers due to their ease of availability at a competitive price. A number of successful manufacturing processes have been designed for the production of these composites. PMCs exhibit properties of an isotropic nature. Desirable properties of the matrix material are improved by the particulate reinforcements because of the enormous number of nucleation sites (reinforcements, dislocations, voids, etc.). Generally, carbides of silicon (SiC), boron (B 4 C), titanium (TiC) and aluminium (Al 4 C 3 ), and oxides of aluminium (Al 2 O 3 ) are employed as particulate reinforcements. Higher specific strength and stiffness are the common attributes of all the reinforcement materials (Ali & Yilmaz, 2008). In this study, the composites are produced by a two-step stir casting method due to its ease of production and economic viability. Homogeneous mixing of particulates in the matrix and optimum wetting of the reinforcement surface can be obtained in two-step stir casting by suitably selecting the process parameters (Achutha). Reinforcement material is selected based on expected property improvement and cost. According to the literature survey conducted, there exists very limited research on the use of alternative materials as particulate reinforcements in MMCs. This study is focused on the combined effects of reinforcement and heat treatment in the improvement of hardness and tensile strength of precipitation hardened Al7075 alloy by varying the wt.% of WCI powder. As per the literature survey, WCI particulates have not been employed as reinforcement in MMCs. WCI is employed as particulate reinforcement since it contains iron carbide (cementite), a very hard and refractory phase that dominates its microstructure. WCI can enhance the properties of the matrix material as it mainly consists of this harder and refractory carbide phase. At the same time, the Al7075-WCI powder reinforced composite is cheaper in comparison with Al7075 reinforced with ceramics such as SiC, B 4 C, Al 2 O 3 , Si 3 N 4 , and TiB 2 . These ceramic reinforcements also require special treatments and controlled conditioning, and they are not as easily available as WCI. When these reinforcements are compared with WCI for hardness and strength-related properties, the latter is on par with such well-known reinforcements. The proposed system can be utilized in light-duty dies, where the high hardness of the material can compensate for the wear and tear involved in the application. The main purpose of this research work is to obtain a cheaper class of metal matrix composite with excellent properties obtained by the combined effect of solid solution strengthening by the heat treatment and the harder reinforcement. Preparation of composites: Two-step stir-casting technique The stir casting method was employed for the fabrication of composites. Initially, rods of Al7075 are cut into pieces and placed in a graphite crucible. The alloy is melted in a 5 kW electric resistance furnace. Melting is continued until a uniform temperature of 750 °C is attained.
Slag or flux formed at the surface of the melt is removed by introducing a small quantity of scum powder. Dry hexachloroethane (C 2 Cl 6 , 0.3 wt.%) is added to the melt for degasification (John, 1999). Wettability of the reinforcement is enhanced by adding pieces of Mg (1 wt.%) to the melt (Barbara et al., 2008). WCI particles are preheated to 500 °C for 2 h. This process removes all the volatile elements and maintains the particle temperature closer to that of the melt. The melt is then allowed to cool in air to 600 °C, a semi-solid state. The melt is stirred to form a vortex by using a stirrer in the speed range of 150-300 rpm for 20 min. During stirring, the vortex is formed and WCI powder (2, 4 and 6 wt.%) is transferred into the melt. Optimum distribution of the WCI particles in the alloy can be obtained by maintaining a stirrer speed in the range of 200-250 rpm for a processing duration of 15 min. The furnace temperature is monitored by a temperature probe. Once the mixing of the reinforcement in the semi-solid state is complete, the furnace is reheated to 750 °C, which is beyond the liquidus temperature of Al7075. Stirring is commenced again for about 10 min at around 400 rpm. The two-step stir casting technique results in composites with less porosity, uniform reinforcement distribution and minimal casting defects. The viscosity of the melt is enhanced in the semi-solid state, which in turn reduces the possibility of floating of the WCI particles. Due to the stirring action at this stage, the gas layers around the particle-liquid interface are broken, which leads to better spreading of the molten metal onto the particle surface. Hence, wettability is improved. The combined effect of reheating the composite slurry above its liquidus temperature and the agitation created during stirring enhances particle distribution by reducing the effect of sedimentation (Kenneth and Ayotunde, 2012). Precipitation hardening treatment Specimens (2, 4 and 6 wt.% WCI composites) are solution treated at 550 °C for 2 h, followed by aging at temperatures of 100, 150 and 200 °C, as shown in Figure 1. After solutionising and aging, the specimens are water quenched to room temperature. Hardness measurement The ASTM E18 standard is maintained for the preparation of the hardness specimen. A Rockwell hardness testing machine using a 1/16" diameter steel ball is utilized for the test. Minor and major loads of 10 and 100 kgf were applied during the measurement. The B scale of measurement (HRB) is taken, as it is the most appropriate for aluminium alloys. Measurement of tensile strength Material behavior when subjected to tensile load is captured during the tensile test. An electronic tensometer is employed for the test in accordance with ASTM-E8M. The specimen is firmly held in a gripper. Tensile load is applied until the specimen breaks. A crosshead speed of 10 mm/min, with a length increment of 0.01 mm and a load cell capacity of 20.5 kN, is employed. To minimize error, three readings are averaged for each condition and tabulated.
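A compact way to summarize the heat-treatment schedule and the averaging procedure just described is sketched below; the parameter values come from the text, while the structure, function names, and the example readings are illustrative only.

```python
# Compact encoding of the heat-treatment schedule and the tensile averaging
# procedure described above; the parameter values come from the text, while
# the structure and the example readings are illustrative only.
heat_treatment = {
    "solutionizing": {"temperature_C": 550, "duration_h": 2},
    "aging_temperatures_C": [100, 150, 200],
    "quench": "water, to room temperature",
}

def mean_uts(readings_mpa):
    """Three tensile readings are averaged per condition to minimize error."""
    assert len(readings_mpa) == 3, "per the procedure, three readings per condition"
    return sum(readings_mpa) / len(readings_mpa)

# Hypothetical readings (MPa) for one condition -- not measured values:
print(mean_uts([310.0, 318.0, 314.0]))  # 314.0
```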
Aging curve and hardness The hardness of the composite improves with increase in wt.% of reinforcement, as shown in Figure 2. The enhancement in hardness can be attributed to the dislocations generated during solidification due to the incorporation of WCI particles. Dislocation density increases due to the thermal gradient between the alloy and the reinforcement when the wt.% of WCI is increased. The microstructure and mechanical properties of the material are greatly influenced by the internal stresses developed during solidification due to the phenomena described above. The plastic deformation of the composite is hindered due to the increase in dislocation density. This results in further enhancement of the hardness of the composite (Mahathaninwonga et al., 2012). During the precipitation hardening treatment, the hardness of the material reduces after solutionising due to the formation of a high-temperature single-phase supersaturated solid solution. Upon aging, the matrix lattice is strained due to the formation of intermediate phases as solute-rich intermetallics such as MgZn 2 . At the same time, the precipitation of coherent intermetallics in the matrix takes place as a systematic process with a number of stages, such as the formation of Guinier-Preston (GP) zones, partially coherent intermediate zones and fully coherent zones, with respect to aging time. While refinement of the intermetallics happens in stages, the matrix is systematically strained, increasing the internal energy of the system. This increase in internal energy is reflected in the form of property change, especially peak hardness. The strained matrix increases not only the hardness, but also the tensile strength and fatigue resistance (Bayazida et al., 2014). As the aging temperature increases, the critical size of the nucleated precipitates is larger, which leads to coarser precipitates. This phenomenon results in lower peak hardness and strength. As the alloy is aged further, the precipitate will form its own crystal structure, which nullifies the coherency with the matrix lattice. The coherency strain between the matrix and the formed precipitate is lost in this condition; at the same time, dislocation annihilation takes place continuously as aging proceeds. The number of precipitates in the matrix reduces due to the dissolution of some finer precipitates, so that the remaining precipitates coarsen. This increases the interparticle distance. This phenomenon also decreases the strain in the matrix lattice. Hence the alloy is said to be overaged (Bodunrin et al., 2015). From Figure 3a-d, it is clear that the rate of aging in the composite is higher compared to the base alloy. The aging kinetics is enhanced as the wt.% of reinforcement in the base alloy increases, which in turn results in a reduction in the time to attain peak hardness. Increase in the wt.% of reinforcement also leads to finer precipitates evolving from an enormous number of nucleation sites in the peak-aged specimen. Hence the mobility of dislocations is hindered. Aging is also accelerated due to the higher dislocation density near the matrix-reinforcement interface. The higher the dislocation density, the higher the number of precipitates (intermetallics), as it provides heterogeneous nucleation sites (Kulkarni et al., 2004) for the intermetallics. As the wt.% of reinforcement increases, peak hardness increases at all aging temperatures. A lower aging temperature increases the peak hardness value due to an increase in the number of intermediate metastable phases during the spontaneous precipitation process of aging, compared with the faster diffusion kinetics of higher-temperature aging (Rao et al., 2010). Tensile strength It is evident from the values obtained for UTS that there is a direct correlation between the content of WCI particles in the composite and the tensile strength in both as-cast and heat-treated conditions, as shown in Figure 4.
The UTS of the peak-aged composite has significantly increased in comparison with the alloy. The enhancement in UTS can be attributed to the stronger interface between the matrix and the reinforcement. UTS is directly proportional to the extent to which load is distributed from the matrix to the reinforcement. Solutionizing treatment enhances the ductility of the specimen at the expense of its tensile strength. This is due to the structure obtained at the solutionizing stage, consisting of a supersaturated solid solution without the strengthening intermetallic phases. Lower tensile strength is exhibited by the solutionized specimen compared to the as-cast condition. Lowering the aging temperature increases the aging time but results in an enhancement of the tensile strength of the material. The optimum combination to obtain the highest UTS is the lowest aging temperature (100 °C) with the maximum 6 wt.% of reinforcement. Figure 6. SEM fractograph of Al7075 alloy peak aged at 100 °C. The increase in UTS of Al7075-WCI composites when subjected to aging can also be attributed to the formation of intermetallic precipitates (MgZn 2 ), leading to straining of the matrix during the precipitation of intermetallics with a larger number of intermediate metastable phases (He et al., 2018). Al7075 alloy The mixed fracture mode (brittle and ductile) is predominant, as observed from the presence of river-like patterns, micro cracks, voids and dimples along the fracture surface shown in Figure 5. Hence the as-cast alloy exhibits lower UTS. Coarser cluster dimples, identical facets and river patterns are observed along the fracture surface of the peak-aged alloy shown in Figure 6. Identical facets with a lower density of river patterns and voids are present, resulting in higher UTS. However, the facet clusters are separated by a large number of micro cracks even in the peak-aged samples, which shows that the failure is a mixed one. Al7075-6 wt.% WCI composites SEM fractographs of the as-cast and artificially aged composite with 6 wt.% WCI are shown in Figures 7, 8 and 9. Both brittle and ductile failure characteristics are visible in the composites. The presence of WCI particles on the fractured surface is an indication that failure is driven by the strained matrix material. The presence of WCI particles in the vicinity of dimples is an indication of strong interfacial bonding between the matrix and the reinforcement. The stress concentration around WCI particles during tensile loading will be low due to the micron size of the particles. Hence, the reinforcement particles will be subjected to a relatively smaller load. Thus, the probability of particle cracking is very low (Mahadevan et al., 2007). Coarser dimples formed by the merging of a number of finer equiaxed identical dimples, wrinkles at the dimple tops and a lower density of river patterns represent the higher tensile strength of this composite over the alloy. The number of locations for dimples and voids increases for 6 wt.% of reinforcement (Pardeep et al., 2015). Since the number and density of river patterns is greater than that of the dimples, the failure is a mixed-mode one in which the brittle failure mechanism dominates. In peak-aged conditions, fracture is mainly due to improper interfacial bonding and void nucleation growth.
These voids then combine during tensile loading, resulting in the coalescence of voids and the subsequent formation of cracks (facet separation regions) that lead to failure at the fracture surface, termed void nucleation and growth failure (Nikhilesh & Shen, 2001). Tensile fractography of the peak-aged composite sample also shows an enormous number of identical finer dimples. Repetitively dispersed finer dimples are responsible for the higher strength. Close-packed facets are evidence for the attainment of the optimum aging condition. Conclusions The effect of precipitation hardening on the aging characteristics and tensile fractography of Al7075-WCI composites is systematically investigated and analysed. The composite is successfully stir cast and subjected to heat treatment. Al7075-WCI composites exhibit an increase in hardness in the range of 30-40% in the as-cast condition compared to the base alloy. Age hardening resulted in significant improvement of the mechanical properties of the Al7075 alloy and Al7075-WCI composites. The enhancement in UTS of the peak-aged composites can be ascribed to a great extent to the generation of intermetallic precipitates, which act as obstacles that restrain dislocation movement and hence decrease the degree of plastic deformation. As the weight percentage of reinforcement increases, the presence of a large number of harder particles and intermetallics, resulting in a higher dislocation density, leads to higher UTS at lower aging temperature. A lower aging temperature (100 °C) with a longer duration is more effective for obtaining high strength and hardness compared to higher-temperature aging (150 and 200 °C). A considerable increase in peak hardness is observed for all aging temperatures; the most significant is at 100 °C, which resulted in a 110-130% enhancement. The UTS of peak-aged Al7075 increases with the addition of WCI. The UTS of the peak-aged composites increased by 20-30% compared to the base alloy. Overall, age hardening resulted in a 45-65% increase in UTS for the composites. In the Al 7075 alloy, dimple rupture is the predominant fracture mode, as a large number of small equiaxed dimples are observed in the fractograph of the fracture surface. In Al 7075-WCI composites, the failure is mainly through the matrix material, as fractography indicates the presence of reinforcements on the fracture surface.
4,963
2021-01-01T00:00:00.000
[ "Materials Science", "Engineering" ]
The Role and Lessons of Cloud-Network Convergence in Fighting the COVID-19 Epidemic

This paper first describes the challenge that the COVID-19 epidemic posed to information service support capabilities. Second, it analyzes the main problems and related concepts of cloud-network convergence and points out the new problems that cloud-network convergence faces. Third, it summarizes the current status and notable characteristics of the construction and development of cloud-network convergence, and illustrates, through concrete examples, the important role that cloud-network convergence played in fighting the COVID-19 epidemic. Finally, it analyzes the risks of cloud-network convergence and proposes countermeasures.

Introduction. In the fight against the epidemic, China's cloud-network convergence service guarantee system provided data collection, situational awareness, coordinated management, epidemic analysis, disease-trend forecasting, medical resource guarantees, personnel movement control, and mapping of the high-risk geographical distribution of the disease. It provided strong intellectual support and served as a model for the construction and organization of the government's comprehensive information service system.

2. Challenges the COVID-19 epidemic posed to information service capabilities. The battle to contain the COVID-19 epidemic required more flexible, precise, efficient, and robust information service systems. The first requirement is flexible adaptation: the response involved more than 300,000 medical workers from military and local units across the country, and enabling them to communicate smoothly, divide labor, collaborate, and interact accurately was a challenge. The second is speed and accuracy: China currently has 854 million netizens, and accurately locating the position, trajectory, and statements of an individual within such a huge population is a great challenge to the efficiency of information processing and execution. The third is intensiveness and efficiency: social media gives netizens the ability to organize and mobilize worldwide, and netizens concerned with fighting the epidemic can organize their own resources, but this poses new challenges to the docking and coordination capabilities of the relevant governance systems. The fourth is safety and stability: when protecting nearly one billion netizens and hundreds of thousands of staff fighting the epidemic, ensuring the safe, stable, and sustainable operation of the information service system is a comprehensive test of equipment performance, system operating efficiency, and personnel quality.

3.1. Problems solved by cloud-network convergence. With the advent of the 5G era, the volume of data has grown geometrically. These data are generated and accumulated by large-scale Internet of Things terminals, transmitted to cloud servers for processing, and returned to the terminals to guide the business. This chain of actions places extremely high demands, on the order of hundreds of Gbps, on network bandwidth; it suffers not only from latency but also from problems such as weak ICT (Information and Communication Technology) infrastructure and low connection success rates, so the user experience cannot be guaranteed. The existing Internet cloud service architecture clearly cannot meet the differentiated needs of massive connections, low latency, and high bandwidth [4].
The cost of ICT construction and maintenance is also a major obstacle. From the user's point of view: (1) the construction cost of ICT facilities is high, and comprehensive informatization keeps raising the construction, operation, and maintenance costs borne by users; (2) the technical requirements for building ICT facilities are high, and the many types of equipment and software involved in centralized data construction demand all-round IT talent; (3) the failure rate in ICT facility operation and maintenance is high, there are many compatibility problems between different devices and software, and when a failure occurs it is difficult to pinpoint its location. From the perspective of service providers and operators: (1) ICT system integration is difficult, as equipment types keep increasing and application software is constantly updated; (2) small and medium-sized ICT projects cannot be supported, because operators and integrators lack the capacity and revenue to sustain them; (3) system operation and maintenance costs are high, and operating and maintaining ICT facility projects is difficult and inefficient.

3.2. Conceptual analysis of cloud-network convergence. At present, the major telecom operators and service providers each have their own interpretation of cloud-network convergence, and there is no broad consensus on the concept. Three proposals are widely recognized in the industry. Definition 1 (China Telecom): cloud-network convergence is application-oriented; by integrating various software and hardware resources in a certain form, users can flexibly adjust them according to their own needs to achieve low-consumption, high-efficiency virtual resource services [1]. Definition 2 (China Unicom): cloud-network convergence is a technology that integrates cloud computing, artificial intelligence, big data, and communication networks, with elements for vertical industry-specific applications [2]. Definition 3 (China Academy of Information and Communications Technology): cloud-network convergence is a profound change in network architecture, driven in parallel by business needs and technological innovation, which makes the cloud and the network highly collaborative and mutually supportive; it requires the bearer network to open network capabilities on demand according to various cloud service requirements, achieving agile networking and on-demand interconnection between network and cloud, and exhibiting intelligence, self-service, high speed, and flexibility [3]. China Telecom emphasizes resource virtualization, service orchestration, and overall efficiency, while China Unicom highlights the agile adaptation of integrated resources to vertical industries; both meet user needs from the perspective of infrastructure providers, differing mainly in emphasis.

3.3. New problems faced by cloud-network convergence. With the rapid development and popularization of big data, Internet of Things, and artificial intelligence technologies, higher service requirements have been imposed on cloud computing. However, traditional network design, construction, and operation models have made the cloud and the network difficult to coordinate, and the network has become the weak point of cloud development.
This has given rise to three problems: bottlenecks within the cloud, bottlenecks between clouds, and cloud-edge coordination. (1) Bottlenecks within the cloud. In 2012, the 12306 rail-ticketing site faced a huge volume of dynamic, interactive access from users during the Spring Festival travel rush, resulting in frequent website crashes, failed logins, and failed payments. In 2013, 12306 handed part of its workflow, the remaining-ticket inquiry service, over to Alibaba Cloud and adopted a collaborative cloud model to alleviate the frequent downtime. Stripping the remaining-ticket inquiry out of the overall system so that it ran independently on the cloud directly saved 75% of the computing power consumed. Peak daily page views reached 40 billion that year, and by 2019 this number had reached 160 billion. [Figure 1: Bottlenecks within the cloud; chart of changes in Taobao transaction volume on November 11, 2012 (transaction amount, 100 million yuan).] (2) Bottlenecks between clouds. After clicking to pay, Alipay users must complete payment across the interfaces of Alipay and the bank, which belong to two different clouds. The connection between the clouds became a bottleneck, supporting only tens to hundreds of transactions per second with relatively poor stability. On Double Eleven in 2011, some users were unable to pay during the peak period; investigation found that the online banking systems of a few banks had failed under the pressure. For Double Eleven in 2012, Alipay launched a campaign encouraging users to recharge first and pay later: users topped up their Alipay balance in advance, and money was deducted directly from the balance during transactions, transferring the external bottleneck problem into the payment cloud itself. The peak transaction value that year reached 19.1 billion RMB, as shown in Figure 2. (3) Cloud-edge collaboration. With the continuing spread of smart driving, smart factories, and smart communities, the integration of edge computing and cloud computing has steadily deepened. However, smart cars, ocean-going ships, CNC machine tools, and construction machinery are highly complex, their information exchange protocols are not unified, and the lack of a unified interface results in high cloud-migration costs, long and difficult transformation cycles, difficult data processing, and low utilization, so cloud-edge collaboration becomes the bottleneck of the entire system.

4. The current situation and characteristics of cloud-network convergence construction. At present, providing dynamic, accurate, and scalable information services has become the foundation of the development of the information society, and providing high-quality, easily shared, easy-to-use information services is the general trend of contemporary social development. As the pace of business migration to the Internet continues to accelerate, cloud computing platforms will face avalanche-like concurrent access. The centralized data storage and processing model will face intractable bottlenecks and pressures, and problems such as cache penetration, cache avalanches, and cache invalidation will appear.

4.1. Development status of cloud-network convergence. At present, the main domestic participants in cloud-network convergence fall into three categories: telecom operators, service providers, and equipment manufacturers.
Telecom operators are represented by China Telecom, China Mobile, and China Unicom; service providers by Alibaba and Tencent; and equipment manufacturers by Huawei, H3C, and Yealink. (1) Huawei and China Telecom. Relying on the advantages of "seven networks + two-level cloud" in network quality, network scale, access conditions, and diversity, Huawei and China Telecom provide a flexible "cloud + multiple access" combination, including public cloud, dedicated cloud, private cloud, and other cloud products, as well as product portfolios such as IPRAN dedicated lines, CN2 dedicated lines, cloud gateways, and high-speed inter-cloud links. Combined with network functions developed for the characteristics of the telecom cloud network, they can build a cloud-based platform whose resources can be scheduled globally, whose capabilities are fully open, and whose capacity and architecture can be expanded and adjusted flexibly [6]. (2) Alibaba. In 2018, Alibaba released an edge intelligent access gateway and built a cloud-network convergence service system integrating network-to-cloud, inter-cloud, and intra-cloud networking, enabling an enterprise's business and network to be connected to the cloud in minutes [5]. Through its Feitian cloud operating system, it provides VPC, NAT gateway, and load-balancing services on the cloud network, as well as the CEN cloud enterprise network, GA global acceleration, high-speed channels, and VPN gateways. (3) China Unicom. In 2018, China Unicom released seven new products, including cloud networking, cloud interconnection, cloud dedicated lines, cloud broadband, Unicom Cloud Shield, a premium intelligent video network, and a premium financial network, to provide cloud-network convergence solutions. Unicom Cloud Shield is a DDoS unified protection platform deployed across the entire network; the premium video network delivers high-quality transmission for real-time video needs; and the premium financial network provides customers with exclusive high-bandwidth pipeline resources and a variety of protection methods. (4) H3C. In 2015, H3C released its new-generation UIS 6.0 unified infrastructure system, which integrates H3C's customized lightweight OpenStack with H3C network technology to promote unified management, integrated delivery, and operation and maintenance of the infrastructure platform, a solution-level converged architecture covering computing, storage, networking, and virtualization, as shown in Figure 3.

4.2. Capability characteristics of cloud-network convergence. The purpose of cloud-network convergence is to build a flexible network providing one-stop, agile, elastic, on-demand, and intelligent service capabilities [5], with the following five notable features: (1) Intensive elasticity. Intensively constructed large-scale cloud-network converged data centers can cut unit operation and maintenance, power supply, and network costs to one fifth to one tenth of those of small and medium-sized data centers. They naturally support multi-point networking and convenient expansion: compared with traditional dedicated-line networking, the number of customer lines drops from N(N-1)/2 to N. Cloud-network convergence enables automatic control, intelligent control, automatic activation, elastic network bandwidth, and dynamic speed-up. (2) Reduced costs.
Cloud-network convergence makes rent-for-use IT construction a reality, saving costs in software, equipment, construction, and operation and maintenance. The operation and maintenance model has shifted from independent maintenance of individual sub-projects to unified maintenance, and from single-product to integrated maintenance; this can reduce one-time IT investment by 90% and comprehensive usage cost by 30%, clearly improving efficiency and reducing cost. (3) Lightweight collaboration. Cloud-network convergence promotes lightweight application software. Once independently deployed software is converted to cloud deployment, there is no need to build a dedicated ICT environment or recruit specialized operation and maintenance personnel, which resolves the technical and cost difficulties of independently deploying IT systems. It also eases IT system collaboration among enterprises, supporting cloud-edge, edge-edge, and multilateral collaboration within the same application scenario. (4) Stability and reliability. In the second half of 2018, major public clouds failed frequently, with service outages, data loss, and even destruction by natural disasters. Using multiple public clouds not only reduces vendor lock-in but also disperses risk. (5) Multidimensional security. Cloud-network convergence provides a multi-dimensional security system spanning the entire network and the integrated cloud network. MPLS technology builds independent routing tables to ensure that different users' data cannot be accessed by one another. At present, Alibaba provides more than 1,000 enterprise customers with trusted, reliable cloud services offering 99.95% availability and 99.99995% data durability.

5. The prominent role of cloud-network convergence in the fight against the COVID-19 epidemic. In the fight against the epidemic, the nationwide information and communication service system worked in concert from top to bottom and played an important role in networking, data, systems, deployment, and services: (1) Fast network adaptation. By setting up hospital 4G/5G wireless networks and dedicated-line networks, the construction and commissioning of the network and lines for Huoshenshan Hospital took only 4 days, and medical insurance and health network communication was opened for the hospital. Hospital information systems, including the core HIS (Hospital Information System), PACS (Picture Archiving and Communication System), and RIS (Radiology Information System), were deployed on Tianyi Cloud, greatly shortening the construction cycle: the information system deployment for Leishenshan Hospital was completed in only 12 hours. (2) Accurate and detailed data. Through a "big data + grid" approach, all regions accurately sorted out data on vehicles, flights, and high-speed rail trips carrying people from Wuhan to destinations everywhere.
Academician Li Lanjuan, a member of the National Health Commission's senior expert group, said in an interview with CCTV on January 28 that experts were using big data technology to reconstruct the movement trajectories of infected people, trace their contact histories, and lock down sources of infection and close contacts, providing support for epidemic prevention and control. (3) Rapid system response. During the lockdown of Wuhan, the urban big data system quickly aggregated data from relevant units such as the public security, education, transportation, and housing and construction departments, and captured data on the entry of potential infection sources, providing big data support for preventing the spread of the epidemic and for effective grid-based prevention and control. Tianyi Cloud cooperated with Shanghai Lianying Medical to provide the No. 5 mobile cabin (Fangcang) hospital in Wuhan with a "5G + cloud + AI" intelligent auxiliary analysis system for COVID-19, which uses deep learning algorithms to segment lung CT images and automatically generate reports for doctors; throughout the process from diagnosis to treatment, image reading efficiency improved by 93%. (4) Agile and efficient deployment. Cloud-network convergence makes rapid, agile deployment a reality. Many hospitals set up online fever clinics to provide consultation, diagnosis, and treatment services for fever patients, conducting online consultations, coordinating with offline visits, enabling interaction between upper- and lower-level institutions, connecting authoritative experts by video, and guiding treatment plans scientifically, thereby making good and full use of medical resources and contributing substantially to epidemic control. (5) Highly concurrent, reliable service. Cloud-network convergence overcame the resource crisis brought about by high concurrency. At 9 a.m. on February 4, CCTV launched the "24-Hour Epidemic" full-HD live broadcast of the construction of the Huoshenshan and Leishenshan hospitals over the 5G network, with more than 80 million users watching online at once, forming the largest "cloud supervision" lineup in history. During the epidemic, demand for internal and external services such as remote conferencing, online classrooms, online consultation, and live broadcasting exploded; cloud-network convergence together with RTC (real-time audio and video), artificial intelligence, 5G, and big data technologies jointly supported hundreds of millions of concurrent requests. The average daily call duration of Tencent's real-time audio and video service TRTC exceeded 3 billion minutes, with concurrent calls and co-streaming (lianmai) sessions peaking at 10 million.

6. Lessons for the government from cloud-network convergence in the fight against the epidemic. Facing the COVID-19 epidemic, summarizing the experience of information and communication construction and reflecting on the shortcomings is not only of great significance for epidemic prevention and control but also conducive to modernizing the government's capacity for construction and governance. From the organization of cloud-network convergence services in the fight against the epidemic, we draw the following lessons: (1) Reserve applications as well as technology. During the quarantine period, online medical platforms such as WeDoctor, Dr. Lilac, Ping An Good Doctor, and Chunyu Doctor were widely used.
All of them launched special COVID-19 consultation areas where people could immediately seek advice and resolve doubts. This was inseparable from the service reserves that popular platforms such as Tencent Medical Code and Ali Health had built up in ordinary times. The military likewise needs to reserve general public applications such as medical services, logistics, and communications to ensure they can be brought into use in a timely manner. (2) Mobilize services as well as supplies. UFIDA launched the "Youyun Cai" special procurement and supply cloud service to fight the epidemic, establishing a supply service platform between medical supply vendors and medical institutions that used technology to help solve the supply and management of medical materials in the affected areas. This reminds us that the government must pay attention not only to the mobilization of tangible resources such as hardware, software, and personnel, but also to the mobilization of basic services such as cloud-network convergence, cloud computing, and edge computing. (3) Temper the model as well as the system. The construction of the management information system for Huoshenshan Hospital was tasked on January 24, with all commissioning and trials to be completed before the clinic opened on February 2; work that would normally take 2 to 3 months was completed with high quality within 9 days. This was inseparable from the rich experience and business model that Donghua Medical had accumulated in updating the software of Beijing Xiaotangshan Hospital. It follows that the government should focus not only on updating business systems but also on the pain points, difficulties, and blockages of the business itself, optimizing the business model of each professional post. (4) Build an ecosystem as well as equipment. During the battle against the outbreak, Huawei, China Telecom, China Unicom, China Mobile, China Tower, China Electronics, China Xinke, and other companies at the front and rear spontaneously cooperated and coordinated based on the dynamic cloud-network convergence framework, building a service ecosystem spanning information collection, stable transmission, accurate positioning, trajectory tracking, situation display, epidemic tracing, and spread-trend analysis, and providing the central epidemic work group with high-quality big data decision support for epidemic analysis and prevention-and-control deployment.

7. Summary. With the deepening of cloud-network convergence research and deployment, collaboration among edge computing, cloud computing, and networks (especially wide-area networks) has become a new research focus [8]. The sudden surge in demand brought by the epidemic created an opportunity for a flexible, reliable, and intelligent cloud-network convergence service system to demonstrate its capability in a concentrated way, and cloud-network convergence services delivered a qualified answer; this was directly related to the accumulated technology, personnel, and experience. However, cloud-network convergence still has hidden concerns.
For example, the availability of systems moved onto the cloud, the degradation of user experience caused by resource concentration, the rapid recovery of users from denial of service, and user impersonation and counterfeiting enabled by virtualization all require continuous, iterative research on the basic theories and key technologies of cloud-network convergence, so as to provide a reliable and secure infrastructure for future artificial intelligence.
4,919.6
2020-11-01T00:00:00.000
[ "Computer Science", "Medicine" ]
Generative deep learning furthers the understanding of local distributions of fat and muscle on body shape and health using 3D surface scans

Background: Body shape, an intuitive health indicator, is deterministically driven by body composition. We developed and validated a deep learning model that generates accurate dual-energy X-ray absorptiometry (DXA) scans from three-dimensional optical body scans (3DO), enabling compositional analysis of the whole body and specified subregions. Previous works on generative medical imaging models lack quantitative validation and only report quality metrics. Methods: Our model was self-supervised pretrained on two large clinical DXA datasets and fine-tuned using the Shape Up! Adults study dataset. Model-predicted scans from a holdout test set were evaluated using clinical commercial DXA software for compositional accuracy. Results: Predicted DXA scans achieve R2 of 0.73, 0.89, and 0.99 and RMSEs of 5.32, 6.56, and 4.15 kg for total fat mass (FM), fat-free mass (FFM), and total mass, respectively. Custom subregion analysis results in R2s of 0.70-0.89 for left and right thigh composition. We demonstrate the ability of models to produce quantitatively accurate visualizations of soft tissue and bone, confirming a strong relationship between body shape and composition. Conclusions: This work highlights the potential of generative models in medical imaging and reinforces the importance of quantitative validation for assessing their clinical utility.

Body composition is indicative of many disease states and adverse health outcomes [1]. For example, obesity and sarcopenic obesity (high adiposity) are associated with cardiovascular disease and diabetes [2], and sarcopenia and frailty (loss of lean mass and muscle) [3] are associated with increased mortality [4,5]. In addition to total or whole-body (WB) composition, specific subregional composition has also been shown to have strong and unique associations with specific health outcomes [6]. For instance, every kilogram increase in appendicular lean mass (ALM) has been shown to be associated with about a 10% reduction in mortality in elderly individuals [7]. However, with limited exceptions, only body composition assessments derived from advanced imaging methods can effectively segment the body to quantify appendicular regions. Commonly used anatomical cut points or subregions from dual-photon absorptiometry [8,9] and then dual-energy X-ray absorptiometry (DXA) [10,11] whole-body images were adopted in the 1980s, and they have since been incorporated into standard clinical practice with little to no modification of the original subregion definitions. Relevant examples of standard DXA subregions for body composition include ALM's association with frailty [12], visceral adipose tissue (VAT) and subcutaneous adipose tissue of the trunk region being associated with cardiometabolic outcomes [13], leg fat and lean mass in diabetes patients [14], and leg fat mass (FM) and fat-free mass (FFM) being associated with frailty and injury recovery [15]. Besides these historical subregions, DXA offers the capability to explore body composition within user-defined subregions; examples include user-defined abdominal subregions to monitor liver iron concentration [16] and leg subregions to monitor injury recovery [17].
While DXA is considered a criterion method for acquiring body composition, exposure to ionizing radiation limits its accessibility and frequent use in individuals. Specially trained and licensed technicians are often required to operate DXA systems and to mitigate dose accumulation. Computed tomography (CT) [18] and magnetic resonance imaging (MRI) are alternatives to DXA and also offer regional body composition measures [19]. However, the limitations that hinder DXA accessibility and broader use are not overcome by either method: highly skilled technicians are still needed to operate these systems, they are expensive for the user and facility to maintain, and CT utilizes even higher ionizing radiation doses than DXA. Bioelectrical impedance analysis (BIA) is a common non-image-based and accessible body composition method that can segment the body into trunk, arms, and legs using selective placement of up to eight electrodes [20]. However, the division between subregions is only vaguely definable, dependent on the composition distribution of the individual, whereas DXA, CT, and MRI have precisely defined anatomical cut points verifiable from the image. Thus, none of DXA, CT, MRI, or BIA is an ideal methodology for the frequent monitoring of common or user-defined subregions of body composition with metabolic significance.

An unlikely candidate, three-dimensional optical surface scanning (3DO), has demonstrated the ability to accurately and precisely measure total and regional body composition by way of detailed modeling of body shape [21,22]. Body shape is deterministically driven by the internal distributions of fat and muscle soft tissue. Body shape has been shown to have associations with blood metabolites, strength [23], and metabolic syndrome [24], demonstrating the broad health utility of 3DO technology. Recent advances in depth camera technology have made whole-body scanning inexpensive and fast. These systems are broadly used for monitoring body shape and composition in homes, recreational facilities, and clinical settings [25,26]; 3DO depth cameras are so ubiquitous that they can be found in many laptop computers, cell phones, and gaming systems. Advances in image processing and machine learning techniques have resulted in body shape models that accurately predict body composition from 3DO scans [22,27,28]. A drawback of statistical and machine learning shape models for composition is that these models typically predict a single scalar value for each body composition measurement; exploration of additional hypotheses is not possible with such models and requires retraining. Like the other image-based methods mentioned, previously published body shape models have not been flexible enough to allow for ad hoc user-defined subregional analysis. Adding ad hoc user-based analysis of body composition to 3DO whole-body scans would satisfy the ideal conditions outlined above.
To the best of our knowledge, we present a novel approach: a cross-modality image-to-image model for quantitative body composition image prediction from 3DO. We use a generative deep-learning model that maps 3DO to DXA scans. Our model, Pseudo-DXA, outputs DXA scans in a format usable by commercially available body composition analysis software so that this advancement can be readily used by clinicians and researchers. Further, this approach allows direct validation of user-defined regions using paired DXA and 3DO scans. We also show that Pseudo-DXA body composition results are surrogate measures for DXA by comparing both DXA and Pseudo-DXA to metabolic blood markers. Pseudo-DXA was only achievable due to 1) the availability of large datasets, over 1000 sets, that included matched 3DO and DXA, 2) advances in deep learning and self-supervised training methods, and 3) technological advances which led to improved 3DO capture and the processing power needed to train our final model.

Methods. The development of our Pseudo-DXA model consisted of two distinct phases: a self-supervised learning (SSL) pretraining phase and a cross-modality fine-tuning phase. Pretraining strategies are commonly used in deep learning to increase robustness and combat overfitting when dataset sizes are modest, and imaging models have shown improved performance on downstream tasks [29,30] as a result of effective pretraining. An SSL [31-33] training strategy was employed, which enabled the model to utilize large datasets of unlabeled DXA scans to learn the important and complex imaging features needed for generating accurate scans. Once the model learned to generate DXA scans during pretraining, it was tuned specifically to learn the mapping between 3DO and DXA scans. The following sections detail the development in full.

Study populations. The SSL pretraining phase utilized DXA data from two studies, Health, Aging, and Body Composition (Health ABC) [34,35] and the Bone Mineral Density in Childhood Study (BMDCS) [36]. The Health ABC study is a prospective cohort study of 3075 individuals (48.4% male, 51.6% female) aged 70-79 years at the time of recruitment, 41.6% of whom are Black, with the remaining 58.4% non-Hispanic White. Participants were recruited from Medicare-eligible adults in metropolitan areas surrounding Pittsburgh, Pennsylvania and Memphis, Tennessee, and were monitored yearly for 10 years. The BMDCS is also a prospective cohort study, of 2014 individuals (49.3% male, 50.76% female) aged 5-20 years; participants were recruited at five clinical centers in the US and were followed for 6 years with annual assessments. Although both the Health ABC and BMDCS studies were longitudinal, we utilized the data in a cross-sectional manner for the SSL phase.

The cross-modality training phase utilized 3DO scans and DXA scans from a third study, Shape Up! Adults (SUA; NIH R01 DK109008) [23]. This is a cross-sectional study of healthy adults. Participants were recruited at Pennington Biomedical Research Center (PBRC), the University of Hawaii Cancer Center (UHCC), and the University of California, San Francisco (UCSF). Recruitment was designed to yield a diverse population well stratified by sex, age, ethnicity, and body mass index (BMI). Patient demographics for all phases are shown in Table 1, and a flowchart detailing the data sources for each training phase is shown in Supplementary Fig. 1.
For this study, all participants signed an informed written consent form approved by each respective study's institutional review board (IRB). The Health ABC protocol was approved by the IRB at each field center (University of Pittsburgh, PA and University of Tennessee, Memphis, TN); the BMDCS protocol was approved by the IRB at each clinical center (The Children's Hospital of Philadelphia, Cincinnati Children's Hospital Medical Center, Creighton University, Children's Hospital Los Angeles, and Columbia University) and at the data coordinating center (Clinical Trials and Survey Corporation). The Shape Up! Adults protocol, which covers this study, was approved by the IRBs at PBRC, UCSF, and the University of Hawaii Office of Research Compliance.

DXA data. All DXA scans were acquired on Hologic (Hologic Inc., MA, USA) scanners of similar models. The Health ABC whole-body scans were collected using Hologic QDR 4500 systems, with attempts made to collect DXA scans on eight occasions throughout the study. Hologic QDR 4500, Delphi, and Discovery models were used to acquire whole-body DXA scans for the BMDCS, with scans acquired yearly for 6 years. Participants of the Shape Up! Adults study received whole-body DXA scans on a Hologic Discovery/A system, and some also received duplicate precision scans within the same visit: to estimate test-retest precision, Shape Up! participants were scanned twice with repositioning between the scans. Height and weight measures were available for all participants. Manufacturer-defined acquisition protocols were used to ensure reproducibility and standardization of patient positioning [37,38]. For each participant, the raw dual-energy attenuation images and their respective calibration images were represented at a bit depth of 16 bits.

3DO scan data. The 3DO scans were acquired on a Fit3D Proscanner (Fit3D Inc., CA, USA). Participants were required to wear form-fitting tights, a swim cap, and, if female, a sports bra. Participants grasped telescoping handles on the scanner platform and stood upright with arms straight and abducted from the torso while the platform made one revolution. Final point clouds were converted to a triangle mesh with approximately 300,000 vertices and 60,000 faces. Scans were then standardized to the same T-pose, the same coordinate system, and the same 110 K vertices using Meshcapade's (Meshcapade GmbH, Tübingen, Germany) skinned multi-person linear model (SMPL) service [39].

Deep learning modeling. To mitigate overfitting, the data sets for both training phases were split into train, validation, and holdout test sets using an 80%, 10%, and 10% split, respectively. These split ratios applied to both the SSL pretraining phase and the Pseudo-DXA supervised training phase. The data was split on participant subject ID to ensure that all duplicate scans remained together in the train, validation, or test split. Splits were also performed in a stratified fashion to best preserve the age, height, weight, and BMI distributions within each data subset.
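As a rough illustration of the subject-level, stratified splitting just described, the following Python sketch groups scans by participant before allocating 80/10/10 subsets; the DataFrame columns, the BMI-quartile binning, and the function name are illustrative assumptions, not the authors' code.

```python
# Sketch: subject-level 80/10/10 split that keeps duplicate scans together.
# Assumes a pandas DataFrame `scans` with "subject_id" and "bmi" columns;
# stratifying on BMI quartiles stands in for the paper's fuller
# age/height/weight/BMI stratification.
import numpy as np
import pandas as pd

def subject_split(scans: pd.DataFrame, seed: int = 0):
    rng = np.random.default_rng(seed)
    # One row per subject, with a coarse BMI bin to stratify on.
    subjects = scans.groupby("subject_id")["bmi"].mean().reset_index()
    subjects["bmi_bin"] = pd.qcut(subjects["bmi"], q=4, labels=False)

    train_ids, val_ids, test_ids = [], [], []
    # Shuffle within each stratum, then allocate 80/10/10 per stratum so the
    # BMI distribution is preserved in every subset.
    for _, grp in subjects.groupby("bmi_bin"):
        ids = grp["subject_id"].to_numpy()
        rng.shuffle(ids)
        n_train, n_val = int(0.8 * len(ids)), int(0.1 * len(ids))
        train_ids += list(ids[:n_train])
        val_ids += list(ids[n_train:n_train + n_val])
        test_ids += list(ids[n_train + n_val:])

    # All scans (including duplicates) follow their subject into one subset.
    return (scans[scans["subject_id"].isin(train_ids)],
            scans[scans["subject_id"].isin(val_ids)],
            scans[scans["subject_id"].isin(test_ids)])
```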
Pretraining self-supervised learning. Pretraining via SSL allowed us to leverage the large set of raw DXA data from the BMDCS and Health ABC studies. A variational auto-encoder (VAE) [40] network architecture was chosen for its modular nature. VAEs consist of two main subnetwork components, an encoder and a generator. In brief, the encoder is tasked with learning the important imaging information from the DXA scans and encoding it into a reduced number of features known as a latent space; the generator is tasked with regenerating the original image from this latent space.

Our encoder network was built on DenseNet121 [41], and the generator consisted of consecutive two-by-two bilinear upsampling and 2D convolutional units modeled after the super-resolution network architecture [42]. Inputs were the DXA images, and the VAE output predictions of the reconstructed DXA scan. A VGG-16 perceptual loss [43] and a custom DXA content loss function [44] were used to compare the predicted image to the original DXA input. Image inputs were augmented with a combination of translation and rotation operations to disincentivize the network from memorizing the data. Destructive augmentations were also used during training: portions of the input image were randomly scrambled and noise was added to force the network to use the surrounding image structure to in-paint [45] the destroyed regions. Hyperparameters, including learning rate, learning rate decay, and batch size, were tuned using an automated Python module entitled Sherpa [46]. An early-stopping criterion halted training when the validation loss ceased to decrease significantly. The holdout test set was used to evaluate the VAE-predicted images; if the VAE was able to produce images with minimal error, we assumed that it had effectively learned the DXA image data type. The weights of the trained VAE were then frozen.

Pseudo-DXA modeling. The trained VAE generator subnetwork provided the starting point for the final 3DO-to-DXA model. A PointNet [47] model was attached to the VAE generator and used to map the 3DO scans into the DXA space (see Supplementary Fig. 2). Due to computational constraints, a preprocessing step was applied on the fly to reduce the 110 K-vertex 3DO scans to 20% of full resolution. Sherpa was again used during the construction of the final Pseudo-DXA model to optimize hyperparameters, and early stopping determined when to halt training, after which the final evaluation was performed on the holdout test set.

Image quality analysis. Normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) are common computer vision image quality metrics and were computed for each test set observation [48,49]. NMAE values range between 0 and 1, where a lower value indicates less error and zero indicates a perfect reconstruction. NMAE is not invariant to positioning differences, so we also use PSNR and SSIM, which are less prone to error introduced by positioning. Higher PSNR values are ideal; for the 16-bit DXA images, 20 dB and higher is considered acceptable. SSIM ranges between 0 and 1, where higher values indicate better image quality and 1 indicates a perfect reconstruction.

Body composition analysis. Quantitative image analysis was performed in addition to evaluations with standard image quality metrics. Hologic, Inc. Apex version 5.5 software was used to derive body composition measures from both the actual and Pseudo-DXA scans, with the NHANES option disabled. An example of a Pseudo-DXA scan analysis is shown in Supplementary Fig. 3; the red lines indicate predefined regions of interest (ROI) that are essential to computing body composition. Although we used the "Auto-Analyze" feature, scans require manual review to ensure the regions are placed correctly. This software is intended for clinical use rather than high-throughput analysis, which was a consideration when determining the size of our final holdout test set.

Special subregional composition analysis. To further demonstrate the validity and utility of Pseudo-DXA scans, we performed analysis on user-defined or special subregions. The two subregions used for this analysis are shown in Supplementary Fig. 6, where R1 is the right thigh ROI and R2 is the left thigh ROI. The ROIs for both thigh subregions were defined similarly to the lower-body segmental DXA analysis performed by Hart et al. [17].
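To make the encoder/generator pairing described in the pretraining subsection above concrete, here is a minimal PyTorch sketch of a DenseNet121 encoder feeding a generator built from repeated two-by-two bilinear upsampling and 2D convolution units. The latent dimension, channel widths, layer counts, and input/output shapes are illustrative assumptions rather than the paper's exact configuration, and the reparameterization step and loss terms are omitted.

```python
# Sketch of a VAE-style encoder/generator in the spirit described above.
# All sizes are assumptions; DenseNet121's feature extractor expects a
# 3-channel input tensor and emits 1024 feature channels.
import torch.nn as nn
from torchvision.models import densenet121

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.backbone = densenet121(weights=None).features  # conv features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mu = nn.Linear(1024, latent_dim)       # latent mean
        self.logvar = nn.Linear(1024, latent_dim)   # latent log-variance

    def forward(self, x):
        h = self.pool(self.backbone(x)).flatten(1)
        return self.mu(h), self.logvar(h)

class Generator(nn.Module):
    def __init__(self, latent_dim: int = 256, out_channels: int = 1):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        blocks, ch = [], 256
        for _ in range(4):  # each unit doubles the spatial resolution
            blocks += [nn.Upsample(scale_factor=2, mode="bilinear",
                                   align_corners=False),
                       nn.Conv2d(ch, ch // 2, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            ch //= 2
        self.ups = nn.Sequential(*blocks)
        self.head = nn.Conv2d(ch, out_channels, kernel_size=3, padding=1)

    def forward(self, z):
        h = self.fc(z).view(-1, 256, 8, 8)
        return self.head(self.ups(h))
```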
The top of each ROI was aligned with the patient's iliac crest, while the bottom was aligned with the space between the patient's femur and tibia. Each ROI was also aligned such that its medial angled edge touched the anterior superior iliac spine and the pubic arch (see Supplementary Fig. 6). All singleton participants' actual DXA and Pseudo-DXA scans were analyzed in this fashion to obtain subregion-specific composition.

Statistical analysis. Regression analysis was used to evaluate the agreement of body composition between Pseudo-DXA images and actual DXA images. FM, FFM, and bone mass were evaluated for the entire body as well as subregions, which include the trunk, arms, and legs. The coefficients of determination (R2) and root mean squared error (RMSE) were reported for all body composition comparisons. Scale weight was evaluated as a covariate, and the adjusted R2 and RMSE values were computed. Select participants received duplicate 3DO and DXA scans; coefficients of variation (%CV) and root mean squared standard error (RMSE(CV)) were calculated to quantify the test-retest or short-term precision [50] of both the Pseudo-DXA model and DXA. Precision was evaluated with respect to fat, lean, and bone mass for the entire body as well as the trunk, arm, and leg subregions.

Reporting summary. Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Results. The SSL data set consisted of 25,606 (48% male and 52% female) total scans from the Health ABC and BMDCS studies (see Table 1). Eight hundred and eleven DXA scans were excluded from the Health ABC study because they were not acquired on a Hologic QDR 4500 or later system, resulting in a different raw image format. Scans whose height and width were not exactly 150 and 109 pixels, respectively, were also excluded; in total, forty-eight Health ABC scans and 2812 BMDCS scans were excluded from SSL.

At the time of this analysis, a total of 714 participants had received both a 3DO and a DXA scan on the same day as part of Shape Up! Adults (see Table 1). Select participants received duplicate scans on both the 3DO and DXA systems for precision monitoring, resulting in 1169 pairs of scans. The paired data set has a holdout test set of 70 unique individuals, of whom 50 received duplicate DXA and 3DO scans. All the following results are reported on the 70 unique participants that the Pseudo-DXA model had not seen during training.

Image quality assessment of the 3DO-to-DXA model. The NMAE, PSNR, and SSIM were computed, and the average values over all predicted images were 0.15, 38.15 dB, and 0.97, respectively. Good-quality 16-bit images have low NMAE near zero, PSNR values greater than 25, and high SSIM values near one [48,49]. Ideal ranges and reference values are shown in Table 2. Some predictions resulted in a high NMAE, with the highest value being 0.38. NMAE is not invariant to position, and positioning differences can lead to worse NMAE metrics while PSNR and SSIM show little to no change. Figure 1 contains scans from representative female and male examples within the holdout test set. Error maps show the majority of the errors around the skin edges and feet, which suggests that positioning differences are the main source of pixel differences.
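The three image quality metrics reported above can be computed with scikit-image, as in the following sketch; normalizing the NMAE by the full 16-bit dynamic range is an assumption, since the paper does not state its normalization constant.

```python
# Sketch: NMAE, PSNR, and SSIM for a predicted 16-bit DXA image against
# its reference. `ref` and `pred` are assumed to be equally shaped arrays.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(ref: np.ndarray, pred: np.ndarray):
    data_range = 2**16 - 1  # dynamic range of a 16-bit image (assumed)
    diff = ref.astype(np.float64) - pred.astype(np.float64)
    nmae = np.mean(np.abs(diff)) / data_range   # 0 = perfect reconstruction
    psnr = peak_signal_noise_ratio(ref, pred, data_range=data_range)
    ssim = structural_similarity(ref, pred, data_range=data_range)
    return nmae, psnr, ssim
```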
Pseudo-DXA quantitative analysis for body composition. Comparing Pseudo-DXA and actual scans (Table 3) resulted in R2 values for whole-body FM, lean soft tissue or FFM, bone mineral content (BMC), and total mass of 0.66, 0.82, 0.72, and 0.89, respectively. RMSEs for whole-body FM, FFM, BMC, and mass were 6.89, 7.66, 0.30, and 5.48 kg, respectively. Standard DXA analysis also reports composition for predefined subregions, which include the trunk, arms, and legs. Comparisons of trunk FM and FFM resulted in R2 of 0.71 and 0.81, respectively; arm FM, FFM, and BMC resulted in R2 of 0.60, 0.84, and 0.71, respectively; and leg FM, FFM, and BMC resulted in R2 of 0.48, 0.83, and 0.80, respectively. [Table 2 reports the means, standard deviations (SD), minimums, and maximums of NMAE, PSNR, and SSIM over all holdout test set predictions.]

For DXA, attenuation is directly related to the mass of the object within the X-ray path. Since the Pseudo-DXA model was not specifically calibrated to account for this relationship, we use scale weight to correct the derived body composition. When correcting with scale weight, R2 values for whole-body FM, FFM, BMC, and total mass were 0.73, 0.90, 0.74, and 0.99, respectively, and weight correction improved the RMSEs to 5.32, 6.56, 0.24, and 4.15 kg for FM, FFM, BMC, and total mass, respectively.

Raw and weight-corrected Bland-Altman plots for each corresponding whole-body and predefined-subregion composition measure are shown in Supplementary Fig. 4 and Supplementary Fig. 5, respectively. There is no obvious positive or negative trend, and the scatter is spread evenly.

Special subregional performance. Subregional analysis was performed on the 70 participants of the holdout test set (see Table 4); if a participant received two DXA scans, the first scan was used for the analysis. The R2s for FM, FFM, and total mass of the right leg were 0.72, 0.77, and 0.90, respectively, with RMSEs of 1.34, 1.27, and 0.72 kg; the R2s for FM, FFM, and total mass of the left leg were 0.70, 0.78, and 0.89, respectively, with RMSEs of 1.25, 1.26, and 0.71 kg. Bland-Altman plots for left and right leg FM, FFM, and total mass are shown in Supplementary Fig. 7; again, there is no obvious positive or negative trend, and the scatter is spread evenly.

Test-retest precision analysis. Fifty participants within the holdout test set received duplicate 3DO and DXA scans. These duplicates allowed us to evaluate the precision of our model against the actual DXA system. Test-retest precision for both DXA and Pseudo-DXA scans was assessed for all whole-body and standard subregional DXA body composition measures, and the results are presented in Table 5. Precision %CV ranged between 0.21% and 7.04% for DXA and between 0.15% and 6.67% for Pseudo-DXA. Precision for DXA and Pseudo-DXA was comparable (p values above 0.05). Pseudo-DXA demonstrated better precision than DXA on whole-body measures of mass, bone mineral density, and VAT, with %CV of 0.15, 0.36, and 6.67, respectively, compared with 0.21, 0.43, and 7.04 for DXA. Pseudo-DXA also had better precision for the subregional measure of trunk FFM, with a %CV of 0.79 compared with 0.80 for DXA.
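For reference, the short-term precision statistics used here can be computed from duplicate scans with the standard formulation for paired measurements; the following is a minimal sketch under that assumption, with illustrative array names.

```python
# Sketch: test-retest precision (RMS-SD and %CV) from duplicate scans.
# `scan1` and `scan2` hold the paired measurements (e.g., trunk FFM in kg),
# one entry per participant.
import numpy as np

def precision(scan1: np.ndarray, scan2: np.ndarray):
    # Per-subject variance for duplicate measurements: d^2 / 2.
    per_subject_var = (scan1 - scan2) ** 2 / 2.0
    rms_sd = np.sqrt(per_subject_var.mean())       # RMS standard deviation
    grand_mean = np.concatenate([scan1, scan2]).mean()
    pct_cv = 100.0 * rms_sd / grand_mean           # percent coefficient of variation
    return rms_sd, pct_cv
```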
Discussion. We present the Pseudo-DXA model, which has successfully learned to predict interior body composition from exterior body shape. From a 3DO scan, Pseudo-DXA generates a DXA scan of high image quality that can be quantitatively analyzed using standard body composition software. Our experiments confirm that soft tissue distribution and bony structure play an important role in determining a unique exterior body shape. While previous work has shown that body shape is predictive of aggregate body composition values [21,23], this work extracts a much richer feature set from 3DO body scans than previous studies; in fact, the body composition values reported in previous works can be derived from the images output by our Pseudo-DXA model.

Pseudo-DXA demonstrated similar, if not indistinguishable, test-retest precision for DXA measurements when compared with the original DXA images. With similar precision and no ionizing radiation, 3DO may be used more frequently than DXA to obtain higher fidelity to change in body composition. As outlined in Gluer et al. [50], taking four measures at baseline and follow-up visits reduces the precision error by a factor of two and thus shortens the monitoring time interval by half [51]. To our knowledge, this work is the first instance in which deep-learning-reconstructed images were shown to be compatible with a clinical medical imaging algorithm and to achieve quantitatively accurate results. Other noteworthy medical image reconstruction models, such as RegGAN for MRI [52,53] or Shan's model for low-dose CT [54], only report aggregate image quality metrics [55], which have limited clinical utility. Achieving quantitatively accurate image reconstruction is more difficult, since errors in the magnitude or relative pixel values, not discernible by eye, can render the images useless for quantitative measures. Attempts at quantitative accuracy were made by Wang et al.
using body shape to create abdominal CT images to quantify visceral adipose tissue [56] and liver steatosis [57]. However, Wang et al. used the CT scans themselves as the shape source, which is perfectly registered with the target CT scans. Although this shows the feasibility of their approach, much effort is still needed to show that 3DO body shape would accurately predict the same measures. In our work, we used 3DO scans of standing patients to predict supine dual-energy X-ray images; unlike their work, ours could not benefit from spatial registration of the body. This work is not without limitations. While our model performed well when predicting whole-body and subregional bone measurements, predictions were derived from the external body shape, which is dominated by fat and muscle distribution. It is reasonable to think that body shape would be highly correlated with bone density, especially for cortical bone, since it makes up 80% of bone mass and has a very slow annual turnover [58]. Thus, Pseudo-DXA models may provide good estimates of what the bone mass should be given the muscle and fat distribution, but they are not a good indicator of higher-turnover diseases that affect trabecular bone. Further, the model may be impacted by pathologies related to tumors and by artifacts related to arthroplasty, since it is unclear how various pathologies manifest as 3DO body shape signals, if at all. Pseudo-DXA images underperformed on some of the DXA compositional values, mainly measures of fat. The size of our paired 3DO and DXA data set was a limitation; however, we utilized self-supervised learning on DXA images to address a portion of this issue. Pretraining the 3DO portion of the Pseudo-DXA model would likely benefit overall performance and can be performed with large unpaired 3DO datasets [59,60]. Lastly, underperformance of our model could be attributed to differences in demographic distributions within the datasets. The self-supervised learning dataset consisted of two cohorts, one young and the other older, with median ages of 13.3 and 73, respectively, while the median age of the supervised learning cohort was 42.1. Although the age distribution of the supervised learning cohort overlapped with the other cohorts, this is a potential source of unavoidable bias that we acknowledge as a limitation.

We conclude that 3DO scanning can provide access to an abundance of information beyond the clinical tools currently used in obesity reduction. Our Pseudo-DXA model is end-to-end, meaning it can take a 3DO scan and produce an image that can be analyzed for clinical measures of composition. The relationship between body composition and shape learned by our model demonstrates clinical relevance and warrants further research into 3DO body shape as an indicator of health. It is important to note that this work is not meant to demonstrate a replacement for DXA body composition, but rather to demonstrate translational health and medical applications of the information afforded by accessible 3DO scans. Lastly, when possible, future medical image reconstruction deep learning work should be held to the standard of performing quantitative analysis, as this will improve clinical translation [61].
[Figure 1: Female (top) and male (bottom) test set examples of model inputs and prediction comparisons: two views of a participant's 3D scan standardized to the T-pose, the actual DXA scan, the Pseudo-DXA model predicted scan, and the error map comparing the actual DXA to the Pseudo-DXA. Error maps represent percent error, where zero and 100 equate to no error and the maximum error, respectively.]
[Table 1: Datasets and demographics.]
[Table 2: Pseudo-DXA image quality performance.]
[Table 3: Evaluation of Pseudo-DXA images for quantitative accuracy. Fat mass (FM), lean or fat-free mass (FFM), visceral adipose tissue (VAT), bone mineral density (BMD), and bone mineral content (BMC) from the whole body and subregions were measured on actual and Pseudo-DXA scans; univariate regression analysis was used to compare predicted and actual values, and coefficients of determination (R2) and root mean squared errors (RMSE) are reported for raw values and for values corrected using scale weight.]
[Table 4: Composition of special subregional analysis.]
[Table 5: DXA vs. Pseudo-DXA test-retest precision. Fifty participants in the holdout test set received duplicate DXA and 3D scans; the test and retest values for each participant were used to compute the percent coefficient of variation (%CV) and root mean square error (RMSE) precision metrics for mass, FM, FFM, VAT, BMD, and BMC from the whole body and subregions, on actual and Pseudo-DXA scans.]
6,434.4
2024-01-30T00:00:00.000
[ "Medicine", "Computer Science" ]
Monocular Vision-Based Robot Localization and Target Tracking

This paper presents a vision-based technique for localizing targets in a 3D environment. It is achieved by combining different types of sensors, including optical wheel encoders, an electrical compass, and visual observations from a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate the target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because it does not require an initialization step that starts the system from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking with an indoor robot and has high potential for extension to surveillance and monitoring by Unmanned Aerial Vehicles equipped with aerial odometry sensors. The experimental results demonstrate centimeter-level accuracy in localizing targets in an indoor environment under high-speed robot movement.

Introduction. Knowledge about the environment is a critical issue for autonomous vehicle operations, and the capability of localizing environmental targets with a robot is in high demand. Reference [1] investigates vision-based object recognition for detecting environmental targets that a robot must reach. Once a robot has detected the targets, knowing their positions becomes an essential task; such tasks include navigation and object tracking. As a result, localizing targets for specific tasks is an essential issue in many applications. UAVs have similar demands [2,3]: they must track the positions of ground objects for reconnaissance or rescue assistance using monocular vision. Moreover, UAVs need to observe their changing surroundings to better understand the aircraft's motion, and the localization system must direct the aircraft to a region of interest after taking ground observations.
In recent years, odometry sensors have been widely used for estimating the motion of vehicles moving in 3D space, such as UAVs and Unmanned Ground Vehicles (UGVs). For instance, Inertial Navigation Systems (INSs) are applied to measure linear acceleration and rotational velocity and are capable of tracking the position, velocity, and attitude of a vehicle by integrating these signals [4]; alternatively, mobile vehicles use the Global Positioning System (GPS) and Inertial Measurement Units (IMUs) for land vehicle applications [5]. However, most inertial navigation sensors are too expensive for some indoor applications. Optical wheel encoders and an electrical compass provide linear and angular velocities, respectively. Both sensors are basic odometry sensors and are widely used owing to their low cost, simplicity, and easy maintenance. Encoders provide a way of measuring velocity to estimate the position of the robot, and compasses are often used to detect the orientation of the robot. Based on this sensor information, motion control is performed and the localization of the robot is estimated [6,7]. Despite some limitations of encoders, most researchers agree that an encoder is a vital part of a robot's navigation system, and the navigation tasks are simplified if the encoder accuracy is improved. In addition, a camera as an extra sensor allows a robot to perform a variety of tasks autonomously. The use of computer vision for localization has been investigated for several decades. The camera has not been at the center of robot localization; most researchers have paid more attention to other sensors, such as laser range-finders and sonar. Nevertheless, vision remains an attractive choice of sensor because cameras are compact, cheap, well understood, and ubiquitous. In this paper, the algorithm is achieved by combining these different types of sensors.

Explanation of the Algorithm
The algorithm is mainly divided into five parts: motion modeling, new target adding, measurement modeling, image matching, and EKF updating. By sequential measurements from the sensors, the EKF is capable of improving the initial estimate of the unknown information while simultaneously updating the localization of the targets and the robot pose. Finally, Cholesky decomposition and forward-and-back substitution are used to compute the inverse covariance matrix efficiently.

(Figure 1: The robot moves from r_i^w to r_k^w. Based on the two views, a parallax is produced. The initial depth can be refined to approach the real distance, and the position of the i-th target is estimated.)

The Origins of the Proposed Algorithm.
A single camera is mounted on our system as one of the sensors. With monocular vision, the depth of a target cannot be measured from a single image; it must be estimated from sequential images. Therefore, the camera has to estimate the depth by observing the target repeatedly, obtaining parallax between the rays captured from the target to the robot at different poses. The orientation of the target in the world coordinate system, by contrast, is estimated from only one image. Y_i^w is a six-dimensional state vector used to describe the position of the i-th target in 3D space:

    Y_i^w = ( r_i^w, θ_i^w, φ_i^w, ρ_i^w )^T,    (1)

where r_i^w ∈ R³ is the location of the robot when it detects the i-th target for the first time, θ_i^w and φ_i^w are the orientation angles of the i-th target, calculated from the pixel location of the target in only one image, and ρ_i^w is the inverse depth. G_i^c is defined as the position of the target with respect to the camera coordinate system. The relationship between the position of the target G_i^c and the current robot localization r_k^w is presented in Figure 1. G_i^w is the position of the target with respect to the world frame. 1/ρ_i^w is defined as the distance from r_i^w to the target, and m(θ_i^w, φ_i^w) is the corresponding unit vector. Consequently, (1/ρ_i^w) m(θ_i^w, φ_i^w) is the vector from r_i^w to the target, and G_i^w is calculated as

    G_i^w = r_i^w + (1/ρ_i^w) m(θ_i^w, φ_i^w).    (2)

G_i^w can be evaluated directly if the depth 1/ρ_i^w is known in advance. However, the depth of the target cannot be measured from a single image. The EKF is therefore applied to estimate 1/ρ_i^w, θ_i^w, and φ_i^w, and these estimates converge toward the correct values under recursive iteration using sequential image measurements. To sum up, (2) is the basic concept and origin of our proposed algorithm.

Motion Modeling.
We first derive the continuous-time system model that describes the time evolution of the state estimate. This model allows us to employ the sampled measurements of the wheel encoders and the compass for state propagation. The process state is described by the vector with components r_k^w, q_k^w, v_k^w, and ω_k^c, where r_k^w is the position of the robot with respect to the world frame, q_k^w is the quaternion that represents the orientation of the robot with respect to the world frame, and v_k^w and ω_k^c denote the linear velocity in the world frame and the angular velocity with respect to the camera frame, respectively. In this system, the control inputs are measured by the wheel encoders and the compass, which provide the linear velocity and the angular velocity, respectively. In addition, the linear velocity is used to simulate acceleration data for the robot so that the evolution of all components of the system state vector is described completely. The acceleration is defined as a_k^w = (v_k^w − v_{k−1}^w)/Δt. In order to have a realistic on-line simulation, we define a_k^w as (v_k^w − v_{k−1}^w)/Δt rather than (v_{k+1}^w − v_k^w)/Δt. Although the former definition is less accurate than the latter, which is closer to the real acceleration, an iterated EKF is still capable of compensating for the resulting errors. The system model describing the time evolution driven by the wheel-encoder and compass measurements is given by (4); the compass is used to measure Δθ_k over each sampling interval Δt. The covariance is predicted and modified by taking into account the control covariance Cov(v_k^w, ω_k^c, a_k^w). We assume that the process noise of the control vector is not correlated with the process state vector Xm_k, so that Cov(Xm_k, v_k^w, ω_k^c, a_k^w) is set to a zero matrix. By using (A.4) in Appendix A, the predicted
covariance of the system model is expressed as in Appendix A. The first term, F_Xm Cov(Xm_k) F_Xm^T, represents the noise from the process state vector, and Q describes the noise from the measurements of the wheel encoders and the compass. The Jacobian matrix F_Xm is given in Appendix A; the Jacobian matrices F_v and F_ω are ∂f/∂v_k^w and ∂f/∂ω_k^c, respectively.

Adding a New Target.
A remarkable feature of our algorithm is that new targets are initialized using only one image. The initialization includes the assignment of the initial state values and of the covariance.

Target Initialization.
Equation (1) is used to define the location of a new target, and the six parameters of Y_i^w are estimated with the EKF prediction-update loop. If the robot senses the 1st target at state k for the first time, the new target information is added and the process state vector is modified accordingly: the expanded process state vector is the original state augmented with Y_1^w. The first observation of the 1st target is made at the current camera location r_1^w. m(θ_1^w, φ_1^w) is defined as a unit vector from the location r_1^w to the 1st target with respect to the world frame. Figure 2 illustrates the relationship between m(θ_1^w, φ_1^w), θ_1^w, and φ_1^w. h_u = (u, v)^T is defined as the pixel location of the 1st target in an undistorted image, and g^c is the location of the 1st target with respect to the camera frame. The location of the 1st target with respect to the world frame, g^w, is obtained by rotating g^c with the quaternion rotation matrix

    R(q) = [ q1² + q2² − q3² − q4²    2(q2q3 − q1q4)           2(q2q4 + q1q3)
             2(q2q3 + q1q4)          q1² − q2² + q3² − q4²    2(q3q4 − q1q2)
             2(q2q4 − q1q3)          2(q3q4 + q1q2)           q1² − q2² − q3² + q4² ],

so that g^w = R(q) g^c. Only the ratios g_x^w/g_z^c and g_y^w/g_z^c, rather than g_x^w and g_y^w themselves, can be computed from h_u; it is impossible to know g_z^c from only one image. However, these ratios suffice to calculate θ_1^w and φ_1^w. The unit vector m(θ_1^w, φ_1^w) is then derived as a function of the orientation angles of the target. The final parameter, 1/ρ_1^w, cannot be measured from only one image; it is estimated from the sequential images with the EKF prediction-update loop and is therefore assigned an initial value 1/ρ_0.

New Covariance Assignment.
The system covariance is modified after adding a new target. By using (B.1) in Appendix B, the new covariance Cov(X_k^−) is expressed as a function of Y_1^w. B_1 is not a zero matrix, because the new target's location is correlated with the estimates of the robot. According to (B.4) and (B.5) in Appendix B, B_1 and A_1 are derived as given there.

(Figure 2: The orientations of the target with respect to the world frame can be estimated as a function of its undistorted pixel location h_u.)
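To make the one-image initialization concrete, the following is a minimal Python sketch of equations (1) and (2), not code from the paper: a target state is built from the camera position at first sight, the bearing angles obtained from the observed ray direction, and an assumed initial inverse depth ρ_0. The angle conventions inside the two functions are assumptions, since this excerpt does not spell them out.

import numpy as np

def init_target(r_w1, g_w_dir, rho0):
    # One-image initialization of Y_1^w = (r_1^w, theta, phi, rho):
    # r_w1   : camera position at the first observation,
    # g_w_dir: ray direction toward the target in the world frame (any scale,
    #          e.g. R(q) g^c with unknown depth; only the ratios matter),
    # rho0   : assumed initial inverse depth.
    gx, gy, gz = g_w_dir
    theta = np.arctan2(gx, gz)                   # azimuth (convention assumed)
    phi = np.arctan2(-gy, np.hypot(gx, gz))      # elevation (convention assumed)
    return np.concatenate([r_w1, [theta, phi, rho0]])

def target_position(Y):
    # Equation (2): G^w = r^w + (1/rho) * m(theta, phi).
    r, theta, phi, rho = Y[:3], Y[3], Y[4], Y[5]
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return r + m / rho

Y1 = init_target(np.zeros(3), np.array([0.2, -0.1, 1.0]), rho0=1.0 / 3.5)
print(target_position(Y1))

With ρ_0 = 1/3.5, target_position(Y1) returns a point 3.5 units along the observed ray; the EKF subsequently refines this guess as parallax accumulates.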
2.4. Measurement Modeling.
The robot moves continuously and records sequential images. This is a process of detecting and identifying the targets. The parameters of the targets have been placed into the process state vector, and the covariance is estimated within the recursive loop.

Sensor Model.
The predicted pixel locations of the targets are estimated as a function of the prior process state vector described in (4). According to (2), the location of the i-th target in the camera frame can be defined in another way. Our measurement sensor is a monocular camera, and a camera sensor model describes how the sensor maps the variables in the process state vector into the sensor variables. By using the pinhole model [16], T_i^c is derived as a function of the undistorted pixel location (u_i, v_i) of the i-th target. Equations (22) and (23) are combined to derive the predicted undistorted pixel location of the i-th target. By applying the image correction, the predicted pixel location of the i-th target with distortion, Zd_k(i), is obtained, where k_1 and k_2 are the coefficients of the radial distortion of the image. The actual image measurement the robot takes is the distorted pixel location rather than the undistorted one; therefore, we have to calculate Zd_k(i) to obtain the measurement innovation for the EKF update.
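The excerpt names the radial-distortion coefficients k_1 and k_2 but elides the correction formula itself, so the following sketch uses the common even-order radial model as a stand-in; the principal point and coefficient values are illustrative only.

import numpy as np

def distort(u, v, cx, cy, k1, k2):
    # Map an undistorted pixel prediction h_u = (u, v) to a distorted location
    # using the common radial model; this is a hedged stand-in, not the paper's
    # exact correction formula.
    du, dv = u - cx, v - cy
    r2 = du * du + dv * dv
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + du * factor, cy + dv * factor

# Example: predicted undistorted pixel (180, 130) in a 320 x 240 image
print(distort(180.0, 130.0, cx=160.0, cy=120.0, k1=1e-6, k2=1e-12))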
Measurement Covariance Assignment.
The measurement covariance is expected to describe the likely variation of the measurement produced by the sensor under the current conditions. The variation is affected by the variables in the process state vector and by the noise that corrupts the sensor. The measurement Jacobian matrix of the i-th target is denoted H_i. When N targets are observed concurrently, they are stacked into one measurement vector Zd_k to form a single batch-form update equation, and the batch measurement Jacobian matrix is assembled accordingly. Based on the estimated measurement covariance, the small area in which the target lies with high probability can be predicted. The template-match technique is used to search for the actual image pixel location of the i-th target in the small search area, whose width and length are kσ_uu(i) and kσ_vv(i), respectively. The coefficient k is a constant that defines how large the search area is: the larger k is, the longer the search takes. There are two advantages to a search area estimated from the measurement covariance. One is that a great deal of time is saved, because the image pixel location of the i-th target is detected in a small search area rather than in the whole 320 × 240 image. The other is that the successful search rate increases dramatically, because the search area contains the true image pixel location with high probability. As a result, it is not necessary to use a complicated object-recognition algorithm such as the Scale Invariant Feature Transform (SIFT) [17] for image matching in our system. Cross-correlation search is applied as our template-match algorithm; its computational load is low due to its simplicity. This approach uses the cross-correlation of images to find a suitable image patch. I_D is defined as the template image patch for the i-th target and is stored in the database. I_t is defined as a candidate image patch in the search area whose width is kσ_uu(i) and length is kσ_vv(i). The cross-correlation value of I_t with I_D is computed over the M² pixels of an image patch, where σ_ID and σ_It are the standard deviations of I_D and I_t, and Ī_D and Ī_t are their mean values. The maximum cross-correlation value of I_t with I_D gives the best template match, and the corresponding patch is taken as the matching pixel patch for the i-th target.

Iterated EKF Updating.
The EKF is one of the most widely used nonlinear estimators due to its similarity to the optimal linear filter, its simplicity of implementation, and its ability to provide accurate estimates in practice. We employ an iterated EKF to update the state. At each iteration step k, the prior process state vector is computed by using (4), and the predicted measurement Ẑd_k is then calculated as a function of Xm_k^− by using (28). Next, the measurement innovation and the measurement Jacobian matrix are computed as Zd_k − Ẑd_k and H_k (of size 2N × (13 + 6N)), respectively. The measurement covariance and the Kalman gain are evaluated as in (33). Finally, the process state vector and its covariance are updated at each iteration step, where Zd_k is the actual measurement, detected by the image-matching search, and Ẑd_k is the predicted measurement, computed from the sensor model. The error in the estimate is reduced by the iteration, and the unknown depths of the targets converge gradually toward their real values.

Fast Inverse Transformation for Covariance Matrices.
The inverse of the measurement covariance has to be computed at each iteration step. Inverting the measurement covariance via the transpose of the matrix of cofactors requires a great deal of running time. In order to handle efficiently the large inverse matrices that arise as the number N of targets varies, Cholesky decomposition is applied to invert the measurement covariance and reduce the running time at each iteration step. The measurement covariance is factored in the form Cov(Zd_k) = LL^T, with L lower triangular, which is possible because the measurement covariance is positive semidefinite. The matrix L is not unique, so multiple factorizations of a given matrix Cov(Zd_k) are possible; a standard method for factoring positive semidefinite matrices is the Cholesky factorization, whose elements are computed by the usual recurrences applied to A = Cov(Zd_k). A matrix equation of the form LX = b or UX = b can then be solved by forward substitution for the lower-triangular matrix L or by back substitution for the upper-triangular matrix U, respectively. In our proposed method, Cholesky decomposition and forward-and-back substitution are combined to invert the measurement covariance and thereby reduce the computational load.
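A minimal sketch of the factorization and the forward-and-back substitution just described, in Python with NumPy. Solving Cov(Zd_k) X = B this way avoids forming the inverse explicitly; the small 2 × 2 example at the end is hypothetical.

import numpy as np

def cholesky_lower(A):
    # Factor a symmetric positive (semi)definite matrix as A = L L^T.
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L

def solve_spd(A, B):
    # Solve A X = B via Cholesky with forward then back substitution.
    L = cholesky_lower(A)
    n = L.shape[0]
    Y = np.zeros_like(B, dtype=float)
    for i in range(n):                      # forward substitution: L Y = B
        Y[i] = (B[i] - L[i, :i] @ Y[:i]) / L[i, i]
    X = np.zeros_like(B, dtype=float)
    for i in reversed(range(n)):            # back substitution: L^T X = Y
        X[i] = (Y[i] - L[i + 1:, i] @ X[i + 1:]) / L[i, i]
    return X

# Hypothetical 2x2 measurement covariance: solve_spd(S, I) gives S^{-1},
# so the product below is approximately the identity matrix.
S = np.array([[4.0, 1.0], [1.0, 3.0]])
print(solve_spd(S, np.eye(2)) @ S)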
The Analysis of the Algorithm
In this section, two key points about the proposed algorithm are presented. The first is an analysis of the details of the algorithm showing that depth information is not needed, even though the algorithm is designed to track the 3D locations of the targets. The second explains why the simple template-match technique, cross-correlation search, is sufficient.

Depth Information Analysis in the Proposed Algorithm.
Depth information would normally have to be known for a robot to localize targets in 3D space using monocular vision. However, the proposed algorithm does not need the depth information to localize targets. As presented in (14), the depth is not needed to calculate θ_1^w and φ_1^w. Similarly, it might seem that the depth must be known to compute k_q in (19), since the depth g_z^w appears in ∂θ_i^w/∂g^w. However, ∂θ_i^w/∂q_k^w can still be computed without knowing g_z^w, because in the product with ∂g^w/∂q_k^w only the ratios g_x^w/g_z^c, g_y^w/g_z^c, and g_z^w/g_z^c enter, and these are computable from (14). In the same way, the depth would seem to be needed to compute k_hd in (19): ∂g^w/∂g^c = R^wc and ∂h_u/∂h_d can be computed without g_z^c, but ∂θ_i^w/∂g^w cannot be calculated without g_z^w. Nevertheless, ∂θ_i^w/∂h_d can still be estimated without knowing g_z^c and g_z^w if ∂θ_i^w/∂g^w and ∂g^c/∂h_u are combined, because the product again depends only on the ratios furnished by (14). Therefore, ∂θ_i^w/∂h_d is computed without g_z^c and g_z^w by using (44) and by applying (14) to calculate g_x^w/g_z^c, g_y^w/g_z^c, and g_z^w/g_z^c. Based on these results for ∂θ_i^w/∂q_k^w and ∂θ_i^w/∂h_d, together with (14), the problem that the depth information g_z^c and g_z^w cannot be measured from only one image is solved. This is a very important characteristic of the proposed algorithm.

Analysis Regarding Using Cross-Correlation Search.
Generally speaking, cross-correlation search is thought of as a simple template-match algorithm, and it is not as robust as SIFT. However, I_D is still detected correctly in the search area by cross-correlation search, because the small search area is estimated by the iterated EKF and includes the actual pixel location of the target with high probability. As shown in Figure 3, the predicted pixel locations of the targets are estimated by using the sensor model and are denoted by green crosses; the red crosses mark the corresponding target pixel locations detected by applying cross-correlation search. The observation that the red crosses lie close to the actual target pixel locations confirms that it is feasible for the proposed algorithm to use cross-correlation search as its template-match algorithm.
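A short sketch of the cross-correlation search itself: the normalized cross-correlation of a candidate patch I_t with the stored template I_D, scanned only over the EKF-predicted window of size roughly kσ_uu × kσ_vv. The array sizes and window parameters in the example are illustrative, not values from the paper.

import numpy as np

def ncc(patch, template):
    # Normalized cross-correlation of candidate patch I_t with template I_D:
    # zero-mean products divided by the standard deviations, as described above.
    p = patch - patch.mean()
    t = template - template.mean()
    denom = p.std() * t.std() * p.size
    return float((p * t).sum() / denom) if denom > 0 else -1.0

def search_window(image, template, center, half_w, half_h):
    # Scan only the small EKF-predicted window instead of the whole
    # 320 x 240 image, returning the best-matching pixel location and score.
    th, tw = template.shape
    best, best_uv = -2.0, center
    u0, v0 = center
    for v in range(v0 - half_h, v0 + half_h + 1):
        for u in range(u0 - half_w, u0 + half_w + 1):
            if u < 0 or v < 0:
                continue
            win = image[v:v + th, u:u + tw]
            if win.shape != template.shape:
                continue
            score = ncc(win, template)
            if score > best:
                best, best_uv = score, (u, v)
    return best_uv, best

# Hypothetical example: the template was cut from the image, so the search
# should recover its true location (150, 100) with a score of 1.0.
rng = np.random.default_rng(0)
img = rng.random((240, 320))
tpl = img[100:108, 150:158].copy()
print(search_window(img, tpl, center=(148, 98), half_w=6, half_h=6))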
Experimental Results
In order to validate the proposed algorithm for localizing targets when ground truth is available, we performed a number of experiments. The algorithm is implemented in C++ and runs on a PC with a 2.0 GHz microprocessor. The monocular sensor is a single camera with a wide-angle lens, chosen so that more targets can be observed in one image and the tracking rate can be higher. The camera's field of view is 170° with a focal length of 1.7 mm. The image measurements, received at a rate of 15 Hz, are distorted with noise σ = 20 pixels. The reported experiments are conducted under high-speed robot motion: the average velocity in each case is higher than 20 cm/sec, and the maximum velocity over all cases is 69.11 cm/sec. For the duration of the experiments, the initial distance between the camera and the targets ranges from 1.68 m to 5.76 m. The unknown depth of each target is estimated from sequential images with the EKF, and six cases (3.0 m, 3.5 m, 4.0 m, 4.5 m, 5.0 m, and 5.5 m) are used as default depths in each experiment. All of the sensors mounted on our system are shown in Figure 4. There are measurement errors caused by camera distortion when using a camera with a wide-angle lens. Before validating the proposed algorithm, we therefore performed an experiment to estimate the distortion noise by making use of artificial landmarks, choosing the corners of the landmarks as targets. The undistorted pixel locations of the corners are predicted by the pinhole model, and image correction is then applied to compute their predicted pixel locations with distortion. Owing to the wide-angle lens, there is an error between the predicted pixel location with distortion and the actual pixel location detected by the cross-correlation search. In the proposed algorithm, the predicted undistorted pixel location is estimated in terms of the prior process state vector, and a distortion error is introduced when transforming the undistorted pixel locations of the targets to the distorted ones; this error must therefore be taken into consideration very carefully. Based on this experiment, the distortion noise is taken to be 20 pixels. Figure 6 shows the real environmental features used as targets, whose positions must be estimated for a task in which the robot passes through a door. Since the developed system does not require an initial setting at the first iteration, it is practical under dynamic motion whenever a target is detected, with no training process on environmental features needed for robot localization. Two different moving paths with varied linear velocities and orientations are examined, as shown in Figure 7. The initial true depth between the robot and the doors is about 3.5 m for both paths: path A is a straight forward path, and path B approaches the doors while moving to the left. The experimental results are summarized in Table 1; the average error of the proposed approach is about 10 cm. According to the results shown in Figures 8 and 9, the camera alone cannot provide measurement information precise enough for MonoSLAM [18] to correct the prior process state variables under high-speed movement when the control input is not provided. The results also show that the image distortion noise σ = 20 pixels is too large for the MonoSLAM [18] model. Owing to the combination of hybrid sensor information, the proposed MVTL algorithm is capable of localizing targets with an image distortion
noise of σ = 20 pixels, and its average error is lower than 0.11 m. This demonstrates that the MVTL algorithm is robust enough to track targets under higher-speed movement with larger measurement noise.

Acceleration Approximation-Based Error Comparison.
We also performed an experiment comparing encoders with an IMU. We chose an encoder as our sensor instead of an IMU in the indoor environment. The acceleration a_k^w is approximated as (v_k^w − v_{k−1}^w)/Δt from the encoders in order to produce realistic on-line simulation data. Using the previous velocity v_{k−1}^w rather than the future velocity v_{k+1}^w to simulate the acceleration of the robot at state k introduces additional acceleration noise; however, an iterated EKF is robust enough to compensate for the errors arising from this low-cost sensor and from this definition of acceleration. The acceleration error of a simulated IMU is lower than 0.001 m/s². Based on the results shown in Table 2, the localization errors when using encoders are still acceptable even though the previous velocity is used to simulate the acceleration at iteration state k. This confirms that the errors are reduced gradually by the EKF recursive loop.

Performance of Depth Initial Conditions.
In terms of the experimental results shown in Figure 10, we conclude that the ability to localize targets depends on the parallax between different views rather than on the distance between the robot and the target. This is a crucial viewpoint for analyzing the stability of the system in terms of means and deviations. We analyze not only single-target localization but also the details of multiple-target localization. In the experiments above, the MVTL algorithm localizes targets under higher-speed motion, and all six default depth values converge to similar estimates with small errors.

Experiments on Fast Inverse Matrix Transformation.
Finally, we performed an experiment to verify the use of Cholesky decomposition with forward-and-back substitution. In this experiment, the total testing time is 4.2 sec at a 15 Hz image rate. First, the transpose of the matrix of cofactors is used to invert measurement covariance matrices whose sizes range from 2 × 2 to 8 × 8. The experimental results in Table 3 show that the running time becomes long when the system inverts a 6 × 6 or larger matrix. However, the running time is reduced dramatically when the system inverts an 8 × 8 matrix using Cholesky decomposition and forward-and-back substitution. The test result shows that the system is capable of running in real time even when localizing four targets concurrently.

5.1. Conclusion.
Based on the different experiments we performed, it has been shown that the proposed algorithm is able to localize targets with centimeter-level accuracy under higher-speed movement. Besides, we have validated that it is practical to use odometry sensors to track targets as well. Not only does the system start the recursive procedure without an initial setting, but the robot can also move rapidly while localizing targets. The EKF-based algorithm is practical and robust in reducing sensor errors even though low-cost sensors are used in our system. The efficiency of the proposed algorithm is impressive owing to Cholesky decomposition and some computational tricks.
Conclusion and Future Work
In terms of the experimental results shown in Figure 10, we conclude that the ability to localize targets depends on the parallax between different views rather than on the distance between the robot and the target. Consequently, the proposed algorithm has high potential for extension to surveillance and monitoring with UAVs equipped with aerial odometry sensors. Likewise, it can be used widely for other robot tasks.

Future Work.
The targets are assumed to be stationary landmarks in the proposed algorithm. It would be an interesting and challenging research direction to modify the algorithm to track a moving target. This type of technique is necessary and is going to be widely used in many applications. For instance, a UGV has to know the intended motions of other moving objects in order to plan its navigation path for the next state. Therefore, adding an approach for tracking moving targets to the proposed algorithm is a good direction for future work. Another important direction is to find ways or algorithms to improve the accuracy of the measurement data from the low-cost sensors, because more accurate measurement data would reduce the localization errors.

Appendices
The proofs of the system covariance are presented in the appendices. They derive the modified covariance in the motion model at each iteration step and the new covariance after adding new targets.

A. The Modified System Covariance in the Motion Model
The process state vector is predicted by the system process model. It is determined by the old state and the control inputs applied in the process, with Y_k a control vector.

(Figure 3: Detect the actual target pixel location by using cross-correlation search.)

(Figure 4: The encoder shown in (a) is used to measure the linear velocity and to simulate the acceleration of the robot. The electrical compass shown in (b) is applied to measure the angular velocity of the robot. As shown in (c), the camera with a wide-angle lens is the additional measurement sensor. The three sensors are combined to estimate the localization of targets.)

(Figure 5: The experiments are designed to localize the natural landmarks as targets with varied linear and angular velocity patterns along different moving paths, including straight and curved paths.)

(Figure 6: The predicted pixel locations of the targets are denoted by green crosses and estimated by using the sensor model. The pixel locations of the red crosses are the corresponding target pixel locations, detected by applying cross-correlation search.)

4.1. Results of Target Localization.
To demonstrate the performance and practical applicability of the proposed MVTL algorithm, experiments were conducted in which a mobile robot passes through doors by tracking feature targets located on the door while localizing itself simultaneously.

4.1.1. Target Localization Experiments.
Once a robot has to pass through doors, it must recognize the accurate door positions. The experiment shown in Figure 5 is designed to verify the accuracy of the MVTL algorithm with the door as the tracked target.
(Figure 8: Comparison of the target 1 depth errors (meters) between MVTL and MonoSLAM [18] on Path B.)

(Figure 10: The unknown depth of the target is estimated from sequential images with the EKF; six cases (3.0 m, 3.5 m, 4.0 m, 4.5 m, 5.0 m, and 5.5 m) are assumed as default depths for each experiment case.)

2.5. Image Matching by Using Cross-Correlation.
The image patch information of each target whose variables are set in the process state vector is stored in the database when the robot senses that target for the first time. The predicted pixel locations of the targets with distortion are estimated by using (25). The next important issue is how to search for the actual pixel locations of the targets in a 320 × 240 image. Image matching is a fundamental and vital aspect of many problems in computer vision, including motion tracking and object recognition. Useful image features are invariant to rotation, image scaling, changes in illumination, and 3D camera viewpoint, and how to allow a single candidate image to be correctly matched with high probability remains a popular research topic.

Table 1: Error comparison of target tracking.
Table 2: Comparison of localization errors between the encoder and IMU.
Table 3: Comparison of execution time.
7,667.8
2011-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Solitary-Wave Solutions of Benjamin–Ono and Other Systems for Internal Waves: II. Dynamics

Considered here are two systems of equations modeling the two-way propagation of long-crested, long-wavelength internal waves along the interface of a two-layer system of fluids, in the Benjamin–Ono and the Intermediate Long-Wave regime, respectively. These systems were previously shown to have solitary-wave solutions, decaying to zero algebraically for the Benjamin–Ono system and exponentially in the Intermediate Long-Wave regime. Several methods to approximate solitary-wave profiles were introduced and analyzed by the authors in Part I of this project. A natural continuation of this previous work, pursued here, is to study the dynamics of the solitary-wave solutions of these systems. This is done by computational means using a discretization of the periodic initial-value problem. The numerical method used here is a Fourier spectral method for the spatial approximation coupled with fourth-order, explicit Runge–Kutta time stepping. The resulting fully discrete scheme is used to study computationally the stability of the solitary waves under small and large perturbations, the collisions of solitary waves, the resolution of initial data into trains of solitary waves, and the formation of dispersive shock waves. Comparisons with related unidirectional models are also undertaken.

Introduction
In the precursor [1] to this work, the authors considered Benjamin–Ono and Intermediate Long-Wave systems, denoted (1), for x ∈ R and t ≥ 0. These were derived in [2] and are here written in unscaled, dimensionless variables. The system (1) is a one-spatial-dimensional model for the two-way propagation of long-crested internal waves along the interface of a two-layer system of homogeneous, inviscid fluids of densities ρ_j, j = 1, 2, with bottom-layer density ρ_2 > ρ_1 for static stability, and depths d_j, j = 1, 2. The upper layer is taken to be bounded above by a rigid surface (the so-called rigid-lid approximation), and the lower layer is bounded below by a horizontal, flat, impermeable bottom. As mentioned, these models allow for bi-directional propagation, unlike the original Benjamin–Ono and Intermediate Long-Wave equations (abbreviated BO and ILW henceforth). However, they do assume the waves are long-crested, so there is no appreciable variation in the horizontal direction orthogonal to the principal direction of propagation. The independent variable x is proportional to position in the direction of propagation, t is proportional to elapsed time, and ζ = ζ(x, t) is the deviation of the interface from its rest position at the point x at time t. The dependent variable u = u(x, t) is a horizontal velocity-like variable, γ = ρ_1/ρ_2 < 1 is the density ratio, and α is a modeling parameter. In terms of the non-dimensional parameters (where a and λ denote, respectively, a typical amplitude and wavelength of the interfacial wave, and δ = d_1/d_2 is the depth ratio), the physical regimes under which the Euler equations for internal waves are consistent with the two-dimensional version of (1) are (see [2]):
• Intermediate Long-Wave regime: μ ∼ ε² ∼ ε₂² ≪ 1, μ₂ ∼ 1. (This means that the upper layer is shallow and the interfacial deformations are small with respect to the depths of both layers.)
• Benjamin–Ono regime: μ ∼ ε² ≪ 1, ε₂² ≪ 1, μ₂ = +∞ (corresponding to a very deep lower layer).
The two regimes are mathematically distinguished in (1) by the definition of the Fourier multiplier operator H. This takes the form H = ∂_x T_{√μ₂}, with T_{√μ₂} a singular integral operator defined by a principal-value integral (P.V. standing for the Cauchy principal value), in the ILW case, and H = ∂_x H, where H is the Hilbert transform, in the BO case. In terms of the corresponding Fourier symbols, if f̂ denotes the Fourier transform of f, then Ĥf(k) = g(k) f̂(k), k ∈ R, with g as given in (4). Briefly described now are some mathematical properties of the initial-value problem for (1) (see [1] for details). The system (1) is linearly well posed if and only if α ≥ 1 [2], and the nonlinear initial-value problem in the ILW case has been shown in [3] to be well posed, locally in time, in suitable Sobolev spaces if α > 1. In the same paper, again when α > 1, similar well-posedness results for the BO system are suggested to hold because of the convergence of solutions of the ILW system to those of the BO system as μ₂ → ∞. On the other hand, to the best of our knowledge, the system (1) for α ≠ 0 has only the simple conservation laws (5), whereas the case α = 0, which is ill posed [4], admits two additional invariant quantities and a Hamiltonian structure emerges. Another important property of (1) is the existence of solitary-wave solutions of the form ζ(x, t) = ζ(x − c_s t), u(x, t) = u(x − c_s t), moving with constant speed c_s ≠ 0 and having profiles ζ = ζ(X), u = u(X) which, along with their derivatives, decay to zero as X = x − c_s t → ±∞. They must be solutions of the system (6). Existence of smooth, small-amplitude solutions of (6) was recently proved by Angulo-Pava and Saut in [5]. In the same paper, the decay as |X| → ∞ was shown to be exponential in the ILW case and algebraic (as 1/|X|²) in the BO case, just as for the solitary-wave solutions of their unidirectional counterparts. The numerical generation of solitary-wave solutions of (1) was discussed by the authors in [1], which is Part I of the present study (see also [6]). Three iterative methods (one of Newton type, the classical Petviashvili iteration, and a modification of it) were proposed to solve iteratively a discretization of (6) based on Fourier collocation approximation of the corresponding periodic problem. The three schemes were compared for accuracy, and some properties of the computed solitary waves that emerged from the experiments were pointed out. These concerned the speed-amplitude relationship, the convergence of the solitary waves of the ILW system to those of the BO system, and comparisons with solitary-wave solutions of both the classical and regularized versions of the unidirectional BO and ILW equations. The regularized versions of these unidirectional equations, which result from using the lowest-order relation ∂_x = −∂_t to modify the dispersion relation, will be denoted henceforth by rBO and rILW, respectively.
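Since the symbol g defined in (4) drives everything that follows, a small sketch may help. The BO symbol |k| and an ILW symbol with the limiting value g(0) = 1/√μ₂ are consistent with the properties quoted later in the text (g positive and increasing, with ILW tending to BO as μ₂ → ∞), but the precise ILW expression used below, k coth(√μ₂ k), is our reading of a formula that is elided in this excerpt.

import numpy as np

def symbol_g(k, mu2=None):
    # Fourier symbol of H: |k| in the BO regime (mu2 = infinity);
    # k*coth(sqrt(mu2)*k) in the ILW regime, with g(0) = 1/sqrt(mu2).
    k = np.asarray(k, dtype=float)
    if mu2 is None:                       # BO case: very deep lower layer
        return np.abs(k)
    s = np.sqrt(mu2)
    out = np.full_like(k, 1.0 / s)        # limiting value at k = 0
    nz = k != 0
    out[nz] = k[nz] / np.tanh(s * k[nz])
    return out

# As mu2 grows, the ILW symbol approaches the BO symbol |k|.
print(symbol_g([0.0, 1.0, 2.0], mu2=100.0))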
With a reasonably good grasp of the solitary-wave solutions in hand, it is natural to turn to their dynamics as solutions of the time-dependent problem. To this end, the corresponding periodic initial-value problem (IVP) is discretized in space with a spectral method and in time with the explicit, fourth-order Runge-Kutta (RK) scheme. Error estimates for the spectral semi-discretization were proved in [6]. These estimates naturally depend upon the regularity of the solutions of (1) and, in particular, yield spectral convergence in the smooth case. In addition, the fully discrete method was used in Part I of this project to check the accuracy of the computed solitary-wave profiles. Its very satisfactory performance gave the confidence needed to make use of it for the developments in the present essay. From this computational perspective, several properties of (1) are studied. The first group of experiments analyzes the stability of the solitary-wave solutions. Small perturbations of the traveling-wave profiles, determined as in [1], are taken as initial conditions for the time-dependent numerical scheme. The evolution of the corresponding numerical approximation suggests that the solitary waves are asymptotically stable. Indeed, what is observed is that a perturbed initial solitary wave develops into a principal part, which appears to converge rapidly to a new solitary wave, together with smaller waves of a purely dispersive nature. There are two groups of these dispersive waves, traveling in opposite directions. One might expect that internal solitary waves would become unstable, or break up into a series of solitary waves, when perturbed by a large amount. Surprisingly, the experiments suggest that this is not the case for the BO and ILW systems. Large perturbations of their solitary waves appear to develop into a single, large solitary wave. The amplitude of the emergent solitary wave is related to the size of the perturbation in a way discussed later. The solitary wave is followed by dispersive tails of comparatively small amplitude, quite similar to those appearing when small perturbations are considered. Another issue studied in the present paper, which appears related to their stability, is the interaction of solitary waves. Since (1) admits two-way propagation, experiments on both head-on and overtaking collisions are carried out. As the interactions are expected to be inelastic, additional information provided by the experiments concerns the emerging waves and the nature of the dispersion resulting from the collisions. We complete the computational study of properties related to the stability of solitary waves by analyzing numerically the resolution of general initial data into trains of solitary waves along with dispersive tails. In particular, the role of the energy and mass of the initial data, represented in terms of its amplitude and wavelength, is examined in the context of this resolution property.
The computational study of (1) is concluded with a discussion of two additional issues. The first concerns the relationship between the system and the corresponding unidirectional equations. More specifically, we are interested in the dynamics of (1) when presented with data corresponding to unidirectional propagation. To this end, we analyze computationally the evolution of the interface according to the system using initial data derived from solitary-wave solutions of the unidirectional model, and vice versa. We find that the systems compare qualitatively, and to some extent quantitatively, with their unidirectional counterparts in the unidirectional regime. Indeed, in both cases the numerical experiments reveal that the evolution that emerges is similar to that obtained from small perturbations of the exact solitary-wave solution of the relevant system. The second point concerns the formation of dispersive shock waves (DSWs henceforth). When they exist, one expects such waves to involve two scales, one fast and oscillatory and a second one, slow and modulational, cf. [7,8]. Thus, such waves consist of a wavetrain with a local, rapidly oscillating structure, while the envelope wave parameters themselves change on a much slower time scale. In the present paper, the formation of DSWs associated with (1) is investigated by integrating numerically two problems typically related to DSW formation: the Riemann problem and the dam-break problem. The paper is structured as follows. Section 2 is devoted to a description and an accuracy study of the numerical method used for the computational work to follow. In Section 3, the stability of the solitary waves is analyzed via a group of experiments concerning the evolution of small and large perturbations of solitary-wave initial data. The investigation of head-on and overtaking collisions of solitary waves, as well as the resolution property, is presented in Section 4. A discussion of some relationships between the unidirectional models and the associated bi-directional systems is the subject of Section 5, while Section 6 contains the computational study of DSWs. Some concluding remarks may be found in the final Section 7. Overall, the dynamics of the ILW and BO systems appear qualitatively quite similar. To keep the length of the paper under control, the present paper mostly reports experiments on the dynamics of the BO system, partly because BO is computationally more challenging owing to the slower decay of its solitary waves.

Description of the numerical method
In this section the fully discrete scheme for the numerical approximation of (1) is described. This method was already used in [1] to check the accuracy of the computed solitary-wave profiles: the computed profiles were taken as initial conditions, and it was monitored by how much the resulting solutions evolve away from an appropriate translation of the profile. While the problems under consideration are set on the spatial domain R, the computations take place in a spatially periodic setting. The commonplace practice of approximating IVPs on R with localized initial data via periodic problems goes back at least to the 1960s (see e.g. Zabusky and Kruskal [9] and Tappert [10]). Preliminary arguments justifying such approximation for unidirectional models can be found in [11], but see the more recent work of H.
Chen [12], where explicit error estimates, valid over long time scales depending on the length of the period, are obtained. The same type of analysis can be applied to the bidirectional case. Taking this point as settled, attention is focused on the periodic IVP for (1) on a long enough interval [−l, l], with initial conditions given by two smooth periodic functions ζ_0, u_0, and where the dispersion operator H takes the form of the corresponding periodic operator; that is, H is a multiplier on the Fourier coefficients whose discrete symbol is given as in (4) (cf. [1]). For an even integer N ≥ 1, let S_N be the space of trigonometric polynomials of degree at most N. Let {x_j = −l + jh, j = 0, ..., 2N − 1} be a uniform mesh of nodes on [−l, l] with h = l/N. Let P = P_N denote the L²-projection onto S_N, and let If = I_N f denote the trigonometric polynomial interpolating f at the nodes x_j. For T > 0, the semidiscrete Fourier-Galerkin approximation is defined as the mapping (ζ_N, u_N) : [0, T] → S_N × S_N satisfying (7) for t ≥ 0, with initial data given by the projections of ζ_0 and u_0. The spectral discretization (7) was introduced in [6], and L²-error estimates were established there. For the experiments below, and to take advantage of the numerical generation of solitary waves performed in [1], the method was implemented in pseudospectral form; algebraically, this is equivalent to the collocation formulation [13]. With a small abuse of notation, (ζ_N, u_N) : [0, T] → S_N × S_N will also denote the semidiscrete Fourier collocation approximation. The semidiscrete system (8)-(9) is conveniently formulated in the nodal form (10), where the symbols ∘ connote Hadamard products, D_N is the Fourier pseudospectral differentiation matrix of order 2N, and H_N stands for the corresponding discrete version of the operator H. In terms of the k-th discrete Fourier coefficients of ζ_N and u_N, the discrete Fourier transform allows us to write (10) in the form (11), where k̃ = πk/l, g_k = g(|k̃|), −N ≤ k ≤ N, and g is as given in (4). The initial-value problem for the semidiscrete system (11) is integrated in time using the explicit, 4th-order Runge-Kutta method.

Remark 2.1. It is worth mentioning some additional properties of the semidiscrete schemes. Note first that for the continuous, periodic initial-value problem for the system (1) on (−l, l), the analogs of (5) are also independent of time. A natural discretization of (12) is given by hI_{1,h}, hI_{2,h}, defined through the vector 1_N with all components equal to one; the angle brackets ⟨·,·⟩ here denote the Euclidean inner product in C^N. If F_N stands for the N × N matrix associated with the discrete Fourier transform, then H_N (respectively D_N) is diagonalized by F_N, with diagonal entries determined by the corresponding Fourier symbols. Therefore, taking the inner product with 1_N in the first and second equations of (10) reveals that the quantities (13) are preserved up to round-off error by the semidiscrete approximation.
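The following is a minimal, self-contained skeleton of the kind of Fourier pseudospectral/RK4 time stepping described above. The right-hand side is only a placeholder: the exact form of system (1) is not reproduced in this excerpt, so rhs merely illustrates the pattern (derivatives and the multiplier g applied diagonally in Fourier space, nonlinear products formed in physical space).

import numpy as np

l, N = 512.0, 4096
x = -l + (2.0 * l / N) * np.arange(N)
k = (np.pi / l) * np.fft.fftfreq(N, d=1.0 / N)   # wavenumbers k~ = pi*k/l
g = np.abs(k)                                    # BO symbol; ILW would differ

def rhs(zhat, uhat):
    # Placeholder dynamics, illustrative only (NOT system (1) itself):
    # nonlinearities evaluated in physical space, x-derivatives and the
    # multiplier g applied diagonally in Fourier space.
    zeta, u = np.fft.ifft(zhat).real, np.fft.ifft(uhat).real
    zt = -1j * k * np.fft.fft(u + zeta * u)
    ut = -1j * k * np.fft.fft(zeta + 0.5 * u**2) - g * zhat
    return zt, ut

def rk4_step(zhat, uhat, dt):
    # Classical explicit fourth-order Runge-Kutta step for the two-component
    # semidiscrete system in Fourier coefficients.
    k1 = rhs(zhat, uhat)
    k2 = rhs(zhat + 0.5 * dt * k1[0], uhat + 0.5 * dt * k1[1])
    k3 = rhs(zhat + 0.5 * dt * k2[0], uhat + 0.5 * dt * k2[1])
    k4 = rhs(zhat + dt * k3[0], uhat + dt * k3[1])
    return (zhat + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            uhat + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# One hypothetical step from a localized pulse at rest:
zhat = np.fft.fft(np.exp(-x**2 / 25.0))
uhat = np.fft.fft(np.zeros_like(x))
zhat, uhat = rk4_step(zhat, uhat, dt=0.01)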
Stability of solitary waves under small and large perturbations
In our study of the stability of solitary waves of both the ILW and BO systems, several types of numerical experiments were performed. The general procedure is always the same: first, an approximate solitary-wave profile is generated using one of the methods described in [1]; this computed profile is then perturbed in one or both of its components ζ and u; finally, the perturbed profile is used as an initial condition for the fully discrete scheme described in Section 2, and the evolution of the resulting numerical approximation is monitored. In all cases, the solitary waves appear to be stable under small perturbations. The behavior is illustrated by the following example. Consider the BO system with α = 1.2, γ = 0.8, and ε = √μ = 0.1, and the solitary wave with c_s = 0.57 and amplitude approximately a = 3.9129. This solitary wave is perturbed by multiplying the function ζ_0(x) by the factor r = 1.1; thus the perturbed initial data is (rζ_0, u_0), where (ζ_0, u_0) is the unperturbed solitary-wave profile. This represents a perturbation of about 10%. Figure 1 shows the evolution of the perturbed solitary wave into a new, stable solitary wave followed by a rather interesting dispersive tail. The dispersive tail consists of a right- and a left-traveling part, as indicated in Figure 1. In this figure we only show the ζ-component of the solution, since u behaves similarly. Analogous behavior has been observed in the case of Boussinesq systems for surface waves [14]. The accuracy of the results is guaranteed in this case by using N = 32,768 nodes in the interval [−512, 512] (so h = 3.125 × 10⁻²) and Δt = 0.01. The results reported here are representative of a whole series of tests with r ≠ 1 but |r − 1| fairly small. The results obtained in the ILW case are quite similar to those presented in Figure 1. The main difference is that, because the solution decays exponentially to 0, the simulations are more easily carried out, since a smaller spatial interval is required. This stable behavior seems to persist for larger perturbations. For example, Figure 2 presents the evolution of a solitary wave of the ILW system with an amplitude perturbation r = 1.5 in the ζ component only. Keeping the parameters of the ILW system the same as in the BO system, the unperturbed speed of the solitary wave was taken to be c_s = 0.51; with these specifications, the amplitude was approximately a = 1.7. The evolution of the perturbed initial condition leads to a new solitary wave of amplitude approximately a = 2.35 and two counter-propagating dispersive tails. A series of simulations with larger values of r showed similar behavior for both the BO and ILW systems. The emerging solitary wave had roughly the amplitude rA, where A is the amplitude of the unperturbed solitary wave. These results speak to the extreme stability of the solitary-wave solutions of both systems.

Dispersive tails
Observe in Figure 1(d) that the tail generated by the perturbation of a solitary wave consists of two dispersive groups, a left- and a right-propagating part. These seem to be well separated, and there appears to be no interaction between them or with the emerging solitary wave. This phenomenon has also been observed in Boussinesq systems for surface waves. It is worth remarking that in the Boussinesq-system context there were cases where an interaction between the two parts of the dispersive tails was observed; this seemed to occur when blowup phenomena were imminent (see [14]).
A little analysis, based on observation of the outcome of the numerical simulations, casts some light on the remarks above. Define two quantities c_γ and R as in (14), and note that R < 1. As seen in [1], the speed of the solitary waves appears to be bounded from below by c_γ in the case of the BO system and by R c_γ in the case of the ILW system. The solutions of these systems that start from a perturbed solitary wave evolve into a solitary wave of some speed c_s, together with a small remainder, which we are calling the dispersive tail. We suppose c_s > c_γ in the BO case and c_s > R c_γ in the ILW case. Small solutions of the system (1) approximately satisfy the linearized system (15), where X = x − c_s t is a frame of reference moving with a solitary wave of speed c_s > c_γ in the BO case and c_s > R c_γ in the ILW case. If the second equation of (15) is used to solve for (∂_t − ∂_X)u in terms of ζ, a single equation (16) for ζ is obtained. Solutions of (16) are superpositions of plane waves ζ(X, t) = e^{i(kX − ω(k)t)}, with ω = ω_±(k) determined by the associated dispersion relation (17). As before, g is given in (4); note that g is positive and increasing for x ≥ 0, and define g(0) = 1/√μ₂ in the ILW case. If one assumes α ≥ 1, which is required for well-posedness, then the function φ appearing in the dispersion relation has properties (18) implying that, for all wavenumbers k > 0, the phase speed of the dispersive tail relative to the speed of the solitary wave satisfies the inequalities (19). The inequalities (19) show in which direction the plane-wave components of the dispersive tails propagate relative to the solitary wave. Since φ(0) < 1, the absolute phase speed v_+(k) + c_s of the right-traveling waves e^{i(kX − ω_+(k)t)} is less than or equal to c_γ, and similarly the left-traveling waves e^{i(kX − ω_−(k)t)} have |v_−(k) + c_s| ≤ c_γ. Moreover, components of longer wavelength (smaller k) are faster than those of shorter wavelength. Examine now the associated group velocities, described by a function ψ(x) for x > 0. Note first that (18) implies that ψ(x) ≤ φ(x) for x ≥ 0. On the other hand, according to (4), x g′(x) = g(x) + x² C(x), where C(x) ≤ 0 for x > 0, and so x g′(x) ≤ g(x) for x > 0. This and the hypothesis α ≥ 1 imply that 0 ≤ ψ(x) ≤ φ(x) for x ≥ 0, and therefore, for all wavenumbers k > 0, the group velocities satisfy (20). By using (14), (17), and the hypotheses on c_s, it is deduced that c_s > φ(0) c_γ in both the BO and ILW cases. Then (20) means that there are two dispersive groups, one traveling to the left and one to the right (with the solitary wave), but with a group velocity smaller than c_s.

Solitary-wave interaction and the resolution property
The present section is concerned with the interactions of solitary waves. We remind the reader that for the unidirectional BO and ILW models, overtaking interactions are elastic owing to the complete integrability of the equations. For the regularized versions of these models, the collisions are apparently not elastic, as witnessed by the simulations in [15,16]. Our purpose now is to study computationally the interactions of solitary waves of the BO and ILW systems (1). Since the models support propagation in both directions, two types of collisions can be considered.
Head-on collisions
The experiments begin with a symmetric head-on collision for the BO system, illustrated by the following example. Two computed solitary waves with the same speed c_s = 3.2 (with α = 1.2, γ = 0.1, ε = 0.5, √μ = 0.1) on [−4096, 4096], propagating in opposite directions, are used. (To reverse the direction of propagation, simply note that if (ζ, u) is a solitary wave propagating to the right with speed c_s, then (ζ, −u) is a solitary wave propagating to the left with speed −c_s.) The peak amplitudes of the two solitary pulses were initially placed at x = −800 and x = 800, respectively. (In this and all the remaining experiments of this section, h = 0.125 and Δt = 0.01.) The head-on collision is illustrated in Figure 3 at several time instances of the evolution of the numerical approximation of the deviation of the interface. The interaction appears to be inelastic and similar to the head-on collision in Boussinesq systems for surface water waves (see e.g. [17]). During the interaction, the solitary waves change slightly in shape and their speed decreases. After the inelastic head-on collision, the emerging solitary waves have altered amplitudes and speeds, and they shed a symmetric dispersive tail propagating in both directions. (A magnification of the dispersive tail is shown in Figure 3.) The peak amplitude of the numerical approximation during the interaction is presented in Figure 4(a), while Figure 4(b) shows the locations of the peak amplitudes. The amplitudes and their locations have been computed using Newton's method, as described in [1]. The amplitude before the collision was A = −0.362898, while after the (symmetric and inelastic) interaction it was A = −0.362912, which is a 0.39% change as reported; a similarly small phase change can be observed. In addition, we performed asymmetric head-on collisions, with similar results and the same conclusions. This inelastic character is also observed in the case of the ILW system. Due to the exponential decay rate of the solitary-wave profiles in this case, the numerical simulations do not require such a large spatial domain for accurate approximation.

Overtaking collisions
We also consider a second type of interaction between solitary waves, namely overtaking collisions. These occur when a wave of one speed is placed behind another with a smaller speed. Recall again the elastic behavior of this type of collision for the unidirectional ILW and BO equations. On the other hand, the regularized versions of these unidirectional models feature inelastic collisions (according to the experiments reported in [15,16] for the BO case).
For the BO system, the typical behavior of this interaction is illustrated by the following experiment. Taking α = 1.2, γ = 0.1, and ε = √μ = 0.1 on the interval [−4096, 4096], two solitary waves with speeds c_s = 3.5 and c_s = 3.1 (traveling to the right), with amplitudes A_s = −4.7157 and A_s = −0.8978 respectively, were considered as initial data for the numerical integration of (1). The peaks of these two solitary waves were sited initially at x = −1000 and x = 1000. The numerical computations used N = 65,536 modes (corresponding to h = 0.125) and Δt = 0.01. For comparison, the same experiment was performed for the corresponding solitary-wave solutions (with the same speeds) of the rBO model. The solitary-wave solutions of rBO have an exact formula, so it is not necessary to generate them numerically; having the same speeds as those for the BO system, their amplitudes are not the same, due to the different speed-amplitude relation (see [1]). Figure 5 presents the ζ profile during the interaction at different propagation times. The experiment suggests an inelastic overtaking collision of the solitary waves for both the BO system and the rBO equation. (Figure caption fragment: (f) numerical approximation at times t = 0, 4800, 5000, 5500, 6000, 8000; after the interaction, there is a small difference in the speeds of the larger emergent solitary waves.) We also note (see especially 5(d)) that the interaction of the solitary waves in the case of the rBO equation looks more inelastic (more nonlinear) and lasts longer. Perhaps as a result of this longer interaction time, the solitary waves of the rBO equation appear to possess a larger phase change after the interaction. The more inelastic character of the rBO case is seen, for example, in the fact that after the interaction the larger solitary wave of the rBO equation has a change in amplitude of about 0.68%, in comparison with the BO system, where the change is about 0.51%. As expected, dispersive tails were generated after the inelastic interaction. For the experiments under discussion, these emerging tails are shown in more detail in Figure 6 for the BO system and in Figure 7 for the rBO equation. The magnification given in Figure 6(b) shows that, after the interaction of the solitary waves of the BO system, an N-shaped wavelet traveling to the left is generated. This does not appear in the case of the rBO equation; see Figure 7(b). The formation of these types of waves has also been observed in unidirectional and bidirectional surface water-wave models, cf. [18,19]. The rest of the dispersive tails are comparable in both size and wavelength for the two models; see Figure 6(d). The dispersive tails generated in the case of the ILW system consist of the same two parts as in the BO system and include the N-shaped wavelet. The collision is thus clearly inelastic, as is the case for the rILW equation. The result of these interactions is illustrated in Figure 8.

Resolution into solitary waves
Another important property, which is related to the stability of the solitary waves, is the process whereby solitary waves emerge from the evolution of more general initial conditions. This is often referred to as resolution into solitary waves, and it explains why these special solutions attract so much attention. To illustrate this resolution property in the current context, a series of runs is reported. The initial conditions in this section are Gaussian perturbations of the interface with zero initial velocity, i.e.
Resolution into solitary waves

Another important property, which is related to the stability of the solitary waves, is the process whereby solitary waves emerge from the evolution of more general initial conditions. This is often referred to as resolution into solitary waves, and it explains why these special solutions attract so much attention. To illustrate this resolution property in the current context, a series of runs is reported. The initial conditions in this section are Gaussian perturbations of the interface with zero initial velocity, given by (21), where A and λ are constants (the amplitude and wavelength of the pulse). Such initial conditions are reminiscent of the early stages of tsunami generation [20]. For these initial conditions, the resolution seems to be determined by the energy and mass of the pulse, which in this case are related to A and λ, as seen in what follows.

Consider the BO system with α = 1.2, γ = 0.1 and ε = √µ = 0.1 with initial conditions (21), where A = 2 and λ = 10. These initial conditions are unable to trigger the generation of solitary waves. Instead, two counterpropagating dispersive wavetrains appear, as shown in Figure 9. Indeed, we monitored the L∞-norm of the solution as a function of t and observed that it decreased steadily as t increased from 0 to 2000. This decay of the maximum norm is one of the hallmarks of a solution containing no solitary-wave component. The result is not surprising, as the branch of solitary-wave solutions of the BO system appears to have energies that are bounded away from 0.

Increasing either the amplitude or the wavelength of the initial condition has the effect of activating the generation of solitary waves. For example, the evolution of (21) with A = 5 and λ = 10 is shown in Figure 10. Note that two counter-propagating solitary waves are generated, followed by dispersive tails. Similarly, increasing the wavelength of the initial condition by taking λ = 20 and holding A = 5 produces the four solitary waves shown in Figure 11, while Figure 12 shows the resolution of the initial condition with A = 1 and λ = 100 into at least four, and very probably six, solitary waves.
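A sketch of the diagnostic used above: set up the Gaussian data and track the maximum norm over the evolution. The explicit form ζ0(x) = A exp(−(x/λ)²) is an assumed shape, since equation (21) itself is not reproduced in this excerpt, and `evolve` stands for one step of whatever time integrator is in use.

```python
import numpy as np

L, N = 4096.0, 65536
x = -L + 2.0 * L * np.arange(N) / N

A, lam = 2.0, 10.0
zeta = A * np.exp(-(x / lam) ** 2)     # assumed Gaussian form of (21)
u = np.zeros_like(x)                   # zero initial velocity

# Monitor the L-infinity norm: steady decay signals a purely dispersive
# solution; leveling off signals an emerging solitary-wave component.
history = []
for n in range(2000):
    # zeta, u = evolve(zeta, u, dt)    # placeholder for the time stepper
    history.append(np.max(np.abs(zeta)))
```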
Comparisons with solitary waves of the BO equation

As discussed in Part I of this work [1], the rBO and rILW equations describe the propagation of internal waves in mainly one direction and have solitary-wave solutions given in closed form. The BO and ILW systems should be able to describe waves propagating in one direction similarly to the unidirectional models, provided that the initial data are 'unidirectional'. In the present section, the ability of the two-way systems to model the waves of their unidirectional counterparts, and especially solitary waves of the rBO and rILW equations, is investigated. The results for the BO system are shown below; the experiments for the ILW system are easier to perform and the overall conclusions are the same.

The tricky point here is that the unidirectional models require initial data only for the interfacial variable, ζ(x, 0) = ζ0(x), whereas the two-way propagation systems also require an initial velocity u(x, 0) = u0(x). Guided by the rigorous theory of surface-wave Boussinesq systems [21], we use the higher-order approximation to the unknown velocity that applies in the unidirectional case (see Equation (11) of [1]). Thus, if we consider a solitary wave ζ0(x) of the rBO equation, given exactly in Equations (13)-(16) of [1], then the unknown initial velocity u0(x) for the BO system is consistently chosen by the higher-order approximation (22).

For the first reported experiment, we took α = 1.2, γ = 0.8, ε = √µ = 0.1. Figure 13 presents the evolution of the BO system with initial data for ζ0 taken to be a solitary-wave solution of the rBO equation with c_s = 0.53 and amplitude A = −1.6, together with the u0 derived from (22). The numerical approximation evolves into a principal profile of solitary-wave type, along with a trailing tail. Most of the resulting wave travels to the right with the rBO solitary wave, while a very small portion travels in the opposite direction. After a simulation long enough for the tail to separate and disperse, the numerical approximation appears to be a new solitary wave of amplitude A ≈ −1.78, differing from that of the initial profile by about 11.25% in the maximum norm. What is particularly notable here is that this 'unidirectional' data does indeed produce a signal that, in the main, propagates in only one direction.

Similarly, a simulation was run of the rBO equation with initial condition given by a solitary-wave solution of the BO system. The solitary-wave profile was generated following the numerical techniques introduced in [1], with speed c_s = 0.53. The computed speed-amplitude relation gives a numerical profile of amplitude A ≈ −1.63. The IVP for the rBO equation is approximated by a fully discrete scheme based on the same procedures described in Section 2 for the BO system. The evolution of the corresponding numerical approximation is shown in Figure 14. As in the previous experiment, this shows a solution of the rBO equation consisting of a solitary-wave profile followed by a tail. As time evolves and the tail disperses, the approximation tends to an emerging solitary wave with amplitude A ≈ −1.47, which is a difference in uniform norm of about 10% when compared with the amplitude of the initial profile.

Dispersive shock waves

This section is devoted to studying numerically the formation of dispersive shock waves (DSWs) in the regularized ILW and BO equations, as well as in the ILW and BO systems. Since the results are broadly similar for the ILW and BO models, only those corresponding to the BO-type models will be shown.

The formation of DSWs is associated with the propagation of a shock wave through a medium where dispersion effects dominate dissipation (see, e.g., Hoefer and Ablowitz [7]). Under these conditions, the changes in the medium represented by the shock wave typically occur in the form of a structure involving two scales: a rapidly oscillatory wave scale and a slow modulational scale. Examples of DSWs appear in many disciplines, such as water waves, plasma physics, and optics (see [22, 23, 8] and the references therein).
From the mathematical point of view, media featuring DSWs are often described by a system of conservation laws together with dispersive and dissipative terms. In the present study, consideration is given to the situation where dispersion dominates, so that dissipation can be safely ignored. For completely integrable equations, the slow modulation of a rapidly oscillating wave can be described using Whitham's modulation theory [22, 23, 24, 8]. In such problems, exact solutions of the corresponding Whitham modulation equations can be constructed in terms of the Riemann invariants of the conservation law. These in turn can be used to obtain, from suitable initial and boundary data, asymptotic representations of dispersive shock waves. This was applied in [25] to the dispersive Riemann problem associated with the KdV equation; see also [23, 26]. Other important integrable systems have been similarly analysed; see for example [27] for the mKdV equation, [28] for the BO equation, and [29] and [30] for the NLS equation.

In non-integrable systems, it is usually not possible to write the modulation equations in Riemann invariant form, and several alternative strategies to study the formation of DSWs have emerged, e.g. the method of El [31] or the approximate method of Marchant and Smyth discussed in [32].

In many cases, DSWs are generated from discontinuities in the initial data. Our computational study will consider two initial-value problems for the rBO equation and the BO system with discontinuous initial data, namely the Riemann problem and the dam break problem.

Note first that if one neglects the dispersive terms in the BO system (1), a system of hyperbolic conservation laws emerges. This system can be written in matrix-vector form,

v_t + A(v) v_x = 0, (24)

where v = (ζ, u)^T and A(v) denotes the corresponding coefficient matrix. The matrix A has two distinct eigenvalues λ±, and the associated Riemann invariants R± of the system (24) both satisfy the equations

∂_t R± + λ± ∂_x R± = 0.

Thus, along the characteristic curves of the system, the quantities R± are constant in time.

The Riemann problem

Consider first the BO system with Riemann-type initial data, which is to say initial conditions that are piecewise constant with a single jump at the origin,

(ζ, u)(x, 0) = (ζ−, u−) for x < 0, (ζ+, u+) for x > 0, (25)

with ζ±, u± satisfying the compatibility condition (26). Such initial data usually generate what is referred to as a simple DSW. Taking the slightly easier case ζ+ = u+ = 0, the condition (26) then determines the remaining state.

To simulate Riemann-type problems with the numerical method described in Section 2, the initial data must be slightly smoothed; otherwise our Fourier-spectral method cannot handle the problem. Instead of (25), smoothed initial data ζ0, u0 of the form (28)-(29) are posited. Both ζ0 and u0, while smooth, feature large gradients reminiscent of the pure Riemann data (25) (see Figure 15). Since ζ(−x, 0) = ζ(x, 0), x ∈ R, and ζ(x, 0) decays to zero exponentially as |x| → ∞, the corresponding periodic IVPs, with initial conditions (28)-(29) for the BO system and (28) for the rBO equation, are integrated for x in a long spatial interval (−l, l).
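The smoothing step can be illustrated as follows. Since the exact expressions (28)-(29) are not reproduced in this excerpt, the tanh-regularized, even, exponentially decaying profile below is only an assumed stand-in with the qualitative features the text describes (large gradients, symmetry, decay at the ends of the interval), and the numerical values are invented for illustration.

```python
import numpy as np

l, N = 1024.0, 16384
x = -l + 2.0 * l * np.arange(N) / N

zeta_minus = 0.4        # left state; illustrative value
delta = 2.0             # smoothing width of the regularized jump
half = 256.0            # half-width of the smoothed plateau

# Even, exponentially decaying regularization of Riemann-type data:
# close to zeta_minus on (-half, half), close to 0 outside.
zeta0 = 0.5 * zeta_minus * (np.tanh((x + half) / delta)
                            - np.tanh((x - half) / delta))
u0 = zeta0.copy()       # placeholder; (26) would fix u0 in terms of zeta0
```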
For l = 1024 and step sizes h = 0.125 and ∆t = 0.001, the numerical simulations with these initial conditions are shown in Figures 15 and 16, respectively. In the case of the BO system, we observe the formation of a right-propagating simple DSW plus a dispersive rarefaction wave. Recall that, in the absence of dispersion, one expects a classical shock wave followed by a classical rarefaction wave. Since the compatibility condition (26) is not exact for the BO system, the initial conditions also generate leftward-propagating dispersive tails. This does not seem to be the case for the rBO equation, as shown in Figure 16; there, the formation of the simple DSW and the rarefaction wave does not appear to involve additional structures.

The dam break problem

Next, the so-called dam break problem is introduced. This is a special case of the Riemann problem in which u± = 0 in (25). For the experiment below, we consider the periodic IVP for the BO system with initial condition ζ(x, 0) on [−1024, 1024] given by (28) and u(x, 0) = 0. The resulting numerical simulation, see Figure 17, appears as a waveform consisting of two wave packets, each of which comprises a dispersive shock plus a rarefaction wave. The packets propagate symmetrically in opposite directions.

Concluding remarks

Studied here have been two asymptotic models for the propagation of internal waves along the interface of a two-layer fluid system, with the upper layer bounded above by a rigid lid and the lower layer bounded below by a rigid, featureless horizontal bottom. The systems were derived in [2] in the Intermediate Long Wave and the Benjamin-Ono regimes, respectively. The Benjamin-Ono system is the limiting case of the Intermediate Long Wave system when a very deep (theoretically infinite) lower layer is assumed. The corresponding one-dimensional pseudo-differential systems (1) have the same quadratic-type nonlinearities, while the linear parts of both are nonlocal.

In Part I of this study [1], the authors presented several numerical techniques to generate approximate solitary-wave solutions of these systems, whose existence, in the small-amplitude case, was proved in [5]. A comparative study with solitary-wave solutions of related, unidirectional models of ILW and BO type was also developed.

In the present essay, we further study the solitary-wave solutions of (1), analyzing by computational means some aspects of their dynamics. To this end, the periodic IVP is solved numerically on a spatial interval long enough that it approximates well the IVP on R. The spatial approximation of our scheme is via a spectral Fourier-Galerkin method (implemented as pseudospectral; the resulting method is algebraically equivalent to a Fourier collocation scheme). This is coupled with explicit, fourth-order Runge-Kutta time stepping. The L²-convergence of the spectral semidiscretization was proved in [6], and the resulting fully discrete scheme was already used in [1] to check the accuracy of the computed solitary-wave profiles. These facts, together with the convergence studies reported in [1], provide confidence in the accuracy of the simulations reported herein.
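The scheme just summarized (Fourier pseudospectral in space, explicit fourth-order Runge-Kutta in time) has the following generic skeleton in Python. The right-hand side shown is the KdV equation, used purely as a stand-in; for the BO or ILW systems one would substitute the appropriate nonlocal operators and the two-component unknown.

```python
import numpy as np

L, N = 1024.0, 4096
x = -L + 2.0 * L * np.arange(N) / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * L / N)

def dxn(u, order=1):
    """Spectral derivative of a periodic grid function."""
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(u)))

def rhs(u):
    # Stand-in right-hand side (KdV: u_t = -u u_x - u_xxx); replace with
    # the nonlocal BO/ILW system right-hand side in applications.
    return -u * dxn(u) - dxn(u, 3)

def rk4_step(u, dt):
    # Classical explicit 4th-order Runge-Kutta step.
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

u = np.exp(-(x / 10.0) ** 2)          # sample initial datum
for _ in range(100):
    u = rk4_step(u, dt=1e-3)
```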
Specifically, our computational study is concerned with the following aspects: the dynamics of the solitary waves under small and large perturbations, the behaviour of overtaking and head-on collisions of solitary waves, and the resolution of initial data into solitary waves. We also compare computationally the bidirectional and unidirectional models corresponding to the same physical regime. The study is completed by analyzing numerically the formation of dispersive shock waves. Some of the conclusions of the study are the following:

• Under small perturbations, the solitary waves seem to be nonlinearly stable, in the sense that an initially perturbed solitary-wave profile evolves into a modified solitary wave with slightly different amplitude and speed, along with a dispersive tail with two components, one traveling to the left and one to the right. The existence of these two dispersive groups was analysed via small-amplitude, plane-wave solutions of (1), linearized about the rest state.

• Testing the stability of solitary waves of the BO and ILW systems under large perturbations, we found that they are still extremely stable. They evolve into a new solitary wave of an enhanced magnitude plus a dispersive tail of relatively small amplitude.

• The resolution property is illustrated with experiments involving the evolution of initial data of Gaussian type. The initial condition develops into a train of waves, whose number depends on the energy of the initial Gaussian pulse (represented through its amplitude and wavelength parameters), with a dispersive tail behind.

• The interactions are observed to be inelastic, something that is expected on account of the lack of Hamiltonian structure and the fact that there are only linear conserved quantities. Since the models are bidirectional, two types of collisions are considered. In the case of overtaking collisions, the behaviour of the waves after the interaction was compared to that of similar experiments for the unidirectional regularized BO equation. The main difference observed in the experiments seems to be the formation, in the case of the BO system, of N-shaped wavelets in the tail, traveling in the direction opposite to that of the emerging solitary waves. In the case of head-on collisions, the interactions develop dispersive tails propagating in both directions, as expected from the analysis of the small solutions of the linearized system mentioned above.

• Comparisons between the unidirectional and bidirectional models, when the initial data for the bidirectional model are well prepared for unidirectionality, show remarkable similarities. This suggests that a theorem of the sort derived in [21] for surface waves likely holds for internal waves as well.

• The numerical experiments concerning the formation of DSWs were focused on the classical Riemann and dam break problems. In the case of the Riemann problem, suitable initial data generated a simple DSW along with a dispersive rarefaction wave, with an additional small dispersive structure traveling in the opposite direction. We checked that, from the same initial condition, the regularized BO equation developed similar structures but without the small additional dispersion. In the case of the dam break problem, the experiments suggest that, from initial data with zero flow, the approximate solution of the BO system develops two DSWs plus rarefaction structures traveling in opposite directions.
Figure 1: Nonlinear stability of a BO solitary wave with c_s = 0.57 and amplitude approximately a = 3.9129, perturbed initially by about 10% in only the ζ-component of the system.

Figure 2: Nonlinear stability of a solitary wave with c_s = 0.51, perturbed initially by about 50% in only the ζ-profile of the ILW system.

Figure 3: Symmetric head-on collision of solitary-wave solutions of the BO system with c_s = 3.2. (a)-(e): numerical approximation at times t = 100, 250, 313, 400, 1500; (f) is a magnification of the marked part of (e).

Figure 4: Location of peak amplitudes of the numerical approximation during the symmetric head-on collision of the BO system, see Figure 3. Notice the small, retarded phase shift after the interaction.

Figure 5: Overtaking collision of two solitary waves of the BO system (solid lines) and the rBO equation (dashed lines) with speeds c_s = 3.5 and c_s = 3.1. (a)-(f): numerical approximation at times t = 0, 4800, 5000, 5500, 6000, 8000. After the interaction, there is a small difference in the speeds of the larger emergent solitary waves.

Figure 6: The dispersive tails after the interaction of the solitary waves of the BO system appearing in Figure 5. (b) and (d) are magnifications of the marked parts of (a) and (c), respectively. Notice the N-shaped, left-propagating wave.

Figure 7: The dispersive tails after the interaction of the solitary waves of the rBO equation from Figure 5. Graphs (b) and (d) are magnifications of the marked parts of (a) and (c), respectively. There is no N-shaped residual here.

Figure 8: The dispersive tail and the N-shaped wavelet produced after the overtaking collision of two solitary waves of the ILW system, compared to the solution of the rILW equation with the analogous solitary waves, cf. Figure 5.

Figure 12: Resolution of a Gaussian initial condition (21) with A = 1 and λ = 100 into solitary-wave solutions of the BO system.

Figure 13: Evolution of a solitary wave of the rBO equation used as initial condition for ζ0, with u0 determined by (22), for the BO system. The maximum negative excursion of the solution as a function of time is also displayed. Observe that the negative excursion levels off as t grows, consistent with a traveling-wave structure.

Figure 14: Evolution of a solitary wave of the BO system used as an initial condition for the rBO equation. The maximum negative excursion of the solution as a function of time is also shown.
Gravi-Burst: Super-GZK Cosmic Rays from Localized Gravity

The flux of cosmic rays beyond the GZK cutoff ($\sim 10^{20}$ eV) may be explained through their production by ultra high energy cosmic neutrinos, annihilating on the relic neutrino background, in the vicinity of our galaxy. This process is mediated through the production of a $Z$ boson at resonance, and is generally known as the $Z$-Burst mechanism. We show that a similar mechanism can also contribute to the super-GZK spectrum at even higher, ultra-GZK energies, where the particles produced at resonance are the Kaluza-Klein gravitons of weak scale mass and coupling from the Randall-Sundrum (RS) hierarchy model of localized gravity. We call this mechanism Gravi-Burst. We discuss the parameter space of relevance to Gravi-Bursts, comment on the possibility of its contribution to the present and future super-GZK cosmic ray data, and place bounds on the RS model parameters. Under certain assumptions about the energy spectrum of the primary neutrinos, we find that cosmic ray data could potentially be as powerful as the LHC in probing the RS model.

Introduction

About 25 years ago, Greisen, Zatsepin, and Kuzmin (GZK) noted that the observed spectrum of proton, photon, and nucleus cosmic rays must virtually end at energies above ∼ 10²⁰ eV, the GZK cutoff [1]. Their key observation was that Ultra High Energy Cosmic Rays (UHECRs) deplete their energy through various interactions with the 2.7 K Cosmic Microwave Background Radiation (CMBR) over distances of order 10-100 Mpc. Above 10¹⁹ eV, nuclei are photo-dissociated by interactions with the CMBR, and a 10²⁰ eV proton loses most of its energy over a distance of ∼ 50 Mpc. The analogous distance for a photon of the same energy is ∼ 10 Mpc, due to e⁺e⁻ pair production on the radio background [2]. However, over the past three decades, different experiments have observed a total of about 20 events at or above this 10²⁰ eV bound [3]. Since there seem to be no feasible candidates for the sources of these cosmic rays, such as Active Galactic Nuclei, within a GZK distance ∼ 50 Mpc of the earth, the observation of these events poses a dilemma. A number of proposals have been made to resolve this puzzle [4]. One such proposal for the origin of the super-GZK events, due to Weiler, is based on the observation that UHECR neutrinos can travel over cosmological distances with negligible energy loss [5,6]. Therefore, if these neutrinos are present in the universe, they could in principle produce Z bosons on resonance through annihilation on the relic neutrino background within a GZK distance of the earth. The highly boosted subsequent decay products of the Z will then be observed as primaries at super-GZK energies, since they do not have to travel cosmological distances to reach us. This mechanism for producing super-GZK cosmic rays is referred to as Z-Burst. The Z-Burst mechanism has the advantage that it does not assume physics beyond the Standard Model (SM) and is, therefore, minimalistic. However, any extension of the SM that provides a particle X which couples to νν̄ and decays into the usual primaries can in principle contribute to the super-GZK spectrum beyond the range presently observed. Assuming a mass m_ν ∼ 10⁻²-10⁻¹ eV for neutrinos, as suggested by atmospheric oscillation data, the particle X must have a mass of order the weak scale (∼ 1 TeV) to be relevant to the spectrum near the GZK cutoff.
In this paper, we will show that the massive Kaluza-Klein (KK) tower of gravitons in the Randall-Sundrum (RS) localized gravity model [7] provides viable candidates for the particle X. The RS model is based on a truncated five-dimensional anti-de Sitter (AdS₅) spacetime with two 4-d Minkowski boundaries. Our visible 4-d universe and all fields associated with the SM are assumed to be confined on one of these boundaries, referred to as the TeV brane, with the other 'Planck' brane boundary separated from us by a fixed distance r_c ∼ 10 M_Pl⁻¹, the compactification scale along the fifth dimension; M_Pl is the reduced Planck mass. The RS geometry is such that the induced metric on the visible TeV brane generates the weak scale from a 5-d scale M₅ ∼ M_Pl, without fine-tuning, through an exponentiation. The interested reader is referred to Refs. [7,8] for the details of the RS model and its numerous phenomenological implications. Here, however, we mention that a distinct feature of this model is that it predicts the existence of a tower of spin-2 KK gravitons, G⁽ⁿ⁾ (n = 1, 2, 3, ...), starting at the weak scale, and with weak scale mass splittings and couplings. Phenomenological studies [8,9] suggest that the lowest lying KK graviton G⁽¹⁾ can be as light as ∼ 400 GeV. The G⁽ⁿ⁾ couple to all particles, due to their gravitational origin, and can be produced by νν̄ annihilation, eventually decaying into qq̄, gg, γγ, etc. Thus, the G⁽ⁿ⁾ can in principle contribute to the super-GZK spectrum in a way that is similar to the Z-Burst contribution. We call this graviton mediated process Gravi-Burst. Since the Z and the G⁽ⁿ⁾ have different couplings and branching fractions to the observed primary particles, we expect that experiments may be able to distinguish between Z-Burst and Gravi-Burst initiated primaries. Also, depending on the behavior of the flux of neutrinos at super-GZK energies, more than one member of the KK graviton tower could contribute to Gravi-Burst. In this case, the RS model predicts a characteristic multi-peaked behavior for future data at super-GZK energies and beyond. However, collider experiments may be a better place to directly search for the graviton tower, with cosmic ray data providing complementary information, as we will discuss in detail later. In the next section, we present the necessary formulae for estimating the super-GZK flux in the Z-Burst model. We adapt this approach to Gravi-Burst and give the corresponding rate estimates in this scenario. Section 3 contains our results for a range of RS model parameters and a comparison with the Z-Burst predictions. We will show that if the neutrino spectrum falls sufficiently slowly with energy, we can use GZK data to greatly restrict the parameter space of the RS model. Our conclusions are given in Section 4.

The Burst Mechanism

The burst mechanism relies on several well-motivated assumptions, given the successes of the SM, Big Bang Nucleosynthesis, and the observation of neutrino oscillations due to the existence of finite neutrino masses. This scenario is most easily demonstrated in terms of the conventional Z-Burst. This model proposes that a high energy flux of neutrinos (and antineutrinos) is produced by some as yet unknown astrophysical source and collides with the relic background neutrinos in the galactic neighborhood. The origin of this flux is unspecified, but constraints on its magnitude and energy dependence exist from Fly's Eye data [10].
If the flux at the Z-pole is sufficient to explain the super-GZK excess, then the Fly's Eye data tells us that the fall-off with energy of the neutrino flux at somewhat lower energies goes at least as fast as E^{−0.9}. A similar energy behavior may be expected above the Z-pole. Due to the finiteness of neutrino masses, one would expect the local density of neutrinos to be enhanced over the uniform cosmological background, due to their gravitational clustering around the galaxy [6]. Massive neutrinos within a few Z widths of the right energy will then resonantly annihilate into hadrons with the local anti-neutrinos (and vice versa) at the Z-pole, with a large resonant cross section ⟨σ_ann⟩; here B_h^Z ≃ 0.70 is the hadronic branching fraction of the Z entering its evaluation. We assume that only left-handed neutrinos exist and employ the narrow-width approximation. Given a neutrino mass hierarchy and the Super-Kamiokande atmospheric oscillation results [11], we expect one of the neutrinos to have a mass near ≃ 0.05-0.06 eV. (This follows from using the latest two-parameter fit to the Super-K data, which yields a value for ∆m² of 3.2 × 10⁻³ eV², and by supposing that one of the neutrino masses is at least a few times larger than the second.) The locally produced 30 or so hadrons from the decay of the Z are then the effective primaries for the super-GZK events that are observed with energies in excess of ∼ 10²⁰ eV. (In principle, there being three neutrinos, we should consider three different cases depending on their masses. This is a straightforward extension of the present discussion.) If the source of the initial neutrinos is randomly distributed in space then, as shown by Weiler [6], we can calculate the total rate F_Z of super-GZK events induced by ν-ν̄ annihilation at the Z pole within a distance D of the Earth; in the corresponding expression the narrow-width approximation has again been employed, F_ν(E_R) is the incident neutrino flux evaluated at the resonant energy E_R, and n(x) is the column number density of neutrinos. In deriving this expression it is assumed that the product ⟨σ_ann⟩ ∫₀^D n(x) dx ≪ 1, as is the case for the Z in the SM and in the RS model we consider below. In practice we are interested in rather close annihilation, i.e., values of D of order the GZK limit for protons, which is ∼ 50 Mpc. Weiler has shown that, for reasonable ranges of the parameters, the resulting value of the flux F_Z can indeed explain the ≃ 20 events beyond the GZK bound observed over the last few decades. We note that the model in its present form predicts that all of the super-GZK events are relatively well clustered in energy just beyond ∼ 10²⁰ eV, and that essentially no events should exist beyond those induced near the Z pole. Obviously, if such 'ultra'-GZK events were observed, then there must be new processes which can lead to enhanced annihilation cross sections beyond those arising in the SM. In the RS model with the SM gauge and matter fields lying on the TeV brane, there exists a Kaluza-Klein tower of massive, weak scale gravitons, G⁽ⁿ⁾, with essentially electroweak-strength couplings. There are basically two parameters in this model: the ratio c = k/M_Pl, with k a mass parameter of magnitude comparable to the five-dimensional Planck scale, and the mass of the lowest lying graviton state. The masses of the tower KK states relative to the first non-zero mode are given by ratios of roots of the Bessel function J₁, and c is expected to lie in the range ∼ 0.01-1 [7,8,9].
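The kinematics underlying the burst mechanism is simple: annihilation on a relic neutrino of mass m_ν at rest is resonant when s = 2 m_ν E equals the squared mass of the exchanged particle, so E_R = m_X²/(2 m_ν). A quick check in Python, using the Super-K-motivated neutrino mass quoted above:

```python
# Resonant energy for annihilation on a relic neutrino at rest:
# s = 2 * m_nu * E  =>  E_R = m_X**2 / (2 * m_nu).
m_Z = 91.19e9                    # eV
m_nu = 0.06                      # eV, Super-K-motivated value from the text
E_R_Z = m_Z ** 2 / (2.0 * m_nu)
print(f"E_R(Z) = {E_R_Z:.2e} eV")          # ~7e22 eV

# With ~30 hadrons per Z decay, each effective primary carries ~2e21 eV,
# i.e. in excess of the ~1e20 eV GZK bound, as the text describes.
print(f"per hadron ~ {E_R_Z / 30.0:.1e} eV")
```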
Specifically, while the zero-mode graviton couples with the usual gravitational strength ∼ 1/M_Pl, the massive KK states couple with weak-scale strength ∼ 1/Λ_π, where Λ_π = e^{−πkr_c} M_Pl. For values of kr_c in the range 11-12, the RS model provides a solution to the hierarchy problem. The masses of the KK states, G⁽ⁿ⁾, are then given by m_n = k x_n e^{−πkr_c}, with x_n being the n-th root of J₁. This implies that the tower mass spectrum is completely determined once the mass of the lowest lying excitation is known, and is given by m_n = m_1 x_n/x_1. Thus we see that the parameters c = k/M_Pl and m_1 determine all of the other quantities within the RS model. Both phenomenological and theoretical constraints can be used to restrict this two-dimensional model parameter space, as has been discussed in our previous works [8,9].

As an initial numerical example of the Gravi-Burst mechanism, let us consider a specific choice of these parameters. (We assume for numerical purposes that the mass of the Higgs is 120 GeV.) By combining all of the individual process cross sections, including interference with SM Z exchange, we can calculate the full energy dependence of the total νν̄ → hadrons cross section. This allows us to determine the ratio R of expected cosmic ray rates for super- and ultra-GZK events in units of the Z-pole induced rate F_Z computed above. It is important to note that in forming this ratio almost all of the astrophysical uncertainties cancel, except for the energy dependence of the neutrino flux. In evaluating R, we have assumed that the neutrino spectrum above the resonant Z-pole energy falls in a power-like manner, as ∼ E_ν^{−λ/2}. We denote by F_SM the complete energy-dependent flux anticipated in the SM, beyond that obtained through the use of the narrow-width approximation alone. (In what follows, it will be sufficient to assume that this power-like fall-off adequately describes the neutrino spectrum for a few orders of magnitude in energy above the Z resonance energy.) Integration of R over a range of √s values then tells us the relative rate of events expected in the RS model compared to those originating from Z-Bursts. To get an idea of what this ratio looks like as a function of energy, we show the simplest specific case, λ = 0, in Fig. 1. Note that the integral of R under the Z pole gives the value unity, as it should in order to reproduce the Weiler results. We are also reminded by the figure that, even in the SM, there exists a long high-energy tail in this ratio. In the RS scenario, the Z peak is followed by a number of ever-widening graviton peaks which also yield reasonably large cross sections. From the figure, one can see that if the neutrino flux falls off slowly enough with energy, we should expect events at even higher energies than those observed at present, assuming that the Z accounts for the 'usual' super-GZK events. We will make this assumption in what follows, i.e., that the Z-Burst scenario explains the observed super-GZK events. Given that hadronic multiplicities grow only very slowly with √s, as is observed in e⁺e⁻ annihilation data, we would expect events induced by Gravi-Bursting on the first graviton resonance to result in hadronic effective primaries with energies approaching 10²² eV. As of yet no such events have been recorded, which places bounds on the allowed parameters of the RS model for different values of the neutrino energy spectrum described by the parameter λ.

Analysis

As we found in the last section, under the assumption that Z-Bursts explain the super-GZK events, the existence of even higher energy ultra-GZK events is a rather generic prediction of the RS model.
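The tower spacing m_n = m_1 x_n/x_1 and the induced resonance energies are easy to tabulate. In the sketch below, m_1 = 400 GeV is only an illustrative choice (the lightest value quoted above, not a fit result), and scipy's `jn_zeros` supplies the roots of J₁:

```python
import numpy as np
from scipy.special import jn_zeros

m1 = 400.0                        # GeV; illustrative, lightest value quoted above
m_nu = 0.06e-9                    # GeV (0.06 eV)

x = jn_zeros(1, 5)                # first five positive roots of J_1
m_n = m1 * x / x[0]               # KK tower masses, m_n = m1 * x_n / x_1
E_n = m_n ** 2 / (2.0 * m_nu)     # resonant neutrino energies, in GeV

for n, (m, E) in enumerate(zip(m_n, E_n), start=1):
    print(f"G({n}): m = {m:7.1f} GeV,  E_R = {E * 1e9:.2e} eV")
```

The multi-peaked structure in the ultra-GZK spectrum predicted by the RS model corresponds directly to this list of resonance energies.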
Let us restrict ourselves to the region √s ≥ 300 GeV, which corresponds to effective primary energies in excess of 10²¹ eV, of which none have yet been observed. Integrating R above this lower bound, even in the SM, can yield some 'background' events; the use of the narrow-width approximation is not strictly correct, in that some rare events can arise from values of √s away from the Z pole. In the RS case, we integrate R over the region from 300 GeV up to √s = 4m_1, beyond which perturbation theory fails. (We note that the RS model as described in four dimensions is a non-renormalizable theory. In addition, once the value of √s significantly exceeds Λ_π, the theory also becomes non-perturbative and only qualitative statements can be made about the behavior of the cross section [8].) This yields a conservative lower bound on the total number of ultra-GZK events predicted in the RS model, since most certainly more events can arise from even larger values of √s. Integrating R in the SM over the above ranges, and assuming that the 20 super-GZK events are from the Z-pole region, we find that for λ = 1 (2, 3) we would expect to have already seen ≃ 0.24 (0.04, 0.008) ultra-GZK background events from the tail of the Z pole, which is quite acceptable. (As discussed above, we might expect that λ ≥ 1.8 is allowed by Fly's Eye data if the energy dependence of the neutrino spectrum below and above the Z pole is similar.) Performing the same calculation in the RS model for a fixed set of values of m_1 and λ, it is clear that for some range of k/M_Pl the cross sections will be too large to have avoided the present non-observation of ultra-GZK events. In the usual manner, this means that we can place a 95% CL bound on k/M_Pl as a function of m_1 for different assumed values of λ, using the existing data. The dashed curves in Fig. 2 show the results of this analysis using the existing data for various values of λ; other constraints obtained on the (k/M_Pl)-m_1 parameter space from our earlier work [9] are also shown. We see immediately that the effectiveness of the bound is quite strongly dependent on the value of λ. For λ ≥ 3, at most only a tiny region beyond that excluded by existing Tevatron Run I data is now ruled out. As λ decreases, the size of the region presently excluded by cosmic ray data grows rapidly. For λ = 2, a substantial region allowed by the present Tevatron data becomes excluded. Furthermore, a sizeable region beyond that accessible at Run II with a luminosity of 30 fb⁻¹ is also excluded. Using accelerators alone, this region would be inaccessible until after the LHC turns on, but here we see that it would be excluded by cosmic ray data provided λ ≤ 2. For λ ≤ 1 the bound is extremely powerful, and at most only a tiny sliver of the RS model parameter space would remain viable.

Figure 2: Allowed region in the (k/M_Pl)-m_1 plane. The solid (dotted) diagonal red curve excludes the region above and to the left from direct searches for graviton resonances at the Run I (II, 30 fb⁻¹) Tevatron. The light blue (green) curve is an indirect bound from the oblique parameter analysis (based on the hierarchy requirement that Λ_π < 10 TeV) and excludes the region below it. The black dashed (dotted) curves exclude the regions above them at 95% CL based on present (anticipated future Auger) cosmic ray data. The top (bottom left, bottom right) panel corresponds to λ = 3 (2, 1), where λ describes the fall with energy of the neutrino flux as E^{−λ/2}.
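The exclusion logic is just Poisson counting: with zero events observed, any parameter point predicting a mean of more than about 3.0 signal events is excluded at 95% CL. In the toy function below, the quadratic scaling of the expected rate with c = k/M_Pl is an assumption made for illustration (resonant rates scale with the squared coupling), and `mu_ref` is a hypothetical reference rate, not a number from the text.

```python
from scipy.stats import chi2

def poisson_upper_limit(n_obs=0, cl=0.95):
    """Smallest excluded Poisson mean, mu, with P(N <= n_obs; mu) <= 1 - cl.
    For n_obs = 0 this is -ln(1 - cl) ~ 3.0."""
    return 0.5 * chi2.ppf(cl, 2 * (n_obs + 1))

def excluded(c, mu_ref, c_ref=0.1):
    mu = mu_ref * (c / c_ref) ** 2        # assumed coupling-squared scaling
    return mu > poisson_upper_limit()

print(poisson_upper_limit())              # ~3.0
print(excluded(0.08, mu_ref=5.0))         # hypothetical reference rate
```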
In the region below the dashed curves, which is not excluded by existing cosmic ray data, we might expect ultra-GZK events to show up in future experiments at reasonable rates. If this does not happen, it is clear that the present bounds discussed above will improve drastically, especially with the new cosmic ray observatories such as Auger [12] coming on line. Within a 5-year period of data taking at Auger, one would expect ∼ 1000 super-GZK events [13] induced by Z-Bursts, with correspondingly higher sensitivity to the ultra-GZK region. If no events above the SM background from the Z-pole tail are observed at Auger during this period, we can repeat the analysis above to obtain strong constraints on the RS parameter space, as shown by the dotted curves in Fig. 2. We find the SM background expectations in this case for λ = 1 (2, 3) to be ≃ 12.5 (2.05, 0.422) events. Here we see that for λ = 2, 3 the size of the presently allowed region is quite significantly reduced. Note particularly the case λ ≤ 1, where we find that the non-observation of any ultra-GZK events at Auger would completely exclude the RS model with the SM gauge and matter fields on the wall. This is a very powerful result.

If events above background are observed at Auger due to Gravi-Bursts, they will have two distinctive characteristics. First, due to the resonance structure predicted by the RS model, the energies of the effective hadronic primaries will show peaking at a set of fixed energies, provided the energy resolution of the detectors is sufficiently good. Second, in addition to the rather 'soft' photons arising from conventional fragmentation π⁰'s, much harder photons can arise from the direct decays of the gravitons in the KK tower. As shown in our earlier work, gravitons in the mass range of interest can decay with a reasonable branching fraction, ≃ 4-5%, into photons. Since they carry half of the energy of the resonance mass, these photons will have energies an order of magnitude or more larger than those arising from π⁰ decays. Of course, with this rather small branching fraction, a reasonable number of more 'ordinary' ultra-GZK events should be observed before one induced by these very hard photons.

So far we have only discussed the case of a single massive neutrino; data based on oscillation solutions to the solar neutrino problem [11] suggest that a second massive state exists near 3 × 10⁻³ eV, about a factor of 20 or so in mass below the 0.06 eV case discussed above. This second neutrino can also induce a Z-Burst, but only if the energy of the corresponding incident neutrino is ≃ 20 times larger, ≃ 1.4 × 10²⁴ eV. In comparison to the case of the higher mass neutrino, the flux of these lighter neutrinos would be ≃ 20^{λ/2} times smaller. This assumed neutrino mass ratio strongly suggests that λ ≥ 2, to avoid ultra-GZK events from the second Z resonance. To eliminate any additional background generated by this new Z contribution, we would need to raise the lower bound of our √s integration, which is actually an integral over the neutrino energy. (The √s = 300 GeV lower bound translates into a minimum neutrino energy of ≃ 8 × 10²³ eV and thus would now include additional background events from the second Z peak.) Raising the minimum neutrino energy by a factor of two would remove the second Z contribution while still staying comfortably below the excitation energies of any of the gravitons. In this case, we would expect at most only slight alterations in the bounds presented above.
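Both numbers quoted in parentheses above follow from the same kinematic relation used earlier; a short check, taking m_ν = 0.06 eV for the heavier state and 3 × 10⁻³ eV for the lighter one, as in the text:

```python
# Minimum neutrino energy for the sqrt(s) >= 300 GeV cut, the second
# (lighter) neutrino's Z resonance, and its relative flux suppression.
sqrt_s_min = 300e9                       # eV
m_nu_heavy, m_nu_light = 0.06, 3e-3      # eV

E_min = sqrt_s_min ** 2 / (2.0 * m_nu_heavy)
print(f"E_min = {E_min:.1e} eV")         # ~7.5e23 eV, cf. ~8e23 eV quoted

E_R_2 = (91.19e9) ** 2 / (2.0 * m_nu_light)
print(f"E_R(2nd Z) = {E_R_2:.1e} eV")    # ~1.4e24 eV, as quoted

for lam in (1, 2, 3):
    print(f"lambda = {lam}: flux ratio ~ 1/{20.0 ** (lam / 2.0):.1f}")
```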
Before concluding, we briefly discuss a generalization of the RS model in which the SM gauge and matter fields are taken off the TeV brane [9], and how this would influence the results above. In this case, not only can graviton towers be exchanged in the νν̄ → hadrons process, but there are also Z-boson towers whose members are generally interspaced in mass with the gravitons. If these additional contributions are present, it is quite possible that the neutrino cross section is significantly enhanced, leading to even stronger limits than those obtained above using current data. One might also expect that, with increased cross sections, it would be possible to probe cases where the slope of the neutrino energy spectrum is even steeper than what we have considered here. Unfortunately, determining how much our previous results are modified in a quantitative manner requires a detailed analysis which is far beyond the scope of this paper.

Conclusions

In this paper, we have examined the possible contribution to the spectrum of cosmic rays beyond the GZK cutoff due to new physics arising in the Randall-Sundrum model of localized gravity. Our analysis is based on the assumptions that (i) the events observed immediately above the GZK bound can be explained by the Z-Burst mechanism, and (ii) the neutrino spectrum needed for Z-Bursts extends a few orders of magnitude further in neutrino energy with a reasonably slow fall-off. If these conditions hold, then the existence of a series of s-channel Kaluza-Klein graviton resonances in the νν̄ → hadrons channel, which is predicted in the RS model, can lead to events with even higher, ultra-GZK, energies due to Gravi-Bursts. The rates for these bursts are generally at or near the present level of observability for a wide range of RS model parameters. The fact that such events have not yet been observed can be used to constrain the parameter space of the RS model once a specific form of the neutrino energy spectrum is assumed. These bounds can be more restrictive than those obtainable from the lack of graviton resonance production at the Tevatron during Run II (30 fb⁻¹) if the fall-off with energy of the UHECR neutrino flux is linear or less steep. If ultra-GZK events are not observed by future experiments such as the Auger Array, then the resulting bounds on the RS model can be complementary to those obtainable at the LHC. If such events are observed at future experiments, the RS resonance structure may be observable, given both sufficient statistics and good hadronic energy resolution. In addition to hadronic modes, the RS graviton KK tower states can decay directly to photon pairs, which will have more than an order of magnitude greater energies than those that can arise from ordinary fragmentation into π⁰'s which subsequently decay into two photons. If photon- and hadron-induced showers can be distinguished at such energies, this will provide a unique signature.
Roles of the subfornical organ and area postrema in arterial pressure increases induced by 48-h water deprivation in normal rats

Abstract

In rats, water deprivation (WD) increases arterial blood pressure (BP), in part due to actions of elevated osmolality in the brain to increase vasopressin levels and sympathetic activity. However, the osmoreceptors that mediate this response have not been identified. To test the hypothesis that osmoregulatory circumventricular organs are involved, BP and heart rate (HR) were continuously recorded telemetrically during 48 h of WD in normal rats with lesions (x) or sham lesions (sham) of the subfornical organ (SFO) or area postrema (AP). Although WD increased BP in SFOx and SFOsham rats, no significant difference in the hypertensive response was observed between groups. HR decreased transiently but similarly in SFOx and SFOsham rats during the first 24 h of WD. When water was reintroduced, BP and HR decreased rapidly and similarly in both groups. BP (during lights off) and HR were both lower in APx rats before WD compared to APsham. WD increased BP less in APx rats, and the transient bradycardia was eliminated. Upon reintroduction of drinking water, smaller falls in both BP and HR were observed in APx rats compared to APsham rats. WD increased plasma osmolality and vasopressin levels similarly in APx and APsham rats, and acute blockade of systemic V1 vasopressin receptors elicited similar depressor responses, suggesting that the attenuated BP response is not due to smaller increases in vasopressin or osmolality. In conclusion, the AP, but not the SFO, is required for the maximal hypertensive effect induced by WD in rats.

Introduction

Water deprivation (WD) is associated not only with decreases in total body water but also in sodium, secondary to a dehydration-induced natriuresis (McKenna and Haines 1981; McKinley et al. 1983; Thrasher et al. 1984). Yet, despite the resulting significant decrease in extracellular fluid volume, arterial pressure rises (Gardiner and Bennett 1985; Blair et al. 1997; Scrogin et al. 2002; Osborn 2011, 2013; Veitenheimer et al. 2012). Previous work suggests that increased osmolality contributes to the hypertensive response via elevations in vasopressin and sympathetic nerve activity (Gardiner and Bennett 1985; Brooks et al. 2005a). However, the specific brain sites that house the osmoreceptors that activate these hypertensive pathways are unknown. Central osmolality is sensed largely by specific neurons within the circumventricular organs (CVOs), a discrete set of brain sites that lack the blood-brain barrier and are accessible to both the circulating blood and the cerebrospinal fluid (Gross and Weindl 1987). One particular CVO, the organum vasculosum of the lamina terminalis (OVLT), has been implicated in the increases in vasopressin and renal sympathetic nerve activity evoked by acute hyperosmolality (for reviews, see McKinley et al. (2004); Stocker et al. (2008)). OVLT neurons are directly activated by increases in osmolality (Ciura et al. 2011), and OVLT lesions attenuated the renal sympathoexcitation induced by acute increases in central osmolality (Shi et al. 2007). Interestingly, OVLT lesions did not affect the rise in blood pressure following increased central osmolality, suggesting that even though the OVLT may mediate some degree of sympathoexcitation during hyperosmotic states, it does not mediate the pressor response to increased osmolality (Shi et al. 2007).
Nonetheless, previous work suggests that other CVOs, the area postrema (AP) and subfornical organ (SFO), are also likely involved. Notably, the AP and SFO have known anatomical connections to sympathetic regulatory centers, such as the paraventricular nucleus of the hypothalamus (PVN) and the rostral ventrolateral medulla (RVLM) (Shapiro and Miselis 1985; Wilson and Bonham 1994; Anderson et al. 2001), and have been shown to mediate in part the increases in vasopressin in response to cellular dehydration (Huang et al. 2000; McKinley et al. 2004). In addition, another pressor hormone increased by WD, angiotensin II, acts via both the AP and SFO to acutely and chronically elevate blood pressure and sympathetic activity (Mangiapane and Simpson 1980; Casto and Phillips 1984; Fink et al. 1987a; Hendel and Collister 2005). Therefore, the purpose of the present experiments was to test the following hypothesis: the pressor response induced by 48 h of WD in rats requires an intact AP or SFO. To test this hypothesis, it was determined whether the increases in arterial pressure were reduced in rats with lesions of the AP or SFO.

Surgical methods

All experiments were conducted at the University of Minnesota and approved by the Institutional Animal Care and Use Committee. Adult male Sprague-Dawley rats (275-325 g) were randomly selected for either lesion of the area postrema (APx; n = 7), lesion of the subfornical organ (SFOx; n = 6), or the respective sham operation (APsham; n = 7 or SFOsham; n = 6). For all surgeries, rats were preanesthetized with pentobarbital (32.5 mg/kg, IP) and atropine (0.2 mg/kg, IP), and surgical anesthesia was achieved with a second injection containing a cocktail of anesthetic agents (acetylpromazine, 0.2 mg/kg; butorphanol tartrate, 0.2 mg/kg; ketamine, 25 mg/kg, IM). Rats received an intramuscular antibiotic injection of 2.5 mg gentamycin and a subcutaneous injection of 0.075 mg butorphanol tartrate for analgesic purposes postoperatively. For APx surgeries, rats were placed in a stereotaxic device with the neck flexed. The AP was visualized through an incision between the occipital crest and the first vertebra and was removed by suction using a blunt 25-gauge needle attached to a vacuum line. Sham operations were identical except for the attached vacuum line. For SFOx surgeries, rats were placed in a stereotaxic device and the head was leveled. A 3 mm hole was drilled in the top of the skull just caudal to bregma, and a Teflon-insulated monopolar tungsten electrode was lowered into four predetermined coordinates within the SFO, through which a 1 mA current was passed for 7 sec. For sham operations, the electrode was not lowered as deeply into the brain and no current was passed. Rats were allowed 1-2 weeks of recovery before implantation of radiotelemetric pressure transducers (model no. TA11PA-C40, Data Sciences International, St. Paul, MN) and femoral catheters, as described previously, for continuous blood pressure and heart rate monitoring and blood sampling, respectively. Briefly, a midline abdominal incision was made and the descending aorta was exposed. The aorta was clamped, and the catheter of the transducer was introduced distal to the clamp and glued in place. The aortic clamp was released, and the transmitter unit was attached to the abdominal wall with 3-0 surgical suture during closure of the abdominal cavity. Next, a small ventral incision was made in the left leg and the femoral vein exposed.
The vein was tied off and the catheter introduced approximately 9 mm into the vein and tied in place. The catheter was then tunneled subcutaneously to an exit location between the scapulae and passed through a flexible spring connected to a single-channel hydraulic swivel. After transmitter and catheter implantation, rats were given another week of recovery. During all recovery periods, rats were given standard rat chow and distilled water ad libitum.

Experimental procedure

Rats were maintained on a 12-h light-dark schedule, and mean arterial pressure (MAP) and heart rate (HR) were recorded via telemetry (500 Hz for 10 s each minute) continuously throughout the experiment. A control period of 2 days, during which rats were allowed standard rat chow and distilled water ad libitum, preceded the experimental phase. On the first experimental day, water was removed from the cages of all animals 4 h before the lights turned on. Water was withheld for 48 h, during which rats continued to have access to standard rat chow ad libitum. After the 48-h WD, again at 4 h before lights on, water was reintroduced to the rats. The bottle was touched to the mouth of each rat upon reintroduction to ensure that each rat began drinking immediately. Water intake was measured for each rat at 1-h intervals for the next 4 h. For vasopressin and osmolality measurements (APx and APsham rats), blood was drawn approximately 10 min before water was removed and again 10 min before water was returned. In some rats (APx n = 3, APsham n = 3), a V1 vasopressin antagonist (Manning compound, [β-Mercapto-β,β-cyclopentamethylenepropionyl¹, O-Me-Tyr², Arg⁸]-vasopressin; MP Biomedicals, Santa Ana, CA) was injected iv (10 µg/kg) immediately after blood was drawn at the end of the WD period, but before water was reintroduced. For all experiments, blood pressure and heart rate were measured for a 2-day recovery period after reintroduction of water.

Data collection

The MAP and HR data collected by telemetry were largely averaged in 12-h blocks, representing the lights on and lights off periods. These 12-h averages were collected for the lights on period the day before WD, during the first 24 h of WD, the 12-h lights on period of the second day of WD, and the day following WD (recovery). During the lights off period just before WD and the lights off period of the second day of WD, only 8 h were averaged, up until water was removed or returned, 4 h before lights on. Instead, the data collected for 4 h after returning water were averaged in 1-h blocks. To analyze vasopressin (APx n = 5, APsham n = 2) and plasma osmolality (APx n = 3, APsham n = 3) responses in some rats, 0.5 mL of blood was drawn through the catheter of each rat into a chilled syringe containing EDTA. Blood was transferred to chilled microcentrifuge tubes, then centrifuged and separated, with the plasma being drawn off. Osmolality was measured with a vapor pressure osmometer (model 5500, Wescor, Logan, UT), and plasma was stored at −80°C for later assay of vasopressin at the Core Lab of the Medical College of Wisconsin, as described previously (Carlson et al. 1997). An equal volume of saline was injected through the catheter after the blood draw to minimize volumetric effects. To determine if APx reduced the contribution of circulating vasopressin to MAP support, the average of the data collected 15 min following V1 antagonist injection was compared to preinjection values averaged over the prior 10-min period.
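As an illustration of the 12-h light/dark averaging described above, the following Python sketch bins a minute-by-minute MAP record into lights-on and lights-off blocks. The timestamps, lights-on window (06:00-18:00), and simulated values are all hypothetical, since the paper does not specify the raw data format.

```python
import numpy as np
import pandas as pd

# One week of simulated 1-min MAP samples (stand-in for telemetry output).
idx = pd.date_range("2014-01-01", periods=7 * 24 * 60, freq="min")
map_mmhg = pd.Series(100.0 + 5.0 * np.random.randn(len(idx)), index=idx)

# Assumed lights-on window; the actual schedule is 12 h : 12 h.
lights_on = (idx.hour >= 6) & (idx.hour < 18)

# Mean MAP per day per phase, i.e. the 12-h blocks used in the analysis.
blocks = map_mmhg.groupby([idx.date, np.where(lights_on, "on", "off")]).mean()
print(blocks.head())
```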
Histological verification of lesions

On completion of the protocol, all rats were anesthetized as described above and perfused intracardially with 140 mL of heparinized saline (20 U/mL heparin in 0.9% saline), followed by 450 mL of 4% paraformaldehyde in phosphate-buffered saline (PBS). Whole brains were dissected and soaked in 4% paraformaldehyde for 2 days. The brains were then transferred to a 30% sucrose solution for a minimum of 2 days. For SFOx and SFOsham rats, frozen serial sagittal sections (50 µm) were made at the lateral edge of the third ventricle and mounted on treated slides. For APx and APsham rats, coronal serial sections (50 µm) were sliced and similarly mounted for inspection. Slides were stained for Nissl substance with cresyl violet and examined using light microscopy for confirmation of an intact (sham) or lesioned CVO. All APx rats included in the study were confirmed to have undergone complete AP ablation with minimal damage to the surrounding tissue. While our lesions were centered on the AP, it should be noted that, in order to completely lesion the AP, invariably some fibers or inherent connections with the medial nucleus tractus solitarius (NTS) were damaged as well. All SFOx rats included in the final analysis of the data were confirmed to have ≥ 80% of the SFO ablated by the lesion, as well as complete removal of the rostroventral portion, including efferent fibers of the SFO. Figure 1 illustrates representative examples of lesioned and sham rats.

Statistical analysis

Within the lights on or lights off periods, between-group differences in both the absolute values of and the changes in MAP and HR were assessed using two-way ANOVA for repeated measures [factors are group (lesion, sham) and time] and the post hoc Tukey-Kramer test. Between-group differences in the decreases in MAP and HR immediately following reintroduction of water were assessed using t-tests, to determine if the changes in MAP and HR before and 4 h after the return of water differed between groups. A two-way ANOVA for repeated measures was also used to determine if there was a between-group difference in the effect of WD on plasma vasopressin levels. Finally, a t-test was used to detect any difference in the MAP response to V1 antagonism in APx and APsham rats. All results are reported as mean ± standard error, and a critical value of P < 0.05 (two-sided) was considered statistically significant for all tests.

Effects of WD on MAP and HR

As illustrated by a continuous representative tracing (Fig. 2, bottom panel) and the combined results (Figs. 3, 4) from control sham-lesioned rats, MAP rose gradually during WD, to reach significantly elevated levels during the dark phase 24 h after removing water. HR decreased transiently, but returned to baseline by the end of the 2-day WD period. Figure 3 illustrates the MAP and HR responses to 48-h WD in SFOsham and SFOx rats. A typical experimental tracing from an individual SFOx rat is shown in Figure 2 (middle panel). Baseline MAP and the MAP responses to WD did not differ between the lesioned and sham-lesioned groups. Baseline HR also did not differ between groups and fell similarly and transiently during the light and dark phases on the first day of WD. MAP and HR both decreased (P < 0.05) rapidly upon reintroduction of water (Fig. 2); however, the falls in MAP (−18 ± 2 mmHg, SFOsham; −19 ± 2 mmHg, SFOx; P > 0.10) and HR (−104 ± 5 bpm, SFOsham; −86 ± 11 bpm, SFOx; P > 0.10) 4 h after water was returned were similar between groups.
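A sketch of the between-group comparison described in the statistical analysis above (here, the post-water falls in MAP, sham vs. SFOx). The per-rat arrays are illustrative stand-ins consistent with the group sizes and means reported, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-rat falls in MAP (mmHg) 4 h after water was returned.
delta_map_sham = np.array([-20.0, -19.0, -21.0, -18.0, -22.0, -20.0])
delta_map_sfox = np.array([-19.0, -18.0, -21.0, -20.0, -17.0, -19.0])

t, p = stats.ttest_ind(delta_map_sham, delta_map_sfox)
print(f"t = {t:.2f}, P = {p:.3f}")   # P > 0.10 -> similar falls, as reported
```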
Effects of APx

Mean arterial pressure and HR responses to WD in APx and APsham rats are shown in Figure 4. Baseline MAP was lower in APx rats during the dark phase. Both groups of rats exhibited significant increases in MAP after 48 h of WD, but the increase was attenuated in APx rats during the lights off phase (APx: 9.0 ± 1.5 mmHg; APsham: 15.1 ± 1.5 mmHg; P < 0.05). Baseline HR was lower in APx rats compared to APsham rats throughout the light-dark cycle. WD decreased HR during the first 24 h (compared to recovery but not baseline) in the APsham but not APx rats. Nevertheless, significant between-group differences in the changes in HR during and immediately following WD were not observed. Following reintroduction of water, both groups demonstrated significant decreases in MAP and HR (P < 0.05); however, the decreases in MAP (−20 ± 1 mmHg, sham; −16 ± 1 mmHg, APx) and HR (−93 ± 11 bpm, sham; −56 ± 12 bpm, APx) at 4 h after the return of water were attenuated (P < 0.05) in APx rats compared to APsham rats.

Does APx alter the vasopressin or hyperosmolar response to WD?

Discussion

The primary purpose of this study was to test the hypothesis that the SFO and/or the AP mediate the rise in arterial pressure induced by WD, using conscious rats instrumented for telemetric recordings of arterial pressure and heart rate. We confirm using telemetry (Osborn 2011, 2013; Veitenheimer et al. 2012) that (1) WD elicits a gradual and progressive pressor response that is evident within 24 h and reaches a peak increment of ~15-20 mmHg after 48 h; (2) the rise in arterial pressure is accompanied by a transient bradycardia; and (3) both MAP and HR rapidly decrease when water is returned to the rats after 48 h of WD. Our novel findings are that (1) SFO lesions do not reduce the pressor response; and (2) AP lesions attenuate the WD-induced pressor response by 40% and abolish the transient bradycardia. Collectively, these data suggest that the AP, but not the SFO, contributes to the changes in MAP and HR induced by WD.

It is well established that, when water intake is prevented, obligatory water losses from the skin, lungs, and kidney result in decreased total body water, as evidenced by an increase in plasma osmolality. In addition, total body sodium is reduced, due to both decreased food intake and increased urinary sodium excretion, with subsequent hypovolemia. Despite these reductions in water and sodium, at least in rats, arterial pressure rises (Gardiner and Bennett 1985; Blair et al. 1997; Scrogin et al. 2002; Osborn 2011, 2013; Veitenheimer et al. 2012). Three pressor pathways have been shown to contribute to this pressor response: angiotensin II, vasopressin, and the sympathetic nervous system (Gardiner and Bennett 1985; Scrogin et al. 1999, 2002). Decreases in blood volume and subsequent arterial and cardiopulmonary baroreflex activation underlie in part the activation of the renin-angiotensin system, vasopressin release (Blair et al. 1997; Gottlieb et al. 2006), and likely also the increases in sympathetic nerve activity.
However, arterial pressure "overshoots" these homeostatic mechanisms to rise above normal because of a central effect of elevated osmolality to drive the sympathetic nervous system and stimulate vasopressin (Scrogin et al. 1999; Brooks et al. 2005a; Gottlieb et al. 2006). Moreover, central synergistic interactions between osmolality and angiotensin II may lead to excessive sympathetic activation, which contributes to the hypertension that is produced (Gardiner and Bennett 1985; Brooks et al. 2005b; Veitenheimer et al. 2012). However, although considerable evidence implicates key roles for the PVN (Stocker et al. 2004a, 2005; Freeman and Brooks 2007) and the RVLM (Brooks et al. 2004a,b; Stocker et al. 2006), the brain neuronal circuitry involved in these integrated responses has not been completely mapped. In particular, the brain osmosensitive sites that trigger these pressor pathways have not been identified. Several experimental approaches have provided key information implicating the CVOs that mediate the sensation of osmolality and angiotensin II during WD, in particular, quantification of Fos expression as an index of activated neurons and specific lesions of CVOs. Direct recordings of isolated SFO neurons reveal that these cells are stimulated by hypertonicity (Anderson et al. 2000). Moreover, Fos studies indicate that the SFO is activated, albeit modestly, following 48 h of WD (McKinley et al. 1994; Morien et al. 1999; Sly et al. 2001; De et al. 2002). Interestingly, a significant fraction of the activated neurons project to the supraoptic nucleus and are thereby capable of increasing vasopressin secretion (McKinley et al. 1994), while another fraction projects indirectly to the kidney via the sympathetic nervous system (Sly et al. 2001). The SFO mediates the hypertension resulting from chronic infusion of low doses of angiotensin II, which acts in part via sympathoexcitation (Zimmerman et al. 2004; Collister and Hendel 2005), and WD can increase the expression of angiotensin II AT1 receptors in the SFO (Sanvitto et al. 1997). In sheep, SFO lesions markedly reduced the increase in vasopressin induced by acute increases in osmolality (McKinley et al. 2004); in contrast, SFO lesions were without effect in rats (Maliszewska-Scislo et al. 2008). Therefore, significant indirect evidence supports a potential role for the SFO in the pressor response. However, the present results indicate that SFO lesions were largely ineffective. One interpretation of this result is that the SFO does not play a major role in the hypertensive response. This interpretation is supported by the relatively low induction of Fos in the SFO after WD compared to other brain regions such as the OVLT, median preoptic nucleus, SON, and PVN (McKinley et al. 1994; Morien et al. 1999; De et al. 2002), as well as the failure of SFO lesions to attenuate the vasopressin response to acute increases in osmolality (Maliszewska-Scislo et al. 2008). An alternate interpretation is that the SFO does provide a significant contribution, but redundant mechanisms provided by the OVLT and AP can compensate when this site is eliminated. Indirect evidence also implicates the AP in the pressor response induced by WD. WD elicits increased Fos expression in the AP (Gottlieb et al. 2006), and AP lesions attenuate hypertension secondary to angiotensin II infusion (Fink et al. 1987a) as well as the increase in vasopressin triggered by acute systemic increases in osmolality (Huang et al. 2000).
In this study, AP lesions abolished the bradycardia and markedly reduced the pressor response to WD. Therefore, we conclude that the AP is required for WD to reduce HR and to maximally increase arterial pressure in rats. Nevertheless, a significant component of the pressor response remained in APx rats, suggesting that another site, likely the OVLT, is also involved. One possible explanation of the reduced pressor responses in WD APx rats is that the participation of vasopressin is reduced, as AP lesions have been shown to blunt the rise in vasopressin following acute hypertonicity (Huang et al. 2000). However, this possibility is unlikely for the following reasons. First, the contributions of vasopressin to arterial pressure maintenance during WD and following acute increases in osmolality are small (Kawano et al. 1991; Scrogin et al. 1999). Second, the ability of AP lesions to reduce the vasopressin response to iv hypertonic saline administration was not evident until osmolality exceeded levels normally induced by WD (Huang et al. 2000). Finally, we found that the rise in vasopressin produced in WD AP-lesioned rats is quite robust and not different from sham-lesioned animals. Moreover, the depressor responses to an acute V1 vasopressin receptor blocker were not different in APsham and APx rats. Therefore, while our low sample sizes may have precluded the detection of a small vasopressin contribution, we conclude that reductions in the levels or actions of vasopressin are not a major factor in the ability of APx to attenuate the WD pressor response. As AP lesions can impair food intake (Hyde and Miselis 1983; Johnson and Gross 1993; Collister and Hendel 2003) and the rise in osmolality induced by WD depends on food intake, the smaller pressor response in AP-lesioned rats could have resulted from a smaller degree of hypertonicity. However, AP-lesioned rats have normal basal plasma osmolalities and exhibit normal increases in osmolality in response to acute elevations in osmolality (Huang et al. 2000) and WD. Indeed, in this study, baseline osmolality was not different in APx and APsham rats, and furthermore increased to similar levels after 48 h WD. Another possibility is that the lesion attenuated the sympathoexcitation known to be induced by increases in osmolality during WD (Brooks et al. 1997; Scrogin et al. 1999; Veitenheimer et al. 2012). In support of this hypothesis, AP lesions have been shown to reduce the pressor and tachycardic responses following intracisternal hypertonic saline infusion (Kawano et al. 1991). Moreover, the AP is known to project directly or indirectly (via the NTS) to the PVN and RVLM (Shapiro and Miselis 1985; Blessing et al. 1987; Wilson and Bonham 1994), two brain regions that contribute to arterial pressure maintenance and increases in sympathetic nerve activity during WD (Brooks et al. 2004a,b; Stocker et al. 2004a,b, 2005, 2006; Freeman and Brooks 2007). Indeed, while our lesions were focused on the AP, it should be noted that some fibers or medial NTS dendritic processes projecting to the AP were likely damaged as well. Lastly and interestingly, the AP also projects to the nucleus ambiguus (Shapiro and Miselis 1985), which may explain its involvement in the decreases in HR observed in WD rats. Nevertheless, further experiments are required to directly test this hypothesis.

Perspectives

Like WD, increased dietary salt increases plasma osmolality (for reviews, see Brooks et al.
2005b; De Wardener et al. 2004; He et al. 2005). Normally, the increases in osmolality are small and do not trigger the same pressor pathways as WD (e.g., sympathoexcitation and vasopressin), as a simultaneously expanded blood volume suppresses the renin-angiotensin-aldosterone system. Indeed, if anything, sympathetic activity may actually decrease (Brooks and Osborn 1995; Brooks et al. 2005b). However, in humans and experimental models of salt-sensitive hypertension, increased dietary salt activates the sympathetic nervous system (Carlson et al. 2001; Leenen et al. 2002; Brooks et al. 2005b). One mechanistic explanation of this adverse pressor response is that concurrent inappropriate elevations of hormones such as angiotensin amplify the sympathoexcitatory actions of the small increases in osmolality (Brooks et al. 2001, 2005b; Osborn et al. 2007). Thus, the neuroendocrine-cardiovascular picture presented by salt-sensitive hypertension is similar to that of WD. The present results reveal that the SFO is not required for the WD-induced pressor response, yet the AP makes a major contribution. Given the parallels between WD and salt-sensitive hypertension, although the AP does not appear to modulate BP during changes in dietary salt in the normal rat (35), we speculate that the AP may play a greater role than the SFO in the genesis of sympathoexcitation in salt-sensitive individuals as well. In support of this hypothesis, previous studies have shown that lesions of the AP (Fink et al. 1987b), but not the SFO (Osborn et al. 2006), reduce arterial pressure in DOCA-salt rats. Nevertheless, further experimental work is required to test this hypothesis, as well as the role of other key osmoregulatory CVOs, such as the OVLT.
Folate Deficiency Decreases Apoptosis of Endometrium Decidual Cells in Pregnant Mice via the Mitochondrial Pathway

It is well known that maternal folate deficiency results in adverse pregnancy outcomes. Beyond embryonic development itself, maternal uterine receptivity and the decidualization of stromal cells are also very important for a successful pregnancy. In this study, we focused on endometrium decidualization and investigated whether apoptosis, which is essential for decidualization, was impaired. Flow cytometry and TUNEL detection revealed that apoptosis of mouse endometrium decidual cells was suppressed in the dietary folate-deficient group on Days 7 and 8 of pregnancy (Day 1 = vaginal plug), when decidua regression is initiated. The endometrium decidual tissue of the folate-deficiency group expressed less Bax than that of the normal-diet group, while the two groups had nearly equal expression of Bcl2 protein. Further examination revealed that the mitochondrial transmembrane potential (ΔΨm) decreased, and the fluorescence of diffuse cytoplasmic cytochrome c protein was detected using laser confocal microscopy, in normal decidual cells. However, no corresponding changes were observed in the folate-deficient group. Western blotting analyses confirmed that more cytochrome c was released from mitochondria in normal decidual cells. Taken together, these results demonstrated that folate deficiency could inhibit apoptosis of decidual cells via the mitochondrial apoptosis pathway, thereby restraining decidualization of the endometrium and further impairing pregnancy.

Introduction

Numerous studies have examined the effect of folate deficiency on birth defects, and folate deficiency has been acknowledged to be a vital risk factor for neural tube defects (NTDs) [1]. Studies of the effect of folate deficiency on reproduction have mainly focused on fetal development, and the adverse effects of folate deficiency on embryonic development have been well confirmed [2,3]. Successful gestation requires not only normal development of the embryo itself but also a suitable maternal endometrium, including the establishment of uterine receptivity and decidualization. Our previous study revealed that folate deficiency had no effect on embryo implantation; both the expression of uterine-receptivity marker genes and the number of implantation sites showed no significant difference between the folate-deficiency group and the control group [4]. However, the outcomes of the folate-deficient pregnant mice were not favorable, and we found a lower birth rate and more embryo loss in folate-deficient pregnant mice (unpublished data). Thus, it remains an open question whether folate deficiency plays a role in the process of decidualization after embryo implantation. In this study, we investigated maternal uterine endometrium decidualization under folate-deficient conditions. During early pregnancy in mice, the onset of embryo implantation occurs in the receptive uterus, followed by a transformation of stromal cells to decidual cells on the morning of day 5 of pregnancy (Day 1 = vaginal plug) [5]; this process is termed decidualization. Apoptosis occurs following proliferation and differentiation of stromal cells in the decidual zone after implantation [6]. Numerous studies have demonstrated that involution of the pregnant uterus begins with luminal epithelial cells and subsequently spreads throughout the anti-mesometrial zone. Finally, the mesometrial decidual cells undergo degeneration [7].
Because cell elimination in the luminal epithelium and mature decidua also occurs in artificially induced decidualization, the apoptosis of uterine cells in pregnant mice is thought to be due to an intrinsic cellular pathway [8]. The apoptosis of endometrium decidual cells has attracted some research attention. Abrahamsohn first investigated the morphological aspects of apoptosis by analyzing the ultrastructure of mouse decidual cells and described the initial period of involution on Day 7 (D7) and Day 8 (D8) of pregnancy [9]. Katz et al. [7] subsequently observed the accumulation of clumps of chromatin, dilation of the cisternae of the endoplasmic reticulum, a change in morphology to a spherical shape and loss of the plasma membrane, and suggested that cell death of the decidua is a type of programmed cell death (apoptosis). The proteins involved in the apoptosis of decidual cells may be numerous and the mechanism remains undetermined, but a shift in Bcl2 family protein expression plays a vital role in initiating apoptosis of the decidualized mesometrium [10]. Kamil C. et al. [11,12] showed that Bcl2 family members decide the fate of decidual cells in vivo and in vitro. Thus, the involution of uterine cells is deemed to be the terminal step of the cell-differentiation process associated with decidualization, given the same spatial sequence of stromal transformation and apoptosis of decidual cells [11]. The effect of folate deficiency on apoptosis varies in different tissues or cells and involves a variety of molecular mechanisms [13][14][15][16][17]. Given the key role of apoptosis and decidualization in the development and remodeling of the uterine endometrium after embryo implantation, and the fact that the effect of folate deficiency on apoptosis of decidual cells remains unknown, the purpose of this study is to elucidate the effect of folate deficiency on apoptosis of uterine endometrium decidual cells and the related potential mechanism. These results would provide critical clues to explain the adverse pregnancy outcomes resulting from folate deficiency.

Ethical Approval

All animal procedures were approved by the Ethics Committee of Chongqing Medical University (20110016) on 21 October 2011.

Animals and Tissue Collection

The folate-deficient pregnant mouse model was established according to the method of a previous report [4]. Briefly, six- to eight-week-old NIH mice approved for experimental use by the Laboratory Animal Center of Chongqing Medical University (No. 20110016) were housed in a specific pathogen-free animal room under a controlled photoperiod (12 h light/12 h darkness). We randomly divided the female mice into two groups with 80 mice in each group. The folate-deficient group was fed a diet containing no folate, and the control group was fed a normal diet. After five weeks, mice in estrus were selected to mate with mature healthy males of the same strain. The day on which a vaginal plug was found after mating was considered to be the first day of pregnancy (D1). Uterine endometrial tissue on D7 and D8 was collected on ice and quickly stored at −80 °C for further analyses.

Detection of Serum Folate Levels

Serum folate levels of pregnant mice were detected using an electro-chemiluminescence immunoassay as previously described [18]. Briefly, serum from pregnant mice in both the control group and the folate-deficient group was collected. The new capillary was preconditioned by flushing with 1 M NaOH for 30 min before the first use.
Samples were then injected into the capillary by hydrodynamic flow at a height differential of 20 cm for 10 s. The running voltage was 16 kV. The electrophoresis electrolyte was 0.8 mM luminol in 35 mM borate buffer (pH 9.4). The chemiluminescence emission was collected by a photomultiplier tube (PMT, R374 equipped with a C1556-50 DA-type socket assembly, Hamamatsu, Shizuoka, Japan) and recorded and processed on an IBM-compatible computer using in-house software.

Transmission Electron Microscopy

The antimesometrial region was chosen as the subject of study. Briefly, the antimesometrial decidua and myometrium of pregnant mice on D7 and D8 were dissected under a stereomicroscope, immediately post-fixed in osmium tetroxide and embedded in Araldite. Ultrathin sections were cut transversely along the long axis of the uterus and stained with 2% aqueous uranyl acetate and 0.5% lead citrate. Examination of the sections was performed by a professional operator using a Hitachi-7500 TEM.

Isolation and Culture of Primary Decidua Cells

Twelve pregnant mice on D7 and D8 from the control-diet and the folate-deficient-diet groups, respectively, were sacrificed, and the endometrial tissues were immediately removed and placed in PBS under aseptic conditions. After three washes with PBS to remove excess blood, the embryos were dissected out under a stereomicroscope. The decidual tissue was finely minced and digested with enzyme I (containing 0.6% dispase and 2.5% trypsin) at 4 °C for 1 h, followed by room temperature for 1 h and an additional 10 min at 37 °C for preliminary digestion. After further digestion with 0.05% collagenase, a 70-µm cell strainer was used to sieve the supernatant, and the decidual cells were collected by centrifugation. The cell pellet was resuspended in complete medium and cultured in a flask in a 5% CO2 incubator at 37 °C. After 1 h of incubation, the culture medium was removed to eliminate the non-adherent cells and fresh medium was added to the culture flask, ensuring the purity of the primary decidual cells. Decidual cells from the different groups were cultured in the corresponding complete medium (Roswell Park Memorial Institute 1640, Sigma, St. Louis, MO, USA) containing 10% fetal bovine serum (Sigma, St. Louis, MO, USA) and supplemented with 100 µg/mL streptomycin and 100 U/mL penicillin. The decidual cells showed a fusiform or round morphology with one or more nuclei and were identified by detecting the expression of bone morphogenetic protein 2 (BMP2) using immunofluorescence.

Real-Time PCR

Total RNA was extracted from mouse decidual tissue of normal-diet-fed and folate-deficient-diet-fed pregnant mice (D7 and D8) with TRIzol reagent (TaKaRa, Dalian, China) and reverse-transcribed into cDNA using the PrimeScript RT Reagent Kit (TaKaRa, Dalian, China) according to the manufacturer's instructions. The primers used in this study are shown in Supplementary material 1 (see Supplementary Table S1). β-Actin was used as an internal control for standardization. Real-time PCR was performed using the SYBR Premix Ex Taq Kit (TaKaRa, Dalian, China) on a Bio-Rad iQ5 Multicolor Real-Time PCR Detection System. Experiments were performed in triplicate. Data obtained from real-time PCR were analyzed using the 2^−ΔΔCt method, and statistical analysis was performed using GraphPad Prism 5.0.
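The 2^−ΔΔCt (Livak) calculation referenced above is simple enough to show directly. The following is a minimal sketch, not the authors' code; the function name and the triplicate Ct values are hypothetical and for illustration only, with β-actin as the reference gene as in the study.

```python
import numpy as np

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-(ddCt) (Livak) method.

    Each argument is a sequence of Ct values from replicate reactions;
    the reference gene (beta-actin here) normalizes loading differences.
    """
    dct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    dct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)  # fold change of treated relative to control

# Illustrative (made-up) triplicate Ct values for a target gene vs. beta-actin:
fc = fold_change_ddct([27.1, 27.3, 27.0], [17.2, 17.1, 17.3],
                      [25.9, 26.1, 26.0], [17.0, 17.2, 17.1])
print(f"Expression relative to control: {fc:.2f}-fold")  # ~0.49-fold here
```

A value below 1 indicates reduced expression in the treated (folate-deficient) group, as reported for MMP2.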
TUNEL Assay

Terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL) assays were performed using a TUNEL kit (Roche, Mannheim, Germany) according to the manufacturer's instructions. The uterine tissue sections were pretreated with 20 μg/mL Proteinase K for 15 min at 37 °C, washed in PBS, and then incubated with TUNEL reaction mixture (label solution and enzyme solution) for 1 h at 37 °C. After rinsing the sections three times in PBS for 5 min, the sections were observed under a confocal fluorescence-microscope system. Green fluorescence indicates positive cells.

Western Blotting Analysis

A tissue protein extraction kit (Beyotime, Shanghai, China) was used for protein preparation. Total and cytosolic proteins were extracted from the tissue samples of folate-deficiency and control-group pregnant mice (n = 6 each) and then boiled in 5× SDS sample loading buffer for 10 min. Equal amounts of total protein (50 µg) were separated using 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto nitrocellulose membranes (Bio-Rad Laboratories). Membranes were blocked in 5% milk for 1 h at room temperature and then incubated with the appropriate primary antibodies diluted in blocking buffer (4 °C, overnight) as previously described. After several washes, the membranes were incubated with the specific secondary antibodies corresponding to the source of the primary antibodies for 1 h at room temperature. After 4 washes with PBST (5 min each), the immunoreactive bands were visualized using a ChemiDoc XRS+ (Bio-Rad) and chemiluminescence reagents (Millipore, WBKLS0500, Billerica, MA, USA). Densitometry measurements were analyzed using Quantity One v4.

Immunohistochemistry

The tissues were fixed in 4% paraformaldehyde, dehydrated through increasing concentrations of alcohol, and subsequently embedded in paraffin. Sections (5 μm) were prepared for further study. For immunohistochemistry, antigen retrieval was performed in sodium citrate buffer for 10 min at room temperature, followed by 15 min at 100 °C in a microwave oven. Endogenous peroxidase was inhibited by incubation with 3% hydrogen peroxide for 10 min at room temperature. The sections were blocked in 10% normal goat serum for 30 min at 37 °C, incubated with primary antibody at 4 °C overnight, and then incubated with secondary antibody for 30 min at 37 °C, followed by streptavidin-conjugated horseradish peroxidase for 30 min at 37 °C. Antibody staining was developed using a diaminobenzidine substrate. The sections were subsequently counterstained with hematoxylin.

Immunofluorescence and Confocal Fluorescence Microscopy

The release of cytochrome c from mitochondria into the cytoplasm was observed using a confocal fluorescence microscope equipped with an argon-ion laser and a He-Cd laser. Decidua cells isolated from the uterine endometrium tissue of pregnant mice (D7 and D8) were cultured on slides (10 mm × 10 mm) for two days, with the medium changed every day. After three washes with PBS, the sections were fixed in cold methanol for 15 min at room temperature, blocked with 2% BSA for 1 h at 37 °C, incubated overnight with primary antibodies at 4 °C, and then incubated with fluorescein isothiocyanate (FITC)-labeled rabbit IgG for 1 h at 37 °C in the dark. After incubation with PI (Solarbio, P4170, Beijing, China) and sealing with glycerine (50%), the sections were observed using confocal fluorescence microscopy (MRC-600, Bio-Rad) and inverted epifluorescence microscopy (Nikon TMD-EFQ) at a wavelength of 488 nm.
The primary antibodies used in this study were against cytochrome c, Bax, and BMP2.

Mitochondria Extraction and Detection of Mitochondrial Transmembrane Potential (ΔΨm)

Isolation of mitochondria from endometrium decidual tissues was performed using the Tissue Mitochondria Isolation Kit (Beyotime, Beijing, China, C3006). The mitochondrial transmembrane potential (ΔΨm) of decidua primary cells isolated from each group was detected using tetrachloro-tetraethylbenzimidazol carbocyanine iodide (JC-1) staining (20 μg/mL) for 35 min at 37 °C. JC-1 fluorescence was observed using confocal fluorescence microscopy as previously described, with excitation wavelengths of 490 nm and 510 nm for green and red fluorescence, respectively.

Flow Cytometric Evaluation of Apoptosis

Decidua primary cells were harvested, washed and resuspended in culture medium. Cells that have lost membrane integrity demonstrate red staining (propidium iodide, PI) throughout the nucleus and can thus be readily classified as early apoptotic, late apoptotic, or necrotic cells. Samples were incubated at room temperature for 15 min in the dark with Annexin V and PI and then quantitatively analyzed using a FACS Vantage SE flow cytometer.

Statistical Analyses

All experiments were replicated at least three times. The data were analyzed using the Statistical Package for the Social Sciences (SPSS) statistical software (Version 16.0; SPSS Inc., Chicago, IL, USA). Values are expressed as the mean ± SD. Student's t-test was used to analyze differences between groups. Differences were considered significant if p < 0.05.

Folate-Deficient Mice Have a Lower Level of Serum Folate

The serum folate concentration was measured to validate the folate-deficient mouse model. Serum folate concentrations in the folate-deficient group were significantly lower than those in the control group (4.83 ± 2.046 vs. 22.75 ± 1.315 ng/mL, p < 0.001, n = 10, mean ± SEM) (Figure 1), which indicated the successful establishment of the animal model.

Figure 1. Serum folate concentrations detected using an electro-chemiluminescence immunoassay; lower serum folate concentrations were observed following folate-deficiency treatment, *** p < 0.001.

Folate Deficiency Reduced Apoptosis of Endometrium Decidual Cells in Pregnant Mice

For a morphological comparison of the endometrium decidual cells between folate-deficient mice and control mice, transmission electron microscopy (TEM) was employed to observe differences in the organelles related to programmed cell death. As shown in Figure 2A, decidual cells of mice in the control group exhibited dilation of the perinuclear endoplasmic reticulum cisternae and distended mitochondria, while cells of the folate-deficient mice did not show corresponding changes, or showed only slight dilation in some cases. To further confirm the effect of folate deficiency on apoptosis in decidual cells, TUNEL and flow cytometry were performed (Figure 2B,C). Compared with the control mice, the number of TUNEL-positive cells was much lower in the folate-deficient-diet mice, and the flow cytometric analysis showed that, in decidual cells of normal D7 and D8 pregnant mice, the percentages of early apoptotic cells were 12.80% and 12.79% and the percentages of late apoptotic cells were 60.98% and 48.97%, respectively. In contrast, in decidual cells of folate-deficient D7 and D8 pregnant mice, early apoptotic cells accounted for 7.99% and 8.33% and late apoptotic cells for 22.87% and 17.19%, respectively.
These data were consistent with the TUNEL assay. Furthermore, the protein expression of caspase-3, a downstream effector of apoptosis, was significantly downregulated in response to folate-deficiency treatment (Figure 2D).

Folate Deficiency Alters the Expression of Bcl2 Family Proteins

Previous studies have confirmed the role of Bcl2 family proteins in controlling the involution of endometrium decidual cells. Thus, we proposed that the expression of Bcl2 family proteins may be altered under folate-deficient conditions. Immunohistochemistry showed that Bax and Bcl2 proteins are widely distributed in the cytoplasm of decidual cells; the number of Bax-positive cells was significantly lower in folate-deficient mice, while there was no difference in the number of Bcl2-positive cells (Figure 3A). Similarly, western blotting analyses revealed no differences in Bcl2 expression between the folate-deficient and control groups, while the expression of Bax was significantly downregulated after folate-deficiency treatment. Thus, the Bax/Bcl2 ratio was significantly lower in folate-deficient mice (Figure 3B). In addition, the expression of Bax in primary cells was confirmed using immunofluorescence. The fluorescence intensity of Bax protein was much weaker in folate-deficient primary decidual cells (Figure 3C). The identification of decidual cells is shown in Supplementary material 2 (see Supplementary Figure S1). Furthermore, the expression of two proteins upstream of Bcl2 (NFκB and MAPK1) was examined using western blotting analyses. These results showed no obvious differences in the expression of NFκB and MAPK1 between the folate-deficient group and the control group (Figure 3D). These data confirmed that folate deficiency suppressed apoptosis of decidual cells by changing the expression of Bcl2 family proteins.

Folate Deficiency Inhibited Apoptosis via the Mitochondrial Pathway

The mitochondrial ΔΨm was assessed using a JC-1 fluorescence probe, which revealed that apoptotic primary decidual cells in the control group had decreased mitochondrial membrane potential, while primary decidual cells isolated from mice in the folate-deficient group did not exhibit similar changes (Figure 4). A decreased mitochondrial membrane potential would initiate a downstream "ripple effect," such as the release of cytochrome c from the intermembrane space of mitochondria. We further compared the release of cytochrome c from mitochondria in primary cells isolated from mice in the two groups using immunofluorescence and confocal fluorescence microscopy. As shown in Figure 5B, the control group showed a higher proportion of cells with diffuse cytochrome c distribution, indicating the release of cytochrome c from mitochondria into the cytosol after depolarization. Conversely, decidual cells from the folate-deficient group showed punctate fluorescence, indicating cytochrome c concentrated in the mitochondria. Western blotting analyses were performed to further confirm the profile of cytochrome c release (Figure 5C). These results demonstrated that the expression of total cytochrome c was similar in both groups, but the tissue of the folate-deficient group contained less cytochrome c in the cytoplasm (mitochondria excluded).
Taken together, these results demonstrated that folate-deficiency treatment prevented the decrease in the mitochondrial membrane potential and the subsequent release of cytochrome c in endometrium decidual cells, thereby suppressing the intrinsic apoptotic pathway.

Folate Deficiency Impairs Decidualization in Mice

As described above, folate deficiency suppresses the natural apoptosis of decidual cells, which suggests that folate deficiency may have an effect on decidualization. Thus, we analyzed the expression of marker genes of endometrium decidualization in mice, including bone morphogenetic protein 2 (BMP2), homeobox A10 (Hoxa10), matrix metalloproteinase 2 (MMP2) and matrix metalloproteinase 9 (MMP9) [19,20]. Real-time PCR results showed that only MMP2 mRNA was significantly decreased following folate-deficiency treatment (Figure 6A). Moreover, the expression of BMP2, Hoxa10 and MMP2 proteins was markedly reduced in folate-deficient mice, as revealed by western blotting analyses. The differential expression of MMP9 protein occurred on D8 (Figure 6B,C). Taken together, these data suggested that folate deficiency impaired decidualization in mouse endometrial stromal cells.

Discussion

It is well known that folate deficiency or depletion induces unfavorable pregnancy outcomes, such as neural tube defects, intrauterine growth retardation, preterm birth [3] and hydrocephalus [21]. Previous studies have mainly focused on the fetus itself under maternal folate-deficient conditions and found multiple congenital abnormalities. As the first molecular event after embryo implantation, decidualization of the endometrium attracted our attention. It has been confirmed that folate deficiency affects the apoptosis of various cells, and apoptosis of decidual cells is a vitally important process during endometrium decidualization in both mice and humans. Furthermore, this process is accompanied by proliferation and differentiation of the endometrium stromal cells in mice as well as in humans, ensuring that the well-remodeled uterus can receive the gradually growing embryo [7,12]. Thus, we explored the effect of folate deficiency on the apoptosis of decidual cells, which is a natural part of decidualization. Transmission electron microscopy is one of the most convincing methods for observing cell apoptosis, and thus it was employed to identify the ultrastructural differences in decidual cells between the control group and the folate-deficiency group [22]. According to the description provided by Katz et al. [7], apoptotic decidual cells show dilation of the mitochondria and endoplasmic reticulum during involution and then develop clumps of chromatin, autophagosomes and heterophagosomes accumulating in the cytoplasm. Consistent with these observations, we found swollen mitochondria and dilated endoplasmic reticulum in decidual cells of normal-diet mice, while decidual cells of mice subjected to folate deficiency did not show these characteristics and their organelles appeared nearly non-apoptotic. No evidence of accumulated clumps of chromatin was found in either group, which may be due to the thin tissue sections (70 nm); we were only able to observe a limited plane of the decidual cells, which showed early apoptotic characteristics because apoptosis is a rapid process [23]. In addition, results obtained from the TUNEL assay, flow cytometric analysis and the expression of caspase-3 confirmed that folate deficiency reduced apoptosis in decidual cells.
In a further study, we found that the expression of Bcl2 family proteins, which regulate apoptosis in decidual cells, changed following folate-deficiency treatment. Bcl2 family proteins are critical regulators of programmed cell death via their ability to permeabilize the mitochondrial outer membrane [24][25][26]. Thus, we hypothesized that the mitochondrial pathway might play a role in the inhibition of apoptosis caused by folate deficiency. The mitochondrial membrane potential was assessed using a fluorescent JC-1 probe with a laser scanning confocal microscope. As a cationic dye, JC-1 converts from red to green if the mitochondrial membrane potential decreases [27]. In polarized (normal) mitochondria of cells not undergoing apoptosis, JC-1 accumulates and aggregates with a red emission, and when the mitochondria depolarize (loss of mitochondrial membrane potential), JC-1 remains in the cytosol in its monomeric form and fluoresces green [28]. Following depolarization of the mitochondrial outer membrane, the release of cytochrome c is considered to be particularly important in the activation of downstream caspase signaling cascades [23]. Thus, the mitochondrial membrane potential (ΔΨm) and the state of cytochrome c release could both be assessed. These data confirmed that apoptosis in decidual cells was inhibited by folate-deficiency treatment via the mitochondrial pathway. The effect of folate deficiency on apoptosis varies across studies. Folate deficiency triggers oxidative-nitrosative stress-mediated apoptosis in RINm5F pancreatic islet β-cells [14], while it induces cell apoptosis via a cell-cycle-arrest mechanism in mouse embryonic stem cells [16]. The NF-κB pathway and a Bcl2-related mechanism are also implicated in two independent studies [13,29]. However, Garcia Crespo et al. [15] found that folate deficiency decreased apoptosis in normal mouse intestines. This result was not unexpected, because previous studies have shown that the effect of folate deficiency on gene expression is highly cell-specific [30] and that the effect may vary between different mouse strains [31]. Thus, we focused for the first time on the effect of folate deficiency on apoptosis of endometrium decidual cells and demonstrated that the natural apoptotic process of decidual cells was suppressed under maternal folate-deficient conditions. Furthermore, decidualization of endometrial stromal cells was impaired in pregnant mice, which is consistent with previous findings indicating that apoptosis is an important component of the decidualization process [32]. Moreover, we propose that impaired decidualization of the endometrium caused by folate deficiency may contribute to fetal abnormalities, as previous studies have shown that BMP2, a well-known marker of decidualization, plays an important role in cephalic neural tube closure [33,34]; further studies are needed to confirm this hypothesis. This study extends our understanding by showing that folate deficiency disrupts the proliferation-apoptosis balance and that the mitochondrial apoptosis pathway of endometrial decidual cells is inhibited in folate-deficient pregnant mice. Furthermore, decidualization was impaired.
Although future investigations are needed to clarify the molecular mechanisms underlying the effects of folate deficiency on the expression of genes related to the mitochondrial apoptosis pathway, and the effect of the folate metabolic pathway on apoptosis of decidual cells, we must first recognize the harmful effects of folate deficiency that occur at this time point after embryo implantation. This time point is earlier than the acknowledged period when the neural tube develops, and disrupted decidualization of the uterine endometrium caused by folate deficiency may have a significant effect on undesirable pregnancy outcomes.

Conclusions

In this study, we demonstrated that folate deficiency decreased apoptosis of the endometrium decidual cells in pregnant mice. The disturbed proliferation-apoptosis balance could contribute to an impaired decidualization process, which is vitally important for embryonic development after embryo implantation. In addition, we speculate that folate deficiency may have an important effect on the mitochondrial apoptosis pathway in decidual cells.
In situ synthesis of MWCNT-graft-polyimides: thermal stability, mechanical property and thermal conductivity

Herein, MWCNT-graft-polyimides (MWCNT-g-PIs) were prepared by the in situ grafting method. Strengthening the interfacial interaction between MWCNTs and polyimide chains decreased their interfacial thermal resistance (RC). Compared with the RC of 10% MWCNT/PI, the RC of 10% MWCNT-g-PI decreased by 16.7%. Hence, MWCNT-g-PIs possessed higher thermal conductivity than MWCNT/polyimides (MWCNT/PIs). Meanwhile, the Tg values of all the samples (MWCNT/PIs and MWCNT-g-PIs) were greater than 399 °C (by DMA). Compared with MWCNT/PIs, 5% and 10% MWCNT-g-PIs showed enhanced thermal stability in air. The storage modulus retentions were greater than 63% at 200 °C and 45% at 300 °C. Also, 5% and 10% MWCNT-g-PIs maintained the high tensile strength of pure PI, and the tensile modulus increased up to 2.59 GPa on increasing the loading amount of MWCNTs. This study sheds light on improving the thermal conductivity of polyimides effectively at relatively low loadings.

Introduction

In recent years, with the rapid development of high-performance microelectronic equipment and energy-harvesting devices, the demand for heat sinks in industrial and electronic fields has dramatically increased.1,2 However, the thermal conductivity of common polymers is quite low, ranging from 0.1 W m⁻¹ K⁻¹ to 0.3 W m⁻¹ K⁻¹. Hence, their applications are severely limited in industrial and electronic fields due to heat accumulation.[3][4][5] It is important to increase the thermal conductivity of polymers to enhance thermal diffusion and thereby reduce heat accumulation. A simple and feasible method for enhancing the thermal conductivity of polymers involves introducing highly thermally conductive fillers (carbon nanotubes,4,6,7 graphites,[8][9][10] boron nitrides,11,12 aluminum nitrides,13,14 and aluminum oxides15) into polymers. Among all kinds of highly thermally conductive fillers, carbon nanotubes (MWCNTs or SWCNTs) have been expected to be capable of improving the thermal conductivity of polymers effectively at relatively low loadings.[16][17][18][19] However, the poor thermal-conduction performance of carbon nanotube composites is due to the high interfacial thermal resistance between carbon nanotubes and polymers. Improving the filler/polymer interfaces can reduce this "thermal resistance", and some methods have been considered, such as non-covalent functionalization7 and covalent functionalization.6 Covalent functionalization involves grafting chemical functional groups (amines, silanes, polymers, etc.) onto carbon nanotubes. In this paper, polyimide was selected as the polymer matrix owing to its outstanding thermal and mechanical properties, and MWCNTs acted as thermally conductive fillers. MWCNT-graft-polyimides (MWCNT-g-PIs) were obtained by the in situ grafting method to reduce the interfacial thermal resistance between nanotubes and polyimide and thus enhance the thermal conductivity. The thermal stability, mechanical properties and thermal conductivity of MWCNT-g-PIs were studied. For comparison, MWCNT/polyimides (MWCNT/PIs) were prepared by a simple blending method.

Measurements

FTIR spectra were recorded on a Nicolet iS10 spectrometer at a resolution of 2 cm⁻¹ in the range of 400-4000 cm⁻¹ in reflection mode.
Dynamic Mechanical Analysis (DMA) was performed with a TA Instruments DMA Q800 at a heating rate of 5 °C min⁻¹ and a load frequency of 1 Hz in the film-tension geometry, and Tg was taken as the peak temperature of the tan δ curves. Thermogravimetric analysis (TGA) was performed with a TA Instruments 2050 at a heating rate of 10 °C min⁻¹ in nitrogen or air atmosphere. The mechanical properties of the samples were studied at room temperature with a Shimadzu AG-I universal testing apparatus at a crosshead speed of 2 mm min⁻¹. Measurements were obtained at 25 °C with film specimens (about 50 µm thick, 6 mm wide and 40 mm long). The cross-section morphology of the films was observed by Scanning Electron Microscopy (SEM, NOVA NANOSEM 450, England). The films were fractured in liquid nitrogen and coated with gold prior to testing. Thermal conductivity measurements were performed at 25 °C with a TC 3000 series thermal conductivity instrument based on ASTM D5930, Standard Test Method for Thermal Conductivity of Plastics by Means of a Transient Line-Source Technique. Thermal conductivity K (W m⁻¹ K⁻¹) was calculated by the standard line-source relation:

K = q ln(t₂/t₁) / {4π [ΔT(t₂) − ΔT(t₁)]}

Here, q represents the heat conducted per unit length of the wire, ΔT represents the temperature change of the wire and t represents the measuring time.

Samples with different MWCNT contents (0%, 5%, and 10%) in polyimide were synthesized via the blending method and designated as PI, 5% MWCNT/PI, and 10% MWCNT/PI, respectively. The preparation of 5% MWCNT/PI is used as a representative to illustrate the detailed synthetic procedure. First, 0.2202 g MWCNTs and 25 g DMAc were added into a three-neck flask, and the mixture was subjected to ultrasonic dispersion at room temperature for 3 h. Subsequently, ODA (10 mmol, 2.002 g), PMDA (10 mmol, 2.181 g), and 14.6 g DMAc were added into the three-neck flask. The reaction mixture was slowly stirred for 24 h. Next, the mixture was cast onto a glass plate, followed by a preheating program (60 °C/10 h, 80 °C/2 h, 100 °C/2 h, 120 °C/2 h) and an imidization procedure under vacuum (200 °C/1 h, 250 °C/1 h, and 300 °C/1 h) to produce the 5% MWCNT/PI film.

2.3.2 Preparing MWCNT-g-PIs by the in situ grafting method (Scheme 1). Polyimides with different MWCNT contents (0%, 5%, and 10%) grafted onto the nanotubes were prepared via the in situ synthesis method, and the corresponding samples were named g-PI, 5% MWCNT-g-PI, and 10% MWCNT-g-PI. The preparation of 5% MWCNT-g-PI is used as a representative to illustrate the detailed synthetic procedure. First, 0.2204 g MWCNT-OH and 25 g DMAc were added into a three-neck flask, and the mixture was subjected to ultrasonic dispersion at room temperature for 3 h. Subsequently, ODA (9.8 mmol, 1.962 g), PMDA (10 mmol, 2.181 g), and 14.7 g DMAc were added into the three-neck flask. The reaction mixture was slowly stirred for 2 h. Finally, APTES (0.2 mmol, 0.0443 g) was introduced into the system, and the system underwent polymerization for 24 h. The mixture was then cast onto a glass plate, followed by a preheating program (60 °C/10 h, 80 °C/2 h, 100 °C/2 h, 120 °C/2 h) and an imidization procedure under vacuum (200 °C/1 h, 250 °C/1 h, and 300 °C/1 h) to produce the 5% MWCNT-g-PI film.

Characterization of MWCNT/PIs and MWCNT-g-PIs

The chemical structures of MWCNT-g-PIs were characterized by FT-IR spectroscopy. Fig. 2 shows the FT-IR spectra of MWCNT/PIs and MWCNT-g-PIs. All the samples exhibited characteristic imide absorptions at around 1776 cm⁻¹ (asymmetric C=O stretching), 1714 cm⁻¹ (symmetric C=O stretching), and 1366 cm⁻¹ (C-N stretching).
The spectra of MWCNT-g-PIs show the asymmetric and symmetric stretching vibrations of -CH₂ at 2921 cm⁻¹ and 2846 cm⁻¹, respectively. These vibrations belong to APTES and the carbon nanotubes, and the absence of the characteristic absorption bands of the -NH₂ and -OH groups proved the successful grafting of polyimide chains onto the carbon nanotubes. The interaction between MWCNT and PI in MWCNT-g-PIs via coupling is illustrated in Fig. 3.

Scheme 1. The preparation process of MWCNT-g-PIs.

Fig. 4 shows the wide-angle X-ray diffraction (XRD) curves of MWCNT/PIs and MWCNT-g-PIs. PI and g-PI exhibited only a diffuse peak at 2θ = 17.5°, whereas 5% and 10% MWCNT/PIs and MWCNT-g-PIs exhibited two diffuse peaks at 2θ = 17.3° and 24.9°, respectively. The small diffuse peak at 2θ = 24.9° observed in the diffraction curves of 5% and 10% MWCNT/PIs and MWCNT-g-PIs indicates that the carbon nanotubes were successfully incorporated into the polyimide matrix.

Thermal properties of MWCNT/PIs and MWCNT-g-PIs

The thermal data are summarized in Table 1. The T5% and T10% values of PI were 559 °C and 573 °C under N2 atmosphere, respectively. Compared with the values for PI, the T5% and T10% of g-PI decreased slightly under N2 atmosphere; the values were 549 °C and 569 °C, respectively. However, the addition of carbon nanotubes improved T5% and T10% under N2 atmosphere, irrespective of whether the materials were prepared by blending or grafting. From the thermal degradation curves of MWCNTs and MWCNT-OH under N2 atmosphere, we can infer that MWCNTs and MWCNT-OH have better thermal stability than PI, which results in the enhancement of the T5% and T10% of the materials. The residual weight retentions at 800 °C also improved under N2 atmosphere; the values for 10% MWCNT/PI and 10% MWCNT-g-PI were 62.5% and 62.6%, respectively. In contrast to the values for PI, T5% and T10% showed a marked decrease under air atmosphere for the materials prepared by the blending method. From the DTG curves of MWCNT/PIs in air, we can infer that the degradation of MWCNTs at the high-temperature stage is the main reason for this phenomenon. However, T5% and T10% were markedly higher under air atmosphere for the materials prepared by the grafting method than for those prepared by the blending method. After grafting, the MWCNTs were tightly wrapped by polyimide chains owing to the covalent bond linkage between MWCNTs and polyimide chains, which strengthened the interfacial interaction, and thus the MWCNT degradation was delayed. The heat-resistance index, THRI = 0.49 × [T5% + 0.6 × (T30% − T5%)], was calculated;11,20 the results are listed in Table 1. In N2, the THRI values of MWCNT/PIs and MWCNT-g-PIs increased after the addition of MWCNTs. Under air atmosphere, the THRI of MWCNT/PIs decreased after the addition of MWCNTs, but the THRI of the MWCNT-g-PIs remained in line with that of pure PI. In short, MWCNT/PIs and MWCNT-g-PIs exhibited good thermal stability in N2. However, MWCNT-g-PIs possessed better thermal stability than MWCNT/PIs in air. Nevertheless, the reduction in thermal stability for MWCNT/PIs in air remained at an acceptable level. The dynamic mechanical analyses of MWCNT/PIs and MWCNT-g-PIs are shown in Figs. 7 and 8. The storage modulus retentions of MWCNT/PIs and MWCNT-g-PIs at 200 °C and 300 °C were analysed and are listed in Table 2. All the samples had good storage modulus retention at the high-temperature stage.
The storage modulus retentions were greater than 63% at 200 °C and 45% at 300 °C. Meanwhile, the glass transition temperature (Tg), determined as the peak temperature of the tan δ curves, is listed in Table 1. Tg is possibly determined by two competing factors: the free volume and the steric effect.21,22 In the MWCNT-g-PI system, Tg shows a decreasing trend with increasing MWCNT loading. The polyimide chains grafted on the MWCNT surfaces disrupted the ordered chain structure of the polyimides and resulted in an increase in free volume. However, the Tg values of all the samples were greater than 399 °C.

Mechanical properties of MWCNT/PIs and MWCNT-g-PIs

For nanocomposites, the mechanical properties are affected by many factors, such as the polymer matrix, the loading amount of inorganic nanofillers, the dispersion in the polymer matrix and the interfacial interaction.23 Based on these aspects, the mechanical properties of MWCNT/PIs and MWCNT-g-PIs were discussed. The tensile strength, tensile modulus and elongation at break of MWCNT/PIs and MWCNT-g-PIs are summarized in Table 3. The tensile strength, tensile modulus and elongation at break of PI were 129 MPa, 2.39 GPa and 57.5%, respectively; PI showed good mechanical properties. Compared with the results for PI, the tensile strength and tensile modulus of g-PI increased slightly because of crosslinking points formed by self-polycondensation of the coupling agent at the ends of the polyimide chains, which was also responsible for the decrease in the elongation at break of g-PI from 57.5% to 48.6%. Subsequently, the mechanical properties of MWCNT-g-PIs with different loading amounts were analysed. 5% and 10% MWCNT-g-PIs maintained the high tensile strength of PI. The tensile modulus increased up to 2.59 GPa on increasing the MWCNT loading. The elongation at break of MWCNT-g-PIs exhibited a decreasing trend but was still more than 36%; the reduction in the elongation at break in our system remained at an acceptable level. The covalent bond linkage between MWCNTs and polyimide chains promoted the uniform dispersion of MWCNTs in the polyimides and strengthened the interfacial interaction between MWCNTs and polyimide chains. Hence, MWCNT-g-PIs showed good mechanical properties. The mechanical properties of the samples prepared by the simple blending method (MWCNT/PIs) were also investigated. In this research, a short carbon nanotube (L/d = 250) was selected, which could be easily dispersed in a polymer matrix, leading to π-π interactions between the carbon nanotubes and the benzene rings in polyimide chains. Thus, MWCNT/PIs also exhibited good mechanical properties.

Table 1. Thermal properties of MWCNT/PIs and MWCNT-g-PIs (footnotes: Tg measured by DMA at a heating rate of 5 °C min⁻¹; T5% and T10% are the 5% and 10% weight-loss temperatures measured by TGA; THRI = 0.49 × [T5% + 0.6 × (T30% − T5%)]; residual weight retention at 800 °C).

Morphology of MWCNT/PIs and MWCNT-g-PIs

Fig. 9 shows the SEM images of MWCNT/PIs and MWCNT-g-PIs. It can be seen that the MWCNTs are dispersed more homogeneously in MWCNT-g-PIs than in MWCNT/PIs, because the covalent bond linkage strengthens the interfacial interaction between the MWCNTs and the polyimide matrix. A small portion of agglomerated MWCNTs can be seen in the 5% and 10% MWCNT/PI composites.
This is one of the key factors that can affect the thermal conductivity of the resulting composites. Apparently, a higher filler content is required to form "thermal conductive pathways" when the fillers agglomerate in the polymer matrix. A good dispersion of MWCNTs in polyimides may therefore contribute to the improvement in thermal conductivity.

Thermal conductivity of MWCNT/PIs and MWCNT-g-PIs

The thermal conductivities of the MWCNT/PI and MWCNT-g-PI composites are shown in Fig. 10. Increasing the MWCNT loading enhanced the thermal conductivity of MWCNT/PIs and MWCNT-g-PIs because more and more MWCNTs participated in forming "thermal conductive pathways". However, the thermal conductivity of MWCNT-g-PIs increased faster than that of MWCNT/PIs at the same loading. The thermal conductivity of 10% MWCNT/PI improved by 69.6% relative to that of pure PI, while the thermal conductivity of 10% MWCNT-g-PI increased by 87.0% relative to that of pure PI (Fig. 10). The uniform dispersion of MWCNTs in the polyimides promotes the formation of "thermal conductive pathways" at the same loading, and strengthening the interfacial interaction between MWCNTs and polyimide chains by covalent bond linkage decreases the interfacial thermal resistance (RC) between the nanotubes and the polymer matrix. The interfacial thermal resistance (RC) was calculated by a Maxwell-Garnett-type effective medium approach (EMA):24
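The EMA expression itself does not survive in the extracted text. A widely used Maxwell-Garnett-type EMA for randomly oriented nanotubes with an interfacial (Kapitza) resistance is the Nan-type model sketched below; this is an assumed reconstruction consistent with the cited approach, not the paper's verbatim equation, and the symbols follow that model.

```latex
% Nan-type Maxwell-Garnett EMA (assumed reconstruction, not verbatim).
% K_e, K_m: effective (composite) and matrix thermal conductivities;
% K_c: intrinsic nanotube conductivity; f: nanotube volume fraction;
% d, L: nanotube diameter and length; R_C: interfacial thermal resistance.
\[
\frac{K_e}{K_m} \;=\; \frac{3 + f\,(\beta_x + \beta_z)}{3 - f\,\beta_x},
\qquad
\beta_x = \frac{2\,(K_x^{c} - K_m)}{K_x^{c} + K_m},
\qquad
\beta_z = \frac{K_z^{c}}{K_m} - 1,
\]
\[
K_x^{c} = \frac{K_c}{1 + 2R_C K_c/d},
\qquad
K_z^{c} = \frac{K_c}{1 + 2R_C K_c/L}.
\]
```

In this form RC enters only through the reduced transverse and axial nanotube conductivities, so fitting the measured composite conductivity Ke at a known loading f yields RC directly.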
Conservation of the structure and function of bacterial tryptophan synthases

Tryptophan biosynthesis is one of the most characterized processes in bacteria, in which the enzymes from Salmonella typhimurium and Escherichia coli serve as model systems. Tryptophan synthase (TrpAB) catalyzes the final two steps of tryptophan biosynthesis in plants, fungi and bacteria. This pyridoxal 5′-phosphate (PLP)-dependent enzyme consists of two protein chains, α (TrpA) and β (TrpB), functioning as a linear heterotetrameric complex containing two αβ TrpAB units. The reaction has a complicated, multistep mechanism resulting in the β-replacement of the hydroxyl group of L-serine with an indole moiety. Recent studies have shown that functional TrpAB is required for the survival of pathogenic bacteria in macrophages and for evading host defense. Therefore, TrpAB is a promising target for drug discovery, as its orthologs include enzymes from the important human pathogens Streptococcus pneumoniae, Legionella pneumophila and Francisella tularensis, the causative agents of pneumonia, Legionnaires' disease and tularemia, respectively. However, the specific biochemical and structural properties of the TrpABs from these organisms have not been investigated. To fill important phylogenetic gaps in the understanding of TrpABs and to uncover unique features of TrpAB orthologs to spearhead future drug-discovery efforts, the TrpABs from L. pneumophila, F. tularensis and S. pneumoniae have been characterized. In addition to kinetic properties and inhibitor-sensitivity data, structural information gathered using X-ray crystallography is presented. The enzymes show remarkable structural conservation, but at the same time display local differences in both their catalytic and allosteric sites that may be responsible for the observed differences in catalysis and inhibitor binding. This functional dissimilarity may be exploited in the design of species-specific enzyme inhibitors.

Introduction

Tryptophan synthase (TrpAB) is a pyridoxal 5′-phosphate (PLP)-dependent enzyme that participates in the final two steps of tryptophan synthesis in plants, fungi and bacteria (reviewed in Dunn, 2012; Raboni et al., 2003, 2009; Dunn et al., 2008). The enzyme consists of two protein chains, α (TrpA) and β (TrpB) (Crawford & Yanofsky, 1958), that operate as a linear heterotetrameric complex containing two functional αβ TrpAB units (Fig. 1). In bacteria, TrpA and TrpB are encoded by usually adjacent trpA and trpB genes that belong to the highly regulated tryptophan-biosynthesis operon (reviewed in Merino et al., 2008). The TrpA subunit converts indole-3-glycerol phosphate (IGP) into glyceraldehyde 3-phosphate (G3P) and indole (IND) (Fig. 2). Subsequently, the latter product is utilized by TrpB, where it reacts with the L-serine (L-Ser) substrate to generate L-tryptophan (L-Trp). The reaction has a complicated, multistep mechanism involving enzyme-cofactor and substrate covalent adducts and results in the β-replacement of the hydroxyl group of L-Ser with the indole moiety (Fig. 2) (reviewed in Raboni et al., 2009).
As originally shown for TrpAB from the Gram-negative Salmonella typhimurium (StTrpAB), TrpA adopts a canonical (β/α)8-barrel fold (also known as a TIM barrel) with numerous additional elements (Hyde et al., 1988; Figs. 1 and 3). The active site is located at the top of the central β-barrel, with two acidic residues involved in catalysis: StGlu49, belonging to the S2 strand, and StAsp60, originating from loop L2. Another structural element, loop L6, serves as a lid closing over the binding pocket. TrpB represents a type II PLP-dependent enzyme with two domains, the N- and C-terminal domains, with the active site located in a cleft between them and carrying the covalently attached PLP cofactor. The N-terminal domain encompasses the so-called communication (COMM) domain that plays a key role in coordinating the activity of the two active sites (Schneider et al., 1998). In the tetrameric arrangement, the TrpA and TrpB catalytic sites of the adjoining subunits are connected by a 25 Å long hydrophobic channel that facilitates indole transport from TrpA to TrpB. The TrpA- and TrpB-catalyzed chemical transformations are highly controlled by allosteric effects and other factors, for instance the binding of monovalent cations to TrpB, linked to substrate channeling. These molecular measures, together with other bacterial regulatory mechanisms (Merino et al., 2008), are in place to ensure that cellular resources are efficiently utilized to produce L-Trp, which is a scarce and the most energetically expensive amino acid to biosynthesize (Akashi & Gojobori, 2002). The well documented ligand-induced reciprocal communication between subunits leading to mutual activation involves conformational rearrangements. During the catalytic process, both TrpA and TrpB cycle between a low-activity open conformation (αO or βO) and a high-activity closed state (αC or βC) (Dunn, 2012), depending on the reaction state. The formation of the aminoacrylate Schiff-base intermediate, EAA, from L-Ser and PLP in TrpB triggers movement of the TrpB COMM domain towards a closed state (βC), which subsequently activates TrpA by closure of the L6 loop (αC). In a reciprocal process, IGP substrate binding to TrpA promotes an αC state, which in turn activates TrpB (βC). The two protein chains convert back to their open states when the L-Trp external aldimine, EA,ex2, is produced. The availability of L-Trp, either supplied by the environment or synthesized in cellulo, is a prerequisite for bacterial survival. Some species rely heavily on external sources and maintain either no or only limited functionality of the L-Trp operon, while others preserve the complete system for de novo biosynthesis. The absence of the L-Trp biosynthetic pathway in animals and humans makes it a potentially attractive drug target for the treatment of bacterial diseases, even though the enzymes involved are only essential under certain conditions; that is, when exogenous L-Trp becomes depleted.

Figure 1. Overall structure of the tryptophan synthase heterotetramer from S. pneumoniae. TrpA is shown in yellow and TrpB is shown in cyan, with the COMM domain shown in orange and the PLP cofactor depicted in a sphere representation.

Recent studies exploring these avenues showed that anthranilate synthase component I, TrpE (Zhang et al., 2013), … triggers the expression of host indoleamine 2,3-dioxygenase (IDO-1), an enzyme responsible for L-Trp breakdown, or possibly even before this defense mechanism is mounted (Wellington et al., 2017).
Similar mechanisms inducing L-Trp starvation also function in lung-specific mouse infections with Streptococcus pneumoniae and Francisella tularensis, which are Gram-positive and Gram-negative bacteria, respectively. Under such conditions, the latter organism also requires TrpAB for growth (Peng & Monack, 2010). Other pathogens that utilize tryptophan biosynthesis to evade host defenses or even to hijack it for their own purposes include urogenital serovars of Chlamydia trachomatis (a Gram-negative obligate intracellular parasite), which employ a partly dysfunctional TrpAB to produce L-Trp from external sources of indole provided by coexisting bacteria (Caldwell et al., 2003; Bonner et al., 2014). The growing list of human pathogens in which the L-Trp biosynthetic pathway plays an important role extends beyond prokaryotes. For example, Cryptosporidium species (parasitic protozoa) inhabiting intestines encode bacteria-derived TrpB, which potentially acts in a similar fashion as it does in C. trachomatis (Sateriale & Striepen, 2016). Specific biochemical and structural traits of the tryptophan synthases from these organisms have not been explored, with the recent exception of the M. tuberculosis ortholog. The structural and functional information gathered over the past 60 years has helped to explain the roles of individual residues in catalysis and allosteric regulation of the two active sites. Research has focused primarily on a prototypic tryptophan synthase from S. typhimurium (StTrpAB) and to a lesser extent those from E. coli (Heilmann, 1978; Lane & Kirschner, 1983; Drewe & Dunn, 1985, 1986; Houben & Dunn, 1990; Lim et al., 1991) and Pyrococcus furiosus (Yamagata et al., 2001; Ogasahara et al., 2003; Hioki et al., 2004; Lee et al., 2005; Buller et al., 2015). Tryptophan synthase has become a prototype system to study the peculiarities of allostery and substrate channeling (Hilario et al., 2016; Niks et al., 2013; Rhee et al., 1996; Rowlett et al., 1998; Spyrakis et al., 2006). TrpA is also one of the model proteins that have been used to investigate protein-folding mechanisms (Wu & Matthews, 2002; Bilsel et al., 1999; Yang et al., 2007; Vadrevu et al., 2008; Wu et al., 2007; Michalska et al., 2015). The sparsity of biochemical/structural investigations of other orthologs possibly stems from challenges in obtaining high-quality TrpAB samples and also from interest being focused on very detailed mechanistic aspects rather than on species-specific variations. Importantly, though, as shown by our recent study of M. tuberculosis TrpAB (MtTrpAB; Wellington et al., 2017), these so-far ignored differences, especially within the nonconserved tunnel lining, may have profound consequences for the discovery and design of new allosteric inhibitors. Therefore, to fill the important phylogenetic gaps in our understanding of TrpABs and to uncover potential unique features of other orthologs to facilitate future drug-discovery efforts, we biochemically characterized three TrpABs from Gram-positive and Gram-negative pathogens: Legionella pneumophila Philadelphia, F. tularensis and S. pneumoniae (LpPhTrpAB, FtTrpAB and SpTrpAB, respectively). In addition to kinetic properties and inhibitor-binding capabilities, we also provide high-resolution structural information gathered using X-ray crystallography for the FtTrpAB and SpTrpAB complexes and for two α subunits: LpPhTrpA and that from L. pneumophila Paris (LpPaTrpA).
TrpAB gene cloning. The gene cloning was performed as reported previously (Kim et al., 2011). Briefly, F. tularensis Schu S4, L. pneumophila Philadelphia, L. pneumophila Paris and S. pneumoniae TIGR4 genomic DNAs were used as templates for PCR of the genes coding for the TrpA and TrpB subunits of tryptophan synthase. Vector-compatible primers for the amplification of the DNA fragments coding for the subunits were designed using an online tool (https://bioinformatics.anl.gov/targets/public_tools.aspx; Yoon et al., 2002). The TrpA subunit peptides that were cloned were as follows: 1-269 for FtTrpA, 1-272 for LpPhTrpA and LpPaTrpA, and 1-258 for SpTrpA. The TrpB subunit peptides that were cloned were as follows: 1-396 for FtTrpB, 13-396 for LpPhTrpB and 4-407 for SpTrpB. Purified PCR products were treated with T4 DNA polymerase in the presence of dCTP (Eschenfeldt et al., 2010) according to the vendor's specification (New England Biolabs, Ipswich, Massachusetts, USA). The protruded DNA fragment for each of the TrpA subunits was mixed with T4 DNA polymerase-treated vector pMCSG68 (PSI:Biology-Materials Repository) to allow ligation-independent cloning (Aslanidis & de Jong, 1990; Eschenfeldt et al., 2009). Similarly, the protruded DNA fragment for each of the TrpB subunits was mixed with T4 DNA polymerase-treated vector pRSF with kanamycin resistance, which had a ligation-independent cloning site identical to that of pMCSG68. Both subunits from each genomic DNA were individually transformed into E. coli BL21-Gold (DE3) cells and grown in the presence of the corresponding antibiotic. A single colony of each transformant was picked, grown and induced with isopropyl β-D-1-thiogalactopyranoside (IPTG). The cell lysate was analyzed to confirm a protein of the correct molecular weight. The solubility of the TrpA subunit was analyzed via small-scale Ni2+-affinity purification and overnight TEV protease cleavage. Once the DNA sequences of the TrpA and TrpB subunits had been verified, both subunit plasmids from each genomic DNA were co-transformed into E. coli BL21-Gold (DE3) cells in LB medium containing ampicillin (150 µg ml−1) and kanamycin (25 µg ml−1). Co-transformed colonies were analyzed using Ni2+-affinity purification, and overnight TEV protease cleavage was performed to verify that the complex was soluble and stable. Expression of TrpAB and purification for crystallization. To express SpTrpAB and FtTrpAB, starter cultures were grown overnight at 37°C and 200 rev min−1 in LB medium with ampicillin (100 µg ml−1) and kanamycin (30 µg ml−1) supplemented with 40 mM K2HPO4. The following morning, LB-PO4-glucose (2 g per litre) medium with antibiotics was inoculated with the overnight cultures. After reaching an OD600 of 1.0 at 37°C, the SpTrpAB cultures were transferred to 4°C and, after 1 h, to 18°C. After a subsequent 15 min incubation, the cultures were induced with 0.5 mM IPTG and incubated at 18°C overnight to produce the native protein. SeMet-labeled FtTrpAB and native SpTrpAB were purified using the procedure described previously (Kim et al., 2004). The harvested cells were thawed and 1 mg ml−1 lysozyme was added. This mixture was kept on ice for 20 min with gentle shaking and was then sonicated. The lysate was clarified by centrifugation at 36,000g for 1 h and filtered through a 0.45 µm membrane.
The clarified lysate was applied onto a 5 ml nickel HisTrap HP column (GE Healthcare Life Sciences) and the His6-tagged protein was released with elution buffer (500 mM NaCl, 5% glycerol, 50 mM HEPES pH 8.0, 250 mM imidazole, 10 mM β-mercaptoethanol). This was followed by a buffer-exchange step using a customized desalting column (Sephadex G-25 Fine XK 26/20, GE Healthcare Life Sciences) equilibrated with buffer consisting of 20 mM Tris-HCl pH 7.5, 500 mM NaCl, 2 mM DTT. All of these steps were performed using an ÄKTAxpress system (GE Healthcare Life Sciences). The fusion tag was removed by treatment with recombinant His7-tagged Tobacco etch virus (TEV) protease. Nickel-affinity chromatography was used to remove the His6 tag, uncut protein and His7-tagged TEV protease (Blommel & Fox, 2007). The SpTrpAB ortholog was subjected to an extra purification step via size-exclusion chromatography on a Superdex 200 HiLoad 26/60 column (GE Healthcare Life Sciences) in crystallization buffer (200 mM NaCl, 20 mM HEPES pH 8.0, 2 mM DTT). The FtTrpAB protein was dialyzed against crystallization buffer consisting of 250 mM NaCl, 20 mM HEPES pH 8.0, 2 mM dithiothreitol (DTT), and the proteins were then concentrated to 68 mg ml−1 (FtTrpAB) and 33.6 mg ml−1 (SpTrpAB) using an Amicon Ultra centrifugal filter device with a 10,000 molecular-weight cutoff (Millipore, Billerica, Massachusetts, USA), flash-cooled in liquid nitrogen and stored at −80°C. The TrpAB protein concentration was determined spectrophotometrically by measuring the absorbance at 280 nm on a NanoDrop ND-1000 spectrophotometer (Thermo Scientific) against buffer containing an equimolar concentration of PLP. The concentration was calculated using extinction coefficients of 34,185 and 39,435 M−1 cm−1, respectively, computed from the amino-acid sequence. Expression of TrpA and purification for crystallization. An LB medium starter culture was supplemented with 40 mM K2HPO4 and ampicillin (150 µg ml−1) for LpPhTrpA and LpPaTrpA, grown and shaken overnight at 37°C and 200 rev min−1. The starter cultures were used to inoculate 1 l of enriched M9 medium for large-scale SeMet-labeled protein production, which was carried out as described above. From each litre of cell culture, 8 g of cell pellet containing SeMet-labeled LpPhTrpA or LpPaTrpA protein was obtained and was consequently resuspended in lysis buffer and stored at −80°C. SeMet-labeled LpPhTrpA and LpPaTrpA were purified in the same manner as SeMet-labeled FtTrpAB. However, instead of dialyzing these proteins against crystallization buffer, they were buffer-exchanged using an Amicon Ultra centrifugal filter device with a 10,000 molecular-weight cutoff (Millipore, Billerica, Massachusetts, USA) into 250 mM NaCl, 20 mM HEPES pH 8.0, 2 mM DTT, flash-cooled in liquid nitrogen and stored at −80°C. Protein concentrations were also determined with a NanoDrop ND-1000 using extinction coefficients of 24,870 and 23,505 M−1 cm−1, respectively, computed from the amino-acid sequence. Expression and purification for enzymatic assays. For each ortholog, a starter culture was grown overnight at 37°C and 200 rev min−1 in LB medium with ampicillin (100 µg ml−1) and kanamycin (30 µg ml−1) and supplemented with 40 mM K2HPO4. The following morning, 4 l of LB-PO4-glucose (2 g per litre) medium with antibiotics was inoculated with 30 ml of the overnight culture and was grown at 37°C and 200 rev min−1.
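As a concrete illustration of the A280-based concentration determination described above, the sketch below applies the Beer-Lambert law, c = A/(εl), with the extinction coefficients quoted in the text; the absorbance reading and path length are assumed example values, not measurements from this work.

```python
# Minimal sketch: protein concentration from A280 via the Beer-Lambert law,
# as in the NanoDrop measurement described above (blank-corrected against a
# PLP-containing buffer). Extinction coefficients are those quoted in the
# text; the path length and example absorbance are assumed for illustration.

EXT_COEFF = {"FtTrpAB": 34185.0, "SpTrpAB": 39435.0}  # M^-1 cm^-1, from sequence

def molar_conc(a280: float, epsilon: float, path_cm: float = 1.0) -> float:
    """Beer-Lambert: c = A / (epsilon * l), returned in mol/l."""
    return a280 / (epsilon * path_cm)

# Example: an assumed blank-corrected A280 of 0.85 for FtTrpAB
c = molar_conc(0.85, EXT_COEFF["FtTrpAB"])
print(f"FtTrpAB ~ {c * 1e6:.1f} uM")
```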
After reaching an OD600 of 1.0 the cultures were transferred to 4°C to cool, and after 1 h the temperature was increased to 18°C. After 15 min, protein expression was induced with 0.5 mM IPTG. The cells were incubated at 18°C overnight. The harvested cells containing TrpAB were resuspended in lysis buffer [500 mM NaCl, 5%(w/v) glycerol, 50 mM HEPES pH 8.0, 20 mM imidazole, 10 mM β-mercaptoethanol, protease inhibitor (one tablet per 50 ml of extract), 1 mM PLP] and stored at −80°C. All three native proteins were purified using the procedure described above for FtTrpAB. The samples were concentrated to 40 mg ml−1 (LpPhTrpAB), 40 mg ml−1 (SpTrpAB) and 140 mg ml−1 (FtTrpAB), flash-cooled in liquid nitrogen in 35 µl droplets and subsequently used in enzymatic assays. MtTrpAB was purified as described previously (Wellington et al., 2017). Crystallization. The FtTrpAB and SpTrpAB proteins were crystallized using sitting-drop vapor diffusion at 16 and 24°C, respectively, in a CrystalQuick 96-well round-bottom plate (Greiner Bio-One North America, Monroe, North Carolina, USA). A 400 nl droplet of the protein (35 or 34 mg ml−1) with 1 mM PLP and 1 mM L-Ser (FtTrpAB) or 0.5 mM PLP (SpTrpAB) was mixed with a 200 nl or 400 nl droplet of crystallization reagent and allowed to equilibrate against 135 µl of crystallization reagent. The nanopipetting was performed using a Mosquito nanolitre liquid-handling system (TTP Labtech, Cambridge, Massachusetts, USA). The plates were then incubated within a RoboIncubator automated plate-storage system (Rigaku). Automated crystal visualization (Minstrel III, Rigaku) was utilized to locate several crystals. The best crystals of SeMet-labeled FtTrpAB were obtained from 0.2 M calcium acetate, 0.1 M imidazole-HCl pH 8.0, 10%(w/v) PEG 8000. The SpTrpAB crystals grew from 0.2 M ammonium acetate, 0.1 M Tris-HCl pH 8.5, 25% PEG 3350. LpPhTrpA (at 25 mg ml−1) and LpPaTrpA (at 62.5 mg ml−1) were screened in the same manner, but without the addition of extra ligands, using a droplet consisting of 400 nl protein solution and 400 nl crystallization reagent that was allowed to equilibrate over 135 µl of the respective reservoir condition. The proteins were screened against the MCSG 1-4 screens (Microlytic) and the Index screen (Hampton Research) at 16°C. The best crystals of SeMet-labeled LpPhTrpA were obtained from 0.01 M sodium citrate, 33%(w/v) PEG 6000. The SeMet-labeled LpPaTrpA crystals grew from 0.2 M sodium chloride, 0.1 M bis-Tris pH 6.5, 25%(w/v) PEG 3350. Data collection. The crystals were cryoprotected in their respective mother liquors supplemented with 10% (SpTrpAB, LpPhTrpA and LpPaTrpA) or 25% (FtTrpAB) glycerol and were subsequently flash-cooled in liquid nitrogen. X-ray diffraction data were collected on the Structural Biology Center 19-ID beamline at the Advanced Photon Source, Argonne National Laboratory. The images were recorded on an ADSC Q315r detector. The data sets were processed with the HKL-3000 suite (Minor et al., 2006). Intensities were converted to structure-factor amplitudes in the CTRUNCATE program (French & Wilson, 1978; Padilla & Yeates, 2003) from the CCP4 package (Winn et al., 2011). The data-collection and processing statistics are given in Table 1. Structure solution and refinement. The SpTrpAB structure was solved by molecular replacement in Phaser (McCoy, 2007)
using a model from the Center for Structural Genomics of Infectious Diseases (unpublished work). [Table 1 footnotes: † As defined by Karplus & Diederichs (2012). § R = Σ_hkl ||F_obs| − |F_calc|| / Σ_hkl |F_obs| for all reflections, where F_obs and F_calc are the observed and calculated structure factors, respectively; R_free is calculated analogously for the test reflections, which were randomly selected and excluded from the refinement. ¶ As defined by MolProbity (Chen et al., 2010).] The initial model was autobuilt in PHENIX (Adams et al., 2013) and was further improved by manual correction in Coot (Emsley & Cowtan, 2004) and crystallographic refinement in PHENIX (Afonine et al., 2012). The FtTrpAB, LpPhTrpA and LpPaTrpA structures were solved by the SAD method using selenium absorption peak data in SHARP (Vonrhein et al., 2007) or HKL-3000 (for LpTrpA; Minor et al., 2006) and were autobuilt in Buccaneer (Cowtan, 2006). The final model was obtained using alternating manual rebuilding in Coot and maximum-likelihood refinement in PHENIX (Afonine et al., 2012). The refinement statistics are given in Table 1. Preparation of material for kinetic assays. Prior to kinetic and/or biophysical characterization, MtTrpAB was dialyzed for 2-4 h in TrpAB buffer (20 mM HEPES pH 8.0, 100 mM KCl, 1 mM TCEP, 40 µM PLP) to remove glycerol. After dialysis for 2-4 h, the buffer was exchanged with fresh buffer and dialysis continued overnight. The three other orthologs, however, were stored in 20 mM HEPES pH 8.0, 200 mM NaCl, 2 mM DTT buffer containing no glycerol after purification and did not require dialysis before use. The compounds F9, F6 and IPP were custom-synthesized by GVK Bio (Cambridge, Massachusetts, USA). The MtTrpAB inhibitor BRD4592 was synthesized internally at the Broad Institute as described previously (Wellington et al., 2017). Measurement of enzyme kinetics by UV absorption. Enzyme kinetics for each ortholog were determined over 30 min under saturating substrate conditions (200 µM indole and 60 mM L-Ser) in 1 ml TrpAB buffer. An Agilent Technologies Cary 400 Series UV-Vis spectrophotometer set to 290 nm was used for UV absorption measurements. A baseline reading with no enzyme was established, after which enzyme was added every 2 min to give a final concentration range from 50 nM to 2.4 µM. Product progress curves were determined at appropriate enzyme concentrations over a 10 min period in which product generation was linear to determine the Km and kcat parameters. A value of Δε = 1890 M−1 cm−1 was used for the indole to L-Trp conversion. In all cases, these enzymes were studied at room temperature (22°C). These experiments were performed on triplicate test occasions with triplicate replicates in each case. LC-MS assay. For the liquid chromatography-mass spectrometry (LC-MS) assay, all reagents were prepared in a 96-well plate with a final reaction volume of 50 µl. Compound IC50 reactions were run at substrate Km conditions (10 µM indole, 20 mM L-serine). Compound concentrations ranged from 0 to 200 µM. 10× Km substrate solutions were prepared, with 5 µl additions of both indole and serine solutions to the wells. The final concentrations of each protein were as follows: 100 nM SpTrpAB, 5 nM FtTrpAB, 600 nM LpPhTrpAB and 100 nM MtTrpAB, prepared in TrpAB buffer. Standard curves for L-Trp and indole were included with each mass-spectrometry experiment for quantification purposes only. An L-Ser standard curve was also included as a biological check for each ortholog.
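To make the UV-based kinetics workflow above concrete, the following is a minimal sketch: the slope of an A290 progress curve is converted to a rate with Δε = 1890 M−1 cm−1, and initial rates at several substrate concentrations are fit to the Michaelis-Menten equation. The substrate and rate arrays, and the 100 nM enzyme concentration, are invented placeholders rather than data from this work.

```python
# Hedged sketch of the UV-based rate calculation described above.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

DELTA_EPS = 1890.0  # M^-1 cm^-1 for the indole -> L-Trp conversion (from text)

def rate_from_progress(t_s, a290, path_cm=1.0):
    """Initial rate (M/s) from the linear part of an A290 progress curve."""
    slope = linregress(t_s, a290).slope          # absorbance units per second
    return slope / (DELTA_EPS * path_cm)         # Beer-Lambert conversion

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# Placeholder substrate series (M) and initial rates (M/s) for illustration
s = np.array([2e-6, 5e-6, 1e-5, 2e-5, 5e-5, 2e-4])
v = np.array([0.8e-8, 1.6e-8, 2.4e-8, 3.2e-8, 4.0e-8, 4.6e-8])
(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(5e-8, 1e-5))
kcat = vmax / 100e-9                             # assumed 100 nM enzyme
print(f"Km ~ {km*1e6:.1f} uM, kcat ~ {kcat*60:.1f} min^-1")
```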
Final L-Ser standard curve concentrations included 48, 24, 12, 6, 3, 1.5, 0.75 and 0 mM at saturating (500 µM) indole (5× solution at 2.5 mM indole with 10 µl additions). After all compound, substrate and standard curve solutions had been prepared, 30 µl of a 1.67× protein solution was added to each well to start the reaction. After mixing and allowing 10 min incubation at room temperature, the reactions were quenched using 150 µl of 0.1% formic acid in methanol, followed by storage at 4°C for at least 2 h. The sample plates were then centrifuged for 15 min at 3900 rev min−1 (≈3061g) and an aliquot of the supernatant was diluted 1:10 with water. 3.75 µl of this final solution was injected and analyzed. L-Trp and indole were detected by UPLC-MS (Waters, Milford, Massachusetts, USA). Compounds were quantified by selected ion recording (SIR) on an SQ mass spectrometer by negative electrospray ionization. The SIR method was set for L-Trp at 203.4 m/z and for indole at 116.3 m/z. Mobile phase A consisted of 0.1% ammonium hydroxide in water, while mobile phase B consisted of 0.1% ammonium hydroxide in acetonitrile. The gradient ran from 2% to 95% mobile phase B over 2.65 min at 0.9 ml min−1. An Acquity BEH C18, 1.7 µm, 2.1 × 50 mm column was used with the column temperature maintained at 65°C. Data analysis. Kinetic experiments were run in triplicate and the reported values represent the average of at least three independent experiments. Km, kcat and IC50 data were plotted using GraphPad Prism 7.0 and Origin 8.0. Protein preparation. The recombinant tryptophan synthases from the pathogenic bacteria F. tularensis, S. pneumoniae and L. pneumophila Philadelphia have been produced for detailed characterization and comparison with the previously studied enzymes from S. typhimurium, E. coli and M. tuberculosis (Wellington et al., 2017). The level of pairwise sequence identity between the TrpBs from these organisms ranges from 51% to 59%, with the exception of the FtTrpB/StTrpB pair, which show 81% conserved residues. The TrpAs are more variable, with only 25-33% sequence identity for most pairs and 50% for the FtTrpA/StTrpA pair (Table 2). The TrpA (FtTrpAB and LpPhTrpAB) or TrpB (SpTrpAB) subunits were equipped with an N-terminal His6 tag, which was subsequently removed by treatment with TEV protease. The resulting proteins carry an additional three N-terminal residues (SNA) on the tagged subunit. In addition to TrpABs, TrpAs from the L. pneumophila strains Paris and Philadelphia (LpPaTrpA and LpPhTrpA, respectively; 99% identical) have been produced for crystallographic studies, also with a removable N-terminal His6 tag. FtTrpAB and LpPhTrpA were produced as SeMet-labeled derivatives, while all other proteins were expressed in the native form. The purified proteins were at least 90% pure as judged by PAGE. Structure determination. The SpTrpAB protein was crystallized in space group P21 with the entire heterotetramer present in the asymmetric unit (Fig. 1, Table 1). The structure, which was determined at 2.45 Å resolution, was solved by molecular replacement. In chains A and C, corresponding to TrpA (amino-acid residues 1-258), residues 1, 180-189 and 182-187, respectively, were not modeled owing to a lack of interpretable electron density. Similarly, in TrpB (amino-acid residues 4-407) the N-terminal SNA sequence and the C-terminal end (residues 403-407) are not present in the respective chains B and D. The other ortholog, FtTrpAB, crystallized in space group C2221 and the asymmetric unit contains only one αβ module.
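The IC50 values reported in this work were obtained in GraphPad Prism and Origin; a comparable calculation can be sketched as a four-parameter logistic dose-response fit. The version below, with an invented concentration-response series, illustrates the fit but is not the authors' analysis script.

```python
# Sketch of a four-parameter logistic IC50 fit of the kind produced in
# Prism/Origin for the LC-MS inhibition data. All data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (activity vs inhibitor)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100])   # uM, invented
act = np.array([0.98, 0.97, 0.93, 0.80, 0.55, 0.30, 0.12, 0.05, 0.03])

p0 = (0.0, 1.0, 1.0, 1.0)                                    # initial guesses
(bottom, top, ic50, hill), _ = curve_fit(four_pl, conc, act, p0=p0)
print(f"IC50 ~ {ic50:.2f} uM (Hill ~ {hill:.2f})")
```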
This structure was solved by experimental SAD phasing and was refined to 2.80 Å resolution. In FtTrpAB, TrpA (chain A; residues 1-269) lacks the N-terminal SNA sequence and residues 183-191, while in TrpB (chain B; residues 1-396) the C-terminal residue is not present. For L. pneumophila only the TrpA subunit could be crystallized. The LpPaTrpA and LpPhTrpA structures were determined by experimental SAD phasing at 1.91 and 2.02 Å resolution, respectively. The LpPhTrpA protein crystallized in the orthorhombic space group P212121. The asymmetric unit contains one molecule of TrpA and the model lacks the N-terminal SNA residues, residues 57-59, residues 180-186 and the C-terminal residue 272. LpPaTrpA also crystallized in space group P212121 with one chain in the asymmetric unit. The N-terminal SN residues and residues 180-187 and 270-273 are missing from the final model. Kinetic characterization. Simultaneously with structural characterization, we performed kinetic analyses of the three new orthologs (FtTrpAB, SpTrpAB and LpPhTrpAB) and compared them with the MtTrpAB reference. A UV-based assay was used to measure the production of L-Trp from indole and L-Ser. Firstly, the enzyme concentration versus catalytic rate relationship was determined to identify the linear rate dependencies. Both the SpTrpAB and FtTrpAB enzymes displayed specific activities that were comparable to (SpTrpAB, 1.4 µM L-Trp s−1 µM−1 enzyme) or higher than (FtTrpAB, 26 µM L-Trp s−1 µM−1 enzyme) that of MtTrpAB (2.0 µM L-Trp s−1 µM−1 enzyme), with the rate being linearly dependent on enzyme concentration over the entire tested range. The LpPhTrpAB enzyme, however, was less active than the MtTrpAB enzyme, displaying a biphasic dependency with both components appearing to be linear. The specific activity at low enzyme concentrations (50-800 nM) was much lower (0.17 µM L-Trp s−1 µM−1 enzyme), while the higher concentration range (1000-2400 nM) displayed an improved but still significantly lower specific activity (0.38 µM L-Trp s−1 µM−1 enzyme) (Fig. 4). The source of this higher-order effect is not obvious, but could be explained by an equilibrium between subunits, dimers and tetramers, with higher protein concentrations favoring the more active oligomeric state. We have observed such an equilibrium for the MtTrpAB enzyme (Wellington et al., 2017). The specific activity order is as follows: FtTrpAB >> MtTrpAB, SpTrpAB >> LpPhTrpAB. These data were used to set the appropriate enzyme concentrations (5 nM FtTrpAB, 100 nM MtTrpAB, 100 nM SpTrpAB and 600 nM LpPhTrpAB), resulting in linear L-Trp production progress curves over a 10 min reaction period, to determine the apparent Km and kcat parameters using the LC-MS assay. The apparent Km values are similar across all of the species for both substrates tested (indole and L-Ser). The kcat values were reproducible across experiment replicates and substrates, suggesting that saturation was achieved for the independent substrate in each case. The absolute kcat values were consistent with the specific activities described above, following the activity order FtTrpAB >> MtTrpAB, SpTrpAB >> LpPhTrpAB (Fig. 5). Inhibition studies. In addition, the three TrpAB orthologs were profiled against the reported commercially available inhibitors F9 and F6 [...] and IPP (indolepropanol phosphate; CID identifier 3713), as well as the recently discovered MtTrpAB inhibitor BRD4592 (CID identifier 54650477; Wellington et al., 2017) (Fig. 6).
Table 2. Primary structure identity and structural similarity between orthologous TrpA and TrpB subunits. The first number corresponds to the percentage sequence identity (calculated in EMBOSS Needle; Rice et al., 2000), followed by the r.m.s.d. (in Å) for Cα-atom superposition for the number of pairs given in parentheses (calculated in CCP4; Winn et al., 2011; Krissinel & Henrick, 2004).
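A minimal sketch of the enzyme-titration analysis described above: rate versus enzyme concentration is fit linearly, the slope giving the specific activity, and fitting the low and high concentration ranges separately exposes a biphasic dependency of the LpPhTrpAB kind. All numbers are illustrative, not the measured values.

```python
# Sketch: specific activity from a rate-versus-enzyme titration.
import numpy as np

def specific_activity(enzyme_uM, rate_uM_s):
    """Slope of a least-squares linear fit: uM L-Trp s^-1 per uM enzyme."""
    slope, _intercept = np.polyfit(enzyme_uM, rate_uM_s, 1)
    return slope

e = np.array([0.05, 0.2, 0.4, 0.8, 1.0, 1.6, 2.4])             # uM enzyme
v = np.array([0.009, 0.034, 0.068, 0.14, 0.38, 0.61, 0.91])    # uM/s, invented

low, high = e <= 0.8, e >= 1.0                                 # two linear regimes
print(f"low range:  {specific_activity(e[low], v[low]):.2f} s^-1")
print(f"high range: {specific_activity(e[high], v[high]):.2f} s^-1")
```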
The LC-MS-based assays examined inhibition of the β reaction with indole and L-Ser as substrates. F9 was found to be a potent inhibitor (IC50 = 114 nM) of FtTrpAB under substrate Km conditions (10 µM indole, 20 mM L-Ser), while only slightly inhibiting LpPhTrpAB. Interestingly, F9 appears to be an activator of SpTrpAB (Fig. 6). [...] observed for the FtTrpAB enzyme, with IC50 = 1.46 µM for F6 and IC50 = 0.08 µM for IPP. A different profile was seen when using the MtTrpAB inhibitor BRD4592. All three orthologs are slightly inhibited; however, a measurable IC50 was only obtained for the SpTrpAB ortholog (IC50 = 21 µM) (Fig. 6). Structural comparison with other TrpAB orthologs. We have determined the structures of the FtTrpAB and SpTrpAB heterotetramers and of the α subunits LpPaTrpA and LpPhTrpA. The overall structures of the complexes, along with the subunits, are essentially identical to those of the orthologs characterized previously, with the heterotetramer representing the complete functional unit (Fig. 1). Despite the rather low sequence identity of the TrpAs, the three polypeptides superpose with r.m.s.d.s of 1.4-1.9 Å amongst themselves and with the orthologs MtTrpA or StTrpA (Table 2, Fig. 7). The enzyme from F. tularensis, which is the most closely related to StTrpAB, shows even better agreement, with an r.m.s.d. of 0.8 Å for corresponding StTrpA Cα atoms. A similar pattern is observed for the TrpBs, which overlap with r.m.s.d.s of 0.7-1.0 Å. As expected in the absence of any TrpA ligand, the subunit adopts an open conformation with a disordered loop L6, regardless of whether the subunit is complexed with TrpB or alone. In isolated LpPhTrpA, parts of loop L2 could not be modeled, indicating its high flexibility. The TrpA binding pocket and these critical loops are generally well conserved in terms of composition, including the catalytic residues, one of which is provided by loop L2. One important feature, although only noted at the sequence level owing to disorder, is the lack of conservation in the N-terminal region of loop L6. In the Salmonella enzyme this section carries Arg179, which has been shown to provide loop stabilization via hydrogen bonds between the guanidinium group and the main-chain atoms (Schneider et al., 1998). With the exception of FtTrpA, this residue is replaced by much smaller and in some cases hydrophobic residues, Ile in SpTrpA, Leu in LpTrpA and Thr in MtTrpA, which cannot form interactions equivalent to those of Arg179. It has previously been shown that an Arg179Leu mutation reduces the affinity of the substrate IGP for StTrpA and slows the TrpAB reaction (Brzović et al., 1993). It is not clear that this is a valid assumption for the other orthologs; however, MtTrpAB indeed has a higher Km for IGP than StTrpA. In addition, it is also consistent with the relative rank order of specific activities observed across this panel of TrpAB orthologs, although only in the context of the αβ reaction.
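The Cα r.m.s.d. values quoted above and in Table 2 come from rigid-body superposition; a minimal Kabsch-algorithm sketch is shown below, operating on random stand-in coordinates rather than the deposited structures, which in practice would be the matched Cα pairs of two aligned models.

```python
# Minimal Kabsch superposition: optimally rotate one set of matched Calpha
# coordinates onto another, then report the root-mean-square deviation.
import numpy as np

def kabsch_rmsd(p: np.ndarray, q: np.ndarray) -> float:
    """RMSD of p onto q after optimal rigid superposition (both Nx3)."""
    p = p - p.mean(axis=0)                        # center both coordinate sets
    q = q - q.mean(axis=0)
    u, _s, vt = np.linalg.svd(p.T @ q)            # SVD of the covariance matrix
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflection
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T     # optimal rotation matrix
    diff = (rot @ p.T).T - q
    return float(np.sqrt((diff ** 2).sum() / len(p)))

rng = np.random.default_rng(0)
ca1 = rng.normal(size=(250, 3))                   # stand-in Calpha coordinates
ca2 = ca1 + rng.normal(scale=0.5, size=(250, 3))  # a perturbed copy
print(f"r.m.s.d. = {kabsch_rmsd(ca1, ca2):.2f} (arbitrary units)")
```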
Within the ordered fragments of the TrpA pocket, some sequence variability is observed at the positions of Pro129Sp (the equivalent residues are Pro135Mt, Ala130Ft, Ala129St and Val129Lp), Met100Sp (Met100Lp and Met106Mt, but Leu101Ft and Leu100St) and Tyr23Sp (replaced by Phe in FtTrpA, LpTrpA and StTrpA). Notably, though, despite the good superposition of the main-chain atoms throughout most of the subunit, the side chains adopt slightly different conformations (Fig. 7). The most pronounced discrepancy is observed for Phe212Sp, a residue that T-stacks against the aromatic ring of indole in the ligand-bound StTrpA state (Weyand & Schlichting, 1999). The position of this residue is affected by the mobile L6 loop in the substrate-bound closed state, which reinforces the proper placement of the Phe side chain with respect to the substrate moiety. Without such constraints, in SpTrpA, as well as in LpTrpA, it points somewhat outside of the binding pocket towards the helical layer of the protein. In FtTrpA it is oriented more towards the cavity, but its position is still only halfway to the state achieved in the substrate-bound complex (Fig. 7). Interestingly, this residue is replaced by Leu218 in the MtTrpA ortholog, where it also swings outside the binding pocket. The catalytic Glu52Sp and its equivalents in other orthologs also display some conformational diversity; in some cases, such as FtTrpA or StTrpA, it points towards the protein core, while in others (SpTrpA and MtTrpA) it faces the binding pocket. There are no apparent structural differences between TrpA in the TrpAB complex versus TrpA alone. The only exception is a slight movement of loop L2 towards the active site of TrpA in the heterodimer unit. In our FtTrpAB and SpTrpAB structures the β subunits exist in the open conformation, or more precisely in the expanded open conformation (eO) reported previously for several StTrpAB structures [PDB entries 2j9z, 1qoq (Weyand & Schlichting, 1999) and 1kfb (Kulik et al., 2002)], the P. furiosus ortholog [PDB entries 5e0k (Buller et al., 2015) and 1wdw (Lee et al., 2005)] and MtTrpAB (PDB entry 5tcf; Wellington et al., 2017), suggesting that this state may be more common than previously indicated. The active site carries a PLP moiety covalently attached to Lys91Sp (Lys86Ft, Lys101Mt). The active site is very conserved both in terms of sequence and the conformation of the PLP cofactor and side chains, with a few exceptions. FtTrpB and SpTrpB share an Ala with StTrpB (Ala84, Ala89 and Ala85, respectively), but MtTrpB has an equivalent Ser99 that makes a direct hydrogen bond to PLP. This interaction is missing in the other three orthologs. Thr87 is present in SpTrpB (and Thr97 in MtTrpB), which is replaced by glycine in FtTrpB and StTrpB. There is no obvious role for this substitution. Two important catalytic residues, a threonine (Thr114Sp, Thr109Ft, Thr124Mt and Thr110St) and an aspartic acid (Asp310Sp, Asp304Ft, Asp319Mt and Asp305St), show very different conformational behavior in the open state of the β-subunit orthologs. The threonine, which is involved in coordination of the substrate/product carboxylate, shows nearly the same conformation in all four orthologs, while the conformations of the aspartic acid, which is involved in interaction with the amino group of the reagents, are very different. Larger conformational diversity is also observed for Gln118, a residue that is conserved in all four enzymes. However, only in MtTrpB does this residue form a direct hydrogen bond to O3 of the PLP cofactor.
The side chains of a few other residues (Gln89, Ser234 and Lys381 in FtTrpB) also show somewhat different conformations, but these are much less pronounced. The phosphate group of PLP is anchored by interaction with the N-terminal dipole of helix H9, direct hydrogen bonds to several main-chain amino groups (helix H9 and a short loop between S7 and H9) and three conserved side chains (His85, Ser234 and Asn235 in FtTrpB and His90, Ser240 and Asn241 in SpTrpB). These small changes in sequence and conformational propensity may explain the differences in substrate affinities and reaction rates.
Figure 7. Comparison of TrpAB orthologs. (a) Superposition of SpTrpB (yellow, TrpA, chain C; coral/cyan, TrpB, chain D) with FtTrpB (blue), MtTrpB (purple; chains A and B; PDB entry 5tcf; Wellington et al., 2017) and StTrpB (gray; PDB entry 1bks; Rhee et al., 1996). PLP from SpTrpAB is shown in a sphere representation. TrpA is shown to indicate the mutual orientation of the subunits. (b) Superposition of TrpA extracted from the TrpAB heterodimers. (c) Stereoview of the TrpA active-site superposition of SpTrpA (yellow), FtTrpA (blue) and StTrpA in complex with IPP (gray; PDB entry 1qop; Weyand & Schlichting, 1999).
The structures of the FtTrpAB and SpTrpAB heterotetramers provide a new set of high-quality models and enable comparison of the intermolecular tunnel connecting the TrpA and TrpB catalytic pockets. In contrast to the active sites, the composition of the tunnel, which is mostly encompassed by TrpB, varies between the orthologs (Fig. 8), although generally SpTrpAB shares some features with MtTrpAB while FtTrpAB is similar to StTrpAB. This is consistent with the relative specific activities and the conservation of local primary sequence. The cross-comparisons indicate a number of differences. For example, one side of the SpTrpB tunnel contains Tyr311, His285 and the neighboring Leu284, with the tyrosine rotated towards the active site of TrpB, where it could potentially interfere with the reaction. The opposite side contributes Val174, Leu178 and Leu192. In FtTrpB all of the former residues are replaced by phenylalanines (Phe305, Phe279 and Phe278, respectively), while the leucines are conserved and Val174Sp is replaced by Cys169Ft. A similar scenario is present in StTrpB (Phe306 and Phe280), with the exception of Tyr279St, which substitutes for Phe278Ft. In MtTrpB the equivalent residues are Tyr320, His294 and Phe293, resembling the SpTrpB composition, but in this case the tyrosine ring points in a different direction, making a hydrogen bond to His294. Such an arrangement would be more constrained in SpTrpB owing to the proximity of Leu196, a residue that is substituted by a much smaller Ala in the other enzymes. MtTrpB also contains phenylalanines (Phe188 and Phe202) instead of the leucines that are conserved in the three other TrpBs, and Ile184Mt takes the place of Val174Sp. Previous data for the StTrpB ortholog showed that large side chains, such as Phe or Trp, in this position hamper indole channeling (Anderson et al., 1995; Schlichting et al., 1994; Weyand & Schlichting, 2000). Therefore, it appears that these variations in the residues composing the tunnel may have a direct impact on the rate of indole transfer and influence the kinetic activities of these enzymes. This may represent a fine-tuning of the enzyme activity without directly involving the residues in the catalytic sites.
Generally, the tunnel displays some level of flexibility and can adapt to enable indole translocation or to specifically bind certain inhibitors. For instance, we showed previously that in MtTrpAB Phe188 changes conformation to accommodate BRD4592 (Wellington et al., 2017) in both the open and closed states of the β subunit, while in StTrpAB Phe280 and Tyr279 swing away to provide space for the F6 molecule (Hilario et al., 2016) in the open state (Fig. 8). The latter work also proposed that the indole moiety enters TrpB in the vicinity of Leu21St (conserved as Leu24Sp, Leu34Mt and Leu20Ft), Leu174St and Phe280St, which need to move to open up a farther segment of the channel that is lined with residues that do not present major obvious obstacles. In principle, an analogous mechanism can be envisioned for the very similar enzyme from F. tularensis. In the other two orthologs alternative mechanisms are most likely to exist. In the SpTrpB/MtTrpB structures, in which Phe280St is replaced by a histidine, this residue adopts a conformation that is compatible with an open channel in both the βO (SpTrpB/MtTrpB) and βC (MtTrpB) states. Moreover, in MtTrpB such an architecture is stabilized by a hydrogen bond to Tyr320Mt (in βO and βC) and another to Asn185Mt (in the Mt βC state), suggesting that it represents the most common conformational state. An analogous interaction with asparagine might be created in SpTrpB upon β-subunit closure, while His-Tyr bonding would require the concomitant movement of Tyr311Sp and Leu196Sp. This coordinated movement is potentially a necessary step for the COMM-domain shift and TrpB closure, as otherwise Leu170Sp would clash with Tyr311Sp. On the other hand, the mycobacterial enzyme may need to undergo a different adjustment on the opposite side of the tunnel. Here, there are two bulkier phenylalanine residues, Phe188 and Phe202. In both cases these residues appear to be mobile, as in some structures of MtTrpAB Phe202 exists in double conformations, while Phe188 has been shown to rotate in the complex with the BRD4592 inhibitor. However, for Phe188 in this alternative state the access from the α subunit is blocked; thus, it is possible that the ligand-free conformation of Phe188 corresponds to the open-tunnel state with only a minor adjustment required. Allosteric contacts. Previous investigations of allosteric communication between the TrpAB subunits recognized a number of key interactions at the αβ interface that transmit activation signals. One of them is the main-chain-main-chain hydrogen bond between Ser178 and Gly181 in StTrpAB (Spyrakis et al., 2006; Schneider et al., 1998). The former residue is preserved in FtTrpB; however, the other two orthologs contain valine. On the other hand, the glycine residue (Gly181Sp, Gly182Ft and Gly187Mt) belongs to the highly conserved GVTG motif of the L6 loop. In the S. typhimurium TrpA αC state the conserved threonine residue from this motif, Thr183, binds through its hydroxyl group to the carboxylate of the catalytic Asp60 (Asp61Sp, Asp63Ft and Asp68Mt), in addition to the main-chain-main-chain interaction with the L2 loop. Deletions or point mutations within the L6 loop, such as Thr183Ala in StTrpA, dramatically reduce the α-subunit activity (Yang & Miles, 1992). Similar modifications in the L2 loop, including changes to Pro57St (Pro60Sp, Pro58Ft and Pro65Mt) and Asp56St (Asp59Sp, Asp57Ft and Asp64Mt), reduce TrpA activity, although significant effects only occur in the context of the TrpAB complex, i.e.
not when the α subunit alone is assayed (Ogasahara et al., 1992; Rowlett et al., 1998). In the available open and closed states of the mycobacterial enzyme, the side chain of Asp64Mt (the main chain of Ser63 in the eO state) interacts with Lys181 from the COMM domain, while the carbonyl group of Asp68 binds to Arg189 in some of the subunits, as seen before in the StTrpA ortholog (Weyand & Schlichting, 1999). In the SpTrpAB eO state there is also a hydrogen bond between the Ser58 carbonyl group and Lys171, but Arg179 is too distant to interact with the catalytic aspartate. None of these contacts is observed in the reported FtTrpAB structure, owing either to disorder or to longer distances between the relevant atoms. Overall, the available data suggest that the geometry and contacts established by loops L6 and L2 have a pronounced effect on the enzyme activity. Transition from αO to αC triggers the closure of L6, which, together with the L2 and H6 elements, activates the catalytic aspartate residue. Changes in these elements or in their neighborhood possibly lock L6 into a low-activity open state (Spyrakis et al., 2006), thus preventing the proper positioning of the catalytic aspartic acid. Simultaneously with the α-subunit malfunction, destabilization of the L2-H6 interactions in mutants reduces the β-subunit activity (Ogasahara et al., 1992), with the detrimental effect partly alleviated by cation binding. Monovalent cations have been shown to stabilize the StTrpAB enzyme, with large cations (Cs+ and NH4+) exhibiting the most pronounced effect (Rowlett et al., 1998). These effects might result from the chain of interactions linking L2 to H6 and further, via the monovalent cation-binding site (MVC), to the active site of the β subunit. The MVC is established by a set of residues localized in the proximity of the channel and the active site of TrpB, which interact with the cation through four main-chain carbonyl moieties (in S. typhimurium and M. tuberculosis) and a threonine side chain (only in M. tuberculosis, owing to the presence of Pro in the equivalent position in StTrpB). While no monovalent cations have been modeled in the current structures, by analogy to the data collected from the MtTrpAB and StTrpAB systems the MVC must be created by Tyr311Sp, Gly313Sp, Ala273Sp, Gly237Sp and Thr275Sp in SpTrpAB and by Phe305Ft, Ser307Ft, Gly267Ft and Gly231Ft in FtTrpAB, with Pro269Ft replacing the threonine residue. Depending on the size of the cation, either all residues equivalent to those in StTrpAB and MtTrpAB would be involved in cation binding, or only a subset, where the unfilled valencies in the coordination sphere may be completed by water molecules. As mentioned above, the MVC is indirectly connected to the H6 element of the COMM domain and to TrpA via either a histidine (His285Sp and His294Mt) or a phenylalanine (Phe279Ft, Phe280St), switching between hydrophobic Phe-Phe contacts (FtTrpB and StTrpB) and the well defined His-Tyr hydrogen bond seen in MtTrpB and most likely to be present in the activated form of SpTrpB. It is not clear how this different organization of the MVC and its interactions with other structural elements affects the sensitivity of the protein to different cations or how the signal transduction is affected.
Enzymatic properties. In the β-elimination reaction of TrpAB, with a kcat of between 1.7 and 78.6 min−1, all of the investigated enzymes appear to be poorer catalysts of the indole-to-tryptophan conversion than the previously studied MtTrpAB (kcat = 197 min−1; Wellington et al., 2017), EcTrpAB (348 min−1; Lane & Kirschner, 1983) and StTrpAB (288 min−1; Raboni et al., 2007), at least under the given experimental conditions: at room temperature (20-22°C) at pH 7.6-8.0 in the presence of potassium ions. Similarly, the Km for serine is at least 35 times higher for the SpTrpAB and FtTrpAB enzymes (18.3-43.2 mM) than for those previously characterized (0.37, 4.4 and 0.58 mM for EcTrpAB, MtTrpAB and StTrpAB, respectively). Interestingly, however, the Km for indole is at least approximately three times lower for all of the currently tested orthologs than those reported for MtTrpAB and StTrpAB, and is comparable to that of EcTrpAB. Inhibition. Several inhibitors have been designed to study the mechanistic details of TrpAB. A number of them are competitive indole-3-glycerol phosphate analogs that bind to subunit α, such as IPP and similar indole-3-alkyl 1-phosphates (Kirschner et al., 1975), indole-3-acetyl amino acids (Marabotti et al., 2000) or aryl compounds linked via an amide/sulfonamide/thioether/thiourea to a phosphoalkyl moiety (Sachpatzidis et al., 1999). The IC50 parameters for these inhibitors against TrpAB have not been determined, with the exception of thioether-linked substrate analogs (Sachpatzidis et al., 1999), which showed nanomolar values for the α reaction of StTrpAB. In addition to competitive inhibition of the α reaction, some of the α-binders, for example indole-3-acetyl-amino acids, IPP and F9, exert allosteric effects on subunit β (Marabotti et al., 2000). The more promiscuous ligand F6 has been found to bind not only to the active site of TrpA but also to the intersubunit tunnel, close to the active site of TrpB (Hilario et al., 2016). The influence of competitive inhibitors of TrpA on the TrpB reaction has been linked to their ability to remodel the α site, with a higher degree of ordered TrpA structure triggering more pronounced changes in TrpB. Here, we have tested the commercially available compounds IPP, F6 and F9 against the β reaction. Notably, we observed potent inhibition only for FtTrpAB, which is the most similar to the prototypical StTrpAB of all the tested enzymes. It therefore seems that the allosteric effect influencing the activity of TrpB is sensitive to local sequence variations and structural features, and consequently might be unique to a subset of orthologs. Alternatively, it is also possible that the lack of TrpB susceptibility originates directly from the poor affinity of these inhibitors for TrpA, but we have not investigated such a scenario biochemically. From a structural perspective, the TrpA active sites are similar enough to at least bind the very close substrate mimetic IPP, suggesting that the former argument for the lack of inhibition is more likely. Another explanation of these differences involves long-distance effects within and between subunits. The activation of SpTrpAB by the α-binders is unexpected and surprising. However, allosteric sites serve modulatory purposes and a single binding pocket may exert activatory or inhibitory roles. It is therefore possible that the binding of the same ligands to various TrpAB orthologs may result in opposite kinetic effects because of small sequence variations.
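For orientation, the quoted kcat and Km values can be combined into catalytic efficiencies (kcat/Km); the arithmetic sketch below only performs this unit conversion on the numbers given in the text. The pairings of the kcat and Km extremes for the new orthologs are illustrative, since the text quotes ranges rather than per-enzyme pairs.

```python
# Worked unit conversion: catalytic efficiency kcat/Km in M^-1 s^-1 from
# kcat in min^-1 and Km (L-Ser) in mM, using values quoted in the text.
enzymes = {
    "slowest new ortholog": (1.7, 18.3),    # illustrative pairing of extremes
    "fastest new ortholog": (78.6, 43.2),   # illustrative pairing of extremes
    "MtTrpAB": (197.0, 4.4),
    "StTrpAB": (288.0, 0.58),
    "EcTrpAB": (348.0, 0.37),
}
for name, (kcat_min, km_mM) in enzymes.items():
    eff = (kcat_min / 60.0) / (km_mM * 1e-3)   # (s^-1) / M = M^-1 s^-1
    print(f"{name:>22s}: kcat/Km ~ {eff:8.1f} M^-1 s^-1")
```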
In agreement with our previous work demonstrating that BRD4592 inhibition is limited to orthologs containing a glycine residue in the L2 loop of TrpA, as in the case of the MtTrpAB enzyme, no significant effect was observed for any of the tested synthases. The weak inhibition of SpTrpAB, which carries the smallest side chain among the tested enzymes (Val61 in place of Gly66 in MtTrpAB, Leu59 in FtTrpAB and Met58 in LpPhTrpAB), supports the previous conclusion that any substitution in the loop would drastically reduce the size of the BRD4592 binding pocket, limiting the inhibitor affinity. Conclusions. Tryptophan synthases have been shown to be conditionally essential enzymes in a number of important human pathogens, but the enzymes of the family have remained unexplored beyond a limited number of representatives. To broaden our perspective on TrpABs, we have purified and characterized three enzymes from L. pneumophila, F. tularensis and S. pneumoniae to uncover the potential unique features of TrpABs and to support future drug-discovery efforts. X-ray crystallography and biochemical studies show a remarkable structural conservation of the architecture and the catalytic and allosteric sites of the enzyme, suggesting preservation of the catalytic mechanism and regulation. At the same time, these enzymes display local sequence and structural differences in the catalytic, allosteric and metal-binding sites. These enzymes also exhibit differences in kinetic properties and in their response to inhibitors, yet they display some correlations between biochemical properties and sequence/structural conservation. Notably, not all enzymes were inhibited by the tested compounds. In fact, for the S. pneumoniae ortholog the β reaction was more efficient in the presence of α-binders. Some of the differences can be explained structurally; however, others may result from the altered conditions in which these enzymes operate in cellulo. Nevertheless, understanding these dissimilarities may provide a basis for the design of new species-specific tryptophan synthase inhibitors against both the α and β active sites as well as the allosteric sites, which show higher conformational and sequence variability. Recognition that the targeting of unique allosteric sites may have species-specific effects may be important for the treatment of coexisting infections.
THE POTENTIAL OF SAWDUST AND COCONUT FIBER AS SOUND-REDUCTION MATERIALS. In this study, biodegradable materials that could be utilized to reduce noise were examined. A sound absorption test was conducted with an impedance tube. Sawdust, coconut fiber, and expansive clay were used to create test samples. Noise reduction coefficient results for the sawdust and expansive clay mixture ranged from 0.24 to 0.62, while the mixture of coconut fiber and expansive clay recorded noise reduction coefficients between 0.31 and 0.58. The study findings suggest that these materials have good acoustic properties and can therefore be used as alternative noise-reduction materials. These findings have important implications for reducing environmental pollution if adopted in the development of noise-reducing materials. Introduction. In today's industrial society, noise pollution has become a major health problem facing humanity and the environment (Gheorghe, 2013). Noise is a nuisance experienced by humans as a result of machines and equipment used in everyday activities (Indrianti et al., 2016). It is also considered to be unwanted sound that is usually unpleasant, loud, or disturbing to the hearing organs and is regarded as one of the negative environmental health hazards (Vašina, 2022). Sound, in fact, is the transmission of a disturbance in a fluid or solid, most commonly in the form of a wave motion, which is initiated when an element moves the nearest particle of air, causing a pressure differential in the medium through which the wave travels (Goelzer et al., 2001). Sound propagates at different speeds depending on the medium through which it passes and the pressure differential (Breysse & Lees, 2006). Most mechanical devices, including industrial equipment, home appliances, cars, and houses, have noise and vibration associated with them (Navhi et al., 2009; Rmili et al., 2009). Noise and vibration are not always a nuisance; they can be used as the source of signals for machinery diagnostics and health monitoring (Randall, 2009; Tuma, 2009). Many workers are exposed to noise levels above the approved threshold. Negative effects of noise pollution on human health include hearing problems, physical or mental losses, annoyance, and tiredness, among others. As a result, public knowledge and education about the effects of noise pollution and how it can be minimized are needed (Abd-elfattah & Abd-Elbasseer, 2011). It is therefore necessary to eliminate or reduce excessive noise. This is done by converting the excessive mechanical energy of the oscillating motion causing the noise, and the associated acoustic energy, into other types of energy, especially heat (Vašina, 2022). At the current state of technological development, noise tends to be an inevitable problem in society. Noise being an occupational hazard, controlling and reducing it to the barest or tolerable levels for human comfort is a worthwhile challenge to be addressed using available local materials, cutting down the cost of importing conventional materials. Achieving a manageable noise level depends on the material used. Vibration isolation, partitions, sound-absorbing materials, device enclosures, and other enclosures can be used to reduce noise and vibration in mechanical systems once the sources of noise and vibration are identified (Roozen et al., 2009; Upadhyay et al., 2009).
Eliminating noise completely may not be possible, since the environment cannot be changed entirely. Continuous exposure, however, may pose health risks far beyond hearing damage. It is therefore necessary to put measures in place to reduce it to a manageable level. Depending on the type of noise and vibration generated, controlling noise and vibration waves is always a physical challenge, requiring adequate and practical control mechanisms (Tuler & Kaewunruen, 2017). To increase their efficiency, sound-absorbing materials are used in combination with barriers and within enclosures (Crocker & Arenas, 2007). Good acoustic materials should absorb and transmit more sound energy than they reflect (Doutres & Atalla, 2012). The ability of a material to absorb sound is influenced by its thickness, density, and porosity (D'Alessandro & Pispola, 2005). At low frequencies (100-2000 Hz) the material thickness has a direct influence on absorption; at high frequencies (>2000 Hz) it is largely irrelevant (Seddeq, 2009). Porosity, aggregate size, aggregate gradation, aggregate type, and specimen thickness are the key parameters that influence the sound absorption properties of porous materials (Zhang et al., 2020). The sound absorption coefficient increases as the thickness of the samples rises, according to Azkorra et al. (2015). The explanation may be that low-frequency waves have longer wavelengths, implying that thicker material leads to better absorption (Adnan & Rus, 2013). Density influences the acoustic impedance, and the impedance in turn determines the reflection at the material surface; the impedance is proportional to the density. According to Wertel (2000), a high-density material absorbs more sound because of its mass. Most of the research into the sound absorption characteristics of materials has focused on synthetic materials. Muhazeli et al. (2020) studied the sound-absorption properties of a magneto-induced (magnetorheological) foam made by adding different concentrations of carbonyl iron particles. They discovered that the introduction of a magnetic field resulted in a peak frequency shift from the middle to higher frequency ranges, which had a significant influence on sound absorption, and concluded that the magnetorheological foam could be applied as a noise-control material. Monkova et al. (2020) studied the sound absorption capabilities of 3D-printed open porous acrylonitrile butadiene styrene (ABS) samples printed with four different lattice structures, namely Cartesian, Starlit, Rhomboid, and Octagonal, and concluded that 3D-printed materials were good sound absorbers and could be used industrially as such. Liang et al. (2022) studied the use of inorganic materials, especially their fibers, in sound and noise absorption and noted that the acoustic properties of polymers were usually improved by adding fillers, using perforated structures, gradient porous structures, and multiple-layer composite structures. Chen et al. (2022) investigated the performance of aluminum silicate fibers and other materials and showed that pure aluminium silicate fibers performed very well in absorbing low-frequency noise compared with the others. They also confirmed that the thicker the material and the higher its density, the greater its noise absorption ability.
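The density remark above follows from two standard acoustics relations, sketched below: the characteristic impedance Z = ρc of a medium, and the normal-incidence pressure reflection coefficient at an interface between two media. The material values used are rough, assumed figures for illustration, not measurements from this study.

```python
# Sketch of the standard impedance/reflection relations behind the density
# discussion: Z = rho * c, and R = (Z2 - Z1) / (Z2 + Z1) at normal incidence.

def impedance(rho_kg_m3: float, c_m_s: float) -> float:
    """Characteristic acoustic impedance Z = rho * c (in rayl)."""
    return rho_kg_m3 * c_m_s

def reflection(z1: float, z2: float) -> float:
    """Normal-incidence pressure reflection coefficient at a z1/z2 interface."""
    return (z2 - z1) / (z2 + z1)

air = impedance(1.21, 343.0)        # air at about 20 degrees C
panel = impedance(400.0, 1000.0)    # assumed values for a porous panel
r = reflection(air, panel)
print(f"|R| = {abs(r):.3f}; fraction of incident energy reflected ~ {r**2:.3f}")
```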
The development of new synthetic sound absorption materials is well advanced and successful, but these materials are expensive since they are mostly synthesized from petroleum-based resources and consequently contribute to adverse environmental effects like global warming and climate change (Galbrun & Scerri, 2017). Most materials for acoustic applications today, like synthetic plastics, glass wool, and synthetic foams, are harmful to human health and pose a threat to social life and the environment (Sailesh et al., 2022). The proliferation of synthetic plastics is being addressed at all levels of society (Hong & Chen, 2017). Sailesh et al. (2022) investigated the sound absorption and transmission loss characteristics of a 3D-printed biodegradable material made with poly(lactic acid) (PLA). Even though PLA has a long life and is biodegradable, it still needs to be hydrolyzed at high temperature by micro-organisms in industrial composting facilities to be compostable (Karamanlioglu et al., 2017). That means it is not naturally biodegradable but requires effort and money to compost. Presently, most commercially available sound absorption materials consist of glass or mineral-fiber material; however, today's public consciousness and concern is to prevent pollution's harmful effects by favouring more friendly fabrics, less polluting practices, and recycled items (Asdrubali, 2006). As a result, it is critical to expand research into finding alternative acoustical materials made from renewable resources such as natural fibers. Natural fibers are known to have good acoustic properties and are relatively inexpensive, biodegradable, abundant, and eco-friendly (Taban et al., 2019). The use of natural fibers and agricultural products as sound-absorbing materials has been investigated by several researchers (Yang et al., 2020). One study investigated the possibility of producing low-cost sound-absorbing panels from date palm waste fibers and concluded that they could be used to enhance room acoustic properties. Or et al. (2017) investigated the acoustic properties of sound absorbers made from raw palm empty fruit bunch and discovered that at frequencies above 1000 Hz an average sound absorption coefficient of 0.9 could be achieved. There is therefore the potential of using natural materials to absorb sound and, for that matter, noise. The need to investigate and find alternative natural fibers that have the potential to absorb low-frequency sounds and noise for possible industrial use, and thereby promote the use of renewable resources for a better ecology, is therefore paramount. Such viable alternatives should be materials that do not interfere with human health. It is therefore necessary to explore the opportunity to find less expensive, biodegradable local materials to be used as alternative noise-reducing materials. This study considers the use of natural fibers and natural, renewable, and biodegradable materials like coconut fiber, sawdust, and clay, which are abundant, cheaper, and pose less health and safety risk during handling, processing, and use. Materials selection. In this study, sawdust and coconut fiber, which are generally discarded as waste, were gathered and used as the main materials. The coconut fiber was extracted manually from the outer shells of mature coconut fruits and left to dry in the sun. It was then brought to the grinding machine, where the coconut fiber was milled into smaller particles.
Sawdust was sieved to micron size and is referred to in this work as fine grade sawdust. Tables 1 and 2 present the mix proportions and properties of the materials used to develop the test specimens, whereas Figure 1 shows images of the test specimens. The tables list the materials with their respective proportions on a weight basis. The materials were further mixed with a binding agent in the ratio 1:2 (particle:binder). Samples formed from these mix proportions were used to carry out the experiment. Experimental Setup The measurement of sound level at the given frequencies was conducted using an impedance tube. The experimental setup, as shown in Figure 2, consists of an impedance tube with a sample holder, a signal generator, a precision digital sound level meter (DT8852), a speaker inserted at one end, and a laptop computer to log the data recorded by the sound level meter. The impedance tube was set up according to the ISO 10534-2 standard (Tao et al., 2015). A signal generator connected to a speaker was used to generate sound at the given frequencies for the experiment. The selected frequencies used to investigate the sound level ranged from 1 kHz to 8 kHz. Sound levels were recorded with the sound level meter in decibels (dBA), before and after placing the developed samples into the test tube holder. Table 3 shows measurements of sound pressure level, in decibels (dBA), at frequencies of 1-8 kHz inclusive, without samples of the developed materials in place. These materials were investigated within the frequency range of 1-8 kHz inclusive, at 0.5 kHz intervals. Table 4 shows measurements of sound pressure level, in decibels (dBA), at the same frequencies with samples of the developed materials in place. From Equation (1), the effectiveness of each material developed at the given frequencies was determined as a Noise Reduction Coefficient (NRC) and is presented in Table 5. Noise Reduction Coefficient (NRC) = (a - b)/a (1) where a is the sound level in decibels (dBA) without the developed material in place, and b is the sound level in decibels (dBA) with the developed material in place. Effects of different biodegradable material composition on NRC This section summarizes and discusses the main findings of the work. The NRC results are given in Table 5 and shown graphically in Figure 3. Based on the results, it was observed that NRC values ranged between 0.09 for coconut fiber at 1 kHz and 0.62 for fine grade sawdust mixed with expansive clay at 4.0 kHz. On careful examination of the data, fine grade sawdust mixed with expansive clay yielded the highest NRC value of 0.62 at 4 kHz. The other samples used in this study also show promising NRC values. The high NRC value of fine grade sawdust mixed with expansive clay may be attributed to the material's high density. This is consistent with the findings of Wertel (2000), which suggest that a high-density material absorbs more sound because of its mass. The results further show that materials with high density have good sound reduction properties, as reflected in their high NRC values. Further work to improve these samples' NRC values is suggested. Table 6 presents the study's best NRC result (fine grade sawdust mixed with expansive clay) together with NRC results for different materials from previous studies (Balan and Asdrubali, 2006; Tengku Izhar et al., 2015; Oancea et al., 2018; Shivasankaran, 2019).
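To illustrate Equation (1), the following short Python sketch computes NRC values from paired sound level readings. The sample frequencies and readings used here are hypothetical placeholders for illustration only, not the study's measured data.

```python
# Minimal sketch of the NRC computation in Equation (1):
# NRC = (a - b) / a, where a is the sound level (dBA) without the
# material in place and b is the level (dBA) with the material in place.

def nrc(a_dba: float, b_dba: float) -> float:
    """Noise Reduction Coefficient from levels without (a) and with (b) a sample."""
    return (a_dba - b_dba) / a_dba

# Hypothetical readings at a few test frequencies (not the paper's data).
readings = {
    "1.0 kHz": (85.0, 77.4),
    "4.0 kHz": (90.0, 34.2),
    "8.0 kHz": (88.0, 60.1),
}

for freq, (a, b) in readings.items():
    print(f"{freq}: NRC = {nrc(a, b):.2f}")
```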
The other sample compositions compared with the present study sample are 75% maize and 25% textile waste, coconut fibre, cob concrete, sawdust, and sugar cane and egg tray. From the results shown in Table 6, fine grade sawdust mixed with expansive clay (from the present study) has the highest NRC value of 0.62 compared with the other materials from previous studies (75% maize and 25% textile waste with an NRC value of 0.28, coconut fibre with an NRC value of 0.515, cob concrete with an NRC value of 0.285, sawdust with an NRC value of 0.05, and sugar cane and egg tray with an NRC value of 0.59). The presented results show that the sample of fine grade sawdust mixed with expansive clay from the present study has great NRC potential and should be considered in future NRC experiments. Conclusion This research investigated and analysed the acoustic properties of sawdust, coconut fiber, and expansive clay for sound reduction purposes. The results show that fine grade sawdust mixed with expansive clay yielded the highest NRC value of 0.62 at 4 kHz and has promising NRC potential compared with the other materials studied. The materials made with fine grade sawdust, coconut fiber, and expansive clay, which are natural materials, have good acoustic properties and could therefore be used as alternative, less expensive, local materials to produce acoustic board panels or tiles with appreciable noise reduction properties. They can be used in industrial halls, conference rooms, studios, offices, and other buildings. Making use of these materials, which are otherwise waste products, for sound reduction purposes has high prospects because they are renewable, abundant, cheaper, and pose less health and safety risk during handling and processing compared with materials such as glass fibers, whose manufacturing process generates high CO2 emissions. It is recommended that further work be done on the binding agent, on improving the materials, and on determining their mechanical and thermal properties, with attention to the best-performing composition (fine grade sawdust mixed with expansive clay). Further research could also be conducted to improve the other material compositions.
3,453.8
2023-06-05T00:00:00.000
[ "Materials Science" ]
The Production, Spectrum and Evolution of Cosmic Strings in Brane Inflation Brane inflation in superstring theory predicts that cosmic strings (but not domain walls or monopoles) are produced towards the end of the inflationary epoch. Here, we discuss the production, the spectrum and the evolution of such cosmic strings, properties that differentiate them from those coming from an abelian Higgs model. As D-branes in extra dimensions, some types of cosmic strings will dissolve rapidly in spacetime, while the stable ones appear with a spectrum of cosmic string tensions. Moreover, the presence of the extra dimensions reduces the interaction rate of the cosmic strings in some scenarios, resulting in an order of magnitude enhancement of the number/energy density of the cosmic string network when compared to the field theory case. I. INTRODUCTION The cosmic microwave background (CMB) data [1,2] strongly support the inflationary universe scenario [3] as the explanation of the origin of the big bang. However, the origin of the inflaton and its potential is not well understood: a paradigm in search of a model. Recently, the brane world scenario suggested by superstring theory was proposed, where the standard model of the strong and electroweak interactions is realized by open string (brane) modes while the graviton and the radions are closed string (bulk) modes. In a generic brane world scenario, there are three types of light scalar modes: (1) bulk modes like radions (i.e. the sizes/shapes of the compactified dimensions) and the dilaton (i.e. the coupling), (2) brane positions (or relative positions) and (3) tachyonic modes, which are present on non-BPS branes or branes that are not BPS relative to each other [4]. In general, the bulk modes have gravitational-strength couplings (so too weak to reheat the universe at the end of inflation) and so are not good inflaton candidates. Neither are the tachyonic modes, which roll down the potential too fast for inflation. This leaves the relative brane positions (i.e. brane separation) as candidates for inflation. So, natural in the brane world is the brane inflation scenario [5], in which the inflaton is an open string mode identified with an inter-brane separation, while the inflaton potential emerges from the exchange of closed string modes between branes; the latter is the dual of the one-loop partition function of the open string spectrum, a property well-studied in string theory. This interaction is of gravitational strength, resulting in a very weak (that is, relatively flat) potential, ideally tailored for inflation. The scenario is simplest when the radion and the dilaton (bulk) modes are assumed to be stabilized by some unknown non-perturbative bulk dynamics at the onset of inflation. Since the inflaton is a brane mode, and the inflaton potential is dictated by the brane mode spectrum, it is reasonable to assume that the inflaton potential is insensitive to the details of the bulk dynamics. Brane inflation has been shown to be very robust (see e.g. [6,7,8,9]). The inflaton potential is essentially dictated by the gravitational attractive (and the Ramond-Ramond) interaction between branes. As the branes move towards each other, slow-roll inflation takes place. This yields an almost scale-invariant power spectrum for the density perturbation.
As they reach a distance around the string scale, the inflaton potential becomes relatively steep, so that the slow-roll condition breaks down. Inflation ends when the branes collide and heat the universe [10], which is the origin of the big bang. Towards the end of the brane inflationary epoch in the brane world, tachyon fields appear. As a tachyon rolls down its potential, defects are formed (see e.g. [11]). Due to properties of superstring theory and the cosmological conditions, only cosmic strings (but not domain walls or monopoles) are copiously produced during the brane collision [8,12]. These cosmic strings are Dp-branes with (p-1) dimensions compactified. The CMB radiation data fix the superstring scale to be close to the grand unified (GUT) scale, which then determines the cosmic string tensions; these turn out to have values that are compatible with today's observations, but may be tested in the near future. In field theory, one may also devise an abelian-Higgs-like model around the GUT scale to produce cosmic strings towards the end of inflation, in which case the cosmic string tension is essentially a free parameter. Although such a model may not be as well motivated as brane inflation, it is a possibility, so we aim to find signatures that distinguish cosmic strings in brane inflation from those coming from an abelian Higgs model. In this paper, we explore more closely the production of cosmic strings after inflation, the properties of the cosmic strings, in particular their tensions and stability, and finally their evolution to an eventual network. In summary, we find that the final outcome depends crucially on the quantitative details of the particular brane inflationary scenario being contemplated. In some scenarios, the cosmic strings produced via the Kibble mechanism may dissolve quickly. It is likely that their dissolution (which can happen soon after (re)heating) leads to the thermal production of lower-dimensional branes as cosmic strings. This is very likely if the (re)heating process is efficient [10], since the (re)heat temperature is comparable to the superstring scale. In other scenarios, they will evolve to a cosmic string network. In this case, the general properties of the resulting cosmic string network are likely to be quite different from those arising from field theory. The cosmic strings appear as defects of the tachyon condensation and can be D1-branes or Dp-branes wrapping a (p-1)-dimensional compact manifold. They yield a spectrum of cosmic string tensions including Kaluza-Klein modes. Moreover, due to the presence of the compactified dimensions, the interaction rate of the cosmic strings in some scenarios decreases, and when compared to the case in ordinary field theory, the result is an increase by orders of magnitude in the number density of the cosmic string network in our universe. II. A VARIETY OF BRANE INFLATIONARY SCENARIOS The brane inflationary scenarios we are interested in have the string scale close to the GUT scale, so we consider only brane world models which are supersymmetric (post-inflation) at the GUT scale. (Supersymmetry is expected to be broken at the TeV scale, which is negligible for the physics we are interested in here.) In the 10-dimensional superstring theory, the cosmic strings in our 4-dimensional spacetime shall be D-branes with one spatial dimension lying along the 3 large spatial dimensions representing our universe.
Hence we seek to enumerate the possible stable configurations of branes of different dimensionality in 10 dimensions, compactified on a six-manifold. To be specific, let us consider a typical Type IIB orientifold model compactified on (T^2 x T^2 x T^2)/Z_N or some of its variations (see for instance [13]). The model has N = 1 spacetime supersymmetry. Although we shall focus the discussion on Type IIB orientifolds, the underlying picture is clearly more general. We seek to categorize stable configurations of branes which remain after inflation and give rise to stable cosmic strings in the universe. By "stable" we mean that some fraction of the cosmic strings produced is required to persist until at least the epoch of big-bang nucleosynthesis in order for observable effects to be generated. In supersymmetric Type IIB string theory with branes and orientifold planes, it is well known that only odd (spatial) dimensional branes are stable. The conditions for stable brane configurations are simple given the compactification manifold: branes must differ by only 0, 4 or 8 in dimension, and branes of the same dimension can be angled at right angles in two orthogonal directions [14,15]. In the generic case where the second homotopy group pi_2 of the compactification manifold is non-trivial, branes will be stable when wrapping 2-cycles in the compact manifold. From these conditions, we formulate Table I. In cosmological situations, branes which are non-BPS relative to the others can be present. Generally the non-BPS configurations will decay, and the decay products of many are well known. For instance, a Dp-D(p-2) brane combination will form a bound state of a Dp-brane with an appropriate amount of "magnetic" flux [16]. This process is best understood as the delocalisation or "smearing out" of the D(p-2) brane within the Dp-brane. This process in the Dp-D(p-2) brane system is described by the presence of a tachyon field, an open string that stretches between them. This tachyon condenses as the D(p-2) brane decays and leads to a singular "magnetic" flux on the Dp-brane; this "magnetic" flux then spreads out across the Dp-brane and diminishes, leaving the total flux conserved. In an uncompactified theory, the residual "magnetic" field strength then vanishes. Since the tachyon in the Dp-D(p-2) brane combination is a complex scalar field inside the D(p-2) brane world volume, its rolling/condensation allows the formation of D(p-4)-branes as defects. (The actual formation/production of D(p-4)-branes may require the dissolution of a D(p-2)-anti-D(p-2) pair inside the Dp-brane.) Another important set of non-BPS brane configurations which will be generated in early-universe braneworld cosmology are branes of the same dimension oriented at general angles, which will also decay into branes with magnetic flux, as described above. There are also special cases of non-BPS configurations which will not decay; between a D3_3 and a D5_1 brane (or their T-dual equivalents, for instance a D1 and a D7 brane) there is a repulsive force, as seen in the total interbrane potential between a Dp- and a Dp'-brane (p' < p), which includes all gravitational and RR forces and is expressed in Eq. (1) in terms of the separation distance r (valid when r >> M_s^{-1}), where a is the number of directions in which the branes are orthogonal [15]. This potential also makes clear that there is no force between the BPS configurations of branes described above: those which differ in dimension by 4 and those of the same dimension which are angled in two orthogonal directions.
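As a toy encoding of the dimension rule just quoted, the following sketch checks whether a pair of branes can form a stable configuration. This is only an illustrative restatement of the rule as given in the text (differ by 0, 4 or 8 in dimension, with the equal-dimension case requiring right angles in two orthogonal directions); it is not a substitute for the full K-theory analysis cited there.

```python
# Illustrative check of the quoted stability rule for brane pairs in a
# supersymmetric Type IIB orientifold. Hypothetical helper, not part of
# any library or of the original paper's machinery.

def bps_compatible(p: int, q: int, right_angled_pair: bool = False) -> bool:
    """Return True if a Dp/Dq pair satisfies the quoted dimension rule."""
    diff = abs(p - q)
    if diff in (4, 8):
        return True
    if diff == 0:
        # Same dimension: stable only when angled at right angles
        # in two orthogonal directions.
        return right_angled_pair
    return False

# Examples mirroring the text: D5/D1 differ by 4 (compatible);
# D5/D3 differ by 2 (not mutually BPS; the text notes such pairs can repel).
print(bps_compatible(5, 1))                          # True
print(bps_compatible(5, 3))                          # False
print(bps_compatible(5, 5, right_angled_pair=True))  # True
```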
Brane world models of inflation require brane-antibrane pairs (or branes oriented at non-BPS angles) [6,7,8,9]; the inflaton field is described by the separation between the branes, and its potential can be arranged to give slow-roll inflation. To describe the Standard Model, we demand a chiral post-inflation brane world, which requires that the branes which form our universe are angled in some dimension; sets of D5_1 and D5_2 branes will give a stable chiral low-energy effective theory, for instance. After the compactification to 4-dimensional spacetime, the Planck mass M_P = (8 pi G)^{-1/2} = 2.4 x 10^18 GeV is given by Eq. (2) in terms of the superstring scale M_s, the string coupling g_s, and the compactification volumes of the (45)-, (67)- and (89)-directions. The string coupling g_s should be large enough for non-perturbative dynamics to stabilize the radion and the dilaton modes (but not so large that a dual version of the model becomes weakly coupled). We expect the string coupling generically to be g_s >~ 1. To obtain a theory with a weakly coupled sector in the low-energy effective field theory (i.e. the standard model of strong and electroweak interactions with a weak gauge coupling constant), it then seems necessary to have the brane world picture [18]. Suppose the D5_1-branes contain the standard-model open string modes; the wrapped volume then sets the gauge coupling, where alpha_GUT ~ 1/25 is the standard-model coupling at the GUT scale, which is close to the superstring scale M_s. This implies that (M_s r_1)^2 ~ 30. If some standard-model modes come from D5_2-branes, or from open strings stretching between D5_1- and D5_2-branes, then (M_s r_2)^2 ~ 30. In the early universe, additional branes (and antibranes) may be present. Additional branes must come in brane-antibrane pairs (or at angles), so that the total (conserved) RR charge in the compactified volume remains zero. Any even-dimensional D-branes are non-BPS and so decay rapidly. The Hubble constant during inflation is roughly set by the inflaton potential, Eq. (4). In Table II, we catalogue the various brane-antibrane pairs (provided they are separated far enough apart) which can inflate the 4-dimensional Minkowski brane world volume of the D5_1- and D5_2-branes. Towards the end of inflation, a tachyon field appears and its rolling allows the production of defects. A priori, the defects (only cosmic strings here) which are allowable under the rules of K-theory [19] may be produced immediately after inflation, when the tachyon field starts rolling down. Following Eq. (2) and Eq. (4), we see that the Hubble size 1/H during this epoch is much bigger than any of the compactification radii, 1/H >> r_i. This means that the Kibble mechanism is capable of producing only defects with vortex winding in the three large spatial dimensions. The cosmological production of these defects towards the end of inflation is referred to as "Kibble" in Table II. During this epoch, the universe is essentially cold and so no thermal production of any defect is possible. Generically, codimension-one non-BPS defects may also be produced. However, these decay rapidly and will be ignored here. Let us now elaborate on the various possibilities listed in Table II (below, a Dp-anti-Dp pair includes the case of a stack of Dp-branes separated from a stack of anti-Dp-branes): • D9-anti-D9 pair. In this case, the tachyon field is always present and the annihilation happens rapidly. Also, since the branes are coincident, there is no inflaton. • D1-anti-D1 pair. Since they do not span the 3 uncompactified dimensions, they do not provide the necessary inflation.
In the presence of inflation (generated by other pairs), a density of these D1-branes will be inflated away. • (D3-anti-D3)_0 pair. They span the 3 uncompactified dimensions and move towards each other inside the volume of the 6 compactified dimensions during inflation. (The conservation of the total zero RR charge prevents them from becoming parallel and so BPS with respect to each other.) At the end of inflation, their collision heats the universe and yields D1_0-branes as vortex-like solitons. These D1_0-branes appear as cosmic strings. They form a gas of D1_0-branes (at all possible orientations in the 3-dimensional uncompactified space). The D3_0-branes are unstable in the presence of the D5_1 and D5_2 branes. It is possible that during inflation, a D3_0-brane can simply move towards a D5-brane and then dissolve into it. The D3_0-brane can either hit the same D5-brane, ending inflation and producing D1_0-branes as cosmic strings, or it can collide with another D5-brane. This D5-brane shall no longer be BPS with respect to the other D5-branes, and more inflation may result from their interactions. Towards the end of inflation these D5-branes collide with the BPS D5-branes. D1_0-branes are expected to be produced as defects in this scenario. • D5_1-anti-D5_1 pair. This D5_1-brane is indistinguishable from the other D5_1-branes that are present. They span the 3 uncompactified dimensions and move towards each other inside the volume of the 4 compactified dimensions (i.e. (6789)) during inflation. Towards the end of inflation, a tachyon field appears and its rolling produces D3_1-branes as cosmic strings. However, such D3_1-branes are unstable, and eventually a tachyon field (an open string mode between the D3- and the D5-branes) will emerge. Its rolling signifies the dissolution of the D3-brane into the D5_1-branes. Generically, by the time these D3-branes start dissolving, (re)heating of the universe should have taken place, so the tachyon rolling can thermally produce D1_0-branes as cosmic strings. • D5_3-anti-D5_3 pair. They may generate inflation directly, and being mutually BPS with the D5_1 and D5_2 branes they shall not be subject to more complicated interactions. After inflation, D3_3-branes will be produced as cosmic strings. Although they are not BPS with respect to the D5-branes, the interaction is repulsive (with p = 5, p' = 3 and a = 2 in Eq. (1)), so we expect them to move away from the D5_1-branes in the (67) directions (to the antipodal point) and from the D5_2-branes in the (45) directions. This way, these D3_3-branes shall mostly survive and evolve into a cosmic string network. However, some of the D3_3-branes will scatter with the D5-branes in the thermal bath. This may also result in the production of some D1_0-branes as cosmic strings. • D7_{1,3}-anti-D7_{1,3} pair. To provide the needed inflation, these pairs wrap 4 of the 6 compactified dimensions and move towards each other in the remaining 2 compactified dimensions during the inflationary epoch. Their collision heats the universe and yields D5_{1,3}-branes as cosmic strings. The D5-branes that wrap only 2 of the 4 wrapped dimensions of the D7-branes may appear to simply span all 3 uncompactified dimensions. However, the production of these objects is severely suppressed, since the Hubble size is much bigger than the typical compactification sizes. While the tachyon is falling down, the universe is still cold, so no thermal production is possible either. As a result, only D5_{1,3}-branes appearing as cosmic strings are produced.
It is possible for the D5_1-branes to dissolve into magnetic flux on the D7-brane during inflation. After the annihilation of the D7_{1,3}-anti-D7_{1,3} pair, this flux shall reemerge as D5-branes, together with any additional D5-brane solitons as cosmic strings. • D7_{1,2}-anti-D7_{1,2} pair. This case is similar to the above case, except that both sets of D5-branes may dissolve into the D7 pair during inflation. We have considered only the IIB theory with two sets of D5-branes. Under T-duality, the branes become D9-D5-branes, or D7-D3-branes in a IIB orientifold theory, with corresponding descriptions. Generalizing the above analysis to the branes-at-angle scenario [7] should be interesting. It is also possible to describe similar inflationary models with cosmic strings in Type IIA theory, in which even-dimensional branes are stable. In this case, one simply adds additional brane-antibrane pairs to the N = 1 spacetime supersymmetric IIA orientifold models [17]. It will be interesting to consider the brane inflationary scenario in M theory and the Horava-Witten model. In general, we see that the brane inflationary scenario includes numerous possibilities, each with its own intriguing features and consequences. Although not necessary, we may consider the early universe starting as a gas of branes (see for example [20]). The presence of the orientifold planes fixes the total RR charge. After all but one brane-antibrane pair (spanning the 3 large dimensions) have annihilated, we end with an early universe that is the starting point of the above discussion. In this picture, it is hard to predict which set of brane-antibrane pairs should be last standing. III. THE SPECTRUM OF THE COSMIC STRINGS The cosmic string tension mu has been estimated for a number of brane inflationary scenarios [8,12]. The value of mu is quite sensitive to the specific scenario. Here we give an order-of-magnitude sketch. For all brane separations smaller than the compactification size, the D-anti-D potential is too steep for enough e-folding. When the brane and the anti-brane are far apart in the compactified volume, the images of the brane exert attractive forces on the anti-brane, so that at the antipodal point the force is exactly zero. In the cubic compactification, this results in a potential V(phi) = B - lambda*phi^4, where phi measures the distance from the antipodal point [6]. The density perturbation generated by the quantum fluctuation of the inflaton field is given by Eq. (7) [6,8]. Using COBE's value delta_H ~ 1.9 x 10^{-5} [1] constrains the parameters appearing there, but still leaves M_s unfixed. To estimate M_s and the cosmic string tension mu, let us consider a couple of scenarios. Consider D5_1-anti-D5_1 brane inflation. With (M_s r_1)^2 ~ 30 and r_2 = r_3 = r_perp, Eq. (2) and Eq. (7) then imply that M_s ~ 10^14 GeV. If the cosmic strings are D1-branes, the cosmic string tension mu_1 is simply the D1-brane tension, mu_1 = tau_1 = M_s^2/(2 pi g_s). This implies that G*mu ~ 6 x 10^{-12}. Now the D1-brane may have discrete momenta in the compactified dimensions. These Kaluza-Klein modes give a spectrum of cosmic string tensions involving the discrete eigenvalues e_i (i = 1, 2, 3) of the Laplacians on the (45), (67) and (89) compactification cycles. To get an order-of-magnitude estimate, we find that the lowest excitation raises the tension by about a few percent. For D7_{1,2}-anti-D7_{1,2} pair inflation, with (M_s r_1)^2 ~ (M_s r_2)^2 ~ 30, we have r_3 = r_perp. In this case, M_s ~ 4 x 10^14 GeV, with D5_{1,2}-branes as cosmic strings.
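To make the D1-brane estimate above concrete, here is a minimal numerical sketch. It combines the Dp-brane tension formula quoted in the next paragraph (tau_p = M_s^{p+1}/((2 pi)^p g_s)) with G = 1/(8 pi M_P^2), which follows from the definition M_P = (8 pi G)^{-1/2} given earlier; the value g_s = 2 is an illustrative assumption consistent with the text's g_s >~ 1.

```python
import math

# Dimensionless estimate of G*mu for a D1-brane cosmic string, using
# tau_1 = M_s^2 / (2*pi*g_s) and G = 1/(8*pi*M_P^2), so that
# G*mu = M_s^2 / (16*pi^2 * g_s * M_P^2). All masses in GeV.

M_P = 2.4e18   # reduced Planck mass (8*pi*G)^(-1/2), value from the text
M_s = 1e14     # superstring scale for D5_1-anti-D5_1 inflation, from the text
g_s = 2.0      # illustrative string coupling (assumption; text: g_s >~ 1)

tau_1 = M_s**2 / (2 * math.pi * g_s)   # D1-brane tension
G = 1 / (8 * math.pi * M_P**2)         # Newton constant from M_P
print(f"G*mu ~ {G * tau_1:.1e}")       # close to the quoted ~6e-12
```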
Noting that a Dp-brane has tension tau_p = M_s^{p+1}/((2 pi)^p g_s), the tension of such cosmic strings is the D5-brane tension multiplied by the wrapped compact volume. This yields G*mu ~ 10^{-8}. This tension is bigger than that of the D1-branes. Depending on the particular inflationary scenario, this value may vary by an order of magnitude. For D-anti-D inflation, we have roughly [12] 10^{-7} >= G*mu >= 10^{-12}. (11) Higher values of G*mu are possible for the branes-at-small-angle scenario. The interesting feature of this type of cosmic strings is that there is a spectrum of cosmic string tensions. The branes can wrap the compactified (4567)-dimensions more than once. This gives a tension proportional to nw, where n is the defect winding number (i.e., the vorticity) and w is the wrapping number (i.e., the number of times the brane wraps the compactified volume), so nw is equivalent to the number of cosmic strings. Moreover, there can be "momentum" (Kaluza-Klein) excitations of the branes propagating in these compactified directions. All these result in quite an intricate spectrum of cosmic string tensions. For n = w = 1, the tension receives discrete "momentum" excitation contributions p_1 and p_2, depending on the geometry of the (45) and the (67) directions. Using (M_s r_1)^2 ~ (M_s r_2)^2 ~ 30, we see that each momentum excitation typically raises the cosmic string tension roughly by a few percent. We see that the cosmic string tension can have a rich spectrum. This is very different from the field theory case, where the cosmic strings always appear with the same tension, up to the vorticity number n. IV. EVOLUTION OF THE COSMIC STRING NETWORK To see the impact of the extra dimensions on the cosmic string network evolution, let us use the simple one-scale model for the evolution of the cosmic string network [21]. The energy in the cosmic strings is much smaller than the energy in the radiation (or in the matter at later times). Let L(t) be the characteristic length scale of the string network. The energy density of the cosmic string network is then rho = E/L^3 = mu/L^2, where E is the energy of the cosmic string network per characteristic volume L^3. String self-intersections typically break off a loop, which then decays (e.g. via gravitational waves). String intercommutations generate cusps and kinks, which also decay rapidly. The resulting change in energy is given by Eq. (15). Now, the cosmic string energy in an expanding universe is E = rho*V_0*a^3, where the constant V_0 is the reference volume and a(t) is the cosmic scale factor. The number of interactions per unit volume per unit time is lambda*(v/L)/L^3, where v is a typical peculiar velocity and lambda measures the probability of string intersections. Assuming slow-moving strings (to simplify the analysis) and substituting these quantities into Eq. (15), we obtain the equation governing the evolution of the energy density: d(rho)/dt = -2*H*rho - lambda*rho/L (absorbing the velocity v into lambda). (16) Here, H = adot/a is the Hubble constant. Substituting the ansatz L(t) = gamma(t)*t in Eq. (16), we obtain the following equation for gamma(t) during the radiation-dominated era: d(gamma)/dt = (lambda - gamma)/(2t). (17) This equation has a stable fixed point at gamma(t) = lambda. We see from this solution that the characteristic length scale of the string network tends asymptotically towards the horizon size, L ~ lambda*t. As a check, we see that in the absence of string interactions (that is, lambda = 0), L ~ sqrt(t) so rho ~ a^{-2}, as expected. In the presence of cosmic string interactions (that is, lambda != 0), the asymptotic (late-time) energy density of the cosmic strings is given by rho = mu/(lambda^2 t^2), which scales as mu/(lambda^2) * a^{-4} in the radiation-dominated era and mu/(lambda^2) * a^{-3} in the matter-dominated era. (18) Suppose lambda_0 is the interaction strength for field theory models.
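A quick numerical sketch of the one-scale evolution just described: integrating d(gamma)/dt = (lambda - gamma)/(2t), the radiation-era equation as reconstructed above with the string velocity absorbed into lambda, shows gamma relaxing to the fixed point gamma = lambda regardless of its initial value. The parameter values are illustrative assumptions.

```python
# Integrate the one-scale model equation d(gamma)/dt = (lambda - gamma)/(2t)
# for L(t) = gamma(t) * t during the radiation era, and watch gamma -> lambda.

def evolve_gamma(gamma0: float, lam: float, t0: float = 1.0,
                 t1: float = 1e6, steps: int = 200_000) -> float:
    """Log-spaced Euler integration of d(gamma)/dt = (lam - gamma)/(2t)."""
    gamma, t = gamma0, t0
    ratio = (t1 / t0) ** (1.0 / steps)
    for _ in range(steps):
        dt = t * (ratio - 1.0)
        gamma += (lam - gamma) / (2.0 * t) * dt
        t += dt
    return gamma

lam = 0.1  # illustrative interaction strength (assumption)
for gamma0 in (0.01, 0.1, 1.0):
    print(f"gamma0={gamma0:<4} -> gamma at late times = {evolve_gamma(gamma0, lam):.4f}")
# Every initial condition relaxes to the scaling fixed point gamma = lam,
# i.e. the network length scale tracks the horizon, L ~ lam * t.
```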
The cosmic strings live in 4 + d_perp dimensions and are localized in the d_perp compact dimensions. The effect of the extra dimensions is to reduce the collision (self-intersection) probability of the cosmic strings. The simplest way to model this effect is to change the efficiency with which loops are formed by the long cosmic strings, so lambda < lambda_0. The number density of the scaling cosmic string network is then enhanced by a factor of (lambda_0/lambda)^2 >= 1. Generically, we expect this to be a large enhancement. Since the extra dimensions are stabilized, rho still scales like radiation during the radiation-dominated epoch (and like matter during the matter-dominated epoch). The resulting cosmic string network then yields a scaling energy density rho/rho_r ~ Gamma*G*mu, with Gamma = beta*(lambda_0/lambda)^2, where rho_r is the energy density of radiation during the radiation-dominated epoch (or of matter during the matter-dominated epoch). In field theory models, Gamma = beta. Numerical simulations [22] give beta ~ 6. Let us give an order-of-magnitude estimate of the effect of the extra dimensions on the cosmic string collision probability, namely the ratio lambda_0/lambda. Consider two points of two different cosmic strings (or of the same cosmic string) that coincide in the 4-dimensional spacetime. In 4-dimensional field theory, they are touching. The probability of this happening is dictated by lambda_0. In the brane world, they may still be separated in the extra dimensions. We would like to estimate the likelihood of them actually touching (which then allows intercommuting or the pinching off of a loop). Consider the compact directions where these two points (of cosmic strings) appear as points (that is, they are not wrapping these compact directions). In the case of D5_3-anti-D5_3 pair inflation, the repulsive force from the D5_1-branes will push the D3_3-branes into a corner in the (45) directions, while the repulsive force from the D5_2-branes will push the D3_3-branes into a corner in the (67) directions. As a result, all the D3_3-branes end up at a corner in the (4567) directions. In this case, the extra dimensions should have little or no effect on their interaction, that is, lambda ~ lambda_0. In other scenarios, the strings are free to roam in the compact directions. If two cosmic strings coincide in the 4-dimensional spacetime, and in the compactified directions in which they are pointlike they are separated by a distance comparable to the superstring scale 1/M_s, a tachyon field appears, and the rolling of this tachyon field has a time scale around the superstring scale. So we expect them to interact. Consider the scenario where the D5_{1,2}-branes are cosmic strings. If they are randomly placed, the likelihood of them coming within that distance in the compact (89) directions is given by lambda_0/lambda ~ (M_s r_3)^2. Now let us take the cosmic string interaction into account. In the extra dimensions the cosmic strings appear as points and interact via an attractive Coulomb-type potential, which becomes important only when the separation between them is relatively small. Let us estimate the resulting enhancement of the probability of interaction. The scattering cross-section for two string points interacting in the transverse dimensions via an attractive potential V(r) = -A/r^{d_perp - 2} is expressed in terms of Omega_{d_perp - 1}, the volume of the unit (d_perp - 1)-sphere. The capture radius r_capture is comparable to the superstring scale 1/M_s, which sets the likelihood of two string points coming within that distance. Note that, generically, larger G*mu gives less enhancement.
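For orientation, the following tiny sketch evaluates the geometric part of the enhancement quoted above, lambda_0/lambda ~ (M_s r_3)^2, taking the text's typical compactification value (M_s r)^2 ~ 30 as an illustrative input.

```python
# Geometric enhancement of the cosmic string network density from the
# extra dimensions: lambda_0/lambda ~ (M_s * r_3)^2, and the scaling
# network density is enhanced by (lambda_0/lambda)^2.

Ms_r3_sq = 30.0                      # (M_s r_3)^2, illustrative (text: ~30)
lambda_ratio = Ms_r3_sq              # lambda_0 / lambda
density_enhancement = lambda_ratio**2

print(f"lambda_0/lambda ~ {lambda_ratio:.0f}")
print(f"density enhancement ~ {density_enhancement:.0f}x")  # ~900x, i.e. orders of magnitude
```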
The reason is that the total volume of the compactified dimensions is fixed by the value of G; a larger tension comes from brane-wrapping over a larger compactified volume, which implies a smaller volume for the cosmic strings to avoid each other, and so a smaller Gamma. This means that Gamma*G*mu (the cosmic string density) is relatively insensitive compared to either mu or Gamma alone. If observations give a bound rho/rho_r < 10^{-5}, the values of G*mu appearing in the branes-at-small-angle scenario may seem too large. However, the production of cosmic strings in this scenario is more localized around the brane intersection, implying a smaller production as well as a smaller enhancement in Gamma. Clearly, a careful estimate of lambda_0/lambda and Gamma in that case will be very important. The important message here is that the cosmic string network continues to have a scaling solution, and the enhancement in its energy density due to the extra dimensions can be very large. For a given mu, this will yield a very different cosmic string energy density than that in the field theory case. Measuring mu and rho separately will be valuable. Dvali and Vilenkin also noted that the presence of compactified dimensions can substantially increase the number density of the cosmic string network. We thank Louis Leblond, Levon Pogosian, Sash Sarangi, Gary Shiu, Alex Vilenkin and Ira Wasserman for valuable discussions. This research is partially supported by the National Science Foundation under Grant No. PHY-0098631.
6,969.2
2003-03-31T00:00:00.000
[ "Physics" ]
METAPHORICAL THINKING OF STUDENTS WITH DIFFERENT SENSING PERSONALITY TYPES IN SOLVING ALGEBRA PROBLEMS Dinda This research aims at describing the metaphorical thinking of students with different sensing personality types in solving algebra problems. The subjects of this study consist of two students of the same sex and equivalent mathematical abilities, namely one female student with the guardian personality type and one with the artisan personality type, based on the Keirsey personality classification. The data collection methods used are a problem-solving task and a task-based interview. The results show that the metaphorical thinking of the two subjects differs mainly in the connect component. In the connect component, the guardian student connects the given problem with the weekly savings process, while the artisan student connects it with a farmer's hat and the process of making a ladder. In the relate component, both students find common ideas between the given problem and the ideas they have. In the explore component, the guardian student describes the similarity of ideas between the given problem and the weekly savings process and makes a model, while the artisan student describes the similarity of ideas with the process of making a ladder using pictures and curves. In the analyze component, the two students re-explain the previous steps they have taken. Then, in the transform component, the two students change the model of their ideas. In the experience component, both students do not apply the results obtained to solve problems in a new context. INTRODUCTION Algebra is one of the fields taught to junior high school students. Based on Permendikbud Number 21 and 24 of 2016, algebra material begins to be taught in junior high school grade VII, semester 1. Algebra is also a symbolic language used to express ideas in many branches of mathematics (Tabak, 2011: 68). This is in line with Watson (2007: 3), who states that algebra is a way for individuals to express generalizations about numbers, equations, relations, and functions by using symbols (usually letters or variables) as a simplification and an aid to problem solving. However, Ramadhani (2016: 20) states that students do not understand the meaning of variables and make mistakes in solving equations in algebra material, one of the causes being a lack of mastery of the algebra material, which has an impact on the problem-solving process. Novitasari (2018: 9) confirms, based on research results, that grade VIII junior high school students on average have not been able to reach the middle level on PISA change-and-relationship content questions, with percentage scores of 18.33% for level 3 and 11.67% for level 4. Based on these descriptions, junior high school students in Indonesia are less able to solve mathematical problems, especially in the field of algebra. Wilson et al. (in Bhat, 2014: 685) suggested that problem solving has a special role in learning mathematics, the main goal of teaching and learning mathematics being to develop the ability to solve various complex problems. This is also in line with NCTM (2000), which sets five standard processes in mathematics, namely problem solving, reasoning and proof, communication, connections, and representations. These descriptions make it clear that problem solving is one of the important priorities in mathematics. Salleh (in Zakaria, 2009: 233) revealed that one of the abilities used in solving problems is making analogies.
Making analogies is a cognitive process of connecting the information or meaning of a particular problem (source domain) to another (target domain) that corresponds to it. For example, solving problems related to algebraic equations can be made analogous to a balance or a set of scales. By using such an analogy, students will more easily understand a mathematical concept, because the phenomenon corresponds to the mathematical concept and is often found in everyday life. Setiawan (2016: 210) explains that this analogy process is a form of thinking that uses metaphors, connecting students' mathematical knowledge with surrounding real-world phenomena; this is in accordance with the results of Arni's research (2019: 90), where students found metaphors for linear one-variable equation problems in scales and/or a seesaw. Such thinking is called metaphorical thinking; Setiawan (2016: 210) defines metaphorical thinking as a mental activity that uses metaphors appropriate to the situation at hand in order to understand a concept. Siler (1996: 7) revealed that metaphorming comes from meta (transcending) and phora (transference). Metaphorming is an activity that refers to the act of changing something (material) from one meaning to another. Bazzini (in Lai, 2013: 32) views metaphors as a tool to explain or interpret mathematical ideas and processes in terms of real-world events, which involve everyday objects and processes. Sterenberg (2008: 91) confirms that metaphors link abstract ideas to concrete images, thus evoking an experiential connection. Metaphorical thought supports embodied knowing and is not merely a communication or visualization device. Thus, in metaphorical thinking, abstract concepts are transformed into real objects in everyday life. Carreira (2001: 267) explains that models and metaphors have a very close relationship: each model formed has a metaphor in it. Making a mathematical model of a problem requires a relationship between two conceptual domains, and to develop such interconnections there must be a metaphor. Carreira's opinion is in line with that of Mathieu (2009: 8), who revealed that a metaphor is the link we naturally make between two domains: a source domain, usually more concrete, and a target domain, usually more abstract. This allows us to better understand and think about the target domain. Thus, in metaphorical thinking, students are asked to connect real phenomena with the target domain, which creates a mathematical model from which, hopefully, they can solve the problem. Siler (1996: 22-25) describes the stages of metaphorical thinking, namely connection, discovery, invention, and application, through the CREATE acronym, which stands for "Connect, Relate, Explore, Analyze, Transform, Experience". The following is an explanation of CREATE based on the description of Siler (1996: 26-31): 1. Connect two or more seemingly different things or ideas. 2. Relate the different objects and ideas to things that we already know, beginning to observe their similarities. 3. Explore the similarities by drawing ideas, building models, playing roles, and describing those models. 4. Analyze what has been thought of; it is necessary to outline the existing ideas and models to find the relationships between them. 5. Transform, that is, discover or invent something new based on the connections, explorations, and analysis. 6. Experience, that is, apply the drawing, model, or invention in as many new contexts as possible.
Each student has a different way of thinking, for instance in making a choice or a decision, due to differences in the personality of each individual. The results of Barhaghtalab's research (2016: 790) show that thinking has a significant correlation with personality type, so personality is possibly one of the factors in the thought process. One personality classification was developed by David Keirsey. Keirsey (1998) briefly describes the classification of personality types based on the way a person behaves towards an event. He distinguishes two modes: observing, corresponding to the sensing type, and introspecting, corresponding to the intuitive type. We focus only on the sensing type, because this type requires more information and concrete memory to respond to an event; with this concrete information, a connection with metaphorical thinking is possible. There are two sensing types: guardian and artisan. Broadly speaking, guardian personality types have intelligence in logistics, which is used in organizing a problem correctly before carrying out a process, so everything must be confirmed first (Advisor, 2017). Artisan-type people have intelligence in tactics, which is used in assessing situations quickly, evaluating various choices, and taking action to obtain the desired results (Advisor, 2017). Personality type may thus influence decision making, which in turn has an impact on problem solving. Based on the descriptions above, the purpose of this study is to describe the metaphorical thinking of students with different sensing personality types in solving algebra problems. METHODS This research is a qualitative descriptive study. By type, this research aims to describe the metaphorical thinking of students with different sensing personality types in solving algebra problems related to number-pattern material. The research data were obtained from the results of students' problem-solving assignments and task-based semi-structured interviews conducted to clarify the written data and to explore information about students' metaphorical thinking that might not be present in the written data. The subjects in this study consisted of two eighth-grade junior high school students. Subject selection was done by giving Keirsey personality type tests, so that the chosen subjects were one student with a guardian personality and one with an artisan personality, both female and with equivalent mathematical abilities based on the results of daily tests on number-pattern material, with scores in the range 86-100. The researchers controlled the sex of the research subjects to avoid differences in the data influenced by sex; this is supported by the results of Mubarok's research (2019), which shows that there are differences in the thought processes of male and female students. The researchers assume that number-pattern material involves analogous processes suited to metaphorical thinking; this is in line with the results of Kadir and Ulfah (2013), who found that, in solving look-for-a-pattern problems, students can connect the relationship between one pattern and another. The researchers also assume that students with high daily test scores have mastered the number-pattern material. The students' metaphorical thinking data come from problem-solving tasks and task-based semi-structured interviews. The problem-solving task given to the subjects is a non-routine math problem related to number-pattern material.
Interviews were used to clarify the written data and to explore information about students' metaphorical thinking in solving algebra problems that might not be obtained from the written data. The data obtained were analyzed in three stages, namely data condensation, data display, and drawing conclusions (Miles & Huberman, 2014: 12), based on metaphorical thinking indicators adapted from Siler (1996), Setiawan (2016), and Arni (2019), presented in Table 1. (Table 1 pairs each stage of metaphorical thinking with its CREATE component and indicator; for example, Invention - Transform: change the form of the drawings, models, or ideas that have been made to find something new; Application - Experience: apply the drawings, models, or ideas in various new contexts.) The algebra problems used in this study comprise the following 2 questions. 1. Pay attention to the picture of the following square-shape arrangement. The relationship between any two sequential arrangements above is the same. a. Determine the number of square shapes in each arrangement from the 1st arrangement to the 10th arrangement. b. Determine the number of square shapes in the n-th arrangement. 2. Pay attention to the picture of the following cube-shaped arrangement. The relationship between any two sequential arrangements above is the same. a. Determine the number of cuboid shapes in each arrangement from the 1st arrangement to the 8th arrangement. b. Determine the number of cuboid shapes in the n-th arrangement. RESULTS AND DISCUSSIONS Based on the data analysis of the problem-solving task results and interviews of the guardian subject (SG) and the artisan subject (SA), a description of the metaphorical thinking of students with different sensing personality types in solving algebra problems is obtained. Guardian Students' Metaphorical Thinking in Solving Algebra Problems Based on the results of the analysis conducted on SG in solving algebra problems with the CREATE criteria, the following can be revealed. The following excerpts of SG interviews relate to the connect component. PG01: Where did you get the idea to solve the problem that I gave? SG01: From my daily life; I save money, miss. PG02: How did you get that idea? SG02: I got the idea from my routine. The thing is, I save money every week. Initially saving 50.000 IDR, then every following week plus 5.000 IDR; well, from there, miss. PG03: Why use that idea? SG03: Because I think my saving system is the same as this number pattern: every week it has the same addition, miss. For example, in the first week I saved 50.000 IDR, then in the second week I would save the same as the first week but I added 5.000 IDR, so in the second week I saved 55.000 IDR, while in the third week 60.000 IDR, and so on in the following weeks with the same pattern. So, in my saving system it is the previous savings plus 5.000 IDR; in this pattern, it goes from 5 to 9, meaning plus 4, then from 9 to 13 is also plus 4. Well, from there it can already be seen that the previous pattern is always added by 4. Seen from here, the addition is always fixed, miss. PG04: Do you have any other ideas, besides the saving process? SG04: Nothing, miss; I think that's all, miss. In the connect component, SG connects the given problem with her weekly saving routine. SG found only one idea, but SG explained in detail how the pattern of her saving is the same as the given problem, both having a fixed difference. SG explained the relationship verbally rather than in writing; this is in accordance with Dewiyani's statement (2017: 307) that the guardian type does not express ideas and information in written form.
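The pattern SG describes is a plain arithmetic sequence, and her analogy can be written out directly: the square arrangement grows as 5, 9, 13, ... (common difference 4), just as the savings grow as 50.000, 55.000, 60.000 IDR (common difference 5.000). A minimal sketch using the standard n-th term formula a_n = a_1 + (n - 1)d; the starting values are taken from the transcript above.

```python
# Both the number-pattern problem and SG's savings metaphor are arithmetic
# sequences: a_n = a_1 + (n - 1) * d.

def arithmetic_term(a1: int, d: int, n: int) -> int:
    """n-th term of an arithmetic sequence with first term a1 and difference d."""
    return a1 + (n - 1) * d

# Problem 1: squares pattern 5, 9, 13, ... (difference 4, from the transcript).
squares = [arithmetic_term(5, 4, n) for n in range(1, 11)]
print("squares, arrangements 1-10:", squares)

# SG's metaphor: weekly savings 50.000; 55.000; 60.000 IDR (difference 5.000).
savings = [arithmetic_term(50_000, 5_000, n) for n in range(1, 5)]
print("savings, weeks 1-4 (IDR):", savings)
```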
The following excerpt from the SG interview relates to the relate component. PG07: Explain, how are these ideas related to the given problem? SG07: The link is like this, miss. My saving is also patterned, and the system is also the same as this pattern (the given problem), where the additions are always fixed, so the connection is that both have a fixed addition. PG08: What links the saving idea with this problem? SG08: The addition system mentioned earlier, miss. In saving, the previous week is added by 5.000 IDR, while in this pattern (the given problem), the previous pattern is added by 4, miss. PG09: How did you find that relationship? SG09: Because I had already found a pattern in the problem, I suddenly remembered how I save every week. So, I found that my savings system was like the one in this pattern. Thus, in the relate component, SG finds similarities between the given problem and the way she saves each week. SG explained that the similarity between the ideas is found in a fixed difference. In the explore component, SG describes verbally the similarities between the ideas she gets. Her weekly savings have a difference of 5.000 IDR between two consecutive weeks, while the given problem has a difference of 4 between two sequential arrangements, so SG explains that the similarity between the ideas lies in a fixed difference and the same pattern. Then SG expresses the idea in the form of a mathematical model, as in Figure 1, to facilitate solving problem number 1a, where m_n is defined as the n-th week. In the analyze component, SG explains verbally the steps previously taken, starting from how she connects and finds similarities between the ideas and the problem, then restates the similarities and makes a model. The following excerpt from the SG interview relates to the analyze component. PG19: Explain how you found the relationship between saving and this pattern? SG19: Because there was a constant addition, the next pattern being the previous pattern added by the difference of the pattern, I suddenly remembered that the way I save is the same: the addition is fixed, miss. PG20: Explain the relationship between the ideas? SG20: The relationship is that my second-week savings are the same as the first week plus 5.000 IDR, while in the pattern the second term, 9, equals 5 plus 4. So, the next pattern is the same as the previous pattern plus the fixed addition, miss. PG21: Okay, then explain how you made this model (Figure 1) and what this model means (Figure 1)? SG21: So let's say the variable m_1 is for the first week, m_2 for the second week, and m_n for the n-th week; for example, m_1 is 50.000 IDR, while m_2 is 55.000 IDR, meaning m_2 is the same as m_1 plus 5.000 IDR, so it is made like this (pointing to the model in Figure 1). I made such a model to be more easily understood and concise, miss. In the transform component, SG converts the model she has made into "the next pattern equals the previous pattern plus a fixed difference" to solve problem number 1a. From changing the model, SG concluded that the arithmetic sequence has a difference of 4, so she solved problem number 1a by adding 4 to the previous number and listing the terms as in Figure 2. In solving problem number 1b, SG did not use the model but used the n-th term formula for arithmetic sequences. The following excerpt from the SG interview relates to the experience component. The concept is similar to number 1, miss. But the additions in number 2 are not fixed.
Number 2 has an addition of n^2, different from number 1, which is 4. PG26: Well, is the idea that you use to answer number 2 the same as for number 1? SG26: It's almost the same, the sum with the previous number, but the addition is not fixed for number 2, miss. PG27: So, did you use the previous idea or not? SG27: No, miss. The addition was changed to n^2. In the experience component, SG explains that the model she got from problem number 1 cannot be used in problem number 2. SG explains that the sequence in number 2 does not have a fixed difference, but each subsequent term is obtained from the previous term plus n^2, as can be seen in Figure 3. Then, SG lists each term to find a fixed difference and finds that problem number 2 is a three-level arithmetic sequence. Therefore, SG uses a multilevel arithmetic sequence formula to solve problem number 2b, as presented in Figure 4. Thus, in the experience component, SG does not apply the previous idea but rather applies a new idea. Artisan Students' Metaphorical Thinking in Solving Algebra Problems Based on the results of the analysis conducted on SA in solving algebra problems with the CREATE criteria, the following can be revealed. In the connect component, SA connects the given problem with the making of a ladder and with the diameter of a farmer's hat (caping). SA explains the relationship between how stairs are made and the given problem using pictures and mentions that both have fixed patterns and differences, as shown in Figure 5. However, for the caping diameter, SA can only assume that the caping has a fixed, patterned difference. The following excerpts are from an SA interview explaining the relationship of making stairs and the caping with the given problem. PA03: Why use that idea? SA03: Because if the first rung is 1/2 meter long, the second rung is 1 meter long, and the third rung is 1 1/2 meters long, I can conclude that the rungs going down form a pattern with a fixed difference of 1/2 meter, and the length of each step is the same as the previous step plus the difference, and it turns out to be the same as this problem, miss. If this problem goes from 5 to 9 to 13, you can conclude that the difference is 4 and the pattern is that the previous pattern is always added by 4. PA04: Do you have any other ideas, besides the process of making stairs? SA04: A farmer's hat, miss; the diameter gets bigger going down, forming a fixed pattern, and the difference is fixed, miss. But this is just my hypothesis, miss. Because it looks like that, but I don't know more, miss. Then, in the relate component, SA is more familiar with making stairs than with the caping, because stairs are more often encountered and used by SA. SA explained that what the making of stairs and the given problem have in common lies in a fixed difference, with the next pattern obtained from the previous pattern plus the fixed difference. The following excerpt from the SA interview relates to the relate component. PA10: Then, what connects the idea of making stairs with this problem? SA10: Yes, that was it, miss. Because the difference is fixed and the pattern is fixed. For the stairs the difference was 1/2 meter, while for the problem the difference is 4. Then the pattern is the same, namely the previous pattern plus the difference, miss. In the explore component, SA describes the similarities between the ideas in understanding the problem and relates them to the curve depicted in Figure 6.
For SA, the curve formed looks more like stairs, so SA looks for the relationship between the given problem and the ladder. The following is an interview excerpt from SA. PA11: Oh well, then how did you find that relationship? SA11: Because if I think of a pattern like question number 1, when the curve is drawn it will be like this, miss (Figure 6), and it looks like a ladder, so I suddenly thought of the stairs. PA12: Explain, how is the relationship between the ideas? SA12: Yes, it was that, miss: the pattern of the stairs and of the problem is fixed, and the difference is also fixed. From the difference, if you draw a curve, it will form a straight line. So, the relationship comes from the fixed difference, miss. So, I made this curve model with the stairs earlier. Thus, SA's thinking starts from the given problem, which is modeled in the form of a curve and then connected to the stairs. SA draws the curves and stairs to make it easier to grasp the given problem. SA more often expressed her ideas in the form of pictures and writing, as in the connect and explore components. SA shows her artistic nature by expressing her ideas through drawings, in accordance with the statement of Keirsey (1998); Dewiyani (2017: 307) also states that the artisan type makes more scribbles and produces writing and drawings in understanding problems and planning problem solving. In the analyze component, SA explains the steps taken previously, starting from how SA connects and finds similarities between the ideas and the problem, then restates the similarities and makes a picture. SA also explains that the ladder and the given problem differ: the first rung has a length equal to the fixed difference value, while the first pattern in the given problem does not begin with the fixed difference value. However, both have the same pattern with a fixed difference. In the transform component, SA changes the model she has made into "the next pattern equals the previous pattern plus a fixed difference". From changing the model in Figure 6, SA concludes that if the curve is a straight line, the difference is most likely fixed, and she only needs to calculate the difference and then apply it according to the model she made. For SA, the results of changing the model are used to solve problem number 1a, while in solving problem number 1b, SA does not use the model but uses the n-th term formula for arithmetic sequences, as presented in Figure 7. In the experience component, SA explains that the model she got from problem number 1 cannot be used in problem number 2. SA explains that the sequence in number 2 does not have a fixed difference, but each subsequent term is obtained from the previous term plus n^2. The following is an excerpt from the SA interview for the experience component. SA26: I looked at the difference, then looked for its relationship with n, and it turned out that the addition was n^2. PA27: Do you think there is something in common with the making of the stairs? SA27: Not the same, miss. Because the difference is different, the pattern is not as regular as the stairs and question number 1. PA28: Alright. Then how did you use the n^2 difference idea in solving problem number 2? SA28: Yes, I list the terms as I answer it, miss (Figure 8). Then problem 2b uses the same method as problem 1b. SA was not careful when calculating problem number 2; she applies the arithmetic formula but does not re-examine the results, as can be seen in Figure 8.
SA's carelessness here is consistent with the statement of Keirsey (1998) that the artisan personality type dares to look for other ways that can be applied, but is often careless and in a hurry, which causes inaccurate answers. Thus, in the experience component, SA does not apply the previous idea. Comparison of Guardian and Artisan Metaphorical Thinking in Solving Algebra Problems Based on the results and discussion, the two subjects show similarities and differences in thinking on the CREATE components in solving algebra problems, presented in Table 2.

Table 2. Comparison of the guardian (SG) and artisan (SA) subjects on the CREATE components.

Component | Guardian Subject | Artisan Subject
Connect | Connects the given problem with saving. | Connects the given problem with making a ladder and a farmer's hat.
Relate | Finds common ideas between the given problem and saving each week. | Finds the similarity of ideas between the given problem and the making of stairs.
Explore | Describes the similarity of ideas between the given problem and savings orally and with a mathematical model. | Describes the similarity of ideas between the given problem and making stairs through drawings and curves.
Analyze | Re-explains the previous steps that have been taken. | Re-explains the previous steps that have been taken and finds the differences between ideas.
Transform | Changes the model made into "the next pattern is the same as the previous pattern plus a fixed difference". | Changes the model made into "the next pattern is the same as the previous pattern plus a fixed difference".
Experience | Does not apply the previous idea, but applies a multilevel arithmetic sequence formula. | Does not apply the previous idea, but applies the arithmetic sequence formula.

Based on Table 2, in the connect component, SG connects the given problem with the weekly saving process, while SA connects the given problem with making a ladder and a farmer's hat. In this case, it appears that SA is bolder and more confident in using varied ideas and conveying them verbally and visually, compared to SG, who only uses one idea and delivers it verbally; this is in line with the statement of Keirsey (1998). In the relate component, SG finds the similarity of ideas between the given problem and the process of saving every week, while SA finds the similarity of ideas with the making of a ladder. Both subjects find the similarity of ideas in the form of a fixed difference. In the explore component, SG describes the similarity of ideas between the given problem and the process of saving verbally and makes a mathematical model, while SA describes the similarity of these ideas with making stairs through pictures and curves. This shows that SG more often conveys ideas verbally, compared to SA, who prefers to draw pictures and illustrations. It shows that SA, with an artistic spirit, more often gives scribbles and makes pictures in understanding or planning problem solving, in line with Dewiyani's (2017) opinion. In the analyze component, both subjects reiterate the results obtained from the previous steps, but SA also explains the difference between the ideas she found and the problem faced. This shows that SA is open-minded and so discovers differences between ideas, compared to SG, who acts carefully and does not find differences between ideas. In the transform component, the two subjects conclude by changing the model that has been made into "the next pattern is the same as the previous pattern plus a fixed difference". Then, in the experience component, the two subjects do not apply the results obtained previously to solve the problem in a new context.
However, SG uses a multilevel arithmetic sequence formula, while SA uses an arithmetic sequence formula. In this case, SA acts hastily and boldly, and is therefore careless in applying other, less precise ways, in contrast to SG, who acts cautiously and methodically in applying other ways. Conclusion Based on the results of the analysis and discussion above, it can be concluded that the metaphorical thinking of students with sensing personality types differs, especially in the connect component. In the connect component, the guardian student connects the given problem with the weekly savings process, while the artisan student connects it with the farmer's hat and the process of making a ladder. In the relate component, both students find common ideas between the given problem and the ideas they have. In the explore component, the guardian student describes the similarity of ideas between the given problem and the weekly savings process and makes a model, while the artisan student describes the similarity of ideas with the process of making a ladder using pictures and curves. In the analyze component, the two students re-explain the previous steps that have been taken. Then, in the transform component, the two students change the model of their ideas. Whereas in the experience component, both students do not apply the results obtained to solve the problems in a new context. Suggestions Based on the research that has been done, the advice given by the researchers is as follows. 1. The results of this study indicate that there are differences in the metaphorical thinking of guardian and artisan students in solving algebra problems. Therefore, teachers can take these results into consideration in providing opportunities for students to solve problems using the methods they have. 2. The results of this study are limited to subjects of the sensing personality types, namely guardian and artisan. We assume that other factors besides sensing personality types can influence metaphorical thinking. Similar research could examine other aspects, for example, gender, MBTI personality type, learning style, or mathematical ability.
Weighting estimation under bipartite incidence graph sampling Bipartite incidence graph sampling provides a unified representation of many sampling situations for the purpose of estimation, including the existing unconventional sampling methods, such as indirect, network or adaptive cluster sampling, which are not originally described as graph problems. We develop a large class of design-based linear estimators, defined for the sample edges and subjected to a general condition of design unbiasedness. The class contains as special cases the classic Horvitz-Thompson estimator, as well as the other unbiased estimators in the literature of unconventional sampling, which can be traced back to Birnbaum and Sirken (1965). Our generalisation allows one to devise other unbiased estimators in future, thereby providing a potential for efficiency gains. Illustrations are given for adaptive cluster sampling, line-intercept sampling and simulated graphs. Introduction Birnbaum and Sirken (1965) study the situation where patients are sampled indirectly via the hospitals from which they receive treatment. Insofar as a patient may be treated at more than one hospital, the patients are not nested in the hospitals like elements in clustered sampling. Birnbaum and Sirken consider three estimators for such indirect sampling. The first one is the classic Horvitz-Thompson (HT) estimator (Horvitz and Thompson 1952) based on all the sample patients, each of which is weighted by the inverse of the probability of being included in the sample. The second estimator is based on all the sample hospitals and a constructed value for each of them, and the third one is based only on a sub-sample of hospitals determined by a priority rule. In particular, the estimator using all the sample hospitals is often referred to as a Hansen-Hurwitz (HH) type estimator. The HH-type estimator and its variations are used for network sampling (Sirken 1970, 2005); it is recast as a "generalised weight share method" (Lavallée 2007); and a modified HH-type estimator is considered for adaptive cluster sampling (Thompson 1990, 1991). All the sampling techniques mentioned above are considered somewhat unconventional, compared to the standard sampling methods using stratification or multistage selection. Unconventional sampling techniques are often characterised by the presence of some rules of observation, in addition to the probability design of an initial sample. For example, under network sampling (Sirken 1970), a rule such as "siblings report each other" is needed to reach a "network" of siblings following an initial sample of households. Under adaptive cluster sampling (Thompson 1990), sample propagation depends on the "network" relationship among the units and the values of the surveyed units. Moreover, unconventional sampling requires that information on the "multiplicity" of sources is collected in addition to the sample. For instance, in the example of indirect sampling of patients via hospitals, one needs to identify all the relevant hospitals outside the initial sample, in order to compute the inclusion probability of a sample patient. The same requirement exists for any other unconventional sampling, such as "counting rules" of links between population elements and selection units under network sampling (Sirken 2005), or the relationship between edge units and their neighbouring networks under adaptive cluster sampling (Thompson 1990).
Zhang and Patone (2017) formally define sampling from finite graphs, in analogy to sampling from finite populations (Neyman 1934), extending the previous works by Frank (1971, 1980a, 1980b, 2011), which deal with different graph motifs separately. In particular, they show that each of the aforementioned unconventional sampling techniques can be given different graph sampling representations. Zhang and Oguz-Alper (2020) identify sufficient and necessary conditions for feasible representation of sampling from arbitrary graphs as bipartite incidence graph sampling (BIGS), including indirect, network and adaptive cluster sampling. For instance, the nodes can be the hospitals and the patients, and an edge exists between a hospital and any patient that receives treatment at the hospital. This is a bipartite graph since the nodes of the graph are bi-partitioned, where an edge can exist only between two nodes in different parts, but not between any two nodes in the same part. Under graph sampling (Zhang and Patone 2017), one needs to specify an observation procedure, by which the edges of the sample graph are observed following an initial sample of nodes. As demonstrated by Zhang and Oguz-Alper (2020), BIGS can provide a unified representation of various situations of sampling which are originally described in other terms, where one part of the nodes refers to the initial sampling units and the other part to the measurement units of interest, to be referred to as motifs, such that the edges represent the observational links between sampling units and motifs. More examples will be given later in this paper. Also, the observation procedure needs to be ancestral (Zhang and Patone 2017), so that one knows which other out-of-sample nodes could have led to the motifs in the sample graph, had they been selected in the initial sample of nodes. The information of multiplicity or ancestry is apparent under BIGS: it is simply the knowledge of the nodes (representing sampling units) that are adjacent to the node representing a sample motif in the BIG. BIGS can thus provide a unified representation of many so-called unconventional sampling techniques in the literature, and the three estimators considered by Birnbaum and Sirken (1965) are applicable under any BIGS. Our aim in this paper is to formulate a large class of unbiased incidence weighting estimators, which includes the three estimators of Birnbaum and Sirken (1965) as special cases but is not limited to them. This allows one to study design-based estimation under the general setting of ancestral BIGS (satisfying the requirement of ancestral observation), where the results are immediately applicable to all the relevant situations. Notice that we do not consider model-based estimation in this paper, which requires additional assumptions but would allow one to draw conclusions about the superpopulation from which the given population graph is taken. We shall develop the class of unbiased incidence weighting estimators based on the sample edges that link the sampling units to the observed motifs. As will be explained, all three estimators used by Birnbaum and Sirken (1965) are special cases of this class of estimators, which is an insight hitherto unknown in the literature. Many other unbiased estimators can be devised as members of the proposed class, and one can apply the Rao-Blackwell method (Rao 1945; Blackwell 1947) to the non-HT estimators, to generate distinct unbiased estimators that can improve the estimation efficiency.
Thus, the discovery of the class of incidence weighting estimators provides a potential for efficiency gains. Below, in Sect. 2, we formally introduce ancestral BIGS and develop the incidence weighting estimators. The general condition of unbiased estimation is established. New understandings of the three aforementioned estimators are discussed. We consider also the application of the Rao-Blackwell method, which motivates a new subclass of the HH-type estimators. Illustrations are given in Sect. 3 of adaptive cluster sampling (Thompson 1990), line-intercept sampling (Becker 1991) and simulated graphs, which demonstrate the scope and flexibility of the proposed approach across a variety of situations. Some concluding remarks are given in Sect. 4. Incidence weighting estimator under BIGS Denote by $B = (F, X, H)$ a bipartite simple directed graph, where $(F, X)$ form a bipartition of the node set $F \cup X$, and each edge in $H$ points from one node in $F$ to another in $X$. No edge exists between any two nodes in $F$ or any two in $X$. For BIGS from $B$, let $F$ be the set of initial sampling units, and $X$ the population of motifs that are of interest, where a motif is a subgraph exhibiting a particular pattern, for example a pair of nodes with directed edges to each other, or three nodes forming a triangle in an undirected simple graph. An edge $(ij)$ that is incident to $i \in F$ and $j \in X$ exists if and only if the selection of $i$ in a sample $s$ from $F$ leads to the observation of motif $j$ in $X$; hence the edges (and the graph $B$) are defined to be directed. The edge set $H$ is unknown to start with. Let the size of $F$ be $M = |F|$, and that of $X$ be $N = |X|$, where $N$ is generally unknown. The incidence relationships corresponding to the edges in $H$ thus represent the observational links between the sampling units and the motifs of interest. Zhang and Oguz-Alper (2020, Theorem 1) establish the sufficient and necessary conditions by which an arbitrary instance of graph sampling can be given a feasible BIGS representation. They examine and discuss the BIGS representation of indirect, network and adaptive cluster sampling. For instance, for indirect sampling of patients via hospitals, let $F$ consist of the hospitals and $X$ the patients, where $(ij) \in H$ iff patient $j$ receives treatment at hospital $i$. For network sampling of siblings via households, one can let $F$ consist of the households and $X$ the networks of siblings, i.e. each $j$ represents a group of people who are siblings of each other, where $(ij) \in H$ iff at least one of the siblings in $j$ belongs to household $i$. Adaptive cluster sampling will be discussed in Sect. 3. Let $a_i = \{j : j \in X, (ij) \in H\}$ be the successors of $i$ in $B$. Given the initial sample $s$ from $F$, the observation procedure of BIGS is incident (Zhang and Patone 2017), such that all the nodes in $a_i$ are included in the sample graph provided $i \in s$; hence, the term BIGS. Let $X_s = \bigcup_{i \in s} a_i$, which consists of all the sample motifs. Following the general definition of sample graph (Zhang and Patone 2017), the sample BIG is given by $B_s = (s, X_s, H_s)$, where $H_s = \bigcup_{i \in s} \{i\} \times a_i$ is the sample of edges. To be able to calculate the inclusion probability of each $j$ in $X_s$, the observation procedure needs to be ancestral as well. Let $b_j = \{i : i \in F, (ij) \in H\}$ be the ancestors (or predecessors) of $j$ in $B$. Let $b(X_s) = \bigcup_{j \in X_s} b_j$. The knowledge of ancestry (or multiplicity) amounts to the observation of $b(X_s) \setminus s$, although these nodes are not part of the sample graph $B_s$, such as the out-of-sample hospitals of the sample patients.
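To make the notation concrete, the following is a minimal sketch in Python (ours, not the paper's; the toy data are hypothetical, in the spirit of the hospitals-patients example and mirroring Example 1 below) of a BIG stored as a successor mapping, the incident observation procedure and the resulting ancestry knowledge:

```python
# A bipartite incidence graph (BIG): sampling units F (hospitals),
# motifs X (patients), and directed edges H from units to the motifs
# they lead to, stored here as the successor sets a_i.
from itertools import chain

H = {
    "h1": {"p1"},
    "h2": {"p1", "p2"},
    "h3": {"p3"},
    "h4": set(),
}

def successors(i):
    """a_i = {j in X : (ij) in H}."""
    return H[i]

def ancestors(j):
    """b_j = {i in F : (ij) in H}."""
    return {i for i, a in H.items() if j in a}

def observe(s):
    """Incident observation: X_s = union of a_i over i in s,
    plus the ancestry knowledge b(X_s) \\ s."""
    X_s = set(chain.from_iterable(successors(i) for i in s))
    b_Xs = set(chain.from_iterable(ancestors(j) for j in X_s))
    return X_s, b_Xs - set(s)

X_s, out_of_sample_ancestors = observe({"h1", "h3"})
print(X_s)                      # {'p1', 'p3'}
print(out_of_sample_ancestors)  # {'h2'}: known, though not in B_s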
Example 1 Consider ancestral BIGS from the population BIG below. We have $F = \{i_1, i_2, i_3, i_4\}$, $X = \{j_1, j_2, j_3\}$ and $H = \{(i_1 j_1), (i_2 j_1), (i_2 j_2), (i_3 j_3)\}$. Suppose $s = \{i_1, i_3\} \subset F$. By the incident observation procedure, we have $X_s = \{j_1, j_3\}$ and $H_s = \{(i_1 j_1), (i_3 j_3)\}$, and the sample graph $B_s = (s, X_s, H_s)$ as defined above. In addition, we observe $b(X_s) \setminus s = \{i_2\}$, where $i_2$ is not part of the sample BIG. Notice that the ancestry knowledge requires one to obtain additionally the information identifying all the ancestors of all the observed sample motifs in $X_s$. For instance, for each patient $j$ sampled from hospital $i$, all the hospitals (other than $i$) in which $j$ receives treatment must be identified, whether or not they are among the actual sample of hospitals. This can e.g. be achieved by adding a survey question to each sample patient in $X_s$, which enumerates all the relevant hospitals. Sometimes, it may be more natural to survey the units in $s$ instead. For instance, when sampling children via their parents, where the mother and father are used as separate sampling units in $F$, one can ask the in-sample parent about the out-of-sample parent(s). Finally, it may be possible or preferable to retrieve the ancestry knowledge from external sources, such as the Birth Register when sampling children via parents. Notice also that, in computer science (e.g. Leskovec and Faloutsos 2006; Hu and Lau 2013), one may be concerned with situations where the graph is in principle known but is too large or dynamic to be fully processed or stored for practical purposes. Taking a sub-graph according to some chosen probability scheme is then a possible approach. For example, the whole Twitter graph consisting of users and their following/follower relationships can be constructed by the company at any given time point. However, storing every instance of the graph might be unfeasible due to the enormous amount of memory required and the fact that the graph is changing all the time. Taking a sample may suffice for the purpose of estimating e.g. the follower-to-following ratio. As another example, let $F$ be the products available in an online marketplace and $X$ the buying customers, and let $(ij) \in H$ iff customer $j$ has bought product $i$. Again, the whole graph is available to the owner of the marketplace, but sampling may be preferred for various market analytics. Of course, in these situations the ancestry knowledge of the sample $X_s$ is guaranteed. Sometimes, either the design or circumstances may prevent one from obtaining the complete ancestry knowledge, such that not all the ancestors $b_j$ of an observed motif $j$ are known. Without losing generality, suppose one only manages to obtain information about a subset of $b_j$, denoted by $b_j^*$, where $b_j^*$ is non-empty now that $j$ is already observed. It is then both necessary and possible to modify the sampling strategy (including the estimators described below), an example of which will be discussed in Sect. 3.1 later. Moreover, we refer to Zhang and Oguz-Alper (2020) for a treatment of incomplete ancestry knowledge, which can arise in a number of situations of graph sampling. The incidence weighting estimator Let $y_j$ be an unknown constant associated with motif $j$, for $j \in X$, given the population graph $B$. The aim is to estimate the total $\theta = \sum_{j \in X} y_j$, including e.g. $\theta = N$ given $y_j \equiv 1$. Given the sample graph $B_s$, let $\{W_{ij} : (ij) \in H_s\}$ be the incidence weights of the sample edges, and $W_{ij} \equiv 0$ if $(ij) \notin H_s$.
The incidence weighting estimator (IWE) is given by $\hat\theta = \sum_{(ij) \in H_s} W_{ij}\, y_j / \pi_i$ (1), where $\pi_i$ is the inclusion probability of $i$ in the initial sample $s$. Notice that the definition (1) allows for sample-dependent weights $W_{ij}$. Proposition 1 The IWE (1) is unbiased for $\theta$ provided, for any $j \in X$, $E\big( \sum_{i \in b_j} \delta_i W_{ij} / \pi_i \big) = 1$ (2), where $\delta_i = 1$ if $i \in s$ and 0 otherwise. Proof The expectation of $\hat\theta$ with respect to the sampling distribution of $s$ is given by $E(\hat\theta) = \sum_{j \in X} y_j E\big( \sum_{i \in b_j} \delta_i W_{ij} / \pi_i \big)$. The condition (2) ensures that the IWE is unbiased under repeated sampling. When the weights are constant of sampling, denoted by $x_{ij}$ for distinction, it reduces to $\sum_{i \in b_j} x_{ij} = 1$ for any $j \in X$. Let $\pi_{ij}$ be the second-order sample inclusion probability of $i, j \in F$. Proposition 2 The BIG sampling variance of an unbiased IWE is given by $V(\hat\theta) = \sum_{j \in X} \sum_{\ell \in X} (D_{j\ell} - 1)\, y_j y_\ell$ (3), where $D_{j\ell} = E\big( \sum_{i \in s_j} \frac{W_{ij}}{\pi_i} \sum_{i' \in s_\ell} \frac{W_{i'\ell}}{\pi_{i'}} \big)$ and $s_j = s \cap b_j$. HT-type estimator Let $\pi_{(j)} = \Pr(j \in X_s)$ and $\pi_{(j\ell)} = \Pr(j \in X_s, \ell \in X_s)$ for $j, \ell \in X$, where parentheses are used in the subscript to distinguish these inclusion probabilities of the motifs from those of the sampling units. The HT-estimator is given by $\hat\theta_y = \sum_{j \in X_s} y_j / \pi_{(j)}$, with $\pi_{(j)} = 1 - \bar\pi_{b_j}$, where $\bar\pi_{b_j}$ is the exclusion probability of $b_j$ in $s$, which is the probability that none of the ancestors of $j$ in $B$ is included in the initial sample $s$; the knowledge of the out-of-sample ancestors $b_j \setminus s$ is required to compute $\pi_{(j)}$. The HT-estimator is a special case of the IWE, where the weights $W_{ij}$ for each $j$ and $s$ satisfy $\sum_{i \in s_j} W_{ij} / \pi_i = 1 / \pi_{(j)}$ (4). Notice that these weights $W_{ij}$ are not constant of sampling if $|b_j| > 1$, since they depend on how $s$ intersects $b_j$. For Example 1 earlier, (4) fixes the weights of $j_2$ and $j_3$, since both have only one ancestor in the BIG; for $j_1$, the two weights may share the total in (4) by an arbitrary split $a$. The value $a$ does not matter, since the coefficient of $y_{j_1}$ in the IWE (1) is $\sum_{i \in s_{j_1}} W_{i j_1} / \pi_i = 1/\pi_{(j_1)}$ in any case. To see that the weights given by (4) satisfy the condition (2) generally, let $\phi_{s_j}$ be the probability that the sample intersection $s \cap b_j$ equals $s_j$, such that summing over all the non-empty intersections yields $\sum_{s_j} \phi_{s_j} / \pi_{(j)} = 1$. Arguing similarly in terms of the joint probability that the sample intersections for $j$ and $\ell$ are $s_j$ and $s_\ell$, it can be shown that $D_{j\ell}$ in (3) reduces to $\pi_{(j\ell)} / \pi_{(j)} \pi_{(\ell)}$ given (4) and (2). More generally, let $g_{s_j} = \pi_{(j)} \sum_{i \in s_j} W_{ij} / \pi_i$ for any weights $W_{ij}$ that are not constants of sampling. To satisfy the condition (2), for any $j \in X$, the weights must be such that $\sum_{s_j} \phi_{s_j}\, g_{s_j} = \pi_{(j)}$ (5). The HT-estimator is the special case where $g_{s_j} \equiv 1$. It is possible to assign $g_{s_j}$ that differs from 1 for different sample intersections $s_j$, subject to the restriction (5). Any estimator satisfying (5) but not (4) may be referred to as an HT-type estimator. HH-type estimator While an HT-type estimator uses sample-dependent weights $W_{ij}$, an HH-type estimator uses weights $x_{ij}$ that are constant of sampling. The condition (2) is reduced to $\sum_{i \in b_j} x_{ij} = 1$, for any $j \in X$. Thus, for Example 1 earlier, we now have $x_{i_1 j_1} + x_{i_2 j_1} = 1$ and $x_{i_2 j_2} = x_{i_3 j_3} = 1$, as in Birnbaum and Sirken (1965). It follows that the HH-type estimator given by $\hat\theta_z = \sum_{i \in s} z_i / \pi_i$, with $z_i = \sum_{j \in a_i} x_{ij} y_j$ (6), is unbiased for $\theta$ under repeated sampling, where $z_i$ is a constructed constant for each initial sample unit $i$. The BIG sampling variance of $\hat\theta_z$ is given by $V(\hat\theta_z) = \sum_{i \in F} \sum_{i' \in F} (\pi_{ii'} - \pi_i \pi_{i'}) \frac{z_i}{\pi_i} \frac{z_{i'}}{\pi_{i'}}$, with $\pi_{ii} = \pi_i$. Notice that one only needs $z_i$ for the initial sample units in order to apply $\hat\theta_z$, which is possible provided ancestral BIGS. Moreover, the HH-type estimator (6) actually defines a family of estimators, depending on the choice of $x_{ij}$, although Birnbaum and Sirken (1965) use only the equal weights $x_{ij} = 1/|b_j|$. The corresponding $\hat\theta_z$ is referred to as the multiplicity estimator, denoted by $\hat\theta_{zb}$. Variations of the multiplicity estimator under other settings of indirect, network sampling are considered by Sirken (1970), Sirken and Levy (1974), Sirken (2004) and Lavallée (2007). Unlike the HT-estimator, it is in principle possible to apply the Rao-Blackwell method to improve the HH-type estimator, to which we return in Sect. 2.5. Some other HH-type estimators will be discussed then.
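As a check on the unbiasedness condition (2), the following sketch (ours, not the paper's; the y-values are hypothetical) enumerates all SRS samples of size m = 2 from F in Example 1 and verifies that both the HT-estimator and the multiplicity estimator average exactly to the total:

```python
# Exact enumeration check of design unbiasedness for Example 1
# under SRS of size m = 2 from F; y-values are hypothetical.
from itertools import combinations
from fractions import Fraction

F = ["i1", "i2", "i3", "i4"]
a = {"i1": {"j1"}, "i2": {"j1", "j2"}, "i3": {"j3"}, "i4": set()}
b = {"j1": {"i1", "i2"}, "j2": {"i2"}, "j3": {"i3"}}
y = {"j1": 3, "j2": 5, "j3": 7}                  # hypothetical values
theta = sum(y.values())

M, m = len(F), 2
samples = list(combinations(F, m))               # each with prob 1/C(M,m)
pi = Fraction(m, M)                              # first-order incl. prob.

def pi_motif(j):
    # P(j in X_s) = 1 - P(no ancestor of j is sampled), under SRS
    excl = Fraction(len(list(combinations(set(F) - b[j], m))), len(samples))
    return 1 - excl

def ht(s):
    X_s = set().union(*(a[i] for i in s))
    return sum(Fraction(y[j]) / pi_motif(j) for j in X_s)

def hh(s):
    # multiplicity estimator: equal weights x_ij = 1/|b_j|
    z = lambda i: sum(Fraction(y[j], len(b[j])) for j in a[i])
    return sum(z(i) / pi for i in s)

for est in (ht, hh):
    mean = sum(est(s) for s in samples) / len(samples)
    assert mean == theta                          # exact unbiasedness
print("both estimators are exactly unbiased for theta =", theta)
```

Both estimators have expectation 15 here, though their sampling variances differ, which is the motivation for studying the wider class.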
Priority-rule estimator Birnbaum and Sirken (1965) invent a third estimator based on a prioritised subset of $H_s$, where they let $I_{ij} = 1$ if unit $i$ happens to be enumerated first in the frame $F$ among all the in-sample ancestors of $j$, and $I_{ij} = 0$ otherwise, for each $j \in X_s$. For Example 1 earlier, we have $I_{i_2 j_2} = 1$ whenever $j_2 \in X_s$ and $I_{i_3 j_3} = 1$ whenever $j_3 \in X_s$, since both $j_2$ and $j_3$ have only one ancestor in the BIG. The priority rule only matters for $j_1$ here. If $\{i_1, i_2, i_3, i_4\}$ is the frame arranged in the order of enumeration, then we would have $I_{i_1 j_1} = 1$ if $i_1 \in s$ whether or not $i_2 \in s$, and $I_{i_2 j_1} = 1$ only if $i_2 \in s$ and $i_1 \notin s$. Whereas if $\{i_4, i_3, i_2, i_1\}$ is the frame arranged in the order of enumeration, then we would have $I_{i_2 j_1} = 1$ if $i_2 \in s$ whether or not $i_1 \in s$, and $I_{i_1 j_1} = 1$ only if $i_1 \in s$ and $i_2 \notin s$. The priority-rule estimator based on $\{(ij) : I_{ij} = 1, (ij) \in H_s\}$ is given by $\hat\theta_p = \sum_{(ij) \in H_s} I_{ij} \frac{x_{ij}}{p_{ij}} \frac{y_j}{\pi_i}$ (7), where $p_{ij} = \Pr(I_{ij} = 1 \mid (ij) \in H_s)$ is the conditional probability that $(ij)$ is prioritised given $(ij) \in H_s$, and $x_{ij} = 1/|b_j|$ are the equal weights for any $j \in X$. Clearly, other priority rules or choices of $x_{ij}$ are possible. One can easily recognise $\hat\theta_p$ as a special case of the IWE with $W_{ij} = I_{ij} x_{ij} / p_{ij}$. It can satisfy the unbiasedness condition (2), provided $p_{ij} > 0$ for all $(ij) \in H_s$, in which case $E(W_{ij} \mid \delta_i = 1) = x_{ij}$. Birnbaum and Sirken (1965) did not provide an expression of $V(\hat\theta_p)$, but indicated that it is unwieldy. Now that $\hat\theta_p$ is a special case of the IWE, its variance follows readily from Proposition 2, using $\sum_{i \in b_j} x_{ij} = 1$ for any $j \in X$, and an unbiased variance estimator can be derived from it. The priority probabilities, for single sample edges and for pairs of them, depend on the priority rule as well as the sampling design. The details for the estimator of Birnbaum and Sirken (1965) under initial simple random sampling (SRS) without replacement of $s$ are given in Appendix A. It should be noticed that the priority rule is not part of sampling; the sample graph $B_s$ includes all the edges incident to every sample unit in $s$. Had one applied subsampling by randomly selecting one of the edges incident to each $i$ in $s$ with some designed probabilities, the sample graph would have contained one and only one edge from each sample unit. Instead, the priority rule selects only one sample edge incident to each motif in $X_s$ for the purpose of estimation. There is a possibility that a unit $i$ can be sampled but never prioritised, in which case $\hat\theta_p$ would be biased. For an extreme example, suppose a motif $j$ is incident to all the sampling units in $F$; then the last unit in $F$ can never be prioritised (for $j$) according to the priority rule of Birnbaum and Sirken (1965), as long as $|s| > 1$. Generally, $\hat\theta_p$ is biased under this priority rule provided there exists at least one motif $j$ in $X$, where $|b_j| > 1$ and $\Pr(|s_j| > 1 \mid j \in X_s) = 1$, such that the ancestor $i = \max(b_j)$ has no chance of being prioritised when it is in $s$. The probability above depends on the ordering of the sampling units in $F$, as well as the initial sample size. Given any ordering of the units in $F$, as the initial sample increases, it is possible for $\hat\theta_p$ to behave more erratically and become biased eventually. Using the Rao-Blackwell method The minimal sufficient statistic under BIGS is $\{(j, y_j) : j \in X_s\}$, or simply $X_s$, as long as one keeps in mind that the y-values are constants associated with the motifs. Let $\hat\theta$ be an unbiased IWE.
Applying the Rao-Blackwell method to $\hat\theta$ yields $\hat\theta_{RB} = E(\hat\theta \mid X_s)$ as an improved estimator, if the conditional variance $V(\hat\theta \mid X_s)$ is positive. Since the HT-estimator $\hat\theta_y$ is fixed conditional on $X_s$, we have $\hat\theta_{yRB} \equiv \hat\theta_y$. For a non-HT estimator, it is in principle possible that the RB method can improve its efficiency, as illustrated below. Example 2 Consider the BIG in Example 1. Given $|s| = 1$, there are 4 distinct initial samples, leading to 4 distinct $X_s$, such that $V(\hat\theta \mid X_s) = 0$ and $\hat\theta_{RB} = \hat\theta$ for any unbiased IWE. Given $|s| = 2$, there are 6 different initial samples, leading to 5 distinct $X_s$, where both $s = \{i_1, i_2\}$ and $s' = \{i_2, i_4\}$ lead to the same motifs $\{j_1, j_2\}$, so that $\hat\theta_{RB} \neq \hat\theta$ given the motif sample $\{j_1, j_2\}$, if $\hat\theta(s) \neq \hat\theta(s')$. Take e.g. the HH-type estimator $\hat\theta_z$ by (6): we have $\hat\theta_{zRB} = \{\hat\theta_z(s) + \hat\theta_z(s')\}/2$ given the motif sample $\{j_1, j_2\}$. The calculation required for the RB method may be intractable, if the conditional sample space of $s$ given $X_s$ is large and the initial sampling design $p(s)$ is not fully specified, which is common in practice for designs with unequal inclusion probabilities over $F$. Moreover, the result of the RB method is generally not a unique minimum-variance unbiased estimator under BIGS, because the minimal sufficient statistic is not complete. It is thus worth exploring other useful choices of the IWE. Due to the inherent shortcoming of the priority-rule estimator pointed out earlier, we concentrate on the HH-type estimator $\hat\theta_z$ below. Consider the special case where $|a_i| \equiv 1$ in the population BIG, such as when sampling households via persons. Suppose first with-replacement sampling of $s$, where the different draws generate an IID sample, and compare $\hat\theta_y$ and $\hat\theta_z$ based on a single draw. Let $p_i$ and $p_{(j)} = \sum_{i \in b_j} p_i$ be the respective selection probabilities. We have $p_{ij} = p_i$ if $i = j$ and 0 if $i \neq j$, and $p_{(j\ell)} = p_{(j)}$ if $j = \ell$ and 0 otherwise, now that $|a_i| \equiv 1$. Choosing $x_{ij} = p_i / p_{(j)}$ yields $z_i / p_i = y_j / p_{(j)}$ for the single motif $j \in a_i$, which is constant over $i \in b_j$, given which we have $\hat\theta_z = \hat\theta_{zRB}$. The variance of any other $\hat\theta_z$ would be larger, as long as $x_{ij}/p_i$ is not a constant over $b_j$, because $V(\hat\theta_z) = V(\hat\theta_{zRB}) + E\{V(\hat\theta_z \mid X_s)\}$, where the conditional variance is then positive. A similar argument holds approximately for the choice $x_{ij} \propto \pi_i$ under sampling without replacement of $s$, provided $\pi_{ij} \approx \pi_i \pi_j$ and $\pi_{(j\ell)} \approx \pi_{(j)} \pi_{(\ell)}$, as in the case of sampling households via persons with a small sampling fraction $|s|/|F|$. This can make $z_i/\pi_i$ more similar to each other over $F$, which is advantageous with respect to the anticipated mean squared error of $\hat\theta_z$ under the sampling design and a population model of $z_i$, according to Theorem 6.2 of Godambe and Joshi (1965). To make $z_i/\pi_i$ more similar to each other over $F$ without the restriction $|a_i| \equiv 1$, one may consider setting $x_{ij} < x_{i'j}$ if $|a_i| > |a_{i'}|$, despite $\pi_i = \pi_{i'}$, because there are more motifs contributing to $z_i$ than to $z_{i'}$. Thus, under general unequal-probability sampling of $s$, it may be reasonable to consider the probability and inverse-degree adjusted (PIDA) weights $x_{ij} = \pi_i |a_i|^{-c} \big/ \sum_{i' \in b_j} \pi_{i'} |a_{i'}|^{-c}$ (8), subject to the condition (2), where $c \geq 0$ is a tuning constant of choice; a code sketch of these weights follows below. Denote by $\hat\theta_{z\alpha_c}$ the corresponding PIDA-IWE. The multiplicity estimator $\hat\theta_{zb}$ becomes a special case of $\hat\theta_{z\alpha_c}$ given $c = 0$ and constant $\pi_i$ over $F$. Notice that to apply the weights (8) with $c \neq 0$, one needs to know $|a_i|$ for all $i \in b_j$ and $j \in X_s$, in addition to the ancestral observation of $b_j$. For instance, under indirect sampling of children via parents, one would need to collect the number of children of the out-of-sample parents in $b(X_s) \setminus s$ as well.
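The following is a small sketch (ours; the function name and data are hypothetical) of the PIDA weights (8), normalised over the ancestors $b_j$ so that the constant-weight form of condition (2) holds:

```python
# PIDA weights (8): x_ij proportional to pi_i * |a_i|^(-c),
# normalised over b_j so that sum_{i in b_j} x_ij = 1.
def pida_weights(b_j, pi, deg, c=1.0):
    """b_j: ancestor units of motif j; pi: unit -> inclusion prob;
    deg: unit -> |a_i|; c: tuning constant; returns {i: x_ij}."""
    raw = {i: pi[i] * deg[i] ** (-c) for i in b_j}
    total = sum(raw.values())
    return {i: w / total for i, w in raw.items()}

# Example 1 under SRS (constant pi): for j1, b_j1 = {i1, i2} with
# degrees |a_i1| = 1 and |a_i2| = 2.
x = pida_weights({"i1", "i2"}, {"i1": 0.5, "i2": 0.5},
                 {"i1": 1, "i2": 2}, c=1)
print(x)  # i1 gets 2/3, i2 gets 1/3: the busier unit is down-weighted
```

With c = 0 and constant inclusion probabilities, the function returns the equal weights 1/|b_j| of the multiplicity estimator, as stated above.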
Similarly, for network sampling of siblings via households, one would need to collect the number of other sibling networks in each household $i$ with at least one member from a sample sibling network $j$. Adaptive cluster sampling Consider the example of adaptive cluster sampling (ACS) discussed by Thompson (1990). The population $F$ consists of 5 grids, with y-values $\{1, 0, 2, 10, 1000\}$. Each grid has either one or two neighbours, adjacent in the given sequence, where, as in Thompson (1990), we simply denote each grid by its y-value. Given an initial sample of size 2 by SRS from $F$, one would survey all the neighbour grids (in both directions if possible) of a sample grid $i$ if $y_i$ exceeds the threshold value 5, but not otherwise. The observation procedure is repeated for all the neighbour grids, which may or may not generate further grids to be surveyed. The process is terminated when the last observed grids are all below the threshold. The interest is to estimate the total amount of species (or the mean per grid) over the given area. In particular, the grid 2 is a so-called edge unit, which can be observed from 10 or 1000, but would not lead to 10 or 1000 if only 2 is selected in $s$. The inclusion probability of grid 2 under ACS cannot be calculated correctly when it is selected in $s$ but not 10 or 1000, in which case the knowledge of multiplicity (or ancestry) is lacking. Thompson (1990) proposes a modified HT-estimator which uses the grid 2 in estimation only if it is selected on its own, the probability of which is known from the design of the initial sample. Zhang and Oguz-Alper (2020) develop feasible BIGS representations of ACS from this grid structure. Here we use one of them to illustrate how the IWE can be applied to ACS. The population BIG is given by $B = (F, F, H)$, with $X = F$, where the grids 1, 0 and 2 are each linked to themselves, while each of the grids 10 and 1000 is linked to both 10 and 1000 as well as to the edge unit 2. Zhang and Oguz-Alper (2020) point out that it is possible to consider BIGS from $B$ where the observational links between (10, 2) and (1000, 2) under ACS are removed to ensure ancestral observation, and to apply the classic HT-estimator under this BIGS representation of ACS. They show that the two strategies (ACS, modified HT) and (BIGS, HT) actually lead to the same estimator. The difference is that one cannot apply the RB method to the HT-estimator under BIGS, as one can with the modified HT-estimator under ACS. We refer to Zhang and Oguz-Alper (2020) for more details. Thompson (1990) proposes also a modified HH-type estimator, where an edge unit is used in estimation only if it is selected in $s$ directly. This modified HH-type estimator is simply the multiplicity estimator $\hat\theta_{zb}$ under BIGS from $B$, with equal weights $x_{ij} = 1/|b_j|$ in (6). The two strategies (ACS, modified HH-type) and (BIGS, $\hat\theta_{zb}$) lead to the same estimator. Moreover, application of the RB method to $\hat\theta_{zb}$ is the same as that for the modified HH-type estimator; we refer to Thompson (1990) for the details. Finally, since the contiguous grids that form a network are all observed together under ACS if any of them is observed, ancestral BIGS from $B$ entails the observation of $|a_i|$ needed for the PIDA weights given by (8). However, since $|a_i|$ is the same for all the grids in the same network and the initial sampling is SRS, the weights by (8) are all equal in this case, so that the estimator $\hat\theta_{z\alpha_c}$ coincides with the multiplicity estimator $\hat\theta_{zb}$.
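The following sketch (ours, not the paper's) encodes this BIGS representation of the ACS example, with the links from 10 and 1000 to the edge unit 2 removed as described, and verifies by enumeration that the multiplicity estimator is exactly unbiased for the total 1013:

```python
# Multiplicity estimator under the BIGS representation of Thompson's
# (1990) ACS example; the adjacency below assumes the links (10,2)
# and (1000,2) are removed, as described in the text.
from itertools import combinations
from fractions import Fraction

grids = [1, 0, 2, 10, 1000]          # each grid denoted by its y-value
a = {1: {1}, 0: {0}, 2: {2}, 10: {10, 1000}, 1000: {10, 1000}}
b = {1: {1}, 0: {0}, 2: {2}, 10: {10, 1000}, 1000: {10, 1000}}

M, m = len(grids), 2                 # initial SRS of size 2
pi = Fraction(m, M)

def hh_multiplicity(s):
    # z_i = sum over a_i of y_j / |b_j|, with y_j = j in this example
    z = lambda i: sum(Fraction(j, len(b[j])) for j in a[i])
    return sum(z(i) / pi for i in s)

samples = list(combinations(grids, m))
mean = sum(hh_multiplicity(s) for s in samples) / len(samples)
print(mean)   # 1013 = 1 + 0 + 2 + 10 + 1000, exactly unbiased
```

The edge unit 2 contributes to the estimator only via its own selection, which is precisely Thompson's modification expressed as a BIG.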
Line-intercept sampling Line-intercept sampling (LIS) is a method of sampling habitats in a region, where a habitat is sampled if a chosen line segment transects it. The habitat may e.g. be animal tracks, roads, forestry, which are of irregular shapes. Kaiser (1983) considers the general situation, where a point is randomly selected on the map and an angle is randomly chosen, yielding a line segment of fixed length or transecting the whole area in the chosen direction. Repetition generates an IID sample of lines. In the simplest setting, each transect line is selected at random by selecting randomly a position along a fixed baseline that traverses the whole study area, in the direction perpendicular to the baseline. We apply IWE under BIGS to the following example of LIS (Becker 1991) under this simple setting. The aim is to estimate the total number of wolverines in the mapped area, as sketched in Fig. 1. Four systematic samples A, B, C and D, each containing 3 positions, are drawn on the baseline that is equally divided into 3 segments of length 12 miles each. Following the 12 selected lines and any wolverine track that intercepts them yields 4 observed tracks, denoted by j ¼ 1; :::; 4 and heuristically indicated by the dashed lines in Fig. 1. Let y j be the associated number of wolverines, and L j the length of the projection of j on the baseline. From top to bottom and left to right, we observe ðy 1 ; L 1 Þ ¼ ð1; 5:25Þ, ðy 2 ; L 2 Þ ¼ ð2; 7:5Þ, ðy 3 ; L 3 Þ ¼ ð2; 2:4Þ and ðy 4 ; L 4 Þ ¼ ð1; 7:05Þ. Feasible BIGS representation of LIS First we construct a feasible BIGS representation of LIS in this case. Given the observed tracks, partition the baseline into 7 projection segments, each with associated length x i , for i ¼ 1; :::; 7 from left to right, where x 1 refers to the overlapping projection of j ¼ 1 and 2, x 2 the projection of j ¼ 2 that does not overlap with j ¼ 1, x 3 the distance between projections of j ¼ 2 and 3, x 4 the projection of j ¼ 3, x 5 the distance between projections of j ¼ 3 and 4, x 6 the projection of j ¼ 4, and x 7 the distance between j ¼ 4 and right-hand border. The probability that the i-th projection segment is selected by a systematic sample is The 4 systematic samples are IID. The sample BIG on the r-th draw is given by B r ¼ ðs r ; X r ; H r Þ, where s r contains the selected projection segments, and a i the wolverine tracks that intercept the sampled line originating from i 2 s r , such that X r ¼ S i2s r a i and H r ¼ S i2s r i  a i . In this example, we have s 1 ¼ s 2 ¼ f1; 5; 6g, yielding X 1 ¼ X 2 ¼ f1; 2; 4g on the first two draws A and B, and s 3 ¼ s 4 ¼ f4; 6; 7g, yielding X 3 ¼ X 4 ¼ f3; 4g on the last two draws C and D. The distinct projection segments selected over all the draws are s ¼ S 4 r¼1 s r ¼ f1; 4; 5; 6; 7g, and the distinct tracks are X s ¼ S 4 r¼1 X r ¼ f1; 2; 3; 4g. Let F à ¼ f1; 2; :::; 7g contain the 7 projection segments constructed from ðs; X s Þ, and H s ¼ S 4 r¼1 H r . Let B à ¼ ðF à ; X s ; H s Þ be given as below: Let X ¼ f1; :::; j; :::; N g contain all the wolverine tracks in the area, where N ! 4 given the sample X s . Let FðXÞ ¼ f1; :::; i; :::; M g be the sampling frame, which consists of all the projection segments constructed from X. Let H ¼ fðijÞ; i 2 F; j 2 Xg, where an edge exists from i to j provided j intercepts In practice, only B à can be constructed but not B. 
The two are not the same generally, in that one needs to further partition the projection segments of $F^*$ in $F$ based on $X$, in order to accommodate the unobserved tracks in $X \setminus X_s$. For instance, suppose there is a track that can only be intercepted from the 7th projection segment in $F^*$ and the track does not reach the right-hand border; then this projection segment would be partitioned into 3 segments in $F$, and $(F, H)$ would differ from $(F^*, H_s)$ accordingly. Under LIS, field observation along a line has an actual width of detectability. Dividing the baseline accordingly yields a known sampling frame $F^0$ of detectability partitions. Let $B^0 = (F^0, X, H^0)$ be the corresponding BIG. By Theorem 1 of Zhang and Oguz-Alper (2020), LIS can be represented as BIGS from $B^0$ where, in particular, the observation procedure of LIS ensures that BIGS from $B^0$ is ancestral for $X_s$. Now, as long as the unit of detectability is negligible in scale compared to the baseline, one can assume the elements of $F^0$ to be nested in those of $F^*$ (or $F$), such that the selection probability of each observed track $j$ with respect to BIGS from $B^0$ can be correctly calculated using $B^*$ (or $B$). Thus, the strategy BIGS-IWE defined for $B^0$ can be applied using the observed $B^*$, just as if $B$ were known. Given the systematic sampling design of the transect lines, the tracks $\{1, 2, 4\}$ can only be observed if a position is selected in the left part of the 1st projection segment, which would only result in $\{1, 5, 6\}$ as the sampled projection segments. Similarly, the tracks $\{3, 4\}$ can only be observed if a position is selected in the 4th projection segment, which would only result in $\{4, 6, 7\}$ as the sampled projection segments. Thus, applying the RB method would not change any unbiased IWE based on the observed sample BIGs in this case. The estimator $\hat\theta_{HH}$ of Becker (1991) is the IWE $\hat\theta_{z\alpha_0}$. The HT-estimator $\hat\theta_y$ noted by Thompson (2012) can be given as the IWE with weights satisfying (4). Other unbiased IWEs can be used for LIS under BIGS from $B^*$, two of which are given in Table 1. Neither the HT-estimator $\hat\theta_y$ nor the multiplicity estimator $\hat\theta_{zb}$ is efficient here. Efficiency gains can be achieved using the PIDA weights (8). In this case, adjusting the equal weights by the selection probability while disregarding the degrees of the initial sampling units performs well, where $\hat\theta_{z\alpha_0}$ has the lowest estimated variance. Of course, the true variance of $\hat\theta_{z\alpha_0}$ may or may not be smaller than that of, say, $\hat\theta_{z\alpha_{0.5}}$. Meanwhile, setting $c = 1.227$ would numerically reproduce the equal weights $x_{12} = x_{22} = 0.5$ based on the observed sample. It seems that the IWE by (8) has the potential to approximate the relatively more efficient estimators in different situations, if one is able to choose the coefficient $c$ in (8) appropriately. A simulation study Two graphs $B = (F, X, H)$ and $B' = (F, X, H')$ are constructed for this simulation study. Both $B$ and $B'$ have the same node sets $F$ and $X$, with $|F| = 54$ and $|X| = 310$. The edge sets have the same size, $|H| = |H'| = 1200$, but different distributions of the degrees $|a_i|$ over the sampling units in $F$, as shown in Fig. 2. The distribution of $|a_i|$ is relatively uniform over a small range of values in $B$, but much more skewed and asymmetric in $B'$. Let $\theta = |X|$, and $y_j \equiv 1$ for $j \in X$.
We consider the following 7 estimators of $\theta$ under BIGS from $B$ or $B'$ with SRS of $s$, where $m = |s|$ varies from 2 to 53:
• the IWE $\hat\theta_y$ with weights satisfying (4) (the HT-estimator);
• the IWE $\hat\theta_{z\alpha_c}$ with weights satisfying (8) for $c = 0, 1, 2$ (the multiplicity estimator for $c = 0$);
• the IWE $\hat\theta_p$ by (7) (the priority-rule estimator of Birnbaum and Sirken (1965)), where we explore different orderings of the sampling units in $F$: random, ascending or descending by $|a_i|$, yielding three estimators denoted by $\hat\theta_{pR}$, $\hat\theta_{pA}$ and $\hat\theta_{pD}$, respectively.
Table 2 gives the relative efficiency of the 6 other estimators against the HT-estimator, for a selected set of initial sample sizes, each based on 10,000 simulations of BIGS from either $B$ or $B'$. All the results are significant with respect to the simulation error. We notice that all three priority-rule estimators $\hat\theta_{pR}$, $\hat\theta_{pA}$ and $\hat\theta_{pD}$ are biased when the sample size is large enough. This happens at $m = 45$ for $B$ and $m = 46$ for $B'$. Note that the maximum degree $|b_j|$ of the motifs is 10 in $B$ and 9 in $B'$. Moreover, the variance of any priority-rule estimator decreases as the sample size $m$ increases, until a threshold value after which the variance starts to increase. In these simulations the threshold is somewhere between 10 and 30. The sampling variance of the priority-rule estimator is also affected by the ordering of the sampling units in $F$. The variance tends to be lowest when $F$ is arranged in descending order by $|a_i|$, whereas ascending order tends to yield the largest variance. Without prioritisation, the value $z_i$ is a constant of sampling given $x_{ij}$. Due to the randomness induced by the priority rule, $z_i$ varies over different samples. A sampling unit with large $|a_i|$ has a large range of possible $z_i$ values, and placing such a unit towards the end of the ordering tends to increase the sample variance of $\{z_i : i \in s\}$ due to prioritisation. It then makes sense that descending ordering by $|a_i|$ may work better than ascending ordering. However, one may not know $\{|a_i| : i \in F\}$ in practice, in which case applying $\hat\theta_p$ given whichever ordering of $F$ can be a haphazard business. Given initial SRS, the different HH-type estimators here differ only with respect to the use of $|a_i|$ in the PIDA weights (8) via the choice of $c$. The equal-weights estimator $\hat\theta_{z\alpha_0}$ is the least efficient of the three HH-type estimators, especially for $B'$, where the distribution of $|a_i|$ is more skewed. The differences between the other two estimators $\hat\theta_{z\alpha_1}$ and $\hat\theta_{z\alpha_2}$ are relatively small, compared to their differences to $\hat\theta_{z\alpha_0}$, so that a non-optimal choice of $c \neq 0$ is less critical than simply setting $c = 0$. Taken together, these results suggest that the extra effort that may be required to obtain $|a_i|$ is worth considering in practice; a sensible choice of $c$, depending on the distribution of $|a_i|$ over $F$ if it is known, or over $B_s$ if it is only observed in the sample BIG, is an interesting question to be studied. Finally, both $\hat\theta_{z\alpha_1}$ and $\hat\theta_{z\alpha_2}$ are more efficient than the HT-estimator when $m$ is small, whereas the HT-estimator improves more quickly as $m$ becomes larger, especially for $B'$. The matter depends on the sampling fractions $|X_s|/|X|$ and $|s|/|F|$, as well as the respective inclusion probabilities of the motifs and the sampling units. The interplay between them is complex, as it depends on the population BIG. Further research is needed in this respect. Concluding remarks In this paper we develop a large class of incidence weighting estimators (1) under BIGS.
The IWE is applicable to all situations of unconventional sampling techniques that require a specific observation procedure in addition to an initial sample, which can be represented by ancestral BIGS, including indirect, network, adaptive cluster and line-intercept sampling. The condition (2) ensures exactly design-unbiased IWE, which synthesises and generalises the conditions underlying the other unbiased estimators known in the literature. The classic HT-estimator from finite-population sampling is shown to be a special case of the IWE, with any sample-dependent weights satisfying the restriction (4), which provides a novel insight. A more general restriction (5) is given for sample-dependent weights. It will be intriguing to investigate other HT-type estimators satisfying this restriction. The priority-rule estimator invented by Birnbaum and Sirken (1965) is another special case of the IWE. However, it may become biased as the initial sample size increases, and behave erratically long before that, such that its application may be a haphazard business if one is unable to control the interplay between the ordering of the sampling units and the priority rule of Birnbaum and Sirken (1965). It remains to be seen whether one is able to overcome these shortcomings by future developments. The HH-type estimators used in the literature are also members of the proposed class. While it is in principle possible to apply the Rao-Blackwell method to an HH-type estimator to improve its efficiency, the computation may be intractable if the conditional sample space of $s$ is large and/or if the initial sampling design $p(s)$ is not fully specified. However, consideration of the Rao-Blackwell method and the degrees (in the BIG) of the sampling units points to the PIDA weights (8) for the IWE, as a general alternative to the commonly used equal weights and the corresponding multiplicity estimator. The numerical illustration of line-intercept sampling and the simulation results suggest that the PIDA weights can easily outperform the equal weights. Further study is warranted, in order to identify sensible choices of the PIDA weights in applications. Finally, other incidence weights can be explored subject to the condition (2), beyond those examined in this paper. This is clearly another direction of future research. In the case-defined priority probabilities of Appendix A, $b_j^i$ denotes the subset of ancestors of $j$ with higher priority than $i$, and $d_{i(j,\ell)} = |b_j^i \cup b_\ell^i|$ is the number of units in $b_j \cup b_\ell$ with higher priority than $i$. Funding The authors did not receive support from any organization for the submitted work. Data Availability The datasets generated during the current study are available from the corresponding author on reasonable request. Declarations Conflict of interest The authors have no conflicts of interest to declare that are relevant to the content of this article.